
Feedback Control of Dynamic Systems: A Comprehensive Textbook for Engineers and Scientists



Feedback Control of Dynamic Systems, 6th Edition




Feedback control is a technique that allows a system to adjust its behavior based on the difference between its desired output and its actual output. It is widely used in engineering, science, and everyday life to achieve goals such as stability, performance, robustness, and adaptability. In this article, we introduce the basic concepts of feedback control, review some classical and modern methods for designing feedback controllers, and answer some frequently asked questions about feedback control of dynamic systems.







What is feedback control and why is it important?




A feedback control system consists of four main elements: a plant, a controller, a sensor, and an actuator. The plant is the system that we want to control, such as a robot, a car, or a chemical reactor. The sensor measures the output of the plant; the measurement is compared with the desired output, or reference, to form an error signal. The controller determines how to manipulate the plant based on that error signal. The actuator applies the input to the plant as instructed by the controller.


The basic idea of feedback control is to use the sensor to monitor the output of the plant and compare it with the reference. If there is a difference or error between them, the controller will generate an input that will reduce the error and drive the output closer to the reference. This process is repeated continuously until the output matches the reference or reaches an acceptable range.


Feedback control is important because it can improve the performance and stability of a system in the presence of uncertainties and disturbances. Uncertainties are variations in the parameters or dynamics of the plant that are unknown or hard to predict. Disturbances are external forces or inputs that affect the output of the plant but are not controlled by the controller. For example, a robot may encounter friction, wear, or changes in load that affect its motion. A car may face wind, road conditions, or traffic that affect its speed. A chemical reactor may experience fluctuations in temperature, pressure, or concentration that affect its product quality. Feedback control can compensate for these effects by adjusting the input accordingly.
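
To make the loop concrete, here is a minimal sketch in Python: a proportional controller driving an assumed first-order plant x' = -x + u toward a constant reference. The plant, gain, and step size are illustrative assumptions, not values from the text.

```python
# Minimal sketch of the feedback loop: a proportional controller driving an
# assumed first-order plant x' = -x + u toward a constant reference.
ref = 1.0   # desired output (assumed)
x = 0.0     # plant output, initially at rest
kp = 5.0    # proportional gain (assumed)
dt = 0.01   # Euler integration step

for _ in range(2000):
    error = ref - x      # sensor measurement compared with the reference
    u = kp * error       # controller computes the plant input
    x += (-x + u) * dt   # actuator applies u; the plant state updates

# With P-only control the loop settles at kp/(1 + kp) * ref, about 0.833 here,
# illustrating the steady-state error that integral action would remove.
print(x)
```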


How to design feedback control systems using classical methods




Classical methods are techniques that rely on mathematical tools such as differential equations, Laplace transforms, and linear algebra to analyze and design feedback controllers. Classical methods are suitable for linear time-invariant (LTI) systems, which are systems that have constant coefficients and do not change with time. Classical methods can be divided into three categories: root locus method, frequency response method, and state-space method.


The root locus method




The root locus method is a graphical technique that shows how the poles of the closed-loop system (the system that includes the feedback loop) change as a function of a parameter in the controller, usually the gain. The poles are the solutions of the characteristic equation of the system, which determine its stability and transient behavior. The root locus method can help us to find the range of the parameter that ensures stability and to choose the value that optimizes the performance.


How to plot the root locus




To plot the root locus, we need to follow these steps:



  • Write the transfer function of the open-loop system (the system that excludes the feedback loop) in standard form, where the numerator and denominator are polynomials in s, the Laplace variable.



  • Find the poles and zeros of the open-loop system by setting the numerator and denominator equal to zero, respectively. Mark them on the s-plane with crosses and circles, respectively.



Find the asymptotes of the root locus by using these formulas:


  • The number of asymptotes is equal to the difference between the number of poles and zeros of the open-loop system.



  • The centroid of the asymptotes is given by (sum of poles - sum of zeros) / (number of poles - number of zeros).



  • The angles of the asymptotes are given by (2k + 1) * 180 / (number of poles - number of zeros), where k is an integer from 0 to (number of poles - number of zeros - 1).



  • Find the breakaway and break-in points of the root locus by writing the gain from the characteristic equation as K = -D(s)/N(s), setting dK/ds = 0, and solving for s. These are the points where the root locus branches leave or enter the real axis.



  • Find the intersection of the root locus with the imaginary axis by using the Routh-Hurwitz criterion, a tabular method that determines how many roots lie in the right-half plane. At the gain where a row of the Routh table becomes zero, the auxiliary equation gives the imaginary-axis crossing points.



  • Find the angle of departure of the root locus at a complex pole by using this formula: angle = 180 - (sum of angles from the other poles) + (sum of angles from the zeros). For the angle of arrival at a complex zero, swap the roles of poles and zeros in the formula.



Sketch the root locus by following these rules; a numerical cross-check in code follows the list:


  • The root locus starts at the open-loop poles and ends at the open-loop zeros or at infinity as the parameter goes from zero to infinity.



  • The root locus is symmetric about the real axis.



  • The root locus has as many branches as open-loop poles; (number of poles - number of zeros) of these branches go to infinity along the asymptotes.



  • The root locus lies on the real axis to the left of an odd number of real poles or zeros.



  • The root locus approaches the asymptotes as s goes to infinity.



  • The root locus crosses the imaginary axis at the points determined by the Routh-Hurwitz criterion.



  • The root locus departs from or arrives at complex poles or zeros at the angles determined by the formula.
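
These rules can be cross-checked numerically: for each gain K, the closed-loop poles are the roots of D(s) + K*N(s) = 0. Below is a minimal sketch assuming the example plant G(s) = 1/(s(s+2)(s+5)); the plant and the gain sweep are illustrative choices, not taken from the text.

```python
# Numerical root locus: roots of D(s) + K*N(s) for a sweep of gains K.
import numpy as np
import matplotlib.pyplot as plt

num = np.array([1.0])                  # N(s) = 1
den = np.array([1.0, 7.0, 10.0, 0.0])  # D(s) = s(s+2)(s+5) = s^3 + 7s^2 + 10s

# Pad N(s) with leading zeros so it can be added to D(s) coefficient-wise.
padded_num = np.concatenate([np.zeros(len(den) - len(num)), num])

for K in np.linspace(0.0, 200.0, 600):
    roots = np.roots(den + K * padded_num)
    plt.plot(roots.real, roots.imag, "k.", markersize=2)

plt.axhline(0.0, color="gray", linewidth=0.5)
plt.axvline(0.0, color="gray", linewidth=0.5)
plt.xlabel("Re(s)")
plt.ylabel("Im(s)")
plt.title("Root locus of G(s) = 1/(s(s+2)(s+5))")
plt.show()
```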



How to use the root locus for design




To use the root locus for design, we need to follow these steps (a simulation sketch follows the list):



  • Specify the desired performance specifications, such as rise time, settling time, overshoot, and steady-state error.



  • Determine the corresponding regions on the s-plane that satisfy the specifications, such as constant damping-ratio lines and constant natural-frequency circles.



  • Find the intersection points between the root locus and the regions, and the corresponding values of the parameter.



  • Select the value of the parameter that meets the specifications and minimizes the trade-offs.



  • Verify the results by plotting the step response or using simulation tools.
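
For the final verification step, the closed-loop step response can be simulated directly. The sketch below reuses the assumed plant from the root-locus example with a hypothetical gain K = 20 picked from the locus; with unity feedback the closed loop is T(s) = K/(s^3 + 7s^2 + 10s + K).

```python
# Closed-loop step response for unity feedback: T(s) = K*N / (D + K*N).
from scipy import signal
import matplotlib.pyplot as plt

K = 20.0  # hypothetical gain chosen from the root locus
sys_cl = signal.TransferFunction([K], [1.0, 7.0, 10.0, K])
t, y = signal.step(sys_cl)

plt.plot(t, y)
plt.xlabel("time (s)")
plt.ylabel("output")
plt.title("Closed-loop step response, K = 20")
plt.show()
```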



The frequency response method




The frequency response method is a graphical technique that shows how the magnitude and phase of the output of a system change as a function of the frequency of a sinusoidal input. The frequency response method can help us to analyze the stability and performance of a system in terms of its gain margin and phase margin, which are measures of how much the gain and phase can vary before causing instability. The frequency response method can also help us to design feedback controllers using frequency-domain specifications, such as bandwidth, resonance peak, and cut-off frequency.


How to obtain the frequency response




To obtain the frequency response, we need to follow these steps (a code sketch follows the list):



  • Write the transfer function of the open-loop system in standard form, where the numerator and denominator are polynomials in s, the Laplace variable.



  • Replace s with jw, where j is the imaginary unit and w is the frequency in radians per second, to obtain the frequency response of the system.


  • Plot the magnitude and phase of the transfer function as functions of w using a Bode plot, which is a logarithmic plot that consists of two subplots: one for the magnitude (in decibels) versus the logarithm of the frequency, and one for the phase (in degrees) versus the logarithm of the frequency.



  • Find the gain crossover frequency (wgc), which is the frequency at which the magnitude of the transfer function is equal to one (or zero decibels).



  • Find the phase crossover frequency (wpc), which is the frequency at which the phase of the transfer function is equal to -180 degrees.



  • Find the gain margin (GM), which is the reciprocal of the magnitude of the transfer function at wpc.



  • Find the phase margin (PM), which is the amount by which the phase of the transfer function at wgc lies above -180 degrees, that is, PM = 180 + phase at wgc.
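
These margins can be read off a computed Bode plot. Here is a minimal sketch, again assuming the example plant G(s) = 1/(s(s+2)(s+5)); note that the crossover search below is a coarse grid lookup, not an exact root finder.

```python
# Bode data and rough stability margins for the assumed plant G(s).
import numpy as np
from scipy import signal

G = signal.TransferFunction([1.0], [1.0, 7.0, 10.0, 0.0])
w, mag_db, phase_deg = signal.bode(G, n=2000)  # magnitude in dB, phase in degrees

i_gc = np.argmin(np.abs(mag_db))             # gain crossover: |G| closest to 0 dB
pm = 180.0 + phase_deg[i_gc]                 # phase margin

i_pc = np.argmin(np.abs(phase_deg + 180.0))  # phase crossover: phase closest to -180
gm_db = -mag_db[i_pc]                        # gain margin in dB

print(f"PM ~ {pm:.1f} deg at w = {w[i_gc]:.2f} rad/s")
print(f"GM ~ {gm_db:.1f} dB at w = {w[i_pc]:.2f} rad/s")
```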



How to use the frequency response for design




To use the frequency response for design, we need to follow these steps (a PID sketch follows the list):



  • Specify the desired frequency-domain specifications, such as bandwidth, resonance peak, and cut-off frequency.



  • Determine the corresponding regions on the Bode plot that satisfy the specifications, such as slopes, intersections, and break points.



  • Find the intersection points between the Bode plot and the regions, and the corresponding values of w.



  • Select a type of controller, such as proportional (P), proportional-integral (PI), proportional-derivative (PD), or proportional-integral-derivative (PID), that can shape the frequency response to meet the specifications.



  • Tune the parameters of the controller, such as the proportional gain (Kp), the integral gain (Ki), and the derivative gain (Kd), by trial and error or by empirical tuning rules such as the Ziegler-Nichols or Cohen-Coon rules.



  • Verify the results by plotting the step response or using simulation tools.
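
As a concrete form of the controllers listed above, here is a minimal discrete-time PID sketch. The gains and sample time are placeholders to be tuned, and practical refinements such as anti-windup and derivative filtering are omitted.

```python
# Minimal discrete-time PID controller (no anti-windup, no derivative filter).
class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, reference: float, measurement: float) -> float:
        error = reference - measurement
        self.integral += error * self.dt                  # accumulate for the I term
        derivative = (error - self.prev_error) / self.dt  # backward difference for D
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example usage with placeholder gains:
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = pid.update(reference=1.0, measurement=0.0)
```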



The state-space method




The state-space method is a matrix-based technique that shows how the state variables of a system change as a function of time in response to an input. The state variables are the minimum set of variables that can describe the behavior of the system completely. The state-space method can help us to analyze and design feedback controllers using state feedback or output feedback techniques. State feedback is a technique that uses the state variables to generate the input for the system. Output feedback is a technique that uses the output and an observer to estimate the state variables and generate the input for the system.


How to represent a system in state-space form




To represent a system in state-space form, we need to follow these steps (a worked example follows the list):



  • Identify the state variables of the system, such as position, velocity, angle, or temperature.



  • Write the state equations of the system, which are differential equations that relate the state variables and their derivatives to the input. The state equations can be written in matrix form as x'(t) = Ax(t) + Bu(t), where x(t) is the state vector, u(t) is the input vector, A is the system matrix, and B is the input matrix.



  • Write the output equation of the system, which is an algebraic equation that relates the output to the state variables and the input. The output equation can be written in matrix form as y(t) = Cx(t) + Du(t), where y(t) is the output vector, C is the output matrix, and D is the feedforward matrix.
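
As an example of these steps, consider an assumed mass-spring-damper m x'' + c x' + k x = u, with position and velocity as the state variables; the parameter values below are illustrative.

```python
# State-space model of an assumed mass-spring-damper: m x'' + c x' + k x = u.
import numpy as np
from scipy import signal

m, c, k = 1.0, 0.5, 2.0  # illustrative mass, damping, stiffness

A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])  # state equations: x1' = x2, x2' = (-k x1 - c x2 + u)/m
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])        # output equation: measure position only
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)
t, y = signal.step(sys)           # open-loop step response of the model
```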



How to use state-space methods for design




To use state-space methods for design, we need to follow these steps (a pole-placement sketch follows the list):



  • Specify the desired performance specifications, such as rise time, settling time, overshoot, and steady-state error.



  • Determine the corresponding regions on the s-plane that satisfy the specifications, such as constant damping-ratio lines and constant natural-frequency circles.



  • Select a type of controller, such as state feedback or output feedback, that can place the poles of the closed-loop system in the desired regions.



  • Design the state feedback controller by using the pole placement or the linear quadratic regulator (LQR) techniques. The pole placement technique is a method that finds the state feedback matrix K that places the poles of the closed-loop system at the desired locations. The LQR technique is a method that finds the state feedback matrix K that minimizes a quadratic cost function that balances the control effort and the state deviation.



  • Design the output feedback controller by using the observer or the Kalman filter techniques. The observer is a device that estimates the state variables from the output and the input using a dynamic model of the system. The Kalman filter is a device that estimates the state variables from the output and the input using a stochastic model of the system that accounts for noise and uncertainty.



  • Verify the results by plotting the step response or using simulation tools.
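
For the pole-placement step, scipy provides a direct solver. The sketch below places the poles of the assumed mass-spring-damper model at illustrative target locations.

```python
# State-feedback pole placement: find K such that eig(A - B K) match the targets.
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0], [-2.0, -0.5]])  # mass-spring-damper model from above
B = np.array([[0.0], [1.0]])

targets = np.array([-2.0 + 1.0j, -2.0 - 1.0j])  # illustrative desired poles
result = signal.place_poles(A, B, targets)
K = result.gain_matrix                           # control law: u = -K x

print(np.linalg.eigvals(A - B @ K))              # should match the targets
```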



How to design feedback control systems using modern methods




Modern methods are techniques that rely on advanced mathematical tools such as optimization, game theory, and artificial intelligence to analyze and design feedback controllers. Modern methods are suitable for nonlinear, time-varying, or uncertain systems, which are systems that have varying coefficients or dynamics that change with time or depend on external factors. Modern methods can be divided into three categories: optimal control method, robust control method, and adaptive control method.


The optimal control method




The optimal control method is a technique that finds the best input for a system that minimizes or maximizes a given performance criterion, such as fuel consumption, tracking error, or profit. The optimal control method can help us to design feedback controllers that achieve optimal performance under various constraints, such as input limits, state limits, or terminal conditions.


How to formulate an optimal control problem




To formulate an optimal control problem, we need to follow these steps (an LQR example follows the list):



  • Define the system dynamics, which are differential equations that relate the state variables and their derivatives to the input. The system dynamics can be written in vector form as x'(t) = f(x(t), u(t), t), where x(t) is the state vector, u(t) is the input vector, and f is the vector function that describes the system behavior.



  • Define the performance criterion, which is a scalar function that quantifies the objective of the problem. The performance criterion can be written in integral form as J = phi(x(tf), tf) + integral(g(x(t), u(t), t) dt) from t0 to tf, where J is the scalar function to be minimized or maximized, phi is the terminal cost function, g is the running cost function, t0 is the initial time, and tf is the final time.



  • Define the constraints, which are inequalities or equalities that limit the values of the state variables or the input. The constraints can be written in algebraic form as h(x(t), u(t), t) <= 0 or h(x(t), u(t), t) = 0, where h is a vector function that describes the constraints.
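
A standard instance of this formulation is the infinite-horizon linear quadratic regulator (LQR), where the dynamics are linear, the running cost is g = x'Qx + u'Ru, and there is no terminal cost. A minimal sketch with illustrative weights, reusing the assumed mass-spring-damper matrices from earlier:

```python
# Infinite-horizon LQR: running cost g = x'Qx + u'Ru, linear dynamics x' = Ax + Bu.
import numpy as np
from scipy import linalg

A = np.array([[0.0, 1.0], [-2.0, -0.5]])  # assumed linear dynamics (as above)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                              # illustrative state weighting
R = np.array([[1.0]])                      # illustrative input weighting

P = linalg.solve_continuous_are(A, B, Q, R)  # algebraic Riccati equation solution
K = np.linalg.inv(R) @ B.T @ P               # optimal gain: u = -K x
```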



How to solve an optimal control problem using dynamic programming or calculus of variations




To solve an optimal control problem using dynamic programming or calculus of variations, we need to follow these steps (a discrete-time sketch follows the list):



  • Solve the Hamilton-Jacobi-Bellman (HJB) equation or the Euler-Lagrange equations, which relate the performance criterion to the system dynamics. The HJB equation can be written in scalar form as -dJ/dt = min over u of [g(x,u,t) + grad(J)*f(x,u,t)], where grad(J) is the gradient of J with respect to x. The Euler-Lagrange equations of the variational formulation state that d/dt(dL/dx') - dL/dx = 0, where L is the integrand of the cost augmented with the dynamics constraint.



  • Find the optimal input u*(t) by minimizing the Hamiltonian function H = g + grad(J)*f with respect to u, for example by setting the derivative of H with respect to u equal to zero.



  • Find the optimal state x*(t) by integrating the system dynamics with respect to t using u*(t).



  • Verify the results by checking the optimality conditions, such as Pontryagin's minimum principle or Karush-Kuhn-Tucker (KKT) conditions, which are necessary conditions for optimality that involve Lagrange multipliers or co-state variables.
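
For a discrete-time linear-quadratic problem, dynamic programming reduces to a backward Riccati recursion. Here is a minimal sketch with assumed discrete dynamics and weights:

```python
# Dynamic programming for a finite-horizon discrete LQR (backward Riccati pass).
import numpy as np

Ad = np.array([[1.0, 0.1], [-0.2, 0.95]])  # assumed dynamics x[k+1] = Ad x[k] + Bd u[k]
Bd = np.array([[0.0], [0.1]])
Q = np.eye(2)                               # running state cost
R = np.array([[0.1]])                       # running input cost
N = 50                                      # horizon length

P = Q.copy()     # terminal cost-to-go: phi(x) = x' Q x
gains = []
for _ in range(N):  # Bellman recursion, from the final step backward
    K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
    P = Q + Ad.T @ P @ (Ad - Bd @ K)
    gains.append(K)
gains.reverse()  # gains[k] gives the optimal input u[k] = -gains[k] @ x[k]
```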



The robust control method




The robust control method is a technique that finds the best input for a system so as to ensure stability and performance in the presence of uncertainty and disturbance. The robust control method can help us to design feedback controllers that achieve robustness, which is the ability to cope with variations in the system parameters or dynamics without compromising the desired specifications.


How to model uncertainty and disturbance in a system




To model uncertainty and disturbance in a system, we need to follow these steps:



  • Identify the sources of uncertainty and disturbance in the system, such as modeling errors, parameter variations, measurement noise, or external inputs.



  • Quantify the uncertainty and disturbance using mathematical tools such as probability theory, set theory, or interval analysis. Probability theory is a tool that describes the uncertainty and disturbance using random variables and their distributions. Set theory is a tool that describes the uncertainty and disturbance using sets and their operations. Interval analysis is a tool that describes the uncertainty and disturbance using intervals and their arithmetic.



  • Represent the uncertainty and disturbance using appropriate models such as stochastic models, norm-bounded models, or polytopic models. Stochastic models are models that use random variables or processes to capture the uncertainty and disturbance. Norm-bounded models are models that use norms or metrics to measure the size or effect of the uncertainty and disturbance. Polytopic models are models that use convex polytopes or regions to bound the uncertainty and disturbance.



How to design a robust controller using H-infinity or mu-synthesis techniques




To design a robust controller using H-infinity or mu-synthesis techniques, we need to follow these steps (a simple sampling check follows the list):



  • Formulate the robust control problem as an optimization problem that minimizes or maximizes a given performance criterion subject to stability and performance constraints. The performance criterion can be a function of the H-infinity norm or the structured singular value (mu) of the system transfer matrix, which are measures of how much the system output can be amplified by the uncertainty and disturbance.



  • Solve the optimization problem using numerical methods such as linear matrix inequalities (LMIs), Riccati equations, or D-K iteration. LMIs are matrix inequalities that can be solved efficiently using convex optimization techniques. Riccati equations are matrix equations that can be solved using algebraic methods. D-K iteration alternates between synthesizing a controller (the K step) and fitting the uncertainty scalings (the D step) until the bound on mu stops improving.



  • Find the robust controller by extracting it from the solution of the optimization problem.



  • Verify the results by checking the stability and performance margins of the closed-loop system using simulation tools or analytical methods.
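
Full H-infinity or mu-synthesis requires dedicated tools, but the verification step can be approximated with a simple Monte Carlo check: sample the uncertain parameters from assumed intervals and test closed-loop stability. Everything below (the intervals, the gain, and the plant) is an illustrative assumption carried over from the earlier example.

```python
# Monte Carlo robustness check: sample plant parameters from assumed intervals
# and count how often the closed loop remains stable.
import numpy as np

rng = np.random.default_rng(0)
K = 20.0       # hypothetical controller gain from the earlier design
trials = 1000
stable = 0
for _ in range(trials):
    a1 = rng.uniform(6.0, 8.0)          # assumed interval around the nominal 7
    a2 = rng.uniform(9.0, 11.0)         # assumed interval around the nominal 10
    poles = np.roots([1.0, a1, a2, K])  # closed-loop characteristic polynomial
    stable += bool(np.all(poles.real < 0))

print(f"{stable / trials:.1%} of sampled plants are closed-loop stable")
```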



The adaptive control method




The adaptive control method is a technique that finds the best input for a system that adapts to changes in the system parameters or dynamics over time. The adaptive control method can help us to design feedback controllers that achieve adaptability, which is the ability to learn from the system behavior and update the controller parameters accordingly.


How to identify the parameters of a system online




To identify the parameters of a system online, we need to follow these steps (a recursive least squares sketch follows the list):



  • Assume a parametric model for the system dynamics, which is a mathematical expression that relates the output of the system to the input and a vector of unknown parameters. The parametric model can be written in linear form as y(t) = phi(t)*theta + e(t), where y(t) is the output, phi(t) is the regression vector, theta is the parameter vector, and e(t) is the error term.
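
The standard online estimator for this linear parametric model is recursive least squares (RLS). Below is a minimal sketch with an assumed forgetting factor and synthetic data for illustration.

```python
# Recursive least squares for the model y(t) = phi(t)' theta + e(t).
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One RLS step with forgetting factor lam (assumed value).
    theta: (n,) current estimate; P: (n, n) covariance; phi: (n,) regressor."""
    phi = phi.reshape(-1, 1)
    gain = P @ phi / (lam + (phi.T @ P @ phi).item())  # update gain
    error = y - (phi.T @ theta.reshape(-1, 1)).item()  # prediction error
    theta = theta + gain.ravel() * error               # parameter update
    P = (P - gain @ phi.T @ P) / lam                   # covariance update
    return theta, P

# Example: recover theta = [2, -1] from noisy synthetic measurements.
rng = np.random.default_rng(1)
true_theta = np.array([2.0, -1.0])
theta, P = np.zeros(2), 100.0 * np.eye(2)
for _ in range(200):
    phi = rng.normal(size=2)
    y = phi @ true_theta + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)  # should be close to [2, -1]
```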

