Essentials of Control Techniques and Theory:
Simulation, Sensing and Computer Control
by John Billingsley
1. Introduction: Control in a nutshell, history,
art and practice.
There are two faces of automatic control. First there is the
theory that is required to support the art of designing a working
controller. Then there is the further, and to some extent different,
theory that is required to convince a client, employer or examiner of
the soundness of the design.
Both are covered here, carefully arranged to separate the essential
from the ornamental. But perhaps that is too dismissive of
concepts that help us to understand the processes that underpin the
art of control.
1.1. The origins of control.
1.2. Early days of feedback.
1.3. The origins of simulation.
1.4. Discrete time.
2. Modelling time.
Differential equations and simulation, both digital and analogue.
Before designing any but the simplest control system, we must
understand the dynamic behaviour of the system. When an input
changes, it causes effects that can influence the future values of an
output. 'States' within the system 'remember' the effects of
past values of inputs that have been applied. A set of
differential equations can express the rates-of-change of the state
variables that determine exactly what is going on at any instant.
In years gone by, the equations were solved with mechanical and
electronic systems, but today by far the most convenient way is to use
software. Methods of solving the differential equations are presented.
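As a sketch of what 'solving by software' can look like, here is a minimal Euler integration of a first-order equation dx/dt = (u - x)/T, in the browser-style JavaScript used throughout the book's simulations. The function name and constants are illustrative, not taken from the book:

```javascript
// Euler integration of the first-order system  dx/dt = (u - x) / T.
// Each step advances the state by its rate-of-change times a small interval dt.
function simulateFirstOrder(u, T, x0, dt, steps) {
  let x = x0;
  for (let i = 0; i < steps; i++) {
    const dxdt = (u - x) / T;   // rate of change from the differential equation
    x += dxdt * dt;             // Euler update: x(t+dt) ~ x(t) + (dx/dt)*dt
  }
  return x;
}

// Step input u = 1 from rest: x rises towards 1 with time constant T.
const xFinal = simulateFirstOrder(1, 1, 0, 0.01, 500); // five time constants
```

Shrinking dt improves the approximation at the cost of more steps; later chapters make this trade-off precise.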
2.2. A simple system.
2.4. Choosing a computing platform
2.5. An alternative platform.
2.6. Solving the first-order equation.
2.7. A second order problem.
2.8. Matrix state equations
2.9. Analogue simulation.
2.10. Closed loop equations
Simulation links for Chapter 2
Three examples are explained. First a Java applet serves as a
'whiteboard' on which simulation results are plotted. The
simulation code can be edited inside a simple text box shown on a web
page. The method requires no software environment beyond the
facilities of a simple browser.
The second example uses similar code to move images around the screen,
to make a picture of the dynamic system. The third example is
similar to the first, except that the state equations are represented
in a standard matrix form.
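A hedged sketch of how the standard matrix form dx/dt = Ax + Bu might be stepped in plain JavaScript, in the spirit of the browser simulations described above (the function name and the example matrices are mine, not the book's):

```javascript
// One Euler step of the matrix state equations  dx/dt = A x + B u,
// written to work for any number of state variables.
function stateStep(A, B, x, u, dt) {
  return x.map((xi, i) => {
    let dxdt = B[i] * u;                 // input contribution B u
    for (let j = 0; j < x.length; j++) {
      dxdt += A[i][j] * x[j];            // state contribution A x
    }
    return xi + dxdt * dt;               // Euler update
  });
}

// Second-order example: position and velocity of an undamped spring,
// d2y/dt2 = u - y, driven by a unit input.
const A = [[0, 1], [-1, 0]];
const B = [0, 1];
let x = [0, 0];
for (let k = 0; k < 100; k++) x = stateStep(A, B, x, 1, 0.01); // one second
```

The same stepping function serves every simulation; only A, B and the initial state change.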
3.2. How a Jollies simulation is made up
3.3. Moving images without an applet
3.4. A generic simulation
Simulation links for Chapter 3
4. Practical control systems.
Control is all about manipulating the inputs to make the system do what
we want it to, whether it is an aircraft or a simple domestic
oven. Anything that we want to control, we must be able to
measure or estimate. Having decided what input to apply to the
system, we must have some means of applying it. The process of
making the inputs depend on the measured output is given the term
'feedback'.
The concepts of accuracy and precision are explored, while outlining
methods of measuring many useful variables.
An on-line position control experiment is outlined and an
inverted-pendulum experiment is promised for later. The
simulation allows feedback strategies to be explored.
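As an illustration of the kind of feedback strategy such a simulation lets us explore, here is a speculative sketch of position control of a motor modelled as a double integrator, with proportional-plus-velocity feedback (the gains and names are illustrative, not from the experiment itself):

```javascript
// Position control: a motor modelled as acceleration proportional to drive,
// with feedback  u = kp*(demand - position) - kv*velocity.
function runPositionControl(kp, kv, demand, dt, steps) {
  let pos = 0, vel = 0;
  for (let i = 0; i < steps; i++) {
    const u = kp * (demand - pos) - kv * vel; // the feedback law
    vel += u * dt;                            // acceleration equals drive
    pos += vel * dt;
  }
  return pos;
}

// With kp = 4, kv = 4 the response is well damped and settles at the demand.
const settled = runPositionControl(4, 4, 1, 0.01, 1000); // ten seconds
```

Changing kp and kv in the simulation shows the same overshoot-versus-sluggishness trade-off the experiment reveals.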
4.2. The nature of sensors.
4.3. Velocity and acceleration
4.4. Output transducers
4.5. A control experiment.
Simulation links for Chapter 4
5. Adding control.
When we have some equations for a linear system, we can add further
equations to represent a controller. We see that the closed-loop
system is just another linear system, so we can use heavyweight
matrix methods to analyse it.
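The point that the closed loop is 'just another linear system' can be sketched as forming the matrix A - BK; this illustrative JavaScript (the matrices and gains are my own example) shows the idea:

```javascript
// With state feedback u = -K x, the closed loop  dx/dt = (A - B K) x
// is itself just another linear system with matrix A - B K.
function closedLoopMatrix(A, B, K) {
  return A.map((row, i) =>
    row.map((aij, j) => aij - B[i] * K[j]));  // subtract B*K element by element
}

// Double integrator with feedback gains K = [4, 4]:
const Acl = closedLoopMatrix([[0, 1], [0, 0]], [0, 1], [4, 4]);
// Acl = [[0, 1], [-4, -4]]  ->  characteristic equation s^2 + 4s + 4
```

The eigenvalues of A - BK are the closed-loop poles, which later chapters place deliberately.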
5.2. Vector state equations.
5.4. Another approach.
5.5. A change of variables
5.6. Systems with time delay and the PID controller.
5.7. Simulating the water heater experiment.
Simulation links for Chapter 5
6. Systems with real components and saturating signals - use of the phase plane.
So far, it looks as if we can choose feedback values to obtain any
response that we want. A motor with a saturating drive changes the
picture. But even if the system and its controller are highly
non-linear, we can investigate the controlled performance with the aid
of simulation. Alternatively we can try some analysis in the phase plane.
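The effect of a saturating drive is easy to add to a simulation: clip the demanded drive before applying it. A minimal sketch, with illustrative gains and limits of my own choosing:

```javascript
// A drive that saturates: the demanded drive is clipped to +/- uMax
// before being applied, making the loop non-linear.
function saturate(u, uMax) {
  return Math.max(-uMax, Math.min(uMax, u));
}

function stepWithSaturation(kp, kv, demand, uMax, dt, steps) {
  let pos = 0, vel = 0;
  for (let i = 0; i < steps; i++) {
    const u = saturate(kp * (demand - pos) - kv * vel, uMax); // clipped drive
    vel += u * dt;
    pos += vel * dt;
  }
  return pos;
}

// The loop still settles, but large demands move at a drive-limited pace.
const finalPos = stepWithSaturation(4, 4, 1, 0.5, 0.001, 30000);
```

With a saturated drive the response to large demands no longer scales linearly, which is exactly what the phase plane sections go on to examine.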
6.1. An early glimpse of pole assignment.
6.2. The effect of saturation
6.3. Meet the phase plane.
6.4. Phase plane for saturating drive
6.5. Bang-bang control and sliding mode.
Simulation links for Chapter 6
7. Frequency domain methods.
Testing a system with sinusoidal signals, the frequency response, gain
and phase, the effect of gain on stability of feedback, poles and zeros.
Why is "classical control theory" packed out with frequency domain
methods? We see the historical reasons and meet straightforward
analytical methods, with useful 'rules of thumb' such as 'gain margin',
'phase margin' and 'roll-off'.
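The gain and phase of a simple first-order lag G(jw) = 1/(1 + jwT) can be computed directly from real and imaginary parts; a small illustrative sketch (not code from the book):

```javascript
// Gain and phase of the first-order lag  G(jw) = 1 / (1 + jwT)
// at a test frequency w, from the real and imaginary parts of the denominator.
function firstOrderResponse(w, T) {
  const re = 1, im = w * T;                  // denominator 1 + jwT
  const mag = Math.sqrt(re * re + im * im);
  return {
    gain: 1 / mag,                           // |G| = 1 / |1 + jwT|
    phase: -Math.atan2(im, re)               // phase lag in radians
  };
}

// At the corner frequency w = 1/T the gain is 1/sqrt(2), i.e. -3 dB,
// and the phase lag is 45 degrees.
const corner = firstOrderResponse(1, 1);
```

Sweeping w through a range of frequencies gives the data for the Bode-style plots met later in the chapter.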
7.2. Sine-wave fundamentals
7.3. Complex amplitudes.
7.4. More complex still - complex frequencies.
7.5. Eigenfunctions and gain.
7.6. A surfeit of feedback.
7.7. Poles and polynomials.
7.8. Complex manipulations
7.9. Decibels and octaves.
7.10. Frequency plots and compensators.
7.11. Second order responses.
7.12. Excited poles.
Links for Chapter 7
8. Discrete time systems and computer control.
Discrete time control is revealed to be at least as easy as continuous
time. Discrete time equations are introduced with the state
transition matrix. It is shown that a line or two of software can
be used to estimate velocity.
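The 'line or two of software' for estimating velocity might look like the following backward-difference sketch (an illustration under my own naming, not the book's listing):

```javascript
// Estimate velocity from sampled positions with a backward difference
// over the sample interval dt.
function makeVelocityEstimator(dt) {
  let lastPos = null;
  return function (pos) {
    const vel = lastPos === null ? 0 : (pos - lastPos) / dt;
    lastPos = pos;   // remember this sample for next time
    return vel;
  };
}

const estimate = makeVelocityEstimator(0.1);
estimate(0.0);               // first sample, no history yet
const v = estimate(0.25);    // moved 0.25 in 0.1 s -> velocity 2.5
```

In a real controller this difference amplifies measurement noise, a point the later chapters on observers and filtering address.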
8.2. State transition.
8.3. Discrete-time state equations and feedback.
8.4. Solving discrete time equations
8.5. Matrices and eigenvectors.
8.6. Eigenvalues and continuous time equations.
8.7. Simulation of a discrete-time system.
8.8. A practical example of discrete time control.
8.9. And there's more.
8.10. Controllers with added dynamics.
Links for Chapter 8
9. Controlling an inverted pendulum.
A simulation is progressively built up to include drive limitation,
friction, sensor errors, estimation of velocities by software and
constraints on the drive demand.
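As a taste of what the progressively built simulation involves, here is a deliberately over-simplified balance sketch: linearised tilt dynamics with stabilising feedback. The full derivation, with the trolley states included, belongs to section 9.1; the equations and gains here are illustrative only:

```javascript
// Much-simplified pendulum balance: linearised tilt dynamics
//   d(theta)/dt = thetaDot,   d(thetaDot)/dt = (g/L)*theta - u
// where u is an acceleration applied by the controller.
function balance(k1, k2, theta0, dt, steps) {
  const gOverL = 10;                          // illustrative g/L value
  let theta = theta0, thetaDot = 0;
  for (let i = 0; i < steps; i++) {
    const u = k1 * theta + k2 * thetaDot;     // stabilising feedback
    const acc = gOverL * theta - u;           // unstable without feedback
    thetaDot += acc * dt;
    theta += thetaDot * dt;
  }
  return theta;
}

// Five seconds from a 0.1 radian tilt, with feedback applied.
const tilt = balance(30, 8, 0.1, 0.001, 5000);
```

Setting the gains to zero shows the open-loop instability: the tilt grows without bound, which is why the feedback design of this chapter matters.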
9.1. Deriving the state equations.
9.2. Simulating the pendulum
9.3. Adding reality
9.4. A better choice of poles
9.5. Increasing the realism.
9.6. Tuning the feedback pragmatically.
9.7. Constrained demand
9.8. In conclusion
Links for Chapter 9
10. More frequency domain background theory
10.2. Complex planes and mappings.
10.3. The Cauchy-Riemann equations
10.4. Complex integration.
10.5. Differential equations and the Laplace transform.
10.6. The Fourier Transform
Links for Chapter 10
11. More Frequency Domain Methods
11.2. The Nyquist plot.
11.3. Nyquist with M-circles
11.4. Software for computing the diagrams.
11.5. The 'curly-squares' plot.
11.6. Completing the mapping.
11.7. Nyquist summary.
11.8. The Nichols chart.
11.9. The Inverse Nyquist diagram.
11.10. Summary of Experimental Methods.
Links for Chapter 11
12. The Root Locus.
12.2. Root locus and mappings.
12.3. A root locus plot.
12.4. Plotting with poles and zeroes.
12.5. Poles and polynomials.
12.6. Compensators and other examples.
Links for Chapter 12
13. Fashionable topics in control
13.2. Adaptive Control.
13.3. Optimal control
13.4. Bang-bang and fuzzy control
13.5. Neural nets
13.6. Heuristic and genetic algorithms.
14. Linking the time and frequency domains
14.2. State space and transfer functions.
14.3. Deriving the transfer function matrix.
14.4. Transfer functions and time responses.
14.5. Filters in software.
14.6. Software filters for data.
14.7. State equations in the companion form
Links for Chapter 14
15. Time, frequency and convolution
15.1. Delays and the unit impulse.
15.2. The convolution integral.
15.3. Finite impulse response filters.
Links for Chapter 15
16. More about time and state equations.
16.2. Juggling the matrices.
16.3. Eigenvectors and eigenvalues revisited.
16.4. Splitting a system into independent subsystems.
16.5. Repeated roots.
16.6. Controllability and observability.
17. Practical observers, feedback with dynamics.
17.2. The Kalman Filter.
17.3. Reduced-state observers.
17.4. Control with added dynamics.
18. Digital control in more detail.
18.2. Finite differences - the beta operator.
18.3. Meet the z-transform.
18.4. Trains of impulses.
18.5. Some properties of the z-transform.
18.6. Initial and final value theorems.
18.7. Dead-beat response.
18.8. Discrete-time observers.
Links for Chapter 18
19. Relationship between z- and other transforms.
19.2. The impulse modulator.
19.3. Cascading transforms.
19.4. Tables of transforms
19.5. The beta and w transforms.
20. Design methods for computer control.
20.2. The digital-to-analogue convertor as zero order hold.
20.4. A position control example, discrete time root locus.
20.5. Discrete time dynamic control - assessing performance.
Links for Chapter 20
21. Errors and noise.
21.2. Practical design considerations.
21.3. Delays and sample rates.
22. Optimal control - nothing but the best.
22.1. Introduction: the end point
22.2. Dynamic programming.
22.3. Optimal control of a linear system.
22.4. Time optimal control of a second-order system.
22.5. Optimal or suboptimal?
22.6. Quadratic cost functions.
22.7. In conclusion
Links for Chapter 22 - predictive control
Simulation example links for