John Billingsley,
Faculty of Engineering and Surveying,
University of Southern Queensland
These remarks are intended to stimulate discussion. Whether you strongly agree or strongly disagree with them does not really matter - it is only if you are indifferent that they have failed.
As it is taught, control theory is preoccupied with stability and the optimisation of linear systems. In contrast, few mechatronic systems are linear and even fewer require linear control. Many do not even have to be asymptotically stable, operating most successfully in a limit cycle. Others, such as lifts and lunar landers, are one-shot operations for which stability is irrelevant and where attempts at optimisation have in the past led to disaster.
Each year throws up another buzzword and another technique hailed as a magic solution to all control problems. All too often these are misleading or irrelevant, or else are a tried and proven traditional method in a fancy new disguise.
Sometimes there is a kernel of truth in the new fashion, such as a perception of the advantages that sliding mode control can bring - although sliding mode featured in papers published nearly thirty years ago. But the perception so often becomes tinged with obsession.
Having recently become a grandfather, I think I am entitled to reminisce a little.
Over thirty years ago, I entered industry to work on the design of autopilots. Servomotor systems which tracked the aircraft attitude had been designed by 'engineers' with a depth of experience but few formal qualifications beyond a Higher National Certificate. In response to a step change in input these systems ran at full speed until the target was approached, then stopped as though they had run into a brick wall with no perceptible overshoot or error.
Responding to a request from the apprentice training school, I modified one such system for teaching purposes. In order to demonstrate the linear, second-order response my Degree course had taught me to expect, its performance had to be grossly degraded, made slow to respond and settle, easily perturbed from the target and altogether a mess.
The pragmatic design process was as follows. First choose a motor and a gearbox which will give the required maximum slew-rate. Now decide on the 'proportional band'. This is the static position error which will cause full drive to be applied to the motor. It should not be much more than the required settling accuracy.
The proportional band determines the position gain. Enough velocity signal is now added to the feedback to cause the system to 'put on the brakes' early enough to avoid an overshoot and stability is then not really in question.
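As a minimal sketch of that recipe (the figures here are illustrative assumptions of mine, not those of any real autopilot servo), the whole control law amounts to a position gain fixed by the proportional band, a velocity term tuned to put the brakes on early, and a hard clip at full drive:

```python
# Pragmatic saturating servo law.  All numbers are illustrative assumptions.

MAX_DRIVE = 1.0                        # full drive, normalised
PROP_BAND = 0.002                      # position error that demands full drive
K_POSITION = MAX_DRIVE / PROP_BAND     # the proportional band fixes this gain
K_VELOCITY = 0.05 * K_POSITION         # enough velocity feedback to brake early,
                                       # found by trial

def drive(error, velocity):
    """Position-plus-velocity demand, clipped to what the motor can deliver."""
    demand = K_POSITION * error - K_VELOCITY * velocity
    return max(-MAX_DRIVE, min(MAX_DRIVE, demand))
```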
Each year I ask a class to look at the task of moving a one-kilogram mass over a distance of up to a metre, using a force of up to one kilogram weight (roughly ten newtons), to settle to within a millimetre of the target in a time of less than a second.
The 'obvious' starting point is to look at linear feedback of position and velocity, presenting the designer with the choice of the two coefficients. We could perhaps try to find an appropriate quadratic cost function to minimise, but let us take a more direct approach. We can simply choose coefficients to place the roots of the characteristic equation where they seem 'right'.
It might appear reasonable to assign two equal time-constants of 0.1 seconds to the poles. The resulting position gain is seen to be 100. But this gives a proportional band of ten centimetres before the maximum drive of ten newtons is reached - hopelessly weak and floppy for a robot axis. The position gain has to be a hundred times as great to result in ten newtons force for a one millimetre error.
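Spelling out the arithmetic for the one-kilogram mass (so that newtons of force and metres per second squared of acceleration are numerically equal), with e the position error:

$$ (s + 10)^2 = s^2 + 20s + 100 \quad\Rightarrow\quad \ddot{e} + 20\dot{e} + 100e = 0, $$

so the position gain is 100 N/m. Full drive of ten newtons is then reached only at an error of 10/100 = 0.1 m - the ten-centimetre proportional band - whereas ten newtons at one millimetre requires a position gain of 10,000 N/m.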
Now the velocity gain must still be chosen. 'Conventional wisdom' will say that critical damping is over generous. However a phase-plane plot will soon show that for this choice of velocity gain the response rattles to rest with a great succession of overshoots. From the initial one-metre error, the mass keeps accelerating until it is within ten centimetres of the target before 'applying the brakes'. The first overshoot is eighty centimetres!
Only when the mass is making its final lurch to the target will it exhibit the response expected of critical damping.
To avoid an overshoot the velocity gain must be increased until braking is applied at the 'half-way' point. It is easy to discover that the system now has a damping factor of eight! The pragmatic design process flies in the face of many of the 'rules of thumb' of linear system design.
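A few lines of simulation confirm the figures. The sketch below is my own quick check - a crude Euler integration with an assumed time step - using the position gain of 10,000 N/m from above, first with the 'critically damped' velocity gain of 200 and then with the gain of 1,600 that corresponds to a damping factor of eight:

```python
# Worst overshoot for the 1 kg mass, 1 m step, 10 N drive limit.
# The time step and run length are illustrative assumptions.

def worst_overshoot(kp, kv, dt=1e-4, t_end=3.0):
    x, v = -1.0, 0.0                           # start one metre short of the target
    worst, t = 0.0, 0.0
    while t < t_end:
        force = -(kp * x + kv * v)
        force = max(-10.0, min(10.0, force))   # the ten-newton motor limit
        v += force * dt                        # unit mass: acceleration = force
        x += v * dt
        worst = max(worst, x)                  # x > 0 means the target has been passed
        t += dt
    return worst

print(worst_overshoot(10_000.0, 200.0))    # 'critical damping': roughly 0.8 m overshoot
print(worst_overshoot(10_000.0, 1_600.0))  # damping factor 8: a few millimetres at most
```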
The addition of an integral term to a linear controller may seem to give the desired long-term stiffness - but for this sort of problem its benefit is an illusion. If a deflecting force is suddenly applied, the initial disturbance is almost as great as it was without the integral term. It takes time for the error to drive the integral to a value which will compensate for the force, although if the force remains absolutely steady the error will be reduced completely to zero. If the force is removed, however, another excursion occurs in the opposite direction.
In contrast, the saturating controller described above will hold the error to less than a millimetre for any disturbing force up to the motor limit of one kilogram. Beyond that force, of course, no amount of ingenuity can stop the load being swept aside.
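The same crude simulation can be used to compare the two claims. In the sketch below (the gains, the five-newton disturbing force and its timing are all assumptions of mine), a force is applied for a while and then removed; the low-gain linear controller with an integral term lets the mass wander by centimetres at each change, while the saturating controller never yields more than a fraction of a millimetre:

```python
# Step-disturbance test: low-gain linear P+I+D versus the saturating
# high-gain controller.  All numbers are illustrative assumptions.

def worst_excursion(controller, dt=1e-4, t_end=4.0, push=5.0):
    x, v, integral, worst, t = 0.0, 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        disturbance = push if 1.0 <= t < 2.5 else 0.0   # force applied, then removed
        integral += x * dt
        force = max(-10.0, min(10.0, controller(x, v, integral)))
        v += (force + disturbance) * dt                 # unit mass
        x += v * dt
        worst = max(worst, abs(x))
        t += dt
    return worst

print(worst_excursion(lambda x, v, i: -(100*x + 20*v + 200*i)))  # linear P+I+D: centimetres
print(worst_excursion(lambda x, v, i: -(10_000*x + 1_600*v)))    # saturating P+D: ~0.5 mm
```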
The truth is that non-saturating linear control is weak and ineffectual for most mechatronic problems. The neural netters and fuzzy setters have no difficulty at all in showing that their nonlinear algorithms are superior.
When the linear controller is described as 'robust' it is more feeble than ever!
In those days we used Heaviside notation, the differential operator big-D representing the time derivative. Transfer functions involved polynomials in D and their ratios. Mathematicians might get finicky with the rigour of manipulating them, but they were effective in use. Step response functions of time could be looked up from a table of pre-concocted solutions.
Nowadays, of course, Laplace reigns supreme. But engineers do not use the Laplace transform. They just use the notation and a theorem or two.
To invert a transform as simple as 1/(s+a) involves the mathematical process of integration over an infinite lozenge in the complex frequency plane - and who does that? Instead the engineer asks, "What system is described by this transfer function and what do we already know about its impulse response?"
Before we know it, we have a table linking transforms and time functions. It might come as little surprise that the table is identical with the old table in big-D, except for an extra s in each denominator - caused by the difference between representing step and impulse responses. For the actual solution of the problems, all considerations of infinite integrals in s are utterly irrelevant, so why are we so keen to teach the Laplace transform to engineers? Could it be that it looks impressive and is easy to examine?
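One entry makes the point. The table line we actually reach for when a unit step drives a first-order lag is

$$ \frac{1}{s(s+a)} \;\longleftrightarrow\; \frac{1}{a}\left(1 - e^{-at}\right), $$

which is exactly the old big-D step-response entry for 1/(D+a); the extra s in the denominator does no more than turn the tabulated impulse response into a step response.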
The search for stability has been replaced by an obsession with optimisation. It took one form in the urge to invent quadratic cost functions, fiddled to fit the linear control of the old textbook solutions. It could take a page-and-a-half of algebra, elegantly ornamented with a matrix Riccati equation and a Luenberger observer, to produce a controller which contained its own dynamic filter. By block-diagram reduction though, the entire controller could be shown to be equivalent to the phase-advance those old autopilot designers would have selected by instinct.
A more radical form of optimisation was inspired by Bellman and Pontryagin, who showed that to optimise it is necessary to go to extremes. Here, at least, was a design process which encouraged the use of maximum saturating control where possible. Only too often, however, optimisation is found to be the worst possible option!
A distinguished Soviet professor came to Cambridge in the mid-sixties to give a seminar on spacecraft control. He had been asked to devise a strategy for optimising the descent of an unmanned lunar lander. The criterion he had been set was the minimum use of fuel. At this time, by the way, several landers were already buried deep in the lunar dust.
During his presentation I observed that the optimal strategy involved switching on the thrust at the last possible moment, then using the maximum possible drive throughout the descent. If all went well the vehicle would come to rest just as it reached the surface.
Now, when the drive is first applied, the lander is moving at around a mile per second. If the motor is a second late in starting, the only thing which is able to stop the vehicle short of one mile sub-luna is the hardness of the lunar crust.
If the motor were to be started several seconds earlier, the thrust being reduced as the vehicle neared the surface, the increase in fuel use would be almost infinitesimal while a valuable safety margin was preserved.
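A back-of-envelope check bears this out. Using assumed round numbers (lunar gravity of about 1.6 m/s², roughly a mile per second to be killed, a motor giving some 10 m/s² of braking, and fuel taken as proportional to the delta-v delivered), the cost of holding ten per cent of the thrust in reserve is only a few per cent more fuel:

```python
# Rough fuel comparison: last-moment full-thrust braking versus starting
# earlier at reduced thrust.  All figures are assumed, illustrative values.

G_MOON = 1.62        # m/s^2, lunar gravity
V0 = 1600.0          # m/s, descent speed to be killed (about a mile per second)

def delta_v(braking_accel):
    """Delta-v needed to bring V0 to zero at a constant braking acceleration."""
    burn_time = V0 / (braking_accel - G_MOON)   # the net deceleration does the stopping
    return V0 + G_MOON * burn_time              # the thrust must also fight gravity

last_moment = delta_v(10.0)   # full thrust, started as late as possible
with_margin = delta_v(9.0)    # started earlier, ten per cent of thrust in reserve
print(last_moment, with_margin,
      100.0 * (with_margin - last_moment) / last_moment)   # about a two per cent penalty
```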
The professor gave the matter thought, returned to Moscow and the next mission landed safely.
Another optimal strategy might have been to minimise the thrust needed for descent, so that the fuel ran out at the very last moment. After all, surplus fuel is of no value on a one-way journey.
Optimisation usually involves dicing with disaster, taking the outcome to one of its extremes when any strategy between these extremes will lead to the safer accomplishment of the task.
When the system has multiple inputs and outputs, the possibilities for controller design are literally infinite. Multiple combinations of feedback coefficients will give exactly the same set of poles for the controlled system. Many design techniques, among them dyadic feedback, have the sole objective of eliminating all but a few of the alternatives. All too often the baby is discarded and all that remains is the bath water.
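A contrived two-input example of my own makes the point concrete: the two quite different feedback matrices below give the closed loop exactly the same pair of poles, at -1 and -2.

```python
# Two different state-feedback matrices, identical closed-loop poles.
# A contrived two-state, two-input illustration.
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.eye(2)                          # two independent inputs

K1 = np.array([[1.0, 1.0],
               [0.0, 2.0]])
K2 = np.array([[0.0, 0.0],
               [2.0, 3.0]])

for K in (K1, K2):
    print(np.linalg.eigvals(A - B @ K))   # both print poles at -1 and -2
```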
In practice, an acceptable controller is likely to be dictated as much by the constraints of the system as by its differential equations. Limits of drive and velocity, permissible excursion, acceptable tolerance, friction, stiction and disturbing forces are all of the utmost importance. Yet the student is given the impression that the only important task is to linearise everything so that it can be handled by frequency-domain methods. H-infinity rules supreme.
William Hazlitt was an irascible old codger who, in the last century, wrote a large number of wonderfully cynical essays. One of these, "On the ignorance of the learned," contains the following observations.
"Learning is the knowledge of that which none but the learned know. He is the most learned man who knows the most of what is farthest removed from common life and actual observation, that is of the least practical utility, and least liable to be brought to the test of experience."
Researchers and teachers are being led astray by a search for the exotic, by the latest fashionable buzzword. So what should the mechatronic student start by learning?
I first met the word 'mechatronics' in Finland fifteen years ago. With a language like theirs, the Finns should be forgiven if they were to blame - although maybe it was the fault of the Japanese. The word is meant to denote the art of blending mechanical and electronic components with a 'glue' of control theory and embedded software to achieve an integrated design.
Very many modern products cannot be regarded as exclusively mechanical or electronic. They range from consumer durables and ephemerals to military hardware and medical prostheses. In the early days of the microcomputer, such devices were the bastard offspring of separate mechanical and electronic design teams, each unable to comprehend the problems or subtleties of the other. 'User friendliness' meant no more than a machine asking 'Are you sure?' before a hasty keystroke sent the last two hours' work into oblivion.
With the identification of mechatronics as a discipline in its own right, a rising generation should be equipped to make value judgements between mechanical precision and sensor resolution, between amplifier gain and magnetic attractive force. Many of the educational modules can be drawn from the conventional disciplines but all too often they need close scrutiny to give the right emphasis to the syllabus.
Magnetic circuits and associated forces, phototransistors, drive amplifiers, gearboxes, linkages, interfaces, polling loops and interrupts all have their place in the mechatronic toolbox. The spotlight in this paper, though, is on the control theory needed to put them all together. What elements of the control engineer's bag of tricks must the mechatronics specialist know?
The first essential is the ability to look at a practical system and write down some meaningful differential or difference equations to describe it - 'state variable' spotting.
A close second is the ability to run up a simple simulation on a computer. In times gone by the computer would have been analogue. Now a few lines of code serve to define the equations while a few more will perform a simple-minded Euler integration. Often the system will have some intrinsic nonlinearity, in which case another line will allow an appropriate limit to be imposed on each variable or input as necessary.
An experimental controller can now take the form of an expression assigning a value to each input - an expression which can be linear, can involve hard or soft neural-style constraints, can entail fuzzy quantisation and table look-ups or which can use any pragmatic rules the experimenter feels are worth trying.
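A sketch of what those few lines might look like - the names, the step size and the particular control expressions are all just one plausible choice, not a prescription:

```python
# Minimal simulation skeleton: the system equations, a simple-minded Euler
# integration, a limit on the drive, and a one-line controller that can be
# swapped at will.  All values are illustrative.

DT = 0.001              # integration step, seconds
DRIVE_LIMIT = 10.0      # newtons

def controller(x, v):
    # Swap this single expression to experiment, for instance:
    #   return -(10_000 * x + 1_600 * v)               # linear feedback
    #   return 10.0 if (x + 0.16 * v) < 0 else -10.0   # a pragmatic bang-bang rule
    return -(10_000 * x + 1_600 * v)

x, v, t = -1.0, 0.0, 0.0                         # one metre from the target, at rest
while t < 2.0:
    u = controller(x, v)
    u = max(-DRIVE_LIMIT, min(DRIVE_LIMIT, u))   # the intrinsic nonlinearity: a hard limit
    v += u * DT                                  # unit mass: acceleration = drive
    x += v * DT                                  # simple-minded Euler integration
    t += DT
print(x, v)                                      # should have settled close to zero
```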
Remember always that when the gloves of linearity are off, all eye-gouging and kicking of the system's vitals become legitimate. Neural, fuzzy and variable-structure methods are but a mere subset of unbridled empiricism.
When the experimenter is tired of mere simulation, an interface will often allow the very same software control algorithm to be tried out on the 'real thing'.
The engineer should not use the computer as an excuse to avoid thinking, however. An insight into second-order systems can be gained through familiarity with phase-plane methods. Here piecewise linear systems and control algorithms can be mentally tested through the pursuit of a few isoclines.
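The bookkeeping involved is a single line: for $m\ddot{x} = u(x,\dot{x})$, writing v for the velocity, the slope of a trajectory in the phase plane is

$$ \frac{dv}{dx} = \frac{u(x,v)}{m\,v}, $$

and an isocline is simply the locus along which that slope takes some chosen constant value. Sketching the trajectory directions along a handful of isoclines is usually enough to test a piecewise linear control law by eye.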
So what of Bode, Nyquist, Lyapunov, the complex frequency plane, root locus, z-transform and the unit circle? A control engineer is of course naked without them. They provide explanations and answers, means of fine-tuning strategies outlined pragmatically, but they make a very poor introduction for the novice.
Despite all I have said so far, linear control theory does have an important role to play. Students should be made to realise that linear feedback transforms one linear system into another linear system, described by equations in an almost identical format but with roots which are somehow 'improved'. They must realise that further levels of feedback can be superimposed, and that there is not some mystical metaphysical difference between 'open loop' and 'closed loop'.
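In state-space terms the point is almost a one-liner: if $\dot{x} = Ax + Bu$ and we choose $u = r - Kx$, then

$$ \dot{x} = (A - BK)\,x + Br, $$

a system of precisely the same form, differing only in where its roots lie - and nothing prevents a further loop being wrapped around that in turn.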
An appreciation of discrete-time control is of the utmost importance for on-line computer control. All the usual techniques of discrete-time state equations and z-transforms have some relevance, but once again the ability to simulate mixed continuous and discrete systems will give a clearer overall insight.
It is important to look the occasional gift-horse in the mouth. Dead-beat control offers the promise that by applying corrections at rather lengthy intervals, disturbances can be eliminated completely in a finite number of steps. It has to be realised that a relatively minor mismatch between plant and controller can result in a great loss of performance or even instability. What is more, some time elapses before the controller even starts to act to correct sporadic disturbances.
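A sketch of the hazard, for a sampled one-kilogram mass with an assumed half-second sampling interval (the gains follow from placing both discrete-time poles at the origin for this model; the twenty-five per cent mass error is my own choice of 'minor mismatch'):

```python
# Dead-beat control of a sampled 1 kg mass: both closed-loop poles at z = 0,
# so an initial error should vanish in two samples - if the model is exact.
# The interval, the mass and the mismatch are illustrative assumptions.
import numpy as np

T = 0.5                                      # a 'rather lengthy' sampling interval, s

def plant(mass):
    A = np.array([[1.0, T], [0.0, 1.0]])     # sampled position and velocity
    B = np.array([[T * T / (2 * mass)], [T / mass]])
    return A, B

m_model = 1.0
K = np.array([[m_model / T**2, 1.5 * m_model / T]])   # gains placing both poles at z = 0

for m_true in (1.0, 1.25):                   # exact model, then a mass 25% heavier
    A, B = plant(m_true)
    x = np.array([[1.0], [0.0]])             # one metre of initial position error
    errors = []
    for _ in range(8):
        x = (A - B @ K) @ x                  # closed-loop behaviour, sample by sample
        errors.append(round(float(x[0, 0]), 3))
    print(m_true, errors)
# Exact model: zero after two samples.  Mismatched mass: the error rings on
# for several more - and each sample lasts half a second.
```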
Remember that the one and only purpose of a controller is to apply, right now, values to the inputs which will make the system perform in some desired manner. At our bidding, these values can be made to depend on any available present or past measurements. Control theory guides us in making these choices. Any other considerations are mere embroidery.
It would be very foolish to decry all new theory and advanced control methods. But in presenting them to the student we must be careful not to imbue them with greater wonder and magic than they really deserve.
This paper was presented as the opening keynote address to the second annual conference on Mechatronics and Machine Vision in Practice, Hong Kong, September 1995.
It also appeared in a shortened form in IEE Computing and Control Engineering Journal, October 1995.