Computer Aided Disaster

...their foot shall slide in due time: for the day of their calamity is at hand, and the things that shall come upon them make haste (Deuteronomy 32:35).

My father-in-law is one of the world's leading authorities on metal fatigue. In fact, the majority of his life has been spent researching one non-linear differential equation. Getting a correct solution to this equation is fundamental to the safety of every aeroplane that you and I will fly in. It is the equation used by just about every aeroplane modelling program to determine the effect of torsion on wing and nose joints. There is a raging contention about the precise approach to solving this equation: solve it one way and our aeroplanes are safe; solve it another way and bits start falling off. Mathematically it is perfectly valid to solve the equation using either method; the question is which one tells us the truth about our aeroplanes. Unfortunately I cannot complete this tale, as parts are covered by the Official Secrets Act; suffice it to say the results are genuinely a life or death issue.

Modelling

The above has nothing to do with computer programming; the original sums were done long before computer simulations were the norm. The point I am trying to drive home is that we are becoming increasingly dependent upon the models of our world (and society) that are being produced.

The other question I would like to ask is how many of us realise that we really are dealing with models. I was once described as a model student; I was quite flattered until I looked up what the word model meant: "A small plasticky imitation of the real thing".

I'm sure that if I described your application in those terms you would be highly offended, but in reality I would probably be correct, especially if you program procedurally. Any business or enterprise is (in itself) objective. You have real warehouses, real stores, real delivery vans and products bought by real customers using (hopefully) real money. Yet the program you write is actually a set of procedures, a set of steps to go through in order to emulate the behaviour of the underlying objects. As long as the objects behave in the same way that they did when you wrote the program, your sham will probably go undiscovered. But what happens when part of the warehouse burns down? Suddenly the computer model you wrote looks quite sick. You almost certainly don't have a 'torch the warehouse' menu option. In fact you commonly won't even have an option to say 'stock is reduced by 30%'.

I actually came across precisely this scenario: a lorry carrying stock had been set on fire and they had no way of telling the computer it had happened. In the end they hit upon the idea of making the insurance company 'buy' the stock at a suitable discount. To the best of my knowledge there is still a London insurance company that gets 'preferred customer' mailers in the hope that some day they'll buy another lorry load of TVs...

Objectivity

This is the argument for objective programming. A procedural program models a 'slice' of the behaviour the objects exhibit at a given instant. An objective program aims to model the objects themselves; therefore, the theory goes, you can take your perfectly programmed warehouse object and drop it into any application and it will work according to the rules that govern objects.
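
To make the contrast concrete, here is a minimal sketch of the difference (in Python rather than Clarion, purely for brevity; the names Warehouse and write_down are my invention, not anything from the TopSpeed library). The procedural function freezes one slice of behaviour into its steps; the object models the warehouse itself, so an unforeseen event is just another message sent to the object:

    # Procedural 'slice': one behaviour baked into a fixed set of steps.
    def value_of_stock(stock, unit_prices):
        # Fine until the world changes in a way these steps never anticipated.
        return sum(qty * unit_prices[item] for item, qty in stock.items())

    # Objective version: model the warehouse itself. New events (a fire,
    # an insurance write-down) become messages to the object rather than
    # missing menu options.
    class Warehouse:
        def __init__(self, stock, unit_prices):
            self.stock = dict(stock)              # item -> quantity on hand
            self.unit_prices = dict(unit_prices)  # item -> price each

        def value(self):
            return sum(qty * self.unit_prices[item]
                       for item, qty in self.stock.items())

        def write_down(self, fraction):
            # The 'torch the warehouse' option the procedural program lacked.
            for item in self.stock:
                self.stock[item] = int(self.stock[item] * (1 - fraction))

    w = Warehouse({'tv': 100}, {'tv': 250})
    w.write_down(0.3)     # stock is reduced by 30%
    print(w.value())      # 17500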

And it works. Provided:

  a. You've programmed the object perfectly
  b. You're expecting it to behave by the rules that govern objects

Most of the upcoming series of articles will be dealing with a), but first let us deal with b).

Sir Isaac Newton produced a set of laws by which bodies (or objects) moved. The rules were essentially simple, and they were scientific because they could be used to make predictions (hypotheses if you prefer) and these could be tested. And it worked. The effect was revolutionary: you could now fire cannonballs with precision, predict the arrival of enemy troops and work out how hard you needed to kick a door to get it to shut. Suddenly the behaviour of the physical world could be mapped out with regularity and precision and future behaviour accurately predicted. Always. Well, nearly always. Except for the planets. Somehow they didn't fit the picture. Many people attempted to tweak the model to make it work for the planets, but none really succeeded until Einstein came along and dared to challenge the model. Special relativity was born. Newton was basically correct if you assume the observer (you) is stationary, but if you are moving relative to the object then new equations apply. Einstein was right: he had equations that modelled observed behaviour, made predictions, and the predictions could be verified. It worked. Suddenly the behaviour of the physical world could be mapped out with regularity and precision and future behaviour accurately predicted. Always. Well, nearly always. Somehow, near large bodies, the numbers still seemed to be a little warped. So general relativity was born: a new set of equations for a new set of cases, making predictions that could be tested and shown to be correct until (of course) they were shown to be wrong in some cases, at which point quantum mechanics came along.

Now quantum mechanics appeals to me greatly. In a nutshell (with apologies to any theoretical physicists present): "you don't know where anything is, and if you go to find it the very act of doing so will make it appear to move, maybe."

Uncertain?

But what has all this got to do with us? Well, I have a hunch that objective programming is very similar in nature to Newton's laws. Each object has properties and a well-defined set of rules governing inter-object interaction (the object interface). We find that in most everyday cases we are able to model the behaviour we see using objects. The effect can be revolutionary: a well-designed object system can model an enterprise with extreme power and will often provide management information with a force that the simple gathering of statistics never will.

But I don't think it always works. Already scientists modelling some natural eco-systems have switched to using parallel computers. Initially this was simply for speed, but it was found that some of the 'information races' that happen in a heavily parallel computer net actually model the underlying structure rather better than an objective system dealing with it in a sequential fashion. "It ain't what yer know, it's when yer know it". Special relativity, perhaps?

Of course scientists grappling with some AI issues are in a fuzzier world still. The inputs to the system (say from a scan of a photograph) are just as likely to be wrong as the program inside. I'm sure most computer programmers have at some point fallen back on GIGO (garbage in, garbage out). But that really isn't good enough. A human can make good sense of a two-dimensional, damaged, ill-composed image, and so should a computer. I suspect there will come a time when it is no longer considered reasonable to restrict a computer to what it can deduce from available data; accurate modelling will insist the computer intuit its input data from an incomplete and distorted set. A kind of general relativity.

But now consider this: if the computer is modelling our world with a speed and a precision we cannot match, if it is intuiting its input data from what it is fed, and if it is composed of items that have been programmed individually but used collectively, then how can we ever tell if it is correct? Worse yet, if the computers performing these computations are the ones guiding our aeroplanes, directing our roadways, deciding the amount of water we can draw from rivers and determining our climate control strategies, then how can their model ever be correct when the very act of modelling will change the result? I suspect that in time to come we will cease to see computers as calculating machines that get the answer correct, and instead see them as complex, fallible animals that are extremely powerful if handled with care; otherwise deadly.

So what?

You may be wondering what all this rambling has to do with Clarion. The answer is "not a lot, except they employ me". The way someone approaches the design of a complex system is obviously based upon that person's experience and skill; but also, fundamentally, on what that person expects to achieve. A car designer who believes he can build a perfect car will focus on the power of the engine. A car designer who expects his car to be driven by an idiot will focus on the crumple zone. I specialise in crumple zones. My justification is simple: if I'm wrong and you didn't need the crumple zone, then I can start power-charging the engine. If he's wrong and you did need the crumple zone ...

David's laws of specification

Many people have written down rules for coding practice and design methodologies, so I thought I would try something similar. I'm actually quite pleased with the result; if you bear these in mind then over time they will produce results for you:

  1. The specification is always wrong. See the above arguments on the accuracy of modelling; the only way to get full information would be to watch the objects interacting forever (e.g. does the warehouse capacity start to decrease in 30 years' time when the roof begins to wear?)
  2. Insofar as the specification is right, it is too vague to be useful. (If they really knew what they wanted then they wouldn't be paying you $$$ to write the program.)
  3. Insofar as the specification is right and specific, it almost certainly doesn't meet the system requirements. The nature of business is such that management are (correctly) focused on fixing those parts of the system that are broken. So if they can tell you precisely how to model some part of their business, it is almost certainly not a part of the business they want replicated.
  4. Insofar as the specification is right and specific and meets the system requirements when you get it, it won't when you finish the application. This is the most painful one to grasp, but if you think about it then it's obvious. If you are writing a significant application for a business then it is almost certainly because they want the application to improve the way they work. If you do manage to significantly improve the way they work, then their work pattern will change and your application will no longer accurately model the way they work.

I can actually give you a rock-solid, easy-to-see example of 4). Go on to a VB forum and count how many messages are about 'speed of the application', then go on to the Clarion forum and perform the same count. You will instantly discover that VB programmers are almost fanatical about the speed at which their apps run. Most Clarion programmers couldn't care less. Kinda scary information for a compiler writer working for TopSpeed. Until you turn the statistics inside out: VB customers are fanatical about speed because they desperately need some; Clarion customers are less concerned because the compiler has been around for five years.

The lesson?

The lesson is simple. Producing a program that meets the specification is a flawed, short-termist concept. What we should be attempting to do is produce a program structure that is able to meet the specification (model, if you prefer) whatever the specification happens to be.

I believe that probably the most useful feature of object-oriented programming is the way it encourages people to do this.

In the articles that follow this one I have a number of objectives. One is to unwrap some of the technical parts of the TopSpeed system. Another is to educate in the usage of the system. But a third (and possibly the most important) is to show some of the logic that goes into the construction of the system components. To this end I will be meandering from theory to practical examples with little comment. It should also be noted that I am discussing aspects of the system as I see them; others may, and often will, have opinions that differ and may be more readily defensible than my own.

The application converter

As the C4 project leader, one of the requests I had most frequently was for an 'Application Converter'. Interestingly, I could never find anyone who could actually provide me with a useful spec. If you feel like a challenge, try to write one down; I can guarantee it will be wrong! Why? Because even as I write, the converter is being changed to deal with situations and eventualities that had not themselves been conceived when the converter was written.

In fact the converter is a classic example of a program that has undefined input (any application), ill-defined output (any application working 'similarly') and an unspecified degree of processing in between.

Using traditional techniques, writing this sort of program is a nightmare. First, the code is complex and ugly; second, the spec changes faster than the program; third, the program really needs to be written by 100 people, each with specialist knowledge (all the template writers).

This is where objective programming (with a good dose of Bayliss cynicism) shines (a sketch of the resulting structure follows this list):

  a. Write an object that does the defined bits (read in a txa, split it up into sections, produce a txa from sections). In itself that job is not trivial, but it is defined and so can be coded. (Remember that for the purposes of this article we are assuming programmers are perfect.)
  b. Produce a user interface that allows the user to map 'old' code to 'new' code. This sounds easy: a couple of list boxes, the right-hand side with edit-in-place, and another list box at the bottom for notes, all kept in sync. Again, not easy but well defined.
  c. Now for the clever bit. Produce an interface so that other people can write objects (rules) that take a chunk of txa and perform some minor alteration. Potentially difficult, but someone else's problem, so who cares.
  d. Produce a suite of library routines to perform pattern matching and replacing, to persuade the mugs doing c) that we've done most of the work. This is first-year CS degree stuff; a pleasant break from the normal routine.
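
To show the shape of such a system, here is a hypothetical sketch (again in Python, and emphatically not the actual converter source; the txa handling below is a naive stand-in for the real format). It covers the defined core from a), the plug-in rule interface from c) and one library helper from d):

    # a) The defined bits: split a txa into sections and reassemble it.
    #    (Here a section naively starts at any line beginning with '[';
    #    the real txa format is considerably richer.)
    def split_txa(text):
        sections, current = [], []
        for line in text.splitlines():
            if line.startswith('[') and current:
                sections.append('\n'.join(current))
                current = []
            current.append(line)
        if current:
            sections.append('\n'.join(current))
        return sections

    def join_txa(sections):
        return '\n'.join(sections)

    # c) The plug-in interface: a rule takes a chunk of txa and may make
    #    some minor alteration. Doing nothing is always legal.
    class Rule:
        def apply(self, section):
            return section

    # d) A library routine to persuade the rule writers that we have
    #    already done most of the work.
    def replace_pattern(section, old, new):
        return section.replace(old, new)

    # The core pipeline: read, let every rule have its say, write.
    def convert(text, rules):
        sections = split_txa(text)
        for rule in rules:
            sections = [rule.apply(s) for s in sections]
        return join_txa(sections)

The point to notice is that the core never knows what any rule does; it only promises to feed every section to every rule.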

The plan seems simple; the coding for a), b) and d) was hard but not impossible. And c)? Well, the amazing thing is that once a), b) and d) are done, and once you are focusing on producing one rule and letting others worry about the other 20 that are needed, then c) becomes a piece of cake too! Actually the simplest bit of the system! Why? Because the rule writers only have to deal with the problem at hand, and they always have the option of doing nothing and giving the programmer the problem. (This puts you on the right side of the 90-10 rule that says 10% of the effect takes 90% of the effort.)
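
Continuing the sketch above (still entirely hypothetical; OldBrowse and NewBrowse are invented names), a rule writer's whole world is one construct: fix it if you recognise it, hand everything else back untouched:

    class RenameBrowseRule(Rule):
        def apply(self, section):
            if 'OldBrowse' not in section:
                return section    # the always-legal 'do nothing'
            return replace_pattern(section, 'OldBrowse', 'NewBrowse')

    sample = '[PROCEDURE]\nOldBrowse stuff\n[END]'
    print(convert(sample, [RenameBrowseRule()]))   # OldBrowse becomes NewBrowse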

The results? Breathtaking! (At least to anyone who understood the problem!)

We now have a mechanism that allows us (or any third party) to convert from any known input to a known output with a minimum of fuss. Obviously we cannot get from all inputs to all outputs, because our model of what people are doing will have flaws in it.

But I believe we have got the most fundamental thing right: we have produced a technology that allows us to move our program to the conversion specification, whatever it happens to be, with the minimum amount of fuss. In my articles over the coming months I will hopefully be giving you some clues on how to do this. But first we need to tackle one of my earlier simplifying assumptions: "assuming you code the object perfectly"...
