Passing the Buck...

Yea, I hated all my labour which I had taken under the sun: because I should leave it unto the man that shall be after me. And who knoweth whether he shall be a wise man or a fool? yet shall he have rule over all my labour wherein I have laboured, and wherein I have shewed myself wise under the sun. This is also vanity. Ecclesiastes 2:18-19

Under the wise rule of Solomon, Israel had become the most powerful nation in its locality. Great projects had been undertaken to improve both infrastructure and commerce. A good nose for a contract, coupled with exploitation of Israel's position on the major north-south trade routes, made Solomon rich and famous. A willingness to tackle and fund logistic problems had made his armies extremely effective (a principle later exemplified by the Romans). But one thing clearly bothered him: what would happen to the empire when he died? Would his successor be able to use and maintain the intricate set of relationships that Solomon had constructed, or would it all collapse around his ears?

In the event Solomon's fears proved well founded. Rehoboam (his son) came to the throne, and within a very short period of time Israel had split in two. His error was to ignore a problem that had been festering in Solomon's time without erupting. To fund his expansion programmes Solomon had set high taxes and occasionally used forced labour. Upon his accession the people asked Rehoboam to lighten the load; he replied that he intended to make it heavier. The rest is history (1 Kings 12).

Next!

It is unlikely that any of us will rule a kingdom, and we will almost certainly never share Solomon's experience directly, yet each of us should wrestle with this problem daily. In previous articles I have shown that a specification will be subject to change almost by definition. I have shown too that zero defects is an unrealistic goal, and that we should therefore expect to be maintaining an application throughout its lifetime. In my last article I discussed the basis of offensive programming: attempting to produce an environment in which it is hard for bugs to survive undetected.

In this article I wish to look at strategies (and problems) involved with programming for the long term.

One of the first things to grasp is that, contrary to expectation, maintenance is much harder than programming (design is harder yet, but we'll come to that later). Why? Because the person performing the maintenance is in a much weaker position than the developer.

I can give you an example. Today I have been performing optimisations in the browse class. The code is intricate and delicate because we are aiming to produce SQL access times that are optimal. (If ever you see the word 'optimal' in a spec it means you are going to be on the wrong side of the 90-10 rule.) Yet, as far as I can tell, the code was right first time. I certainly knew what I was trying to do and how to do it; in fact I even knew the (rough) line numbers the edits would go on before I started. In contrast, I also had to go and read some code I wrote in the Report Writer print engine a couple of years ago. It took me twenty minutes just to find the file (I couldn't remember the object names I had used). When I got to the code I had to read through it line by line just to remember how it all worked. Although I had written the code and recognised the style, I had to learn what it did just as I would if I had picked up someone else's code.

The maintainer's actual position is usually weaker still. It is easy to justify time spent designing code: it happens early in the product cycle (before the old version stops selling) and you can hopefully show the benefit of writing code a particular way. Put another way, we are used to there being an R in R&D, but you don't often hear of R&M (Research and Maintenance). Whilst the developer is paid to think about and understand a whole application (or sub-application), the maintainer is expected to find the bug (the implication being that there is only one line to worry about, so it is a much simpler process). The effect of this is that the developer works with knowledge of context; the maintainer works without that contextual information, even if they are the same person (see above).

So not only is our maintainer without inherent knowledge, but he is also without the resources allocated to get that knowledge. And it gets worse. The developer is working with a system designed from the ground up, where everything should be clean and elegant. Over time that code degrades (see previous articles), so the maintainer is not just less qualified to make the changes but is also working with a rather more dangerous code base than the developer, which means he is more likely to hit a booby-trap.

Then consider the psychology. We all enjoy writing good code and take pride in our work. And we all hate fixing bugs, especially old bugs in old projects that we would really rather forget. Our attitude will therefore be different: we will be looking for simple, expedient solutions to what could be complex and subtle problems. Look at it this way: if we got it wrong when we did know what we were doing, what chance do we have when we don't?

All of this adds up to a simple fact. We are in a much better position than Solomon: we do know whether the person coming after us will be a wise man or a fool. He will be a fool.

Programming for fools

Having established that the person maintaining our code will be a fool we have to decide on a strategy of coding that allows for this. Here are some of the ones I have come across:

  1. Never give a sucker an even break

    This is the line taken by many real programmers. Live for the moment, get the pay-check and run like fury. In companies where development and maintenance are separate divisions, and where development likes to feel superior to maintenance, this is quite a good solution. The person coming second is at a complete disadvantage, but very few people twig this, so by making your code un-maintainable you manage to keep a competitive edge. I have even known section heads who insist on non-documentation so that they are irreplaceable. Of course, the downside of this strategy is that it completely screws the company's computing system. It is only really a good strategy if you are a mid-sized fish in a very big pond.
  2. Program like a fool

    This is quite often corporate strategy at a company that has encountered one or two 'A' type programmers. The company defines a development style that basically precludes any of the programmers getting too clever. Often you will find a number of language constructs outlawed. Almost certainly there will be 'no assembler' rules, and maybe 'no API' rules. Sometimes it will even be defined that certain algorithms and coding techniques (e.g. recursion) are no-go areas. These rules are usually encapsulated within a 'standards' document which defines what is and isn't good coding.

    It is assumed that in an environment where all the code is extremely simple it will be easy to maintain.

    This scheme has some plus points and is quite popular. It does however have some major drawbacks.

    1. In one swoop you cripple your most able staff. The brightest and best are now forced to do things in a mindless manner. Provided your entire application is simple it is probably OK, but it runs foul of the age-old rule that you should pick an algorithm (or data structure) to fit the problem. You are basically picking a solution (anything, as long as the most junior programmer understands it) before you get the problem. Put another way, you have simply moved all of the disadvantages of the maintenance phase into the development phase too.
    2. You instil a counter-culture that says good solutions are bad and bad solutions are good. The only people who will remain in that environment are those for whom bad solutions come easily. In really big companies performing standard operations that may be acceptable; for any company attempting to keep ahead of the pack it is disastrous.
    3. I don't actually think 'simple' programming makes for simple debugging. Certainly every line of code may prove readable, but if there really are ten times more lines of code than required, have you really helped the person debugging? Remember, most of the work in maintaining is finding the problem, not fixing it. Also, by defining that all problems are solved the same way, you remove information from the code: the maintainer doesn't know whether a problem was solved a particular way because it was a 'good idea' or simply because it was 'standard'.
  3. Ignore the fool

    This is really the essence of defensive programming. Write your code in such a way that whatever the fool does, the code keeps on working. This approach has some very good features. It moves the burden of work up-front and allows your smart guys to be smart. It also minimises the chances of a late-cycle change wrecking the system. It does have some down-sides though (as detailed in a previous article) in that it hides bugs. But there is another disadvantage that fits in here: it means your program fights the maintainer! Someone is in your code because they want to change its behaviour. If you have rigged things so that small changes have little or no visible effect, then when a significant change is required you can expect your code to be 'scatter-gunned' by the person maintaining it. The danger here is not simply that they have to edit more, but that they will therefore be more careless. Possibly this is the best strategy if you are sure that the code will never be extended, only fixed (but ... see previous articles).
  4. Hold the fool's hand

    This is the notion most popular with the maintainers themselves. The idea is that if the developer is careful enough, chooses good variable names, makes copious comments, uses clear programming constructs and generally thinks things out properly, then the maintainer coming along will be able to see at a glance what is going on and fix it. The nice thing about this strategy from the maintainer's point of view is that when the maintainer screws the code up, it is the developer's fault for not holding their hand tightly enough!

    And I do mean when. This is the strategy most likely to cause complete havoc. If there is one thing worse than a fool, it is a confident fool. This system is superb at tricking someone into thinking they understand what is going on when they don't. So why doesn't the hand-holding actually work? Because the information being passed on is not of adequate quality, for a number of reasons:

    1. It is unchecked. Programs are checked for correctness (in beta); comments and variable names are not.
    2. Terminology changes and has different connotations for different people and over time.
    3. Even if the terminology is similar the developer is working in a different context to the maintainer, hence he will read more (or less) into similar words.
    4. The developer will put in the information he thinks of, and leave out the information that is not current in his mind. Almost by definition the problems are going to be in the bit the developer wasn't thinking of.
    5. Degradation and divergence. Even if all the names and comments start out correct, they themselves have to be maintained by someone with partial understanding; the second generation of comments and variable names is likely to be wrong.
    6. The vital bits are contextual and are not catered for by normal documentation techniques. To be of any real use the developer would need to do a complete brain dump, and that would be too costly.
    7. Even if the information is there, the increased readability makes the maintainer go faster, to the point that he doesn't make use of what he reads. (Try reading a 200-page trashy novel, then see how much information you can actually remember.)

    I will be going into the pros and cons of different code documentation styles in a future article.

  5. Force the fool to think

    This is my preferred approach. My ideal version control system would allow the developer to set questions for each source module. A maintainer would be able to get read access to a file just by asking; to modify the source he would need to be able to answer the ten questions set by the developer. Wrong answers, no modifications. It may seem radical, but think about it: do you want the maintainer's ignorance found by the system, or by the beta testers (or paying customers; see earlier articles)?

    In the absence of such a VCS we need to come up with an alternative.

    My approach is to break the source into manageable chunks and define small interfaces, so that the maintainer is in a closed world, then do everything in as tight and correct a way as possible. I try to avoid putting anything in the code that gets between the reader and the algorithm. The code is as good as I can get it, and the maintainer will have to work out what is going on. Further, as all the chunks are offensively programmed, if the maintainer mis-maintains, the rest of the system will complain volubly. You could argue that I am slowing the maintainer down, and you are correct. The 'time to first edit' goes up, but the 'time to correct edit' comes down.

If you've been following my ramblings you will sense a single theme: maintenance costs, big time. I believe it is better to have a reliable strategy that makes maintenance manageable than an optimistic strategy that makes maintenance a lottery.

DABs rules for writing maintainable code

In future articles I want to expand on some of these general principles and actually get down to some examples from the ABC libraries, but there is no point looking at individual lines if the basics aren't in place. So I will beg your indulgence for a second set of DABs rules, this time looking at strategies for actually writing code.

  1. Don't write code

    The only sure-fire way of avoiding the maintenance cost of code is to avoid writing it. This may seem obvious, but it is often ignored. There is a maintenance cost (in $) for every line of code in your system. This leads to a simple fact: writing code is a bad thing. I am astonished when I hear of software shops paying programmers by the line! This is completely wrong. Programmers should be given a fixed bonus related to the functionality of the module they are writing; they should then be fined (from the bonus) for every line of code they use implementing the solution. This more accurately reflects the true effect of the programmer's work on the software house (you get paid for the functionality, and then have to pay for the support).

    A key methodology for avoiding code-writing is code reuse. This is one of the promises of OOP, which I shall investigate next month.

  2. If you must write something, then write something else

    This ties in to the previous rule set: 'The spec is always wrong'. If you are having to write some code (and you have tried to reuse), then it suggests you are stepping into the unknown. Given that you are now heading this way, you will probably head this way again, so over-engineer the solution. Try to work out what you will need for this release and the next, and design accordingly. It may be that for time reasons you cannot actually implement all of your design up front, but you can at least avoid burning bridges. The counter-argument is always the calendar. Go with your instinct. If you know that what you are writing is really a waste of time then just hack it; if this code is going to be strategic then code it properly and take the heat. If you have to hack it, then put the code in a separate module and pay extra attention to rule 5.

  3. Focus the pain

    In any application there will be a diversity of problems. Some are simple (easy), some detailed (lots of typing but easy), some complex (ice-pack job). Make sure these problems are separated out in your source.

    This is so easy to do but can help tremendously. Let me show you two little snippets of compiler source:

    case KWD_REPORT:
        switch ( ka )
        {
        case KWD_AT:
        case KWD_COLOR:
        case KWD_FONT:
        case KWD_LANDSCAPE:
        case KWD_MILLIMETERS:
        case KWD_PAPER:
        case KWD_POINTS:
        case KWD_PRE:
        case KWD_PREVIEW:
        case KWD_THOUS:
            return TRUE;
        default:
            return FALSE;
        }

    Now, I would imagine that without any clues most of you could guess what this does. I would be confident that if I said "disable Landscape support on reports" you could work out what to do. The interesting thing is that this snippet comes from the largest procedure in the compiler (by a big distance).

    Now for a little baby procedure:

    t = *tt;
    for ( fldptr *ft = &t->link.fld; *ft; ft = &(*ft)->next )
    {
        fldptr fn = new fldtag;
        *fn = **ft;                           // Copy across old field record
        fn->number = ++fieldno;
        *ft = fn;                             // Point old next field at new record
        if ( (*ft)->id )                      // New prefixed id
            (*ft)->id = (*ft)->id->newprefix( x, newprefix );
        fn->type = fn->type->copygroup( x );  // Copy
    }
    return tt == &this ? t : this;

    Get the picture? As soon as I know which procedure a problem is in (or an extension needs to be made) I have accurate information about the danger and timeliness of any changes required. On a larger project it would also enable me to distribute code to others in a suitable fashion.

    Most importantly, it minimises the amount of 'nasty' code. Imagine I had scattered 100 lines of complexity amongst 4,000 lines of source (these numbers are taken from code I have seen). I would then have 4,100 lines of code, any one of which could be lethal. Separate them into different sections and I have 100 nasty lines and 4,000 lines of code I don't have to worry about. A 40x productivity increase for little cost.

  4. Focus on the pain

    Always do the nastiest, most complex bit first (unless it is completely peripheral to the execution sequence, such as an import routine). There are many reasons for this.

    1. If you can't do it, there's no point doing the rest.
    2. The hardest bit will change most often, so start freezing it sooner.
    3. The need to manage complexity will usually define what can be done with the interfaces.
    4. It makes your time-scale predictions more accurate, as you move on to known ground sooner.
    5. It is best to cut corners (which tends to happen late-cycle) on the easy bits.
  5. Get those interfaces water-tight

    This is really an insurance policy as much as anything else. As discussed previously, it helps reduce the effect of bugs, but it also reduces the flux of the system. I always cringe if I ask for a change in one thing and am told "that will mean we have to change x, y & z". If you ignore rules 2 to 4, then at least get this one right: provided rule 5 is in place you can ruthlessly chop the system into shape when you need to. (Defining good interfaces will be a future article.)

  6. Admit to failures early

    Once we have written a piece of code we tend to feel paternal towards it; we like to feel it will remain in the system unscathed for years, even if it doesn't quite work. It is not uncommon for a particular code lump to gain notoriety even during the development phase. Often it will be a piece of experimental code that worked so well it was adopted lock, stock and barrel. Then slowly the warts and wrinkles start coming out, but we try to patch it together to 'make release'. Don't. Rip the code out and put in some code that works.

  7. Code as well as you possibly can

    You are coding to a specification that has been designed to last. You know the level of complexity or detail you are dealing with. You are inside a watertight compartment, so you only have to deal with the problem in hand. Other bits of the system have to live up to spec or they are removed. Now it's all down to you, so GO FOR IT! Give it your best shot. Do everything as well as you can. You will feel better, the system will run better, and once your maintainer has come up to speed they will be better for having followed you (and the system shouldn't degrade over time).
