Writing better code (Part 1)
As we all know, Visual FoxPro provides an extremely rich and varied development environment but sometimes too much of a good thing leads us into bad habits. When writing code there are usually several ways of achieving the same result, but all too often there are significant differences in performance and the only way to ensure that our code is optimized for the best performance is to test, test again and re-test under as many different conditions as can be devised. Having said that, it is equally important to recognize that the first requirement of any code is that it is functionally correct. As Marcia Akins (Microsoft MVP, Author and Co-Owner of Tightline Computers Inc) has been known to say “doing something wrong as fast as possible is not really very helpful”.
But too often we tend to treat getting the correct functionality as the end of the story and, once something is working, simply move on to the next issue. The reality is that most developers typically review and optimize their code at the same time as they go back to add the comments (i.e. never)!
However, by applying some basic rules and techniques you can ensure that you avoid some of the more common problems and produce better and more efficient code the first time around. The more frequently you can do that, the less time you will need to spend revisiting and ‘tuning’ functional code. This has two quite separate benefits for any developer:
· The less tweaking of code you have to do once it is working correctly, the less chance there is of introducing bugs into functional code
· Getting it right immediately saves time: not having to revisit code is always quicker than re-factoring it to improve performance or usability
The purpose of this series of articles is to review some of the things that we can do, as we are writing code, to ensure that our software is as efficient and as usable as possible and to minimize the need to re-visit working code to tweak it. We’ll begin with some of the basics and get a little more advanced in the later articles in this series.
Warn your users, but don’t treat them like idiots
Someone (sorry, but I don’t remember who) once remarked to the effect that the only two things end-users want from their software are that it won’t make them look silly in front of the boss by giving them the wrong answers, and that it won’t treat them like idiots. Unfortunately we, as developers, tend to concentrate so much on the first that we forget about the second. Yet one of the most basic things that we can do in our applications is to try and strike the proper balance between providing relevant warnings and ‘nagging’.
One of my personal pet hates in this area comes from VFP itself. Have you ever noticed that when you are stepping through code in the debugger and hit “Fix” you get an immediate dialog that says “Cancel Program?”. I understand that the intention here is to warn me in case, say, I inadvertently opened the drop-down and chose “Fix” when I really wanted some other option (am I really that dumb?). But in this dialog the default option is “Yes”, which is not really consistent with the reason for displaying the dialog in the first place (i.e. to ‘fail safe’). Still, you can argue that it makes sense, because the chances really are that if I chose “Fix” I do want to fix the code.
However, if the code in question is a class definition, choosing ‘fix’ is no longer sufficient because as soon as you try to edit in the opened code window you get another dialog – and this time it asks:
“Remove Classes from Memory?”
Now hang on a moment, we have already told VFP that:
[1] We want to fix the code that is running
[2] Yes, we really do want to cancel the running program
and now it asks if we want to remove the class? How are we supposed to fix it if we DON’T? To make matters worse, the selected default option is “Ignore”!
So if you happen to press the Enter key to insert a new line as the first step in your edit (and how often is that NOT the first thing you want to do?), this idiot dialog flashes up on your screen, selects “Ignore”, goes away, and nothing happens. Now look, I am, after all, a developer, and surely if I am attempting to edit a class definition I actually WANT to do it? Who does VFP think it is to assume that I don’t know what I am doing? This is really annoying, not to say insulting!
Now consider how often, in your own applications, you have dialogs that nag the user like this. The classic is the “Are you sure?” question. Here’s the scenario: the user opens the search screen, does a locate for some value and fetches a record. They then select, from your options, “Delete”. A dialog box pops up saying “This will delete this record, are you sure?” with “No” as the default option (it’s “fail safe” time, folks…). How insulting is that? Of course they want to delete the record; they just spent 20 minutes finding the darn record, and now you ask them if they are sure this is what they meant to do?
Of course, I hear you say, there is always the possibility that they hit delete by accident. But whose fault is that? Answer: YOURS! You are the one who made it possible to hit ‘delete’ by accident, no-one else. If the delete functionality is so sensitive, then the user interface is wrong to make it so casually available. (Do you ask “Are you sure?” when they want to Add a record, or Save changes…?)
Why not make enabling the “delete” button a positive action, so that the user has to do something deliberate to initiate the process and does not then have to deal with “This will delete a record”, followed by “Are you sure?”, followed by “Are you really, really sure?” and so on ad infinitum? At the end of the day you, the developer, have to either execute the delete command or cancel the operation – so it is better to warn the user, and give them the chance to cancel, before they have invested their time in the process.
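For illustration, here is one way such a ‘positive action’ might look. This is only a minimal sketch, not code from any particular application – the class and control names are invented – but it shows the idea: the Delete button stays disabled until the user explicitly marks the record for deletion, so no “Are you sure?” dialog is needed afterwards.
DEFINE CLASS frmCustomer AS Form
    *!* Illustrative names only - the point is that deletion requires a
    *!* deliberate, two-step action instead of a nagging dialog
    ADD OBJECT chkMarkDelete AS CheckBox WITH ;
        Caption = "Mark this record for deletion", Top = 10, Left = 10, Width = 220
    ADD OBJECT cmdDelete AS CommandButton WITH ;
        Caption = "Delete", Top = 40, Left = 10, Enabled = .F.

    PROCEDURE chkMarkDelete.InteractiveChange
        *!* Enabling the button is itself the confirmation step
        ThisForm.cmdDelete.Enabled = ( This.Value = 1 )
    ENDPROC

    PROCEDURE cmdDelete.Click
        *!* The user has already made a positive choice, so just do the work
        DELETE NEXT 1
        ThisForm.chkMarkDelete.Value = 0
        This.Enabled = .F.
    ENDPROC
ENDDEFINE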
Inform your users, but don’t compromise performance to do so
Here is some code that I came across in a real-life application recently. The application in question was one that was written some time ago and for which data volumes had grown considerably over the years. The code in question is fairly common, and simply updates a WAIT window with a message indicating the progress of an operation that was running on every record in a table. Here is the relevant part of the SCAN loop:
*!* Initialize record progress counter
lnCnt = 0
lcOfRex = " of " + TRANSFORM( RECCOUNT( ALIAS() ) )
SCAN
    *!* Update progress display
    lnCnt = lnCnt + 1
    lcTxt = 'Processing Record ' + TRANSFORM( lnCnt ) + lcOfRex
    WAIT lcTxt WINDOW NOWAIT
Now the interesting thing about this process was that it was running against a table that now contained more than 125,000 records. So what, I hear you say? Well, the time taken to execute the process was about three minutes. But try this code on your local machine:
LOCAL lnCnt, lcOfRex, lnSt, lnNum, lcTxt, lnEn
lnCnt = 0
lcOfRex = " of 125000"
lnSt = SECONDS()
FOR lnNum = 1 TO 125000
    lnCnt = lnCnt + 1
    lcTxt = 'Processing Record ' + TRANSFORM( lnCnt ) + lcOfRex
    WAIT lcTxt WINDOW NOWAIT
NEXT
lnEn = SECONDS()
? STR( lnEn - lnSt, 8, 4 )
Now, on my PC this code took just over 32 seconds to run, and what does it do? NOTHING at all! The screen display is not even readable. The only conclusion that could be drawn was that this little bit of utterly useless code was taking more than 15% of the total run time. Try the following version of the same code:
LOCAL lnCnt, lcOfRex, lnSt, lnNum, lcTxt, lnEn
lnCnt = 0
lcOfRex = " of 125000"
lnSt = SECONDS()
FOR lnNum = 1 TO 125000
    lnCnt = lnCnt + 1
    IF MOD( lnCnt, 10000 ) = 0
        lcTxt = 'Processing Record ' + TRANSFORM( lnCnt ) + lcOfRex
        WAIT lcTxt WINDOW NOWAIT
    ENDIF
NEXT
lnEn = SECONDS()
? STR( lnEn - lnSt, 8, 4 )
This runs, on my machine, in less than 0.3 of a second – more than 100 times faster! Now, if we consider the actual process in question, which was dealing with 125,000 records in about three minutes, that means it was running at about 700 records per second. Can the user even see a screen updating at that rate, let alone derive any useful benefit from it? Of course not, so why do it?
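Applied back to the original SCAN loop, the throttled version would look something like this. The 10,000-record interval is simply the figure used in the benchmark above, and the real per-record processing is again omitted:
*!* Initialize record progress counter
lnCnt = 0
lcOfRex = " of " + TRANSFORM( RECCOUNT( ALIAS() ) )
SCAN
    lnCnt = lnCnt + 1
    *!* Only refresh the WAIT window every 10,000 records
    IF MOD( lnCnt, 10000 ) = 0
        lcTxt = 'Processing Record ' + TRANSFORM( lnCnt ) + lcOfRex
        WAIT lcTxt WINDOW NOWAIT
    ENDIF
    *!* ... the real per-record processing goes here ...
ENDSCAN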
The question that this is all leading up to is, therefore:
What is a reasonable interval at which to update the screen?
Unfortunately there is no ‘right’ answer, but I would suggest that you can apply some common sense. The first requirement is that you need to have some idea of the total length of the process in question. Obviously, if the process runs for three hours, updating the display every ten seconds is probably unnecessary; conversely, if it takes three minutes, then a ten-second update interval seems reasonable.
The general rule of thumb I use is to try and update my user information display 200 times per process (i.e. at every 0.5% of completion). My progress bar therefore has 200 units, and I set my update interval by calculating how many records constitute 0.5% of the total, using the number of records and the average time to process each.
How do I know the average time? From testing!
When I am developing the code, I test it, and I base my assessment of the average processing time on a test that uses a volume of data at least 50% larger than I expect to see in production. Yes, this sometimes means that my progress updates are too fast when the system first goes into use, but as data volumes grow the display rate typically gets closer to my target of 0.5% per update. Even if I was way off in my estimate, and the process ends up taking twice as long per record as I expected, I am still updating the display every 1% of the way – which in a three-hour process would mean the screen gets updated every 100 seconds or so.
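As a rough sketch of how that calculation might look in code – the per-record time of 0.0015 seconds is purely an assumed figure taken from testing, and the variable names are illustrative:
*!* Derive the update interval from the 0.5% rule (200 updates per run)
lnRecords     = RECCOUNT( ALIAS() )
lnSecsPerRec  = 0.0015                               && assumed average, measured in testing
lnUpdateEvery = MAX( 1, INT( lnRecords * 0.005 ) )   && records per 0.5% step
lnSecsPerStep = lnUpdateEvery * lnSecsPerRec         && sanity check: seconds between updates
lnCnt = 0
SCAN
    lnCnt = lnCnt + 1
    IF MOD( lnCnt, lnUpdateEvery ) = 0
        lcTxt = 'Processing Record ' + TRANSFORM( lnCnt ) + ' of ' + TRANSFORM( lnRecords )
        WAIT lcTxt WINDOW NOWAIT
    ENDIF
    *!* ... record processing ...
ENDSCAN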
This may all sound very simple and obvious, but as so often in application development, it is the little things that make the difference – especially when they are obvious to the end user.
Published Tuesday, March 07, 2006 3:23 PM by andykr