Saturday, April 24, 2021

Writing better code (Part 1)

As we all know, Visual FoxPro provides an extremely rich and varied development environment but sometimes too much of a good thing leads us into bad habits.  When writing code there are usually several ways of achieving the same result, but all too often there are significant differences in performance and the only way to ensure that our code is optimized for the best performance is to test, test again and re-test under as many different conditions as can be devised. Having said that, it is equally important to recognize that the first requirement of any code is that it is functionally correct. As Marcia Akins (Microsoft MVP, Author and Co-Owner of Tightline Computers Inc) has been known to say  “doing something wrong as fast as possible is not really very helpful”.
But too often we tend to treat getting the correct functionality as the end of the story and, once something is working, simply move on to the next issue. The reality is that most developers typically review and optimize their code at the same time as they go back to add the comments (i.e. never)!
However, by applying some basic rules and techniques you can ensure that you avoid some of the more common problems and produce better and more efficient code the first time around. The more frequently you can do that, the less time you will need to spend revisiting and ‘tuning’ functional code. This has two quite separate benefits for any developer:
·         The less tweaking of code you have to do once it is working correctly, the less chance there is of introducing bugs into functional code
·         Getting it right immediately saves time, not having to revisit code is always quicker than re-factoring to improve performance or usability
The purpose of this series of articles is to review some of the things that we can do, as we are writing code, to ensure that our software is as efficient and as usable as possible and to minimize the need to re-visit working code to tweak it. We’ll begin with some of the basics and get a little more advanced in the later articles in this series.
Warn your users, but don’t treat them like idiots
Someone (sorry, but I don’t remember who) once remarked to the effect that the only two things end-users want from their software is that it won’t make them look silly in front of the boss by giving them the wrong answers, and that it won’t treat them like idiots. Unfortunately we, as developers, tend to concentrate so much on the first, that we forget about the second. Yet one of the most basic things that we can do in our applications is to try and strike the proper balance between providing relevant warnings and ‘nagging’.
One of my personal pet hates in this area comes from VFP itself. Have you ever noticed that when you are stepping through code in the debugger and hit “Fix” you get an immediate dialog that says “Cancel Program?”. I understand that the intention here is to warn me in case, say, I inadvertently opened the drop-down and chose the “Fix” option when I really wanted some other option (am I really that dumb?). But in this dialog the ‘default’ option is “YES”, which is not really consistent with the reason for displaying the dialog in the first place (i.e. to ‘fail safe’). Still, you can argue that it makes sense because the chances really are that if I chose “Fix” I do want to fix the code.
However, if the code in question is a class definition, choosing ‘fix’ is no longer sufficient because as soon as you try to edit in the opened code window you get another dialog – and this time it asks:
“Remove Classes from Memory?”
Now hang on a moment, we have already told VFP that:
[1] We want to fix the code that is running
[2] Yes, we really do want to cancel the running program
and now it asks if we want to remove the class? How are we supposed to fix it if we DON’T? To make matters worse, the selected default option is “Ignore”!
So if you happen to try to insert a new line by pressing the Enter key as the first step in your edit (and how often is that NOT the first thing you want to do?), this idiot dialog flashes on your screen, selects “Ignore”, goes away, and nothing happens. Now look, I am, after all, a developer, and surely if I am attempting to edit a class definition I actually WANT to do it? Who does VFP think it is to assume that I don’t know what I am doing? This is really annoying, not to say insulting!
Now consider how often your own applications have dialogs that nag the user like this. The classic is the “Are you Sure?” question. Here’s the scenario: the user opens the search screen, does a locate for some value and fetches a record. They then select, from your options, “Delete”. A dialog box pops up saying “This will delete this record, are you sure?” with “NO” as the default option (it’s “fail safe” time, folks…). How insulting is that? Of course they want to delete the record; they just spent 20 minutes finding the darn record, and now you ask them if they are sure this is what they meant to do?
Of course, I hear you say, there is always the possibility that they hit delete by accident. But whose fault is that? Answer, YOURS!  You are the one who made it possible to hit ‘delete’ by accident, no-one else. If the delete functionality is so sensitive, then the user interface is wrong to make it so casually available. (Do you ask “Are you sure?” when they want to Add a record, or Save changes….?).
Why not make enabling the “delete” button a positive action so that the user has to do something to initiate the process and does not then have to deal with the “This will delete a record” followed by “Are you sure?”,  followed by “Are you really, really sure” and so on ad infinitum. At the end of the day you, the developer,  have to either execute the delete command or cancel the operation – better to warn them, and give them the chance to cancel, before they have invested their time in the process.
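One simple way to do that, as a minimal sketch (the control names, the chkConfirmDelete check box and the customer alias are purely illustrative, not part of any real form), is to keep the Delete button disabled until the user explicitly arms it:

*** chkConfirmDelete.InteractiveChange() - the user must explicitly arm the delete
Thisform.cmdDelete.Enabled = NOT EMPTY( This.Value )

*** cmdDelete.Click() - no "Are you sure?" dialog needed, the user has already opted in
DELETE NEXT 1 IN customer          && 'customer' is whatever alias the form is editing
Thisform.chkConfirmDelete.Value = .F.
Thisform.cmdDelete.Enabled = .F.
Thisform.Refresh()

The positive action (ticking the check box) replaces the nag dialog entirely; cancelling is as simple as not ticking it.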
Inform your users, but don’t compromise performance to do so
Here is some code that I came across in a real-life application recently. The application in question was one that was written some time ago and for which data volumes had grown considerably over the years. The code in question is fairly common, and simply updates a WAIT window with a message indicating the progress of an operation that was running on every record in a table. Here is the relevant part of the SCAN loop:
*!* Initialize record progress counter
lnCnt = 0
lcOfRex = " of " + TRANSFORM( RECCOUNT( ALIAS() ) )
SCAN
  *!* Update progress display
  lnCnt = lnCnt + 1
  lcTxt = 'Processing Record ' + TRANSFORM( lnCnt ) + lcOfRex
  WAIT lcTxt WINDOW NOWAIT
Now the interesting thing about this process was that it was running against a table that now contained more than 125,000 records. So what? I hear you say; well, the time taken to execute the process was about 3 minutes. But try this code on your local machine:
LOCAL lnCnt, lcOfRex, lnSt, lnNum, lcTxt, lnEn
lnCnt = 0
lcOfRex = " of 125000"
lnSt = SECONDS()
FOR lnNum = 1 TO 125000
  lnCnt = lnCnt + 1
  lcTxt = 'Processing Record ' + TRANSFORM( lnCnt ) + lcOfRex
  WAIT lcTxt WINDOW NOWAIT
NEXT
lnEn = SECONDS()
? STR( lnEn - lnSt, 8, 4 )
Now, on my PC this code took just over 32 seconds to run – and what does it do? NOTHING at all! The screen display is not even readable. The only conclusion that could be drawn was that this little bit of utterly useless code was taking more than 15% of the total run time. Try the following version of the same code:
LOCAL lnCnt, lcOfRex, lnSt, lnNum, lcTxt, lnEn
lnCnt = 0
lcOfRex = " of 125000"
lnSt = SECONDS()
FOR lnNum = 1 TO 125000
  lnCnt = lnCnt + 1
  IF MOD( lnCnt, 10000 ) = 0
    lcTxt = 'Processing Record ' + TRANSFORM( lnCnt ) + lcOfRex
    WAIT lcTxt WINDOW NOWAIT
  ENDIF
NEXT
lnEn = SECONDS()
? STR( lnEn - lnSt, 8, 4 )
This runs, on my machine, in less than 0.3 of a second – that is more than 100 times faster! Now if we consider the actual process in question, which was dealing with 125,000 records in about 3 minutes, that means it was running at about 700 records per second. Can the user even see a screen updating at that rate, let alone derive any useful benefit from it? Of course not, so why do it?
The question that this is all leading up to is, therefore:
What is a reasonable interval at which to update the screen?
Unfortunately there is no ‘right’ answer, but I would suggest that you can apply some common sense. The first requirement is that you need to have some idea about the total length of the process in question. Obviously if the process runs for three hours, updating every ten seconds is probably unnecessary; conversely, if it takes three minutes, then a ten-second update interval seems reasonable.
The general rule of thumb I use is to try and update my user information display 200 times per process (i.e. at every 0.5% step of completion). My progress bar therefore has 200 units, and I set my update interval by calculating what constitutes 0.5% of the total, based on the number of records and the average time to process each.
How do I know the average time? From testing!
When I am developing the code, I test it. And I base my assessment of the average processing time on a test that uses a volume of data that is at least 50% larger than I expect to see in production. Yes, this sometimes means that my progress updates are too fast when the system first goes into use, but as data volumes grow, the display rate typically gets closer to my target 0.5% completion. Even if I was way off in my estimate, and the process ends up taking twice as long per record as I expected, I am still updating the display every 1% of the way – which in a three-hour process would mean the screen gets updated every 100 seconds or so.
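As a minimal sketch of that rule of thumb (assuming a plain SCAN over the currently selected table and a WAIT window rather than a progress bar), the update interval can be derived directly from the record count so that the display is refreshed roughly 200 times in total:

LOCAL lnTotal, lnInterval, lnCnt
lnTotal    = RECCOUNT()
lnInterval = MAX( 1, INT( lnTotal / 200 ))   && roughly one update per 0.5% of the work
lnCnt      = 0
SCAN
  lnCnt = lnCnt + 1
  IF MOD( lnCnt, lnInterval ) = 0
    WAIT 'Processing Record ' + TRANSFORM( lnCnt ) + ' of ' + TRANSFORM( lnTotal ) WINDOW NOWAIT
  ENDIF
  *** ...the real per-record processing goes here...
ENDSCAN
WAIT CLEAR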
This may all sound very simple and obvious, but as so often in application development, it is the little things that make the difference – especially when they are obvious to the end user.

Published Tuesday, March 07, 2006 3:23 PM by andykr

Friday, April 23, 2021

Writing Better Code (Part 2)

As I noted in the first article in this little series, one of the really great things about Visual FoxPro is that there are usually several different ways of doing the same thing. Unfortunately, this is also one of the worst things about it because, at least if there is only one way of doing something, you don’t have to think too much about it. One of the things that I notice whenever I am working on code (my own or someone else’s) is how easily we developers fall into patterns of doing things. Once we have figured out the way to tackle some particular type of operation we tend to stick to it unquestioningly.


I had occasion, recently, to re-visit some of the most basic commands in Visual FoxPro and to critically examine their performance. I found some of the results surprising and, in the hope that you will too, I have documented the results of my research.

How the tests were run

There are several problems when trying to assess the relative performance of Visual FoxPro commands and functions. The solution that I adopted was to set up my tests as individual functions, and then to use a random number generator to determine which test to call. By running the individual tests many thousands of times, in random sequences, the effects of caching and pre-compilation are effectively cancelled out. I used the same basic methodology for all tests and won’t bother to mention it again.
For example, the code used for Macro Expansion Test #5  (InDirect EVAL() for Object Value) was:
loObj = CREATEOBJECT( 'textbox' )
loObj.Value = "This is a test string value"
lcObj = "loObj"
lnSt = SECONDS()
FOR lnReps = 1 TO 10000
  luVal = EVALUATE( lcObj + '.Value' )
NEXT
lnEn = SECONDS()
(Note: 10,000 iterations was enough to be meaningfully measurable – the result was then averaged for each test). The results of each test were written out to a table and then aggregated and averaged to get the results illustrated in this article.
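The harness itself is not shown here, but a minimal sketch of the kind of randomized dispatcher described above might look like the following – the test-function names (MacroTest() and so on) and the results table are illustrative only, not the actual benchmark code:

LOCAL lnPass, lnPick, lnSt
FOR lnPass = 1 TO 1000
  lnPick = INT( RAND() * 3 ) + 1     && pick one of the three tests at random
  lnSt = SECONDS()
  DO CASE
    CASE lnPick = 1
      =MacroTest()                   && each test runs its own 10,000 iterations
    CASE lnPick = 2
      =IndirectEvalTest()
    OTHERWISE
      =DirectEvalTest()
  ENDCASE
  *** Log the elapsed time for later aggregation
  INSERT INTO results ( ntest, nsecs ) VALUES ( lnPick, SECONDS() - lnSt )
NEXT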

Name Expressions vs Macro Substitution

One of the great strengths of Visual FoxPro is its ability to use indirection at run time. Now, all experienced Visual FoxPro developers know that using a name expression (i.e. the EVALUATE() function) is ‘faster’ than macro substitution (& operator) for doing this. But the question is, does it really matter? Seems to me that there are three basic scenarios in which we may want to use indirection when working with data:
  • Use a reference to get a value: e.g. Retrieving the contents of a field using only its name
  • Use a reference to address an object: e.g. Get a property value
  • Use a reference to find some specific item of data: e.g. In a LOCATE command
The following table shows the results of carrying out the same operation in three different ways: first using traditional VFP macro substitution, then using the EVALUATE() function indirectly (e.g. EVAL( lcFieldName )), and finally using a direct evaluation (e.g. EVAL( FIELD( nn ) )).
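Before looking at the numbers, here, roughly, is what the three forms look like when retrieving a field value by name (the cLastName field is only illustrative):

lcField = 'cLastName'              && field name held in a variable
*** 1. Macro substitution
luVal = &lcField
*** 2. Indirect evaluation of a name expression
luVal = EVALUATE( lcField )
*** 3. Direct evaluation
luVal = EVALUATE( FIELD( 1 ))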
Relative performance of macro expansion and name expressions
Test Name                            Total Runs   Total Time (sec)   Avg (ms)
Macro Substitute for Table Value         107000            14.8230     0.1385
Indirect EVAL() for Table Value          142000            11.6680     0.0822
Direct EVAL() for Table Value             96000             6.4920     0.0676

Macro Substitute for Object Value       1090000            13.7840     0.0126
InDirect EVAL() for Object Value        1040000             8.7900     0.0085
Direct EVAL() for Object Value          1130000             9.1930     0.0081

Macro substitute in LOCATE              1010000             8.1700     0.0081
InDirect EVAL() in LOCATE               1140000             9.1250     0.0080
Direct EVAL() in LOCATE                 1140000             9.1980     0.0081
The first thing to note is that there is no intrinsically slow method when considering each approach in isolation (the worst case scenario shows a difference of less than a tenth of a millisecond between the best and worst methodology – hardly detectable!). However, when being executed repeatedly (inside a tight loop for example), it is clear that EVAL() has a significant performance advantage when dealing with either data tables or objects and so the perceived wisdom that EVAL() is faster than &expansion is obviously well founded.
Interestingly, there is little detectable difference between using direct and indirect evaluation and my personal preference, if only for improving readability of the code, is therefore to use the indirect method. In other words, I will accept the (very small) performance hit to retain code like this:
lcField = FIELD( lnCnt )
luVal = EVALUATE( lcField )
in preference to the less obvious:
luVal = EVALUATE( FIELD( lnCnt ))
The main conclusion is, therefore, that there is no benefit in using macro substitution when EVAL() is available as an option. A secondary point to note is that as the number of repetitions increases the difference rapidly becomes significant and that EVAL() should always be used when processing repetitively.

Conditional Evaluation

When it comes to evaluating some condition and switching control depending on the result, VFP 9.0 has almost an embarrassment of riches. We now have four different ways of evaluating some condition ranging from the traditional IF…THEN…ELSE, through the IIF() function, the DO CASE…ENDCASE statement and finally the new ICASE() function. The question is, therefore, does it matter which we use in any given situation? The speed of execution of these commands was compared using a standardized multiple level test in the general form:
IF lnTest = 1
  lnRes = 1
ELSE
  IF lnTest = 2
    lnRes = 2
  ELSE
    IF lnTest = 3 
      lnRes = 3
    ELSE
      lnRes = 4
    ENDIF
  ENDIF
ENDIF
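For comparison, the equivalent DO CASE and nested IIF() versions of the same test (my reconstruction, not necessarily the exact benchmark code) look like this:

*** DO CASE version
DO CASE
  CASE lnTest = 1
    lnRes = 1
  CASE lnTest = 2
    lnRes = 2
  CASE lnTest = 3
    lnRes = 3
  OTHERWISE
    lnRes = 4
ENDCASE

*** Nested IIF() version
lnRes = IIF( lnTest = 1, 1, IIF( lnTest = 2, 2, IIF( lnTest = 3, 3, 4 )))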
Relative performance of conditional testing
Test Name                                    Total Runs   Total Time (sec)   Avg (ms)
DO CASE Level 1                                  282000             0.6910     0.0025
DO CASE Level 2                                  242000             0.7600     0.0031
DO CASE Level 3                                  275000             1.1820     0.0043
DO CASE Otherwise                                247000             0.9900     0.0040

ICASE Level 1                                    313000             0.4020     0.0013
ICASE Level 2                                    281000             0.4210     0.0015
ICASE Level 3                                    300000             0.5610     0.0019
ICASE Otherwise                                  302000             0.5300     0.0018

IF ELSE Level 1                                  203000             0.3100     0.0015
IF ELSE Level 2                                  235000             0.6430     0.0027
IF ELSE Level 3                                  213000             0.7610     0.0036
IF ELSE Otherwise                                227000             0.8510     0.0037

IIF Level 1                                      213000             0.2700     0.0013
IIF Level 2                                      236000             0.4240     0.0018
IIF Level 3                                      209000             0.4200     0.0020
IIF Otherwise                                    222000             0.5100     0.0023

Level 2 of nested DO CASE inside DO CASE          67000             0.4300     0.0064
Level 2 of nested IF…ELSE inside IF…ELSE          56000             0.1400     0.0025
Once again the first thing to note is that nothing here is exactly slow. However there are some differences and a couple of anomalies that are worth noting.
One thing that stands out clearly is that the DO…CASE is invariably the slowest way of handling this sort of test, although it is also the most readable. Perhaps oddly, the fastest way to execute a simple evaluation of this type is to use the ICASE() function – though of course this, like the IIF(), is not really relevant when a block of code follows the conditional test. The main problem with both the IIF() and the ICASE() functions is, of course, the readability of the code. For example, the ICASE() that duplicates the IF…ELSE ladder above is:
lnRes = ICASE( lnTest = 1, 1, lnTest = 2, 2, lnTest = 3, 3, 4 )
which is hardly comprehensible without considerable effort. Of course, the commonest scenario for this type of logic is inside an SQL Query and so speed of execution is, rightly, paramount in that case.
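For example, this is the sort of place where the compact form earns its keep – a sketch only, with invented table and column names:

SELECT cOrderNo, ;
       ICASE( nTotal >= 10000, 'Large', ;
              nTotal >=  1000, 'Medium', ;
              'Small' ) AS cSize ;
  FROM orders ;
  INTO CURSOR csrSized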
As for the anomaly I referred to above: with all methods tested, the execution time increases as you go further down the levels – which is precisely what you would expect. After all, it makes sense that it should take longer to make three tests than two, and longer for two than for one. However, in both the DO CASE and the ICASE() the ‘Otherwise’ condition is executed faster than the level which immediately precedes it – not much faster, admittedly, but undeniably and consistently faster. I first noticed this phenomenon back in VFP 7.0 and it is interesting that it persists not only in the CASE statement in VFP 9.0 but also in the new ICASE(). What it means, I do not know, but it is an observed phenomenon.
The conclusions here are, first, that if you can get away with it, use the functions in preference to the plain commands. Second, if you are doing this repeatedly, then an IF…ELSE ladder is likely to give you better performance than a DO CASE – especially if the nesting exceeds one level. The caveat is that the more complex the code becomes, the harder it is to read and understand when using an IF…ELSE ladder or the functions. My personal preference here is to use the CASE structure as a comment in my code, but actually execute it using one of the other methods.
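In practice that simply means keeping the readable version alongside the fast one, something like this (the order-value figures and names are invented):

*** Readable version, kept as a comment:
***   DO CASE
***     CASE nOrderValue >= 5000
***       nDiscount = 10
***     CASE nOrderValue >= 1000
***       nDiscount = 5
***     OTHERWISE
***       nDiscount = 0
***   ENDCASE
nDiscount = ICASE( nOrderValue >= 5000, 10, nOrderValue >= 1000, 5, 0 )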

Looping Through Data

One of the most fundamental operations in almost every data centric application is the requirement to retrieve a set of records and then process them sequentially. In Visual FoxPro there are basically three ways of doing this: the original xBase DO WHILE command, the newer SCAN command and, newest of all (though still pretty ancient), the FOR…NEXT loop. There are, of course, many variants and variations but these three constructs provide the basis for all of them. In the next little batch of tests I evaluated the speed with which each record in a large name and address table could be visited and a specific column value retrieved (and, for the purposes of this test, immediately discarded).
Each of the three constructs was tested unconditionally (i.e. start at the first record and proceed to the last) and for a set of records meeting a specific condition (in this case for addresses in the state of Ohio).
The reason for this particular test is that it represents the two commonest scenarios. However unlike the other comparisons we have done to date, the code required to perform this operation is completely different for each option.

*** DO WHILE – Unconditional
GO TOP
DO WHILE NOT EOF()
  luRes = cState
  SKIP
ENDDO

*** DO WHILE – Conditional
=SEEK( 'OH', 'bigtable', 'cState' )
DO WHILE cState == 'OH' AND NOT EOF()
  luRes = cState
  SKIP
ENDDO

*** SCAN – Unconditional
GO TOP
SCAN
  luRes = cState
ENDSCAN

*** SCAN – Conditional (SCAN FOR)
GO TOP
SCAN FOR cState == 'OH'
  luRes = cState
ENDSCAN

*** SCAN – Conditional (SEEK() and SCAN WHILE)
=SEEK( 'OH', 'bigtable', 'cState' )
SCAN WHILE cState == 'OH'
  luRes = cState
ENDSCAN

*** FOR…NEXT – Unconditional (using GOTO)
lnRex = RECCOUNT( 'bigtable' )
FOR lnCnt = 1 TO lnRex
  GOTO (lnCnt)
  IF EOF()
    EXIT
  ENDIF
  luRes = cState
NEXT

*** FOR…NEXT – Unconditional (using SKIP)
lnRex = RECCOUNT( 'bigtable' )
FOR lnCnt = 1 TO lnRex
  SKIP
  IF EOF()
    EXIT
  ENDIF
  luRes = cState
NEXT

*** FOR…NEXT – Conditional (testing for cState)
lnRex = RECCOUNT( 'bigtable' )
FOR lnCnt = 1 TO lnRex
  GOTO (lnCnt)
  IF EOF()
    EXIT
  ENDIF
  IF NOT cState == 'OH'
    LOOP
  ENDIF
  luRes = cState
NEXT
The table used has 125,000 records and so is moderately large for this type of operation. The results may surprise you – they certainly surprised me!
Relative performance of the Looping Tests
Test Name                                         Total Runs   Max (sec)   Min (sec)   Avg (sec)
DO WHILE NOT EOF() with NO controlling index              26      0.6110      0.4510   0.5160769
SCAN...ENDSCAN with NO controlling index                  29      0.3610      0.3310   0.3536667
FOR...NEXT with SKIP with NO controlling index            19      1.1220      0.4500   0.5746250
FOR...NEXT & GOTO with NO controlling index               26      0.6600      0.4710   0.5691667

DO WHILE NOT EOF() with controlling index                 21      2.7040      2.4240   2.5595714
SCAN...ENDSCAN with controlling index                     25      2.9640      2.3530   2.5836154
FOR...NEXT with SKIP with controlling index                9      3.0840      2.4140   2.6314444
FOR...NEXT & GOTO with controlling index                  26      0.9510      0.4810   0.6214500

SCAN FOR cState = 'OH' with NO index set                  26      0.0710      0.0600   0.0680000
SCAN FOR cState = 'OH' with index set                     19      0.1510      0.1100   0.1208571
FOR...NEXT with test for cState = 'OH'                    16      0.6300      0.5410   0.5867500

SEEK() & DO WHILE cState = 'OH'                           19      0.0910      0.0700   0.0777500
SEEK() & SCAN WHILE cState = 'OH'                         29      0.0900      0.0700   0.0735000
There are a number of things to note from this table.
First, when processing all records in a table, the impact of a controlling index is significant if you are using any loop control that relies on the SKIP command – whether explicitly (DO…WHILE and FOR…NEXT) or implicitly (SCAN…ENDSCAN).
Second, SCAN FOR is also slower when a controlling index is set. In these tests there was an optimizable index on the column used and, even though SCAN is optimizable, the addition of the controlling index slowed the process noticeably.
Third, the controlling index does not matter when we are using SEEK() and a WHILE clause with either a SCAN or a DO WHILE. Here it makes little difference which we use as both deliver essentially the same performance and we usually need a controlling index when using a WHILE as the scope anyway.
The following conclusions can be drawn from these results:
  • When processing all records, ensure that there is no controlling index when using a command that relies on SKIP. Probably the best generalization is to use the FOR loop with GOTO, since it is essentially unaffected by the presence or absence of a controlling index
  • The most efficient way to handle selective processing depends on whether an index suitable for use with SEEK() exists. If it does, then use a combination of SEEK() and a WHILE scope; otherwise use SCAN FOR with no controlling index set on the table (as sketched below)
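Putting those two conclusions together, a minimal sketch of the two selective-processing patterns (re-using the illustrative bigtable and its cState tag from the tests above) might be:

*** When an index tag usable by SEEK() exists (here, a tag on cState in 'bigtable')
IF SEEK( 'OH', 'bigtable', 'cState' )
  SELECT bigtable
  SCAN REST WHILE cState == 'OH'
    *** process the record here
  ENDSCAN
ENDIF

*** When no such tag exists: make sure no controlling index is set, then filter
SELECT bigtable
SET ORDER TO 0
SCAN FOR cState == 'OH'
  *** process the record here
ENDSCAN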
Published Monday, March 13, 2006 4:43 PM by andykr

Thursday, April 22, 2021

Writing Better Code (3)

In the third article of this little series I am going to talk about procedures and functions. Visual FoxPro, like its ancestors FoxPro and FoxBase, supports two different ways of declaring, and calling, code.

Creating a Procedure

A procedure is simply a block of code that is addressed by name. It may, but is not required to, accept one or more parameters, and the reason for creating a procedure is to avoid the necessity of writing the same code in many places. Procedures are called using the “DO” command and, by default, any parameters are passed by reference. There is, therefore, no need to have procedures return values – they can modify any values in the calling code that are passed to them.
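A trivial illustration of that point (nothing to do with the GoSafe example that follows; the name BumpCounter is invented) is that a procedure called with DO can change the very variable it was passed:

*** Main program
lnCount = 10
DO BumpCounter WITH lnCount
? lnCount              && Displays 11 - the procedure changed the caller's variable

PROCEDURE BumpCounter
PARAMETERS tnValue
tnValue = tnValue + 1
RETURN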
Here is a simple example of the sort of code you might put into a procedure. All it does is accept a record number and a table alias. It validates that the record number is valid in the context of the specified table and, if so, moves the record pointer. (Note: the code, as written, does not take account of buffered records, which have a negative record number.) If anything fails, or is invalid, an error is generated:

********************************************************************
*** Name.....: GOSAFE
*** Author...: Andy Kramek & Marcia Akins
*** Date.....: 03/20/2006
*** Notice...: Copyright (c) 2006 Tightline Computers, Inc
*** Compiler.: Visual FoxPro 09.00.0000.3504 for Windows
*** Function.: If the specified record number is valid in the
*** .........: specified alias go to it, otherwise generate an error
********************************************************************
PROCEDURE gosafe
PARAMETERS tnRecNum, tcAlias
TRY
  *********************************
  *** Validate the parameters first
  *********************************
  *** We must get a record number!
  IF VARTYPE( tnRecNum ) # "N" OR EMPTY( tnRecNum )
    ERROR "Must pass a record number to GoSafe"
  ENDIF

  *** If no Alias specified, assume current alias
  lcAlias = IIF( VARTYPE( tcAlias ) # "C" OR EMPTY( tcAlias ), LOWER( ALIAS()), LOWER( ALLTRIM( tcAlias )) )
  IF EMPTY( lcAlias ) OR NOT USED( lcAlias )
    *** No table!
    ERROR "Must specify, or select, an open table when calling GoSafe"
  ENDIF

  *********************************
  *** Check that the record number is valid for the alias
  *********************************
  IF BETWEEN( tnRecNum, 1, RECCOUNT( lcAlias ) )
    *** This is OK
    GOTO (tnRecNum) IN (lcAlias)
  ELSE
    *** Nope, the record number is no good
    ERROR "record " + TRANSFORM( tnRecNum ) + " is not valid for table " + lcAlias
  ENDIF    

CATCH TO loErr
  MESSAGEBOX( loErr.Message, 16, "GoSafe Failed" )

ENDTRY
RETURN
To set up and call the procedure we use:

SET PROCEDURE TO procfile ADDITIVE
*** Save the record pointer in Account
lcOldAlias = 'account'
lnOldRec = RECNO( lcOldAlias )
<<more code here>>
*** Restore the record pointer
DO GoSafe WITH lnOldRec, lcOldAlias
This particular procedure doesn’t modify anything and although in fact (like all VFP Methods, Functions and Procedures) it actually returns a logical “.T.” there is no need, or even facility, to capture that value. Note that if we wanted to pass values to a procedure by value, instead of by reference, then we have to pass them as the actual values and not as variables (Yes, we could also change the setting of UDFPARMS but that is NOT recommended because it has other, usually unwanted, side-effects. Besides, using a global solution for a local issue is generally a bad idea anyway). So to pass a record number ‘by value’ to this procedure we would use:
DO GoSafe WITH INT( lnOldRec ), lcOldAlias
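Enclosing the variable in parentheses has the same effect, because that too turns the argument into an expression rather than a simple variable reference:

*** Parentheses force pass-by-value in a DO ... WITH call
DO GoSafe WITH (lnOldRec), lcOldAlias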

Creating a Function

The other method for calling code is to create a function. The basic difference between a procedure and a function is that functions ALWAYS return a value and so whenever you call a function, whether it is a native VFP function, or one of your own, you should always check the result. Functions are called by suffixing the name with a pair of parentheses. As with procedures, functions may accept one or more parameters but unlike procedures, parameters passed to functions are, by default, passed by value. The consequence is that functions do not modify values in the code that called them. (Yes, this is the exact opposite of the behavior for procedures). Here is a simple function that returns the time as a character string from a number of seconds.

********************************************************************
*** Name.....: GETTIMEINWORDS
*** Author...: Andy Kramek & Marcia Akins
*** Date.....: 03/20/2006
*** Notice...: Copyright (c) 2006 Tightline Computers, Inc
*** Compiler.: Visual FoxPro 09.00.0000.3504 for Windows
*** Function.: Return number of Days/Hours and Minutes from a number of seconds
*** Returns..: Character String
********************************************************************
FUNCTION GetTimeInWords( tnElapsedSeconds, tlIncludeSeconds )
LOCAL lcRetval, lnDays, lnHrs, lnMins
*** Initialize the variables
STORE '' TO lcRetval
STORE 0 TO lnDays, lnHrs, lnMins
*** Handle the Days first
lnDays = INT( tnElapsedSeconds / 86400 )
IF  lnDays > 0
  lcRetVal = PADL( lnDays, 3 ) + ' Days '
ENDIF
*** Next the hours
lnHrs = INT(( tnElapsedSeconds % 86400 ) / 3600 )
IF lnHrs > 0
  lcRetVal = lcRetVal + PADL( lnHrs, 2, '0' ) + ' Hrs '
ENDIF
*** Now the minutes
lnMins = INT(( tnElapsedSeconds % 3600 ) / 60 )
*** And check for seconds
IF tlIncludeSeconds
  *** If we want the seconds - add them exactly
  lcRetVal = lcRetVal + PADL( lnMins, 2, '0') + ' Min '
  lcRetVal = lcRetVal + PADL( INT( tnElapsedSeconds % 60 ), 2, '0' )+' Sec '
ELSE
  *** Round minutes UP if >= 30 seconds
  lnMins = lnMins + IIF( INT( tnElapsedSeconds % 60 ) >= 30, 1, 0 )
  lcRetVal = lcRetVal + PADL( lnMins, 2, '0') + ' Min '
ENDIF
RETURN lcRetVal
As you can see, unlike the procedure, this code actually creates, and explicitly returns, a character string to whatever calls it. (The rationale behind this function is, of course, that if you obtain the difference between two DateTime values the result is in seconds). So when calling this function the return value must be handled in some way. We can display it:
?  GetTimeInWords( 587455, .T. )  && Displays:  6 Days 19 Hrs 10 Min 55 Sec
or store it to a variable:
lcTime = GetTimeInWords( 587455 )
or even send it to the clipboard (to paste into some other application, for example):
_cliptext = GetTimeInWords( 587455 )
Remember that the default behavior of Visual FoxPro is that parameters are passed to a function by value, so if you do need to pass values by reference (and this most commonly arises when dealing with arrays) then you must prefix the variable with an “@” sign, like this:
DIMENSION gaMyArray[3,2]
luRetVal = SomeFunction( @gaMyArray )

So what’s the difference?

Although I declared this code as a FUNCTION and the GoSafe code as a PROCEDURE, in practice VFP does not care. We can call code that is declared as a procedure as if it were a function (and vice versa). Thus we can actually call the “GoSafe” procedure referred to above like this:
llStatus = GoSafe( lnOldRec, lcOldAlias )
The return value in llStatus would, given the way the procedure is written, ALWAYS be .T., but if we modified the code just slightly, we could have the return value reflect whether or not the procedure succeeded in setting the record pointer correctly. The only modification needed is to test for success after moving the record pointer, thus:

  IF BETWEEN( tnRecNum, 1, RECCOUNT( lcAlias ) )
    *** This is OK
    GOTO (tnRecNum) IN (lcAlias)
    *** Check that we ended up on the right record
    *** If so, set Return Value = .T.
    STORE ( RECNO( lcAlias ) = tnRecNum ) TO llRetVal
  ELSE
    *** Nope, the record number is no good
    ERROR "record " + TRANSFORM( tnRecNum ) + " is not valid for table " + lcAlias
  ENDIF    

CATCH TO loErr
  MESSAGEBOX( loErr.Message, 16, "GoSafe Failed" )
  *** Force the return Value to be false in all errors
  llRetVal = .F.
ENDTRY
*** Return Status
RETURN llRetVal
Of course we could also run the GetTimeInWords function by calling it as a procedure:
DO GetTimeInWords WITH 587455, .T.
But it wouldn’t do us much good, because the function does nothing apart from return a value – which in this case we cannot detect. The interesting thing about all this is that it makes two points:
·         First, VFP doesn’t actually care whether something is written as a PROCEDURE or as a FUNCTION. It is just code to be executed. What does matter is whether the code is called as a procedure (using DO gosafe ) or as a function ( llStatus = gosafe() )
·         Second, if you are calling a function it is vitally important to check the return value. While procedures usually embody their own error handling (as in the example given here), the value returned by a function is usually the only indication of whether it did what was expected, or not. This is true whether the function in question is defined in a procedure file, as a stand-alone program, a method on an object or a native Visual FoxPro function.

What has all this to do with writing better code?

Now you may be thinking that this is all very basic, and obvious, so why am I bothering with it? But you know, one of the commonest things that I see on the various forums that I frequent is a message titled something like:
TableUpdate Not Working
Apart from the obvious comment that it is highly unlikely that an error in the operation of something so fundamental as TableUpdate() would get through testing and into a released version of VFP, what this really means is that “I cannot get TableUpdate to work”.  Usually the question develops along these lines:
When users change the data, their changes are visible in the form but are not being saved to the database. There is no error, but the table is not updated. Please help!
And when the person finally posts their code (often after much prompting – why are some people so reluctant to post their code?) we find this:
=TableUpdate( .T. )
Now this one single line of code, which has been in every version of the VFP Help file (yes, including VFP 9.0), is, I believe, responsible for more wasted developer hours than almost anything else ever written. What’s wrong with it? Well, let’s see:
First, TableUpdate() is a function which, as we have seen, means that it returns a value. Where is the check for that value? Even though the help file clearly states that it has a return value, there is no indication in the examples in the help file that you need to check it! However, it is clear, when you read the fine print, why it is IMPERATIVE that you check the return value:
Return Value
Logical data type. TABLEUPDATE( ) returns True (.T.) if changes to all records are committed. Otherwise, TABLEUPDATE( ) returns False (.F.) indicating a failure. An ON ERROR routine isn't executed. The AERROR( ) function can be used to retrieve information about the cause of the failure.

What does this mean? Quite simply that if a call to TableUpdate() fails to update a record the ONLY way you will know is if you test the return value.
Second, it is passing only one of the FOUR possible parameters for TableUpdate(), not the least important of which is the third (the name of the table that is supposed to be updated).
Third, it uses the old (VFP 3.0, VFP 5.0) logical value for the first parameter, despite the fact that the possibility of using extended functionality was introduced as far back as Version 6.0! In fact about the only thing right about this example code is the spelling of the word “TableUpdate”!
How should this example look? Like this!

llOk = TABLEUPDATE( 1, .F., 'employee' )  && Commits changes to current row only
IF llOk
  *** TableUpdate Succeeded
  ? 'Updated cLastName value: '
  ?? cLastName  && Displays current cLastName value (Jones).
ELSE
  *** TableUpdate Failed - WHY?
  AERROR( laErr )
  MESSAGEBOX( laErr[ 1, 2], 16, 'TableUpdate Failed' )
ENDIF

For full details on the use of TableUpdate() (and TableRevert()) see my blog article “Handling buffered data in Visual FoxPro” from December 2005.

Conclusion

The point here is that you MUST get into the habit, if you do not already have it, of checking the return values from function calls – even those whose result you KNOW in advance (how many times do you use the SEEK() function but NOT check the result? Be honest!).
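Even that humble example deserves the same discipline – something along these lines rather than a bare =SEEK() (the orders alias, cOrderNo tag and tShipped field are invented purely for illustration):

IF SEEK( lcOrderNo, 'orders', 'cOrderNo' )
  *** Found it - safe to work with the record
  REPLACE tShipped WITH DATETIME() IN orders
ELSE
  *** Not found - tell the user instead of silently doing nothing
  MESSAGEBOX( 'Order ' + lcOrderNo + ' was not found', 48, 'Cannot Update' )
ENDIF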
The only possible exception I can think of, off-hand, would be TableRevert() because I am not sure how it can possibly fail, or even what you can do about it if it does. This seems to be more of a command, or  procedure, than a true function although, of course, it does return a value (i.e. the number of rows reverted).


Published Monday, March 20, 2006 7:47 PM by andykr
