
A Personal Software Process for Testing

Raghav S. Nandyal

General Manager

 

Intelligroup, Inc.

Methods Tools and Products Division

My Home Sarovar Plaza, Ground Floor, Secretariat Road, Hyderabad-500 004

Abstract:

The process adopted to test "software products" is rarely understood and followed as rigorously as the process used for product development. The justification for this statement is the familiar fatal application failure message that we are so used to seeing: "An error has occurred in your application. If you choose Ignore, you should save your work in a new file. If you choose Close, your application will terminate". Or the more cryptic "Fatal Error: Run-time Error 123". Such a failure is commonly called a GPF (General Protection Fault), is typically caused by stack faults, and is tolerated with a hurt feeling and a deep sense of frustration!

The hallmark of this personal software process for testing is its ability to prevent such run-time and application errors from occurring. No known testing technique has so far been able to address the very common fatal run-time errors that appear while the product is in the testing process. This is because these errors depend largely upon the memory model and have nothing to do with either the functionality of the application or the user interface. Such errors surface only when the two are integrated without a good integration test strategy; hence the numerous encounters with "Run Time Error" messages seen even in the most popular software products.

This paper provides an understanding of how to manage the testing of software products in practical, real-life scenarios such as ever-changing product requirements and unclear expectations of the user interface. How to effectively manage the testing process in such a scenario, by separating the testing of the application code from that of the user interface code, is described with the help of an example. After the application code and the user interface code are tested separately, the process of integrating the two is addressed.

It highlights the use of language constructs in C to effectively narrow down a bug in what is termed the "Fencing Technique". The author has put this technique to use in numerous software product development and testing initiatives.

A product development initiative in the Windows environment, using the C language for application development, is used as an example to explain the process, since the ideas in this paper could be termed a Personal Software Process for Testing. The process is equally applicable to product development initiatives in other environments; native language constructs and environment features would then have to be exploited to gain full benefit from what is presented in this paper.

Index Terms:

Black Box Testing, Error classifications, Fencing Techniques, Personal Software Process (PSP), Run time errors, White Box Testing

 

1          INTRODUCTION

Before delving into the PSP for testing, a brief introduction to the software testing process is in order. Software testing has consistently evolved with the changing paradigms of software development. White box testing, to emphasize structural integrity, and black box testing, to emphasize functional integrity, were relevant and applicable when structured methods were the most accepted way of problem decomposition [1][3]. With the maturing of object-oriented methodologies and solutions seeking to be object focused, classes with built-in self-test (BIST) features, similar to those in digital hardware design, could become a reality [2]. This is possible primarily because of the high degree of modularization of function and associated data into classes: BIST functions or methods could be developed to stress the other class methods.

Metrics collection itself assumes a new meaning when we are also able to qualitatively assess the test strategy rather than merely predict defects and effort [5]. The Fencing Technique is one such strategy, which can trap run-time and application errors in a systematic manner.

Test strategy: The test strategy adopted in this PSP for testing is to test the application code and the user interface code independently and to defer their integration until requirements stability is gained. Exercising the application code to check for completeness and correctness is described in section 2. User interface testing is described in section 3. The final integration, and the Fencing Technique used to rule out the possibility of run-time errors, are described in sections 4 and 5.

2          APPLICATION CODE TESTING

Testing of software is very much a process of systematically eliminating the reasons for failure. A broad classification of bugs or failure conditions is: functionality failures (bugs local to functions or modules), interface failures (bugs between two communicating modules due to incorrect data references, such as pass by value versus pass by reference) and global parameter failures (bugs due to incorrect data initializations and settings) [6]. These failure conditions do not depend on the method employed to build the software, although application code developed using structured methods is that much more prone to the third category of failures. A real-life product testing effort requires that testing be planned with design for change in mind.
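
As a minimal sketch of the interface-failure category (the function names and values are hypothetical, not from the paper), the following C fragment shows the classic mistake of passing by value where the caller expects pass by reference:

#include <stdio.h>

/* Interface bug: only the local copy of 'total' is modified. */
static void update_total_by_value(int total, int amount)
{
    total += amount;
}

/* Intended behaviour: the caller's variable is modified through a pointer. */
static void update_total_by_reference(int *total, int amount)
{
    *total += amount;
}

int main(void)
{
    int total = 100;

    update_total_by_value(total, 25);        /* no effect on the caller */
    printf("after by-value call:     %d\n", total);   /* still 100 */

    update_total_by_reference(&total, 25);   /* intended behaviour      */
    printf("after by-reference call: %d\n", total);   /* now 125   */
    return 0;
}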

Unit testing of the application code is iterative and is performed by tracing the branching of the logical paths that the code can potentially take. Test cases are identified to ensure 100% line coverage [NOTE: not path coverage]. Unreachable code, if any, and bugs exposed by exercising the different lines of code can be uncovered with these test cases. Black box testing of modules or units can then be adopted to determine functional correctness at the unit level and contain all functional defects [6]. Black box testing of a unit means that each unit or module is treated as the system, and adequate test cases are employed to exercise the unit's behavior. Ideally, such tests should be conducted in a bottom-up fashion. In a structured methods approach, for instance, module testing starts with the module specification that cannot be decomposed further. With reference to figure 1, testing begins with Function_1_AtLevel3() and the other functions at this level. When this testing is complete, functional and structural defects/bugs due to this module are practically zero. After all the modules at this final level of decomposition are tested in both a white box and a black box sense, the module immediately higher in the hierarchy is tested. With reference to figure 1, Function_1_AtLevel2() would be tested after all the functions at level 3 are individually tested. What is accomplished by this manner of bottom-up integration is that zero interface errors or defects exist after testing the integrating module in a white box and black box sense. This approach is continued all the way up until the system is completely integrated. With reference to figure 1, the system is complete after integrating all the modules from the main() function, which is at level 1. A detailed elaboration of this method can be found in [7].

The virtue of such an application integration process is that modifications to requirements can be handled with relative ease. With reference to figure 1, let us assume that a change to requirements resulted in the creation of New_Function_AtLevel3(). With the above testing strategy, this function would be tested after its dependencies, Function_1_AtLevel4() and Function_2_AtLevel4(), are tested in both a white box and black box sense.
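
Since figure 1 is not reproduced in this text, the C skeleton below is a hedged reading of the hierarchy as described above; the call relationships are inferred from the prose, not taken from the figure itself, and the empty bodies are placeholders:

/* Bottom-up test order: level 4, then level 3, then level 2, then main(). */

void Function_1_AtLevel4(void) { /* lowest-level module: tested first */ }
void Function_2_AtLevel4(void) { /* lowest-level module: tested first */ }

void Function_1_AtLevel3(void) { /* leaf module in the original design */ }

void New_Function_AtLevel3(void)
{
    /* added by a requirements change; tested only after its dependencies */
    Function_1_AtLevel4();
    Function_2_AtLevel4();
}

void Function_1_AtLevel2(void)
{
    /* integrating module: tested after all level-3 functions pass */
    Function_1_AtLevel3();
    New_Function_AtLevel3();
}

int main(void)
{
    Function_1_AtLevel2();   /* level 1: the system is complete here */
    return 0;
}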

It is useful to mention that functions or modules can be effectively tested using driver functions. Development of these driver functions to exercise module behavior is a very useful step in the process of integrating the application code with the user interface. Suffice it to say here that drivers are recommended, and were used in this PSP strategy, to stress the functionality of each of the modules. For a more detailed elaboration, refer to [6].
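
A minimal sketch of such a driver follows; the int signature, the stand-in body and the checks are assumptions for illustration only:

#include <assert.h>
#include <stdio.h>

/* Stand-in for the module under test; the real signature and behaviour
 * are assumptions made purely for this sketch. */
static int Function_1_AtLevel3(int input)
{
    return input < 0 ? 0 : input;
}

/* Driver: plays the role the UI event handler will eventually play and
 * stresses the module with nominal and boundary values before integration. */
int main(void)
{
    assert(Function_1_AtLevel3(0)  == 0);    /* boundary case  */
    assert(Function_1_AtLevel3(5)  == 5);    /* nominal case   */
    assert(Function_1_AtLevel3(-3) == 0);    /* error handling */
    printf("Function_1_AtLevel3(): all driver checks passed\n");
    return 0;
}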

3          USER INTERFACE TESTING

Figure 2 is a sample user interface structure chart, which will be used as an example to develop the idea further.

In a Windows environment the GUI begins with the development of the WinMain() function. Several initialization functions are called before Windows messages begin to be interpreted by the WndProc() function. The WM_COMMAND message handled in WndProc() is where all the action happens: the wParam parameter identifies the menu item that was activated. It is here that dialogs are popped up and user inputs are received. Once sufficient information to process the request is available, the application code is invoked through callback functions. User interface testing is restricted to doing a thorough job of ensuring the functional and structural correctness of this much of the final product. The next few paragraphs describe the testing of what was briefly presented as the user interface of the product.
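
A minimal sketch of this kind of message handling (the IDM_OPEN value and the routing shown are illustrative assumptions, not a reproduction of figure 2):

#include <windows.h>

#define IDM_OPEN  101                 /* hypothetical menu command id     */

void PopOpenFileDialog(HWND hwnd);    /* UI helper, stubbed out below     */

/* Window procedure sketch: WM_COMMAND carries the menu selection in the
 * low word of wParam; the application callback is invoked only after the
 * UI has gathered the inputs it needs. */
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_COMMAND:
        switch (LOWORD(wParam))       /* which menu item was activated    */
        {
        case IDM_OPEN:
            PopOpenFileDialog(hwnd);  /* collect input, then hand over to
                                         the application callback         */
            break;
        }
        return 0;

    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}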

WinMain() in figure 2 is the GUI equivalent of main() in figure 1. In a manner similar to the testing of the application code (section 2), testing of the UI begins in a bottom-up manner. So the message loop handling the processing of Windows messages in the function WndProc() is tested in both a white box and black box sense.

The use of stubs comes in very handy here. Stubs are what the application code will eventually replace. The Windows commands dispatched from the WinMain() function are checked for recognition and acknowledgement in the UI testing process. A stub could be a dummy dialog that simply reports that, say, PopOpenFileDialog() was called when the IDM_OPEN message was received. It could be just a modal dialog which can be dismissed as having fulfilled the requirements of the functionality behind the FileOpen() function, the behavior of which is delegated to application code development and tested during application code testing. Stubs are mere informational dialogs that acknowledge the occurrence of an event.
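
A minimal sketch of such a stub, following the identifiers above (the real FileOpen() behaviour remains with the application code and is tested there):

#include <windows.h>

/* Stub for the file-open feature: a purely informational, modal dialog
 * that acknowledges the IDM_OPEN event and nothing more. */
void PopOpenFileDialog(HWND hwnd)
{
    MessageBox(hwnd,
               TEXT("IDM_OPEN received - FileOpen() would be invoked here."),
               TEXT("UI stub"), MB_OK | MB_ICONINFORMATION);
}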

When the UI is tested in this manner, it becomes possible to eliminate all known causes of UI malfunction. Since capturing user input involves designing dialog boxes, which is itself quite an involved process, segregating UI development as an activity parallel to application code development results in significant cycle time reduction and defect elimination, and with it a good chance of building a quality product on time: every developer's dream in a real world of unstable and numerous changes to both the application specifications and the user interface design.

4          INTEGRATION PROCESS

After the application code and the UI have been independently tested and are known to be of proven quality, the integration process can begin. If the strategy adopted for testing the application code is through a driver, and that adopted for testing the UI is through a stub, the integration process is a mere replacement of the driver with the UI event handler and of the stub with the application callback function. This is an extremely useful step in further eliminating the defects that ad hoc integration would introduce.
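
Continuing the earlier sketches (CollectFileNameFromUser() is a hypothetical UI helper and FileOpen() the assumed application callback), integration amounts to editing the stub so that it calls the already-tested application code:

#include <windows.h>

BOOL CollectFileNameFromUser(HWND hwnd, char *path, int size); /* hypothetical
                                                  UI dialog helper           */
void FileOpen(const char *path);               /* tested application callback */

/* After integration: the informational stub body is replaced by a call to
 * the application callback, with the UI parameters initialized before the
 * hand-over. */
void PopOpenFileDialog(HWND hwnd)
{
    char path[MAX_PATH] = "";

    if (CollectFileNameFromUser(hwnd, path, sizeof(path)))
        FileOpen(path);
}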

At this stage of development, the only possible causes of application failure are run-time errors or defects. Practically, there is no reason for either the application code or the UI, by themselves, to have any defects. Here is where the author's "Fencing Technique" comes in handy.

5          FENCING TECHNIQUE

The good old #if 0, #endif pair is all it boils down to! For languages that do not support preprocessor directives, there are numerous other ways to achieve the same effect. What is important is to understand the technique of applying the pair, which is elaborated below.

In the integration process, the event handler of the UI replaces the driver of the application code, and the stub in the UI is replaced by the application code. The only modifications necessary are to ensure that the application callback functions are called with the right UI parameters, after taking proper care to ensure their initialization. Here again, beginning the integration with application code involving simple processing logic is a good starting point. The executable generated after the stub is replaced by the application callback is known to potentially harbor defects that occur only at run time.

When the executable is run and the event exercising the application callback is triggered, if all goes well then that is the end of integrating this feature of the application code with its corresponding UI. Other application callbacks can then be integrated.

If there are run-time errors, however, then the best approach is to block out the entire application callback with an #if 0, #endif pair. Regenerate the executable with all of the application code blocked out in this manner, by putting a fence around the application callback: "The Fencing Technique". When this executable is exercised, there will obviously be no run-time errors. This is because the run-time errors are always due to the application code after it has been integrated with the UI, and since all of the application code is blocked out by the fence, the executable runs without causing run-time errors. Now the fence (the #if 0, #endif pair) has to be re-laid so that we converge on the sticky part in a systematic manner.
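
A minimal sketch of the initial fence, assuming the integrated callback from the earlier example:

#include <windows.h>

/* Initial fence: the entire application callback is blocked out, so the
 * rebuilt executable cannot raise the run-time error; this confirms that
 * the fault lies inside the fenced code. */
void PopOpenFileDialog(HWND hwnd)
{
#if 0
    char path[MAX_PATH] = "";

    if (CollectFileNameFromUser(hwnd, path, sizeof(path)))
        FileOpen(path);
#endif
    (void)hwnd;   /* keep the compiler quiet while the fence is in place */
}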

In the Fencing Technique this is done block by block, beginning with the outermost logic block and converging rapidly on the logic block causing the problem, through a series of executable generations.

So we re-lay the fence by blocking out most of the logic blocks inside the next largest logic block, and continue. With this technique it is extremely easy to locate the cause of the run-time failure in the application code. After the logic block is isolated in this manner, the fix mostly reveals that the reason for failure is an improper reference, that is, a mistake in the manner in which variables are interpreted.
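
Continuing the sketch, the fence is re-laid around the next inner logic block until the offending block is isolated; the function names and the uninitialized pointer shown here are hypothetical, chosen only to illustrate the kind of variable mis-interpretation mentioned above:

#include <stdio.h>

void ReadHeader(FILE *fp, char *buffer);   /* hypothetical application routine */

/* The outer block already ran cleanly, so the fence now surrounds only the
 * suspect inner block; unfencing it exposes the real cause. */
void ProcessOpenRequest(const char *path)
{
    FILE *fp = fopen(path, "r");   /* outer block: proven clean             */

#if 0                              /* fence re-laid around the inner block  */
    char *buffer;                  /* never initialized ...                 */
    ReadHeader(fp, buffer);        /* ... so writing through it faults      */
#endif

    if (fp != NULL)
        fclose(fp);
}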

NOTE: In a VC++ environment, the equivalent starting point is a class that maps onto a base class; this mapping is the basis for the product's evolution.

For example:

BEGIN_MESSAGE_MAP(theClass, CWinApp)

begins the evolution of the theClass application. Included in this map would be the elaboration of the window messages and commands.
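
As a hedged illustration of that elaboration (CSampleApp and OnFileOpen are assumed names; ID_FILE_OPEN is the standard MFC file-open command id), a filled-in map might read:

BEGIN_MESSAGE_MAP(CSampleApp, CWinApp)
    ON_COMMAND(ID_FILE_OPEN, OnFileOpen)   // menu command routed to a handler
END_MESSAGE_MAP()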

Ideas presented earlier on can now be applied after the application class is initialized.  

6          CONCLUSION

Identifying and eliminating run-time errors in products is a nightmare if a systematic approach to testing the application and the UI is not undertaken. The ideas in this paper describe a very practical, personal software testing process that has worked very well for the author in numerous product development initiatives.

The Fencing Technique section introduces, with the help of an example, how to apply the ideas in this paper to large product development initiatives. From the tone set in the paper, it is clear that the technique is not restricted by the methodology employed. In fact, the methodology employed, or the manner in which the specifications are handled, is immaterial when it comes to identifying certain fatal application failures that are so very common.

What is described is a very systematic bottom-up testing and integration approach for the application code and the UI code. When the two are tested in parallel, significant cycle time reductions have been seen to result. Additionally, a controlled and systematic process of eliminating all known causes of failure in the application code and the UI code as separate activities helps to build confidence in the integration process.

During integration of the application code and the UI code, the Fencing Technique is the only effective saving grace for rapidly converging on the application logic block that is prone to run-time errors. A useful technique, resulting from years of developing and testing products, is presented in this paper.

 

REFERENCES  

[1] Boris Beizer, "Black Box Testing", John Wiley, 1995.

[2] Brian Marick, "The Craft of Software Testing", Prentice Hall, 1995.

[3] Boris Beizer, "Software Testing Techniques", Coriolis Group, 1990.

[4] Edward Kit et al., "Software Testing in the Real World", Addison-Wesley, 1995.

[5] "Metrics in Object-Oriented Design and Programming", Software Development, October 1993.

[6] R.S. Nandyal, "Function Analysis Tool: FAT", Software Engineering Symposium, Motorola, Phoenix, Arizona, 1993. [Best Tool Award]

[7] R.S. Nandyal, "Making the Use of Structured Methods More Effective", CSI-96, Tata McGraw Hill, Bangalore, 1997. ISBN 0-07-463322-8.

 

BIOGRAPHY

Raghav S. Nandyal is a General Manager heading Methods, Tools and Products Division for Intelligroup Asia Pvt. Ltd.

His URL is: https://members.tripod.com/~raghavn.
