A Personal Software Process for Testing
Abstract:
The process adopted to test software products is rarely understood and followed as rigorously as the process used to develop them. The evidence for this statement is the familiar fatal application failure message we are so used to seeing: "An error has occurred in your application. If you choose Ignore, you should save your work in a new file. If you choose Close, your application will terminate." Or the more cryptic "Fatal Error: Run-time Error 123". Such a failure, commonly called a GPF (General Protection Fault) arising from stack faults, is tolerated with hurt feelings and a deep sense of frustration!

The hallmark of this personal software process for testing is its ability to prevent such run-time and application errors from occurring. No known testing technique has so far been able to address the very common fatal run-time errors that appear while the product is in the testing process. This is because these errors depend largely upon the memory model and have nothing to do with either the functionality of the application or the user interface. Such errors surface only when the two are integrated without a good integration test strategy; hence the numerous encounters with "Run Time Error" messages seen even in the most popular software products.

This paper provides an understanding of how to manage and handle the testing of software products in practical, real-life scenarios such as ever-changing product requirements with unclear expectations of the user interface. How to manage the testing process effectively in such a scenario, by separating the testing of the application code from that of the user interface code, is described with the help of an example. After the application code and the user interface code are tested separately, the process of integrating the two is addressed. The paper highlights the use of language constructs in C to narrow down a bug effectively in what is termed the "Fencing Technique". This technique has been put to use by the author in numerous software product development and testing initiatives.

A product development initiative in the Windows environment, using the C language for application development, is used as the running example, since the ideas in this paper could be termed a Personal Software Process for Testing. The process is, however, applicable to product development initiatives in other environments; native language constructs and environment features would then have to be exploited to gain the full benefit of what is presented here.
Index Terms:
Black Box Testing, Error classifications, Fencing Technique, Personal Software Process (PSP), Run-time errors, White Box Testing

1 INTRODUCTION
Before delving into the PSP for testing, a brief introduction to the software testing process is in order. Software testing has evolved consistently with the changing paradigms of software development. White box testing, to emphasize structural integrity, and black box testing, to emphasize functional integrity, were relevant and applicable when structured methods were the most accepted way of problem decomposition [1][3]. With the maturing of object-oriented methodologies and of solutions seeking to be object focused, classes with built-in self-test (BIST) features, similar to those in digital hardware design, could become a reality [2]. This is possible primarily because of the high degree of modularization of function and associated data into classes: BIST methods can be developed to stress the other class methods. Metrics collection itself assumes a new meaning when we are also able to assess the test strategy qualitatively, rather than merely predict defects and effort [5]. The Fencing Technique is one such strategy, which can trap run-time and application errors in a systematic manner.

Test strategy: The test strategy adopted in this PSP for testing is to perform independent testing of the application code and the user interface code, and to defer the integration of the two until requirements stability is gained. The application code can be exercised to check completeness and correctness as described in section 2. User interface testing is described in section 3. The final integration, and the fencing technique to rule out the possibility of run-time errors, are described in sections 4 and 5.

2 APPLICATION CODE TESTING
Testing of software is very much a process of systematically eliminating the reasons for failure. A broad classification of bugs or failure conditions is: functionality failures (bugs local to functions or modules), interface failures (bugs between two communicating modules due to incorrect data reference, i.e., pass by value versus pass by reference) and global parameter failures (bugs due to incorrect data initializations and settings) [6]. These failure conditions are not dependent upon the method employed to build the software; application code developed using structured methods is simply that much more prone to the third category of failures. A real-life product testing effort requires that testing be planned with design for change in mind.

Unit
testing of the application code is iterative and is performed by tracing
the branching of the logical paths that the code can potentially take.
Test cases are identified to ensure 100% line coverage [NOTE: Not
path coverage]. Unreachable code, if any, and bugs due to
exercising the different lines of code can be uncovered with these test
cases. Black box testing of modules or units can then be adopted to determine functional correctness at the unit level and contain all functional defects [6]. Black box testing of a unit means that each unit or module is itself considered to be the system, and that adequate test cases are employed to highlight the unit's behavior. Ideally, such tests should be conducted in a bottom-up fashion. In a structured-methods approach, for instance, module testing starts with the module specification that cannot be decomposed further. With reference to figure 1, testing begins with Function_1_AtLevel3() and the other functions at this level. When testing is complete, functional and structural defects/bugs due to this module are practically zero.

After all the modules
which are at this final level of decomposition are tested in both a white box and a black box sense, the module immediately higher in the hierarchy is tested. With reference to figure 1, Function_1_AtLevel2() would be tested after all the functions at level 3 are individually tested. What this manner of bottom-up integration accomplishes is that zero interface errors or defects exist after the integrating module has been tested in a white box and black box sense. This approach is continued all the way up until the system is completely integrated: with reference to figure 1, the system is complete after integrating all the modules from the main() function at level 1. A detailed elaboration of this method can be found in [7].

The
virtue of such an application integration process is that modifications to requirements can be handled with relative ease. With reference to figure 1, let us assume that a change in requirements resulted in the creation of New_Function_AtLevel3(). With the above testing strategy, this would be tested after all its dependencies, such as Function_1_AtLevel4() and Function_2_AtLevel4(), are tested in both a white box and black box sense.

It
is useful to mention that functions or modules can be tested effectively using driver functions. Developing these driver functions to exercise the behavior is a very useful step in the eventual integration of the application code with the user interface. Suffice it to say here that drivers are recommended, and were used in this PSP strategy, to stress the functionality of each of the modules. For a more detailed elaboration, refer to [6].

3 USER INTERFACE TESTING
Figure 2 is a sample user interface structure chart, which will be used as an example to develop the idea further. In a Windows environment, the GUI begins with the development of the WinMain() function. Several initialization functions are called before Windows commands begin to be interpreted by the WndProc() function.
The WM_COMMAND message of WndProc() is where all the action happens. A parameter called wParam carries the menu activation. It is here that dialogs are popped up and user inputs are received. Once sufficient information to process the request is available, the application code is invoked through callback functions. User interface testing is restricted to doing a thorough job of ensuring the functional and structural correctness of this much of the final product. The next few paragraphs describe the testing of what was briefly presented as the user interface of the product.

WinMain()
in figure 2 is the GUI equivalent of main() in figure 1. In a manner similar to the testing of the application code (section 2), testing of the UI begins bottom-up: the message loop handling the processing of Windows messages in the WndProc() function is tested in both a white box and black box sense.

Use
of stubs comes in very handy. Stubs are what the application code will eventually replace. The Windows commands dispatched from the WinMain() function are checked for recognition and acknowledgement in the UI testing process. A stub could be a dummy dialog: when a message called IDM_OPEN is received, a PopOpenFileDialog(), say, is called. It could be just a modal dialog that is dismissed as having fulfilled the requirements of the functionality behind the FileOpen() function, whose behavior is delegated to the application code and tested during application code testing. Stubs are mere informational dialogs, which acknowledge the occurrence of an event.

When
the UI is tested in this manner, it becomes possible to eliminate all known causes of UI malfunction. Since capturing user input involves designing dialog boxes, which is itself quite an involved process, segregating UI development as an activity parallel to application code development results in significant cycle-time reduction and defect elimination, and thereby a good chance of building a quality product on time! That is every developer's dream in a real world involving numerous changes to both unstable application specs and the user interface design.