TestWorks/Windows C/C++
Take a Tour of TestWorks: TestWorking Scribble
TestWorks, a fully integrated suite of software testing tools, organizes and automates your software testing process. TestWorks offers an end-to-end solution that covers every phase of the testing life-cycle: test development, execution, management, organization, reporting, and coverage analysis (both branch and call-pair) ... and more!
TestWorks can be applied to the full spectrum of functions needed to produce quality software faster, better, and less expensively.
This page shows you a sampling of screens taken from a TestWorks Application Note, "TestWorking Scribble on Windows."
The version used as an example here, "chapter 8", consists of 1433 lines of C++ organized into 102 modules/functions in 9 separate source files. Compiled without debugging features, its executable occupies about 40 KB; when fully instrumented by TCAT, it occupies about 44 KB.
1. Main Menu for CAPBAK/MSW.
Each test you record with CAPBAK/MSW starts by using this simple control menu.
It functions just like a VCR.
2. HotKey Popup Menu for CAPBAK/MSW.
You control how you make your recording using either function keys or the
HotKey popup, shown here.
Recordings that include extra synchronization, beyond the Automatic Output Synchronization
built into the product, can enhance the reliability of playback in unusual
circumstances, or can synchronize playback on events such as the arrival
of particular ASCII text in a particular area of a window.
3. Example Recording of Scribbling on "Scribble" With CAPBAK/MSW.
Here is a TrueTime mode recording of a steady hand (?) drawing into Scribble's free-form
area.
Playback of this recording can run at exactly the speed at which it was recorded,
re-enacting exactly the session you recorded. Or, you can play back your recording at
continuously variable increased or decreased rates.
4. Extraction of Text from Window Area Using Built-In OCR Feature of CAPBAK/MSW.
Here is an extraction from a graphical screen, where the OCR engine has identified the
phrase "To print the document" by taking the image from the screen as a pixel region
and converting it back to ASCII.
You can set up tests to PASS or FAIL based on the ASCII content of the images
you have recorded.
5. Image Map Comparison of Two Saved Images with Built-In Image Differencer. You can use the built-in image comparison feature, which can be programmed with masks to help you disregard unimportant areas of the images, to decide whether your test PASSes or FAILs.
Here in the sample image are two image snaps of the clock at two different times.
They are shown to be different -- the differential image is non-blank -- in the output image on the right-hand side.
When two images differ by one or more pixels, they are deemed non-identical and the corresponding test FAILs.
6. SMARTS Main GUI.
You use SMARTS to catalog all of your tests into a test tree, specified by a user-controlled
ATS file
(not shown here) that organizes tests into groups, groups of groups, and so on.
You start up SMARTS with this simple GUI.
7. Test Tree Creation Wizard.
SMARTS comes with a powerful Automated Test Script (ATS) Creation Wizard.
You organize the structure of your test suite in a simple tabulated "tree" structure like
that shown here.
This structure can be changed as your test suite evolves without having to re-create the
entire test suite.
8. Result of Test Tree Creation Wizard: A Complete Test Tree.
The ATS Creation Wizard converts your simple tree into the complete Test Tree.
In this example all of the individual test sequences and associated comparison commands
are automatically inserted into the Test Tree.
9. Test Tree with SMARTS and the Run-Tests Window.
Here is part of a test tree -- the test tree can be as large as you like.
Also shown is the source of the ATS file, which is expressed
as a set of simple "C" functions.
You don't have to be a "C" programmer to use SMARTS; there's a utility
provided that helps you generate the initial test tree script from a simple
outline format.
10. SMARTS Report Window.
Once a set of tests has been set up and run -- in the background or overnight --
you can look at any individual test result, the test results for a group of tests,
a group of groups of tests, and so on.
Here you see a very simple report showing that one particular test PASSed.
11. TCAT C/C++ Integrated with MS-Visual C++ v6.0 Main Window.
This image shows the complete
VC 6.n window with six TCAT C/C++ action icons (see below) installed.
12. Visual Studio ToolBar Icons.
Shown above are the six TCAT C/C++ toolbar icons, plus SMARTS/MSW and CAPBAK/MSW
toolbar icons, all added to the Visual Studio VC++ environment
during the TCAT C/C++ installation process.
These icons let programmer/testers perform these functions:
12a. Configure TCAT: Select among alternative modes of instrumentation.
12b. Build Instrumented Application: Instrument the application before compiling.
12c. Run Instrumented Application: Run the instrumented and compiled application.
12d. Analyze Coverage: Analyze the coverage achieved by one or more tests (see below). (In TestWorks, C1 denotes branch coverage and S1 denotes call-pair coverage; TCAT measures both.)
12e. Show Digraph: Bring up the Digraph display for the selected object (see below).
12f. Show CallTree: Bring up the CallTree display for the selected object (see below).
12g. Run SMARTS Application: Organizes and executes a collection of tests, which may use recorded scripts executed by CAPBAK under SMARTS control.
12h. Run CAPBAK Application: Captures and plays back keystrokes, mouse movements, object manipulations, and user-selected images and ASCII passages (screen-scraping).
13. Example of Scribble (Chapter 8) During Execution After Instrumentation
Here is a sample of execution of the instrumented scribble2 program.
Note that the recording combines TrueTime and ObjectMode operation.
CAPBAK/MSW always tries to make a recording, reverting to TrueTime mode when
it cannot identify an object.
This maximizes a tester's productivity by ensuring that recorded tests will play back
without editing or other manual intervention.
14. Open Workspace Dialog Box
Here is the selection of the scribble2 project.
The particular project is being selected for instrumentation.
15. Project Settings Dialog Box
This display shows the settings for the C/C++ project currently selected for instrumentation.
TCAT C/C++ is a powerful instrumenter and coverage analyzer that
gives you three different views of your test data.
16. Source Code Displayed From Coverage Report
This is the basic coverage report showing
the connection between the cover values and the source code listing.
17. TCAT's CallTree Display.
The calltree display shows the connections between the various functions in your build.
TCAT C/C++ generates this information when it processes your code prior to execution
of the instrumented test object.
You can navigate your code by clicking on edges. Here the user has clicked
on the connection from the CScribView function to the OnPrepareDC function,
the latter of which is highlighted for you in the original source file from your project.
18. TCAT's Digraph Display for a Function.
TCAT also shows you the structural details of your individual functions.
Here the digraph is for the CScribView function.
You can tell which edges are associated with the parts of your code by clicking
on the display; the corresponding part of the program shows up in color.
19. Viewing Associated Source Code from Digraph
This is a digraph -- a depiction
of control-flow structure -- with an edge selected to link to the source code, which is shown highlighted.
20. TCAT Coverage Report for Project. Here you see the main window for TCAT C/C++ showing coverage values for the current project. Coverage is reported at the branch level (C1 coverage) and at the call-pair level (S1 coverage). The data in the picture are shown for the current test and for all prior tests (the data are kept in an Archive file that you can update when you want to accumulate coverage from test to test).
The display starts with filenames.
Click on a filename and the display expands to the functions in that filename.
Click on a function name and the display expands to show the segment and call-pair
numbers for that function.
Click on a segment number or a call-pair number and you're led directly
to the line in your code that gave rise to that segment or call-pair.
You can pinpoint exactly where you missed a feature, where tests need to be
beefed up, and where, during the next instrumentation cycle, you may choose
to de-instrument some files because you already have enough coverage.