sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +===================================================+
         +=======    Quality Techniques Newsletter    =======+
         +=======            February 2000            =======+
         +===================================================+

QUALITY TECHNIQUES NEWSLETTER (QTN) (Previously Testing Techniques
Newsletter) is E-mailed monthly to subscribers worldwide to support the
Software Research, Inc. (SR), TestWorks, QualityLabs, and eValid WebTest
Services user community and to provide information of general use to the
worldwide software and internet quality and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation by recipients of QTN is encouraged, provided that the entire
document/file is kept intact and this complete copyright notice appears
with it in all copies.  (c) Copyright 2003 by Software Research, Inc.

========================================================================

   o  13th Annual International Software & Internet Quality Week 2000

   o  The EC 5th Framework Programme on Research and Technology

   o  Software Testing - Myth or Reality? (Part 2 of 3)

   o  More About Basis Paths

   o  eValid Services Reveal Wide Daily Dial-up Variances

   o  Ten Principles from 'Competitive Engineering' by Tom Gilb

   o  Federal Computer Week News Editor Dan Verton Interviewed on CNN
      Regarding Cyber Security

   o  First Asia-Pacific Conference on Quality Software (APAQS 2000)

   o  SERG Report 383: Deriving Real-Time Monitors from System
      Requirements Documentation, by Dr. Dennis Peters

   o  Report about SPI Available

   o  Latest CAPBAK/Web Release Includes Many New Features

   o  Bob Binder Response on Test Patterns

   o  The Prices were in British Pounds, Of Course (A Correction)

   o  QTN SUBMITTAL, SUBSCRIPTION INFORMATION

========================================================================

    13th Annual International Software & Internet Quality Week 2000

                          30 May - 2 June 2000
                   San Francisco Bay Area, California

             CONFERENCE THEME: New Century, New Beginnings

QW2000 is the 13th in the continuing series of International Conferences
that focus on advances in software test technology, quality control,
risk management, software safety, and test automation.

This year marks the introduction of QW2000 as a Double Conference.
Parallel to Software Quality Week 2000, Internet Quality Week 2000 will
draw specific attention to Quality Issues of the Internet, such as e-
commerce, web site testing and web site quality.

PRE-CONFERENCE-TUTORIALS (Tuesday, 30 May 2000):

  > Rob Baarda (IQUIP Informatica BV) "Stepwise Improvement of the
    Testing Process using TPI(tm)"

  > G. Bazzana & E. Fagnoni (ONION s.r.l.) "Testing Web-based
    Applications: Techniques for Conformance Testing"

  > Ross Collard (Collard & Company) "Test Planning Workshop"

  > Adrian Cowderoy (MMHQ) "Cool Q - Quality Improvement for Multi-
    disciplinary Tasks in Website Development"

  > Michael Deck (Cleanroom Software Engineering, Inc.) "Requirements
    Analysis Using Formal Methods"

  > Bill Deibler (SSQC) "Making the CMM Work: Streamlining the CMM for
    Small Projects and Organizations"

  > Tom Gilb (Result Planning Limited ) "Requirements Engineering for
    Software Developers and Testers"

  > John Musa (Consultant) "Developing More Reliable Software Faster and
    Cheaper"

  > Johanna Rothman (Rothman Consulting Group) "Life as a New Test
    Manager"

  > Robert Sabourin (Purkinje Inc.) "The Effective SQA Manager - Getting
    Things Done"

  > Norman Schneidewind (Naval Postgraduate School) "A Roadmap to
    Distributed Client-Server Software Reliability Engineering"

POST-CONFERENCE-WORKSHOPS (Friday, 2 June 2000):

  > Douglas Hoffmann (Software Quality Methods LLC) "Oracle Strategies
    for Automated Testing"

  > Cem Kaner "Bug Advocacy"

  > Ed Kit (Software Development Technologies) "Software Testing in the
    Real World"

  > Edward Miller (Software Research, Inc.) "WebSite Quality Workshop"

KEYNOTE TALKS from industry experts include presentations by:

  > Stu Feldman (IBM) "Internet and E-Commerce: Issues and Answers"

  > Leon Osterweil (University of Massachusetts) "Determining the
    Quality of Electronic Commerce Processes"

PARALLEL TRACKS:

There are five parallel technical tracks in QW2000 with over 60
presentations:

  > Technology: New software quality technology offerings, with emphasis
    on Java and WebSite issues.

  > Applications: How-to presentations that help attendees learn useful
    take-home skills.

  > Internet: Special focus on the critical quality and performance
    issues that are beginning to dominate the software quality field.

  > Management: Software process and management issues, with special
    emphasis on WebSite production, performance, and quality.

  > QuickStart: Special get-started seminars, taught by world experts,
    to help you get the most out of QW2000.

SPECIAL EVENTS:

  > Nick Borelli (Microsoft) "Ask The Experts" (Panel Session), a
    session supported by an interactive website to collect the most-
    asked questions about software quality.

  > Doug Whitney and Pete Nordquist (Intel Corporation) "Protecting
    Intellectual Property in an Open Source World (Panel)"

  > Special reserved section for QW2000 attendees to see the SF Giants
    vs. the Philadelphia Phillies on Wednesday evening, 31 May 2000 in
    San Francisco's new downtown Pac Bell Park.

  > Birds-Of-A-Feather Sessions [organized for QW2000 by Advisory Board
    Member Mark Wiley (nCUBE)].

Complete information is available from  or at the QW2000 website:

 <http://www.soft.com/QualWeek/QW2000>

========================================================================

       The EC 5th Framework Programme on Research and Technology

Within the EC 5th Framework Programme on research and technology
development, in response to the first 1999 call of the Competitive and
Sustainable Growth programme, the LIPN (Laboratoire d'Informatique de
Paris-Nord) of the University of Paris 13 is participating in the IDD
(Integrated Design process for on-board Diagnosis) project.

Industrial partners are the car manufacturers CRF (Fiat Research Center,
Orbassano, Italy), which is the prime contractor, DaimlerChrysler
(Stuttgart, Germany), PSA (Peugeot Citroen Automobiles, Neuilly sur
Seine, France) and REGIENOV (REGIE Renault Recherche et Innovation,
Guyancourt, France); the supplier Magneti Marelli (Milano, Italy); and
the software company OCC'M (Oberhaching/Deisenhofen, Germany). Academic
partners are the University of Torino (Italy), the Technical University
of Munchen (Germany) and the University of Paris 13 (France).

The objectives of IDD are as follows: to contribute to re-organising the
design process so that aspects of diagnosis are included in its early
steps; to develop a methodology for integrating the analysis of
diagnosability and the avoidance of fault effects into the design chain,
together with a set of tools that support the designer in this analysis;
to create interfaces between currently used tools, such as CAD and
numerical modelling and simulation, and advanced model-based systems;
and ultimately to improve the performance of cars with respect to
reliability, safety and environmental impact.

The IDD project begins on 1 February 2000 and has a duration of 36 months.

The LIPN participates for 30 person-months (pm), i.e. it has a position
for one post-doc/engineer for 30 months. The position is planned to
begin on 1 July 2000.  29 pm are shared among:

 - WP1: requirements on the on-board diagnosis design process and
   specification (6 pm)
 - WP2: integration of models for model based concurrent design (12 pm)
 - WP3: specification and development of the tool box for model based
   co-design (11 pm)

The LIPN is looking for a highly qualified researcher with great
autonomy and a sound background in Artificial Intelligence techniques
for model-based modelling and reasoning for diagnosis; in CAD systems
and in numerical modelling and simulation systems such as
MATLAB/SIMULINK and Statemate; in object-oriented (UML) formalisms for
software specification and analysis; and in programming.

Inquiries to:

Philippe DAGUE
LIPN - UPRESA 7030               Tel.  33 - 1 49 40 36 17
Universite' Paris 13             Fax.  33 - 1 48 26 07 12
99 Av. J-B. Clement              Email.  dague@lipn.univ-paris13.fr
93430 Villetaneuse France        http://www-lipn.univ-paris13.fr/

========================================================================

           Software Testing - Myth or Reality? (Part 2 of 3)

                         By Romilla Karunakaran
                         InterWorld Corporation

                    Email: bala450@worldnet.att.net

The seriousness and importance of software testing should not be doubted
as the failure to ensure the required quality in the product can lead to
dire consequences. In 1990, AT&T reported a failure in its long distance
service when switching errors in its call-handling computers caused its
long distance network to be inoperable for nine hours, resulting in a
severe telephone outage in the country. The disaster was eventually
traced to a single faulty line of code. Another case involved the Atomic
Energy of Canada Limited (AECL) whose faulty software in a Therac-25
radiation-treatment machine led to several cancer patients receiving
lethal overdoses of radiation. An article outlining the 10 most
notorious bugs in history can be referenced at
http://coverage.cnet.com/Content/Features/Dlife/Bugs/ss05.html.

Defining the Test Objectives

One of the first but most difficult tasks of the software tester is to
define the objectives behind the software testing effort. This involves
developing a carefully structured master test plan that addresses the
challenges involved in the testing effort and the paths taken to deliver
the software product successfully, at the quality level it is expected
to have. It can be a tiresome effort because, most often, the challenge
arises simply from determining what constitutes the testing objectives,
at what stage they should be expected, and how they should be
prioritized. Once testing objectives are determined, however, the
testing team has a theme for how and what should be involved and
executed during the various stages of the testing process. The master
test plan also serves as a predefined toolkit for addressing future
testing issues, and contains both the high-level and low-level
objectives for each specific type of testing implemented at each stage
of the testing process.

With test objectives in place, the testing manager accomplishes the
following:

* Instituting a Structure in the Testing Effort.

This means that the testing team establishes the required standards and
guidelines for meeting its testing objectives and defines the means of
ensuring the completeness of its testing coverage. With testing
objectives in place, the testing team can determine the required steps
for defect reporting and management, and the communication needed should
corrective action be required to address serious reported bugs.

* Determining the Testing Process.

By having testing objectives in place, the testing manager also ensures
that the testing goals are accomplished at each stage of the testing
process. By charting out a defined path in the test plan, each tester in
the team will be able to determine the progress of his/her testing
effort and how much he/she still has to do to accomplish the given task.
The testing manager has the duty to ensure that all testers in the team
share the testing objectives at the operational level. Instituting a
formal test plan helps remind the testers of the testing objectives,
which could otherwise easily be lost after a meeting or a coffee chat.

* Ensuring that an Effective Testing Strategy is Adopted.

Given the test objectives, the testing manager will be able to define
the testing strategy that ensures testing coverage is maximized. Testers
are often presented with the challenging task of determining which
combination of inputs would result in a software failure. Given time,
personnel and budget constraints, it is virtually impossible to conduct
full-blown testing that exercises all the various input combinations -
that would leave the testing effort forever incomplete! The testing
manager can therefore only hope to cover a "reasonable" subset of these
testing scenarios and to ensure that the test suite developed is
effective in ferreting out the defects that would lead to software
failure. The selection of a suitable test suite certainly depends on the
experience and ability of the tester as well.
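
As a rough, purely illustrative sketch of this point (the field and
value counts below are assumptions, not figures from the article),
counting the combinations for even a handful of input fields shows how
quickly exhaustive testing becomes impossible:

    /* Hypothetical illustration: with N independent input fields, each
     * taking V representative values, exhaustive testing needs V^N test
     * cases.  The counts below are assumed purely for the sketch. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned long long values_per_field = 10;  /* assumed */
        unsigned long long combinations = 1;
        unsigned fields;

        for (fields = 1; fields <= 12; fields++) {
            combinations *= values_per_field;
            printf("%2u fields -> %llu combinations\n",
                   fields, combinations);
        }
        /* At one test per second, 10^12 combinations would take roughly
         * 31,700 years, which is why only a "reasonable" subset of
         * scenarios can ever be executed. */
        return 0;
    }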

* Providing Direction to the Tester.

Setting up test objectives in a large testing environment is certainly
vital in ensuring that each tester shares the same insights as his/her
testing mate. This also lessens the chances of each tester acting
independently of the other. Not only do the test objectives offer
repeatability in the testing process, they also ensure that each tester
shares a common understanding of such issues as defect isolation and
defect reporting. Giving the tester a set of directives
for his/her task also helps to iron out potential personality conflicts
that could arise among teammates. Test objectives should outline the
expectations of each tester contributing towards the testing effort and
determine a path that each tester should follow. Most often, conflicting
issues amongst teammates can reduce productivity and create resentment
within a workplace when they could have been resolved with a proper task
plan.

The testing objectives further provide traceability to the checklists
within a master test plan. For instance, the testing manager should be
able to trace each objective created to the respective checklist(s),
which will in turn ensure the completeness and consistency of, for
example, the integration testing effort. While structured, the test plan
should be flexible enough to accommodate changes as needed.
Occasionally, it becomes necessary to clarify existing objectives and to
formulate new ones based on the requirement or specification changes to
the software product. This process can be iterative and requires the
testing team to be versatile enough to maneuver the testing effort
towards realizing the objectives. At all times, each tester should be
comfortable and satisfied that the test plan reflects the kind of
testing effort he/she is responsible for.

A Tester's Creed

A software tester's task is to remove as much uncertainty as he/she can
from the software product to be delivered to the user community. The
means of doing this is, of course, left to the testing manager, who must
decide what constitutes the best test suite to address the varying
combinations of inputs and data values that could be used within the
context of the user community. This is a daunting challenge, as the
testing team has to be creative enough to simulate the business
environment of its client and develop the various subsets of test cases
which could best ensure the performance and reliability of the software
product. It is therefore imperative that a master test plan be set
up to address the checklists and objectives of the testing process.
Test plans are designed to formally reduce the risk of shipping a
software product with too many bugs.

Reporting Bugs

The typical feeling is that a software tester's job is to report as many
bugs as possible. However, what really constitutes a bug? Often, a
tester is required to report as many bugs as possible because that is
the best means of ensuring that the application works the way it is
designed to and in a manner that does not lead to a software failure.
Granted, the risk of shipping a completed but untested software product
is very high, but it is important that a tester understands that the
right bug report will lead to the right fix, and what naturally follows
is an application that the users want.

                           TO BE CONTINUED...

========================================================================

                         More About Basis Paths

    From: "The Warbrittons"
    To:  
    Subject: Module testing
    Date: Feb 2000 16:11:11 -0700

    Our software group writes in C, for embedded systems.  I'm working on
    writing a Module Testing spec. (module defined as a function in C).
    I've looked at Basis path testing, and while I've found calculating
    the cyclomatic complexity fairly simple, I can see that calculating
    all the input and output values by hand would be slow, cumbersome, and
    error prone.  Do you have a product that will, at a minimum, calculate all
    input/output combinations (basis paths) to use for testing?  We use
    Windows '95 for development, and a variety of emulators.

This is a good, fair question.  With your OK I'll put it in the next
QTN...

Our UNIX coverage products calculate all of the Basis Paths for testing
-- in many not-too-complicated programs there are hundreds or thousands
of them -- and some day we will add this unique feature to our Windows
coverage products.

But I know of no such product, and I can cite hundreds of references
explaining why inferring the needed input values from the sequence of
logicals along a particular path can never work in general.  In fact,
that problem is what they call "NP Hard" in the mathematical sense.

Most of the time testers use the test coverage report to see in the
source what was NOT tested, and then do the mental calculation on what
to change in the input.
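
As a purely illustrative sketch (not output from any tool), a tiny C
function makes the terms concrete: its cyclomatic complexity is the
number of decisions plus one, which is also the number of basis paths,
and the inputs that drive each path are derived by hand:

    /* Illustrative only: two decisions, so cyclomatic complexity
     * V(G) = 2 + 1 = 3, and therefore three basis paths. */
    int classify(int x, int y)
    {
        int result = 0;

        if (x > 0)          /* decision 1 */
            result += 1;
        if (y > 0)          /* decision 2 */
            result += 2;
        return result;
    }

    /* One basis set of paths, with hand-derived inputs:
     *   Path 1: both decisions false  -> x = 0, y = 0  (returns 0)
     *   Path 2: decision 1 true only  -> x = 1, y = 0  (returns 1)
     *   Path 3: decision 2 true only  -> x = 0, y = 1  (returns 2)
     * A coverage report shows which paths a test run missed; picking
     * inputs for a missed path is the manual step described above. */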

Edward Miller


========================================================================

          eValid Services Reveal Wide Daily Dial-up Variances

Results from several months of collecting Website performance data under
our eValid Website Monitoring Activity (see
<http://www.soft.com/Products/Web/eValid/index.html>) suggest there is
as much as a 70% fluctuation in hourly response times, measured over a
24-hour period, in delivered WWW performance.  This is quite a bit more
than what some web-based ISP-to-ISP measurement activities show -- but
seems to correlate with everyone's intuition about the "World Wide
Wait".
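
The article does not spell out how the fluctuation figure is computed;
one reasonable reading, assumed in the sketch below (it is not eValid's
published formula), is the spread between the slowest and fastest
hourly averages relative to the fastest:

    /* Sketch under an assumed definition:
     * daily fluctuation = (slowest - fastest) / fastest * 100%. */
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical hourly average response times (seconds) for one URL. */
        double hourly[24] = {
            4.1, 3.9, 3.8, 3.7, 3.8, 4.0, 4.5, 5.2, 5.9, 6.1, 6.3, 6.2,
            6.0, 5.8, 5.9, 6.1, 6.4, 6.2, 5.7, 5.1, 4.8, 4.5, 4.3, 4.2
        };
        double min = hourly[0], max = hourly[0];
        int i;

        for (i = 1; i < 24; i++) {
            if (hourly[i] < min) min = hourly[i];
            if (hourly[i] > max) max = hourly[i];
        }
        printf("daily fluctuation: %.0f%%\n", (max - min) / min * 100.0);
        return 0;
    }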

The eValid data is being collected with playbacks done using versions of
CAPBAK/Web connected to the WWW with either standard 56 Kbps dialup or
DSL lines.  Some of the scripts are quite complex, and some are very
simple.

In all cases, the eValid timing measurements are made after first
having CAPBAK/Web automatically empty the internal browser cache. This
step assures that the timings really do reflect download of the full
content of each URL in each test.

For a sample of the daily reports that are available to eValid customers
go to:

<http://www.soft.com/Product/Web/eValid/database/example.services.html>

In particular, take a look at both the P4 and the EC20 measures to get
a feel for how wide the variation in response times is as a function of
time of day.

Many website managers use this kind of performance timing information,
combined with detailed timings of page download characteristics from a
56 Kbps line (also available from CAPBAK/Web based measurements), to
make adjustments to their website pages to minimize customers' click-
aways that often result from too-slow appearance of useful information
on their screen.

If you would like to see your own website measured with the eValid
QuickLook service please send email to .

========================================================================

       Ten Principles from 'Competitive Engineering' by Tom Gilb

                       (Gilb@Result-planning.com)

Here are some new Principles, each from a chapter of my forthcoming
book, "Competitive Engineering".

Principles are fundamental teachings which you can use as guides to
sensible action.

A principle can be in conflict with (and thus must give way to) some
higher-priority consideration (such as customer contract requirements).
So Principles must be used with common sense, and with an overview of
total real-world considerations.

Here are some fundamental principles:

0. CONTROLLING RISK. There is lots of uncertainty, and risk-of-deviation
   from plans, in any project. You cannot eliminate risk. But, you can
   document it, plan and design for it, accept it, measure it, and
   reduce it to acceptable levels.  You may want to avoid risk, but it
   doesn't want to avoid you.

1. GOALS BEAT ALL.  Meeting requirements is more fundamental than any
   other process or principle.  The top strategy is getting the
   results.

2. REASONABLE BALANCE. Requirements must be balanced. You cannot meet an
   arbitrarily large number of arbitrarily good goals.  Reach for
   dreams; but don't let one of them destroy all the others.

3. YOU CAN'T HAVE IT ALL. Some goals will be judged by you, and your
   stakeholders, to have higher priority than others, at particular
   times, places and conditions.  Knowing the best horses saves
   resources.

4. UNKNOWABLE COMPLEXITY. You cannot have correct knowledge of all
   interesting requirements levels for a large and complex system in
   advance.  The "reasonable balance" must be continuously re-discovered
   through an evolutionary process of partial delivery to the real
   world.  You must feed a lion to find out how hungry it is.

5. ETERNAL PROJECTS. The process of delivery of results has no end, if
   you are in competition for survival.  Survival is a lifetime project.

6. TO ERR IS HUMAN, TO CLEAN UP IS A PRE-REQUISITE. The human tendency
   to err when planning or engineering is so strong that a rigorous
   quality control process is required to clean up a dozen or more
   defects per page before serious initial use of the specifications.
   Clean up your own mess; nobody else wants it.

7. QUANTIFICATION MANDATORY FOR CONTROL. The multiple concurrent
   quality-and-cost demands of most systems mean that a quantified and
   testable set of requirements is necessary to get control over
   quality and costs.  If you can't quantify it, you can't control it.

8. SPECIFICATION ENTROPY. Any requirement or design specification, once
   made, will gradually become less valid as time changes the world for
   which it was intended.  Even gourmet food decays.

9. GOODIES CONTROL BEATS BEAN COUNTING.  The main point of any project
   or change effort is to improve the benefits of a system.  The
   benefits must be at least as well controlled as the resources needed
   to get them; otherwise the benefits will lose out at the hands of
   always-limited resources.  Focus on getting the Goodies.  Their
   costs will be forgiven.

These Principles are intended to be "deep wisdom," and "useful practical
concepts," about how to do product and systems development, planning,
engineering, and management. You should be able to observe their
validity.

You cannot expect to whiz through them in a few minutes, and thereby
master their use and understanding. You will have to be patient. The
Principles will be waiting for you, years from now. They are ready, when
you are ready for them.

Can you put an argument in writing that any of these principles are
generally untrue? Can you formulate better principles (do share them
with us!)?  Are any of the Principles always true (Laws)?

"Competitive Engineering" by Tom Gilb is to be published by
Addison-Wesley in 2000. Full manuscript copies are available free of
charge at <http://www.result-planning.com> or
<http://www.pimsl.com/TomGilb>.

========================================================================

             Federal Computer Week News Editor Dan Verton
              Interviewed on CNN Regarding Cyber Security

FALLS CHURCH, VA  (February 4, 2000) -- Federal Computer Week's Online
News Editor, Dan Verton, was interviewed last night on CNN by Ann Kellan
in regard to Internet security and Former CIA Director John Deutch's
alleged use of a home computer to store classified materials.  The
controversy has sparked a security scare in the U.S. intelligence
community and brought a potential problem to the attention of Internet
users.  Verton was tapped by CNN for his expertise in Internet security
policies and procedures in the defense and intelligence communities.

Excerpt from Interview:

 "There are known foreign intelligence agents operating on the Internet
today ... and they are actively seeking U.S. intelligence on the
Internet," said Daniel Verton of Federal Computer Week.

"It's hard to know exactly what he had on his home computer," Verton
said.  "But we do know that it was thousands of pages in length, we do
know it was top secret and probably ranged the entire breadth of
classifications, from unclassified to top secret code word information."

You can log on to the CNN Web site for the complete interview:

<http://www.cnn.com/2000/TECH/computing/02/04/pc.security/index.html>


========================================================================

     First Asia-Pacific Conference on Quality Software (APAQS 2000)

                     http://www.csis.hku.hk/~apaqs

                     HONG KONG, OCTOBER 30-31, 2000

ORGANIZED BY

-  The Software Engineering Group, The University of Hong Kong
-  Software Technology Centre, Vocational Training Council, Hong Kong

BACKGROUND:
The quality of software has an important bearing on the financial and
safety aspects of our daily lives.  Unfortunately, software systems
often fail to deliver according to promises.  It is well known that
there are still unresolved errors in many of the software systems that
we are using every day.  The Asia-Pacific region is far from being
immune to these problems.  The prime objective of the conference is to
provide a forum to bring together researchers and practitioners from
this region to address this issue seriously.

CALL FOR PAPERS

We are soliciting full-length research papers and experience reports on
various aspects of software testing or quality assurance.  Specific
topics include, but are not limited to, the following areas:

-  Automated software testing
-  Configuration management and version control
-  Conformance testing
-  Debugging
-  Economics of software testing
-  Formal methods
-  Metrics and measurement
-  Performance testing
-  Process assessment and certification
-  Quality management
-  Quality measurement and benchmarking
-  Reliability
-  Review, inspection, and walkthroughs
-  Robustness testing
-  Safety and security
-  Testability
-  Testing tools
-  Testing standards
-  Testing of object-oriented software
-  Testing of real-time systems
-  Testing processes
-  Testing strategies
-  Application areas such as e-commerce, component-based systems,
   digital libraries, distributed systems, embedded systems, enterprise
   applications, information systems, Internet, mobile applications,
   multimedia, and Web-based systems

All the papers submitted to the conference will be refereed by three
members of the program committee according to technical quality,
originality, significance, clarity of presentation, and appropriateness
for the conference.  The conference proceedings will be published by
IEEE Computer Society.

CONTACTS:

-  Dr. T.H. Tse
   Department of Computer Science and Information Systems
   The University of Hong Kong
   Pokfulam Road
   Hong Kong

   Email: mailto:tse@csis.hku.hk
   Fax: +852 / 2559 8447
   Telephone: +852 / 2859 2183

-  Dr. T.Y. Chen
   Department of Computing and Mathematics
   Hong Kong Institute of Vocational Education
   Vocational Training Council
   30 Shing Tai Road
   Chai Wan
   Hong Kong

   Email: mailto:tychen@vtc.edu.hk
   Fax: +852 / 2505 4216
   Telephone: +852 / 2595 8152

========================================================================

           SERG Report 383: Deriving Real-Time Monitors from
                   System Requirements Documentation
                                  by
                           Dr. Dennis Peters

When designing safety- or mission-critical real-time systems, a
specification of the required behavior of the system should be produced
and reviewed by domain experts. Also, after the system has been
implemented, it should be thoroughly tested to ensure that it behaves
correctly.  This, however, can be difficult if the requirements are
complex or involve strict time constraints. A monitor is a system that
observes the behavior of a target system and reports if that behavior is
consistent with the requirements. Such a monitor can be used as an
oracle during testing or as a supervisor during operation.  This thesis
presents a technique and tool for generating software for such a monitor
from a system requirements document.

A system requirements documentation technique, based on [102], is
presented, in which the required system behavior is described in terms
of the environmental quantities that the system is required to observe
and control, which are modeled as functions of time. The relevant
history of these quantities is abstracted as the initial conditions and
a sequence of events. The required value of all controlled quantities is
specified, possibly using modes---equivalence classes of histories---to
simplify the presentation.  Deviations from the ideal behavior are
described using either tolerance or accuracy functions.
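
As a minimal sketch of the monitoring idea (the placeholder functions
below are assumptions for illustration, not taken from the report), the
monitor compares each observed value of a controlled quantity against
its required value, allowing the deviation given by a tolerance
function:

    /* Minimal sketch: one controlled quantity, with required value and
     * tolerance expressed as functions of time.  req_value() and
     * tolerance() are placeholders standing in for whatever the
     * requirements document specifies. */
    #include <math.h>
    #include <stdio.h>

    static double req_value(double t) { return 100.0; }  /* required value */
    static double tolerance(double t) { return 2.5; }    /* allowed deviation */

    /* Returns 1 if the observed behavior at time t is acceptable. */
    static int check(double t, double observed)
    {
        return fabs(observed - req_value(t)) <= tolerance(t);
    }

    int main(void)
    {
        /* Hypothetical (time, observed value) samples from the target. */
        double samples[][2] = { {0.0, 99.8}, {0.5, 101.2}, {1.0, 104.0} };
        int i;

        for (i = 0; i < 3; i++)
            if (!check(samples[i][0], samples[i][1]))
                printf("violation at t=%.1f: observed %.1f\n",
                       samples[i][0], samples[i][1]);
        return 0;
    }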

The monitor will be affected by the limitations of the devices it uses
to observe the environmental quantities, resulting in the potential for
false negative or positive reports. The conditions under which these
occur are discussed.

The generation of monitor software from the requirements documentation
for a realistic system is presented. This monitor is used to test an
implementation of the system, and is able to detect errors in the
behavior that were not detected by previous testing. For this example
the time required for the monitor software to evaluate the behavior is
less than the interval between events.

NOTE: The web address for downloading reports is:

        <http://www.crl.mcmaster.ca/SERG/serg.publications.html>

========================================================================

                       Report about SPI Available

Last month a book I wrote about SPI (in Italian) came out. Would it be
possible to include the news in TTN-Online (as was done in the December
1999 issue with the CMM book by an Infosys author)?  If the answer is
yes, all the information is at the following addresses (or tell me if
you prefer something different):

<http://www.esi.es/About_esi/News/newllb.html>
        (in English on the ESI website)
<http://www.francoangeli.it/libri/724000020.htm>
        (Publisher website)

CONTACT:
        Luigi Buglione, Ph.D.
        SPI Measurement
        mailto:luigi.buglione@esi.es
        mailto:luigi.buglione@computer.org
        http://www.geocities.com/ResearchTriangle/Campus/6025/

        European Software Institute
        Parque Tecnologico # 204
        E-48170 Zamudio
        Bizkaia - SPAIN

        Tel: ++34-94 420 95 19      http://www.esi.es
        Fax: ++34-94 420 94 20      mailto:info@esi.es

========================================================================

          Latest CAPBAK/Web Release Includes Many New Features

CAPBAK/Web(tm) is a full-featured Test Enabled Web Browser(tm) based on
IE 4.n or IE 5.n for Windows 95/98/NT.  This powerful tool lets you
perform all of the functions needed for detailed WebSite dynamic
testing, QA/Validation, and load emulation -- including many functions
that are difficult, awkward, or impossible with other approaches, e.g.
testing Java applets -- very easily and efficiently.

CAPBAK/Web's WebSite test and validation functions include:
  > Intuitive GUI for all test functions built into the CAPBAK/Web
    browser.

  > Recording and playback of user sessions in combined [optional]
    true-time and object mode.

  > Fully editable recordings/scripts expressed as ASCII files in "C".

  > Pause/SingleStep/Resume control during playback for debugging and
    demonstrations.

  > A full range of user-interactive runtime validation options,
    including document features, selected text, selected image, and all
    images and applets.

  > Test "wizards" that create scripts that exercise all links on a
    page, push all buttons on a FORM, and manipulate a FORM's contents.

  > Support for JavaScript and VBScript and Java applet navigation.

  > Support for Java applets, including navigation, keyboard, mouse,
    and mouse-over events.

  > Views of the keysave file (editable), messages file, errors file and
    event-log files (all files spreadsheet compatible).

  > Timer with 1 msec resolution for accurate performance measurement.

  > Cache management (you can play back tests without any caching or
    with an initially empty cache).

  > Many User Preferences provide a variety of recording and playback
    effects.

  > Multiple playback capability (multiple independent copies can play
    back simultaneously).

  > Batch mode command-line interface.

Take a quick look at the GUI and other material about the product at:

<http://www.soft.com/Products/Web/CAPBAK/Documentation.IE/CBWeb.quickstart.html>

Download the latest CAPBAK/Web release at:

<http://www.soft.com/Products/Downloads/down.capbakweb.html>


========================================================================

                  Bob Binder Response on Test Patterns

> Daniel Creasy  wrote in message
>
> I have been given the task of defining a test design pattern for C.
>
> 1. A Process Error Test pattern (to expose faults in C code that might arise
> as a result of  process errors - arithmetic, initialization, termination,
> control, sequence and logic errors).
>
> 2. A Bottom-up Integration Test pattern (to expose faults arising in
> connection with the integration of functions within the system under test).
>
> Anyone know of any URL's/Reading material that may be of use ?
>
> Thank you
> Daniel Creasy
> dan@dcreasy.freeserve.co.uk

I think you'll want to start with my new book, "Testing Object-Oriented
Systems: Models, Patterns, and Tools"
<http://www.rbsc.com/pages/TOOSMPT.htm> which introduces the test design
pattern schema.  The book also presents 37 test patterns, including
"Bottom Up Integration".

The test design schema is (1) not OO specific and (2) significantly
different from design patterns, in that it deals with questions that
arise only in test design.

Brad Appleton's pages are the best general source for pattern
development that I know of:

  <http://www.enteract.com/~bradapp/links/sw-pats.html>.

Bob Binder           <http://www.rbsc.com>    RBSC Corporation
312 214-3280  tel    Software Engineering     3 First National Plaza
312 214-3110  fax    Process Improvement      Suite 1400
rbinder@rbsc.com                              Chicago, IL 60602-4205

========================================================================

      The Prices were in British Pounds, Of Course (A Correction)

In the January issue of QTN-Online, you reported a story in which the
price of a television set was rounded from 3299 dollars and 99 cents to 33
dollars.  Are you sure about this?  Why would prices on a British web
site be quoted in dollars and not pounds?  Why would the web site be
doing arithmetic in dollars and not in pounds?

Anyway, I've looked on the Argos web site at http://www.argos.co.uk, and
I can't see any television product which costs over 650 pounds (about a
thousand United States dollars).  In the light of that, this story looks
a bit suspect.

Looking at the Daily Telegraph's web site for 9 September 1999:

<http://www.telegraph.co.uk:80/et?ac=001047225059655&rtmo=pbbNhSMe
        &atmo=kkkkkkGu&pg=/et/99/9/9/nargos09.html>

one learns that the real price was in fact 299.99 pounds, rounded
incorrectly to three pounds: an embarrassing and costly mistake, but
much less so than your report suggested.
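
Neither report says how the error arose; one classic way a 299.99 price
can turn into 3, shown below purely as a hypothetical illustration and
not as the actual Argos defect, is a pounds/pence unit mix-up:

    /* Purely hypothetical -- not the actual Argos defect.  Code that
     * expects an amount in pence divides by 100 and rounds, but is
     * handed an amount that is already in pounds. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double price_in_pounds = 299.99;

        /* Buggy path: the pounds figure is treated as if it were pence. */
        long displayed = lround(price_in_pounds / 100.0);  /* 2.9999 -> 3 */

        printf("intended price:  %.2f pounds\n", price_in_pounds);
        printf("displayed price: %ld pounds\n", displayed);
        return 0;
    }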

Regards,

Jonathan Headland
TakeFive Software GmbH
Jakob-Haringer-Str. 8
A-5020 Salzburg, Austria

========================================================================
------------>>>          QTN SUBMITTAL POLICY            <<<------------
========================================================================

QTN is E-mailed around the 15th of each month to subscribers worldwide.
To have your event listed in an upcoming issue E-mail a complete
description and full details of your Call for Papers or Call for
Participation to "ttn@sr-corp.com".

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the QTN issue date.  For example,
  submission deadlines for "Calls for Papers" in the January issue of
  QTN On-Line should be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters; QTN disclaims any responsibility for their content.

TRADEMARKS:  STW, TestWorks, CAPBAK, SMARTS, EXDIFF, Xdemo, Xvirtual,
Xflight, STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR
logo are trademarks or registered trademarks of Software Research, Inc.
All other systems are either trademarks or registered trademarks of
their respective companies.

========================================================================
----------------->>>  QTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To SUBSCRIBE to QTN, to CANCEL a current subscription, to CHANGE an
address (a CANCEL and a SUBSCRIBE combined) or to submit or propose an
article, use the convenient Subscribe/Unsubscribe facility at:

         <http://www.soft.com/News/QTN-Online/subscribe.html>.

Or, send E-mail to "qtn@sr-corp.com" as follows:

TO SUBSCRIBE: Include this phrase in the body of your message:

   subscribe your-E-mail-address

TO UNSUBSCRIBE: Include this phrase in the body of your message:

   unsubscribe your-E-mail-address

   NOTE: Please, when subscribing or unsubscribing via email, type YOUR
   email address, NOT the phrase "your-E-mail-address".

               QUALITY TECHNIQUES NEWSLETTER
	       Software Research, Inc.
	       1663 Mission Street, Suite 400
	       San Francisco, CA  94103  USA
	       
	       Phone:     +1 (415) 861-2800
	       Toll Free: +1 (800) 942-SOFT (USA Only)
	       Fax:       +1 (415) 861-9801
	       Email:     qtn@sr-corp.com
	       Web:       <http://www.soft.com/News/QTN-Online>

                               ## End ##