sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +===================================================+
         +=======    Quality Techniques Newsletter    =======+
         +=======             March 2001              =======+
         +===================================================+

QUALITY TECHNIQUES NEWSLETTER (QTN) is E-mailed monthly to subscribers
worldwide.  It supports the Software Research, Inc. (SR), TestWorks,
QualityLabs, and eValid user communities and other interested parties
by providing information of general use to the worldwide internet and
software quality and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the entire
document/file is kept intact and this complete copyright notice appears
with it in all copies.  Information on how to subscribe or unsubscribe
is at the end of this issue.  (c) Copyright 2001 by Software Research,
Inc.

========================================================================

                         Contents of This Issue

   o  Testing WAP Applications with eValid, by George Montemayor

   o  14th Annual International Internet & Software Quality Week
      (QW2001)

   o  When is Software Ready to Ship? by David Fern

   o  W3C Workshop on Quality Assurance, April 2001

   o  What Is A Quality Culture, by Boris Beizer

   o  Word Play, by Ann Schadt

   o  Special Issue of Informatica on Component Based Software
      Development

   o  Program Reliability Estimation Tool, SERG Report 391 by S.M. Reza
      Nejat

   o  QTN Article Submittal, Subscription Information

========================================================================

                  Testing WAP Applications with eValid
                                  by
                           George Montemayor

                              Introduction

Wireless Application Protocol (WAP) is used to support a wide range of
hand-held devices that are WAP-enabled.  WAP overlays HTTP, HTML, and
other Web protocols, and in many cases WAP-enabled WebSite servers adapt
by delivering special WAP-compatible pages.

As use of WAP applications grows, there is a corresponding increase in
interest in, and concern about, how to test them easily and reliably.
This article describes how the eValid test engine can be used to test
WAP applications.

                       General Technical Approach

eValid operates entirely as a client-side window on a WebSite.  Testing
a WAP application therefore requires a browser-based WAP emulator or a
browser-activated Java applet that emulates WAP operation.  The material
below describes a number of available WAP emulators that have been
verified to operate correctly with eValid.

Go to the following URL to see a sample screenshot showing eValid in
operation with one of these emulators.
  <http://www.soft.com/eValid/Applications/wap/images/gelon.wap1.gif>

                   Browser-Based WAP Testing Support

In this solution you point the eValid browser to a specific server (it
is specially equipped to support the browser emulation).  You also
specify the URL for the candidate WAP-enabled site that you want to
test.  The special server converts your input into WAP-style requests to
the target WAP-enabled site, and re-converts the responses back into a
format suitable for display on the WAP emulator that you see in the
eValid browser window.

    Gelon Server-Interactive Solution:  Gelon has a WAP browser called
    "The Wapalizer" that uses a web browser like IE to render the
    content.  This involves using a particular page on the Gelon site
    and specifying the URL on which the WAP emulation is to be done.

    The Gelon WebSite allows you to view a WAP page by submitting a link
    for their server-based emulator to process.  Go to:
    <http://www.gelon.net>

    One of the WAP emulators is for the Ericsson Model R320:
       <http://www.gelon.net/cgi-bin/wapalizeericssonr320.cgi>

    We have checked and found that all nine of the emulations shown on
    this page work well with eValid.

                    Applet-Based WAP Testing Support

In this solution you use a local Java applet in your eValid browser that
emulates the WAP interface.  You give the emulator the URL of the WAP-
enabled site you want to test, and it carries out the WAP interchanges
with the designated site.

To see how these emulators work go to this URL:
  <http://gelon.net/cgi-bin/wapalizeericssonr320.cgi?url=http://wap.gelon.net>

You'll see the WAP-enabled phone and you can use your browser to drive
around its menus.  For a sample screenshot showing eValid in operation
on the Ericsson R380 Emulator go to:
  <http://www.soft.com/eValid/Applications/wap/images/ericsson.R380.gif>
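
In both the server-interactive and applet-based solutions the
interaction reduces to an ordinary HTTP GET against an emulator page,
with the target site passed as a query parameter (as in the gelon.net
URL above).  The Python sketch below illustrates the idea as a stand-
alone smoke test; the check_wap_page helper is our own invention, not
part of eValid, and it assumes the Wapalizer CGI accepts a "url"
parameter as shown.

    import urllib.parse
    import urllib.request

    # The Wapalizer CGI page shown above (assumption: it accepts the
    # target site in a "url" query parameter, as in the example URL).
    EMULATOR = "http://gelon.net/cgi-bin/wapalizeericssonr320.cgi"

    def check_wap_page(target_url):
        """Fetch target_url through the emulator page and report
        whether a non-empty page came back (a minimal smoke test)."""
        query = urllib.parse.urlencode({"url": target_url})
        with urllib.request.urlopen(EMULATOR + "?" + query,
                                    timeout=30) as resp:
            return resp.status == 200 and len(resp.read()) > 0

    print(check_wap_page("http://wap.gelon.net"))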

    Yospace Applet Solution:  The <http://www.yospace.com> WebSite
    provides a variety of Web-enabled SmartPhone emulators.

    Two of their emulators have been verified as compatible with
    eValid:
      Motorola Timeport P7389
        <http://www.yospace.com/p7389.html>
      Ericsson Model R380
        <http://www.yospace.com/r380.html>

Note: For further information about eValid go to <http://www.e-valid.com>.

========================================================================

  14th ANNUAL INTERNATIONAL INTERNET & SOFTWARE QUALITY WEEK (QW2001)

                       29 May 2001 - 1 June 2001

                     San Francisco, California  USA

                  <http://www.qualityweek.com/QW2001/>

                  CONFERENCE THEME: The Internet Wave

The 16-page full-color brochure for the QW2001 conference is now
available for download from the conference WebSite.  To print your own
copy go to:

   <http://www.qualityweek.com/QW2001/brochure.phtml>

to get either the PostScript (*.ps) or the Portable Document Format
(*.pdf) version.  If you'd like your own printed copy you can request
it from that same page.

Conference Description:

    QW2001 is the 14th in the continuing series of International
    Internet and Software Quality Week Conferences that focus on
    advances in internet quality, software test technology, quality
    control, risk management, software safety, and test automation.
    There are 14 in-depth tutorials on Tuesday, and 4 Post-Conference
    workshops on Friday.

KEYNOTE TALKS from industry experts include presentations by:

      > Dr. Dalibor Vrsalovic (Intel Corporation) "Internet
        Infrastructure: The Shape Of Things To Come"
      > Mr. Dave Lilly (SiteROCK) "Internet Quality of Service (QoS):
        The State Of The Practice"
      > Mr. Thomas Drake (Integrated Computer Concepts, Inc ICCI)
        "Riding The Wave -- The Future For Software Quality"
      > Dr. Linda Rosenberg (GSFC NASA) "Independent Verification And
        Validation Implementation At NASA"
      > Ms. Lisa Crispin (iFactor-e) "The Need For Speed: Automating
        Functional Testing In An eXtreme Programming Environment (QW2000
        Best Presentation)"
      > Mr. Ed Kit (SDT Corporation) "Test Automation -- State of the
        Practice"
      > Mr. Hans Buwalda (CMG) "The Three "Holy Grails" of Test
        Development (...adventures of a mortal tester...)"

SIX PARALLEL PRESENTATION TRACKS with over 60 presentations:

      > Internet: Special focus on the critical quality and performance
        issues that are beginning to dominate the software quality
        field.
      > Technology: New software quality technology offerings, with
        emphasis on Java and WebSite issues.
      > Management: Software process and management issues, with special
        emphasis on WebSite production, performance, and quality.
      > Applications: How-to presentations that help attendees learn
        useful take-home skills.
      > QuickStart: Special get-started seminars, taught by world
        experts, to help you get the most out of QW2001.
      > Vendor Technical Track: Selected technical presentations from
        leading vendors.

SPECIAL EVENTS:

      > Birds-Of-A-Feather Sessions (BOFS) [organized for QW2001 by
        Advisory Board Member Mark Wiley (nCUBE)].
      > Special reserved sections for QW2001 attendees to see the SF
        Giants vs. the Arizona Diamondbacks on Wednesday evening, 30 May
        2001 in San Francisco's world-famous downtown Pacific Bell Park.
      > Nick Borelli (Microsoft) "Ask The Experts" (Panel Session), a
        session supported by an interactive WebSite to collect the
        most-asked questions about software quality.

Get complete QW2001 information by Email from  or go to:

                  <http://www.qualityweek.com/QW2001/>

========================================================================

                    When is Software Ready To Ship?

                               David Fern
                          MICROS Systems Inc.
                           

Deciding when software is ready to ship is difficult. You have pressure
from all sides to release perfect software, with added features,
yesterday. The engineers said the code was complete months ago, but are
still making code changes. Sales people promised the software to major
accounts months ago and are already making commitments for the next
release.  The product manager wants a few more features added, and you
want to release zero-defect software. Having no specific release
criteria, many organizations wait for the manager to say "Ship it", not
knowing when or why that decision was made. In other cases, there is a
general group consensus to just push the software out the door because
the tired and stressed group wants to escape the seemingly never-ending
cycles.

You can never take all of the stress out of shipping good software, but
by planning ahead and setting up a good defect tracking repository,
predetermining acceptable bug counts, properly testing the software, and
effectively triaging the defects, you can always know the current state
of the software and, most importantly, know when the software is ready
to ship.

                       Defect Tracking Repository

The most important tool to determine the state of the software is the
defect tracking repository. The repository must include the following
information:

  > A short descriptive name for the defect
  > A description of the defect
  > The actual results of the tester's test
  > The expected results of the tester's test
  > The steps to recreate the defect
  > The version in which the defect was found
  > The module in which the defect was found
  > The severity of the defect
  > Resolution notes

The repository must be able to provide the information required for the
triage process. The team should be able to query the database and gather
information such as defect counts at various severity and module levels,
descriptions of the defects, and steps to reproduce them. The repository
becomes the archive for the project, so it is important that correct and
complete information be entered, both for this project and for use as a
planning tool on later projects.
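
To make the triage queries concrete, here is a minimal sketch of such a
repository as a SQLite table, with a query that returns open defect
counts by severity and module.  The table and column names are our own
illustration; any real defect tracking product provides the equivalent.

    # Illustrative sketch only: the table and column names are our
    # own, not a reference to any particular tracking product.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE defect (
            id          INTEGER PRIMARY KEY,
            name        TEXT NOT NULL,  -- short descriptive name
            description TEXT,
            actual      TEXT,           -- actual result of the test
            expected    TEXT,           -- expected result of the test
            steps       TEXT,           -- steps to recreate the defect
            version     TEXT,           -- version where found
            module      TEXT,           -- module where found
            severity    TEXT,           -- Showstopper/High/Medium/Low
            resolution  TEXT            -- resolution notes (NULL = open)
        )""")
    conn.execute("INSERT INTO defect (name, severity, module) "
                 "VALUES ('Crash on save', 'High', 'editor')")

    # Triage query: open defect counts per severity and module.
    for severity, module, count in conn.execute(
            "SELECT severity, module, COUNT(*) FROM defect "
            "WHERE resolution IS NULL GROUP BY severity, module"):
        print(severity, module, count)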

                  Predetermined Acceptable Bug Counts

The goal of shipping software with no defects cannot be achieved given
the limited time and resources, so exit criteria must be set up in the
preliminary planning of the project. The exit criteria should then be
translated into the test plan outline, which should include the
acceptable bug count levels required to ship the software.

The exact acceptable bug count level is not some magic number that will
ensure a successful product, but rather a target that the group can use
to see that they are moving toward a common goal. An acceptable bug
count statement may look like this:

  > Showstoppers - There may not be any.
  > High - There may not be any.
  > Medium - There may not be any in specified core areas. There may not
    be any that have a high probability of appearance to the customer.
  > Low - There may be a minimal number in specified core areas; all
    other areas may have any number.

In order for the entire group to know the state of the software at all
times, it is preferable to institute a weekly build grading procedure.
The idea is that each week the build is treated as if it were the
software to be shipped, and given a grade. The weekly grade keeps the
team current on the software's state, and the team can see the software
progressively improve. This quantitative method takes a lot of the
guesswork out of the equation.
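
One way to mechanize the weekly grade is to check the open defect
counts, taken straight from the repository, against the acceptable bug
count statement above.  The sketch below is our own illustration; the
"high probability of appearance" clause for Medium defects is a
judgment call that no simple count can capture, so it is omitted.

    # Grade a weekly build against the sample criteria above.
    # counts maps (severity, module) -> open defect count; the names
    # and data are hypothetical.
    def ready_to_ship(counts, core_modules):
        for (severity, module), n in counts.items():
            if n == 0:
                continue
            if severity in ("Showstopper", "High"):
                return False            # there may not be any
            if severity == "Medium" and module in core_modules:
                return False            # none allowed in core areas
        return True                     # Low defects are tolerated

    print(ready_to_ship({("High", "billing"): 2},
                        core_modules={"billing"}))   # -> False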

                             Proper Testing

The key to shipping quality software is finding and fixing all defects.
The tester's responsibility becomes finding the defects and properly
reporting them. In order for the testers to find the defects, the lead
tester must set out in the test plan outline the specific stages of
testing, to ensure that the process is organized and covers all aspects
of the software. The test plan may look as follows:

        Phase 1 - Test Case and Test Suite Planning
        Phase 2 - Unit Testing
        Phase 3 - Regression Testing
        Phase 4 - System Testing
        Phase 5 - Regression Testing
        Phase 6 - Performance Testing
        Phase 7 - Release Candidate Testing

The goal is to turn up as many defects as possible, as early as
possible, in order to make the fixes easier on the engineers and to
provide ample time for regression testing, since every time the code is
altered there is a greater risk of creating more defects.

As important as finding the defect, the tester must be able to correctly
report it in the repository. This becomes extremely important in large
projects, where the defect counts may rise into the thousands.

                            Effective Triage

The Lead Tester and Engineer must sit down regularly to evaluate the
reported defects and drive the process forward in the manner most
efficient for the engineers as well as the testers. Much of the decision
making here is based on queries of the defect repository, which shows
the importance of the repository's accuracy and completeness.

The creation of a "Hot List" which is a list of important bugs is a
great tool to create. This list is a spreadsheet, which identifies the
most important defects, their module and a brief description.  The "Hot
List" is great to use in triage to identify the defect action items for
the upcoming week.

Defects that prevent further testing in a certain area must be given
precedence, and the engineers can assist in the risk analysis of
upcoming modules to help the testers in their test case development.

In determining what to fix next, it is generally advantageous to group
and fix defects relating to the same module at the same time, even
though there may be more important defects in other modules. Grouping
defects helps the engineers finish a module and move on.

Shipping quality software is by no means an easy task, but by being
organized from the start of the project, the ship date and defect counts
don't have to be a "gut feeling".  The project must have a Defect
Tracking Repository that is maintained and updated continually with
correct and complete information. The Repository is the main tool for an
effective triage process, used in conjunction with the Test Plan Outline
created at the start of the project, which describes in detail the exit
criteria for shipping the software. Finally, the entire process rests on
proper testing and proper test planning, which is what uncovers the
defects. If you do not uncover the defects, the users of the software
will.

By following the process I have discussed, it is possible to know the
state of the software at any time, so that the entire team knows the
goal, sees the progress, and knows when the software is ready to ship.

========================================================================

           W3C Workshop on Quality Assurance, 3-4 April 2001

QTN readers ought to point their browsers to:

                   <http://www.w3.org/2001/01/qa-ws/>

to learn about the W3C (World Wide Web Consortium) workshop on Quality
Assurance.  The Workshop is hosted by NIST and includes a number of
topics that will interest QTN readers.

For information please contact Daniel Dardailler (W3C) at
 who is the Workshop Co-Chair with Lynne Rosenthal
(NIST).

========================================================================

                       What Is A Quality Culture
                                   by
                              Boris Beizer

      Note:  This article is taken from a collection of Dr. Boris
      Beizer's essays called "Software Quality Reflections" and is
      reprinted with permission.  We plan to include additional
      items from this collection in future months.  You can
      contact Dr. Beizer at .

There is a process.  The details of the process aren't important.
There's a lot of cultural diversity in process.  It's not a question of
where the boxes are and how the bullets are arranged, but that there are
certain fundamental parts to the process.

 1. Formalized Requirements.  Here's what the leaders are doing.  There
    is some kind of formality to requirements.  They do market studies
    to learn what the users want.  They instrument their software to
    learn the users' habits.  They use independent polling organizations
    to find out if the users are satisfied, instead of planning based on
    myths.  They also do prototyping.  In other words, they don't have a
    bunch of guys sitting around saying "you know, wouldn't it be nice
    if...."

 2. Distinguish Between Quality Control and Quality Assurance.  Quality
    control is primarily testing these days; quality assurance is
    process management.  The leaders have a clear-cut process management
    function.  How do we tweak the process to improve the quality of the
    software we produce?  The focus is on bug prevention, not bug
    detection because bug detection is already efficient and prevention
    promises a bigger payoff.

 3. Formal Inspections.  Inspections of requirements, of design, of
    code, and of tests.  I'm talking about formal inspections as
    exposited by Fagan, Weinberg, etc., and not the old-style reviews
    under a different name.  Some kind of inspection is in place or is
    getting into place.

 4. Good Unit Testing.  By "good" unit testing I mean they are at least
    providing 100% branch coverage and, in the better cases, 100%
    predicate condition coverage.  And they are using a variety of test
    techniques.
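
    The difference between the two levels is easy to see on a compound
    condition.  In the small sketch below (our own example, in Python),
    two tests achieve 100% branch coverage yet never evaluate one of
    the atomic conditions both ways; condition-level coverage demands
    at least one more test.

        # Branch vs. predicate condition coverage (our own example).
        def access_allowed(logged_in, admin):
            if logged_in and admin:   # one predicate, two conditions
                return True
            return False

        # 100% branch coverage: the predicate is both True and False...
        branch_tests = [(True, True), (True, False)]

        # ...but logged_in is never False.  Condition-level coverage
        # also requires each atomic condition to take both values:
        condition_tests = [(True, True), (True, False), (False, True)]

        for args in condition_tests:
            print(args, access_allowed(*args))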

 5. Independent Testing. They have independent testing, which does not
    mean, incidentally, that they have an independent test group.  I
    want to distinguish between independent testing and independent test
    groups.  It's more efficient not to have independent test groups.
    The independent test group is something that organizations
    apparently must go through in order to get the quality culture
    embedded.  You need independent test groups when testers need
    protection.  When you have a culturally primitive organization, the
    tester comes along and says "you can't ship this product", and what
    happens?  The development manager says "you're fired!"  Once that
    nonsense stops and quality ideals and methodology are part of the
    culture at every level, then it's safe to fold the independent test
    group back into the development organization.

    But there is still independent testing.  The way some organizations
    do it is to cross-test.  That is, "I'll test your software if you'll
    test my software."  There are many workable variations on that
    theme.

 6. Negative Testing.  Most testing is what I call "dirty" or "negative
    testing."  The primary purpose of a dirty test is to break the
    software.  The primary purpose of a clean test is to demonstrate
    that the software works.  In immature organizations there are about
    5 clean tests for every dirty test.  Mature organizations, by
    contrast, have 5 dirty tests for every clean test.  It's not done by
    reducing the clean tests: it's done by a 25-fold increase in the
    dirty tests.

 7. Test Configuration Control.  Another characteristic of a mature
    software organization is that its test suites are configuration
    controlled.  If a company has its software under configuration
    control but doesn't have its tests under configuration control,
    then, since testing consumes half of its labor, it is throwing away
    half of its labor content.

 8. Tester Qualifications.   Among the leaders there are no differences
    in the qualifications of testers versus programmers.  I admit that
    I'm an elitist when it comes to testers' qualifications.  What I
    call "key pounders", people whose image of testing is pounding the
    keys, who don't know programming, are a passing phase.  They will
    probably be driving taxi cabs in the not too distant future.  As for
    real testers, the only distinctions I can make between them and
    programmers are: however much disk space a programmer needs, a
    tester needs 5 times as much, and people in test groups need better
    social skills.  You can be an antisocial programmer, but a tester
    has to interact with programmers on one hand and users on the other,
    and won't get far without social skills.  Other than
    that, they have the same education, the same training, the same
    experience.

 9. Metrics.  Good organizations measure.  Measure bug rates, measure
    labor content, etc. etc.  More metrics than you can shake a stick
    at.  The measurements are used to continually attempt to improve the
    process.  That is the overriding thing; as much is quantified as
    possible.  A consequence of metrics, however, is accountability.
    The leaders aren't afraid of that:  the losers, by contrast, do
    everything they can to block metrics so that they won't be held
    accountable for things like cost and schedule.  People predict,
    based on validated model like COCOMO, what the labor content will be
    in the different phases of software development.  Not only do they
    predict the labor content and schedule, but they meet both.  It can
    be done.
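
    As an aside, basic COCOMO is the simplest form of the model
    mentioned here: effort is predicted from size alone, using the
    published organic-mode coefficients.  The figures below are a
    made-up example, not data from any project.

        # Basic COCOMO, organic mode (published coefficients:
        # effort E = 2.4 * KLOC**1.05 person-months,
        # schedule D = 2.5 * E**0.38 calendar months).
        def cocomo_organic(kloc):
            effort = 2.4 * kloc ** 1.05
            schedule = 2.5 * effort ** 0.38
            return effort, schedule

        effort, months = cocomo_organic(32.0)  # hypothetical 32 KLOC
        print("%.1f person-months over %.1f months" % (effort, months))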

10. Clear Quality Objectives.  The fundamental issue in quality is
    misunderstood by many people.  There is no God-given requirement for
    quality.  Quality is not the goal, it never has been.  The goal is
    the bottom line.  Quality just happens to be a major component of
    that goal.  You put quality into software because it's a lot cheaper
    than a whole bunch of people on the hot-line.  One of my clients, a
    big software vendor, said that it costs them $45 every time they
    pick up the phone to answer a user question.  But when you sell your
    package for $50, and you try to make a profit, you have to have your
    quality up there.  Not just quality in the product, but quality in
    the manual, help files, etc.  Otherwise, you'll be out of business.

Clear quality objectives means that ideally you're writing and testing
software to a certain failure rate.  Not to a certain bug rate, but a
failure rate.  There is a big distinction.  Bugs per line of code is of
no interest to the user.  Why should the user care how many bugs you've
got?  That's a developer's concern.  The user is concerned with whether
the software will fail, or worse, corrupt data.  So the emphasis is on
some kind of mean time to failure in operation.  And the leaders, of
course, do measure that, or attempt to measure it, despite the problems
in the field.

========================================================================

                        Word Play by Ann Schadt

[The rules of this game are as follows:  take a word, add a letter and
then give the definition of the newly formed word.]


Trice: instant rice.
Aeroticism: sexiness in space.
Gaspoline: overpriced fuel.
Sindex: a list of one's vices.
Dinjury: hearing loss from loud noise.
Hindslight: the retort you wish you had thought of.
Rumbrella: that little paper parasol decorating a tropical cocktail.
Ployal: swearing allegiance, but with a hidden plan.
Jambiguity: a homemade jam of wild berries picked by children.
Splatoon: like a spittoon, only bigger.
Gabolition: monastic silence.
Sneasoning: a heavy dose of pepper.
Sourprise: small grapefruit originally thought to be an orange.
Bidiocy: buying an overpriced antique at auction.
Carthritis: how one's hands feel about holding the steering wheel too
tightly.
Barithmetic: figuring out who owes what for drinks.
Kinventory: a family tree.
Sexpress: a drive-through brothel.
Relopement: second try at a botched runaway marriage.
Sexpedition: a heavy date.
Debtutante: a young woman who maxed out her credit cards to pay for her
coming-out party.
Pecstasy: the feeling one gets looking at a gorgeous body-builder.
Sexport: where sailors want to go.
Slottery: a bunch of one-armed bandits.
Reeferendum: a nationwide vote on marijuana legalization.
Lobserve: easy tennis shot.
Shovercraft: a boat that you have to push.


========================================================================

                      Special Issue of Informatica
         An International Journal of Computing and Informatics

                              Dedicated to
                 "Component Based Software Development"

Previously unpublished high-quality papers are solicited for a special
issue on Component Based Software Development in the International
Journal of Computing and Informatics - Informatica.

Component-based software development (CBSD) focuses on building large
software systems by integrating existing software components. At the
foundation of this approach is the assumption that certain parts of
large software systems reappear with sufficient regularity that common
parts should be written once, and systems should be assembled through
reuse rather than rewritten. Successful CBSD has several prerequisites:
a technology base in the form of component models; a selection of
commercial off-the-shelf components; integration techniques; run-time
environments; development methods; development tools; etc.

The focus of the special issue is on advances in all fields of
component based software development. Papers are solicited describing
state-of-the-art research on component based software development
topics including, but not limited to, the following areas:

   - Component models
   - Component integration methods
   - Component development methods
   - Run-time environments, containers and application servers
   - Performance of component based systems
   - Component frameworks
   - Component contracts
   - Design patterns
   - Distributed and real-time components
   - Software architectures for efficient component based applications
   - Component testing, component based application testing and metrics
   - Programming languages and environments for CBSD
   - Component deployment and adaptation
   - Component configuration management
   - Domain specific component specifications
   - Theoretical foundations

Authors interested in submitting a paper for the issue should contact
Matjaz B. Juric  for submission details.

========================================================================


                            SERG REPORT 391
                  Program Reliability Estimation Tool
                            S.M. Reza Nejat

Abstract:  Testing is a very demanding procedure in software production
that takes a lot of effort, time, and resources during both development
and maintenance.  Moreover, statistical testing is a very costly
procedure, especially if high reliability requirements are placed on the
software, as in safety-critical or safety-related software.  The main
question is when to stop testing, or how many tests are needed.

Singh et al. [49], using the method of the negative binomial, developed
a procedure for quantifying the reliability of a module.  In their
approach, the number of tests can be computed based on hypothesis
testing.  We implemented this method for reliability estimation of a
program.
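
We do not reproduce the Singh et al. derivation here.  For intuition,
the closely related zero-failure calculation below (a standard result,
not the paper's method) shows how a per-run reliability target and a
confidence level translate into a required number of failure-free
random tests.

    import math

    def tests_required(reliability, confidence):
        # Standard zero-failure formula: n >= ln(1-C) / ln(R).
        return math.ceil(math.log(1.0 - confidence)
                         / math.log(reliability))

    # Demonstrating 0.999 per-run reliability at 99% confidence:
    print(tests_required(0.999, 0.99))   # -> 4603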

In this work, a prototype black-box automated testing tool, called the
Program Reliability Estimation Tool (PRET), was developed as a
statistical test generator and reliability estimator based on an
operational profile (a proposed testing model) and negative binomial
sampling.

The tool has a command-line user interface.  The inputs to PRET are: an
integer (0 or 1) to choose the usage (0: only generate test cases; 1:
run the full testing process), the test specification context file name,
the data file name, the name of the program under test, and the oracle
name.

PRET computes the number of test cases, generates test cases, runs the
generated test cases, evaluates the result of each test run by using an
oracle, and estimates the reliability of the program based on test
results.
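
Reading the input list above literally, an invocation might look like
the following; the executable and file names are hypothetical, since
the abstract does not give the exact syntax:

    pret 1 testspec.ctx input.dat program_under_test oracle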

========================================================================
      ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------
========================================================================

QTN is E-mailed around the middle of each month to over 9000 subscribers
worldwide.  To have your event listed in an upcoming issue E-mail a
complete description and full details of your Call for Papers or Call
for Participation to .

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the QTN issue date.  For example,
  submission deadlines for "Calls for Papers" in the March issue of QTN
  On-Line should be for April and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items appearing in QTN represent the opinions
of their authors or submitters; QTN disclaims any responsibility for
their content.

TRADEMARKS:  eValid, STW, TestWorks, CAPBAK, SMARTS, EXDIFF,
STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR logo are
trademarks or registered trademarks of Software Research, Inc. All other
systems are either trademarks or registered trademarks of their
respective companies.

========================================================================
          -------->>> QTN SUBSCRIPTION INFORMATION <<<--------
========================================================================

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to CHANGE an
address (an UNSUBSCRIBE and a SUBSCRIBE combined) please use the
convenient Subscribe/Unsubscribe facility at:

         <http://www.soft.com/News/QTN-Online/subscribe.html>.

As a backup you may send Email direct to  as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:
           subscribe 

   TO UNSUBSCRIBE: Include this phrase in the body of your message:
           unsubscribe 

Please, when using either method to subscribe or unsubscribe, type the
 exactly and completely.  Requests to unsubscribe that do
not match an email address on the subscriber list are ignored.


		QUALITY TECHNIQUES NEWSLETTER
		Software Research, Inc.
		1663 Mission Street, Suite 400
		San Francisco, CA  94103  USA

		Phone:     +1 (415) 861-2800
		Toll Free: +1 (800) 942-SOFT (USA Only)
		Fax:       +1 (415) 861-9801
		Email:     qtn@sr-corp.com
		Web:       <http://www.soft.com/News/QTN-Online>