sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +===================================================+
         +=======    Quality Techniques Newsletter    =======+
         +=======            November 2000            =======+
         +===================================================+

QUALITY TECHNIQUES NEWSLETTER (QTN) is E-mailed monthly to Subscribers
worldwide to support the Software Research, Inc. (SR), TestWorks,
QualityLabs, and eValid user communities and other interested parties,
and to provide information of general use to the worldwide internet and
software quality and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the entire
document/file is kept intact and this complete copyright notice appears
with it in all copies.  Information on how to subscribe or unsubscribe
is at the end of this issue.  (c) Copyright 2000 by Software Research,
Inc.


========================================================================

   o  4th Annual International Software & Internet Quality Week Europe
      (QWE2000); Conference Theme: Initiatives For The Future.

   o  Managing Requirements: From Battleship to Heat-Seeking Missile
      (Part 2 of 2), by David L. Moore

   o  News Flash: Seven Software Companies Added to "Watch List",
      forwarded by Yerby Craig

   o  QW2001 (29 May 2001 - 1 June 2001) Call For Papers

   o  Stress Relief (Part 2 of 2) by Richard Ellison

   o  Testing Point Of Sale Software, by David Fern

   o  Call For Papers, 1st Asia-Pacific Conference on Web Intelligence,
      October 2001, Maebashi City, Japan.

   o  QTN Article Submittal, Subscription Information

========================================================================

                  Final QWE2000 Registration Reminder

     * * * QWE2000 Keynote Speakers Address Internet Futures * * *

We've added two new Keynote Speakers who will provide a distinctly
European view of the QWE2000 Conference Theme "Initiatives for the
Future":

  > Minister Rik Daems, Telecommunications Ministry for Belgium, will
    present "Belgium's Five-Star Plan to Develop the Information
    Society".
  > Dr. Philippe Aigrain, Head of the Software Technologies Sector of
    the European Commission, will describe "A Wider Look at Software
    Quality".

The QWE2000 program includes 11 half-day and full-day tutorials, 60
regular presentations in five tracks, and a 2-day exhibit with 14+
companies represented.  In addition, you can take a 2-day seminar, sit
for the ISEB exam, and receive your Software Testing Certificate.

The complete technical program for QWE2000 can be found at:
  <http://www.soft.com/QualWeek/QWE2K/qwe2k.program.html>

                  * * * QWE2000 Special Events * * *

In addition to the rich technical program, there are many special events
at QWE2000:

  o  Welcome Reception at the Belgian Brewery Museum (Tuesday, 5:30 PM)
  o  Cocktail Party at the Conference Hotel (Wednesday, 5:00 PM)
  o  Tour a Small Brussels Chocolate Factory (Wednesday, 6:00 PM)
  o  Tour the Museum of Gueuze Beer (Thursday, 5:30 PM)
  o  Conference Dinner in an Art Nouveau restaurant (Thursday, 8:00 PM)

Come to QWE2000 and join us in the fun!  Complete details at:
  <http://www.soft.com/QualWeek/QWE2K/qwe2k.specialevents.html>

             * * * QWE2000 Registration Information * * *

If you have not already done so, please *DO* register yourself for
QWE2000 as soon as possible.  Registrations are running well ahead of
last year and space is limited!  Register on the web at:
  <http://www.soft.com/QualWeek/QWE2K/qwe2k.register.html>

========================================================================

               Managing Requirements: From Battleship to
                    Heat-Seeking Missile (Part 2/2)
                                   by
             David L. Moore, Independent Testing Consultant

So what do good requirements look like?

The characteristics that individual requirements should possess are as
follows:

* Correct - what is being stated has to be right and the requirement
should, where applicable, contain precise information.

* Clear - the objective of the requirement should be obvious and free
from wordiness. You shouldn't have to think too hard about what the
requirement is trying to say. Is it understood by everyone?

* Unambiguous - the requirement should not be interpretable in more than
one way. For example, "The system shall only allow a logged in user to
login once." Does this include the current session or can we log in once
more because we are already logged in?

* Avoid Implications - the requirement should not imply that something
extra would be allowed. Requirements with implications are often stated
in the negative rather than the positive. For example, "The system
administrator shall not be able to login" - but he can do everything
else and other people can log in under the same circumstance, right?

* Complete - individual requirements should be free from any obvious
omissions. If a group of requirements makes up a complete picture of
some functionality - is everything there? Have all cases been clearly
stated? Have any assumptions been made? Is the requirement finished,
i.e., not a "to be determined", by the time an answer is needed?

* Consistent - the requirement should make sense in its current context.
Is it worded within documentation standards or conventions set out in
the specification? Is the requirement conflicting with or contradicting
any other requirements? Is the requirement free of unnecessary jargon,
homonyms and synonyms?

* Reasonable - what is being described must be achievable and practical.
Can the requirement be implemented within the constraints of the project
budget, schedule and available resources? Does it sit well with other
requirements? Is the requirement justifiable?

* Clear Dependencies - if the requirement has any dependencies they
should be identified. Examples include other requirements, project
dependencies and technological dependencies.

* Keyword/identifier - requirements should contain a keyword or an
identifier. For example, 'shall' for mandatory requirements and 'will'
for guidance. These make it clear to everyone using the document which
statements are requirements, guidance or narrative. A keyword provides
visibility to the requirements.

* Traceable - your requirement is uniquely identified. The simplest and
best way to do this is to number your requirements from 1 to n.

* Unique - no two requirements should say the same thing. Why say it
twice and give yourself a maintenance hassle or a risk of
contradiction?

* Specific Cross-references - cross-references that merely point to a
standard, or another document, imply that the whole thing applies. In
contractual circumstances this can lead to enormous unexpected blowouts
in scope. Only include what is intended.

* Relevant - The requirement should actually be mandatory. Perhaps the
requirement is better expressed as a 'will', i.e., guidance or non-
essential functionality. Is the requirement in fact merely narrative
that has no bearing on the functionality of the system? Is the
requirement something that is outside of the system's control and
therefore not desirable as a requirement?

* Testable - the requirement must be specific rather than vague. Has it
been quantified? Have tolerances and ranges been expressed? Can you
think of a reasonable way to test it? Will the effects of this
requirement be visible?

* Clear Usage - can you identify a use or a user for the requirement?
Does it have a reason for living?

* Design-free - there should not be too much detail. Has the requirement
started to infringe on the design process by describing how things will
be done, rather than what should be done? If you start identifying
requirements that appear to be only used by the workings of the system
then you may be looking at design statements or guidance rather than
requirements.

* No Assumptions - all assumptions should be clearly stated. An unstated
element in a requirement can take on a different meaning for each
individual who reads it. This is ambiguity by stealth and can just as
easily lead to incorrect implementation and dispute. Recording
assumptions removes the potential for misinterpretation and opens up the
opportunity to fix them.

* Prioritized - the importance of a requirement should be obvious. Some
will be more important than others. If the author or reviewer possesses
such knowledge it should be conveyed to the reader, perhaps via a style
convention, a flag, or location within the document.

Some of the characteristics applied to individual requirements above
also need to apply to the document as a whole. However, overall a
specification also needs to possess the following features:

* Modifiable - The document needs to be structured in such a way that a
change to an individual requirement does not cause excessive impact on
other items.

* Complete - Everything that is going to be in the system is described
in the specification. If the size of the specification is such that it
needs to be delivered in stages or volumes, label each piece of the
puzzle so that it can clearly be identified as a part of the whole.
Avoid gaps and unnecessary duplication.

* Verifiable - The specification can only be considered as verifiable if
every requirement within the specification can be verified. To be able
to sign off and deliver the whole document, every component has to be
correct.

* Traceable - The source of all the requirements in the specification
should be known and all requirements within the specification should be
traceable to subsequent documents.

* Usable - The specification is a tool. It is essential that it is
usable to all parties that will rely on its content. The style of the
document, accessibility, visibility and feedback methods all impact on
the life span of the specification. The success of a specification comes
from its use, not its completion.


Tips and Tricks

* Use the word 'shall' to indicate mandatory requirements and 'will' to
indicate guidance or non-essential functionality.  This is what the
military prefers.  A keyword can be searched for by a requirements
management tool and automatically added to the tool's database (a
minimal scanning sketch appears after this list). Some people like to
use many keywords, e.g., may, could, should, can, etc. However, there
is already enough to look at in a specification without having to
remember multiple keywords and what level of compulsion is attached to
them. I would not recommend this.

* As soon as your requirements are publicly numbered, never renumber
them, or recycle numbers from deleted requirements, even if you have a
tool. Trust me, it will only lead to confusion.

* Don't delete the requirements from the specification. If you aren't
using a tool, strike-out or hide the text but don't remove it. This way
it is easier to recover if needed and helps you to spot 'requirements
thrash'.

* Requirements management tools may mandate that you have some form of
prefix as part of the identifier, e.g., SRS-035. Keep these small and
unique. A sentence with a single keyword may contain multiple
requirements. For example, "The administrator shall be able to add,
delete, modify and suspend users." In this instance a tool may have
automatically marked the entire sentence as one requirement:
"[SRS-035]The administrator shall be able to add, delete, modify and
suspend users."  However, you would want to prove each operation
individually, so it becomes necessary to manually break the sentence
into four requirements: "The administrator shall be able to
[SRS-072]add, [SRS-073]delete, [SRS-074]modify and [SRS-075]suspend
users." You can see that the prefix on each number is becoming
intrusive. It is a small price to pay for the benefits of using a tool
but a good reason to keep your prefixes small.

* Don't be tempted to use automatic word processor document features in
place of requirements numbering. There is significant risk that the
document will look different from site to site or even machine to
machine. Also, one change in the document can result in unintentional
requirements renumbering and a traceability nightmare.

* Make a considered judgement on the trade-off between duplication and
modularization of capabilities that are similar. There is a danger of
removing the thought process from the specification writing when cutting
and pasting takes place. However, superficially similar functionality
may not be all that similar. This tempts the author down the path of
numerous 'special cases' rather than a new section. The iterative
approach can help address this issue.

* When implementing a requirements management tool for the first time
avoid using features other than the basic capture and tracing of
requirements. There is a danger that features of the tool will be
explored at the expense of getting the requirements right.

* Go to the NASA Software Assurance Technology Center (SATC) web site at
http://satc.gsfc.nasa.gov/ and download a copy of the Automated
Requirements Measurement tool (ARM). ARM performs analysis of free form
language specifications. It does a good job and should be a part of your
specification review activities. It is by no means a replacement for
human inspection or review but it is a good first step.

* Use and maintain checklists for writing and reviewing specifications.

* Avoid the temptation to write your specification as a database or
spreadsheet of chopped-up components. It may seem like you are getting
a jump-start on some of the clerical work that is to follow, but the
reality is that you are removing the value of 'context'. You may also
turn the subsequent tasks of design and test planning into 'data
processing' tasks rather than 'cerebral' tasks. Besides, requirements
management tools put the requirements into an appropriate database
format anyway.

* Create and maintain a project or company Glossary. Where there is even
the slightest chance that jargon, a word, or an acronym may be used
outside of its normal meaning, it should be included. The glossary
should be in, or referenced by, the specification.
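
As an illustration of the keyword and numbering tips above, here is a
minimal sketch in Python of the kind of scan a requirements management
tool performs. The SRS- prefix, the sample text, and the sentence-level
splitting are assumptions for the example, not features of any
particular tool:

    import re

    PREFIX = "SRS"      # hypothetical project prefix; keep it short
    KEYWORD = "shall"   # single keyword marking mandatory requirements

    def extract_requirements(spec_text, start_id=1):
        """Assign a stable identifier to every sentence containing the
        mandatory keyword.  Identifiers are issued in order of appearance
        and should never be renumbered or recycled."""
        # Naive sentence split; a real tool parses document structure.
        sentences = re.split(r"(?<=[.!?])\s+", spec_text)
        requirements = []
        next_id = start_id
        for sentence in sentences:
            if re.search(rf"\b{KEYWORD}\b", sentence, re.IGNORECASE):
                requirements.append((f"{PREFIX}-{next_id:03d}",
                                     sentence.strip()))
                next_id += 1
        return requirements

    sample = ("The administrator shall be able to add users. "
              "The system will normally respond within two seconds. "
              "The system shall log every failed login attempt.")
    for ident, text in extract_requirements(sample):
        print(f"[{ident}] {text}")

Only the two 'shall' sentences are captured; the 'will' sentence is
treated as guidance, which is exactly the visibility the keyword buys.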

Hitting the Target

Writing and using good software requirements specifications is integral
to the success of a project. Specifications carry the initial vision for
a project through to its completion and are a vehicle for correcting the
vision along the way. Write and review them early and keep on doing it.
Good specifications are also key to managing customer expectation rather
than being a victim of it. Use them for realistic and honest two-way
communication. In this way your lumbering ad-hoc battleship developments
can turn into smart missiles that have a better chance of hitting the
target.

David L. Moore is a Sydney based independent software testing consultant
specializing in inception to completion process improvement. He can be
contacted by email at dlmgecko@tpg.com.au.

========================================================================

       NEWS FLASH! SEVEN SOFTWARE COMPANIES ADDED TO "WATCH LIST"

          Forwarded By: Yerby Craig MTSP 

New York - People for Ethical Treatment of Software (PETS) announced
today that seven more software companies have been added to the group's
"watch list" of companies that regularly practice software testing.

"There is no need for software to be mistreated in this way so that
companies like these can market new products," said Ken Grandola,
spokesperson for PETS.  "Alternative methods of testing these products
are available."

According to PETS, these companies force software to undergo lengthy and
arduous tests, often without rest, for hours or days at a time.  Employees
are assigned to "break" the software by any means necessary, and inside
sources report that they often joke about "torturing" the software.

"It's no joke," said Grandola. "Innocent programs, from the day they are
compiled, are cooped up in tiny rooms and "crashed" for hours on end.
They spend their whole lives on dirty, ill-maintained computers, and are
unceremoniously deleted when they're not needed anymore."

Grandola said the software is kept in unsanitary conditions and is
infested with bugs.

"We know that alternatives to this horror exist," he said, citing
industry giant Microsoft Corporation as a company that has become
successful without resorting to software testing.

========================================================================

                   QW2001 -- CALL FOR PARTICIPATION

     INTERNATIONAL INTERNET & SOFTWARE QUALITY WEEK 2001 (QW2001)

                  Conference Theme: The Internet Wave

                    San Francisco, California  USA

                         May 29 - June 1, 2001

The 21st century brings new technologies and new challenges.

We need answers to critical questions: What are the software quality
issues for 2001 and beyond? What about quality on the Internet? What
about embedded System Quality? What about E-Commerce Quality? Where do
the biggest Quality problems arise? What new Quality approaches and
techniques will be needed most?

QW2001 will bring into focus the future of software quality, with a
careful, honest look at the recent past and a future-oriented view of
the coming decades.

QW2001 OFFERS:

The QW2001 program consists of four days of mini-tutorials, panels,
technical papers and workshops that focus on software test automation
and new internet technology.  QW2001 provides the Software Testing and
QA / QC community with:

   o Real-World Experience from Leading Industry and Government
      Technologists.
   o Quality Assurance and Test involvement in the development process.
   o E-commerce Reliability / Assurance.
   o State-of-the-Art information on software & internet quality
      methods.
   o Vendor Technical Presentations.
   o Six parallel tracks with over 80 Presentations.

IMPORTANT DATES:

        Abstracts and Proposals Due:            15 December 2000
        Notification of Participation:          25 February 2001
        Camera Ready Materials Due:             31 March 2001
        Final Paper Length:                     10 - 20 pages
        Slides / View Graphs:                   Max 15 pages ( < 2 slides/page)

We are soliciting 45- and 90-minute presentations or participation in
panel discussions on any area of testing and automation, including:

        E-Commerce Reliability / Assurance    Object Oriented Testing
        Application of Formal Methods         Outsourcing
        Automated Inspection Methods          Process Improvement
        Software Reliability Studies          Productivity and Quality Issues
        Client / Server Testing               Real-Time Software
        CMM/PMM Process Assessment            Test Automation Technology and Experience
        Cost / Schedule Estimation            Website Monitoring
        Web Testing                           Test Data Generation and Techniques
        Real-World Experiences                Test Documentation Standards
        Defect Tracking / Monitoring          GUI Test Technology and Test Management
        Risk Management                       Test Planning Methods
        Integrated Test Environments          Test Policies and Standards
        Application Quality of Service (QoS)  New and Novel Test Methods
        Website Load Generation and Analysis  Website Quality Issues

SUBMISSION INFORMATION:

Abstracts should be 1 - 2 pages long, with enough detail to give members
of QW2001's International Advisory Board an understanding of the final
paper / presentation, including a rough outline of its contents.  Please
indicate if the most likely audience is technology, process,
application, or internet oriented.

   1. Prepare your Abstract as an ASCII file, a MS Word document, in
      PostScript, or in PDF format.  Email your submission to:
      .

   2. Please include with your submission:
       > Three keywords / phrases describing the paper.
       > A brief biographical sketch of each author, and a photo of each
         author.

   3. Fill out the Speaker Data Sheet giving some essential facts about
      you and about your proposed presentation at:
        <http://www.soft.com/QualWeek/QW2001/speaker.data.html>

   4. As a backup to Email you can also send material by postal mail to:
      Ms. Rita Bral, Software Research Institute, 1663 Mission Street,
      Suite 400, San Francisco, CA  94103  USA

SOFTWARE RESEARCH INSTITUTE, 1663 MISSION STREET, SAN FRANCISCO, CA 94103 USA

                      Phone: [+1] (415) 861-2800
                      FAX:   [+1] (415) 861-9801
                          Email: qw@sr-corp.com
             WebSite: http://www.soft.com/QualWeek/QW2001/

========================================================================

                      Stress Relief (Part 2 of 2)
                                   by
                            Richard Ellison

Reprinted by permission of Justin Kestelyn (CMP)

<http://www.intelligententerprise.com>

Examples of Failure

Recently I purchased some circus tickets online and received a
confirmation.  But when I arrived at the will call window with my
excited five-year-old daughter, I was turned away because the box office
had no record of the purchase. Infuriated, I resorted to buying tickets
from the nearest scalper. As it turned out, the scalped tickets
actually cost less than the "convenient" online ones would have. Old-
system's front end outpaced the load capability of the back end and lost
data in between. The system looked available, but actually wasn't
completing the order on the back end. This problem is a common one.

Many businesses also bring misfortunes on themselves by charging forward
with new sales systems without developing the support systems first. For
example, I recently did business with an online airline ticket bidding
company. This company has a large, automated sales engine and a tiny,
manual error handling process. I caught a snafu within two minutes of
initiating the transaction, but it took me three hours on the telephone
and two emails to get the company to refund my money minus a minor
handling fee. The revenue producing part of the site could marginally
handle the load, but the support part couldn't keep pace -- a
scalability problem in itself.

A casual scan of the trade media will reveal multiple headlines about
failed transactions, sites crashing under a load, denial of service
attacks, and disasters resulting in lost transaction data. Furthermore,
as companies increasingly outsource Web-site hosting to external
companies, a new kind of story is becoming increasingly common:
customers suing hosting vendors when the Web site doesn't perform up to
specifications.

The Solution: Load Test

Many categories of quality assurance exist in the software development
arena, but the type pertaining to our discussion here is called load (or
stress) testing. Load testing emulates the presence of masses of
customers on the system in order to find the performance "knee" and
maximum capacity before causing a system outage.  The performance knee
is the point where one additional unit of customer load costs more than
one unit of performance.  In economic terms, it's known as the law of
diminishing returns.  It marks the volume beyond which performance
degrades rapidly to the point of unsuitability.
Performance testing, another quality assurance specialty, measures
system response to a certain volume of customers and compares this
response time against the design performance specifications.
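
To make the "knee" concrete, here is a minimal sketch in Python that
flags the first load step at which response time grows faster than the
load itself. The (users, seconds) measurements are invented for
illustration; a real analysis would fit a curve to many runs rather
than compare successive points:

    # Invented measurements: (concurrent users, response time in seconds).
    measurements = [(10, 1.0), (50, 1.4), (100, 2.0), (110, 3.5), (120, 7.9)]

    def find_knee(points):
        """Return the first load level where response time grows
        proportionally faster than the load does."""
        for (u0, t0), (u1, t1) in zip(points, points[1:]):
            if t1 / t0 > u1 / u0:   # degrading faster than we are scaling
                return u1
        return None

    knee = find_knee(measurements)
    print(f"performance knee near {knee} users" if knee else "no knee found")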

Some companies also use these types of tests in development, capacity
planning, and disaster recovery planning.  Three of the six largest U.S.
banks (Bank of America, Citicorp, and First Union) apply these
techniques, and the dominant database vendor, Oracle, uses them in its
own product development process.  I've also found load-testing
programming hooks in Sun Microsystems' NetDynamics application server
platform.  The goal is to answer the bottom-line question accurately
and objectively: "How much computing power do we need to run our
business?"  More specifically, as your business grows, you need to know
the "economic order capacity" at which to buy more computing power
before your business performance dips below a certain standard.

The Method

Developing scalable applications requires careful attention to
performance at every stage of the software development lifecycle. It all
starts with defining an acceptable response time, perhaps by using
prototypes that gauge how users react to certain response times.

The practical application of this methodology is to first measure
system performance with the anticipated number of customers involved.
Let's say that 1,000 customers have access to our internal system. As a
business process solution, the system will probably be used heavily
during business hours; the expected concurrent usage is 10 percent of
the customer base, which is 100. (This percentage is a starting point
based on the pilot or beta customer usage. You should monitor this
number carefully as the customer base increases; a 1 percent change can
mean a large change in real numbers.) Thus, we'll design a test to run
10 test rounds in increments of 10: 10, 20, 30, and so on. We measure
the time each process takes, compare it against the design standard
(perhaps the standard states that a customer should be able to perform
a query for one item in less than five seconds and for 10 items in less
than 30 seconds), and tally the results for evaluation. Let's say the
application performed up to standard at the current maximum anticipated
load of 100.
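
A minimal sketch of such a ramped measurement, using only the Python
standard library and a hypothetical test endpoint, might look like the
following; commercial load tools do the same thing with virtual-user
scripting, coordinated ramp-up and far richer reporting:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET = "http://test-server.example/query?item=1"  # hypothetical endpoint
    STANDARD_SECONDS = 5.0   # design standard: one-item query in < 5 seconds

    def one_request(_):
        start = time.monotonic()
        urllib.request.urlopen(TARGET, timeout=30).read()
        return time.monotonic() - start

    def run_round(concurrent_users):
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            timings = list(pool.map(one_request, range(concurrent_users)))
        worst = max(timings)
        verdict = "PASS" if worst <= STANDARD_SECONDS else "FAIL"
        print(f"{concurrent_users:4d} users: "
              f"worst response {worst:5.2f}s {verdict}")

    # Ten rounds in increments of 10, up to the anticipated load of 100.
    for users in range(10, 101, 10):
        run_round(users)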

The next step is to perform a load test. Based on the performance
measurement results, now the goal is to find the performance knee
bottlenecks, maximum capacity, and point of failure. Thus, we'll ramp up
the number of customers to 90, 100, 110, 120, 130, 140, and 150. Let's
say that the system performs up to 100 just fine, but at 110, the
performance falls below the standard. The server monitoring also shows
that application server A is consumed with hard-disk paging and high
processor usage. This suggests that application server A needs more
memory. We continue with the next round of 120, and the
application starts logging errors in 50 of the 120 virtual customers.
This particular project has defined these errors as points of failure
because processing errors are unacceptable. Thus, we cease the testing,
and the systems manager adds more memory to application server A. Now
that we've identified and fixed one bottleneck, we can run the test
again.  As you can see, this iterative process is designed to tune the
systems, software, and database to the point of maximum efficiency.
Through tuning and performance enhancements, this system can handle 140
customers. If we want to go to 280 or 420 customers, we need to add two
or three times the computing capacity to accommodate this business
requirement. The corporation in our example needs three months to order
and install a new system, the sales staff is subscribing 100 new
customers per month, and the current customer base is 1,000. Therefore,
the computer equipment order process has to start within one month in
order to up the capacity above 140 by the fourth month, when 1,400 total
customers will translate into a load of 140 concurrent users. For the
sake of brevity, this simple example focuses on only a portion of the
capacity calculations involved. It does not address many other aspects
such as concurrent usage trends, disaster recovery, and the amount of
business risk associated with this application.
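
The capacity-planning arithmetic in this example reduces to a few
lines; the numbers below are the article's own (10 percent concurrency,
100 new customers per month, a three-month ordering lead time, and a
tuned capacity of 140 concurrent users):

    current_customers = 1000   # today's customer base
    growth_per_month  = 100    # new customers subscribed each month
    concurrency       = 0.10   # concurrent fraction of the customer base
    tuned_capacity    = 140    # concurrent users the tuned system handles
    order_lead_months = 3      # time to order and install new hardware

    month = 0
    while ((current_customers + growth_per_month * month) * concurrency
           < tuned_capacity):
        month += 1
    # 'month' is the first month in which load reaches capacity (month 4).
    print(f"load reaches capacity in month {month}; "
          f"start the hardware order by month {month - order_lead_months}")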

Other Applications

Load testing has several other uses that testers have serendipitously
found while preparing load test scripts. Running a load test magnifies
the processing, making it easier and faster to find some problems.

One such problem is a memory leak, a software programming error in which
every usage increases the allocated memory without releasing that memory
resource when the program is finished with it. The amount of allocated
memory increases until it exceeds the physical memory; at this point,
performance degrades dramatically and the system ultimately crashes.
During a "normal" testing process in which only a few people perform
functional testing, it can take over a day to recreate a memory leak.
Furthermore, during an iterative development process, the machine may be
restarted before it crashes from a memory leak, thereby concealing the
problem until it reaches a crisis level in production. In contrast, with
a load test scenario, you can recreate a memory leak problem within an
hour.
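
As a toy illustration in Python, the deliberately planted leak below
stands in for any per-request allocation that is never released. The
handler and payload are invented; the point is that a tight load loop
compresses days of traffic into seconds, making the growth obvious:

    import tracemalloc

    _cache = []   # deliberately leaked: grows on every call, never trimmed

    def handle_request(payload):
        # Hypothetical handler with a planted leak: each call keeps a
        # copy of its payload alive forever.
        _cache.append(payload * 100)

    tracemalloc.start()
    for _ in range(10_000):   # a load test compresses days of traffic
        handle_request("x")
    current, peak = tracemalloc.get_traced_memory()
    print(f"after 10,000 requests: {current / 1e6:.1f} MB still allocated")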

Load testing is also helpful in determining the optimal configuration of
a load balancing solution.  One person at the input end can "present"
numerous customers to the system while the systems administrator tweaks
the settings on a load-balancing tool.

Because you can configure a tool to run as fast as the application can
respond to input, it can also find timing problems among the server-side
components.  These timing issues often exist in a production load but
usually remain hidden in a conventional functional quality assurance
process.  These problems can crash a Web site and keep it crashing after
each reboot until the issues are solved.

Stay Vigilant

Performance is becoming a more important issue as systems become more
complex. Your company can gain a competitive edge and acquire more
customers more rapidly through a top-performing e-commerce solution.
Conversely, without a load testing strategy in place, reputations can be
ruined and revenues lost at the speed of e-commerce.

Richard Ellison (richard@leesystems.com) is currently an independent
consultant on a large development project involving Internet-based
business banking applications. A performance developer, he writes load
test programs and tests the applications.

========================================================================

                  Testing Point of Sale (POS) Software

                                   by

                               David Fern
                            Software Tester
                          Micros, Columbia MD
                           (dfern@micros.com)

The face of software testing and quality assurance is ever changing as a
direct result of the dynamic nature of the software industry today. The
POS (Point of Sale) market for the hospitality industry is no exception.
Long gone are the days of ad hoc testing to release, once a year, a new
version with enough added features to make it appear a new product
rather than an upgrade. This is a business of money: to remain
profitable, the company must appear to be continually updating and
modifying its software, making new releases every few months with
smaller, more subtle changes. These faster product cycles are forcing
the testing to be more systematic and precise.

POS software for the hospitality industry has aspects embracing all
areas of technology, from interface connectivity with hundreds of
devices to full database functionality and 24-hour operation. The
extremely sophisticated software used today is required to print custom
receipts, handle credit card and room charges, perform payroll
functions, create reports and interface with a wide variety of
peripheral hardware.

The task of the tester is to ensure that the product's limitations are
known, while the company's idealistic goal is to create a zero-defect
product. It is common knowledge that a zero-defect product is
unattainable, so the more realistic goal becomes to produce a product
with minimal bugs as seen by the actual user. This number of
"acceptable bugs" must be agreed in the planning stages of the project
between testing, engineering and management as their goal.

            What Is It That Makes Testing POS So Different?

The testing of a POS system is quite unlike that of other types of
software, because the hospitality industry encompasses such a wide range
of businesses. This industry includes everything from small sub shops
with one POS device and a few employees to huge casinos, hotels,
airports and theme parks. Many of the large installations can have
hundreds of POS devices and thousands of employees and support all types
of businesses: retail shops selling T-shirts, newsstands, spas,
restaurants, pro shops and tee-time bookings.  The commonality in most
instances is that these facilities generally function around the clock,
in less than a tidy office environment, with a high level of employee
turnover, and all require precise accounting down to the penny for all
monies. It is of the utmost importance that the tester keeps in mind the
wide audience that will use this new software and what their specific
business needs are.  As the product moves through each stage of testing
the tester can only speculate, using their own knowledge of the
industry, about how a business may want to use the software.

                           Tools of the Trade

Though the hospitality industry and its applications are unique, the
tester's tools remain the same as those required by testers of any other
type of software, with the exception of the Point of Sale devices and
interfaces, which are industry specific.

The tools of the trade for the tester consist of a test case repository,
a defect tracking tool, access to all equipment that is defined in the
product requirements, the product requirements themselves, and lastly a
good understanding of what business problem the software will resolve.

A test case repository is essential in that all tests can be
systematically designed, executed and reused as needed. This repository
should be used as an archive continually growing over time so that time
and resources are not spent reinventing test cases but on refining their
steps and focusing their scope.

A defect tracking tool is required to store and categorize bugs and
other issues that are uncovered during testing. These bugs and issues
are stored along with any related files and logs for easy access so that
the engineers can reproduce and correct the problems. The quantity of
bugs and issues relates directly to the health of the project and an
organized tracking mechanism can assist in prioritizing bug fix
schedules. This makes a defect tracking tool the number one tool used by
the testers, engineers and managers as well.

The third important tool for the tester is the proper hardware and
software.  The tester must be able to test on equipment that is defined
in the product requirements, which should lay out in great detail what
peripherals and interfaces the new software is designed to work with.

A very important tool of the tester is the product specification. This
document should spell out all functions, features and limitations of the
system. This is the Bible, giving the tester the hows, the whats and the
limits to which to test the software.

Finally the most valuable tool of the tester is knowledge of what the
software's purpose is and who will use it. The tester will have more
insight into the testing of the software if they have had experience
using a similar product or have worked in the hospitality industry and
know its quirks.

                         The Testing Functions

The processes involved in testing POS software are very much like those
of testing other software. The differences, as you will see, are the
many interfaces, the wide variety of users, and their diverse business
needs and requirements. These differences will become very evident as we
follow through the process outlined below.

Once the engineers have completed a substantial portion of the code it
is handed off to the testers. The testers write test plans and cases
encompassing each of the functions for testing as set out in the
sections that follow. The bugs that are encountered are carefully
recorded with available logs and other pertinent information so that the
developers can reproduce them. The developers will continue to create
updated software with the bug fixes throughout the process constantly
refining the application and its functionality.

The testing begins with unit testing of each module and process. It is
important to start with the lowest common denominator to ensure that the
module functions on its own, covering GUI operability and boundary and
equivalence testing, before moving on to the next process. The GUI
testing becomes extremely critical as the POS software touches so many
different functions with different users. Managers want to quickly and
easily run reports, servers want to ring up food and kitchen personnel
just want to clock in and out. Most of the application users need the
software as a tool, but few have the time or desire to really learn the
entire application and all of its functionality. The tester must keep
these users in mind to ensure that the software is intuitive and easy to
learn and use.

The system testing involves pulling all of the unit-tested parts
together to ensure their functionality as a whole. This is the
opportunity to test that every unit works in cooperation with the
software as a whole. Most facilities using the software will interface
with different types of peripherals. A few of the more common
peripherals include printers, credit card drivers, scales, Property
Management Systems, Video Display Units, Video Security Systems, coin
changers, customer displays, scanners, bar code printers and a wide
variety of third-party peripherals that do everything from dispensing
liquor to connecting to video equipment that records specified events.
The tester's job is to ensure that these parts can be easily pulled
together to complete the full functioning system.

Performance Testing is the next testing phase. This is where all
functions are timed to ensure that all of the processes associated with
the program complete in a reasonable, predetermined amount of time. Many
tests are created to benchmark and evaluate the system's performance as
enhancements are made. This also includes some stress testing, setting
up looping programs to simulate a real system in action, like a busy
restaurant continually ringing up orders. Many large facilities will
have multiple revenue centers, and, as we all know, everyone wants to
eat at the same time, so restaurants will have peak times when the
busiest traffic will occur. The tester needs to simulate more traffic
than the business will generate, and over a longer period of time. This
testing should include exercising all functionality of the system
simultaneously, including the running of reports, employees clocking in
and out, and the usual entering of transactions, all to ensure that the
software performs properly and that its limitations are known.
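
A minimal sketch of that kind of looping simulation in Python, with a
hypothetical post_transaction() stub standing in for the real POS
interface, might drive several revenue centers at once through a peak
window and count what completes:

    import random
    import threading
    import time

    completed = 0
    lock = threading.Lock()

    def post_transaction(center_id, check_total):
        # Hypothetical stub: a real harness would drive the POS API or
        # device protocol here instead of sleeping.
        time.sleep(random.uniform(0.01, 0.05))

    def revenue_center(center_id, duration_s):
        global completed
        end = time.monotonic() + duration_s
        while time.monotonic() < end:   # a lunch-rush burst of orders
            post_transaction(center_id, round(random.uniform(5, 80), 2))
            with lock:
                completed += 1

    threads = [threading.Thread(target=revenue_center, args=(i, 10))
               for i in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{completed} transactions completed "
          f"across 8 revenue centers in 10s")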

Configuration Testing is used to ensure that every device that is
incorporated into the system is compatible with the other features.
Every possible combination cannot be tested, but a large sample must be
tested because of the wide diversity in product usage. Once an
establishment has a central software system, all revenue centers must be
connected to it. Large sites like casinos will need to interface
five-star restaurants, room charges, golf course greens fees, shops, gas
pumps and much more to this one system. All these interfaces are very
different, but all must be able to work together efficiently. The tester
must anticipate all combinations and possibilities to ensure that the
software can function properly in any configuration established in the
product requirements.

Recovery testing is extremely important because software and hardware
failures do occur, and it becomes imperative that the amount of data and
time lost during recovery is minimized. The hospitality industry is a
24-hour business with its hotels and casinos, but just as important are
the stadiums and amusement parks that depend heavily on the software and
cannot shut down due to a computer failure.  The tester must test
failure scenarios and plan for the worst by testing redundancy and
backup plans.

The concept of Guerilla Raids is a relatively new one; its purpose is to
allot a chunk of time to go in informally and just try whatever the
tester may have concerns about. This often brings out new scenarios that
could not have been planned, and unusual bugs. The tester can have a
little fun and break from the usual systematic testing to just think
like a server or manager and push the software to its limits.

The Security Testing of POS is very critical because one of the primary
functions of the system is to account for monies. Security is imperative
to prevent theft by unauthorized persons. Many people, from cashiers,
servers and managers to auditors, will be monitoring and moving the
monies from the customer to the bank. The tester must always be on the lookout
for any potential holes in the system that would constitute a lack of
security. The security not only concerns the direct contact with the
monies but extends to payroll and time clock operations as well as
product inventories.

Business Application Testing, arguably the most critical portion of the
testing process, ensures that all calculations are completed properly.
The POS industry started with cash registers to account for the monies,
and this remains the core functionality of the system. Many of the
software users rely heavily on the validity of the reports created by
the system for planning, accounting, taxes and payroll. Many restaurants
and hotels are now using the software programs to automatically order
new stock, pay employees and make room and restaurant reservations, so
their accuracy becomes imperative. The pertinence of the reporting also
comes into play: it is as important to give the users reports with the
information that they need as it is to give them the correct
information. The tester must ensure that all required functions are
present, are calculated properly, and are displayed in a suitable
format.

Regression testing is performed at logical break points during the
testing process. The main purpose of this type of testing is to ensure
that as bugs have been fixed no new ones have been introduced. This type
of testing is not unique to POS testing; all testers should be
regression testing throughout the entire testing cycle. Many of the
stored tests can be recycled during this process to save time. It is
important to keep in mind that the code tested at the beginning of the
cycle can be extremely different by the end of the cycle due to fixes
and changes along the way that may introduce new bugs.

The aforementioned functions are repeated, recycling and refining the
tests, until the testers are confident that the "acceptable bug count"
is reached and the software is ready to be installed in a Beta site. The
Beta site should be one that represents the average user of the
software. This site will be continuously monitored and scrutinized to
ensure that the software is functioning properly; often many unforeseen
situations that need immediate action are uncovered and repaired. The
installation of a new system in a large facility is a big task and there
are always changes as the installation proceeds. These large
installations bring about special and unforeseen situations that could
not have been planned for by the tester but need to be resolved quickly.
Once the Beta site is functioning properly the software is ready for
release. This is by no means the end of the tester's job. As the
software is used, sites encounter questions and problems that the help
desk simply is not able to resolve; these concerns are escalated to the
test group for resolution, because the testers are a great resource and
probably know the product better than anyone else.

The testers are then ready to do it all again with the next version,
pulling out old test cases and bringing more experience under their
belts to better evaluate the next version.

                               Conclusion

The hospitality point of sale industry is a very profitable area for
software and hardware producers. The software testers in this niche
market face many of the same hurdles that all testers face and many
others that are very industry specific. The POS software tester must be
a jack of all trades because the scope of the market is enormous,
continually growing and changing. The hospitality industry, as mentioned
previously, encompasses many types and sizes of businesses, a wide
variety of users with very diverse needs and skills, and a wide variety
of interfaces with peripherals.  The tester's purpose is to push the
software and hardware to their limits by simulating usage of the
products by each category of business, ensuring that each type of user
in each category of the industry has their business needs met, and that
every peripheral described in the product specification functions
properly.

The testing must be completed within both time and resource constraints.
This forces the testers to understand the industry well enough to write
and execute test cases that touch on each type of business and each type
of user, and to configure every peripheral that will be used with the
product. The only way for the testers to be successful is to recycle and
continually build on the test plans. The theories of software testing
are generic but it is up to the individual tester to understand how the
business will want to use the software and ensure that they can.

The software testing process is always in a state of flux as refinements
in processes and procedures are adopted to meet the industry demands. We
are always open to any feedback that will help us to more finely tune
our system or open our eyes to new opportunities.

========================================================================

        ************************************************
        *              CALL FOR PAPERS                 *
        *                                              *
        *     The First Asia-Pacific Conference on     *
        *         Web Intelligence (WI'2001)           *
        *                                              *
        *     Maebashi TERRSA, Maebashi City, Japan    *
        *             October 23-26, 2001              *
        ************************************************

              WI'2001 will be jointly held with
             The Second Asia-Pacific Conference on
             Intelligent Agent Technology (IAT'2001)

The 21st century is the age of the Internet and World Wide Web. The Web
revolutionizes the way we gather, process, and use information. At the
same time, it also redefines the meanings and processes of business,
commerce, marketing, finance, publishing, education, research,
development, as well as other aspects of our daily life.  Although
individual Web-based information systems are constantly being deployed,
advanced issues and techniques for developing and for benefiting from
Web intelligence still remain to be systematically studied.

Broadly speaking, Web Intelligence (WI) exploits AI and advanced
information technology on the Web and Internet.  It is a key and urgent
research field of IT for business intelligence.

The Asia-Pacific Conference on Web Intelligence (WI) is an international
forum for researchers and practitioners

  (1) to present the state-of-the-art in the development of Web
      intelligence;
  (2) to examine performance characteristics of various approaches in
      Web-based intelligent information technology;
  (3) to cross-fertilize ideas on the development of Web-based
      intelligent information systems among different domains.

By idea-sharing and discussions on the underlying foundations and the
enabling technologies of Web intelligence, WI'2001 is expected to
stimulate the future development of new models, new methodologies, and
new tools for building a variety of embodiments of Web-based intelligent
information systems.

The Asia-Pacific Conference on Web Intelligence (WI) is a high-quality,
high-impact biennial conference series.  It will be jointly held with
the Asia-Pacific Conference on Intelligent Agent Technology (IAT).

Contact:

                Prof. Yiyu Yao (WI'2001)
                Department of Computer Science
                University of Regina
                Regina, Saskatchewan
                Canada S4S 0A2

                E-mail: yyao@cs.uregina.ca
                Phone: (306) 585-5213/5226
                Fax: (306) 585-4745

========================================================================
      ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------
========================================================================

QTN is E-mailed around the middle of each month to over 9000 subscribers
worldwide.  To have your event listed in an upcoming issue E-mail a
complete description and full details of your Call for Papers or Call
for Participation to .

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the QTN issue date.  For example,
  submission deadlines for "Calls for Papers" in the January issue of
  QTN On-Line should be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items appearing in QTN represent the opinions
of their authors or submitters; QTN disclaims any responsibility for
their content.

TRADEMARKS:  eValid, STW, TestWorks, CAPBAK, SMARTS, EXDIFF,
STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR logo are
trademarks or registered trademarks of Software Research, Inc. All other
systems are either trademarks or registered trademarks of their
respective companies.

========================================================================
          -------->>> QTN SUBSCRIPTION INFORMATION <<<--------
========================================================================

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to CHANGE an
address (an UNSUBSCRIBE and a SUBSCRIBE combined) please use the
convenient Subscribe/Unsubscribe facility at:

         <http://www.soft.com/News/QTN-Online/subscribe.html>.

As a backup you may send Email direct to  as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:
           subscribe 

   TO UNSUBSCRIBE: Include this phrase in the body of your message:
           unsubscribe 

Please, when using either method to subscribe or unsubscribe, type the
 exactly and completely.  Requests to unsubscribe that do
not match an email address on the subscriber list are ignored.

		QUALITY TECHNIQUES NEWSLETTER
		Software Research, Inc.
		1663 Mission Street, Suite 400
		San Francisco, CA  94103  USA

		Phone:     +1 (415) 861-2800
		Toll Free: +1 (800) 942-SOFT (USA Only)
		Fax:       +1 (415) 861-9801
		Email:     qtn@sr-corp.com
		Web:       <http://www.soft.com/News/QTN-Online>

                               ## End ##