                     sss ssss       rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +===================================================+
         +=======    Quality Techniques Newsletter    =======+
         +=======              May 2001               =======+
         +===================================================+

QUALITY TECHNIQUES NEWSLETTER (QTN) is E-mailed monthly to subscribers
worldwide to support the Software Research, Inc. (SR), TestWorks,
QualityLabs, and eValid user communities and other interested parties,
and to provide information of general use to the worldwide internet and
software quality and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the entire
document/file is kept intact and this complete copyright notice appears
with it in all copies.  Information on how to subscribe or unsubscribe
is at the end of this issue.  (c) Copyright 2003 by Software Research,
Inc.

========================================================================

                         Contents of This Issue

   o  Conference Description: QW2001

   o  Software Is Different, by Boris Beizer (Part 2 of 3)

   o  Comparative Report on eValid Available

   o  Brooks' "Mythical Man-Month": A Book Report and Critique, by Don
      O'Neill

   o  QWE2001 Call for Participation (November 2001)

   o  SERG Reports Available

   o  QTN Article Submittal, Subscription Information

========================================================================

           Conference Description: 14th Annual International
               Internet & Software Quality Week (QW2001)

                       29 May 2001 - 1 June 2001

                     San Francisco, California  USA

                  <http://www.qualityweek.com/QW2001/>

                  CONFERENCE THEME: The Internet Wave

Conference Chairman's Comments:  If you read the newspapers or watch the
news these days you might begin to think that "internet" has somehow
become an offensive word.  It seems that almost any mention of Internet
and "Dot Coms" carries a derisive, negative tone and strikes fear into
investors who, upon hearing the words, immediately jump to the phone and
yell to their broker, "Sell!"

The QW2001 conference this year has the theme "The Internet Wave" -- a
timely topic indeed.  If you wanted to, you could easily joke about it:

        The Internet Wave arrived at the beach,
                and I didn't even get my feet wet!

        I tried to ride the Internet Wave, but
                reality knocked me off my board.

But the recent events in the industry, clearly without precedent, are, I
believe, less a crisis than they are a real opportunity.  It seems I'm
in good company.

Andy Grove, Chairman of Intel, interviewed by John Heilemann in this
month's Wired Magazine (May 2001), would seem to agree.  The headline
may say it all, "Andy Grove has some advice: Believe In The Internet
More Than Ever."  But Grove goes on to point out that "...internet
penetration in the US is substantially ahead of the rest of the world...
[and in the next five years]...the rest of the world is going to
replicate what's happened here...  companies that are behind are going
to get where we are today and then start changing their business
processes."

Where does "internet and software quality" fit into this picture?  Does
general quality go by the boards just because there is an economic
downturn?  Hardly, it seems to me.

Look at the situation in power generation in California -- another
current crisis that's also much in the news.  For whatever the reasons
-- and they were good ones I am sure -- the fact is that no new power
generation capabilities have been added in California for 12 years.  So,
generating capacity has stayed the same.

The peak load has been growing, slowly of course, but it has grown.
Right now, if the power demand in California goes over ~45,000
megawatts, somebody has to turn some lights off.  Or have the lights
turned off for them!  Interestingly, the alternative of "lower quality
power," i.e., delivery of lower voltage (a brownout), is just not
acceptable.

It seems to me that investments in internet and software quality -- in
the hardware and software systems and the means to assure their
reliability -- are like investments in power generation capability.  If
you do the right things now, you don't have to worry about the bad
things in the future.

I hope that Quality Week 2001 can be thought of as a beacon pointing to
ways and means to assure internet and software quality now and far into
the future.

-Edward Miller, Chairman

           o       o       o       o       o       o       o

QW2001 KEYNOTE TALKS from industry experts include presentations by:

  > Mr. Thomas Drake (Integrated Computer Concepts, Inc ICCI) "Riding
    The Wave -- The Future For Software Quality"
  > Dr. Dalibor Vrsalovic (Intel Corporation) "Internet Infrastructure:
    The Shape Of Things To Come"
  > Mr. Dave Lilly (SiteROCK) "Internet Quality of Service (QoS): The
    State Of The Practice"
  > Dr. Linda Rosenberg (GSFC NASA) "Independent Verification And
    Validation Implementation At NASA"
  > Ms. Lisa Crispin (iFactor-e) "The Need For Speed: Automating
    Functional Testing In An eXtreme Programming Environment (QWE2000
    Best Presentation)"
  > Mr. Ed Kit (SDT Corporation) "Test Automation -- State of the
    Practice"
  > Mr. Hans Buwalda (CMG) "The Three "Holy Grails" of Test Development
    (...adventures of a mortal tester...)"

QW2001 PARALLEL PRESENTATION TRACKS with over 60 presentations:

  > Internet: Special focus on the critical quality and performance
    issues that are beginning to dominate the software quality field.
  > Technology: New software quality technology offerings, with emphasis
    on Java and WebSite issues.
  > Management: Software process and management issues, with special
    emphasis on WebSite production, performance, and quality.
  > Applications: How-to presentations that help attendees learn useful
    take-home skills.
  > QuickStart: Special get-started seminars, taught by world experts,
    to help you get the most out of QW2001.
  > Vendor Technical Track: Selected technical presentations from
    leading vendors.

QW2001 SPECIAL EVENTS:

  > Birds-Of-A-Feather Sessions (BOFS) [organized for QW2001 by Advisory
    Board Member Mark Wiley (nCUBE)].
  > Special reserved sections for QW2001 attendees to see the SF Giants
    vs. the Arizona Diamondbacks on Wednesday evening, 30 May 2001 in
    San Francisco's world-famous downtown Pacific Bell Park.
  > Nick Borelli (Microsoft) "Ask The Experts" (Panel Session), a
    session supported by an interactive WebSite to collect the most-
    asked questions about software quality.

Get complete QW2001 information by Email from  or go to:

                  <http://www.qualityweek.com/QW2001/>

========================================================================

                  Software Is Different (Part 2 of 3)
                                   by
                              Boris Beizer

      Note:  This article is taken from a collection of Dr.  Boris
      Beizer's essays "Software Quality Reflections" and is
      reprinted with permission of the author.  We plan to include
      additional items from this collection in future months.  You
      can contact Dr.  Beizer at .

2.5.  Complexity

2.5.1.  General

The issues of software quality are all about complexity and its
management.  Software is complicated.  Software developers don't make it
so: the users' legitimate demands and expectations do.  Let's compare
software with physical systems.  Which is more complex:

   1. A word processor or a supertanker?

   2. A database management package or a 100-floor building?

   3. All the monuments, pyramids, and tombs of Egypt combined or an
      operating system?

 I can think of only two things more complicated than software: aircraft
 (even excluding their software) and a legal system.  650 megabytes of
 software can be stuffed into a little 11.5 cm CD-ROM and that's a
 decent part of a law library.  We can measure the complexity of an
 engineered product in one of two reasonable ways: total engineering
 labor content or total mass of documentation produced.  The supertanker
 probably represents less than 10 work years of engineering labor
 content -- never mind how many work years it takes to build it
 physically.  The word processor is about 200 work years.  A 100-story
 building is about 30 work years.  The database package is also about
 200 work years.  Each of those monuments probably took at most a year
 to design by a master builder and an assistant or two.  So we might
 have a few hundred work years of engineering in the combined Egyptian
 buildings.  Operating system labor content (including testing) is
 measured in thousands of work years.  Similarly, today it is easy to
 measure documentation size for general engineering products and for
 software.  Software documentation is measured in gigabytes; most other
 engineered products in megabytes.

 Comparing software to a legal code is more appropriate than comparing
 it to physical products.  Humans have had only stumbling success in
 crafting their legal codes and have been at it for five thousand years
 that we know of.  Overall, I think that what software engineers have
 accomplished in 1/100 of that time is remarkable.

 2.5.2.  Proportional Complexity

 If you add an increment of functionality to most physical products, it
 is done at a proportional increment in complexity.  Think of a car or
 appliance.  More features, higher price.  Not just because the vendor
 can charge for it, but because there is an underlying proportional
 cost, and therefore a proportional complexity increase.  Complexity is
 generally additive in
 the physical world.  In software, by contrast, complexity tends to
 follow a product law.  That is, if you have two physical components A
 and B, with complexity CA and CB, respectively, the complexity of the
 combination (A+B) is proportional to CA + CB; but for software the
 resulting complexity is likelier to be closer to CA*CB or worse!  How
 often have you heard "We only added a small feature and the whole thing
 fell apart?"
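
 As a toy illustration of this product law (my own sketch, not Beizer's,
 under the crude assumption that "complexity" is a count of distinct
 states): placing two physical parts side by side roughly adds their
 state counts, while coupling two software components can multiply them,
 because every state of A can meet every state of B.

     # Toy model (hypothetical numbers): complexity = count of states.
     ca, cb = 100, 100       # component complexities CA and CB

     physical = ca + cb      # loosely coupled parts:    CA + CB
     software = ca * cb      # richly coupled software:  CA * CB

     print(physical)         # 200
     print(software)         # 10000 -- why "a small feature" can make
                             # the whole thing fall apart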

 Here, if there is blame to spread, I must put it onto software
 developers, managers, and especially marketeers, who despite years of
 sad experience continue to ignore this fundamental fact of software
 life.  There is a constant, but ever-unfulfilled, expectation that it
 is always possible to add another bell, another whistle, another
 feature, without jeopardizing the integrity of the whole.  We can't do
 that for buildings, even though a building probably follows
 proportional complexity growth.  How many floors can you add to an
 existing building
 before it exceeds its safety factor and collapses? How much more
 traffic can you allow a bridge to take before it collapses? It is
 difficult enough to add incremental complexity to physical products and
 we realize that ultimately, safety margins will be exceeded.  The same
 applies to software, but because the complexity impact tends to a
 product or exponential law, the collapse seems unpredictable,
 catastrophic, and "unjust."

 2.5.3.  Complexity/Functionality Inversion

 In most physical products, more functionality means more complexity.
 Add features to a product and there is more for the user to master.
 There's a direct relation between a product's physical complexity and
 the operational complexity that the product's users see.  Software, by
 contrast, usually has an inverse relation between the operational
 complexity the user sees and the internal complexity of the product.
 That's not unique to software: it is an aspect of most complex
 products.  How easy it is to dial an international telephone call:
 think of the trillions of electronic components distributed throughout
 the world that it takes to achieve that operational simplicity.  The
 inversion is not unique to software, but in software, unlike physical
 products, it is the rule.

 Users of software rightfully demand operational simplicity.  Menu-
 driven software based on a Windows motif is easier to use than
 command-driven software: so they want windows.  I'd rather move a
 document by grabbing it with the mouse and dropping it into another
 directory than type "MOVE document_name TO target_directory_name." I
 remember the bad old days when to get a program to run you had to fill
 out two dozen job control cards in one of the worst languages ever
 devised, JCL.  Double-clicking an icon is much easier.  But what is the
 cost of this convenience?

 My latest word processor catches me when I type "hte" instead of "the,"
 or catches my error when I type "sOftware" instead of "software" -- and
 don't think that getting these deliberate errors to stick was easy! A
 new graphics package learned the pattern of my editing after a few
 figures and automatically highlighted the right object on the next
 slide, saving me a dozen keystrokes and mouse motions.  And the latest
 voice-writer software eliminates the keyboard for the fumble-fingered
 typists of the world.  All great stuff! But what are the consequences?
 Internal complexity!

 The increased internal complexity can take several forms:

    1. Increased Code Size.  This is the typical form it takes.

    2. Algorithmic and Intellectual Complexity.  The code mass can
       actually decrease, but this is deceptive because code complexity
       has been traded for intellectual complexity (see the sketch
       following this list).  The resulting software is harder to
       understand, harder to test, and, line-for-line, likelier to have
       a buggy implementation.  Furthermore, not only must the
       implementation of the algorithm be verified, but the algorithm
       itself must first be proven -- adding yet more opportunities for
       bugs.

    3. Architectural Complexity.  The best example here is object-
       oriented software.  The individual components can be very simple,
       but the over-all structure, because of such things as
       inheritance, dynamic binding, and very rich interactions, is very
       complex.

 In short, operational convenience in software use is usually bought at
 the cost of great increases in internal complexity.
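
 As a toy illustration of form 2 above (my example in Python, not from
 the essay): Zeller's congruence computes the day of the week in a
 single line of arithmetic.  It is far smaller than a table-driven
 calendar, but much harder to verify by inspection -- code mass traded
 for intellectual complexity.

     def day_of_week(year, month, day):
         """Zeller's congruence for the Gregorian calendar."""
         if month < 3:            # January and February count as
             month += 12          # months 13 and 14 of the prior year
             year -= 1
         k, j = year % 100, year // 100
         h = (day + (13 * (month + 1)) // 5 + k
              + k // 4 + j // 4 + 5 * j) % 7
         return ["Sat", "Sun", "Mon", "Tue", "Wed", "Thu", "Fri"][h]

     print(day_of_week(2001, 6, 1))   # Fri -- the last day of QW2001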

 2.5.4.  Safety Limits

 It is incredible to me that the notion of safety limits and the
 uncompromised ethical principle of traditional engineering that such
 safety limits are never to be exceeded are discarded when it comes to
 software.  The traditional engineer, when faced with uncertainty over
 safety limits, has always opted to be conservative.  It took decades to
 gradually reduce the safety limits for iron bridges when metal began to
 replace stone in the Eighteenth century.  It is only through experience
 that safety margins are reduced.  Yet, when it comes to software,
 perhaps because software has no physical reality, not only are software
 developers urged to throw traditional engineering caution aside and
 boldly go where none have gone before, but even the very notion that
 there might be (as yet unknown) safety limits is discarded.  And sadly,
 all too often, it is an engineering executive trained in traditional
 engineering who urges that safety limits be discarded.

 What are the safety limits for software? I don't know -- nobody knows.
 Nevertheless, we agree that it has something to do with our ability to
 maintain intellectual control.  That, in turn, is intimately tied into
 complexity and how it grows.  One of these days (I hope) we will have
 "Nakamura's Law." This (yet to be discovered law by an as yet unborn
 author) will tell us how to measure complexity and predict reasonable
 safety margins for software products.  But we don't yet have Nakamura's
 Law.  So what should we do, as responsible engineers, when faced with a
 situation in which we don't know how to predict safety margins? Do what
 our traditional engineering forebears did two centuries ago when they
 didn't know how to calculate safety limits for iron bridges -- be very
 conservative.

 In sailing, we say that the time to reduce your sails is when the
 thought first occurs to you -- because if you don't shorten your sails
 then, by the time the wind is really strong and you must reduce your
 sails, the very strength of the wind will make it impossible to do so.
 The time at which you have lost intellectual control is the time at
 which it occurs to you that you might be in danger of doing so.  If you
 think that it might be too complicated, it is.  "We can't do that," the
 marketeer says.  "We'll lose too much market share to our competitor if
 we don't add this bell and that whistle!" Back to iron bridges.  What
 will your long-term market share be if half your bridges collapse?

 2.6.  Composition and Decomposition

 2.6.1.  Composition Principle

 The composition principle of engineering says that if you know the
 characteristics of a component, then you can, by applying appropriate
 rules, calculate the equivalent characteristics of a system constructed
 from those components.  This allows us to deduce the strength of a
 bridge from the strength of its girders and its design, without
 building and testing that bridge.  Similarly, the behavior of a circuit
 can be inferred from the behavior of resistors, transistors, etc., and
 the circuit design.  Nowhere is the principle of composition more
 important
 than for reliability.  There is a well-proven hardware reliability
 theory that allows us to predict the reliability of a system from the
 reliability of its components without actually testing the system: for
 most complex hardware systems, there would not be enough time in the
 expected lifetime of the system to do the testing needed to
 experimentally confirm the reliability.  Typically, expected test time
 to confirm a reliability value is an order of magnitude greater than
 that value.  Thus, to confirm a mean time to failure for an aircraft
 autopilot of 10,000 years, we need 100,000 years of testing (one
 autopilot for 100,000 years or 100,000 autopilots for a year).
 However, because hardware reliability theory is composable, we don't
 have to do this.  We can get a trustworthy prediction by experimentally
 testing components and by using analytical models to predict the
 reliability of the autopilot without running 100,000 years of tests.
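
 For illustration only (the model and numbers are mine, not Beizer's):
 in a series system, where every component must work, component
 reliabilities simply multiply; and under the usual exponential failure
 model, demonstrating an MTTF with zero observed failures takes a total
 test time of -MTTF * ln(1 - confidence).  The essay's factor-of-ten
 rule of thumb is more conservative still.

     import math

     def series_reliability(parts):
         """Series system: it works only if every part works, so the
         component reliabilities multiply."""
         r = 1.0
         for p in parts:
             r *= p
         return r

     def zero_failure_test_time(target_mttf, confidence=0.90):
         """Failure-free test time needed to demonstrate the target
         MTTF at the given confidence (exponential failure model)."""
         return -target_mttf * math.log(1.0 - confidence)

     print(series_reliability([0.999, 0.995, 0.990]))  # ~0.984
     print(zero_failure_test_time(10000))  # ~23,026 unit-years of test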

 Does a similar composition principle hold for software? No! Or if one
 exists, it hasn't been found yet.  The only way to infer the
 reliability of a piece of software is to measure it in use (either real
 or simulated) and there is no known theoretical model that allows one
 to infer the reliability of a software system from the reliability of
 its components.  It is reasonable for a user to expect that his
 operating system will not cause unknown data corruption more than once
 in every ten years.  But to assure that to statistically valid
 certitude would require 100 years of testing; and because of the vast
 variability of system configurations and user behaviors, what is
 learned from one case can't be transferred to another.

 So even if we could build bug-free software, we have no means to
 confirm whether or not we have achieved our goal, or the quantitative
 extent to which we have failed to meet the users' very reasonable
 quality expectations.  This is an area of intensive research, but
 progress has been slow.  Users are driving this.  But they can't have
 it both ways.  They can't, on the one hand, ask for ever-increasing
 sophistication and functionality, AND on the other hand, simultaneously
 not only demand that we maintain the reliability of the program's
 previous, simpler incarnation, but that we improve the reliability, and
 furthermore, prove that we have done so.

 The above has addressed only the limited objective of reliability
 determination.  But it is not the only composability issue.  There are
 important composability questions for performance, security,
 accountability, and privacy, to name a few.  For more general
 composability issues, the problem is worse and progress is even more
 meager.  Composability, which is fundamental to traditional engineering
 disciplines, cannot be assumed for software.

 2.6.2.  Decomposition Principle

 "Divide and conquer!" The analysis of complex problems in engineering
 is simplified by this fundamental strategy: break it down into its
 components, analyze the components, and then compose the analyses to
 obtain what you want to know about the entire thing.  We take it as
 given that in traditional engineering, decomposition, and therefore
 divide-and-conquer, is usually possible.  Of course software engineers
 adopt this strategy to the extent that they can.  But unlike
 traditional engineering, there are, as yet, no formal decomposition
 methods.  There are the beginnings of such methods, pragmatically
 useful heuristics, lots of folklore, but nothing rigorous yet.  As
 laudable as
 hierarchical decomposition and top-down design might be, for example,
 they are nevertheless heuristic and do not have the mathematically
 solid foundation that, say, decomposition of Laplace transforms in
 circuit theory has.

 The biggest trouble is that when it comes to quality issues and bugs,
 the very notion of decomposition, and even its possibility, disappears.
 This is so because the act of decomposing hides the bug for which we
 are looking.  Two routines, A and B, may each work by themselves; yet
 even if there is no direct communication between A and B, it is
 possible that the combination does not work.  Conversely, two routines
 A and B may each be buggy, but one bug corrects the other so that the
 combination does work.  Divide-and-conquer decomposition works and is
 useful for simple unit bugs.  But for competent software developers, it
 is rarely the simple unit bug that causes the catastrophic failure in
 the field.
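
 A minimal, hypothetical sketch of the first case (the routines and
 numbers are mine, not Beizer's): each routine is correct against its
 own specification and passes its own unit tests, yet the combination
 is silently wrong because the two specifications disagree about units.

     def fuel_needed(distance_km):
         """Correct alone: litres of fuel for a distance in
         kilometres, assuming 8 L per 100 km."""
         return distance_km * 0.08

     def leg_distance():
         """Also correct alone: length of the leg -- but in miles."""
         return 250.0

     # Each routine passes its own tests; the composition is wrong by
     # a factor of 1.609 (miles vs. kilometres).
     print(fuel_needed(leg_distance()))   # prints 20.0; ~32.2 L needed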

 2.6.3.  Composition/Decomposition Reciprocity

 Composition and decomposition are opposite sides of the same
 engineering coin.  Another slice at the same concepts is the idea of
 analysis versus synthesis.  All traditional engineering fields have
 both an analytical side (tell me what I want to know about the behavior
 of this thing) and a synthetical side (tell me how to build a thing
 that will have the specified behavior).  Traditional engineering fields
 alternate between periods of analysis dominance and synthesis
 dominance.  That is, at any given point in time, one or the other
 dominates the new literature and the emphasis, especially in teaching.
 I'll use electronics as an example.  In the Eighteenth century, there
 wasn't much to synthesize about electricity, other than to make it
 happen.  Then people such as Franklin started to study it (analysis).
 The analytical view dominated, culminating in Maxwell's equations,
 which seemed to explain everything.  Then, as electricity became an
 industry, the focus shifted to synthetical methods -- how do I design?
 During the Second World War, design and synthesis outstripped analysis.
 It didn't matter how a radar tube (e.g., a magnetron) or a waveguide
 worked: we needed working radar, whatever the analytical principle
 behind it.  After the war, the emphasis shifted back to analysis to
 explain how all those strange devices crafted by trial and error during
 the war worked.  Now, in semiconductor circuit design, synthesis appears
 again to have outstripped analysis, which is playing catch-up because
 the new synthesis tools depend on it.

 The computer industry is only 50 years old.  It has (understandably)
 been dominated by synthesis -- how to write working code -- albeit
 guided by heuristics instead of formal synthesis tools.  We, speaking
 for software developers, don't yet have an analytical infrastructure.
 We're only into the first round of synthesis-analysis alternations and
 it will take a few more rounds before we know what we're doing.  It
 would be nice if we had a few centuries to learn how to do what we do,
 but our users won't let us.  I don't offer this as an apology, but as
 an explanation.  It is also a matter of setting realistic expectations,
 for software developers and for users alike.  Users always want magic,
 so
 it's about time that we first admit to ourselves that we don't have
 firm guidance for what we do and perhaps then to our users that there
 are risks associated with ever-increasing complexity without benefit of
 either analytical principles or synthesis tools.

                            (To Be Continued)

 ========================================================================

                 Comparative Report on eValid Available

 There is a very nice comparative report that matches eValid up against
 several products that aim to do WebSite testing.  The report was done
 by a group of people at the University of Ottawa, under the auspices of
 Prof. Robert Probert and led by Mr. Victor Sawma.  The report can be
 read by going to this URL:
   <http://www.site.uottawa.ca/~vsawma/websitetesting/>

 There you can get the report in PDF format by going to:
   <http://www.site.uottawa.ca/~vsawma/websitetesting/finalreport.pdf>

 ========================================================================

         Brooks "Mythical Man-Month": A Book Report and Critique

                                   by

                   Don O'Neill, Independent Consultant

 This is a book report and critique on "The Mythical Man-Month" by Fred
 Brooks, Second Printing, July 1978, prepared by Don O'Neill.

 "The Mythical Man-Month"  by Fred Brooks has been a popular book for
 nearly thirty years.  We should not continue to let this work stand
 since it is flawed in many ways.  On the positive side, the book
 provides a glimpse into the old style software engineering.  On the
 negative side, this book is damaging in that software people and non
 software people alike believe the myths set forth, quote them, and
 possibly act on them.  This book review attempts to identify some of
 these flaws.

 In Chapter 1 entitled "The Tar Pit", the author unwittingly reveals
 that he is not a programmer at heart when he lists the need for
 perfection as a "woe," not a "joy."  Real programmers thrive on the
 pursuit of perfection.  Imitators seek ways to cut slack.

 In Chapter 2 entitled "The Mythical Man-Month", the claim that software
 projects get behind one day at a time provides a striking example
 management ignorance.  The principal causes of missed schedules are bad
 estimation and uncontrolled change.  By shifting the onus to the daily
 performance of programmers, the author shifts management responsibility
 for schedule slippage to programmers, a practice that continues today.

 In Chapter 3 entitled "The Surgical Team", the author identifies
 surgical teams and Chief Programmer Team structures as models for
 software projects.  These management structures are essentially flawed.
 The flaw stems from elitism and lies in a profound misunderstanding of
 professionalism and lack of understanding of the pursuit of perfection.
 The tasks of code composition, source code entry, and library
 management are inseparable in actual programming practice.  These tasks
 must be done by the same person.  Any programmer knows this;
 non-programmers cannot understand this.  The reason is tied to the pursuit
 of perfection and the extreme focus and concentration it demands, a joy
 for programmers, a woe for others.

 In Chapter 4 entitled "Aristocracy, Democracy, and System Design", the
 author demonstrated that he had an inkling that architecture was
 important but failed to realize that architecture was the integrating
 element, unifying force, and the direction for the project.  The author
 reveals that his view of architecture was narrow and limited to just
 the user interface, missing the opportunity to include domain
 architecture, intercommunicating protocols, and data management
 facilities.

 In Chapter 5 entitled "The Second-System Effect", the author
 demonstrated some insight in recognizing that the first system was
 incomplete and possibly flawed, that the second system received
 excessive changes as pent-up demand was unleashed, and that the third
 system would be right.  This begins to recognize that software is a
 process of experimentation.  It is not enough to avoid the pitfalls of
 the second-system effect.  What is needed is to operate within a
 systematic process of experimentation -- setting hypotheses, collecting
 data, analyzing results, selecting alternatives, and resetting
 hypotheses.  Real programmers do this on every assignment.  Managers
 resist the resulting nondeterminism and prefer hard-coded,
 deterministic practices.

 In Chapter 6 entitled "Passing the Word", the author struggles with the
 problem of disseminating information when all decisions emanate from
 the top.  In software projects, there is no substitute for superior
 knowledge.  When all authorized knowledge is controlled at the top, it
 must be pushed to users.  When knowledge is created and controlled at
 the point of origin where superior knowledge exists, it need only be
 made available to be shared and pulled by users.

 In Chapter 7 entitled "Why Did the Tower of Babel Fail?", the author
 was limited by a top-down view.  The phrase "push back" does not appear
 in the book.  In Chapter 8 entitled "Calling the Shot", the
 preoccupation with the vagaries of estimation and the emerging
 realization that measurements of complex systems behave in a nonlinear
 fashion seem naive.  The author reports measurements from various
 sources as if these measurements contained some universal truth.

 In Chapter 9 entitled "Ten Pounds in a Five-Pound Sack", the author's
 preoccupation with memory space explains the practice of two-digit
 dates of such great concern in the Y2K crisis and remediation.

 In Chapter 10 entitled "The Documentation Hypothesis", the
 documentation tactics discussed seem inadequate.

 In Chapter 11 entitled "Plan to Throw One Away", the author again
 discovers that software is a process of experimentation.

 In Chapter 12 entitled "Sharp Tools", the author discusses the basic
 tools in use at the time.

 In Chapter 13 entitled "The Whole and the Parts", the author reveals a
 misunderstanding of the nature of software defects and trivializes
 their impact on users and their operations in promulgating the word
 "bug".

 In Chapter 14 entitled "Hatching a Catastrophe", the author, as someone
 intimately attached to the industry's most notable software
 catastrophe, disappoints by offering no career-altering insights on the
 experience.

 In Chapter 15 entitled "The Other Face", the author concludes this epic
 work with ramblings on documentation forms in use at the time.

 ========================================================================

                    Call For Participation: QWE2001

  International Internet & Software Quality Week Europe 2001 (QWE2001)
                          November 12 - 16 2001
                           Brussels, BELGIUM

                     Conference Theme: Internet NOW!

                 <http://www.soft.com/QualWeek/QWE2001>

 QWE2001, the 19th in the continuing series of Quality Week Conferences,
 is the 5th International Internet & Software Quality Week/Europe
 Conference.  QWE2001 focuses on advances in software test technology,
 reliability assessment, software quality processes, quality control,
 risk management, software safety and reliability, and test automation
 as it applies to client-server applications and WebSites.

 QWE2001 papers are reviewed and selected by a distinguished
 International Advisory Board made up of Industry and Academic Experts
 from Europe and the United States.  The Conference is produced by
 Software Research Institute.

 The mission of the QWE2001 Conference is to increase awareness of the
 entire spectrum of methods used to achieve internet & software quality.
 QWE2001 provides technical education, in addition to opportunities for
 practical experience exchange within the software development, QA and
 testing community.

 QWE2001 OFFERS:

 The QWE2001 program consists of five days of tutorials, panels,
 technical papers and workshops that focus on software quality, test
 automation and new internet technology.  QWE2001 provides the Software
 Testing and Web QA community with:

   o Real-World Experience from Leading Industry, Academic and
     Government Technologists.
   o State-of-the-art Information on Software Quality & Web Quality
     Methods.
   o Quality Assurance and Test Involvement in the Development Process.
   o E-commerce Reliability / Assurance.
   o Case Studies, Lessons Learned and Success Stories.
   o Latest Trends and Tools.
   o Two Days of carefully chosen half-day and full-day Tutorials from
     Internationally Recognized Experts.
   o Three-Day Conference with: Technology, Internet, Process,
     Applications and Vendor Technical Presentations.
   o Two-Day Vendor Exhibitions and Demonstrations of the latest Tools.
   o Five Parallel Tracks with over 50 Presentations.

 QWE2001 IS SOLICITING:

   o Full-day and half-day Tutorials
   o Proposals for, or Participation in, Panel Discussions
   o 45- and 90-minute Presentations on any area of Quality, Testing
     and Automation, including:

     E-Commerce Reliability          Object Oriented Testing
     Application of Formal Methods   Outsourcing
     Automated Inspection Methods    Process Improvement
     Software Reliability Studies    Productivity and Quality Issues
     Client / Server Testing         Real-Time Software
     CMM/PMM Process Assessment      Test Automation Technology
     Cost / Schedule Estimation      Test Data Generation
     WebSite Monitoring              WebSite Testing
     Test Documentation Standards    Defect Tracking / Monitoring
     GUI Test Technology             Risk Management
     Test Management                 Test Planning Methods
     Integrated Test Environments    Test Policies and Standards
     Quality of Service (QoS)        New/Novel Test Methods
     WebSite Load Generation         WebSite Quality Issues

IMPORTANT DATES:

        Abstracts and Proposals Due:    30 June 2001
        Notification of Participation:  15 August 2001
        Camera Ready Materials Due:     22 September 2001
        Final Paper Length:             10-20 pages
        Powerpoint / View Graphs:       Max 15 pages (2 slides/page)

SUBMISSION INFORMATION:

There are two steps to submitting material for review in QWE2001:

 1. Prepare your Abstract as an ASCII file, an MS Word document, in
    PostScript, or in PDF format.  Abstracts should be 1 - 2 pages long,
    with enough detail to give members of QWE2001's International
    Advisory Board an understanding of the final paper / presentation,
    including a rough outline of its contents.

    Email your submission to:  as a MIME attachment.

 2. Fill out the Speaker Data Sheet giving some essential facts about
    you and about your proposed presentation at:
      <http://www.soft.com/QWE2001/speaker.data.html>

    This information includes:
      o Author's Name, Title, Organization and contact information
      o Target Audience Level, Target Track and Basis of Paper
      o Three bullet phrases describing the main points of the paper.
      o Short Abstract of the paper for publication on the WebSite
      o Brief Biographical sketch of each author (and later a photo of
        each author).
      o Lessons to be learned from this presentation.

As a backup to e-mail, you can also send material by postal mail to:

      Ms. Rita Bral,
      Software Research Institute,
      1663 Mission Street, Suite 400,
      San Francisco, CA  94103  USA

                      Phone: [+1] (415) 861-2800
                      FAX:   [+1] (415) 861-9801
                          Email: qw@sr-corp.com
            WebSite: <http://www.soft.com/QualWeek/QWE2001>

========================================================================

                         SERG Reports Available

Below are abstracts for two new SERG Reports which have been recently
completed by McMaster University's Software Engineering Group.  The
reports are on our web page and are available in both PostScript and PDF
formats.  The web address for downloading reports is:

        <http://www.crl.mcmaster.ca/SERG/serg.publications.html>

   SERG 394: Formal Verification of Real-Time Software, by Hongyu Wu

Abstract:  Automated theorem provers (ATPs) such as SRI's Prototype
Verification System (PVS) have been successfully used in the formal
verification of functional properties.  However, these methods do not
verify satisfaction of real-time software requirements.

This report extends functional verification methods to the verification
of real-time control properties by developing a PVS library for the
specification and verification of real-time control systems.  New
developments include the definition of strong clock induction and
several lemmas regarding real-time properties.  These definitions, when
combined with PVS's support for tabular notations, provide a useful
environment for the specification and verification of basic real-time
control properties.

To illustrate the utility of the method, two timing blocks of an
industrial real-time control system are analyzed.  The PVS specification
and proof techniques are described in sufficient detail to show how
errors or invalid assumptions were detected in the proposed
implementation and the original specifications.  Finally, the report
presents a proof that corrected versions of the implementation satisfy
the updated versions of the specifications.

      SERG 395: Preliminary Requirements Checking Tool, by Ou Wei

Abstract:  This report discusses tools for checking application-
independent properties in requirements documents.  Application-
independent properties are simple properties derived from the underlying
formal requirements model and specification notation.  The large size of
detailed requirements documents means that reviewers would spend
considerable time and effort checking simple but important properties.
Computer-supported preliminary checking tools are necessary to make
industrial application of these methods practical.

This report describes the Preliminary Requirements Checking Tool (PRCT),
which checks the application-independent properties for SCR style
requirements.  The properties checked by PRCT are derived from the Four
Variable Requirements Model and McMaster's Generalized Tabular Notation.
The tool builds on previous work on the Table Tool System (TTS).
PRCT checks for errors such as incorrect syntax, undefined variables and
circular definitions in requirements specifications, and will serve as a
preprocessor for more advanced tools that will check critical
application-dependent properties.

========================================================================
      ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------
========================================================================

QTN is E-mailed around the middle of each month to over 9000 subscribers
worldwide.  To have your event listed in an upcoming issue E-mail a
complete description and full details of your Call for Papers or Call
for Participation to .

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the QTN issue date.  For example,
  submission deadlines for "Calls for Papers" in the March issue of QTN
  On-Line should be for April and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items appearing in QTN represent the opinions
of their authors or submitters; QTN disclaims any responsibility for
their content.

TRADEMARKS:  eValid, STW, TestWorks, CAPBAK, SMARTS, EXDIFF,
STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR logo are
trademarks or registered trademarks of Software Research, Inc. All other
systems are either trademarks or registered trademarks of their
respective companies.

========================================================================
          -------->>> QTN SUBSCRIPTION INFORMATION <<<--------
========================================================================

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to CHANGE an
address (an UNSUBSCRIBE and a SUBSCRIBE combined) please use the
convenient Subscribe/Unsubscribe facility at:

         <http://www.soft.com/News/QTN-Online/subscribe.html>.

As a backup you may send Email direct to  as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:
           subscribe 

   TO UNSUBSCRIBE: Include this phrase in the body of your message:
           unsubscribe 

Please, when using either method to subscribe or unsubscribe, type the
email address exactly and completely.  Requests to unsubscribe that do
not match an email address on the subscriber list are ignored.

		QUALITY TECHNIQUES NEWSLETTER
		Software Research, Inc.
		1663 Mission Street, Suite 400
		San Francisco, CA  94103  USA

		Phone:     +1 (415) 861-2800
		Toll Free: +1 (800) 942-SOFT (USA Only)
		Fax:       +1 (415) 861-9801
		Email:     qtn@sr-corp.com
		Web:       <http://www.soft.com/News/QTN-Online>