sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +===================================================+
         +=======    Quality Techniques Newsletter    =======+
         +=======             April 2003              =======+
         +===================================================+

QUALITY TECHNIQUES NEWSLETTER (QTN) is E-mailed monthly to
subscribers worldwide to support the Software Research, Inc. (SR),
TestWorks, QualityLabs, and eValid user communities and other
interested parties, and to provide information of general use to the
worldwide internet and software quality and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the
entire document/file is kept intact and this complete copyright
notice appears with it in all copies.  Information on how to
subscribe or unsubscribe is at the end of this issue.  (c) Copyright
2003 by Software Research, Inc.

========================================================================

                       Contents of This Issue

   o  SR Moves to New Facility

   o  Testing Big Systems, by Boris Beizer, Ph. D.

   o  Buggy Software Article Pointer, by Bernard Homes

   o  XP2003: 4th International Conference on eXtreme Programming
      and Agile Processes

   o  More Difficult Questions in a More Difficult Time

   o  More Reliable Software Faster and Cheaper: Distance Learning
      Version, by John Musa

   o  Comparative Analysis of Websites

   o  Workshop on Remote Analysis and Measurement of Software
      Systems (RAMSS'03)

   o  Special Issue: Contract-Driven Coordination and Collaboration
      in the Internet Context

   o  eValid Updates and Details <http://www.e-valid.com>

   o  Workshop on Intelligent Technologies for Software Engineering
      (WITSE/03)

   o  QTN Article Submittal, Subscription Information

========================================================================

                     SR Moves to New Facility

We operated for 4+ years from an open space barn-like facility with
too many skylights located in the South of Market (SOMA) area of San
Francisco -- through the peak of the "Dot Com" bubble and its recent
catastrophic collapse.

Now that "...gravity seems to have returned to the IT industry" (to
paraphrase an industry spokesman) we have taken advantage of the
availability of space to move our operations to a new, fully
renovated office facility that has expansion potential, is close to
public transport, is centrally located in San Francisco, has great
views, and is a much more professional space.  If you're visiting
San Francisco we invite you to stop by at any time.  The page below
explains where we are:

<http://www.soft.com/Corporate/GettingToSR.html>

     - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

      Please Note New Coordinates for SR, eValid, SR/Institute

Please make a note of the new postal address and phone/fax numbers
for Software Research and its associated organizations.  All email
and web addresses remain unchanged.

                      Software Research, Inc.
                            eValid, Inc.
                            SR/Institute
                        1663 Mission Street
                    San Francisco, CA  94103 USA

                      Phone: +1 (415) 861-2800
                       FAX: +1 (415) 861-9801

========================================================================

                        Testing Big Systems
                                 by
                            Boris Beizer

      Note:  This article is taken from a collection of Dr.
      Boris Beizer's essays "Software Quality Reflections" and
      is reprinted with permission of the author.  We plan to
      include additional items from this collection in future
      months.

      Copies of "Software Quality Reflections," "Software
      Testing Techniques (2nd Edition)," and "Software System
      Testing and Quality Assurance," can be obtained directly
      from the author at .

Testing big systems is a refinement of system testing, which in turn
is a refinement of testing.  The point is to first do proper
testing. When you have removed all the bugs that can be practically
removed by ordinary testing, then you can consider system testing
issues and the specialized techniques that might apply there.  Only
after ordinary testing and system testing have been thoroughly done
should you tackle the special issues of big systems.  It is not my
purpose here to provide a tutorial on how unit/component,
integration, and system testing should be done -- that takes several
books, of which I have written a few and have yet to write more.
The purpose of this essay is to clarify the issues in ordinary and
big system testing and to present some philosophical speculations
about these issues.

                            1.  Testing

By "testing" or "ordinary testing" I mean proper unit/component
testing and proper integration testing of already tested units and
components.  For those not familiar with the terminology, please
note the following recursive definitions:

  o unit: the work of one programmer (typically); the smallest
    separately testable element of a system, possibly tested using
    stubs and/or drivers.  A unit excludes or simulates called
    components; ditto for communicating components in non-procedural
    languages.  (See the sketch after this list.)

  o component:  a unit integrated with called and/or communicating
    components.

  o integration testing: the testing of the interfaces and
    incompatibilities between otherwise correctly working components
    that have been previously tested at a lower component level.
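
As an illustration of the unit level, here is a minimal sketch, in
Python with hypothetical names, of a unit test that exercises one
function in isolation and replaces a called component with a stub, as
the definition above describes.  It is not taken from the essay.

    # Minimal unit-test sketch (hypothetical names, not from the essay).
    # The unit under test calls an external "tax service"; the test
    # replaces that called component with a stub.
    import unittest
    from unittest import mock

    def price_with_tax(amount, tax_service):
        """Unit under test: adds tax obtained from a called component."""
        rate = tax_service.rate_for(amount)
        return round(amount * (1.0 + rate), 2)

    class PriceWithTaxTest(unittest.TestCase):
        def test_adds_stubbed_tax_rate(self):
            # Stub out the called component so only the unit is tested.
            stub_service = mock.Mock()
            stub_service.rate_for.return_value = 0.10
            self.assertEqual(price_with_tax(100.0, stub_service), 110.0)

    if __name__ == "__main__":
        unittest.main()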

Most easy bugs (about half) will be found by proper unit, component,
and integration testing.  In OO (or other non-procedural languages)
the role of unit/component testing declines but there is a sharp
increase in the need for,  and the effectiveness of, integration
testing. There is no point in dealing with higher-order testing
until the easier bugs have been removed by lower-level (and
therefore cheaper) unit and component testing.  Attempting to do
system testing (never mind big system testing) before proper
unit/component/integration testing has been done is self-deceptive
and a waste of time.  All that you will accomplish is to defer the
discovery of unit bugs until so-called system testing when you can
find them at a much higher cost.  Unit testing can be bounded by
various testing criteria.  Similarly, but less strictly so,
component and integration testing can be bounded (i.e., you can
reasonably well know and predict when it will be done).
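
One common way to bound unit testing by a criterion is sketched
below.  It uses the third-party coverage.py package and illustrative
directory and threshold names -- an assumption for illustration, not
something the essay prescribes -- and treats a branch-coverage
threshold as the bound that tells us the unit-test phase is "done."

    # Sketch: bounding unit testing with a branch-coverage criterion.
    # Assumes the third-party "coverage" package (coverage.py) is
    # installed; the test directory and threshold are illustrative only.
    import unittest
    import coverage

    COVERAGE_BOUND = 100.0  # e.g., "100% branch cover" in unit testing

    def run_bounded_unit_tests(test_dir="tests"):
        cov = coverage.Coverage(branch=True)
        cov.start()
        suite = unittest.defaultTestLoader.discover(test_dir)
        unittest.TextTestRunner().run(suite)
        cov.stop()
        percent = cov.report()            # prints a report, returns total %
        return percent >= COVERAGE_BOUND  # bound met: unit testing is "done"

    if __name__ == "__main__":
        print("coverage bound met:", run_bounded_unit_tests())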

                         2.  System Testing

System testing is the exploration of misbehavior that cannot be
detected except in the context of a complete system.  Examples
include: feature interaction bugs, timing and synchronization bugs,
resource loss bugs, performance, throughput, re-entrance, priority.
System testing is primarily behavioral testing, that is,
requirements-driven feature testing.  System testing is unbounded,
i.e., potentially infinite.  Practical considerations lead us to the
position that we will test until the software is "good enough."
What is good enough is determined in part by:  transaction
importance, user behavior, risk associated with transaction loss or
garble, etc.

Of all the system testing issues, one of the nastiest is feature
interaction.  While it is mandatory that every feature be tested by
itself, it is impractical to test all feature interactions.  In
general, you test only the combination of features that have been
shown to be statistically significant in terms of user behavior, or
for which potential loss caused by a bug is not acceptable.  That
is, feature-interaction testing is risk-driven.
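
To make the risk-driven selection concrete, here is a small sketch
with made-up feature names, usage weights, and loss figures (the
essay prescribes no particular formula): it ranks feature pairs by a
crude risk score -- joint usage times potential loss -- and selects
only the top combinations for interaction testing.

    # Sketch of risk-driven selection of feature-interaction pairs.
    # All names and numbers are hypothetical, for illustration only.
    from itertools import combinations

    # Per-feature estimates: probability of use, and the loss expected
    # if that feature's transactions are lost or garbled.
    features = {
        "login":    {"usage": 0.90, "loss": 9.0},
        "checkout": {"usage": 0.40, "loss": 10.0},
        "search":   {"usage": 0.70, "loss": 3.0},
        "reports":  {"usage": 0.10, "loss": 2.0},
    }

    def pair_risk(a, b):
        """Crude risk score for the interaction of features a and b."""
        fa, fb = features[a], features[b]
        joint_usage = fa["usage"] * fb["usage"]   # how often both are in play
        worst_loss = max(fa["loss"], fb["loss"])  # cost if the pair misbehaves
        return joint_usage * worst_loss

    # Rank all pairs, then test only the top few rather than every one.
    ranked = sorted(combinations(features, 2),
                    key=lambda p: pair_risk(*p), reverse=True)
    for a, b in ranked[:3]:
        print("test interaction: %s x %s (risk %.2f)" % (a, b, pair_risk(a, b)))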

                       3.  Big System Testing

We could define a big system as software with over 5 million lines
of code, but that is not productive.  One could have a 50,000,000
line program that would not be a "big" system as I will define it.
Similarly, I can hypothesize "big" systems that have only 500,000
lines of code -- although either possibility is unlikely.  The most
important "big" system issues are feature interaction richness and
performance, and the bugs associated with them.

A "big" system, as I define it, is one in which subsystem and/or
feature interaction is so rich that it is unpredictable.  For
example, because of inheritance and dynamic binding, it is not
really known, or knowable, just what transactions a system might
perform in the future.  In ordinary system testing, you can test
individual transactions (in some risk/priority order, say) and then
test interactions between features in similar risk-driven order.  In
big systems, there is no clearly defined set of transactions to
prioritize because transactions are dynamically created by the
system itself in response to unpredictable user behavior.  There is
no point in dealing with issues of dynamically created (and
therefore unpredictable) interactions until you have properly tested
all the static transactions and a reasonable subset of their
interactions that you know about.  That is, thorough, risk-driven,
ordinary system testing should precede "big system" testing.

I'm not going to give you a set of approaches to use because we just
don't know enough to lay down prescriptions for big system testing
as I have defined big systems.  Ordinary system testing approaches
(e.g., 100% feature cover) do not work because the feature list is
unknown. Furthermore, the cost of finding that hypothetical rich
feature (and interaction list) -- i.e., what should we test? --
seems to dominate the problem.

An unfortunate common example of what I call "big" system testing
was the Y2K problem.  Here the system was "big" not necessarily
because of inherent complexity, dynamic feature interactions, etc.
but because past programming practices and lack of suitable
documentation made the identification of Y2K sensitive fields
unknown; although not unknowable in principle, the combination of
big hunks of legacy software combined with a rigid time-table made
the Y2K stuff unknowable in practical terms.  Finding out what to
test and what to look at dominated the problem.  The Y2K problem and
the way to go about it led to radically different approaches and
processes.  By "radical" I mean dropping such notions as 100% branch
cover as an abstract absolute -- don't get me wrong on this, you
still must do 100% cover in unit testing -- what I mean is that the
notions of testing that are appropriate to units and small systems
simply fall apart in the big system context.

To use an analogy, we have three domains:  component, system, big
system.  The analogy is to physics: quantum mechanics for the
microscopic, Newtonian physics for the familiar, and relativity on
the cosmic scale.  Except in our case it is the cosmic scale that is
uncertain in contrast with physics where uncertainty manifests
itself on the microscopic scale.  Perhaps, we should consider not
"big" systems, but "uncertain" systems.  By which I do not mean that
you don't know out of ignorance or lack of trying to know, but you
don't know because you can't know.

Ordinary risk models don't apply.  Such risk models as one should
use to guide ordinary system testing (especially of feature
interactions) do not apply because we don't have a finite feature
list and we certainly don't have a finite feature interaction list
to prioritize.  Furthermore, the cost of obtaining such a list
exceeds potential values.

Is there a solution?  I don't know.  I think that we're mostly in
the philosophy stage here and far from substantive answers.  I also
think that we should be mighty careful in building "big" systems --
which OO and other component technologies make so easy.  It is also
time to change our fundamental assumptions about testing.  We cannot
assume that anything we can build is testable.

We must change our basic assumption to:

                       NOTHING IS TESTABLE...
             UNLESS YOU CAN PROVE THAT IT IS TESTABLE.

========================================================================

                   Buggy Software Article Pointer
                                 by
                           Bernard Homes
                      Mail: bhomes@wanadoo.fr
                       Cell: +33 612 252 636

Check out this link; it discusses software testing and is very much
in the mainstream of things:

<http://www.cnn.com/2003/TECH/ptech/04/27/buggy.software.ap/index.html>

========================================================================

   XP2003: 4th International Conference on eXtreme Programming and
              Agile Processes in Software Engineering
                         May 25 - 29, 2003
                           Genova, Italy
                      <http://www.xp2003.org/>

Building on the success of XP2000, XP2001, and XP2002, the Fourth
International Conference on eXtreme Programming and Agile Processes
in Software Engineering will be a forum to discuss theories,
practices, experiences, and tools on XP and other agile software
processes, like SCRUM, the Adaptive Software Process, Feature Driven
Development and the Crystal series.

XP2003 will bring together people from industry and academia to
share experiences and ideas and to provide an archival source for
important papers on agile process-related topics.

The conference is also meant to provide information and education to
practitioners, identify directions for further research, and to be
an ongoing platform for technology transfer.

Many gurus have already confirmed their participation in the
conference.  Among them are: Kent Beck, Prof. Mike Cusumano,
Jim Highsmith, Michele Marchesi, Ken Schwaber, Giancarlo Succi, and
many others.

The conference aims to maintain the informal, active, productive,
and family-friendly structure of its previous editions.

It will consist of a dynamic mixture of keynote presentations,
technical presentations, panels, poster sessions, activity sessions,
workshops, and tutorials.

A rich companions' program has also been prepared.

CONFERENCE TOPICS

The conference will stress practical applications and implications
of XP and other agile methodologies (AMs). Conference topics
include, but are not limited to:

- Foundations and rationale of XP and AMs
- XP/AMs and Web Services
- Introducing XP/AMs into an organization
- Relation to the CMM and ISO 9001
- Organizational and management issues and patterns
- Use of supportive software development tools and environments
- Education and training
- Unit and acceptance testing: practices and experiences
- Methodology and process
- Case studies; empirical findings of the effectiveness of XP/AMs
- Refactoring and continuous integration
- XP and AMs practices
- Relation to reuse

SPONSORED BY:

Microsoft Corporation (http://www.microsoft.com/)
ThoughtWorks (http://www.thoughtworks.com/)
The Agile Alliance (http://www.exoftware.com/)
eXoftware (http://www.exoftware.com/)

========================================================================

         More Difficult Questions in a More Difficult Time
                          by Edward Miller

Last Fall I asked QTN readers to suggest what they thought were the
main concerns for the times regarding the general area of software
quality.

The questions concerned Quality Technology, Issues about the Web,
Industry Awareness of Quality Issues, XP, Process Methodologies such
as CMM and SPICE and ISO/9000, and Security and Integrity concerns.

As good as those responses were -- and they were "right on" in many
cases -- it seems to me in the present business and technological
climate there are some even deeper questions that present some
unique challenges.

So, again, below are some really hard questions that, I believe,
need to be asked within the software quality community -- and might
be the basis for some very good discussions.

Not to even think about these things is to avoid reality, and that
can't be a good thing to do.  To think about them may bring better
focus onto the real issues facing the community.  So, here goes...

* ECONOMIC ISSUES.  Everyone in the QA/Test community is suffering
  -- is this news to any of our readers?  What are the factors
  holding back the QA & Test business?  How do consultants and small
  business operations survive the slowdown?

* TECHNICAL ISSUES.  It's hard to believe "everything has been
  invented", but could it be true?  What are the real technical
  issues facing the software quality community?  Or are there any?
  Are there really any problems remaining that need to be solved
  that are not addressed by current methods?  If so, what are they?

* MANAGERIAL ISSUES.  Test/QA people are in many instances "second
  class citizens" -- is this news?  What keeps there from being more
  emphasis on systematic QA & Test?  How do we "get respect?"  Is
  there something that can actually be done about this, other than
  wait?

Please send your responses -- and, of course, any additional "tough
questions" that you think ought to be asked -- to me at
.  We'll publish a selection next month.

========================================================================

             More Reliable Software Faster and Cheaper:
                     Distance Learning Version
                            By John Musa

A distance learning version of the software reliability engineering
course "More Reliable Software Faster and Cheaper" is now available.
This is the same course that has been taken by thousands of
participants over the past few years in a classroom version and has
been continually updated and thoroughly polished and optimized.  It
has been designed for several types of prospective participants
whose needs could not be met by the classroom version:

1. Individuals and groups that are too small (generally fewer than
    10) to justify bringing the classroom version onsite.

2. Individuals or small groups wanting to try out the course on an
    inexpensive basis before committing time or funds to an onsite
    classroom course.

3. Individuals or small groups who need flexibility as to when,
    where, and at what pace they take the course because of the
    schedule demands of work or their travel or physical location
    situation.

4. Individuals or small groups with budget constraints.  We are all
    suffering from hard economic times, but because of that fact, we
    need to work more efficiently.

5. Individuals who could not attend public presentations of the
    course because of travel restrictions or schedule conflicts.

6. Individuals and small groups outside of North America.

JOHN D. MUSA
39 Hamilton Road
Morristown, NJ 07960-5341
Phone: 1-973-267-5284

j.musa@ieee.org
Software Reliability Engineering website:
http://members.aol.com/JohnDMusa/

========================================================================

                  Comparative Analysis of Websites

Recently we posted some initial applications of eValid's InBrowser
spidering capability to the question of assessing the relative "Best
Online Experience" of selected WebSites.  Please see this page for
explanations of the methodology and our "first try" at understanding
the data:

<http://www.soft.com/eValid/Promotion/Comparative/Analysis/first.try.html>

Here are pointers to the four accompanying reports.

 o The Fortune 500 Top 10 Companies:
   <http://www.soft.com/eValid/Promotion/Comparative/Fortune/Top10/summary.html>

 o The Fortune 500 Top 10 Commercial Banks:
   <http://www.soft.com/eValid/Promotion/Comparative/Fortune/Commercial.Banks/summary.html>

 o The Fortune 500 Fastest Growing Small Companies:
   <http://www.soft.com/eValid/Promotion/Comparative/Fortune/Fastest.Small.Companies/summary.html>

 o Ten Well-Known Hardware Manufacturers:
   <http://www.soft.com/eValid/Promotion/Comparative/Fortune/Hardware/summary.html>

If you're interested in this general area -- external, objective
analysis of websites -- please contact  with
your suggestions for additional sites to analyze and your
recommendations about additional metrics.

========================================================================

            Workshop on Remote Analysis and Measurement
                        of Software Systems

                   Portland, Oregon, May 9, 2003
              <http://measure.cc.gt.atl.ga.us/ramss/>

                   Co-Located Event of ICSE 2003

The way software is produced and used is changing radically.  Not so
long ago software systems had only a few users, and ran on a limited
number of mostly disconnected computers.  Today the situation is
unquestionably different: the number of software systems,
computers, and users has dramatically increased.  Moreover, most
computers are connected through the Internet.  This situation has
opened the way for new development paradigms, such as the open-
source model, shortened development lead times, and spurred the
development and acceptance of increasingly distributed,
heterogeneous computing systems.

Although these changes raise new issues for software engineers, they
also represent new opportunities to greatly improve the quality and
performance of software systems.  Consider, for example, software
analysis and measurement tasks such as testing and performance
optimization.  Usually, these activities are performed in-house, on
developer platforms, using developer-provided inputs, and at great
cost.  As a result, these activities often do not reflect actual
in-the-field performance, which ultimately leads to the release of
software with missing functionality, poor performance, errors that
cause in-the-field failures and, more generally, users'
dissatisfaction.

The goal of this workshop is to bring together researchers and
practitioners interested in exploring how the characteristics of
today's computing environment (e.g., high connectivity, substantial
computing power for the average user, higher demand for and
expectation of frequent software updates) can be leveraged to
improve software quality and performance.  In particular, the
workshop aims to discuss how software engineers can shift
substantial portions of their analysis and measurement activities to
actual user environments, so as to leverage in-the-field
computational power, human resources, and actual user data to
investigate the behavior of their systems after deployment and to
improve their quality and performance.
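
As a flavor of what such in-the-field measurement can look like, the
sketch below shows a lightweight, sampled monitoring wrapper in
Python.  The collector URL, sampling rate, and names are hypothetical
illustrations, not part of any workshop submission or specific tool.

    # Sketch of lightweight, sampled in-the-field monitoring.
    # The endpoint URL, sampling rate, and names are hypothetical.
    import functools
    import json
    import random
    import time
    import urllib.request

    COLLECTOR_URL = "http://example.com/collect"  # hypothetical endpoint
    SAMPLE_RATE = 0.01  # monitor about 1% of executions, to stay lightweight

    def monitored(func):
        """Report timing and failure data for a sampled subset of calls."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if random.random() >= SAMPLE_RATE:
                return func(*args, **kwargs)       # unmonitored fast path
            start = time.time()
            ok = True
            try:
                return func(*args, **kwargs)
            except Exception:
                ok = False
                raise
            finally:
                record = {"fn": func.__name__, "ok": ok,
                          "ms": (time.time() - start) * 1000.0}
                try:
                    urllib.request.urlopen(
                        COLLECTOR_URL, data=json.dumps(record).encode())
                except OSError:
                    pass                           # never disturb the user's run
        return wrapper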

Areas of interest include, but are not limited to:

    * Continuous, lightweight monitoring of deployed systems
    * Detection and diagnosis of problems and failures in
      deployed systems
    * Evolution, optimization, or adaptation of systems
      based on data-driven feedback from the field
    * Dynamic analysis and profiling of web-based and
      distributed systems
    * Identification of user profiles based on real usage
      data
    * Dynamic modification of deployed programs, such as
      adaptation, instrumentation, and updating
    * Data mining and visualization of feedback-data from
      the field

PARTICIPATION

The workshop will be open to all participants interested in the
topic (up to a maximum of 40 participants). Registration for the
workshop does NOT require invitation by the PC.

ACCEPTED PAPERS

"Sampling User Executions for Bug Isolation", by Ben Liblit, Alex
Aiken, Alice X. Zheng, and Michael I. Jordan

"Toward the Extension of the DynInst Language", by Will Portnoy and
David Notkin

"Continuous Remote Analysis for Improving Distributed Systems'
Performance", by Antonio Carzaniga and Alessandro Orso

"Distributed Continuous Quality Assurance: The Skoll Project Cemal
Yilmaz, Adam Porter, and Douglas C. Schmidt

"A Reconfiguration Language for Remote Analysis and Applications
Adaptation", by Marco Castaldi, Guglielmo De Angelis, and Paola
Inverardi

"Proactive System Maintenance Using Software Telemetry", by Kenny C.
Gross, Scott McMaster, Adam Porter, Aleksey Urmanov, and Lawrence G.
Votta

"Enterprise Application Performance Optimization based on Request-
centric Monitoring and Diagnosis", by Jörg P. Wadsack

"Deploying Instrumented Software to Assist the Testing Activity", by
Sebastian Elbaum and Madeline Hardojo

"Runtime Monitoring of Requirements", by Stephen Fickas, Tiller
Beauchamp, and Ny Aina Razermera Mamy

"Improving Impact Analysis and Regression Testing Using Field Data",
by Alessandro Orso, Taweesup Apiwattanapong, and Mary Jean Harrold

"Non-invasive Measurement of the Software Development Process", by
Alberto Sillitti, Andrea Janes, Giancarlo Succi, and Tullio Vernazza

Program Co-chairs:

    * Alessandro Orso
      College of Computing, Georgia Institute of Technology
      
    * Adam Porter
      Department of Computer Science, University of Maryland
      

========================================================================

   Special Issue: Contract-Driven Coordination and Collaboration
                      in the Internet Context

Guest editors:  Willem-Jan van den Heuvel, Hans Weigand (Tilburg
University)

AIM:  This special issue aims to address the wide spectrum of issues
that are relevant for supporting web-enabled interactions
with coordination models, languages and applications. The focus is
on the coordination of behavior by means of contracts (TPA,
collaboration agreement, etc.).  Both a theoretical perspective and an
industrial perspective are encouraged.  Although attention to
CSCW/groupware/Negotiation Support is not excluded, the focus is on
system interactions, not human interactions.

BACKGROUND:  Currently, the Internet is often associated with the
dissemination of information, such as via the WWW.  In the future, it
will also be used more and more as a platform for linking
applications, in the form of web services or agents -- for example,
for the purpose of cross-organizational workflow, supply chain
management, or Enterprise Application Integration.  The question is
how these distributed and autonomous applications can cooperate. Or
to put it more generally: how is coordination achieved between
autonomous systems? One possible approach is to separate the
coordination aspects from the functionality of the application, and
describe these coordination aspects in the form of "contracts" that
are somehow set up via the Internet, agreed upon and monitored. What
can or should be described in these contracts? What are the benefits
of such an approach, and what are its limitations?

POSSIBLE THEMES:
* Coordination and interoperable transactions
* Planning coordination patterns
* Formal semantics of contracts
* Agent society architectures
* Event-driven coordination languages for distributed applications
* B2B Protocol Standards and Coordination (e.g., ebXML, TPA)
* Coordination and Service Flow Composition
* Modeling control in Cross-Organizational Collaborations
* Tool Support for Coordination-Based Software Evolution
* Theories and models of coordination and collaboration
* Contract monitoring

Please send your abstract in PDF format by email to
 or .

INFORMATION ON Data & Knowledge Engineering
http://www.elsevier.com/locate/datak

Dr. Willem-Jan van den Heuvel           Phone : +31 13 466 2767
InfoLab, Tilburg University             Fax   : +31 13 466 3069
PO Box 90153, 5000 LE Tilburg,
The Netherlands
<http://infolab.uvt.nl/people/wjheuvel>

========================================================================

                     eValid Updates and Details
                     <http://www.e-valid.com>

                 New Download and One-Click Install

You can qualify for a free evaluation of Ver. 4.0, including a "one
click install" process.  Please give us basic details about yourself
at:
<http://www.soft.com/eValid/Products/Download.40/down.evalid.40.phtml?status=FORM>

If the eValid license key robot doesn't give you the EVAL key you
need, please write to us  and we will get an
eValid evaluation key sent to you ASAP!

                     New eValid Bundle Pricing

The most commonly ordered eValid feature key collections are now
available as discounted eValid bundles.  See the new bundle pricing
at:

<http://www.soft.com/eValid/Products/bundle.pricelist.4.html>

Or, if you like, you can compose your own feature "bundle" by
checking the pricing at:

<http://www.soft.com/eValid/Products/feature.pricelist.4.html>

Check out the complete product feature descriptions at:

<http://www.soft.com/eValid/Products/Documentation.40/release.4.0.html>

Tell us the combination of features you want and we'll work out an
attractive discounted quote for you!  Send email to  and be assured of a prompt reply.

               Purchase Online, Get Free Maintenance

That's right, we provide you a full 12-month eValid Maintenance
Subscription if you order eValid products direct from the online
store:

<http://store.yahoo.com/srwebstore/evalid.html>

========================================================================

Workshop on Intelligent Technologies for Software Engineering (WITSE'03)

          9th European Software Engineering Conference and
11th International Symposium on the Foundations of Software Engineering
                          (ESEC/FSE 2003)
                   <http://witse.soi.city.ac.uk>
                September 1, 2003, Helsinki, Finland

The increasing complexity of software systems and a number of recent
advances in the field of computational intelligence (CI) have been
providing a fruitful integration between software engineering (SE)
and intelligent technologies. This is particularly true in the
following CI areas: model checking, fuzzy logic and abductive
reasoning, uncertainty management and belief based reasoning,
artificial neural networks and machine learning, genetic and
evolutionary computing, case-based reasoning; and the following SE
areas: requirements analysis and evolution, traceability, multiple
viewpoints, inconsistency management, human-computer interaction
design, software risk assessment and software verification.

The Workshop on Intelligent Technologies for Software Engineering is
intended to provide a forum for presentation and discussion of a
wide range of topics related to the applicability of new intelligent
technologies to software engineering problems. The aim of this
workshop is to bring together researchers from academia and
industry, and practitioners working in the areas of computational
intelligence and software engineering to discuss existing issues,
recent developments, applications, experience reports, and software
tools of intelligent technologies in all aspects of software
engineering.

We seek contributions addressing the theoretic foundations and
practical techniques related, but not limited, to:

* Intelligent methods of requirements analysis and evolution.
* Machine learning for change management and risk assessment.
* Intelligent approaches for inconsistency management of software
  systems.
* Intelligent architectures for software evolution.
* Intelligent human-computer interaction design.
* Intelligent technologies for traceability management.
* Intelligent techniques for software validation, verification, and
  testing.
* Empirical studies, experience, and lessons learned on applying
  computational intelligence to software development.

General questions concerning the workshop should be addressed to
witse@soi.city.ac.uk.

========================================================================
    ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------
========================================================================

QTN is E-mailed around the middle of each month to over 10,000
subscribers worldwide.  To have your event listed in an upcoming
issue, E-mail a complete description and full details of your Call
for Papers or Call for Participation to .

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should
  provide at least a 1-month lead time from the QTN issue date.  For
  example, submission deadlines for "Calls for Papers" in the March
  issue of QTN On-Line should be for April and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items appearing in QTN represent the
opinions of their authors or submitters; QTN disclaims any
responsibility for their content.

TRADEMARKS:  eValid, SiteWalker, TestWorks, STW, STW/Regression,
STW/Coverage, STW/Advisor, TCAT, and the SR, eValid, and TestWorks
logo are trademarks or registered trademarks of Software Research,
Inc. All other systems are either trademarks or registered
trademarks of their respective companies.

========================================================================
        -------->>> QTN SUBSCRIPTION INFORMATION <<<--------
========================================================================

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to
CHANGE an address (an UNSUBSCRIBE and a SUBSCRIBE combined) please
use the convenient Subscribe/Unsubscribe facility at:

       <http://www.soft.com/News/QTN-Online/subscribe.html>.

As a backup you may send Email direct to  as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:
           subscribe 

   TO UNSUBSCRIBE: Include this phrase in the body of your message:
           unsubscribe 

Please, when using either method to subscribe or unsubscribe, type
the  exactly and completely.  Requests to unsubscribe
that do not match an email address on the subscriber list are
ignored.

               QUALITY TECHNIQUES NEWSLETTER
               Software Research, Inc.
               1663 Mission Street, Suite 400
               San Francisco, CA  94103  USA

               Phone:     +1 (415) 861-2800
               Toll Free: +1 (800) 942-SOFT (USA Only)
               FAX:       +1 (415) 861-9801
               Email:     qtn@sr-corp.com
               Web:       <http://www.soft.com/News/QTN-Online>