sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +===================================================+
         +=======    Quality Techniques Newsletter    =======+
         +=======           February 2003             =======+
         +===================================================+

QUALITY TECHNIQUES NEWSLETTER (QTN) is e-mailed monthly to
subscribers worldwide to support the Software Research, Inc. (SR),
TestWorks, QualityLabs, and eValid user communities and other
interested parties, and to provide information of general use to the
worldwide internet, software quality, and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the
entire document/file is kept intact and this complete copyright
notice appears with it in all copies.  Information on how to
subscribe or unsubscribe is at the end of this issue.  (c) Copyright
2003 by Software Research, Inc.

========================================================================

                       Contents of This Issue

   o  eValid chosen as Jolt Award Finalist

   o  Quality is Not the Goal!, by Boris Beizer, Ph. D.

   o  CT Labs Gives eValid High Scores

   o  CMMI Interpretive Guidance Project

   o  San Jose State University Adopts eValid For Testing Course

   o  23rd Conference on Foundations of Software Technology and
      Theoretical Computer Science (FSTTCS 2003)

   o  SQRL Research Report Abstracts (Reports No. 8 and 9)

   o  eValid Updates and Specials

   o  QTN Article Submittal, Subscription Information

========================================================================

               eValid Chosen as Jolt Award Finalist

For the past dozen years the Software Development Jolt Product
Excellence and Productivity Awards have been presented annually to
products that have "jolted" the industry with their significance and
made the task of creating products, software, and websites faster,
easier and more efficient.

This year, finalists were selected from over 700 nominations in 11
categories by a team of Software Development editors, columnists,
and industry gurus.

"This year the Jolt Awards have evolved to reflect the ever changing
technological landscape by the addition of several new categories to
include tools targeted for Web development, business integration,
testing and management," said Rosalyn Lum, Technical Editor for
Software Development Magazine. "Vendor participation in these new
categories was highly competitive and the fact that eValid is a
finalist in the testing category reflects their company's commitment
to offering the best products in the evolving web quality
marketplace."

"We are very pleased to have the eValid product reach Jolt Award
Finalist Status," said Edward Miller, eValid CEO. "eValid was
created to help improve website quality by enabling web teams to
easily identify and prevent problems and ensure website quality and
performance."

"We think eValid represents a real "jolt" for web masters, web
developers, and web performance engineers because of its superb mix
of functionality, flexibility, and ease of use. We have a really
unique product -- the only one that puts test functionality entirely
inside a fully functioning browser -- and our experience is that
this approach offers a number of technical and ease-of-use
advantages over other less sophisticated desk-top based and client-
server based technologies."

"It is very rewarding to learn that the Jolt Award committed and
judges, and the larger software development community, see the value
eValid can bring to users," Miller concluded.

Jolt Award winners will be announced at a ceremony at the Software
Development West Conference and Exhibition, Wednesday March 26, 2003
at the Santa Clara Convention Center in Santa Clara, California.

========================================================================

                      Quality Is Not The Goal!
                                 by
                        Boris Beizer, Ph. D.

      Note:  This article is taken from a collection of Dr.
      Boris Beizer's essays "Software Quality Reflections" and
      is reprinted with permission of the author.  We plan to
      include additional items from this collection in future
      months.

      Copies of "Software Quality Reflections," "Software
      Testing Techniques (2nd Edition)," and "Software System
      Testing and Quality Assurance," can be obtained directly
      from the author.  Inquire by email:
      .
             o       o       o       o       o       o

                          1.  An Epiphany

My cat Dumbo is very handsome, very sweet, but stupid.  Yet, a
while back, he made an intellectual breakthrough -- he discovered
television.  After fourteen years of seeing TV as a random pattern
of flashing lights, he understood what it was.  Now he's a couch-
potato.  His favorite shows are nature programs that feature birds
and small furtive animals.  One wonders about the synaptic
connection that had been dormant so long and was finally made.  What
triggered his sudden awareness?

Dumbo's intellectual epiphany was on my mind a few weeks later when
I was addressing a group of data processing managers at a big New
York City financial organization.  I've mentioned the cat because it
was his prodigious mental feat that led me to a realization that
much of what I believed about testing and quality assurance was
wrong.  After briefing them on the state of the testing art and
practice, we had a discussion about how test automation could be
implemented in their organization.  A manager asked me: "Give us
some data we can use to support a cost-benefit analysis of software
test automation."  I was about to give her the party line when I
realized that her request was wrong and that if she did the cost-
benefit analysis, rather than enhancing the test automation cause,
she would doom it.  It was a set-up for failure because any analysis
she could do at their stage of development could be discredited by
even a junior accountant.

I had a heavy dose of operations research in the formative years of
my career.  Cost-benefit analyses, trade-offs, and optimization were
second nature to me.  In more than three decades I had never
questioned their appropriateness.  Yet now, I saw how destructive
such tools could be in some circumstances.  So instead of honoring
her request I said: "You don't want to do that -- because you
can't!"

"Well what do you expect our management to do?  Buy-in on faith
alone?"  she countered.

"Precisely!"  I responded.

My answer may have shaken her complacency, but not as much as it
shook mine.  Later that evening on the train back to Philadelphia, I
questioned other paradigms and recognized other new truths about
software quality assurance, testing, and test automation:

    1.  Quality is not the goal.
    2.  Ship the product with bugs.
    3.  The market and the competition are not feature-driven.
    4.  Don't do inter-operability testing.
    5.  Don't try to justify test automation.
    6.  Don't do cost-benefit analyses.
    7.  Speed and efficiency don't matter.

And:

    8.  Forget the schedule.

So let's reexamine our beliefs and maybe see our goals from a
different point of view.

                   2.  Quality Is Not The Goal.

Quality isn't the goal -- quality, however you measure it, is just an
abstract number.  The goal has always been, and is, suitability for
use and, therefore, user satisfaction.  What does it mean to our user
that our software has 0.0001 defects/KLOC when they don't know how
many lines of code we have in the product or how to translate defect
rates into an operationally meaningful metric?  The objective is
profit if you're a software vendor, operating cost if you're a user.
The Japanese understand that, as do contemporary leaders of the
quality movement.  Quality is just one, albeit important, ingredient
that leads to a better bottom line.  The error in treating quality
as a primary goal is that it falls short of the whole solution and
blinds you to the many other things that must be done even if your
product has sufficient quality.

                  3.  Ship the Product With Bugs.

Our software should be shipped with bugs.  That's what we're doing
anyhow.  That's what we've always done, and based on every
theoretical and practical thing we know about testing and QA, that's
what we're going to continue doing as long as we write software.
Instead of perpetuating the search for the unachievable holy grail
of zero-defects and the corresponding myth that we have achieved it,
let's rationalize our objectives based on the presumption that the
software we ship to our users will have bugs.

Shipping products with bugs doesn't mean shipping ill-behaved or
useless software.  It isn't an excuse for slovenly design and
primitive QA practices.  It's a rallying call to further excellence
and a challenge for those of us who would achieve it.  It means
being open with users about the bugs we find rather than hiding
them.  It means sending notices or including the bug list in the
READ.ME files.  It means publishing the work-arounds when we have
them or being honest if we haven't found them yet; it means warning
users that some features are unripe and that there might be danger
in using them; being honest and open about what we have and haven't
yet tested and what we do and don't plan to test in the near future;
publishing industry-wide bug statistics so that we can show users
that we're doing as well as can be expected (if we are, that is);
educating users about bugs and their expectations about bugs.  Most
of all, it means treating users as adults who can, should, and will
make the right risk decisions (for them) if  we give them the facts
on which to base such decisions.

       4.  The Market and Competition Are Not Feature-driven.

Software developers like to stick the blame on users. They claim
that users drive complexity.  They'd like to build, they protest, a
simpler and more robust product, but users always want another
feature.  And if we don't supply it then our competitors will.  This
has an eerie tone of deja vu.  Didn't we hear that one back in the
50's and 60's when Detroit said that Americans didn't want smaller,
fuel-efficient cars; that tail-fins, 5-ton passenger vehicles, 8-
liter engines, and huge chrome bumpers with protruding knobs were
what users wanted?  Proper market surveys never reached Detroit's
board rooms and insulated design studios because if they had,
Japanese auto builders would not dominate the market today.

Sure, users have wish lists whose implementation means more
complexity.  It's a wish list, and wish lists don't take costs and
consequences into account.  Also, the users who do take the trouble
to tell their wishes to us (note, wishes, not needs) don't represent
the user community -- only that small but vocal minority that writes
letters.  Most market studies aren't done rationally by sending
questionnaires to a statistically significant user sample.  Nor do
we tell them what each feature will cost in terms of increased disc
and RAM, reduced throughput, and increased vulnerability to
unforeseen interactions.

Many wish-lists are dreamed up by salesmen at COMDEX who look over
their competitors' software and believe that what they see in a
clever demo is what the users will get and need.  If I wanted to
sink a competitor I'd build a demo for COMDEX with so many snazzy
features that it would make my competitors' eyes bug out and then
laugh all the way to the bank as they dissipated their substance on
building something that I never intended to bring out.

Software products should, must, evolve -- but not so quickly that
the next round of enhancements is introduced before the previous
round has stabilized. We want to respond to the users' needs but we
want to know from them what they want, not from some psychic sales-
type whose market perceptions come from martinis.  How do users
exploit the current features? Measure their usage patterns on-line,
store the results, and give them an incentive such as a free upgrade
to send the data back to you.  Use valid polling methods to get
essential feedback.  Use questionnaires that reflect cost and risk
(e.g., "Would you rather do A or B?").

              5.  Don't Do Inter-operability Testing.

This may seem to be mostly a problem for PC software vendors, but it
affects everyone.  It's because the issues are clearer in PC
software that I'll use it to illustrate the situation.  Your software
must coexist with other software.  How do we test their
compatibility, or rather, their incompatibility?  How do we convince
ourselves that our spreadsheet will work with their word processor?
It seems a reasonable desire but the cost of testing is not.  The
growth of the test suite for compatibility testing, based on the
best-known techniques, follows a square law (if not worse) because
each product must be tested for compatibility with all the other
products.  It's worse because testing them one pair at a time leads
to a square law, but we have to consider them three-at-a-time,
four-at-a-time, and so on for a factorial growth.  We can divide the
problem into three cases, two of which are soluble and one of which
is not.
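
To make the growth argument concrete, here is a back-of-the-envelope
count (not part of the essay): testing n packages pairwise requires
C(n,2) combinations, while testing groups of up to m packages at a
time requires the sum of C(n,k) for k = 2..m, which explodes quickly:

    # Back-of-the-envelope count of compatibility-test combinations.
    from math import comb

    def pairwise(n: int) -> int:
        """Number of two-at-a-time combinations: the 'square law'."""
        return comb(n, 2)

    def up_to_m_at_a_time(n: int, m: int) -> int:
        """Combinations of size 2 through m: grows far faster than n**2."""
        return sum(comb(n, k) for k in range(2, m + 1))

    for n in (10, 20, 50):
        print(n, pairwise(n), up_to_m_at_a_time(n, 4))
    # For n = 50: 1,225 pairs, but 251,125 combinations of size 2 through 4.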

The Virtual Processor Interface.  That's the interface with the
virtual processor over which your application is written -- e.g.,
Windows.  You can't avoid testing that interface -- it will happen
implicitly as you integrate your application.  If you misunderstand
how that interface works, it will be obvious soon enough. Testing
your interface with the virtual processor should not be confused
with testing the virtual processor itself.  If you believe that the
virtual processor needs testing, then it's not much of a virtual
processor and you should think about using a different one if
possible.  Testing the interface with the virtual processor can be
done because you don't have to worry about the interaction of
everything with everything else.

The Explicit Interfaces.  Most applications interface with other
applications, either directly or, more usually, through import/export
options.  Those interfaces must be tested, much as the operating
system or network interfaces must be tested.  The situation is
similar to testing the virtual processor interface, and the test
suite growth law is linear.

Inter-operability Testing.  True inter-operability testing concerns
testing for unforeseen interactions with other packages with which
your software has no direct connection.  In some quarters, inter-
operability testing labor equals all other testing combined.  This
is the kind of testing that I say shouldn't be done because it can't
be done.

What can possibly be wrong with testing our package's compatibility
with, say, its 10 most likely coexisting packages?  Several things.

a.   The package that displays the symptom is often the last one
loaded and not the package that caused the problem.

b.   Coexistence isn't enough.  It also depends on load order, how
virtual space is mapped at the moment, hardware and software
configurations, and the history of what took place hours or days
before.  It's probably an exponentially hard problem rather than a
square-law problem.

c.   It's morally wrong.  These interactions come about because
somebody (hopefully not you) is breaking virtual processor rules.
It's an expensive vigilante approach to testing where everyone is
spending many bucks trying to find a wrong-doer.  Testing for this
condones the rule breakers because if they're found, we're likelier
to build in protection against them (and yet more complexity) than
to take them to task for their error.

d.   It gives you a false sense of security.  You've tested hard
against the popular neighbors and you therefore feel that your
software won't be vulnerable.  That feeling is unwarranted.  How
hard you've tested has little to do with the chance there will be a
bad interaction.  Even vast sums spent on this kind of testing don't
give you the right to make statistical inferences about the
probability of successful coexistence.  I'd rather see you worried
about an unsolved (and probably unsolvable) problem than be
vulnerable and feel secure.

Instead of testing against an ever-growing list of potential bad
neighbors, spend the money on better testing of your virtual
processor and explicit interfaces so that you can be confident that
if there is an interaction problem you aren't to blame even if it
was your program in which the symptoms were first exhibited.  Buy in
to the virtual processor compatibility test suites that Microsoft is
promoting for Windows and Novell for its NetWare.  Support -- no,
demand -- that your virtual processor supplier create similar
comprehensive, public compatibility test suites.

If useful inter-operability testing is to be done, then it can only
be done as a cooperative venture of all the potentially interacting
parties.  This can be within your organization, user groups,
industry groups, etc.  The current approach of endlessly redesigning
private inter-operability test suites is expensive and doesn't work.

             6.  Don't Try To Justify Test Automation.

Being asked, as we so often are, to justify the cost and disruption
of automation has it backwards.  What must be justified is not
automating when it is possible to do so.  We are, in the third
century of the industrial revolution, still fighting the battles the
mill owners fought against the Luddites.  Is there anything more
bizarre than the idea that anyone in the computer business, which
after all is the soul of automation, should do something by hand
when it can be automated?  What are computers for?  People who ask
us to justify automated test methods don't really believe in
computers. They have rejected four decades of development.  They're
stuck in the 18th century. We've tried to convince them with reason,
with analysis, and with an unending stream of examples starting with
the assembler and higher order languages and yet they still have to
be convinced.  Maybe it's time to try ridicule.

                7.  Don't Do Cost-benefit Analyses.

Doing a cost-benefit analysis presumes that your process is under
control, that you know the costs, and that you can determine the
benefits.  For most software developers today, none of those
preconditions are valid and therefore, any putative cost-benefit
analyses based on such shaky foundations are at best science
fiction.  If you can't predict costs, schedules, and bug densities
on the projects you now do with your imperfect process and tools,
how can you hope to make valid predictions of the impact of new
tools and methods?  If you don't have true life-cycle costs for
software so detailed that you can ascribe individual service
instances to individual routines, then how can you even start the
cost part of the cost-benefit analysis?  Does your software cost
include everything?  Initial development costs?  Lost business?  The
hot line and service costs? Installation expenses?  And most
important, the operational cost to your user?  Do you do any kind of
bug analysis and feed that information back as part of a
comprehensive QA process?  Do you know what your "apprenticeship"
learn-it-on-the-job, so-called training is costing you now?  Have
you figured in the cost of a high turnover rate among programmers
and testers who are tired of working 60-hour weeks month in and
month out?

Until your process is under control and you know enough about it to
make meaningful measurements of costs and benefits, any formal
analysis is worthless. You don't have the informational
infrastructure needed for the job.  And how to get to the point
where you can do the analysis?  By adopting the methods and
techniques that, among other things, help you bring the process under
control.  The very methods, techniques, and tools whose
justification prompted the analysis in the first place.

               8.  Speed and Efficiency Don't Matter.

Speed and efficiency have probably driven more bad designs than any
other goal.  We believe, or we believe our users believe, that speed
is valued above all.  That seems appropriate because weren't
computers invented to do things much faster than they could be done
by hand?  The trouble with speed as an objective is that once you
adopt a good language with an optimizing compiler and use the best
algorithms appropriate to the task, there's not much you can do to
improve the execution speed of a program without doing violence to
the operating environment.  If speed can be improved by a better
optimizer or algorithm and if it's statistically noticeable, and if
it seems to make the user like your software more, and if it can be
done with low risk -- many ifs -- then do it.

Unfortunately, in the mature processing environment, be it mainframe
or PC, often the only way to improve apparent speed is to short-cut
the operating system; that is, to bypass the virtual processor that
interfaces your application to the system.  In the PC world, for
example, an application written to run under Windows sees Windows as
the virtual processor. If you execute a direct I/O (bypassing the
BIOS, say), then you may have short-cut the virtual processor and
your apparent speed increase has been obtained at great risk. It's
not one virtual processor but a hierarchy of virtual processors,
each of which has made assumptions about how it will interface with
the virtual processor above and below it in the hierarchy.  You
can't know what those assumptions are unless you're privy to
the documentation and what's inside the designer's mind.  That's a
risky position at best and a catastrophic one at worst.

"But the user demands speed,"  you say.  No!  The user wants net
productivity after you add the down-time and interruption caused by
your chancy approach to the processing time.  The chancy approach is
only apparently better because you didn't count the time lost by
bugs and recovery.  It's tax time as I write this essay and like
many of you I use a PC-based tax package.  I have a fast computer,
but it still takes a long time to unzip the seemingly infinite
number of tax forms the government dictates we use.  Based on
operational evidence gathered over a frustrating series of
installation attempts, here's what this vendor cost me by his
attempt to provide a fast installation that bypassed the Windows
virtual processor.

             o       o       o       o       o       o

Install time without breaking virtual processor rules 30 minutes

Install time as written                               12 minutes

Apparent net gain (benefit)                           18 minutes

First failed installation attempts                    12 minutes

Boot up with dumb AUTOEXEC.BAT and CONFIG.SYS         03 minutes

Second failed installation attempt                    12 minutes

Rework AUTOEXEC.BAT and CONFIG.SYS                    11 minutes

Boot up with dumber AUTOEXEC.BAT and CONFIG.SYS       03 minutes

Third failed installation attempt                     12 minutes

First call to customer service (mostly waiting)       15 minutes

Their time                                            05 minutes

My phone cost                                         $4.53

Set Screen resolution to 640x400                      06 minutes

Boot up with dumbest AUTOEXEC and CONFIG              07 minutes

Fourth attempted installation                         16 minutes

Second call to customer service                       15 minutes

Their time                                            08 minutes

Phone cost                                            $5.75

Their call back with APPS.INI conjecture and research 19 minutes

Their time                                            65 minutes

Phone cost (theirs)                                   $6.75

Find bad APPS.INI file and correct                    16 minutes

Do disc check and find other corrupted files          22 minutes

Restore entire system from tape                       47 minutes

Fifth install attempt (successful)                    16 minutes

Restore normal configuration                          12 minutes

Run another disc check                                22 minutes

Follow-up call from customer service                  12 minutes

Their time                                            12 minutes

Phone cost (theirs)                                   $4.20

My time cost:
  278 minutes @ $135/hour = $625.50 + tel $10.28 = $635.78

Their time cost:
  90 minutes @ $65/hour = $97.50 + tel $10.95 = $108.45

             o       o       o       o       o       o

The package sells for $50.00 mail order, and let's assume that they
make a profit of $10.00 per package.  They will have to sell about 11
more packages to recover their costs on this one incident, and if my
costs are taken into account, the 18 minutes "saved" were equivalent
to about 64 packages.  I would have preferred a slow but sure
installation -- and so should they!!
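
For the record, here is the arithmetic behind those totals,
reconstructed as a short script (the minute and dollar figures are
taken directly from the list above; the $10 profit per package is the
assumption stated in the text):

    # Arithmetic check of the installation-cost figures quoted above.
    my_minutes, their_minutes = 278, 90           # totals from the list
    my_rate, their_rate = 135.0, 65.0             # dollars per hour
    my_phone = 4.53 + 5.75                        # my two calls
    their_phone = 6.75 + 4.20                     # their calls

    my_cost = my_minutes / 60 * my_rate + my_phone              # 625.50 + 10.28
    their_cost = their_minutes / 60 * their_rate + their_phone  # 97.50 + 10.95

    profit_per_package = 10.00                    # assumed in the text
    print(round(my_cost, 2), round(their_cost, 2))        # 635.78 108.45
    print(round(their_cost / profit_per_package, 1))      # ~10.8 packages
    print(round(my_cost / profit_per_package, 1))         # ~63.6 packages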

Let's educate ourselves and our users.  Let's put in a turbo-option
to take care of the speed freaks who want instantaneous speed,
consequences be damned.  Give the user a switch to select either the
fast-but-risky mode or the slow (actually, operationally faster)
but safe mode.  Usually, this can be accomplished by encapsulating
the risky stuff in a special driver or subroutines.  Because risk
may depend on configuration (i.e., what other applications are
running, hardware and operating system parameters, file structures,
and so on) some users may find that the turbo-mode is safe for them
and faster.  Instead of dictating the risk posture to the user,
we've made him a partner to our design and given him the option to
choose.
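
A minimal sketch of the switch being proposed (the names and
structure here are illustrative, not from the essay): confine the
rule-bending code to one place and let the user pick the risk
posture.

    # Illustrative sketch of a user-selectable safe/turbo mode switch.
    def install_safe() -> None:
        """Slow but safe path: stays within the virtual processor's rules."""
        print("installing via documented OS services...")

    def install_turbo() -> None:
        """Fast but risky path: all short-cuts are confined to this routine."""
        print("installing via direct short-cuts (user accepted the risk)...")

    def install(turbo: bool = False) -> None:
        # The user, not the vendor, chooses the risk posture.
        (install_turbo if turbo else install_safe)()

    install(turbo=False)   # default: safe, and often operationally faster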

                      9.  Forget The Schedule.

Every software builder claims to be schedule-driven. Either it's
COMDEX, the year-end audit, the competitor's release, or the vernal
equinox.  We know how harmful to quality driving development by
schedules is, yet it is claimed that this is the prevailing
operative mode of the software industry.  A hypothetical man-from-
Mars would see it differently.  He would see detailed schedules
drawn up prior to the start of, say, a six-month project, and, as
the project evolved over the next eighteen months and one milestone
after another slipped, the man from Mars would see yet another
prolongation of the supposedly inflexible schedule.

How can you claim to be schedule-driven if you've yet to meet a
schedule?  At what point do we decide that the fault isn't in
developers, testers, or QA and their failure to work diligently (60
hours a week or more) and conclude that the fault is in the schedule
makers (who are rarely the ones who have to execute the schedule)
and the scheduling process?  We have excellent models and tools for
doing rational scheduling.  Most groups in the software
development game have been through enough projects to warrant using
past experience as a basis for projecting the future.  Chronic
schedule slips are a consequence of bad planning, ignoring
experience, unrealistic dreams about the absence of bugs, and just
plain glory hounding -- because in our topsy-turvy sense of values,
there's no glory in finishing calmly on time, only in snatching
defeat from the jaws of victory at the last possible moment.  Once
you start meeting (realistic) schedules as a regular thing, it's
time enough to think about the dollar value of schedule slippages
and the extent to which schedules should be allowed to drive your
software development.

Let's say that there are commercial realities, such as manufacturing
and distribution, that mean there really are inflexible delivery
points.  If you're going to change from what is today, in reality, a
panic-driven rather than a schedule-driven mode, there is an
alternative.  You prioritize the features you will implement and
also the testing of those features.  If it is not going to be
possible to fully test a given feature, then the feature is disabled
in the inflexibly scheduled next release.  With easy downloads
available on the Internet, this option is not only honest but also
effective.

                          10.  What To Do.

It seems like an impasse, doesn't it?  A Catch-22.  A chicken-and-
egg situation.  There's an easy way out called "faith." Faith that
your software development process and problems aren't different from
those of your competitors or others in your line of business.  Faith
that what has demonstrably worked for one group can and will work
for yours if given the chance.  Faith that the central ideas of the
quality movement apply to software as much as they apply to widgets.
Faith that if you don't do it, your software development entity will
eventually be replaced by one that does.
unreasoned faith in some Guru or another who makes a living
preaching the quality gospel.  I mean reasoned faith based on the
experience of others:

1.   Get Benchmarked.  Get information from a structurally similar
organization that is a few years ahead of you in the process.  The
best possible source is a willing competitor.  One tenet of the
quality movement is the imperative to share experiences with others,
including competitors.  It's no small part of the open secret of
Japan's success.  Ask the tool vendors to put you in touch with
successful groups. It's cheaper than hiring a consultant.

2.   Get Evaluated.  Get your testing and QA practices evaluated by
a professional outfit such as the Software Engineering Institute to
get a realistic appraisal of where you stand compared to where you
could be and compared to where the leaders are.

3.   Do the Baldrige Award Exercise.  Get the Baldrige Award booklet
(from the Department of Commerce) and go through it as if you were
actually going to go for it.  Just seeing what information you now
lack is an important part of the process.  Note that if you don't
know the answer to a question in that booklet and/or don't know how
to get the answer, then the correct score is probably zero.

4.   Start!  You can do one study after another, one implementation
plan after another, and yet another justification memo, but all
that's an excuse for maintaining the status quo.  Start!
And remember that forgiveness is easier to get than permission.

========================================================================

                  CT Labs Gives eValid High Scores

Summary:  CT Labs, a highly respected technical evaluator of
hardware and software, has completed a detailed review of eValid.
Here's their Executive Summary:

    "CT Labs found the eValid product to be a powerful tool for
    testing websites. Because each browser instance looks like a
    real user and behaves like one, it can test anything that
    can be browsed. The ability to use ActiveX controls, easily
    customize scripts to look like multiple users, and to do
    this on a single machine is impressive."  "CT Labs verified
    the load created by eValid to be a real server load -- not
    just downloads off the base web pages as do some competing
    products. The number of simulated users one can create with
    eValid is limited only by the memory and CPU of the client
    machine being used."

    "Overall, CT Labs found eValid to be powerful, intuitive,
    and flexible -- users will only need to remember to scale
    their server resources appropriately to suit their testing
    density needs."

                              Scoring

As part of their evaluation, CT Labs scored eValid on a number of
different aspects, as shown in the table below (maximum possible
score 10):


      Test Category                       Score

      Ease of installation                 9.5
      Documentation and Integrated Help    9.0
      User interface ease of use           9.5
      Product features                     9.0
      Functionality                        9.0

                           About CT Labs

CT Labs is an independent, full-service testing and product analysis
lab focused exclusively on Converged Communication (CC), IP
Telephony (IPT), and Graphical User Interfaces (GUIs).  Read the
Complete CT Labs Report (PDF) at:
<http://www.soft.com/eValid/Promotion/Articles/CTLabs/Review.2003.pdf>.
Read about CT Labs at:  <http://www.ct-labs.com>.

========================================================================

                CMMI Interpretive Guidance Project

Since the release of Version 1.1 of the Capability Maturity Model
[R] Integrated (CMMI [R]) Product Suite, the Software Engineering
Institute (SEI [SM]) has been working to aid the adoption of CMMI.
Currently, a project is underway to understand how CMMI is being
used by software, information technology (IT), and information
systems (IS) organizations.

Rather than presuming that we know the real issues, the SEI
continues to collect information from software, IT, and IS
professionals like you. We are collecting this information in the
following ways:

  * Birds-of-a-feather sessions at conferences
  * Workshops at SPIN (software process improvement network) meetings
  * An expert group
  * Detailed interviews with select software organizations
  * Feedback from SCAMPI(SM) appraisals
  * Web-based questionnaire

Your involvement is critical to the success of this effort. We hope
that you are able to participate in one of these activities.

The SEI is holding birds-of-a-feather (BoF) sessions and SPIN
workshops.  The purpose of these sessions is to identify the areas
of CMMI that may need further interpretation or guidance based on
the participants' experience with CMMI.  Upcoming sessions are
scheduled on February 24 and 26 at the SEPG(SM) Conference in
Boston; March 7, at the Southern California SPIN; and March 10 at
the San Diego SPIN.  If you plan to attend, download and complete
the background questionnaire and outline available on the SEI
Website at:
<http://www.sei.cmu.edu/cmmi/adoption/interpretiveguidance.html>

The SEI is establishing an expert group to represent the CMMI
adoption concerns of the software, IT, and IS communities, and to
provide advice and recommendations to the Interpretive Guidance
project. A call for nominations is attached to this announcement and
is available on the SEI Website at:
<http://www.sei.cmu.edu/cmmi/adoption/expertgroup.pdf>

Detailed interviews are being conducted with select software
organizations that have implemented CMMI. We are also conducting
detailed interviews with organizations that have considered CMMI but
have not chosen to adopt it. If you are interested in participating
in this activity, please contact us at cmmi-software@sei.cmu.edu.

The SEI is also encouraging SCAMPI Lead Appraisers(SM) to send
feedback about model interpretation issues they encountered during
appraisals.

Finally, in March the SEI will distribute an interpretive guidance
Web-based questionnaire that will enable a larger number of people
who may not have been able to participate in any of the other
activities to share their issues, concerns, and experiences.

As you can see, there are many ways for you to participate in this
project. For more information about interpretive guidance related
events, see the Interpretive Guidance Web page at
<http://www.sei.cmu.edu/cmmi/adoption/interpretiveguidance.html>

If you have questions, send e-mail to cmmi-software@sei.cmu.edu or
contact Mary Beth Chrissis at 412-268-5757. We hope you can
participate in this project.

             o       o       o       o       o       o

The Software Engineering Institute (SEI) is a federally funded
research and development center sponsored by the U.S. Department of
Defense and operated by Carnegie Mellon University.

========================================================================

    San Jose State University Adopts eValid For Testing Course

Located in the heart of Silicon Valley, San Jose State University's
Dr. Jerry Gao decided it was time for the Computer Engineering
Department to train its body of 1400 graduate students and 1500
undergraduate students on the new Internet technology.  This semester
he started a class on Software Quality and Testing and set up an
Internet Technology Laboratory for their assignments.

"Internet Technology is one of the hottest fields in todays'
information community. As it is becoming the main stream in many
applications areas, there is a strong demand in Silicon Valley for
engineers who master updated skills in the latest internet
knowledge. Experts believe that it will cause a lot of impact on
various aspects of software engineering, including software process,
reuse, software development and software engineering methodology and
practice. Jose State University is a major provider for engineers in
the valley, so our school is well placed to become a world-class
institute on selected Internet technologies and on spreading this
knowledge to serve the Silicon Valley industry," said Dr. Gao.

Twenty undergraduate students and over sixty graduate students
signed up for his class and lab work. The major research work
focuses on engineering methodologies for web-based software
applications and engineering Internet for global software
production.

Software Research is proud to announce that Dr. Jerry Gao chose
eValid for educating his students on the latest Internet technology.
eValid's novel technology approach and intuitive methods made it the
natural tool for training his students on their assignments in the
Internet Lab Tutorials.

For more details on the Software Testing Course and the Internet
Technology Lab, please go to Prof. Jerry Gao's Home Page at:
<http://www.engr.sjsu.edu/gaojerry>

========================================================================

       23rd Conference on Foundations of Software Technology
                  and Theoretical Computer Science
                            (FSTTCS '03)

                       December 15--17,  2003
           Indian Institute of Technology, Bombay, India

The Indian Association for Research in Computing Science, IARCS,
announces the 23rd Annual FSTTCS Conference in Mumbai.

The FSTTCS conference is a forum for presenting original results in
foundational aspects of Computer Science and Software Technology.
The conference proceedings are published by Springer-Verlag as
Lecture Notes in Computer Science (LNCS).

Representative areas of interest for the conference include:

   Automata, Languages and Computability
   Automated Reasoning, Rewrite Systems, and Applications
   Combinatorial Optimization
   Computational Geometry
   Computational Biology
   Complexity Theory
   Concurrency Theory
   Cryptography and Security Protocols
   Database Theory  and Information Retrieval
   Data Structures
   Graph and Network Algorithms
   Implementation of Algorithms
   Logic, Proof Theory, Model Theory and Applications
   Logics of Programs and Temporal Logics
   New Models of Computation
   Parallel and Distributed Computing
   Programming language design
   Randomized and Approximation Algorithms
   Real-time and Hybrid Systems
   Semantics of Programming Languages
   Software Specification and Verification
   Static Analysis and  Type Systems
   Theory of Functional and Constraint-based Programming

Contact: FSTTCS
School of Technology and Computer Science
Tata Institute of Fundamental Research
Mumbai 400 005
India

fsttcs@tifr.res.in
Phone: +91 22 2215 2971
Fax:   +91 22 2215 2110

========================================================================

                         SQRL Report No. 8

          Right on Time: Pre-verified Software Components
               for Construction of Real-Time Systems
                                 by
                    Mark Lawford and Xiayong Hu

                             Abstract:
We present a method that makes use of the theorem prover PVS to
specify, develop and verify real-time software components for
embedded control systems software with periodic tasks. The method is
based on an intuitive discrete time "Clocks" theory by Dutertre and
Stavridou that models periodic timed trajectories representing
dataflows. We illustrate the method by considering a Held_For
operator on dataflows that is used to specify real-time
requirements. Recursive functions using the PVS TABLE construct are
used to model both the system requirements and the design. A
software component is designed to implement the Held_For operator
and then verified in PVS.  This pre-verified component is then used
to guide design of more complex components and decompose their
design verification into simple inductive proofs. Finally, we
demonstrate how the rewriting and propositional simplification
capabilities of PVS can be used to refine a component-based
implementation to improve performance while still guaranteeing
correctness. An industrial control subsystem design problem is used
to illustrate the process.
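
As an informal aside (this is only an illustration of the idea, not
the PVS formalization or the Clocks theory used in the report), a
Held_For operator over a periodically sampled boolean dataflow can be
read as "the input has been continuously true for at least the given
duration":

    # Informal sketch of a Held_For operator on a sampled boolean dataflow.
    from typing import List

    def held_for(samples: List[bool],
                 period: float, duration: float) -> List[bool]:
        """True at step i iff the input has been True for >= duration."""
        needed = max(1, round(duration / period))  # consecutive samples required
        out, run = [], 0
        for s in samples:
            run = run + 1 if s else 0
            out.append(run >= needed)
        return out

    print(held_for([False, True, True, True, False, True],
                   period=0.1, duration=0.3))
    # -> [False, False, False, True, False, False]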

                         SQRL Report No. 9

 Robust Nonblocking Supervisory Control of Discrete-Event Systems
                                 by
        Sean E. Bourdon, Mark Lawford and W. Murray Wonham

Abstract:  In the first part of this paper, we generalize a notion
of robust supervisory control to deal with marked languages.  We
show how to synthesize a supervisor to control a family of plant
models, each with its own specification. The solution we obtain is
the most general in that it provides the closest approximation to
the supremal controllable sublanguage for each plant/specification
pair. The second part of this paper extends these results to deal
with timed discrete-event systems.

The web address for downloading reports is:
<http://www.cas.mcmaster.ca/sqrl/sqrl_reports.html> Please contact:


========================================================================

                    eValid Updates and Specials
                     <http://www.e-valid.com>

               Purchase Online, Get Free Maintenance

That's right, we provide you a full 12-month eValid Maintenance
Subscription if you order eValid products direct from the online
store:  <http://store.yahoo.com/srwebstore/evalid.html>

                 New Download and One-Click Install

Even if you already got your free evaluation key for Ver. 3.2, we
have reprogrammed the eValid key robot so you can still qualify for
a free evaluation for Ver. 4.0.  Please give us basic details about
yourself at:
<http://www.soft.com/eValid/Products/Download.40/down.evalid.40.phtml?status=FORM>

If the key robot doesn't give you the keys you need, please write to
us  and we will get an eValid evaluation key
sent to you ASAP!

                     New eValid Bundle Pricing

The most-commonly ordered eValid feature key collections are now
available as discounted eValid bundles.  See the new bundle pricing
at:  <http://www.soft.com/eValid/Products/bundle.pricelist.4.html>

Or, if you like, you can compose your own feature "bundle" by
checking the pricing at:

<http://www.soft.com/eValid/Products/feature.pricelist.4.html>

Check out the complete product feature descriptions at:

<http://www.soft.com/eValid/Products/Documentation.40/release.4.0.html>

Tell us the combination of features you want and we'll work out an
attractive discounted quote for you!  Send email to  and be assured of a prompt reply.

========================================================================
    ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------
========================================================================

QTN is E-mailed around the middle of each month to over 10,000
subscribers worldwide.  To have your event listed in an upcoming
issue, E-mail a complete description and full details of your Call
for Papers or Call for Participation to .

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should
  provide at least a 1-month lead time from the QTN issue date.  For
  example, submission deadlines for "Calls for Papers" in the March
  issue of QTN On-Line should be for April and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items appearing in QTN represent the
opinions of their authors or submitters; QTN disclaims any
responsibility for their content.

TRADEMARKS:  eValid, STW, TestWorks, CAPBAK, SMARTS, EXDIFF,
STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR logo are
trademarks or registered trademarks of Software Research, Inc. All
other systems are either trademarks or registered trademarks of
their respective companies.

========================================================================
        -------->>> QTN SUBSCRIPTION INFORMATION <<<--------
========================================================================

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to
CHANGE an address (an UNSUBSCRIBE and a SUBSCRIBE combined), please
use the convenient Subscribe/Unsubscribe facility at:

       <http://www.soft.com/News/QTN-Online/subscribe.html>.

As a backup you may send Email direct to  as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:
           subscribe 

   TO UNSUBSCRIBE: Include this phrase in the body of your message:
           unsubscribe 

Please, when using either method to subscribe or unsubscribe, type
the  exactly and completely.  Requests to unsubscribe
that do not match an email address on the subscriber list are
ignored.

               QUALITY TECHNIQUES NEWSLETTER
               Software Research, Inc.
               1663 Mission Street, Suite 400
               San Francisco, CA  94103  USA

               Phone:     +1 (415) 861-2800
               Toll Free: +1 (800) 942-SOFT (USA Only)
               FAX:       +1 (415) 861-9801
               Email:     qtn@sr-corp.com
               Web:       <http://www.soft.com/News/QTN-Online>