From: Tim Bradshaw
Subject: Continuity (Engineering envy...)
Date: 
Message-ID: <ey38zhyc591.fsf@cley.com>
I can't even find a relevant article in the engineering envy thread
now, so I'm going to start another one.

One of the things that I (I think) said somewhere back at the start
was that SW `engineering' is ludicrously bad compared to mechanical
engineering.  I think a lot of the reason for this is to do with
continuity, or the lack of it.

A physical system, like a bridge or a car, has continuous relationships
amongst its parts.  A small error in a part will produce a small error
in the whole, in most cases, and the errors are continuously related.
This gives you huge power when designing these systems - you can do
all sorts of error analysis to tell you how good the parts need to be
to make the whole as good as you need it to be.

Of course there are problems - nonlinear systems (which most are) have
now-well-known problems where, in some regions of the state space,
tiny errors can cause exponential blow-up in other errors with
resulting instability and chaos (in the chaos-theory sense).  I
suspect that a significant issue in the engineering design of such
systems is making sure that they are a long way away from such bad
bits of the state space.

But for the most part systems have fairly smooth relationships amongst
their parts which let you use a huge mathematical machinery to reason
about them.

Software isn't like this.  Exaggerating slightly: in software most of
the smallest errors that can happen have catastrophic consequences.  A
single-bit error (the smallest error that can happen) will often cause
catastrophic failure.  

I think this is why SW engineering has such trouble: if you want to
get a software system correct, it has to be *exactly* correct.  It's
not enough to say that this array access doesn't overstep or understep
the array bound by more than a byte or two, it has to be exactly
right.  You can't do any kind of approximation at all.  Things aren't
quite this bad, because the state space of a software system is very
small compared to that of a physical system, and of course some bits
of it really don't matter.  But the state space is not small compared
to the smoothed-out state-space of a physical system that you actually
have to deal with, and it has really bad characteristics in terms of
tiny errors having huge consequences: you can't really smooth out the
system to make reasoning about it tractable.

This is why I was kind of rude about formal proofs: I don't think that
reasoning about a state space that is large and where tiny errors
matter is something that you can really do.  It's certainly not
something that engineers do: they spend a lot of time making
approximations which reduce the size of the state space, and then
using smoothness to make everything tractable to model.  And sometimes
they get it wrong, too.

Languages like Lisp actually help you a good deal here.  For instance
if I have some part of my program which deals with mashing some
arrays, then I know, providing the Lisp system is not buggy, that this
bit of the program is not going to write beyond the array bounds, or
follow wild pointers &c. In C I have to prove that.  But I don't think
that they really help much in terms of how much help is needed to make
things really tractable.  What you need is continuity, or something
like it.
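
As a toy sketch of the difference (exactly what gets signalled, and
whether unsafe compiled code checks at all, varies with implementation
and safety settings):

  (let ((a (make-array 3 :initial-element 0)))
    ;; The bad access is trapped at the point of error...
    (handler-case (setf (aref a 10) 99)
      (error (c) (format t "trapped: ~A~%" c))))

  ;; ...where the C equivalent, a[10] = 99, quietly scribbles on
  ;; whatever happens to live after the array.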

--tim

(I've used `continuous' and `smooth' in very sloppy ways above.
I mean something like `C-sufficient sufficiently-almost-everywhere',
in the usual sloppy physics way.)

From: Wade Humeniuk
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <9iddg0$eqg$1@news3.cadvision.com>
>
> Software isn't like this.  Exaggerating slightly: in software most of
> the smallest errors that can happen have catastrophic consequences.  A
> single-bit error (the smallest error that can happen) will often cause
> catastrophic failure.

Interesting post Tim.  Reminds me that once there were analog computers.
Whatever happened to them?  I assume they did not fail because of the
smallest error, because there were no bits.

Wade
From: Christopher Stacy
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <ur8vpwklt.fsf@spacy.Boston.MA.US>
>>>>> On Mon, 9 Jul 2001 17:08:28 -0600, Wade Humeniuk ("Wade") writes:
 >> Software isn't like this.  Exaggerating slightly: in software most of
 >> the smallest errors that can happen have catastrophic consequences.  A
 >> single-bit error (the smallest error that can happen) will often cause
 >> catastrophic failure.

 Wade> Interesting post Tim.  Reminds me that once there were analog computers.
 Wade> Whatever happened to them?  I assume they did not fail because of the
 Wade> smallest error, because there were no bits.

The autopilots in (at least) small airplanes are analog computers.
From: Wade Humeniuk
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <9ie21m$m4v$1@news3.cadvision.com>
> The autopilots in (at least) small airplanes are analog computers.

Which reminds me,

I remember in university one professor saying that there were "fluidic"
computers used in some aircraft control systems.  Basically blocks of metal
with various channels that "programmed" the control of some flight surfaces.

Wade
From: Paolo Amoroso
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <3chKO9Pzs3W5rxV55aClaLsjLb1k@4ax.com>
On Mon, 9 Jul 2001 17:08:28 -0600, "Wade Humeniuk" <········@cadvision.com>
wrote:

> Interesting post Tim.  Reminds me that once there were analog computers.
> Whatever happened to them?  I assume they did not fail because of the

An old planetarium, such as a Zeiss II or IV, may be considered an analog
computer, and it even has a GUI :) The astrolabe is probably another kind
of analog computer.


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: Alain Picard
Subject: Off-topic: Analog computers [was Continuity (Engineering envy...)]
Date: 
Message-ID: <86pub9kmmi.fsf_-_@gondolin.local.net>
"Wade Humeniuk" <········@cadvision.com> writes:

> Interesting post Tim.  Reminds me that once there were analog computers.
> Whatever happened to them?  I assume they did not fail because of the
> smallest error, because there were no bits.
> 

Off topic alert:

There is an analog computer at Palomar Observatory which controls
the position of the dome, relative to the telescope.  This is done
by having a scale model of the dome/telescope, in which the telescope
tube extends beyond the slit of the dome.  The model telescope is slaved
to the real telescope, and the real dome is slaved to the model dome.
When the real telescope approaches the opening of the slits too closely,
the model telescope touches the model dome, and moves it out of the
way, thus pushing on the real dome.  Very elegant.

Here's the good bit:

Modern systems, of course, do all this in software.  (Palomar was
opened in '49, if memory serves, but most of it was designed a decade
before.)  This software tends to have a problem: if the telescope is
trying to slew through zenith (i.e. straight up), the software needs
to divide by sin(0) - that is, by zero - and it blows up.  This
reflects the fact that domes cannot physically slew 180 degrees in
zero seconds.

The analog computer has the SAME PROBLEM: if the telescope is slewing through
zenith, it gets stuck pushing on the model dome, which doesn't know which
way to rotate, and a human must go correct the failure.

So maybe even analog computers can have software bugs, which are present
due to the non-ideal properties (e.g. having nonzero mass) of the real world.


This has nothing to do with lisp, but considering another thread
currently running, I may perhaps be forgiven the digression...


-- 
It would be difficult to construe        Larry Wall, in  article
this as a feature.			 <·····················@netlabs.com>
From: Tim Bradshaw
Subject: Re: Off-topic: Analog computers [was Continuity (Engineering envy...)]
Date: 
Message-ID: <ey3k81gc261.fsf@cley.com>
* Alain Picard wrote:

> So maybe even analog computers can have software bugs, which are present
> due to the non-ideal properties (e.g. having nonzero mass) of the real world.

I think this is a wonderful example of the kind of failure that you
get in continuous systems - there is actually some kind of physical
problem there as far as I can tell, and lo and behold the thing blows
up there because of some ill-definedness in the model...  If only
these kinds of problems were the worst things that afflicted SW!

--tim
From: Larry Loen
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <3B5450E3.353832FD@rchland.vnet.ibm.com>
Wade Humeniuk wrote:

> Interesting post Tim.  Reminds me that once there were analog computers.
> Whatever happened to them?  I assume they did not fail because of the
> smallest error, because there were no bits.
>
> Wade

They failed because no one could figure out how to do nontrivial things with them.  If you go to a museum you see stuff like mechanical calculation of
humidity, some very crude slide-rule style multipliers, and that's about it.

Maybe someone will get rich figuring out how to do more with them, but that's so far about the lot.

I suppose you could argue that certain types of peripheral chips (e.g. Analog to Digital converters) are analog computers, but even this suggestion
shows how limited the technology's role has remained and one could argue whether even that is a valid example.  I certainly never recall seeing an
analog computer that was a Turing machine.


Larry
From: Tim Bradshaw
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <ey3ofqjd0kw.fsf@cley.com>
* Larry Loen wrote:
> They failed because no one could figure out how to do nontrivial
> things with them.  If you go to a museum you see stuff like
> mechanical calculation of humidity, some very crude slide-rule style
> multipliers, and that's about it.

Well, not really.  People targeted naval guns with them.  Cars until
recently used simple analogue computers to arrange for the right
amount of fuel to get to the engine.  Lots of useful stuff.

> I certainly never recall seeing an analog computer that was a Turing
> machine.

I don't think the concept is really relevant.

--tim
From: Marcin Tustin
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <9j2130$sot$1@newsg4.svr.pol.co.uk>
Tim Bradshaw <···@cley.com> wrote in message
····················@cley.com...
> * Larry Loen wrote:
> > They failed because no one could figure out how to do nontrivial
> > things with them.  If you go to a museum you see stuff like
> > mechanical calculation of humidity, some very crude slide-rule style
> > multipliers, and that's about it.
>
> Well, not really.  People targeted naval guns with them.  Cars until
> recently used simple analogue computers to arrange for the right
> amount of fuel to get to the engine.  Lots of useful stuff.

    People built high-quality differential analysers in Meccano.  Of
course, a Turing machine is a discrete device.
From: Larry Loen
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <3B561000.65FDEFAB@rchland.vnet.ibm.com>
Tim Bradshaw wrote:

>
> > I certainly never recall seeing an analog computer that was a Turing
> > machine.
>
> I don't think the concept is really relevant.
>
> --tim

To this discussion it is.  I read the original implication as sort of "why don't analog computers do more stuff" and the answer is, in part, because
they are not Turing/von Neumann type machines.

Larry
From: Marcin Tustin
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <9icucf$dpp$1@news8.svr.pol.co.uk>
Tim Bradshaw <···@cley.com> wrote in message
····················@cley.com...
> I can't even find a relevant article in the engineering envy thread
> now, so I'm going to start another one.
>
> One of the things that I (I think) said somewhere back at the start
> was that SW `engineering' is ludicrously bad compared to mechanical
> engineering.  I think a lot of the reason for this is to do with
> continuity, or the lack of it.
[snip]
> Software isn't like this.  Exaggerating slightly: in software most of
> the smallest errors that can happen have catastrophic consequences.  A
> single-bit error (the smallest error that can happen) will often cause
> catastrophic failure.

    Yes, very much so.

> I think this is why SW engineering has such trouble: if you want to
> get a software system correct, it has to be *exactly* correct.  It's
> not enough to say that this array access doesn't overstep or understep
> the array bound by more than a byte or two, it has to be exactly
> right.  You can't do any kind of approximation at all.  Things aren't
> quite this bad, because the state space of a software system is very
> small compared to that of a physical system, and of course some bits
> of it really don't matter.  But the state space is not small compared
> to the smoothed-out state-space of a physical system that you actually
> have to deal with, and it has really bad characteristics in terms of
> tiny errors having huge consequences: you can't really smooth out the
> system to make reasoning about it tractable.
>
> This is why I was kind of rude about formal proofs: I don't think that
> reasoning about a state space that is large and where tiny errors
> matter is something that you can really do.  It's certainly not
> something that engineers do: they spend a lot of time making
> approximations which reduce the size of the state space, and then
> using smoothness to make everything tractable to model.  And sometimes
> they get it wrong, too.

    Modular programming anyone...? The point is that if we decompose our
programs into modules which perform a well-defined function, the code to
reason about is small. Then, happy with all the bits, we can reason about
their interactions with confidence. Clearly, we need forwards and backwards
reasoning about our programs - we write specifications of modules, then
write code, then verify it meets that specification. I usually adopt a
similar, yet sloppy, approach in my programming, and when I do, I find that
my programs are more reliable, easier to debug, etc. Consider also that such
activity is not the be-all and end-all, but rather one more tool in
conducting activities like code reviews.
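
    For instance, a sketch (Common Lisp, names made up) of making a
module's contract explicit enough that both forwards and backwards
reasoning have something to grab onto:

  (defun withdraw (balance amount)
    "Spec: 0 <= AMOUNT <= BALANCE; return the new balance."
    (check-type balance (real 0))
    (check-type amount (real 0))
    (assert (<= amount balance) (amount)
            "Precondition violated: ~S exceeds balance ~S" amount balance)
    (- balance amount))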

> Languages like Lisp actually help you a good deal here.  For instance
> if I have some part of my program which deals with mashing some
> arrays, then I know, providing the Lisp system is not buggy, that this
> bit of the program is not going to write beyond the array bounds, or
> follow wild pointers &c. In C I have to prove that.  But I don't think
> that they really help much in terms of how much help is needed to make
> things really tractable.  What you need is continuity, or something
> like it.

    It does mean that the program may fall down in a more benign manner, but
in most cases an array write beyond the boundary will just cause a seg fault
(or provide a well-known exploit).
From: Tim Bradshaw
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <ey3y9pxc0oi.fsf@cley.com>
* Marcin Tustin wrote:

>     Modular programming anyone...? The point is that if we decompose our
> programs into modules which perform a well-defined function, the code to
> reason about is small. Then, happy with all the bits, we can reason about
> their interactions with confidence. Clearly, we need forwards and backwards
> reasoning about our programs - we write specifications of modules, then
> write code, then verify it meets that specification. I usually adopt a
> similar, yet sloppy, approach in my programming, and when I do, I find that
> my programs are more reliable, easier to debug, etc. Consider also that such
> activity is not the be-all and end-all, but rather one more tool in
> conducting activities like code reviews.

Yes, this is an approach to get state-space size down.  However, I'm
not sure it really helps with the continuity problem.  Tiny errors
still kill you.  Obviously it's a necessary technique, but I don't
think it is sufficient, or anything like sufficient.

--tim
From: Paul Wallich
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <pw-0907011735220001@192.168.1.100>
In article <···············@cley.com>, Tim Bradshaw <···@cley.com> wrote:

>* Marcin Tustin wrote:
>
>>     Modular programming anyone...? The point is that if we decompose our
>> programs into modules which perform a well-defined function, the code to
>> reason about is small. Then, happy with all the bits, we can reason about
>> their interactions with confidence. Clearly, we need forwards and backwards
>> reasoning about our programs - we write specifications of modules, then
>> write code, then verify it meets that specification. I usually adopt a
>> similar, yet sloppy, approach in my programming, and when I do, I find that
>> my programs are more reliable, easier to debug, etc. Consider also that such
>> activity is not the be-all and end-all, but rather one more tool in
>> conducting activities like code reviews.
>
>Yes, this is an approach to get state-space size down.  However, I'm
>not sure it really helps with the continuity problem.  Tiny errors
>still kill you.  Obviously it's a necessary technique, but I don't
>think it is sufficient, or anything like sufficient.

One of the other questions here is the definition of "kill you" --
both in terms of what kinds of bad behavior you're willing to
accept and in terms of what kinds of error handling and
redundancy you do. Many programs limit error handling to
"Oops, I wasn't expecting that, I'm going to die now" or
"Would you like to try that again in cases it works, or
should I die now?" with the occasional better solution
for certain user input errors. Fixup or trying another way
around the problem are typically right out.

Contrast this with mechanical objects, which are
designed to work acceptably in various states of
partial failure, and which are almost always designed
with multiply redundant parts (the strands in a cable
are such a cliche that one doesn't even think of them
as a form of redundancy)

How much CPU power and programming effort would
it take to do a typical thing (medium sized database,
a small phone switch or whatever) in a way that had
the kind of invisible redundancy that a mechanical
object does?

paul
From: Tim Bradshaw
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <ey3u20lbscd.fsf@cley.com>
* Paul Wallich wrote:

> One of the other questions here is the definition of "kill you" --
> both in terms of what kinds of bad behavior you're willing to
> accept and in terms of what kinds of error handling and
> redundancy you do. Many programs limit error handling to
> "Oops, I wasn't expecting that, I'm going to die now" or
> "Would you like to try that again in cases it works, or
> should I die now?" with the occasional better solution
> for certain user input errors. Fixup or trying another way
> around the problem are typically right out.

> Contrast this with mechanical objects, which are
> designed to work acceptably in various states of
> partial failure, and which are almost always designed
> with multiply redundant parts (the strands in a cable
> are such a cliche that one doesn't even think of them
> as a form of redundancy)

I think again this comes down to properties of the medium.  In a
mechanical system you typically have some kind of metric which
determines how bad a failure or error is.  In a cable, failure of a
strand in an n-strand cable might reduce its strength by 1/nth (it
might be more complex than that).  In something like a piston engine
you have all these infinitesimal failures as metal gets scraped off
the various surfaces which together make up `wear' and which is nicely
quantifiable - you can predict the life, predict how harder use will
alter the life, measure how worn the thing is and so on.

But in most software there's no real notion of something being a small
failure or a large failure.  A single-bit error can be catastrophic.
If there's to be a metric it has to be programmer-defined.

I think I understand why languages like CL are nicer in this respect:
although CL has the same problems with single-bit errors as, say, C,
it makes efforts to detect some of them (bounds checks, run-time type
checks) and to avoid others (integer overflow) which C does not.
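
Toy cases (exactly what is signalled depends on the implementation
and on safety settings):

  ;; C's int silently wraps; CL just returns a bignum:
  (* most-positive-fixnum 2)

  ;; A type error is trapped at the point of use rather than
  ;; corrupting memory:
  (handler-case (+ 1 "two")
    (type-error (c) (format t "trapped: ~A~%" c)))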

> How much CPU power and programming effort would
> it take to do a typical thing (medium sized database,
> a small phone switch or whatever) in a way that had
> the kind of invisible redundancy that a mechanical
> object does?

I don't know.  I don't know what such a system would even look like.

--tim
From: David Thornley
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <vdH27.12161$B7.2322235@ruti.visi.com>
In article <···················@192.168.1.100>,
Paul Wallich <··@panix.com> wrote:
>In article <···············@cley.com>, Tim Bradshaw <···@cley.com> wrote:
>
>>* Marcin Tustin wrote:
>>
>>>     Modular programming anyone...? The point is that if we decompose our
>>> programs into modules which perform a well defined function, the code to
>>> reason about is small. Then,ahppy with all the bits, we can reason about
>>> their interactions in confidence.

This is where I find formal proofs breaking down.  They're useful
for small stuff, but the interactions of simple and well-understood
components can be arbitrarily complex.  As an example, consider
an artificial neural net:  a reasonably small number of completely
understood modules linked in a simple way, and capable of doing
(or failing to do) various interesting things that we just don't
understand.

>Contrast this with mechanical objects, which are
>designed to work acceptably in various states of
>partial failure, and which are almost always designed
>with multiply redundant parts (the strands in a cable
>are such a cliche that one doesn't even think of them
>as a form of redundancy)
>
The big problem here is implementing redundancy.  In hardware,
it makes sense to have multiple components to increase reliability
in most cases, since the components are well understood and tend
to fail independently of each other.  If you provide two computer
systems running the same software, they're very likely to fail
in the same way at the same time.

Conceptually, this is because software only fails in exceptional
cases, as opposed to cable strands, which can fail in normal use
as well as because of exceptional circumstances (unexpected strain,
cutting torch, whatever).  The kicker is that there are far more
exceptional cases in software.

>How much CPU power and programming effort would
>it take to do a typical thing (medium sized database,
>a small phone switch or whatever) in a way that had
>the kind of invisible redundancy that a mechanical
>object does?
>
It's possible to run parallel databases with two database servers
and some overhead, which can be significant but not overwhelming.
When I did it, it was with some expensive but available software
to allow communication between the two servers.  This means that
you've got a copy if an unexpected glitch brings one down, but
there are obvious vulnerabilities, such as if the database software
is buggy.


--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Marcin Tustin
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <9iffrg$g4k$1@newsg4.svr.pol.co.uk>
David Thornley <········@visi.com> wrote in message
···························@ruti.visi.com...

> The big problem here is implementing redundancy.  In hardware,
> it makes sense to have multiple components to increase reliability
> in most cases, since the components are well understood and tend
> to fail independently of each other.  If you provide two computer
> systems running the same software, they're very likely to fail
> in the same way at the same time.

    This is why voting-based software uses separate implementations (and
probably separate everything after requirements are agreed).

[snip paralleling data - another good way of getting redundancy]

    So for your mission-critical application, count votes between at least 3
implementations, and mirror the data between multiple sites! Safe, not fast!
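
    In outline, something like this (a sketch, with made-up names):

  ;; 2-out-of-3 voting over independently written implementations
  ;; of the same specification.
  (defun vote-3 (input impl-a impl-b impl-c)
    (let ((a (funcall impl-a input))
          (b (funcall impl-b input))
          (c (funcall impl-c input)))
      (cond ((or (equal a b) (equal a c)) a)
            ((equal b c) b)
            (t (error "No majority among ~S ~S ~S" a b c)))))

  ;; e.g. (vote-3 10 #'isqrt (lambda (n) (floor (sqrt n))) #'isqrt) => 3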
From: Tim Bradshaw
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <ey3sng4a7di.fsf@cley.com>
* David Thornley wrote:

> This is where I find formal proofs breaking down.  They're useful
> for small stuff, but the interactions of simple and well-understood
> components can be arbitrarily complex.  As an example, consider
> an artificial neural net:  a reasonably small number of completely
> understood modules linked in a simple way, and capable of doing
> (or failing to do) various interesting things that we just don't
> understand.

I think that this is a case where having continuity (or whatever else
it is I claim helps other kinds of engineering) doesn't help much.
You could build an ANN out of real components, not some digital
simulation of them, and you still would not be able to work out what
it did.

--tim
From: Kent M Pitman
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <sfwelrpbw7p.fsf@world.std.com>
Tim Bradshaw <···@cley.com> writes:

> I think this is why SW engineering has such trouble: if you want to
> get a software system correct, it has to be *exactly* correct.

I've heard it said that computers are "relentless judges of incompleteness".
I've always found that a very productive summary.
From: Andy Freeman
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <8bbd9ac3.0107100809.578815c0@posting.google.com>
> Of course there are problems - nonlinear systems (which most are) have
> now-well-known problems where, in some regions of the state space,
> tiny errors can cause exponential blow-up in other errors with
> resulting instability and chaos (in the chaos-theory sense).  I
> suspect that a significant issue in the engineering design of such
> systems is making sure that they are a long way away from such bad
> bits of the state space.
> 
> But for the most part systems have fairly smooth relationships amongst
> their parts which let you use a huge mathematical machinery to reason
> about them.

While continuity may be necessary, I don't think that it is sufficient.

With physical systems, it's almost always possible to measure relevant
properties, and those properties tend to be standardized.  When designing,
one can almost always express requirements in terms of those properties.

As a result, a physical system designer can choose a beam that will take
the expected load plus a margin of error, know what the expected load
actually is, and have confidence that the beam is up to the task.

The very concept of "margin" makes a lot of sense for physical systems.
(The designer may expect to put a 5 ton load on a beam, but can also
know that the worst case load will be only 6 tons.)

Software doesn't have a rich set of measurable properties, let alone
a way to usefully build in margin, and often is notoriously unstable.
(When a SW module fails, it can produce exponential demands on other
modules.  When a beam fails, it just drops its load on the things below.
They may well break, but the worst case impact is reasonably well
understood, even if the designer knows nothing about fracture or
other beam failure modes.)

Software usage is often ill-behaved.  Outside of commercials, no one tries
to pull ocean liners with pickups, and when such silly usage occurs, the
user gets the blame.  Of course, this is related to the lack of measures,
as it's hard to blame users for pushing a system too far when you can't
tell them what "too far" means.  (There are exceptions, but they're often
in terms that are useless to potential users.)

-andy
From: Tim Bradshaw
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <ey366d0buo3.fsf@cley.com>
* Andy Freeman wrote:
> While continuity may be necessary, I don't think that it is sufficient.

No, it clearly isn't.  You need something like a metric too.  This
gives you some notion of distance between things, so your example of
the 5 ton / 6 ton beam can make sense.

Actually you may be able to do without continuity but do some topology
-> metric thing.  Maybe that would work for software.  But I can't
remember enough about all this from when I knew about it, and what I
knew about all assumed continuity because it was physics...

--tim
From: Johan Kullstam
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <m3lmlwh8xw.fsf@sysengr.res.ray.com>
Tim Bradshaw <···@cley.com> writes:

> * Andy Freeman wrote:
> > While continuity may be necessary, I don't think that it is sufficient.
> 
> No, it clearly isn't.  You need something like a metric too.  This
> gives you some notion of distance between things so your examples of
> the 5 ton / 6 ton beam can make sense.

note that the concept of continuity requires a topology.  this is true
in mathematics, but also here in this fuzzier analog.  without a
concept of what is near and what is far away, how could you even have
continuity?

unless you are going degenerate and using a discrete topology (in which
everything is continuous because every set is open).  hmm.  seems like
the software case.  it's trivially continuous, but not usefully so
because everything is far apart.

> Actually you may be able to do without continuity but do some topology
> -> metric thing.  Maybe that would work for software.  But I can't
> remember enough about all this from when I knew about it, and what I
> knew about all assumed continuity because it was physics...

schaums has a nice point-set topology book.

-- 
J o h a n  K u l l s t a m
[········@ne.mediaone.net]
sysengr
From: Tim Bradshaw
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <ey3ofqsa6we.fsf@cley.com>
* Johan Kullstam wrote:

> note that the concept of continuity requires a topology.  this is true
> in mathematics, but also here in this fuzzier analog.  without a
> concept of what is near and what is far away, how could you even have
> continuity?

Yes this is right, of course.

Do all topologies induce a metric?  I didn't think they did, or not
uniquely.  In fact, I'm sure they don't uniquely because you can
obviously have lots of interesting and different metrics which are
compatible with the standard R^n topology (I'm using terms vaguely,
sorry).

Of course you probably mean by `what is near and what is far away'
just the neighbourhoods of the topology, but I think that you do also
need a metric as an additional thing.


--tim
From: Johan Kullstam
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <m2hewksbgv.fsf@euler.axel.nom>
Tim Bradshaw <···@cley.com> writes:

> * Johan Kullstam wrote:
> 
> > note that the concept of continuity requires a topology.  this is true
> > in mathematics, but also here in this fuzzier analog.  without a
> > concept of what is near and what is far away, how could you even have
> > continuity?
> 
> Yes this is right, of course.
> 
> Do all topologies induce a metric?

no.

(mathematically, a topology is just a collection of subsets (defined
to be open) which satisfy a few axioms -- empty set and total set,
finite intersections, unlimited unions.  an open set containing a
point is a neighborhood.  this analogy can only be stretched so far.)

> I didn't think they did, or not
> uniquely.  In fact, I'm sure they don't uniquely because you can
> obviously have lots of interesting and different metrics which are
> compatible with the standard R^n topology (I'm using terms vaguely,
> sorry).
> 
> Of course you probably mean by `what is near and what is far away'
> just the neighbourhoods of the topology, but I think that you do also
> need a metric as an additional thing.

you do not need a metric.  all you need is some sense of what you
want, what is good enough, which in turn induces some requirements on
the design.  it might not be quantifiable.  and even if it is, a
metric comes with such technical requirements as satisfying the
triangle inequality which might not be real requirements.

let the end requirements define some set of acceptable performance.
the preimage of the "design function" on this acceptable set is the
design requirements.  (iirc a continuous function is defined as one
for which the pre-image of any open set in the range space is also
open.)  basically, creating similar things should give similar
results.  this works for iron bars and load bearing capacity.  i am
not sure it has anything to do with details of software design.

fwiw the discrete topology has a metric

d(x,y) = 0 for x = y
         1 otherwise

hence, simply possessing a metric is not sufficient to be useful.  the
topology must have some meaning for what you are doing.  choosing the
right math framework to represent your task is still somewhat of an
art.

-- 
J o h a n  K u l l s t a m
[········@ne.mediaone.net]
Don't Fear the Penguin!
From: Tim Bradshaw
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <ey3g0c49z6l.fsf@cley.com>
* Johan Kullstam wrote:

> you do not need a metric. 

Well, I guess.  What I meant was that in mechanical engineering you do
typically have metrics, and I think that the ability to have the whole
machinery that they get you (really, classical mechanics) is a huge
win.

--tim
From: Gareth McCaughan
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <slrn9kn2rj.3pj.Gareth.McCaughan@g.local>
Tim Bradshaw wrote:

> * Johan Kullstam wrote:
> 
> > note that the concept of continuity requires a topology.  this is true
> > in mathematics, but also here in this fuzzier analog.  without a
> > concept of what is near and what is far away, how could you even have
> > continuity?
> 
> Yes this is right, of course.
> 
> Do all topologies induce a metric?  I didn't think they did, or not
> uniquely.

Correct.

Any topology that comes from a metric has to be (for
instance) normal (i.e., every pair of disjoint closed
sets can be separated by disjoint open neighbourhoods),
so not all topologies have that property. And the metric
certainly isn't ever uniquely determined by the topology.

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Wolfhard Buß
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <m3elrnn0xa.fsf@buss-14250.user.cis.dfn.de>
The famous Nagata-Smirnov-Bing Metrization Theorem
comes to mind.

 Theorem: Exactly the regular topological spaces with
 a countably locally discrete base are metrizable.
From: Paolo Amoroso
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <FMlKO3wH4cmFUkBpyAgoEjptxqmb@4ax.com>
On 09 Jul 2001 19:43:54 +0100, Tim Bradshaw <···@cley.com> wrote:

> This is why I was kind of rude about formal proofs: I don't think that

The few textbook examples I saw dealt with stacks. I wonder how well formal
proofs scale to larger and more complex systems.


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: Martin Thornquist
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <xunae2c535m.fsf@levding.ifi.uio.no>
[ Paolo Amoroso ]

> On 09 Jul 2001 19:43:54 +0100, Tim Bradshaw <···@cley.com> wrote:
> 
>> This is why I was kind of rude about formal proofs: I don't think that
> 
> The few textbook examples I saw dealt with stacks. I wonder how well formal
> proofs scale to larger and more complex systems.

They don't. Or rather, it's extremely expensive. The formal
verification people are shifting focus to formal specification because
of this.


Martin
-- 
"An ideal world is left as an exercise to the reader."
                                                 -Paul Graham, On Lisp
From: Marcin Tustin
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <9iff6r$tav$1@newsg3.svr.pol.co.uk>
Martin Thornquist <············@ifi.uio.no> wrote in message
····················@levding.ifi.uio.no...
> [ Paolo Amoroso ]
> > The few textbook examples I saw dealt with stacks. I wonder how well
> > formal proofs scale to larger and more complex systems.
>
> They don't. Or rather, it's extremely expensive. The formal
> verification people are shifting focus to formal specification because
> of this.

    Shifting? What do they do with the formal specification? A step-wise
refinement would be prohibitively expensive, surely?

> Martin
> --
> "An ideal world is left as an exercise to the reader."
>                                                  -Paul Graham, On Lisp
From: Martin Thornquist
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <xunr8voa3es.fsf@glejpnir.ifi.uio.no>
[ Marcin Tustin ]

>     Shifting? What do they do with the formal specification? A step-wise
> refinement would be prohibitively expensive, surely?

Well, I'm pretty much just looking over the edge of this field, but my
impression is that research has mostly left formal verification of
"normal" programming languages for high-level, easily verified
specification.

I've worked a little bit with B Toolkit by B-core
<URL:http://www.b-core.com/> and TLA by Leslie Lamport (see e.g.
<URL:http://www.research.compaq.com/SRC/tla/papers.html>). Both do
indeed use a step-wise refinement. The B Toolkit generates C after one
or two intermediate steps. I only used this for a university course so
I didn't test the performance of the generated code or anything, but
my impression is that the generated code is pretty slow. Today's
machines are however fast enough that it isn't a problem for many
applications -- if one managed to make computer control systems for
nuclear power plants in the 60's, with the speed of today's computers
one can surely have pretty ineffective code.

If by expensive you meant man labor, no, it isn't, really. The
first refinement (again in B Toolkit) is mostly manual, but the
lowest-level C generation is fully automatic. Of course it takes
longer than hacking together something in a "normal" language you know
well, but it takes *much* less time than the old approach of verifying
code.


Martin
-- 
"An ideal world is left as an exercise to the reader."
                                                 -Paul Graham, On Lisp
From: Martin Thornquist
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <xun1ynn671h.fsf@levding.ifi.uio.no>
[ Martin Thornquist ]

<snip>

> nuclear power plants in the 60's, with the speed of today's computers
> one can surely have pretty ineffective code.
                             ^^^^^^^^^^^
Sorry, that should of course be _inefficient_.


Martin
-- 
"An ideal world is left as an exercise to the reader."
                                                 -Paul Graham, On Lisp
From: Kurt B. Kaiser
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <m3sng35pvp.fsf@float.ne.mediaone.com>
Tim Bradshaw <···@cley.com> writes:

> I can't even find a relevant article in the engineering envy thread
> now, so I'm going to start another one.
> 
> One of the things that I (I think) said somewhere back at the start
> was that SW `engineering' is ludicrously bad compared to mechanical
> engineering.  I think a lot of the reason for this is to do with
> continuity, or the lack of it.
> 
> A physical system, like a bridge or a car, has continuous relationships
> amongst its parts.  A small error in a part will produce a small error
> in the whole, in most cases, and the errors are continuously related.
> This gives you huge power when designing these systems - you can do
> all sorts of error analysis to tell you how good the parts need to be
> to make the whole as good as you need it to be.
> 
> Of course there are problems - nonlinear systems (which most are) have
> now-well-known problems where, in some regions of the state space,
> tiny errors can cause exponential blow-up in other errors with
> resulting instability and chaos (in the chaos-theory sense).  I
> suspect that a significant issue in the engineering design of such
> systems is making sure that they are a long way away from such bad
> bits of the state space.
> 
> But for the most part systems have fairly smooth relationships amongst
> their parts which let you use a huge mathematical machinery to reason
> about them.
> 
> Software isn't like this.  Exaggerating slightly: in software most of
> the smallest errors that can happen have catastrophic consequences.  A
> single-bit error (the smallest error that can happen) will often cause
> catastrophic failure.  
> 

Hmm. Of course SW is highly malleable; half of what high-level languages do is
to attempt to protect the programmer from that flexibility, e.g. if you have
two variables whose names differ by one letter, and you use the wrong one, you
can get interesting results. Doing that is rather like plugging a wire into the
wrong connector. The usual way of preventing that is to design the connectors
with different shapes and numbers of pins so they can't be mated
incorrectly. The Japanese have a term for this, "poka-yoke", which means
roughly "mistake-proofing". SW attempts to use typing to prevent this kind of
problem. So physical systems are not necessarily "continuous" when considered
across the design/integration/operation spectrum.

Compared to some SW, most mechanical systems are pretty simple, especially
locally. In addition, the parts are generally tested, at least at a sampling
level, before integration. It is also pretty unusual to find a mechanical
device that is assembled for the first time just before use (only to find
square pins headed for round holes).  But you can design with the wrong bearing
load rating, and you can assemble things with the wrong bearing, also.  I've
seen some pretty amazing errors in mechanical parts, but they usually don't fit
into the assembly, so they get sorted out pretty quickly.

What are these small changes which can have catastrophic results? Can't one
design them out? For example, Gray code is often used to convert analog motions
to digital. If more than one bit changes at a time, that's a trappable
error. The square pin/round hole issue can be checked at
integration/compilation by strong typing. OO SW, which further increases the
level of "type" checking, can help. Run-time errors caused by single-bit
hardware glitches can be mitigated by CRC and parity checks.
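
A sketch of the Gray code point (binary-reflected Gray code: consecutive
positions differ in exactly one bit, so a sample-to-sample change of more
than one bit is detectably bogus):

  (defun gray-encode (n)
    (logxor n (ash n -1)))

  (defun plausible-step-p (g1 g2)
    "True iff Gray readings G1 and G2 differ in at most one bit."
    (<= (logcount (logxor g1 g2)) 1))

  ;; (gray-encode 5) => 7 and (gray-encode 6) => 5: one bit apart.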

Take a medical patient table. In a simplistic design, if a relay sticks, you
dump the patient on the floor. So you design in limit switches, and if they
fail, you have designed in hard stops. Same with SW. Further, it's common to
monitor the output of analog HW with A/Ds and processors and take appropriate
action if things get out of acceptable limits. This kind of design has greatly
improved the safety of electromechanical systems.

Most mechanical engineers have an intuitive feel for the objects they are
creating. On the other hand, many SW developers have a rather poor feel for
physical systems and the problems they can have; this can cause problems for,
e.g. airport baggage system implementation.

> I think this is why SW engineering has such trouble: if you want to
> get a software system correct, it has to be *exactly* correct.  It's
> not enough to say that this array access doesn't overstep or understep
> the array bound by more than a byte or two, it has to be exactly
> right.  You can't do any kind of approximation at all.  Things aren't
> quite this bad, because the state space of a software system is very
> small compared to that of a physical system, and of course some bits
> of it really don't matter.  But the state space is not small compared
> to the smoothed-out state-space of a physical system that you actually
> have to deal with, and it has really bad characteristics in terms of
> tiny errors having huge consequences: you can't really smooth out the
> system to make reasoning about it tractable.

Mechanical systems have redundant safety systems. For example, a digital
control is backed up with independent analog limit switches and contactors. A
similar design in SW would use a separate thread or process to monitor the
(pre-emptible) main routine.  Computer voting can be saved for safety critical
applications.
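
Sketched in Lisp (DO-ONE-UNIT-OF-WORK and ENTER-SAFE-STATE are
hypothetical stubs, and the two loops would need to run in separate
threads or processes, for which there is no standard CL mechanism):

  ;; The main loop must touch *HEARTBEAT* regularly; an independent
  ;; watchdog forces a safe state if the heartbeat goes stale.
  (defvar *heartbeat* (get-universal-time))

  (defun main-loop ()
    (loop (do-one-unit-of-work)                 ; hypothetical
          (setf *heartbeat* (get-universal-time))))

  (defun watchdog (&key (timeout 5))
    (loop (sleep 1)
          (when (> (- (get-universal-time) *heartbeat*) timeout)
            (enter-safe-state))))               ; hypothetical: cut power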

Many SW designs are overly complex and/or hacked together.  A mechanical system
done this way is called a kludge and is liable to fail catastrophically, also.
They are usually identifiable at a glance, and most people are knowledgeable
enough to avoid depending on them. Since source code is usually not visible, a
lot of unreliable/dangerous cruft survives.

Worse, SW systems are often built from scratch instead of building from tested
components. People who forget history are doomed to repeat it.

Decent exception handling can mitigate the requirement for *exact*
correctness. Design by contract a la Eiffel can help with integration and
testing, and can be switched off by levels and by module for production, if
performance requires.

Design for failure! That's one of the things engineers do! They are used to
wear, corrosion, fracture, and noise, i.e. the real world.  

Regards, KBK
"For want of a nail the kingdom was lost. That's pretty discontinuous."
From: Tim Bradshaw
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <ey31ynn9vve.fsf@cley.com>
* Kurt B Kaiser wrote:
> Compared to some SW, most mechanical systems are pretty simple, especially
> locally.

(I'm not avoiding replying to the rest of your followup, I'm just
thinking about it...)

I think that mechanical systems of pretty high complexity are built.
An example that I know something about is warships. Even a century ago
a battleship was a significantly complex bit of machinery.  In many
senses warship design underwent a simplification with the Dreadnought
in 1905 but they got a lot more complex again in the succeeding years.
And it's really important for a warship that you avoid any
single-point failures that you can, because your aim in life is to
have people throw great heavy explosive shells at you and survive long
enough to either run away or throw enough at them in return that they
sink.  Of course some ship designs had nasty bugs (although many of
the bugs actually turn out to be procedural bugs - leaving flash doors
open for instance).

Of course one difference between a dreadnought battleship and a bit of
software is cost - battleships were *very* expensive things.  But they
had a lot of expensive good-quality steel &c, so I don't know how the
effort of producing one compared with the effort of producing a big
bit of software.

I think your comment about locality is bogus. Partly because software
is simple locally too, and partly because the whole notion of local
simplicity, or local smoothness is what makes mechanical systems
tractable.

--tim
From: Paul Wallich
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <pw-1107011649160001@192.168.1.100>
In article <···············@cley.com>, Tim Bradshaw <···@cley.com> wrote:

>* Kurt B Kaiser wrote:
>> Compared to some SW, most mechanical systems are pretty simple, especially
>> locally.
>
>(I'm not avoiding replying to the rest of your followup, I'm just
>thinking about it...)
>
>I think that mechanical systems of pretty high complexity are built.
>An example that I know something about is warships. Even a century ago
>a battleship was a significantly complex bit of machinery.  In many
>senses warship design underwent a simplification with the Dreadnought
>in 1905 but they got a lot more complex again in the succeeding years.
>And it's really important for a warship that you avoid any
>single-point failures that you can, because your aim in life is to
>have people throw great heavy explosive shells at you and survive long
>enough to either run away or throw enough at them in return that they
>sink.  Of course some ship designs had nasty bugs (although many of
>the bugs actually turn out to be procedural bugs - leaving flash doors
>open for instance).
 
>I think your comment about locality is bogus. Partly because software
>is simple locally too, and partly because the whole notion of local
>simplicity, or local smoothness is what makes mechanical systems
>tractable.

It seems that your battleship example undercuts that claim -- pretty
much all of what made for a good battleship was making sure that
local failures didn't propagate into the rest of the system. You could
blow out any one turret, or several watertight compartments, pretty
much anything except the central magazines, without fatally damaging
the ship or its mission.

It's hard to think what the equivalent would be for software, because
so much of so many programs is a serial thread of execution (or a set
of parallel-but-interdependent threads -- the important thing is that
the success condition is a logical AND) rather than a bunch of pieces
operating in concert, only some majority of which are required to work
right to get the job done.

I wonder whether, thanks to Moore et al, it might not be time for a
return to some kind of blackboard architecture, with a little reflection
thrown in for good measure....

paul
From: Tim Bradshaw
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <ey3sng27ytv.fsf@cley.com>
* Paul Wallich wrote:
> In article <···············@cley.com>, Tim Bradshaw <···@cley.com> wrote:
 
>> I think your comment about locality is bogus. Partly because software
>> is simple locally too, and partly because the whole notion of local
>> simplicity, or local smoothness is what makes mechanical systems
>> tractable.

> It seems that your battleship example undercuts that claim -- pretty
> much all of what made for a good battleship was making sure that
> local failures didn't propagate into the rest of the system. You could
> blow out any one turret, or several watertight compartments, pretty
> much anything except the central magazines, without fatally damaging
> the ship or its mission.

Assuming you're referring to the bit I've quoted above, I was
trying to argue that software is simple locally (or should be) in the
same way that mechanical systems are, not that local failures stay
local in either.

Pretty much what I'm trying to argue, I think, is that software
doesn't really have a notion of `local failure' - somehow everything
is close to everything else, so even tiny failures propagate
everywhere.

Or something, my whole analogy is pretty dubious!

--tim
From: Kurt B. Kaiser
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <m3n169c0nl.fsf@float.ne.mediaone.com>
Tim Bradshaw <···@cley.com> writes:
> 
> Assuming you're referring to the bit I've quoted above, I was
> trying to argue that software is simple locally (or should be) in the
> same way that mechanical systems are, not that local failures stay
> local in either.
> 
> Pretty much what I'm trying to argue, I think, is that software
> doesn't really have a notion of `local failure' - somehow everything
> is close to everything else, so even tiny failures propagate
> everywhere.
> 
> Or something, my whole analogy is pretty dubious!
> 

No, it's not. It's interesting, and I suspect there's something useful to be
learned there.

Go back to my example about plugging connectors incorrectly. As I said, one
uses different physical configurations to avoid errors. But you don't need to
use different configs everywhere in the system, because the cables won't reach;
you can reuse that female three-pin connector again.

An equivalent in SW might be to not allow a procedure to call outside its
module without special permission. It seems common to hide procedures in
modules, but is it easy to block calling out? What is the SW equivalent of
short cables?
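
The Lisp package system gives you a version of the keyed connector (a
sketch; the names are made up): only exported names mate normally, and
reaching an internal symbol takes a deliberate double-colon that
stands out in a code review.

  (defpackage :winch
    (:use :cl)
    (:export #:raise #:lower))

  (in-package :winch)
  (defun spin-motor (dx) dx)          ; internal helper
  (defun raise (x) (spin-motor x))
  (defun lower (x) (spin-motor (- x)))

  ;; Elsewhere: (winch:raise 3) is fine; WINCH:SPIN-MOTOR is rejected
  ;; by the reader; WINCH::SPIN-MOTOR works but announces itself.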

*************

And going back to my previous question, what are some examples of small changes
(which can't be easily mitigated) that cause catastrophic results?

Regards, KBK
From: David Thornley
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <wko37.13364$B7.2587597@ruti.visi.com>
In article <···················@192.168.1.100>,
Paul Wallich <··@panix.com> wrote:
>
>It seems that your battleship example undercuts that claim -- pretty
>much all of what made for a good battleship was making sure that
>local failures didn't propagate into the rest of the system. You could
>blow out any one turret, or several watertight compartments, pretty
>much anything except the central magazines, without fatally damaging
>the ship or its mission.
>
You know what?  People tried real hard to do that, and it didn't
always work.

Arizona, IIRC, was sunk by an armor-piercing bomb setting off
storage for black powder charges for her catapults, which set
off the main magazines.  There are several other cases of
ships being lost, or badly injured, because of something
flammable or explosive that was badly stowed.

The Italians used the Pugliese system of torpedo defense,
involving a long crushable cylinder, and found out that it
tended in practice to allow large amounts of water into the
ship.

At a naval battle off Guadalcanal, the battleship South Dakota
suffered a shipwide power failure for ten minutes, and spent
that time basically being shelled by Japanese ships.  That
particular battle would have been a fiasco had not the other
US battleship present, the Washington, finally figured out
which blip was which and effectively ended the battle with
seventy-five shells.

The USN tried turbo-electric propulsion for capital ships
(the boilers running generators which ran electric motors
turning the screws) partly because it seemed to allow excellent
subdivision, increasing locality of damage.  However, the
whole propulsion plant seemed to be knocked out easily by
shock from, say, a torpedo.

I agree that it's easier to get locality in a battleship than
in a computer program, but even with all the work done on
these extremely expensive ships, and a good deal of experience
to draw on, it didn't always work.

>It's hard to think what the equivalent would be for software, because
>so much of so many programs is a serial thread of execution (or a set
>of parallel-but-interdependent threads -- the important thing is that
>the success condition is a logical AND) rather than a bunch of pieces
>operating in concert, only some majority of which are required to work
>right to get the job done.
>
Sometimes mostly right is all you need.  If you're calculating
something to make a decision in software, and you're a bit off,
that may work well.  On the other hand, we should all know about
the time AT&T went down because a programmer didn't understand C,
and misused a break statement.


--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Tim Bradshaw
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <ey3d77586su.fsf@cley.com>
* David Thornley wrote:

[Locality in battleships]

A lot of the locality issues in ships are a bit subtle I think.  For
instance in a big-gun ship you inherently have a lot of shells and
propellant in the ship, and if this gets set on fire you are probably
done for (there are things you can do, like having cordite that burns
more slowly, or allowing magazines to vent to the air, but it's pretty
bad).  All of this stuff has to be accessible from the main armament,
quickly.  So if you store it in lots of little magazines you have a
lot of things that are vulnerable all of which are connected somehow
(OK, you can use flashtight doors), so you might survive any one
magazine going, but you might get a cascade of explosions, and you
definitely have charge-handling problems. If you store it in one
magazine (per turret) then if that goes you're done for, but you can
armour it very well, and you don't have cascade problems or
charge-handling problems.  Then of course you forget to *enforce*
flashtightness between turret and magazine by mechanical interlock,
and in the heat of battle people leave the passages open, you get a
hit on a turret and that's it (this is one of the theories about what
happened to British ships at Jutland - I think the truth is somewhat
more complex).


> I agree that it's easier to get locality in a battleship than
> in a computer program, but even with all the work done on
> these extremely expensive ships, and a good deal of experience
> to draw on, it didn't always work.

It's interesting that at the systems level there are some really good
examples of these locality issues.  Microsoft had all their
nameservers behind one router early this year, and lost nameservice
for a really long time when someone misconfigured the router.  Even
now, if you look around, loads of really large companies have
obviously vulnerable setups like this.  Microsoft, again (it's too
easy to pick on them, unfortunately) had some massive outage in the
instant messaging service (whatever that is) which smells like some
single-point failure.  You often see storage systems configured with
really expensive RAIDs but with some obvious single-point failure
mechanism (and worse, one that will corrupt all the data *then* take
the system down, so all your backups are no good either).

Getting this kind of locality in software may be hard - however,
examples like the above lead me to think that people are incredibly
bad at the kind of reasoning you need to avoid these problems even
where we do know how!

--tim
From: Kurt B. Kaiser
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <m3k81f5lzs.fsf@float.ne.mediaone.com>
Tim Bradshaw <···@cley.com> writes:

> * Kurt B Kaiser wrote:
> > Compared to some SW, most mechanical systems are pretty simple, especially
> > locally.
> 
> (I'm not avoiding replying to the rest of your followup, I'm just
> thinking about it...)
> 
> I think that mechanical systems of pretty high complexity are built.
> An example that I know something about is warships. Even a century ago
> a battleship was a significantly complex bit of machinery.  In many
> senses warship design underwent a simplification with the Dreadnought
> in 1905 but they got a lot more complex again in the succeeding years.
> And it's really important for a warship that you avoid any
> single-point failures that you can, because your aim in life is to
> have people throw great heavy explosive shells at you and survive long
> enough to either run away or throw enough at them in return that they
> sink.  Of course some ship designs had nasty bugs (although many of
> the bugs actually turn out to be procedural bugs - leaving flash doors
> open for instance).
> 
> Of course one difference between a dreadnought battleship and a bit of
> software is cost - Battleships were *very* expensive things.  But they
> had a lot of expensive good-quality steel &c, so I don't know how the
> effort of producing one compared with the effort of producing a big
> bit of software.
> 
> I think your comment about locality is bogus. Partly because software
> is simple locally too, and partly because the whole notion of local
> simplicity, or local smoothness is what makes mechanical systems
> tractable.

You hope SW is simple locally. A lot of it isn't, because of poor design. An
experienced mechanical engineer can inspect a battleship for a day and have an
excellent idea of how it is designed, and what its weak points are. This is
largely because its systems stand as separate entities.  But there are many SW
systems which would take an expert far longer than that (if ever :) to
understand, due to failure to observe modularity and locality of reference in
the design, and just plain muddled thinking/cryptic coding.

I hear what you are saying about smoothness, and it's intellectually
attractive, but I'm trying to get a grip on why it can't apply to SW
(and how the concept could be used to improve the situation in SW).

And by better design, I also mean better tools.

Regards, KBK
From: Kurt B. Kaiser
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <m3elrn5kp7.fsf@float.ne.mediaone.com>
···@shore.net (Kurt B. Kaiser) writes:
 
> You hope SW is simple locally. A lot of it isn't, because of poor design. An
> experienced mechanical engineer can inspect a battleship for a day and have
> an excellent idea of how it is designed, and what its weak points are. This
> is largely because its systems stand as separate entities.  But there are
> many SW systems which would take an expert far longer than that (if ever :)
> to understand, due to failure to observe modularity and locality of
> reference in the design, and just plain muddled thinking/cryptic coding.

As an example, I remember some GUI code rather vividly. The designer was
calling methods in a superclass. These methods called other methods in the same
superclass, and the designer had overridden some of them in the subclass, so
execution would jump back and forth between the superclass and the subclass.
Also, there were many levels of subclassing, and the jumps were among them, not
just between two.  The whole GUI was written like that.  Understanding what was
going on was a nightmare. Strictly Gestalt. Debugging was virtually impossible.
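
A minimal CLOS sketch of that shape (names invented; the real thing
was much bigger, but the structure is the same):

    (defclass widget () ())
    (defclass fancy-widget (widget) ())

    (defgeneric draw (w))
    (defgeneric draw-border (w))
    (defgeneric draw-body (w))

    ;; Reading this method alone tells you nothing about which class
    ;; actually handles the two calls it makes.
    (defmethod draw ((w widget))
      (draw-border w)
      (draw-body w))

    (defmethod draw-border ((w widget))
      (format t "plain border~%"))

    (defmethod draw-body ((w widget))
      (format t "plain body~%"))

    ;; One override, and (draw (make-instance 'fancy-widget)) already
    ;; bounces: DRAW runs on the superclass, DRAW-BORDER on the
    ;; subclass, DRAW-BODY on the superclass again.
    (defmethod draw-border ((w fancy-widget))
      (format t "fancy border~%"))

Multiply that by a dozen classes and several levels of subclassing
and you have the nightmare.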

Regards, KBK
From: Tim Bradshaw
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <ey3wv5f8d8f.fsf@cley.com>
* Kurt B Kaiser wrote:
> Tim Bradshaw <···@cley.com> writes:

> You hope SW is simple locally. A lot of it isn't, because of poor
> design. 

I agree.

> An
> experienced mechanical engineer can inspect a battleship for a day and have an
> excellent idea of how it is designed, and what its weak points are. This is
> largely because its systems stand as separate entities.  

I think this is wrong.  If it was right the British wouldn't have lost
three ships by catastrophic explosion at Jutland!  You can detect
*some* weak points, and some were known (deck armour too thin in Hood
for instance, a known problem but very hard to deal with), but some
were really obscure bugs.

> But there are many SW
> systems which would take an expert far longer than that (if ever :) to
> understand, due to failure to observe modularity and locality of reference in
> the design, and just plain muddled thinking/cryptic coding.

Yes.

> I hear what you are saying about smoothness, and it's intellectually attractive,
> but I'm trying to get a grip on why it can't apply to SW (and how the concept
> could be used to improve the situation in SW).

Yes, this is what I'm trying to understand too!

--tim
From: Will Deakin
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <3B4D7920.3020700@pindar.com>
Tim wrote:

> I think that mechanical systems of pretty high complexity are built.
> An example that I know something about is warships. Even a century ago
> a battleship was a significantly complex bit of machinery.  
 .... 
> I think your comment about locality is bogus. Partly because software
> is simple locally too, and partly because the whole notion of local
> simplicity, or local smoothness is what makes mechanical systems
> tractable.
This part of the thread has prompted me to think this.

If I were 18 months old, the *big* difference that I would see 
between a thing like a warship and a running piece of code is 
that I could not *see* the running piece of code.

Maybe this way lies the difference (or madness, for that matter): 
one of the fundamental problems with coding, and not with 
battleship building, is the level of abstraction involved.

An adult building a battleship can understand the thing that 
they are building on a whole stack of levels -- many of them 
concrete and similar to those that a small child can understand 
-- but, particularly if they are an experienced naval shipwright, 
also on a whole stack of abstract levels. With code you don't get 
this: you only have the abstract.

Unfortunately, people are fundamentally designed to understand 
concrete things and can -- to some extent -- learn to understand 
abstract things. So that, if I throw a brick, I know what the 
brick will do, more or less, particularly since its behaviour is 
continuous. But if I wanted to do the equivalent in computational 
terms, I would have no concrete examples to base my predictions on.

Hmmmm.

:)w
From: Tim Bradshaw
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <ey3ofqq7yov.fsf@cley.com>
* Will Deakin wrote:

> Maybe this way lies the difference (or madness, for that matter): one
> of the fundamental problems with coding, and not with battleship
> building, is the level of abstraction involved.

Yes, I think that's right.  I was really wrong earlier when I claimed
that an expert couldn't understand what a battleship did.  I was
right, I think, in the particular case that it might take a really
long time to understand specific flaws or details, but wrong in the
larger sense: an expert could look at a software system - especially
without source - for a very long time without getting any idea what
it does *at all*.

> Unfortunately, people are fundamentally designed to understand
> concrete things and can -- to some extent -- learn to understand
> abstract things. So that, if I throw a brick, I know what the brick
> will do, more or less, particularly since its behaviour is
> continuous. But if I wanted to do the equivalent in computational
> terms, I would have no concrete examples to base my predictions on.

Richard Gabriel has a tirade against abstraction somewhere - maybe
he's right.

--tim
From: Will Deakin
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <3B4EB728.4010409@pindar.com>
Tim wrote:

> I was really wrong earlier when I claimed that an expert couldn't
> understand what a battleship did. I was right, I think, in the
> particular case that it might take a really long time to understand
> specific flaws or details,
Sure. Or to never understand specific flaws....

> Richard Gabriel has a tirade against abstraction somewhere - maybe
> he's right.

...and maybe he's wrong. Science -- physics in particular -- 
would be a nightmare without abstraction. The problem as I see it 
is that people have had a whole lot of evolution and hard wiring 
for dealing with concrete and concrete abstract stuff[1], but 
much less -- and what there is has to be learnt -- for handling 
abstract stuff.

This is, of course, if you subscribe to evolution.

:)w

[1] Concrete abstract is used to talk about stuff like numbers. 
If you have three balls in front of you, you have a concrete 
example of three. If you talk about three or threeness or the 
number three you have an abstract idea of three. If you then 
*talk* or think about three balls then you have a concrete 
abstract idea of threeness. This is used in describing how kids 
learn about numbers and is quite interesting because there are 
often thresholds at which people have to jump between 
representations.

For example: you may find that a child can manipulate and handle 
operations like addition on numbers up to 7, can count apples in 
their head up to 15, and, using blocks, add numbers up to 100.
From: Alain Picard
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <86y9pt87cy.fsf@gondolin.local.net>
Tim Bradshaw <···@cley.com> writes:

> 
> Richard Gabriel has a tirade against abstraction somewhere - maybe
> he's right.
> 

I don't know.  The tirade is in his book "Patterns of Software".
The example seems a bit contrived.  He shows the use of MISMATCH,
and code which, more or less, explicitly does a mismatch, and says
that the open code is simpler because you can read it right off,
whereas if you don't really know what MISMATCH does, you have to
look it up and study it closely before you understand it.
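
Roughly the contrast, as I remember it (a reconstruction, not
Gabriel's actual example):

    ;; The abstraction is one call, but you must already know that
    ;; MISMATCH returns the first position at which the sequences
    ;; differ, or NIL if they don't:
    (mismatch "conduct" "conducts")        ; => 7

    ;; The open-coded version is longer, but you can read the intent
    ;; right off it:
    (defun first-difference (s1 s2)
      (let ((limit (min (length s1) (length s2))))
        (loop for i from 0 below limit
              when (char/= (char s1 i) (char s2 i))
                return i
              finally (return (unless (= (length s1) (length s2))
                                limit)))))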

To me, what it says is that some abstractions are _compelling_,
and it is those that people are happy to have encapsulated behind
a name[1].  Those which are not compelling are harder to remember/understand,
hence one prefers to see the open code, to precisely re-read the intent
of the programmer.

If more programmers thought of their audience as human, rather than
machine (i.e. compiler), software would be easier to understand.
I know that's tautological, but there you go.


                                                                --ap


[1] Hence the supreme importance of naming things right.  Sometimes I think
  90% of software development is coming up with the right names for things.

-- 
It would be difficult to construe        Larry Wall, in  article
this as a feature.			 <·····················@netlabs.com>
From: Harley
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <tksvi837gafa87@corp.supernews.com>
"Tim Bradshaw" <···@cley.com> wrote in message
····················@cley.com...
> I can't even find a relevant article in the engineering envy thread
> now, so I'm going to start another one.
>
> One of the things that I (I think) said somewhere back at the start
> was that SW `engineering' is ludicrously bad compared to mechanical
> engineering.  I think a lot of the reason for this is to do with
> continuity, or the lack of it.

This is a really interesting analysis.

I think there may be other factors in play as well.

For instance, software is very expensive to produce, but very cheap to
reproduce.  If you need a program that does something common, you can buy it
inexpensively.  You will (if you are at all rational) only go to the expense
of writing a new program if it is impossible to acquire a reasonable
existing one - or if the potential economic gains outweigh the risk of
writing new software.

The end result of this is that most software is new development.

On the other hand, for civil or mechanical engineering, it is very expensive
even to copy existing designs.  For instance, a new skyscraper identical in
all ways to the Empire State Building will still be extraordinarily
expensive to construct.  The risk of new designs is also very high.  As a
result, almost all civil and mechanical engineering work is very
conservative, and copies almost exactly existing work.  This reliance on
existing patterns reduces risk enormously.

So software engineering is all about innovation, while civil and mechanical
engineering is all about copying with minimal innovation.  Hence software
engineering is inherently riskier, with less reliance on proven patterns and
practices, and thus software ends up being perceived as of worse quality,
even though the individuals engaged in it are certainly no less intelligent,
well-meaning, organized, etc. than those in other fields.

-- Harley
From: Marcin Tustin
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <9ip2vl$rlh$1@newsg3.svr.pol.co.uk>
Harley <······@speakeasy.net> wrote in message
···················@corp.supernews.com...
> conservative, and copies almost exactly existing work.  This reliance on
> existing patterns reduces risk enormously.
>
> So software engineering is all about innovation, while civil and
> mechanical engineering is all about copying with minimal innovation.
> Hence software engineering is inherently riskier, with less reliance on
> proven patterns and practices, and thus software ends up being perceived
> as of worse quality, even though the individuals engaged in it are
> certainly no less intelligent, well-meaning, organized, etc. than those
> in other fields.

    I've never thought of this. What a good point!
From: Kent M Pitman
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <sfwae27hhew.fsf@world.std.com>
"Marcin Tustin" <·······@GUeswhatthisbitisfor.mindless.com> writes:

> Harley <······@speakeasy.net> wrote in message
> ···················@corp.supernews.com...
> > 
> > conservative, and copies almost exactly existing work.  This
> > reliance on existing patterns reduces risk enormously.
> >
> > So software engineering is all about innovation, while civil and
> > mechanical engineering is all about copying with minimal
> > innovation.  Hence software engineering is inherently riskier,
> > with less reliance on proven patterns and practices, and thus
> > software ends up being perceived as of worse quality, even though
> > the individuals engaged in it are certainly no less intelligent,
> > well-meaning, organized, etc. than those in other fields.
> 
>     I've never thought of this. What a good point!

Following on that...

I've for a long time argued that copyright, while it ought to apply to 
code, should have its own set of rules distinct from human language 
copyright.  The reason is that I think it will ultimately be torn to
shreds by the case law process.

The use of code is what I call "convergent", while fiction is "divergent".
If a professor assigns a class to write fiction stories, you will be
graded down severely for writing the same story as someone else; while if
a professor assigns code as a project, you will be graded down severely for
NOT writing the same code as "the answer".  It's fine for all students to 
hand in the same coding answer, but NOT in the case of fiction writing.

In the case of engineering, we see an irksome consequence of this in
the sociological reaction to the problem of copyright vs. engineering.
People are encouraged by engineering discipline to agree, but encouraged
by copyright law not to copy.  So they are forced to pretend-innovate,
doing little more than introducing risk.

I'm a fan of trade secret and very limited-time intellectual property 
protection for sharing of code, but I think neither copyright nor patent
protection (if the latter is allowed at all) should stand for very long;
only long enough for the creator to get to market and make back some
development costs quickly, since the speed of software copying would not
otherwise reward the creator for trying at all.  But some software is
quickly obsolete and other software quickly essential, and a balance must
be struck or risk is introduced through people having to walk around each
other's inventions instead of building upon them.
From: Andy Freeman
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <8bbd9ac3.0107141042.6492ef66@posting.google.com>
Kent M Pitman <······@world.std.com> wrote in message 
> I'm a fan of trade secret and very limited-time intellectual property 
> protection for sharing of code, but I think neither copyright nor patent
> protection (if the latter is allowed at all) should stand for very long;
> only long enough for the creator to get to market and make back some
> development costs quickly, since the speed of software copying would
> not otherwise reward the creator for trying at all.  But some software
> is quickly obsolete and other software quickly essential, and a balance
> must be struck or risk is introduced through people having to walk
> around each other's inventions instead of building upon them.

I'm unsure why the fact that certain inventions can be expressed as software
should result in them being treated as software.

Consider an advance in linear programming.  That's hugely valuable.
Yet, KMP's scheme pretty much guarantees that said advances won't be
compensated.  (Just as KMP would rather not make a living doing support,
mathematicians would rather not make a living writing software.)

Note that we generally don't think that inventors should live off
products that they create, let alone biz.  Outside of software, we've
become accustomed to the idea that those are three separate activities.

In fact, the more I think of it, the less common the case that KMP is
worrying about seems to be.

Does anyone have a long list of software patents that have caused bad
things?  (One click wasn't a problem, RSA wasn't a problem, SUID wasn't
a problem, ....)

Or, is it the very thought, or a threat that we're worried about?

-andy
From: Kent M Pitman
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <sfwsnfzfgbc.fsf@world.std.com>
······@earthlink.net (Andy Freeman) writes:

> Kent M Pitman <······@world.std.com> wrote in message 
> > I'm a fan of trade secret and very limited-time intellectual
> > property protection for sharing of code, but I think neither
> > copyright nor patent protection (if the latter is allowed at all)
> > should stand for very long; only long enough for the creator to
> > get to market and make back some development costs quickly, since
> > the speed of software copying would not otherwise reward the
> > creator for trying at all.  But some software is quickly obsolete
> > and other software quickly essential, and a balance must be struck
> > or risk is introduced through people having to walk around each
> > other's inventions instead of building upon them.
> 
> I'm unsure why the fact that certain inventions can be expressed as
> software should result in them being treated as software.

I'm not saying that.  Things that are legitimately of another domain
perhaps deserve differentiated protection.  I didn't have a problem
with RSA for example, but probably because I didn't think of it as a
"programming patent".  I thought of it as a mathematical patent, FWIW.
I've never looked at it in detail.

> Consider an advance in linear programming.  That's hugely valuable.
> Yet, KMP's scheme pretty much guarantees that said advances won't be
> compensated.  (Just as KMP would rather not make a living doing support,
> mathematicians would rather not make a living writing software.)

Although I don't think it happens very often, I tend to think big deal
things like linear programming and RSA are rare compared to the more common
kinds of things people are patenting.
 
> Note that we generally don't think that inventors should live off
> products that they create, let alone biz.  Outside of software, we've
> become accustomed to the idea that those are three separate activities.

I'd rather an inventor get paid like a baseball star.  A lot of money per
invention up front, with the responsibility of the business to make good
on the invention.  Not money trickled out over time.

The reason I'd rather this is that I think the business should be pressured
to move on the invention or lose its protection.

> In fact, the more I think of it, the less common the case that KMP is
> worrying about seems to be.
 
This part I don't believe.

> Does anyone have a long list of software patents that have caused bad
> things?  (One click wasn't a problem, RSA wasn't a problem, SUID wasn't
> a problem, ....)

The patent on using a background bit-array to write into before
refreshing a screen wholesale.  The patent on the use of xor to make
various kinds of screen updates faster.  The patent on the shopping
cart for e-business.  The patent on the separation of text output into
"window panes".  The patents on various compressions (only because
compression seems "obvious" and I can't figure out what is NOT
infringing on the various techniques).  The patent on the use of
adjacent words as index keys for full-text search to improve context.

All of these are "patents" on seemingly trivial and easy-to-reproduce
situations that someone just got to first.  They didn't involve any
serious development time and don't deserve any long-term protection.

A good rule of thumb is that if you can't get a PhD thesis for it, you
shouldn't be getting a patent on it either.  I'd be hard-pressed to think
anyone deserved a PhD for something like the "shopping cart".  I could see
someone getting one for something like RSA or linear programming.

> Or, is it the very thought, or a threat that we're worried about?

We're knee-deep in "nuisance patents" where people just go around extorting
people for doing the obvious next thing, and it's getting worse.

I only wish I'd thought to patent the process of making a patent and then
not using it directly but instead using it through the courts when someone
else figures out a way...  too much prior art, I guess.  Sigh.
From: Andy Freeman
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <8bbd9ac3.0107151149.1df726c7@posting.google.com>
Kent M Pitman <······@world.std.com> wrote in message news:<···············@world.std.com>...
> ······@earthlink.net (Andy Freeman) writes:

> Although I don't think it happens very often, I tend to think big deal
> things like linear programming and RSA are rare compared to the more common
> kinds of things people are patenting.

The legal system isn't likely to be able to make these sorts of distinctions,
even when they're obvious up front.  (Linear programming is obvious up front
while many things that turn out to be important don't look that way up front,
and lots of things that look important up front don't turn out that way.)

> > Note that we generally don't think that inventors should live off
> > products that they create, let alone biz.  Outside of software, we've
> > become accustomed to the idea that those are three separate activities.
> 
> I'd rather an inventor get paid like a baseball star.  A lot of money per
> invention up front, with the responsibility of the business to make good
> on the invention.  Not money trickled out over time.

That's nice, but shortening the protection period doesn't have that effect.
Shortening the protection period merely reduces the value, whether it is
paid out over time or in a lump.

Also, when is "up front"?  It rarely is before the invention has proven
valuable, and if the protection period is short enough ....

> The reason I'd rather this is that I think the business should be pressured
> to move on the invention or lose its protection.

The other consequence is that there's no incentive to invest much in
inventions because the protection runs out before they recoup their costs.
(I'd like to see companies take risks, to work on things that might not
pan out.)

> > Does anyone have a long list of software patents that have caused bad
> > things?  (One click wasn't a problem, RSA wasn't a problem, SUID wasn't
> > a problem, ....)

I forgot an important qualifier, namely "that haven't expired".

Yes, during the initial land rush, there were, arguably, more "it's
only novel because you were the first person who wandered into the
domain" patents than we might prefer.  (However, I'd argue that
looking at things in new ways, looking at new things, and finding
new things to look at are all extremely valuable.  Consider
diagonalization....)  Also, remember that broad claims reduce the
scope of future patents.

Yes, I think that "extremely valuable" should result in protection
because compensation comes from protection and I'd like to increase
the number of valuable things.  I don't care how long someone worked
to produce the value.  I don't care whether they have credentials.
And so on.

> A good rule of thumb is that if you can't get a PhD thesis for it, you
> shouldn't be getting a patent on it either.

I think that's a horrible rule of thumb because the goals of a PhD thesis
have nothing to do with producing valuable work.  A PhD is a personal
qualification.  It's quite reasonable for someone to earn a PhD on something
that is not at all valuable.  An AHA doesn't result in a PhD.  And so on.

> We're knee-deep in "nuisance patents" where people just go around extorting
> people for doing the obvious next thing, and it's getting worse.

I mentioned one-click, because it's the canonical example, but that
example doesn't hold up; the sky didn't fall, and won't.  (I didn't
mention shopping carts because I happen to have great access to the
relevant prior art; virtualvineyards, later wine.com, predates the
shopping cart patents and that's what I'd copy.)

To my mind, the biggest problem with the US patent system is that it
isn't "loser pays".  Absent fraud, the most that a bogus patent holder
can lose is their costs.  (With fraud, there are criminal penalties.)

I'm not thrilled about the recent US change to disclose applications
after 18 months.  A patent is a trade for disclosure, but if there's
no patent, there shouldn't be any disclosure either.  (There are lots
of things that can block issue.)

-andy
From: Kent M Pitman
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <sfwk819sys6.fsf@world.std.com>
······@earthlink.net (Andy Freeman) writes:

> Kent M Pitman <······@world.std.com> wrote in message news:<···············@world.std.com>...
>
> > I'd rather an inventor get paid like a baseball star.  A lot of money per
> > invention up front, with the responsibility of the business to make good
> > on the invention.  Not money trickled out over time.
> 
> That's nice, but shortening the protection period doesn't have that effect.
> Shortening the protection period merely reduces the value, whether it is
> paid out over time or in a lump.
> 
> Also, when is "up front"?  It rarely is before the invention has proven
> valuable, and if the protection period is short enough ....

Well, I'm guessing, and you're right it's hard to show numbers, that if you
incentivize sales in a particular period, people will try harder to devise
something that will clean up in that interval, before others enter the market.

> > The reason I'd rather this is that I think the business should be pressured
> > to move on the invention or lose its protection.
> 
> The other consequence is that there's no incentive to invest much in
> inventions because the protection runs out before they recoup their costs.

Well, no, that's not so.  You use this number to figure out what the
right length to run the protection is.  Maybe it does turn out to be
17 years or whatever the patent term is, but I really seriously
doubt it.  Not if you partition software as separate from other
processes.  I'm just SURE that if you drew a graph of SOFTWARE patents and
money made on them over time and took the median distance to 90%
profit return, you'd find it was way way way shorter.  I bet in 99% of
cases you get 90% of your profit inside of 5 or 10 years.  I bet the number
of cases like RSA where you continue to generate revenue farther out is small,
and moreover I bet you can make a tidy sum (more than was necessary to recover
the cost of making that invention) in the median window.

> (I'd like to see companies take risks, to work on things that might not
> pan out.)

Me, too.  But not to the point of ridiculousness on the patent issue.

AND I don't think you can say companies are taking risks right now,
really, nor that the reason they aren't is the patent time period
window.

> > > Does anyone have a long list of software patents that have caused bad
> > > things?  (One click wasn't a problem, RSA wasn't a problem, SUID wasn't
> > > a problem, ....)
> 
> I forgot an important qualifier, namely "that haven't expired".
> 
> Yes, during the initial land rush, there were, arguably, more "it's
> only novel because you were the first person who wandered into the
> domain" patents than we might prefer.  (However, I'd argue that
> looking at things in new ways, looking at new things, and finding
> new things to look at are all extremely valuable.  Consider
> diagonalization....)  Also, remember that broad claims reduce the
> scope of future patents.

Only if the patent office sorts it out right.
 
> Yes, I think that "extremely valuable" should result in protection
> because compensation comes from protection and I'd like to increase
> the number of valuable things.  I don't care how long someone worked
> to produce the value.  I don't care whether they have credentials.
> And so on.
> 
> > A good rule of thumb is that if you can't get a PhD thesis for it, you
> > shouldn't be getting a patent on it either.
> 
> I think that's a horrible rule of thumb because the goals of a PhD thesis
> have nothing to do with producing valuable work.  A PhD is a personal
> qualification.  It's quite reasonable for someone to earn a PhD on something
> that is not at all valuable.  An AHA doesn't result in a PhD.  And so on.

"AHA"?  What's the equally pithy expression of what your target is for what
should be patentable?
 
> > We're knee-deep in "nuisance patents" where people just go around
> > extorting people for doing the obvious next thing, and it's
> > getting worse.
> 
> I mentioned one-click, because it's the canonical example, but that
> example doesn't hold up; the sky didn't fall, and won't.  (I didn't
> mention shopping carts because I happen to have great access to the
> relevant prior art; virtualvineyards, later wine.com, predates the
> shopping cart patents and that's what I'd copy.)

Another problem with the US legal system in general is that there's no way you can
get an advisory ruling before-the-fact when you have a question so that you
can proceed in secure knowledge that you are not breaking a law.  That's a
SERIOUS drawback to people trying to do honest business.

> To my mind, the biggest problem with the US patent system is that it
> isn't "loser pays".  Absent fraud, the most that a bogus patent holder
> can lose is their costs.  (With fraud, there are criminal penalties.)
> 
> I'm not thrilled about the recent US change to disclose applications
> after 18 months.  A patent is a trade for disclosure, but if there's
> no patent, there shouldn't be any disclosure either.  (There are lots
> of things that can block issue.)

I'm not up on this but have to agree with you here.
From: Andy Freeman
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <8bbd9ac3.0107160714.692f199d@posting.google.com>
Kent M Pitman <······@world.std.com> wrote in message news:<···············@world.std.com>...
> ······@earthlink.net (Andy Freeman) writes:
> > That's nice, but shortening the protection period doesn't have that effect.
> > Shortening the protection period merely reduces the value, whether it is
> > paid out over time or in a lump.
> > 
> > Also, when is "up front"?  It rarely is before the invention has proven
> > valuable, and if the protection period is short enough ....
> 
> Well, I'm guessing, and you're right it's hard to show numbers, that if you
> incentivize sales in a particular period, people will try harder to devise
> something that will clean up in that interval, before others enter the market.

They'll always plan to make money.  A shorter time period has at least two
effects - it discourages people from trying (because they have less time
to recoup or to fix problems) and it encourages higher prices.  (They
may not get those prices, which has an additional discouraging effect.)

> > The other consequence is that there's no incentive to invest much in
> > inventions because the protection runs out before they recoup their costs.
> 
> Well, no, that's not so.  You use this number to figure out what the
> right length to run the protection is.

There isn't a "right" number, as any number affects how people behave.

> Not if you partition software as separate from other processes.

Why should I do this?

Let's suppose that software folks are making "excess profits" at a
particular period of time.  Those "excess profits" affect the behavior
of non-software folks - some become software folks.  If the supply
increases relative to the demand, the excess profits go away.

This argues for using roughly the same number across the board.  Like
I said at the beginning, I see a difference between copyright and
patent, but I don't see a difference between steel and software.  (I
wonder if steel folks had roughly the same arguments when metal working
became cheap?)

Remember, my goal isn't to help inventors recoup costs.  My goal is to
increase the amount of valuable stuff produced.

> > Yes, during the initial land rush, there were, arguably, more "it's
> > only novel because you were the first person who wandered into the
> > domain" patents than we might prefer.  (However, I'd argue that
> > looking at things in new ways, looking at new things, and finding
> > new things to look at are all extremely valuable.  Consider
> > diagonalization....)  Also, remember that broad claims reduce the
> > scope of future patents.
> 
> Only if the patent office sorts it out right.

The patent office doesn't do all the sorting.

I think that it's relevant that most patent holders don't recoup their
costs.  That fact seems inconsistent with many theories about the effects
of patents.

> > Yes, I think that "extremely valuable" should result in protection
> > because compensation comes from protection and I'd like to increase
> > the number of valuable things.  I don't care how long someone worked
> > to produce the value.  I don't care whether they have credentials.
> > And so on.
> > 
> > > A good rule of thumb is that if you can't get a PhD thesis for it, you
> > > shouldn't be getting a patent on it either.
> > 
> > I think that's a horrible rule of thumb because the goals of a PhD thesis
> > have nothing to do with producing valuable work.  A PhD is a personal
> > qualification.  It's quite reasonable for someone to earn a PhD on something
> > that is not at all valuable.  An AHA doesn't result in a PhD.  And so on.
> 
> "AHA"?

The moment of inspiration, the lightbulb turning on above your head in the
comic strip account of your life, the canonical expression of "invention".

Not all patents are the result of "AHA".  (I just realized that "Aha"
might have been clearer.  My apologies.)

> What's the equally pithy expression of what your target is for what
> should be patentable?

It's somewhere around "something that people active in the field hadn't
seen before".  Yes, there can be startup problems when there aren't enough
eyes, but the 20 year period sorts them out quickly.  (One might argue
that I hold patents to a higher standard than PhD theses; I think it's
a different standard.)

> Another problem with the US legal system in general is that there's no way you can
> get an advisory ruling before-the-fact when you have a question so that you
> can proceed in secure knowledge that you are not breaking a law.  That's a
> SERIOUS drawback to people trying to do honest business.

Actually, a possible infringer can go to court for a declaratory judgement
of invalidity and/or non-infringement.  I think that this is fairly typical
when someone shows up with a threatening patent; I don't know if it can be
used before then.

Also, anyone who thinks that a patent is bogus can request a re-examination
and submit relevant information.  The fee is $2-3k and there aren't subsequent
legal fees for the requestor because they're not arguing the case; the patent
owner gets to go through much of the examination process again, with the
attendant costs.

Re-examination has a range of possible outcomes.  The patent office may decide
to invalidate the patent.  I think that the PTO can throw out one or more
claims (but I don't think that that's common).  Or, the PTO may decide that
the invention is novel over the submitted information, which has the effect
of strengthening the patent.  (A patent is effectively a statement by the
PTO that the invention is novel over the disclosed prior art.  That statement
can be rebutted, but a large part of the strength of a patent is in its list
of prior art - re-examination lengthens that list.)

I don't think that the holder has the option of amending the claims to get
around the new art during a re-examination.  (During the initial application,
the applicant is required to amend claims to exclude any overlap with submitted
prior art.)  If I'm correct, if there really is a significant invention but
the claims happened to overlap some prior art in an avoidable way,
re-examination effectively throws that invention into the public domain.

The patent office can, and has, instituted re-examination on its own and
invalidated patents.

-andy
From: Don Geddis
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <m33d7squv8.fsf@jedi.tesserae.com>
Kent M Pitman <······@world.std.com> wrote in message news:<···············@world.std.com>...
> > What's the equally pithy expression of what your target is for what
> > should be patentable?

······@earthlink.net (Andy Freeman) writes:
> It's somewhere around "something that people active in the field hadn't
> seen before".

The current US Patent Office is looser than this.  An invention only needs to be
1. not covered by an existing patent
2. not obvious to someone "skilled in the state of the art", which is
   generally interpreted to mean a few years of training (e.g. a college
   degree).

So, even if thousands of research scientists working in the field think the
idea is obvious, and have known about it for years, if it wasn't patented
before and if an undergraduate doesn't find it obvious, then you can patent
it yourself.

> Also, anyone who thinks that a patent is bogus can request a re-examination
> and submit relevant information.  The fee is $2-3k

Well, technically true, but pretty useless.  Yes, you can request a
re-examination.  Unfortunately, the patent office will basically reject all
such requests that come from a single party.  They want to see community
consensus, from a vast array of existing companies in the industry, all
jointly requesting the re-examination, before they'll agree to take another
look.

So this is rarely a route to accomplishing anything.  You basically have to
go to court against the patent holder, and get a judge to overturn it.

> The patent office can, and has, instituted re-examination on its own and
> invalidated patents.

The exception that proves the rule.  This is so infrequent that it's not worth
worrying about if confronted with any specific patent case.

        -- Don
_______________________________________________________________________________
Don Geddis                     www.goto.com                     ······@goto.com
Vice President of Research and Development
GoTo.com, 1820 Gateway Drive, Suite 360, San Mateo, CA 94404
Those who do not know Lisp are doomed to reimplement it.
From: Paolo Amoroso
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <SKhRO21zJEEctnh4Rk4TGpINpFke@4ax.com>
On Sat, 14 Jul 2001 19:42:31 GMT, Kent M Pitman <······@world.std.com>
wrote:

> I only wish I'd thought to patent the process of making a patent and then
> not using it directly but instead using it through the courts when someone
> else figures out a way...  too much prior art, I guess.  Sigh.

I suggest that you go ahead and file a patent anyway. Patent offices are so
bad at recognizing prior art that you might even succeed :)


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: Kurt B. Kaiser
Subject: Re: Continuity (Engineering envy...)
Date: 
Message-ID: <m3snfzjxgd.fsf@float.ne.mediaone.com>
"Harley" <······@speakeasy.net> writes:
> "Tim Bradshaw" <···@cley.com> wrote in message
> ····················@cley.com...
> > One of the things that I (I think) said somewhere back at the start
> > was that SW `engineering' is ludicrously bad compared to mechanical
> > engineering.  I think a lot of the reason for this is to do with
> > continuity, or the lack of it.
> 
> This is a really interesting analysis.
> 
> I think there may be other factors in play as well.
> 
> For instance, software is very expensive to produce, but very cheap to
> reproduce.  If you need a program that does something common, you can buy it
> inexpensively.  You will (if you are at all rational) only go to the expense
> of writing a new program if it is impossible to acquire a reasonable
> existing one - or if the potential economic gains outweigh the risk of
> writing new software.
> 
> The end result of this is that most software is new development.

IMO most SW is incremental development of existing code. Maintenance and a few
new features to keep people buying (MS model) or preserve job security
(in-house model). 
 
> On the other hand, for civil or mechanical engineering, it is very expensive
> even to copy existing designs.  For instance, a new skyscraper identical in
> all ways to the Empire State Building will still be extraordinarily
> expensive to construct.  The risks of new designs is also very high.  As a
> result, almost all civil and mechanical engineering work is very
> conservative, and copies almost exactly existing work.  This reliance on
> existing patterns reduces risk enormously. 

It is very expensive and risky to distribute a new version of MS Windows, even
though the individual units are cheap.  Even a service pack is a big deal.
(Open source, OTOH, can be distributed on the web in small increments.)

> So software engineering is all about innovation, while civil and mechanical
> engineering is all about copying with minimal innovation.  

I don't think this conclusion follows from the premises of your argument. The
development of skyscrapers over, what, a thirty year period (without simulation
tools ;) was extremely risky.  What about Brunel?

"Isambard Kingdom Brunel went on to become the prototypical 19th-century
engineer. He built the famous two-mile-long Box Tunnel, several major
suspension and arch bridges, and 1000 miles of railway; and with each project
he expanded civil engineering techniques far beyond anything that had been
known or imagined.

But his crowning achievements were his steamships. In 1837 he produced the
paddle-driven Great Western -- one of the first transatlantic steamboats in
regular service. He followed it with a screw-propeller-driven steamship called
the Great Britain.

Then he bit off a mouthful that not even he could chew. In 1853 he began work
on the Great Eastern -- the grandest ship the world had ever seen.  Designed to
take 4000 passengers to Australia and back without refueling, it was 700 feet
long and weighed 20,000 tons.

The Great Eastern was launched in 1858, and Brunel died of stress and overwork
the next year. It was all it was meant to be, with one catch: it was only one
quarter as fuel-efficient as Brunel had expected, and that killed it as a
passenger liner."  http://www.uh.edu/engines/epi17.htm

> Hence 

Sorry, non sequitur. 

> software
> engineering is inherently riskier, with less reliance on proven patterns and
> practices, 

This is a matter of choice. Most of it _ain't_ engineering. It's egotistic
hackery.

There is no reason SW can't be constructed from tested components, either in
source or linkable form. We are only beginning to do that. Risk is a matter of
choice. You can develop on the dotcom model and shoot the moon, or you can take
the NASA shuttle SW approach. Which one works? Which one wins? The answers
aren't obvious, and are changing.

Market share in commercial SW is everything. Civil engineering doesn't scale
the same way at all.  The products of mechanical engineering fall in between,
from cars (not many suppliers left compared to 100 years ago) to small boats
(still lots of manufacturers, but half of them get killed every down cycle).

> and thus software ends up being perceived as of worse quality,
> even though the individual engaged in it are certainly no less intelligent,
> well-meaning, organized, etc. than those in other fields.
> 

Buggy commercial SW may be strategically sound if your goal is to get market
share and keep a revenue stream going.  Try that approach in medical
equipment SW and you won't last long.

The commercial model has infected the entire industry.  There is nothing
fundamental which says SW has to be developed that way.

> -- Harley

Regards, KBK