From: Espen Vestre
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <w6zpiw475d.fsf@gromit.nextel.no>
··@wetware.com (Bill Coderre) writes:

> So FUN might be "getting to translate your ideas into code efficiently,
> without having to spend a lot of time dealing with "bureaucracy" code --
> stuff that isn't part of the problem at hand, but is required before you
> can see the results of your idea."

What strikes me as the most FUN part of Common Lisp programming these
days is the power that macros and CLOS give you to create extremely
compact and easy-to-read code.  Just a few lines of code can do wonders!
And the other really FUN part of programming in Common Lisp is the
incremental and dynamic style of work:  I am working on two different
TCP-server-applications, and they have both been up for more than
a month without any restarts - while I am rewriting and adding
code to them constantly!  And my very latest Common Lisp Joy is
starting to use parts of the MOP to make these programs redefine their
class hierarchies at run time....
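
To give a flavour of that last bit, here is a toy sketch (not from the
real servers - the class and slot names are made up, and strictly
speaking this part is plain CLOS; the MOP lets you go a lot further):

  ;; The class as it looked when the server was started a month ago:
  (defclass connection ()
    ((peer :initarg :peer :accessor connection-peer)))

  (defparameter *conn* (make-instance 'connection :peer "10.0.0.1"))

  ;; Later, in the running image, redefine it with an extra slot:
  (defclass connection ()
    ((peer  :initarg :peer :accessor connection-peer)
     (state :accessor connection-state)))

  ;; Live instances are updated lazily; this fills in the new slot for
  ;; objects that were created before the redefinition.
  (defmethod update-instance-for-redefined-class :after
      ((c connection) added-slots discarded-slots plist &rest initargs)
    (declare (ignore discarded-slots plist initargs))
    (when (member 'state added-slots)
      (setf (connection-state c) :open)))

  (connection-state *conn*)   ; => :OPEN, and nothing was restarted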

> Perhaps if C programs did all of the error-checking and memory-management
> they were supposed to, they would also be slower. (I'm sure there's cases
> where Lisp is doing TOO MUCH error checking, and that's resulting in
> unnecessary speed loss, but hey.)

It's not generally the case that C programs are _fast_.  Quite the
opposite: the popular programs these days are typically _surprisingly_
_fat_ and _slow_.  Some reasons for this might be:

1) a lot of code is programmed by really bad C newbies
2) too much has to be done in too short time
3) too many software companies hold hardware shares

But, if you think of it, (1) is partly a consequence of (2).
And if you think more of it, both (1) and (2) may partly be
caused by the choice of language.  In Common Lisp fewer people
could do more in shorter time!

There will always be a need for programming at the level of C
or lower.  But today, there's a lot of programming being done
in C that could have been done a lot better with Common Lisp.

IMHO...

--

regards,
  Espen Vestre

From: Martin Rodgers
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <MPG.f720d97f8d467ac98997e@news.demon.co.uk>
Espen Vestre wheezed these wise words:

> 1) a lot of code is programmed by really bad C newbies

What they lack may be a deeper understanding of _programming_. I don't 
think that C is the culprit here. In the past, I've asked if a "Lisp 
for Dummies" might help. I still wonder if the lack of such books 
explains the attitude of many people to Lisp. Lisp books are written 
by experienced programmers _for_ experienced programmers. Even the 
tutorials aimed at "beginners" assume a great deal of intelligence.

I'd say that K&R's C tutorial does the same thing, which is why some C 
programmers claim that book is "too hard". No it isn't! If you can't 
understand that book, then you should touch C! (However, see below.)
That book assumes you already know how to program. _All_ the other C 
books I've seen don't teach you anything about programming, never mind 
how to program in C.

Not all Lisp books have much to say about programming, either. "The 
Little Schemer" is one that does this very well, as it encourages the 
reader to think about why we code something a particular way. Is there 
a Lisp book like "Elements of Programming Style"? No matter, as that's 
a book about general programming issues that apply to any language.

> 2) too much has to be done in too short time

Necessity is another culprit. There's no time to train people, to make
sure programmers understand what they're doing.

> 3) too many software companies hold hardware shares

Ahh, a conspiracy theory! ;) We could also point a finger at Lisp 
vendors, as some Lisp delivery systems also produce fat binaries.
Even if that's fixed overhead, the assumption is that the developer 
can afford it. As has been noted, this ain't always true.

> There will always be a need for programming at the level of C
> or lower.  But today, there's a lot of programming being done
> in C that could have been done a lot better with Common Lisp.

A lot of people will suggest other alternatives to C. Today, the media 
and certain vendors are going mad about the current favourite 
alternative, Java.

While it's easy for us to laugh, IMHO we should be encouraging 
programmers who are taking an important step forward. It may be small from 
our perspective, but any language that strongly supports garbage 
collection is a major improvement over malloc. Java is now being 
attacked in _exactly_ the same way that Lisp was, just a few years 
ago. So, from the point of view of those who resist "new ideas" like 
GC, Lisp and Java are practically the same. I find _that_ hilarious, 
but I'm comforted by the high profile and success of Java.

It's also easy to sneer at programmers who are only now discovering 
other neat ideas that have been known to Lisp folk for years. The 
arguments with which we can knock Java can also be used to sell Lisp 
to those who resist it. Instead of saying that Java is a poor person's 
Lisp, we can say that Lisp is a better Java; older and more mature.

There are two types of fool: one says, "This is old, and therefore
good"; the other says, "This is new, and therefore better."

Thus, we can still use C, embrace Java, and build on these tools in 
Lisp. What are Java people doing? Building front ends to network apps?
Lisp looks like a fabulous language to use at the back end, while Java 
is a complementary tool for the front end. C code can provide low level 
support at either end. The best of all worlds?
-- 
Please note: my email address is munged; You can never browse enough
                        "Oh no" - Marc Riley
From: Ken Nakata
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <6e9p0b$h0j@er3.rutgers.edu>
···@this.email.address.intentionally.left.crap.wildcard.demon.co.uk (Martin Rodgers) writes:
[...]
> I'd say that K&R's C tutorial does the same thing, which is why some C 
> programmers claim that book is "too hard".

If they claim that, they are no "C programmers".

> No it isn't! If you can't 
> understand that book, then you should touch C! (However, see below.)
                                       ^ maybe you forgot a "not" here?

[...]
> While it's easy for us to laugh, IMHO we should be encouraging 
> programmers who are taking an important step forward. It may be small from 
> our perspective, but any language that strongly supports garbage 
> collection is a major improvement over malloc.

I would strongly second what you state here IF "malloc" were "free"
instead.  Who hates cons?  It's free() which I hate to have to call.
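
The pleasant half of the bargain looks something like this - a trivial
sketch, just to make the point that nothing here is ever free()d:

  (defun histogram (items)
    ;; Conses a fresh alist on every call; nobody ever frees it.
    ;; Once the result is no longer referenced, the GC takes it back.
    (let ((counts '()))
      (dolist (item items counts)
        (let ((entry (assoc item counts)))
          (if entry
              (incf (cdr entry))
              (push (cons item 1) counts))))))

  (histogram '(a b a c a b))   ; => ((C . 1) (B . 2) (A . 3))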

> Java is now being attacked in _exactly_ the same way that Lisp was,
> just a few years ago. So, from the point of view of those who resist
> "new ideas" like GC, Lisp and Java are practically the same. I find
> _that_ hilarious, but I'm comforted by the high profile and success
> of Java.

It seems to me that people have to be taught to trust GC, or the
simple fact that GC does it better than most of us do (how many times
did you hear about some server program leaking memory?  And how many
products are there to detect memory leakage in C programs?).  I've
seen someone so worried about when a Java instance is destroyed.  I
just couldn't see why it was so important.  Perhaps he was
brain-damaged by the extended exposure to C++ which he seemed to have had.

Ken
-- 
Any unsolicited message soliciting purchase of any product or service
sent to any of my accounts is subject to a $50 handling charge per
message.  You have been notified.
From: Martin Rodgers
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <MPG.f735c9b879087bd989982@news.demon.co.uk>
Ken Nakata wheezed these wise words:

> > I'd say that K&R's C tutorial does the same thing, which is why some C 
> > programmers claim that book is "too hard".
> 
> If they claim that, they are no "C programmers".

Agreed. I certainly wouldn't employ them as C programmers.
 
> > No it isn't! If you can't 
> > understand that book, then you should touch C! (However, see below.)
>                                        ^ maybe you forgot a "not" here?

Well spotted! A small but significant word. Thanks.

> I would strongly second what you state here IF "malloc" were "free"
> instead.  Who hates cons?  It's free() which I hate to have to call.

Or malloc/free? I like a language to define constructor functions for 
me. When I use C, I have to write them myself. So I dislike malloc as 
well as free. I also want a type system that can check for type errors 
at either compile or run time. C never does this - the programmer must do it.

Note the type of the object that malloc returns. Awooga! Awooga!
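
By way of contrast, this is the sort of thing I mean - a trivial,
untested sketch. The language writes the constructor and the type
predicate for me, and the object knows what it is at run time:

  (defstruct point x y)     ; constructor, accessors and predicate, for free

  (defparameter *p* (make-point :x 1 :y 2))

  (point-p *p*)             ; => T
  (type-of *p*)             ; => POINT
  (check-type *p* point)    ; a run-time type check in one line
  (point-x "oops")          ; signals a TYPE-ERROR (in safe code) instead
                            ; of quietly reading garbage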
 
> It seems to me that people have to be taught to trust GC, or the
> simple fact that GC does it better than most of us do (how many times
> did you hear about some server program leaking memory?  And how many
> products are there to detect memory leakage in C programs?).  I've
> seen someone so worried about when a Java instance is destroyed.  I
> just couldn't see why it was so important.  Perhaps he was
> brain-damaged by the extended exposure to C++ which he seemed to have had.

The finalisation issue. It's believed necessary for C++ objects to 
clean up after themselves _and_ to remember any other objects they refer to. 
If you've ever managed a disk without a file system, you should be 
able to appreciate that a GC serves a similar role.

I don't see why we have to make such a big distinction between one 
level in the memory hierarchy and another, yet this is exactly what 
many C/C++ programmers do. The last time I had to explain this to one 
of these people, I discovered that he assumed that because malloc can 
return NULL, this is somehow significant; that allocating on the heap 
is vulnerable to a "memory full" error, while allocating on the stack 
is not. I pointed out that even stack memory is finite. He shut up.

George Orwell, in 1984, made a point about how language defines how we 
think. (This is one of the more accessible examples of this idea, so 
it should be ideal for recommending to programmers who've not thought 
about how this applies to computing.) If you don't have a word for 
"unhappy", you can only say "not happy". Similarly, if we don't have 
an error condition for stack overflow (or we're not aware of it), then 
we may - wrongly - assume that this can never happen.

I think that this is the error that some languages encourage us to 
make. Perhaps this could be a criticism of languages like Lisp, which 
tell us that objects persist indefinitely. In practice, we use GC to 
transparently recover memory that is no longer used. If the lack of an 
error condition for "heap full" is a valid criticism of a language, 
then it also applies to filesystems, which many would deny.

The truth is more that the programmer must take responsibility for 
handling errors, wherever they may occur. A "disk full" error can 
crash a program if it can't recover, which leads us neatly to the 
Halting Problem. So this isn't just a language issue. ;)

Meanwhile, ANSI Common Lisp addresses the "heap full" error with 
storage-condition. I've no idea what ANSI C++ will have.
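
So you can at least write something like this (an untested sketch -
whether a given implementation really signals storage-condition here
rather than simply dying, and how gracefully you can recover once the
heap is genuinely full, is another matter):

  (handler-case
      (make-array (expt 10 9))   ; ask for far more memory than we have
    (storage-condition (c)
      (format *error-output* "~&Allocation failed: ~A~%" c)
      nil))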
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
          "As you read this: Am I dead yet?" - Rudy Rucker
              Please note: my email address is gubbish
From: Steve Gonedes
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <6ec6os$7a@bgtnsc03.worldnet.att.net>
Martin Rodgers wrote:
> 

> 
> The truth is more that the programmer must take responsibility for
> handling errors, wherever they may occur. A "disk full" error can
> crash a program if it can't recover, which leads us neatly to the
> Halting Problem. So this isn't just a language issue. ;)


When this happened to me under Unix, the program kept running and
happily began to truncate all the files I tried to save (sans the error
message). What a pleasure.

I think the problem here, when writing programs that have to work with
the rest of the Unix programs, is the lack of a consistent, well-defined
way of doing something as simple as getting the free space on a
partition (not all Unix systems have statfs, I believe). How sad. The
kernel saves like 5% of the disk for itself too; sure would have been
nice if it was smart enough to share or warn me. (I didn't get an error
message for like a half hour; it was GNU mv that finally said hey bozo -
you have no space left.)
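
Something defensive along these lines would at least catch it after the
fact - a rough, untested sketch, and whether a full disk actually shows
up as a Lisp stream error is up to the implementation and the OS, which
is rather my point:

  (defun save-lines (lines path)
    "Write LINES to PATH and complain if the file ends up truncated."
    (handler-case
        (progn
          (with-open-file (out path :direction :output
                                    :if-exists :supersede)
            (dolist (line lines)
              (write-line line out))
            (finish-output out))
          ;; Paranoid check: can we read back as many lines as we wrote?
          (with-open-file (in path)
            (loop repeat (length lines)
                  unless (read-line in nil nil)
                    do (error "~A looks truncated" path)))
          t)
      (error (c)
        (format *error-output* "~&Saving ~A failed: ~A~%" path c)
        nil)))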
From: Frank A. Adrian
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <6ecbd5$epg$1@client3.news.psi.net>
Steve Gonedes wrote in message <·········@bgtnsc03.worldnet.att.net>...
>When this happened to me under Unix, the program kept running and
>happily began to truncate all the files I tried to save (sans the error
>message). What a pleasure.
>
>I think the problem here, when writing programs that have to work with
>the rest of the Unix programs, is the lack of a consistent, well-defined
>way of doing something as simple as getting the free space on a
>partition (not all Unix systems have statfs, I believe). How sad. The
>kernel saves like 5% of the disk for itself too; sure would have been
>nice if it was smart enough to share or warn me. (I didn't get an error
>message for like a half hour; it was GNU mv that finally said hey bozo -
>you have no space left.)

Welcome to the world of "Worse Is Better".  It's more palatable for an
operation to fail silently than to raise an unmistakeable error signal that
must be handled.  It's better to allow the user to let an error code be
unchecked rather than to force him to pass a routine that would handle an
error.  It's better to rely on a fallible programmer's judgement than to use
facilities to force him to at least acknowledge the possibility of the error
(if only for an explicit statement to ignore the error).  The scary thing is
that even Visual Basic on Windows has this type of thing more correct than C
libraries on UNIX systems.

The only thing necessary for bad systems to thrive is that you keep buying
into their badness.
--
Frank A. Adrian
First DataBank
············@firstdatabank.com (W)
······@europa.com (H)
This message does not necessarily reflect those of my employer,
its parent company, or any of the co-subsidiaries of the parent
company.
From: Martin Rodgers
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <MPG.f7482fdb47833f0989987@news.demon.co.uk>
Frank A. Adrian wheezed these wise words:

> The scary thing is that even Visual Basic on Windows has this
> type of thing more correct than C libraries on UNIX systems.

Contrary to the propaganda spread by over-zealous C++ programmers, VB 
does a lot of things better. Perhaps it even gets some of them "right", 
with the result that programming in VB can be fun. (This is a thread 
about how programming can be fun?) Unfortunately, no tool can ever 
ensure that it will be used correctly. Responsible use is not a 
feature that you can code into software. It's a wetware issue.

Programming tools that are easy to use help experienced programmers, 
but they also help the inexperienced. The problem is that not everyone 
can appreciate the difference. Hence "Worse Is Better".

This is why [boring rant mode] I feel we need better education.
-- 
Please note: my email address is munged; You can never browse enough
                  "Oh knackers!" - Mark Radcliffe
From: Pierre Mai
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <m3soojn3sp.fsf@torus.cs.tu-berlin.de>
>>>>> "MR" == Martin Rodgers <···@this.email.address.intentionally.left.crap.wildcard.demon.co.uk> writes:

    MR> Frank A. Adrian wheezed these wise words:
    >> The scary thing is that even Visual Basic on Windows has this
    >> type of thing more correct than C libraries on UNIX systems.

    MR> Contrary to the propaganda spread by over-zealous C++
    MR> programmers, VB does a lot of things better. Perhaps it even gets
    MR> some of them "right", with the result that programming in VB
    MR> can be fun. (This is a thread about how programming can be
    MR> fun?) Unfortunately, no tool can ever ensure that it will be
    MR> used correctly. Responsible use is not a feature that you can
    MR> code into software. It's a wetware issue.

<RANT mode"silly">
Well, I've *seriously* programmed in well over 15 different languages,
and diddled with many, many more, but VB (and it's many offspring) has
been the worst experience _ever_.  Even handwriting sendmail.cf rules
seems more enjoyable in retrospect, and certainly using Applesoft-BASIC
(remember -- when Microsoft was much much smaller than today ;) was
much more fun, than this.

Amongst the many misgivings I have, the paramount one is probably that VB
is totally unspecified.  Often you end up writing small scripts
just to try to figure out what the real return type of some function
is (which is _defined_ nowhere in the manual, i.e. it just says
"returns a number", where you are left guessing what type that should
be), etc.

And no one can tell me that this makes it easier for newbies to work.
It just leads to fragile, brittle code, to say the least.

If you must draw pretty GUIs under Windows, Delphi is a much more
"mature" language/environment (which in comparison to real tools makes
it still brittle and sometimes painful, but at least it can be counted
as a programming language).
</RANT>

But then again, I'm probably a funny kind of person anyway, being
neither a member of the "C++ is the greatest/best/fastest" nor the
"C++ is rubbish/dangerous/evil" camps, nor hailing/condemming Java,
often using Common Lisp or Scheme for prototyping or "even"
implementation, but not shunning C++ if that seems appropriate (where
appropriate includes many non-technical factors as well), using
scripting languages without overhyping them (cf. the Scripting Paper
by Ousterhout), etc.

IMHO there is programming/software engineering and there is
languages ;-)

To me this is like respect and clothes: If you have gained the first,
the second doesn't really matter, but if you haven't, no clothes will
change that.  But this doesn't mean that a good, classic suit might
not be more enjoyable to wear than a tight S&M outfit ;)

    MR> Programming tools that are easy to use help experienced
    MR> programmers, but they also help the inexperienced. The problem
    MR> is that not everyone can appreciate the difference. Hence
    MR> "Worse Is Better".

Well, "easy" on the one hand and underdefined or similarly fuzzy (simply
glossing over important differences/details) on the other are IMHO
different things.  In
my experience (which includes some teaching), when trying to teach
"newbies", always bear in mind what Einstein said:

Make it as simple as possible, but no simpler.

I have often seen teachers trying to gloss over facets which they
thought were too complex for the pupils, with the result that the pupils
were then really lost.  You IMHO don't make this easier by leaving out
important details, but by structuring your presentation of them well.

BTW: When you refer to "Worse is Better", this refers back to an essay
by Richard P. Gabriel (=> Lisp, Common Lisp, Lucid, ...).  And IMHO
Gabriel interprets this state of affairs quite a bit differently than
you do (i.e. less gloomy/biased).

Just my 2 centi-euros...

Regs, Pierre.

-- 
Pierre Mai <····@cs.tu-berlin.de>	http://home.pages.de/~trillian/
  "Such is life." -- Fiona in "Four Weddings and a Funeral" (UK/1994)
From: Martin Rodgers
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <MPG.f7703f9fb237eae98998e@news.demon.co.uk>
Pierre Mai wheezed these wise words:

> <RANT mode"silly">
> Well, I've *seriously* programmed in well over 15 different languages,
> and diddled with many, many more, but VB (and it's many offspring) has
> been the worst experience _ever_.  Even handwriting sendmail.cf rules
> seems more enjoyable in retrospect, and certainly using Applesoft-BASIC
> (remember -- when Microsoft was much much smaller than today ;) was
> much more fun, than this.

Are you talking only about the language? Last time I checked, VB was a 
development system. Mind you, if I had to write 400 forms, I'd 
definitely consider using something that could help me write code that 
will write the forms and their code. I wouldn't choose VB.
 
> Amongst the many misgivings I have, the paramount one is probably that VB
> is totally unspecified.  Often you end up writing small scripts
> just to try to figure out what the real return type of some function
> is (which is _defined_ nowhere in the manual, i.e. it just says
> "returns a number", where you are left guessing what type that should
> be), etc.

This is a language issue, I think. I've never seen any document for 
ANSI Basic, so I can't even comment on _that_, never mind VB, VB's 
conformance to any standard (whoever defines it), etc. If MS don't 
provide a spec for the VB syntax and semantics, they'd only be 
following a long tradition. The fact that it sucks farts from dead 
cats won't stop people from continuing this foul habit.
 
> And no one can tell me that this makes it easier for newbies to work.
> It just leads to fragile, brittle code, to say the least.

Did I mention newbies? [checks thread subject] Even some experienced 
programmers seem to like VB. I'm not one of them. I can only note that 
these people tend not to have discovered Lisp.
 
> If you must draw pretty GUIs under Windows, Delphi is a much more
> "mature" language/environment (which in comparison to real tools makes
> it still brittle and sometimes painful, but at least it can be counted
> as a programming language).
> </RANT>

Agreed. I'd love a Scheme or CL system that had an IDE like Delphi's.
I'd even like a C++ system like that. Not much, but for those times 
when C++ is a necessary evil, that's how I'd like to use it.
 
> But then again, I'm probably a funny kind of person anyway, being
> neither a member of the "C++ is the greatest/best/fastest" nor the
> "C++ is rubbish/dangerous/evil" camps, nor hailing/condemming Java,
> often using Common Lisp or Scheme for prototyping or "even"
> implementation, but not shunning C++ if that seems appropriate (where
> appropriate includes many non-technical factors as well), using
> scripting languages without overhyping them (cf. the Scripting Paper
> by Ousterhout), etc.

Hey, we're both heretics. ;) I've been flamed for expressing such 
opinions. Frequently. I think the last time I got flamed in 
comp.lang.lisp was for referring to "religious fanatics" - I was 
thinking of C++ programmers, so go figure.
 
> IMHO there is programming/software engineering and there is
> languages ;-)

Most certainly. I learn new tricks in one language and apply them in 
anything else I use. One of the tricks that MS get right is to make 
heavily used activities simple to do, like creating forms. Perhaps 
this isn't a big deal for everyone, but I know that there _are_ people 
who care a great deal about such things - like users.

Anything that makes it easier for a programmer to deliver what a user 
demands is a good thing. That's what some of us get paid for. ;)
 
> To me this is like respect and clothes: If you have gained the first,
> the second doesn't really matter, but if you haven't, no clothes will
> change that.  But this doesn't mean that a good, classic suit might
> not be more enjoyable to wear than a tight S&M outfit ;)

Exactly. This is why I like to point out that VB isn't just for 
newbies, nor is VB just Basic. There's also "Ruby". This is the bit 
that Borland reverse engineered for Delphi.
 
> Well, "easy" on the one hand and underdefined or similarly fuzzy (simply
> glossing over important differences/details) on the other are IMHO
> different things.  In
> my experience (which includes some teaching), when trying to teach
> "newbies", always bear in mind what Einstein said:
> 
> Make it as simple as possible, but no simpler.

This is why I like "The Little Schemer" book so much. Even people who 
don't use Scheme (or any other Lisp) will recommend it.
 
> I have often seen teachers trying to gloss over facets which they
> thought were too complex for the pupils, with the result that the pupils
> were then really lost.  You IMHO don't make this easier by leaving out
> important details, but by structuring your presentation of them well.

Why am I thinking of "The Peaceman"? ;) His failing appeared to be his 
(lack of) understanding of recursion. I asked him repeatedly if this 
could've been due to poor teaching, but he insisted on attacking 
recursion.
 
> BTW: When you refer to "Worse is Better", this refers back to an essay
> by Richard P. Gabriel (=> Lisp, Common Lisp, Lucid, ...).  And IMHO
> Gabriel interprets this state of affairs quite a bit differently than
> you do (i.e. less gloomy/biased).

My understanding of the paper is that he was being critical of Lisp in 
some places, and praising it in others. Nothing is so perfect that it 
can't be improved. However, there may be some debate over the reasons 
for the "Worse is Better" situation. Is it due to stupidity or just 
ignorance? I hope the latter is the reason, so we can do something to 
improve matters. Ignorance implies a failing of education.

It's not all bad. There are many smart programmers who've simply not 
yet discovered Lisp. All that they've heard so far has been bad; the 
myths and propaganda. If all they know to be available is C++ and VB, 
because that's all they read about in the professional development 
magazines, then we can't blame them for thinking this is all there is. 

No doubt some vendors would like us to think that the choices are that 
simple: one of their development tools or the other. Some of us know 
of the alternatives, but that alone is not enough to change the world.

As I've often said before, I see this as a memetic battle. One meme 
says that "C++ is fun." Another says, "VB is more fun than C++". We 
also know a meme that says, "Lisp is more fun than anything else." 
This meme is distinctly different from the apparently better known 
memes, and this distinction is explained and (significantly) supported 
by the additional meme, "There are no limits to how much fun Lisp can 
be." This is an idea that programmers unfamiliar with Lisp will find 
very hard to believe. They may mistake it for marketing/advocacy BS.

Is it surprising that there are programmers so cynical that they 
disbelieve the claims we make about Lisp? Faith is not enough.  If you 
make bold claims, then others will justifiably demand hard evidence.
Is there a lack of such evidence? Does it require some level of faith 
before people will even _look_ at the evidence?

I know I'm not the first to ask these questions. I know that Lisp is 
fun, more fun than anything else I've used. Why do so few people 
believe me? Why the blank looks? Do we blame marketing/advocacy BS?
Clannish behaviour and too much testosterone? MS for not appreciating 
the value of Lisp - or Gates' clannish preference for Basic? I don't 
believe that Gates is stupid, but somehow the "Basic is fun" meme 
seems to beat even the "C++ is fun" meme.

How healthy is the "Lisp is fun" meme? I don't mean in schools or 
online. The meme may be very strong in schools, but what happens when 
it enters the commercial world? Perhaps it really is as simple as form 
creation, and VB does this better than almost anything else. Never 
mind how awful Basic is for writing complex software, coz that's not 
how it gets used. And there's the software component issue. If you can 
drag 'n' drop a VBX/OCX spreadsheet tool into your form, you'll make a 
lot of business users and developers happy. This is also "fun".

So called "AI" techniques are now finding their way into business 
apps, but many are still hostile to this AI meme. I like writing code 
that writes code, which could be called compiler writing. If I call it 
that, people think I must be doing something incredibly difficult. Yet 
I've seen a VB programmer doing pretty much the same kind of thing. He 
just didn't call it a compiler. I might do it in Lisp, and do it 
better, but the principles are the same. I've just had more practice.
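
For anyone following along who hasn't seen the trick, this is the sort
of thing I mean - a toy example, nothing like production code, with all
the names made up:

  ;; A tiny "compiler": turn a declarative field list into accessor
  ;; functions, written for us at macro-expansion time.
  (defmacro define-record-accessors (name &rest fields)
    `(progn
       ,@(loop for field in fields
               for i from 0
               collect `(defun ,(intern (format nil "~A-~A" name field))
                            (record)
                          (nth ,i record)))))

  (define-record-accessors employee name department salary)

  (employee-salary '("Hobbes" "R&D" 40000))   ; => 40000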

I could ramble like this all day. ;)
-- 
Please note: my email address is munged; You can never browse enough
         "There are no limits." -- ad copy for Hellraiser
From: Jonathan Guthrie
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <6f0m6n$71u$1@news.hal-pc.org>
In comp.lang.scheme Frank A. Adrian <············@firstdatabank.com> wrote:
> Welcome to the world of "Worse Is Better".  It's more palatable for an
> operation to fail silently than to raise an unmistakeable error signal that
> must be handled.  It's better to allow the user to let an error code be
> unchecked rather than to force him to pass a routine that would handle an
> error.  It's better to rely on a fallible programmer's judgement than to use
> facilities to force him to at least acknowledge the possibility of the error
> (if only for an explicit statement to ignore the error).  The scary thing is
> that even Visual Basic on Windows has this type of thing more correct than C
> libraries on UNIX systems.

> The only thing necessary for bad systems to thrive is that you keep buying
> into their badness.

While this is true, it is not necessarily true that forcing the programmer
to deal with possible error results is the best approach.  My reaction
to such a language might be to program emacs to automatically write the
code needed to ignore the errors.  (Then again, it might be to scrap the
translator that required such a thing.)

You see, the most important thing that should happen when an error occurs
is that the program should continue to run.  Report an error, maybe, give
bogus results, maybe, but if the program just exits with a cryptic error
message (or a core dump or, perhaps, a dialog box with a register dump)
then I cannot suggest to the user what might be wrong and I cannot easily
diagnose what happened.

That, and the fact that there often is no in-program way to deal with the
actual error (therefore many errors are best simply ignored) is the reason
why "never test for an error that you don't know how to deal with" is such
a common, and useful, attitude WRT error trapping.  The expertise of the
programmer comes in at the point of deciding which errors can cause what
I used to call "erratic operation."  For example, some errors must be
trapped or otherwise good data is lost.  (A defensive posture WRT databases
is your best bet.)

In my opinion, the guys who did the C numeric extensions (I'm drawing a
blank on the actual name) got it right.  When you get an error, propagate
that error, but keep the calculation going.  When it's all over, you can
tell whether or not you got bogus results because of machine limits or
some such.  (You can get bogus results at any time by writing the wrong
program, and no programming language is going to be able to help you
with that.)  At that point, you can either simply display the results
(and be prepared to explain to people why their account balance shows as
"-#inf") or issue an error message saying that something bad happened.  

I guess what I'm trying to say is that program exceptions aren't a panacea.
They are wonderful for handling those cases that must be handled (cleaning
up after an error in the middle of a nested conditional leaps to mind)
but they're not always needed.  In my opinion, they are useful about as
often as they are inappropriate.

-- 
Jonathan Guthrie (········@brokersys.com)
Information Broker Systems   +281-895-8101   http://www.brokersys.com/
12703 Veterans Memorial #106, Houston, TX  77014, USA

We sell Internet access and commercial Web space.  We also are general
network consultants in the greater Houston area.
From: R. Toy
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <351409F2.4C1359C4@mindspring.com>
Jonathan Guthrie wrote:
> 
> In my opinion, the guys who did the C numeric extensions (I'm drawing a
> blank on the actual name) got it right.  When you get an error, propagate
> that error, but keep the calculation going.  When it's all over, you can
> tell whether or not you got bogus results because of machine limits or
> some such.  (You can get bogus results at any time by writing the wrong
> program, and no programming language is going to be able to help you
> with that.)  At that point, you can either simply display the results
> (and be prepared to explain to people why their account balance shows as
> "-#inf") or issue an error message saying that something bad happened.

Yeah, I just really *LOVE* it when my simulation that's been running for
days finally finishes and says NaN for every answer.  So, which one of
those billion or trillion flops caused the problem?  Or worse yet,
something that isn't supposed to overflow does, corrupts other
computations, I take the reciprocal to get zero, and the results look ok
because they're just numbers, but they are totally bogus.  (Ok, this
isn't too likely, but you never know.)

In this particular case, I want my computations to die when something
unexpected happens, like overflow, invalid ops, etc.
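
For what it's worth, some Lisps let you ask for exactly that.  In
CMUCL, for instance, it's something along these lines - an
implementation extension, not ANSI CL, so the exact spelling will
differ elsewhere:

  ;; Ask the FPU to trap immediately instead of quietly producing
  ;; NaN/Inf and letting it propagate for days:
  (ext:set-floating-point-modes
   :traps '(:overflow :invalid :divide-by-zero))

  (/ 1.0 0.0)   ; now signals an arithmetic error right here,
                ; instead of returning +Inf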


-- 
---------------------------------------------------------------------------
----> Raymond Toy	····@mindspring.com
                        http://www.mindspring.com/~rtoy
From: Christopher B. Browne
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <slrn6h8ohi.u7h.cbbrowne@knuth.brownes.org>
On Sat, 21 Mar 1998 13:41:54 -0500, R. Toy <····@mindspring.com> posted:
>Jonathan Guthrie wrote:
>> In my opinion, the guys who did the C numeric extensions (I'm drawing a
>> blank on the actual name) got it right.  When you get an error, propagate
>> that error, but keep the calculation going.  When it's all over, you can
>> tell whether or not you got bogus results because of machine limits or
>> some such.  (You can get bogus results at any time by writing the wrong
>> program, and no programming language is going to be able to help you
>> with that.)  At that point, you can either simply display the results
>> (and be prepared to explain to people why their account balance shows as
>> "-#inf") or issue an error message saying that something bad happened.
>
>Yeah, I just really *LOVE* it when my simulation that's been running for
>days finally finishes and says NaN for every answer.  So, which one of
>those billion or trillion flops caused the problem?  Or worse yet,
>something that isn't supposed to overflow does, corrupts other
>computations, I take the reciprocal to get zero, and the results look ok
>because they're just numbers, but they are totally bogus.  (Ok, this
>isn't too likely, but you never know.)
>
>In this particular case, I want my computations to die when something
>unexpected happens, like overflow, invalid ops, etc.

Which clearly establishes that there are multiple valid sorts of behaviour
for this.

In an "embedded" application (of whatever sort), it is utterly inappropriate
for the program to crash.  If there is no mechanism to deal with recovery,
then it makes little sense to allow a crash.  Hence, "propagate error, and
keep going."

There are also more "monitored" applications where it is quite appropriate
to have errors result in an explicit sort of "crash."  Stop the batch job;
notify the system operator that there's a problem; page the programmer.

Obviously the behaviour under conditions of error needs to be controlled so
that one can select from these sorts of actions when an error occurs.

-- 
Those who do not understand Unix are condemned to reinvent it, poorly. 	
-- Henry Spencer          <http://www.hex.net/~cbbrowne/lsf.html>
········@hex.net - "What have you contributed to Linux today?..."
From: R. Toy
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <35147BCD.51BEBF45@mindspring.com>
Christopher B. Browne wrote:
> 
> Which clearly establishes that there are multiple valid sorts of behaviour
> for this.
> 
> In an "embedded" application (of whatever sort), it is utterly inappropriate
> for the program to crash.  If there is no mechanism to deal with recovery,
> then it makes little sense to allow a crash.  Hence, "propagate error, and
> keep going."
> 

Yes, I agree with multiple valid sorts of behavior.

Let's see, I don't know what to do about the error, so I'll just
continue and pull out the control rods, turn off the coolant, disconnect
the operator panel and quietly keep going until meltdown. :-)

However, I don't see much point in propagating the error.  It seems to
me that a reset and restart could hardly do worse.  But every
application should have to deal with this in some way, and the choices
that are made should be conscious decisions of the
designers/programmers, not some convenient default behavior that
everyone forgot about because it never happened during testing.

Ray
From: Christopher B. Browne
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <slrn6h93q6.v1q.cbbrowne@knuth.brownes.org>
On Sat, 21 Mar 1998 21:47:41 -0500, R. Toy <····@mindspring.com> posted:
>Christopher B. Browne wrote:
>> 
>> Which clearly establishes that there are multiple valid sorts of behaviour
>> for this.
>> 
>> In an "embedded" application (of whatever sort), it is utterly inappropriate
>> for the program to crash.  If there is no mechanism to deal with recovery,
>> then it makes little sense to allow a crash.  Hence, "propagate error, and
>> keep going."
>> 
>
>Yes, I agree with multiple valid sorts of behavior.
>
>Let's see, I don't know what to do about the error, so I'll just
>continue and pull out the control rods, turn off the coolant, disconnect
>the operator panel and quietly keep going until meltdown. :-)

If the program doesn't know how to recover from the error, it may still be
unacceptable to 'do nothing.'

--> "I don't know what to do about the error, so I'll just ABEND/core
dump/..., leaving rods in place so that the reactor melts down..."

I *thought* that pulling out the control rods would cause reactions to stop,
which may be incorrect.

*My* reaction would be to dump the heavy water, which would stop reaction in
the pile at any of the reactors (CANDU) that I've visited, but I digress, as
nuclear chemistry isn't the question here.

In any case, for nuclear applications, there is presumably a need to
validate logic paths to a greater extent than is normal for less critical
systems.  The "meltdown" failure is a rather more catastrophic (and costly,
by almost *any* metric, whether financial, legal, or radiological) event
than most software failures.  

When the software I'm working on presently fails, someone may get annoyed
because their paycheque is a few dollars off.  Powers that be are less
concerned about verifying correctness there than they are about (say)
aircraft autopilot control systems.

-- 
Just be thankful Microsoft isn't a manufacturer of pharmaceuticals.
-- Henry Spencer          <http://www.hex.net/~cbbrowne/lsf.html>
········@hex.net - "What have you contributed to Linux today?..."
From: Raymond Toy
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <4nsoo9v1t9.fsf@rtp.ericsson.se>
········@news.brownes.org (Christopher B. Browne) writes:

> 
> If the program doesn't know how to recover from the error, it may still be
> unacceptable to 'do nothing.'
> 
> --> "I don't know what to do about the error, so I'll just ABEND/core
> dump/..., leaving rods in place so that the reactor melts down..."
> 
> I *thought* that pulling out the control rods would cause reactions to stop,
> which may be incorrect.

I don't remember exactly how they work.

> 
> *My* reaction would be to dump the heavy water, which would stop reaction in
> the pile at any of the reactors (CANDU) that I've visited, but I digress, as
> nuclear chemistry isn't the question here.

Yes, but here you are taking explicit action when there is an error.
The point was just propagating the error without doing anything about
it until much, much later, if at all.

> 
> In any case, for nuclear applications, there is presumably a need to
> validate logic paths to a greater extent than is normal for less critical
> systems.  The "meltdown" failure is a rather more catastrophic (and costly,
> by almost *any* metric, whether financial, legal, or radiological) event
> than most software failures.  

But the scenario is possible since you cannot or will not validate
all of them.  And the failure of the Ariane 5 launch was caused by a
software bug, and that must have cost hundreds of millions of dollars.

In any case, I think we agree:  Do something sensible (whatever that
might mean) when there are errors.  This includes "do nothing" if that 
makes sense.

This has moved way too far from lisp and scheme, so I'll be quiet now.

Ray
From: Kenneth P. Turvey
Subject: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <slrn6hdic8.ggc.kturvey@www.sprocketshop.com>
On 23 Mar 1998 10:56:02 -0500, Raymond Toy <···@rtp.ericsson.se> wrote:
>········@news.brownes.org (Christopher B. Browne) writes:
>
>> 
>> If the program doesn't know how to recover from the error, it may still be
>> unacceptable to 'do nothing.'
>> 
>> --> "I don't know what to do about the error, so I'll just ABEND/core
>> dump/..., leaving rods in place so that the reactor melts down..."
>> 
>> I *thought* that pulling out the control rods would cause reactions to stop,
>> which may be incorrect.
>
>I don't remember exactly how they work.
>

The control rods absorb neutrons flying out of the
pile.  You put the control rods into the pile to stop the reaction, and
pull them out to heat things up again.  

If your program crashes it should definitely drop the control rods :-)

-- 
Kenneth P. Turvey <·······@pug1.SprocketShop.com> 

The optimist thinks this is the best of all possible worlds. The
pessimist fears it is true.	
	-- Robert Oppenheimer
From: Martti Halminen
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <35178842.5209@dpe.fi>
Kenneth P. Turvey wrote:

> >> I *thought* that pulling out the control rods would cause reactions to stop,
> >> which may be incorrect.
> >
> >I don't remember exactly how they work.
> >
> 
> The control rods absorb neutrons flying out of the
> pile.  You put the control rods into the pile to stop the reaction, and
> pull them out to heat things up again.
> 
> If your program crashes it should definitely drop the control rods :-)

Just hope that you know what type of reactor you have: the previous is
OK for pressurized water reactors, but boiling water reactors (at least
those used hereabouts) have the rod control mechanisms underneath the
reactor, so dropping the rods takes them out of the core!
From: Christopher B. Browne
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <slrn6hfdob.qlc.cbbrowne@knuth.brownes.org>
On Tue, 24 Mar 1998 12:17:38 +0200, Martti Halminen <···············@dpe.fi> posted:
>Kenneth P. Turvey wrote:
>
>> >> I *thought* that pulling out the control rods would cause reactions to stop,
>> >> which may be incorrect.
>> >
>> >I don't remember exactly how they work.
>> 
>> The control rods absorb neutrons flying out of the
>> pile.  You put the control rods into the pile to stop the reaction, and
>> pull them out to heat things up again.
>> 
>> If your program crashes it should definitely drop the control rods :-)
>
>Just hope that you know what type of reactor you have: the previous is
>OK for pressurized water reactors, but boiling water reactors (at least
>those used hereabouts) have the rod control mechanisms underneath the
>reactor, so dropping the rods takes them out of the core!

And then there are "heavy water" reactors where it's not the rods that
control things - it's the presence (or absence) of the heavy water.  

If the program crashes --> dump the heavy water --> reaction stops.

Of course, this also means --> power goes off, so that you'd probably want
to make sure that the program isn't so buggy that this happens at stupid
times...

-- 
Those who do not understand Unix are condemned to reinvent it, poorly. 	
-- Henry Spencer          <http://www.hex.net/~cbbrowne/lsf.html>
········@hex.net - "What have you contributed to Linux today?..."
From: Michael Hobbs
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <3517CD7B.FEDAED9@ccmail.fingerhut.com>
Christopher B. Browne wrote:
> And then there are "heavy water" reactors where it's not the rods that
> control things - it's the presence (or absence) of the heavy water.
> 
> If the program crashes --> dump the heavy water --> reaction stops.

With all of these analogies flying around between programs and nuclear
reactor cores, it brings to mind the phrase "core dump". Hopefully, a
core dump in the program doesn't cause one in the reactor.

Another hopelessly off-topic thread: What is the origin of the word
"core" in "core dump"? I'm assuming it's a throwback to the days when
memory was composed of ferrite core; but I can't be certain, since I
wasn't around back then.
From: Charles Martin
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <3517DEEB.C144AE7D@connix.com>
Michael Hobbs wrote:

> 
> Another hopelessly off-topic thread: What is the origin of the word
> "core" in "core dump"? I'm assuming it's a throwback to the days when
> memory was composed of ferrite core; but I can't be certain, since I
> wasn't around back then.

Sure, go ahead, rub it in.

Yes, that's the reason.  There are still folks around (like me) who find
themselves referring to the amount of "core memory" rather than RAM in
weak moments.  And that's "ferrite coreS" -- even in the old days the
computers had more than one. :-)
From: Frank A. Adrian
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <6f900p$pbs$1@client2.news.psi.net>
Charles Martin wrote in message <·················@connix.com>...
>Yes, that's the reason.  There are still folks around (like me) who find
>themselves referring to the amount of "core memory" rather than RAM in
>weak moments.  And that's "ferrite coreS" -- even in the old days the
>computers had more than one. :-)

Yes, but not that many more...
--
Frank A. Adrian
First DataBank
············@firstdatabank.com (W)
······@europa.com (H)
This message does not necessarily reflect those of my employer,
its parent company, or any of the co-subsidiaries of the parent
company.
From: Raymond Toy
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <4n1zvr4zlq.fsf@rtp.ericsson.se>
"Frank A. Adrian" <············@firstdatabank.com> writes:

> Charles Martin wrote in message <·················@connix.com>...
> >Yes, that's the reason.  There are still folks around (like me) who find
> >themselves referring to the amount of "core memory" rather than RAM in
> >weak moments.  And that's "ferrite coreS" -- even in the old days the
> >computers had more than one. :-)
> 
> Yes, but not that many more...

In front of me is a card which contains 16K bytes of core memory.
(The card is about 1 ft square, the core itself is a small board about
5 in by 8 in.)  I didn't count them, but I assume there are 16k*8 tiny
little core magnets on the board.  I don't know how many such boards
were in the computer, but it's not exactly a small number of "ferrite
coreS", but certainly not a large number for today's computer memory.

Ray
From: Dick Margulis
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <6f99gp$mbt@bgtnsc02.worldnet.att.net>
Raymond Toy wrote:
> 
> "Frank A. Adrian" <············@firstdatabank.com> writes:
> 
> > Charles Martin wrote in message <·················@connix.com>...
> > >Yes, that's the reason.  There are still folks around (like me) who find
> > >themselves referring to the amount of "core memory" rather than RAM in
> > >weak moments.  And that's "ferrite coreS" -- even in the old days the
> > >computers had more than one. :-)
> >
> > Yes, but not that many more...
> 
> In front of me is a card which contains 16K bytes of core memory.
> (The card is about 1 ft square, the core itself is a small board about
> 5 in by 8 in.)  I didn't count them, but I assume there are 16k*8 tiny
> little core magnets on the board.  I don't know how many such boards
> were in the computer, but it's not exactly a small number of "ferrite
> coreS", but certainly not a large number for today's computer memory.
> 
> Ray


Ray,

Are you sure the card does not contain 16k bits, as opposed to bytes?
The IBM 1620 I worked on in the early 1960s had 20,000 6-bit BCD digits,
for a total of 120,000 bits. This occupied a cube-shaped array
approximately seven or eight inches on a side, give or take a bit. If
your card contains 128,000 or so bits, that would imply an eightfold
increase in packing density of ferrite cores somewhere between the 1620
and whatever machine your card came from, which seems somewhat
implausible given the nature of the beast.
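
For the record, the raw core counts being compared work out like this:

  (* 16 1024 8)   ; => 131072  cores, if the card really holds 16K bytes
  (* 16 1024)     ; =>  16384  cores, if it is 16K bits
  (* 20000 6)     ; => 120000  cores in the 1620's 20,000 six-bit digits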
From: Charles Martin
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <35182E02.CBF9F296@connix.com>
Raymond Toy wrote:
> 
> "Frank A. Adrian" <············@firstdatabank.com> writes:
> 
> > Charles Martin wrote in message <·················@connix.com>...
> > >Yes, that's the reason.  There are still folks around (like me) who find
> > >themselves referring to the amount of "core memory" rather than RAM in
> > >weak moments.  And that's "ferrite coreS" -- even in the old days the
> > >computers had more than one. :-)
> >
> > Yes, but not that many more...
> 
> In front of me is a card which contains 16K bytes of core memory.
> (The card is about 1 ft square, the core itself is a small board about
> 5 in by 8 in.)  I didn't count them, but I assume there are 16k*8 tiny
> little core magnets on the board.  I don't know how many such boards
> were in the computer, ....

One.

If you were lucky.
From: gregorys
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <35185dcd.0@news.one.net>
I maintained a system about 15 years ago that,
if memory has not failed, had 400,000 18-bit words. They were in 5" square
metal boxes of 128,000 words each. These were microscopic cores. Still
wonder how they were made?


--
········@one.net
Charles Martin wrote in message <·················@connix.com>...
>Raymond Toy wrote:
>>
>> "Frank A. Adrian" <············@firstdatabank.com> writes:
>>
>> > Charles Martin wrote in message <·················@connix.com>...
>> > >Yes, that's the reason.  There are still folks around (like me) who find
>> > >themselves referring to the amount of "core memory" rather than RAM in
>> > >weak moments.  And that's "ferrite coreS" -- even in the old days the
>> > >computers had more than one. :-)
>> >
>> > Yes, but not that many more...
>>
>> In front of me is a card which contains 16K bytes of core memory.
>> (The card is about 1 ft square, the core itself is a small board about
>> 5 in by 8 in.)  I didn't count them, but I assume there are 16k*8 tiny
>> little core magnets on the board.  I don't know how many such boards
>> were in the computer, ....
>
>One.
>
>If you were lucky.
From: Frank A. Adrian
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <6f9bsa$t8k$1@client2.news.psi.net>
Raymond Toy wrote in message <··············@rtp.ericsson.se>...
>"Frank A. Adrian" <············@firstdatabank.com> writes:
>> Yes, but not that many more...
>
>In front of me is a card which contains 16K bytes of core memory.
>(The card is about 1 ft square, the core itself is a small board about
>5 in by 8 in.)  I didn't count them, but I assume there are 16k*8 tiny
>little core magnets on the board.  I don't know how many such boards
>were in the computer, but it's not exactly a small number of "ferrite
>coreS", but certainly not a large number for today's computer memory.

That's what I'm comparing it to.  Few of the newbies have had the joy of
trying to fit a simulation into 8K words of memory or know the joy of the
"huge expanse" of a 64K memory!  Hell, if you really tried, you could get a
whole RTOS into less than 2K (of course it didn't do much more than handle
interrupts, directly access IO, and queue processes at that size, but the
fact that you could do it at all seems a miracle today).  Now that we've
bored everyone to tears - I've always thought that a good way to keep
programs fast was to place artificial constraints on memory.  It really
makes one focus on memory bandwidth (often THE bottleneck in today's
systems), structure sizes, etc.  Has anyone intentionally tried this
approach?  Also, lest others in these newsgroups think I've lost my mind,
let me state that I am quite aware that this approach also leads to a fair
amount of programmer pain as well as a marked decrease in programmer
productivity.  It just seems like a tradeoff one could make if one wanted a
"fast" system rather than a system fast.
--
Frank A. Adrian
First DataBank
············@firstdatabank.com (W)
······@europa.com (H)
This message does not necessarily reflect those of my employer,
its parent company, or any of the co-subsidiaries of the parent
company.
From: Ray Dillinger
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <351FD5BF.7C34@sonic.net>
Frank A. Adrian wrote:
>  - I've always thought that a good way to keep
> programs fast was to place artificial constraints on memory.  It really
> makes one focus on memory bandwidth (often THE bottleneck in today's
> systems), structure sizes, etc.  Has anyone intentionally tried this
> approach?  Also, lest others in these newsgroups think I've lost my mind,
> let me state that I am quite aware that this approach also leads to a fair
> amount of programmer pain as well as a marked decrease in programmer
> productivity.  It just seems like a tradeoff one could make if one wanted a
> "fast" system rather than a system fast.


What you are describing is "classicist" style programming, and yes 
there are some of us who do it for fun and profit.  One of my creations 
was ttte, the "teeny tiny text editor".   It was a full-screen editor. 
It ran in 8k. I haven't used it in two years. 

Actually the style is more "neoclassical" than "classical" -- some 
things that were done to save space way back when are manifestly 
bad ideas best forgotten, and the "classicists" leave them behind. 

					Bear
From: Rob Warnock
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <6fpvjh$851cp@fido.asd.sgi.com>
Ray Dillinger  <····@sonic.net> wrote:
+---------------
| Frank A. Adrian wrote:
| >  - I've always thought that a good way to keep
| > programs fast was to place artificial constraints on memory...
| 
| What you are describing is "classicist" style programming, and yes 
| there are some of us who do it for fun and profit...
+---------------

I've always asserted[*] that the reason placing "artificial" constraints on
*any* aspect of one's program seems to improve the quality is that it forces
one to look at the code more than once! ...which so often people don't. Their
code is "write-only".

It is in the process of "tuning" (and I really don't care *which* parameter
you're tuning) that one re-reads the whole program, and it is during this
re-reading that one discovers the *really* significant beneficial changes --
usually major algorithm changes.

This is the same reason that programs that are initially "buggy" often end
up having (after they're debugged, that is) better performance or memory
utilization than programs that worked the first time. [Note: Initially
buggy programs *don't* tend to have fewer post-delivery errors -- quite
the reverse!]


-Rob

[*] Back when I was chairing the DECUS DEC-10 SIG on Implementation Languages
(which mostly meant BLISS, in those days), I formulated the only "law" upon
which I've ever had the temerity to place my name:

	Warnock's Law For Why BLISS Programs Are So Big: It's because they
	mostly work the first time, and so they're never debugged. And since
	it's normally during the process of debugging that the programs are
	*read* (or reviewed) and the major algorithmic changes made that save
	substantial memory, these changes are not made, either. So you're
	left with a properly working but bloated program.

[Historical note: At the time this was first put forth, the primary language
for writing both the TOPS-10 operating system and all of the system utilities
was *assembler* (MACRO-10).]


-----
Rob Warnock, 7L-551		····@sgi.com   http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673 [New area code!]
2011 N. Shoreline Blvd.		FAX: 650-933-4392
Mountain View, CA  94043	PP-ASEL-IA
From: David Fox
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <y5a90pzaivv.fsf@fox.openix.com>
Charles Martin <········@connix.com> writes:

> Michael Hobbs wrote:
> 
> > 
> > Another hopelessly off-topic thread: What is the origin of the word
> > "core" in "core dump"? I'm assuming it's a throwback to the days when
> > memory was composed of ferrite core; but I can't be certain, since I
> > wasn't around back then.
> 
> Sure, go ahead, rub it in.
> 
> Yes, that's the reason.  There are still folks around (like me) who find
> themselves referring to the amount of "core memory" rather than RAM in
> weak moments.  And that's "ferrite coreS" -- even in the old days the
> computers had more than one. :-)

But why were they called "ferrite cores"?  They're donut-shaped;
their cores are empty.
-- 
David Fox	   http://www.cat.nyu.edu/fox		xoF divaD
NYU Media Research Lab   ···@cat.nyu.edu   baL hcraeseR aideM UYN
From: Charles Martin
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <35182E4D.BDBDD4A1@connix.com>
David Fox wrote:
> 
> Charles Martin <········@connix.com> writes:
> 
> > Michael Hobbs wrote:
> >
> > >
> > > Another hopelessly off-topic thread: What is the origin of the word
> > > "core" in "core dump"? I'm assuming it's a throwback to the days when
> > > memory was composed of ferrite core; but I can't be certain, since I
> > > wasn't around back then.
> >
> > Sure, go ahead, rub it in.
> >
> > Yes, that's the reason.  There are still folks around (like me) who find
> > themselves referring to the amount of "core memory" rather than RAM in
> > weak moments.  And that's "ferrite coreS" -- even in the old days the
> > computers had more than one. :-)
> 
> But why were they called "ferrite cores"?  They're donut shaped,
> their cores are empty.
>

There's always one in any crowd.
From: Jens Kilian
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <sfbtuvm65m.fsf@bidra168.bbn.hp.com>
f o x @ c a t . n y u . e d u (David Fox) writes:
> But why were they called "ferrite cores"?  They're donut shaped,
> their cores are empty.

You may think it's funny, but the question is legitimate.  The ferrites are
called cores because the earliest ferrite memories had actual coils of wire,
and the ferrites were the *coils'* cores.  Later models (with improved sense
amplifiers) had the wires running through the ferrites:

		  |   /
		  |  /
		 /\ /
		/  \
           -----\   \-----
	         \  /
	        / \/
	       /  |
	      /   |

Bye,
	Jens (who never used core memory, but at least *learned* about it.)
--
··········@acm.org                 phone:+49-7031-14-7698 (HP TELNET 778-7698)
  http://www.bawue.de/~jjk/          fax:+49-7031-14-7351
PGP:       06 04 1C 35 7B DC 1F 26 As the air to a bird, or the sea to a fish,
0x555DA8B5 BB A2 F0 66 77 75 E1 08 so is contempt to the contemptible. [Blake]
From: ······@cit.org.by
Subject: Re: WAY OFF TOPIC (was: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <6fbmu7$p99$1@nnrp1.dejanews.com>
In article <················@ccmail.fingerhut.com>,
  ··········@ccmail.fingerhut.com wrote:

......

> Another hopelessly off-topic thread: What is the origin of the word
> "core" in "core dump"? I'm assuming it's a throwback to the days when
> memory was composed of ferrite core; but I can't be certain, since I
> wasn't around back then.
>

Here's a quote from the Jargon file:

:core: n.  Main storage or RAM.  Dates from the days of
   ferrite-core memory; now archaic as techspeak most places outside
   IBM, but also still used in the Unix community and by old-time
   hackers or those who would sound like them.  Some derived idioms
   are quite current; `in core', for example, means `in memory'
   (as opposed to `on disk'), and both {core dump} and the `core
   image' or `core file' produced by one are terms in favor.  Some
   varieties of Commonwealth hackish prefer {store}.


Cheers,
   Eugene

-----== Posted via Deja News, The Leader in Internet Discussion ==-----
http://www.dejanews.com/   Now offering spam-free web-based newsreading
From: Jonathan Guthrie
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <6f6117$9d2$1@news.hal-pc.org>
In comp.lang.scheme R. Toy <····@mindspring.com> wrote:
> Christopher B. Browne wrote:
 
> > Which clearly establishes that there are multiple valid sorts of behaviour
> > for this.
 
> > In an "embedded" application (of whatever sort), it is utterly inappropriate
> > for the program to crash.  If there is no mechanism to deal with recovery,
> > then it makes little sense to allow a crash.  Hence, "propagate error, and
> > keep going."
> > 

> Yes, I agree with multiple valid sorts of behavior.

> Let's see, I don't know what to do about the error, so I'll just
> continue and pull out the control rods, turn off the coolant, disconnect
> the operator panel and quietly keep going until meltdown. :-)

> However, I don't see much point of propagating the error.  It seems to
> me that a reset and restart could hardly do worse.

Never done it, have you?

I used to program gas-flow computers for a living.  We controlled big (36-
and 42-inch) transcontinental gas pipelines.  Closing the wrong valve or
closing valves in the wrong order can result in explosions, fires, loss of
life, and so forth.  We very carefully added methods to propagate errors
and not cause unit restarts.

Why?  Because if the unit's calculations have failed, it clearly is
incapable of handling the situation.  Quick: An input is out-of-range.
Should the valve open or close?  There is no a priori way of knowing.
In that situation, you need to involve more resources than the local
system has available.  If the unit is restarting, there is NOTHING
that an operator can do about it except notice that communications with
the unit have failed.  He'll dispatch a truck (maybe) and they'll get
there in 2-6 hours.  If the error is propagated, it can be reported to
the operator and he can take remote control over the unit and take
appropriate action.

That action may involve other units hundreds of miles apart, that the
original unit doesn't "know anything" about.  Most often, what the
operator does is what the unit does in that situation: nothing.  The
physical system is set up to handle errors gracefully as well.  This
is because mechanical parts fail more often than the programming does and
there is little the program can do if the valve decides that today it's
going to be closed.

This system works very well and is quite robust.  (About the only thing I
ever saw cause a calculation failure was an open static pressure transducer
loop.)  So, I've BTDT, and I really did get a T-shirt.
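(For the Lisp-inclined, a rough sketch of the idea -- the sensor range,
valve, and telemetry functions below are made-up stand-ins, nothing like
our actual flow-computer code.  The point is that the local unit signals
and reports; it does not restart.)

  ;; Sketch only: propagate an out-of-range input to the operator
  ;; instead of restarting the unit.  All names are illustrative.
  (define-condition input-out-of-range (error)
    ((sensor :initarg :sensor :reader sensor-of)
     (value  :initarg :value  :reader value-of)))

  (defun read-static-pressure (raw)
    (if (<= 0.0 raw 2000.0)             ; plausible-range check
        raw
        (error 'input-out-of-range :sensor :static-pressure :value raw)))

  (defun adjust-valve (pressure)
    ;; stand-in for the real actuator code
    (format t "~&valve adjusted for pressure ~A~%" pressure))

  (defun report-to-operator (c)
    ;; stand-in for the real telemetry/alarm path
    (format t "~&ALARM ~A: ~A~%" (sensor-of c) (value-of c)))

  (defun control-step (raw)
    (handler-case (adjust-valve (read-static-pressure raw))
      (input-out-of-range (c)
        ;; leave the valve where it is and hand the decision upstream
        (report-to-operator c))))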

-- 
Jonathan Guthrie (········@brokersys.com)
Information Broker Systems   +281-895-8101   http://www.brokersys.com/
12703 Veterans Memorial #106, Houston, TX  77014, USA

We sell Internet access and commercial Web space.  We also are general
network consultants in the greater Houston area.
From: Raymond Toy
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <4nn2ehuzlm.fsf@rtp.ericsson.se>
Jonathan Guthrie <········@brokersys.com> writes:

> In comp.lang.scheme R. Toy <····@mindspring.com> wrote:
> > Christopher B. Browne wrote:
>  
> > > Which clearly establishes that there are multiple valid sorts of behaviour
> > > for this.
>  
> > > In an "embedded" application (of whatever sort), it is utterly inappropriate
> > > for the program to crash.  If there is no mechanism to deal with recovery,
> > > then it makes little sense to allow a crash.  Hence, "propagate error, and
> > > keep going."
> > > 
> 
> > However, I don't see much point of propagating the error.  It seems to
> > me that a reset and restart could hardly do worse.
> 
> Never done it, have you?

Yes I have.  In that case, the appropriate thing was to reset and
restart.

> 
> I used to program gas-flow computers for a living.  We controlled big (36-
> and 42-inch) transcontinental gas pipelines.  Closing the wrong valve or
> closing valves in the wrong order can result in explosions, fires, loss of
> life, and so forth.  We very carefully added methods to propagate errors
> and not cause unit restarts.

But here you very carefully decided how to handle errors.  Perhaps I
was mistaken, but my interpretation of "propagate error and keep
going" was that no special handling of the error is done at all.  Somewhat
like letting NaN keep generating more NaN until done.  In your case
you've decided exactly what to do with your "NaN": keep going and let
someone else handle it.

Ray
From: Jonathan Guthrie
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <6f607e$8q7$1@news.hal-pc.org>
In comp.lang.scheme R. Toy <····@mindspring.com> wrote:
> > In my opinion, the guys who did the C numeric extensions (I'm drawing a
> > blank on the actual name) got it right.  When you get an error, propagate
> > that error, but keep the calculation going.

> Yeah, I just really *LOVE* it when my simulation that's been running for
> days finally finishes and says NaN for every answer.  So, which one of
> those billion or trillion flops caused the problem?  Or worse yet,
> something that isn't supposed to overflow does, corrupts other
> computations, I take the reciprocal to get zero, and the results look ok
> because they're just numbers, but they are totally bogus.  (Ok, this
> isn't too likely, but you never know.)

> In this particular case, I want my computations to die when something
> unexpected happens, like overflow, invalid ops, etc.

So tell it to die.  That's an available option.  However, don't be surprised
when the knowledge that the calculation went out of range HERE doesn't help
you very much to figure out what the real problem is.  Numeric errors tend
to propagate quite a bit before they get far enough out of range to cause
the system to barf.
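
(In Lisp terms -- and this is only a toy sketch with made-up names, not
anything I've shipped -- "telling it to die" just means putting the
handler close to the loop and aborting with some context:)

  ;; Toy sketch: stop the run at the first arithmetic error, with some
  ;; context, instead of letting bogus values propagate for days.
  (defun simulation-step (state)
    ;; placeholder for the real per-step computation
    (/ 1.0 (getf state :divisor 1.0)))

  (defun run-simulation (state steps)
    (dotimes (i steps state)
      (handler-case
          (setf (getf state :result) (simulation-step state))
        (arithmetic-error (c)
          (error "simulation aborted at step ~D: ~A" i c)))))

Even then, as noted above, knowing *where* it died is not the same as
knowing where the real problem started.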

Note that the last time I did the "simulation that runs for days" thing,
it was in Modula-3 (which has exceptions) and the results were often
bogus even though the exceptions were never tripped.  Finding the reasons
for bogus results is left as an exercise for the programmer.  (I had a
boss once who said that all programs go through four phases: it crashes;
it doesn't crash but produces zero output; it produces nonzero
garbage; and it produces correct output.)

This is getting away from "programming is fun again" (an attitude which I
highly recommend---programming should be fun and programmers should always
be learning new languages  <insert TAO OF PROGRAMMING quote here>) and
sounding more like work.  The last time I had real joy from programming
was yesterday when I found the reversed-branch that I had been searching
for (even though I didn't know that I had been searching for it) for three
days.

-- 
Jonathan Guthrie (········@brokersys.com)
Information Broker Systems   +281-895-8101   http://www.brokersys.com/
12703 Veterans Memorial #106, Houston, TX  77014, USA

We sell Internet access and commercial Web space.  We also are general
network consultants in the greater Houston area.
From: Raymond Toy
Subject: Re: "Programming is FUN again" rambling commentary (especially the rambling part)
Date: 
Message-ID: <4npvjduzsm.fsf@rtp.ericsson.se>
Jonathan Guthrie <········@brokersys.com> writes:

> In comp.lang.scheme R. Toy <····@mindspring.com> wrote:
> > > In my opinion, the guys who did the C numeric extensions (I'm drawing a
> > > blank on the actual name) got it right.  When you get an error, propagate
> > > that error, but keep the calculation going.
> 
> > Yeah, I just really *LOVE* it when my simulation that's been running for
> > days finally finishes and says NaN for every answer.  So, which one of
> > those billion or trillion flops caused the problem?  Or worse yet,
[snip]
> 
> So tell it to die.  That's an available option.  However, don't be surprised
> when the knowledge that the calculation went out of range HERE doesn't help
> you very much to figure out what the real problem is.  Numeric errors tend
> to propagate quite a bit before they get far enough out of range to cause
> the system to barf.

But, surely, it's better to find out after the first few seconds than
after days of computations.  At the very least I didn't waste days of
time. 

In any case, I think we both agree that the appropriate actions should 
be taken, whatever they may be.

Ray
From: Thant Tessman
Subject: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <351692EA.41C6@bogus.acm.org>
Jonathan Guthrie wrote:

> [...]  That, and the fact that there often is no in-program 
> way to deal with the actual error (therefore many errors are 
> best  simply ignored) is the reason why "never test for an 
> error that you don't know how to deal with" is such a common, 
> and useful, attitude WRT error trapping.  [...]

 http://www.esrin.esa.it/htdocs/tidc/Press/Press96/ariane5rep.html

-thant
From: Brent A Ellingson
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <351A72C3.5CEF1977@badlands.nodak.edu>
Thant Tessman wrote:
> 
> Jonathan Guthrie wrote:
> 
> > [...]  That, and the fact that there often is no in-program
> > way to deal with the actual error (therefore many errors are
> > best  simply ignored) is the reason why "never test for an
> > error that you don't know how to deal with" is such a common,
> > and useful, attitude WRT error trapping.  [...]
> 
>  http://www.esrin.esa.it/htdocs/tidc/Press/Press96/ariane5rep.html
> 
> -thant

As near as I can tell, Thant supplied this article to support Jon's 
statement.   

This article refers to the failure of a test flight of an Ariane 5
rocket.  From what I can tell, it failed partially because a software 
module on the guidance system didn't test for exceptions during the
conversion of a 64-bit floating-point number to a 16-bit int.  This 
was considered fine -- the module didn't do anything useful after
take off, and if it failed after that (which it did) it wasn't 
going to affect anything.   Real world, physical constraints on the 
rocket prevented the values from going out of range before take off.

However the OS/hardware/whatever of the guidance system inexplicably
DID care, and when the exception was propagated back up to the OS,
it DID note the error.  I quote from the article:

    Although the source of the Operand Error has been identified, 
    this in itself did not cause the mission to fail. The 
    specification of the exception-handling mechanism also 
    contributed to the failure. In the event of any kind of 
    exception, the system specification stated that: the failure 
    should be indicated on the databus, the failure context 
    should be stored in an EEPROM memory (which was recovered 
    and read out for Ariane 501), and finally, the SRI processor 
    should be shut down.
    ...
    There is reason for concern that a software exception should 
    be allowed, or even required, to cause a processor to halt 
    while handling mission-critical equipment. Indeed, the loss 
    of a proper software function is hazardous ...  this resulted 
    in the switch-off of two still healthy critical units of 
    equipment.

In other words, why the hell did the equipment test for the exception
if it didn't know what to do with it, except crash and destroy the 
rocket?  Had it not tested for the exception on the non-critical
code, the rocket probably would not have failed.
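
(A hand-waving sketch of the alternative, in Lisp for concreteness --
the real code was Ada, and the names and limits here are invented:
decide locally what an out-of-range value means instead of halting the
processor:)

  ;; Sketch: convert a double to a 16-bit integer and decide locally what
  ;; an out-of-range value means, rather than letting the conversion trap.
  (define-condition conversion-overflow (error)
    ((value :initarg :value :reader overflowing-value)))

  (defun to-int16 (x)
    (let ((n (round x)))
      (if (<= -32768 n 32767)
          n
          (error 'conversion-overflow :value x))))

  (defun horizontal-bias (x)
    ;; here: saturate and keep going; the flight code instead let the
    ;; unhandled exception shut the processor down
    (handler-case (to-int16 x)
      (conversion-overflow () (if (plusp x) 32767 -32768))))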

Brent Ellingson
········@badlands.nodak.edu
From: Erik Naggum
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <3099927027198005@naggum.no>
* Brent A Ellingson
| In other words, why the hell did the equipment test for the exception if
| it didn't know what to do with it, except crash and destroy the rocket?
| Had it not tested for the exception on the non-critical code, the rocket
| probably would not have failed.

  it is amazing that the view that "don't test for errors you don't know
  how to handle" is _still_ possible in the light of that report, but I
  guess that's the way with _beliefs_.  I cannot fathom how people will
  read all the contradictory evidence they can find and still end up
  believing in some braindamaged myths.

  the problem is: the equipment did _not_ test for the exception.  the
  exception was allowed to propagate unchecked until the "crash-and-burn"
  exception handler took care of it.  this could be viewed as silly, but
  the report clearly states why this was sound design: unhandled exceptions
  should be really serious and should indicate random hardware failure.

  the _unsound_ design was not in the exception handling at all, it was in
  allowing old code from Ariane 4 to still run in Ariane 5, notably code that
  should run for a while into the launch sequence on Ariane 4 because it
  would enable shorter re-launch cycles -- which was not necessary at all
  on Ariane 5.  the error was thus not in stopping at the wrong time or
  under the wrong conditions -- it was in _running_ code at the wrong time
  and under the wrong conditions.

  "had it not run the bogus code, the rocket would not have failed in it."

  how can you expect to learn from mistakes when you insist that the errors
  you observe are caused by mistakes you think you have _already_ learned
  from, and that others (the dumbass people who use exceptions, in this
  case) are at fault for not learning from?

  rather than "don't test for error you don't know how to handle", I
  propose "don't run code with errors you aren't prepared to handle".

  did you notice how the report had the brilliant insight that we have
  gotten used to thinking that code is good until proven faulty and that this
  was the major cultural problem pervading the whole design and deployment
  process?  it's high time this insight could sink into the right people
  and cause more focus on provably correct code and verification.  with the
  extremely arrogant attitude still held by Brent and many others like him,
  we will continue to produce crappy code that crashes rockets for millennia
  to come without learning what the problem really is: unchecked assumptions!
  
#:Erik
-- 
  religious cult update in light of new scientific discoveries:
  "when we cannot go to the comet, the comet must come to us."
From: Brent A Ellingson
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <351BC8A3.7CE38A98@badlands.nodak.edu>
Erik Naggum wrote:
>   did you notice how the report had the brilliant insight that we have
>   gotten used to think that code is good until proven faulty and that this
>   was the major cultural problem pervading the whole design and deployment
>   process?  it's high time this insighed could sink into the right people
>   and cause more focus on provably correct code and verification.  with the
>   extremely arrogant attitude still held by Brent and many others like him,
>   we will continue to produce crappy code that crash rockets for millenia
>   to come without learning the problem really is: unchecked assumptions!

It is provably impossible to verify all code.  This isn't myth -- this
is fact.   Arrogant people like Erik keep believing the stuff they
learned in intro math classes at University isn't real, but simply a 
bunch of myths.   They will keep believing it is *possible* to verify
all code, and will continue to write crappy programs they
believe are "provably correct" and "can't" fail.

The single biggest mistake I see documented in the report was
the OS/hardware/whatever of the guidance system of the rocket was
designed and built on the assumption that the code it was running
was proven to be correct, and any software failures indicated a
critical problem.  As a result, the OS/hardware of the guidance
system was built on the incorrect idea that it should catch all
the software errors, including the errors that were clearly not
critical and which it had no sensible mechanism to correct.  This
resulted in the error being caught and dealt with in the stupidest 
way imaginable -- the guidance system crashed, the rocket nozzle went
to full deflection, and the engines continued to burn until the
whole thing was physically ripped to pieces.  That is an example of
catching an error you were definitely better off ignoring.

The fact that *this* software failure was preventable only obscures
the fact that software failures, *in general*, are NOT preventable.  
Trying to verify code is good.  Mistakenly believing it is possible 
to create "provably correct code" is like believing you can tell
the future by a combination of voodoo and looking at the guts of a
slaughtered goat.  It can't be done.  Period.

Whatever,
Brent Ellingson
········@badlands.nodak.edu
From: Erik Naggum
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <3100020676914073@naggum.no>
* Brent A Ellingson
| It is provably impossible to verify all code.  This isn't myth -- this is
| fact.  Arrogant people like Erik keep believing the stuff they learned in
| intro math classes at University isn't real, but simply a bunch of myths.
| They will keep believing it is *possible* to verify all code, and will
| continue to write crappy programs they believe are "provably
| correct" and "can't" fail.

  man, what did verification _do_ to you?  and why do you have to make such
  an incredibly stupid insult just to make yourself feel better?  just
  because you _obviously_ cannot write correct code doesn't mean those who
  say they can, _and_ back up their position with a ten-year history of
  code that just _doesn't_ fail, are frauds and liars.  I wonder what hurt
  you so badly, I really do, but I sure am glad it wasn't me.  please make
  sure you catch the guys, though -- your hostility is eating you up.

  the Department of Informatics at the University of Oslo is perhaps _the_
  pioneering site in verification.  I can assure you that this stuff is not
  "intro math classes", but you have nothing to learn from mistakes you
  don't already know how to handle, right?

| That is an example of catching an error you were definately better off
| ignoring.

  ok, so this _is_ the core credo of a religion with you, and I was in
  error for ridiculing your religious beliefs.  I'm really sorry.

| The fact that *this* software failure was preventable only obscures the
| fact that software failures, *in general*, are NOT preventable.

  yeah, while you're predicting the future and are obviously infallible in
  your own eyes, I'm arrogant.  I think I'll stick with arrogant.

| Trying to verify code is good.  Mistakenly believing it is possible to
| create "provably correct code" is like believing you can tell the future
| by a combination of voodoo and looking at the guts of a slaughtered goat.
| It can't be done.  Period.

  I feel deeply sorry for you, but I feel even sorrier for the poor people
  who might hire you or otherwise stumble into your code.

#:Erik
-- 
  religious cult update in light of new scientific discoveries:
  "when we cannot go to the comet, the comet must come to us."
From: Christopher Browne
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <6fhjbk$4o$10@blue.hex.net>
On 27 Mar 1998 20:51:16 +0000, Erik Naggum <······@naggum.no> wrote:
>* Brent A Ellingson
>| It is provably impossible to verify all code.  This isn't myth -- this is
>| fact.  Arrogant people like Erik keep believing the stuff they learned in
>| intro math classes at University isn't real, but simply a bunch of myths.
>| They will keep believing it is *possible* to verify all code, and will
>| continue to write crappy programs they believe are "provably
>| correct" and "can't" fail.
>
>  man, what did verification _do_ to you?  and why do you have to make such
>  an incredibly stupid insult just to make yourself feel better?  just
>  because you _obviously_ cannot write correct code doesn't mean those who
>  say they can, _and_ back up their position with a ten-year history of
>  code that just _doesn't_ fail, are frauds and liars.  I wonder what hurt
>  you so badly, I really do, but I sure am glad it wasn't me.  please make
>  sure you catch the guys, though -- your hostility is eating you up.
>
>  the Department of Informatics at the University of Oslo is perhaps _the_
>  pioneering site in verification.  I can assure you that this stuff is not
>  "intro math classes", but you have nothing to learn from mistakes you
>  don't already know how to handle, right?

You're still left with the GIGO problems, and there's not just one of
them... 

- If the program is "proven" to be correct by whatever means, but you
then toss incorrect data at it, results will be difficult to predict. 

- The program cannot be more correct than the specifications used to
define what the program was supposed to do.  If the program was written
to behave the wrong way, it doesn't matter if the internals are proven
to be "verified correct," the results of running the program will be
incorrect. 

If you don't have time/opportunity to fully define the parameters of the
system that the program is supposed to somehow analyze or react to, then
you are quite limited as to how "verifiably correct" you can be.

- If my boss sneezes on a hankey, and says: "Here are the
specifications: Go write a program!" the value of verifying the
correctness of the program is rather limited. 

- If my boss tells me to go write a program that will run on an
unreliable OS platform, it doesn't much matter how little or how much
verification work I do on my program, it won't likely prevent the system
from crashing.

It is quite appropriate to do some verification to reduce the number of
possible errors that I may be responsible for introducing into the
system; there will be some point of diminishing returns on such efforts
where the cost of the efforts exceeds the expected returns. 

-- 
Windows NT: The Mister Hankey of operating systems
········@hex.net -  <http://www.hex.net/~cbbrowne/lsf.html>
From: Frank Adrian
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <6fogrc$6td$1@client3.news.psi.net>
Christopher Browne wrote in message <············@blue.hex.net>...
>
>You're still left with the GIGO problems, and there's not just one of
>them...
>
>- If the program is "proven" to be correct by whatever means, but you
>then toss incorrect data at it, results will be difficult to predict.

Well, if you take the specification down to the possible characters on an
input file stream or the possible events coming from the system, you can
prove that you've either (a) handled improper input or (b) chosen to ignore
it.  But you can't state "I've handled the error cases" and then let them
fail.

>- The program cannot be more correct than the specifications used to
>define what the program was supposed to do.  If the program was written
>to behave the wrong way, it doesn't matter if the internals are proven
>to be "verified correct," the results of running the program will be
>incorrect.

Actually, this shows a lack of knowledge of what program proofs are supposed
to provide.  Yes, if the spec is incomplete or inconsistent, a proof for
this spec will also have problems.  But guess what!  By trying to do the
proof based on the spec, you've shown that the spec IS incomplete or
inconsistent.  More importantly, you've usually shown HOW it's incomplete or
inconsistent so that the specification can be modified to handle these cases.

>If you don't have time/opportunity to fully define the parameters of the
>system that the program is supposed to somehow analyze or react to, then
>you are quite limited as to how "verifiably correct" you can be.

Well, if you don't have time to produce a correct system, then all bets are
off anyway.  Lack of time makes us all stupid.  But I'd say that formal
verification will help deliver "more correct" code for a given time period
than a seat of the pants method.  And, if I had a white-box test opportunity
and proof tools, I could find test cases that break the code (or else a
proof that it does work as stated).  In time, if you don't let my white-box
test find the bugs, some user will.  No seat of the pants method approaches
that level of certainty.

>- If my boss sneezes on a hankey, and says: "Here are the
>specifications: Go write a program!" the value of verifying the
>correctness of the program is rather limited.

Of course, the simple answer is that you should get a better boss :-). OTOH,
proof techniques will show you it's the boss' hankie that's in error and not
your program (not that a boss like that would be happy having this pointed
out).

>- If my boss tells me to go write a program that will run on an
>unreliable OS platform, it doesn't much matter how little or how much
>verification work I do on my program, it won't likely prevent the system
>from crashing.

Actually, it may.  Code that is proven would probably not stress the system
as much as code that was throwing bogus pointers, leaking memory, and, in
general, making a nuisance of itself.

>It is quite appropriate to do some verification to reduce the number of
>possible errors that I may be responsible for introducing into the
>system; there will be some point of diminishing returns on such efforts
>where the cost of the efforts exceed the expected returns.

I don't think that anyone denies this.  It's just that most systems today
are done with NO verification and that is touted as a "good thing".
--
Frank A. Adrian
First DataBank
············@firstdatabank.com (W)
······@europa.com (H)
This message does not necessarily reflect those of my employer,
its parent company, or any of the co-subsidiaries of the parent
company.
From: Rob Warnock
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <6fihvl$7bdrd@fido.asd.sgi.com>
Brent A Ellingson  <········@badlands.nodak.edu> wrote:
+---------------
| It is provably impossible to verify all code.  This isn't myth -- this
| is fact...
+---------------

Yes, but...

+---------------
| ...software failures, *in general*, are NOT preventable.  
+---------------

Yes, but... (Part of the problem is that "failure" isn't always a well-
defined technical term.)

+---------------
| Trying to verify code is good.  Mistakenly believing it is possible 
| to create "provably correct code" is like believing you can tell
| the future by a combination of voodoo and looking at the guts of a
| slaughtered goat.  It can't be done.  Period.
+---------------

True, but not a reason to throw up one's hands and "give up".

While it is true that one cannot in general prove an arbitrary already-
written program correct or incorrect (it's equivalent to "the halting
problem", which is provably not solvable in general), you can *construct*
correct programs by deriving them from their proofs. (Look in the literature
for what Dijkstra & Gries &c were doing a decade ago.)
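
(A toy illustration of the flavor -- not something Dijkstra or Gries
would publish, and it uses nothing beyond ordinary ASSERT: the loop
invariant is written down first, and the body is whatever preserves it:)

  ;; Toy example: the invariant SUM = 0 + 1 + ... + (I-1) = I*(I-1)/2 is
  ;; stated up front and checked; the code is derived to maintain it.
  (defun sum-to (n)
    (let ((sum 0))
      (dotimes (i (1+ n) sum)
        (assert (= sum (/ (* i (1- i)) 2)))   ; invariant holds before adding I
        (incf sum i))))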

Which leaves one a choice: If you want provably-correct programs, you *can*
start with a proof and derive the program from it. Or you can say, "It's
too hard to construct proofs of programs big enough to be interesting"
[despite some examples to the contrary], and blithely code away in whatever
style you fancy, but have no assurance that you'll ever be able to prove
anything (one way *or* the other) about your code ex post facto. (Though
*some* programs can be proved/disproved ex post facto, your program just
might be one of that uncountable number for which the program prover
never halts.)  Your choice.


-Rob

p.s. Actually, to me the "provability" argument is somewhat silly.
Sure, I'd like all my code to be "correct", but IMHO the *real* problem
is that "correct" can only be defined in terms of a specification, and
one thing's for *damn* sure, there's no human way to create "provably
correct" specifications!  (Or if you think there is, change "specifications"
to "requirements". Regress as necessary until you get back to human wants
and needs that led to the project [whatever it is] being instigated.
Let's see you "prove" something about *those*!)

-----
Rob Warnock, 7L-551		····@sgi.com   http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673 [New area code!]
2011 N. Shoreline Blvd.		FAX: 650-933-4392
Mountain View, CA  94043	PP-ASEL-IA
From: Erik Naggum
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <3100091181606046@naggum.no>
* Rob Warnock
| Actually, to me the "provability" argument is somewhat silly.  Sure, I'd
| like all my code to be "correct", but IMHO the *real* problem is that
| "correct" can only be defined in terms of a specification, and one
| thing's for *damn* sure, there's no human way to create "provably
| correct" specifications!  (Or if you think there is, change
| "specifications" to "requirements".  Regress as necessary until you get
| back to human wants and needs that led to the project [whatever it is]
| being instigated.  Let's see you "prove" something about *those*!)

  this argument, which is presented quite frequently, is hard to refute
  because it makes a number of assumptions that one needs to challenge at
  their root, not in their application.  I'll try, in no particular order.

  one assumption is that _all_ code should be provably correct, and that in
  the face of very hard cases one can somehow deduce something about the
  general provability issues.  this is not so.  an important argument in
  verifiable programming is that that which is encapsulated should be
  verified so you can trust your abstractions.  without this requirement,
  there is no upper bound to the complexity of proving anything, and those
  who argue against verification (or its costs) frequently assume that they
  will always deal with unverified code.  this is obviously not the case --
  people who have worked with these things for years have most probably
  done something pretty smart to make their work bear fruit, and so the
  often quite silly arguments about the fruitlessness of their endeavor are
  only true if you never verify anything.  this is somewhat like asking
  "but how would the stock market work if the government set all prices?"

  another assumption is that specifications need to be as precise as the
  program is.  this is not so.  code contains a lot of redundancy of
  expression and information and little expression of intent and purpose.
  specifications should contain what the code does not.  in particular,
  preconditions and postconditions can be expressed (and checked) by other
  means than computing the value.  invariants that are not expressed in
  code need to be provably maintained.  given such intents of the system
  that never get expressed in code, a specification can be fairly simple,
  and inconsistencies in a specification are far easier to spot than in a
  program.
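
  (a minimal illustration, using nothing beyond standard ASSERT, of pre-
  and postconditions stated next to -- but apart from -- the computation:)

    (defun isqrt* (n)
      ;; precondition: stated and checked, not recomputed by the caller
      (assert (and (integerp n) (not (minusp n))) (n)
              "isqrt* wants a non-negative integer, got ~S" n)
      (let ((r (isqrt n)))
        ;; postcondition: r is the largest integer whose square is <= n
        (assert (and (<= (* r r) n)
                     (> (* (1+ r) (1+ r)) n)))
        r))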

  yet another assumption is that code would be allowed to be written with
  the same messy interaction of abstraction levels that programmers tend to
  use today.  this is not going to be allowed.  to make verification work,
  more information must be made explicit, while in most languages amenable
  to verification, the redundancy in information is minimized.  this latter
  point is also worth underlining: not just any language can be subject to
  verification.  C and C++, for instance, have so complex semantics that it
  is very hard to figure out what is actually going on unless the code is
  exceptionally cleanly written.

  creating correct software is a lot easier than creating buggy software
  that works.  however, if you start with buggy methodologies you'll never
  obtain correct software, and you might mistakenly believe that correct
  software is therefore impossible.

#:Erik
-- 
  religious cult update in light of new scientific discoveries:
  "when we cannot go to the comet, the comet must come to us."
From: Rob Warnock
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <6fkjln$79u8n@fido.asd.sgi.com>
Erik Naggum  <······@naggum.no> wrote:
+---------------
| * Rob Warnock
| | Actually, to me the "provability" argument is somewhat silly.  Sure, I'd
| | like all my code to be "correct", but IMHO the *real* problem is that
| | "correct" can only be defined in terms of a specification, and one
| | thing's for *damn* sure, there's no human way to create "provably
| | correct" specifications! ...
| 
|   this argument, which is presented quite frequently, is hard to refute
|   because it makes a number of assumptions that one needs to challenge at
|   their root, not in their application.
+---------------

O.k., so I flippantly overstated the case. (But it *was* in a postscript to
an article [not quoted] in which I actually made the case *for* constructing
correct programs from their proofs, so I *do* have some respect for provability
concerns...)

+---------------
|   another assumption is that specifications need to be as precise as the
|   program is.  this is not so.
+---------------

Not as "precise", but certainly as "correct", yes? You won't deny, I hope,
that incorrect specifications usually lead to incorrect functioning of the
total system, *especially* when the code is proven to implement the
specification!

+---------------
|   specifications should contain what the code does not.  in particular,
|   preconditions and postconditions can be expressed (and checked) by other
|   means than computing the value.  invariants that are not expressed in
|   code need to be provably maintained.
+---------------

The problem I was noting in that postscript was that in attempting to
prove *total system* correctness [as opposed to proving correctness of
an encapsulated library component, which is often fairly straightforward]
one eventually must regress (step back) to the initial human desires
that led to the specification -- whereupon one runs smack dab into
the "DWIS/DWIM" problem ("Don't do what I *said*, do what I *meant*!"),
which at its root contains the conundrum that much of the time we
humans don't actually *know* exactly what we want!

+---------------
|   creating correct software is a lot easier than creating buggy software
|   that works.  however, if you start with buggy methodologies you'll never
|   obtain correct software, and you might mistakenly believe that correct
|   software is therefore impossible.
+---------------

We violently agree. However, I was trying to warn that that *still*
isn't enough to prevent disasters, since the best you'll ever get
with the best methodologies is code whose behavior meets the originally
stated goals...  WHICH MAY HAVE BEEN WRONG.

Yet I think we also agree that the truth of this point is no excuse
for not using the best methodologies we have access to *anyway*...


-Rob

-----
Rob Warnock, 7L-551		····@sgi.com   http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673 [New area code!]
2011 N. Shoreline Blvd.		FAX: 650-933-4392
Mountain View, CA  94043	PP-ASEL-IA
From: Erik Naggum
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <3100169086878095@naggum.no>
* Rob Warnock
| Not as "precise", but certainly as "correct", yes?  You won't deny, I
| hope, that incorrect specifications usually lead to incorrect functioning
| of the total system, *especially* when the code is proven to implement
| the specification!

  no, I won't deny that, but there is an important difference between what
  constitutes a correct program and a correct specification: the latter
  must not contain inconsistencies or conflicts.  a program must be allowed
  to contain inconsistencies and conflicts because it is impossible to do
  everything at once.  since a specification is a statement of the static
  properties of a program, and a program's execution is a dynamic process,
  the types of incorrectness that can occur are vastly different in nature.
  this all leads to simpler (to express and implement, anyway) requirements
  on specifications than on programs.  since the program should now be
  derived from the specification, we have removed a tremendous fraction of
  the randomness in the way humans think and process information.

  writing specifications, however, is much harder than writing programs,
  but at least you can always know whether it is internally consistent or
  not.

| The problem I was noting in that postscript was that in attempting to
| prove *total system* correctness [as opposed to proving correctness of an
| encapsulated library component, which is often fairly straightforward]
| one eventually must regress (step back) to the initial human desires that
| led to the specification -- whereupon one runs smack dab into the
| "DWIS/DWIM" problem ("Don't do what I *said*, do what I *meant*!"), which
| at its root contains the conundrum that much of the time we humans don't
| actually *know* exactly what we want!

  oh, yes.  total system correctness is often meaningless even if it can be
  proven just for this reason.  I see that we don't disagree on much, but I
  have become wary of the many people who argue against proving correctness
  of components because the human factors in "satisfiability" overshadow
  any correctness properties of a system at some point close to the users.

| We violently agree.  However, I was trying to warn that that *still*
| isn't enough to prevent disasters, since the best you'll ever get with
| the best methodologies is code whose behavior meets the originally stated
| goals... WHICH MAY HAVE BEEN WRONG.

  we violently agree, indeed.

| Yet I think we also agree that the truth of this point is no excuse
| for not using the best methodologies we have access to *anyway*...

  precisely, and to wrap this up: I think the report from the Ariane 5
  failure was incredibly intelligent and honest about the issues they were
  involved in.  would that similar efforts were undertaken when
  less critical software also fails.  there is a lot to learn from mistakes
  that we probably never will learn from until we get rid of the obvious
  mistakes that we _believe_ we know how to handle.  I'm reminded of a
  "definition" of insanity that might apply to programming: to keep doing
  the same thing over and over while expecting different results.

  the irony of this whole verifiable programming situation is that we are
  moving towards the point where we can prove that human beings should not
  write software to begin with, and we let computers program computers.
  however, as long as C++ and similar repeatedly-unlearned-from mistakes
  hang around, programmers will still be highly paid and disasters will
  come as certainly as sunrise.

#:Erik
-- 
  religious cult update in light of new scientific discoveries:
  "when we cannot go to the comet, the comet must come to us."
From: Jon S Anthony
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <ufyaxv4jvz.fsf@synquiry.com>
Erik Naggum <······@naggum.no> writes:

>   the problem is: the equipment did _not_ test for the exception.  the
>   exception was allowed to propagate unchecked until the "crash-and-burn"
>   exception handler took care of it.  this could be viewed as silly, but
>   the report clearly states why this was sound design: unhandled exceptions
>   should be really serious and should indicate random hardware failure.

Exactly.

>   the _unsound_ design was not in the exception handling at all, it was in
>   allowing old code from Ariane 4 still run in Ariane 5, notably code that
>   should run for a while into the launch sequence on Ariane 4 because it
>   would enable shorter re-launch cycles -- which was not necessary at all
>   on Ariane 5.  the error was thus not in stopping at the wrong time or
>   under the wrong conditions -- it was in _running_ code at the wrong time
>   and under the wrong conditions.

This is about the best succinct description of what went wrong and why
that I've seen.

>   extremely arrogant attitude still held by Brent and many others like him,
>   we will continue to produce crappy code that crash rockets for millenia
>   to come without learning the problem really is: unchecked assumptions!

The last bit here "unchecked assumptions" is a precise, simple and
accurate anatomy of what the actual problem really was (and _is_ all
over the place in software "engineering").  Of course it is extremely
unlikely that people like Brent will ever clue into this as their own
assumptions are blinding them to it.


/Jon

-- 
Jon Anthony
Synquiry Technologies, Ltd., Belmont, MA 02178, 617.484.3383
"Nightmares - Ha!  The way my life's been going lately,
 Who'd notice?"  -- Londo Mollari
From: Brent A Ellingson
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <351FBDB2.49A6D9B1@badlands.nodak.edu>
Jon S Anthony (···@synquiry.com) wrote:
: Erik Naggum <······@naggum.no> writes:

: >   the problem is: the equipment did _not_ test for the exception.  the
: >   exception was allowed to propagate unchecked until the "crash-and-burn"
: >   exception handler took care of it.  this could be viewed as silly, but
: >   the report clearly states why this was sound design: unhandled exceptions
: >   should be really serious and should indicate random hardware failure.

: Exactly.

First, the report clearly says that the "crash-and-burn" exception handler
(which *is* an exception handler, but a damned bad one) was NOT sound design,
and they offer an alternative (I'm quoting from the report posted at
http://www.esrin.esa.it/htdocs/tidc/Press/Press96/ariane5rep.html):

    Although the source of the Operand Error has been identified, 
    this in itself did not cause the mission to fail. The specification 
    of the exception-handling mechanism also contributed to the 
    failure ...   

    ... It was the decision to cease the processor operation which 
    finally proved fatal ...

    ... For example the computers within the SRIs could have 
    continued to provide their best estimates of the required 
    attitude information. There is reason for concern that a
    software exception should be allowed, or even required, to 
    cause a processor to halt while handling mission-critical
    equipment ...

Second, we all seem to agree with the review board on this point:

    The Board is in favour of the ... view, that software 
    should be assumed to be faulty until applying the currently
    accepted best practice methods can demonstrate that it is 
    correct.

But none of us seems to agree with what they mean by this.  My feeling
is that this implies that the OS/hardware/whatever of the guidance system 
should have been designed on the assumption that the code it was running 
was *not* proven correct, but rather that the code it was running *may* 
be faulty.   

Other people believe it implies that the engineers should have 
verified that the code was correct before it was allowed to fly.

Hopefully, I'm not the only person that realizes there is a middle
ground there.  However, I am taken aback both by Erik's idea that the
OS of the guidance system may have been designed properly and by his
idea that I am somehow advocating that the software module was 
designed properly.

It is abundantly clear that both the OS *and* the software module
were designed incorrectly -- the software should have been verified,
but the OS should *not* have assumed it was verified.

Last, 

: >   extremely arrogant attitude still held by Brent...  will continue 
: >   to produce crappy code...

: Of course it is extremely unlikely that people like Brent will ever 
: clue into this as their own assumptions are blinding them to it.

I hate to break in on your little mating dance, but I don't know 
where you get off making any of these assumptions about me.

I didn't write that assumptions should go unchecked, or that 
all errors should be ignored, or even that ignoring errors is in 
general the best policy.  What I remember writing is that this report 
makes it pretty clear that ignoring *this* error in *this* situation 
would have been a *better* policy than allowing non-critical code 
to crash both the main and backup guidance systems of the rocket, 
ultimately causing the rocket to be torn to shreds by the atmosphere.

Whatever,
Brent Ellingson
········@badlands.nodak.kedu
From: Erik Naggum
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <3100267758827468@naggum.no>
* Brent A Ellingson
| I didn't write that assumptions should go unchecked, or that all
| errors should be ignored, or even that ignoring errors is in general the
| best policy.  What I remember writing is that this report makes it pretty
| clear that ignoring *this* error in *this* situation would have been a
| *better* policy than allowing non-critical code to crash both the main and
| backup guidance systems of the rocket, ultimately causing the rocket to
| be torn to shreds by the atmosphere.

  I am unable to understand that you are _not_ saying that the cause of the
  failure was that there should have been an exception handler (that did
  nothing) for this situation, but wasn't, and this flies in the face of
  the gist of the report, which was, and I repeat myself: that the bug was
  to let code run that should not have run to begin with.

  ObCL: IGNORE-ERRORS is an exception handler.
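  spelled out, (ignore-errors form) is defined to behave like

    (handler-case form
      (error (condition) (values nil condition)))

  a handler that swallows the error and returns nil is still a handler.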

#:Erik
-- 
  religious cult update in light of new scientific discoveries:
  "when we cannot go to the comet, the comet must come to us."
From: Thant Tessman
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 
Message-ID: <351AA727.794B@bogus.acm.org>
Brent A Ellingson wrote:
> 
> Thant Tessman wrote:
> >
> > Jonathan Guthrie wrote:
> >
> > > [...]  That, and the fact that there often is no in-program
> > > way to deal with the actual error (therefore many errors are
> > > best  simply ignored) is the reason why "never test for an
> > > error that you don't know how to deal with" is such a common,
> > > and useful, attitude WRT error trapping.  [...]
> >
> >  http://www.esrin.esa.it/htdocs/tidc/Press/Press96/ariane5rep.html
> >
> > -thant
> 
> As near as I can tell, Thant supplied this article to support Jon's
> statement.

I posted it because I think it demonstrates exactly the opposite.

> This article refers to the failure of a test flight of an Ariane 
> 5 rocket.  From what I can tell, it failed partially because 
> a software module on the guidance system didn't test for 
> exceptions during the conversion of a 64-bit floating point 
> number to a 16 bit int.  This was considered fine -- the module 
> didn't do anything useful after take off, and if it failed after 
> that (which it did) it wasn't going to affect anything. 

The reason they didn't bother to catch the exception was not because the
module didn't do anything useful after takeoff, but because they were
working on the assumption that an exception would have indicated a
hardware problem in which case shutting down the unit would have been
the appropriate thing to do (to let the backup unit take over).  The
fact that the software module wasn't actually serving any function at
the time was coincidental.

The engineers KNEW about the places in the code that could possibly
generate such an exception, and a conscious decision was made not to
deal with the critical section of code when converting the code from
Ariane 4 to Ariane 5.  This, plus the fact that the valid input had
changed for Ariane 5, is what brought down the rocket.

-thant
From: Martin Rodgers
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <MPG.f74800cd3349029989985@news.demon.co.uk>
Steve Gonedes wheezed these wise words:

> When this happened to me under Unix, the program kept running and
> happily began to truncate all the files I tried to save (sans the error
> message). What a pleasure.

Yeah, instead of "crash" we should really say, "unspecified behavior".
That's even more scary. I'd like to not only minimize the amount of 
damage software can do, but have some idea what the limit on that 
amount will be. The painful reality is that there is no limit.
 
> I think the problem here is when writing programs that are trying to
> work with the of Unix programs is the lack of a consistant, well defined
> way for doing something as simple as getting the free space on a
> partition (not all Unix programs have statfs I believe). How sad. The
> kernel saves like %5 of the disk for itself too, sure would have been
> nice if it was smart enough to share or warn me. (I didn't get an error
> message for like a half hour; it was GNU mv that finally said hey bozo -
> you have no space left.)
> 

Would it be better for the performance to gracefully degrade? I'd 
still like some warning and means for software to catch it. Windows 
has an event called something like WM_COMPACTING, but this is a very 
crude way of warning an app that it needs to reduce memory resources.
With the demand page VM available today, it might not even be useful.

Unix has had true VM for a lot longer, but is there a signal for a 
"low memory" event? If so, how useful is it? Is there a similar signal 
for "low disk space"? How should a language handle such things? Even 
when an event is outside the scope of the language, there should be a 
way of setting an exception handler for it. For example, a condition 
class in a Common Lisp system. This will be system-dependent, unless 
the language already defines such a condition.
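
(A rough sketch of what I mean -- the condition name and slots are
invented here, and the free-space figure would have to come from
whatever system-dependent call is available:)

  ;; Sketch: a "low disk space" event surfaced as a Lisp condition that
  ;; applications can choose to handle.  Names are illustrative.
  (define-condition low-disk-space (storage-condition)
    ((path       :initarg :path       :reader low-disk-path)
     (bytes-free :initarg :bytes-free :reader low-disk-bytes-free))
    (:report (lambda (c stream)
               (format stream "only ~D bytes free near ~A"
                       (low-disk-bytes-free c) (low-disk-path c)))))

  (defun write-safely (data path bytes-free)
    ;; BYTES-FREE would really come from a statfs-style system call
    (when (< bytes-free (length data))
      (signal 'low-disk-space :path path :bytes-free bytes-free))
    ;; ... actually write DATA to PATH here ...
    t)

With SIGNAL, an application that installs no handler just carries on,
which is about the gentlest "graceful degradation" warning one could ask
for.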
-- 
Please note: my email address is munged; You can never browse enough
         "There are no limits." -- ad copy for Hellraiser
From: Espen Vestre
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <w67m5z509y.fsf@gromit.nextel.no>
···@this.email.address.intentionally.left.crap.wildcard.demon.co.uk (Martin Rodgers) writes:

> WHat they lack may be a deeper understanding of _programming_. I don't 
> think that C is the culprit here. 

no, definitely.  I was imprecise, thinking "programming newbies
[accidentally] using C", but writing "C newbies" ;-)

> Thus, we can still use C, embrace Java, and build on these tools in 
> Lisp. What are Java people doing? Building front ends to network apps?
> Lisp looks like a fabulous language to use at the back end, while Java 
> is complimentary tool for the front end. C code can provide low level 
> support at either end. The best of all worlds?

This is a very good point, but it seems that the Java people are very
busy moving into the back end realm, so hurry up all you lisp programmers
and get your lisp servers and middleware out! (which reminds me that
I shouldn't spend time writing this ;-))

--

 regards,
   Espen Vestre
From: Martin Rodgers
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <MPG.f7268f8268d256989981@news.demon.co.uk>
Espen Vestre wheezed these wise words:

> no, definitely.  I was imprecise, thinking "programming newbies
> [accidentally] using C", but writing "C newbies" ;-)

This is what I thought you meant. ;)
 
> > Thus, we can still use C, embrace Java, and build on these tools in 
> > Lisp. What are Java people doing? Building front ends to network apps?
> > Lisp looks like a fabulous language to use at the back end, while Java 
> > is a complementary tool for the front end. C code can provide low level 
> > support at either end. The best of all worlds?
> 
> This is a very good point, but it seems that the Java people are very
> busy moving into the back end realm, so hurry up all you lisp programmers
> and get your lisp servers and middleware out! (which reminds me that
> I shouldn't spend time writing this ;-))

Indeed, they are! The Golden Horde are almost upon us. Well, it 
sometimes seems like it. History may record a different story.
-- 
Please note: my email address is munged; You can never browse enough
                  "Oh knackers!" - Mark Radcliffe
From: David H Wild
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <4825d29d6fdhwild@argonet.co.uk>
In article <·························@news.demon.co.uk>,
 Martin Rodgers
<···@this.email.address.intentionally.left.crap.wildcard.demon.co.uk>
wrote:
> Lisp books are written  by experienced programmers _for_ experienced
> programmers. Even the  tutorials aimed at "beginners" assume a great
> deal of intelligence.

I know what you mean, but I think that it's really *background knowledge*
rather than intelligence.

-- 
 __  __  __  __      __ ___   _____________________________________________
|__||__)/ __/  \|\ ||_   |   / Acorn Risc_PC
|  ||  \\__/\__/| \||__  |  /...Internet access for all Acorn RISC machines
___________________________/ ······@argonet.co.uk
Uploaded to newnews.dial.pipex.com on Thu,12 Mar 1998.21:51:53
From: Ken Deboy
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <350B64A1.4B81B4A0@locked_and_loaded.reno.nv.us>
Martin Rodgers wrote:

 -- snip --

> I'd say that K&R's C tutorial does the same thing, which is why some C
> programmers claim that book is "too hard". No it isn't! If you can't
> understand that book, then you should touch C! (However, see below.)
> That book assumes you already know how to program. _All_ the other C
> books I've seen don't teach you anything about programming, never mind
> how to program in C.

As someone just learning to program (in C), I'm not sure what you're
trying to say here. It seems like one already needs to know how to
program to understand the book (?), but if I can't learn programming
from a book or a class then I'm not supposed to program (or try to)? I
haven't seen the K&R book, but the books I've found most useful in
teaching me programming are Thinking Forth (but I don't use Forth yet)
and "Teach Yourself Advanced C." I agree that most beginning C books
teach squat about actual programming. As for books, is Winston and
Horn's Lisp a good one for learning to program in Lisp?

> A lot of people will suggest other alternatives to C. Today, the media
> and certain vendors are going mad about the current favourite
> alternative, Java.

I looked at learning Java, but I didn't like that everything in it has
to be "object oriented." Besides, if it were so good there wouldn't be
so much hype about it (look at Win95 ;) ). I doubt I'll ever use it
except to add a GUI to programs I write in other languages. Or maybe
I'll use Tk instead, since the Forth I want to try has a Tk interface
but afaik no Java int. Well, since I drifted off topic anyway, maybe
someone doesn't mind answering a simple question: what is a good book
for learning programming? I have Winston and Horn on order; is Little
Schemer better? Also, where is the FAQ so I can find Scheme for my
platform? Thanks for any help.

With best wishes,
Ken Deboy
······@locked_and_loaded.reno.nv.us
(please cc to my email because my newsreader is defective)
From: Christopher B. Browne
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <slrn6gmuu2.dnp.cbbrowne@knuth.brownes.org>
On Sat, 14 Mar 1998 21:18:25 -0800, Ken Deboy <······@locked_and_loaded.reno.nv.us> posted:
>Martin Rodgers wrote:
>
> -- snip --
>
>> I'd say that K&R's C tutorial does the same thing, which is why some C
>> programmers claim that book is "too hard". No it isn't! If you can't
>> understand that book, then you should touch C! (However, see below.)
>> That book assumes you already know how to program. _All_ the other C
>> books I've seen don't teach you anything about programming, never mind
>> how to program in C.
>
>As someone just learning to program (in C), I'm not sure what you're
>trying to say here. It seems like one already needs to know how to
>program to understand the book (?), but if I can't learn programming
>from a book or a class then I'm not supposed to program (or try to)?

In effect, K&R assumes that you have a programming background, and a general
understanding of "how computers work."  In that context, it's a great
description of C.

What I think is being implied here is that:
a) C is not a good language to start with, and
b) *None* of the books on C do a good job of teaching you *how to program.*

They may be overstating the case somewhat; they've nonetheless got a point.
I haven't seen any really good books about C that don't require that you
already have a programming background.

>I haven't seen the K&R book, but the books I've found most useful in
>teaching me programming are Thinking Forth (but I don't use Forth yet)
>and "Teach Yourself Advanced C." I agree that most beginning C books
>teach squat about actual programming. As for books, is Winston and
>Horn's Lisp a good one for learning to program in Lisp?

SICP (Structure and Interpretation of Computer Programs) seems to get the
nod as a great book for teaching programming.  It comes from a Scheme
perspective.  It gets pretty mathematical in places; those not so
inclined will not appreciate this.

Thinking FORTH is indeed a good book; it particularly tries to grapple with
how one should factor code into pieces, and explains the importance of that
very well.

Winston & Horn is a pretty good book about learning LISP; it is somewhat
oriented towards Artificial Intelligence applications whereas SICP really
looks at the issue of understanding algorithms.

>> A lot of people will suggest other alternatives to C. Today, the media
>> and certain vendors are going mad about the current favourite
>> alternative, Java.
>
>I looked at learning Java, but I didn't like that everything in it has
>to be "object oriented." Besides, if it were so good there wouldn't be
>so much hype about it (look at Win95 ;) ). I doubt I'll ever use it
>except to add a GUI to programs I write in other languages. Or maybe
>I'll use Tk instead, since the Forth I want to try has a Tk interface
>but afaik no Java int. Well, since I drifted off topic anyway, maybe
>someone doesn't mind answering a simple question: what is a good book
>for learning programming? I have Winston and Horn on order; is Little
>Schemer better? Also, where is the FAQ so I can find Scheme for my
>platform? Thanks for any help.

What was your platform?  There are Scheme implementations for virtually any
platform that supports Tk...

I've got a number of links to Scheme implementations and various information
resources at <http://www.hex.net/~cbbrowne/languages.html>

-- 
Those who do not understand Unix are condemned to reinvent it, poorly. 	
-- Henry Spencer          <http://www.hex.net/~cbbrowne/lsf.html>
········@hex.net - "What have you contributed to Linux today?..."
From: Wolfgang von Hansen
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <97864844@geodesy.inka.de>
Moin, Moin!

Espen Vestre <··@nextel.no> writes about `Re: "Programming is FUN again"
rambling commentary':

> ··@wetware.com (Bill Coderre) writes:
> 
> > Perhaps if C programs did all of the error-checking and memory-management
> > they were supposed to, they would also be slower. (I'm sure there's cases
> > where Lisp is doing TOO MUCH error checking, and that's resulting in
> > unnecessary speed loss, but hey.)
> 
> It's not generally the case that C programs are _fast_.  Quite in
> opposite: The popular programs these days typically are _suprisingly_
> _fat_ and _slow_.  Some reasons for this might be:
> 
> 1) a lot of code is programmed by really bad C newbies
> 2) too much has to be done in too short time
> 3) too many software companies hold hardware shares
> 
> But, if you think of it, (1) is partly a consequence of (2).
> And if you think more of it, both (1) and (2) may partly be
> caused by the choice of language.  In Common Lisp fewer people
> could do more in shorter time!

I disagree with that. (1) and (2) are not caused by the choice of language
but by the choice of algorithms. You don't expect a newbie to produce fast
code in any given language. Just think of the seven versions of COPY-LIST
in the FAQ (see the sketch after the list below). IMHO (3) only holds true
for Micro$oft, as they seem to be in liaison with Intel. That's not all
bad, because we have seen a massive increase in processing speed during
the past years. OTOH we are forced to run Windoze on such systems most of
the time. :-( Anyway, I believe Windoze would be a neat OS if it were
developed on a 386sx-16 rather than on state-of-the-art computers. I add
(4) and (5) to your list:

4) software development on computers faster than those the program is going
   to be executed on in real life
5) compilers that heavily rely on libraries instead of creating the code
   themselves
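
To make the COPY-LIST point concrete, here are two ways it might be
written -- a sketch only, not the FAQ's versions verbatim:

  ;; Quadratic: APPEND re-copies the growing result on every iteration.
  (defun my-copy-list-slow (list)
    (let ((result '()))
      (dolist (x list result)
        (setf result (append result (list x))))))

  ;; Linear: a single pass that keeps a pointer to the last cell.
  (defun my-copy-list (list)
    (if (null list)
        nil
        (let* ((head (cons (car list) nil))
               (tail head))
          (dolist (x (cdr list) head)
            (setf (cdr tail) (cons x nil)
                  tail       (cdr tail))))))

Same language, same result, completely different behaviour on a long list.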

As for (5), I just wrote two "Hello World!" programs -- one in C and one
in C++ -- and compiled both with the GNU C compiler. Their size was the
same (17k), but that's with dynamic linking at run time. The C version
took 0.3s to execute (my computer isn't too fast anyway). The C++ version
ran for a full 1.6s! I'm not sure I really want to know what the computer
did during the excess 1.3s. Presumably a lot more libraries had to be
opened. (I even had to add termcap manually, because curses (which had
been added for some strange reason) needed the function tgetstr; don't
ask me why an output-only program reads characters from the keyboard.)

To get back to Lisp, I believe speed solely depends on the algorithms used
and the quality of the implementation of the interpreter/compiler. Unlike
other languages, Lisp has its own view of memory, which is organized into
cells that build up binary trees but have no sequential order in the first
place (here the word heap for free memory is even more appropriate). An
extra layer is needed to connect this representation to the underlying
hardware structure, and if this layer is implemented really badly, the
whole system's performance goes down.
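
To make the cell picture concrete, a minimal sketch at a hypothetical
REPL (the name *TREE* is only for illustration) -- each cons cell is
just a pair of pointers, so the structure is a small binary tree of
cells rather than a sequential block:

  (defparameter *tree* (cons (cons 1 2) (cons 3 nil)))

  (car (car *tree*))   ; => 1,   the left cell's CAR
  (cdr (car *tree*))   ; => 2,   the left cell's CDR
  (car (cdr *tree*))   ; => 3
  (cdr (cdr *tree*))   ; => NIL, the end of this branch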

OTOH, all programs run on the same processor architecture and can be
defined as automata that accept certain inputs and answer with certain
outputs. A Lisp compiler may produce the same code as a C compiler for
a given task. The main reason they differ is that the compiler doesn't
know what will happen during execution. E.g. in C, all memory (excluding
malloc() and the like, which simply are function calls) is laid out at
compile time, and the compiler thus knows the location of every data
item. A Lisp compiler never knows how often CONS is invoked, or whether
a literal is going to be bound to some memory location or not.

However, this difference also influences the usability of a language for
a given task. Programs that need highly dynamic internal structures are
better implemented in Lisp than in C. But if the task only needs constant
space and simple operations on it, Lisp might be too sluggish, because it
expects the program to do more than it actually does.
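
For what it's worth, a Common Lisp compiler can be told about exactly
that kind of task.  A minimal sketch -- SUM-BELOW is a made-up name,
and whether the compiler actually exploits the declarations is of
course implementation dependent:

  (defun sum-below (n)
    ;; Sum the integers 0 .. N-1 using only fixnum arithmetic
    ;; (assuming the result still fits in a fixnum).
    (declare (type fixnum n)
             (optimize (speed 3) (safety 0)))
    (let ((sum 0))
      (declare (type fixnum sum))
      (dotimes (i n sum)
        (declare (type fixnum i))
        (setf sum (the fixnum (+ sum i))))))

With declarations like these, a good native-code compiler can keep
everything in registers and skip the run-time type checks.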

> There will always be a need for programming at the level of C
> or lower.

As long as operating systems are written in C, it's best to access system
resources through the native tongue.


Regards,

Wolfgang
-- 
   (_(__)_)  Privat: ···@geodesy.inka.de                      //
     ~oo~       Uni: ·······@ipf.bau-verm.uni-karlsruhe.de  \X/
     (..)
From: Hartmann Schaffer
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <350DA518.4A2953D6@netcom.ca>
Wolfgang von Hansen wrote:
> ...

> state-of-the-art computers. I add (4) and (5) to your list:
> 
> 4) software development on computers faster than those the program is going
>    to be executed on in real life
> 5) compilers that heavily rely on libraries instead of creating the code
>    themselves
> ...
Add
6) Too many developers believe that their code will only be run on
machines with infinitesimal cycle time and infinite memory, and this
assumption turns out to be wrong in real life
-- 

Hartmann Schaffer
Guelph, Ontario, Canada
········@netcom.ca (hs)
From: Espen Vestre
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <w6vhtd1zh4.fsf@gromit.nextel.no>
"Wolfgang von Hansen" <···@geodesy.inka.de> writes:

[I wrote:]
> > It's not generally the case that C programs are _fast_.  Quite in
> > opposite: The popular programs these days typically are _suprisingly_
> > _fat_ and _slow_.  Some reasons for this might be:
> > 
> > 1) a lot of code is programmed by really bad C newbies
> > 2) too much has to be done in too short time
> > 3) too many software companies hold hardware shares
> > 
> > But, if you think of it, (1) is partly a consequence of (2).
> > And if you think more of it, both (1) and (2) may partly be
> > caused by the choice of language.  In Common Lisp fewer people
> > could do more in shorter time!
> 
> I disagree with that. (1) and (2) are not caused by the choice of language
> but by the choice of algorithms. You don't expect a newbie to produce fast
> code in any given language.

No, we agree.  I think :-).

My point is that choice of language influences time to deliver 
(acceptable) code.  To me it's quite obvious - and I think it has 
been scientifically documented - that in the general case (not such
special cases as highly optimized numeric code which is discussed in 
another thread) development in CL is faster than in C/C++.

So, as projects typically try to accomplish too much within too
limited a timeframe, a project using CL could fulfill the goal with
fewer people, _or_ with the same number of people having more time to
figure out better algorithms to use.

In such a setting, the efficiency of compilers is not a very big
issue, I think, since a project under such limits will end up using
very suboptimal algorithms anyway.  One of the reasons is (1), which
again is caused by (2): the project has to hire a lot of very
inexperienced people, using "raw manpower" to get things done within
the unrealistic time limits.  With CL, you could avoid hiring too many
of the inexperienced, and you could have more time for finding the
right algorithms.

--

regards,
  Espen Vestre
From: David Fox
Subject: Re: "Programming is FUN again" rambling commentary
Date: 
Message-ID: <y5aiupd7cgk.fsf@fox.openix.com>
Espen Vestre <··@nextel.no> writes:

> My point is that choice of language influences time to deliver 
> (acceptable) code.  To me it's quite obvious - and I think it has 
> been scientifically documented - that in the general case (not such
> special cases as highly optimized numeric code which is discussed in 
> another thread) development in CL is faster than in C/C++.

If this has been scientifically documented, I'd love a reference to
add to my bibliography...
-- 
David Fox	   http://www.cat.nyu.edu/fox		xoF divaD
NYU Media Research Lab   ···@cat.nyu.edu   baL hcraeseR aideM UYN