From: Bruce Hoult
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <2937085664@hoult.actrix.gen.nz>
Tim Pierce <···············@mail.bsd.uchicago.edu> writes:
> Maybe I'm getting a bit off the point.  The real problem doesn't
> strike me as one of ignorant programmers not *knowing* that
> overflow can occur (although in practice that is a legitimate
> concern).  The problem is that the programmer needs to worry about
> it at all, and that the language itself does not provide for a
> convenient facility for dealing with such error conditions.
>
> In some contexts that's undoubtedly a feature, but not in the
> context of writing mission-critical user-level applications.
> Forcing the programmer to make a manual check for overflow every
> time a calculation is made makes for wasteful and inefficient
> development.

I'm not sure that I agree.

Users are probably happier getting "Integer overflow: program aborted"
in mission-critical applications than silently getting incorrect results.
But not a *lot* happier, I'd bet.

The vast majority of integer calculations made in programs simply *can't*
overflow, unless some grave logic error has been made that will prevent the
program working at all.  Programmers well know when they're writing a line
of code of which the size of the result is sufficiently unknown that it
might overflow.  They need to *think* about whether it can overflow or not,
and what to do about it if it does. Automatic checks for overflow might be
useful as an ambulance at the bottom of the cliff, but any programmer who
allows the automatic overflow machinery to kick in on a machine integer
calculation in a mission-critical application has badly failed in his duties.
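For concreteness, that discipline might look like this in C. This is a sketch assuming GCC or Clang (for `__builtin_mul_overflow`); `scale_or_clamp` and its clamping policy are made up for illustration:

```c
#include <limits.h>

/* The one spot in a program the programmer has identified as able to
 * overflow gets an explicit check with a decided policy; everywhere
 * else, plain unchecked arithmetic.  Assumes factor > 0 for the
 * clamping logic to be right. */
static int scale_or_clamp(int value, int factor)
{
    int product;
    /* __builtin_mul_overflow returns nonzero if the multiply wrapped */
    if (__builtin_mul_overflow(value, factor, &product))
        return value < 0 ? INT_MIN : INT_MAX;  /* chosen policy, not an abort */
    return product;
}
```

The point is that the check is there because someone *thought* about it and decided what the answer should be, not because the runtime noticed an error nobody planned for.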

"Never test for any error that you don't know how to handle"

-- Bruce

--
...in 1996, software marketers wore out a record 31,296 copies of Roget's
Thesaurus searching for synonyms to the word "coffee" ...

From: Chris Bitmead
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <BITMEADC.97Jan28100338@Alcatel.com.au>
In article <··········@hoult.actrix.gen.nz> ·····@hoult.actrix.gen.nz (Bruce Hoult) writes:

>> In some contexts that's undoubtedly a feature, but not in the
>> context of writing mission-critical user-level applications.
>> Forcing the programmer to make a manual check for overflow every
>> time a calculation is made makes for wasteful and inefficient
>> development.
>
>I'm not sure that I agree.
>
>The vast majority of integer calculations made in programs simply *can't*
>overflow, unless some grave logic error has been made that will prevent the
>program working at all.  Programmers well know when they're writing a line
>of code of which the size of the result is sufficiently unknown that it
>might overflow.  They need to *think* about whether it can overflow or not,
>and what to do about it if it does. Automatic checks for overflow might be
>useful as an ambulance at the bottom of the cliff, but any programmer who
>allows the automatic overflow machinery to kick in on a machine integer
>calculation in a mission-critical application has badly failed in his duties.

I think this is silly. It's tough enough for an expert programmer to
build a non-trivial program without bugs in the normal cases, let
alone for your average programmer to handle the obscure ones. At
least if the language gives an error, someone might find a problem
like this in the testing phase. If it's silently ignored, then nobody
will even know there was a problem in the program during development
*or* deployment. Or, if it is noticed, it will be sufficiently
difficult to track down that it will be ignored anyway.

The right answer for 99% of cases has got to be to use a language with
safe arithmetic, and then hand optimise for performance problems,
possibly using something like the lisp macro someone else described.
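A sketch of that "safe by default, hand-optimise the hot spots" split, written in C for concreteness (assuming GCC or Clang for `__builtin_add_overflow`; `safe_add` and `sum_small` are illustrative names, not from any real library):

```c
#include <stdlib.h>

/* Default arithmetic: any overflow is reported loudly, never silent. */
static long safe_add(long a, long b)
{
    long sum;
    if (__builtin_add_overflow(a, b, &sum))
        abort();                /* the "language gives an error" case */
    return sum;
}

/* Hand-optimised hot spot: the caller guarantees each element is in
 * [0, 1000] and n <= 1000000, so the total fits in a long and the
 * per-addition check can be dropped. */
static long sum_small(const int *xs, unsigned long n)
{
    long total = 0;
    for (unsigned long i = 0; i < n; i++)
        total += xs[i];         /* unchecked: argued not to overflow */
    return total;
}
```

The unchecked version exists only where a profiler said it matters and where the range argument has actually been made.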
From: Tim Bradshaw
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <ey34tfzp3hd.fsf@staffa.aiai.ed.ac.uk>
[comp.arch removed from newsgroups]

* Chris Bitmead wrote:

> The right answer for 99% of cases has got to be to use a language with
> safe arithmetic, and then hand optimise for performance problems,
> possibly using something like the lisp macro someone else described.

It's also worth remembering that for some languages with safe
arithmetic, when the compiler can prove that the arithmetic can't
overflow, then code with no checks can be compiled, with no reduction
in safety. CMU Common Lisp can do this, and I'm sure other systems can
too. In systems like this it's also possible to make rather precise
type declarations which can help things a lot -- for instance declare
a variable to be an integer in [100, 1000] and have that be checked
for you. Such a system can check types once (say on fn invocation),
and then use the knowledge it has to generate fast unchecked
arithmetic/slot access, giving a very nice combination of safety and
speed.
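A rough C analogue of that scheme (CMU CL derives it automatically from type declarations; here the "declaration" is a single hand-written entry check, and `square_in_range` is a made-up example):

```c
#include <assert.h>

/* Validate the range once on function invocation; after that, all
 * arithmetic inside is provably in bounds and needs no overflow
 * checks. */
static int square_in_range(int x)
{
    assert(100 <= x && x <= 1000);   /* the once-per-call range check */
    /* With x in [100, 1000], x * x lies in [10000, 1000000], which a
     * 32-bit int holds comfortably: no runtime check needed. */
    return x * x;
}
```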

--tim