From: Robert Harley
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5c5c65$9ed@news-rocq.inria.fr>
Erik Naggum <····@naggum.no> writes:
>[...]
>thank you.  you prove my point most eloquently: C does not give programmers
>access to the arithmetic condition known as "overflow".

What exactly is an "arithmetic condition"?  If you mean a carry or
overflow flag, hardware may not have one for you to access.


>as I said, A+1 is either A+1, 0 or -(A+1) in C.

A+1 is A+1.

In the ring "modulo 2^n, where n is the number of bits in the type",
according to Messrs Kernighan and Ritchie.  If you assumed a different
ring (such as Z) that's tough.  You should read the language spec
before pontificating in error.
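
For instance, with 8-bit unsigned chars (a quick sketch of my own; any
other width n behaves the same way):

/* ring.c -- unsigned arithmetic is reduced modulo 2^n by definition. */
#include <stdio.h>

int main(void)
{
  unsigned char a = 255;          /* 2^8 - 1, assuming 8-bit chars */
  a = a + 1;                      /* 255 + 1 = 256 = 0 (mod 2^8) */
  printf("%u\n", (unsigned) a);   /* prints 0 */
  return 0;
}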


Bye,
  Rob.
     .-.                     ·············@inria.fr                    .-.
    /   \           .-.        "Dances with bits"       .-.           /   \
   /     \         /   \       .-.     _     .-.       /   \         /     \
  /       \       /     \     /   \   / \   /   \     /     \       /       \
 /         \     /       \   /     `-'   `-'     \   /       \     /         \
            \   /         `-'                     `-'         \   /
             `-'         Hit me with those laser beams.        `-'

From: Erik Naggum
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3062937607803710@naggum.no>
* Robert Harley
| In the ring "modulo 2^n, where n is the number of bits in the type",
| according to Messrs Kernighan and Ritchie.  If you assumed a different
| ring (such as Z) that's tough.  You should read the language spec
| before pontificating in error.

you gotta love this kind of response.  it's the language specification that
is in error.  just as it is in error in many other ways.  division between
two signed integers can yield either floor or ceiling depending on what the
CPU architecture supports.  chars are signed or unsigned according to what
the "hardware" thinks is faster.  etc.  the _language_ C is underspecified.
no matter how well they write their specification, C remains underspecified.
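
a trivial probe makes the point.  this sketch is mine, and the point is
precisely that what it prints is up to the compiler:

/* probe.c -- both results below are implementation-defined in C89. */
#include <stdio.h>

int main (void)
{
  char c = (char) 255;  /* -1 if plain char is signed, 255 if unsigned */
  printf ("-7/2 = %d, -7%%2 = %d\n", -7 / 2, -7 % 2);  /* -3/-1 or -4/1 */
  printf ("(char) 255 = %d\n", c);
  return 0;
}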

C is _designed_ to be "fast but unreliable".  _that_ is the problem.

#\Erik
-- 
1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: Oleg Moroz
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <32e6a3d8.4322028@news-win.rinet.ru>
On 22 Jan 1997 16:00:07 +0000, Erik Naggum <····@naggum.no> wrote:

>* Robert Harley
>| In the ring "modulo 2^n, where n is the number of bits in the type",
>| according to Messrs Kernighan and Ritchie.  If you assumed a different
>| ring (such as Z) that's tough.  You should read the language spec
>| before pontificating in error.
>
>you gotta love this kind of response.  it's the language specification that
>is in error.  just as it is in error in many other ways.  division between
>two signed integers can yield either floor or ceiling depending on what the
>CPU architecture supports.  chars are signed or unsigned according to what
>the "hardware" thinks is faster.  etc.  the _language_ C is underspecified.
>no matter how well they write their specification, C remains underspecified.
>
>C is _designed_ to be "fast but unreliable".  _that_ is the problem.

The fact is that for most everyday programming needs many (if not all)
programmers need just that - fast and unreliable integer arithmetic, not an
overspecified Common Lisp or Ada numerics or Scheme's number tower. And when
they do need the precisely defined number-crunching operations, they can surely
code them explicitly. 

I think that one of the faults of CL and the like was to first overspecify some
features of the language and then depend on "sufficiently smart compilers" to
achieve good performance, instead of using more straightforwardly implementable
things as the language basis and then building on it with more (standard)
libraries and such.

Oleg
From: Michael Haynie
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <mbhaynie-ya023180002401972204150001@pgh.nauticom.net>
In article <················@news-win.rinet.ru>, ·····@inist.ru (Oleg
Moroz) wrote:

> 
> I think that one of the faults of CL and the like was to first overspecify some
> features of the language and then depend on "sufficiently smart compilers" to
> achieve good performance, instead of using more straightforwardly implementable
> things as the language basis and then building on it with more (standard)
> libraries and such.
> 
> Oleg

pmfji:

As I recall, most of the performance problems one sees in virtually any
LISP-like language are related not to numeric performance, but to the very
characteristics that make LISP such a pleasure to develop large systems in.
In particular, since LISPs are largely untyped languages, there are few
cues that a compiler can use to generate clever and efficient code.
Numeric inner-loop code in Common Lisp with the proper declarations is as
fast as similar C code.  However, the bulk of an application is usually
more mundane.  It is hard to emit tight code when the type of an object
must always be tested before processing.

One uses the fact that *objects* are typed to write clever code, with no
thought to keeping the types of things more than loosely under control.

C on the other hand has a rather more strict type system (though not
complete...).  There are *many* cues for the compiler to use.  Moreover, it
is presumed that the return type of a function is what it was declared to
be, and no code is generated to check for any other possibility.  Thus the
programmer must think much more carefully when writing C.

As to the overspecification of the behaviour of the CL math library -- it
was necessary to provide consistent results on various platforms.  As the
spec is very close to the IEEE floating point standard (whose number I
cannot recall), the real burden on the compiler implementer was small
relative to the other difficulties of providing an efficient LISP
implementation.


The sort of things that would make LISP easier to compile would be to
remove such things as closures/first-class-functions, dynamic extent, and
add such things as a strict type system.  But then, the result would not be
LISP-like any more.  Clever compilers really are the best way in this case.

Perhaps some new language with a type/object system as flexible as CLOS,
with type inference as complete as ML or Haskell, and with the comfortable
(well, familiar, anyway) look of C/C++ will grant us coding nirvana.

I'm not holding my breath ;-)


-- 
cul8r

Michael & Bobbi Haynie
From: Peter da Silva
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5c96o8$jv2@web.nmti.com>
In article <······················@alcatel.com.au>,
Chris Bitmead <·············@Alcatel.com.au> wrote:
> It's easier to write a lisp compiler, and then spend some time getting
> the optimisations right, than to write even a plain 'ol C++ compiler
> that accepts the full syntax, and generates correct code.

C++ is not "plain old" in any sense of the word. It's also not C. C++ is
a hack, plain and simple. C is not designed to do object oriented work in,
and fronting it with an OO preprocessor is doomed to a maddening cycle of
ever increasing complexity unless you deliberately limit what OO features
you choose to implement.

As, indeed, has happened with C++.
-- 

             The Reverend Peter da Silva, ULC, COQO, BOFH.

                  Har du kramat din varg, idag? `-_-'
From: Chris Bitmead
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <BITMEADC.97Jan24103707@Alcatel.com.au>
In article <················@news-win.rinet.ru> ·····@inist.ru (Oleg Moroz) writes:

>I think that one of the faults of CL and the like was to first overspecify some
>features of the language and then depend on "sufficiently smart compilers" to
>achieve good performance, instead of using more straightforwardly implementable
>things as the language basis and then building on it with more (standard)
>libraries and such.

It's easier to write a lisp compiler, and then spend some time getting
the optimisations right, than to write even a plain 'ol C++ compiler
that accepts the full syntax, and generates correct code.
From: Alex Colvin
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cr5rj$v7k$1@dartvax.dartmouth.edu>
>>I think that one of the faults of CL and the like was to first overspecify some
>>features of the language and then depend on "sufficiently smart compilers" to
>>achieve good performance, instead of using more straightforwardly implementable

>It's easier to write a lisp compiler, and then spend some time getting
>the optimisations right, than to write even a plain 'ol C++ compiler
>that accepts the full syntax, and generates correct code.

Both true.

PL/I attempted to rely on standard optimizations. They were fragile.
Small changes to the code had global impact.  You wind up reading
object code to see if it does what you expect.

I have yet to use a C++ compiler that always generates correct code.

--
	Alex Colvin
	···········@dartmouth.edu
From: Chris Morgan
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <m2d8uh7j7k.fsf@mihalis.demon.co.uk>
In article <················@news-win.rinet.ru> ·····@inist.ru (Oleg
Moroz) writes:

> The fact is that for most everyday programming needs many (if not all)
> programmers need just that - fast and unreliable integer arithmetic, not an
> overspecified Common Lisp or Ada numerics or Scheme's number tower. And when
> they do need the precisely defined number-crunching operations, they
> can surely
> code them explicitly. 
 
... and get them wrong much more often than compiler writers,
especially Ada compiler writers.

Chris
-- 
Chris Morgan
From: Larry Kilgallen
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <1997Jan22.121634.1@eisner>
In article <··········@news-rocq.inria.fr>, ······@pauillac.inria.fr (Robert Harley) writes:
> Erik Naggum <····@naggum.no> writes:
>>[...]
>>thank you.  you prove my point most eloquently: C does not give programmers
>>access to the arithmetic condition known as "overflow".
> 
> What exactly is an "arithmetic condition"?  If you mean a carry or
> overflow flag, hardware may not have one for you to access.

Hardware is irrelevant.  If the hardware does not support such a check,
it should be generated by the compiler.  Of course if a language has no
way to report such an error, whether the compiled code detected such an
overflow or not is a secret.

>>as I said, A+1 is either A+1, 0 or -(A+1) in C.
> 
> A+1 is A+1.
> 
> In the ring "modulo 2^n, where n is the number of bits in the type",
> according to Messrs Kernighan and Ritchie.  If you assumed a different
> ring (such as Z) that's tough.  You should read the language spec
> before pontificating in error.

If a language provides  modular arithmetic by default, it does not
meet the expectations of most programmers.  Most programmers do not
read language specifications.  Restricting one's interest to the
subset known as "good programmers" is not acceptable, since most
software is not written by "good programmers" (the other sort being
in much more plentiful supply and much cheaper).

Larry Kilgallen
From: Peter da Silva
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5c602r$k46@web.nmti.com>
In article <··················@eisner>,
Larry Kilgallen <·········@eisner.decus.org> wrote:
> If a language provides  modular arithmetic by default, it does not
> meet the expectations of most programmers.

Programmers who don't learn how C arithmetic (or Forth arithmetic, which is
also modular, or PL/M arithmetic, or the arithmetic of any other low level
language I've ever used) works shouldn't be programming in C.

That's like programming in Pascal without understanding the nuances of "get",
or in Ada, without understanding operator overloading.

-- 

             The Reverend Peter da Silva, ULC, COQO, BOFH.

                  Har du kramat din varg, idag? `-_-'
From: Larry Kilgallen
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <1997Jan22.181709.1@eisner>
In article <··········@web.nmti.com>, ·····@nmti.com (Peter da Silva) writes:
> In article <··················@eisner>,
> Larry Kilgallen <·········@eisner.decus.org> wrote:
>> If a language provides  modular arithmetic by default, it does not
>> meet the expectations of most programmers.
> 
> Programmers who don't learn how C arithmetic (or Forth arithmetic, which is
> also modular, or PL/M arithmetic, or the arithmetic of any other low level
> language I've ever used) works shouldn't be programming in C.

"Shouldn't" has no place in the real world.  They do, regardless of
whether that is because it is the only language they learned in school
or because their boss learned from Computerworld that all programs
should be written in C.

Larry Kilgallen
From: Peter da Silva
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5c8ged$kon@web.nmti.com>
In article <··················@eisner>,
Larry Kilgallen <·········@eisner.decus.org> wrote:
> "Shouldn't" has no place in the real world.

Trying to convince people not to code in C is just as futile in the real
world.  In any world where there's any point at all to this entire
discussion, the word "shouldn't" is an entirely appropriate response.

Now if you want to rag on lusers let's talk about the ten minutes I just
spent trying to understand why someone was complaining that they couldn't
telnet into an NT server when it worked just yesterday. And, no, there's
never been a telnetd installed on that particular box.
-- 

             The Reverend Peter da Silva, ULC, COQO, BOFH.

                  Har du kramat din varg, idag? `-_-'
From: Peter da Silva
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cbgej$5mi@web.nmti.com>
In article <············@goanna.cs.rmit.edu.au>,
Richard A. O'Keefe <··@goanna.cs.rmit.EDU.AU> wrote:
> ·····@nmti.com (Peter da Silva) writes:
> >Programmers who don't learn how C arithmetic (or Forth arithmetic, which is
> >also modular, or PL/M arithmetic, or the arithmetic of any other low level
> >language I've ever used) works shouldn't be programming in C.

> There is nothing in the Fortran 66, Fortran 77, or Fortran 90 standards
> that says Fortran integer arithmetic is modular.

Fortran is not a "low level language". It's a rather specialized language,
aimed towards scientific and engineering number crunching, but it's not a
low level language in the way that C, Forth, or PL/M are.

As for what the ANSI standard says, well, we've already had the "Peter da
Silva doesn't believe in standards" flamewar [1]. I'll pass on it this time
around.

[1] Standards are tools, not straightjackets. 'nuff said.
-- 

             The Reverend Peter da Silva, ULC, COQO, BOFH.

                  Har du kramat din varg, idag? `-_-'
From: Richard A. O'Keefe
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cp7lo$lbe$1@goanna.cs.rmit.edu.au>
·····@nmti.com (Peter da Silva) writes:
>Fortran is not a "low level language". It's a rather specialized language,
>aimed towards scientific and engineering number crunching, but it's not a
>low level language in the way that C, Forth, or PL/M are.

Hmmm.  Admittedly Fortran 90 is higher level than C 89, but Fortran has
certainly been _used_ as a low level language to very good effect.  The
Primos operating system (at one time quite popular) was written almost
entirely in Fortran.  Before "language X to C" compilers became popular,
there were several "language X to Fortran" compilers.  There are precisely
two low level features present in standard C and missing from Fortran 77:
bit fields in records (which can be faked with the MIL-STD bit-twiddling
functions in Fortran) and pointers (which are actually supported by a lot
of Fortran 77 compilers).  Fortran 90 has pointers.

Yes, Fortran is particularly good at number crunching, but Fortran 90
has pretty much everything we wanted in the 70s and early 80s, including
modules, type checked interfaces, function and operator overloading,
user defined data types, and even (via a supplementary but quite official
standard) dynamic variable-length strings + character stream I/O.

The relevance of Fortran to this discussion is that Fortran implementors
(who are represented very well on the standards committees) are concerned
with blazing speed.  If modular integer arithmetic were necessary or even
particularly useful for doing serious arithmetic with blazing speed, they
would certainly have specified it.

On the contrary, the Fortran standardisers have moved _away_ from
"machine size integer" to parametric integer types, where you can
ask for an integer type that is big enough for your needs.  (Of course
you may not _get_ it because the compiler may not support a sufficiently
large integral type.)  The point here of course is that supporting the
numbers you need, however slow, _is_ "blazingly fast" compared with not
supporting them.

>As for what the ANSI standard says, well, we've already had the "Peter da
>Silva doesn't believe in standards" flamewar [1]. I'll pass on it this time
>around.

>[1] Standards are tools, not straightjackets. 'nuff said.

A standard that leaves the result of integer overflow undefined, whether
it is a Fortran standard or the C standard, is in that respect not a
straitjacket.  (The jackets are "strait" = narrow, confining, not "straight".)

Standards are _useful_ as tools only if other people respect them.

-- 
My tertiary education cost a quarter of a million in lost income
(assuming close-to-minimum wage); why make students pay even more?
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Peter da Silva
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cqdu8$7um@web.nmti.com>
In article <············@goanna.cs.rmit.edu.au>,
Richard A. O'Keefe <··@goanna.cs.rmit.edu.au> wrote:
> Hmmm.  Admittedly Fortran 90 is higher level than C 89, but Fortran has
> certainly been _used_ as a low level language to very good effect.

I know. I've done a lot of low level work in Fortran. It requires a lot more
work than in C, and you need a lot more assembly language glue and runtime
support.

And the fortran compilers of that period did machine arithmetic. We had to
watch for that, because the behaviour of corner cases on 2s-complement and
1s-complement machines had the same strangenesses in Fortran as they did
in C, and we were working on 2s complement and 1s complement machines.

And C, in turn, is higher level than PL/M... but the two are close enough
that I was able to convert a *huge* collection of PL/M code to readable
and maintainable (if occasionally rather ugly) C. Translators between C and
Fortran don't even try to do that sort of thing.

As for blazing speed, forget it. I haven't said word one about machine
arithmetic being faster. The issue isn't speed, it's whether the language
exposes machine semantics or not.
-- 

             The Reverend Peter da Silva, ULC, COQO, BOFH.

                  Har du kramat din varg, idag? `-_-'
From: William Clodius
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <32F0E863.7A5F@lanl.gov>
Richard A. O'Keefe wrote:
> 
> <snip>   There are precisely
> two low level features present in standard C and missing from Fortran 77:
> bit fields in records (which can be faked with the MIL-STD bit-twiddling
> functions in Fortran) and pointers (which are actually supported by a lot
> of Fortran 77 compilers).  Fortran 90 has pointers.

However F90's "POINTER"s are rather different in their semantics from
those of most low level languages. (The F77 "CRAY pointer" extension was
also different from than most POINTERs, albeit more low level than F90
POINTERs). They are in effect strongly typed descriptors. This semantics
facilitates their useage for high level constructs, but makes it
difficult to do the type casts that C allows.

Fortran's overall semantics also make some low level coding awkward:
e.g., FUNCTIONs may only be called in contexts where an optimizer is
allowed to treat them as pure functions, and there is no volatile
attribute.

> 
> Yes, Fortran is particularly good at number crunching, but Fortran 90
> has pretty much everything we wanted in 70s and early 80s, including
> modules, type checked interfaces, function and operator overloading,
> user defined data types, and even (via a supplementary but quite official
> standard) dynamic variable-length strings + character stream I/O.

However, the only available implementation of dynamic variable-length
strings is the publicly available F90 module code printed in the
standard's appendix; no compiler currently supports this string type
directly, so opportunities for optimization are not fully exploited.

True character stream I/O is not present in F90.  Although ADVANCE=NONE
provides most of its capabilities, the underlying model remains record
based, and there are a few cases, e.g., interfacing with experimental
data files, where its users would appreciate true stream based I/O.  (If
the OS does not directly support records, most Fortran compilers create
and expect a leading and trailing INTEGER recording the record length,
which would be absent in true stream file I/O.)

> 
> The relevance of Fortran to this discussion is that Fortran implementors
> (who are represented very well on the standards committees) are concerned
> with blazing speed.  If modular integer arithmetic were necessary or even
> particularly useful for doing serious arithmetic with blazing speed, they
> would certainly have specified it.

Not quite.  Fortran implementors are also concerned about portability to
their current and future systems.  If specifying modular arithmetic would
cause a performance degradation on systems that do not directly support
it, it would also not be specified.  What you can expect is that if the
implementors are concerned about the Fortran market, they will ensure
that the implementation of integers is consistent with the standard and
that it is as fast as possible given that constraint.

> 
> On the contrary, the Fortran standardisers have moved _away_ from
> "machine size integer" to parametric integer types, where you can
> ask for an integer type that is big enough for your needs.  (Of course
> you may not _get_ it because the compiler may not support a sufficiently
> large integral type.)  The point here of course is that supporting the
> numbers you need, however slow, _is_ "blazingly fast" compared with not
> supporting them.

They also have parametric REALs, and quasi-parametric LOGICALs and
CHARACTERs.

> <snip>

-- 

William B. Clodius		Phone: (505)-665-9370
Los Alamos Nat. Lab., NIS-2     FAX: (505)-667-3815
PO Box 1663, MS-C323    	Group office: (505)-667-5776
Los Alamos, NM 87545            Email: ········@lanl.gov
From: Rob Warnock
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cpbc0$ep0@tokyo.engr.sgi.com>
Richard A. O'Keefe <··@goanna.cs.rmit.edu.au> wrote:
+---------------
| Before "language X to C" compilers became popular, there were several
| "language X to Fortran" compilers...
+---------------

Ah, yezzz... Good old "RATFOR"! ("Software Tools", Kernighan & Plauger.)
You, too, can program in (almost) C in a FORTRAN-only shop...


-Rob

-----
Rob Warnock, 7L-551		····@sgi.com
Silicon Graphics, Inc.		http://reality.sgi.com/rpw3/
2011 N. Shoreline Blvd.		Phone: 415-933-1673  FAX: 415-933-0979
Mountain View, CA  94043	PP-ASEL-IA
From: Richard A. O'Keefe
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5c97b9$p1t$1@goanna.cs.rmit.EDU.AU>
·····@nmti.com (Peter da Silva) writes:
>Programmers who don't learn how C arithmetic (or Forth arithmetic, which is
>also modular, or PL/M arithmetic, or the arithmetic of any other low level
>language I've ever used) works shouldn't be programming in C.

There is nothing in the Fortran 66, Fortran 77, or Fortran 90 standards
that says Fortran integer arithmetic is modular.  If a Fortran integer
arithmetic operation has a result that is not in range, that is an
error in your program, and a Fortran system can do anything it likes,
up to and including making demons fly out of your nose.  The Burroughs
Fortran system (a very pleasant Fortran 66 + extensions system) signalled
an exception for integer overflow.  This was perfectly ok according to the
standard.

Let me paraphrase Peter da Silva, with whom I am almost always in
agreement, by saying "anyone who doesn't learn that *standard* Fortran
arithmetic is NOT modular shouldn't program in Fortran."

If it comes to that, section 6.2.1.2 of the C standard draws a careful
distinction between signed and unsigned integers in C.  I can find
nothing in the C standard that says that *signed* integers in C are
modular, and have used (and been very thankful to use) a C implementation
where they _aren't_.  (It is arguable from the C standard that if x is
signed integral, then x+1 always has a defined value, but from 6.2.1.2
if you try to _store_ that value in a variable that isn't big enough to
hold it, the effect is explicitly *implementation-defined*.)
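
A two-line illustration of the distinction (my sketch, not text from the
standard; the mod-256 arithmetic assumes 8-bit chars):

/* narrow.c -- unsigned narrowing is defined (reduced mod 2^n), but
   storing an out-of-range value in a signed type is
   implementation-defined per 6.2.1.2. */
#include <stdio.h>

int main(void)
{
  unsigned char uc = 300;   /* defined: 300 mod 256 == 44 */
  signed char   sc = 300;   /* implementation-defined result */
  printf("%d %d\n", uc, sc);
  return 0;
}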

-- 
My tertiary education cost a quarter of a million in lost income
(assuming close-to-minimum wage); why make students pay even more?
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: David Hanley
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <32E6CA6E.45B5@netright.com>
Tim Pierce wrote:
> 
> >A+1 is A+1.
> >
> >In the ring "modulo 2^n, where n is the number of bits in the type",
> >according to Messrs Kernighan and Ritchie.  If you assumed a different
> >ring (such as Z) that's tough.  You should read the language spec
> >before pontificating in error.
> 
> How silly to imagine that a variable type entitled `int' might
> actually be related to the ring Z.

	It might seem silly, but I seem to remember it as part of the required
computer science classes at my university, and pretty much everyone got
it.  If someone can't be bothered to learn basic language 'stuff' I
don't know how much I trust them as programmers.



-- 
------------------------------------------------------------------------------
David Hanley, Software Developer, NetRight technologies.
My employer pays me for my opinions; nonetheless he does not share all
of them
E-mail address munged to defeat automailers, Delete _nospam
From: Erik Naggum
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3063010159007887@naggum.no>
* David Hanley
| [modulo 2^n definition of integer types] might seem silly, but I seem to
| remember it as part of the required computer science classes at my
| university, and pretty much everyone got it.  If someone can't be
| bothered to learn basic language 'stuff' I don't know how much I trust
| them as programmers.

obviously, the problems don't happen when programmers are aware of them and
actually code to detect overflow.  the problems happen because programmers
are not aware of the _possibility_ of an overflow in a given operation, and
so calculate and return the wrong answers without any notification or error.

with extreme care and precaution, a C programmer may write explicit code
that detects when the modulo 2^n arithmetic does not agree with expected
mathematical values, but overflow is an _error_ condition.  if a programmer
writes checks after the fact, the _error_ still went unnoticed by C.  you
can add as much explicit code as you want -- C _still_ doesn't give you
access to the overflow condition.
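
for the record, here is what that "extreme care" looks like, sketched by
me for addition alone -- and it must be repeated, by hand, at every
operation that might overflow:

/* overflow.c -- a pre-test for signed addition; C itself never tells you. */
#include <limits.h>

int add_would_overflow (int a, int b)
{
  if (b > 0 && a > INT_MAX - b) return 1;   /* would wrap past INT_MAX */
  if (b < 0 && a < INT_MIN - b) return 1;   /* would wrap past INT_MIN */
  return 0;
}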

and yes, I _do_ know that what I'm asking for is not C.  please get the
point: C _does_ _not_ _provide_ a necessary mechanism for safe programming.

#\Erik
-- 
1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: Robert Bernecky
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <1997Jan23.092748.2909@jarvis.cs.toronto.edu>
This followup may be a mistake, but let's see what we can do...

The included text below has the flavor of Socratic dialogue to me.
Perhaps a concrete example will help.

Consider the bankers of the field, how they grow; they toil not, neither do
they spin. They do, however, make bazillions by using giant brain
machines to look at the market and decide where to place their bets.

Often, they don't like floating point for currency, because roundoff 
error can be evil. So, let's consider scaled integers. 
Oops, careful -- runaway currency 
inflation can do you in there, producing [Gasp!] integer overflow
when you least expect it.  Now, they're going to be doing stuff
like very very big matrix multiplies to pick the best subset of
market instruments to meet some criteria. 

Hanley suggests that the programmers will carefully write 
unsigned integer code and check everywhere for overflow so they
can take appropriate action. I don't think so.  Particularly
when they inherit code from the dark ages and have never
taken a computer science course in their life. They got sent on 
one (count 'em, 1) C programming course offered by the
Matchbook Institute Of Computering and are now playing
the Leeson version of computer roulette. 

As for me, I'd be happy if I could merely detect SIGNED integer
overflow (as any self-respecting bit of hardware does as a matter
of course) and take "appropriate" action within a C program generated
by my compiler.
Use of unsigned math is not the greatest thing here, either -- clutter
reigns.

I think of C as a generic assembler code -- evil, unsafe, and ugly,
and preferably not to be hand-generated. BUT, one of its [many] shortcomings
is this inability to do something that the most humble pocket calculator
is able to do.

Bob


In article <················@naggum.no> Erik Naggum <····@naggum.no> writes:
>* David Hanley
>| [modulo 2^n definition of integer types] might seem silly, but I seem to
>| remember it as part of the required computer science classes at my
>| university, and pretty much everyone got it.  If someone can't be
>| bothered to learn basic language 'stuff' I don't know how much I trust
>| them as programmers.
>
>obviously, the problems don't happen when programmers are aware of them and
>actually code to detect overflow.  the problems happen because programmers
>are not aware of the _possibility_ of an overflow in a given operation, and
>so calculate and return the wrong answers without any notification or error.
>
>with extreme care and precaution, a C programmer may write explicit code
>that detects when the modulo 2^n arithmetic does not agree with expected
>mathematical values, but overflow is an _error_ condition.  if a programmer
>writes checks after the fact, the _error_ still went unnoticed by C.  you
>can add as much explicit code as you want -- C _still_ doesn't give you
>access to the overflow condition.
>
>and yes, I _do_ know that what I'm asking for is not C.  please get the
>point: C _does_ _not_ _provide_ a necessary mechanism for safe programming.
>
>#\Erik
>-- 
>1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: David Hanley
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <32E7EB19.3BED@netright.com>
Robert Bernecky wrote:
> 
> Hanley suggests that the programmers will carefully write
> unsigned integer code and check everywhere for overflow so they
> can take appropriate action. I don't think so.

	I don't think so.  I never suggested any such thing.  What I did state
is that there are times when machine integers with no overflow catch are
a very good thing<tm>.  

	Nowhere did I state that it was the correct approach for all
applications, and nowhere did I state that you should check for overflow
in all situations manually.  In fact, I'm really curious how you got the
latter part from my post.

	There are situations where each approach is useful.  Criticizing a
language for taking one of these equally valid approaches is very silly;
it doesn't expand anyone's knowledge by one iota. 

>   Particularly
> when they inherit code from the dark ages and have never
> taken a computer science course in their life. They got sent on
> one (count 'em, 1) C programming course offered by the
> Matchbook Institute Of Computering and are now playing
> the Leeson version of computer roulette.

	I doubt such a person should be writing in C.  However, I think BCD
numbers are in the STL (are they?), and they behave much like regular
integers.

> I think of C as a generic assembler code -- evil, unsafe, and ugly,
> and preferably not to be hand-generated. BUT, one of its [many] shortcomings
> is this inability to do something that the most humble pocket calculator
> is able to do.

	Like scale bitmaps in realtime?

-- 
------------------------------------------------------------------------------
David Hanley, Software Developer, NetRight technologies.
My employer pays me for my opinions; nonetheless he does not share all
of them
E-mail address munged to defeat automailers, Delete _nospam
From: David Hanley
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <32E7D7C3.60ED@netright.com>
Erik Naggum wrote:
> 
> * David Hanley
> | [modulo 2^n definition of integer types] might seem silly, but I seem to
> | remember it as part of the required computer science classes at my
> | university, and pretty much everyone got it.  If someone can't be
> | bothered to learn basic language 'stuff' I don't know how much I trust
> | them as programmers.
> 
> obviously, the problems don't happen when programmers are aware of them and
> actually code to detect overflow.  the problems happen because programmers
> are not aware of the _possibility_ of an overflow in a given operation, and
> so calculate and return the wrong answers without any notification or error.

	Then why didn't they flunk out of the SE class?

> 
> with extreme care and precaution, a C programmer may write explicit code
> that detects when the modulo 2^n arithmetic does not agree with expected
> mathematical values, but overflow is an _error_ condition.  if a programmer
> writes checks after the fact, the _error_ still went unnoticed by C.  you
> can add as much explicit code as you want -- C _still_ doesn't give you
> access to the overflow condition.

	Integer overflow is not an error condition in C.  The result that is
produced is perfectly valid as the result of the expression.  Defining
overflow as an error condition is a value judgement, which in turn
forces several engineering tradeoffs.

	If I know integer overflow will not happen in a number crunching loop,
I don't want to pay because you happen to like defaulting to bignums!

> 
> and yes, I _do_ know that what I'm asking for is not C.  please get the
> point: C _does_ _not_ _provide_ a necessary mechanism for safe programming.

	It depends on how you happen to define 'safe programming'.  Are you
claiming there are languages in which errors cannot happen?  What are
these exactly?  I'm guessing you like scheme or lisp.  What happens in
these when the dynamic type is not what you expect?

-- 
------------------------------------------------------------------------------
David Hanley, Software Developer, NetRight technologies.
My employer pays me for my opinions; nonetheless he does not share all
of them
E-mail address munged to defeat automailers, Delete _nospam
From: Erik Naggum
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3063055314263603@naggum.no>
* David Hanley
| Then why didn't they flunk out of the SE class?

why do you think they had an SE class?

haven't you noticed that those who have software engineering training and
a computer science education don't usually write software in C?

| If I know integer overflow will not happen in a number crunching loop,
| I don't want to pay because you happen to like defaulting to bignums!

I'm sorry to see the level to which people feel the need to resort, now.

the myth you're believing and trying to perpetuate is that Common Lisp
cannot be efficient because it defaults to safer than C.  however, you can
compile with safety 0 and speed 3 and you can declare all your variables
just like you have to in C.  the result is most likely faster than C, and
at least as safe.

you should investigate something which stirs up so much emotion in you, and
try to learn some of the history and the facts surrounding your case.  if
you attack others for flunking SE classes, maybe it's time to re-read your
programming language textbook, or perhaps look further into the matter.

| I'm guessing you like scheme or lisp.  What happens in these when the
| dynamic type is not what you expect?

you're thrown into the debugger unless you did as I suggested above,
compile everything with speed 3 and safety 0.  then you will just wind up
with junk values from the program.  it may even crash.  this is no worse
than C.  however, no sane Lisp programmer compiles everything with speed 3
and safety 0.  that is done for well-tested functions that need the extra
performance, in the way I demonstrated.

writing efficient code in Lisp takes some effort.  writing working code in
Lisp is relatively easy.  the reverse is true for C.  Peter da Silva will
have some words of wisdom to add to this statement, no doubt.

#\Erik
-- 
1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: David Hanley
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <32E8FEC9.1F11@netright.com>
Erik Naggum wrote:

> haven't you noticed that those who have software engineering training and
> a computer science education don't usually write software in C?

	No.  Statistics, please.  I see a lot of people writing code in C every
day.

> 
> | If I know integer overflow will not happen in a number crunching loop,
> | I don't want to pay because you happen to like defaulting to bignums!
> 
> I'm sorry to see the level to which people feel the need to resort, now.

	This is just a tad ironic, IMHO, after some of the adjectives I've seen
you use to describe people.

> 
> the myth you're believing and trying to perpetuate is that Common Lisp
> cannot be efficient because it defaults to safer than C.

	No/Yes.  There are reasons why C will always be faster than lisp &
lisp-like languages.  It can come close in some cases, however.  If we
support bignums/overflow/etc, we make speed tradeoffs.

>   however, you can
> compile with safety 0 and speed 3 and you can declare all your variables
> just like you have to in C.  the result is most likely faster than C, and
> at least as safe.

	Not true.  If all checks are off, lisp will probably be less safe than
C, because of its reliance on linked structures, and still not quite as
fast, for the same reason.

> 
> you should investigate something which stirs up so much emotion in you, and
> try to learn some of the history and the facts surrounding your case.

	Uh-huh.  More irony.  I write a lot of my code in SML (a functional
language) and would call myself a proficient programmer in both C and
lisp.  I've written compilers for fun; I study optimizing compiler
issues.  I think I know quite a bit about the issues involved in
producing optimal code.

	What's _bad_ is, in an argument, assuming that because the other guy
disagrees with your opinions, he must be either stupid, or ignorant of
the facts of the matter.  Maybe he has different goals, or doesn't share
your religious convictions.

> if
> you attack others for flunking SE classes, maybe it's time to re-read your
> programming language textbook, or perhaps look further into the matter.

	Uh-huh.

> 
> | I'm guessing you like scheme or lisp.  What happens in these when the
> | dynamic type is not what you expect?
> 
> you're thrown into the debugger unless you did as I suggested above,
> compile everything with speed 3 and safety 0.  then you will just wind up
> with junk values from the program.  it may even crash.  this is no worse
> than C.

	It's a lot worse than C.  In C, if you get your types bunged up,  the
compiler will probably let you know at compile time.  The same is more
true for 'better' languages like SML, Modula-3, heck, even C++.


>  however, no sane Lisp programmer compiles everything with speed 3
> and safety 0.  that is done for well-tested functions that need the extra
> performance, in the way I demonstrated.

	This worries me quite a bit; I infer that speed three safety zero has
the potential of changing the behaviour of algorithms.  That seems
pretty dangerous!  I wouldn't trust a C optimizer that had the potential
of changing how my (correct) code behaved.

> 
> writing efficient code in Lisp takes some effort.  writing working code in
> Lisp is relatively easy.  the reverse is true for C.

	Maybe.  So are you saying the right choice might depend on what your
goals are and what you are good at?

-- 
------------------------------------------------------------------------------
David Hanley, Software Developer, NetRight technologies.
My employer pays me for my opinions; nonetheless he does not share all
of them
E-mail address munged to defeat automailers, Delete _nospam
From: Erik Naggum
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3063147946409477@naggum.no>
* David Hanley
| [Common Lisp] can come close [to C] in some cases, however.

and in some cases, CL far exceeds the wettest dreams of C programmers.
nearly exactly a year ago, some prejudiced fool presented a snippet of Lisp
code that was very slow in CLISP and blamed Lisp as such for it in a
Norwegian newsgroup.  I compiled his code with CMUCL, and it ran twice as
fast as C.  just for kicks, I've done the same with Allegro CL.  my system
is a 50MHz SPARC 2+, running SunOS 4.1.3_U1.

the code:

#| lotto.cl -*- mode: common-lisp -*- 35453.053353 |#

(in-package :user)

(declaim (optimize (speed 3) (safety 0) (debug 0)))

;; the following declaration helps ACL avoid boxing.
(declaim (ftype (function (fixnum fixnum) fixnum) lotto))

(defun lotto (n m)
  (declare (fixnum n m))
  (the fixnum				; same thing for CMUCL
    (cond ((= m 1) n)
	  ((= m n) 1)
	  (t (+ (lotto (1- n) (1- m))
		(lotto (1- n) m))))))

#| lotto.cl ends |#

/* lotto.c -*- mode: c -*- 35453.046640 */

#include <stdio.h>

int lotto (int, int);

int main (void)
{
  printf ("%d\n", lotto (35, 7));
  return 0;
}

int lotto (int n, int m)
{
  if (m == 1)
    return n;
  if (m == n)
    return 1;
  return lotto (n - 1, m - 1) + lotto (n - 1, m);
}

/* lotto.c ends */

lotto.c was compiled with gcc -O3.

$ time lotto
6724520
4.42user 0.04system 0:04.50elapsed 99%CPU

in ACL, (time (lotto 35 7)) reports:

; cpu time (non-gc) 4,550 msec user, 0 msec system
; cpu time (gc)     0 msec user, 0 msec system
; cpu time (total)  4,550 msec user, 0 msec system
; real time  4,561 msec
; space allocation:
;  2 cons cells, 0 symbols, 32 other bytes
6724520

in CMUCL 17f, (time (lotto 35 7)) reports:

Evaluation took:
  2.15 seconds of real time
  2.04 seconds of user run time
  0.0 seconds of system run time
  0 page faults and
  0 bytes consed.
6724520

you can draw your own conclusions.  don't blame me for the code, it was
supplied by said prejudiced fool, except I cleaned it up somewhat
syntactically.

| Not true.  If all checks are off, lisp will probably be less safe than C,
| because of its reliance on linked structures, and still not quite as
| fast, for the same reason.

look, "not true" and "probably" don't belong in the same sentence.  if you
don't know, don't pretend you do.  I loathe people who *guess* and don't
even know they do it, but go on with their *guesses* as if they are facts.

Lisp programmers turn off checks when they have tested their code, and they
get very useful debugging help if and when something breaks.  if you have a
type error, you can see where the problem is, immediately find the caller,
edit it right away, and restart evaluation with the new code.  a Lisp
programmer has done all this and is moving on by the time you have started
to edit your code.  (some "IDE"'s for C++ offer a limited version of the
same, finally catching up to Lisps of 20 years ago.)

moreover, in a large system, with many strange dependencies, you will have
a hard time finding all callers if you change the type of a C function.  in
Lisp, you don't need to change the callers, because you can dispatch on the
type of the argument.  if you _do_ need to change all the callers, the
magical incantation can be just M-x fi:edit-who-calls RET, and the Lisp
system knows where to find all callers.  quite a different league from C.

| Maybe he has different goals, or doesn't share your religious convictions.

the hallmark of a loser is that he tries to portray an opponent as
"religious" if he can't win an argument because facts don't match his own
convictions.  it's disgusting to behold.  it's almost as bad as dragging in
the Nazis, which happens _once_ in every otherwise never-ending USENET
discussion.  I believe the technical phrase is "to hitler a discussion".  a
similar term for people who think "religious" is a similarly bad name
should be found.  maybe "to hanley a discussion" will do for now?

| It's a lot worse than C.  In C, if you get your types bunged up, the
| compiler will probably let you know at compile time.  The same is more
| true for 'better' languages like SML, Modula-3, heck, even C++.

*sigh*  the static vs dynamic type mythology, again.  people who can't see
both sides of this issue have a problem that no language can help solve.
all of the Lisp systems I have used have reported conflicts if I declare
the type of values I will put in a variable and I lie about it.  type
propagation is an important part of any good Lisp compiler.

| This worries me quite a bit; I infer that speed three safety zero has the
| potential of changing the behaviour of algorithms.  That seems pretty
| dangerous!  I wouldn't trust a C optimizer that had the potential of
| changing how my (correct) code behaved.

hahaha!  you're stretching so far it's _hilarious_.  "if I drive to Boston
with this safety belt on, I get to Boston, but if I don't use it, I might
end up in a graveyard back home.  pretty dangerous change in algorithm!"

#\Erik
-- 
1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: D. J. Bernstein
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <1997Jan2506.10.34.16383@koobera.math.uic.edu>
In article <················@naggum.no>, Erik Naggum  <····@naggum.no> wrote:
> and in some cases, CL far exceeds the wettest dreams of C programmers.

Expand the tail recursion in C, like this---

   unsigned int binomial(n,k) unsigned int n; unsigned int k;
   { unsigned int result; result = 0;
     while (k <= n) { if (!k || (k == n)) return result + 1;
       --n; result += binomial(n,k - 1); } return result; }

---and you get a 5x speedup. That's more than twice as fast as your
proud example of fast LISP code.

Of course, it's bad to use the stack for temporary space. Allocate what
you need at the beginning, handling out-of-memory explicitly rather than
crashing, and use some obvious dynamic programming---

   unsigned int binomial(n,k) unsigned int n; unsigned int k;
   { unsigned int i; unsigned int j; unsigned int *u;
     if (k > n) return 0; i = n - k; if (k > i) k = i; if (!k) return 1;
     u = (unsigned int *) malloc(sizeof(unsigned int) * (k + 1));
     if (!u) barf(); u[0] = 1; for (j = 1;j <= k;++j) u[j] = 0;
     for (i = 1;i <= n;++i) for (j = k;j > 0;--j) u[j] += u[j - 1];
     return u[k]; }

---for another 500x speedup. How would you translate that to LISP? How
fast is the result?

The code can still be substantially improved; e.g., the u array should
be duplicated to allow vectorization. There are also much better
algorithms for computing this function, but that's a side issue.
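
(One such, for concreteness -- this sketch is mine, not part of the
original argument: the multiplicative formula
C(n,k) = prod_{i=1..k} (n-k+i)/i, which is linear in min(k,n-k) and uses
constant space.  The division at each step is exact, since every prefix
product is itself a binomial coefficient; beware that the intermediate
product can overflow an unsigned long even when the final result fits.)

   unsigned long choose(n,k) unsigned long n; unsigned long k;
   { unsigned long result; unsigned long i;
     if (k > n) return 0; if (k > n - k) k = n - k;
     result = 1;
     for (i = 1;i <= k;++i) result = result * (n - k + i) / i;
     return result; }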

---Dan
Put an end to unauthorized mail relaying. http://pobox.com/~djb/qmail.html
From: Erik Naggum
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3063227553270898@naggum.no>
* D. J. Bernstein
| Expand the tail recursion in C, like this---
:
| ---and you get a 5x speedup. That's more than twice as fast as your
| proud example of fast LISP code.

just to be entirely clear on this, the code in the "proud example" was
posted by a fool who posted both the C and the Lisp code to debunk Lisp's
speed.  he was indeed very proud of his code, and he made no attempt to
speed up the C code once he was shown that CMUCL outdid it with a factor of
2.  you're not answering the question as asked if you change the premises
the way you have done.  but to show his state of mind, let me include a
sentence from his message (translated):

    Execution time under CLISP on my old Pentium/Linux: 140 seconds.
    After compiling the stuff, however, I got it down to 11 seconds.

he was comparing _interpreted_ code to his compiled C code!  you can judge
his brilliance from this alone.  his refusal to acknowledge that CLISP ran
byte-code was the reason I wanted to beat up his example.

however, your challenge was much tougher.

| Of course, it's bad to use the stack for temporary space. Allocate what
| you need at the beginning, handling out-of-memory explicitly rather than
| crashing, and use some obvious dynamic programming---
:
| ---for another 500x speedup. How would you translate that to LISP? How
| fast is the result?

since both the Lisp and the C versions are very fast, I ran a million
passes over each version to get accurate timing.  (SunOS 4 has some odd
notions of internal clock speed, measuring in 100Hz and reporting in 60Hz,
thereby losing a lot of precision.)  your C code used 78 s, but allocated
36M of memory and needed 7 s of system time to do so.  a call to free(u)
and a slight rewrite of the value-returning form increased the time to 82
s, but reduced the system time to less than .3 seconds.  using `alloca' to
allocate on the stack, instead, it's down to 66 s.  the Lisp version uses
104 s, plus 1.1 s of garbage collection and .3 s of system time.  it had
consed a total of 48M.  (experiments showed that zeroing a pre-allocated
vector was _much_ slower than allocating fresh memory.)

here's the code I used for the comparisons:

/* binomial.c -*- mode: c -*- 35453.668341 */

#include <alloca.h>	/* for alloca(); with gcc it is also a builtin */

int binomial(int n, int k)
{
  int i, j, *u;
  if (k > n)
    return 0;
  i = n - k;
  if (k > i)
    k = i;
  if (!k)
    return 1;
  u = (int *) alloca((k + 1) * sizeof (int));
  if (!u)
    return -1;
  u[0] = 1;
  for (j = 1; j <= k; ++j)	/* the rest of the row must start at zero */
    u[j] = 0;
  for (i = 1;  i <= n; ++i)
    for (j = k; j > 0; --j)
      u[j] += u[j - 1];
  return u[k];
}

/* binomial.c ends */

#| binomial.cl -*- mode: common-lisp -*- 35453.668456 |#

(in-package :user)

(defun binomial (n k)
  (declare (fixnum n k))
  (locally (declare (optimize (speed 3) (safety 0) (debug 0)))
    (unless (and (plusp k) (plusp n))
      (error "No possible result"))
    (when (> k n)
      (cerror "Evaluate (binomial ~*~D ··@*~D) instead"
	      "Reversed arguments in (binomial ~D ~D)?"
	      n k)
      (rotatef n k))
    (setq k (min k (- n k)))    
    (when (zerop k)
      (return-from binomial 1))
    (let* ((j 0) (u (make-array (1+ k) :initial-element 0)))
      (declare (fixnum j) (type (vector fixnum) u))
      (setf (svref u 0) 1)
      (tagbody
       outer   (setq j k)
       inner   (incf (svref u j) (svref u (1- j)))
	(unless (zerop (decf j)) (go inner))
	(unless (zerop (decf n)) (go outer)))
      (svref u k))))

#| binomial.cl ends |#

there may be ways to improve this further; I don't have time to sit down
and _really_ tune this code (the above was based on a couple hunches and a
little experimentation, and took me about 40 minutes to tweak this into
place, plus about half an hour CPU time for tests running so long at a time
I could get some real work done in between.)  clearly, this is an immense
improvement over the previous code, a factor of 43,100!  (I have been told
that this is because the SPARC CPU (or my version of it) handles recursion
badly due to limited register windows.  I'm seeking more information on
this.)

#\Erik
-- 
1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: Jonathan Guthrie
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cil9h$b2d$1@news.hal-pc.org>
In comp.lang.scheme D. J. Bernstein <···@koobera.math.uic.edu> wrote:

> In article <················@naggum.no>, Erik Naggum  <····@naggum.no> wrote:

> > and in some cases, CL far exceeds the wettest dreams of C programmers.

> Expand the tail recursion in C, like this---

> Of course, it's bad to use the stack for temporary space. Allocate what
> you need at the beginning, handling out-of-memory explicitly rather than
> crashing, and use some obvious dynamic programming---

> ---for another 500x speedup. How would you translate that to LISP? How
> fast is the result?

> The code can still be substantially improved; e.g., the u array should
> be duplicated to allow vectorization. There are also much better
> algorithms for computing this function, but that's a side issue.

I don't normally respond to such obvious flame bait, but I have to put
my $0.02 in, here.  And, I'd like to ask what I think is an obvious
question.

Mr. Bernstein spent most of his message tweaking the C code to make it
faster.  Mr. Naggum did some (quite necessary, in my opinion) clean-up
of Mr. Bernstein's code.  In my opinion, this small part of the overall
exchange illustrates that Mr. Naggum's original point ("in some cases,
CL far exceeds the wettest dreams of C programmers.") is not supported by
his example program.  (I make no claims about any other sample programs.)

On the other hand, I am a firm believer in the methods of Kernighan and
Plauger.  As such, I am philosophically opposed to the sort of "tweaking"
that Mr. Bernstein engages in.  The cleanup required to make the tweaked
code not leak memory like a sieve leaks water shows that it isn't quite as
easy to get the code correct as it looks.  I am, after all, not interested
in code that gets the wrong answer with blazing speed.  Neither am I
interested in code that kills my computer (or is killed by my computer)
when I try to run it.

Instead, I point out that computer programs have two speeds, fast enough
and too slow.  Certain algorithms (the one used in "lotto" is an example)
are fast enough for small cases but will quickly become too slow for
somewhat larger cases.  (Instead of (lotto 35 7) try (lotto 60 30) or
(lotto 100 50), and you'll see what I mean.)  In the most general case,
it is the limitations of the algorithm that will dominate the performance
of the program, not the (mostly mythical) performance capabilities "of
the language" or of one of the implementations of that language.

The question I ask is about the "much better algorithms for computing
this function."  In a cursory effort, I was unable to find any such
thing, and I'm curious.  Is there one that is linear in both "n" and
"m"?  What are these better algorithms?  Anyone have any references?

-- 
Jonathan Guthrie (········@brokersys.com)
Information Broker Systems   +281-895-8101   http://www.brokersys.com/
12703 Veterans Memorial #106, Houston, TX  77014, USA

We sell Internet access and commercial Web space.  We also are general
network consultants in the greater Houston area.
From: Rainer Joswig
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <joswig-ya023180002501970711350001@news.lavielle.com>
In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:

> * David Hanley
> | [Common Lisp] can come close [to C] in some cases, however.
> 
> and in some cases, CL far exceeds the wettest dreams of C programmers.
> nearly exactly a year ago, some prejudiced fool presented a snippet of Lisp
> code that was very slow in CLISP and blamed Lisp as such for it in a
> Norwegian newsgroup.  I compiled his code with CMUCL, and it ran twice as
> fast as C.  just for kicks, I've done the same with Allegro CL.  my system
> is a 50MHz SPARC 2+, running SunOS 4.1.3_U1.

> $ time lotto
> 6724520
> 4.42user 0.04system 0:04.50elapsed 99%CPU

You may want to upgrade your SPARCstation to a Macintosh PowerBook 5300c
with Macintosh Common Lisp 4.0:

? (time (lotto 35 7))
(LOTTO 35 7) took 1,183 milliseconds (1.183 seconds) to run.
Of that, 17 milliseconds (0.017 seconds) were spent in The Cooperative
Multitasking Experience.
6724520

-- 
http://www.lavielle.com/~joswig/
From: David B. Lamkins
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <AF10F56D-1652F@206.163.123.223>
On Fri, Jan 24, 1997 10:11 PM, Rainer Joswig <·············@lavielle.com>
wrote: 
>You may want to upgrade your SPARCstation to a Macintosh PowerBook
>5300c
>with Macintosh Common Lisp 4.0
>
>? (time (lotto 35 7))
>(LOTTO 35 7) took 1,183 milliseconds (1.183 seconds) to run.
>Of that, 17 milliseconds (0.017 seconds) were spent in The
>Cooperative
>Multitasking Experience.
>6724520
>

Likewise, the times on a PowerMac 7200/75:

? (time (lotto 35 7))
(LOTTO 35 7) took 1,793 milliseconds (1.793 seconds) to run.
Of that, 40 milliseconds (0.040 seconds) were spent in The Cooperative
Multitasking Experience.
6724520

It's worth pointing out that the "cooperative multitasking
experience" is the time spent by the Mac OS giving cycles
to other applications that don't have the user's focus.  I
found that the total time tracked very closely with this
OS overhead; thus the total CPU time in the above example is
1.753 seconds.


---------------------------------------------------------
 David B. Lamkins --- http://www.teleport.com/~dlamkins/
    CPU Cycles: Use them now, or lose them forever...
---------------------------------------------------------
From: Erik Naggum
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3063178897704930@naggum.no>
* Erik Naggum
| ;; the following declaration helps ACL avoid boxing.
| (declaim (ftype (function (fixnum fixnum) fixnum) lotto))

just to answer some e-mailed questions: this form shaves off 2.5% of the
execution time in ACL.  it is not strictly necessary.

|   (the fixnum	... )			; same thing for CMUCL

but in CMUCL, leaving out this result type declaration means nearly 50%
more execution time.  CMUCL lets you know that you need it, though.

#\Erik
-- 
1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: Chris Pirih, proverbs at wolfenet dot com
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <omlet.32eb4066wn@news.aa.net>
In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:
| $ time lotto
| 4.42user 0.04system 0:04.50elapsed 99%CPU
| 
| in CMUCL 17f, (time (lotto 35 7)) reports:
|   2.15 seconds of real time

Just for chuckles, I tried the C and Lisp code with gcc and gcl, 
respectively, on a P6/200 running Linux.  GCC: 0.30 seconds; 
GCL: 56.9 seconds.  GCL doesn't seem to understand the (debug) 
declamation, though it did understand the other 
optimizations.  Maybe that's why it is so slow?

This presents so many opportunities!  We can segue into a Linux vs 
SunOS flamewar, a Sparc vs x86 flamewar, a Gnu vs CMU flamewar....  
The C vs Lisp flamewar is definitely starting to wear out.

---
chris
From: Peter da Silva
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cbasc$jqv@web.nmti.com>
In article <················@naggum.no>, Erik Naggum  <····@naggum.no> wrote:
> writing efficient code in Lisp takes some effort.  writing working code in
> Lisp is relatively easy.  the reverse is true for C.  Peter da Silva will
> have some words of wisdom to add to this statement, no doubt.

No doubt.

I don't think that it's as easy to write efficient code in C as a lot of
people make out. Mostly because it doesn't have a large library of
effective and well-optimized routines to draw on, with a few exceptions
(like GMP, for example).

It takes more work to write good code (by any metric of good) in C, but if
you need to get close to the hardware (for whatever reason... efficiency,
reliability (yes!), or security (again, yes)) and have the expertise to do
so, the availability of good cheap high-performance C compilers is certainly
a blessing. If you don't, it's a trap.

It's too bad there aren't a lot of cheap efficient easily integrated Smalltalk
compilers out there.
-- 

             The Reverend Peter da Silva, ULC, COQO, BOFH.

                  Har du kramat din varg, idag? `-_-'
From: Cyber Surfer
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <854367034snz@wildcard.demon.co.uk>
In article <··········@web.nmti.com> ·····@nmti.com "Peter da Silva" writes:

> It's too bad there aren't a lot of cheap efficient easily integrated Smalltalk
> compilers out there.

There are one or two, if you look for them. Cheap, certainly.
Efficient? That depends on how you define it. A few months ago,
I saw a C programmer, somewhere on UseNet, slagging off VB and
Delphi for not supporting threads, and therefore not being,
in his eyes, "efficient" enough for writing an NT service.

However, there's at least one Smalltalk that supports threads,
and it's available for the same platform as VB and Delphi.

If I wanted to make an equally cheap shot at C, I'd ask where
I can find an "efficient" C compiler - one that compiles
single functions fast enough so that I never notice it. Delphi
can do that, and so can a number of Lisp compilers, all with
native code as the result. Plus, how much memory is needed to
do this? Lisp and Delphi can do it with as little as 16 MB,
while (intentionally biasing the argument to make my point
about bias!) I've not yet found a C++ compiler that can do the
same using an identical platform.

Of course, I've not tried all of the available C++ compilers
for this platform, so I confess that I'm not being fair. I'm
not sure about your argument. What qualities do _you_ look
for in an "efficient" compiler?

I'm not sure that many people really care about this, judging
by the popularity of slow compilers for C++. If enough of us
wanted a fast edit/compile/run cycle, then C++ might be a very
different language, or at least the compilers might be different.

It's the humans and dolphins issue, yet again.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
  Martin Rodgers | Developer and Information Broker | London, UK
       Please remove the "nospam" if you want to email me.
                  "Blow out the candles, HAL."
From: Peter da Silva
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cj1c6$mko@web.nmti.com>
In article <············@wildcard.demon.co.uk>,
Cyber Surfer <············@wildcard.demon.co.uk> wrote:
> In article <··········@web.nmti.com> ·····@nmti.com "Peter da Silva" writes:
> 
> > It's too bad there aren't a lot of cheap efficient easily integrated Smalltalk
> > compilers out there.

> There are one or two, if you look for them. Cheap, certainly.

I'd appreciate any pointers you can give me.

> If I wanted to make an equally cheap shot at C,

I'm sorry if you thought I was taking a cheap shot. Smalltalk is by far
my favorite language... but it's been mostly a platonic relationship
because I haven't been able to get a smalltalk development platform on
a wide enough range of boxes for me to use. I'm not anti-smalltalk, just
frustrated.

As for C++, well, so far as I'm concerned C++ is the devil's work. Forget
it. I'm not going to touch it unless I'm paid a lot of money to do so.

> It's the humans and dolphins issue, yet again.

No, I think you're either reading my messages *very* selectively or you're
just jumping to conclusions without bothering to read them at all.
-- 

             The Reverend Peter da Silva, ULC, COQO, BOFH.

                  Har du kramat din varg, idag? `-_-'
From: Cyber Surfer
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <854474285snz@wildcard.demon.co.uk>
In article <··········@web.nmti.com> ·····@nmti.com "Peter da Silva" writes:

> > There are one or two, if you look for them. Cheap, certainly.
> 
> I'd appreciate any pointers you can give me.

I'd appreciate it if you could give me your opinion
of Smalltalk MT: <URL:http://www.objectconnect.com/>,
perhaps by email, to avoid this thread wandering.
 
> I'm sorry if you thought I was taking a cheap shot. Smalltalk is by far
> my favorite language... but it's been mostly a platonic relationship
> because I haven't been able to get a smalltalk development platform on
> a wide enough range of boxes for me to use. I'm not anti-smalltalk, just
> frustrated.

I can understand that, as it's a familiar experience. ;)
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
  Martin Rodgers | Developer and Information Broker | London, UK
       Please remove the "nospam" if you want to email me.
                  "Blow out the candles, HAL."
From: Steven Huang
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cighm$5mf@hnssysb.hns.com>
In article <················@naggum.no> Erik Naggum <····@naggum.no> writes:
>* David Hanley
>| Then why didn't they flunk out of the SE class?

>why do you think they had an SE class?

>haven't you noticed that those who have software engineering and have a
>computer science education don't usually write software in C?
[...]

I've had Software Engineering both as an undergrad and graduate student
of Computer Science, and I now work for a company that uses C for its
embedded systems development.  Am I really that unusual, or are you
just unable to see very far beyond your immediate surroundings?
From: Paul Prescod
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <E4qBnG.L5K@undergrad.math.uwaterloo.ca>
In article <··········@hnssysb.hns.com>, Steven Huang <·······@hns.com> wrote:
>In article <················@naggum.no> Erik Naggum <····@naggum.no> writes:
>>* David Hanley
>>| Then why didn't they flunk out of the SE class?
>
>>why do you think they had an SE class?
>
>>haven't you noticed that those who have software engineering and have a
>>computer science education don't usually write software in C?
>[...]
>
>I've had Software Engineering both as an undergrad and graduate student
>of Computer Science, and I now work for a company that uses C for its
>embedded systems development.  Am I really that unusual, or are you
>just unable to see very far beyond your immediate surroundings?

Knuth programs in C-Web, which is essentially C (as far as the 
programming constructs), and I think he qualifies as having
"a computer science education."

From my personal experience, I believe that most commercial
(as opposed to corporate) software developers program in C or 
C++, regardless of their level of educational experience. 
The tragic truth is that most operating systems, SDKs and existing 
code bases are sufficiently biased towards C or C++ that it isn't 
worth changing. The "age of the Internet" offers an opportunity
to rewrite many things, so Java has the "big break" that it
would not have had a few years ago.

The JVM may also offer interesting opportunities for other
languages, because it is a more hospitable runtime environment
(gc, dynamic type checking, etc.) than a traditional operating
system.

 Paul Prescod
From: Frank A. Adrian
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <01bc0e05$55e2e610$e871df26@truth>
Paul Prescod <········@csclub.uwaterloo.ca> wrote in article
<··········@undergrad.math.uwaterloo.ca>...
> Knuth programs in C-Web, which is essentially C (as far as the 
> programming constructs), and I think he qualifies as having
> "a computer science education."

Although Dr. Knuth is known in the CS community for his work in that field,
his formal training was in Mathematics, not in CS (as we readers of
mathematical journals, as well as CS journals, know :-).  I think that his
work is better for it...

Frank A. Adrian
From: Bernd Paysan
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <32EFC938.1740B215@informatik.tu-muenchen.de>
Paul Prescod wrote:
> The JVM may also offer interesting opportunities for other
> languages, because it is a more hospitable runtime environment
> (gc, dynamic type checking, etc.) than a traditional operating
> system.

More hostile? The JVM is _very_ restrictive and makes it very hard to
implement non-Algol languages. E.g. a good APL, Lisp, Smalltalk, Prolog
or Forth system generates native code on the fly for performance. The
JVM disallows this, at least in a portable fashion ("native" means "JVM"
to me in this context; the JIT would be responsible for translating this
to real native code). Java is just yet another slight improvement of
C, and by today's standards, it's a step back of 20 years for computer
science, to somewhere around the start of Smalltalk's development. Hm,
IMHO it's much worse, because Java still isn't an interactive language,
it's plain old batch. And anybody who got used to at least one of the
five language classes above knows the value of interaction.

Please remove comp.arch from this thread!

-- 
Bernd Paysan
"Late answers are wrong answers!"
http://www.informatik.tu-muenchen.de/~paysan/
From: Paul Prescod
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <E53uq7.KHn@undergrad.math.uwaterloo.ca>
In article <·················@informatik.tu-muenchen.de>,
Bernd Paysan  <······@informatik.tu-muenchen.de> wrote:
>Paul Prescod wrote:
>> The JVM may also offer interesting opportunities for other
>> languages, because it is a more hospitable runtime environment
>> (gc, dynamic type checking, etc.) than a traditional operating
>> system.
>
>More hostile? The JVM is _very_ restrictive and makes it very hard to
>implement non-Algol languages. E.g. a good APL, Lisp, Smalltalk, Prolog
>or Forth system generates native code on the fly for performance. The
>JVM disallows this, at least in a portable fashion ("native" means "JVM"
>to me in this context; the JIT would be responsible for translating this
>to real native code). 

"A ClassLoader can be used to tell the runtime system to convert an array
of bytes into an instance of class Class. This conversion information
is passed to the runtime using the defineClass() method."

>Java is just yet another slight improvement of
>C, and by today's standards, it's a step back of 20 years for computer
>science. 

Java is not state-of-the-art, but neither is it a step back. We're talking
about average programmers actually accepting garbage collection, closures,
and runtime type checking. To me, that's a major step forward. Those 
features have always had a stigma of being "too high level" for 
mainstream programming. Java blows up that myth and legitimizes
those features.

>Somewhere around the start of Smalltalk's development. Hm, IMHO
>it's much worse, because Java still isn't an interactive language, it's
>plain old batch. And anybody who got used to at least one of the five
>language classes above knows the value of interaction.

That's why I run a scheme implementation in the JVM. Because scheme *is* 
interactive and the JVM supports it. Which was my point...

Anyhow, what in the Java spec. or JVM spec. makes Java into a "batch
language", esp. when compared to SmallTalk?

 Paul Prescod
From: David Hanley
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <330203A1.7131@netright.com>
Bernd Paysan wrote:
> 
> Paul Prescod wrote:
> > The JVM may also offer interesting opportunities for other
> > languages, because it is a more hospitable runtime environment
> > (gc, dynamic type checking, etc.) than a traditional operating
> > system.
> 
> More hostile? The JVM is _very_ restrictive and makes it very hard to
> implement non-Algol languages. E.g. a good APL, Lisp, Smalltalk, Prolog
> or Forth system generates native code on the fly for performance.

	You can easily do this in the JVM.

>  The
> JVM disallows this, at least in a portable fashion ("native" means "JVM"
> to me in this context; the JIT would be responsible for translating this
> to real native code). Java is just yet another slight improvement of
> C, and by today's standards, it's a step back of 20 years for computer
> science.

	How so?  For instance, its object and type model is quite interesting,
and, to my knowledge, has not been used in a widespread language
before.  This alone is interesting.


>  Somewhere around the start of Smalltalk's development. Hm, IMHO
> it's much worse, because Java still isn't an interactive language, it's
> plain old batch. And anybody who got used to at least one of the five
> language classes above knows the value of interaction.

	There is nothing preventing Java from being interactive.  This is
merely the model most programmers are comfortable with.  Interactive
Java systems exist.

-- 
------------------------------------------------------------------------------
David Hanley, Software Developer, NetRight technologies.
My employer pays me for my opinions; nonetheless he does not share all
of them
E-mail address munged to defeat automailers, Delete _nospam
From: Thant Tessman
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3304A89C.41C6@acm.org>
David Hanley wrote:

> 
>  [...]  For instance, [Java's] object and type model is quite interesting,
> and, to my knowledge, has not been used in a widespread language
> before.  This alone is interesting.   [...]

Forgive my laziness for not just learning Java myself, but I was under
the impression that Java's object model was just a dumbed-down version
of C++'s object model with the addition of what in Objective-C is
called a "protocol".

Am I mistaken?

-thant
From: David Hanley
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <330609A3.4A9B@netright.com>
Thant Tessman wrote:
> 
> David Hanley wrote:
> 
> >
> >  [...]  For instance, [Java's] object and type model is quite interesting,
> > and, to my knowledge, has not been used in a widespread language
> > before.  This alone is interesting.   [...]
> 
> Forgive my laziness for not just learning Java myself, but I was under
> the impression that Java's object model was just a dumbed-down version
> of C++'s object model with the addition of what in Objective-C is
> called a "protocol".

	Perhaps--I suppose I wasn't thinking of Objective-C as a widespread
language.  Java supports only single inheritance, but also provides
interfaces, which, I suppose, are broadly similar to Objective-C
protocols.

	Note that I'm not saying Java is the greatest language in
existence--given the choice, I'd much rather program in ML or Haskell.
But among the choices I've got (C, C++, Smalltalk, etc.), it's what I
pick to work in.


-- 
------------------------------------------------------------------------------
David Hanley, Software Developer, NetRight technologies.
My employer pays me for my opinions; nonetheless he does not share all
of them
E-mail address munged to defeat automailers, Delete _nospam
From: Richard A. O'Keefe
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cp8f6$m8p$1@goanna.cs.rmit.edu.au>
········@csclub.uwaterloo.ca (Paul Prescod) writes:
>Knuth programs in C-Web, which is essentially C (as far as the 
>programming constructs), and I think he qualifies as having
>"a computer science education."

Knuth is one hot programmer, computer scientist, mathematician,
and writer.  But the claim was not about people like him, but
about Software Engineers, and Knuth is not a Software Engineer.

Knuth's major published works are in *assembler* (MIXAL) and Pascal (TeX).

His _recent_ work uses C; notably "The Stanford GraphBase".
However, I don't think you should use him as an authority for the
use of C.  One of my treasures is a cheque with his signature on it,
which I won by pointing out some errors in the code of that book.

I won't go into detail, but I think it is a very telling point against
C that an exceptionally experienced and exceptionally capable
programmer with exceptional insight into machine level programming
suffered from a number of fairly basic misconceptions about C, with
the result that his code was less portable than it needed to be, to
no advantage.  (If you doubt me, read the book.)

-- 
My tertiary education cost a quarter of a million in lost income
(assuming close-to-minimum wage); why make students pay even more?
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Paul Prescod
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <E528qo.46p@undergrad.math.uwaterloo.ca>
In article <············@goanna.cs.rmit.edu.au>,
Richard A. O'Keefe <··@goanna.cs.rmit.edu.au> wrote:
>········@csclub.uwaterloo.ca (Paul Prescod) writes:
>>Knuth programs in C-Web, which is essentially C (as far as the 
>>programming constructs), and I think he qualifies as having
>>"a computer science education."
>
>Knuth is one hot programmer, computer scientist, mathematician,
>and writer.  But the claim was not about people like him, but
>about Software Engineers, and Knuth is not a Software Engineer.

The original post mentioned both a computer science education and
a software engineering education. Clearly Knuth has thought a lot
about both, and his education predates a degree that bears the label 
"software engineer".

>His _recent_ work uses C; notably "The Stanford GraphBase".
>However, I don't think you should use him as an authority for the
>use of C.  One of my treasures is a cheque with his signature on it,
>which I won by pointing out some errors in the code of that book.
>
>I won't go into detail, but I think it is a very telling point against
>C that an exceptionally experienced and exceptionally capable
>programmer with exceptional insight into machine level programming
>suffered from a number of fairly basic misconceptions about C, with
>the result that his code was less portable than it needed to be, to
>no advantage.  (If you doubt me, read the book.)

I did not intend to promote C or the use of C. I thought I was quite
explicit that many people with CS and SE degrees use C *but would 
rather not*. The original assertion was that most of them do not
use C. My personal observation from looking around is that most of them
do use either C or C++ because of many historical factors.

 Paul Prescod
From: David B. Lamkins
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <AF0D8F39-1776D@206.163.124.227>
On Thu, Jan 23, 1997 4:09 AM, Erik Naggum <···········@naggum.no> wrote: 
>and yes, I _do_ know that what I'm asking for is not C.  please get
>the
>point: C _does_ _not_ _provide_ a necessary mechanism for safe
>programming.
>

Yup. And even when the language does provide a mechanism,
many programmers will disable it in the name of "efficiency."

I worked on one project where the default was to disable
all compiler-generated runtime checks.  We found quite a 
few new bugs once I convinced others on the team that we
should enable all checks by default, and explicitly disable
checks only in small, local sections of code where we were
(reasonably) sure that we had all the boundary conditions
covered by design...
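
In C terms, the closest analogue of that discipline is assert():
compiled in by default, and disabled (via NDEBUG) only, after review,
for the few translation units whose boundary conditions are covered by
design.  A sketch only (the function and its name are illustrative, not
from the project I described):

#include <assert.h>
#include <stddef.h>

/* The check stays enabled unless this file is compiled with -DNDEBUG. */
static int sum(const int *v, size_t n)
{
    size_t i;
    int total = 0;

    assert(v != NULL);
    for (i = 0; i < n; i++)
        total += v[i];
    return total;
}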


---------------------------------------------------------
 David B. Lamkins --- http://www.teleport.com/~dlamkins/
    CPU Cycles: Use them now, or lose them forever...
---------------------------------------------------------
From: John  Bayko
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cb8rb$ius@sue.cc.uregina.ca>
In article <··········@midway.uchicago.edu>,
    Tim Pierce  <···············@mail.bsd.uchicago.edu> wrote:
>
>Maybe I'm getting a bit off the point.  The real problem doesn't
>strike me as one of ignorant programmers not *knowing* that
>overflow can occur (although in practice that is a legitimate
>concern).  The problem is that the programmer needs to worry about
>it at all, and that the language itself does not provide for a
>convenient facility for dealing with such error conditions.
>
>In some contexts that's undoubtedly a feature, but not in the
>context of writing mission-critical user-level applications.
>Forcing the programmer to make a manual check for overflow every
>time a calculation is made makes for wasteful and inefficient
>development.

    The obvious solution would be to have two different integer types
- 'secure' and 'insecure'. 'Insecure' integers would be where you
don't want the overhead of overflow checking 'cause it'll never
happen anyway:

    for(x=0;x<ARRAYSIZE;x++) ...

You'd use 'secure' types when you don't know what the limits in your
data are (maybe 'cause you're too lazy to do range checking upon
input?).
    Or of course you could include sanity checks on your data around
your code and detect the problem closer to where it actually occurs
instead of waiting until it's too late to correct it?
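
    For the 'secure' flavour, a sketch of what the check has to look
like in portable C (the helper name secure_add is illustrative; note
that signed overflow must be tested *before* the add, since once it
has happened the standard promises nothing):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Add two ints, aborting instead of overflowing. */
static int secure_add(int a, int b)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b)) {
        fprintf(stderr, "secure_add: overflow\n");
        abort();
    }
    return a + b;
}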

--
John Bayko (Tau).
·····@cs.uregina.ca
http://www.cs.uregina.ca/~bayko
From: Richard A. O'Keefe
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cp6pf$k2p$1@goanna.cs.rmit.edu.au>
I like to recall Dijkstra's discussion of things like arithmetic
in "A Discipline of Programming".  What you _want_ to do is to run your
program on a "Sufficiently Large Machine" where you don't run out of
memory or integer bits or whatever.  But that might be too expensive,
so a practical alternative is to use a "Hopefully Sufficiently Large
Machine", which will TELL if if it turns out not to be large enough.

There are at least four possible treatments of integers in a programming
language.

(1) Support integers of whatever size turns out to be necessary,
    to the extent that time and memory limits permit.  (Lisp.)

(2) Allow the programmer to _declare_ integers of any size, and support
    the size that the programmer allowed.  (Pascal and Ada have the
    notational resources to permit this, but there is a tradition of
    arbitrarily limiting the supported sizes.  I don't see the point
    in this.  Some Pascal systems have a special Integer[n] syntax for    
    larger integers.  It _can_ be done.)

(3) Allow the programmer to declare integers that fit into machine
    registers, and check for overflow at run time.  The MIPS machines
    make this a very efficient approach.  (Traditional Pascal.)

Those three approaches support application programming.

(4) Force the programmer to use sizes that are convenient for the
    machine, and check nothing at run time.  (The C _standard_ allows
    overflow checks for signed integer arithmetic; the C _tradition_
    doesn't use them.  There have however been C systems that got it
    right.)

As noted above, run-time overflow checks are fully consistent with
very high performance.  Proof:  MIPS.  (This is especially so if you
do not insist that integer overflow exceptions be precise, and I note
that high performance machines prefer imprecise floating point
exceptions, and we can live with _that_.)

There is *nothing* about approach (4) that makes it more efficient
than approach (3) or approach (2).  If you have a programming language
where the compiler lets you say

	Mills_Per_Dollar: constant := 1_000;
	Dollars_Per_Milliard: constant := 1_000_000_000;
	Country_Money_In_Milliards: constant := 1_000_000;
	Countries: constant := 300;
	World_Money_In_Mills: constant :=
	    Countries * Country_Money_In_Milliards *
	    Dollars_Per_Milliard * Mills_Per_Dollar;

	type Money is
	   range 1-World_Money_In_Mills .. World_Money_In_Mills-1;

(21 decimal digits) and makes it _work_ on a 32-bit machine (needing
3 words), why _not_?  Nothing there prevents the handling of numbers
that _do_ fit in a machine register being efficient!
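
Purely to make the "3 words" concrete, here is a sketch (mine, not any
particular compiler's output) of the word-by-word addition such a type
compiles down to on a 32-bit machine, assuming 32-bit unsigned long;
a carry out of the top word is exactly what the range check traps on:

/* 96-bit addition from three 32-bit words, least significant first. */
typedef struct { unsigned long w[3]; } Money;

static Money money_add(Money a, Money b)
{
    Money r;
    unsigned long carry = 0;
    int i;

    for (i = 0; i < 3; i++) {
        unsigned long s = a.w[i] + b.w[i];  /* wraps modulo 2^32 */
        unsigned long c = (s < a.w[i]);     /* carry out of this add */
        r.w[i] = s + carry;                 /* add carry from word below */
        carry = c | (r.w[i] < s);           /* at most one can be set */
    }
    return r;
}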

Yes, for any programming language there is an efficiency advantage
to using numbers the machine can handle directly, when and only
when those numbers are big enough for your problem.  Even in a systems
programming language, there is no reason not to support approach (2).
In fact I've used a systems programming language for the M6800 where
(2) was *especially* appropriate.
-- 
My tertiary education cost a quarter of a million in lost income
(assuming close-to-minimum wage); why make students pay even more?
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Jerry Leichter
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <32F0B2BE.2593@smarts.com>
| ...[R]un-time overflow checks are fully consistent with very high 
| performance.  Proof:  MIPS.

You bet.  Another example is the Alpha (where, in fact, "overflow trap
enabled" is an instruction modifier, rather than a global state - a much
nicer programming model.)

Unfortunately, not all machines are designed with this degree of care. 
SPARC appears to require you to check the overflow flag and branch to a
handler - relatively expensive, and a symptom of the bias toward C,
which (as noted) almost always ignores overflows.  The x86 series also
requires a check, though it's a short instruction that causes a trap if
an overflow occurred.  Note that both of these architectures *know* if
there was an overflow - they provide an overflow bit.  That they provide
no way to get an immediate trap is a design choice that would have very
little, if any, effect on performance.

|			       (This is especially so if you do not 
| insist that integer overflow exceptions be precise, and I note that 
| high performance machines prefer imprecise floating point exceptions,  
| and we can live with _that_.)

We understand how to deal with this much better than we once did. 
MIPS's FP architecture is really clever, avoiding the problem by testing
the inputs and stalling subsequent operations in the pipeline if an
exception *could* occur.  Alpha takes the "pure RISC" approach:  It's
possible to generate code in such a way that, if an exception *does*
occur, software can work backwards to determine exactly where.  (This
involves some cost-free marker bits that the hardware ignores, and some
extra state flush instructions that *do* slow things down.)  It's your
choice when you generate the code:  Maximum speed with imprecise
exceptions (you *do* still get them, of course), or some slowdown with
precise exceptions.

The technology is there.  Most of it's been there for years.  What's
been missing, and continues to be missing, is the will to move beyond
the cowboy programming model.
							-- Jerry
From: Maynard Handley
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <handleym-3001971418300001@handma.apple.com>
In article <·············@smarts.com>, Jerry Leichter
<········@smarts.com> wrote:

> | ...[R]un-time overflow checks are fully consistent with very high 
> | performance.  Proof:  MIPS.
> 
> You bet.  Another example is the Alpha (where, in fact, "overflow trap
> enabled" is an instruction modifier, rather than a global state - a much
> nicer programming model.)
> 
> Unfortunately, not all machines are designed with this degree of care. 
> SPARC appears to require you to check the overflow flag and branch to a
> handler - relatively expensive, and a symptom of the bias toward C,
> which (as noted) almost always ignores overflows.  The x86 series also
> requires a check, though it's a short instruction that causes a trap if
> an overflow occurred.  Note that both of these architectures *know* if
> there was an overflow - they provide an overflow bit.  That they provide
> no way to get an immediate trap is a design choice that would have very
> little, if any, effect on performance.
> 

I'd be interested to know what people think of the PPC model in this
regard. I program at a pretty low level where I usually want 32-bit modulo
arithmetic, so I ignore these details, but PPC provides two mechanisms for
this sort of thing.
First on a per-instruction basis one can set a bit on certain instructions
to have them set the overflow flag in condition register 0. One can
subsequently test this condition register. I suspect if one sets the branch
hint on this test to default to the "no-overflow" code this would not be
too expensive.
Secondly one can clear the Summary Overflow bit in the XER register,
perform some calculations, then at the end test if Summary Overflow was
set and if so, go back and redo the calculation testing each piece. 

Are these actually useful, or are there hidden gotchas that make them
either useless or just horribly slow?

Maynard

-- 
My opinion only
From: Joe Keane
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5db0d1$ad5@shellx.best.com>
Machine designers seem to dislike dealing with results of operations
that are larger than one word, and machines handle it differently.

This happens in add/subtract with carry-out, for example adding two
32-bit words, and maybe a carry-in, gives you 33 bits of result.

One solution is to have a global condition code that is set and can be
used in later instructions.  Plenty of old machines worked this way.
This is a simple model, but it creates big problems when you want to
move instructions around, and I don't like the contention.

One improvement, that SPARC uses, is to make sure that setting the
condition codes is always optional.  Better than that, some machines
have multiple condition codes, which I think is really good.

Another way is to get rid of condition codes entirely, and always move
data to and from general registers, even for a 1-bit value.  Of course
this is very clean, and it's perhaps the best solution overall.
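
That style is also the only one portable C can express: the wrapped
sum is smaller than an operand exactly when there was a carry out, so
the "33rd bit" gets reconstructed by a comparison.  A sketch (the
function name is illustrative):

/* Add with carry-in; the carry-out lands in an ordinary variable. */
static unsigned add_with_carry(unsigned a, unsigned b,
                               unsigned carry_in, unsigned *carry_out)
{
    unsigned s = a + b;            /* may wrap */
    unsigned c = (s < a);          /* carry out of a + b */
    unsigned r = s + carry_in;     /* fold in the incoming carry */

    *carry_out = c | (r < s);      /* at most one can be set */
    return r;
}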

Another problem case is multiply; multiplying two 32-bit words really
gives you a 64-bit result.  So what to do with the double-word result?
And without condition codes, add-with-carry-out has the same issue.

The original SPARC returned a double-word result for multiply.  It did
this by using a special register.  The new SPARC has no special register
and it doesn't give you a double-word result.  I think that's too bad.

I think that special registers aren't a bad thing; but you need some
choice, if there's only one you can use then it becomes a bottleneck.
If you could send the results of multiplies to, say, one of eight
double-word registers then I think it wouldn't be a problem.

Another scheme is to return a double-word result in a register pair,
typically two general registers with consecutive even and odd numbers.
This is pretty nice to deal with, but it might require more register-file
accesses than you would otherwise need, which would be bad.

--
Joe Keane, amateur mathematician
From: Bruce Hoult
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <2938098861@hoult.actrix.gen.nz>
Joe Keane <···@jgk.org> writes:
> Another problem case is multiply; multiplying two 32-bit words really
> gives you a 64-bit result.  So what to do with the double-word result?
> And without condition codes, add-with-carry-out has the same issue.
>
> The original SPARC returned a double-word result for multiply.  It did
> this by using a special register.  The new SPARC has no special register
> and it doesn't give you a double-word result.  I think that's too bad.
>
> I think that special registers aren't a bad thing; but you need some
> choice, if there's only one you can use then it becomes a bottleneck.
> If you could send the results of multiplies to, say, one of eight
> double-word registers then i think it wouldn't be a problem.
>
> Another scheme is to return a double-word result in a register pair,
> typically two general registers with consecutive even and odd numbers.
> This is pretty nice to deal with, but it might require more accesses for
> your register file than you may otherwise need, which would be bad.

Another option (used by PowerPC) is to have two instructions, one of which
returns the bottom half of the result, while the other instruction returns
the top half of the result.

These two instructions can be executed sequentially, or on superscalar
implementations they can be issued simultaneously to different execution
units.
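
From the portable-C side, all of these schemes get reached the same
way: widen, multiply, split, and let the compiler pick the register
pair, special register, or high/low instruction pair.  A sketch,
assuming 32-bit unsigned long and a 64-bit unsigned long long
extension such as gcc's (the function name is illustrative):

/* 32x32 -> 64 multiply, returning both halves. */
static void mul32x32(unsigned long a, unsigned long b,
                     unsigned long *hi, unsigned long *lo)
{
    unsigned long long p = (unsigned long long)a * b;

    *hi = (unsigned long)(p >> 32);
    *lo = (unsigned long)(p & 0xffffffffUL);
}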

-- Bruce

--
...in 1996, software marketers wore out a record 31,296 copies of Roget's
Thesaurus searching for synonyms to the word "coffee" ...
From: Jerry Leichter
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <32F9EF6F.73B6@smarts.com>
| > Another problem case is multiply; multiplying two 32-bit words 
| > really gives you a 64-bit result.  So what to do with the
| > double-word result?
| Another option (used by PowerPC) is to have two instructions, one of 
| which returns the bottom half of the result, while the other 
| instruction returns the top half of the result.
|
| These two instructions can be executed sequentially, or on superscalar
| implementations they can be issued simultaneously to different 
| execution units.

Just for completeness of the historical record:  This innovation, like
many others in high-performance computing, goes back to the CDC-6000
series in the late 1960's.  The 6600 actually had two multiply units, so
the two multiplications could issue (and complete) in successive cycles.

(What the 6000's did wasn't *exactly* what's being discussed here, since
they had no integer multiply instruction as such.  The multiplications
were floating point, and what you were getting was a double-precision FP
result.  The "extra" instruction was giving you the *lower* bits of the
result, rather than the extra *upper* bits that you would want an extra
instruction for when doing integer multiplies.  The reason I said the
machines had no integer multiply "as such" was that the hardware could
be used to get the same effect.  6000's had 60-bit words, and the FP
representation used 48-bit mantissas.  If you left the exponents as all
zero bits, you could think of the values as 48-bit integers - as FP values,
they were "undefined".  The hardware was special-cased so that the
low-order product of two such things wasn't shifted completely away but
was stored as another such number.  Or something very much like that -
it's been many years.)
							-- Jerry
From: ········@wat.hookup.net
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5dcu1v$35e$2@nic.wat.hookup.net>
In <·············@smarts.com>, Jerry Leichter <········@smarts.com> writes:
>| > Another problem case is multiply; multiplying two 32-bit words 
>| > really gives you a 64-bit result.  So what to do with the
>| > double-word result?
>| Another option (used by PowerPC) is to have two instructions, one of 
>| which returns the bottom half of the result, while the other 
>| instruction returns the top half of the result.
>|
>| These two instructions can be executed sequentially, or on superscalar
>| implementations they can be issued simultaneously to different 
>| execution units.
>
>Just for completeness of the historical record:  This innovation, like
>many others in high-performance computing, goes back to the CDC-6000
>series in the late 1960's.  The 6600 actually had two multiply units, so
>the two multiplications could issue (and complete) in successive cycles.
>
>(What the 6000's did wasn't *exactly* what's being discussed here, since
>they had no integer multiply instruction as such.  The multiplications
>were floating point, and what you were getting was a double-precision FP
>result.  The "extra" instruction was giving you the *lower* bits of the
>result, rather than the extra *upper* bits that you would want an extra
>instruction for when doing integer multiplies.  The reason I said the
>machines had no integer multiply "as such" was that the hardware could
>be used to get the same effect.  6000's had 60-bit words, and the FP
>representation used 48-bit mantissas.  If left the exponents as all zero
>bits, you could think of the values as 48-bit integers - as FP values,
>they were "undefined".  The hardware was special-cased so that the
>low-order product of two such things wasn't shifted completely away but
>was stored as another such number.  Or something very much like that -
>it's been many years.)
>							-- Jerry

Close.  The 6000s did their floating point arithmetic unnormalized.  Early
on in the life of the series you had to pack the integers to get an
unnormalized floating point number, then unpack the product.  Later models
special cased the plain integer multiplication (although it still was
restricted to 48 bit integers).

Hartmann Schaffer
From: Richard A. O'Keefe
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5drbcn$a2s$1@goanna.cs.rmit.edu.au>
Joe Keane <···@jgk.org> writes:
>The original SPARC returned a double-word result for multiply.  It did
>this by using a special register.  The new SPARC has no special register
>and it doesn't give you a double-word result.  I think that's too bad.

According to the SPARC V9 handbook, the %Y register is still there,
as are the V8 instructions that used it.  It is not used for 64-bit
multiplicands, and you are recommended not to use it in new code.

>Another scheme is to return a double-word result in a register pair,
>typically two general registers with consecutive even and odd numbers.
>This is pretty nice to deal with,

Not if you're writing a compiler it's not.

-- 
limits on the scope of cooperation are often due to the inability
to recognise the identity or the acts of the other players.  --R.Axelrod
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
From: Chris Brown
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5c7gak$jqg@sis.armltd.co.uk>
In article <··········@midway.uchicago.edu>,
Tim Pierce  <···············@mail.bsd.uchicago.edu> wrote:
>
>How silly to imagine that a variable type entitled `int' might
>actually be related to the ring Z.

I think people are just going to have to accept that this is going to
remain a point of difference between Computer Scientists (of the
practical kind), and mathematicians and Computer Scientists (of the
theoretical kind). I don't think either (if we're talking about Lisp
and C here) language is 'superior', because I don't think a comparison
between them is sensible - they're aimed at different
targets. Criticisms of C on the basis of it using machine integer
types, rather than 'true' integer types are somewhat silly - it's like
criticising a giraffe for not having wheels.

Of course, if you want to carry on having a pointless flamewar, please
don't let me stop you. :-)
--
/*  _  */main(int k,char**n){char*i=k&1?"+L*;99,RU[,RUo+BeKAA+BECACJ+CAACA"
/* / ` */"CD+LBCACJ*":1[n],j,l=!k,m;do for(m=*i-48,j=l?m/k:m%k;m>>7?k=1<<m+
/* |   */8,!l&&puts(&l)**&l:j--;printf("  \0_/"+l));while((l^=3)||l[++i]);}
/* \_,hris Brown -- All opinions expressed are probably wrong. */
From: Scott Schwartz
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <8gd8uuxpo2.fsf@galapagos.cse.psu.edu>
In article <················@naggum.no> Erik Naggum <····@naggum.no> writes:
| $ time lotto
| in ACL, (time (lotto 35 7)) reports:

Just to be fair, you ought to measure the same thing.  In the first
case you measure the time to load and initialize the C runtime system.
In the second case you already have Lisp fired up, so you omit the
measurements for the cost of doing that.  Let's see the time for the
a.out generated by ACL or CMUCL, starting at the shell prompt.
From: Rainer Joswig
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <joswig-ya023180002501970710520001@news.lavielle.com>
In article <··············@galapagos.cse.psu.edu>,
········@galapagos.cse.psu.edu.NO-SPAM (Scott Schwartz) wrote:

> In article <················@naggum.no> Erik Naggum <····@naggum.no> writes:
> | $ time lotto
> | in ACL, (time (lotto 35 7)) reports:
> 
> Just to be fair, you ought to measure the same thing.  In the first
> case you measure the time to load and initialize the C runtime system.
> In the second case you already have Lisp fired up, so you omit the
> measurements for the cost of doing that.  Let's see the time for the
> a.out generated by ACL or CMUCL, starting at the shell prompt.

I almost always have a CL running.

Also, my Lisp machine's OS is running CL; there is no quitting CL
other than shutting down the machine.

-- 
http://www.lavielle.com/~joswig/
From: Erik Naggum
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3063183095872312@naggum.no>
* Scott Schwartz
| Just to be fair, you ought to measure the same thing.  In the first
| case you measure the time to load and initialize the C runtime system.
| In the second case you already have Lisp fired up, so you omit the
| measurements for the cost of doing that.  Let's see the time for the
| a.out generated by ACL or CMUCL, starting at the shell prompt.

right.  the usual reaction from "C just _has_ to win"-people is to force
everything to fit its paradigm.  when C loses by a factor of 2, it has to
be explained away with some nonsense like the above.

a Lisp system is like a shell.  in Unix, the shell evaluates your command
line, including forking and running programs and whatnot.  in Lisp systems,
Lisp evaluates your expressions.  if your goal in life is to get something
done efficiently, you measure what is natural in each environment.  if your
goal in life is to execute a.out files efficiently, never mind what they
do, then, and only then, should you generate a.out files for all test
cases.

but let's see what difference the runtime system _does_ make.  ACL has a
nice foreign function interface that should even out the costs:

lotto.c is now compiled with gcc -c -O3 -fPIC.

(load "lotto.o")
(ff:defforeign 'c-lotto
    :entry-point "_lotto"
    :arguments '(fixnum fixnum)
    :return-type :integer)

user(40): (time (lotto 35 7))
; cpu time (non-gc) 4,567 msec user, 0 msec system
; cpu time (gc)     0 msec user, 0 msec system
; cpu time (total)  4,567 msec user, 0 msec system
; real time  4,591 msec
; space allocation:
;  2 cons cells, 0 symbols, 32 other bytes
6724520
user(41): (time (c-lotto 35 7))
; cpu time (non-gc) 4,417 msec user, 0 msec system
; cpu time (gc)     0 msec user, 0 msec system
; cpu time (total)  4,417 msec user, 0 msec system
; real time  4,431 msec
; space allocation:
;  4 cons cells, 0 symbols, 32 other bytes
6724520

(these are stabilized values, i.e., time repeated the same values several
times in a row.)

we recall that the C function in the fork-and-execute version used 4.42
seconds of user time.  in other words, the difference is negligible.

note: the foreign function interface to CMUCL is less well documented, and
I have never used it much.  however, I'm using Allegro CL for a project
that uses lots of binary protocols for which we already have working code.

#\Erik
-- 
1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: Peter da Silva
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cij7g$dvp@web.nmti.com>
In article <················@naggum.no>, Erik Naggum  <····@naggum.no> wrote:
> * Scott Schwartz
> | Just to be fair, you ought to measure the same thing.  In the first
> | case you measure the time to load and initialize the C runtime system.
> | In the second case you already have Lisp fired up, so you omit the
> | measurements for the cost of doing that.  Let's see the time for the
> | a.out generated by ACL or CMUCL, starting at the shell prompt.

> right.  the usual reaction from "C just _has_ to win"-people is to force
> everything to fit its paradigm.  when C loses by a factor of 2, it has to
> be explained away with some nonsense like the above.

Fine. Rewrite the C code to time itself. You don't have to fit everything
into the UNIX model to get a fair benchmark, you can fit everything into
the Lisp model. Embed the C code in Tclsh or something, if that bothers
you. You're comparing UNIX to the interpreter, not Lisp to C.

(and, no, C doesn't always win. Maclisp was already beating GOOD Fortran
 compilers many years ago, so I wouldn't be surprised to find that your
 CMUCL still beat C on a fair benchmark... but *do* run a fair one, eh?)
-- 

             The Reverend Peter da Silva, ULC, COQO, BOFH.

                  Har du kramat din varg, idag? `-_-'
From: Steven Huang
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cihvb$5o5@hnssysb.hns.com>
In article <················@naggum.no> Erik Naggum <····@naggum.no> writes:
>* Scott Schwartz
>| Just to be fair, you ought to measure the same thing.  In the first
>| case you measure the time to load and initialize the C runtime system.
>| In the second case you already have Lisp fired up, so you omit the
>| measurements for the cost of doing that.  Let's see the time for the
>| a.out generated by ACL or CMUCL, starting at the shell prompt.

>right.  the usual reaction from "C just _has_ to win"-people is to force
>everything to fit its paradigm.  when C loses by a factor of 2, it has to
>be explained away with some nonsense like the above.

I'm probably not the only one who feels you are arrogant.  The concern
over your biased benchmarking is well placed, whether or not your
revised benchmarks show significant differences.

>a Lisp system is like a shell.  in Unix, the shell evaluates your command
>line, including forking and running programs and whatnot.  in Lisp systems,
>Lisp evaluates your expressions.  if your goal in life is to get something
>done efficiently, you measure what is natural in each environment.  if your
>goal in life is to execute a.out files efficiently, never mind what they
>do, then, and only then, should you generate a.out files for all test
>cases.

I agree.  On a given system, if Lisp is already available at no further
cost in time or space to the user, the inherent advantage should not be
factored out.  However, if Lisp on your system must be loaded (like
archaic MS-DOS BASIC interpreters, for instance), factoring out the load
time is unfair.

>but let's see what difference the runtime system _does_ make.  ACL has a
>nice foreign function interface that should even out the costs:
[...]

Since you bothered enough to do it anyway, why bother with the two
previous paragraphs?  I'm not following very closely, but I think
DJ Bernstein provided C solutions that have undergone some algorithmic
optimizations.  Have you examined those?
From: Rainer Joswig
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <joswig-ya023180002801971513560001@news.lavielle.com>
In article <··········@hnssysb.hns.com>, ·······@hns.com (Steven Huang) wrote:

> I agree.  On a given system, if Lisp is already available at no further
> cost in time or space to the user, the inherent advantage should not be
> factored out.  However, if Lisp on your system must be loaded (like
> archaic MS-DOS BASIC interpreters, for instance), factoring out the load
> time is unfair.

Huh? Sure you can measure startup time for some runtime system
and compare it. But this has nothing to do with benchmarking
what a C or Lisp compiler generates from some source code.
I wonder how people are doing their benchmarks. If I want
to know how efficiently a compiler handles recursion
and integer arithmetic, what has that to do with startup
times of whatever???

> 
> >but let's see what difference the runtime system _does_ make.  ACL has a
> >nice foreign function interface that should even out the costs:
> [...]
> 
> Since you bothered enough to do it anyway, why bother with the two
> previous paragraphs?  I'm not following very closely, but I think
> DJ Bernstein provided C solutions that has undergone some algorithmic
> optimizations.  Have you examined those?

Again, the question is not whether you can find better solutions
for computing these results - the question was how fast some
comparable source code for C and Common Lisp is. That is
all. If he implements it differently, with loops or arrays,
that is fine. But what do you expect?  The same recodings
in Common Lisp will also run faster.
That is no big deal.

The real question is why people are still (maybe forced) using
C for high-level programming, even where it is not adequate. Have
the C programmers failed to build an environment
which can be extended with non-low-level languages and libraries
in a seamless way? Doing unsafe arithmetic in real-world software
is just one of those jokes. Everything that is not C-like
and is not modelled the C way is currently harder
than necessary. What is the result? People are forced
to use low-level languages like C with poor abstraction
capabilities, which are really portable assemblers.
Additionally, users are really beta testers.

Rainer Joswig

-- 
http://www.lavielle.com/~joswig/
From: Simon Brooke
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5clllu$a1a@caleddon.intelligent.co.uk>
In article <··········@hnssysb.hns.com>,
	·······@hns.com (Steven Huang) writes:
> In article <················@naggum.no> Erik Naggum <····@naggum.no> writes:
>>* Scott Schwartz
>>| I don't think it does even out the costs.  For one thing, you didn't
>>| measure the cost of (load "lotto.o").
> 
>>you're being stupid on purpose.  you don't include compilation, assembling,
>>and linking time in C programs, so why do you do it for Lisp?  go away.
> 
> Question:  Do you recompile, assemble, and link your C executable every
> time you need to run it?
> 
> Question:  Do you need to load your Lisp program every time you run it?
> (How often is it already in memory?)

Look, I really don't wish to inflame this discussion any further. But
there is a very fundamental misunderstanding here by C proponents as
to how a LISP environment works.

Take, as an example, a PICK user who, for reasons of
economy, now runs his PICK system on a generic UN*X box. You wouldn't
for a moment suppose that he would start his PICK environment for each
small PICK program he wanted to run. This would just be silly. The
PICK user in this scenario obviously starts PICK in place of a login
shell, and continues to run the same PICK session until he logs out.

Ah but, you will say, PICK is an operating system (like UN*X). LISP is
just a programming language (like C). And this is the basic
misunderstanding. LISP is far more like UN*X than it is like C. It is,
for LISP users, a complete environment in which you do all your
work. Historically, for many LISP users, there has been no 'operating
system' (as the C proponents would understand it) at all. The XEROX
machines on which I learned my craft had a primitive boot loader; and
they had LISP. And in LISP you ran your drawing package, your
word-processor, your spreadsheet, your project planning package, your
news and mail readers, all elegantly graphical, all written in LISP,
all happily multitasking together. Other people used Symbolics or LMI
or TI machines which were even nicer.

Of course we (mostly) don't do that now, because cheap UN*X machines
have priced special-purpose LISP machines out of the market (it makes
me weep to see how far my working environment today is behind what I
had ten years ago, but there you go...). But the idea of starting a
complete new LISP image in order to run one program is just as
meaningless to me as the idea of starting a complete new login session
to run one program is to you. Why not count the cost of booting the
machine from cold as well?

Simon

-- 
·····@intelligent.co.uk (Simon Brooke) http://www.intelligent.co.uk/~simon

    There is no Kay but Kay, and Little is his profit.
From: Cyber Surfer
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <854380841snz@wildcard.demon.co.uk>
In article <··············@galapagos.cse.psu.edu>
           ········@galapagos.cse.psu.edu.NO-SPAM "Scott Schwartz" writes:

> In article <················@naggum.no> Erik Naggum <····@naggum.no> writes:
> | $ time lotto
> | in ACL, (time (lotto 35 7)) reports:
> 
> Just to be fair, you ought to measure the same thing.  In the first
> case you measure the time to load and initialize the C runtime system.
> In the second case you already have Lisp fired up, so you omit the
> measurements for the cost of doing that.  Let's see the time for the
> a.out generated by ACL or CMUCL, starting at the shell prompt.

This is silly. A Lisp programmer could just as easily ask how
easily you can write a C program that can extend itself at runtime
(not always a typical use of Lisp, but it can be done), and
how general your solution would be, and how efficient the code
it generated would be.

There's nothing to stop Erik from running the lotto code within
an already running Lisp environment. The "fair comparison" that
you ask for should perhaps compile the C code before running it.
I'm sure that too would add to the run time.

Here's a more realistic example (one that I know Erik will be
familiar with). Imagine that the lotto code is run from a web
server. The C code might well be run as a new process, adding
to the loading of the machine. A web server that uses Lisp could
just call the function.

Of course, there are ways of doing this in C (ISAPI, etc), but
the "process vs language environment" comparison is not very
meaningful, unless you have a specific application in mind that
demands one approach and excludes the other, i.e. a contrived
example that only tells us about the demands of that one example.

I sometimes suspect that both C _and_ Lisp people are guilty
of this mistake. If you're fortunate enough to find a tool that
works well for you, that's great. However, please don't assume
that what works for you will be equally good for everyone else.
Similarly, please don't assume that what fails for you will fail
for everyone else.

Humans and dolphins.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
  Martin Rodgers | Developer and Information Broker | London, UK
       Please remove the "nospam" if you want to email me.
                  "Blow out the candles, HAL."
From: Scott Schwartz
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <8gafpxyaoc.fsf@galapagos.cse.psu.edu>
Erik Naggum <····@naggum.no> writes:
| a Lisp system is like a shell. in Unix, the shell evaluates your command
| line, including forking and running programs and whatnot.  in Lisp systems,
| Lisp evaluates your expressions.  if your goal in life is to get something
| done efficiently, you measure what is natural in each environment.  if your
| goal in life is to execute a.out files efficiently, never mind what they
| do, then, and only then, should you generate a.out files for all test
| cases.

Yes, my goal in life is to execute a.out files efficiently, because
that's the way my OS of choice does things.  If Lisp doesn't easily
accommodate itself to that environment, it might be part of the reason
why Lisp is unpopular.

| but let's see what difference the runtime system _does_ make.  ACL has a
| nice foreign function interface that should even out the costs:

I don't think it does even out the costs.  For one thing, you didn't
measure the cost of (load "lotto.o").
From: Erik Naggum
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3063227791126192@naggum.no>
* Scott Schwartz
| I don't think it does even out the costs.  For one thing, you didn't
| measure the cost of (load "lotto.o").

you're being stupid on purpose.  you don't include compilation, assembling,
and linking time in C programs, so why do you do it for Lisp?  go away.

#\Erik
-- 
1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: Steven Huang
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5ciidv$5oi@hnssysb.hns.com>
In article <················@naggum.no> Erik Naggum <····@naggum.no> writes:
>* Scott Schwartz
>| I don't think it does even out the costs.  For one thing, you didn't
>| measure the cost of (load "lotto.o").

>you're being stupid on purpose.  you don't include compilation, assembling,
>and linking time in C programs, so why do you do it for Lisp?  go away.

Question:  Do you recompile, assemble, and link your C executable every
time you need to run it?

Question:  Do you need to load your Lisp program every time you run it?
(How often is it already in memory?)

An accurate benchmark must first simulate the usual working environment,
and the usual working environment for C is that the program is already
compiled to an executable, sitting perhaps on disk.  I am not familiar
with the usual Lisp environment, so that part is up to you.

Unless your Lisp code is usually already in memory, "load" is not without
cost, and any fair benchmarking must include that cost, however minute
its contributions.  Calling other people stupid is not going to win any
points.  Remember also that enough "negligible" costs ignored can add up
to an inaccurate benchmark.
From: William Clodius
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <32EE5ECE.FF6@lanl.gov>
Steven Huang wrote:
> 
> <snip>
> 
> An accurate benchmark must first simulate the usual working environment,
> and the usual working environment for C is that the program is already
> compiled to an executable, sitting perhaps on disk.  I am not familiar
> with the usual Lisp environment, so that part is up to you.
> 
> <snip>

No!

The majority of people are interested in more than one aspect of a
system, but it is impractical to develop benchmarks that test all
aspects of a system with the same weight of importance that an arbitrary
user would give them.  As a result no one benchmark is sufficient, but
that does not mean that it is inaccurate.  An accurate benchmark must
carefully document what it is benchmarking, and maintain consistency in
its application from one system (environment) to another. The user can
then attempt to give the benchmark its appropriate weight. This can be
difficult, however, if there are no other benchmarks available that test
other aspects of the system.

For programming language development systems there are a number of
aspects of interest and the aspects of interest will vary from user to
user, sometimes from language to language, and environment to
environment. For developers the compile, link, debug cycle time can be
of greater interest than the final code performance. For isolated end
users, the final code performance can be more important. As a result it
is not uncommon for developers to rely on two or more environments, one
well suited for most stages of development and another that is better at
optimizing code. For small applications the load time might be
important; for large ones the run time dominates. In addition, less
quantifiable aspects of the system (robustness, documentation, support,
flexibility) are also of great interest.

Unfortunately most language benchmarks give only execution time, and for
interlanguage comparisons it is often not clear whether the compared
code should be as close as possible or reflect typical language idioms.

>  Unless your Lisp code is usually already in memory, "load" is not without
> cost, and any fair benchmarking must include that cost, however minute
> its contributions. <snip>

This comment makes implicit assumptions about the use of Lisp (i.e.,
because it is not C, it must not be used like C) that you should be
aware of. Your whole message is not truly complaining about an
inaccuracy but about incompleteness, yet the additional information
that you desire is not made fully explicit. For example, your questions
are posed in terms of how Erik uses his environment, not in terms of
how you would use the environment.

Note I have deleted comp.arch from my posting and followups.

-- 

William B. Clodius		Phone: (505)-665-9370
Los Alamos Nat. Lab., NIS-2     FAX: (505)-667-3815
PO Box 1663, MS-C323    	Group office: (505)-667-5776
Los Alamos, NM 87545            Email: ········@lanl.gov
From: Erik Naggum
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3063405313554753@naggum.no>
* Steven Huang
| Question:  Do you recompile, assemble, and link your C executable every
| time you need to run it?

of course not.

| Question:  Do you need to load your Lisp program every time you run it?
| (How often is it already in memory?)

of course not.

| An accurate benchmark must first simulate the usual working environment,

sigh.  what were we measuring?  were we measuring the time from when we put the
phone down after ordering a brand new computer all the way through getting
the number of combinations on a lotto coupon, or were we measuring how fast
a single function ran?  I was measuring the latter.  I think this goes
beyond the bloody obvious.

| I am not familiar with the usual Lisp environment, so that part is up to
| you.

depends on your needs.  if the function is always needed in your Lisp
programs, you link the Lisp image with it (in the case of a .o file), or
you dump a new Lisp image with it (in the case of a Lisp function).  if it
is not very frequently used, but often enough that you want to have it
available, you could autoload it or load a stub function.  if it is rarely
necessary, you typically load the .o or compiled Lisp function before you
start using it.  in either case, it is in memory when you call it, and it
will stay in memory.  you also typically have a Lisp running throughout
your whole session, just like X or xterm or the shell or Emacs.  not all
applications under Unix are as ephemeral as, say, an `ls' command.
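
for illustration, a minimal stub along these lines would do (a sketch,
assuming the compiled code lives in "lotto.o" and defines lotto; this is
not code anyone posted):

    ;; stub: the first call loads the compiled definition and
    ;; re-dispatches to it; later calls go straight to the real lotto.
    (defun lotto (&rest args)
      (load "lotto.o")
      (apply (symbol-function 'lotto) args))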

there is no theoretical upper limit to how much time must have elapsed
before you can determine that some process has started (except for the
beginning of time, which we don't know how far back is).  if I say that I
include the time it took to load a file from disk, you could argue that the
disk needs to reach its calibrated spin speed, which could be a significant
number of seconds.  it gets sillier and sillier, of course, but if you
don't want to limit the timing to the smallest possible timeable event, you
will never, and I mean _never_ in the absolute sense, get two people to
agree on _any_ timing statistics.  which I assume is _your_ point.  mine
was trying to show a useful correspondence between compiled code in two
languages.  I repeat myself, but please go away.

#\Erik
-- 
1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: Steven Huang
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cl85r$8su@hnssysb.hns.com>
In article <················@naggum.no> Erik Naggum <····@naggum.no> writes:
[...]
>there is no theoretical upper limit to how much time must have elapsed
>before you can determine that some process has started (except for the
>beginning of time, which we don't know how far back is).  if I say that I
>include the time it took to load a file from disk, you could argue that the
>disk needs to reach its calibrated spin speed, which could be a significant
>number of seconds.  it gets sillier and sillier, of course, but if you
>don't want to limit the timing to the smallest possible timable event, you
>will never, and I mean _never_ in the absolute sense, get two people to
>agree on _any_ timing statistics.

You are incorrect.  You can easily get consensus on the fairness of a
comparison.  This means that your audience is convinced that your method
is sound and you aren't biased against one of the solutions.  IOW, the
error margin should be small enough to be acceptable, and that goal is
far from "never" reached.

Numerous benchmarks for CPUs are available and accepted by the industry.
"Accepted" doesn't mean that one cannot nitpick and point out small
differences, but it does mean that the industry will take the results
from such acceptable tests and benchmarks as relevant statistics and
these small differences as side notes to those statistics.

It is your duty as a person making an assertion to provide satisfactory
proof that your methods are sound, but your impatience and arrogance
prevent this step from being carried out.

>which I assume is _your_ point.  mine
>was trying to show a useful correspondence between compiled code in two
>languages.  I repeat myself, but please go away.

If this is the only thing you can say to somebody who questions your
method, then I feel sorry for you.  I will go away, and you are free
to feel victorious in an argument against everyone you ordered to go
away.
From: Erik Naggum
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3063474382854268@naggum.no>
* Steven Huang
| You can easily get consensus on the fairness of a comparison.

so what went wrong when you think my comparison is unfair, and I think
you're being silly about it?  sure doesn't look like "easily" to me, but
your answer to this no doubt lies in your attitude to what I say: it is
important to you to say that I'm wrong regardless of what I'm saying, and
the most important thing to you now is how impatient and arrogant I am.

| It is your duty as a person making an assertion to provide satisfactory
| proof that your methods are sound, but your impatience and arrogance
| prevent this step from being carried out.

really?  how come you fail to read answers to your questions?

my experience tells me that a minority of people get much more upset with
what they perceive as "impatience" or "arrogance" than anything else I do
and they will make a hell of a stink about it -- in fact, they get much
more upset that somebody, somewhere in this world can dare to be arrogant
than any other possible ill or evil on the whole planet.  moreover, I hear
complaints about my technical expertise _only_ from people who have already
jumped to the conclusion that I'm arrogant or impatient or worse.  I have
yet to figure out why it is so important to such people to point this out
-- they surely cannot think they have made a significantly new discovery?

if you had not failed to look for more than "proof" of your prejudice, you
would have found answers to your questions.  you ignore them, and I ask you
to go away because I still try to discuss the topic of this thread, not
whether I'm impatient and arrogant.  really, that's common knowledge.

#\Erik
-- 
1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: Dennis Yelle
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <dennisE4qMEA.DK3@netcom.com>
In article <················@naggum.no> Erik Naggum <····@naggum.no> writes:
[...]
>... and I ask you
>to go away because I still try to discuss the topic of this thread, not
>whether I'm impatient and arrogant.  really, that's common knowledge.
>
>#\Erik

Well, now that everyone knows that you are impatient and arrogant,
have you ever thought about changing that?

-- 
······@netcom.com (Dennis Yelle)
"You must do the thing you think you cannot do." -- Eleanor Roosevelt
From: Erik Naggum
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3063483543243747@naggum.no>
* Dennis Yelle
| Well, now that everyone knows that you are impatient and arrogant,
| have you ever thought about changing that?

I think I already answered that in the article you respond to, where I also
wrote:

    my experience tells me that a minority of people get much more upset
    with what they perceive as "impatience" or "arrogance" than anything
    else I do and they will make a hell of a stink about it -- in fact,
    they get much more upset that somebody, somewhere in this world can
    dare to be arrogant than any other possible ill or evil on the whole
    planet.

find something worthwhile to spend your time on, Dennis.  the only person
you can change is yourself.  I suggest you start by learning that purely
personal questions without any possible technical contents have nothing to
do here or anywhere else.  people like you _make_ me impatient and
arrogant.  in case you wish to change something, change the cause, not the
effect.  in other words: shut up.

#\Erik
-- 
1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: Christopher Oliver
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cmh21$s5s@guy-smiley.traverse.com>
Erik Naggum (····@naggum.no) wrote:
: people like you _make_ me impatient and arrogant.

Wrong.  You and only you are responsible for the behavior you show in
a given situation.  Don't dream to shift the blame.  Your manners are
your responsibility alone.

: in case you wish to change something, change the cause, not the
: effect.  in other words: shut up.

This sort of outburst does nothing for your credibility.  I seem to
remember words such as "I may not agree with what you say, but I will
defend to the death your right to say it."

I apologize for this to the general Lisp community.  I genuinely enjoy
reading about Lisp here, but I see so little of it on the Lisp and
Scheme groups; rather than carrying no traffic, these groups are
ballooned with prejudicial tirades.  This is not sound forensic
discourse.

--
Christopher Oliver                     Traverse Communications
Systems Coordinator                    223 Grandview Pkwy, Suite 108
······@traverse.com                    Traverse City, Michigan, 49684
From: Erik Naggum
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3063506692425499@naggum.no>
* Erik Naggum
| people like you _make_ me impatient and arrogant.

* Christopher Oliver
| Wrong.  You and only you are responsible for the behavior you show in
| a given situation.  Don't dream to shift the blame.  Your manners are
| your responsibility alone.

sigh.  I knew some fool had to come forward the second I had posted that
sentence.  _cause_ and _responsibility_ are two fundamentally different
properties of actions.  this is obvious for anyone who pauses to think and
does not immediately jump on the "mommy, he's so _arrogant_!" bandwagon.
just get over it.  not everybody in this world will be to your liking, and
if that includes me, I'm only too happy to return the favor.  and just for
the stupid record, nobody was shifting any _blame_ -- just because you
don't get it doesn't mean it's "wrong" or that you're "right".

* Erik Naggum
| in case you wish to change something, change the cause, not the effect.
| in other words: shut up.

* Christopher Oliver
| This sort of outburst does nothing for your credibility.  I seem to
| remember words such as "I may not agree with what you say, but I will
| defend to the death your right to say it."

look, you fool, if he can _give_ advice, the least he can do is take it.
in your case: at least be annoyed with something that has some semblance of
relevance to the newsgroups you post to, OK?

it's funny when people use that quotation in the _second_ person, not even
realizing it is in the _first_.  but since you're grand-standing about
defending people's right to spout nonsense: how come you don't defend _my_
right to say what I say?  it's pretty damn easy to "defend" something when
you agree with it or you can hide behind some misguided idea of "the little
guy", right?  if you really had anything going for you in that credibility
department, you would defend something you were upset about, not something
you already felt morally obliged to defend from the outset.  think about
it.  and take a look at that fine quotation of yours in the _first_ person.

| This is not sound forensic discourse.

so do something constructive about it!  dammit, if all you can do is
complain that others do something not to your liking, you have only
yourself to blame for not doing something that _would_ be to your liking.

#\Erik
-- 
1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: Scott Schwartz
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <8gafpx6ogz.fsf@roke.cse.psu.edu>
Erik Naggum <····@naggum.no> writes:
| you're being stupid on purpose.  you don't include compilation, assembling,
| and linking time in C programs, so why do you do it for Lisp?  go away.

That's uncalled for.  My point is valid: you want to treat lisp like a
shell, but then ignore the cost of doing so when the overhead is in
question.  In contrast, the time you gave for the a.out version
includes the cost of dynamically loading all the shared libraries.
From: Erik Naggum
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <3063274536533203@naggum.no>
* Scott Schwartz
| My point is valid: you want to treat lisp like a shell, but then ignore
| the cost of doing so when the overhead is in question.  In contrast, the
| time you gave for the a.out version includes the cost of dynamically
| loading all the shared libraries.

your "point" is not only invalid, it is completely ridiculous.

I measured 1 million calls to these functions, and got the timings I did,
and don't imagine for a moment I fork'ed and exec'ed a million times.  I
wanted to measure the time of the function itself, not the time of running
a program.  by running the same function a million times inside a given
process, the time information I get for Lisp can be expressed as

    1000000(binomial+looper)

and for the C as

    1000000(binomial+looper)+startup

to find the time of the looper and the startup costs, I ran both with
(binomial 35 35), which returns immediately from the base case.  those
timings were subtracted from the times I listed.
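
concretely, the harness amounts to something like this (a sketch of the
shape described above, not the verbatim code):

    ;; time a million calls.  (time-binomial 35 35) hits the k = n base
    ;; case immediately, so its time approximates the loop and call
    ;; overhead subtracted from the (time-binomial 35 7) figure.
    (defun time-binomial (n k)
      (time (dotimes (i 1000000)
              (binomial n k))))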

considering the amount of idiocy I get from C users when timing stuff and C
loses or proves to be less than the super-efficient hands-down winner of
all contests, my belief that C damages people's brains in irreparable ways
is indeed solidifying.

#\Erik
-- 
1,3,7-trimethylxanthine -- a basic ingredient in quality software.
From: Cyber Surfer
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <854445681snz@wildcard.demon.co.uk>
In article <··············@roke.cse.psu.edu>
           ········@roke.cse.psu.edu.NO-SPAM "Scott Schwartz" writes:

> Erik Naggum <····@naggum.no> writes:
> | you're being stupid on purpose.  you don't include compilation, assembling,
> | and linking time in C programs, so why do you do it for Lisp?  go away.
> 
> That's uncalled for.  My point is valid: you want to treat lisp like a
> shell, but then ignore the cost of doing so when the overhead is in
> question.  In contrast, the time you gave for the a.out version
> includes the cost of dynamically loading all the shared libraries.

While I agree that Erik's "go away" is uncalled for, I think
that you're ignoring the value of using Lisp as a shell. Could
this be because you can't do this in C? I don't know. Most C
implementations can't do it, while most Lisps can. Go figure.
-- 
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
  Martin Rodgers | Developer and Information Broker | London, UK
       Please remove the "nospam" if you want to email me.
                  "Blow out the candles, HAL."
From: Evan Champion
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <32EBB0CB.12A9@synapse.net>
Can you please stop crossposting this thread to comp.arch.  It has 
absolutely nothing to do with computer architecture.  Thanks.

Evan
From: Andreas Eder
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <m3ohe72rk1.fsf@laphroig.mch.sni.de>
··@goanna.cs.rmit.edu.au (Richard A. O'Keefe) writes:

> There are at least four possible treatments of integers in a programming
> language.
> 
> (1) Support integers of whatever size turns out to be necessary,
>     to the extent that time and memory limits permit.  (Lisp.)
> 
> (2) Allow the programmer to _declare_ integers of any size, and support
>     the size that the programmer allowed.  (Pascal and Ada have the
>     notational resources to permit this, but there is a tradition of
>     arbitrarily limiting the supported sizes.  I don't see the point
>     in this.  Some Pascal systems have a special Integer[n] syntax for    
>     larger integers.  It _can_ be done.)
> 
> (3) Allow the programmer to declare integers that fit into machine
>     registers, and check for overflow at run time.  The MIPS machines
>     make this a very efficient approach.  (Traditional Pascal.)
> 
> Those three approaches support application programming.
> 
> (4) Force the programmer to use sizes that are convenient for the
>     machine, and check nothing at run time.  (The C _standard_ allows
>     overflow checks for signed integer arithmetic; the C _tradition_
>     doesn't use them.  There have however been C systems that got it
>     right.)
> 

Lisp does not only follow (1); you can also do (2):
e.g. (integer 2 15) is a valid type, describing integers between 2 and 15.
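
A sketch of how such a declaration can be used (SMALL-INT is an invented
name, and whether the range is actually enforced depends on the compiler
and the safety settings):

    ;; treatment (2): a declared integer subrange.  A compiler may pick
    ;; a fixnum representation for it; CHECK-TYPE enforces the range.
    (deftype small-int () '(integer 2 15))

    (defun square-small (x)
      (check-type x small-int)   ; signals an error outside 2..15
      (* x x))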
From: David Gay
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <s71bua6ao00.fsf@barnowl.CS.Berkeley.EDU>
In article <·············@smarts.com> Jerry Leichter <········@smarts.com> writes:
   |			       (This is especially so if you do not 
   | insist that integer overflow exceptions be precise, and I note that 
   | high performance machines prefer imprecise floating point exceptions,  
   | and we can live with _that_.)

   We understand how to deal with this much better than we once did. 
   MIPS's FP architecture is really clever, avoiding the problem by testing
   the inputs and stalling subsequent operations in the pipeline if an
   exception *could* occur.  Alpha takes the "pure RISC" approach:  It's
   possible to generate code in such a way that, if an exception *does*
   occur, software can work backwards to determine exactly where.  (This
   involves some cost-free marker bits that the hardware ignores, and some
   extra state flush instructions that *do* slow things down.)  

In fact, on pure floating-point code (i.e. essentially no integer ops), I
measured a slow-down by a factor of 4 for adds (21164). The DEC compilers
apparently insert one of those flush instructions after every
floating-point op (i.e. they aren't very aggressive). I then tried to get
better performance using hand-coded assembly, and following the rules in
the Alpha architecture manual: if I understand correctly, you can insert
one trap per basic block as long as you never clobber any input registers.
This worked great for reducing the slowdown, but produced incorrect results
in one case. Either I didn't quite understand the rules, or the emulation
code hasn't been sufficiently tested (which considering that the compiler
only generates the simple cases may not be very surprising).

I can post the code in question if anyone is interested.

   It's your
   choice when you generate the code:  Maximum speed with imprecise
   exceptions (you *do* still get them, of course), or some slowdown
   with precise exceptions.

It was my understanding that the latest Alpha (21264) had precise exceptions?

-- 
David Gay - Yet Another Starving Grad Student
····@cs.berkeley.edu
From: Simon Brooke
Subject: It's the compiler, stupid (was Re: Theory #51 (superior(?) programming languages))
Date: 
Message-ID: <5ctj0m$plk@caleddon.intelligent.co.uk>
(Note: the new title of this thread is a humorous reference to Bill
Clinton's campaign slogan, and is not intended to insult any
participant in this discussion. OK?)

In article <················@news.aa.net>,
	·@_._ (Chris Pirih, proverbs at wolfenet dot com) writes:
> In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:
>| $ time lotto
>| 4.42user 0.04system 0:04.50elapsed 99%CPU
>| 
>| in CMUCL 17f, (time (lotto 35 7)) reports:
>|   2.15 seconds of real time
> 
> Just for chuckles, I tried the C and Lisp code with gcc and gcl, 
> respectively, on a P6/200 running Linux.  GCC: 0.30 seconds; 
> GCL: 56.9 seconds.  GCL doesn't seem to understand the (debug) 
> declaration, though it did understand the other 
> optimizations.  Maybe that's why it is so slow?

OK, Linux 2.0.28, twin processor P6/200, gcc 2.7.2 vs Franz Allegro
4.3, compilation options as discussed upthread:

[·····@merrick lisp]$ time ./lotto
6724520
0.00user 0.30system 0:00.30elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k

[1] USER(27): (time (lotto 35 7))
; cpu time (non-gc) 280 msec user, 0 msec system
; cpu time (gc)     0 msec user, 0 msec system
; cpu time (total)  280 msec user, 0 msec system
; real time  280 msec
; space allocation:
;  2 cons cells, 0 symbols, 32 other bytes
6724520

Allegro beats GCC by 2 hundredths of a second. Which is another way of
saying it's all down to the quality of the compiler, and different
compilers are good at different things. For example, I often use
(factorial 1000) as a quick, lazy first test of a LISP system I am
being introduced to. A Xerox Daybreak could do (factorial 1000) in 22
minutes odd. My Acorn A310 running Cambridge LISP could do the same
computation in about eleven seconds. Was the A310 over a hundred times
faster than the Daybreak? Nothing like it. InterLISP was just
phenomenally bad at bignums, while Cambridge LISP had been highly
tuned to run things like REDUCE (ACL on the i686 does this computation
in five hundredths of a second).
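
For reference, the obvious definition of the test (a sketch; the code
actually run on those machines is not shown in this thread):

    ;; naive factorial: (factorial 1000) is a bignum of some 2568
    ;; digits, so the run time is dominated by bignum multiplication.
    (defun factorial (n)
      (if (<= n 1)
          1
          (* n (factorial (- n 1)))))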

-- 
·····@intelligent.co.uk (Simon Brooke) http://www.intelligent.co.uk/~simon

	Due to financial constraints, the light at the end of the tunnel
	has been switched off.
From: Rainer Joswig
Subject: Re: It's the compiler, stupid (was Re: Theory #51 (superior(?) programming languages))
Date: 
Message-ID: <joswig-ya023180000202970031450001@news.lavielle.com>
In article <··········@caleddon.intelligent.co.uk>, ·····@intelligent.co.uk
(Simon Brooke) wrote:

> > Just for chuckles, I tried the C and Lisp code with gcc and gcl, 
> > respectively, on a P6/200 running Linux.  GCC: 0.30 seconds; 
> > GCL: 56.9 seconds.  GCL doesn't seem to understand the (debug) 
> > declaration, though it did understand the other 
> > optimizations.  Maybe that's why it is so slow?

I still don't believe the GCL numbers. Has anybody verified it?
Was the function really compiled? What did the code
look like? Would it need additional declarations?

-- 
http://www.lavielle.com/~joswig/
From: Simon Brooke
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5cv853$4f9@caleddon.intelligent.co.uk>
In article <··············@laphroig.mch.sni.de>,
	Andreas Eder <···@laphroig.mch.sni.de> writes:
> ··@goanna.cs.rmit.edu.au (Richard A. O'Keefe) writes:
> 
>> There are at least four possible treatments of integers in a programming
>> language.
>> 
>> (1) Support integers of whatever size turns out to be necessary,
>>     to the extent that time and memory limits permit.  (Lisp.)
>> 
>> (2) Allow the programmer to _declare_ integers of any size, and support
>>     the size that the programmer allowed.  (Pascal and Ada have the
>>     notational resources to permit this, but there is a tradition of
>>     arbitrarily limiting the supported sizes.  I don't see the point
>>     in this.  Some Pascal systems have a special Integer[n] syntax for    
>>     larger integers.  It _can_ be done.)
> 
> Lisp does not only follow (1), you can only do (2):
> e.g. (integer 2 15) is a valid type, describing integers between 2 and 15.

Oh? So it's only in my imagination that I can calculate the factorial
of one thousand and return the result as an integer? It looks like an
integer to me. Or do you know something about integers that I don't?

Yes, I do know it's a bignum. A bignum is just a bigger integer. An
integer is just a member of the set of whole numbers between -infinity
and infinity.

-- 
·····@intelligent.co.uk (Simon Brooke) http://www.intelligent.co.uk/~simon

	If you really want a Tory government for ever, keep on voting
	Labour. If you want a Labour government soon, vote SNP just once.
From: Gareth McCaughan
Subject: Re: It's the compiler, stupid (was Re: Theory #51 (superior(?) programming languages))
Date: 
Message-ID: <va4vi8amqzy.fsf@jay.dpmms.cam.ac.uk>
Rainer Joswig wrote:

> > > Just for chuckles, I tried the C and Lisp code with gcc and gcl, 
> > > respectively, on a P6/200 running Linux.  GCC: 0.30 seconds; 
> > > GCL: 56.9 seconds.  GCL doesn't seem to understand the (debug) 
> > > declaration, though it did understand the other 
> > > optimizations.  Maybe that's why it is so slow?
> 
> I still don't believe the GCL numbers. Has anybody verified it?
> Was the function really compiled? What did the code
> look like? Would it need additional declarations?

I have just un-verified it. :-)

On a not-very-fast SPARCstation,

GCL said
    >(time (lotto 35 7))
    real time : 8.367 secs
    run time  : 7.883 secs
    6724520

CMUCL said
    * (time (lotto 35 7))
    Compiling LAMBDA NIL: 
    Compiling Top-Level Form: 
     
    Evaluation took:
      1.19 seconds of real time
      1.19 seconds of user run time
      0.0 seconds of system run time
      0 page faults and
      0 bytes consed.
    6724520

GCC (-O2) said
    % time a.out
    6724520
    --- 2.660s,0.020s user/kernel 0:02.67 elapsed  0+88k (sh/unsh) 0pf 0sw

So, CMU CL is about twice as fast as gcc -O2, which is about three times
the speed of GCL.

Relevant trivia:
  - GCC version 2.7.2; -O2 produced code as fast as any other
    optimisation setup I tried.
  - GCL version 1.1.
  - CMUCL version 17f.

I conjecture that the person claiming GCL was 190 times faster than GCC
probably didn't realise that typing "(compile 'lotto)" might make a
difference...

This does show that the CMU compiler is a lot better than GCL's. But
we knew *that* already. :-)

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Gareth McCaughan
Subject: Re: It's the compiler, stupid (was Re: Theory #51 (superior(?) programming languages))
Date: 
Message-ID: <va4zpxkl9x1.fsf@jay.dpmms.cam.ac.uk>
I wrote:

> GCL said
>     >(time (lotto 35 7))
>     real time : 8.367 secs
>     run time  : 7.883 secs
>     6724520

Someone has pointed out to me by e-mail that I haven't specified
well enough what GCL did, because of course it uses a C compiler
to do its compilation.

It turns out that it's using gcc -O. Here, for reference, is the
C file it produces (with a huge amount of preliminary stuff --
#defines and such -- omitted):

    /*      local entry for function LOTTO  */
     
    static int LI1(V3,V4)
     
    register int V3;register int V4;
    {        VMB1 VMS1 VMV1
    TTL:;
            if(!((V4)==(1))){
            goto T2;}
            {int V5 = V3;
            VMR1(V5)}
    T2:;
            if(!((V4)==(V3))){
            goto T5;}
            {int V6 = 1;
            VMR1(V6)}
    T5:;
            { save_avma;
            V7 = stoi((*(LnkLI0))((V3)-1,(V4)-1));
            V8 = stoi((*(LnkLI0))((V3)-1,V4));
            {int V9 = fix(make_integer(addii(V7,V8)));
            restore_avma; 
            VMR1(V9)}restore_avma;}
    }
    static int  LnkTLI0(va_alist)va_dcl{va_list ap;va_start(ap);
    return(int )call_proc(VV[0],&LnkLI0,20738,ap);} /* LOTTO */

Perhaps there are incantations I could have uttered to make it
use fixnums...
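
For what it's worth, the Lisp source is easy to reconstruct from the
generated C above (base cases K = 1 and K = N, then Pascal's rule); the
declarations here are a guess at what such incantations might look like,
not the original source:

    ;; reconstruction of LOTTO; with the FIXNUM declarations and low
    ;; safety, a compiler is entitled to use untagged machine integers.
    (defun lotto (n k)
      (declare (fixnum n k)
               (optimize (speed 3) (safety 0)))
      (cond ((= k 1) n)
            ((= k n) 1)
            (t (the fixnum (+ (lotto (1- n) (1- k))
                              (lotto (1- n) k))))))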

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Marty Hall
Subject: Re: It's the compiler, stupid (was Re: Theory #51 (superior(?) programming languages))
Date: 
Message-ID: <x57mknc56r.fsf@rsi.jhuapl.edu>
Gareth McCaughan <·····@pmms.cam.ac.uk> writes:

> I conjecture that the person claiming GCL was 190 times faster than GCC
                                                          ^^^^^^
> probably didn't realise that typing "(compile 'lotto)" might make a
> difference...

If GCL was this much faster before compiling, just think how much
faster it will be after.  :-)
					- Marty

(proclaim '(inline skates))
Lisp Resources: <http://www.apl.jhu.edu/~hall/lisp.html>
From: Marco Antoniotti
Subject: Re: It's the compiler, stupid (was Re: Theory #51 (superior(?) programming languages))
Date: 
Message-ID: <s08n2tjmbz3.fsf@crawdad.ICSI.Berkeley.EDU>
Can somebody post the 'lotto' function?

Thanks
-- 
Marco Antoniotti - Resistente Umano
===============================================================================
International Computer Science Institute	| ·······@icsi.berkeley.edu
1947 Center STR, Suite 600			| tel. +1 (510) 643 9153
Berkeley, CA, 94704-1198, USA			|      +1 (510) 642 4274 x149
===============================================================================
	...it is simplicity that is difficult to make.
	...e` la semplicita` che e` difficile a farsi.
				Bertholdt Brecht
From: ozan s. yigit
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <OZ.97Feb5120936@nexus.yorku.ca>
	... As such, I am philosophically opposed to the sort of "tweaking"
   that Mr. Bernstein engages in.

unraveling the tail recursion is not much of a "tweaking" and certainly
not a machine-dependent tweaking. if it is just as readable, what is the
problem? 

oz
---
When a passenger of the foot heave in sight, tootle the horn. Trumpet
him melodiously at first, but if he still obstacles your passage, then
tootle him with vigor.
                        -- warning to motorists in tokyo
From: Jonathan Guthrie
Subject: Re: Theory #51 (superior(?) programming languages)
Date: 
Message-ID: <5dddda$q8d$1@news.hal-pc.org>
In comp.lang.scheme ozan s. yigit <··@nexus.yorku.ca> wrote:

> 	... As such, I am philosophically opposed to the sort of "tweaking"
>    that Mr. Bernstein engages in.

> unraveling the tail recursion is not much of a "tweaking" and certainly
> not a machine-dependent tweaking. if it is just as readable, what is the
> problem? 

Although I also use the term "tweaking" to refer to those cases where the
programmer makes semantically null changes to the program source code in
an attempt to fool the compiler into producing better code, what makes the
changes in question "tweaking" is that the resulting code is not
structured the way that the algorithm is structured.

Of course, I find tweaking of the former category to be more troublesome
than the latter category, but in an ideal world no such changes would be
necessary.  The compiler would do all the appropriate "mechanical"
transformations to the algorithm.

-- 
Jonathan Guthrie (········@brokersys.com)
Information Broker Systems   +281-895-8101   http://www.brokersys.com/
12703 Veterans Memorial #106, Houston, TX  77014, USA

We sell Internet access and commercial Web space.  We also are general
network consultants in the greater Houston area.
From: Gareth McCaughan
Subject: Re: It's the compiler, stupid (was Re: Theory #51 (superior(?) programming languages))
Date: 
Message-ID: <va4zpxi15mh.fsf@jay.dpmms.cam.ac.uk>
Marty Hall wrote:

> > I conjecture that the person claiming GCL was 190 times faster than GCC
>                                                           ^^^^^^
...
> If GCL was this much faster before compiling, just think how much
> faster it will be after.  :-)

Oooooops.

Incidentally, someone who knows much more about GCL than I do claims
in e-mail that suitable incantations can persuade it to use only fixnums
throughout. Having put in what seem to be suitable incantations
(and verified that the resulting C code seems to be using fixnums
everywhere), I get better results: the times are almost exactly
the same as those for compiled C.

CMUCL still wins by a factor of 2 or so, though.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.