In article <·······················@10.0.2.1>,
······@netcom.com (Henry Baker) writes:
> In article <··········@fido.asd.sgi.com>, ·······@engr.sgi.com (Mike
> McDonald) wrote:
>> My first introduction to lisp was on an IBM 360-XXX in batch mode! I
> couldn't
>> figure out why anyone would want to use lisp as a result. Later, when we got a
>> Vax 11/780 running BSD, I got hooked on Franz Lisp. An interactive environment
>> for lisp makes a big difference.
>>
>> Mike McDonald
>> ·······@engr.sgi.com
>
> I'm no fan of batch processing, but _even in batch processing mode_, Lisp
> was still more productive than other batch processing languages I've used
> (Fortran, PL/I, assembly, Pascal, etc.).
>
> (Yes, Virginia, there was life before emacs, just like there was life
> before indoor plumbing. :-)
Remember, this was in one of those silly "Introduction to AI" classes.
Running Eliza in batch mode isn't very exciting. Now that I'm more knowledgeable
about lisp, I can see where it's useful even in batch mode. But I couldn't see
that back then, from a first introduction.
Mike McDonald
·······@engr.sgi.com
From: Erik Naggum
Subject: Re: Why lisp failed in the marketplace
Date:
Message-ID: <3066588599082020@naggum.no>
* Mike McDonald
| Remember, this was in one of those silly "Introduction to AI" classes.
| Running Eliza in batch mode isn't very exciting. Now that I'm more
| knowledgeable about lisp, I can see where it's useful even in batch
| mode. But I couldn't see that back then, from a first introduction.
I wonder if the purported failure in the marketplace is due _only_ to bad
first introductions and resulting prejudice. Once people get over their
first introductions (or the bad (formal) introductions aren't their first),
they seem to stay with Lisp.
#\Erik
--
if you think big enough, you never have to do it
In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:
>I wonder if the purported failure in the marketplace is due _only_ to bad
>first introductions and resulting prejudice. Once people get over their
>first introductions (or the bad (formal) introductions aren't their first),
>they seem to stay with Lisp.
I think that programmers from the hacker culture are also more comfortable
with languages where they feel that they understand the underlying runtime
model and can predict optimizations based on their knowledge of assembly
language. Newer Algol-derived languages make these forms of prediction
harder and harder, so I am proposing this as a historical phenomenon and
not at all as a valid current argument against functional languages.
Paul Prescod
In article <··········@undergrad.math.uwaterloo.ca>,
········@csclub.uwaterloo.ca (Paul Prescod) wrote:
> In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:
> >I wonder if the purported failure in the marketplace is due _only_ to bad
> >first introductions and resulting prejudice. Once people get over their
> >first introductions (or the bad (formal) introductions aren't their first),
> >they seem to stay with Lisp.
>
> I think that programmers from the hacker culture are also more comfortable
> with languages where they feel that they understand the underlying runtime
> model and can predict optimizations based on their knowledge of assembly
> language. Newer Algol-derived languages make these forms of prediction
> harder and harder, so I am proposing this as a historical phenomenon and
> not at all as a valid current argument against functional languages.
I no longer trust what manuals and customer service people tell me. When
approaching a new language/compiler, I spend a day (or much more) looking at the
actual code generated. It's amazing how good and how bad it can be.
Unfortunately, you have to go through the same exercise _each time a new
release comes out_, so it's pretty exhausting. But if you're building
something where performance and/or correctness is really important, you have
little choice. E.g., avionics systems written in Ada typically have their
assembly code checked out rather completely, and the compiler is kept constant
during the process. If a compiler bug is found, one has no choice but to
program around it, because upgrading to a new release would require all the
regression tests to be redone.
It is dangerous to trust what even the compiler writers themselves think, since
the interactions of the various types of optimizations tax their own
intuitions.
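[A concrete illustration of Baker's practice, in Common Lisp: the standard
DISASSEMBLE function prints the code the compiler actually emitted. The
function and declarations below are a hypothetical sketch, not taken from
the original post.]

```lisp
;; Hypothetical example: compile a small function and inspect the
;; code the compiler actually generates, rather than trusting what
;; the manual says about its optimizations.
(defun sum-floats (v)
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 0)))
  (let ((s 0d0))
    (declare (type double-float s))
    (dotimes (i (length v) s)
      (incf s (aref v i)))))

(compile 'sum-floats)        ; ensure it is compiled, not interpreted
(disassemble 'sum-floats)    ; print the generated machine code
```

Whether that inner loop really runs on unboxed doubles is exactly the kind
of question only the disassembly answers, and the answer can change with
each new compiler release.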
Thanks for an interesting article. I don't understand one thing, though:
In article <·······················@10.0.2.1>,
Henry Baker <······@netcom.com> wrote:
>I no longer trust what manuals and customer service people tell me. When
>approaching a new language/compiler, I spend a day (or much more) looking at the
>actual code generated. It's amazing how good and how bad it can be.
>Unfortunately, you have to go through the same exercise _each time a new
>release comes out_, so it's pretty exhausting. But if you're building
>something where performance and/or correctness is really important, you have
>little choice.
...
>It is dangerous to trust what even the compiler writers themselves think, since
>the interactions of the various types of optimizations tax their own
>intuitions.
I don't understand how a day (or much more) of reverse engineering can give
you a better understanding of the compiler than the compiler writer.
Proving a compiler "correct" does not seem to be a tractable problem
(theoretically or practically), so the best you can get is a warm fuzzy
that the compiler writer knows what he or she is doing.
Paul Prescod
In article <··········@undergrad.math.uwaterloo.ca>,
········@csclub.uwaterloo.ca (Paul Prescod) wrote:
> Thanks for an interesting article. I don't understand one thing, though:
>
> In article <·······················@10.0.2.1>,
> Henry Baker <······@netcom.com> wrote:
> >I no longer trust what manuals and customer service people tell me. When
> >approaching a new language/compiler, I spend a day (or much more) looking at the
> >actual code generated. It's amazing how good and how bad it can be.
> >Unfortunately, you have to go through the same exercise _each time a new
> >release comes out_, so it's pretty exhausting. But if you're building
> >something where performance and/or correctness is really important, you have
> >little choice.
>
> ...
>
> >It is dangerous to trust what even the compiler writers themselves think, since
> >the interactions of the various types of optimizations tax their own
> >intuitions.
>
> I don't understand how a day (or much more) of reverse engineering can give
> you a better understanding of the compiler than the compiler writer.
> Proving a compiler "correct" does not seem to be a tractable problem
> (theoretically or practically), so the best you can get is a warm fuzzy
> that the compiler writer knows what he or she is doing.
Compiler writers have explained to me how their compiler worked, and then
when I showed them some code that it produced, they were amazed at some of
the things that it did. Anyone who has worked with non-trivial AI-type
programs has had the same experience.
The particular issue that I was interested in was whether a particular
programming style resulted in 'good'/'efficient' code. Although a compiler
writer can describe the general types of optimizations, it is often difficult
to predict what will actually happen in a particular case. Since making this
style produce efficient code was particularly important to me, and I was willing
to change my style to make it efficient if that were necessary, I had to check
the actual output of the compiler. I could also run timings, but you can get
a lot more information by looking at the generated code.
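[The style check Baker describes can be sketched like this; both variants
are hypothetical stand-ins for the "programming style" in question.]

```lisp
;; Two stylistic variants of the same computation.  The question is
;; whether the compiler turns the high-level style into code as good
;; as the hand-tuned loop -- timings hint at it, the disassembly
;; settles it.
(defun dot-generic (a b)
  (reduce #'+ (map 'vector #'* a b)))

(defun dot-declared (a b)
  (declare (type (simple-array double-float (*)) a b)
           (optimize (speed 3)))
  (let ((s 0d0))
    (declare (type double-float s))
    (dotimes (i (length a) s)
      (incf s (* (aref a i) (aref b i))))))

(disassemble 'dot-generic)   ; does it cons an intermediate vector?
(disassemble 'dot-declared)  ; does it keep the sum in a register?
```

TIME gives wall-clock confirmation, but as Baker notes, the generated code
carries far more information than a timing run.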
From: William D Clinger
Subject: Tractability of proving compiler correctness
Date:
Message-ID: <33249420.46C9@ccs.neu.edu>
········@csclub.uwaterloo.ca (Paul Prescod) wrote:
> Proving a compiler "correct" does not seem to be a tractable problem
> (theoretically or practically)...
For the record: The algorithms that were used for code generation
by the MacScheme compiler (at optimization level 0) and by the
VLISP compiler were proved correct [1,2,3,4]. The compiler code
that implemented those algorithms was not proved correct, but the
code was a pretty straightforward transcription of the algorithms.
No bugs were ever found in the portion of the MacScheme compiler
that corresponded to the proof. As I recall, fewer than 20 bugs
were found in other parts of that compiler. For a compiler, this
is a small number of bugs.
Nowadays optimizations such as assignment elimination, lambda
lifting, and flow analyses can also be proved correct, provided of
course that they really are correct! The proofs aren't necessarily
easy, but they are certainly tractable.
It is unfortunate that the correctness of a runtime system,
standard library, and user interface cannot be proved as easily
as the correctness of a compiler.
Will
--------
References.
[1] William Clinger. The Scheme 311 compiler: an exercise in
denotational semantics. In 1984 ACM Symposium on Lisp and
Functional Programming, pages 356-364.
[2] Joshua D Guttman, John D Ramsdell, and Mitchell Wand. VLISP:
a verified implementation of Scheme. In Journal of Lisp and
Symbolic Computation 8(1/2), March 1995, pages 5-32.
[3] Joshua D Guttman, John D Ramsdell, and Vipin Swarup. The
VLISP verified Scheme system. Ibid, pages 33-110.
[4] Dino P Oliva, John D Ramsdell, and Mitchell Wand. The VLISP
verified PreScheme compiler. Ibid, pages 111-182.
In article <·············@ccs.neu.edu>,
William D Clinger <····@ccs.neu.edu> wrote:
>········@csclub.uwaterloo.ca (Paul Prescod) wrote:
>> Proving a compiler "correct" does not seem to be a tractable problem
>> (theoretically or practically)...
>
>For the record: The algorithms that were used for code generation
>by the MacScheme compiler (at optimization level 0) and by the
>VLISP compiler were proved correct [1,2,3,4]. The compiler code
>that implemented those algorithms was not proved correct, but the
>code was a pretty straightforward transcription of the algorithms.
I was thinking more of an arbitrary compiler instead of one created
according to particular algorithms. Still, your experience is very
interesting: how much does designing according to provably correct
algorithms restrict your flexibility for optimization? Is it
feasible for real-world code, or would the loss of optimization
render it infeasible?
Paul Prescod
From: Anthony Shipman
Subject: Re: Tractability of proving compiler correctness
Date:
Message-ID: <5g58vn$b60@ephor.tusc.com.au>
Have a look at
http://public.logica.com/~stepneys/bib/ss/hic.htm
--
Anthony Shipman "You've got to be taught before it's too late,
TUSC Computer Systems Pty Ltd Before you are six or seven or eight,
To hate all the people your relatives hate,
E-mail: ···@tusc.com.au You've got to be carefully taught." R&H
In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:
> * Mike McDonald
> | Remember, this was in one of those silly "Introduction to AI" classes.
> | Running Eliza in batch mode isn't very exciting. Now that I'm more
> | knowledgeable about lisp, I can see where it's useful even in batch
> | mode. But I couldn't see that back then, from a first introduction.
>
> I wonder if the purported failure in the marketplace is due _only_ to bad
> first introductions and resulting prejudice. Once people get over their
> first introductions (or the bad (formal) introductions aren't their first),
> they seem to stay with Lisp.
There may be something to this. In my experience of teaching Lisp (and APL)
to non-CS undergrads, they had no trouble with Lisp at all, and were able to
do very substantial projects in a 1-semester course. The CS students -- due
to the baggage that they carried into the class from BASIC, C, PL/I, Cobol,
Pascal, etc. -- actually had a much harder time coming to grips with Lisp.
The CS people kept trying to use loops instead of recursion, and worried
very inappropriately about 'efficiency' instead of just getting on with
the job of programming something.
Then, of course, it is conceivable that the CS students may not have been as
bright -- the others were prelaw, premed, physics, math, etc.
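[The loop-versus-recursion contrast above, in miniature; this is a
hypothetical example, not from the course itself.]

```lisp
;; The looping style the CS students reached for first:
(defun sum-list-loop (lst)
  (let ((total 0))
    (dolist (x lst total)
      (incf total x))))

;; The recursive style the non-CS students picked up without baggage:
(defun sum-list-rec (lst)
  (if (null lst)
      0
      (+ (first lst) (sum-list-rec (rest lst)))))
```

Both compute the same sum; the recursive version simply mirrors the
inductive definition of a list, which is why students without prior
habits found it natural.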