From: David Steuber
Subject: TERPRI
Date: 
Message-ID: <m2ad2h2fvi.fsf@david-steuber.com>
I'm reading PAIP and I see some code with a call to TERPRI.  Being a
noob, the first thought that jumped into my head was something I
shouldn't say in public.  The cleaned up version would be something
like, "I wonder what that does?"  So I looked TERPRI up in the CLHS.

Who thought that TERPRI was mnemonic for 'print "\n";' and why?

Clearly there is some interesting computer history lore that I am
missing out on.  Would anyone care to enlighten me?

-- 
Those who do not remember the history of Lisp are doomed to repeat it,
badly.

> (dwim x)
NIL

From: Barry Margolin
Subject: Re: TERPRI
Date: 
Message-ID: <barmar-4F229D.06555016032004@comcast.ash.giganews.com>
In article <··············@david-steuber.com>,
 David Steuber <·············@verizon.net> wrote:

> I'm reading PAIP and I see some code with a call to TERPRI.  Being a
> noob, the first thought that jumped into my head was something I
> shouldn't say in public.  The cleaned up version would be something
> like, "I wonder what that does?"  So I looked TERPRI up in the CLHS.
> 
> Who thought that TERPRI was mnemonic for 'print "\n";' and why?
> 
> Clearly there is some interesting computer history lore that I am
> missing out on.  Would anyone care to enlighten me?

I think it's short for TERminate PRInting.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: David Sletten
Subject: Re: TERPRI
Date: 
Message-ID: <lDC5c.7370$Xd1.6342@twister.socal.rr.com>
David Steuber wrote:

> I'm reading PAIP and I see some code with a call to TERPRI.  Being a
> noob, the first thought that jumped into my head was something I
> shouldn't say in public.  The cleaned up version would be something
> like, "I wonder what that does?"  So I looked TERPRI up in the CLHS.
> 
> Who thought that TERPRI was mnemonic for 'print "\n";' and why?
> 
> Clearly there is some interesting computer history lore that I am
> missing out on.  Would anyone care to enlighten me?
> 
I thought it stood for TERminator PRIority, which is usually bound to 
'((connor john) (connor sara))...
From: Matthias
Subject: Re: TERPRI
Date: 
Message-ID: <36w7jxl9fb6.fsf@hundertwasser.ti.uni-mannheim.de>
David Steuber <·············@verizon.net> writes:

> Who thought that TERPRI was mnemonic for 'print "\n";' and why?

It probably has something to do with TERminal PRInting.

Your printer will usually wait for a "\n" before it prints the current
line.

> Those who do not remember the history of Lisp are doomed to repeat it,
> badly.

Fortunately, the complete history of Lisp is available in condensed
form in CL...
From: Kenny Tilton
Subject: Re: TERPRI
Date: 
Message-ID: <D_F5c.52063$Wo2.34104@twister.nyc.rr.com>
Matthias wrote:

> 
> Fortunately, the complete history of Lisp is available in condensed
> form in CL...

If you prefer the long, painful, unabridged version, it is being 
replayed now over at comp.lang.python and comp.lang.java.

kt


-- 
http://tilton-technology.com

Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film

Your Project Here! http://alu.cliki.net/Industry%20Application
From: Alex Mizrahi
Subject: Re: TERPRI
Date: 
Message-ID: <c38003$23tbvf$1@ID-177567.news.uni-berlin.de>
(message (Hello 'David)
(you :wrote  :on '(Tue, 16 Mar 2004 11:25:21 GMT))
(

 DS> Clearly there is some interesting computer history lore that I am
 DS> missing out on.  Would anyone care to enlighten me?

such a strange name was interesting to me too.. so i did a google search and
found a result just after the CLHS:

http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?terpri
--
terpri
/ter'pree/ TERminate PRInt line. [LISP 1.5 and later, MacLISP] To output a
newline. Still used in Common LISP. On some early operating systems and
hardware, no characters would be printed until a complete line was formed,
so this operation terminated the line and emitted the output.

--

)
(With-best-regards '(Alex Mizrahi) :aka 'killer_storm)
(prin1 "Jane dates only Lisp programmers"))
From: David Steuber
Subject: Re: TERPRI
Date: 
Message-ID: <m2hdwoi8qm.fsf@david-steuber.com>
"Alex Mizrahi" <·········@xhotmail.com> writes:

> such a strange name was interesting to me too.. so i did a google search and
> found a result just after the CLHS:
> 
> http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?terpri
> --
> terpri
> /ter'pree/ TERminate PRInt line. [LISP 1.5 and later, MacLISP] To output a
> newline. Still used in Common LISP. On some early operating systems and
> hardware, no characters would be printed until a complete line was formed,
> so this operation terminated the line and emitted the output.

Sometimes I think people take saving a few keystrokes too far.

-- 
Those who do not remember the history of Lisp are doomed to repeat it,
badly.

> (dwim x)
NIL
From: Sashank Varma
Subject: Re: TERPRI
Date: 
Message-ID: <none-4C7273.20343516032004@news.vanderbilt.edu>
In article <··············@david-steuber.com>,
 David Steuber <·············@verizon.net> wrote:

> "Alex Mizrahi" <·········@xhotmail.com> writes:
>
> > terpri
> > /ter'pree/ TERminate PRInt line. [LISP 1.5 and later, MacLISP] To output a
> > newline. Still used in Common LISP. On some early operating systems and
> > hardware, no characters would be printed until a complete line was formed,
> > so this operation terminated the line and emitted the output.
> 
> Sometimes I think people take saving a few keystrokes too far.

Some thoughts:

No one tops The Artist Formerly Known as Prince in this regard.

Perhaps typing (terminate-print-line) ate up too many of the 80
characters available per line, and thus led to a far greater
sin: non-standard indenting.

I hardly ever use (terpri), preferring instead (format t "~%").

On a related note, I was once involved in a project called "Whole
Day Whole Year".  The project leader liked to abbreviate it WDWY.
I found this insanely funny because the "longer" name had fewer
syllables than the "shorter" acronym.
From: Christopher C. Stacy
Subject: Re: TERPRI
Date: 
Message-ID: <uk71kvyj8.fsf@news.dtpq.com>
>>>>> On Tue, 16 Mar 2004 20:34:35 -0600, Sashank Varma ("Sashank") writes:

 Sashank> In article <··············@david-steuber.com>,
 Sashank>  David Steuber <·············@verizon.net> wrote:

 >> "Alex Mizrahi" <·········@xhotmail.com> writes:
 >> 
 >> > terpri
 >> > /ter'pree/ TERminate PRInt line. [LISP 1.5 and later, MacLISP] To output a
 >> > newline. Still used in Common LISP. On some early operating systems and
 >> > hardware, no characters would be printed until a complete line was formed,
 >> > so this operation terminated the line and emitted the output.
 >> 
 >> Sometimes I think people take saving a few keystrokes too far.

 Sashank> Some thoughts:

 Sashank> No one tops The Artist Formerly Known as Prince in this regard.

 Sashank> Perhaps typing (terminate-print-line) ate up too many of the 80
 Sashank> characters available per line, and thus led to a far greater
 Sashank> sin: non-standard indenting.

 Sashank> I hardly ever use (terpri), preferring instead (format t "~%").

TERPRI is a leftover function from before Lisp had any formatted
output facility.  There was just TYO (now known as WRITE-CHAR),
and there was PRINC and PRINT.  Programs were filled with output
statements, and you called TERPRI a lot.
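
To make the contrast concrete, a minimal sketch of the same output
written both ways (REPORT-OLD and REPORT-NEW are made-up names; ~A
prints like PRINC, ~S like PRIN1):

    ;; Old style: one output call per piece, TERPRI between lines.
    (defun report-old (name value)
      (princ "name = ") (princ name) (terpri)
      (princ "value = ") (prin1 value) (terpri))

    ;; The same thing once FORMAT existed.
    (defun report-new (name value)
      (format t "name = ~a~%value = ~s~%" name value))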
From: Sashank Varma
Subject: Re: TERPRI
Date: 
Message-ID: <none-BB825D.13584117032004@news.vanderbilt.edu>
In article <·············@news.dtpq.com>,
 ······@news.dtpq.com (Christopher C. Stacy) wrote:

>  Sashank> I hardly ever use (terpri), preferring instead (format t "~%").
> 
> TERPRI is a leftover function from before Lisp had any formatted
> output facility.  There was just TYO (now known as WRITE-CHAR),
> and there was PRINC and PRINT.  Programs were filled with output
> statements, and you called TERPRI a lot.

Yeah, my code from way back was filled with TERPRIs, PRINCs,
and PRINTs.  Over the years, these have been replaced by
FORMATs.
From: Pascal Bourguignon
Subject: Re: TERPRI
Date: 
Message-ID: <87wu5jjy98.fsf@thalassa.informatimago.com>
Sashank Varma <····@vanderbilt.edu> writes:

> In article <··············@david-steuber.com>,
>  David Steuber <·············@verizon.net> wrote:
> 
> > "Alex Mizrahi" <·········@xhotmail.com> writes:
> >
> > > terpri
> > > /ter'pree/ TERminate PRInt line. [LISP 1.5 and later, MacLISP] To output a
> > > newline. Still used in Common LISP. On some early operating systems and
> > > hardware, no characters would be printed until a complete line was formed,
> > > so this operation terminated the line and emitted the output.
> > 
> > Sometimes I think people take saving a few keystrokes too far.
> 
> Some thoughts:
> 
> No one tops The Artist Formerly Known as Prince in this regard.
> 
> Perhaps typing (terminate-print-line) ate up too many of the 80
> characters available per line, and thus led to a far greater
> sin: non-standard indenting.

You don't get it!

Each TERMINATE-PRINT-LINE would take:
    (/ (LENGTH "TERMINATE-PRINT-LINE") 6.0) 
= 3.33 seconds to print on a 6 c/s teletype, 
while TERPRI would take a mere 1.0 second to print.  

For I/O intensive code, the difference would be whether you could come
home at 17h, or whether you'd have to stay at work until 23h, miss the
bus, have to go home on foot, and have your SO waiting for you in a
mood not leading to "having a life".
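
For the record, the arithmetic checks out at a REPL, using the 6
characters-per-second rate implied by the divisor above:

    (/ (length "TERMINATE-PRINT-LINE") 6.0)  ; => 3.3333333 (seconds)
    (/ (length "TERPRI") 6.0)                ; => 1.0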


-- 
__Pascal_Bourguignon__                     http://www.informatimago.com/
There is no worse tyranny than to force a man to pay for what he doesn't
want merely because you think it would be good for him.--Robert Heinlein
http://www.theadvocates.org/
From: Sashank Varma
Subject: Re: TERPRI
Date: 
Message-ID: <none-DBAEC0.14022117032004@news.vanderbilt.edu>
In article <··············@thalassa.informatimago.com>,
 Pascal Bourguignon <····@thalassa.informatimago.com> wrote:

> Sashank Varma <····@vanderbilt.edu> writes:
> > 
> > Perhaps typing (terminate-print-line) ate up too many of the 80
> > characters available per line, and thus led to a far greater
> > sin: non-standard indenting.
> 
> You don't get it!
> 
> Each TERMINATE-PRINT-LINE would take:
>     (/ (LENGTH "TERMINATE-PRINT-LINE") 6.0) 
> = 3.33 seconds to print on a 6 c/s teletype, 
> while TERPRI would take a mere 1.0 second to print.  
> 
> For I/O intensive code, the difference would be whether you could come
> home at 17h, or whether you'd have to stay at work until 23h, miss the
> bus, have to go home on foot, and have your SO waiting for you in a
> mood not leading to "having a life".

;-)
From: Don Geddis
Subject: Re: TERPRI
Date: 
Message-ID: <87ekrqt0br.fsf@sidious.geddis.org>
Sashank Varma <····@vanderbilt.edu> writes:
> On a related note, I was once involved in a project called "Whole
> Day Whole Year".  The project leader liked to abbreviate it WDWY.
> I found this insanely funny because the "longer" name had fewer
> syllables than the "shorter" acronym.

How about the unfortunate choice of "www" as a prefix for web sites?
I cringe every time a hear a radio ad where the announcer tells you to
go check out "H tee tee pee colon slash slash double you double you double you
dot..."  That's a lot of radio seconds with basically zero content.  (I also
once heard a radio ad that said "backslash backslash", which is not only
slower to speak, but also wrong!)

As long as we're on this topic, may I present:
        http://www.doubleudoubleudoubleudotcom.com/

_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Frank knew that no man had ever crossed the desert on foot and lived to tell
about it.  So, he decided to get back in his car and keep driving.
	-- Deep Thoughts, by Jack Handey
From: Barry Margolin
Subject: Re: TERPRI
Date: 
Message-ID: <barmar-3C4C5B.01115719032004@comcast.ash.giganews.com>
In article <··············@sidious.geddis.org>,
 Don Geddis <···@geddis.org> wrote:

> Sashank Varma <····@vanderbilt.edu> writes:
> > On a related note, I was once involved in a project called "Whole
> > Day Whole Year".  The project leader liked to abbreviate it WDWY.
> > I found this insanely funny because the "longer" name had fewer
> > syllables than the "shorter" acronym.
> 
> How about the unfortunate choice of "www" as a prefix for web sites?
> I cringe every time I hear a radio ad where the announcer tells you to
> go check out "H tee tee pee colon slash slash double you double you
> double you dot..."  That's a lot of radio seconds with basically zero
> content.  (I also
> once heard a radio ad that said "backslash backslash", which is not only
> slower to speak, but also wrong!)

In the early days of the web, when mostly geeks knew about it, I 
remember some pronounced "www" as "dubdubdub".

> As long as we're on this topic, may I present:
>         http://www.doubleudoubleudoubleudotcom.com/

Seems more appropriate to the "buffalo" threads. :)

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Rob Warnock
Subject: Re: TERPRI
Date: 
Message-ID: <mMGdnb2Vw5UU0MHd3czS-g@speakeasy.net>
Don Geddis  <···@geddis.org> wrote:
+---------------
| How about the unfortunate choice of "www" as a prefix for web sites?
...
| As long as we're on this topic, may I present:
|         http://www.doubleudoubleudoubleudotcom.com/
+---------------

Much more to the point: <URL:http://yost.com/misc/nix-on-www.html>


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Barry Margolin
Subject: Re: TERPRI
Date: 
Message-ID: <barmar-EC71BC.00304317032004@comcast.ash.giganews.com>
In article <··············@david-steuber.com>,
 David Steuber <·············@verizon.net> wrote:

> "Alex Mizrahi" <·········@xhotmail.com> writes:
> 
> > such strange name was interesting to me too.. so i did a google search and
> > find result just after CLHS:
> > 
> > http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?terpri
> > --
> > terpri
> > /ter'pree/ TERminate PRInt line. [LISP 1.5 and later, MacLISP] To output a
> > newline. Still used in Common LISP. On some early operating systems and
> > hardware, no characters would be printed until a complete line was formed,
> > so this operation terminated the line and emitted the output.
> 
> Sometimes I think people take saving a few keystrokes too far.

Please remember that this name was devised about 30 years ago.  Memory 
was expensive, so it was important to save space.  And terminals printed 
at about 10 characters per second, so using short names would make a 
significant difference in the time taken to print out your program.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Tim Bradshaw
Subject: Re: TERPRI
Date: 
Message-ID: <fbc0f5d1.0403170226.fa4a3b2@posting.google.com>
David Steuber <·············@verizon.net> wrote in message news:<··············@david-steuber.com>...

> 
> Sometimes I think people take saving a few keystrokes too far.

Or maybe they were just interested in keeping names short, because
the whole image had to fit in a few K.  And later, they wanted to make
sure that the large programs they already had would carry on running.
From: David Steuber
Subject: Re: TERPRI
Date: 
Message-ID: <m28yhzujkg.fsf@david-steuber.com>
··········@tfeb.org (Tim Bradshaw) writes:

> David Steuber <·············@verizon.net> wrote in message news:<··············@david-steuber.com>...
> 
> > 
> > Sometimes I think people take saving a few keystrokes too far.
> 
> Or maybe they were just interested in keeping names short, because
> the whole image had to fit in a few K.  And later, they wanted to make
> sure that the large programs they already had would carry on running.

This is sort of a flame but I will say it anyway.

-rw-r--r--   1 david  staff  25604096 16 Mar 22:24 sbcl.core

It's really too bad that memory seems to be considered an infinite
resource.  I am quite convinced that a program that has its code and
data fit in CPU cache will run a heck of a lot faster than code that
spans gigapages of memory.

I'm aware that Common Lisp is a large system.  Why does it have to
come out this large though?  What's the date of the ANSI spec?  What
did a typical desktop computer have to work with then?

CPUs may well be getting faster.  Memories are certainly getting
larger.  Memory access is even getting faster.  However, there is a
huge disparity between CPU speed and memory access speed.  I doubt
that will ever go away.

Don't get me wrong.  I like Lisp.  I know that other languages, like
Java, also suffer from gross overuse of memory (although that is no
excuse).  I'm working on learning it in the hopes that it will be an
enjoyable tool that I can use for applications programs.

Lisp is also a challenging language.  Not just learning the words
either.  The idioms are different.  It is alien compared to C.  Not
bad alien, just that it takes some unlearning of the C ways to learn
the Lisp ways.  Lisp also has a much smaller mind share than the
likes of Java.  That actually does matter.

At the end of the day, I would like to find that the effort was worth
it.

-- 
Those who do not remember the history of Lisp are doomed to repeat it,
badly.

> (dwim x)
NIL
From: Cameron MacKinnon
Subject: Return of CDR coding?
Date: 
Message-ID: <j_SdnTUHLsM5hMTd4p2dnA@golden.net>
David Steuber wrote:

> This is sort of a flame but I will say it anyway.
> 
> -rw-r--r--   1 david  staff  25604096 16 Mar 22:24 sbcl.core
> 
> It's really too bad that memory seems to be considered an infinite
> resource.

This reminds me of something I've been meaning to ask: With the moves 
afoot to 64 bit CPUs on the desktop, and the ever increasing gap 
(chasm?) between CPU performance and memory subsystem latency, does 
anyone think we're due for the return of CDR coding?

It would seem to me that a few extra cycles spent to improve the 
effective sizes of the data caches (if you measure cache size in units 
of cons cells) could be a win, given the high cost of cache misses.

There aren't any current implementations on non-tagged hardware that do 
this, are there? I've read that CDR coding never really paid off except 
on tagged architectures.
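
For anyone who hasn't met it: CDR coding packs the elements of a list
into consecutive cells, with a small per-cell code saying whether the
cdr is "the next cell", NIL, or an explicit pointer, roughly halving
list storage. A toy model in portable CL (illustrative only -- the real
thing relied on hardware tag bits, and the names below are invented):

    ;; Pack a proper list into a vector of (code . element) entries:
    ;; :NEXT means "the cdr is the following entry", :NIL means "this
    ;; is the last element".  N elements take N entries instead of N
    ;; two-word cons cells.
    (defun cdr-code (list)
      (coerce (loop for (element . rest) on list
                    collect (cons (if rest :next :nil) element))
              'vector))

    ;; Walk the packed vector the way CDR would walk the list.
    (defun cdr-decode (page)
      (loop for (code . element) across page
            collect element
            until (eq code :nil)))

    ;; (cdr-decode (cdr-code '(a b c)))  =>  (A B C)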

-- 
Cameron MacKinnon
Toronto, Canada
From: Christophe Rhodes
Subject: Re: TERPRI
Date: 
Message-ID: <sqptbbhvx5.fsf@lambda.dyndns.org>
David Steuber <·············@verizon.net> writes:

> ··········@tfeb.org (Tim Bradshaw) writes:
>
>> Or maybe they were just interested in keeping names short, because
>> the whole image had to fit in a few K.  And later, they wanted to make
>> sure that the large programs they already had would carry on running.
>
> This is sort of a flame but I will say it anyway.

That's nice.

> -rw-r--r--   1 david  staff  25604096 16 Mar 22:24 sbcl.core
>
> It's really too bad that memory seems to be considered an infinite
> resource.  I am quite convinced that a program that has its code and
> data fit in CPU cache will run a heck of a lot faster than code that
> spans gigapages of memory.
>
> I'm aware that Common Lisp is a large system.  Why does it have to
> come out this large though?  What's the date of the ANSI spec?  What
> did a typical desktop computer have to work with then?

Please stop (mis)using the class/instance fallacy.  SBCL is not the
only Common Lisp; it postdates the ANSI specification by at least five
years; it is by no means optimized for space.  You may wish to compare
its size with OpenMCL's, for instance, or with CLISP's, or with an
"embedded" image from CMUCL.  Consider also lisp500, which is rapidly
becoming a Common Lisp implementation of rather minimal size.

SBCL comes out this large because of various design choices it has
made, along with limited enthusiasm and energy for making it smaller.
If the size offends you, you have several options: you could grin and
bear it; you could alter it to your personal preferences; you could
just ignore it.  Heck, you could even post a rant on USENET about it;
whether that is likely to lead to anything constructive, I leave you
to decide.

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: David Steuber
Subject: Re: TERPRI
Date: 
Message-ID: <m2r7vqtt7l.fsf@david-steuber.com>
Christophe Rhodes <·····@cam.ac.uk> writes:

> Please stop (mis)using the class/instance fallacy.  SBCL is not the
> only Common Lisp; it postdates the ANSI specification by at least five
> years; it is by no means optimized for space.  You may wish to compare
> its size with OpenMCL's, for instance, or with CLISP's, or with an
> "embedded" image from CMUCL.  Consider also lisp500, which is rapidly
> becoming a Common Lisp implementation of rather minimal size.

I should probably be annoyed at the memory footprint of Safari as
well.  Actually, I am.

An unfair comparison of OpenMCL shows it taking up 26MB on my disk.
I'm not clear on which of the numerous files actually get loaded, but
it does seem to have a smaller kernel image.

> SBCL comes out this large because of various design choices it has
> made, along with limited enthusiasm and energy for making it smaller.
> If the size offends you, you have several options: you could grin and
> bear it; you could alter it to your personal preferences; you could
> just ignore it.  Heck, you could even post a rant on USENET about it;
> whether that is likely to lead to anything constructive, I leave you
> to decide.

Once I've gotten comfortable programming in Lisp I might try hacking
on SBCL or some other free Lisp.  I honestly don't know.  I've never
even tried to implement scheme.  So I don't really know what it's
like to do something like that.

It would certainly be a fine thing to work on a Lisp implementation
that beat C in size and speed in the general case.

-- 
Those who do not remember the history of Lisp are doomed to repeat it,
badly.

> (dwim x)
NIL
From: Raymond Toy
Subject: Re: TERPRI
Date: 
Message-ID: <4nwu5iw78u.fsf@edgedsp4.rtp.ericsson.se>
>>>>> "David" == David Steuber <·············@verizon.net> writes:

    David> Once I've gotten comfortable programming in Lisp I might try hacking
    David> on SBCL or some other free Lisp.  I honestly don't know.  I've never
    David> even tried to implement scheme.  So I don't really know what it's

I've never even thought of implementing scheme.  Don't let that stop
you.

    David> It would certainly be a fine thing to work on a Lisp implementation
    David> that beat C in size and speed in the general case.

Is there such a thing?  If not, are you helping to achieve such a
thing by helping with code or with money?  If not, was it really that
important to you?

Ray
From: David Steuber
Subject: Re: TERPRI
Date: 
Message-ID: <m2ad2dubv8.fsf@david-steuber.com>
Raymond Toy <···@rtp.ericsson.se> writes:

>     David> It would certainly be a fine thing to work on a Lisp implementation
>     David> that beat C in size and speed in the general case.
> 
> Is there such a thing?  If not, are you helping to achieve such a
> thing by helping with code or with money?  If not, was it really that
> important to you?

I don't know.  I would like to be able to help with code when I reach
the appropriate level of competence, money if I find myself
developing a successful application other than a compiler.  As to
importance...

I have found a strange thing starting to happen over the past few
days.  I am beginning to prefer Lisp syntax over C style syntax.  In
fact, reading some of Bruce Eckel's Java (from a link off of Peter
Seibel's book) is making me feel irritated by C style syntax.

I think in the near term, perhaps extending on into the long term, I
will find myself much preferring to hack Lisp code over other
languages.  This makes me want to have all around better Lisp tools
available under free licenses.

If I had the talent and energy, I would be all over it right now.

-- 
Those who do not remember the history of Lisp are doomed to repeat it,
badly.

> (dwim x)
NIL
From: Thomas F. Burdick
Subject: Re: TERPRI
Date: 
Message-ID: <xcvwu5h4lbb.fsf@famine.OCF.Berkeley.EDU>
David Steuber <·············@verizon.net> writes:

> I think in the near term, perhaps extending on into the long term, I
> will find myself much preferring to hack Lisp code over other
> languages.  This makes me want to have all around better Lisp tools
> available under free licenses.
> 
> If I had the talent and energy, I would be all over it right now.

I think it's safe to say that if you seriously intend to use Lisp for
something, there is a segment of the Lisp community that has an
interest in making sure there's a CL implementation up to it.  Speed
isn't a problem.  For normal end-user applications and server
applications, the size of the various Lisp implementations ranges from
no problem at all, to the large end of reasonable.  CMUCL, for
example, isn't normally used in its small-core variation, but it can
be.  CLISP isn't a speed demon, but it can be forced to fit onto a
little more than 1/2 of a floppy disk.

The open source CL implementations can always use work, but there's no
immediate need for it.  They're fine already.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Raymond Toy
Subject: Re: TERPRI
Date: 
Message-ID: <4nvfl0or8i.fsf@edgedsp4.rtp.ericsson.se>
>>>>> "Thomas" == Thomas F Burdick <···@famine.OCF.Berkeley.EDU> writes:

    Thomas> applications, the size of the various Lisp implementations ranges from
    Thomas> no problem at all, to the large end of reasonable.  CMUCL, for
    Thomas> example, isn't normally used in its small-core variation, but it can
    Thomas> be.  CLISP isn't a speed demon, but it can be forced to fit onto a
    Thomas> little more than 1/2 of a floppy disk.

FWIW, Fred Gilham had an experimental version where the whole
lisp.core and lisp C executable was about 3 MB.[1]  Pretty nice.

Ray

Footnotes: 
[1]  He used ELF to put the core with the executable, and then gzexe'd
the result.  I think.  Everything else was totally transparent.  But I
think it might have unzipped to a temp space.  Not sure, though.
From: Rob Warnock
Subject: Re: TERPRI
Date: 
Message-ID: <-f2dndB_v-Cny8Hd3czS-w@speakeasy.net>
Thomas F. Burdick <···@famine.OCF.Berkeley.EDU> wrote:
+---------------
| David Steuber <·············@verizon.net> writes:
| > If I had the talent and energy, I would be all over it right now.
| 
| I think it's safe to say that if you seriously intend to use Lisp for
| something, there is a segment of the Lisp community that has an
| interest in making sure there's a CL implementation up to it.  Speed
| isn't a problem.  For normal end-user applications and server
| applications, the size of the various Lisp implementations ranges from
| no problem at all, to the large end of reasonable.  CMUCL, for
| example, isn't normally used in its small-core variation, but it can be.
+---------------

Actually, for any machine less than a few years old (say, anything with
a few hundred MHz CPU or better), the distributed CMUCL is plenty fast
enough for all *kinds* of simple systems utility tasks, e.g., the sorts
of things other people tend to use Perl for. For one trivial example,
a couple of days ago I was doing some micro-benchmarking of CMUCL startup
(running small "scripts" 100 times and histogramming the results), and I
needed to sum the "user" and "system" times reported by the "csh" builtin
"time" command (since the "total" time has one fewer digits of precision).
Here's what I wrote:

    #!/usr/local/bin/cmucl -script

    ;;; Script to sum the first two times in a "csh" "time" output:
    ;;;    0.046u 0.007s 0:00.06 66.6%     147+3008k 0+0io 0pf+0w

    (defun sum-u-s (line)
      (+ (read-from-string line nil nil :start 0 :end (position #\u line))
	 (read-from-string line nil nil :start (1+ (position #\space line))
					:end (position #\s line))))

    (loop for line = (read-line *standard-input* nil nil)
	  while line
      do (format t "~5,3f~%" (sum-u-s line)))
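
A quick sanity check at the REPL, feeding SUM-U-S the sample line from
the comment above (the sum is done in single-floats, so expect rounding
in the last digits):

    (sum-u-s "0.046u 0.007s 0:00.06 66.6%     147+3008k 0+0io 0pf+0w")
    ;; => 0.053, give or take a float ulp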

Trivial? Yes, but it's what I needed at the moment, and it was the
language I was able to do it in fastest.

I mention this only because one of the best ways to learn Lisp is to 
simply use it whenever you can, and just watch how things go -- where
the easy parts are, where the hard parts are, and where Lisp doesn't
seem to fit at all [e.g., other parts of the micro-benchmarker used
"sh", "csh", "sort" and "uniq", because they happened to be the easiest
things to use at the time].


-Rob

p.s. I'll be writing up the cute/ugly hack I used to make that
"#!/usr/local/bin/cmucl -script" business work (something I've been
playing with off & on since the "executables" thread a while back),
and will post something "soon". Hint/teaser: It did *not* require
building a custom CMUCL core image -- it uses the released CMUCL-18e
"bin/lisp" and "lib/lisp.core".

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: David Steuber
Subject: Re: TERPRI
Date: 
Message-ID: <m2oeqpq3x4.fsf@david-steuber.com>
····@rpw3.org (Rob Warnock) writes:

> p.s. I'll be writing up the cute/ugly hack I used to make that
> "#!/usr/local/bin/cmucl -script" business work (something I've been
> playing with off & on since the "executables" thread a while back),
> and will post something "soon". Hint/teaser: It did *not* require
> building a custom CMUCL core image -- it uses the released CMUCL-18e
> "bin/lisp" and "lib/lisp.core".

I would love to see this posted here.  It looks like a great way to
pretend that Lisp is just another scripting language for playing with
little throwaway programs that I would otherwise have to fire up a
Lisp session and call up from there.

Also, after the first startup of Lisp and its core file, the system
will cache it.  Lots of successive startups should reduce the overall
startup overhead.

Perl also goes through the process of starting up Perl, compiling the
Perl script, and then finally running it.  With that nasty syntax,
I'm surprised that the compiler is so fast.

-- 
Those who do not remember the history of Lisp are doomed to repeat it,
badly.

> (dwim x)
NIL
From: Rob Warnock
Subject: Re: TERPRI
Date: 
Message-ID: <5TKdnVEOubM5S8Pd3czS-g@speakeasy.net>
David Steuber  <·············@verizon.net> wrote:
+---------------
| ····@rpw3.org (Rob Warnock) writes:
| > p.s. I'll be writing up the cute/ugly hack I used to make that
| > "#!/usr/local/bin/cmucl -script" business work...
| 
| I would love to see this posted here.
+---------------

I will, I promise, "real soon now".  ;-}   ;-} 
I'm just trying to get something [that uses it!]
out the door first.

+---------------
| It looks like a great way to pretend that Lisp is just another
| scripting language...
+---------------

As someone else pointed out, CLISP does it "out of the box" already,
so if you just want to start playing with "Lisp scripting" I'd say
go ahead and try CLISP. I use it myself for a few things.

In fact, I even have a few scripts with the following few lines in
the front:   ;-}

	#!/usr/local/bin/clisp
	#!/usr/local/bin/cmucl -script
	;;; This will work *whichever* of the above lines is first. --rpw3

	(require :clx)
	#+cmu (shadow 'define-keysym)
	#+clisp (shadowing-import 'xlib:char-width)
	(use-package :xlib)
	...

+---------------
| Also, after the first startup of Lisp and its core file, the system
| will cache it.  Lots of successive startups should reduce the overall
| startup overhead.
+---------------

Indeed it does, as I've shown with the microbenchmarks I mentioned
previously.

+---------------
| Perl also goes through the process of starting up Perl, compiling the
| Perl script, and then finally running it.  With that nasty syntax,
| I'm surprised that the compiler is so fast.
+---------------

Well, even when "interpreting", CMUCL "minimally compiles" everything
to an intermediate representation, too, yet it seems fast enough for
a lot of "scripting" stuff. [Remember: A Lisp "script" can explicitly
compile selected functions within itself as it runs, so critical pieces
can run at full speed.]
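
A minimal sketch of that last point (portable COMPILE, nothing
CMUCL-specific; CHECKSUM is a made-up name):

    ;; Define the hot spot, then compile just that one function at
    ;; run time; the rest of the script can stay interpreted.
    (defun checksum (bytes)
      (loop for b across bytes sum b))
    (compile 'checksum)   ; CHECKSUM now runs compiled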


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Jacek Generowicz
Subject: Re: TERPRI
Date: 
Message-ID: <tyfvfkx5fkd.fsf@pcepsft001.cern.ch>
David Steuber <·············@verizon.net> writes:

> ····@rpw3.org (Rob Warnock) writes:
> 
> > p.s. I'll be writing up the cute/ugly hack I used to make that
> > "#!/usr/local/bin/cmucl -script" business work (something I've been
> > playing with off & on since the "executables" thread a while back),
> > and will post something "soon". Hint/teaser: It did *not* require
> > building a custom CMUCL core image -- it uses the released CMUCL-18e
> > "bin/lisp" and "lib/lisp.core".
> 
> I would love to see this posted here.  It looks like a great way to
> pretend that Lisp is just another scripting language for playing with
> little throwaway programs that I would otherwise have to fire up a
> Lisp session and call up from there.

You are aware that Clisp _is_ "just another scripting language", out
of the box? Just put "#!/usr/local/bin/clisp" (or whatever is
appropriate for your system) at the top of your CL source code, and
away you go.

Hell, you can even use Clisp as your shell:

   http://clisp.cons.org/clash.html
From: David Steuber
Subject: Re: TERPRI
Date: 
Message-ID: <m21xnko7pf.fsf@david-steuber.com>
Jacek Generowicz <················@cern.ch> writes:

> You are aware that Clisp _is_ "just another scripting language", out
> of the box? Just put "#!/usr/local/bin/clisp" (or whatever is
> appropriate for your system) at the top of your CL source code, and
> away you go.

I am now.

I wonder if I can get that sort of thing to work with OpenMCL as
well.  Or SBCL.

-- 
It would not be too unfair to any language to refer to Java as a
stripped down Lisp or Smalltalk with a C syntax.
--- Ken Anderson
    http://openmap.bbn.com/~kanderso/performance/java/index.html
From: Scott Schwartz
Subject: Re: TERPRI
Date: 
Message-ID: <8gk7107ftc.fsf@galapagos.bx.psu.edu>
> You are aware that Clisp _is_ "just another scripting language", out
> of the box? Just put "#!/usr/local/bin/clisp" (or whatever is

How do you force clisp to write errors to stderr instead of stdout?

$ clisp -q -x '(/ 1 0)'  2>/dev/null
 
*** - division by zero

That would suck if you had written this:

$ clisp-program | xargs rm -r

(It would delete all your files, instead of the ones you asked to have
deleted.)

Perl gets it right:

$ perl -e '$a=1/0;' 2>/dev/null

Python gets it right:

$ python -c 'a=1/0' 2>/dev/null

CMUCL gets it right:

$ lisp -batch -quiet -eval '(/ 1 0)' 2>/dev/null
From: Kaz Kylheku
Subject: Re: TERPRI
Date: 
Message-ID: <cf333042.0403312026.25229ff0@posting.google.com>
Scott Schwartz <··········@usenet ·@bio.cse.psu.edu> wrote in message news:<··············@galapagos.bx.psu.edu>...
> > You are aware that Clisp _is_ "just another scripting language", out
> > of the box? Just put "#!/usr/local/bin/clisp" (or whatever is
> 
> How do you force clisp to write errors to stderr instead of stdout?
>
> $ clisp -q -x '(/ 1 0)'  2>/dev/null
>  
> *** - division by zero
> 
> That would suck if you had written this:
> 
> $ clisp-program | xargs rm -r
> 
> (It would delete all your files, instead of the ones you asked to have
> deleted.)

The pathnames that are input to xargs are not subject to shell
globbing expansion. xargs most likely uses the fork() and exec*()
interfaces to run the generated command lines, rather than a shell:

   echo '*' | xargs ls
   ls: *: no such file or directory

Of course, if you have an important file called "*** - division by
zero", you're out of luck. :)
From: Pascal Bourguignon
Subject: Re: clisp stdout/stderr
Date: 
Message-ID: <87hdw4v6vd.fsf_-_@thalassa.informatimago.com>
Scott Schwartz <··········@usenet ·@bio.cse.psu.edu> writes:

> > You are aware that Clisp _is_ "just another scripting language", out
> > of the box? Just put "#!/usr/local/bin/clisp" (or whatever is
> 
> How do you force clisp to write errors to stderr instead of stdout?
> 
> $ clisp -q -x '(/ 1 0)'  2>/dev/null
>  
> *** - division by zero
> 
> That would suck if you had written this:
> 
> $ clisp-program | xargs rm -r
> 
> (It would delete all your files, instead of the ones you asked to have
> deleted.)

Yes, there is a problem with clisp (2.33):

$ clisp -x '(progn (print :error *error-output*)(print :output *standard-output*))'>out 2>err
$ cat out
;; Loading file /home/pascal/.clisprc.lisp ...
[1]> 
:ERROR 
:OUTPUT 
:OUTPUT
$ cat err
$ 


 
> Perl gets it right:
> 
> $ perl -e '$a=1/0;' 2>/dev/null
> 
> Python gets it right:
> 
> $ python -c 'a=1/0' 2>/dev/null
> 
> CMUCL gets it right:
> 
> $ lisp -batch -quiet -eval '(/ 1 0)' 2>/dev/null
> 

-- 
__Pascal_Bourguignon__                     http://www.informatimago.com/
There is no worse tyranny than to force a man to pay for what he doesn't
want merely because you think it would be good for him.--Robert Heinlein
http://www.theadvocates.org/
From: Sam Steingold
Subject: Re: clisp stdout/stderr
Date: 
Message-ID: <usmff3gt0.fsf@gnu.org>
> * Pascal Bourguignon <····@gunynffn.vasbezngvzntb.pbz> [2004-04-01 03:12:22 +0200]:
>
> Scott Schwartz <··········@usenet ·@bio.cse.psu.edu> writes:
>
>> > You are aware that Clisp _is_ "just another scripting language", out
>> > of the box? Just put "#!/usr/local/bin/clisp" (or whatever is
>> 
>> How do you force clisp to write errors to stderr instead of stdout?

clisp does this by default when it is used as a script interpreter:
<http://clisp.cons.org/impnotes/quickstart.html#script-exec>

>> $ clisp -q -x '(/ 1 0)'  2>/dev/null
>>  
>> *** - division by zero
>> 
>> That would suck if you had written this:
>> 
>> $ clisp-program | xargs rm -r
>> 
>> (It would delete all your files, instead of the ones you asked to have
>> deleted.)

no, shell globbing is not done here, so the only deleted file would be
named "*** - division by zero", and that only if you used the command
line, not the script.
You are right though, it makes sense to use stderr for "-x" too.
I will fix this for the next release.
Thanks.

> Yes, there is a problem with clisp (2.33):
>
> $ clisp -x '(progn (print :error *error-output*)(print :output *standard-output*))'>out 2>err
> $ cat out
> ;; Loading file /home/pascal/.clisprc.lisp ...
> [1]> 
> :ERROR 
> :OUTPUT 
> :OUTPUT
> $ cat err
> $ 

$ cat stderr.sh
#!/usr/bin/clisp
(defun out (s) (format t "~&~S: ~S~%" s (symbol-value s)))
(out '*standard-output*)
(out '*error-output*)
(out '*terminal-io*)
(error "foo!")
$ stderr.sh
*STANDARD-OUTPUT*: #<IO SYNONYM-STREAM *TERMINAL-IO*>
*ERROR-OUTPUT*: #<OUTPUT UNBUFFERED FILE-STREAM CHARACTER #P"/dev/fd/2">
*TERMINAL-IO*: #<IO TERMINAL-STREAM>

*** - foo!
$ stderr.sh > out 2>err
$ cat err

*** - foo!
$ cat out
*STANDARD-OUTPUT*: #<OUTPUT BUFFERED FILE-STREAM CHARACTER #P"/dev/fd/1">
*ERROR-OUTPUT*: #<OUTPUT UNBUFFERED FILE-STREAM CHARACTER #P"/dev/fd/2">
*TERMINAL-IO*:
#<IO TWO-WAY-STREAM #<IO TERMINAL-STREAM>
  #<OUTPUT BUFFERED FILE-STREAM CHARACTER #P"/dev/fd/1">>
$

-- 
Sam Steingold (http://www.podval.org/~sds) running w2k
<http://www.camera.org> <http://www.iris.org.il> <http://www.memri.org/>
<http://www.mideasttruth.com/> <http://www.honestreporting.com>
Profanity is the one language all programmers know best.
From: Mario S. Mommer
Subject: Re: TERPRI
Date: 
Message-ID: <fzptb9ah0o.fsf@germany.igpm.rwth-aachen.de>
David Steuber <·············@verizon.net> writes:
> I have found a strange thing starting to happen over the past few
> days.  I am beginning to prefer Lisp syntax over C style syntax.

:-)

>  In fact, reading some of Bruce Eckel's Java (from a link off of
> Peter Seibel's book) is making me feel irritated by C style syntax.

It is just too damn irregular. In the end, semantics are always to a
large degree defined on parse trees. And as a markup for parse trees,
C syntax is awkward, to say the least.

> I think in the near term, perhaps extending on into the long term, I
> will find myself much preferring to hack Lisp code over other
> languages.  This makes me want to have all around better Lisp tools
> available under free licenses.
> 
> If I had the talent and energy, I would be all over it right now.

Take it easy!

The implementations are fairly complex, and "fixing" things like core
sizes is probably very hard. There are a lot of needs related to them,
however: documentation, web pages, little fixes, etc. Lots of
opportunities to have a positive impact and to learn different things.

The same can be said about a lot of other CL-related projects. You
might want to take a look at

   http://common-lisp.net/projects.shtml

The index is a bit raw (I'm working on it!) but you get the idea.
From: Joe Marshall
Subject: Re: TERPRI
Date: 
Message-ID: <65d0uci3.fsf@ccs.neu.edu>
Mario S. Mommer <········@yahoo.com> writes:

> David Steuber <·············@verizon.net> writes:
>> I have found a strange thing starting to happen over the past few
>> days.  I am beginning to prefer Lisp syntax over C style syntax.
>>
>>  In fact, reading some of Bruce Eckel's Java (from a link off of
>> Peter Seibel's book) is making me feel irritated by C style syntax.
>
> It is just too damn irregular. In the end, semantics are always to a
> large degree defined on parse trees. And as a markup for parse trees,
> C syntax is awkward, to say the least.

I've been working with parsing C code lately.  Infix isn't that much
of a problem, but C is more like `chaosfix'.  

  char (*(*x[3])())[5][9];

Arglists and square brackets are postfix, but square brackets combine
left to right.  Asterisks are prefix, but prefer to associate to the
left.  Parens might be a function call indicator, or they might be
precedence indicators.  Any of the ones above are required, but you
can add balancing extra ones *except* around the empty pair. Finally
the whole thing is inside out: the variable being declared, X, is
buried in the middle of the hash, and the most prominent thing on the
line, `char', has the most tenuous connection to X.  (The one thing
that is immediately obvious from the above is that whatever X is, it
is *not* a char.)
From: Pascal Bourguignon
Subject: Re: TERPRI
Date: 
Message-ID: <87wu5ghn87.fsf@thalassa.informatimago.com>
Joe Marshall <···@ccs.neu.edu> writes:
> I've been working with parsing C code lately.  Infix isn't that much
> of a problem, but C is more like `chaosfix'.  
> 
>   char (*(*x[3])())[5][9];
> 
> Arglists and square brackets are postfix, but square brackets combine
> left to right.  Asterisks are prefix, but prefer to associate to the
> left.  Parens might be a function call indicator, or they might be
> precedence indicators.  Any of the ones above are required, but you
> can add balancing extra ones *except* around the empty pair. Finally
> the whole thing is inside out: the variable being declared, X, is
> buried in the middle of the hash, and the most prominent thing on the
> line, `char', has the most tenuous connection to X.  (The one thing
> that is immediately obvious from the above is that whatever X is, it
> is *not* a char.)

And this is quite a benign example!  
Why do you think they invented cdecl(1)?

-- 
__Pascal_Bourguignon__                     http://www.informatimago.com/
There is no worse tyranny than to force a man to pay for what he doesn't
want merely because you think it would be good for him.--Robert Heinlein
http://www.theadvocates.org/
From: Joe Marshall
Subject: Re: TERPRI
Date: 
Message-ID: <k71gsvbj.fsf@ccs.neu.edu>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:

> And this is quite a benign example!  
> Why do you think they invented cdecl(1)?

But I thought C-like syntax was supposed to be easier to understand.
From: Pascal Bourguignon
Subject: Re: TERPRI
Date: 
Message-ID: <87fzc4hltr.fsf@thalassa.informatimago.com>
Joe Marshall <···@ccs.neu.edu> writes:

> Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> 
> > And this is quite a benign example!  
> > Why do you think they invented cdecl(1)?
> 
> But I thought C-like syntax was supposed to be easier to understand.

Who told you that?

-- 
__Pascal_Bourguignon__                     http://www.informatimago.com/
There is no worse tyranny than to force a man to pay for what he doesn't
want merely because you think it would be good for him.--Robert Heinlein
http://www.theadvocates.org/
From: Joe Marshall
Subject: Re: TERPRI
Date: 
Message-ID: <y8pwek08.fsf@ccs.neu.edu>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> Pascal Bourguignon <····@thalassa.informatimago.com> writes:
>> 
>> > And this is quite a benign example!  
>> > Why do you think they invented cdecl(1)?
>> 
>> But I thought C-like syntax was supposed to be easier to understand.
>
> Who told you that?

I thought the whole problem with Lisp was the parentheses and that
people found things like 

   char (*(*x[3])())[5];

to be much more readable than

 (:array :size 3 (:pointer (:function :returning (:pointer (:array :size 5 char)))))

At least that's what the trolls keep telling us.
From: Pascal Bourguignon
Subject: Re: TERPRI
Date: 
Message-ID: <873c84hakl.fsf@thalassa.informatimago.com>
Joe Marshall <···@ccs.neu.edu> writes:
> I thought the whole problem with Lisp was the parentheses and that
> people found things like 
> 
>    char (*(*x[3])())[5];
> 
> to be much more readable than
> 
> (array size 3 (pointer (function returning (pointer (array size 5 char)))))
> 
> At least that's what the trolls keep telling us.

$ echo 'explain char (*(*x[3])())[5] ' |cdecl
declare x as array 3 of pointer to function returning pointer to array 5 of char


-- 
__Pascal_Bourguignon__                     http://www.informatimago.com/
There is no worse tyranny than to force a man to pay for what he doesn't
want merely because you think it would be good for him.--Robert Heinlein
http://www.theadvocates.org/
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: TERPRI
Date: 
Message-ID: <pan.2004.03.19.22.51.14.33567@knm.org.pl>
On Fri, 19 Mar 2004 14:04:23 -0500, Joe Marshall wrote:

>>> But I thought C-like syntax was supposed to be easier to understand.
>>
>> Who told you that?
> 
> I thought the whole problem with Lisp was the parentheses and that
> people found things like 
> 
>    char (*(*x[3])())[5];

No. The fact that some people (including me) find lots of parentheses
unreadable *doesn't* imply that they like C syntax.

It's very easy to compare syntax to C, one of the worst examples of a
non-sexpr syntax (besides Perl), and choose the worst part of the C syntax
(declarations), and say: hey, those parens look better than the C crap, so
they must be the best syntax around. Bullshit. That they are better than
the C syntax means nothing.

To be fair, readability should be compared to good and average examples
of non-sexpr syntax: Haskell, Dylan, Ruby, Python, OCaml, Erlang, even C#.

In the same way most people criticizing a static type system show Java
as an example. But Java's faults are not inherent in static typing, and
good type systems don't have most of its problems: Haskell, OCaml, Clean,
Mercury. Taking the poorest example of the competition and showing that
it's indeed poor is not impressive.

(As for my preferences: Macros need the syntax to be parsable into some
tree structure before knowing the meaning of names in it. This is enough;
sexprs are not needed to achieve that. More traditional syntax can do
that too, with some infix operators and without quoting needed for lists,
and I find it much more readable. Both static typing and dynamic typing
have important advantages, neither is so universal to make the other
irrelevant.)

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Joe Marshall
Subject: Re: TERPRI
Date: 
Message-ID: <ekrogzwj.fsf@comcast.net>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:

> On Fri, 19 Mar 2004 14:04:23 -0500, Joe Marshall wrote:
>
>>>> But I thought C-like syntax was supposed to be easier to understand.
>>>
>>> Who told you that?
>> 
>> I thought the whole problem with Lisp was the parentheses and that
>> people found things like 
>> 
>>    char (*(*x[3])())[5];
>
> No. The fact that some people (including me) find lots of parentheses
> unreadable *doesn't* imply that they like C syntax.
>
> To be fair, readability should be compared to good and average examples
> of non-sexpr syntax: Haskell, Dylan, Ruby, Python, OCaml, Erlang, even C#.

Who said anything about being fair?  What's wrong with taking cheap
shots at C?


-- 
~jrm
From: Gareth McCaughan
Subject: Re: TERPRI
Date: 
Message-ID: <878yhug70u.fsf@g.mccaughan.ntlworld.com>
Joe Marshall wrote:

> I thought the whole problem with Lisp was the parentheses and that
> people found things like 
> 
>    char (*(*x[3])())[5];
> 
> to be much more readable than
> 
>  (:array :size 3 (:pointer (:function :returning (:pointer (:array :size 5 char)))))
> 
> At least that's what the trolls keep telling us.

:-)

C's decision to make declaration mirror usage has clearly
led to a language where declarations involving complicated types
are unreadable. What's particularly sad is that there's no
reason why it had to be that way: a C-like language (i.e.,
syntactically terse and preferring punctuation to words)
doesn't need to make that decision. If the same thing were
rendered as

    (() -> char[5]*)*[3] x;

then it would (with a little practice) be just about as readable
(for both humans and programs) as the Lispy version, while being
scarcely longer than the C version.

In fairness, Lisp's type designators aren't always the most
readable either...
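
For instance, a function type written out in full (LOOKUP-ENTRIES is a
hypothetical name, just to show the shape such designators take):

    ;; "Takes a fixnum and an optional string, returns exactly one
    ;; value, a list."  Terse it is not.
    (declaim (ftype (function (fixnum &optional string)
                              (values list &optional))
                    lookup-entries))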

-- 
Gareth McCaughan
.sig under construc
From: Pascal Bourguignon
Subject: Re: TERPRI
Date: 
Message-ID: <87y8pxj0xw.fsf@thalassa.informatimago.com>
Raymond Toy <···@rtp.ericsson.se> writes:
>     David> It would certainly be a fine thing to work on a Lisp implementation
>     David> that beat C in size and speed in the general case.
> 
> Is there such a thing?  If not, are you helping to achieve such a
> thing by helping with code or with money?  If not, was it really that
> important to you?

Isn't there somewhere a "one-liner" C compiler?  The first C compiler
could not be "big" in any sense, since it ran on so small (by today's
standards) a computer.  Why would you want to beat that?  Do you have a
time machine?


-- 
__Pascal_Bourguignon__                     http://www.informatimago.com/
There is no worse tyranny than to force a man to pay for what he doesn't
want merely because you think it would be good for him.--Robert Heinlein
http://www.theadvocates.org/
From: André Thieme
Subject: Re: TERPRI
Date: 
Message-ID: <c3dedc$8hs$1@ulric.tng.de>
Pascal Bourguignon wrote:

> Raymond Toy <···@rtp.ericsson.se> writes:
> 
>>    David> It would certainly be a fine thing to work on a Lisp implementation
>>    David> that beat C in size and speed in the general case.
>>
>>Is there such a thing?  If not, are you helping to achieve such a
>>thing by helping with code or with money?  If not, was it really that
>>important to you?
> 
> 
> Isn't there somewhere a "one-liner" C compiler?  The first C compiler
> could not be "big" in any sense, since it ran on so small (by today's
> standards) a computer.  Why would you want to beat that?  Do you have a
> time machine?

Not really a one-liner, but still very short:
http://www.ioccc.org/2001/bellard.c

It was written for "The International Obfuscated C Code Contest"
http://www.ioccc.org/main.html

and it compiles itself:
http://www.ioccc.org/2001/bellard.hint


I personally like this program very much:

#include <stdio.h>
int l;int main(int o,char **O,
int I){char c,*D=O[1];if(o>0){
for(l=0;D[l              ];D[l
++]-=10){D   [l++]-=120;D[l]-=
110;while   (!main(0,O,l))D[l]
+=   20;   putchar((D[l]+1032)
/20   )   ;}putchar(10);}else{
c=o+     (D[I]+82)%10-(I>l/2)*
(D[I-l+I]+72)/10-9;D[I]+=I<0?0
:!(o=main(c/10,O,I-1))*((c+999
)%10-(D[I]+92)%10);}return o;}

http://www.ioccc.org/2001/cheong.c
http://www.ioccc.org/2001/cheong.hint



André
--
From: Barry Margolin
Subject: Re: TERPRI
Date: 
Message-ID: <barmar-D66A65.00571718032004@comcast.ash.giganews.com>
In article <··············@david-steuber.com>,
 David Steuber <·············@verizon.net> wrote:

> I'm aware that Common Lisp is a large system.  Why does it have to
> come out this large though?  What's the date of the ANSI spec?  What
> did a typical desktop computer have to work with then?

Desktop computers weren't expected to run Lisp in those days, except for 
"toy" implementations.  At the time Common Lisp was originally being 
designed (mid 80's), desktop machines outside CS labs were things like 
IBM PC/XT and Mac SE/30.  The only personal computers that ran 
full-featured Lisps were Lisp Machines, costing $50-100K.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: james anderson
Subject: Re: TERPRI
Date: 
Message-ID: <405951C4.2130396@setf.de>
Barry Margolin wrote:
> 
> In article <··············@david-steuber.com>,
>  David Steuber <·············@verizon.net> wrote:
> 
> > I'm aware that Common Lisp is a large system.  Why does it have to
> > come out this large though?  What's the date of the ANSI spec?  What
> > did a typical desktop computer have to work with then?
> 
> Desktop computers weren't expected to run Lisp in those days, except for
> "toy" implementations.  At the time Common Lisp was originally being
> designed (mid 80's), desktop machines outside CS labs were things like
> IBM PC/XT and Mac SE/30.

by '87 mcl was running quite comfortably on an se. mine won't boot anymore, so
i'm not sure of the parameters, but i recall that it was "accelerated" with a
68020 and an extra megabyte of memory = total 2.

>    The only personal computers that ran
> full-featured Lisps were Lisp Machines, costing $50-100K.
> 

...
From: Tim Bradshaw
Subject: Re: TERPRI
Date: 
Message-ID: <fbc0f5d1.0403180309.6f7d3739@posting.google.com>
David Steuber <·············@verizon.net> wrote in message news:<··············@david-steuber.com>...

> 
> This is sort of a flame but I will say it anyway.
> 
> -rw-r--r--   1 david  staff  25604096 16 Mar 22:24 sbcl.core
> 
> It's really too bad that memory seems to be considered an infinite
> resource.  I am quite convinced that a program that has its code and
> data fit in CPU cache will run a heck of a lot faster than code that
> spans gigapages of memory.

What makes you believe that the size of an image tells you *anything*
about how much memory it uses, or its cache behaviour?  Go learn about
VM and cache systems. What makes you think that the size of the image
of a single CL implementation tells you anything *at all* about how
big `CL' is?

--tim
From: David Steuber
Subject: Re: TERPRI
Date: 
Message-ID: <m2y8pxsvo2.fsf@david-steuber.com>
··········@tfeb.org (Tim Bradshaw) writes:

> What makes you believe that the size of an image tells you *anything*
> about how much memory it uses, or its cache behaviour?  Go learn about
> VM and cache systems. What makes you think that the size of the image
> of a single CL implementation tells you anything *at all* about how
> big `CL' is?

I guess I thought all that was being loaded into RAM.  Or can be at
any time.

Activity Monitor tells me that a freshly started SBCL 0.8.8.29
process consumes 8.51MB of real memory on OSX 10.3.2.  This was
started by typing 'sbcl' into bash.

I guess I should back off.

Last night in Safari (now there is a memory hog) I selected some text
to copy and paste.  Whatever code that handles the mouse click event
had apparently been swapped out to disk.  Or maybe the disk had to
spin up for a different reason.  In any event, the disk had to spin
up and a couple seconds passed before I got feedback on the display
that I was selecting text.  The delay was long enough to make me
think that the event had been missed and try it again.

While not at all related to SBCL, this is a case where an interactive
application caused annoyance.  My belief, which may be off base, is
that the delay was caused by the fact that the memory footprint of
Safari is so large.  I have 640MB of RAM.  I would expect a lot of
stuff to fit in that space.

I apologize for any offense I may have caused through my ignorance.
Runtime performance is something I do think about, and it is important to
me as a user.  Because of this I may say some stupid things from time
to time.

-- 
Those who do not remember the history of Lisp are doomed to repeat it,
badly.

> (dwim x)
NIL
From: Cameron MacKinnon
Subject: Re: TERPRI
Date: 
Message-ID: <HaednZLC5t9NtsfdRVn-tw@golden.net>
David Steuber wrote:

> I apologize for any offense I may have caused through my ignorance.
> Runtime performance is something I do think about, and it is important to
> me as a user.  Because of this I may say some stupid things from time
> to time.


Have you decided that your original impression was wrong, or are you 
just worried about alienating some people over what is obviously a 
sensitive topic?

You produced a hard number, the size of an SBCL core file. In return 
there was a lot of hand waving about how some implementations might be 
smaller (no numbers, except from you) and how VM and cache behaviour 
might mitigate some of the size effects -- again, with no pointers to 
actual implementations that do this or papers that discuss it.

I think Lisp is phat, but I also think Lisp is fat.

Your observation is a common one, and the community's response often 
runs to [deleted to avoid flamewar - ed.] or just wishing the issue 
would go away. Given the move afoot to 64 bits, it won't.

-- 
Cameron MacKinnon
Toronto, Canada
From: Thomas Lindgren
Subject: Re: TERPRI
Date: 
Message-ID: <m3y8pw3izk.fsf@localhost.localdomain>
Cameron MacKinnon <··········@clearspot.net> writes:

> I think Lisp is phat, but I also think Lisp is fat.

An 8MB image isn't much these days. It's 1/64th of the memory on my
$300 PC. My experience is instead that these languages as a rule are
fairly lean.  The reason for this, presumably, is that many of the
implementation technologies were devised when memories were small,
meaning the 70s-80s.

As a student, I used Common Lisp (version 1) on a time-sharing machine
that had 1MB memory, for instance. It worked quite well.

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Adam Warner
Subject: Re: TERPRI
Date: 
Message-ID: <pan.2004.03.19.01.19.41.686960@consulting.net.nz>
Hi Cameron MacKinnon,

> Have you decided that your original impression was wrong, or are you
> just worried about alienating some people over what is obviously a
> sensitive topic?
> 
> You produced a hard number, the size of an SBCL core file. In return
> there was a lot of hand waving about how some implementations might be
> smaller (no numbers, except from you) and how VM and cache behaviour
> might mitigate some of the size effects -- again, with no pointers to
> actual implementations that do this or papers that discuss it.

There's no need to point to "actual implementations that do this or papers
that discuss it." You just need some clues about how virtual memory works
in a modern operating system.

If you load a large memory image into say Linux then over time (and
especially if physical RAM is scarce) any inactive portions will get
swapped out to disk. You will effectively use no more physical RAM than
with a smaller memory image where every part of the image is in constant
use. The only hard constraints are the address space and the size of your
swap file.

There's no need to discuss how actual implementations do this because the
functionality is provided by the operating system.

Regards,
Adam
From: David Steuber
Subject: Re: TERPRI
Date: 
Message-ID: <m2r7vpsbb1.fsf@david-steuber.com>
Adam Warner <······@consulting.net.nz> writes:

> Hi Cameron MacKinnon,
> 
> > Have you decided that your original impression was wrong, or are you
> > just worried about alienating some people over what is obviously a
> > sensitive topic?

I don't want to alienate anyone.  People do put a lot of work into
SBCL, CMUCL, et al.  I don't want to say I am wrong so much as
underqualified to really judge.  My math tells me that no matter what
the reason, if the code + data take up more space than fits in cache,
the CPU will be forced to idle or do something else while waiting for
data from main memory.

> > You produced a hard number, the size of an SBCL core file. In return
> > there was a lot of hand waving about how some implementations might be
> > smaller (no numbers, except from you) and how VM and cache behaviour
> > might mitigate some of the size effects -- again, with no pointers to
> > actual implementations that do this or papers that discuss it.
> 
> If you load a large memory image into say Linux then over time (and
> especially if physical RAM is scarce) any inactive portions will get
> swapped out to disk. You will effectively use no more physical RAM than
> with a smaller memory image where every part of the image is in constant
> use. The only hard constraints are the address space and the size of your
> swap file.
> 
> There's no need to discuss how actual implementations do this because the
> functionality is provided by the operating system.

The problem as I see it is that hitting the disk is thousands of
times slower than having the data and instructions all in CPU.  Disk
bandwidth is much lower than RAM bandwidth which is slower than cache
bandwidth which is slower than register bandwidth.

Granted the CPU will know something about ordering instructions at a
low level.  A compiler that knows the CPU can be a big help.  I
expect that is why Intel's C++ compiler is faster than GNU on the PIV
and why IBM's is faster than GNU on the PPC970.

Then of course the OS has its say with scheduling, VM and whatnot.

In the end, I still think a smaller application will be faster than a
larger one because of I/O constraints, above and beyond the fact that
a larger application is larger because it does more work.

I think what I need is to come up with some real profiling data.  It
is common enough for people to be completely wrong about the time
cost of code.  So I will defer this topic until I can come up with
some hard, scientific numbers for real applications.
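
A starting point is the standard TIME macro; something like this toy
measurement is what I have in mind (CHURN is just a made-up workload,
nothing scientific):

    ;; TIME is standard CL; most implementations report real time,
    ;; run time, and bytes consed for the form it wraps.
    (defun churn (n)
      (let ((acc '()))
        (dotimes (i n (length acc))
          (push (* i i) acc))))

    (time (churn 1000000))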

I guess if everyone were worried about runtime speed, websites
wouldn't be driven by PHP, Perl, Python, and Java.  Then again, there
is often a lot of heavy lifting being done by a database on the back
end.  Also network bandwidth is the narrowest pipe of all.

-- 
Those who do not remember the history of Lisp are doomed to repeat it,
badly.

> (dwim x)
NIL
From: Marc Spitzer
Subject: Re: TERPRI
Date: 
Message-ID: <86smg4acy9.fsf@bogomips.optonline.net>
David Steuber <·············@verizon.net> writes:

> Adam Warner <······@consulting.net.nz> writes:
>
>> Hi Cameron MacKinnon,
>> 
>> > Have you decided that your original impression was wrong, or are you
>> > just worried about alienating some people over what is obviously a
>> > sensitive topic?
>
> I don't want to alienate anyone.  People do put a lot of work into
> SBCL, CMUCL, et al.  I don't want to say I am wrong so much as
> underqualified to really judge.  My math tells me that no matter what
> the reason, if the code + data take up more space than fits in cache,
> the CPU will be forced to idle or do something else while waiting for
> data from main memory.

But all the Log charts are done.  The things I tune for as an SA to
make things faster are network and disk, before I even bother with
CPU; RAM also generally comes before CPU.  The fact that the CPU is
having a few more cache misses just isn't that big a deal, generally.
After all, how do you fit DB indexes into cache, or the whole DB for
that matter, with current technology?  Also keep in mind that OSes do
many concurrent tasks (this slice we are doing postgres, next slice we
are doing apache, etc.), and all of them will not fit into cache.

>
> The problem as I see it is that hitting the disk is thousands of
> times slower than having the data and instructions all in CPU.  Disk
> bandwidth is much lower than RAM bandwidth which is slower than cache
> bandwidth which is slower than register bandwidth.

I think to really optimize cache and register use we would need to get
rid of multitasking, because doing lots of different things all the time
will by its nature degrade cache hits.  Yes, this is supposed to sound
silly.

>
> Granted the CPU will know something about ordering instructions at a
> low level.  A compiler that knows the CPU can be a big help.  I
> expect that is why Intel's C++ compiler is faster than GNU on the PIV
> and why IBM's is faster than GNU on the PPC970.

GCC is a great example of "good enough".

>
> Then of course the OS has its say with scheduling, VM and whatnot.
>
> In the end, I still think a smaller application will be faster than a
> larger one because of I/O constraints above and beyond the fact that
> a larger application is larger because it does more work.

The thing you appear to be missing is that it does not matter how much
faster program A is than program B once they are both fast enough to do
the job.  Cost then becomes the dominant factor: Perl (so they say) is
faster to develop in than C, as C is faster than assembly.  The simple
fact that the output is slower is just not a big deal when it runs fast
enough.



>
> I think what I need is to come up with some real profiling data.  It
> is common enough for people to be completely wrong about the time
> cost of code.  So I will defer this topic until I can come up with
> some hard, scientific numbers for real applications.

Real apps tend to be I/O bound (disk, network), not CPU or cache bound.

>
> I guess if everyone was worried about runtime speed, websites
> wouldn't be driven by PHP, Perl, Python, and Java.  Then again, there
> is often a lot of heavy lifting being done by a database on the back
> end.  Also network bandwidth is the narrowest pipe of all.

That is not true; people are worried about speed.  But it only has a
business impact when the app is the choke point; if it is not, the
speedup is a pure waste of money.  In fact it is often cheaper to
just buy more hardware to horizontally scale your application
($1-2000.00 per 1U PC) rather than have your staff spend time speeding
it up when they could be working on the next thing instead.

marc
From: Rob Warnock
Subject: Re: TERPRI
Date: 
Message-ID: <Av6dne0z-d5z-sHd3czS-w@speakeasy.net>
David Steuber  <·············@verizon.net> wrote:
+---------------
| In the end, I still think a smaller application will be faster than a
| larger one because of I/O constraints above and beyond the fact that
| a larger application is larger because it does more work.
| 
| I think what I need is to come up with some real profiling data.  It
| is common enough for people to be completely wrong about the time
| cost of code.  
+---------------

You may find some of the results surprising; I certainly did. As I
mentioned elsewhere, I was running some micro-benchmarks on startup
times for "scripts", and comparing CLISP-2.29 and CMUCL-18e produced
some unexpected results [on a 1.855 GHz Mobile Athlon XP 2500+]:

    % cat hello.clisp
    #!/usr/local/bin/clisp
    (format t "hello world!~%")
    % time-hist -n 100 ./hello.clisp
    Timing 100 runs of: ./hello.clisp
       2 0.019
      30 0.020
      68 0.021
    % 

Not bad: ~20-21ms per run. But look at *this*!

    % cat hello.cmucl
    #!/usr/local/bin/cmucl -script
    (format t "hello world!~%")
    % time-hist -n 100 ./hello.cmucl
    Timing 100 runs of: ./hello.cmucl
       1 0.015
      98 0.016
       1 0.017
    % 

It's even slightly faster [though less convenient] if you don't use
the "#!" scripting:

    % cat hello.lisp
    (format t "hello world!~%")
    % time-hist cmucl -quiet -noinit -load hello.lisp -eval "'(quit)'"
    Timing 100 runs of: cmucl -quiet -noinit -load hello.lisp -eval '(quit)'
      97 0.014
       3 0.015
    % 

Note that on this system I always have at least one CLISP process and
at least one CMUCL process running all the time (started at boot time
or login time), so none of the above numbers capture the "first touch"
overhead of either implementation, which is considerable (since the disk
here isn't hyper-fast).

Some things I can think of to explain the direction of the difference,
despite CLISP's bin+core being ~4.2 MB and CMUCL's bin+core being ~21.1 MB,
are that:

1. "/usr/local/bin/clisp" is a tiny wrapper program that actually execs
   "/usr/local/lib/clisp/full/lisp.run", so there is an extra Unix
   "exec()" involved compared to "/usr/local/bin/cmucl" (which is just
   a symlink to "/u/lisp/contrib/cmucl/bin/lisp" [where it happens to
   live here]).
   
2. CMUCL mmap's its core image MAP_PRIVATE, that is, "copy on write",
   which means that copies of the file in the filesystem's memory
   buffer cache can be re-used for other CMUCL processes without
   touching the disk. I can't read German, but from the occasional
   English comments in the CLISP code it *looks* like it's doing the
   same thing, but I can't be sure. Assuming they both do, that tends to
   wipe out any difference in file sizes, since most of both executables
   and images would be in memory all the time anyway.

3. Almost all of the CMUCL core Common Lisp functionality is compiled
   to native x86 code; a goodly amount of the CLISP core is compiled
   to byte-code, which is slower.
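
   (A quick way to see the difference in point 3 for oneself: compile
   a trivial closure and DISASSEMBLE it in each implementation.)

       ;; Prints native x86 in CMUCL, byte-code in CLISP.
       (disassemble (compile nil '(lambda (x) (1+ x))))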

The point being that one's choice of CLISP vs. CMUCL for any given
application, say, "scripting", should not be made based on bin+core
size alone. Use the one that works best for you, has the features
you need, etc.


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: David Steuber
Subject: Re: TERPRI
Date: 
Message-ID: <m2isgxq36x.fsf@david-steuber.com>
····@rpw3.org (Rob Warnock) writes:

> The point being that one's choice of CLISP vs. CMUCL for any given
> application, say, "scripting", should not be made based on bin+core
> size alone. Use the one that works best for you, has the features
> you need, etc.

You have posted some encouraging results.  And your advice quoted
above is good.

My present needs for learning are actually fairly simple.  My desire
is to be able to hop between (any-of 'CMUCL 'SBCL 'OpenMCL) on
platforms (any-of 'Debian/testing-x86 'OSX) without changing my
source code at all.
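
Where the implementations must differ, I expect read-time conditionals
to be the escape hatch.  A trivial sketch (QUIT-LISP is just my own
name for it):

    ;; Read-time conditionals: the one place implementation names
    ;; leak into otherwise shared source.
    (defun quit-lisp ()
      #+sbcl    (sb-ext:quit)
      #+cmu     (ext:quit)
      #+openmcl (ccl:quit)
      #-(or sbcl cmu openmcl) (error "Don't know how to quit here."))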

When I get to GUI stuff, the requirements may have to be relaxed.  An
OSX app should be native rather than X11.

I have SBCL on my Mac built from CVS whereas SBCL on my Linux box is
from the sbcl and sbcl-mt packages.  The Linux box has CMUCL and my
Mac has OpenMCL.  Both also have CLISP, but I have yet to try it out.

-- 
Those who do not remember the history of Lisp are doomed to repeat it,
badly.

> (dwim x)
NIL
From: Cameron MacKinnon
Subject: Lisp locality (was Re: TERPRI)
Date: 
Message-ID: <J62dnToCTu9WmMbd4p2dnA@golden.net>
Adam Warner wrote:
> Hi Cameron MacKinnon,
>>You produced a hard number, the size of an SBCL core file. In return
>>there was a lot of hand waving about how some implementations might be
>>smaller (no numbers, except from you) and how VM and cache behaviour
>>might mitigate some of the size effects -- again, with no pointers to
>>actual implementations that do this or papers that discuss it.
> 
> 
> There's no need to point to "actual implementations that do this or papers
> that discuss it." You just need some clues about how virtual memory works
> in a modern operating system.
> 
> If you load a large memory image into say Linux then over time (and
> especially if physical RAM is scarce) any inactive portions will get
> swapped out to disk. You will effectively use no more physical RAM than
> with a smaller memory image where every part of the image is in constant
> use. The only hard constraints are the address space and the size of your
> swap file.
> 
> There's no need to discuss how actual implementations do this because the
> functionality is provided by the operating system.

Actually, if you mmap the image, untouched pages won't be read in the 
first place. Read-only pages that are read in and then evicted won't be 
written to swap, they'll just be read in again from the original image 
if needed (I'm not sure if this is true for all mmapped files, or just 
the original executable image).

Since one frequently used object on a page keeps that page in the 
working set, the trick is to group frequently used objects together, and 
to group rarely used objects together.

This is, in general, a hard problem. But Lispers love hard problems, right?

CMUCL allows a :root-structures parameter to save-lisp. From the manual: 
"This should be a list of the main entry points in any newly loaded 
systems.  This need not be supplied, but locality and/or GC performance 
will be better if they are."

I'd welcome pointers to information on how the locality problem is 
attacked in the Lisp world. Sorry if my original post wasn't clear 
enough on this; I should have mentioned locality explicitly.
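
For concreteness, a call would presumably look something like this
(the application names are invented):

    ;; CMUCL: dump a core, naming the entry points whose call graphs
    ;; should be grouped together for locality.
    (ext:save-lisp "my-app.core"
                   :root-structures (list #'my-app:main))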

-- 
Cameron MacKinnon
Toronto, Canada
From: Joe Marshall
Subject: Re: Lisp locality
Date: 
Message-ID: <smg4svrf.fsf@ccs.neu.edu>
Cameron MacKinnon <··········@clearspot.net> writes:

> I'd welcome pointers to information on how the locality problem is
> attacked in the Lisp world. Sorry if my original post wasn't clear
> enough on this; I should have mentioned locality explicitly.

Depth-first copying collection tends to localize things nicely.

TI hacked their GC so that the scavenge was delayed for quite some
time after a flip.  As the mutator processes ran, the objects they
touched would be transported to newspace in the order in which they
were touched.  When it looked like oldspace needed to be reaped,
they'd start the scavenger, finish transporting live objects, and
immediately flip again.

I don't know if there are any published papers available on this.
From: Thomas Lindgren
Subject: Re: Lisp locality (was Re: TERPRI)
Date: 
Message-ID: <m3k71g3ffv.fsf@localhost.localdomain>
Cameron MacKinnon <··········@clearspot.net> writes:

> Since one frequently used object on a page keeps that page in the
> working set, the trick is to group frequently used objects together,
> and to group rarely used objects together.
> 
> This is, in general, a hard problem. But Lispers love hard problems, right?

Note that a copying garbage collector will dynamically group objects
for you.  There have been a couple of PhD theses on this topic,
showing that cache performance for SML and Scheme is quite good if
your hardware handles write-misses well. I'm not aware of any similar
characterization of Common Lisp, though, which might differ in that
objects could be updated more frequently.

I think Paul Wilson and others further extended this with a GC that
copied some data depth-first to improve locality beyond the
above. Published: possibly in 1990?

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Cameron MacKinnon
Subject: Re: Lisp locality (was Re: TERPRI)
Date: 
Message-ID: <YrOdnTssas_UV8bd4p2dnA@golden.net>
Thomas Lindgren wrote:
> I think Paul Wilson and others further extended this with a GC that
> copied some data depth-first to improve locality beyond the
> above. Published: possibly in 1990?

Thanks, Thomas.

Paul R. Wilson, Michael S. Lam, and Thomas G. Moher.  "Effective
Static-Graph Reorganization to Improve Locality in Garbage-Collected
Systems."  Proc. ACM SIGPLAN 1991 Conf. on Programming Language Design
and Implementation, ACM SIGPLAN Notices vol. 26, no. 6 (June 1991),
pp. 177-191.

http://www.cs.utexas.edu/users/wilson/papers/esgr.pdf

It looks like Paul and his "OOPS Group" published a number of good 
papers relating to VM, locality and garbage collection. They're at:

http://www.cs.utexas.edu/users/oops/papers.html

-- 
Cameron MacKinnon
Toronto, Canada
From: Frode Vatvedt Fjeld
Subject: Re: Lisp locality
Date: 
Message-ID: <2h1xnoan7v.fsf@vserver.cs.uit.no>
Another locality issue to consider is that of efficient CPU cache
usage.  The speed gap between level 1 cache and main memory is, I
believe, about the same order of magnitude as that between main
memory and disk.  And while it's often quite feasible to have "enough"
main memory now, this is not the case with CPU caches, which are
fixed and scarce.  (With Movitz one can easily switch off the CPU
caches entirely and witness the effects. IIRC the speed is reduced to
about 1% of normal.)

One frequent operation in any lisp system is, I believe, to look up a
symbol's function-value, and to apply that function to some arguments.
The symbol data-structure would (presumably) tend to be about the size
of 5-6 words, or 0.5-2 CPU cache-lines, depending on the CPU
architecture and lisp implementation. If the symbol data-structure is
implemented naively as a memory-contiguous sequence of words (of which
the function-value is one), then each function call operation is going
to waste a lot of cache resources, because the remaining 5/6 of that
symbol (and cache-line) is unlikely to be used even remotely as
often. I.e. such a design would imply a lot of "false sharing", in CPU
architecture parlance, I think, and one would expect the CPU cache to
perform substantially worse than it otherwise potentially could.
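
To make the arithmetic concrete, here is a sketch of such a naive
layout (not any particular implementation's):

    ;; Five words in a row (plus header).  A call reads only SYM-FN,
    ;; but drags the neighbouring slots into the cache-line with it.
    (defstruct (naive-symbol (:conc-name sym-))
      name pkg value fn plist)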

I'd be interested to hear if any lisp implementors have considered this.

-- 
Frode Vatvedt Fjeld
From: Adam Warner
Subject: Re: Lisp locality (was Re: TERPRI)
Date: 
Message-ID: <pan.2004.03.20.02.23.19.390832@consulting.net.nz>
Hi Cameron MacKinnon,

> Since one frequently used object on a page keeps that page in the
> working set, the trick is to group frequently used objects together, and
> to group rarely used objects together.
> 
> This is, in general, a hard problem. But Lispers love hard problems,
> right?
> 
> CMUCL allows a :root-structures parameter to save-lisp. From the manual:
> "This should be a list of the main entry points in any newly loaded
> systems.  This need not be supplied, but locality and/or GC performance
> will be better if they are."
> 
> I'd welcome pointers to information on how the locality problem is
> attacked in the Lisp world. Sorry if my original post wasn't clear
> enough on this; I should have mentioned locality explicitly.

Your original post was chiding Lisp for being fat. Your clarification that
little-used functions such as EXT:WORLD-DOMINATION should not be in the
same 4096 octets as CL:CAR has nothing to do with the overall size of a
Lisp image.

Nice save though. "You produced a hard number, the size of an SBCL core
file." Your original post was clear. Page locality is an however an
interesting issue.

Regards,
Adam
From: Tim Bradshaw
Subject: Re: TERPRI
Date: 
Message-ID: <fbc0f5d1.0403190832.6388b1eb@posting.google.com>
David Steuber <·············@verizon.net> wrote in message news:<··············@david-steuber.com>...

> I guess I thought all that was being loaded into RAM.  Or can be at
> any time.
> 
> I guess I should back off.
> 

I'm sorry if I was rude: it's kind of a conditioned response to so
many `lisp is x'  trolls over the years.

However, I think it's well worth while understanding how modern OSs
handle memory, and how caching systems work.  Things are almost but
not quite completely different than most people expect.

Essentially what happens (in a modern Unix, which I expect includes
Linux) when an executable file is about to be run is that the OS will
map various parts of it into  the process's address space.  This sets
up a relationship between addresses in memory and offsets in the
executable, but what it *doesn't* do is to actually load the
executable into memory.  That happens at some indeterminate time
later, as the memory is actually referenced.  For memory pages which
are never referenced, it may never happen (it may happen, because the
system will typically try to page in data that it thinks it will need
soon, and it can be wrong about that).

Additionally, a lot of an executable can be mapped read-only, and this
mapping can then be shared between many processes (shared libraries
are the ultimate case of this).  More than this, read-write pages can
be shared until first write, which may never happen.

The end result of all this trickery (which I've described very
simplistically) is that it's very non-obvious what the resource usage
of an application actually is. This is made worse because most of the
performance-monitoring tools don't really tell you enough information,
or don't tell it in a good way.  For instance if you have two
processes which are using 7M each, what's the total usage?  It could
be 7M (or 7M + epsilon).

This isn't to deny that CL implementations generally take more real
memory and CPU resources than some optimised C-based systems.  They
generally do, but, well, they do more.  CL implementations have a
compiler, for instance (how big is gcc, again?)!

Sorry if this article seems patronising; it's not meant to be.  (And of
course, many people reading it will know way more than me anyway...)

--tim
From: Ray Dillinger
Subject: Re: TERPRI
Date: 
Message-ID: <405B413E.33B5CED2@sonic.net>
Tim Bradshaw wrote:
> 
> David Steuber <·············@verizon.net> wrote in message news:<··············@david-steuber.com>...
> 
> > I guess I thought all that was being loaded into RAM.  Or can be at
> > any time.
> >
> > I guess I should back off.
> >
> 
> I'm sorry if I was rude: it's kind of a conditioned response to so
> many `lisp is x'  trolls over the years.
> 
> However, I think it's well worth while understanding how modern OSs
> handle memory, and how caching systems work.  Things are almost but
> not quite completely different than most people expect.
> 
> Essentially what happens (in a modern Unix, which I expect includes
> Linux) when an executable file is about to be run is that the OS will
> map various parts of it into  the process's address space.  This sets
> up a relationship between addresses in memory and offsets in the
> executable, but what it *doesn't* do is to actually load the
> executable into memory.  That happens at some indeterminate time
> later, as the memory is actually referenced.  For memory pages which
> are never referenced, it may never happen (it may happen, because the
> system will typically try to page in data that it thinks it will need
> soon, and it can be wrong about that).

As an implementor, I want to produce executables that do NOT require 
the user to have any unusual dll's or .so's on his system.  But for 
performance's sake, I'd still like the OS to recognize when two or more 
different such executables are running that much of the runtime image 
(the eval compiler and macroexpander for example) can be shared.

Is this, in general, possible? 

				Bear
From: Brian Downing
Subject: Re: TERPRI
Date: 
Message-ID: <LgI6c.39946$JL2.458940@attbi_s03>
In article <·················@sonic.net>,
Ray Dillinger  <····@sonic.net> wrote:
> As an implementor, I want to produce executables that do NOT require 
> the user to have any unusual dll's or .so's on his system.  But for 
> performance's sake, I'd still like the OS to recognize when two or more 
> different such executables are running that much of the runtime image 
> (the eval compiler and macroexpander for example) can be shared.
> 
> Is this, in general, possible? 

(I'm only talking about a purely OS level thing below - not lisp
specific at all.)

In general?  Sure, I guess you could tack some sort of unique identifier
onto sections in your executable, and share those sections between
processes if any two or more are running with the same uuid.  Or have
the OS md5sum them to get the same effect.

In practice, I don't think anything does something like that and
probably never will.  It's just a lot more trouble than it's worth, and
with constantly growing standard libraries (a freshly-started Cocoa
application on OS X is over 80MB of mapped virtual space - take that
into account when talking about huge CL runtimes!), the arguments for
external shared libraries are still significant.

-bcd
From: Cameron MacKinnon
Subject: Re: TERPRI
Date: 
Message-ID: <v6idnforGLPrjMHdRVn-vw@golden.net>
Ray Dillinger wrote:
> As an implementor, I want to produce executables that do NOT require 
> the user to have any unusual dll's or .so's on his system.  But for 
> performance's sake, I'd still like the OS to recognize when two or more 
> different such executables are running that much of the runtime image 
> (the eval compiler and macroexpander for example) can be shared.
> 
> Is this, in general, possible? 

An OS isn't going to be able to recognize that two distinct, monolithic 
executables are substantially similar. They could:
	- share a "System V" shared memory region - see shmat(2)
	- both mmap(2)  a common file

Shared libraries are just a special case of the latter. If you're 
worried about search paths and users having permission to install 
libraries in the system directories, maybe you want dlopen(3) and 
friends, which allow late linking (i.e. after your program is running) 
to a library whose full path you specify.

Numbers in brackets refer to UNIX manual sections. Windows has 
facilities analogous to mmap and dlopen.

-- 
Cameron MacKinnon
Toronto, Canada
From: Tim Bradshaw
Subject: Re: TERPRI
Date: 
Message-ID: <fbc0f5d1.0403220301.1a380969@posting.google.com>
Ray Dillinger <····@sonic.net> wrote in message news:<·················@sonic.net>...

> As an implementor, I want to produce executables that do NOT require 
> the user to have any unusual dll's or .so's on his system.  But for 
> performance's sake, I'd still like the OS to recognize when two or more 
> different such executables are running that much of the runtime image 
> (the eval compiler and macroexpander for example) can be shared.

I don't think that will work in general - to do it would require that
the OS do a diff on pages of two different files, establish that they
are actually the same, and use the same mapping, at least until first
write.  That's possible, but I'm not aware that anyone does it.

What would probably be more likely to work would be to ship things as
a big lisp image and then some (possibly bundled) fasl files, which
get loaded during startup.  Different apps would then just have
different FASL files and you'd have a better chance of not modifying
the main image's pages.  Our (cley's) Weld system worked like this -
there was a large dumped image which had all of Lisp as well as a fair
amount of weld substrate, and it immediately loaded modules at start
time which were typically the actual application.  I confess to never
having tried to measure how much was shared...
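
In outline, the startup half of such a scheme is no more than this
(file and package names invented):

    ;; One big dumped image shared by every app; per-application FASLs
    ;; loaded at startup, so the image's own pages stay clean.
    (defun startup ()
      (dolist (fasl '("substrate.fasl" "application.fasl"))
        (load fasl))
      ;; Read the entry point by name: its package doesn't exist
      ;; until the FASLs above have been loaded.
      (funcall (read-from-string "application:main")))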

--tim
From: Espen Vestre
Subject: Re: TERPRI
Date: 
Message-ID: <kwk71hfb2a.fsf@merced.netfonds.no>
David Steuber <·············@verizon.net> writes:

> While not at all related to SBCL, this is a case where an interactive
> application caused annoyance.  My belief, which may be off base, is
> that the delay was caused by the fact that the memory footprint of
> Safari is so large.  I have 640MB of RAM.  I would expect a lot of
> stuff to fit in that space.

Hmm. I just upgraded our family mac, a really old beige G3, to 640MB
of RAM, and it doesn't swap at all. And even with only 224MB, vm_stat
showed excellent behaviour most of the time while the kids were running
3 or 4 programs at once.
-- 
  (espen)
From: Matthias
Subject: Re: TERPRI
Date: 
Message-ID: <36wvfl2mps5.fsf@rembrandt.ti.uni-mannheim.de>
David Steuber <·············@verizon.net> writes:

> ··········@tfeb.org (Tim Bradshaw) writes:
> 
> > David Steuber <·············@verizon.net> wrote in message news:<··············@david-steuber.com>...
> > 
> > > 
> > > Sometimes I think people take saving a few keystrokes too far.
> > 
> > Or maybe they were just interested in keeping names short, because
> > the whole image had to fit in a few K.  And later, they wanted to make
> > sure that the large programs they already had would carry on running.
> 
> This is sort of a flame but I will say it anyway.
> 
> -rw-r--r--   1 david  staff  25604096 16 Mar 22:24 sbcl.core
> 
> It's really too bad that memory seems to be considered an infinite
> resource.  I am quite convinced that a program that has its code and
> data fit in CPU cache will run a heck of a lot faster than code that
> spans gigapages of memory.

The file size on your hard drive has nothing to do with cache hits /
misses during runtime.

The common lisp designers simply decided (for backward compatibility,
I guess) to put a large library into one common namespace (package in
Lisp speak).  It's as if half of your C-libs in /usr/lib were merged
into one super-lib which you always link against.  This is annoying
when you try to memorize the function names but does not influence
cache behavior or runtime performance.
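
The size of that namespace is easy to measure from inside Lisp, by the
way:

    ;; Count the external symbols of the COMMON-LISP package; the ANSI
    ;; standard defines exactly 978 of them.
    (let ((n 0))
      (do-external-symbols (s "COMMON-LISP" n)
        (declare (ignorable s))
        (incf n)))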

File size, runtime size, and runtime performance are, for the majority
of applications, hotly debated non-issues when it comes to Lisp: people
regularly use systems that are both larger and slower than CL.  You
rarely hear complaints.

Matthias
From: David Steuber
Subject: Re: TERPRI
Date: 
Message-ID: <m24qslubht.fsf@david-steuber.com>
Matthias <··@spam.pls> writes:

> File size, runtime size, runtime performance are, for the majority of
> applications, hotly debated non-issues when it comes to Lisp: People
> regularly use systems that are both, larger and slower than CL.  You
> rarely hear complaints.

I suspect that Java falls into the larger and slower than CL
category.  I don't think enough people complain about it.

I'm sure that I'm an outlier.

-- 
Those who do not remember the history of Lisp are doomed to repeat it,
badly.

> (dwim x)
NIL
From: Joel Ray Holveck
Subject: Re: TERPRI
Date: 
Message-ID: <87brmt6esa.fsf@thor.piquan.org>
> People regularly use systems that are both larger and slower than
> CL.  You rarely hear complaints.

Oh, please.  I hear my co-workers cursing Java every time they pull up
any one of a number of apps.  I hear complaints all the time.

joelh