From: Greg Menke
Subject: Upper limits of CL
Date: 
Message-ID: <m3wut1riiz.fsf@europa.pienet>
On a Linux User Group list I'm a member of, the question just came up
about inverting a 20,000 x 20,000 matrix of floats and how it might be
done.  

No doubt different CL environments have different limits on
performance.  I was wondering in general how well any of the current
generation might handle that large a dataset, given sufficient
hardware.  I was thinking of a situation where multiple arrays are
allocated to get around array-dimension-limit, array-total-size-limit,
etc.., so the aggregate might form the basis for a logical matrix of
sufficient size.
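
For concreteness, the kind of thing I was imagining is a thin access
layer over a vector of smaller arrays -- something like this sketch
(the chunk size and all the names are arbitrary, and it's untested):

(defconstant +band-rows+ 4096)  ; rows per underlying array; arbitrary

(defun make-big-matrix (rows cols)
  ;; One logical ROWS x COLS matrix stored as a vector of row bands,
  ;; each band safely below array-total-size-limit.
  (let ((bands (make-array (ceiling rows +band-rows+))))
    (dotimes (b (length bands) bands)
      (setf (aref bands b)
            (make-array (list (min +band-rows+ (- rows (* b +band-rows+)))
                              cols)
                        :element-type 'double-float
                        :initial-element 0d0)))))

(defun mref (m i j)
  (aref (svref m (floor i +band-rows+)) (mod i +band-rows+) j))

(defun (setf mref) (v m i j)
  (setf (aref (svref m (floor i +band-rows+)) (mod i +band-rows+) j) v))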

Thanks,

Greg Menke

From: Christopher Browne
Subject: Re: Upper limits of CL
Date: 
Message-ID: <aed39j$62tj8$2@ID-125932.news.dfncis.de>
A long time ago, in a galaxy far, far away, Greg Menke <············@mindspring.com> wrote:
> On a Linux User Group list I'm a member of, the question just came up
> about inverting a 20,000 x 20,000 matrix of floats and how it might be
> done.  
>
> No doubt different CL environments have different limits on
> performance.  I was wondering in general how well any of the current
> generation might handle that large a dataset, given sufficient
> hardware.  I was thinking of a situation where multiple arrays are
> allocated to get around array-dimension-limit, array-total-size-limit,
> etc.., so the aggregate might form the basis for a logical matrix of
> sufficient size.

* (* 20000 20000 8)
3200000000

No matter what sort of system you use, that size of matrix will be
problematic, as it pushes the edges of the permissible size of a
process.

8 byte floats will lead to memory consumption of ~3.2GB, which won't
be a happy thing irrespective of what language you're working in.

And that needs to be at least doubled, as you need space for the
"source" matrix as well as the "destination" matrix.
-- 
(concatenate 'string "cbbrowne" ·@cbbrowne.com")
http://cbbrowne.com/info/rdbms.html
Share and Enjoy!!
From: Robert Monfera
Subject: Re: Upper limits of CL
Date: 
Message-ID: <HXoO8.3187$oC3.3505357@news2.news.adelphia.net>
"Christopher Browne" <········@acm.org> wrote in message
···················@ID-125932.news.dfncis.de...

| No matter what sort of system you use, that size of matrix will be
| problematic, as it pushes the edges of the permissible size of a
| process.
|
| 8 byte floats will lead to memory consumption of ~3.2GB, which won't
| be a happy thing irrespective of what language you're working in.

4GB memory costs only a few hundred dollars, so addressing arrays with
20-something bits (a few MB) is getting inadequate really fast.

A natural point at which to convert to 32- or preferably 40-bit (or at the
very least 30-bit) array indexing is the migration to 64-bit processors, for
example x86-64 - indeed, the increased address space and usable physical memory
can be the primary motivation, besides the 3-5 times as many usable registers.

(I assume 64-bit Lisp implementations already allow bigger array sizes and
immediate integers.)

Robert
From: Christopher Browne
Subject: Re: Upper limits of CL
Date: 
Message-ID: <aed89h$61hf1$1@ID-125932.news.dfncis.de>
In the last exciting episode, "Robert Monfera" <·······@fisec.com> wrote:
> "Christopher Browne" <········@acm.org> wrote in message
> ···················@ID-125932.news.dfncis.de...
>
> | No matter what sort of system you use, that size of matrix will be
> | problematic, as it pushes the edges of the permissible size of a
> | process.
> |
> | 8 byte floats will lead to memory consumption of ~3.2GB, which won't
> | be a happy thing irrespective of what language you're working in.
>
> 4GB memory costs only a few hundred dollars, so addressing arrays
> with 20-something bits (a few MB) is getting inadequate really fast.

Do you have a computer that allows you to address that much memory in
one process?

64 bit systems that have all the hardware to support huge amounts of
memory _aren't_ cheap.

It may cost a few hundred dollars to move from 512MB of memory to 2GB
of memory; jumping onwards to 6GB requires spending the bigger bucks
on an upgrade to [UltraSPARC | IBM PPC | Itanic | SGI MIPS | HPaq
Alpha].

The price immediately jumps sharply, the RAM probably _isn't_ cheap,
and you probably get to gulp on $10K USD before you've gotten started,
and there may not be a suitable CL yet for the system you've bought.

The problem is that no 64 bit architecture has leapt out as being a
really good "risk."  None are cheap; all are _highly_ tied to specific
vendors; there has been lots of FUD since the late '90s about "Merced"
being the only 64 bit architecture that could conceivably survive.
Note that there is a serious dearth of vendors hawking it, even now.
-- 
(concatenate 'string "chris" ·@cbbrowne.com")
http://www3.sympatico.ca/cbbrowne/spreadsheets.html
Rules  of the  Evil Overlord  #214. "If  a malignant  being  demands a
sacrificial victim  have a  particular quality, I  will check  to make
sure said victim has this quality immediately before the sacrifice and
not rely on  earlier results. (Especially if the  quality is virginity
and the victim is the hero's girlfriend.)"
<http://www.eviloverlord.com/>
From: Barry Margolin
Subject: Re: Upper limits of CL
Date: 
Message-ID: <hcqO8.16$Pd6.199@paloalto-snr1.gtei.net>
In article <··············@ID-125932.news.dfncis.de>,
Christopher Browne  <········@acm.org> wrote:
>In the last exciting episode, "Robert Monfera" <·······@fisec.com> wrote:
>> 4GB memory costs only a few hundred dollars, so addressing arrays
>> with 20-something bits (a few MB) is getting inadequate really fast.
>
>Do you have a computer that allows you to address that much memory in
>one process?

If a 64-bit architecture doesn't allow this, what's the point of it?

>The price immediately jumps sharply, the RAM probably _isn't_ cheap,
>and you probably get to gulp on $10K USD before you've gotten started,
>and there may not be a suitable CL yet for the system you've bought.

Prices will come down when they become more common.

It's probably the case that none of the CL implementations have yet been
ported to 64-bit environments (but I think I've seen mention of Emacs on
64-bit systems allowing multi-GB buffers).  That should also happen when
the systems get cheaper, or if a big customer has an application that
requires a 64-bit Lisp.

-- 
Barry Margolin, ······@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Carl Shapiro
Subject: Re: Upper limits of CL
Date: 
Message-ID: <ouysn3py8d8.fsf@panix3.panix.com>
Barry Margolin <······@genuity.net> writes:

> It's probably the case that none of the CL implementations have yet been
> ported to 64-bit environments (but I think I've seen mention of Emacs on
> 64-bit systems allowing multi-GB buffers).  

There are 64-bit Lisp implementations available for the popular 64-bit
UNIX systems.  Franz's Allegro CL has been ported to three 64-bit
architectures: the Alpha, PA 2.0 and the UltraSPARC.

Interestingly enough, even though the width of a fixnum was increased
in Franz's 64-bit Lisp, no upward adjustment was made to the maximum
array dimension limit.  How strange.
From: Robert Monfera
Subject: Re: Upper limits of CL
Date: 
Message-ID: <_dvO8.3214$oC3.3580013@news2.news.adelphia.net>
"Carl Shapiro" <·············@panix.com> wrote in message
····················@panix3.panix.com...

| Interestingly enough, even though the width of a fixnum was increased
| in Franz's 64-bit Lisp, no upward adjustment was made to the maximum
| array dimension limit.  How strange.

It depends on how you define what a 64-bit lisp means - does it mean it runs
in a 64-bit environment, or that it has at least one 64-bit feature, or does it
have to exploit all the benefits of 64-bit hardware?  These would be my criteria in the order
of importance:

1. heap size
2. heap size
3. heap size
4. array indexing
5. fixnum size

Robert
From: Barry Margolin
Subject: Re: Upper limits of CL
Date: 
Message-ID: <PmwO8.31$Pd6.234@paloalto-snr1.gtei.net>
In article <······················@news2.news.adelphia.net>,
Robert Monfera <·······@fisec.com> wrote:
>
>"Carl Shapiro" <·············@panix.com> wrote in message
>····················@panix3.panix.com...
>
>| Interestingly enough, even though the width of a fixnum was increased
>| in Franz's 64-bit Lisp, no upward adjustment was made to the maximum
>| array dimension limit.  How strange.
>
>It depends on how you define what a 64-bit lisp means - does it mean it runs
>in a 64-bit environment, or that it has at least one 64-bit feature, or does it
>have to exploit all the benefits of 64-bit hardware?  These would be my criteria in the order
>of importance:
>
>1. heap size
>2. heap size
>3. heap size
>4. array indexing
>5. fixnum size

Unless you're implementing complex AI systems like Cyc, which have
databases containing an enormous number of medium-sized objects, I think
array size is likely to be the highest criterion.  For most other
applications, you're not going to be able to fill up that huge heap unless
you can create huge arrays, or unless you kludge it with nested arrays (as the
OP mentioned).

-- 
Barry Margolin, ······@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Robert Monfera
Subject: Re: Upper limits of CL
Date: 
Message-ID: <yfzO8.25$1n4.23400@news2.news.adelphia.net>
"Barry Margolin" <······@genuity.net> wrote in message
·····················@paloalto-snr1.gtei.net...

| >1. heap size
| >2. heap size
| >3. heap size
| >4. array indexing
| >5. fixnum size
|
| Unless you're implementing complex AI systems like Cyc, which have
| databases containing an enormous number of medium-sized objects, I think
| array size is likely to be the highest criterion.  For most other
| applications, you're not going to be able to fill up that huge heap unless
| you can create huge arrays, or unless you kludge it with nested arrays (as
| the OP mentioned).

You are right.  Though would it be very atypical for non-AI applications to
require a big heap and still not use large arrays?  For example, an image
processing system would use large arrays (larger than what current
implementations can handle), but a CAD system perhaps would not.

Regarding your second point: if the Lisp image can grow large, we can make
virtual arrays, but if the image size is constrained, then we are worse off
even if an array can be as big as the image - hence the prioritization.

Large fixnum range is also useful, for example for currency amounts.

Robert
From: Carl Shapiro
Subject: Re: Upper limits of CL
Date: 
Message-ID: <ouywut1um3x.fsf@panix3.panix.com>
"Robert Monfera" <·······@fisec.com> writes:

> Regarding your second point: if the Lisp image can grow large, we can make
> virtual arrays, but if the image size is constrained, then we are worse off
> even if an array can be as big as the image - hence the prioritization.
> 
> Large fixnum range is also useful, for example for currency amounts.

For this sort of application, I would consider large fixnums to be
purely an optimization time convenience.  There shouldn't be any added
cost in the development or maintenance of your wares if currency
objects are represented as bignums.  All of the standard Common Lisp
numerical operations "just work" for bignums as they do for fixnums.
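
For instance (the fixnum range is implementation-dependent -- the value
below is from a Lisp with 29-bit fixnums -- but the behavior is not):

* most-positive-fixnum
536870911
* (+ most-positive-fixnum 1)        ; quietly returns a bignum
536870912
* (* (expt 10 12) (expt 10 12))     ; exact, no declarations needed
1000000000000000000000000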

In contrast, creating a truly useful virtual array type strikes me as
being a potentially expensive undertaking.  While just about anybody
could hack together a simple virtual array abstraction over the
weekend, I would imagine that reimplementing the relevant parts of the
arrays and sequences dictionary for this new type as well as devising
a reliable discipline for marshalling virtual arrays through your FFI
requires a more considerable effort.

Having large arrays especially wins when dealing with external library
code where somebody else set the rules.  I am especially fond of the
array and sequence frobs Common Lisp offers me--resorting to
operations on large arrays in foreign code just because my data is too
big for representation on the Lisp run-time is unquestionably
loathsome.
From: Christopher Browne
Subject: Re: Upper limits of CL
Date: 
Message-ID: <aeehr7$6aqia$2@ID-125932.news.dfncis.de>
The world rejoiced as "Robert Monfera" <·······@fisec.com> wrote:
> Large fixnum range is also useful, for example for currency amounts.

Indeed.  32 bits isn't nearly enough to express Bill Gates' net worth
in pennies, and that's not a facetious remark, as it only takes a
company of "moderate" size to get sales of >$21.4M (2^31 pennies) wherein
32-bit values "blow up."

Throw in a couple of tag bits, and MOST-POSITIVE-FIXNUM shrinks
further.

For instance, the biggest amount CLISP can represent as a fixnum, counting
in pennies, is $167,772.16, and hopefully many of the gentle readers have
pension savings exceeding that :-).
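
That figure is just the 24-bit boundary expressed in dollars:

* (/ (expt 2 24) 100.0)
167772.16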
-- 
(reverse (concatenate 'string ··········@" "enworbbc"))
http://cbbrowne.com/info/finances.html
"In computing, invariants are ephemeral."  -- Alan Perlis
From: Bruce Hoult
Subject: Re: Upper limits of CL
Date: 
Message-ID: <bruce-1D8583.17474715062002@copper.ipg.tsnz.net>
In article <··············@ID-125932.news.dfncis.de>,
 Christopher Browne <········@acm.org> wrote:

> The world rejoiced as "Robert Monfera" <·······@fisec.com> wrote:
> > Large fixnum range is also useful, for example for currency amounts.
> 
> Indeed.  32 bits isn't nearly enough to express Bill Gates' net worth
> in pennies, and that's not a facetious remark, as it only takes a
> company of "moderate" size to get sales of >$21.4M (2^31 pennies) wherein
> 32-bit values "blow up."

As has been pointed out here before, IEEE Doubles can be used as 
perfectly good 53 bit fixnums.  You lose 11 bits out of 64, but that's 
better than many alternatives -- and it's fast.

For those worried about silent overflow ... the Inexact Result trap 
and/or flag can take care of that.  I don't have the necessary 
documentation for other architectures here but the PowerPC has three 
options:

1) you can enable an exception that is taken if an inexact result is 
produced

2) there is a status flag that is set/cleared based on whether the last 
FP operation produced an inexact result

3) there is a status flag that is set when an FP operation produces an 
inexact result, and never cleared (you can, manually, of course).
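
A quick way to see that 53-bit boundary from Lisp, independent of the
trap/flag machinery (which is CPU- and implementation-specific):

* (= (+ (expt 2d0 52) 1d0) (expt 2d0 52))
NIL                       ; 2^52 + 1 is still exactly representable
* (= (+ (expt 2d0 53) 1d0) (expt 2d0 53))
T                         ; 2^53 + 1 rounds back to 2^53 -- inexact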

-- Bruce
From: Duane Rettig
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4k7ozzpn5.fsf@beta.franz.com>
Carl Shapiro <·············@panix.com> writes:

> Barry Margolin <······@genuity.net> writes:
> 
> > It's probably the case that none of the CL implementations have yet been
> > ported to 64-bit environments (but I think I've seen mention of Emacs on
> > 64-bit systems allowing multi-GB buffers).  
> 
> There are 64-bit Lisp implementations available for the popular 64-bit
> UNIX systems.  Franz's Allegro CL has been ported to three 64-bit
> architectures: the Alpha, PA 2.0 and the UltraSPARC.
> 
> Interestingly enough, even though the width of a fixnum was increased
> in Franz's 64-bit Lisp, no upward adjustment was made to the maximum
> array dimension limit.  How strange.

Yup.  Thanks for reminding me when I visited.  The bigger array sizes are
going to be in place for 6.2.  The new limit for 64-bit versions is going
to be 72057594037927936, or (expt 2 56).  Hopefully that will be enough
for a while.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Raymond Toy
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4nk7owd8gd.fsf@edgedsp4.rtp.ericsson.se>
>>>>> "Duane" == Duane Rettig <·····@franz.com> writes:

    Duane> Yup.  Thanks for reminding me when I visited.  The bigger array sizes are
    Duane> going to be in place for 6.2.  The new limit for 64-bit versions is going
    Duane> to be 72057594037927936, or (expt 2 56).  Hopefully that will be enough
    Duane> for a while.

Why (expt 2 56)?  Does this mean you are using 8 bits for tags?

Ray
From: Duane Rettig
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4d6uoy7fo.fsf@beta.franz.com>
Raymond Toy <···@rtp.ericsson.se> writes:

> >>>>> "Duane" == Duane Rettig <·····@franz.com> writes:
> 
>     Duane> Yup.  Thanks for reminding me when I visited.  The bigger array sizes are
>     Duane> going to be in place for 6.2.  The new limit for 64-bit versions is going
>     Duane> to be 72057594037927936, or (expt 2 56).  Hopefully that will be enough
>     Duane> for a while.
> 
> We (expt 2 56)?  Does this mean you are using 8 bits for tags?

For types, in the header word of the array.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Raymond Toy
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4nlm9cbodt.fsf@edgedsp4.rtp.ericsson.se>
>>>>> "Duane" == Duane Rettig <·····@franz.com> writes:

    Duane> Raymond Toy <···@rtp.ericsson.se> writes:
    >> >>>>> "Duane" == Duane Rettig <·····@franz.com> writes:
    >> 
    Duane> Yup.  Thanks for reminding me when I visited.  The bigger array sizes are
    Duane> going to be in place for 6.2.  The new limit for 64-bit versions is going
    Duane> to be 72057594037927936, or (expt 2 56).  Hopefully that will be enough
    Duane> for a while.
    >> 
    >> Why (expt 2 56)?  Does this mean you are using 8 bits for tags?

    Duane> For types, in the header word of the array.

Ok.  I guess this means that the header word contains the 8-bit header
type and the remaining bits are used to indicate the length of the
array.

If so, this is different from CMUCL where the length of the array is
stored as a fixnum in the word following the header word.

Ray
From: Duane Rettig
Subject: Re: Upper limits of CL
Date: 
Message-ID: <48z5cy0at.fsf@beta.franz.com>
Raymond Toy <···@rtp.ericsson.se> writes:

> >>>>> "Duane" == Duane Rettig <·····@franz.com> writes:
> 
>     Duane> Raymond Toy <···@rtp.ericsson.se> writes:
>     >> >>>>> "Duane" == Duane Rettig <·····@franz.com> writes:
>     >> 
>     Duane> Yup.  Thanks for reminding me when I visited.  The bigger
>     array sizes are going to be in place for 6.2.  The new limit for
>     64-bit versions is going to be 72057594037927936, or (expt 2 56).
>     Hopefully that will be enough for a while.
>     >> 
>     >> Why (expt 2 56)?  Does this mean you are using 8 bits for tags?
> 
>     Duane> For types, in the header word of the array.
> 
> Ok.  I guess this means that the header word contains the 8-bit header
> type and the remaining bits are used to indicate the length of the
> array.
> 
> If so, this is different from CMUCL where the length of the array is
> stored as a fixnum in the word following the header word.

Yes.  This is a tradeoff that every implementation has to make.  On the
one hand, there is a slightly lower limit for array sizes if the size
field is encroached upon by the type byte.  On the other hand, using
an extra word for a size field forces all arrays (actually, half, on
average) to be larger.  We chose the former situation because a
one-element array can be only one Allocation Unit (AU) but with an
extra word being used for size, the same array would have to be 2 AUs.
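
The arithmetic of the tradeoff, as a sketch (the function name is
invented; 8 type bits in a 64-bit header word leave 56 bits of length):

(defun max-array-size (word-bits type-bits)
  ;; Size field left over when the type byte shares the header word.
  (expt 2 (- word-bits type-bits)))

* (max-array-size 64 8)
72057594037927936         ; = (expt 2 56), the 6.2 limit quoted above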

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: James A. Crippen
Subject: Re: Upper limits of CL
Date: 
Message-ID: <m3r8j4ib2i.fsf@kappa.unlambda.com>
Duane Rettig <·····@franz.com> writes:

> Yes.  This is a tradeoff that every implementation has to make.  On the
> one hand, there is a slightly lower limit for array sizes if the size
> field is encroached upon by the type byte.  On the other hand, using
> an extra word for a size field forces all arrays (actually, half, on
> average) to be larger.  We chose the former situation because a
> one-element array can be only one Allocation Unit (AU) but with an
> extra word being used for size, the same array would have to be 2 AUs.

This is the sort of thing that really interests me.  I wish there were
a Lisp Implementors list somewhere where people could yak about
implementation concepts.

Of course, there's the GC-List, but that really only covers GC.  And
it's not particularly active lately.  (I suppose it'll pick up once
OOPSLA comes along again.)

'james

-- 
James A. Crippen <·····@unlambda.com> ,-./-.  Anchorage, Alaska,
Lambda Unlimited: Recursion 'R' Us   |  |/  | USA, 61.20939N, -149.767W
Y = \f.(\x.f(xx)) (\x.f(xx))         |  |\  | Earth, Sol System,
Y(F) = F(Y(F))                        \_,-_/  Milky Way.
From: Raymond Toy
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4nr8j48cde.fsf@edgedsp4.rtp.ericsson.se>
>>>>> "James" == James A Crippen <·····@unlambda.com> writes:

    James> Duane Rettig <·····@franz.com> writes:
    >> Yes.  This is a tradeoff that every implementation has to make.  On the
    >> one hand, there is a slightly lower limit for array sizes if the size
    >> field is encroached upon by the type byte.  On the other hand, using
    >> an extra word for a size field forces all arrays (actually, half, on
    >> average) to be larger.  We chose the former situation because a
    >> one-element array can be only one Allocation Unit (AU) but with an
    >> extra word being used for size, the same array would have to be 2 AUs.

    James> This is the sort of thing that really interests me.  I wish there were
    James> a Lisp Implementors list somewhere where people could yak about
    James> implementation concepts.

The CMUCL Design document has some information about this which you
might find interesting.

Ray
From: Raymond Toy
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4nvg8g8cf9.fsf@edgedsp4.rtp.ericsson.se>
>>>>> "Duane" == Duane Rettig <·····@franz.com> writes:

    Duane> Raymond Toy <···@rtp.ericsson.se> writes:
    >> 
    >> Ok.  I guess this means that the header word contains the 8-bit header
    >> type and the remaining bits are used to indicate the length of the
    >> array.
    >> 
    >> If so, this is different from CMUCL where the length of the array is
    >> stored as a fixnum in the word following the header word.

    Duane> Yes.  This is a tradeoff that every implementation has to make.  On the
    Duane> one hand, there is a slightly lower limit for array sizes if the size
    Duane> field is encroached upon by the type byte.  On the other hand, using
    Duane> an extra word for a size field forces all arrays (actually, half, on
    Duane> average) to be larger.  We chose the former situation because a
    Duane> one-element array can be only one Allocation Unit (AU) but with an
    Duane> extra word being used for size, the same array would have to be 2 AUs.

I see.  Just curious about how this design decision came about.  Are
one-element arrays that common?  Or at least common enough to warrant the
savings?

According to the CMUCL design doc, the extra word "allows for much
larger vectors, and it allows LENGTH to simply access a single word
without masking or shifting".  Of course, until recently, no version
of CMUCL allowed you to allocate even 2^24 of anything except
strings.  (The heap was 64 MB in all platforms, I think.)

Ray
From: Duane Rettig
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4fzzk3usu.fsf@beta.franz.com>
Raymond Toy <···@rtp.ericsson.se> writes:

> >>>>> "Duane" == Duane Rettig <·····@franz.com> writes:
> 
>     Duane> Raymond Toy <···@rtp.ericsson.se> writes:
>     >> 
>     >> Ok.  I guess this means that the header word contains the 8-bit header
>     >> type and the remaining bits are used to indicate the length of the
>     >> array.
>     >> 
>     >> If so, this is different from CMUCL where the length of the array is
>     >> stored as a fixnum in the word following the header word.
> 
>     Duane> Yes.  This is a tradeoff that every implementation has to make.  On the
>     Duane> one hand, there is a slightly lower limit for array sizes if the size
>     Duane> field is encroached upon by the type byte.  On the other hand, using
>     Duane> an extra word for a size field forces all arrays (actually, half, on
>     Duane> average) to be larger.  We chose the former situation because a
>     Duane> one-element array can be only one Allocation Unit (AU) but with an
>     Duane> extra word being used for size, the same array would have to be 2 AUs.
> 
> I see.  Just curious about how this design decision came about.  Are
> one-element arrays that common?  Or at least common enough to warrant the
> savings?

No, that's backwards.  It is the lighter weight of arrays that makes it
more conducive to allocate more of them with impunity.  This applies to
arrays of all sizes.  It applies to cons cells as well (I know there
are implementations that use three words for cons cells rather than two,
and I think I remember hearing of one which used 4 words per cons).
It is also the same reason why we allow hash-tables to be created that
have only three keys.  The lighter weight any particular object can be,
the more of them that can be allocated by a user.

> According to the CMUCL design doc, the extra word "allows for much
> larger vectors, and it allows LENGTH to simply access a single word
> without masking or shifting".  Of course, until recently, no version
> of CMUCL allowed you to allocate even 2^24 of anything except
> strings.  (The heap was 64 MB in all platforms, I think.)

Kind of an ironic twist, eh, that the design intention was thwarted by
reality, even if only temporarily?

I wouldn't make the same claim (i.e. "allows for much larger vectors")
when calling out (expt 2 56) as the array-*-limit - I know of nobody who
could even test such a limit currently, and so calling this limit out
simply states that the total size of array that a CL user could allocate
on one of our 64-bit lisps will not be limited by this value, once users
are able to build up machines with this much virtual memory.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Raymond Toy
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4nd6un8khe.fsf@edgedsp4.rtp.ericsson.se>
>>>>> "Duane" == Duane Rettig <·····@franz.com> writes:

    Duane> Raymond Toy <···@rtp.ericsson.se> writes:
    >> I see.  Just curious about how this design decision came about.  Are
    >> one-element arrays that common?  Or at least common enough to warrant the
    >> savings?

    Duane> No, that's backwards.  It is the lighter weight of arrays that makes it
    Duane> more conducive to allocate more of them with impunity.  This applies to
    Duane> arrays of all sizes.  It applies to cons cells as well (I know there
    Duane> are implementations that use three words for cons cells rather than two,
    Duane> and I think I remember hearing of one which used 4 words per cons).
    Duane> It is also the same reason why we allow hash-tables to be created that
    Duane> have only three keys.  The lighter weight any particular object can be,
    Duane> the more of them that can be allocated by a user.

I can understand why you might want cons cells to be small.  I find
less of a reason for arrays, but that might be because my arrays tend
to be fairly large and one extra word out of thousands for each array is
irrelevant to me. :-)  As an implementor, I'm sure you have a much
different view.  Thanks for the info.


    >> According to the CMUCL design doc, the extra word "allows for much
    >> larger vectors, and it allows LENGTH to simply access a single word
    >> without masking or shifting".  Of course, until recently, no version
    >> of CMUCL allowed you to allocate even 2^24 of anything except
    >> strings.  (The heap was 64 MB in all platforms, I think.)

    Duane> Kind of an ironic twist, eh, that the design intention was thwarted by
    Duane> reality, even if only temporarily?

But given that the original CMUCL developers stopped around 1990 or
so, it's not really a surprise.  We used to drool over 64 MB
Sparcstations back then.  I'm pretty sure that not until a few years
ago did anyone even ask for more heap space.

    Duane> I wouldn't make the same claim (i.e. "allows for much larger vectors")
    Duane> when calling out (expt 2 56) as the array-*-limit - I know of nobody who
    Duane> could even test such a limit currently, and so calling this limit out
    Duane> simply states that the total size of array that a CL user could allocate
    Duane> on one of our 64-bit lisps will not be limited by this value, once users
    Duane> are able to build up machines with this much virtual memory.

Indeed.  If I remember correctly, the Sparc V9 architecture says that
there are only 44 bits worth of virtual memory even though you have
64-bit addresses.  There's a huge hole in the middle of the address
space.
Ray
From: Nicolas Neuss
Subject: Re: Upper limits of CL
Date: 
Message-ID: <87wuskdge1.fsf@ortler.iwr.uni-heidelberg.de>
Raymond Toy <···@rtp.ericsson.se> writes:

> I can understand why you might want cons cells to be small.  I find
> less of a reason for arrays, but that might be because my arrays tend
> to be fairly large and one extra word out of thousands for each array is
> irrelevant to me. :-)  As an implementor, I'm sure you have a much
> different view.  Thanks for the info.

Think of using arrays to store the coordinates of the nodes of an
unstructured mesh in 1-3 dimensions and you have a lot of small
arrays.

Nicolas.
From: Raymond Toy
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4n3cv8u8kr.fsf@rtp.ericsson.se>
>>>>> "Nicolas" == Nicolas Neuss <·············@iwr.uni-heidelberg.de> writes:

    Nicolas> Raymond Toy <···@rtp.ericsson.se> writes:
    >> I can understand why you might want cons cells to be small.  I find
    >> less of a reason for arrays, but that might be because my arrays tend
    >> to be fairly large and one extra word out of thousands for each array is
    >> irrelevant to me. :-)  As an implementor, I'm sure you have a much
    >> different view.  Thanks for the info.

    Nicolas> Think of using arrays to store the coordinates of the nodes of an
    Nicolas> unstructured mesh in 1-3 dimensions and you have a lot of small
    Nicolas> arrays.

But I could also say that having a single extra header word for each
of those tiny arrays is an unacceptable solution.  :-)  Up to 50%
wasted.

So, I go for one giant array to hold everything and do the indexing
myself.  Now it's either one or two words of overhead for all of the
data.
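
In sketch form (names invented), that looks like:

(defun make-points (n)
  ;; N 3-d points in one flat specialized vector: one header in total.
  (make-array (* 3 n) :element-type 'double-float))

(declaim (inline point-ref))
(defun point-ref (pts i k)    ; K = 0, 1, 2 selects x, y or z
  (aref pts (+ (* 3 i) k)))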

:-)

Ray
From: Duane Rettig
Subject: Re: Upper limits of CL
Date: 
Message-ID: <48z50pwaw.fsf@beta.franz.com>
Raymond Toy <···@rtp.ericsson.se> writes:

> >>>>> "Nicolas" == Nicolas Neuss <·············@iwr.uni-heidelberg.de> writes:
> 
>     Nicolas> Raymond Toy <···@rtp.ericsson.se> writes:
>     >> I can understand why you might want cons cells to be small.  I find
>     >> less of a reason for arrays, but that might be because my arrays tend
    >> to be fairly large and one extra word out of thousands for each array is
>     >> irrelevant to me. :-)  As an implementor, I'm sure you have a much
>     >> different view.  Thanks for the info.
> 
>     Nicolas> Think of using arrays to store the coordinates of the nodes of an
>     Nicolas> unstructured mesh in 1-3 dimensions and you have a lot of small
>     Nicolas> arrays.
> 
> But I could also say that having a single extra header word for each
> of those tiny arrays is an unacceptable solution.  :-)  Up to 50%
> wasted.

Correct. The cost of such a style must be considered.  And we have
customers on both sides - ones who have rejected this approach in favor of
another, and ones who have accepted this approach because they deemed the
cost not to be too great.  Which group is right?  They both are!

> So, I go for one giant array to hold everything and do the indexing
> myself.  Now it's either one or two words of overhead for all of the
> data.

There are pros and cons to this kind of approach as well.  All in all, the
pros and cons of each approach are worthy of careful consideration and
weighing during design for each project (and according to the style of
each programmer and the problem at hand).

As an implementor of a general-purpose language, I try to give as many
options as possible to users.  Obviously, I can't please all of the
people all of the time, but I can certainly try.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Nicolas Neuss
Subject: Re: Upper limits of CL
Date: 
Message-ID: <87it44d2t6.fsf@ortler.iwr.uni-heidelberg.de>
Raymond Toy <···@rtp.ericsson.se> writes:

> 
>     Nicolas> Think of using arrays to store the coordinates of the
>     Nicolas> nodes of an unstructured mesh in 1-3 dimensions and you
>     Nicolas> have a lot of small arrays.
> 
> But I could also say that having a single extra header word for each
> of those tiny arrays is an unacceptable solution.  :-)  Up to 50%
> wasted.
> 
> So, I go for one giant array to hold everything and do the indexing
> myself.  Now it's either one or two words of overhead for all of the
> data.
> 
> :-)
> 
> Ray

You're right that this is usually more efficient.  But now imagine
this mesh being modified from time step to time step by adding and
removing points.  You would have to do some garbage collection for
your indices then, which is nontrivial.

Thus, it would be very nice for me if Matlisp could also handle small
vectors/matrices fast.  Unfortunately, the generic function call kills
you in that case.  (I'm currently working on a solution for that.  At the
moment, my solution uses the deftransform construct of CMUCL.
Probably, a portable solution would be to define compiler-macros which
inline generic function calls for certain types.)
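
The compiler-macro version would look something like this (M*,
SMALL-MATRIX and SMALL-MATRIX-M* are invented names; the trick only
fires when the type is visible at the call site):

(defgeneric m* (a b))

(define-compiler-macro m* (&whole form a b)
  ;; If the first argument is written as (THE SMALL-MATRIX ...),
  ;; bypass generic dispatch and call the specialized function.
  (if (and (consp a) (eq (first a) 'the) (eq (second a) 'small-matrix))
      `(small-matrix-m* ,a ,b)
      form))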

Nicolas.
From: Carl Shapiro
Subject: Re: Upper limits of CL
Date: 
Message-ID: <ouy1yasn4u6.fsf@panix3.panix.com>
Nicolas Neuss <·············@iwr.uni-heidelberg.de> writes:

> Think of using arrays to store the coordinates of the nodes of an
> unstructured mesh in 1-3 dimensions and you have a lot of small
> arrays.

Wouldn't small lists be the appropriate data type for storing such
coordinates?
From: Nicolas Neuss
Subject: Re: Upper limits of CL
Date: 
Message-ID: <87n0tgd3cy.fsf@ortler.iwr.uni-heidelberg.de>
Carl Shapiro <·············@panix.com> writes:

> Wouldn't small lists be the appropriate data type for storing such
> coordinates?

That would need more space, as far as I can see (at least for 2D and 3D).
Nicolas.
From: Pierpaolo BERNARDI
Subject: Re: Upper limits of CL
Date: 
Message-ID: <elrO8.35371$TS.1025962@news1.tin.it>
"Barry Margolin" <······@genuity.net> ha scritto nel messaggio ·····················@paloalto-snr1.gtei.net...

> It's probably the case that none of the CL implementations have yet been
> ported to 64-bit environments

AFAIK, at least ACL and Clisp have been ported to 64-bit.

P.
From: Hrvoje Niksic
Subject: Re: Upper limits of CL
Date: 
Message-ID: <sxs7kkxzvqt.fsf@florida.munich.redhat.com>
Barry Margolin <······@genuity.net> writes:

> It's probably the case that none of the CL implementations have yet
> been ported to 64-bit environments (but I think I've seen mention of
> Emacs on 64-bit systems allowing multi-GB buffers).

There's no doubt about it -- on a 64-bit system such as Tru64 under
Alpha, XEmacs has 63-bit integers, which means the size of your
buffers is pretty much only limited by the amount of free virtual
memory.

The same goes for UltraSPARC in 64-bit mode ("LP64"), which XEmacs
supports just fine.
From: Christopher Browne
Subject: Re: Upper limits of CL
Date: 
Message-ID: <aeotnp$8r9qj$1@ID-125932.news.dfncis.de>
Barry Margolin <······@genuity.net> wrote:
> In article <··············@ID-125932.news.dfncis.de>,
> Christopher Browne  <········@acm.org> wrote:
>>In the last exciting episode, "Robert Monfera" <·······@fisec.com> wrote:
>>> 4GB memory costs only a few hundred dollars, so addressing arrays
>>> with 20-something bits (a few MB) is getting inadequate really fast.
>>
>>Do you have a computer that allows you to address that much memory in
>>one process?
>
> If a 64-bit architecture doesn't allow this, what's the point of it?

There's a difference between what is _theoretically_ allowed, and what
is actually achievable.

Obviously, there are _no_ systems out there allowing you to truly have
2^64 bytes/words/nybbles/... of RAM.  That quantity of RAM would
probably cost as much as all computers constructed thus far in history
:-).

Even on the more reasonable side of things, many of the earlier 64 bit
systems predate RAM becoming cheap, and may simply not have enough
slots around to allow having 2GB (2^31) of memory.  That's
unfortunate, but nonetheless the case.  That is likely to apply to
just about all of the "desktop" 64 bit systems that are getting
released to the used market.

>>The price immediately jumps sharply, the RAM probably _isn't_ cheap,
>>and you probably get to gulp on $10K USD before you've gotten
>>started, and there may not be a suitable CL yet for the system
>>you've bought.

> Prices will come down when they become more common.

Doubtless so.

> It's probably the case that none of the CL implementations have yet
> been ported to 64-bit environments (but I think I've seen mention of
> Emacs on 64-bit systems allowing multi-GB buffers).  That should
> also happen when the systems get cheaper, or if a big customer has
> an application that requires a 64-bit Lisp.

What is unfortunate is that nobody got around to building a _nice_
segmented OS on the IA-32 architecture.  There are addressing modes
that should allow that to be handled nicely, and the horrors of the
64KB segments associated with the earlier days of Intel wouldn't apply
the same way to the use of less-drainbamaged segmentation schemes.

Having _huge_ linear memory spaces isn't strictly necessary, so long
as there's a way of reasonably accessing very large amounts of memory.
-- 
(reverse (concatenate 'string ··········@" "enworbbc"))
http://www3.sympatico.ca/cbbrowne/multics.html
The light at the end of the tunnel may be an oncoming dragon.
From: Barry Margolin
Subject: Re: Upper limits of CL
Date: 
Message-ID: <7w1Q8.6$ZY2.284@paloalto-snr1.gtei.net>
In article <··············@ID-125932.news.dfncis.de>,
Christopher Browne  <········@acm.org> wrote:
>Barry Margolin <······@genuity.net> wrote:
>> In article <··············@ID-125932.news.dfncis.de>,
>> Christopher Browne  <········@acm.org> wrote:
>>>Do you have a computer that allows you to address that much memory in
>>>one process?
>>
>> If a 64-bit architecture doesn't allow this, what's the point of it?
>
>There's a difference between what is _theoretically_ allowed, and what
>is actually achievable.
>
>Obviously, there are _no_ systems out there allowing you to truly have
>2^64 bytes/words/nybbles/... of RAM.  That quantity of RAM would
>probably cost as much as all computers constructed thus far in history
>:-).

So?  I was just talking about more than 4GB, not the maximum theoretical
possibility.

Furthermore, the amount of RAM is irrelevant, since "memory in one process"
refers to *virtual* memory, not physical memory.  Virtual memory has almost
always been used to allow addressing more memory than is available as
physical RAM.

-- 
Barry Margolin, ······@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Christopher Browne
Subject: Re: Upper limits of CL
Date: 
Message-ID: <aeqatb$961d2$2@ID-125932.news.dfncis.de>
Barry Margolin <······@genuity.net> wrote:
> In article <··············@ID-125932.news.dfncis.de>,
> Christopher Browne  <········@acm.org> wrote:
>>Barry Margolin <······@genuity.net> wrote:
>>> In article <··············@ID-125932.news.dfncis.de>,
>>> Christopher Browne  <········@acm.org> wrote:
>>>>Do you have a computer that allows you to address that much memory in
>>>>one process?
>>>
>>> If a 64-bit architecture doesn't allow this, what's the point of it?
>>
>>There's a difference between what is _theoretically_ allowed, and what
>>is actually achievable.
>>
>>Obviously, there are _no_ systems out there allowing you to truly have
>>2^64 bytes/words/nybbles/... of RAM.  That quantity of RAM would
>>probably cost as much as all computers constructed thus far in history
>>:-).
>
> So?  I was just talking about more than 4GB, not the maximum theoretical
> possibility.
>
> Furthermore, the amount of RAM is irrelevant, since "memory in one process"
> refers to *virtual* memory, not physical memory.  Virtual memory has almost
> always been used to allow addressing more memory than is available as
> physical RAM.

If the O.P. was actually trying to invert matrices of dimension
'(20000 20000), then they more than likely care somewhat about the
amount of memory physically available.
-- 
(concatenate 'string "chris" ·@cbbrowne.com")
http://cbbrowne.com/info/spiritual.html
I'm a Lisp variable -- bind me!
From: Raymond Toy
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4nadprteo5.fsf@rtp.ericsson.se>
>>>>> "Christopher" == Christopher Browne <········@acm.org> writes:

    Christopher> Barry Margolin <······@genuity.net> wrote:
    >> In article <··············@ID-125932.news.dfncis.de>,
    >> Christopher Browne  <········@acm.org> wrote:
    >>> Barry Margolin <······@genuity.net> wrote:
    >>>> In article <··············@ID-125932.news.dfncis.de>,
    >>>> Christopher Browne  <········@acm.org> wrote:
    >>>>> Do you have a computer that allows you to address that much memory in
    >>>>> one process?
    >>>> 
    >>>> If a 64-bit architecture doesn't allow this, what's the point of it?
    >>> 
    >>> There's a difference between what is _theoretically_ allowed, and what
    >>> is actually achievable.
    >>> 
    >>> Obviously, there are _no_ systems out there allowing you to truly have
    >>> 2^64 bytes/words/nybbles/... of RAM.  That quantity of RAM would
    >>> probably cost as much as all computers constructed thus far in history
    >>> :-).
    >> 
    >> So?  I was just talking about more than 4GB, not the maximum theoretical
    >> possibility.
    >> 
    >> Furthermore, the amount of RAM is irrelevant, since "memory in one process"
    >> refers to *virtual* memory, not physical memory.  Virtual memory has almost
    >> always been used to allow addressing more memory than is available as
    >> physical RAM.

    Christopher> If the O.P. was actually trying to invert matrices of dimension
    Christopher> '(20000 20000), then they more than likely care somewhat about the
    Christopher> amount of memory physically available.

If I did the math right, this matrix takes about 3GB of memory for
double-precision values.  I'm pretty sure an inversion can be done
in-place, so you only need the array and probably a couple more
20000-element arrays to do it.
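
A sketch of what "in-place" means here -- classic Gauss-Jordan with
partial pivoting, untuned and assuming a non-singular double-float
matrix; the only extra storage is one N-element pivot vector:

(defun invert-in-place (a)
  ;; Overwrites the square matrix A with its inverse.
  (let* ((n (array-dimension a 0))
         (pivot (make-array n :element-type 'fixnum)))
    (dotimes (c n)
      ;; Pick the largest remaining element in column C as the pivot.
      (let ((p c))
        (loop for r from (1+ c) below n
              when (> (abs (aref a r c)) (abs (aref a p c)))
                do (setf p r))
        (setf (aref pivot c) p)
        (unless (= p c)
          (dotimes (j n) (rotatef (aref a p j) (aref a c j)))))
      ;; Scale the pivot row, building the inverse's column in place.
      (let ((piv (aref a c c)))
        (setf (aref a c c) 1d0)
        (dotimes (j n) (setf (aref a c j) (/ (aref a c j) piv))))
      ;; Eliminate column C from every other row.
      (dotimes (r n)
        (unless (= r c)
          (let ((f (aref a r c)))
            (setf (aref a r c) 0d0)
            (dotimes (j n) (decf (aref a r j) (* f (aref a c j))))))))
    ;; Undo the row swaps as column swaps, in reverse order.
    (loop for c from (1- n) downto 0
          for p = (aref pivot c)
          unless (= p c)
            do (dotimes (r n) (rotatef (aref a r p) (aref a r c))))
    a))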

Even if not, the OP has other more serious problems if he wants to
invert such a matrix and get anything meaningful, so he's probably
worrying about the wrong thing.

Ray
From: Greg Menke
Subject: Re: Upper limits of CL
Date: 
Message-ID: <m3znxq38fv.fsf@europa.pienet>
>     >> Furthermore, the amount of RAM is irrelevant, since "memory in one process"
>     >> refers to *virtual* memory, not physical memory.  Virtual memory has almost
>     >> always been used to allow addressing more memory than is available as
>     >> physical RAM.
> 
>     Christopher> If the O.P. was actually trying to invert matrices of dimension
>     Christopher> '(20000 20000), then they more than likely care somewhat about the
>     Christopher> amount of memory physically available.
> 
> If I did the math right, this matrix takes about 3GB of memory for
> double-precision values.  I'm pretty sure an inversion can be done
> in-place, so you only need the array and probably a couple more
> 20000-element arrays to do it.
> 
> Even if not, the OP has other more serious problems if he wants to
> invert such a matrix and get anything meaningful, so he's probably
> worrying about the wrong thing.

It was asserted to me that inverting the array was meaningful - however
I don't know the exact application beyond that it's something a CS prof at
the local university is working with.  Our default position with
respect to memory was to stuff the machine with as much RAM as
possible & use plenty of swap.  Either that, or beg, bribe or steal
time on the university's fairly studly Sun hardware.

The prof in question ended up working up a Pascal program (poor guy -
the C people have it good compared to the Pascal people) using the
"Strassen algorithm".  I don't know if it's finished running yet...

Gregm
From: Lieven Marchand
Subject: Re: Upper limits of CL
Date: 
Message-ID: <873cvh68y6.fsf@wyrd.be>
Greg Menke <············@mindspring.com> writes:

> It was asserted to me that inverting the array was meaningful - however
> I don't know the exact application beyond that it's something a CS prof at
> the local university is working with.

I really doubt it. In the standard texts on numerical linear algebra,
inverting matrices hardly shows up. What are you going to do with the
result? Multiplying it with another matrix can be done equally well by
LU factorization and back substitution. For other problems things like
QR factorization, Givens or Householder transforms or stuff like that
might come into play, but I've never seen an algorithm that called for
inverting large matrices.

> The prof in question ended up working up a Pascal program (poor guy -
> the C people have it good compared to the Pascal people) using the
> "Strassen algorithm".  I don't know if it's finished running yet...

The Strassen algorithm is an algorithm to multiply matrices faster
than the obvious O(n^2) approach.

-- 
Bored, now.
Lieven Marchand <···@wyrd.be>
From: Greg Menke
Subject: Re: Upper limits of CL
Date: 
Message-ID: <m3it4d361y.fsf@europa.pienet>
> > It was asserted to me that inverting the array was meaningful - however
> > I don't know the exact application beyond that it's something a CS prof at
> > the local university is working with.
> 
> I really doubt it. In the standard texts on numerical linear algebra,
> inverting matrices hardly shows up. What are you going to do with the
> result? Multiplying it with another matrix can be done equally well by
> LU factorization and back substitution. For other problems things like
> QR factorization, Givens or Householder transforms or stuff like that
> might come into play, but I've never seen an algorithm that called for
> inverting large matrices.

I don't know what he's doing it for.  For all I know he was trying to
make it work just as a simple exercise.  At one point he attempted it
using files for the data & estimated it would take ~50 days to
complete.  I also don't know how efficient his algorithm was, but I
suspect it would have been a fseek(), fread() kind of thing for each
cell.  He was happier with the Strassen approach.
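
The Lisp equivalent of that access pattern -- one seek plus one 8-byte
read per cell -- makes the estimate believable (a sketch; decoding the
bytes into a double-float is elided, as it's implementation-specific):

(defun read-cell-bytes (stream i j n)
  ;; STREAM is a binary file of N x N doubles in row-major order.
  (let ((buf (make-array 8 :element-type '(unsigned-byte 8))))
    (file-position stream (* 8 (+ (* i n) j)))
    (read-sequence buf stream)
    buf))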

Since Common Lisp is cool & fun to use, I thought of asking about how
it might handle the problem.

Regards,

Gregm
From: Joseph Dale
Subject: Re: Upper limits of CL
Date: 
Message-ID: <3D12A64B.60304@uclink.berkeley.edu>
Lieven Marchand wrote:
> The Strassen algorithm is an algorithm to multiply matrices faster
> than the obvious O(n^2) approach.

It may have been a typo, but just for the record, the naive method for 
matrix multiplication is O(n^3), not O(n^2). Strassen's algorithm is 
O(n^2.81), and the best known upper bound for any matrix multiplication 
algorithm was O(n^2.376) at the time of the publication of the first 
edition of CLR (1990, I think).
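
That 2.81 is just log2 of Strassen's seven recursive multiplications:

* (log 7 2)
2.8073549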

Also, FWIW, there are algorithms for matrix inversion which reduce it to 
a bunch of multiplications of smaller matrices, so Strassen's algorithm 
does come into play there.

Joe
From: Barry Margolin
Subject: Re: Upper limits of CL
Date: 
Message-ID: <V%3Q8.10$0%3.519@paloalto-snr1.gtei.net>
In article <··············@ID-125932.news.dfncis.de>,
Christopher Browne  <········@acm.org> wrote:
>If the O.P. was actually trying to invert matrices of dimension
>'(20000 20000), then they more than likely care somewhat about the
>amount of memory physically available.

Whether the memory is real or virtual affects the performance, but not the
ability to do it.  In some applications you need an answer, and you don't
mind waiting a while for it.

-- 
Barry Margolin, ······@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Erik Naggum
Subject: Re: Upper limits of CL
Date: 
Message-ID: <3233532779857997@naggum.net>
* Christopher Browne
| If the O.P. was actually trying to invert matrices of dimension '(20000
| 20000), then they more than likely care somewhat about the amount of memory
| physically available.

  Although I shall not speak of the efficiency issue since I have never done
  anything like this before, it is fairly obvious that all you need to do this
  is some fairly minimal amount of memory (only kilobytes in magnitude) and
  enough disk space to hold one or two copies of this matrix.  Binary I/O with
  objects of the appropriate type is a requirement, of course.  There is, also
  of course, no material difference between this and system-managed virtual
  memory, but it might not even matter that it is not random access if you are
  actually going to traverse the data sequentially.  You could even do the old
  tape-station trick and write a temporary tape/file with redundant data in
  your access pattern if this is faster than random access to disk or virtual
  memory -- there is strong empirical evidence that paging is slower than
  predictable sequential disk access, and you get a free 40G disk with every
  three pairs of sneakers these days.
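
  In sketch form, the block-sequential traversal (names invented; one
  fixed buffer, kilobytes-to-megabytes of live memory no matter how big
  the matrix on disk is):

  (defun map-row-blocks (fn path n &optional (rows-per-block 64))
    ;; Stream an N x N matrix of raw bytes through one reused buffer.
    (with-open-file (s path :element-type '(unsigned-byte 8))
      (let ((buf (make-array (* 8 n rows-per-block)
                             :element-type '(unsigned-byte 8))))
        (loop for start from 0 below n by rows-per-block
              do (read-sequence buf s)
                 (funcall fn start buf)))))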

  A good engineer would balance the tradeoffs and solve the problem within the
  existing resource constraints.  A theoretical computer scientist would whine
  until he got a big enough machine to implement the mathematical solution with
  the least amount of fussing about the constraints of the real world.  I know
  which one I would hire to get the job done.

  When I hear about these amazing datasets, I keep thinking that they need to
  be generated somehow.  The only applications I know of that require multiple
  gigabytes of memory (and then tens of gigabytes) are those that require
  multiple terabytes of disk space (and then easily tens of terabytes, too).  A
  friend of mine from the U of Oslo once headed the IT department in Statoil
  responsible for geological surveys.  They acquired the first homogeneous park
  of disks capable of 1 terabyte simultaneous access in Norway (they already
  had _hundreds_ of terabytes of "virtual disk space" on enormous tape systems
  that were loaded onto disk as needed).  These datasets were so huge that they
  took many months to accumulate -- they had to send physical _ships_ out to
  sea to scan the seabed, at a cost of a dollar a byte, and the sheer transfer
  time from tape to disk could be a couple _days_ after the physical tapes had
  also taken several days to move from ship to the computer facility.  Then, of
  course, came the processing of these things.  Dedicated hardware costing many
  millions of dollars churned on the data for weeks on end before it finally
  spit out a colorful map of the seabed that took several days to plot on their
  monster plotters so they could hang them up on the walls, hundreds of meters
  long in all.  Later, computers became powerful enough to do virtual reality
  with real-time 3D navigation over and into the analyzed seabed.  This amazing
  feat was all done in Fortran, unsurprisingly, software costing more millions
  of dollars and that were tens of millions of lines of code developed over
  decades.  This was all worth it because it takes approximately one day of
  actual production from an oil well to pay for all the computing power
  necessary to punch the right hole in the ground.  The whole country of Norway
  can run for twice as long as it takes to pump up the oil, from the _taxation_
  on the _profits_ alone, socialist plan economy and social security and
  defense and everything.  This is not some jerkoff hobbyist whining about not
  getting a 64-bit Common Lisp -- this is big enough to invest 100 man-years to
  develop dedicated programming languages and spawning moderately large
  companies just to cater to a single industry of a _very_ small number of
  companies.  (We have no mom-and-pop oil rigs in Norway.)  It would be cheaper
  to implement a Common Lisp system of your own for this kind of operation than
  to ask someone else to do it for you.  (In an unrelated industry, that is
  just how Erlang got started.)  The moral of this long-winded example is
  this: If you really need to invert a 20,000 by 20,000 matrix of doubles, and
  it is not just some kind of academic masturbation, the memory and Common Lisp
  and 64-bit hardware and whatever else you can come up with on the IT side are
  going to be miniscule compared to the rest of the costs of the system.  Take
  meteorology -- the first Cray purchased in Norway was used to help our
  fishing industry plan their expeditions to sea.  According to legend, the
  hardware paid for itself the first week it had been in operation, but the
  cost of satellite imaging, telecommunications equipment capable of feeding
  the machine in real time, etc, dwarfed the Cray hardware by two orders of
  magnitude.  Apart from the oil, we have _weather_ in this mountainous coastal
  country.  Billions of dollars have been poured into meteorological prediction
  in this country alone since computers were able to help at all.  Under such
  circumstances, I can fully sympathize with the need for more than 32-bit
  addressing and I appreciate the need for the raw computational power that
  goes into these things, but if this talk about 64-bit implementations is only
  some academic exercise, I actually find it _insulting_ to the brilliant minds
  who have had to do without it.  Besides, if you really need gigabyte upon
  gigabyte of memory and the hardware to utilize it, the only thing between you
  and satisfying that need has been money for at least 10 years.  It's not like
  it's Common Lisp's fault that you haven't had that money -- and if you had
  had it, would you have had anything left over to do anything useful with it
  over a reasonable period of time?  Like, you don't buy a 150-million-dollar
  printing press just because you got an idea about publishing a newspaper --
  you upgrade from the 75-million-dollar printing press when you approach 20
  out of 24 hours running time 7 days a week and want to not run out of hours
  of the day before the new press can be delivered and run in.

  So pardon my cynical twist, but what are you doing with that 20,000 x 20,000
  double-precision floating point matrix you say you need to invert _today_?
  If you answer "nutt'n, I jus kinda wondered what it'd be like, you know", you
  should be very happy that I am most likely more than 3000 miles away from
  you, or I would come over and slap you hard.

  And if you _are_ doing serious stuff of this magnitude, why do you even
  bother with run-of-the-mill Common Lisp implementations on stock hardware?
  Implement your own goddamn Common Lisp system optimized for your hardware and
  your domain-specific language and other needs.  That was -- after all -- how
  several of the Common Lisp implementations out there got started.  It wasn't
  a miracle then, and it won't be a miracle now.  Just f-ing do it.

[ If I sound grumpy, it is only because I have come across too many idiots of
  the "it can't be done" persuasion lately, the kind of managers who have an
  aquarium in their office because fifteen brains think better than one.  ]
-- 
  Guide to non-spammers: If you want to send me a business offer, please be
  specific and do not put "business offer" in the Subject header.  If it is
  urgent, do not use the word "urgent".  If you need an immediate answer,
  give me a reason, do not shout "for your immediate attention".  Thank you.
From: Joel Ray Holveck
Subject: Re: Upper limits of CL
Date: 
Message-ID: <y7c3cviimtg.fsf@sindri.juniper.net>
> So pardon my cynical twist, but what are you doing with that
> 20,000×20,000 double-precision floating point matrix you say you
> need to invert _today_?  If you answer "nutt'n, I jus kinda wondered
> what it'd be like, you know", you should be very happy that I am
> most likely more than 3000 miles away from you, or I would come over
> and slap you hard.

Easy, Erik!  The OP made it clear that this was a curiosity thing; he
didn't indicate that it was an immediate need.  I see no need to get
upset.

Cheers,
joelh
From: Erik Naggum
Subject: Re: Upper limits of CL
Date: 
Message-ID: <3233584671500403@naggum.net>
* Joel Ray Holveck
| Easy, Erik!  The OP made it clear that this was a curiosity thing; he didn't
| indicate that it was an immediate need.  I see no need to get upset.

  Like I need a reason to get upset.  :)

  IMNSHO, we are at a point of technological development where the hardware is
  so extremely underutilized that it is insulting to everybody involved to ask
  for more.  This is like the pampered child who whines for another soft pillow
  to keep from actually moving his body to correct the inevitable pain of _not_
  moving.  Just imagine what the engineers who worked miracles with 16K could
  have done with the gigabyte now wasting away in my computer!  I still have
  only two 600MHz Pentium III processors with that memory, which is slow by
  today's standards, so the otherwise great website for the movie Minority
  Report takes real effort for the processors to render, the Flash animation
  is jerky, and when Mozilla needs that last 200M block of memory and
  the swap disk lamp starts flashing, I am reminded of the stupid ad for some
  German car maker: "The new <car> has more computing power than it took to
  send a man to the moon!" (I think my trusty old HP48GX does, too), and my
  first thought was "and yet you are stuck here on earth?"

  I have to wonder what went wrong, and where, when I get a 900K message
  encoded with BASE64 and MIME and some HTML crap that only contains a
  _compressed_ archive of two Word documents holding a very nice invitation --
  a message that would have been only about 60K of network traffic had it been
  made into a PNG graphic and they had sent me a URL to it, not to mention
  that it would have taken approximately a millionth as much processing power
  to show it to me that way as it took to decode the Word idiocy.  And since I
  do not actually have Word, either, it probably looks much less ...
  inviting.  Then I keep thinking that I will be stuck here on earth for a
  _long_ while.

  Curiosity is not a bad thing.  However, too much idle curiosity and too
  little engineering-style or researcher-style curiosity _is_ a bad thing.
  Idle curiosity makes people lose track of any _purpose_ to their thinking.
  With all the work we have done to improve information technology, one would
  think that it should be easier to maintain a sustained focus, but not so.
  Concentration and attention are in short supply all over the Western world,
  mostly because of what I consider the worst plague on mankind ever: rampant
  and uncontrolled information pollution in the form of advertising, but also
  because the energy required to maintain and sustain focus has increased in
  our culture.  Cognitive overload causes a constant stress level that taxes
  people's ability to stay focused even on their conscious purposes.  It is
  like high caffeine "poisoning", except a rambling flow of thought and speech
  is no longer considered a symptom of anything wrong.  Idle curiosity is just
  like that rambling flow of thought that produces exactly nothing worthwhile.
-- 
  Guide to non-spammers: If you want to send me a business proposal, please be
  specific and do not put "business proposal" in the Subject header.  If it is
  urgent, do not use the word "urgent".  If you need an immediate answer, give
  me a reason, do not shout "for your immediate attention".  Thank you.
From: Paolo Amoroso
Subject: Re: Upper limits of CL
Date: 
Message-ID: <uuQZPRVQpGnKn8ieyXCV4WrnERam@4ax.com>
On Thu, 20 Jun 2002 17:57:52 GMT, Erik Naggum <····@naggum.net> wrote:

>   moving.  Just imagine what the engineers who worked miracles with 16K could
>   have done with the gigabyte now wasting away in my computer!  I still have

Examples of what has been done with even less than 16K can be found in:

  Computers in Spaceflight: The NASA Experience
  http://www.hq.nasa.gov/office/pao/History/computers/Compspace.html


>   the swap disk lamp starts flashing, I am reminded of the stupid ad for some
>   German car maker: "The new <car> has more computing power than it took to
>   send a man to the moon!" (I think my trusty old HP48GX does, too), and my
>   first thought was "and yet you are stuck here on earth?"

:D For more info on the real thing, see:

  Apollo Guidance Computer
  http://hrst.mit.edu/hrs/apollo/public


Paolo

P.S.
By the way, the Apollo spacecraft had a "CDR" :)
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://www.paoloamoroso.it/ency/README
From: Joel Ray Holveck
Subject: Re: Upper limits of CL
Date: 
Message-ID: <y7cr8iq4vd6.fsf@sindri.juniper.net>
> Just imagine what the engineers who worked miracles with 16K could
> have done with the gigabyte now wasting away in my computer!

This much I agree with.

In an amusing twist of irony, you know the old quip (I think from Alan
Perlis):
  "Lisp programmers know the value of everything, and the cost of nothing."

These days, I tend to find systems written in Lisp to take up rather
less memory than systems in other languages.  I recently saw a bunch
of jaws hit the table when I mentioned that a program I'm developing
(in Lisp) has a 9MB dynamic dataset size, including the code.  Most of
the other coders (used to other languages) had expected something in
the 25-100 MB range.
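
(If you want the corresponding figure for your own image, the standard
ROOM function reports heap usage; the output format is
implementation-dependent, but at the prompt it is just

  * (room)      ; summary of dynamic-space usage
  * (room t)    ; maximally verbose breakdown

either of which is enough for this kind of ballpark comparison.)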

I normally attribute this to the newest crop of coders who don't know
(and frequently don't care) how things work under the hood.  It
doesn't take a lot of in-depth analysis to keep things reasonably
tight, just a basic familiarity with The Way Things Work to sit in the
back of your head while you code.

> Curiosity is not a bad thing.  However, too much idle curiosity and too
> little engineering-style or researcher-style curiosity _is_ a bad thing.

I'm going to take this at face value, which is different than how I
read it the first time through.  I think you're saying that idle
curiosity (investigation without a purpose) is bad, unless it's mixed
in with a generous dose of researcher-style curiosity (investigation
with a purpose).  I'm not sure whether I quite agree, but I'm a little
unsure about the meaning of your terms.

I engage in a lot of both styles of curiosity, and they both pay off
regularly.  I first started learning Lisp because of a "I wonder what
it's like" thought.  I'm now using it to write programs that people
thought were unworkable.  I first picked up Principles of AI
Programming because I wondered what it had to offer.  A few months
later, I adapted one of the algorithms I learned about from it to form
the core of a major project at work.  I constantly find myself using
things that I first learned about as a result of idle curiosity.

Certainly, when I investigate something new, I think of how I can
apply it to whatever jobs I have at hand.  But just because I don't
have an immediate application doesn't keep me from investigating.

> Idle curiosity is just like that rambling flow of thought that
> produces exactly nothing worthwhile.

I'm still hazy on the definitions here, but if by "idle curiosity" you
mean "investigation without a definite, immediate, application", then
I'm going to have to disagree.  I named a couple of counterexamples
from my own experiences above, and I've found that what I've learned
without an immediate purpose (at that time) has helped me time and
time again.  It provides me with a vast toolkit that I can use to
tackle new and unusual problems, and gives me a broader perspective
that I can use to deal with new situations.

But that's just my experience.

Cheers,
joelh
From: Erik Naggum
Subject: Re: Upper limits of CL
Date: 
Message-ID: <3234315101862564@naggum.net>
* Joel Ray Holveck
| I'm going to take this at face value, which is different than how I read it
| the first time through.  I think you're saying that idle curiosity
| (investigation without a purpose) is bad, unless it's mixed in with a
| generous dose of researcher-style curiosity (investigation with a purpose).

  The kind of idle curiosity I refer to is that of the person who calls up an
  ISP and wonders what it would cost to get a 100Mbps pipe with an average
  load of 20Mbps, but who has no intention of buying _anything_ -- he would
  just sort of like to know, without saying so up front, because he fears he
  would not get the right answer if he did not pretend he was buying.  But I
  guess I would not believe such people existed, either, if I had not
  experienced them first hand.  Idle curiosity is the kind you would like to
  invoice people for after it has wasted somebody's paid time.
-- 
  Guide to non-spammers: If you want to send me a business proposal, please be
  specific and do not put "business proposal" in the Subject header.  If it is
  urgent, do not use the word "urgent".  If you need an immediate answer, give
  me a reason, do not shout "for your immediate attention".  Thank you.
From: Robert Monfera
Subject: Re: Upper limits of CL
Date: 
Message-ID: <kNdQ8.467$zN4.510457@news2.news.adelphia.net>
"Erik Naggum" <····@naggum.net> wrote in message
·····················@naggum.net...

|   The only applications I know of that require multiple
|   gigabytes of memory (and then tens of gigabytes) are those that require
|   multiple terabytes of disk space [...]

I once had a client who happened to run a data warehouse on 32 processors,
many hundreds of GB (if not a few TB) hard drive, ~16GB of memory and an SQL
database manager.  They needed these resources because the database was very
large, and queries were demanding and numerous.  Part of the reason for the
large database was denormalization, which is a common data warehousing
tradeoff between database size and throughput, amongst other things.

As an experiment, I performed an informal study of how much _information_
was really in the database and I concluded that an equally informative, but
well-modeled database would fit entirely in ~6GB of memory and drastically
decrease query latency.  But much of the speed and flexibility benefits and
ease of development are lost if you have to interface with an SQL database,
even an in-memory one, or have to work around system limitations such as
image and array sizes.

A group of applications does require multiple gigabytes of physical memory
and does not require much or any disk space.  I'd rather just want a big RAM
as an alternative to scaling up other system components (processors, I/O
bandwidth) to cope with the task or spend time and money to manually write a
paging system.  Equally, I'd rather use a quality Lisp implementation and
work with the vendor than to write one just because I happened to outgrow a
decades-old system limitation originating at a time when memory was orders
of magnitude more expensive than today, and Lisp was considered "big", or
because the system uses moving parts (!) to fetch data.  We will
outgrow the disk the same way we outgrew tape, drum memory and mechanical
adders.

Lisp relieves the developer of manual memory management.  I then don't want
it to come back through the back door in the form of manual virtual memory
management.  So the lispy thing is to throw in as much memory as the working
set requires, if it is a bounded value, and it is economical.  The cost is
often nothing relative to the savings on other hardware specs, not to
mention development costs.  And maybe the memory is already available.  The
other lispy thing is, like bignums, to be safe and free of imposed
limitations.  For these reasons, I was glad to hear about progress from
Duane on their 64-bit products.

(Unfortunately, the flip side of the RAM-commoditization coin is the widening
gap between RAM speed and cache speed - if the "new hard drive" is the RAM,
then the "new memory" is the processor cache.)

Robert
From: Tim Bradshaw
Subject: Re: Upper limits of CL
Date: 
Message-ID: <ey3hejya1ru.fsf@cley.com>
* Erik Naggum wrote:
>   This was all worth it because it takes approximately one day of
>   actual production from an oil well to pay for all the computing
>   power necessary to punch the right hole in the ground.  The whole
>   country of Norway can run for twice as long as it takes to pump up
>   the oil, from the _taxation_ on the _profits_ alone, socialist
>   plan economy and social security and defense and everything.  This
>   is not some jerkoff hobbyist whining about not getting a 64-bit
>   Common Lisp -- this is big enough to invest 100 man-years to
>   develop dedicated programming languages and spawning moderately
>   large companies just to cater to a single industry of a _very_
>   small number of companies.  (We have no mom-and-pop oil rigs in
>   Norway.)  It would be cheaper to implement a Common Lisp system of
>   your own for this kind of operation than to ask someone else to do
>   it for you.  (In an unrelated industry, that is just how Erlang
>   got started.)  The moral of this long-winded example is this: If
>   you really need to invert a 20,000 by 20,000 matrix of doubles,
>   and it is not just some kind of academic masturbation, the memory
>   and Common Lisp and 64-bit hardware and whatever else you can come
>   up with on the IT side are going to be minuscule compared to the
>   rest of the costs of the system.

I spend half my life trying to batter this into people.  People have
*really strange ideas* about what is expensive and what is cheap.
I've, seriously, argued with people who thought it was a smart idea to
go to an investment bank and suggest that they could save lots of
money by running their transactional database on a farm of commodity
PCs instead of gold-plated hardware.  Millions of dollars an hour go
through these things: the cost of hardware is *so* not an issue.  You
hear similar stuff about telco billing systems...  

And 64bit machines are expensive.  Right.

--tim
From: Barry Wilkes
Subject: Re: Upper limits of CL
Date: 
Message-ID: <87k7oui73b.fsf@orton.bew.org.uk>
Tim Bradshaw <···@cley.com> writes:

> I spend half my life trying to batter this into people.  People have
> *really strange ideas* about what is expensive and what is cheap.
> I've, seriously, argued with people who thought it was a smart idea to
> go to an investment bank and suggest that they could save lots of
> money by running their transactional database on a farm of commodity
> PCs instead of gold-plated hardware.  Millions of dollars an hour go
> through these things: the cost of hardware is *so* not an issue.  You
> hear similar stuff about telco billing systems...  

I agree with this comment as far as TP systems go. But there are other systems
which are important to investment banks which are basically just users of this
data (for example, risk analysis). For these, cost is definitely an issue. It is
very difficult to justify the cost of the gold plated hardware for these
systems when commodity hardware gives so much more bang per buck. 

Barry.

-- 
If in the last few years you haven't discarded a major opinion or  
acquired a new one, check your pulse.  You may be dead.

-- Gelett Burgess (1866-1951)
From: Patrick W
Subject: Re: Upper limits of CL
Date: 
Message-ID: <WALR8.2454$5l4.87834@ozemail.com.au>
"Tim Bradshaw" <···@cley.com> wrote in message
····················@cley.com...

> I spend half my life trying to batter this into people.  People have
> *really strange ideas* about what is expensive and what is cheap.

I found a ridiculous example of this the other day. The author of a tech mag
article argued that, when Sun Microsystems needed a widget toolkit for their
StarOffice suite, they chose GTK over Qt because they would have to _pay_
for commercial Qt licences.
From: Raymond Toy
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4n3cvbjyh5.fsf@rtp.ericsson.se>
>>>>> "Patrick" == Patrick W <······@yahoo.com.au> writes:

    Patrick> "Tim Bradshaw" <···@cley.com> wrote in message
    Patrick> ····················@cley.com...

    >> I spend half my life trying to batter this into people.  People have
    >> *really strange ideas* about what is expensive and what is cheap.

    Patrick> I found a ridiculous example of this the other day. The author of a tech mag
    Patrick> article argued that, when Sun Microsystems needed a widget toolkit for their
    Patrick> StarOffice suite, they chose GTK over Qt because they would have to _pay_
    Patrick> for commercial Qt licences.

Do OpenOffice and StarOffice share code in the same way that Mozilla
and Netscape share code?  That would be a really good incentive to
choose a free widget toolkit.

Maybe a commercial Qt license says $100 per StarOffice sold.  That
might be a problem when Sun is trying to sell StarOffice for $75.  Not
having looked at the actual cost of a Qt license, I'm totally guessing.

What I find strange is that I know of places where they refuse to buy
a few $5-10 K Sun boxes for individuals who probably cost the company
a quarter million each with overhead.  But then they don't even blink
an eye when you tell them you need a quarter million dollar piece of
protocol test equipment.  Of course, without the equipment, you can't
test what you've made.  But if the people don't have fast enough
computers to do the work, your products are even later than you might
wish, which then costs you tens of millions in lost sales, etc.  

I have no understanding of how management thinks.  Guess I'll never be
a manager.

Now that is a *really strange idea* of what's expensive and what's
cheap.

Ray
From: Joe Marshall
Subject: Re: Upper limits of CL
Date: 
Message-ID: <Lm_R8.147689$nZ3.64308@rwcrnsc53>
"Raymond Toy" <···@rtp.ericsson.se> wrote in message ···················@rtp.ericsson.se...
>
> I have no understanding of how management thinks.

You're starting from the false premise that management thinks at all.
From: Erik Naggum
Subject: Re: Upper limits of CL
Date: 
Message-ID: <3234002743762019@naggum.net>
* Joe Marshall
| You're starting from the false premise that management thinks at all.

  Oh, management thinks.  It just does not think about things that matter to
  non-management.  There is lots and lots of evidence of thinking managers,
  but they evidently value wildly different things than non-management.  E.g.,
  in a huge corporation from which several of my friends have fled recently,
  management at all levels have generous bonus programs, thought to be a kind
  of motivator, but it leads to a very curious metric for all ideas that are
  offered them: "How does this affect my bonus?"  Needless to say, when bonuses
  are paid at the end of every quarter, any idea that fails to promise a bonus
  increase for that particular manager in the current or next quarter is just
  ignored or, worse yet, actively blocked from propagating upwards because
  the lowest-level manager does not want others to get a bonus for his work.
  It is somewhat like giving children candy for everything they do to motivate
  them -- you get to pay for it in dental health bills.  Generally speaking,
  they provide some evidence to the theory that motivation does not motivate,
  it only demotivates that which is not explicitly motivated.  Management is
  generally running on the wrong kinds of stimulators and motivators.
-- 
  Guide to non-spammers: If you want to send me a business proposal, please be
  specific and do not put "business proposal" in the Subject header.  If it is
  urgent, do not use the word "urgent".  If you need an immediate answer, give
  me a reason, do not shout "for your immediate attention".  Thank you.
From: Raymond Toy
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4nk7onice4.fsf@rtp.ericsson.se>
>>>>> "Erik" == Erik Naggum <····@naggum.net> writes:

    Erik> * Joe Marshall
    Erik> | You're starting from the false premise that management thinks at all.

    Erik>   Oh, management thinks.  It just does not think about things that matter to
    Erik>   non-management.  There is lots and lots of evidence of thinking managers,
    Erik>   but they evidently value wildly different things than non-management.  E.g.,
    Erik>   in a huge corporation from which several of my friends have fled recently,
    Erik>   management at all levels have generous bonus programs, thought to be a kind
    Erik>   of motivator, but it leads to a very curious metric for all ideas that are
    Erik>   offered them: "How does this affect my bonus?"  Needless to say, when bonuses
    Erik>   are paid at the end of every quarter, any idea that fails to promise a bonus
    Erik>   increase for that particular manager in the current or next quarter is just
    Erik>   ignored or, worse yet, actively blocked from propagating upwards because
    Erik>   the lowest-level manager does not want others to get a bonus for his work.
    Erik>   It is somewhat like giving children candy for everything they do to motivate
    Erik>   them -- you get to pay for it in dental health bills.  Generally speaking,
    Erik>   they provide some evidence to the theory that motivation does not motivate,
    Erik>   it only demotivates that which is not explicitly motivated.  Management is
    Erik>   generally running on the wrong kinds of stimulators and motivators.

Yeah, this really sucks.  In the meantime, they just grind the workers
into total apathy and eventual departure, thereby losing all the good
guys they had, so now they have no hope of actually getting anything done
to get their big bonuses, except that they probably will because they
saved the company big bucks by making everyone leave and didn't have
to pay severance. :-)

I've always felt that business management should be paid the same way
sports managers are paid.  The players (i.e. me! :-) ) get paid the
big bucks because they actually make things happen.  After all, I
don't go to a game to see a manager shouting at everyone; I go to see
the players do their magic.

Ray, the worker bee
From: Joe Marshall
Subject: Re: Upper limits of CL
Date: 
Message-ID: <Hz2S8.308729$352.31772@sccrnsc02>
"Raymond Toy" <···@rtp.ericsson.se> wrote in message ···················@rtp.ericsson.se...
>
> I've always felt that business management should be paid the same way
> sports managers are paid.  The players (i.e. me! :-) ) get paid the
> big bucks because they actually make things happen.  After all, I
> don't go to a game to see a manager shouting at everyone; I go to see
> the players do their magic.

I'm ready to endorse Nike for the appropriate remuneration.
From: Patrick W
Subject: Re: Upper limits of CL
Date: 
Message-ID: <bp0S8.3020$5l4.102680@ozemail.com.au>
"Raymond Toy" <···@rtp.ericsson.se> wrote in message
···················@rtp.ericsson.se...

> Does OpenOffice and StarOffice share code in the same way that Mozilla
> and Netscape share code?  That would be a really good incentive to
> choose a free widget toolkit.

There may be some licensing issues[*] that make GTK+ a better choice, but I
was commenting on the notion that Qt is prohibitively expensive to Sun
Microsystems. (A Qt license is comparable to buying a copy of Visual C++.
It's a one-off payment. No royalties involved).

[*] A commercial Qt license is only required for proprietary products. Qt is
also licensed under the GPL, but only the X11 version is GPL'd. A no-cost
"non-commercial" version exists for Windows, which can be used in products
licensed under the LGPL or a BSD-style license. OpenOffice is distributed
under the LGPL (which presents no problem for use with Qt), but it's also
released under the "Sun Industry Standards Source License" which may or may
not be incompatible with either the GPL'd Qt or the "non-commercial" Qt. (I
haven't checked, and don't really care) ;-)
From: Rob Warnock
Subject: Re: Upper limits of CL
Date: 
Message-ID: <aesae0$2j36i$1@fido.engr.sgi.com>
Erik Naggum  <····@naggum.net> wrote:
+---------------
| Besides, if you really need gigabyte upon gigabyte of memory
| and the hardware to utilize it, the only thing between you and
| satisfying that need has been money for at least 10 years.
+---------------

Exactly so, see:

	<URL:http://www.sgi.com/origin/3000/>
	<URL:http://www.sgi.com/products/storage/9400.html>

With adequate cash, you can buy a COTS system (do, please!! -- and list
me as a referral!) with 512 CPUs [single system image, not a cluster]
with 1 TB of main RAM [flat address space -- each CPU sees all of it]
and dozens of terabytes of disk. Oh, and measured filesystem read rates
as high as 7.5 GB/s on a single Unix file descriptor.

Unfortunately(?), applications that make effective *use* of all that
horsepower at once in a single program tend to be written in Fortran
(usually with OpenMP or MPI), not Common Lisp.

+---------------
| So pardon my cynical twist, but what are you doing with that
| 20,000x20,000 double-precision floating point matrix you say
| you need to invert _today_?
+---------------

People who buy our systems invert, multiply, add, and diagonalize
such matrices all the time, sometimes even larger ones. Oil & gas
exploration, weather prediction, drug modelling, simulated crash
analysis, computational fluid dynamics (CFD) [simulated wind
tunnels], etc., etc., can all make use of that scale of math.
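
(Back-of-the-envelope, to put a number on "that scale of math": a dense
LU factorization costs roughly 2n^3/3 floating-point operations, and a
full inversion a small multiple of that, so

  * (/ (* 2 (expt 20000 3)) 3.0)      ; => roughly 5.3e12

i.e. on the order of five trillion flops per 20,000×20,000 factorization,
before the memory problem even enters the picture.)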

+---------------
| And if you _are_ doing serious stuff of this magnitude, why do you even
| bother with run-of-the-mill Common Lisp implementations on stock hardware?
| Implement your own goddamn Common Lisp system optimized for your hardware
| and your domain-specific language and other needs.
+---------------

I'd agree, in the case of a single-CPU problem. But isn't using
Common Lisp effectively on multiple processors still considered
a "hard" problem? [I'm referring to running a "single program"
that uses large numbers of CPUs, not a large number of separate
communicating Unix processes -- that's easy.] Allocation & garbage
collection alone would seem to be... "interesting".

[If anyone knows of an implementation that can run the mutator
on >25 CPUs while running the garbage collector on, say, ~5 more
CPUs, I'd be very interested in hearing about it...]


-Rob

-----
Rob Warnock, 30-3-510		<····@sgi.com>
SGI Network Engineering		<http://www.rpw3.org/>
1600 Amphitheatre Pkwy.		Phone: 650-933-1673
Mountain View, CA  94043	PP-ASEL-IA

[Note: ·········@sgi.com and ········@sgi.com aren't for humans ]  
From: Joel Ray Holveck
Subject: Cluster CL [Was: Upper limits of CL]
Date: 
Message-ID: <y7cvg824wqr.fsf_-_@sindri.juniper.net>
>> Besides, if you really need gigabyte upon gigabyte of memory
>> and the hardware to utilize it, the only thing between you and
>> satisfying that need has been money for at least 10 years.
> Exactly so, see:
> 	<URL:http://www.sgi.com/origin/3000/>
> 	<URL:http://www.sgi.com/products/storage/9400.html>
[snip]
> Unfortunately(?), applications that make effective *use* of all that
> horsepower at once in a single program tend to be written in Fortran
> (usually with OpenMP or MPI), not Common Lisp.

I've had a hard time finding CLs for cluster computing.  Does anybody
know if this sort of thing was ever developed as a product?

Cheers,
joelh
From: Christopher C. Stacy
Subject: Re: Cluster CL [Was: Upper limits of CL]
Date: 
Message-ID: <usn36fy0v.fsf@grant.org>
>>>>> On 28 Jun 2002 19:58:04 -0700, Joel Ray Holveck ("Joel") writes:

 >>> Besides, if you really need gigabyte upon gigabyte of memory
 >>> and the hardware to utilize it, the only thing between you and
 >>> satisfying that need has been money for at least 10 years.
 >> Exactly so, see:
 >> <URL:http://www.sgi.com/origin/3000/>
 >> <URL:http://www.sgi.com/products/storage/9400.html>
 Joel> [snip]
 >> Unfortunately(?), applications that make effective *use* of all that
 >> horsepower at once in a single program tend to be written in Fortran
 >> (usually with OpenMP or MPI), not Common Lisp.

 Joel> I've had a hard time finding CLs for cluster computing.  Does anybody
 Joel> know if this sort of thing was ever developed as a product?

Sometimes "comp.lang.lisp" looks like a "cluster-CL".
From: Carl Shapiro
Subject: Re: Cluster CL [Was: Upper limits of CL]
Date: 
Message-ID: <ouy4rfmxn1n.fsf@panix3.panix.com>
Joel Ray Holveck <·····@juniper.net> writes:

> >> Besides, if you really need gigabyte upon gigabyte of memory
> >> and the hardware to utilize it, the only thing between you and
> >> satisfying that need has been money for at least 10 years.
> > Exactly so, see:
> > 	<URL:http://www.sgi.com/origin/3000/>
> > 	<URL:http://www.sgi.com/products/storage/9400.html>
> [snip]
> > Unfortunately(?), applications that make effective *use* of all that
> > horsepower at once in a single program tend to be written in Fortran
> > (usually with OpenMP or MPI), not Common Lisp.
> 
> I've had a hard time finding CLs for cluster computing.  Does anybody
> know if this sort of thing was ever developed as a product?

Yes.  Ask Google about either MPI and Lisp or PVM and Lisp.
From: Petter Gustad
Subject: Re: Cluster CL [Was: Upper limits of CL]
Date: 
Message-ID: <87hejg59lc.fsf@filestore.home.gustad.com>
Joel Ray Holveck <·····@juniper.net> writes:

> >> Besides, if you really need gigabyte upon gigabyte of memory
> >> and the hardware to utilize it, the only thing between you and
> >> satisfying that need has been money for at least 10 years.
> > Exactly so, see:
> > 	<URL:http://www.sgi.com/origin/3000/>
> > 	<URL:http://www.sgi.com/products/storage/9400.html>
> [snip]
> > Unfortunately(?), applications that make effective *use* of all that
> > horsepower at once in a single program tend to be written in Fortran
> > (usually with OpenMP or MPI), not Common Lisp.
> 
> I've had a hard time finding CLs for cluster computing.  Does anybody
> know if this sort of thing was ever developed as a product?

Try to take a look at NetCLOS at:
http://lki-www.informatik.uni-hamburg.de/~hotz/homepage.html

Petter
-- 
________________________________________________________________________
Petter Gustad   8'h2B | (~8'h2B) - Hamlet in Verilog   http://gustad.com
From: Tim Bradshaw
Subject: Re: Upper limits of CL
Date: 
Message-ID: <ey33cvoh8lz.fsf@cley.com>
* Christopher Browne wrote:

> Do you have a computer that allows you to address that much memory in
> one process?

Yes, several.  Other than the oldest of them they were all less than
£2,000 new. Although I don't think any of the (small) boxes we have
can physically have more than 4GB, so this would involve paging unless
the space was sparse.  But we buy very low-end boxes.

> 64 bit systems that have all the hardware to support huge amounts of
> memory _aren't_ cheap.

They're not expensive, either, other than for silly values of
expensive.  I mean, how much is it going to cost you to design the
manual-paging scheme you need to solve some problem that won't fit in
2-4GB?

> The price immediately jumps sharply, the RAM probably _isn't_ cheap,
> and you probably get to gulp on $10K USD before you've gotten started,
> and there may not be a suitable CL yet for the system you've bought.

That's a silly value of expensive, I guess.  However if you're really
constrained, start looking for used boxes.  I nearly bought (and
regret not buying) an ex-lease Sun E3k last year, which had, I think
6GB of memory in it (and 4 processors), for £3,000.

> The problem is that no 64 bit architecture has leapt out as being a
> really good "risk."  None are cheap; all are _highly_ tied to specific
> vendors; there has been lots of FUD since the late '90s about "Merced"
> being the only 64 bit architecture that could conceivably survive.
> Note that there is a serious dearth of vendors hawking it, even now.

This is only really true on the desktop. Sun and IBM and HP (and maybe
SGI) have sold billions of dollars worth of 64bit kit over the last
few years.

--tim
From: Christopher Browne
Subject: Re: Upper limits of CL
Date: 
Message-ID: <aefscj$6bd4n$1@ID-125932.news.dfncis.de>
The world rejoiced as Tim Bradshaw <···@cley.com> wrote:
> * Christopher Browne wrote:
>> Do you have a computer that allows you to address that much memory in
>> one process?

> Yes, several.  Other than the oldest of them they were all less than
> £2,000 new. Although I don't think any of the (small) boxes we have
> can physically have more than 4GB, so this would involve paging
> unless the space was sparse.  But we buy very low-end boxes.

If they don't have slots for >4GB of memory, then I have to protest
that they aren't computers that "allow you to address that much memory
in one process."

I've got an Alpha box that could cope with having 1.5GB of memory on
it; I don't claim that's >4GB, despite the fact that the CPU would be
_theoretically_ capable of the job.  I can't put 4GB of RAM in the box
(except via the futile exercise of taping a bag containing an extra
20GB to the side of it; that's not _usable_ memory :-)).

>> 64 bit systems that have all the hardware to support huge amounts of
>> memory _aren't_ cheap.

> They're not expensive, either, other than for silly values of
> expensive.  I mean, how much is it going to cost you to design the
> manual-paging scheme you need to solve some problem that won't fit
> in 2-4GB?

The machines with slots for 16GB of RAM don't sell for $800, and more
than likely don't use the RAM that's cheap.

>> The price immediately jumps sharply, the RAM probably _isn't_ cheap,
>> and you probably get to gulp on $10K USD before you've gotten started,
>> and there may not be a suitable CL yet for the system you've bought.
>
> That's a silly value of expensive, I guess.  However if you're really
> constrained, start looking for used boxes.  I nearly bought (and
> regret not buying) an ex-lease Sun E3k last year, which had, I think
> 6GB of memory in it (and 4 processors), for £3,000.

I'm assuming _new_ hardware here.  After all, _somebody_ will be
paying full price.

I agree on the merits of the E3K; that would indeed be a slick
addition to the stable.

>> The problem is that no 64 bit architecture has leapt out as being a
>> really good "risk."  None are cheap; all are _highly_ tied to specific
>> vendors; there has been lots of FUD since the late '90s about "Merced"
>> being the only 64 bit architecture that could conceivably survive.
>> Note that there is a serious dearth of vendors hawking it, even now.
>
> This is only really true on the desktop. Sun and IBM and HP (and maybe
> SGI) have sold billions of dollars worth of 64bit kit over the last
> few years.

I _was_ thinking of servers when I wrote that paragraph.  SPARC,
Alpha, MIPS, PPC, IA-64.  People don't think of them as "desktop"
platforms.

I got a response via email suggesting AMD "Sledgehammer" as a possible
upcoming alternative.  

It is indeed a very attractive idea that in another year, there may be
machines that are compatible with IA-32 (thus making them acceptable
to the teeming hordes of desktop folk), and which offer 64 bit
extensions.

Two objections still emerge:
 1.  It's not available now, so it's no answer, now.
 2.  It's not obvious that motherboards supporting 8GB or 16GB of RAM
     will be immediately available upon release of the architecture.

(Arguably there's #3: It may take time for there to be an x86-64 "code
generator" for GCC and/or Python...)

I have periodically griped about much the same issue vis-a-vis
"alternative" CPU architectures and their usability for Linux.  Linux
has supported StrongARM for some years now, but it has been
problematic to get inexpensive hardware for that.  Ditto for PPC.

You can get some cheap CPUs for some of these variant architectures;
if there's not a source of cheap motherboards, those CPUs are useless.
The only way to get a Linux/PPC system is to buy your hardware from
Apple (thus buying MacOS), or to pay _BIG_ bucks for third party
"embedded system" hardware.

In comparison, I can walk down the street and get an IA-32 motherboard
that supports 2GB of RAM for maybe a couple hundred bucks.  But it
won't support 8GB.
-- 
(concatenate 'string "cbbrowne" ·@cbbrowne.com")
http://www3.sympatico.ca/cbbrowne/
:FATAL ERROR -- ERROR IN USER
From: Thomas F. Burdick
Subject: Re: Upper limits of CL
Date: 
Message-ID: <xcvd6uscp65.fsf@conquest.OCF.Berkeley.EDU>
Christopher Browne <········@acm.org> writes:

> The world rejoiced as Tim Bradshaw <···@cley.com> wrote:
> > * Christopher Browne wrote:
> >> Do you have a computer that allows you to address that much memory in
> >> one process?
> 
> > Yes, several.  Other than the oldest of them they were all less than
> > £2,000 new. Although I don't think any of the (small) boxes we have
> > can physically have more than 4GB, so this would involve paging
> > unless the space was sparse.  But we buy very low-end boxes.
> 
> If they don't have slots for >4GB of memory, then I have to protest
> that they aren't computers that "allow you to address that much memory
> in one process."
> 
> I've got an Alpha box that could cope with having 1.5GB of memory on
> it; I don't claim that's >4GB, despite the fact that the CPU would be
> _theoretically_ capable of the job.  I can't put 4GB of RAM in the box
> (except via the futile exercise of taping a bag containing an extra
> 20GB to the side of it; that's not _usable_ memory :-)).

Physical memory is one thing, process size is another.  I've written
software on IA-32 that had to do manual paging because of the process
size limit, and it would have been *so* much nicer/faster/easier to
just use a 64-bit architecture and let the kernel do the paging.
The memory access was more or less random within each 250MB chunk, but
not across all the several GB that the process wanted to address, so
paging would have been fine.

(I think this is relevant because didn't this start by wondering about
the usefulness of having a fully 64-bit lisp?)

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Carl Shapiro
Subject: Re: Upper limits of CL
Date: 
Message-ID: <ouyelf84a53.fsf@panix3.panix.com>
Christopher Browne <········@acm.org> writes:

[...]

> If they don't have slots for >4GB of memory, then I have to protest
> that they aren't computers that "allow you to address that much memory
> in one process."

This is an incredibly naive qualification.  While a low end 64-bit
machine may not have enough "slots" for over four gigabytes of RAM,
the processor and MMU can still address that amount memory.  For
example, unlike your average 32-bit machine a low-end 64-bit machine
can do such things as mmap() multi-gigabyte files without employing a
windowing hack.

> I've got an Alpha box that could cope with having 1.5GB of memory on
> it; I don't claim that's >4GB, despite the fact that the CPU would be
> _theoretically_ capable of the job.  I can't put 4GB of RAM in the box
> (except via the futile exercise of taping a bag containing an extra
> 20GB to the side of it; that's not _usable_ memory :-)).

I myself have an Alpha with "only" 1.5GB of memory, but I have tens of
gigabytes of virtual memory and I can mmap() my entire 72GB disk if I
genuinely feel the need to.  Since none of my applications have a
working set which exceeds the size of my physical memory, I am winning
big with a 64-bit machine which has well under 4GB of memory.

Do you actually have an application with a working set of over four
gigabytes?

[...]

> (Arguably there's #3: It may take time for there to be an x86-64 "code
> generator" for GCC and/or Python...)

Operating systems and compilers for x86-64 have been available for over a
year.  While they may not be production quality at this time, working
code generators for this architecture are old news.

http://www.x86-64.org/
From: Paul F. Dietz
Subject: Re: Upper limits of CL
Date: 
Message-ID: <3D0B978F.75829DCC@dls.net>
While supporting large objects may be hard in a 32 bit address space,
it's possible to support a very large address space (64 bits or larger)
on a 32 bit machine using a technique called 'pointer swizzling at page
fault time'.

http://citeseer.nj.nec.com/wilson92pointer.html

	Paul
From: Duane Rettig
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4fzznzoeu.fsf@beta.franz.com>
Carl Shapiro <·············@panix.com> writes:

> Christopher Browne <········@acm.org> writes:
> 
> > I've got an Alpha box that could cope with having 1.5GB of memory on
> > it; I don't claim that's >4GB, despite the fact that the CPU would be
> > _theoretically_ capable of the job.  I can't put 4GB of RAM in the box
> > (except via the futile exercise of taping a bag containing an extra
> > 20GB to the side of it; that's not _usable_ memory :-)).
> 
> I myself have an Alpha with "only" 1.5GB of memory, but I have tens of
> gigabytes of virtual memory and I can mmap() my entire 72GB disk if I
> genuinely feel the need to.  Since none of my applications have a
> working set which exceeds the size of my physical memory, I am winning
> big with a 64-bit machine which has well under 4GB of memory.
> 
> Do you actually have an application with a working set of over four
> gigabytes?

We have a customer of a customer who is up to a requirement for at least
6 GB for their dataset.  We had to provide a patch in order for that to
work (it was stupid, really; we still had a few int usages where longs were
required, and so our heap_sbrk calls were going negative when requesting
more than 2 GB at a time for new lisp heap).  I know of no others that are
up to datasets that large yet, but many are close, and it's only a matter
of time...

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Erik Naggum
Subject: Re: Upper limits of CL
Date: 
Message-ID: <3233156264920068@naggum.net>
* Christopher Browne
| In comparison, I can walk down the street and get an IA-32 motherboard
| that supports 2GB of RAM for maybe a couple hundred bucks.  But it won't
| support 8GB.

  You are aware that IA-32 has support for 36-bit addressing, right?  (The
  PSE36 CPU flag reports its presence in a particular processor.)  Chipsets
  with support for the full 64GB address space are certainly available, and
  Linux supports them in recent kernel versions, giving you 4GB of physical
  memory per process.  I am not so sure the raw need for 64-bit processors
  will be all that imminent, given the ability to provide so much RAM to
  the 32-bit processors.  Not that 32 bits is enough, of course, but we may
  find better ways to work with 64-bit units than switching the whole
  processor.  The only serious short-coming in modern implementations of
  IA-32 is the register starvation.  It is quite fascinating what Intel
  (and AMD) have been able to do with this architecture.
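  (On Linux, a quick sanity check for the flag from a running Lisp is to
  scan /proc/cpuinfo, which lists the CPU feature flags -- a minimal
  sketch:

    (with-open-file (s "/proc/cpuinfo")
      (loop for line = (read-line s nil)
            while line
            thereis (search "pse36" line)))

  which returns non-NIL when the kernel reports the pse36 feature.)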
-- 
  In a fight against something, the fight has value, victory has none.
  In a fight for something, the fight is a loss, victory merely relief.

  70 percent of American adults do not understand the scientific process.
From: Robert Monfera
Subject: Re: Upper limits of CL
Date: 
Message-ID: <0pOO8.126$1n4.223291@news2.news.adelphia.net>
"Erik Naggum" <····@naggum.net> wrote in message
·····················@naggum.net...

|   You are aware that IA-32 has support for 36-bit addressing, right?  (The
|   PSE36 CPU flag reports its presence in a particular processor.)  Chipsets
|   with support for the full 64GB address space are certainly available, and
|   Linux supports them in recent kernel versions, giving you 4GB of physical
|   memory per process.

I remember discussions about how old OS limits (max. memory size,
fragmentation) prevented lisp images from growing much beyond about 1GB.  With
4GB physical memory per process, is it likely that the new practical image
size limit on x86 approaches 4GB?

|   The only serious short-coming in modern implementations of
|   IA-32 is the register starvation.  It is quite fascinating what Intel
|   (and AMD) have been able to do with this architecture.

The Hammer may ease register starvation a good bit - even though they only
doubled the number of registers, it's worth more than that because the
number of general purpose registers just about tripled and the width is
doubled too (plus double of the 128-bit SSE registers).  Together, 32
registers aren't bad, and tens of millions of Hammers will be sold yearly.

AMD processors have long used internal register renaming, and probably they
have 64 or more internal registers.  It is thus noteworthy and somewhat
credible that AMD defended the choice of 16+16 registers, saying the
benefits of further increase diminish quickly beyond that point, and there
would be some drawbacks.

AMD introduced the RISC-in-a-CISC concept first, and now the Pentium 4
translates to RISC upon first memory read and its instruction cache retains
the RISC code.  (Probably VLIW or micro-ops are preferable terms over RISC.)
With some dramatization, we can say that the memory controller gets rid of
IA-32 and CISC-ness before the processor is even reached.

Robert
From: Raymond Toy
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4nznxwcfo6.fsf@edgedsp4.rtp.ericsson.se>
>>>>> "Robert" == Robert Monfera <·······@fisec.com> writes:

    Robert> "Erik Naggum" <····@naggum.net> wrote in message
    Robert> ·····················@naggum.net...

    Robert> |   You are aware that IA-32 has support for 36-bit addressing, right?  (The
    Robert> |   PSE36 CPU flag reports its presence in a particular processor.)  Chipsets
    Robert> |   with support for the full 64GB address space are certainly available, and
    Robert> |   Linux supports them in recent kernel versions, giving you 4GB of physical
    Robert> |   memory per process.

    Robert> I remember discussions about how old OS limits (max. memory size,
    Robert> fragmentation) prevented lisp images to grow much beyond about 1GB.  With
    Robert> 4GB physical memory per process, is it likely that the new practical image
    Robert> size limit on x86 approaches 4GB?

For CMUCL, the heap can be 2 GB on FreeBSD, 1.75 GB for the other BSDs
and Linux.  The remaining memory space goes to the static space, the
read-only space, the binding stack, and the control stack.  It also
has to leave some room for C code, C stack, and dynamic libraries.

Does this qualify as "approaches 4GB"? :-)
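
(As quick arithmetic, counting that heap in 1MB arrays:

    * (floor (* 2 1024 1024 1024) (* 1024 1024))
    2048
    0

so a 2GB heap has room for about two thousand one-megabyte arrays, less
per-object and GC overhead.)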

    Robert> |   The only serious short-coming in modern implementations of
    Robert> |   IA-32 is the register starvation.  It is quite fascinating what Intel
    Robert> |   (and AMD) have been able to do with this architecture.

    Robert> The Hammer may ease register starvation a good bit - even though they only
    Robert> doubled the number of registers, it's worth more than that because the
    Robert> number of general purpose registers just about tripled and the width is
    Robert> doubled too (plus double of the 128-bit SSE registers).  Together, 32
    Robert> registers aren't bad, and tens of millions of Hammers will be sold yearly.

Considering that MIPS, HP-PA, PPC, and Sparc all have 32 registers, I
think 32 is a good number.

Ray
From: Robert Monfera
Subject: Re: Upper limits of CL
Date: 
Message-ID: <3ZRO8.151$1n4.269356@news2.news.adelphia.net>
"Raymond Toy" <···@rtp.ericsson.se> wrote in message
···················@edgedsp4.rtp.ericsson.se...

| For CMUCL, the heap can be 2 GB on FreeBSD, 1.75 GB for the other BSDs
| and Linux.  The remaining memory space goes to the static space, the
| read-only space, the binding stack, and the control stack.  It also
| has to leave some room for C code, C stack, and dynamic libraries.
|
| Does this qualify as "approaches 4GB"? :-)

Can you actually create 1900 to 2000 instances of 1MB arrays?  I just consed
1024 such arrays in ACL on Windows (XP Professional has an OS-imposed limit
of 2GB per process), but as someone, probably Duane, pointed out, ACL now
requires a contiguous block of memory, and 1.1GB was the largest.  After some
tweaking, ACL and the OS behaved pretty well; I ran some tests looping
through the vectors and doing global GCs.
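
(The consing part was essentially this -- a sketch with illustrative
names, not the exact code:

    ;; Cons N one-megabyte double-float vectors and keep them all live.
    ;; 1MB here means 131072 double-floats at 8 bytes each.
    (defun cons-big-vectors (n)
      (loop repeat n
            collect (make-array (* 128 1024)
                                :element-type 'double-float
                                :initial-element 0d0)))

Keeping the collected list live prevents the GC from reclaiming anything
between allocations.)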

Robert
From: Raymond Toy
Subject: Re: Upper limits of CL
Date: 
Message-ID: <4nofe8d8i0.fsf@edgedsp4.rtp.ericsson.se>
>>>>> "Robert" == Robert Monfera <·······@fisec.com> writes:

    Robert> "Raymond Toy" <···@rtp.ericsson.se> wrote in message
    Robert> ···················@edgedsp4.rtp.ericsson.se...

    Robert> | For CMUCL, the heap can be 2 GB on FreeBSD, 1.75 GB for the other BSDs
    Robert> | and Linux.  The remaining memory space goes to the static space, the
    Robert> | read-only space, the binding stack, and the control stack.  It also
    Robert> | has to leave some room for C code, C stack, and dynamic libraries.
    Robert> |
    Robert> | Does this qualify as "approaches 4GB"? :-)

    Robert> Can you actually create 1900 to 2000 instances of 1MB arrays?  I just consed

I don't have BSD to try but on Linux you should be able to create 1500
or so such arrays.  However, I just tried that and it stops after 500
MB or so, despite my telling CMUCL to use more heap.  I'll have to
look into that.

Ray
From: Tim Bradshaw
Subject: Re: Upper limits of CL
Date: 
Message-ID: <ey3lm9gnye0.fsf@cley.com>
* Christopher Browne wrote:

> If they don't have slots for >4GB of memory, then I have to protest
> that they aren't computers that "allow you to address that much memory
> in one process."

Well.  I can write a program that allocates an array of 2^32
double-floats.  That program will run (it may even run in reasonable
time if the access patterns are not toxic).  That's my criterion for
`address that much memory in one process'.  Maybe I should have
phrased it as `have that much (virtual) linear address space in one
process', sorry.
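
(The experiment itself is a one-liner at the listener, subject to
ARRAY-TOTAL-SIZE-LIMIT and to how much virtual memory the OS will grant:

  * array-total-size-limit       ; implementation-dependent ceiling
  * (make-array (expt 2 32) :element-type 'double-float)

The array, if it can be built at all, occupies some 32GB.)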

> The machines with slots for 16GB of RAM don't sell for $800, and more
> than likely don't use the RAM that's cheap.

Yes, like I said `silly value of expensive'.

> I'm assuming _new_ hardware here.  After all, _somebody_ will be
> paying full price.

Sure, then it's 10k or something.  That's still not really an
expensive machine if you're getting paid to solve problems that need
significant memory (I realise that `expensive machine' has come to
mean `anything over $500', to many people: I'm just not one).  It is
out of the hobby market, I agree (but hobbyists can find slower such
machines if they are willing to pay for the power and live with the
noise...)

All I'm trying to say is that if you have a problem that needs
slightly more than 32bits of address space which is in any way
commercially interesting, then the cost of a machine, with memory, to
solve it, is not in any sense prohibitive.  Certainly, if the problem
is in the 4-16GB range hardware is likely to be cheap compared to the
time cost of implementing manual paging or whatever you need to do to
get the problem down to 2-4GB.

--tim
From: Robert Monfera
Subject: Re: Upper limits of CL
Date: 
Message-ID: <wGOO8.127$1n4.227654@news2.news.adelphia.net>
"Christopher Browne" <········@acm.org> wrote in message
···················@ID-125932.news.dfncis.de...

| I got a response via email suggesting AMD "Sledgehammer" as a possible
| upcoming alternative.

Sorry Christopher, I meant to post it here (see at the bottom).

| It is indeed a very attractive idea that in another year, there may be
| machines that are compatible with IA-32 (thus making them acceptable
| to the teeming hordes of desktop folk), and which offer 64 bit
| extensions.
|
| Two objections still emerge:
|  1.  It's not available now, so it's no answer, now.

Correct.

|  2.  It's not obvious that motherboards supporting 8GB or 16GB of RAM
|      will be immediately available upon release of the architecture.

Indeed, according to roadmaps, server motherboards will arrive several
months after desktop (<=4GB) versions for the 64-bit AMD chips.

Robert

----- Original Message -----
From: "Christopher Browne" <········@acm.org>
Newsgroups: comp.lang.lisp
Sent: Friday, June 14, 2002 1:14 PM
Subject: Re: Upper limits of CL

| In the last exciting episode, "Robert Monfera" <·······@fisec.com> wrote::

| > 4GB memory costs only a few hundred dollars, so addressing arrays
| > with 20-something bits (a few MB) is getting inadequate really fast.

| 64 bit systems that have all the hardware to support huge amounts of
| memory _aren't_ cheap.

IBM wants to sell you a 32-bit dual-processor capable server expandable to
4GB for $949 -

http://commerce.www.ibm.com/cgi-bin/ncommerce/CategoryDisplay?cgrfnbr=207255
0&smrfnbr=2072488&cntrfnbr=1&cgmenbr=1&cntry=840&lang=en_US&scrfnbr=73

(You may get 10% off if you ask or pay with Amex.)

For 2GB memory, add $600.  With 4GB memory, it is still less than $3000.  I
expect similar or lower prices and larger physical memory capability with
the upcoming Hammers, but even the $10k price you mention does not sound
horrible for deployment.

| The problem is that no 64 bit architecture has leapt out as being a
| really good "risk."  None are cheap; all are _highly_ tied to specific
| vendors; there has been lots of FUD since the late '90s about "Merced"
| being the only 64 bit architecture that could conceivably survive.
| Note that there is a serious dearth of vendors hawking it, even now.

The success of Hammer looks inevitable, simply by virtue of automatically
becoming the first mass-market 64-bit computer processor.  It does not need
64-bit applications; it just has to replace the K7 Athlon, which has 20% of
the market.  Reportedly Intel develops Pentiums in the same direction.

Whereas in the past 7-8 families of low-volume (thus expensive) 64-bit
processors and 32-bit server processors competed, now it reduces to x86,
PowerPC, Itanium and Sparc at the most, of which 3 are already supported by
Lisp implementations to a good extent, and the 4th (Itanium) hasn't started
selling yet.  Memory for them is getting really cheap.  _Laptops_ can have
1GB RAM now.

Robert
From: Gisle Sælensminde
Subject: Re: Upper limits of CL
Date: 
Message-ID: <slrnagm76r.8me.gisle@apal.ii.uib.no>
In article <··············@ID-125932.news.dfncis.de>, Christopher Browne wrote:
> A long time ago, in a galaxy far, far away, Greg Menke <············@mindspring.com> wrote:
>> On a Linux User Group list I'm a member of, the question just came up
>> about inverting a 20,000 x 20,000 matrix of floats and how it might be
>> done.  
>>
>> No doubt different CL environments have different limits on
>> performance.  I was wondering in general how well any of the current
>> generation might handle that large a dataset, given sufficient
>> hardware.  I was thinking of a situation where multiple arrays are
>> allocated to get around array-dimension-limit, array-total-size-limit,
>> etc.., so the aggregate might form the basis for a logical matrix of
>> sufficient size.
> 
> * (* 20000 20000 8)
> 3200000000
> 
> No matter what sort of system you use, that size of matrix will be
> problematic, as it pushes the edges of the permissible size of a
> process.
> 
> 8 byte floats will lead to memory consumption of ~3.2GB, which won't
> be a happy thing irrespective of what language you're working in.
 
In various numerical simulation problems, like weather prediction, climate
models, oil reservoir modeling, etc., matrices of size 1,000,000 x 1,000,000
are not uncommon. The resulting matrices will typically be sparse, with only a
small fraction of the elements non-zero. You can be sure that such matrices
are not represented as a 1,000,000 x 1,000,000 array.

On the other hand, such a matrix may have a million elements in one of its
dimensions, so a limit of 20,000 elements per dimension can be an unacceptable
limitation in some cases.
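
A minimal sketch of the usual alternative -- keep only the non-zero
entries, here in a hash table keyed on (row . column); the names are
illustrative, and real solvers use compressed row or column formats
instead:

  (defstruct sparse-matrix
    (entries (make-hash-table :test #'equal)))

  (defun sparse-ref (m i j)
    ;; Element at row I, column J; absent entries read as zero.
    (gethash (cons i j) (sparse-matrix-entries m) 0d0))

  (defun (setf sparse-ref) (value m i j)
    (setf (gethash (cons i j) (sparse-matrix-entries m)) value))

Storage then scales with the number of non-zero elements rather than
with n^2.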

--
Gisle Sælensminde ( ·····@ii.uib.no )

With sufficient thrust, pigs fly just fine. However, this is not
necessarily a good idea. It is hard to be sure where they are going
to land, and it could be dangerous sitting under them as they fly
overhead. (from RFC 1925)