How? Is there anything wrong with the concept? I thought S-expressions
were so easy to parse because of it. Is this really a problem for Lisp,
or is this more of a non-Lisper's automatic reaction to something he is
not used to?
What would the implementation of a Lisp without cons look like?
Would it have any advantage over the "archaic" one?
Anticomuna <············@uol.com.br> writes:
> How?
In no way.
> Is there anything wrong with the concept?
No.
> I thought S-expressions were so easy to parse because of it.
S-expressions are easy to parse because they've got a simple grammar:
expression ::= atom | '(' expression* ')' .
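A minimal sketch of a reader for exactly that grammar, in Common Lisp.
(READ-EXPR and READ-EXPR-LIST are made-up names, and the input is
assumed to be pre-tokenized into a list of strings; a real reader
would do the tokenizing too.)
  (defun read-expr (tokens)
    ;; Parse one expression; return it plus the unconsumed tokens.
    (if (string= (first tokens) "(")
        (read-expr-list (rest tokens) '())
        (values (intern (string-upcase (first tokens)))
                (rest tokens))))
  (defun read-expr-list (tokens acc)
    ;; Collect expressions until the matching close paren.
    (if (string= (first tokens) ")")
        (values (nreverse acc) (rest tokens))
        (multiple-value-bind (expr remaining) (read-expr tokens)
          (read-expr-list remaining (cons expr acc)))))
E.g. (read-expr '("(" "foo" "(" "bar" ")" ")")) returns (FOO (BAR)).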
> Is this really a problem for Lisp, or is this more of a non-Lisper's
> automatic reaction to something he is not used to?
The latter.
> What would the implementation of a Lisp without cons look like?
> Would it have any advantage over the "archaic" one?
What would walking with a stone in your shoes feel like?
Would it have any advantage over the "archaic" way of stoneless shoes?
--
__Pascal Bourguignon__ http://www.informatimago.com/
HEALTH WARNING: Care should be taken when lifting this product,
since its mass, and thus its weight, is dependent on its velocity
relative to the user.
On 10 okt, 02:09, ····@informatimago.com (Pascal J. Bourguignon)
wrote:
> What would walking with a stone in your shoes feel like?
> Would it have any advantage over the "archaic" way of stoneless shoes?
You bet. I am writing a small S-expression interpreter in Perl, and I
am using Perl array references instead of cons lists. It works, but I
have to write some hairy code.
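For contrast, a sketch of the cons side in Lisp itself, where no
index bookkeeping is needed at all; COUNT-ATOMS is only an
illustrative example, not code from this interpreter:
  (defun count-atoms (expr)
    ;; FIRST and REST are each a single pointer step.
    (cond ((null expr) 0)
          ((atom expr) 1)
          (t (+ (count-atoms (first expr))
                (count-atoms (rest expr))))))
  ;; (count-atoms '(a (b c) d)) => 4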
Jurgen
jurgen_defurne wrote:
> You bet. I am writing a small S-expression interpreter in Perl, and I
> am using Perl array references instead of cons lists. It works, but I
> have to write some hairy code.
Ah, a connoisseur. Stones in shoes are for wet newbies, indeed.
On 10 okt, 13:52, Matthias Buelow <····@incubus.de> wrote:
> Ah, a connoisseur. Stones in shoes are for wet newbies, indeed.
I would like to mod you +1 Funny and -1 Extremely Ambiguous, because I
have no idea who you consider a newbie here!?
On Oct 9, 5:09 pm, ····@informatimago.com (Pascal J. Bourguignon)
wrote:
> S-expressions are easy to parse because they've got a simple grammar:
>
> expression ::= atom | '(' expression* ')' .
Not true.
For example,
• '( ... )
• `( , ,@ ... )
• #| ... |#
• [...]
etc.
It's quite fucked really.
The one language that has uniform nested syntax without exceptions
would be Mathematica.
Further reading:
• Fundamental Problems of Lisp
Xah Lee, 2008
http://xahlee.org/UnixResource_dir/writ/lisp_problems.html
• The Concepts and Confusions of Prefix, Infix, Postfix and Fully
Nested Notations
Xah Lee, 2006
http://xahlee.org/UnixResource_dir/writ/notations.html
Xah
∑ http://xahlee.org/
☄
> From: ····@informatimago.com (Pascal J. Bourguignon)
> > What would the implementation of a Lisp without cons look like?
> > Would it have any advantage over the "archaic" one?
> What would walking with a stone in your shoes feel like?
> Would it have any advantage over the "archaic" way of stoneless shoes?
That's the best retort I've read/heard in weeks. It's even better
than Jay Leno's joke about Sarah Palin
- going to the hockey game to demonstrate her athletic ability by
tossing out the first hockey puck, then
- going out on the ice to demonstrate her political ability by
skating around issues.
I'm pretty sure you could enlighten even the most arrogant idiot
there is around this year, Sarah Palin. If Warren Buffett offered
to pay you three billion dollars to teach Sarah Palin how to use
her brain for something other than avoiding answering questions
she's asked, and he locked her with you in a room, with the best
catered food, and gave you three weeks to complete the assignment,
with specific test questions she would have to be able to answer at
least 80% correct at the end of those three weeks or else you don't
get paid, would you take the job?
If Jay Leno invited you onto his show, would you accept his
invitation, and while on his show would you enlighten him about a
few things?
Caveat: I have one good thing to say about Sarah Palin:
She learned how to pronounce the name of the President of Iran.
Uncaveat: That's the *only* good thing I have to say about her.
And in case the OP is still reading this thread, linked lists are a
very good way to represent some kinds of data in some
circumstances, so not having them available is a stupid idea.
On Oct 12, 2:33 am, ·············@teh.intarweb.org (Robert Maas,
http://tinyurl.com/uh3t) wrote:
> That's the best retort I've read/heard in weeks. It's even better
> than Jay Leno's joke about Sarah Palin
> - going to the hockey game to demonstrate her athletic ability by
> tossing out the first hockey puck, then
> - going out on the ice to demonstrate her political ability by
> skating around issues.
The best one I've heard recently was a hypothetical Joe Biden response
to Sarah Palin's "Do you mind if I call you Joe?" - "Not at all, for
now, since in a few months you'll be calling me Mr. Vice President."
Anticomuna <············@uol.com.br> writes:
> How? Is there anything wrong with the concept? I thought S-expressions
> were so easy to parse because of it. Is this really a problem for Lisp,
> or is this more of a non-Lisper's automatic reaction to something he is
> not used to?
>
> What would the implementation of a Lisp without cons look like?
> Would it have any advantage over the "archaic" one?
I used the word "archaic" in the same sentence as "cons cells" in another
topic myself... just to expand on what I meant:
"At the dawn of time, these names were mnemonic, at least to the folks
implementing the first Lisp on an IBM 704. But even then they were
just lifted from the assembly mnemonics used to implement the
operations."
http://gigamonkeys.com/book/they-called-it-lisp-for-a-reason-list-processing.html
The section on List Manipulation Functions is a little too verbose to
quote, but does suggest that the list has in a way already been
abstracted from its cons-cell roots, which were basically a very
low-level implementation, if I read it all right. So when I mentioned
the terrible-sounding word, "archaic," I meant it in the sense that
we're not all running Lisp on IBM 704s and most of us don't write
much assembly these days.
I don't really know how often cons is used these days, but Clojure
doesn't think we need them at all, and Peter Seibel suggests all we
really need are FIRST/REST and NTH functions for *most* applications
we'd start writing from scratch.
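For what it's worth, NTH itself reduces to FIRST and REST. A sketch,
with the made-up name MY-NTH so as not to shadow the standard NTH:
  (defun my-nth (n list)
    ;; N applications of REST, then one FIRST.
    (if (zerop n)
        (first list)
        (my-nth (1- n) (rest list))))
  ;; (my-nth 2 '(a b c d)) => C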
On Oct 10, 9:16 am, J Kenneth King <·····@agentultra.com> wrote:
> So when I mentioned the terrible-sounding word, "archaic," I meant it
> in the sense that we're not all running Lisp on IBM 704s and most of
> us don't write much assembly these days.
Exactly.
Lisp's cons forces the programmer to think at the implementation-detail
level, which is pretty much what lispers sneer at C or Java for.
> I don't really know how often cons is used these days, but Clojure
> doesn't think we need them at all, and Peter Seibel suggests all we
> really need are FIRST/REST and NTH functions for *most* applications
> we'd start writing from scratch.
Yes. Many lisp experts have acknowledged the problem.
Note that First, Rest, Nth are not enough. There needs to be at least
Last as well. In general, when you think of lists as sequences of
things as a language primitive, and you think of general ways to
manipulate such an item, you will also have functions that extract an
arbitrary contiguous subsequence of the item (what many langs, such as
Python, call a “slice”), and functions that extract multiple elements
at arbitrary positions in the list (what Mathematica calls Part and
Extract, almost non-existent in any other lang), and you must also
have functions that do delete versions of the above (i.e. instead of
specifying which parts you want, the function returns the parts not
specified; basically an inverse boolean. In many langs, there's no
such function), and functions that do the extraction/deletion based
on a pattern (which is part of the functionality of what's called
Pattern Matching in many functional langs such as Haskell, and in a
limited way called grep in e.g. Perl), or the extraction spec can be
based on a predicate function, and you also need functions that
extract arbitrary levels of the sublist, or arbitrary leaves, or
arbitrary, multiple, nodes in the tree.
So, all things considered, you have these aspects (two of these are
sketched below):
• whether to extract, or its opposite of specifying the parts you
don't want. e.g. First, or NotFirst (i.e. Rest); Perl's grep, or grep
with the boolean reversed (!=).
• the syntax and spec used to indicate parts of the expression, i.e.
index based, pattern based, predicate based.
• extracting a slice (i.e. a sequence of neighboring elements at the
same level of nesting, e.g. get me the 3rd to 7th elements of the 4th
element of the list).
• extracting a particular level (e.g. give me all nodes in the list
that are at the 3rd level of nesting).
• extracting arbitrary elements, e.g. a general function that accepts
arguments to extract, say, the 1st and last elements, or the 3rd, 5th,
1st elements in that order.
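For concreteness, here is what two of these look like in Common Lisp;
SUBSEQ is the standard slice function, while EXTRACT-AT is a made-up
helper, not a library function:
  ;; slice: the 3rd through 7th elements (zero-based start 2, end 7)
  (subseq '(a b c d e f g) 2 7)          ; => (C D E F G)
  ;; arbitrary positions, in any order
  (defun extract-at (list positions)
    (mapcar (lambda (i) (nth i list)) positions))
  (extract-at '(a b c d e) '(0 4 2))     ; => (A E C)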
All these are well done in Mathematica. Some so-called array
languages, such as Matlab and APL/J, offer some of these. Some other
functional langs, such as Haskell and OCaml, afaik offer parts of
these too. In generality, consistency, and simplicity, nothing comes
close to Mathematica.
Lisp with its cons is completely fucked up. Even Perl, Python, PHP
provide far easier manipulation of lists than lisp.
Further reading:
• Fundamental Problems of Lisp
Xah Lee, 2008
http://xahlee.org/UnixResource_dir/writ/lisp_problems.html
• The Concepts and Confusions of Prefix, Infix, Postfix and Fully
Nested Notations
Xah Lee, 2006
http://xahlee.org/UnixResource_dir/writ/notations.html
Xah
∑ http://xahlee.org/
☄
> From: ·······@gmail.com" <······@gmail.com>
> Lisp's cons forces the programmer to think at the implementation-detail
> level, which is pretty much what lispers sneer at C or Java for.
Nope. It *allows* anyone the *option* of implementing a brand new
higher-level API for a new intentional datatype using efficient
operations upon CONS cells, and allows them to know and document
how long each operation is expected to take, and then others can
use that newly defined API with documented timing estimates to make
informed decisions about usage of that API. That's all a big win!
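A sketch of what that looks like: a tiny stack API over cons cells,
where the cost of each operation can be documented straight off the
representation (the STACK-* names are made up for this example):
  (defun stack-push (item stack) (cons item stack)) ; O(1): one new cell
  (defun stack-top (stack) (first stack))           ; O(1): one pointer read
  (defun stack-pop (stack) (rest stack))            ; O(1): shares structure
  (defun stack-size (stack) (length stack))         ; O(n): walks the chain
  ;; (stack-top (stack-push 2 (stack-push 1 '()))) => 2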
> > I don't really know how often cons is used these days, but Clojure
> > doesn't think we need them at all, and Peter Seibel suggests all we
> > really need are FIRST/REST and NTH functions for *most* applications
> > we'd start writing from scratch.
That's obviously wrong/incomplete, because it allows picking parts
out of an already-constructed list, but doesn't provide a way to
construct such a list in the first place. What did Peter Seibel
*really* say??
> Note that First, Rest, Nth are not enough. There needs to be at
> least Last as well.
Given a predicate such as ENDP, it's trivial to write LAST in terms
of REST and ENDP. So it doesn't need to be provided as a primitive.
It can be added later.
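A sketch of exactly that, using the made-up name MY-LAST so as not to
shadow CL's standard LAST (which returns the last cons, not the last
element):
  (defun my-last (list)
    ;; Walk with REST until ENDP says the next cell is the end.
    (cond ((endp list) nil)
          ((endp (rest list)) (first list))
          (t (my-last (rest list)))))
  ;; (my-last '(a b c)) => C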
The rest of your article is not worth anyone reading except as a joke.
Dear Robert Maas moron,
On Oct 25, 1:06 am, ·············@teh.intarweb.org (Robert Maas,
http://tinyurl.com/uh3t) wrote:
> > From: ·······@gmail.com" <······@gmail.com>
> > Lisp's cons forces the programmer to think at the implementation-detail
> > level, which is pretty much what lispers sneer at C or Java for.
>
> Nope. It *allows* anyone the *option* of implementing a brand new
> higher-level API for a new intentional datatype using efficient
> operations upon CONS cells, and allows them to know and document
> how long each operation is expected to take, and then others can
> use that newly defined API with documented timing estimates to make
> informed decisions about usage of that API. That's all a big win!
so, u r saying it's a molten blob?
why is it that perl, python, Mathematica, haskell ... in fact no other
lang uses it?
> > Note that First, Rest, Nth are not enough. There needs to be at
> > least Last as well.
>
> Given a predicate such as ENDP, it's trivial to write LAST in terms
> of REST and ENDP. So it doesn't need to be provided as a primitive.
> It can be added later.
So, u r saying it's a molten blob again?
I agree you can make any great sculpture with it.
> The rest of your article is not worth anyone reading except as a joke.
how old r u exactly? 50? 60, or 70+? and still can't find a job?
a month or 2 ago someone suggested that you beg on the street to get a
used PC to supplant your 1990s computer as your main lisp hardware.
Did you succeed?
sorry if i sounded harsh, but people need to know the nature of the
meat who post to “comp.lang.lisp”.
it's well known that tech geekers are very concerned about privacy,
but that's rather a understatement. I noticed it's more proper to
describe them as sick loners who hide behind computers, esp the
regulars who hog newsgroups and IRCs. Sure, the programmer class, as
well as writers, tends to be private and introverted, but what i found
is that these newsgroup/irc hoggers are so to a degree that it's
rather more fitting to describe their privacy mannerisms as fear of
exposing what they really are in real life.
i mean, you can research on this. Personally, since about 2002 i have
had strong interest in studying the human animals, in particular in
areas sometimes called ethnology and ethology. I myself are sort of
hermit personality and expert in information, so i have some 10 or 20
online accounts in newsgroups, irc, online forums, Second Life, etc,
and thru my interaction with people, i tried to study who they are. It
is interesting, that in computer language newsgroup or irc, you may
chat with the regulars there long enough (months or years) to know
what field of computing expertise they have, but often, you can almost
never find any bits of info about who they really are, their age,
education, job, marital status, photo, even their real names. ...
don't want to spend too much time writing on this for now, ... but the
short story is that i found these tech geekers seem really miserable
in real life, are nobodies, and have an unhealthy fear of disclosing
any info about themselves.
the few times i've actually seen a photo of the person behind his
screen name, the guy looks like such a major loser that i'd not want
to have anything to do with him. I'm serious about this. Kinda sad.
(btw, there are a few exceptions of course. Kenny is beautiful! Geeks
with celebrity status often have their photos or videos too, some
attend public conferences ... and of course there are local lisp
meetings. i'm sure all of us become very beautiful after a few free
beers...)
Hi, dearly beloved lispers, how about we each start to write a brief
bio of self-introduction? Hopefully instead of just posting it here,
each of us can post it somewhere more permanent such as a blog site,
so that the info is more accessible.
See also:
Second Life, Tech Geekers, And Escapism
http://xahlee.org/Netiquette_dir/Second_Life_vs_newsgroup_escapism.html
Xah
∑ http://xahlee.org/
☄
·······@gmail.com" <······@gmail.com> wrote on Sat, 25 Oct 2008:
> Hi, dearly beloved lispers, how about we each start to write a brief
> bio of self-introduction?
If I could sum up my life in one sentence, I think it would be: He
was born, he lived, and then he kept on living, much longer than
anyone had ever lived before, getting richer and richer and glowing
with a bright white light.
-- Deep Thoughts, by Jack Handey [1999]
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ···@geddis.org
> From: ·······@gmail.com" <······@gmail.com>
> > > Lisp's cons forces the programmer to think at the implementation-detail
> > > level, which is pretty much what lispers sneer at C or Java for.
> > Nope. It *allows* anyone the *option* of implementing a brand new
> > higher-level API for a new intentional datatype using efficient
> > operations upon CONS cells, and allows them to know and document
> > how long each operation is expected to take, and then others can
> > use that newly defined API with documented timing estimates to make
> > informed decisions about usage of that API. That's all a big win!
> so, u r saying it's a molten blob?
No. Unlike assembly language (and C), Lisp provides a *structured*
low-level utility for building binary trees, and unlike Java this
binary-tree datatype is efficient in both space and speed. This
means people wishing to build a new higher-level API on top of it
can "just do it" in the most obvious way instead of needing to go
to great effort to accomplish the smallest step. It's vaguely
analogous to the difference between a blob of wet clay, which is
very difficult to form into a desired shape, and a set of Tinker
Toys or Lego blocks, which fit together in well-defined ways that
make it easy to build a large class of structures reliably. And
even better than those manual kinds of building-block toys, Lisp
allows writing tools to do the building automatically by feeding a
specification of how to put the pieces together through a generic
structure-building tool.
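A sketch of the binary-tree point: a node can be nothing but cons
cells, say (VALUE LEFT . RIGHT), so an unbalanced binary search tree
needs no scaffolding at all (TREE-INSERT is a made-up helper, not a
library function):
  (defun tree-insert (value tree)
    ;; A node is (VALUE LEFT . RIGHT), i.e. exactly two cons cells.
    (if (null tree)
        (list value nil)            ; leaf: value, no children
        (destructuring-bind (v left . right) tree
          (if (< value v)
              (list* v (tree-insert value left) right)
              (list* v left (tree-insert value right))))))
  ;; (tree-insert 2 (tree-insert 7 (tree-insert 5 nil)))
  ;; => (5 (2 NIL) 7 NIL)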
> why is it that perl, python, Mathematica, haskell ... in fact no
> other lang uses it?
Some people prefer starting from a barebones programming
environment, which provides almost *nothing* automatically, and
then re-inventing whatever they need from there. Other people
prefer starting from an already-existing programming environment
with lots of built-in capability, and then emulating most of their
new features quickly using those existing capabilities. So some
people implement their new languages in C or assembly language
which are barebones, and others implement their new languages in
Lisp or Java which are highly featured. Thanks to Sun pushing Java
into everyone's minds, and MicroSoft installing Java (but not Lisp)
as a bundled part of all their recent "Windows" systems, Java is
more obviously available to build upon than Lisp is, so more people
are using Java than Lisp as their high-level starting point.
> > > Note that First, Rest, Nth are not enough. There needs to be at
> > > least Last as well.
> > Given a predicate such as ENDP, it's trivial to write LAST in terms
> > of REST and ENDP. So it doesn't need to be provided as a primitive.
> > It can be added later.
> So, u r saying it's a molten blob again?
No. In this case the code for LAST, written in terms of REST and
ENDP, is so trivial I bet even *you* could write it in a few weeks
if you really tried. (Anyone competent could write it in a couple
minutes, or less.)
> I agree you can make any great sculpture with it.
And a lot *easier* than with a blob of wet clay!!!!!!!!
(Using "sculpture" in a metaphorical way of course.)
> it's well known that tech geekers are very concerned about privacy,
> but that's rather a understatement.
English note: "a" should read "an".
English isn't as hard to learn as Chinese, but still it's not trivial,
so I offer you English help where it seems it would be useful.
> I myself are sort of hermit personality ...
English: "are" should read "am", unless "I" and "myself" refer to
two different people.
> ... i have some 10 or 20 online accounts in newsgroups, irc,
> online forums, Second Life, etc,
Are you admitting that you're afraid of letting people in one forum
learn what you are saying in another forum, so you use 10-20
different identities so that what you post one place can't easily
be matched with what you post elsewhere because it *seems* to be
posted by a different person?
Robert Maas wrote:
> No. Unlike assembly language (and C), Lisp provides a *structured*
> low-level utility for building binary trees, and unlike Java this
> binary-tree datatype is efficient in both space and speed.
So you are saying it's a molten blob, but a better blob than C or
Java?
I concur.
> > why is it that perl, python, Mathematica, haskell ... in fact no
> > other lang uses it?
>
> Some people prefer starting from a barebones programming
> environment, which provides almost *nothing* automatically, and
> then re-inventing whatever they need from there. Other people
> prefer starting from an already-existing programming environment
> with lots of built-in capability, and then emulating most of their
> new features quickly using those existing capabilities.
As computing resources and technology progress, going higher level is
natural.
It's not a matter of preference. Lower level langs just gradually die
out.
> > it's well known that tech geekers are very concerned about privacy,
> > but that's rather a understatement.
>
> English note: "a" should read "an".
> English isn't as hard to learn as Chinese, but still it's not trivial,
> so I offer you English help where it seems it would be useful.
If you want to learn more about English from me, i recommend my
essays. You might start with:
• To An Or Not To An
http://xahlee.org/Periodic_dosage_dir/bangu/an.html
• I versus i
http://xahlee.org/Periodic_dosage_dir/bangu/i_vs_I.html
• on the postposition of conjunction in penultimate position of a
sequence
http://xahlee.org/Periodic_dosage_dir/t2/1_2_and_3.html
> > I myself are sort of hermit personality ...
>
> English: "are" should read "am", unless "I" and "myself" refer to
> two different people.
are you familiar with the concept of a typo? you prob are, but i
suppose you are unfamiliar with the philosophy of argumentation. I
recommend you start by reading the Wikipedia article on Critical
Thinking. I mean, read it word for word. Spend, say, 1 hour on that
article.
> > ... i have some 10 or 20 online accounts in newsgroups, irc,
> > online forums, Second Life, etc,
>
> Are you admitting that you're afraid ...
admitting afraidness? Not sure what you mean. May i suggest that you
pick up the basics of Chinese? Another language might help you learn
communication better.
Xah
∑ http://xahlee.org/
☄
> > No. Unlike assembly language (and C), Lisp provides a *structured*
> > low-level utility for building binary trees, and unlike Java this
> > binary-tree datatype is efficient in both space and speed.
> From: ·······@gmail.com" <······@gmail.com>
> So you are saying it's a molten blob, but a better blob than C or Java?
No. It's a useful toolkit.
Try to assemble something with wrenches and screwdrivers and other
tools, and it's feasible. Try to assemble it with just clay instead
of tools, and it's impossible.
> As computing resources and technology progress, going higher level
> is natural.
In that case, when building new software applications, you should
prefer Lisp which has lots of high-level tools built-in rather than
Java which has a lesser set of tools or C which has no high-level
tools at all.
> It's not a matter of preference. Lower level langs just gradually
> die out.
If and when SysLISP and BootLAP is available in CL or Java, then
indeed we will no longer need either assembly languages or C,
except to re-compile legacy source code. (But with a converter from
assembly language or C to BootLAP or parsed-C, even legacy
source code could be fed through Lisp or Java to directly generate
executables and device drivers and operating systems etc.)
> May i suggest that you pick up the basics of Chinese?
Easier said than done. Several local UHF TV stations broadcast
Chinese-language programs with Chinese-character subtitles. I know
several common characters (one, two, three, ten, man, large, small,
up, down, sun/day, moon/month, center, country, and maybe a few
more), but there are several more characters that I see very
commonly so that I recognize them each time I see them but I don't
know what they mean. AFAIK there's no lookup/search engine on the
InterNet whereby I can draw a picture of a character and submit it
and get back the information about that character, so I have no
practical way to learn what these recognized characters mean. I'd
like to set up a lookup/search engine for looking up characters by
shape, but I don't know anyone who knows Chinese who is willing to
help me. Also, I tried to set up some Web pages that illustrate one
idea I had for describing a character one stroke at a time, by
superimposing several possible characters identified by number in a
character grid but my cellphone doesn't render <pre>..</pre>
sections in fixed-pitch font, so that idea isn't workable, and I
have to think of some other idea. Here's the URL of that prototype
that looks fine on desktop Web browsers but doesn't work on my
cellphone:
<http://www.rawbw.com/~rem/WAP/ChineseInput/start1.html>
On Nov 9, 7:17 pm, ·············@teh.intarweb.org (Robert Maas,
http://tinyurl.com/uh3t) wrote:
> > > No. Unlike assembly language (and C), Lisp provides a *structured*
> > > low-level utility for building binary trees, and unlike Java this
> > > binary-tree datatype is efficient in both space and speed.
> > From: ·······@gmail.com" <······@gmail.com>
> > So you are saying it's a molten blob, but a better blob than C or Java?
>
> No. It's a useful toolkit.
> Try to assemble something with wrenches and screwdrivers and other
> tools, and it's feasible. Try to assemble it with just clay instead
> of tools, and it's impossible.
Yes, as far as the analogy goes, when you have power tools, it is much
better than screws, nails, hammers.
The list processing facilities in Mathematica, Perl, Python, PHP are
far easier, more flexible, and more powerful than lisp's cons.
> > As computing resources and technology progress, going higher level
> > is natural.
>
> In that case, when building new software applications, you should
> prefer Lisp which has lots of high-level tools built-in rather than
> Java which has a lesser set of tools or C which has no high-level
> tools at all.
Yes. My point was not that lisp is dumber than C or Java. It's about
lisp being dumber than Perl, Python, PHP, Mathematica, with respect to
list facilities.
> > It's not a matter of preference. Lower level langs just gradually
> > die out.
>
> If and when SysLISP and BootLAP is available in CL or Java, then
> indeed we will no longer need either assembly languages or C,
> except to re-compile legacy source code. (But with a converter from
> assembly language or C to BootLAP or parsed-C, even legacy
> source code could be fed through Lisp or Java to directly generate
> executables and device drivers and operating systems etc.)
No comment. My expertise lies not in low level systems.
> > May i suggest that you pick up the basics of Chinese?
>
> Easier said than done. Several local UHF TV stations broadcast
> Chinese-language programs with Chinese-character subtitles. I know
> several common characters (one, two, three, ten, man, large, small,
> up, down, sun/day, moon/month, center, country, and maybe a few
> more), but there are several more characters that I see very
> commonly so that I recognize them each time I see them but I don't
> know what they mean. AFAIK there's no lookup/search engine on the
> InterNet whereby I can draw a picture of a character and submit it
> and get back the information about that character, so I have no
> practical way to learn what these recognized characters mean.
if you have the chinese char in your computer, you can just paste it
into online dictionaries. There are lots of such dictionaries, english
to chinese, chinese to english, chinese to chinese.
but if you don't have the char already, you'll need to learn a chinese
input system to type chinese. There are a variety of Chinese input
systems in common use. Basically, there are 2 classes: those based on
phonetics, and those based on the character shape. The phonetics based
ones are easy to learn, but less efficient to type. The char based ones
are more difficult to learn, but are faster. Among phonetic systems,
there are 2 main classes: those based on pinyin, and those based on
bopomofo, which is a more classic chinese phonetic system. (pinyin is
based on the latin alphabet)
In either case, you will need to be familiar with basic chinese before
you can actually learn a chinese input system. Normally, chinese
learners don't start to learn writing/reading chinese until after some
3 or so years of study. Many, if not the majority, of
Chinese-as-second-lang learners never learn the writing system, even
though they are fluent in listening/speaking. The reason for that is
that it is really difficult. Native chinese typically spend 10 years
from grade school to high school learning it.
but if you are curious, you can follow examples here to type a chinese
char right away in emacs:
http://xahlee.org/emacs/emacs_n_unicode.html
excerpt:
Q: How to type Chinese?
Regardless of what text editor you are using, you need to do two things:
(1) Set your editor's File Encoding system to one that supports your
language. (2) set your Input Method to a particular system suitable
for your language.
File Encoding tells your computer how to map symbols/glyphs/characters
into binary code. Input Method allows you to type languages that are
not based on an alphabet. (For example, in Chinese, you cannot just type
a character by pressing a key; instead, you must use an input method to
type Chinese.) For languages based on the Latin alphabet, you don't
need to worry about input method.
To set your file encoding in emacs, use the menu “Options‣Mule
(Multilingual Environment)‣Set Language Environment”.
To set your input method, use the menu “Options‣Mule (Multilingual
Environment)‣Select Input Method...”.
After you've pulled the menu, be sure to also pull the menu command
“Options‣Save Options” so that emacs remembers your settings.
For me, i type Chinese often. There are several encoding systems for
Chinese, for example GB 18030, Big5, UTF-8. I use the UTF-8
encoding system. Among the Chinese input methods, i use the Pinyin
method. Here's how to set them in emacs without using the menu:
“Alt+x set-language-environment UTF-8” and “Alt+x set-input-method
chinese-py”.
Here's an example of actually typing the Chinese char 美 (meaning
beautiful). Type “Alt+x set-input-method Return chinese-py”, then type
“mei”. Emacs will show you a list of characters with the pronunciation
of mei. Type “2” to pick the right character. Then, emacs will insert
the character. To return to your input method, press “Ctrl+\”.
An in-depth tutorial on using a Mac with Chinese is at: http://www.yale.edu/chinesemac/.
It includes comprehensive info and resources on Chinese fonts,
complete tutorials on several Chinese input methods, etc.
--------------------------
> I'd
> like to set up a lookup/search engine for looking up characters by
> shape, but I don't know anyone who knows Chinese who is willing to
> help me.
I'm willing to teach in a casual way, you or anyone.
You can meet me in Second Life; i'm Xah Toll there.
i enjoy teaching. But if anyone is serious, and wants regular hourly
lessons, please talk to me about it.
For those curious, here's some related link:
• 難懂簡體字表 (Chinese Core Simplified Chars)
http://xahlee.org/lojban/simplified_chars.html
• Lojban A Word A Day (with English and Chinese translation)
http://xahlee.org/lojban/valsi_dikni/valsi_dikni.html
• Chinese Pinyin Letter Frequency
http://xahlee.org/Periodic_dosage_dir/bangu/pinyin_frequency.html
> Also, I tried to set up some Web pages that illustrate one
> idea I had for describing a character one stroke at a time, by
> superimposing several possible characters identified by number in a
> character grid but my cellphone doesn't render <pre>..</pre>
> sections in fixed-pitch font, so that idea isn't workable, and I
> have to think of some other idea. Here's the URL of that prototype
> that looks fine on desktop Web browsers but doesn't work on my
> cellphone:
> <http://www.rawbw.com/~rem/WAP/ChineseInput/start1.html>
that's weird. Your methods are weird...
There are lots of websites that teach Chinese. Wikipedia has lots of
articles on Chinese too... worth at least a few days of reading. (i've
more or less read them all in the past few years) e.g. articles on
the lang in general, on its grammar, competing phonetic systems,
writing system, history of its writing system, morphology,
pronunciation, ... its current and past use in Japanese, Korean,
Vietnamese writing systems ..., political controversy between china
and taiwan, lots on simplified char vs traditional char and
romanization systems... lots.
i don't understand why you can't find a job, or your personal problems
with debt... Chances are, you are not that interested in solving your
problems.
Xah
∑ http://xahlee.org/
☄
Robert Maas, http://tinyurl.com/uh3t wrote:
> No. Unlike assembly language (and C), Lisp provides a *structured*
> low-level utility for building binary trees, and unlike Java this
> binary-tree datatype is efficient in both space and speed.
Actually Java is over 4x faster than the fastest free Lisp implementation
(SBCL) on the binary trees benchmark:
http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=java&lang2=sbcl
--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
On Wed, 12 Nov 2008 05:13:16 +0000, Jon Harrop <···@ffconsultancy.com>
wrote:
>Robert Maas, http://tinyurl.com/uh3t wrote:
>> No. Unlike assembly language (and C), Lisp provides a *structured*
>> low-level utility for building binary trees, and unlike Java this
>> binary-tree datatype is efficient in both space and speed.
>
>Actually Java is over 4x faster than the fastest free Lisp implementation
>(SBCL) on the binary trees benchmark:
>
>http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=java&lang2=sbcl
If you bothered to look at the Lisp code, you'd know that it is not
optimal speed-wise. The comments indicate it was rewritten to
explicitly lower its memory use (for what reason I have no idea).
It may produce the same answer but it is not an equivalent solution to
the Java code.
George
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Semi-OT: xah wants photos of everyone (was: Cons cell archaic!?)
Message-ID: <REM-2008nov06-002@Yahoo.Com>
> From: ·······@gmail.com" <······@gmail.com>
> in computer language newsgroup or irc, you may chat with the
> regulars there long enough (months or years) to know what field of
> computing expertise they have, but often, you can almost never find
> any bits of info about who they really are, their age, education,
> job, marital status, photo, even their real names. ...
As with almost any generalization, your generalization is not
entirely correct. I post using my true legal name, which I've had
since birth.
The reason why people don't post all their personal info here in a
technical newsgroup is because it's off-topic. If somebody tells me
I should buy something I can't afford, I tell them I can't afford,
and if they persist in harassing me to buy it, I point them to the
description of my personal financial situation. But my preference
in women, marital status, photo, etc. are totally irrelevant here.
But if you look in other newsgroups that deal with personal issues,
you may find some more such personal info about me. The same
applies to lots of people here. Apparently you haven't done your
research. If you want to learn personal info about people, you need
to look in forums that are more personally related, and look in
people's "home pages" and the like. If you're trying to collect
personal information about people, this newsgroup isn't a good
source of data, so you should look somewhere else.
> the few times i've actually seen a photo of the person behind his
> screen name,
Photos of me have been readily available ever since 1999, when a guy
with a digital camera had me pose for pictures and then uploaded
the pictures to the net, and then I copied them to my own Web site.
A few additional photos of me were digitized by me, using a flatbed
scanner, from other images I have around, the most recent photos in
2004 (digitized and uploaded in 2005) and 2007 (digitized and
uploaded later that same year). If you're competent at finding
personal information about people, you should have already found
these online images of me. But obviously you're whining because you
haven't been able to find them, because you're incompetent.
> Kenny is beautiful!
So you're a "fag"?? Else why would you be so greatly attracted to him?
> how about we each start to write a brief bio of self-introduction?
I already did that, but nobody is interested in reading it. Despite
placing top five nationwide in a math contest, and solving several
previously unsolved problems, I'm not anybody that the regular
media finds interesting to show on TV (although I *was* on live TV
once, for a half hour, when I was in high school, demonstrating and
talking about one of my early accomplishments in computing
technology), so the average person who watches TV a lot has likely
never seen me, and when I walk around in Real Life I don't carry a
banner that announces my expertise in math and computers so people
seeing me just see me as a random person rather than anyone special
to notice. The only computer person I've seen in an unexpected
place in real life and recognized was Lynn Gold (former wife of
Mark Crispin) whom I saw once at a WesTech/BrassRing job-fair. The
only computer people I previously knew from the net whom I later
met in an unexpected place were Estefferud whom I met at
WesTech/BrassRing, and he was a totally friendly guy, a surprise
from his stuffy-sounding name, and Jon Postel, who was a very
unfriendly guy, and Kent Pitman, who was living with a girl who
invited me to visit her back in 1981. I could have walked right by
most other Lispers (PascalB, Jeff Golden, JonLWhite, DaveMoon,
Kenny, Warnock, Mizrahi, Russ, Rettig) without noticing them,
because I have no idea what any of them look like. In particular,
Warnock identifies himself as residing in San Mateo, about 20 miles
from here, and might possibly have traveled to San Jose where we
walked right past each other without either recognizing the other.
The fact that you haven't yet found my personal info shows how
incompetent you are. I guess you are just starting your coursework
in psychological studies where you learn how to conduct research in
the sense of collecting data, right? Maybe in a couple years you'll
know how to use Google to find mentions of names, then in a few
more years you'll learn how to filter Google's results to discard
the false matches and bogus information. Hint: If you get curious
about mathematics, and a Google search turns up anything written by
a person who identifies himself as JSH, including a new factoring
algorithm, just ignore it, it's all garbage. The same goes for
Archimedes Plutonium writing about the physical laws of nature, and
Brad Guth writing about intelligent life on Venus. Most WikiPedia
articles are pretty high quality, but even there you may
occasionally find a piece of crap.
You seem to be especially interested in what people look like. One
of my proposed new Web-based applications is a system for matching
faces of various people in images posted to the net, thereby
identifying who looks like whom. The system would be funded as a
"collective" where people volunteer labor in return for getting
services. Let me know if you'd be interested in working with me to
build such a system. You don't even need to know how to write
software, you can just brainstorm with me about ideas what to
implement, and fully test the stuff written by me.
On Nov 6, 11:26 am, ·············@teh.intarweb.org (Robert Maas,
http://tinyurl.com/uh3t) wrote:
> > From: ·······@gmail.com" <······@gmail.com>
> > in computer language newsgroup or irc, you may chat with the
> > regulars there long enough (months or years) to know what field of
> > computing expertise they have, but often, you can almost never find
> > any bits of info about who they really are, their age, education,
> > job, marital status, photo, even their real names. ...
>
> As with almost any generalization, your generalization is not
> entirely correct. I post using my true legal name, which I've had
> since birth.
try to understand that the essay is not a description of you. Don't
flatter thyself.
> The reason why people don't post all their personal info here in a
> technical newsgroup is because it's off-topic.
And you are free to post off-topic about other things?
> If somebody tells me
> I should buy something I can't afford, I tell them I can't afford,
> and if they persist in harassing me to buy it, I point them to the
> description of my personal financial situation. But my preference
> in women, marital status, photo, etc. are totally irrelevant here.
O, it is very relevant. It's part of communication.
For example, when you go to a job interview, you should dress up and
put a smile on your face. Sure, how u dress might have nothing to do
with your tech skills, but maybe that's why u can't find a job.
keep posting here with your elaborate messages. It can at least help u
pass the time.
> But if you look in other newsgroups that deal with personal issues,
> you may find some more such personal info about me.
O, i don't really care about u. Rly. If of any relevance to u, my essay
about tech geekers being inward to an unhealthy degree is just to make
you look like a major moron.
Btw, i'm a good guy. I'm concerned with the progress of human animals
though.
you might still wonder why i'm so rude or harsh to u. That's because
you are this stupid slacker, can't think logically, and kept trying to
respond with stupid things to my essay.
Normally, i take some time to answer the question so addressed. But as
y'know, there are a gazillion tech geeking morons, each saying stupid
things. I can't answer them all, and i've been doing this for over 10
years. So, for some of these morons' replies, i elect to respond to
them in the way that reflects their stupidity. Consider u fortunate
that i actually took time to respond to u. Cherish it. It won't last.
> Photos of me
Nah. Do you have a young daughter? Show me photo of ur daughter
instead pls. I'd like portraits of her tits and pussy.
> > Kenny is beautiful!
>
> So you're a "fag"?? Else why would you be so greatly attracted to him?
Kenny is a troll, my compatriot. I consider cock sucking among males
being quite a good activity in some theoretical sense, but personally
by nature i find it aversive.
You suck cock? Have u ever been fucked by a donkey? U might give it a
try.
> > how about we each start to write a brief bio of self-introduction?
>
> I already did that, but nobody is interested in reading it. Despite
> placing top five nationwide in a math contest, and solving several
> previously unsolved problems
Hum? That sounds interesting. Do u have a url or stuff to show? We
want to see.
> The only computer person I've seen in an unexpected
> place in real life and recognized was Lynn Gold (former wife of
> Mark Crispin) whom I saw once at a WesTech/BrassRing job-fair. The
> only computer people I previously knew from the net whom I later
> met in an unexpected place were Estefferud whom I met at
> WesTech/BrassRing, and he was a totally friendly guy, a surprise
> from his stuffy-sounding name, and Jon Postel, who was a very
> unfriendly guy, and Kent Pitman, who was living with a girl who
> invited me to visit her back in 1981. I could have walked right by
> most other Lispers (PascalB, Jeff Golden, JonLWhite, DaveMoon,
> Kenny, Warnock, Mizrahi, Russ, Rettig) without noticing them,
> because I have no idea what any of them look like. In particular,
> Warnock identifies himself as residing in San Mateo, about 20 miles
> from here, and might possibly have traveled to San Jose where we
> walked right past each other without either recognizing the other.
Nice. I live in Mountain View. Maybe we passed each other too. Can u
buy me pizza?
my fav pizza flavor is chicken pizza, or broccoli pizza. Weird, i know.
> The fact that you haven't yet found my personal info shows how
> incompetent you are.
lol. Kneel, with your hands offering your personal info, beg me to
read it, then i'll consider spending 5 min to know who u r.
> You seem to be especially interested in what people look like. One
> of my proposed new Web-based applications ...
Send me $10 thru paypal, then i'll consider listening to what u've
done.
Xah
∑ http://xahlee.org/
☄
······@gmail.com wrote:
> On Nov 6, 11:26 am, ·············@teh.intarweb.org (Robert Maas,
> http://tinyurl.com/uh3t) wrote:
>
>>>Kenny is beautiful!
>>
>>So you're a "fag"?? Else why would you be so greatly attracted to him?
Ah, those were the days. Now I fear that qualifies as hate speech, not
that you meant it as such, but still, I would look for another way to
eviscerate my antagonist.
>
> Kenny is a troll, my compatriot. [...]
Come, come, gentlemen. This is not a brawl.*
Xah is right, the best thing about Lisp conferences and drinking
societies is being able to put faces on the yobbos and find out which of
them can drink. Come to think of it, not a one heeded my post-barge ECLM
battle cry when according to reports I climbed up some steps after we
had debarked and yelled out "Anyone not through drinking, follow me!"
We few, we happy few closed the joint and still made the 8am talk.
kt
* Title and artist for a beer. k
On Sat, 08 Nov 2008 05:28:26 -0500, Kenny <·········@gmail.com> wrote:
>Ah, those were the days. Now I fear that qualifies as hate speech, not
>that you meant it as such, but still, I would look for another way to
>eviscerate my antagonist.
Hate speech?
fag·ot also fag·got
(făg′ət) Pronunciation Key
n.
1. A bundle of twigs, sticks, or branches bound together.
2. A bundle of pieces of iron or steel to be welded or hammered
into bars.
tr.v. fag·ot·ed also fag·got·ed, fag·ot·ing also fag·got·ing,
fag·ots also fag·gots
1. To bind into a fagot; bundle.
2. To decorate with fagoting.
I guess someone with a stick up his ass could be considered anal 8)
George
George Neuner wrote:
> I guess someone with a stick up his ass could be considered anal 8)
PWUAHAHAHAHAHA! Perfect! Just yesterday:
"I thought the word "thug" actually could be traced back to union thugs.
You know, I have been talking about union thugs for the longest time
I've been hosting this program. All of a sudden now "thug" has racial
connotations? "
http://www.rushlimbaugh.com/home/daily/site_110708/content/01125111.guest.html
He then took a call from someone who had produced some packaging
replacing Aunt Jemima's picture with Obama*. Rush expressed concern
about the caller's exposure to a lawsuit for unfair use of a celebrity
image.
Go Patriots! Oh, sorry...
kzo
* Kenny <·························@cv.net> :
Wrote on Sat, 08 Nov 2008 19:13:45 -0500:
| "I thought the word "thug" actually could be traced back to union
| thugs. You know, I have been talking about union thugs for the longest
| time I've been hosting this program. All of a sudden now "thug" has
| racial connotations? "
Wait till you see this!
http://www.unexplainedstuff.com/Secret-Societies/The-Thuggee.html
Madhu wrote:
> * Kenny <·························@cv.net> :
> Wrote on Sat, 08 Nov 2008 19:13:45 -0500:
>
> | "I thought the word "thug" actually could be traced back to union
> | thugs. You know, I have been talking about union thugs for the longest
> | time I've been hosting this program. All of a sudden now "thug" has
> | racial connotations? "
>
> Wait till you see this!
>
> http://www.unexplainedstuff.com/Secret-Societies/The-Thuggee.html
Awesome. I should send that into Rush, maybe get a free membership. Thx!
kt
addendum to my previous post:
sometimes i tried to get the tech geekers to chat in voice, but it's
basically impossible.
y'know, in newsgroups or irc, each tech geeker sounds so motherfucking
grand. In argument, it often comes to the point that each thinks the
other is an absolute idiot. It's quite funny: some opinions or facts
are absolutely answerable by research or fact checking, but the
argument usually ends in an impasse, where each party considers the
other just a moron.
so, sometimes i try to get them on voice. For example, i tried in irc
with some regulars: hey, wanna chat in skype or something? or join
Second Life so we can all have a voice chat, and if we want, we can
argue in voice.
as in a face to face meeting, chatting in voice moves to a higher
level of communication, and people become more sincere at least, and
naturally refrain from baseless bragging or throwing mindless
insults. Not that arguing in voice prevents mindless quarrels, but at
least it is a deeper level of inter-personal relationship.
sadly, it is nigh impossible to get the tech geekers to talk. They all
have excuses ... (this partly contributes to my conclusion that the
tech geekers are introverted losers to some unhealthy degree)
i welcome any regulars to chat with me on voice. Here are my online IDs:
http://xahlee.org/Periodic_dosage_dir/t1/presences.html
in Skype or Second Life we can chat in voice. Esp in Second Life we
can have a group voice chat amid a virtual reality environment. It has
an emacs group and a lisp group as well.
Note: i type some 24 hours a day, too much, even with a dvorak keyboard
and my ergonomic emacs keybinding system. Other than a brief exchange
of hi, i'd prefer to use voice for any discussion or meeting. So, say,
if you want to discuss lisp syntax or the cons business, textual IM
wouldn't be an improvement over newsgroup posts.
Let's get the relationship thing going. LOL.
PS if you IM me for the first time, please introduce yourself. (e.g.
hi, i'm xyz in comp.lang.lisp ...) Once or twice a year i get prank or
hate mail from tech geekers due to my non-conformist attitude and
frequent voicing of controversial opinions on technology.
Xah
∑ http://xahlee.org/
☄
······@gmail.com wrote:
> in Skype or Second Life we can chat in voice. Esp in Second Life we
> can have a group voice chat amid a virtual reality environment. It has
> an emacs group and a lisp group as well.
No Lisp group nearby? Where is, nearby, btw?
peace,k
Kenny <·········@gmail.com> wrote on Sun, 26 Oct 2008:
> No Lisp group nearby? Where is, nearby, btw?
The Moon? Mars?
I don't think The Mighty Xah is a resident of our Earth...
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ···@geddis.org
On Oct 25, 4:06 pm, ·············@teh.intarweb.org (Robert Maas,
http://tinyurl.com/uh3t) wrote:
> The rest of your article is not worth anyone reading except as a joke.
Pwned!
At one point I enjoyed the trolling, but XahLee got boring even for me.
Of course the chances of him actually understanding what you say are
roughly (- 0 0), so I expect to hear his 'insights' for a good few
years yet.
It's amazing that someone who bitches about Lisp all the time *still*
posts frequently to the newsgroup. Me, I've bitched about various
software in the past but I move on; either I live with it or I stop
the moaning and deal with the reality.
On the other hand, he has a few interesting things to say about Emacs,
and his heart seems to be in the right place. To me he seems to be
like the computer nerd equivalent of the 110% straight-acting gay guy
- constantly denying what everybody knows to be the truth. Just step
out of the damn closet and agree that Lisp is the best *language* that
there is, please.
maximinus <·········@gmail.com> writes:
> On the other hand, he has a few interesting things to say about Emacs,
> and his heart seems to be in the right place.
It's just a shame his brain isn't.
--
Joost Diepenmaat | blog: http://joost.zeekat.nl/ | work: http://zeekat.nl/
J Kenneth King <·····@agentultra.com> writes:
> Anticomuna <············@uol.com.br> writes:
>
>> How? Is there anything wrong with the concept? I thought it was so
>> easy to parse the S-expressions because of it. Is this really a
>> problem for Lisp or this is more of a non-Lisper automatic reaction to
>> something he is not used to?
>>
>> How would the implementation of a Lisp without a using cons look like?
>> Would it have any advantage over the "archaic" one?
>
> I used the word "archaic" in the same sentence as "cons cells" in another
> topic myself... just to expand on what I meant:
>
> "At the dawn of time, these names were mnemonic, at least to the folks
> implementing the first Lisp on an IBM 704. But even then they were
> just lifted from the assembly mnemonics used to implement the
> operations."
>
> http://gigamonkeys.com/book/they-called-it-lisp-for-a-reason-list-processing.html
More or less.
In AIM-8, pairs are not named. The function used to build them is not
defined to take an atom as second argument. It is named "combine".
The accessors are named "first" and "rest".
The function "combine" indeed was implemented in assembler 704 under
the name "CONS" (for CONStruct), and FIRST and REST were stored in the
CAR and CDR, and indeed, LISP functions named "CAR" and "CDR" were
used to access them.
> The section on List Manipulation Functions is a little too verbose to
> quote, but does suggest that the list has in a way already been
> abstracted from its cons-cell roots, which were basically a very
> low-level implementation if I read it all right. So when I mentioned
> the terrible sounding word, "archaic," I meant it in the sense that
> we're not all running Lisp on IBM 704's and most of us don't write
> much assembly these days.
So what do you propose? QUARK, UP and DOWN? PAIR, LEFT, and RIGHT?
WUJI, YIN, YANG? We're not discussing lists here, but pairs. And
the notion of pair is entirely disconnected from anything else;
there's no sense in giving it any specific name. Therefore using the
names CONS, CAR, CDR is very good, because it connects the notion
with the history of lisp, every lisper knows what they actually mean,
and this gives some lively flesh to an otherwise quite abstract and
arid notion.
> I don't really know how often cons is used these days, but Clojure
> doesn't think we need them at all and Peter Seibel suggests all we
> really need are FIRST/REST and NTH functions for *most* applications
> we'd start writing from scratch.
So on one hand, AIM-8 considered only atoms and S-expressions built as
proper lists, by construction (and therefore named the functions
combine, first and rest, to manipulate lists).
On the other hand, the corresponding functions on 704 were called
CONS, CAR and CDR, and obviously could be used to build other data
structures than lists: the CONS cell, or PAIR was born.
(Note however that in section 4.2, AIM-8 mentioned "Binary Lisp", with
a combine function taking both atoms and s-expressions for both
arguments, that is, defining a pair, but obviously discarded the notion
slightly too quickly.)
Therefore we could say that FIRST and REST are MORE archaic than CAR
and CDR (by at least a few weeks).
But the real reason why both LIST*/FIRST/REST and CONS/CAR/CDR are
kept is that, while their binary code is almost the same or identical,
they are not intended for the same intentional types.
CONS/CAR/CDR work with the intentional data type Pair (which is
called CONS cell in Common Lisp, or PAIR in Scheme).
LIST*/FIRST/REST work with the intentional data type List (which
doesn't exist explicitly in Lisp or Scheme).
When you implement a record such as PERSON with a field NAME and a
field DNA, the functions MAKE-PERSON, PERSON-NAME, PERSON-DNA may too
have exactly the same binary code as the above-mentioned functions.
But that doesn't mean that LIST*, FIRST and REST suddenly become more
archaic.
(defstruct (person (:type list)) name dna)
(make-person :name "Pascal" :dna "CAAGATACTGAACGA...")
--> ("Pascal" "CAAGATACTGAACGA...")
You wouldn't dare use PERSON-NAME on a list built with (LIST* 3.142
2.718), or on a pair built with (CONS 3.142 2.718), even if it would
successfully return 3.142...
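
(To make that type-punning concrete, a short REPL sketch continuing the
PERSON example above; the calls are standard CL, and the point is only
the lie about intent:)

(person-name (list* 3.142 2.718)) ; => 3.142 -- "works", but the datum
                                  ; was never a PERSON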
--
__Pascal Bourguignon__ http://www.informatimago.com/
In a World without Walls and Fences,
who needs Windows and Gates?
> From: ····@informatimago.com (Pascal J. Bourguignon)
> In AIM-8, pairs are not named. The function used to build them
> is not defined to take an atom as second argument. It is named
> "combine". The accessors are named "first" and "rest".
I consider that to have been a preliminary idea that was modified
during implementation thus achieving greater benefit than the
original idea. If you implement the *original* idea literally, then
allocation of a new block of memory and a lot of copying must be
done to combine a long list with anything non-empty after it. As a
mathematical ideal, this is no problem, but as a practical computer
language this is a killer. Accordingly the CONS cell (what I call
"standard pair") was implemented instead, whereby allocation of
list structure can be done incrementally, so that new items can be
spliced onto the end of a list without needing to copy the whole
list up to that point into a new block of memory. This efficient
support for in-place modification of lists has other uses besides
splicing to the end of a list, such as splicing additional elements
into the *middle* of a list without needing to copy any of the old
cells. Once you have the low-level primitive to do all this, it's a
petty restriction to insist that the CDR of a cell can *only*
contain a proper list (possibly NIL, the empty list), so of course
that restriction was immediately removed. Suddenly the primitive
cell can be used not only to build linked lists, but can be
directly used to build binary trees of any structure whatsoever.
Now you can build self-balancing binary search trees, association
lists, etc., which give you great advantage over being able to
build *only* proper lists.
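
(For illustration, a tiny example of bare cons cells used as a binary
tree, with CAR as the left branch and CDR as the right; nothing here
beyond standard CL:)

(defparameter *tree* (cons (cons 1 2) (cons 3 (cons 4 5))))
(car (car *tree*)) ; => 1        the left child of the left subtree
(cdr (cdr *tree*)) ; => (4 . 5)  the right subtree of the right subtree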
Now a completely alternate way of looking at this is to discard the
primitive operation of COMBINE as being anything efficient and
desirable, abandon that part of the theory of AIM-8, and not implement
the CONS cell at all. Instead, implement vectors (one-dimensional
arrays) of any desired length, where you must know the size before
you can allocate it. Then any time you know the size of a list ahead
of time you simply allocate a vector of the appropriate size. So what
do you do if you don't know the size in advance? You emulate pairs
as two-element vectors, daisy-chain them together just as if they
were CONS cells, and then at the end of the building operation when
you know the total size you allocate a vector of that size and copy
all the elements over to the vector. READ would work just fine (for
proper lists *only*) that way. There'd be more temporary storage
allocated to build the linked-lists prior to conversion to vectors,
but since this is locally inside READ it'd be easy to recycle those
2-vectors efficiently without needing to bother with GC. There'd be
*less* storage consumed permanently after the READ had returned,
which could be a big win. Since APPEND is required to copy its
first argument, it could be re-defined to copy *both* arguments
when it allocates the result vector, and would probably run faster
than APPEND working its way down linked lists. Storage
fragmentation in virtual memory would be greatly reduced if every
level of a list is a contiguous block of RAM instead of a daisy
chain jumping all over RAM. Association lists and property lists
could be re-invented and unified as chains of 3-vectors, where each
cell is [key, value, nextLink]. Finally, instead of binary search
trees implemented in various ways as clusters of CONS cells at each
node, it would take only a *single* vector at each node. Depending
on the type of BST, a cell might have any of these formats:
- [key, count, left, right]
- [key, depth, left, right]
- [key, count, left, right, parent]
etc. etc.
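
(A minimal sketch of the two-phase list building described above:
daisy-chain temporary 2-vectors, then copy into one contiguous vector
once the size is known. The function name is hypothetical:)

(defun build-vector-list (elements)
  ;; Phase 1: daisy-chain 2-vectors, used exactly like cons cells.
  (let ((head nil) (tail nil) (count 0))
    (dolist (e elements)
      (let ((cell (vector e nil)))
        (if tail
            (setf (aref tail 1) cell)
            (setf head cell))
        (setf tail cell)
        (incf count)))
    ;; Phase 2: the size is now known; copy into one contiguous vector.
    (let ((result (make-array count)))
      (loop for cell = head then (aref cell 1)
            for i from 0
            while cell
            do (setf (aref result i) (aref cell 0)))
      result)))

(build-vector-list '(a b c)) ; => #(A B C)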
Now under the scheme of vector (arbitrary size) instead of CONS
cell (always two components) as primitive unit for building most
data structures, we'd have to do some things differently. One big
win is that we wouldn't need to deal with parsing and printing
(especially *pretty*-printing) dotted list notation!! Otherwise,
it's just **different**, not worse or better.
But given that Lisp took the approach of two-component cells, used
to build linked-lists by daisy-chaining the second component, and
nesting of lists via the first component, IMO it was a wise
decision to allow non-proper-lists in the CDR position, allowing
use of clumps of such two-component cells to make nodes in
association lists and binary search trees etc. Trying to return to
the day of AIM-8 when only proper lists were allowed is IMO a
mistake. AIM-8 is nice history, but shouldn't be treated as
"gospel", as the way that Lisp must forever be, and any deviation
from that "gospel" must be reverted back to AIM-8. So I reject the
implied rollback request of the previous poster.
> The function "combine" indeed was implemented in assembler 704
> under the name "CONS" (for CONStruct), and FIRST and REST were
> stored in the CAR and CDR, and indeed, LISP functions named "CAR"
> and "CDR" were used to access them.
As a low-level first implementation, those names, we all agree, were
just fine. Nowadays, perhaps aliases for them should be common,
depending on whether these cells are used to build lists or trees
(a sketch of such aliases follows the list):
- First/Rest when traversing proper linked-lists.
- Left/Right when traversing trees.
- MakePair instead of CONS for generic use independent of list/tree distinction.
- JoinTrees instead of CONS when combining two sub-trees to make whole tree.
- Prepend instead of CONS when adding one element onto front of proper list.
- PUSH as defined in CL for side-effecting a place by prepending.
- POP as defined in CL for side-effecting a place by un-prepending.
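
(Most of these aliases are one-liners; a sketch in plain CL, with the
names being hypothetical rather than standard:)

(defun make-pair (a b) (cons a b))            ; generic, list/tree agnostic
(defun join-trees (left right) (cons left right))
(defun prepend (element list) (cons element list))
(defun left (tree) (car tree))
(defun right (tree) (cdr tree))
;; FIRST, REST, PUSH and POP already exist in CL with the intended
;; meanings.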
I don't like the original "Combine" of AIM-8 because it's too
ambiguous/vague. In the context of a mathematical paper that
defined single-use jargon with only a very small number of terms
defined, it was fine. But outside that local context it's way too
vague. There are so very many ways that objects can be combined
that simply saying "Combine" begs the question of what operation
is desired. It's sort of like "COPY", which could mean almost anything
(copy-list copy-alist copy-tree etc.) as KMP explained in his essay.
> So what do you propose? QUARK, UP and DOWN? PAIR, LEFT, and RIGHT?
> WUJI, YIN, YANG? We're not discussing lists, here, but pairs.
I don't know what the other poster would suggest, but see my
suggestion above, with different aliases depending on intentional
datatype, either proper list or arbitrary binary tree. Also a
low-level fallback when the intentional type is not known, which
ought to be *never* or almost never in any well-designed program.
If the low-level fallback is CAR/CDR, I have no problem with that.
The more arcane the names are for the low-level primitives, the
more likely the application programmer is to hate them and instead
think about what he/she is *intending* to do and hence decide what
intentional datatype is *really* being built out of these arcane
CONS cells, and hence use the appropriate high-level
intentional-type-dependent aliases instead of the arcane fallbacks.
> And the notion of pair is entirely disconnected from anything
> else, there's no sense to give any specific name to it. Therefore
> using the names CONS, CAR, CDR is a very good,
I agree in the sense that those names ignore the intentional use of
the data structures, i.e. "disconnected" means disconnected from
intention, thus are just fine as obscure names that ordinary
programmers should avoid using.
> because it connects the notion with the history of lisp, every
> lisper knows what they actually mean, and this gives some lively
> flesh to some otherwise quite abstract and arid notion.
That's nice, but I don't see that as a properly guiding reason for
using those names. I like my reasons much better.
> in section 4.2, AIM-8 mentioned "Binary Lisp", with a combine
> function taking both atoms and s-expressions for both arguments,
> that is, defining a pair, but obviously discarded the notion
> slightly too quickly
Yeah. McCarthy was bright, but not infallible. So when he was busy
writing AIM-8 he didn't have time/energy to think of *every*
implication of the design decisions of his adaptation of the Lambda
Calculus to practical computing. So he thought of binary lisp, and
wrote a note about it, and dismissed it, just to show that he did
his due diligence in proposing proper-list-only Lisp, that he
considered the primary alternative. Funny that due to machine
considerations his dismissed alternative was actually the first
implementation and to this day remains the primary implementation.
I won't fault him for dismissing it without a full two-year study
of the idea. He had a publication deadline! Better is the enemy of
good enough. AIM-8 was good enough at the time it was written.
Better can come later, as it did.
> Therefore we could say that FIRST and REST are MORE archaic than
> CAR and CDR (by at least a few weeks).
I agree, but with a caveat. First/Rest were intended as the
*primitive* operations, the *only* ways you can build structure
directly. In that sense they are archaic. But after CAR/CDR were
actually implemented much more generally, then later using
First/Rest as aliases for the special restricted case wasn't
archaic at all. Do you see that distinction: First/Rest as a
straitjacket that limits mobility, versus First/Rest as
shorthands for a subset of what can be done?
> But the real reason why both LIST*/FIRST/REST and CONS/CAR/CDR are
> kept is that, while their binary code is almost the same or identical,
> they are not intended for the same intentional types.
Hey there!! **THIS** is why I decided to reply to your article. It
looks like you have decided that my emphasis on intentional
datatypes as the most important consideration in computer
software/algorithms, inspired originally by KMP's essay but much
more emphasised by me in recent months, is really a good idea, and
you have at this point started to adopt my way of talking about it!
Welcome to the club of intentional datatypes.
> CONS/CAR/CDR work with the intentional data type Pair (which is
> called CONS cell in Common Lisp, or PAIR in Scheme).
I see those more as low-level primitives. I'd still like aliases
like I proposed earlier for explicit intentional-type Pair,
especially when used as cells for building binary trees.
> LIST*/FIRST/REST work with the intentional data type List (which
> doesn't exist explicitely in Lisp or Scheme).
Hmm. LIST* instead of Prepend? Interesting idea! Prepend allows
prepending just one element at a time, whereas LIST* allows
prepending as many elements as allowed by call-arguments-limit.
Since call-arguments-limit is at least 50, whereas reasonable use
would be to prepend at most five or ten individually-specified
elements at a time, it seems to suffice. However LIST* as a name is
sort of arcane. It slightly begs the question. Sure if you read the
documentation you see that all but the last parameter are treated
as elements, while the last parameter is treated as a list. Maybe
Prepend should be an alias for LIST* instead of an alias for CONS as
I proposed above. That would support the flexibility you noted and
the non-arcaneness of naming that I prefer. On the other hand,
Prepend and APPEND would then be in discord, since my Prepend would
take separate elements while APPEND takes only lists as
parameters. So maybe your arcane LIST*, with some tutorial trick to
explain it to newbies, would be best after all. LIST takes all
elements as parameters, and builds a List. LIST* has one extra
argument at the end which by the asterisk notation means (like in
BNF or RegEx) a whole bunch of elements. Voila, explained! Good
enough??
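
(A quick REPL illustration of the distinction, using standard CL only:)

(list 1 2 3)       ; => (1 2 3)    every argument becomes an element
(list* 1 2 3)      ; => (1 2 . 3)  the last argument is the tail
(list* 1 2 '(3 4)) ; => (1 2 3 4)  i.e. 1 and 2 prepended onto (3 4)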
Note that *LIST doesn't exist as a built-in function, partly
because it is too inefficient to implement with linked lists,
because the first parameter must be entirely copied. (Same
explanation: a whole bunch of elements as first parameter, then
single elements for each parameter afterward; semantics: APPEND the
first argument to the LIST of the remaining arguments.)
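
(If one wanted *LIST anyway, it is a one-liner; the name and definition
are hypothetical, not standard CL, and it pays exactly the copying cost
described above:)

(defun *list (list &rest elements)
  ;; APPEND copies LIST; ELEMENTS is the fresh rest-list of the
  ;; remaining arguments, so it can serve as the tail directly.
  (append list elements))

(*list '(1 2 3) 4 5) ; => (1 2 3 4 5)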
So anyway, now that you've apparently joined my club of intentional
datatypes, would you like to volunteer to organize some of the
existing Lisp libraries according to which intentional datatype
each deals with? See my call for volunteers posted recently:
<http://groups.google.com/group/comp.lang.lisp/msg/7147b65821b8bcf7>
= <······················@yahoo.com>
Robert Maas, http://tinyurl.com/uh3t wrote:
>> From: ····@informatimago.com (Pascal J. Bourguignon)
>> But the real reason why both LIST*/FIRST/REST and CONS/CAR/CDR are
>> kept is that, while their binary code is almost the same or identical,
>> they are not intended for the same intentional types.
>
> Hey there!! **THIS** is why I decided to reply to your article. It
> looks like you have decided that my emphasis on intentional
> datatypes as the most important consideration in computer
> software/algorithms, inspired originally by KMP's essay but much
> more emphasised by me in recent months, is really a good idea, and
> you have at this point started to adopt my way of talking about it!
> Welcome to the club of intentional datatypes.
Well sure, we're communicating, aren't we.
> So anyway, now that you've apparently joined my club of intentional
> datatypes, would you like to volunteer to organize some of the
> existing Lisp libraries according to which intentional datatype
> each deals with? See my call for voluneers posted recently:
> <http://groups.google.com/group/comp.lang.lisp/msg/7147b65821b8bcf7>
> = <······················@yahoo.com>
Well, I don't have time left for at least one year.
Besides, I find that most CL or library functions are well documented
with respect to the intentional type, or, like CAR/CDR, are more or less
explicitly designed to be able to work at several levels.
--
__Pascal Bourguignon__
http://www.informatimago.com
On Oct 9, 7:44 pm, Anticomuna <············@uol.com.br> wrote:
> How? Is there anything wrong with the concept? I thought it was so
> easy to parse the S-expressions because of it. Is this really a
> problem for Lisp or this is more of a non-Lisper automatic reaction to
> something he is not used to?
>
> How would the implementation of a Lisp without a using cons look like?
> Would it have any advantage over the "archaic" one?
You could imagine a language similar to lisp in which everything is
either an atom or a null-terminated list, and not having dotted pairs.
Some things would have to change (like alists) but much of the flavor
of lisp would not be affected. I don't think there would be any loss
of generality. Instead of cons you would just use list.
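
(For instance, alists would become lists of two-element lists; ASSOC
already handles that shape in standard CL:)

'((a . 1) (b . 2))        ; alist with dotted pairs
'((a 1) (b 2))            ; the same data without dotted pairs
(assoc 'b '((a 1) (b 2))) ; => (B 2); read the value with SECOND
                          ;    instead of CDR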
smallpond <·········@juno.com> writes:
> You could imagine a language similar to lisp in which everything is
> either an atom or a null terminated list, and not having dotted pairs.
> Some things would have to change (like alists) but much of the flavor
> of lisp would not be affected. I don't think there would be any loss
> of generality. Instead of cons you would just use list.
Correct. The only difference is that you will always have the extra
space usage that a list uses (assuming a singly-linked list instead of
one implemented on top of arrays; the latter makes structure
sharing really difficult).
But Common Lisp is intended as a practical language, and increasing the
size of data structures can be a problem for programs that need to
scale up.
--
Thomas A. Russ, USC/Information Sciences Institute
smallpond <·········@juno.com> wrote:
+---------------
| Anticomuna <············@uol.com.br> wrote:
| > How would the implementation of a Lisp without a using cons look like?
| > Would it have any advantage over the "archaic" one?
|
| You could imagine a language similar to lisp in which everything is
| either an atom or a null terminated list, and not having dotted pairs.
| Some things would have to change (like alists) but much of the flavor
| of lisp would not be affected. I don't think there would be any loss
| of generality. Instead of cons you would just use list.
+---------------
There is also a Middle Way:
http://en.wikipedia.org/wiki/CDR_coding
http://www.faqs.org/faqs/lisp-faq/part2/section-9.html
[2-9] What is CDR-coding?
S-expr syntax remains unchanged. Storage for s-exprs uses
contiguous vectors where possible, links (lists of vectors)
where not.
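
(Roughly, CDR-coding replaces the explicit cdr pointer with a 2-bit
code per cell wherever a list happens to be laid out contiguously; a
sketch of the layout, with the codes as in the FAQ entry above:)

;; Conventional list (1 2 3): three 2-word cells linked by cdr pointers:
;;   [1 | *] -> [2 | *] -> [3 | NIL]
;; CDR-coded list (1 2 3): three 1-word cells in one contiguous block:
;;   [1 cdr-next] [2 cdr-next] [3 cdr-nil]
;; cdr-next:   the cdr is the cell at the next address
;; cdr-nil:    the cdr is NIL
;; cdr-normal: an explicit cdr pointer follows in the next word
;; RPLACD on a compressed cell has to splice in a forwarding cell, one
;; reason the scheme is awkward without hardware support.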
-Rob
-----
Rob Warnock <····@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
····@rpw3.org (Rob Warnock) writes:
> smallpond <·········@juno.com> wrote:
> +---------------
> | Anticomuna <············@uol.com.br> wrote:
> | > How would the implementation of a Lisp without a using cons look like?
> | > Would it have any advantage over the "archaic" one?
> |
> | You could imagine a language similar to lisp in which everything is
> | either an atom or a null terminated list, and not having dotted pairs.
> | Some things would have to change (like alists) but much of the flavor
> | of lisp would not be affected. I don't think there would be any loss
> | of generality. Instead of cons you would just use list.
> +---------------
>
> There is also a Middle Way:
>
> http://en.wikipedia.org/wiki/CDR_coding
>
> http://www.faqs.org/faqs/lisp-faq/part2/section-9.html
> [2-9] What is CDR-coding?
>
> S-expr syntax remains unchanged. Storage for s-exprs uses
> contiguous vectors where possible, links (lists of vectors)
> where not.
And interestingly, experience showed that CDR coding didn't save much
time or space at all on real programs, so it's a technique that's no
longer applied in modern lisp implementations. This should tell us
something about all these languages that try to use vectors instead of
pair-based lists...
--
__Pascal Bourguignon__ http://www.informatimago.com/
Our enemies are innovative and resourceful, and so are we. They never
stop thinking about new ways to harm our country and our people, and
neither do we. -- Georges W. Bush
···@informatimago.com (Pascal J. Bourguignon) writes:
> [...]
> And interestingly, experience showed that CDR coding didn't save much
> time or space at all on real programs, so it's a technique that's no
> longer applied in modern lisp implementations. This should tell us
> something about all these languages that try to use vectors instead of
> pair-based lists...
No, the cause and effect aren't quite right here. In any decision,
there must be proper attention to both the benefit of the decision
and the cost. You mention time and space savings above, and as far as
space is concerned, there could be large space savings to be had with
cdr-coding, without much time cost when things aren't fragmented,
especially for constant lists. We've definitely looked into the
possibility of creating such a representation. However, the thing
that stops us from implementing cdr-coding (as well, I presume, as any
other Lisp vendors on General Purpose hardware), is the sheer cost of
the implementation in terms of time to discriminate and access cons
cells in a different way. The need for heavy use of hardware dispatch
for these cdr-coded constructs presumably makes them ideal for
machines that can do such dispatch in hardware, and it also precludes
us from doing the same on GP hardware, especially for such a
fundamental building-block as the cons cell. But as for the benefits,
don't for a moment think at least one GP vendor hasn't at least
considered it...
The real cause-and-effect is a direct one; instead of thinking that
cdr-coding went out of style _with_ the Lisp machines, think of it as
going out of style _because_ of the Lisp machines going out of style.
This implies that if Lisp machines came back in style, so could
cdr-coding. You may think to yourself "Preposterous; Lisp machines
will never be practical to build in such a way that they will compete
with the current advances in GP technology". And perhaps you would be
right, in a day and age where Moore's law is in effect. The question
is, Is Moore's law still in effect?
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
In article <··············@hubble.informatimago.com>,
···@informatimago.com (Pascal J. Bourguignon) wrote:
> [...]
> And interestingly, experience showed that CDR coding didn't save much
> time or space at all on real programs,
Any sources of experience?
> so it's a technique that's not
> applied anymore in modern lisp implementations. This should tell
> something about all these languages that try to use vectors instead of
> pair-based lists...
--
http://lispm.dyndns.org/
On Sat, 11 Oct 2008 17:55:22 +0200, Rainer Joswig wrote:
>> [...]
>> And interestingly, experience showed that CDR coding didn't save much
>> time or space at all on real programs,
>
> Any sources of experience?
LMI dropped CDR coding for the K-machine, see:
<http://eval.apply.googlepages.com/kmachine.htm>
The paper doesn't go into too much detail on the measurements that they
did though.
Robert Swindells
Robert Swindells <···@fdy2.demon.co.uk> writes:
> On Sat, 11 Oct 2008 17:55:22 +0200, Rainer Joswig wrote:
>
>> [...]
>> Any sources of experience?
>
> LMI dropped CDR coding for the K-machine, see:
>
> <http://eval.apply.googlepages.com/kmachine.htm>
>
> The paper doesn't go into too much detail on the measurements that they
> did though.
It explains enough:
No CDR codes
Analysis of the storage on existing Lisp Machines indicated that
only about 20 percent of the storage was actually stored in lists
(the rest being vectors, structures, or classes), and that about
20 percent of the lists benefited from CDR codes. CDR codes take
their toll in increasing memory references (the CAR must be
examined before the CDR can be located), and reduced address space
(2-bits per pointer reserved for every object in the system). One
CDR-code bit was added to the address space, and the other to the
tag.
So basically, CDR coding is an optimization that lets you halve
0.2 * 0.2 = 4% of the used memory, for a total saving of 2% of used
memory, while roughly doubling the time it takes to access a CDR (the
CAR must be examined before the CDR can be located)...
It's true that it could be worth reintroducing it (and possibly other
schemes) if we could do so at the hardware level (with the implied
parallelism).
But it seems to me the main reason why it's no longer so useful is the
introduction of additional primitive types such as structures
(objects) and vectors (and arrays), and of all the other data types
that are implemented over these instead of over lists. I'd also
mention the fact that data in lisp is not normally immutable.
When every (defstruct s) is effectively (defstruct (s (:type list))),
having the system automatically implement it as
(defstruct (s (:type vector))) is worthwhile. But when the primitive
structure is already similar to (defstruct (s (:type vector))),
there's no point in trying to CDR-code the remaining lists.
--
__Pascal Bourguignon__ http://www.informatimago.com/
Nobody can fix the economy. Nobody can be trusted with their finger
on the button. Nobody's perfect. VOTE FOR NOBODY.
In article <··············@hubble.informatimago.com>,
···@informatimago.com (Pascal J. Bourguignon) wrote:
> Robert Swindells <···@fdy2.demon.co.uk> writes:
>
> > [...]
> > LMI dropped CDR coding for the K-machine, see:
> >
> > <http://eval.apply.googlepages.com/kmachine.htm>
> >
> > The paper doesn't go into too much detail on the measurements that they
> > did though.
>
The K-Machine was supposed to be a very different architecture,
compared to the earlier systems.
> It explains enough:
>
> No CDR codes
>
> Analysis of the storage on existing Lisp Machines indicated that
> only about 20 percent of the storage was actually stored in lists
> (the rest being vectors, structures, or classes), and that about
> 20 percent of the lists benefited from CDR codes. CDR codes take
> their toll in increasing memory references (the CAR must be
> examined before the CDR can be located), and reduced address space
> (2-bits per pointer reserved for every object in the system). One
> CDR-code bit was added to the address space, and the other to the
> tag.
>
>
> So basically, CDR coding is an optimization that makes you divide by
> two 0.2*0.2 = 4% of the used memory, for a total sparing of 2% of used
> memory. While doubling the access time of CAR cells...
Lisp Machines that actually used CDR-coding could save both space and
time. Space was saved since the two-word CONS cell was replaced by a
single memory cell (not two). Time was saved, for example, since less
memory had to be moved to walk through a list. The CDR-code logic was
implemented in microcode on the CPU.
The Lisp Machines had the feature that one could use
'Optimize World' to sort the memory according to types
and memory areas. That could also allocate cdr-coded lists, which
got rid of the non-locality of cons cells.
Not sure, but I guess the copying GC could also create
CDR-coded lists.
Personally I would find it more interesting to see how much
it saves for running systems with data loaded, not so
much for a 'naked' system. There were some software systems
using lots of lists and others using fewer lists.
I've seen some using amazing amounts of lists - mostly
to implement custom object systems.
Features like CDR coding, compact code representation, copying
GC, memory areas, etc. made it possible to use large systems
(from 50MB upwards) on machines with 40MB RAM and huge amounts
of virtual memory. CDR-coding helped to reduce paging - very important.
There was quite a lot of attention to various optimizations.
Sure, CDR coding could be costly on other CPUs (as Duane mentioned),
but I still don't see real support for the assumption that
it was generally not a good idea on those existing systems
that used it, especially when earlier software sometimes used
lots of lists (where we today would use objects or other
data structures) in large virtual memories with costly paging.
>
>
> It's true that it could be worth reintroducing it (and possibly other
> schemes) if we could do so at the hardware level (with the implied
> parallelism).
>
> But it seems to me the main reason why it's no more so useful is the
> introduction of additionnal primitive types such as structures
> (objects) and vectors (and arrays) and all the other data types that
> are implemented over these ones instead of lists. I'd mention also
> the fact that data in lisp is not normally immutable.
>
>
> When all (defstruct s) is (defstruct (s (:type list))) having the
> system automatically implement them as (defstruct (s (:type vector)))
> is worthwhile. But when the primitive structure is already similar to
> (defstruct (s (:type vector))), there's no point in trying to CDR
> coding the remaining lists.
--
http://lispm.dyndns.org/
On Sat, 11 Oct 2008 23:07:42 +0200, Rainer Joswig wrote:
> [...]
> Sure, CDR coding could be costly on other CPUs (as Duane mentioned), but
> I still don't see real support for the assumption that it was generally
> not a good idea on those existing systems that used it, especially when
> earlier software sometimes used lots of lists (where we today would use
> objects or other data structures) in large virtual memories with costly
> paging.
If you know the size of your sequences in advance or they change
infrequently, then (adjustable) vectors are just as good. CDR coding
would be beneficial in cases where largish lists are built up gradually
and then they are unchanged for a long time - then the GC could quietly
pack them using CDR coding.
But something that mimics this (e.g. lists of vectors) could be
implemented quite easily when needed. Of course it would not be as
nifty and automated as the GC doing it. If I saw evidence of people
doing this, or other actual real-world examples of CDR coding helping
a lot, I would think it worth considering. But until then it just
seems like an optimization for a very special situation.
Tamas
In article <··············@mid.individual.net>,
Tamas K Papp <······@gmail.com> wrote:
> [...]
> If you know the size of your sequences in advance or they change
> infrequently, then (adjustable) vectors are just as good. CDR coding
> would be beneficial in cases where largish lists are built up gradually
> and then they are unchanged for a long time - then the GC could quietly
> pack them using CDR coding.
Lisp Machines were used differently. The Lisp image contained
all the data, all the code, all the OS, all the applications,
the windows, the bitmaps - everything. You basically boot
an image with lots of stuff preloaded, load your stuff on
top and then make a new image, which gets booted the next time.
It is just this image running on the metal - nothing else.
It contains lots of data. So when you save the image, then
it knows all about the data that is in memory and can
optimize it.
The GC does pack lists on the Lisp Machine at runtime. COPY-LIST also
creates CDR-coded lists from an existing list. Some other list
operations also made their changes to the list and then
created a kind of CDR-coded list (with the help of the GC).
>
> But something that mimics this (eg lists of vectors) could be implemented
> quite easily when needed. Of course it would not be as nifty and
> automated as the GC doing it. If I saw evidence of people doing this, or
> other actual real world examples of CDR coding helping a lot, I would
> think it is worth thinking about it. But until then it just seems like
> an optimization for a very special situation.
The 'general situation' was that at that time (1970-1990) main memory
was very expensive and virtual memory was extremely slow and expensive.
For that situation I think it made a lot of sense to implement all kinds
of things to decrease paging, increase locality, decrease
memory usage and so on.
Today we have lots more main memory and less paging (usually).
Still - I'm not happy that some Lisps use a similar amount
of memory for quite a bit less functionality (due to much
less compact code and data). I'm not saying that CDR coding
would be useful now, but I question the idea that it
was not useful then. I can understand that LMI did
not want to implement it on a very different architecture.
It is also not implemented on CL implementations on RISC or
CISC architectures - but if you look at these implementations,
quite a bit from the Lisp Machine OS architecture has NOT
been implemented (not just the CDR coding), since most CLs haven't
been used to implement a full operating system with hardware
drivers, network stacks, virtual memory paging, window systems,
process schedulers, .... especially not on machines with
20-40MB RAM.
I guess that's the yearly cdr-coding discussion we all fear?
>
> Tamas
--
http://lispm.dyndns.org/
On Sun, 12 Oct 2008 00:04:23 +0200, Rainer Joswig wrote:
> Today we have lots more main memory and less paging (usually). Still -
> I'm not happy that some Lisps use a similar amount of memory for quite a
> bit less functionality (due to much less compact code and data). I'm not
> saying that CDR coding would be useful now, but I question the idea that
> it was not useful then. I can understand that LMI did not want it to
> implement it on a very different architecture. It is also not
> implemented on CL implementations on RISC or CISC architectures - but if
> you look at these implementations, quite a bit from the Lisp Machine OS
> architecture has NOT been implemented (not just the CDR coding), since
> most CLs haven't been used to implement a full operating system with
> hardware drivers, network stacks, virtual memory paging, window systems,
> process schedulers, .... especially not on machines with 20-40MB RAM.
I admit I know very little about the history of Lisp machines and
optimizations like this. But CDR coding in today's world (non-Lisp OS,
Lisp programs as ordinary processes, relatively cheap RAM etc) seems like
the ultimate premature optimization.
> I guess that's the yearly cdr-coding discussion we all fear?
I don't know why Rob brought it up; perhaps he just wanted to mention
that it existed at one time. It was interesting to read about it, but I
fail to see a large practical benefit.
Tamas
Tamas K Papp <······@gmail.com> wrote:
+---------------
| Rainer Joswig wrote:
| > I guess that's the yearly cdr-coding discussion we all fear?
|
| I don't know why Rob brought it up; perhaps he just wanted to mention
| that it existed at one time.
+---------------
Recall the "Subject:" of this thread: "Cons cell archaic!?"
I brought it up in the context of a fallacious argument that
was implying that abandoning cons-based lists meant abandoning
S-exprs. I mentioned CDR-coding as "a Middle Way" possibility
that abandons cons-based lists while *keeping* traditional S-exprs.
Oh, and also as reminder that an alternative to pure cons cells
was an *OLD* idea... ;-}
+---------------
| It was interesting to read about it, but I
| fail to see a large practical benefit.
+---------------
Indeed.
-Rob
-----
Rob Warnock <····@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
On Oct 11, 6:47 pm, ····@rpw3.org (Rob Warnock) wrote:
> [...]
> Recall the "Subject:" of this thread: "Cons cell archaic!?"
> I brought it up in the context of a fallacious argument that
> was implying that abandoning cons-based lists meant abandoning
> S-exprs. I mentioned CDR-coding as "a Middle Way" possibility
> that abandons cons-based lists while *keeping* traditional S-exprs.
>
> Oh, and also as reminder that an alternative to pure cons cells
> was an *OLD* idea... ;-}
> [...]
It does make for a practical way of storing editable texts, though, as a
list of character vectors, with non-character views possibly included.
Editing is easy, searching is not too hard, and nothing is very slow.
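
(A minimal sketch of that representation: text as a list of strings,
one per line. BUFFER-INSERT is a hypothetical helper, and a real
editor buffer would need far more bookkeeping:)

(defun buffer-insert (buffer line col string)
  ;; Editing touches only one character vector; the list spine that
  ;; holds the lines is left alone.
  (let ((l (nth line buffer)))
    (setf (nth line buffer)
          (concatenate 'string (subseq l 0 col) string (subseq l col))))
  buffer)

(buffer-insert (list "hello world") 0 5 ",") ; => ("hello, world")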
Tamas K Papp <······@gmail.com> writes:
> [...]
> I admit I know very little about the history of Lisp machines and
> optimizations like this. But CDR coding in today's world (non-Lisp OS,
> Lisp programs as ordinary processes, relatively cheap RAM etc) seems like
> the ultimate premature optimization.
I've always preached against premature optimization, and I don't
regard this situation as one of those times. You have to ask yourself
the question "premature to what?". In the classic case of premature
optimization, the optimization takes place before the design is done,
or as a prerequisite to an implementation. Avoid such optimizations
like the plague. However, when as an implementor you are designing
representations for a fundamental building block, then yes, you want
to squeeze every byte, every cycle, even though it appears that bytes
and cycles are incredibly cheap nowadays. It wasn't more than a few
years ago when 100 Mb memory spaces were considered lavish and
overkill, and now a few gigabytes is the norm. Ten years ago, we were
continuing to fight to keep our lisp running in 2 Mb or less (and
we've kept the base lisp more or less to that size, though it
inevitably does grow as time goes on). Now, with biolisp and
triple-store problem sets, 10s of gigabytes are not enough, and we
fight to make the representation of a triple or a upi smaller and more
efficient, not to mention faster access. And in a typical lisp,
conses tend to make up a significant portion of the lisp heap for some
problem sets. You'd better believe that if we had the hardware needed
to implement cdr-coding efficiently, we would do so.
>> I guess that's the yearly cdr-coding discussion we all fear?
>
> I don't know why Rob brought it up, perhaps he just wanted to mention
> that it existed at a time. It was interesting to read about it, but I
> fail to see a large practical benefit.
Rob brought it up because we were discussing the subject line, and the
discussion moved toward alternate representations for lists.
Cdr-coding is one such representation, and it happens to have been a
proven technique (a proven technique need not be perfect, but only
needs to have been demonstrated to have been useful). OK, so now what
about your idea on how to represent cons cells?
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
Duane Rettig wrote:
> [...]
> I've always preached against premature optimization, and I don't
> regard this situation as one of those times. You have to ask yourself
> the question "premature to what?".
Precisely. The injunction against preemy opt is not a vote for
deliberately writing slow code, sadly the general interpretation thereof.
On many things we guess wrong, but when the frickin /name/ of your
language includes "list processing", the astute programmer will discern,
absent proof, that fast list processing will pay off.
> In the classic case of premature
> optimization, the optimization takes place before the design is done,
> or as a prerequisite to an implementation. Avoid such optimizations
> like the plague. However, when as an implementor you are designing
> representations for a fundamental building block, then yes, you want
> to squeeze every byte, every cycle, even though it appears that bytes
> and cycles are incredibly cheap nowadays. It wasn't more than a few
> years ago when 100 Mb memory spaces were considered lavish and
> overkill, and now a few gigabytes is the norm. Ten years ago, we were
> continuing to fight to keep our lisp running in 2 Mb or less (and
> we've kept the base lisp more or less to that size, though it
> inevitably does grow as time goes on). Now, with biolisp and
> triple-store problem sets, 10s of gigabytes are not enough, and we
>> fight to make the representation of a triple or a upi smaller and more
> efficient, not to mention faster access. And in a typical lisp,
> conses tend to make up a significant portion of the lisp heap for some
> problem sets. You'd better believe that if we had the hardware needed
> to implement cdr-coding efficiently, we would do so.
You refer of course to Eroom's Law, which states that no matter how fast
hardware improves programmers will throw away every iota of consequent
improved performance on retarded algorithms.
kzo
Kenny wrote:
> [...]
> You refer of course to Eroom's Law, which states that no matter how fast
> hardware improves programmers will throw away every iota of consequent
> improved performance on retarded algorithms.
Ugh, too much caffeine, not enough Bass ale: that should be "will throw
away (* pi iota)", explaining why my 3GHz broadband system is slower
than my Apple II with a cassette recorder and a rockin 56kb modem.
kzo
On Oct 12, 12:28 am, Kenny <·········@gmail.com> wrote:
> [...]
> Ugh, too much caffeine, not enough Bass ale: that should be "will throw
> away (* pi iota)", explaining why my 3ghz broadband system is slower
> than my Apple II with a cassette recorder and a rockin 56kb modem.
>
> kzo
I still remember with fondness working (in assembler) on a ~1 MHz
machine, with head-per-track disk - dropping into the debugger was
instantaneous... ctrl-D - and there was a screen's worth of memory, in
full symbolic glory.
On Oct 12, 8:30 am, Duane Rettig <·····@franz.com> wrote:
> [...] You'd better believe that if we had the hardware needed
> to implement cdr-coding efficiently, we would do so.
Is it really that expensive in terms of runtime on GP hardware? On a
64-bit system, it seems like you'd easily have another tag bit to
throw at the problem, so you could have cdr-coded conses having both
the cons bit and the cdr-coded bit set; CAR would stay as it is now,
although it would involve another mask-test-jump triplet for open-
coded CDRs, but even that could be lifted up out of loops by a clever
compiler.
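To make the trade-off concrete, here is a toy model of cdr coding in
portable Common Lisp, simulating the per-cell cdr codes with a parallel
array. Everything in it (the names KAR, KDR, *DATA*, and the three
codes) is a hypothetical stand-in for what would really be tag bits on
real hardware; the ECASE in KDR plays the role of the extra
mask-test-jump mentioned above.

;; Toy cdr-coding model.  A "cell" is an index into *DATA*; the
;; parallel *CDR-CODE* array holds one of :NEXT (the cdr is simply the
;; following word), :NIL (end of a cdr-coded list), or :NORMAL (the
;; cdr is stored in the next word, as in an ordinary two-word cons).
(defvar *data*     (make-array 1024))
(defvar *cdr-code* (make-array 1024))
(defvar *free*     0)

(defun kar (cell)
  ;; CAR is the same for every representation: one load.
  (aref *data* cell))

(defun kdr (cell)
  ;; The extra test under discussion: a dispatch on the cdr code.
  (ecase (aref *cdr-code* cell)
    (:next   (1+ cell))
    (:nil    nil)
    (:normal (aref *data* (1+ cell)))))

(defun make-cdr-coded-list (&rest elements)
  ;; Lay the elements out in consecutive words: half the space of a
  ;; chain of two-word conses.
  (let ((start *free*))
    (loop for (e . more) on elements
          do (setf (aref *data* *free*) e
                   (aref *cdr-code* *free*) (if more :next :nil))
             (incf *free*))
    start))

;; Walking such a list:
(let ((l (make-cdr-coded-list 'a 'b 'c)))
  (loop for cell = l then (kdr cell)
        while cell
        collect (kar cell)))
;; => (A B C)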
"Thomas F. Burdick" <········@gmail.com> writes:
> On Oct 12, 8:30 am, Duane Rettig <·····@franz.com> wrote:
>> [...]
>
> Is it really that expensive in terms of runtime on GP hardware?
Yes.
> On a
> 64-bit system, it seems like you'd easily have another tag bit to
> throw at the problem, so you could have cdr-coded conses having both
> the cons bit and the cdr-coded bit set; CAR would stay as it is now,
> although it would involve another mask-test-jump triplet for open-
============================^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> coded CDRs, but even that could be lifted up out of loops by a clever
> compiler.
Another? We don't need no stinkin' tests! Let's look at linux/x86-64
code:
CL-USER(1): (compile
(defun foo (x)
(declare (optimize speed (safety 0) (debug 0)))
(caddr x)))
FOO
NIL
NIL
CL-USER(2): (disassemble *)
;; disassembly of #<Function FOO>
;; formals: X
;; code start: #x71731c84:
0: 8b 40 f3 movl eax,[eax-13]
3: 8b 58 f3 movl ebx,[eax-13]
6: 8b 43 ef movl eax,[ebx-17]
9: f8 clc
10: 8b 75 fc movl esi,[ebp-4]
13: c3 ret
CL-USER(3):
Now, how many tests do you see in the above code? And even if you
tell me that you don't read assembler, you should be able to answer
this one; there are simply no jumps, conditional or not; the code is
completely linear.
So, you say, what if I don't pass a proper list in? Well, let's do that:
CL-USER(3): (foo '(a . 10))
Error: Attempt to take the cdr of 10 which is not listp.
[condition type: TYPE-ERROR]
Restart actions (select using :continue):
0: Return to Top Level (an "abort" restart).
1: Abort entirely from this (lisp) process.
[1] CL-USER(4):
Note that the error got caught at the bogus data. Note furthermore
(you'll have to know how to read assembler for this one), that the
suspension point is at relative pc 3, which is the place where the
second cdr of the caddr was attempted:
[1] CL-USER(4): :zo :verbose t :count 2
Evaluation stack:
... 1 more (possibly invisible) newer frame ...
call to ERROR
required arg: EXCL::DATUM = TYPE-ERROR
&rest EXCL::ARGUMENTS = (:DATUM 10 :EXPECTED-TYPE ...)
function suspended at relative address 336
----------------------------
->call to FOO
required arg: X = (A . 10)
function suspended at relative address 3
----------------------------
(ghost call to EXCL::%EVAL)
----------------------------
call to EVAL
required arg: EXP = (FOO '(A . 10))
function suspended at relative address 97
----------------------------
... more older frames ...
[1] CL-USER(5):
No, we already let the hardware do the testing itself on architectures
and operating systems that allow it, and _any_ testing at all to
distinguish cons cells from cdr-coded lists represents a manifold
increase in cost.
Now, let's imagine a future generation lisp machine. It has a couple
of instructions, not microcoded, but perhaps which operate completely
in parallel - perhaps similar in concept to the predicated
instructions of the IA-64. On one branch, we perform cdr operations
on what we think might be cons cells, and on the other branch, we've
provided in the instruction a count of the number of cdrs we want to
advance. Also, imagine a memory-management unit that tracks the
number of cdrs in a particular cdr-coded list; presumably the cons
cell would be a degenerate case with a count of 0. At any juncture,
we would need to be prepared for cdr-coded lists of any length
(perhaps a shared structure with lists pointing to other lists), and
so a decision would indeed need to be made for the list. Finally, a
caddr operation would be two instructions: "cdr twice" and "car". The
"cdr twice" instruction would start both a pair of cdr operations
(which would take no more time than two cdr operations), and at the
same time test for and advance two locations down a potential
cdr-coded list. The test would take an extra cycle, so whichever half
of the instruction is valid because of the data would succeed and the
other half invalidated, and the car would take its pointer from
whatever half of the "cdr twice" instruction succeeded. Your idea to
use similarly aligned tags for cons cells and cdr-coded lists is a
good one, and would allow for minimal hardware to perform parallel
accesses because the accesses would always be aligned similarly.
The imagination above does not represent reality, and I don't know if
such a design would ever be cost-effective. But it does demonstrate
how a combination of hardware and software could be utilized to gain
speed over the current technique of successive cdring down cons cells.
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
On Sat, 11 Oct 2008 23:30:34 -0700, Duane Rettig wrote:
> [...] You'd better believe
> that if we had the hardware needed to implement cdr-coding efficiently,
> we would do so.
I am sure it would benefit some programs and improve speed and memory
efficiency. But there is a trade-off: some programs would be slower on
GP hardware because of the extra checking. I think it is premature
optimization if CDR coding is introduced before we weigh the costs and
the benefits.
Also, don't forget the extra complexity CDR coding would entail at the
implementation level. Complexity would mean new bugs and lots of
programmer time. The fact that neither the commercial nor the free Lisps
have CDR coding means that it is not worth it.
Tamas
Tamas K Papp <······@gmail.com> writes:
> On Sat, 11 Oct 2008 23:30:34 -0700, Duane Rettig wrote:
>
>> [...]
>
> I am sure it would benefit some programs and improve speed and memory
> efficiency. But there is a trade-off: some programs would be slower on
> GP hardware because of the extra checking.
Yes, absolutely.
> I think it is premature
> optimization if CDR coding is introduced before we weigh the costs and
> the benefits.
You don't see any GP hardware vendors doing any cdr-coding, do you?
No, it wouldn't be premature optimization; it would just be poor
design.
> Also, don't forget the extra complexity CDR coding would entail at the
> implementation level. Complexity would mean new bugs and lots of
> programmer time. The fact that neither the commercial nor the free Lisps
> have CDR coding means that it is not worth it.
You still aren't getting it. The absence of cdr-coding by current
vendors of Lisp has no relationship to whether cdr-coding is itself
"worth it" - instead it reflects the major shift of the Lisp industry
away from special-purpose hardware that supports cdr-coding. What
you're really discussing is whether such hardware itself is worth it.
And if you want to know what I think of that question, just consider
who I work for, and whether we sell Lisps on GP or special purpose
hardware...
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
On Oct 11, 4:33 pm, Tamas K Papp <······@gmail.com> wrote:
>
> I admit I know very little about the history of Lisp machines and
> optimizations like this. But CDR coding in today's world (non-Lisp OS,
> Lisp programs as ordinary processes, relatively cheap RAM etc) seems like
> the ultimate premature optimization.
>
When you follow that cdr pointer, you're frequently going to
dereference something that isn't in cache. So it's not just double
the space, but the cache misses that will get you. As flexible as
cons pairs are, they aren't great for modern processors, and null
terminated vectors will get you a lot of the same flexibility. Using
twice as much space also puts additional pressure on the cache. "CDR
coding" doesn't seem all that premature to me.
smallpond wrote:
> You could imagine a language similar to lisp in which everything is
> either an atom or a null terminated list, and not having dotted pairs.
> Some things would have to change (like alists) but much of the flavor
> of lisp would not be affected. I don't think there would be any loss
> of generality. Instead of cons you would just use list.
The cons pair is the simplest mechanism that allows practical lisp
construction; McCarthy is a theoretician after all. That's also where
the beauty of Lisp lies: combining primitive mechanisms in a
comprehensible way to construct high-level concepts. If you replace that
with some opaque voodoo[1] and apply that to the other parts of lisp
as well, you'll end up with something as impenetrable as the C++ STL.
[1] although I agree of course that, for example, strings, hashtables,
etc. are such opaque voodoo, but they're not the basic building blocks
of the language.
> From: smallpond <·········@juno.com>
> You could imagine a language similar to lisp in which everything is
> either an atom or a null terminated list, and not having dotted pairs.
Under that plan, it'd be impossible to build lists incrementally,
which is essential for efficiently implementing READ and other
parsers for variable-width list representations. The best that
could be done is to either pre-allocate the largest reasonable
array then copy it to a smaller array after the parse is done, and
switch to a larger array whenever an unexpectedly large list is
seen, or allocate blocks of elements that are chained together, and
then re-copy the chain to a single array when the READ operation is
done, or to emulate CDR-linked cells by using CADR-linked cells
instead and re-copy to a single array when done, or somehow emulate
a binary search tree to emulate an extensible array and re-copy to
a single array when done. It *can* be done, but it's a bit of a
mess any way you program it. See my more detailed discussion I
posted a few minutes ago. <······················@yahoo.com>
> Some things would have to change (like alists)
That detailed discussion proposes suitable alternative representations.
> but much of the flavor of lisp would not be affected.
I agree. Most low-level tasks would need to be re-designed quite
differently from how they are done with CONS cells. Of course
two-element vectors can be CADR-chained to emulate CDR chaining, as
a cheap hack if you don't have time to do it better. But making
more efficient use of these arbitrary-length vectors would be
worthwhile. At a sufficiently high level of abstraction, the
differences would go away except for issues of efficiency, and at
an even higher level of abstraction using self-balancing binary
search trees to emulate just about all application-level data
structures there'd be no difference even in efficiency; just about
every operation except full traversal would take log(n) time.
> I don't think there would be any loss of generality. Instead of
> cons you would just use list.
Actually LIST*, as Pascal pointed out in the article I was
responding to with my more complete exposition of how that
alternative might work.
On Oct 23, 1:35 pm, ·············@teh.intarweb.org (Robert Maas,
http://tinyurl.com/uh3t) wrote:
>
> > From: smallpond <·········@juno.com>
> > You could imagine a language similar to lisp in which everything is
> > either an atom or a null terminated list, and not having dotted pairs.
>
> Under that plan, it'd be impossible to build lists incrementally,
> which is essential for efficiently implementing READ [...]
>
There are better strategies than you list (pun not originally
intended :-). You can realloc a vector as you go, allocating some
extra slots each time in anticipation of future appends. If you bump
up the size exponentially (making it a constant multiple larger at
each realloc), the amortized cost for appending to a vector is only
O(1). Moreover, if you choose a factor less than 2 (say 1.5 or
something) for your reallocations, you'll use less memory than a
linked list when you're finished. If your memory allocator is
smart, you might occasionally get a larger vector without any
copying. Finally, traversing a vector will cause less cache grief
than traversing a linked list.
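A minimal sketch of that growth strategy in Common Lisp, with the
factor made explicit. (The built-in adjustable arrays with
VECTOR-PUSH-EXTEND give the same amortized behavior; DVEC and its
accessors are made-up names.)

(defstruct (dvec (:constructor make-dvec ()))
  (data (make-array 4) :type simple-vector)  ; backing store
  (count 0 :type fixnum))                    ; elements in use

(defun dvec-push (x dv)
  ;; Append X, growing by a factor of 3/2 when full: amortized O(1).
  (let ((data (dvec-data dv))
        (count (dvec-count dv)))
    (when (= count (length data))
      (let ((new (make-array (floor (* count 3) 2))))
        (replace new data)                   ; copy the old elements
        (setf (dvec-data dv) new
              data new)))
    (setf (svref data count) x)
    (incf (dvec-count dv))
    dv))

;; Usage:
(let ((dv (make-dvec)))
  (dotimes (i 10) (dvec-push i dv))
  (subseq (dvec-data dv) 0 (dvec-count dv)))
;; => #(0 1 2 3 4 5 6 7 8 9)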
> From: Scott <·······@gmail.com>
> There are better strategies than you list (pun not originally
> intended :-).
Yeah, sometimes puns, and also cliches, strike when we least expect.
> You can realloc a vector as you go, allocating some extra slots
> each time in anticipation of future appends. If you bump up the
> size exponentially (making it a constant multiple larger at each
> realloc), the amortized cost for appending to a vector is only
> O(1). Moreover, if you choose a factor less than 2 (say 1.5 or
> something) for your reallocations, you'll use less memory than a
> linked list when you're finished.
Hmm, so when you are all done parsing a level of list structure
from s-expression or XML or other syntax you keep the bloated array
instead of contracting it to exactly-correct size? Unfortunately if
your factor is strictly between 1 and 2 then when splitting a
recycled old array to use part of it for a new not-yet-large array
you suffer storage fragmentation. It might be better to use a
factor of exactly 2 so that both halves of a split large array can
be directly used for a previous-size array that appears later in
the progression up the size scale. (The fibonacci progression is
another well-known algorithm, which doesn't have an exactly fixed
factor, but otherwise satisfies both your general description and a
desire to avoid excessive fragmentation. Of course it would need to
be exact fibonacci times whatever size block is the proper
alignment for the system you're working on, rather than exact
fibonacci number of bytes. But exact factor of 2 works better with
virtual memory, never splitting an array across a page boundary
unless that array is larger than a page, simultaneously working
well with several nested sizes of cache blocks from CPU to RAM to
disk sector.)
> If your memory allocator is smart, you might occasionally get a
> larger vector without any copying.
True, once in a blue moon.
> Finally, traversing a vector will cause less cache grief than
> traversing a linked list.
That efficiency consideration really depends on the particular
application you're running. For using vectors to emulate lists that
are used for binary or multinary search trees, where typically you
look at only a few items at each level of the tree before jumping
down to a sub-tree, it wouldn't help much. For mapping a function
down an array such as in vector processing, it might help a lot.
For toplevel read-JIT-execute-print loop in Lisp, it's moot because
the entire nested-list of code is generated once as output from
READ and then read once during JIT compilation and then sits idle
ever after except during debugging when somebody needs to see the
code that triggered the bugtrap.
Here's another idea to try: Binary-chained CDR-coding where the
block sizes are always exact powers of two. Here's the sequence of
adding one unit at a time at the end of growing list:
Allocate 2:
1 [q/cdr]->NIL
2 [q/w]
Allocate 2, move 1 element:
3 [q/cdr]->[w/e]
Allocate 4, discard 2+2, move 3 elements:
4 [q/w/e/r]
Allocate 2, move 1 element:
5 [q/w/e/cdr]->[r/t]
Allocate 2, move 1 element:
6 [q/w/e/cdr]->[r/cdr]->[t/y]
Allocate 4, move 3 elements, discard 2+2:
7 [q/w/e/cdr]->[r/t/y/u]
Allocate 8, move 7 elements, discard 4+4:
8 [q/w/e/r/t/y/u/i]
Allocate 2, move 1 element:
9 [q/w/e/r/t/y/u/cdr]->[i/o]
Allocate 2, move 1 element:
10 [q/w/e/r/t/y/u/cdr]->[i/cdr]->[o/p]
Allocate 4, move 3 elements, discard 2+2:
11 [q/w/e/r/t/y/u/cdr]->[i/o/p/a]
Allocate 2, move 1 element:
12 [q/w/e/r/t/y/u/cdr]->[i/o/p/cdr]->[a/s]
Allocate 2, move 1 element:
13 [q/w/e/r/t/y/u/cdr]->[i/o/p/cdr]->[a/cdr]->[s/d]
Allocate 4, move 3 elements, discard 2+2:
14 [q/w/e/r/t/y/u/cdr]->[i/o/p/cdr]->[a/s/d/f]
Allocate 8, move 7 elements, discard 4+4:
15 [q/w/e/r/t/y/u/cdr]->[i/o/p/a/s/d/f/g]
Allocate 16, move 15 elements, discard 8+8:
16 [q/w/e/r/t/y/u/i/o/p/a/s/d/f/g/h]
Allocate 2, move 1 element:
17 [q/w/e/r/t/y/u/i/o/p/a/s/d/f/g/cdr]->[h/j]
etc.
What do you think of that algorithm? It requires only a single
CDR/ELEMENT flag for the whole array, any size block, which tells
how the last array cell will be interpreted. All other array cells
are *always* treated as elements (i.e. CARs). Small arrays
discarded when repacking to larger arrays are re-cycled rapidly,
2-arrays almost immediately, 4-arrays after not too long, etc.
So with a reference count system or forced discarding to free-list
this is very efficient storage-wise. Also most of the time only
elements near the end of the list need to be copied from two old
arrays to one new array. Also note the algorithm is nicely
mostly-tail recursive where repacking occurs on the way back up
using only the last two elements of the previous chain.
(Most of algorithm here: If the last two old blocks are the same
size, allocate twice that size and re-pack all old elements from
both old blocks and discard both old blocks. If the last two old
blocks are different size, allocate size 2 and move just one old
element from last old block. The only special cases are at the
very start when creating the very first size-2 array with cdr=NIL
and then the next step when simply replacing cdr=NIL with
element=whatever.)
Note that the stack for keeping track of all cells and allowing
efficient continuation without *ever* needing to scan from the
start of the chain to find the end again during building the chain
requires only log(n) depth worst-case. For efficiency this stack
could be implemented as an ordinary adjustable array with power of
two total capacity and fill pointer to show logical size at any
moment, using your original allocation algorithm with factor=2.
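Here is a toy Common Lisp rendering of that append rule, assuming the
rules exactly as stated above: a block is a struct holding a
power-of-two vector plus the one CDR/ELEMENT flag, and a newest-first
list plays the role of the auxiliary stack. All the names (BLK,
APPEND-ELEMENT, CHAIN->LIST) are made up; it models the bookkeeping
only, not a real allocator.

;; A block: a power-of-two vector of cells; when LINK-P is true the
;; last cell holds the "cdr" link to the next block (or NIL), otherwise
;; every cell is an element.  One flag per block, as described above.
(defstruct blk
  (cells #() :type simple-vector)
  (link-p nil))

(defun blk-size (b) (length (blk-cells b)))

(defun append-element (x stack)
  ;; STACK lists the blocks of the chain, newest first.  Returns the
  ;; updated stack.
  (let ((last (first stack)))
    (cond
      ;; Very start: a 2-block [x / cdr=NIL].
      ((null stack)
       (list (make-blk :cells (vector x nil) :link-p t)))
      ;; Next step: replace cdr=NIL with the element.
      ((and (blk-link-p last)
            (null (svref (blk-cells last) (1- (blk-size last)))))
       (setf (svref (blk-cells last) (1- (blk-size last))) x
             (blk-link-p last) nil)
       stack)
      ;; Last two blocks the same size: allocate twice that size,
      ;; re-pack all their elements plus X, discard both.
      ((and (rest stack)
            (= (blk-size last) (blk-size (second stack))))
       (let* ((prev (second stack))
              (new (make-blk
                    :cells (concatenate
                            'simple-vector
                            (subseq (blk-cells prev) 0 (1- (blk-size prev)))
                            (blk-cells last)
                            (vector x)))))
         (when (third stack)           ; re-point the earlier block's link
           (setf (svref (blk-cells (third stack))
                        (1- (blk-size (third stack))))
                 new))
         (cons new (cddr stack))))
      ;; Different sizes: splice a 2-block holding LAST's final element
      ;; and X; LAST's final cell becomes the link.
      (t
       (let ((new (make-blk
                   :cells (vector (svref (blk-cells last)
                                         (1- (blk-size last)))
                                  x))))
         (setf (svref (blk-cells last) (1- (blk-size last))) new
               (blk-link-p last) t)
         (cons new stack))))))

(defun chain->list (head)
  ;; Walk the chain from its head block, for checking.
  (loop while head
        append (let* ((cells (blk-cells head))
                      (n (length cells))
                      (m (if (blk-link-p head) (1- n) n)))
                 (prog1 (coerce (subseq cells 0 m) 'list)
                   (setf head (and (blk-link-p head)
                                   (svref cells (1- n))))))))

;; Ten appends reproduce the shapes in the trace above:
;; [1/2/3/4/5/6/7/cdr]->[8/cdr]->[9/10]
(let ((stack '()))
  (loop for i from 1 to 10
        do (setf stack (append-element i stack)))
  (chain->list (first (last stack))))
;; => (1 2 3 4 5 6 7 8 9 10)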
A similar idea could be used to make NTH efficient: Have a 2-d
array where one row has the pointers to the segments of the list
and the other row has the sizes of those segments. Then a single
pass through the second row counting down from the NTH parameter
would find which place to follow the pointer of the first row, and
the remainder just before zero-crossing would show direct index
within that segment of the list. So for a list of length N,
implemented as chain of power-of-two CDR-coded arrays plus that
continuation stack converted to 2-d array, only log(N) steps would
be needed to find any desired element.
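Sketched in Common Lisp, with the two rows of that 2-d array shown as
parallel vectors (the function name and the sample table are made up
for illustration):

;; SEGMENTS holds the element vectors of the chain's blocks (links
;; stripped); SIZES holds their lengths.  Finding element N scans the
;; (at most roughly log N) segment sizes, then indexes directly.
(defun segment-nth (n segments sizes)
  (loop for seg across segments
        for size across sizes
        do (if (< n size)
               (return (svref seg n))
               (decf n size))
        finally (error "index out of range")))

;; For the ten-element chain above:
(segment-nth 8
             (vector #(1 2 3 4 5 6 7) #(8) #(9 10))
             (vector 7 1 2))
;; => 9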
By the way, I thought of that new algorithm just tonight while
composing this message. It came out really nice the first time!
I have no idea if anybody else ever thought of this same algorithm
previously.
But see below for slightly bad news:
Now in fact an array of N cells has some extra header words that
make its total allocation size some constant larger than N*k bytes
of memory where k is the size of a single CAR cell.
(Assuming you're not using BIBOP where each page of RAM stores only
one type of data-block of constant size so you don't need any tag
bytes at all to identify type+length of each allocated block.
The above algorithm would work just fine as-is with BIBOP.
Another way that would work is if all the size+tag info is in
each pointer to a block rather than in the block pointed at.)
This would require slight modification of the algorithm to make
sure the *total* allocation unit was power of two. For example, if
the total header (size number plus type-tag code) of an allocation
block takes the same storage as one CAR-or-CDR pointer, and we have
these block types available:
- lista treats all cells after the header as elements
- listd treats last cell as CDR link, all others (after header) as elements
- listx ignores last cell, treats all cells between header...last as elements
then the sequence might go more like this:
Allocate 2:
1 [2#lista/q]
Allocate 4, copy 1 element, release 2:
2 [4#listx/q/w/ ]
Change tag type:
3 [4#lista/q/w/e]
Allocate 4, copy 1 element, change tag type:
4 [4#listd/q/w/ptr]->[4#listx/e/r/ ]
Change tag type:
5 [4#listd/q/w/ptr]->[4#lista/e/r/t]
Allocate 4, copy 1 element, change tag type:
6 [4#listd/q/w/ptr]->[4#listd/e/r/ptr]->[4#listx/t/y/ ]
Allocate 8, copy 6 elements, discard 4+4+4:
7 [8#lista/q/w/e/r/t/y/u]
Allocate 4, copy 1 element, change tag type:
8 [8#lista/q/w/e/r/t/y/ptr]->[4#listx/u/i/ ]
This is sufficiently complicated that I'm not sure this is a clean
algorithm, and it's so far past bedtime that I don't feel like
working it all out tonight. Actually at step 6->7, I could have
just changed the tag type of that last 4-array instead of repacking
already. Then repacking to 8-array would occur at step 7->8.
It's amazing I saw that when so groggy hours past bedtime!!
If this algorithm works at all, it probably will always require
keeping local pointers to the last three arrays in the chain
unlike the original algorithm which required only two pointers.
But still it's almost as nicely mostly-tail recursive if it works
in a uniform way. I'm too groggy to work out how many blocks get
combined at higher levels than 4+4+4->8. G'nite all.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Corrected second algorithm (was: Cons cell archaic!?)
Date:
Message-ID: <REM-2008nov03-002@Yahoo.Com>
About an hour after I went to bed, lying in bed unable to get to
sleep, I realized I made a mistake in my second algorithm. I didn't
reallocate early enough, which is why I had three blocks of the
same size at one point in the progression. So I got up and re-did
the algorithm below, and now it's much more uniform, nice and clean
just like the first algorithm. After a few hours sleep, I'm
semi-alert now to finish composing this article.
Corrected algorithm, where the total header (size number plus
type-tag code) of an allocation block takes the same storage as one
CAR-or-CDR pointer, and we have these block types available:
- lista treats all cells after the header as elements
- listd treats last cell as CDR link, all others (after header) as elements
- listx ignores last cell, treats all cells between header...last as elements
By my best effort now, the sequence should go like this:
Allocate 2:
1 [2#lista/q]
Allocate 4, copy 1 element, release 2:
2 [4#listx/q/w/ ]
Change tag type:
3 [4#lista/q/w/e]
Allocate 4, copy 1 element, change tag type:
4 [4#listd/q/w/ptr]->[4#listx/e/r/ ]
Change tag type:
5 [4#listd/q/w/ptr]->[4#lista/e/r/t]
Allocate 8, copy 5 elements, discard 4+4:
6 [8#listx/q/w/e/r/t/y/ ]
Change tag type:
7 [8#lista/q/w/e/r/t/y/u]
Allocate 4, copy 1 element, change tag type:
8 [8#listd/q/w/e/r/t/y/ptr]->[4#listx/u/i/ ]
Change tag type:
9 [8#listd/q/w/e/r/t/y/ptr]->[4#lista/u/i/o]
Allocate 4, copy 1 element, change tag type:
10 [8#listd/q/w/e/r/t/y/ptr]->[4#listd/u/i/ptr]->[4#listx/o/p/ ]
Change tag type:
11 [8#listd/q/w/e/r/t/y/ptr]->[4#listd/u/i/ptr]->[4#lista/o/p/a]
Allocate 8, copy 5 elements, discard 4+4:
12 [8#listd/q/w/e/r/t/y/ptr]->[8#listx/u/i/o/p/a/s/ ]
Change tag type:
13 [8#listd/q/w/e/r/t/y/ptr]->[8#lista/u/i/o/p/a/s/d]
Allocate 16, copy 13 elements, discard 8+8:
14 [16#listx/q/w/e/r/t/y/u/i/o/p/a/s/d/f/ ]
Change tag type:
15 [16#lista/q/w/e/r/t/y/u/i/o/p/a/s/d/f/g]
Allocate 4, copy 1 element, change tag type:
16 [16#listd/q/w/e/r/t/y/u/i/o/p/a/s/d/f/ptr]->[4#listx/g/h/ ]
Change tag type:
17 [16#listd/q/w/e/r/t/y/u/i/o/p/a/s/d/f/ptr]->[4#lista/g/h/j]
Allocate 4, copy 1 element, change tag type:
18 [16#listd/q/w/.../d/f/ptr]->[4#listd/g/h/ptr]->[4#listx/j/k/ ]
Change tag type:
19 [16#listd/q/w/.../d/f/ptr]->[4#listd/g/h/ptr]->[4#lista/j/k/l]
Allocate 8, copy 5 elements, discard 4+4:
20 [16#listd/q/w/.../d/f/ptr]->[8#listx/g/h/j/k/l/z/ ]
Change tag type:
21 [16#listd/q/w/.../d/f/ptr]->[8#lista/g/h/j/k/l/z/x]
Allocate 4, copy 1 element, change tag type:
22 [16#listd/q/w/.../d/f/ptr]->[8#listd/g/h/j/k/l/z/ptr]->[4#listx/x/c/ ]
Change tag type:
23 [16#listd/q/w/.../d/f/ptr]->[8#listd/g/h/j/k/l/z/ptr]->[4#lista/x/c/v]
Allocate 4, copy 1 element, change tag type:
24 [16#...]->[8#listd/g/h/j/k/l/z/ptr]->[4#listd/x/c/ptr]->[4#listx/v/b/ ]
This new second algorithm is only slightly more complicated than
the first algorithm, having a 3-way instead of 2-way choice after
the special case at the start: If last block is listx then change
to lista and just put new element there, else if last two blocks
both exist and are same size then merge them, else splice new
4#listx at end. Showing just parts of the sequence onward:
27 [16#...]->[8#listd/g/h/j/k/l/z/ptr]->[8#lista/x/c/v/b/n/m/Q]
Allocate 16, copy 13 elements, discard 8+8:
28 [16#...]->[16#listx/g/h/j/k/l/z/x/c/v/b/n/m/Q/W/ ]
Change tag type:
29 [16#...]->[16#lista/g/h/j/k/l/z/x/c/v/b/n/m/Q/W/E]
Allocate 32, copy 29 elements, discard 16+16:
30 [32#listx/q/w/...m/Q/W/E/R]
53 [32#...]->[16#...]->[8#...]->[4#...]
Allocate 4, copy 1 element, change tag type:
54 [32#...]->[16#...]->[8#...]->[4#...]->[4#listx/...]
Change tag type:
55 [32#...]->[16#...]->[8#...]->[4#...]->[4#lista/...]
Allocate 8, copy 5 elements, discard 4+4:
56 [32#...]->[16#...]->[8#...]->[8#listx/...]
Change tag type:
57 [32#...]->[16#...]->[8#...]->[8#lista/...]
Allocate 16, copy 13 elements, discard 8+8:
58 [32#...]->[16#...]->[16#listx/...]
Change tag type:
59 [32#...]->[16#...]->[16#lista/...]
Allocate 32, copy 29 elements, discard 16+16:
60 [32#...]->[32#listx/...]
Change tag type:
61 [32#...]->[32#lista/...]
Allocate 64, copy 61 elements, discard 32+32:
62 [64#listx/...]
241 [128#...]->[64#...]->[32#...]->[16#...]->[8#...]->[4#lista/...]
Allocate 4, copy 1 element, change tag type:
242 [128#..]->[64#..]->[32#..]->[16#..]->[8#..]->[4#listd/..]->[4#listx/..]
As with the first algorithm, an auxiliary "stack" array of at most
log(n) elements points to the blocks of this chain, providing
continuation-style operation of this algorithm whereby we never
have to search down the chain to find the last two blocks where we
apply the algorithm each time a new element is presented to be
added to the end. This auxiliary array could be allocated using your
original algorithm with factor exactly 2, like this:
1 thru 9 [4#adjustableArray/fillPointer/ / ]
Allocate 8, copy 2 elements, discard 4:
10 thru 241 [8#adjustableArray/fillPointer/ / / / / / ]
Allocate 16, copy 6 elements, discard 8:
242 thru 65505 [16#adjustableArray/fillPointer/ / / / / / / / / / / / / / ]
Allocate 32, copy 14 elements, discard 16:
65506 thru EOTWAWKI=YHRTEOTI.TINMTS. [32#adjustableArray/fillPointer/...]
From an ad on TV, a few years ago, for high-speed DSL:
+-------------------------------------------+
| **ALERT** |
| You have reached the end of the InterNet. |
| There is nothing more to see. |
+-------------------------------------------+
(Only 'Chloe' on Smallville could fully appreciate that accomplishment!)
On Oct 9, 4:44 pm, Anticomuna wrote:
> How? Is there anything wrong with the concept? I thought it was so
> easy to parse the S-expressions because of it. Is this really a
> problem for Lisp or this is more of a non-Lisper automatic reaction to
> something he is not used to?
>
> How would the implementation of a Lisp without a using cons look like?
> Would it have any advantage over the "archaic" one?
Not sure what you mean by “archaic” ones, but yes, lisp's cons and the
irregularities of its nested paren syntax are 2 fundamental problems
of lisp.
See:
• Fundamental Problems of Lisp
http://xahlee.org/UnixResource_dir/writ/lisp_problems.html
Plain text version follows.
---------------------------------
Fundamental Problems of Lisp
Xah Lee, 2008-07
In this article, we discuss 2 problems that are rooted in lisp. One is
the irregularity in its often-cited regular syntax. The other is the
language's use of “cons” for lists.
Syntax Irregularities
The Lisp family of languages, in particular Common Lisp, Scheme Lisp,
and Emacs Lisp, is well known for the regularity of its syntax, namely,
“everything” is of the form “(f x1 x2 ...)”. However, it is little
talked about that there are several irregularities in its syntax. Here
are some examples of the syntax irregularity.
* The comment syntax of semicolon to end of line “;”.
* The dotted notation for cons cell “(1 . 2)”.
* The single quote syntax used to hold evaluation, e.g. “'(1 2
3)”.
* The backquote and comma syntax used to hold but evaluate parts
of expression, e.g. “(setq x 1) (setq myVariableAndValuePair
`(x ,x))”.
* The “,@” for inserting a list as elements into another list.
e.g. “(setq myListX (list 1 2)) (setq myListY (list 3 4)) (setq
myListXY `(,@ myListX ,@ myListY))”
    * There are various others in Common Lisp or Scheme Lisp. For
example, the chars “#” and “#|”. Scheme's R6RS has introduced a
few new ones.
In the following, we detail how these irregularities hamper the power
of regular syntax, and some powerful features and language
developments that lisp may have missed because of them.
Confusing
Lisp's irregular syntax is practically confusing. For example, the
difference between “(list 1 2 3)”, “'(1 2 3)”, and “(quote (1 2 3))” is a
frequently asked question. The use of “` , ,@” etc. is esoteric. If
all these semantics used the regular syntactical form “(f args)”, then
much confusion would be reduced and people would understand and use
these features better. For example, “'(1 2 3)” might be changed to
“(' 1 2 3)”, and
(setq myListXY `(,@ myListX ,@ myListY))
could have been:
(setq myListXY (eval-parts (splice myListX) (splice myListY)))
or with sugar syntax for typing convenience:
(setq myListXY (` (,@ myListX) (,@ myListY)))”
Syntax-Semantics Correspondence
A regular nested syntax has a one-to-one correspondence to the
language's abstract syntax tree, and to a large extent the syntax has
some correspondence to the language's semantics. The irregularities in
syntax break this correspondence.
For example, programmers can pretty much tell what a piece of source code
“(f x1 x2 x3 ...)” does by just reading the name that appears as the first
element in the paren. By contrast, in syntax-soup languages such as
Java or Perl, the programmer must be familiar with each of their tens of
syntactical forms. (e.g. “if (...) {...}”, “for (...; ...; ...)
{...}”, “(some ? this : that)”, “x++”, “myList = [1, 2, 3]”, etc.) As an
example, if lisp's “'(1 2 3)” were actually “(quote 1 2 3)” or the shortcut
form “(' 1 2 3)”, then it would be much easier to understand.
Source Code Transformation
Lisp relies on a regular nested syntax. Because of such regularity of
the syntax, it allows transformation of the source code by a simple
lexical scan. This has powerful ramifications. (lisp's macros are one
example) For example, since the syntax is regular, one could easily
have alternative, easier-to-read syntaxes as a layer. (the concept is
somewhat known in early lisp as M-expressions) Mathematica took
advantage of this (probably independently of lisp's influence), so that you
really have easy-to-read syntax, yet fully retain the regular-form
advantages. In lisp history, such layers have been done and tried here and
there in various forms and langs (CGOL, Dylan), but never caught on,
largely due to social happenings. Part of the reasons are political,
and lispers' sensitivity to criticism of lisp's nested parens.
In lisp communities, it is widely recognized that lisp's regular
syntax has the property that “code is data; data is code”. However,
what that means exactly is usually not clearly understood in the
lisp community. Here is its defining characteristic:
A regular nested syntax makes it possible to do source code
transformations trivially with a lexical scan.
The benefits of a regular syntax have become widely recognized since
the mid 2000s, through the XML language. The XML language, due to its strict
syntactical regularity, has developed many transformation
technologies such as XSLT, XQuery, STX etc. See XML transformation
languages.
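As a minimal illustration of the “code is data” point (using a
structural walk over the form READ produces, rather than a literal
lexical scan):

(defun rename-symbol (form old new)
  ;; A generic walk over the nested list that READ produces.
  (cond ((eq form old) new)
        ((consp form) (cons (rename-symbol (car form) old new)
                            (rename-symbol (cdr form) old new)))
        (t form)))

(rename-symbol (read-from-string "(defun square (x) (* x x))") 'x 'n)
;; => (DEFUN SQUARE (N) (* N N))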
Automatic, Uniform, Universal, Source Code Display
One of the advantages of a pure fully functional syntax is that a
programmer should never need to format his source code (i.e. pressing
tabs, returns) while coding, saving the hundreds of hours of labor,
guides, tutorials, advice, publications, and editor tools spent on what's
known as “coding style convention”, because the editor can reformat
the source code on the fly based on a simple lexical scan.
Because lisp's syntax has lots of nested parentheses, the source code
formatting is much more labor intensive than in syntax-soup languages
such as Perl, even when using a dedicated lisp editor such as emacs
that contains a large number of editing commands for nested syntax.
The lisp community established a particular way of formatting lisp
code, as exhibited in emacs's lisp modes and written guides of
conventions. The recognition of such conventions further erodes any
possibility and awareness of automatic, uniform, universal
formatting. (e.g. the uniform and universal part of the advantage is
exhibited by Python)
As an example, the Mathematica language features a pure nested syntax
similar to lisp but without irregularities. So, in that language,
since version 3, released in 1996, the source code in its editor is
automatically formatted on the fly as the programmer types, much in the
same way paragraphs have been automatically wrapped in word processors
since the early 1990s.
Note the phrase “automatic, uniform, universal, source code display”.
By “automatic”, it means that any text editor can format your code on
the fly or on request, and this feature can be trivially implemented.
By “uniform”, it means there is one simple and mechanical heuristic
to determine a canonical way to format any lisp code for human-
readable display. By “universal” is meant that all programmers will
recognize and be habituated to this one canonical way, as a standard.
(they can of course set their editor to display it in other ways)
The “uniform” and “universal” aspects are a well-known property of
Python's source code. The reason Python's source code has such
uniform and universal display formatting is that it is worked into
the language's semantics, i.e. the semantics of the code depends on
the formatting (i.e. where you press tabs and returns). But also note,
Python's source code is not and cannot be automatically formatted,
precisely because the semantics and formatting are tied together. A
strictly regular nested syntax, such as Mathematica's, can be, and has
been since 1996. Lisp, despite its syntax irregularities, I think can
still have automatic formatting, at least to a large, practical
extent. Once lisp has automatic on-the-fly formatting (think of
emacs's fill-sexp, auto-fill-sexp), then lisp code will achieve
uniform and universal source code formatting display.
The advantage of having an automatic, uniform, universal source code
display for a language is that it gets rid of the hundreds of hours
of labor, tools, guides, and arguments about how one should format
code. (this is partly the situation of Python already) But more
importantly, such properties actually have an impact
on how programmers code in the language, i.e. what kind of idioms they
choose to use, what type of comments they put in code, and where.
This further influences the evolution of the language, i.e. what kind
of functions or features are added to the lang. For some detail on
this aspect, see: The Harm of Manual Code Formatting
Syntax As Markup Language
One of the powers of such a pure nested syntax is that you can build up
layers on top of it, so that the source code can function as markup of
conventional mathematical notations (i.e. MathML) and/or as a word-
processing-like file that can contain structures and images (e.g.
Microsoft Office Open XML), yet lose practically nothing.
This was done in Mathematica in 1996 with the release of Mathematica
version 3. (e.g. think of XML, its uniform nested syntax, its diverse
use as a markup lang; some people are now adding computational
semantics to it (i.e. a computer language with the syntax of xml, e.g.
O:XML). You can think of Mathematica as going the other way, starting
with a computer lang with a regular nested syntax, then adding new but
inert keywords to it with markup semantics. The compiler will just
treat these inert keywords like comment syntax when doing computation.
When the source code is read by an editor, the editor takes the markup
keywords for structural or stylistic representation, with titles,
chapter headings, tables, images, animations, hyperlinks, typeset math
expressions (e.g. think of MathML) etc. The non-marked-up keywords are
shown as one-dimensional textual source code, just like source code is
normally shown in most languages.)
Frequently Asked Questions
You say that lisp syntax irregularities “reduce such syntax's power”.
What do you mean by “syntax's power”?
Here are some concrete examples of what I mean by power of syntax.
In lisp, the usual comment is the char “;” running to the end of the line.
Note that this does not allow nested comments. So for example, if you
have multi-line code and you want to comment it all out, you have
to prepend each line with a semicolon. (Common Lisp does also have the
nestable block comment “#| ... |#”.) However, with a nested
comment syntax, one can just bracket the block of code to comment it
out. This is a simple, perhaps trivial, example of “power of a
syntax”.
In Python, the formatting is part of the lang's syntax. Some
programmers may not like it, but it is well accepted that due to
Python's syntax, python code is very easy to read, and it has done
away with much of the programmer preference and argument about code
formatting. This is an example of the power of a syntax.
Let me give another, different example. In perl's syntax,
a function's arguments often do not need to have parens
around them. For example, “print (3);” and “print 3;” are the same
thing. This is an example of the power of syntax, when considered as
flexibility or a saving of typing, for good or bad. Similarly, in
javascript for example, the ending semicolon is optional when it is at
the end of a line. (for sample perl and python code, see Xah's Perl
and Python Tutorial.)
In Mathematica, the language has a syntax system such that you can use
fully regular nested notation (e.g. “f[g[x]]”), postfix notation (e.g.
“x//g//f”), prefix notation (e.g. “f@g@x”), infix notation (e.g.
“1~Plus~2”), for ANY function in the language, and you can mix all of
the above. (prefix and postfix are by nature limited to functions with
just 1 arg, and infix notation is by nature limited to functions with
2 args) This is an example of the power of syntax.
(for detailed explanation of Mathematica syntax and comparison to
lisp's, see: The Concepts and Confusions of Prefix, Infix, Postfix and
Fully Functional Notations )
In general, a computer lang has a syntax. The syntax, as text written
from left to right, has various properties and characteristics: ease
of input (think of APL as a counterexample), succinctness (e.g. Perl,
APL), variability (Perl, Mathematica), readability (Python),
familiarity (C, Java, Javascript, ...), 2-dimensional notation (e.g.
traditional math notation) (Mathematica), ease of parsing (lisp),
regularity (APL, Mathematica, Lisp, XML), flexibility (Mathematica),
etc. Basically, you can look at syntax, programmers' need to type
it, and how the textual structure can be mapped to the semantic
space, from many perspectives. The good qualities, such as ease of
input, ease of reading, ease of parsing, ease of transformation,
simplicity, flexibility, etc., can be considered as elements of the
syntax's power.
As an example of a syntax of little power, think of a lang using just the
symbols “0” and “1” as its sole char set.
Much of lisp's sugar syntax is designed to reduce nested parens. Why
use a more consistent, but more annoying, sugar syntax?
Ultimately, you have to ask why lispers advocate nested syntax in the
first place.
If lispers love the nested syntax, then the argument that there
should not be irregularities has merit. If lispers think occasional
irregularities of non-parenthesized syntax are good, then there's the
question of how many, or in what form. You might as well introduce “++i”
for “(setq i (1+ i))”.
Further readings:
* The Concepts and Confusions of Prefix, Infix, Postfix and Fully
Functional Notations
* The Harm of Hard-wrapping Lines
* A Text Editor Feature: Syntax Tree Walk
* A Simple Lisp Code Formatter
The Cons Business
The above are some of the damages of irregularity in lisp's syntax.
The other fundamental problem in the language is its cons cells as its
list construction primitive.
Lisp at core is based on functional programming on lists. This is
comparatively a powerful paradigm. However, for historical reasons,
lisp's list is based on the hardware concept of the “cons” cell. From a
mathematical point of view, what this means is that lisp's list is
limited to a max of 2 elements. If you want a longer list, you must
nest it and interpret it in a special way (i.e. effectively creating
a mini-protocol of nesting lists, known as proper lists). The cons
fundamentally crippled the development of list processing.
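Concretely, the nesting protocol looks like this in Common Lisp:

;; A "proper list" is just the nesting convention spelled out:
(cons 1 (cons 2 (cons 3 nil)))   ; => (1 2 3), the same structure that
                                 ;    (list 1 2 3) builds
;; Drop the convention and the same primitive gives a dotted pair:
(cons 1 2)                       ; => (1 . 2)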
Lisp has historically been based on the cons for some 2 or 3 decades. The
cons (and cdr, car, caadar etc.) is fundamentally rooted in the lisp
langs, and is thus not something that can be easily mended. This is
unfortunate. However, this situation could be improved (by, for
example, exposing only higher-level list manipulation functions in all
newer literature, or even marking cons-related functions as obsolete).
One of the myths that is quickly injected into budding lispers is that
the cons is powerful. It is powerful in the sense that any assembly lang is
powerful. Lisp's cons is perhaps the greatest fuck up in the history
of computer languages.
The cons issue bugged me for 10 years, since I first tried to learn
scheme in ~1999. I never worked with lisp (other than academic
reading) until recent years, with emacs lisp since 2005. Several
times I tried to explain this problem to lispers, but usually with
paragraphs and paragraphs of descriptions, analogies, and examples,
frustrated by how hard it is to convey such a simple flaw (aided by
their blind, deep-seated lisp fanaticism). Yesterday it hit me.
The above mathematical aspect of lisp's cons is the first time I
formulated the cons problem concisely. (my previous verbose
explanation here: Lisp's List Problem )
Frequently Asked Questions
If you don't like cons, lisp has arrays and hashmaps, too.
Suppose there's a lang called gisp. In gisp, there's cons but also
fons. Fons are just like cons except each has 3 cells, with car, cbr,
cdr. Now, gisp is an old lang, and fons are deeply rooted in it.
Every hundred or so lines of code you'll see a use of fons and car, cbr,
cdr, or any one of the caar, cdar, cbbar, cdbbar, etc. You get annoyed
by this. You, as a critic, complain that fons is bad. But then some
gisp fanatics retort by saying: “If you don't like fons, gisp has
cons, too.”
You see, “having something too” does not solve the problem of
pollution. Sure, you can use just cons in gisp, but in every lib or
piece of others' code you encounter, there's an invasion of fons with its
cbbar, cdbbar, cbbbr. The problem created by fons cannot be solved by
“having cons too”.
I like the cons concept. Even in functional languages like Haskell it
is popular, e.g. when pattern matching in the form (x:xs), which is the
same as car/cdr in Lisp.
That a language has a list data type and First, Rest functions does not
mean it has lisp's cons problem.
One part of the cons problem in lisp is that it forces the programmer to
think of a list as a low-level nesting of 2-item constructions, with
explicit functions like “cons”, “car”, “cdr”, “caar”, “cadr”, “cdar”,
“cddr”, “caaar”, “caadr” etc.
In other langs, the programmer is not forced to think of nested 2-
items.
The other problem with lisp's cons is that it hinders any
development of tree data structures. For example, one might write a
function that extracts the leaves of a tree. But because lisp's list
is made of cons, there are different interpretations of what counts
as a leaf. Similarly, a binary tree in lisp can be implemented either
using cons natively, or using the so-called “proper list” that is
itself implemented on top of cons. Worse, any proper list can be
mixed with improper lists. So, you can have a list of cons, or cons
of lists, cons of cons, list of lists, or any mix. The overall effect
of the cons is that it prevents lisp from having a uniform view of
tree structure, with the result that development of functions that
work on trees is inconsistent, sparse, or otherwise hampered.
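(A minimal Common Lisp sketch of the leaf ambiguity; the function
name LEAVES is made up for illustration:)
    ;; The nested list (a (b c)) is the cons structure
    ;; (a . ((b . (c . nil)) . nil)).  Walking it as a raw
    ;; binary tree of conses reaches the atoms A, B, C and
    ;; two terminating NILs -- are those NILs leaves?
    (defun leaves (tree)
      (cond ((null tree) nil)          ; choice: NIL is not a leaf
            ((atom tree) (list tree))
            (t (append (leaves (car tree))
                       (leaves (cdr tree))))))
    (leaves '(a (b c)))  ; => (A B C)
An equally defensible walker that drops the NULL clause returns
(A B C NIL NIL) for the same tree.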
Further readings:
* Lisp's List Problem
* My First Encounter And Impression Of Lisp
* The Jargon “Lisp1” vs “Lisp2”
Why Lisp's Cons Problem Never Got Fixed?
Now, a little speculation.
We might wonder: why does lisp have the cons problem, and why was it
never addressed?
I guess at the time, the 1960s and 1970s, the very fact that you
could have a concept like a list in a lang and manipulate it as a
unit was extremely advanced. The list, being built up from hardware
cons cells, was just something that had to be done.
Having data as a single manipulatable object (the list) didn't really
become popular until the 1990s (notably via Perl). And today, it is
THE most important feature of high-level languages (perl, python,
php, javascript... the langs i'm expert in).
The lisp cons, as an underlying primitive that builds lists, even
though a bit cumbersome, works just fine when list structures are
simple. Even today, with all the perl, python, php, javascript etc.
langs that deal with lists, the vast majority of list usage is just
simple flat lists, sometimes 2 levels of nesting (list of lists, list
of hashes, hash of lists). 3 levels of nesting is seldom used, unless
it's 3d matrices, mostly in computer graphics or linear algebra
applications. More than 3 levels is almost never seen. Systematic
manipulation and exploitation of nested lists, such as mapping onto
leaves or onto a particular level, transposition by permutation of
levels, or list-structure pattern matching in today's functional
langs, is hardly ever seen (except in Mathematica, APL).
So, in general, when you just deal with simple lists, the
cumbersomeness of using cons, car, cdr, caadr etc. doesn't really
surface. Further, the cons is fundamentally rooted in the language.
It's not something that can be easily changed, short of creating a
new language. When there is a specific need in an application, there
is a haphazard collection of functions that deal with lists at a
higher level.
Today, with list manipulation being mainstream, especially with the
rise of many functional langs, lisp's cons historical baggage becomes
more apparent and painful.
Will Lisp ever be Popular?
Mathematica today sells for over two thousand dollars. Its sales
record, throughout its history, is probably greater than that of ALL
commercial lisps combined. Such an illuminating social fact is not
known in lisp communities. These fuckheads think that lisp is more
popular.
10 years ago, in the dot com days (~1998), when Java, Javascript,
Perl were screaming the rounds, it was my opinion that lisp would
inevitably become popular in the future, simply due to its inherently
superior design, simplicity, flexibility, power, whatever its
existing problems might be. Now i don't think that'll ever happen as
is. Because, due to the tremendous technological advances, in
particular in communication (i.e. the internet and its consequences,
e.g. Wikipedia, youtube, youporn, social network sites, blogs,
instant chat, etc.), computer languages are proliferating like never
before (e.g. Erlang, OCaml, Haskell, PHP, Ruby, C#, F#, Perl6, Arc,
NewLisp, Scala, Groovy, Lua, Q, Qz, Mercury, Scratch, Flash,
Processing, ... (see Proliferation of Computing Languages)), helped
by the abundance of tools, libraries, parsers, and existing
infrastructure. New langs will either have most of the advantages of
lisps, or come with modern libraries and idioms that better fit
today's needs. I see that, perhaps in the next decade, as
communication technologies further hurl us forward, the proliferation
of langs will give way to a trend of consolidation (e.g. fueled by
virtual machines such as Microsoft's “.NET”; and, btw, the breaking
of programmers' social taboo against cross communication between
computing-language communities, led by Xah Lee).
There is one slight hope for Lisp the lang as we understand it, and
that is Emacs Lisp. Because it is deeply rooted in the industry as
part of a powerful text editor, the embedded lisp has major practical
impact in getting people to know lisp and actually use it for
practical needs. The installed base of the emacs lisp system is
perhaps 100 or 1000 times that of all Common Lisp and Scheme
installations combined. (This satisfying prospect is, however, marred
by the tech geekers: the Common Lispers and Scheme Lispers
perennially inject snide sneers at emacs lisp that harm its progress,
while the fucking emacs priests want to coffin emacs in its 1980s UI
and terminology. (See The Modernization of Emacs, Text Editors
Popularity.))
--------------------------
Xah
∑ http://xahlee.org/
☄
(Part 1 of multi-part reply:)
> From: ·······@gmail.com" <······@gmail.com>
> lisp's cons and the irregularities of its nested paren syntax are
> 2 fundamental problems of lisp.
The primitive two-part cell is not at all a problem, IMO. It's an
ideal component for incrementally building both arbitrary linked
lists and arbitrary binary trees, and it can be used to emulate
many other more complicated types of cells, such as might be
present in:
- binary search trees, which require at least three pointers from
each node: key/left/right
- self-balancing binary search trees, which require at least four
pointers from each node: key/countOrDepthOrColor/left/right
- binary search-and-lookup trees, which require at least four
pointers from each node: key/value/left/right
- self-balancing binary search-and-lookup trees, which require at
least five pointers from each node: key/value/countOrDepthOrColor/left/right
The advantage of emulating new types of nodes by using clusters of
simple pairs is that any ordinary programmer/user can immediately
implement any such new node type without waiting for the language
designers to install new capability. This is one of the
super-winning values of Lisp for rapid prototyping and agile
development and graceful upgrading of applications on the fly.
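(To make that concrete: a key/left/right node for a binary search
tree can be emulated with two conses per node. A minimal sketch; the
constructor and accessor names are made up for illustration:)
    ;; Node layout: (key . (left . right)) -- two cons cells.
    (defun make-bst-node (key left right)
      (cons key (cons left right)))
    (defun bst-key   (node) (car node))
    (defun bst-left  (node) (cadr node))
    (defun bst-right (node) (cddr node))

    (let ((tree (make-bst-node 5
                               (make-bst-node 2 nil nil)
                               (make-bst-node 8 nil nil))))
      (bst-key (bst-left tree)))  ; => 2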
I agree that while dotted notation by itself, and proper-list
notation by itself, are fine, the optimized mixture of the two that
PRINT produces is a tad ugly, and READ needing to correctly parse any
arbitrary mixture of regular-list and dotted-pair notation is a
hurdle for people wishing to implement READ. Still, it's a very small
nuisance/hurdle IMO. It's not worth complaining about!!
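(For reference, the mixed notation in question; standard behavior:)
    (cons 1 (cons 2 3))    ; prints as (1 2 . 3)
    (cons 1 (cons 2 nil))  ; prints as (1 2)
    (cons (cons 1 2) nil)  ; prints as ((1 . 2))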
> See: • Fundamental Problems of Lisp
> http://xahlee.org/UnixResource_dir/writ/lisp_problems.html
> Plain text version follows. ...
> In this article, we discuss 2 problems that are rooted in lisp.
> One is the irregularity in its often cited regular syntax. The
> other is the language's use of “cons” for lists.
> Syntax Irregularities
> The Lisp family of languages, in particular Common Lisp, Scheme
> Lisp, Emacs Lisp, is well known for its syntax's regularity, namely,
> “everything” is of the form “(f x1 x2 ...)”.
That's a little bit hard to read with those non-ASCII bytes thrown in.
Please restrict yourself to US-ASCII when posting to
English-language newsgroups such as comp.lang.lisp, OK?
> there are several irregularities in its syntax. Here are some
> examples of the syntax irregularity.
> * The comment syntax of semicolon to end of line “;”.
Agreed. But that's not the syntax of Lisp, that's the syntax of
source files, which contain both Lisp syntax and comment syntax. By
its very nature, comment syntax is something that is supposed to be
excluded from Lisp syntax, ignored by READ. If you want to READ the
Lisp syntax but keep the comments attached to it, then you write a
different utility which produces not a list structure but a more
complicated data structure that contains both the list structure
and the comments with cross-links between them as desired. You need
to specify very carefully all the use cases you want to consider
before anyone can design such a utility to do what you would want.
We can't guess your needs from what you said here.
> * The dotted notation for cons cell “(1 . 2)”.
Minor nuisance, nothing to complain about, as I said above.
> * The single quote syntax used to hold evaluation, e.g.
> “'(1 2 3)”.
Instead of "hold", I suggest you use "suppress". I presume Chinese
is your native language and you are only semi-fluent in English, so
I'm offering to help you with minor word-usage mistakes like this.
I agree the read macro that converts 'foo into (quote foo) is
slightly ugly. In hindsight, perhaps an ordinary macro might be
better, something like (' foo). But that's more verbose, and
requires adding the close paren at the end, whereas ' as a prefix
reader-macro is a single-point edit to convert an
expression-to-evaluate into an expression-to-keep-as-is.
But in any case, use of ' is optional. You can write (quote foo)
any time you wish!! Nobody is forcing you to use '. So stop your
complaining!!
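(The reader-macro expansion is easy to see at a REPL; standard
Common Lisp:)
    (read-from-string "'foo")    ; first value: (QUOTE FOO)
    (equal ''foo '(quote foo))   ; => T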
> * The backquote and comma syntax used to hold but evaluate parts
> of an expression, e.g. “(setq x 1) (setq
> myVariableAndValuePair `(x ,x))”.
This is a powerful facility I haven't myself used enough to really
learn. An alternative is to use SUBSTS, a function I wrote many
years ago which is like SUBST except that instead of a single
before/after pair it takes an arbitrary number of before/after
pairs and substitutes them all in a single pass through the
CONS-tree. But again, you aren't required to use it. If you prefer
nested SUBST calls, or my SUBSTS, you are free to do so, so
stop your complaining.
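(A minimal sketch of such a SUBSTS; the original isn't shown here,
so this signature -- an alist of (old . new) pairs -- is an
assumption:)
    ;; Substitute every OLD -> NEW pair in one pass over TREE.
    ;; PAIRS is an alist ((old . new) ...); keys compared with EQL.
    (defun substs (pairs tree)
      (let ((hit (assoc tree pairs)))
        (cond (hit (cdr hit))
              ((atom tree) tree)
              (t (cons (substs pairs (car tree))
                       (substs pairs (cdr tree)))))))
    (substs '((s1 . one) (s2 . two)) '(a s1 (b s2)))
    ; => (A ONE (B TWO))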
> * The “,@” for inserting a list as elements into
> another list, e.g. “(setq myListX (list 1 2)) (setq
> myListY (list 3 4)) (setq myListXY `(,@myListX ,@myListY))”.
Likewise, you aren't forced to use that, so stop your complaining.
> * There are various others in Common Lisp or Scheme Lisp. For
> example, the chars “#” and “#|”.
There's so much non-ASCII I can't figure out what you're trying to say.
> In the following, we detail how these irregularities hamper the
> power of regular syntax
Having extra syntax as *optional* things you can use if you like
them or avoid if you don't like them, doesn't seem to me anything
that hampers use of the regular syntax. But I'll see what you have
to say below.
> Lisp's irregular syntax is practically confusing. For example,
> the difference between “(list 1 2 3)”, “'(1 2 3)”,
> “(quote (1 2 3))” is a frequently asked question.
There should be a beginner's tutorial into trivial matters like
this, such as the difference between a literal constant and a
function call. Anybody who doesn't know the difference can't
program in C or Java or even FORTRAN, so why complain about Lisp?
For example, is there anybody who repeatedly complains that he/she
can't understand the difference between:
int x = 5; /* Program syntax to be compiled*/
"int x = 5;" /* (pointer to) literal constant */
> The “'(1 2 3)” might be changed to
> “(' 1 2 3)”, and
That might work fine when the literal constant is a proper list,
but what if the literal constant is an atom or a dotted pair/list?
What syntax do you propose for those cases?
'(1 2 3) replaced by (' 1 2 3)
'(a . b) replaced by ??what??
'foo replaced by ??what??
> (setq myListXY `(,@ myListX ,@ myListY))
> could have been:
> (setq myListXY (eval-parts (splice myListX) (splice myListY)))
Nothing is stopping you from doing it that way in your code, so
stop your complaining!
> A regular nested syntax has a one-to-one correspondence to the
> language's abstract syntax tree, and to a large extent the syntax has
> some correspondence to the language's semantics.
Nicely said!
> programmers can pretty much tell what a piece of source code
> “(f x1 x2 x3 ...)” does just by reading the name that
> appears as the first element in the paren.
Agreed. That's why I much prefer Lisp (op arg arg ...) syntax over
the wide variation of infix notation used by most other languages.
It even supports special forms and parse-tree transformations
(macros). Except for some arithmetic calculation functions (+ - *
/) and some arithmetic comparison functions (< = > <= >= /=), just
about every function name is an alphanumeric word or phrase which
can be used as a keyword in a search engine or as an index entry in
the back of a book, and which also provides a natural-language way
of guessing part of its meaning and later remembering what the
technical manual said about its exact meaning. The "words" of Lisp
are so much easier to understand than the chicken scratches of APL
or Forth or even C.
And having the operator name right at the front (just after one
open parens) is so convenient to set the human-reader's mind in the
correct context for understanding the rest of the form. It's just
an awful lot of pain in C for example to need to scan matching
parens and operator precedence deep into infix notation just to
find the cryptic toplevel infix operator of the expression.
> Lisp relies on a regular nested syntax. Because of such
> regularity of the syntax, it allows transformation of the source
> code by a simple lexical scan.
I presume what you mean is a recursive local scan, i.e. you do a
depth-first scan, perhaps collecting some binding/declaration info
as you descend, returning a local result for each sub-expression
back up to the next higher continuation, and you never need to do
any sort of global cross-reference of symbol usage to disambiguate
anything, right? (I remember in 1974 when I got my only fulltime
permanent job, to maintain Four-Phase's COBOL compiler, their COBOL
compiler needed to syntactically analyze each statement separately,
then sort all the analyses per symbol referenced then collate all
references to each symbol separately in order to diagnose the
declarative type of such symbol as well as usages, then sort all
that collated info back into source-code sequence in order to
verify that the declared type of a symbol was consistent with its
usage in local context, and finally generate appropriate code
related to both the non-local declaration and the local usage. What
a monster! Lisp's recursive nested semantics are so much nicer by
comparison!)
> This has powerful ramification. (lisp's macros is one example)
Yes.
> In lisp communities, it is widely recognized that lisp's regular
> syntax has the property that “code is data; data is
> code”.
More correctly: Valid code is a subset of valid data.
> The benefits of a regular syntax have become widely recognized
> since the mid 2000s, via the XML language. The XML language, due to
> its strict syntactical regularity, has developed into many
> transformation technologies such as XSLT, XQuery, STX etc. See XML
> transformation languages↗. Automatic, Uniform, Universal,
> Source Code Display
Agreed. Note that XML defines a different type of nested structure
than S-expressions do, but either can be emulated in the other. In
s-expressions, list structure is anonymous. In XML, every level of
list structure has a type-name and optional properties. XML can be
emulated in s-expressions by using the convention that the first
element of each list is the XML name and the remaining elements are
the XML sub-objects. S-expressions can be emulated in XML by using
the convention that every level of list is of type LIST, and other
tags are used for literal constants such as strings and keywords.
Note these two conventions are not inverses of each other!!
XML : <html><head></head><body><p>hello there</p></body></html>
That XML emulated as s-expression:
(:HTML (:HEAD) (:BODY (:P "hello there")))
That s-expression in turn emulated as XML:
<list>
<keyword>HTML</keyword>
<list><keyword>HEAD</keyword></list>
<list>
<keyword>BODY</keyword>
<list>
<keyword>P</keyword>
<string>hello there</string>
</list>
</list>
</list>
Funny thing there: when XML is emulated in s-expr, it gets less
verbose, because the XML tag need be written just once at the start
instead of twice at start/end, but when s-expr is emulated in XML,
it gets much more verbose!! (And that s-expr to XML mapping doesn't
even support dotted lists!)
> Because lisp's syntax has lots of nested parentheses, source
> code formatting is much more labor intensive than in syntax-soup
> languages such as Perl, even when using a dedicated lisp editor
> such as emacs that contains a large number of editing commands for
> nested syntax.
I disagree. When coding in EMACS, you can break a line any time
you want, and then TAB to automatically indent the next line
the correct amount. I don't see that as very labor intensive.
And if you forget to tab and have improperly indented code, your
code can be pretty-printed in-place in EMACS, or pretty-printed
in any Lisp environment and then copied-and-pasted into any other
editor. Since that's all optional, I don't see it as labor intensive
when writing code, and not much labor intensive in cleaning up old
code to be better formatted.
> In lisp, comments are done by the char “;”
> running to end of line. Note that this does not allow nested
> comments. So for example, if you have multi-line code and you want
> to comment it all out, you have to prepend each line with a
> semicolon. However, if you have nested comment syntax, one could
> just bracket the block of code to comment it out.
Many years ago I fixed that by writing a macro called C which
simply ignored its parameters and returned NIL, which could be
inserted anywhere an extra value of NIL wouldn't cause problems,
such as between statements within a PROG or PROGN or anywhere an
implicit PROGN occurs. Why can't you think of that yourself?
Quit your complaining and make a trivial effort to fix the problem.
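(A minimal sketch of such a macro; the original isn't shown, so the
details are guessed. Note the "commented-out" forms must still be
readable, i.e. have balanced parens:)
    ;; Expand to NIL, ignoring all argument forms.
    (defmacro c (&rest forms)
      (declare (ignore forms))
      nil)

    (progn
      (do-first-thing)
      (c (broken-step) (other-broken-step))  ; effectively commented out
      (do-last-thing))
(do-first-thing etc. are placeholders.)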
> In general, a computer lang has a syntax.
A computer lang isn't necessary to define algorithms in a computer.
See flowcharting that has been around since the early 1960's, and
think of implementing it using a GUI. Or see Dijkstra's D-charts
which could likewise be drawn in a GUI. Or see my proposal for a
no-syntax programming system, using intentional datatypes of
previously generated data objects to direct menus of available
operations upon those objects, whereby the D-chart gets drawn
automatically any time you want to see it as a visual aid.
> The syntax, as text written from left to right, has various
> properties and characteristics.
Syntax doesn't need to be linear. It could be 2-dimensional, like
tables. Or it could be tree-structured, using something like the
filesystem browser tree-view to see an overview and expand any
particular node for closer inspection then contract it again when
done seeing those local details. There's still syntax either way,
i.e. strings of text at terminal nodes (names of variables and
functions, and atomic literal constants), and some system for
linking these atomic pieces into a conceptual structure, with some
way to visualize the whole thing or parts thereof.
> 2-dimensional notation (e.g. traditional math notation) (Mathematica)
I don't have access to Mathematica, so I don't know specifically
what you're talking about there. But I did (long ago) have access
to MacSyma, where only output was 2-d. Input was essentially
FORTRAN/ALGOL/RLISP notation, just a linear string of infix
operator notation. Is Mathematica like MacSyma, or does it provide
a 2-d editor for expressions for input also?
> As an example of a syntax of little power, think of a lang using
> just the symbols “0” and “1” as its sole char set.
Again there's so much non-ASCII fluff and so little text that it's
difficult for me to see what you're trying to say there. Are you
saying that the digits zero and one are the only input possible? In
a sense, at the bit-raster level, that's what we have anyway on
modern digital-keyboard input, as opposed to handwritten input. The
keyboard simply generates macros for standardized huge bitmap images
of characters we see on-screen, which internally are really
US-ASCII character codes which are *interpreted* as meaningful
English characters. And of course the US-ASCII character codes
themselves are nothing but collections of zeroes and ones
internally. The only thing not binary is the keyboard, which has
many more than two keys. Are you trying to say that there might be
a programming language where only two of the keys on the keyboard
are used to write programs, all the rest of the keys unused?
Yeah, I suppose it would be rather tiring to type like that.
Here's a related idea: 0 and 1 could be used to traverse a binary
tree to find desired things to enter into a program. For example if
it was time to pick a local variable, and there are eight
variables, it would take only three 0/1 decisions to select the one
variable that was wanted. If there are only sixteen functions that
could be applied meaningfully to that variable, four 0/1 decisions
would be enough to select the desired function. With a tree-view of
upcoming decisions, it might be almost user-friendly to navigate
down that tree. But that assumes each of the 2*n options at each
point are equally likely, so an equal-length code is optimal. But
if some options are much more likely, they should be given shorter
binary codes, while less likely options should be given longer
binary codes. So instead of showing a tree on-screen and requiring
traversing it one node at a time, show a zoomable map where the
most likely options have the largest area and the least likely
options are crammed into a tiny space. Then use the mouse to select
the desired part of the image to select and zoom. For a succession
of likely options, the map could show nesting of more than one
option, so that a single mouse click could descend more than one
level into the selection process and zoom all the way down to what
decision would be next. For a selection of a rare option, a single
mouse click would merely zoom a group of such rare options large
enough that the individual options could be seen, and a second click
would be needed to finally select a single rare option. To minimize
mouse movement, an anti-fisheye view centered on the previous mouse
click could be shown, so that there's a high probability that the
next option will be located near the previous mouse click. Perhaps
the screen as a whole could contain two views, one showing a
structured-programming view of the current state of building an
algorithm, and one showing the anti-fisheye view of selections for
the next mouse click.
> Many of lisp's sugar syntaxes are designed to reduce nested parens.
Good point. I'm thinking that such conveniences should be
automatically converted to equivalent s-expression notation at the
first possible opportunity. Thus if the programmer/user types 'foo
it would be immediately converted to (quote foo). But the macro
expansion from backquote expressions with embedded comma atsign
etc. might be so horribly ugly that it would be inscrutable
compared to the sugar syntax, so maybe that's not really
desirable. Perhaps you (xah) can think of reasonable pure-sexpr
representations for such conveniences, as a compromise between the
current backquote syntax and the current macro-expansion of that
backquote syntax? In some cases, a simple call to SUBST would
suffice. In more complicated cases involving splicing
newly-computed sequences into the middle of a boilerplate
(constant) sequence, you would need to invent a more serious
facility. Two possibilities are something like FORMAT that takes a
boilerplate-and-controlCode string together with parameters, and
something else I saw that takes each object-to-print expression in
sequence, except that in either case we'd be splicing list
structure rather than splicing character sequences into a string.
And in the case of the FORMAT-like version, replacement would be
keyword-based rather than positional-based. But the basic idea is
the same, either boilerplate in one place and substitutions
elsewhere, or the pieces of boilerplate and substitutions
intermixed in proper global sequence. My old SUBSTS function is
basically the FORMAT mode, boilerplate separate from computed
key/replacement pairs.
Here are examples of what I'm talking about:
(setq name "Robert") (setq loc "Sunnyvale")
(format NIL "Hello, my name is ~A and I live in ~A." name loc)
(mapprint NIL "Hello, my name is " name " and I live in " loc ".")
(perleval NIL "Hello, my name is $name and I live in $loc.")
(setq seq1 '(foo baz)) (setq seq2 (reverse '(q w e r t y)))
`(a ,seq1 b ,seq2)      ; => (A (FOO BAZ) B (Y T R E W Q))
(list 'a seq1 'b seq2)  ; equivalent
(subst seq1 's1         ; equivalent; note SUBST is (new old tree)
       (subst seq2 's2
              '(a s1 b s2)))
`(a ,@seq1 b ,@seq2 c)  ; => (A FOO BAZ B Y T R E W Q C)
(append '(a) seq1 '(b) seq2 '(c))  ; equivalent
Note that in simple cases SUBST (or SUBSTS) handles comma, and
APPEND handles commaAtsign, inside back-apostrophe. But perhaps you
can think of a better way to handle all back-apostrophe
constructions together in a more uniform way?
> Why using a more consistent, but more annoying sugar syntax?
Kannitverstand.
> Ultimately, you have to ask why lisper advocate nested syntax in the
> first place.
For the reasons you cited (direct mapping between syntax and
parse/syntax tree, so that you don't have to constantly think in
two ways at the same time for the same object, and to make it easy
to understand how to write macros). The only sorta crufty thing is
how local declarations come *inside* the form where the variable
was introduced, which requires one level of look-inside in order to
determine the meaning of variable bindings (whether lexical or
special, and whether general type or specific type). Instead of this:
    (let ((x 5))
      (declare (special x))
      (print x))
I would rather have seen:
    (locally-declare (special x)
      (let ((x 5))
        (print x)))
where it's known that x will be special before it's bound, or else:
    (let ((x 5 special))
      (print x))
But needing to descend one level into the enclosed list of forms in
order to find declarations that apply to the symbol that should
already have been bound but can't be bound until this info is known,
and needing to then skip those declarations when actually evaluating
the enclosed forms, is just a minor nuisance.
> If lispers love the nested syntax, then, the argument that there
> should not be irregularities, has merit.
Yes. Feel free to propose the best pure-nested-sexpr syntax you can
think of to replace all the specialized notations for building list
structure. (On the other hand, there's lesser argument for avoiding
special syntax for literal "atomic" values such as strings or
symbols in non-default packages or vectors or complex numbers etc.)
> If lispers think occasional irregularities of non parenthesized
> syntax is good, then there's the question of how many, or what
> form.
Agreed. I would wish for the bare minimum of special syntax. I
avoided #'functionName for a long time, using (function
functionName) instead, and the same for anonymous functions defined
by lambda inside function, but finally I got tired of all that
typing, and of occasional misspellings of function with the n key on
my keyboard flaky, so finally I switched to using the convenience.
But I could live without it again just fine.
> The other fundamental problem in the language is its cons cells as its
> list construction primitive.
That's not a problem at all. If you want to use linked lists, then
you need a way to build and use cells that have two pointers,
one for an element (or sub-list) and one for the continuation of the
list at the current level (or NIL or another special marker at the
end). If you don't want to ever build linked lists, then you never
need to use such cells. And those same cells are handy for binary
trees.
Would you **really** want to write software in a language where you
had no way to build linked lists nor binary trees???? Really????????
** Splitting long reply here **
Robert Maas wrote:
> (Part 1 of multi-part reply:)
>
> > From: ·······@gmail.com" <······@gmail.com>
> > lisp's cons and the irregularities of its nested paren syntax are
> > 2 fundamental problems of lisp.
>
> The primitive two-part cell ...
See, bottom:
http://xahlee.org/UnixResource_dir/writ/lisp_problems.html
plain text excerpt follows:
Frequently Asked Questions
Q: If you don't like cons, lisp has arrays and hashmaps, too.
A: Suppose there's a lang called gisp. In gisp, there's cons but also
fons. Fons are just like cons, except that a fons has 3 cells, with
car, cbr, cdr. Now, gisp is an old lang, and the fons is deeply
rooted in it. Every hundred lines of code or so you'll see a use of
fons and car, cbr, cdr, or any one of the caar, cdar, cbbar, cdbbar,
etc. You get annoyed by this. You, as a critic, complain that fons is
bad. But then some gisp fanatics retort by saying: “If you don't like
fons, gisp has cons, too.”
You see, “having something too” does not solve the problem of
pollution. Sure, you can use just cons in gisp, but in every lib or
other people's code you encounter, there's an invasion of fons with
its cbbar, cdbbar, cbbbr. The problem created by fons cannot be
solved by “having cons too”.
Q: I like the cons concept. Even in functional languages like
Haskell it is popular, e.g. when matching in the form (x:xs), which
is the same as car/cdr in Lisp.
A: That a language has a list data type and First and Rest functions
does not mean it has lisp's cons problem.
One part of the cons problem in lisp is that it forces the programmer
to think of a list as a low-level nesting of 2-item constructions,
with explicit functions like “cons”, “car”, “cdr”, “caar”, “cadr”,
“cdar”, “cddr”, “caaar”, “caadr”, etc.
In other langs, the programmer is not forced to think of nested
2-items.
The other problem with lisp's cons is that it hinders any
development of tree data structures. For example, one might write a
function that extracts the leaves of a tree. But because lisp's list
is made of cons, there are different interpretations of what counts
as a leaf. Similarly, a binary tree in lisp can be implemented either
using cons natively, or using the so-called “proper list” that is
itself implemented on top of cons. Worse, any proper list can be
mixed with improper lists. So, you can have a list of cons, or cons
of lists, cons of cons, list of lists, or any mix. The overall effect
of the cons is that it prevents lisp from having a uniform view of
tree structure, with the result that development of functions that
work on trees is inconsistent, sparse, or otherwise hampered.
Xah
∑ http://xahlee.org/
☄
Xah wrote:
«
• Fundamental Problems of Lisp
http://xahlee.org/UnixResource_dir/writ/lisp_problems.html
»
Robert Maas wrote:
> > See: • Fundamental Problems of Lisp
> > ...
> > other is the language's use of “cons” for lists.
> > ...
> > “everything” is of the form “(f x1 x2 ...)”.
>
> That's a little bit hard to read with those non-ASCII bytes thrown in.
> Please restrict yourself to US-ASCII when posting to
> English-language newsgroups such as comp.lang.lisp, OK?
It is disclosed and discussed here that you have problems finding a
job, are some 50+ years old, still use an early 1990s computer as
your main machine (Mac Performa?), and insist on posting long rants
about strange issues.
I suggest you upgrade your hardware, instead of having everyone
downgrade to your state of the art.
In particular, i suggest you use newsgroup software that can
properly decode unicode.
In any case, you could use a web browser to browse newsgroups via
google groups:
http://groups.google.com/group/comp.lang.lisp/browse_frm/thread/b584da93b6ed65d6
Unicode support in web browsers has been in Windows and Mac for about
a decade now. It is largely supported in open source browsers too in
recent years. You might try it.
Xah
∑ http://xahlee.org/
☄
Robert Maas wrote:
> > * The single quote syntax used to hold evaluation, e.g.
> > “'(1 2 3)”.
>
> Instead of "hold", I suggest you use "suppress". I presume Chinese
> is your native language and you are only semi-fluent in English, so
> I'm offering to help you with minor word-usage mistakes like this.
Dear Robert idiot, i suggest you peruse:
• The Tragedy Of Titus Andronicus (annotated by Xah Lee)
http://xahlee.org/p/titus/titus.html
• The Tale Of The Bull And The Ass (annotated by Xah Lee)
http://xahlee.org/p/arabian_nights/an2.html
• English Vocabulary, by Xah Lee
http://xahlee.org/PageTwo_dir/Vocabulary_dir/vocabulary.html
The above pages should convince you that you are a moron compared
to me when it comes to English.
Btw, the term “Hold” is used in Mathematica. In this aspect, you can
do some self education by reading this essay of mine:
• The Importance of Terminology's Quality In A Computer Language
http://xahlee.org/UnixResource_dir/writ/naming_functions.html
plain text version follows:
------------------------------------------
The Importance of Terminology's Quality In A Computer Language
Xah Lee, 2008-05-08
I'd like to introduce a blog post by Stephen Wolfram, on the design
process of Mathematica. In particular, he touches on the importance of
naming of functions.
Ten Thousand Hours of Design Reviews (2008 Jan 10) by Stephen Wolfram
http://blog.wolfram.com/2008/01/10/ten-thousand-hours-of-design-reviews/
The issue is a fitting one here today, given our recent discussion
of closure terminology, as well as the jargon “lisp1 vs lisp2”
(multi-meaning space vs single-meaning space), “tail recursion”,
“currying”, “lambda”, that perennially crop up here and elsewhere in
computer language forums amid wild misunderstanding and brouhaha.
The functions in Mathematica are usually very well named, in
contrast to those of most other computing languages. In particular,
the naming in Mathematica, as Stephen Wolfram indicated in his blog
above, takes the perspective of naming by capturing the essence, or
mathematical essence, of the keyword in question (as opposed to
naming it according to convention, which often comes from historical
happenings).
When a thing is well named from the perspective of what it actually
“mathematically” is, as opposed to historical developments, it avoids
a vast amount of potential confusion.
Let me give a few examples.
• “lambda”, widely used as a keyword in functional languages, is
named just “Function” in Mathematica. That it happened to be called
“lambda” in the field of symbolic logic is due to the happenstance
use of the greek letter lambda “λ”. The word does not convey what it
means. The name “Function”, by contrast, stands for the mathematical
concept of “function” as is.
• Mathematica's Module, Block, With correspond to lisp's various
“let*” keywords. Lisp's keyword “let” is based on the English word
“let”. That word is one of the English words with a multitude of
meanings. If you look up its definition in a dictionary, you'll see
that it means many disparate things. One of them, as in “let's go”,
has the meaning “permit; to cause to; allow”. This meaning is rather
vague from a mathematical point of view. Mathematica's choice of
Module, Block is based on the idea that each builds a self-contained
segment of code. (However, the choice of Block as a keyword here
isn't perfect, since the word also has meanings like “obstruct;
jam”.)
• Functions that take elements out of a list are variously named
First, Rest, Last, Extract, Part, Take, Select, Cases,
DeleteCases... as opposed to “car”, “cdr”, “filter”, “pop”, “push”,
“shift”, “unshift” in lisps and perl and other langs.
The above are some examples. The thing to note is that Mathematica's
choices are often such that the words stand for their meanings in
some logical and self-contained way, as much as possible, without
depending on a particular computer science field's context or
history. One easy way to confirm this is to take a keyword and ask a
wide audience, people who don't know the language or are even
unfamiliar with computer programming, to guess what it means. The
audience can be made up of mathematicians, scientists, engineers,
programmers, laymen. This general audience is more likely to guess
correctly what Mathematica's keyword means in the language than the
name used in other computer languages, whose naming choices go by
convention or context.
For some examples of naming in popular computer languages: Perl's
naming heavily relies on unix culture (grep, pipe, croak, pop, shift,
stat, die, hash, glob, sigil, regex, file handle ...). As with the
unix culture, many names and terminologies intentionally exude
juvenile humor. If you are a unix hacker, you may have a blast. But a
mathematician, scientist, or electrical engineer, most of whom
struggle even with writing a report in HTML/CSS or LaTeX, will find
that the unix-colored jargon bars him from entry into the basic
programming he might benefit from. Similarly, for programmers coming
from other backgrounds such as Microsoft Windows, the puns and
unix-styled naming reduce their chances of learning the language.
Functional langs' terminologies are typically heavily based on the
field of computer science (e.g. lambda, currying, closure, monad,
predicate, tail recursion, continuations). They are that way because
the people who work with these languages are typically computer
scientists. They model their languages and name their constructs
based on a particular model of mathematical logic. Thus we have
keyword names, function names, library names, and language concepts
like lambda, currying, closure, monad. Effectively, these types of
naming put an extra barrier in the way of normal people using the
language. The languages' features and constructions are not
necessarily difficult, but practically speaking, these upfront terms
are part of the reason these languages remain confined to a small
academic or hobbyist community.
Lisp's cons, car, cdr are based on computer hardware (this
particular naming has caused major damage to the lisp language,
lasting to this day).
(Other examples: pop, push, shift are based on a particular data
structure concept in computer science called a “stack” (as in a
stack of books). Grep is from Global Regular Expression Print, while
Regular Expression comes from the theoretical computer science of
Automata (whose etymology is from the meaning “self-operating
machine”). Through several levels of context and term borrowing, the
extremely simple and clear term “string pattern” became known as
“regular expression”. The term “regular expression”, as used for the
string-matching syntax in practical languages like perl, has no
connection whatsoever to the original concept of “regular
expressions” in the theory of Automata. The name regex has done
major hidden damage to the computing industry, in the sense that if
it had just been called “string patterns”, a lot of explanations,
literature, and confusion would have been avoided.)
(Note: keywords or functions in Mathematica are not necessarily
always best named. Nor is there always one absolute best choice, as
there are many other considerations, such as the force of wide
existing convention, the context where the function is used, brevity,
the limitations of the English language, different scientific
contexts (e.g. math, physics, engineering), or even human
preferences.)
* * *
I have written about many of the issues regarding the importance
and effects of terminology's quality since about 2000. Here are the
relevant essays:
* Jargons of Info Tech Industry
* The Jargon “Lisp1” vs “Lisp2”
* The Term Currying In Computer Science
* What Is Closure In A Programing Language
* What are OOP's Jargons and Complexities
* Sun Microsystem's abuse of term “API” and “Interface”
* Math Terminology and Naming of Things
* Politics and the English Language, 1946, by George Orwell.
Xah
∑ http://xahlee.org/
☄
> From: ·······@gmail.com" <······@gmail.com>
> The functions in Mathematica are usually very well named, in
> contrast to those of most other computing languages. In particular, the
> naming in Mathematica, as Stephen Wolfram indicated in his blog
> above, takes the perspective of naming by capturing the essence, or
> mathematical essence, of the keyword in question.
In a system whose primary purpose is automating arithmetic and
related mathematics, that's a good idea. In data-processing
environments, something more tuned to data-processing concepts
(except where mathematical functions/objects are explicitly
involved) makes more sense. But I agree the basic idea is sound. In
my own code, I name functions according to what they do, preferring
to be as verbose as necessary rather than striving for
ultra-terseness. After all, storage is cheap nowadays, so using
twice as much storage really isn't a problem, and programmers don't
manually key in names of rarely-used functions, we copy-and-paste
from a template or documentation file, so length really isn't a
problem. (But I've recently proposed a no-syntax programming
environment where you browse a library of functions available and
select from a menu what you want and never need to even *see* the
actual name of the function you're calling from your new "code".)
> (as opposed to, naming it according to convention, which often
> came from historical happenings)
Are you familiar with "installed user base" and "backward compatibility"
and "legacy code"? You see in the migration from MacLisp and other
early lisps to Common Lisp, there was a strong desire to allow old
code to work under the new system with only minimal changes. So
some of the old crufty names were kept instead of changing them to
be more "modern" or more logically/mathematically obvious in
meaning. But modern aliases for some of them were provided, such as
"first" for "car" etc. And if not enough modern names have been
provided, nothing is stopping you from writing your own personal
modern-alias library. If it works for you, and if others like it
well enough to use it too, it may become popular enough to become a
default standard, making you sorta famous!
> • “lambda”, widely used as a keyword in functional
> languages, is named just “Function” in
> Mathematica.
So why don't you work out a good name for 'lambda', or a completely
new layout for creating an anonymous function object by specifying
parameters and body separately, as well as a good name for
'function' which already exists in Common Lisp, or a whole new
semantics that avoids the need for two separate mechanisms here,
and propose your new names and/or syntax and/or mechanisms in a
followup? Be sure you cover *all* cases that currently are
implemented in Common Lisp.
By the way, the word "function" is misleading because almost none
of the Lisp functions (built-in or user-defined) are truly
functions in the mathematical sense.
> Perl's naming heavily relies on unix culture (grep, pipe, croak,
> pop, shift, stat, die, hash, glob, sigil, regex, file handle ...).
That's a more offensive naming situation than Common Lisp's, because
there was *no* installed user base, no legacy code, with which Perl
needed to be compatible so that legacy code could continue to
work. It was only the narrowmindedness of the inventor of Perl,
intoxicated with Unix's jargon, that caused a brand new programming
language to be full of Unix jargon instead of English.
> Keywords or functions in Mathematica are not necessarily always
> best named.
Thanks for the admission.
OK, here's an exercise for you. Find appropriate names for all the
following functions (inline documentation for each function given here):
Given an input stream reading from a BSD-format inbox,
find the start of each message in that file,
and return a list of the corresponding file positions.
Given lin1 a string which is one line from an e-mail file,
execute this code (inside a PROG) to classify that line:
(cond ((null lin1) (return :EOF))
      ((string= lin1 "") (return :EMPTY))
      ((str-is-bsdmail0 lin1) (return :MAIL0)))
(setq ixw0 (position-if (function ch-is-white) lin1))
(cond ((null ixw0)
       (setq ixc0 (position #\: lin1))
       (cond ((null ixc0) (return :NOWHITE))
             ((= (length lin1) (+ 1 ixc0)) (return :HDRMAIN))
             (t (return :NOWHITE))))
      ((= 0 ixw0) (return :WHITE0))
      ((char= #\: (elt lin1 (+ -1 ixw0))) (return :HDRMAIN)))
Given a list of search strings, and one test string,
find which of the search strings is a substring of that test string,
and return the index of that search string.
;Given name of file containing output from ARIN whois command,
; parse any of three formats it might be:
;ARIN-1 (Single IP address-block record (which might just be a reference
; to APNIC or RIPE, or might have all the info itself))
;ARIN-2 (replacement for above, starting 2002.Aug)
;ARIN-N List of handles to single IP address-block records
;Decide which of those formats it is, and parse accordingly.
;Compare current date-last-written for system mailbox against previous
; such date kept in g*date-system-mailbox-written. If indeed it changed,
; i.e. file was written again since it was last checked, update the
; global keeping track of it, and return two values: :CHANGED olddate
;Given list of completely-specified filenames, clear out the first of the
; list by renaming it to the next which must be cleared out first
; by renaming to the second etc. down the line until a file is
; already non-existent so nothing past that point need be renamed.
;If all named files exist, the last is deleted and all before it are renamed,
; else just the initial contiguous segment is renamed and nothing deleted.
Let's see if you can come up with good names for those functions,
maybe even better names than what I called them in my code.
On Fri, 07 Nov 2008 12:12:06 -0800, Robert Maas, http://tinyurl.com/uh3t
wrote:
>> From: ·······@gmail.com" <······@gmail.com>
> enough modern names have been provided, nothing is stopping you from
> writing your own personal modern-alias library.
You mean apart from the fact that he is not writing any code?
Tamas
(Part 2 of multi-part reply:)
> From: ·······@gmail.com" <······@gmail.com>
> Lisp at core is based on functional programing on lists.
Correct. And as I pointed out elsewhere
<http://groups.google.com/group/comp.lang.lisp/msg/364f4db6214b6ade>
= <······················@yahoo.com>
<http://groups.google.com/group/comp.lang.lisp/msg/050cd26d10ca3b9d>
= <······················@yahoo.com>
vectors (one-dimensional arrays) *could* have been used
instead of linked lists to represent formal lists as defined by the
functional programming model, although there would be some pain
involved in reading and parsing an s-expression from an input
source, because you don't know how large a vector to allocate at the
outset. In those articles I indicate some of the possible ways to
deal with this problem by allocating various kinds of temporary
storage *during* the READ operation, then copying the result into a
vector of exactly the correct length and discarding (or immediately
recycling) all that temporary storage. Would you be willing to
build a primitive version of Lisp both the normal way (linked
lists) and my proposed vector way, using identical implementation
methods so that runtime performance can be directly compared, then
running timing tests to see which way really is better? Another
person has proposed using Forth as an implementation language for
Lisp, making the build-from-scratch task a lot easier than any
other plausible way. Would you like to join the two of us in such a
project? He's an expert (20+ years ago) at Forth, and I have Pocket
Forth on my Macintosh and have played with it at various times over
the past 17 years, and you can probably get some version of Forth
if you don't have it already, so I think this project might
possibly work if we have three people working on it.
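(A minimal sketch of the collect-then-copy approach, in standard
Common Lisp; the name READ-VEXPR is made up, and atoms are simply
delegated to the standard reader:)
    ;; Read one expression; parenthesized forms become vectors.
    ;; Elements are collected into a temporary list, then copied
    ;; into a vector of exactly the right length.
    (defun read-vexpr (stream)
      (if (char= (peek-char t stream) #\()
          (progn
            (read-char stream)              ; consume the open paren
            (let ((elems '()))
              (loop until (char= (peek-char t stream) #\))
                    do (push (read-vexpr stream) elems))
              (read-char stream)            ; consume the close paren
              (coerce (nreverse elems) 'vector)))
          (read-preserving-whitespace stream)))

    (with-input-from-string (s "(a (b c) d)")
      (read-vexpr s))  ; => #(A #(B C) D)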
> However, for historical reasons, lisp's list is based on the
> hardware concept of the “cons” cell. From a
> mathematical point of view, what this means is that lisp's lists
> are limited to a max of 2 elements.
That's an absurd way of looking at it!! The **intentional** type of
a linked list is a full list, even if the unit cells are two-pointer
objects daisy-chained together. When considering the mathematical
point of view, you *must* consider the *intentional* use of the
notation or data, not the physical reality of how you build it, or
else you are just being perverse. Math on paper consists of pencil
or ink marks which are smudges on paper. That's not what is
considered the mathematical meaning of a mathematical result
reported in a published journal. Math in a human mind consists of
neurons firing in a pattern and connected in a way, and again
that's not what is considered the mathematical meaning of one's
mathematical thoughts. Even the abstract print representation of
characters on a page of paper or on a computer screen, the 2-d
layout of such formal characters, isn't the mathematical meaning.
The mathematical meaning is an integral of a function from point a
to point b, not a squiggle with a under it and b over it and the
name of the function to the right of it and parens around a dummy
argument 'x' and a 'dx' to the right. The squiggle stuff, the
neurons, the pencil or ink on paper, the daisy chain of CONS cells,
are just the implementation, not the mathematical meaning.
> If you want a longer list, you must nest it and interpret it in a
> special way. (i.e. effectively creating a mini-protocol of nesting
> lists, known as proper lists.)
That's exactly the **intentional** type of that way of building a
linked list. Note that the CONS cell is itself implemented as
tagged pointers in RAM. And the tagged pointers are implemented as
machine words built out of addressable bytes in RAM. And the
addressable bytes are implemented in some sort of semiconductor
technology. And the semiconductor functioning is built out of
chemistry and electronics. And the electronics is built out of
electromagnetics and quantum mechanics. And the chemistry is built
mathematical view of a **list**, except insofar as the
implementation affects performance. For some applications, linked
lists are more efficient. For other applications, vectors would be
more efficient. For most applications, modern Intel CPUs are more
efficient (faster) than a 6502 or 8080 or even a 68030. The
difference in speed between modern Intel CPUs and a 6502 is so
great that if you can afford to upgrade you really should. But the
efficiency difference between linked-list and vector representation
of lists is probably so small as to be insignificant and not worth
making all the fuss you're making.
> The cons fundamentally crippled the development of list processing.
Bullshit!!
> Lisp has been historically based on the cons for some 2 or 3 decades.
I call your bet and raise it to 4 decades.
Really you should do your homework before posting your "facts".
Anybody want to raise to 5 decades? According to
<http://www8.informatik.uni-erlangen.de/html/lisp/histlit1.html>
in 1958.Nov, middle of month, when D. Park joined the M.I.T. AI
project, there was already a running LISP interpreter.
So it's possible the correct span of time to *today* is a full 5
decades, and if this discussion goes on a few more weeks it'll be
unquestionably at least 5 full decades.
> The cons (and cdr, car, caadar, etc.) is fundamentally rooted in
> the lisp langs, and is thus not something that can be easily mended.
Nothing needs mending. If you don't want to use linked lists,
nobody is forcing you to use them. If you don't like those names
for the primitives of building and traversing linked lists, you can
make up aliases for those names, nobody is stopping you. Shut up!
> or .. mark cons related functions as obsolete ...
Or mark CONS/CAR/CDR as low-level primitives for when you are
coding a utility that is ambiguous with respect to intentional
datatype, intended to work with linked lists and trees and any
other use of CONS cells, but advertise other names more appropriate
to specific intentional types.
For example, see various proposals such as LIST*/FIRST/REST for
linked lists and MAKEPAIR/LEFT/RIGHT for binary trees.
> One of the myths that is quickly injected into budding lispers is
> that the cons is powerful.
It ain't no myth. It's true. CONS cells and the associated
low-level functions CONS/CAR/CDR are extremely valuable for
building lots of different kinds of data structures used to emulate
virtually any abstract data structure you might wish.
I challenge you to name/describe even one kind of abstract data
structure that can't be emulated by CONS structures.
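(One more illustration of cons cells emulating a higher-level
structure: the classic two-cons FIFO queue, with O(1) enqueue and
dequeue. The function names here are made up for the sketch:)
    ;; Queue header: car = first cons of items, cdr = last cons.
    (defun make-queue () (cons nil nil))
    (defun enqueue (q item)
      (let ((cell (cons item nil)))
        (if (null (car q))
            (setf (car q) cell (cdr q) cell)
            (setf (cddr q) cell (cdr q) cell))
        q))
    (defun dequeue (q)
      (let ((item (caar q)))
        (when (null (setf (car q) (cdar q)))
          (setf (cdr q) nil))       ; queue became empty
        item))

    (let ((q (make-queue)))
      (enqueue q 1) (enqueue q 2)
      (dequeue q))  ; => 1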
> It is powerful in the sense that any assembly lang is powerful.
True only in a vague sense. In fact CONS/CAR/CDR (in a working Lisp
environment with properly tagged data objects and pointers) are
much higher level than assembly language, thus much more
immediately available/useful for building higher-level
representations by "just doing it", no hassle.
> Lisp's cons is perhaps the greatest fuck up in the history of
> computer languages.
At this point you sound like a fuckwit. (Pardon my esperanto.)
> ... Several times i tried to explain this problem to lispers, but
> usually with paragraphs and paragraphs of descriptions, analogies,
> examples, frustrated by how hard it is to convey such a simple flaw
> (aided by their blind, deep-seated lisp fanaticism).
Now you're starting to sound like a true crackpot, like JSH or
Archimedes Plutonium or Alan Guth, unable to see that you are
almost totally wrong, and *that* is why everyone is ignoring your
claims that all of mathematics or Lisp or science is wrong and your
true theory is right and you ought to get a Nobel prize and become
more famous than Einstein or Cantor or McCarthy.
> If you don't like cons, lisp has arrays and hashmaps, too.
Yup. For some applications, those are more expressive and/or more
efficient. CONS cells aren't the *only* structure-building
primitives available in modern Lisps. Just like the hammer isn't
the only tool in your toolbox. There are also screwdrivers and
wrenches and chisels and even a tube of epoxy glue. Don't complain
that the hammer fails at turning screws or tightening lugnuts or
sticking photos in an album. Try some of those other tools when a
hammer isn't appropriate. But don't yell at everyone that we must
remove hammers (CONS cells) from our toolboxes (from Lisp)!!!
> Suppose there's a lang called gisp. In gisp, there's cons but
> also fons. Fons are just like cons, except that a fons has 3 cells,
> with car, cbr, cdr.
You are an idiot if you can't see that Common Lisp already has that
available any time you want it, like this for example:
(defun xah-gisp (a b d)
  ;; Build a three-cell "fons" as a one-dimensional array.
  (let ((arr (make-array 3)))
    (setf (aref arr 0) a)
    (setf (aref arr 1) b)
    (setf (aref arr 2) d)
    arr))
(defun xah-car (arr) (aref arr 0))
(defun xah-cbr (arr) (aref arr 1))
(defun xah-cdr (arr) (aref arr 2))
;Caveat: untested code
So why don't you copy&paste my code, check if it's correct (and let
me know if I made any mistakes), and then use that instead of the
usual Lisp CONS/CAR/CDR, and report back how your alternate way of
working compares with the usual way?
> One part of the cons problem in lisp is that it forces the
> programmer to think of a list as a low-level nesting of 2-item
> constructions, ...
Only when considering issues of efficiency. For example
(append <listOfAMillionElements> <otherList>)
will require copying a million linked-list cells from the first
parameter into the result, which might be a problem if you're
going to do it a lot. But
(append <listOfOneElement> <listOfAMillionElements>)
is guaranteed to be super-fast.
> In other langs, the programmer is not forced to think of nested 2-items.
No, he/she is forced to think of *other* implementation issues when
considering issues of efficiency. For example in a language that
uses arrays instead of linked lists
(append <listOfOneElement> <listOfAMillionElements>)
now takes horrendously long and might actually exhaust available
memory because it needs to allocate a new array of million-and-one
elements and copy the data from the two parameters into it. While
in a language that uses hashtables for all list structure, there's
a chance the hashing won't be good and it'll take horrendously long
to do almost anything whatsoever. And in a language that uses
self-balancing binary search trees (SBBSTs) for all list structure,
just about any operation except traversing entire lists takes
log(n) time, which is somewhat slower than linked lists for some
operations but faster for others. (Depending on the type of SBBST,
obtaining the size of a list may be virtually instant, with the
count right at the top of the tree, or may require traversing the
entire structure if the count isn't at the top.)
Your complaint about linked lists, which is really that some
operations are slower than others, applies to *any* decision about
implementation of data structures, so it's not a valid complaint
specifically against CONS cells.
> The other problem with lisp's cons, is that it hinders any
> development of tree data structure.
Bullshit!!!!!!!!!!!!!
> For example, one might write a function that extracts the leafs
> of a tree. But due to lisp's list made of cons, there is a
> different interpretations of what's considered a leaf.
Bullshit!!!!! When defining your ADT, you need to provide a way
to tell, by looking at a node, whether it's a leaf or not, and then
you need to write code that correctly determines whether a given
node is a leaf per that definition. If you can't design an
unambiguous data structure, or you can't write code to implement it
correctly, that's because you are grossly incompetent at the task
of writing software, and you should hire me to teach you how. You
have no fucking business complaining that you stub your fingers
when you try to use a hammer so hammers should be outlawed. Either
hire somebody with better skills to use the hammer for you, or
learn how to use a hammer properly.
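For instance, here is a minimal untested sketch (the names are mine)
of such an unambiguous design: the representation itself tells you
what a leaf is, so a leaf-collecting function has nothing to be
confused about.

(defstruct node label children)   ; interior nodes are NODE structs

(defun leafp (x)
  ;; By definition of this ADT: anything that isn't a NODE is a leaf.
  (not (node-p x)))

(defun collect-leaves (tree)
  (if (leafp tree)
      (list tree)
      (mapcan #'collect-leaves (node-children tree))))

;; (collect-leaves
;;  (make-node :label 'root
;;             :children (list 1 (make-node :label 'a
;;                                          :children (list 2 3)))))
;; => (1 2 3)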
> Similarly, binary tree in lisp can be implemented either using
> cons natively, or use so-called “proper list” that
> is implemented on top of cons.
Other ways exist too. I see no problem here. Nothing bad at all.
Lisp offers lots of direct and "just do it" ways to implement just
about any data structure you may want, and binary trees are no
exception.
> Worse, any proper list can be mixed with improper list.
Bullshit. There was nothing wrong to begin with, and mixing
different kinds of lists in a complicated structure isn't wrong
either.
> So, you can have a list of cons, or cons of lists, cons of cons,
> list of lists, or any mix.
Yup. Nothing wrong with that.
> The overall effect of the cons is that it prevents lisp to have a
> uniform view of tree structure, with the result that development of
> functions that work on tree are inconsistent, few, or otherwise
> hampered.
Bullshit. You have the same task you had at the start of any
project: You need to clearly understand what abstract data
structures you need, and then you need to precisely define how you
will emulate (implement) these abstract structures as *actual*
structures within the given language, regardless of whether it's C
or Lisp or Fortran or assembly language, and then decide what
data/processing primitives you plan to implement for this new data
structure you're emulating (implementing), and decide the names and
parameters of the various functions to perform those D/P tasks, and
then in any sequence you choose you need to:
- Write the code to implement those functions to D/P your new
intentional data type;
- Formally document the API for your new-data-type D/P library.
This is no different in Lisp than in any other language, except in
Lisp it's a lot easier. Once you have your new intentional data
type fully implemented and documented, anyone using your library
would suffer no confusion about what D/P functions are available
and how to use them and what the result of any operation ought to
be. In most cases your API will provide accessors for the
components of a structure, and predicates to tell whether a
component is a leaf or a branching node or whatever else it might
be. There's no confusion unless you are incompetent at doing the
work and end up with a piece of crap. Are you openly admitting you
are a crappy computer programmer who is incapable of doing this
kind of work correctly?
> Why Lisp's Cons Problem Never Got Fixed?
There never was a problem to begin with. You are making up a
fictitious problem and complaining that your fantasy problem hasn't
gotten fixed. You might as well complain that the problem of
leprechauns hoarding all the gold in the world, or of underworld
demons causing earthquakes, hasn't been solved.
** Splitting long reply here **
Robert Maas, http://tinyurl.com/uh3t wrote:
>> Suppose there's a lang called gisp. In gisp, there's cons but
>> also fons. Fons are just like cons except it has 3 cells with car,
>> cbr, cdr.
>
> You are an idiot if you can't see that Common Lisp already has that
> available any time you want it, like this for example:
> (defun xah-gisp (a b d)
>   (let ((arr (make-array '(3))))
>     (setf (aref arr 0) a)
>     (setf (aref arr 1) b)
>     (setf (aref arr 2) d)
>     arr))
> (defun xah-car (arr) (aref arr 0))
> (defun xah-cbr (arr) (aref arr 1))
> (defun xah-cdr (arr) (aref arr 2))
> ;Caveat: untested code
It's even easier:
(defstruct (xah (:constructor xah-gisp (car cbr cdr))) car cbr cdr)
C/USER[5]> (xah-gisp 1 2 3)
#<XAH :CAR 1 :CBR 2 :CDR 3>
C/USER[6]> (xah-cbr (xah-gisp 1 2 3))
2
--
__Pascal Bourguignon__
http://www.informatimago.com
(Part 3 of multi-part reply:)
> From: "·······@gmail.com" <······@gmail.com>
> Now, a little speculation.
> We might wonder, why lisp has the cons problem and was never
> addressed?
That question presumes a fact not in evidence.
Instead, there never was a cons problem in the first place.
> Having data as a single manipulatable object (list) didn't really
> become popular until the 1990s. (notably by Perl)
Are you saying that in Perl you can manipulate an entire list as a
unit but you can't manipulate any lesser part of it by itself?
In Perl, what is the cost in time and new storage of each different
kind of list operation? For example, if a list has N elements, does
it take N or log(N) time units to make a new list that has one
additional element at the start or end or in the middle? Or is that
information hidden from users, so they have no idea whether doing
such an operation on a list of fifty billion elements is going to
be reasonable or too slow? In Lisp, using a linked list, it's
clearly known how long such an operation would take:
- Tacking on an additional element at the start takes time 1 and uses
1 new unit of memory.
- Tacking on an additional element at the end destructively takes time
N and uses 1 new unit of memory.
- Tacking on an additional element at the end non-destructively takes
time N and uses N new units of memory.
- Tacking on an additional element in the middle destructively takes
time N/2 and uses 1 new unit of memory.
- Tacking on an additional element in the middle non-destructively
takes time N/2 and uses N/2 new units of memory.
How does that compare with equivalent operations in Perl??
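For reference, here is that cost list acted out in standard CL
(untested sketch):

(let ((lst (list 1 2 3)))
  (push 0 lst)              ; at start: O(1) time, 1 new cons
  (nconc lst (list 99))     ; at end, destructive: O(N) walk, 1 new cons
  (append lst (list 100)))  ; at end, non-destructive: O(N) time, N new conses

Splicing into the middle destructively is one new cons after walking
halfway:

(let ((lst (list 1 2 3 4)))
  (let ((mid (nthcdr 1 lst)))
    (setf (cdr mid) (cons :new (cdr mid))))
  lst)                      ; => (1 2 :NEW 3 4)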
> The lisp's cons, as a underlying primitive that builds lists, even
> though a bit cumbersome, but works just fine when list structure are
> simple.
It's not cumbersome at all. And it works fine for complicated list
structure too.
> Even today, with all the perl, python, php, javascript etc
> langs that deal with lists, vast majority of list usage is just simple
> flat list, sometimes 2 level of nesting (list of list, list of hash,
> hash of list). 3 levels of nesting is seldom used unless its 3d
> matrices used mostly in computer graphics or linear algebra
> applications. Greater than 3 level is almost never seen.
That's totally stupid. There are many uses for unlimited-depth
lists, and if it's not done in those other languages it must be
because those languages make it too difficult, or the type of
people who use those other languages simply aren't bright enough to
think of how to use deeply nested lists. Consider for example
mathematical expressions. Not simple polynomials, which require
only three levels of list in general notation, or one level in
specialized notation, but more complicated expressions such as the
quadratic formula:
    -b +/- sqrt(b^2 - 4*a*c)
    ------------------------
              2*a
Did I remember it correctly? I memorized it in high school or
earlier, and it seems to have stuck with me. Working from the 4*a*c
outward, that's one level for the multiplication, one level for the
subtraction, one level for the square root, one level for the
addition or subtraction, and one level for the division. Five
levels right there!! I'm sure you can think of a mathematical
expression that needs more than five levels of nested structure
if you put your brain to it. For example, try a problem in
first-semester differential calculus, involving a trigonometric
function of a linear transformation, with something multiplied
on the outside. For example, start with
f(x) = cos(x) * sin(x/2 + pi)
which is four levels deep on the right side, and then try to take
the first derivative using the product rule at the top and other
rules in the various terms:
df(x)/dx = d cos(x)/dx * sin(x/2 + pi)
           + cos(x) * d sin(x/2 + pi)/dx              5 levels deep now
         = -sin(x) * sin(x/2 + pi)
           + cos(x) * (cos(x/2 + pi) * d (x/2)/dx)    6 levels deep now
         = -sin(x) * sin(x/2 + pi)
           + cos(x) * (cos(x/2 + pi) * 1/2)           still 6 levels deep
         = -sin(x) * sin(x/2 + pi)
           + 1/2 * cos(x) * cos(x/2 + pi)             flattened to 5 levels
Now try taking the second derivative. How many levels deep does it get?
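In Lisp the nesting is explicit, which makes the level-counting
trivial. The original f(x) as an s-expression, plus a one-line depth
counter (untested sketch; assumes proper lists):

(* (cos x)
   (sin (+ (/ x 2) pi)))

(defun depth (e)
  ;; An atom is depth 0; a list is one deeper than its deepest element.
  (if (atom e) 0 (1+ (reduce #'max (mapcar #'depth e)))))

;; (depth '(* (cos x) (sin (+ (/ x 2) pi)))) => 4,
;; matching the "four levels deep" count above.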
As a completely different application, consider a multi-level search
process, such as a cladogram or taxonomy:
(life
  (cellular
    (prokaryote
      (eubacteria ...)
      (archaea ...))
    (eukaryote
      (protist ...)
      (fungi ...)
      (plant ...)
      (animal
        (protozoa ...)
        (metazoa
          (round ...)
          (bilaterallySymmetric
            (protostomes ...)
            (deuterostomes ...))))))
  (viral
    (RNA ...)
    (DNA ...))
  (computer
    (worm ...)
    (virus ...)
    (trojan ...)))
I've used ... to indicate where several more levels of structure
need to be included. Some species would be located about twenty
levels down from the top. Why can't that be expressed in Perl??
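It can, of course, be expressed and walked trivially in Lisp. An
untested sketch (FIND-PATH is my name) that digs out the path from
the root down to any taxon in such a tree:

(defun find-path (name tree)
  ;; Return the labels from the root of TREE down to NAME, or NIL.
  (cond ((atom tree) (when (eq tree name) (list name)))
        ((eq (first tree) name) (list name))
        (t (let ((sub (some (lambda (child) (find-path name child))
                            (rest tree))))
             (when sub (cons (first tree) sub))))))

;; With *TREE-ABOVE* bound to the cladogram above:
;; (find-path 'fungi *tree-above*)
;; => (LIFE CELLULAR EUKARYOTE FUNGI)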
> Systematic manipulation and exploitation of nested list, such as
> mapping to leafs, to particular level, transposition by permutation
> on level, or list structure pattern matching in today's functional
> langs, etc is hardly ever to be seen.
Why isn't it seen? Because your eyes are closed so you can't see
what's plainly there? Or do those other languages really make it
too hard to do anything like that? Is Lisp the only language that
makes it easy enough to be practical?
> Will Lisp ever be Popular?
Yeah, as soon as some other lispers join me in creating a master
directory to all the libraries per which intentional datatypes they
deal with, whereby it'll be as easy to find the appropriate library
for Lisp as it already is for Java. Then newcomers considering
which language to try for their task will be able to look at both
Java and Lisp as languages worth considering, and they'll choose
Lisp because it's a lot easier to actually use than Java is. Then
word will get around how easy it is to perform all sorts of tasks
in Lisp, and more and more people will convert to Lisp, and Lisp
will become popular.
> Mathematica today sells for over 2 thousands dollars.
Ouch!!! That's horribly expensive. I thought $400 for Macintosh
Allegro Common Lisp was too expensive, which is why I never
upgraded from version 1.2.2 that I got for free from work to a
newer version that would work on my Macintosh Performa. Who in
their right mind would pay that much for Mathematica?
> ... due to ... the internet and its consequences, e.g. Wikipedia,
> youtube, ... computer languages are proliferating like never
> before. (e.g. Erlang, OCaml, Haskell, PHP, Ruby, C#, F#, Perl6,
> Arc, NewLisp, Scala, Groovy, Lua, Q, Qz, Mercury, Scratch, Flash,
> Processing, (see Proliferation of Computing Languages)
Do *any* of those languages have a well-documented directory to all
available libraries (both built-in vendor-supplied and third-party
add-on) in a format as easy to browse as the Java API published by Sun?
<http://java.sun.com/j2se/1.4.2/docs/api/overview-summary.html>
If so, please post the URL of any such nicely organized API spec.
(Note: The CLHS isn't as nicely organized, and isn't as easy for
newcomers to understand, compared to the Java API, IMO.)
** Finished my three-part reply **
Anticomuna wrote:
> How? Is there anything wrong with the concept? I thought it was so
> easy to parse the S-expressions because of it. Is this really a
> problem for Lisp or this is more of a non-Lisper automatic reaction to
> something he is not used to?
Lisp could use other, alternative representations
of lists, without cons. Singly linked lists, and
particularly the Common Lisp and Scheme
representation, are optimized for memory use, and
the designers accepted some logical (or at least
aesthetic) disadvantages to achieve that.
[39]> (first '())
NIL
[40]> (rest '())
NIL
and also:
[41]> (first '(()))
NIL
[42]> (rest '(()))
NIL
>
> How would the implementation of a Lisp without a using cons look like?
> Would it have any advantage over the "archaic" one?
Newlisp has a slightly different implementation of
lists, with some advantages and disadvantages.
For example, the problem above does not arise,
length is a fast, constant-time operation (in CL
it is linear, so slow that it is frequently
useless), and the function "last" is defined and
is just as fast as "first".
One of the disadvantages is that circular lists
are not a special case of lists in Newlisp.
The standard answer is that one who needs a
different data structure in Lisp (or Newlisp) can
implement such structures on his own, or use some
library. True, but this is a large step, resulting
in the loss of all the standard functions, which
are defined only for the built-in data structures.
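To illustrate how large that step is, here is an untested sketch
(the names are mine) of a list that caches its length, so that
length becomes a constant-time operation, as in Newlisp:

(defstruct (clist (:constructor %make-clist (items size)))
  items size)

(defun clist (&rest items)
  (%make-clist items (length items)))

(defun clist-length (cl)
  (clist-size cl))                ; O(1), no list walk

(defun clist-push (x cl)
  ;; Non-destructive push that keeps the cached size correct.
  (%make-clist (cons x (clist-items cl)) (1+ (clist-size cl))))

None of the standard functions (mapcar, remove, subseq, ...) know
about such a structure; each one must be wrapped or reimplemented,
which is exactly the loss described above.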
On Oct 10, 9:34 am, Majorinc Kazimir <·····@email.address> wrote:
> Anticomuna wrote:
> > How? Is there anything wrong with the concept? I thought it was so
> > easy to parse the S-expressions because of it. Is this really a
> > problem for Lisp or this is more of a non-Lisper automatic reaction to
> > something he is not used to?
>
> Lisp could use other, alternative representations
> of lists, without cons. Singly linked lists, and
> particularly the Common Lisp and Scheme
> representation, are optimized for memory use, and
> the designers accepted some logical (or at least
> aesthetic) disadvantages to achieve that.
One thing to keep in mind in this topic is the separation of syntax
and implementation.
For example, take lisp's cons. You can have a lang that walks and
quacks like cons, but the implementation can be nothing like a linked
list. Conversely, many langs, in fact almost all langs that have some
sense of list as a datatype, can implement it as a linked list, array,
or hash table. In fact, what exactly each of these jargons means is
pretty much dependent on the language now. (e.g. in PHP, a list is an
array and a hashtable too. In Mathematica, List is what compiler geeks
typically call an array. In Perl and Python, you have hashtables and
dictionaries. In Perl, list and array are distinct, but its list is
nothing like lisp's list. In many langs these are called sequences,
maps, aggregates... Java separates these things into the concept of
Interfaces. Some langs call these types of things sequences in their
own idiosyncratic way (e.g. Python, Emacs Lisp), some include strings
as such a type (e.g. elisp, python) while others don't (e.g. PHP,
Perl). Then there's also “vectors”, ...)
So... it is important to have a clear idea of what we are talking
about. Are we talking about implementation? Are we talking about the
interface aspect (i.e. the language syntax)?
Different langs mix up the interface and implementation concepts to
different degrees. For example, Mathematica goes for a pure,
mathematical, interface-only treatment of this subject. Java makes
it clear by separating interface and implementation, so that it
actually has an Interface language construct. In PHP, it's all mushed
up, from an extremely practical point of view. Lisp ties its syntax
of cons tightly to the linked list.
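For example, here is an untested sketch of something that walks and
quacks like cons but is implemented with closures instead of
linked-list cells (the names are made up to avoid clashing with CL's
own):

(defun kons (a d)
  ;; A pair represented as a function that answers :CAR or :CDR.
  (lambda (which) (if (eq which :car) a d)))

(defun kar (k) (funcall k :car))
(defun kdr (k) (funcall k :cdr))

;; (kar (kons 1 2)) => 1
;; (kdr (kons 1 2)) => 2

Same interface, entirely different implementation.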
Further readings:
• Fundamental Problems of Lisp
Xah Lee, 2008
http://xahlee.org/UnixResource_dir/writ/lisp_problems.html
• The Concepts and Confusions of Prefix, Infix, Postfix and Fully
Nested Notations
Xah Lee, 2006
http://xahlee.org/UnixResource_dir/writ/notations.html
See also:
• Jargons of Info Tech industry
http://xahlee.org/UnixResource_dir/writ/jargons.html
plain text version follows:
--------------------------------
Jargons of Info Tech industry
(A Love of Jargons)
Xah Lee, 2002-02
People in the computing field like to spur the use of spurious
jargons. The less educated they are, the more they like extraneous
jargons, as in the Unix & Perl community. It is unlike mathematics,
where there are no fewer jargons but each and every one is
absolutely necessary. For example: polytope, manifold, injection/
bijection/surjection, group/ring/field..., homological, projective,
pencil, bundle, lattice, affine, topology, isomorphism, isometry,
homeomorphism, aleph-0, fractal, supremum/infimum, simplex, matrix,
quaternions, derivative/integral, ... and so on. Each and every one of
these captures a concept, for which practical and theoretical
considerations made the term a necessity. Often there are synonyms
for them because of historical developments, but never “jargons for
jargon's sake”, because mathematicians hate bloat and irrelevance.
The jargon-soaked stupidity in the computing field can be grouped into
classes. First of all, there are jargons for marketing purposes. Thus
you have Mac OS “X”, Windows “XP”, Sun OS renamed to Solaris with its
versioning confusion of 4.x to 7 to 8, and also the so-called
“Platform” instead of OS. One flagrant example is Sun Microsystems'
Java stuff: Oak, Java, JDK, JSDK, J2EE, J2SE enterprise edition or no,
from java 1.x to 1.2 == Java 2, now 1.3, JavaOne, JFC, Jini, JavaBeans,
entity Beans, AWT, Swing... fucking stupid Java and fuck Sun
Microsystems. This is just one example of the jargon hodgepodge of a
single commercial entity. Marketing jargons cannot be avoided in
modern society. They abound outside the computing field too. The
jargons of marketing came from business practice, and they can be
excused because they are kind of a necessity, or can be considered a
naturally evolved strategy for attracting attention in a laissez-faire
economic system.
The other class of jargon stupidity comes from computing
practitioners, of which the Unix/Perl community is exemplary. For
example, the names Unix & Perl themselves are good examples of buzzing
jargons. Unix is supposed to be opposed to Multics and hints at the
offensive and tasteless term eunuchs. PERL is cooked up to be
“Practical Extraction & Reporting Language” and, for precise marketing
drama, also “Pathologically Eclectic Rubbish Lister”. These types of
jargons exude juvenile humor. Cheesiness and low taste are their
hallmark. If you are familiar with unixism and perl programming,
you'll find tons and tons of such jargons embraced and verbalized by
unix & perl lovers, e.g. grep, glob, shell, pipe, man, regex, more,
less, tarball, shebang, Schwartzian Transform, croak, bless,
interpolation, TIMTOWTDI, DWIM, RFC, RTFM, I-ANAL, YMMV and so on.
There is another class of jargon moronicity, which I find the most
damaging to society: the jargons or spurious and vague terms used and
brandished about by programmers, that we see and hear daily in design
meetings, online tech group postings, and even in lots of computing
textbooks and tutorials. I think the reason for these is that this
massive body of average programmers usually doesn't have much
knowledge of significant mathematics, yet they are capable of
technical thinking that is not too abstract; thus you end up with
these people defining or hatching terms a-dime-a-dozen that are vague,
context dependent, and vacuous, and their commonality is often a
result of sopho-morons trying to sound big.
Here are some examples of the terms in question:
* anonymous functions or lambda or lambda function
* closure
* exceptions (as in Java)
* list, array, vector, aggregate
* hash (or hash table) ← fantastically stupid
* rehash (as in csh or tcsh)
* regular expression (as in regex, grep, egrep, fgrep)
* name space (as in Scheme vs Common Lisp debates)
* depth first/breadth first (as in tree traversing.)
* operator
* operator overloading
* polymorphism
* inheritance
* first class objects
* pointers, references
* tail recursion
My time is limited, so I'll just give a brief explanation of my thesis
on a selective few of these examples among the umpteen.
A branch of math called lambda calculus, on which much of the theory
of computation is based, is the origin of the jargon _lambda
function_ that is so frequently reciprocated by advanced programming
donkeys. In practice, a subroutine without side-effects is what
“lambda function” is supposed to mean. Functional languages often can
define them without assigning them to some variable (name); therefore
the “functions without side-effects” are also called “anonymous
functions”. One can see that these are two distinct concepts. If
mathematicians were designing computer languages, they would probably
just call such things _pure functions_. The term conveys the meaning,
without the “lambda” abstruseness. (In fact, the mathematics-oriented
language Mathematica refers to the lambda function as a pure function,
with the keyword Function.) Because most programmers are sopho-morons
who are less capable of clear thinking but nevertheless possess human
vanity, we can see that they have not adopted the clear and fitting
term; instead you see lambda function this and that obfuscations
dropping from their mouths constantly.
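The distinction is easy to see in code (untested sketch): a lambda is
anonymous but need not be pure, and a named function can be perfectly
pure.

;; anonymous but NOT pure -- PRINT is a side effect:
(mapcar (lambda (x) (print x) (* x x)) '(1 2 3))

;; named but pure -- same output for same input, no side effects:
(defun square (x) (* x x))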
Now, the term “closure” can, and indeed has, meant several things in
the computing field. The most common is for it to mean a subroutine
that holds some memory but without the disadvantages of modifying a
global variable. Usually such is a feature of a programming language.
Taken to the extreme, we have what's called the Object Oriented
Programming methodology and languages. The other meaning of “closure”
I have seen in textbooks is for it to indicate that the things in the
language are “closed” under the operations of the language. For
example, in some languages you can apply operations or subroutines to
anything in the language. (These languages are often what's called
“dynamically typed” or “typeless”.) However, in other languages,
things have types and cannot be passed around to subroutines or
operators arbitrarily. One can see that the term “closure” is quite
vague in conveying its meaning. The term nevertheless is very popular
among talkative programmers and dense tutorials, precisely because it
is vague and mysterious. These pseudo-wit living zombies never
thought for a moment that they are using a moronic term, mostly
because they never clearly understood the concepts behind the term in
its contexts. One can see this exhibited particularly among Perl
programmers. (For an example of the fantastically stupid write-up on
closures by the Perl folks, see “perldoc perlfaq7” and “perldoc
perlref”.)
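For the record, the first sense described above, in a few lines of
untested Lisp: a function holding private state without any global
variable.

(defun make-counter ()
  (let ((n 0))
    ;; The returned function closes over N and keeps it alive.
    (lambda () (incf n))))

;; (let ((c (make-counter)))
;;   (list (funcall c) (funcall c) (funcall c)))
;; => (1 2 3)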
In the so-called “high-level” computing languages, there are often
data types that are some kind of collection. The most illustrative is
the LISt Processing language's lists. The essential concept is that
the language can treat a collection of things as if it were a single
entity. As computer languages evolved, such collection entities also
diversified, in syntax, semantics, and implementation. Thus, besides
lists, there are also terms like vector, array, matrix, tree,
hash/“hash table”/dictionary. Often each particular term conveys a
particular implementation of a collection, so that it has certain
properties to facilitate specialized uses of such a groupy. The Java
language has such groupies and illustrates the point well. In Java,
there is this hierarchy of collection-type things:
Collection
  Set (AbstractSet, HashSet)
    SortedSet (TreeSet)
  List (AbstractList, LinkedList, Vector, ArrayList)
Map (AbstractMap, HashMap, Hashtable)
  SortedMap (TreeMap)
The words without parentheses are Java Interfaces, and the ones in
parentheses are implementations. An interface holds a concept. The
deeper the level, the more specific or specialized. The
implementations carry out the concepts. Different implementations
give different algorithmic properties. Essentially, these hierarchies
of Java show the potential complexity and confusion around groupy
entities in computer languages.
Now, the programmers we see daily, who never really thought through
these things, will attach their own specific meanings to
list/array/vector/matrix/etc types of jargons in their drivelings and
arguments, oblivious to any thought of formalizing what the fuck they
are really talking about. (One may think from the above tree-diagram
that Java the language has at least put a clear distinction between
interface and implementation, whereas in my opinion it is one
fantastic fuckup too, in many respects. (See On Java's Interface.))
Xah
∑ http://xahlee.org/
☄
On 2008-10-09, Anticomuna <············@uol.com.br> wrote:
> How? Is there anything wrong with the concept? I thought it was so
The cons cell is no more (or less) archaic than the von Neumann
machine, two's complement arithmetic, or quicksort.
The musical keyboard? Hasn't changed in almost three hundred years.
How about the English word ``the''? Shakespeare used it, we use it.
The Pythagorean formula is also archaic, yet we need it to compute
distances in a Cartesian coordinate system.
Your shiny new computer still manipulates word-sized quantities of
information according to the von Neumann design. These are a good fit
for cons cells.
Anticomuna wrote:
> How? Is there anything wrong with the concept? I thought it was so
> easy to parse the S-expressions because of it. Is this really a
> problem for Lisp or this is more of a non-Lisper automatic reaction to
> something he is not used to?
>
> How would the implementation of a Lisp without a using cons look like?
> Would it have any advantage over the "archaic" one?
Don't worry. Xahlee's a nut. Conses are perfectly good and pose no
problem. Of course, they play an important part in making lisp so
easy and fun to use.
--
__Pascal Bourguignon__
http://www.informatimago.com
Anticomuna wrote:
> How? Is there anything wrong with the concept? I thought it was so
> easy to parse the S-expressions because of it. Is this really a
> problem for Lisp or this is more of a non-Lisper automatic reaction to
> something he is not used to?
>
> How would the implementation of a Lisp without a using cons look like?
In the absence of static typing, a cons cell is just the special case of a
2-element array. So you get rid of cons cells and use only arrays (as
Mathematica does).
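An untested sketch of that reduction (VCONS etc. are my names, to
avoid clashing with the standard functions):

(defun vcons (a d) (vector a d))   ; a pair as a 2-element vector
(defun vcar (c) (svref c 0))
(defun vcdr (c) (svref c 1))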
--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u