Python, Lambda, and Guido van Rossum
Xah Lee, 2006-05-05
In this post, i'd like to deconstruct one of Guido's recent blog posts
about lambda in Python.
Guido's blog post, written on 2006-02-10 at
http://www.artima.com/weblogs/viewpost.jsp?thread=147358
bears, first of all, the title “Language Design Is Not Just Solving
Puzzles”. At the outset, and between the lines, we are told that
“I'm the supreme intellect, and I created Python”.
This seems impressive, except that the tech geekers, in their
ignorance of sociology and their lack of a mathematician's analytic
abilities, do not know that creating a language is an act that
requires few qualifications. However, creating a language that is
used by a lot of people takes considerable skill, and a big part of
that skill is salesmanship. Guido seems to have done it well and
continues to sell it well, to the point where he can put up a
belittling title and get away with it too.
Gaudy title aside, let's look at the content of what he has to say.
If you peruse the 700 words, you'll find that it amounts to this:
Guido does not like the suggested lambda fix because of its
multi-line nature, and he says he doesn't think there could possibly
be any proposal he'll like. The reason? Not much! Zen is bantered
about, the mathematician's impractical ways are waved away,
undefinable qualities are invoked, the human right brain is cited for
support (neuroscience!), “Rube Goldberg contrivance” phraseology is
thrown in, and the coolness of Google Inc is held up for the tech
geekers (in juxtaposition with a big notice that Guido works there).
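For readers keeping score, the technical issue being fought over is small and concrete: Python's lambda body is limited to a single expression, so a multi-statement anonymous function has no direct spelling and must be hoisted into a named def. A minimal illustration:

```python
# A lambda can hold one expression only:
square = lambda x: x * x
print(square(5))            # 25

# Statements (if/for/assignments) are not allowed in a lambda, so a
# multi-line "anonymous" function must become a named one instead:
def parity(x):
    if x % 2 == 0:
        return "even"
    return "odd"

print([parity(n) for n in (1, 2, 3)])   # ['odd', 'even', 'odd']
```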
If you are serious, doesn't this writing sound bigger than its
content? Look at the gorgeous ending: “This is also the reason why
Python will never have continuations, and even why I'm uninterested in
optimizing tail recursion. But that's for another installment.” This
benevolent geeker is gonna give us another INSTALLMENT!
There is a computer-language leader by the name of Larry Wall, who
said that “The three chief virtues of a programmer are: Laziness,
Impatience and Hubris”, among quite a lot of other ingenious
outpourings. It seems to me that the more i learn about Python and
its leader, the more similarities i see.
So, Guido, i understand that selling oneself is an inherent and
necessary part of being a human animal. But i think the lesser beings
should be educated enough to know that fact, so that when minions
follow a leader, they have a clear understanding of why and what.
----
Regarding the lambda-in-Python situation... conceivably you are right
that Python's lambda is perhaps best left as it is, crippled, or even
eliminated. However, this is what i want: i want Python literature,
and also Wikipedia, to cease and desist stating that Python supports
functional programming (this is not necessarily bad publicity). And i
want the Perl literature to cease and desist saying that Perl
supports OOP. But that's for another installment.
----
This post is archived at:
http://xahlee.org/UnixResource_dir/writ/python_lambda_guido.html
Xah
···@xahlee.org
∑ http://xahlee.org/
On Fri, 05 May 2006 17:26:26 -0700, Xah Lee wrote:
> Regarding the lambda-in-Python situation... conceivably you are right
> that Python's lambda is perhaps best left as it is, crippled, or even
> eliminated. However, this is what i want: i want Python literature,
> and also Wikipedia, to cease and desist stating that Python supports
> functional programming (this is not necessarily bad publicity). And i
What does lambda have to do with supporting or not supporting functional
programming?
"I V" <········@gmail.com> wrote in message
··································@gmail.com...
> On Fri, 05 May 2006 17:26:26 -0700, Xah Lee wrote:
>> Regarding the lambda-in-Python situation... conceivably you are right
>> that Python's lambda is perhaps best left as it is, crippled, or even
>> eliminated. However, this is what i want: i want Python literature,
>> and also Wikipedia, to cease and desist stating that Python supports
>> functional programming (this is not necessarily bad publicity). And i
>
> What does lambda have to do with supporting or not supporting functional
> programming?
>
What does any of this have to do with Java?
--
Rhino
"Rhino" <·························@nospam.com> wrote:
> What does any of this have to do with Java?
Xah Lee is well known for abusing Usenet for quite some time, report
his posts as excessive crossposts to:
abuse at bcglobal dot net
abuse at dreamhost dot com
IIRC this is his third ISP account in 2 weeks, so it *does* work.
Moreover, his current hosting provider, dreamhost, might drop him soon.
--
John Bokma Freelance software developer
&
Experienced Perl programmer: http://castleamber.com/
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <6OS6g.78$Ey5.74@fe12.lga>
Xah Lee wrote:
> Python, Lambda, and Guido van Rossum
>
> Xah Lee, 2006-05-05
>
> In this post, i'd like to deconstruct one of Guido's recent blog posts
> about lambda in Python.
>
> Guido's blog post, written on 2006-02-10 at
> http://www.artima.com/weblogs/viewpost.jsp?thread=147358
>
> bears, first of all, the title “Language Design Is Not Just Solving
> Puzzles”. At the outset, and between the lines, we are told that
> “I'm the supreme intellect, and I created Python”.
>
> This seems impressive, except that the tech geekers, in their
> ignorance of sociology and their lack of a mathematician's analytic
> abilities, do not know that creating a language is an act that
> requires few qualifications. However, creating a language that is
> used by a lot of people takes considerable skill, and a big part of
> that skill is salesmanship. Guido seems to have done it well and
> continues to sell it well, to the point where he can put up a
> belittling title and get away with it too.
>
> Gaudy title aside, let's look at the content of what he has to say.
> If you peruse the 700 words, you'll find that it amounts to this:
> Guido does not like the suggested lambda fix because of its
> multi-line nature, and he says he doesn't think there could possibly
> be any proposal he'll like. The reason? Not much! Zen is bantered
> about, the mathematician's impractical ways are waved away,
> undefinable qualities are invoked, the human right brain is cited for
> support (neuroscience!), “Rube Goldberg contrivance” phraseology is
> thrown in,
I think this is what you missed in your deconstruction. The upshot of
what he wrote is that it would be really hard to make semantically
meaningful indentation work with lambda. Guido did not mean it, but the
Rube Goldberg slam is actually against indentation as syntax. "Yes,
print statements in a while loop would be helpful, but..." it would be
so hard, let's go shopping. I.e., GvR and Python have hit a ceiling.
That's OK, it was never meant to be anything more than a scripting
language anyway.
But the key in the whole thread is simply that indentation will not
scale. Nor will Python.
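The indentation objection can be made concrete: a Python suite (an indented statement block) has no expression form, so it cannot be embedded at a call site the way a multi-line lambda would be; the block must be defined first and passed by name. A sketch (the inline form shown in comments is hypothetical, not valid Python):

```python
# Hypothetical, NOT valid Python -- a multi-line lambda written inline:
#
#   button.on_click(lambda event:
#       log.append(event)
#       event.upper())
#
# What Python actually requires: hoist the block into a def, then pass it.
log = []

def on_click(event):
    log.append(event)        # statement one
    return event.upper()     # statement two

handler = on_click           # the callable is passed by name instead
print(handler("click"))      # CLICK
```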
> and the coolness of Google Inc is held up for the tech geekers (in
> juxtaposition with a big notice that Guido works there).
>
> If you are serious, doesn't this writing sound bigger than its
> content? Look at the gorgeous ending: “This is also the reason why
> Python will never have continuations, and even why I'm uninterested in
> optimizing tail recursion. But that's for another installment.” This
> benevolent geeker is gonna give us another INSTALLMENT!
>
> There is a computer-language leader by the name of Larry Wall, who
> said that “The three chief virtues of a programmer are: Laziness,
> Impatience and Hubris”, among quite a lot of other ingenious
> outpourings. It seems to me that the more i learn about Python and
> its leader, the more similarities i see.
>
> So, Guido, i understand that selling oneself is an inherent and
> necessary part of being a human animal. But i think the lesser beings
> should be educated enough to know that fact, so that when minions
> follow a leader, they have a clear understanding of why and what.
Oh, my, you are preaching to the herd (?!) of lemmings?! Please tell me
you are aware that lemmings do not have ears. You should just do Lisp
all day and add to the open source libraries to speed Lisp's ascendance.
The lemmings will be liberated the day Wired puts John McCarthy on the
cover, and not a day sooner anyway.
kenny (wondering what to call a flock (?!) of lemmings)
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton <·········@gmail.com> wrote:
...
> But the key in the whole thread is simply that indentation will not
> scale. Nor will Python.
Absolutely. That's why firms who are interested in building *seriously*
large scale systems, like my employer (and supplier of your free mail
account), would never, EVER use Python, nor employ in prominent
positions such people as the language's inventor and BDFL, the author of
the most used checking tool for it, and the author of the best-selling
reference book about that language; and, for that matter, a Director of
Search Quality who, while personally a world-renowned expert of AI and
LISP, is on record as supporting Python very strongly, and publicly
stating its importance to said employer.
Obviously will not scale. Never.
Well... hardly ever!
Alex
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <L4U6g.88$Ey5.52@fe12.lga>
Alex Martelli wrote:
> Ken Tilton <·········@gmail.com> wrote:
> ...
>
>>But the key in the whole thread is simply that indentation will not
>>scale. Nor will Python.
>
>
> Absolutely. That's why firms who are interested in building *seriously*
> large scale systems, like my employer (and supplier of your free mail
> account), would never, EVER use Python, nor employ in prominent
> positions such people as the language's inventor and BDFL, the author of
> the most used checking tool for it, and the author of the best-selling
> reference book about that language; and, for that matter, a Director of
> Search Quality who, while personally a world-renowned expert of AI and
> LISP, is on record as supporting Python very strongly, and publicly
> stating its importance to said employer.
>
> Obviously will not scale. Never.
>
> Well... hardly ever!
You are talking about being incredibly popular. I was talking about
language expressivity. COBOL in its day was incredibly popular and
certainly the language of choice (hell, the only language) for the
biggest corporations you can imagine. But it did not scale as a
language. I hope there are no doubts on that score (and I actually am a
huge fan of COBOL).
The problem for Python is its success. Meant to be a KISS scripting
language, it has caught on so well that people are asking it to be a
full-blown, OO, GC, reflexive, yada, yada, yada language. Tough to do
when all you wanted to be when you grew up was a scripting language.
kenny (who is old enough to have seen many a language come and go)
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton <·········@gmail.com> wrote:
...
> > Absolutely. That's why firms who are interested in building *seriously*
> > large scale systems, like my employer (and supplier of your free mail
...
> > Obviously will not scale. Never.
> >
> > Well... hardly ever!
>
> You are talking about being incredibly popular. I was talking about
Who, me? I'm talking about the deliberate, eyes-wide-open choice by
*ONE* firm -- one which happens to more or less *redefine* what "large
scale" computation *means*, along many axes. That's got nothing to do
with Python being "incredibly popular": it has everything to do with
scalability -- the choice was made in the late '90s (and, incidentally,
by people quite familiar with lisp... no less than the reddit.com guys,
you know, the ones who recently chose to rewrite their site from Lisp to
Python...?), based on scalability issues, definitely not "popularity"
(Python in the late '90s was a very obscure, little-known language).
> kenny (who is old enough to have seen many a language come and go)
See your "many a language" and raise you one penny -- besides sundry
Basic dialects, machine languages, and microcode[s], I started out with
Fortran IV and APL, and I have professionally programmed in Pascal (many
dialects), Rexx, Forth, PL/I, Cobol, Lisp before there was a "Common"
one, Prolog, Scheme, Icon, Tcl, Awk, EDL, and several proprietary 3rd
and 4th generation languages -- as well of course as C and its
descendants such as C++ and Java, and Perl. Many other languages I've
studied and played with, I've never programmed _professionally_ (i.e.,
been paid for programs in those languages), but I've written enough
"toy" programs to get some feeling for (Ruby, SML, O'CAML, Haskell,
Snobol, FP/1, Applescript, C#, Javascript, Erlang, Mozart, ...).
Out of all languages I know, I've deliberately chosen to specialize in
Python, *because it scales better* (yes, functional programming is
_conceptually_ perfect, but one can never find sufficiently large teams
of people with the right highly-abstract mathematical mindset and at the
same time with sufficiently down-to-earth pragmaticity -- so, for _real
world_ uses, Python scales better). When I was unable to convince top
management, at the firm at which I was the top programmer, that the firm
should move to Python (beyond the pilot projects which I led and gave
such stellar results), I quit, and for years I made a great living as a
freelance consultant (mostly in Python -- once in a while, a touch of
Pyrex, C or C++ as a vigorish;-).
That's how come I ended up working at the firm supplying your free mail
(as Uber Tech Lead) -- they reached across an ocean to lure me to move
from my native Italy to California, and my proven excellence in Python
was their prime motive. The terms of their offer were just too
incredible to pass by... so, I rapidly got my O1 visa ("alien of
exceptional skills"), and here I am, happily ubertechleading... and
enjoying Python and its incredibly good scalability every single day!
Alex
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87y7xf8swl.fsf@rpi.edu>
·······@yahoo.com (Alex Martelli) writes:
> [Alex's account of choosing to specialize in Python for its
> scalability, quoted in full; snipped]
How do you define scalability?
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
Bill Atkins wrote:
<cut>
>
> How do you define scalability?
>
http://www.google.com/search?hl=en&q=define%3Ascalability&btnG=Google+Search
;-)
--
mph
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87mzdvo2nm.fsf@rpi.edu>
"Martin P. Hellwig" <········@xs4all.nl> writes:
> Bill Atkins wrote:
> <cut>
>>
>> How do you define scalability?
>>
> http://www.google.com/search?hl=en&q=define%3Ascalability&btnG=Google+Search
>
> ;-)
>
> --
> mph
OK, my real question is: what features of Python make it "scalable"?
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
Bill Atkins wrote:
> OK, my real question is: what features of Python make it "scalable"?
Let me guess: Python makes it easier to scale the application on
the "features" axis, and the approach to large-scale computation
taken by Google makes Python's poor raw performance not so big
an issue, so it doesn't prevent the application from scaling
on the "load" and "amount of data" axes. I also guess that Python
is often used to control simple, fast C/C++ programs, or even
to generate such programs.
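That division of labor is easy to sketch: Python as the flexible coordinator, compiled C doing the hot inner work. Here ctypes (in the standard library) calls straight into libc's strlen; a real system would wrap its own compiled code the same way:

```python
import ctypes
import ctypes.util

# Load the C standard library (name resolution is platform-dependent;
# on Linux, dlopen(NULL) also exposes libc symbols, hence the fallback).
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the C signature so ctypes converts arguments correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"scalability"))   # 11: computed in C, driven from Python
```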
Best regards
Tomasz
Bill Atkins wrote:
> "Martin P. Hellwig" <········@xs4all.nl> writes:
>
>> Bill Atkins wrote:
>> <cut>
>>> How do you define scalability?
>>>
>> http://www.google.com/search?hl=en&q=define%3Ascalability&btnG=Google+Search
>>
>> ;-)
>>
>> --
>> mph
>
> OK, my real question is: what features of Python make it "scalable"?
>
Well, I'm no expert, but I guess the ease of creating network services
and clients makes it quite scalable. For example, I'm creating an
XML-RPC server that returns a randomized card list, and because of
fail-over I needed some form of scalability. My solution was to first
randomize the deck, then marshal it and dump the file on a ZFS
partition, giving the client back a ticket number; the client can then
connect with the ticket number to receive the card list (read the
file, unmarshal it).
While this is overkill for 1 server, I needed multiple because of
fail-over and load-balancing; in this case I have 3 'crypto' boxes
(with hardware crypto engines, running OpenBSD) doing only the
randomizing, and 4 Solaris machines doing the ZFS and distribution of
the list.
By using XML-RPC and DNS round-robin, I can just add boxes and it
scales without any problem. The ZFS boxes are the front end, listening
to the name 'shuffle', and they connect over a private network to my
crypto boxes, listening to the name 'crypto'.
So as long as I create the DNS aliases (I have a little script that
heartbeats the boxes and removes the alias when one doesn't respond
within 10 seconds) and install the right scripts on the box, I can
scale till I'm round the earth. Of course, when the number of machines
gets over a certain degree, I'll have to add some management
functionality.
Now, I don't claim that I handle this situation well or that it's the
right solution, but it worked for me, and it was easy and fun to do
with Python. I guess that any language in this sense should be
'scalable', and perhaps other languages have even better built-in
networking libraries, but I'm not a professional programmer, and until
I learn other languages (and am comfortable enough to use them) I'll
keep on using Python for my projects.
For me Python is easy, scalable, fun, and by this the 'best', but that
is personal, and I simply don't know whether my opinion will change in
the future or not.
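Sketched in (modern) Python, the ticket flow above might look like this; the storage path and function names are illustrative, and plain file storage stands in for the shared ZFS partition:

```python
import os
import pickle
import random
import tempfile
import uuid

STORE = tempfile.mkdtemp()   # stand-in for the shared ZFS partition

def request_shuffle() -> str:
    """Randomize a deck, marshal it to shared storage, return a ticket."""
    deck = list(range(52))
    random.shuffle(deck)     # the real system delegates this to the crypto boxes
    ticket = uuid.uuid4().hex
    with open(os.path.join(STORE, ticket), "wb") as f:
        pickle.dump(deck, f)
    return ticket

def fetch_deck(ticket: str) -> list:
    """Any front-end box mounting the same storage can serve the ticket."""
    with open(os.path.join(STORE, ticket), "rb") as f:
        return pickle.load(f)

# Exposing both calls over XML-RPC (not started here):
#   from xmlrpc.server import SimpleXMLRPCServer
#   server = SimpleXMLRPCServer(("0.0.0.0", 8000))
#   server.register_function(request_shuffle)
#   server.register_function(fetch_deck)
#   server.serve_forever()
```

DNS round-robin then simply spreads clients across identical copies of this service.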
--
mph
"Martin P. Hellwig" <········@xs4all.nl> writes:
> and clients makes it quite scalable. For example, I'm creating an
> XML-RPC server that returns a randomized card list, and because of
> fail-over I needed some form of scalability. My solution was to first
> randomize the deck, then marshal it and dump the file on a ZFS
> partition, giving the client back a ticket number; the client can then
> connect with the ticket number to receive the card list (read the
> file, unmarshal it).
This is a weird approach. Why not let the "ticket" be the (maybe
encrypted) PRNG seed that generates the permutation?
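The suggestion, sketched with Python's standard PRNG standing in for a properly keyed or encrypted one (illustrative only, not cryptographically sound):

```python
import random

def deck_from_ticket(ticket: int) -> list:
    # Any server holding the same code can regenerate the identical
    # permutation from the ticket alone -- no shared deck storage needed.
    rng = random.Random(ticket)   # the ticket doubles as the PRNG seed
    deck = list(range(52))
    rng.shuffle(deck)
    return deck
```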
> While this is overkill for 1 server, I needed multiple because of
> fail-over and load-balancing; in this case I have 3 'crypto' boxes
> (with hardware crypto engines, running OpenBSD) doing only the
> randomizing, and 4 Solaris machines doing the ZFS and distribution of
> the list.
I don't know what good that hardware crypto is doing you, if you're
then writing out the shuffled deck to disk in the clear.
Paul Rubin <·············@NOSPAM.invalid> writes:
> I don't know what good that hardware crypto is doing you, if you're
> then writing out the shuffled deck to disk in the clear.
Ehhh, I guess you want the crypto hardware to generate physical
randomness for each shuffle. I'm skeptical of the value of this since
a cryptographic PRNG seeded with good entropy is supposed to be
computationally indistinguishable from physical randomness, and if
it's not, we're all in big trouble; further, that hardware engine is
almost certainly doing some cryptographic whitening, which is a
problem if you don't think that cryptography works.
Anyway, if it's just a 52-card deck you're shuffling, there's only
about 226 bits of entropy per shuffle, or 52*6 = 312 bits if you write
out the permutation straightforwardly as a vector. You could use that
as the ticket but if you're generating it that way you may need to
save the shuffle for later auditing.
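The arithmetic can be checked directly:

```python
import math

# Entropy of a uniformly random 52-card shuffle: log2(52!)
entropy_bits = math.log2(math.factorial(52))
print(round(entropy_bits))   # 226 (more precisely, about 225.58)

# Naive encoding of the permutation as a vector: 6 bits per card position
print(52 * 6)                # 312
```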
For practical security purposes I'd be happier generating the shuffles
entirely inside the crypto module (HSM) by cryptographic means, with
the "ticket" just being a label for a shuffle. E.g. let
K1, K2 = secret keys
T(n) = ticket #n = AES(K1, n) to prevent clients from guessing
ticket numbers
shuffle(n) = HMAC-SHA-384(K2, n) truncated to 312 bits, treated as
permutation on 52 cards
You could put some of the card dealing logic into the HSM to get the
cards dealt out only as the game as played, to decrease the likelihood
of any cards getting exposed prematurely.
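A rough sketch of this scheme in Python, with keyed HMAC standing in for the AES ticket-labeling step (the standard library has no AES) and with the 312 truncated bits used as per-card sort keys to form a permutation. The keys are placeholders, and mapping 6-bit sort keys to a permutation is slightly biased, which a production HSM design would need to correct:

```python
import hashlib
import hmac

# Hypothetical secret keys; in the real design these live inside the HSM.
K1 = b"secret key 1"
K2 = b"secret key 2"

def ticket(n: int) -> str:
    # Stand-in for AES(K1, n): an unguessable label for shuffle number n.
    return hmac.new(K1, n.to_bytes(8, "big"), hashlib.sha256).hexdigest()

def shuffle(n: int) -> list:
    # HMAC-SHA-384(K2, n) truncated to 312 bits = 52 cards * 6 bits each;
    # the 6-bit values serve as sort keys producing a permutation of 0..51.
    digest = hmac.new(K2, n.to_bytes(8, "big"), hashlib.sha384).digest()
    bits = int.from_bytes(digest, "big") >> (384 - 312)
    keys = [(bits >> (6 * i)) & 0x3F for i in range(52)]
    return sorted(range(52), key=lambda card: (keys[card], card))

print(ticket(1)[:16])
print(shuffle(1)[:5])
```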
Paul Rubin wrote:
> a cryptographic PRNG seeded with good entropy is supposed to be
> computationally indistinguishable from physical randomness
Doesn't your "good entropy" include "physical randomness"?
--
Affijn, Ruud
"Gewoon is een tijger."
Paul Rubin wrote:
> "Martin P. Hellwig" <········@xs4all.nl> writes:
>> and clients makes it quite scalable. For example, I'm creating an
>> XML-RPC server that returns a randomized card list, and because of
>> fail-over I needed some form of scalability. My solution was to first
>> randomize the deck, then marshal it and dump the file on a ZFS
>> partition, giving the client back a ticket number; the client can then
>> connect with the ticket number to receive the card list (read the
>> file, unmarshal it).
>
> This is a weird approach. Why not let the "ticket" be the (maybe
> encrypted) PRNG seed that generates the permutation?
Because the server that handles the generate request doesn't need to be
the same as the one that handles the request to give the client that
deck. Even more, the server that handles the request calls the crypto
servers to actually do the shuffling. So when the server fails before it
has given the client the ticket, it is possible that a deck has already
been created but is never used; no biggie there.
But if the ticket has been given to the client, then any other server
can serve back that ticket to give the shuffled deck, unless the ZFS
dies, of course, but then again that's why I use ZFS, so I can mirror it
on 4 different machines in 2 different locations.
>
>> While this is overkill for 1 server, I needed multiple because of
>> fail-over and load-balancing; in this case I have 3 'crypto' boxes
>> (with hardware crypto engines, running OpenBSD) doing only the
>> randomizing, and 4 Solaris machines doing the ZFS and distribution of
>> the list.
>
> I don't know what good that hardware crypto is doing you, if you're
> then writing out the shuffled deck to disk in the clear.
It's not about access security; it's more about the best possible
randomness to shuffle the deck.
--
mph
"Martin P. Hellwig" <········@xs4all.nl> writes:
> > This is a weird approach. Why not let the "ticket" be the (maybe
> > encrypted) PRNG seed that generates the permutation?
>
> Because the server that handles the generate request doesn't need to
> be the same as the one that handles the request to give the client
> that deck.
Wait a sec, are you giving the entire shuffled deck to the client?
Can you describe the application? I was imagining an online card game
where clients are playing against each other. Letting any client see
the full shuffle is disastrous.
> But if the ticket has been given to the client, then any other server
> can serve back that ticket to give the shuffled deck, unless the ZFS
> dies, of course, but then again that's why I use ZFS, so I can mirror
> it on 4 different machines in 2 different locations.
> > I don't know what good that hardware crypto is doing you, if you're
> > then writing out the shuffled deck to disk in the clear.
>
> It's not about access security; it's more about the best possible
> randomness to shuffle the deck.
Depending on just what the server is for, access security may be a far
more important issue. If I'm playing cards online with someone, I'd
be WAY more concerned about the idea of my opponent being able to see
my cards by breaking into the server, than his being able to
cryptanalyze a well-designed PRNG based solely on its previous
outputs.
Paul Rubin wrote:
> "Martin P. Hellwig" <········@xs4all.nl> writes:
>>> This is a weird approach. Why not let the "ticket" be the (maybe
>>> encrypted) PRNG seed that generates the permutation?
>> Because the server that handles the generate request doesn't need to
>> be the same as the one that handles the request to give the client
>> that deck.
>
> Wait a sec, are you giving the entire shuffled deck to the client?
> Can you describe the application? I was imagining an online card game
> where clients are playing against each other. Letting any client see
> the full shuffle is disastrous.
Nope, I have a front-end service that does the client bit; it's about
this (in this context; there are more services, of course):
crypto - ZFS - table servers - mirror dispatching - client xmlrpc access
- client (the last one has not been written yet)
<cut>
>
> Depending on just what the server is for, access security may be a far
> more important issue. If I'm playing cards online with someone, I'd
> be WAY more concerned about the idea of my opponent being able to see
> my cards by breaking into the server, than his being able to
> cryptanalyze a well-designed PRNG based solely on its previous
> outputs.
Only the client xmlrpc access is (should be) accessible from the
outside, and since this server is user-session based, clients only see
their own cards. However, this project is still in its early
development; I'm now doing initial alpha tests (and stress testing),
and after this I'm going to have some audit bureaus check the security
(probably Madison-Ghurka, but I haven't asked them yet).
--
mph
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <PV27g.16$pv7.11@fe08.lga>
Martin P. Hellwig wrote:
> Bill Atkins wrote:
> <cut>
>
>>
>> How do you define scalability?
>>
> http://www.google.com/search?hl=en&q=define%3Ascalability&btnG=Google+Search
>
Damn! Google can do that?! Omigod!!! Not joking, I never knew that;
always used dictionary.com. Thx! I meant:
> The ability to add power and capability to an existing system without significant expense or overhead.
> www.yipes.com/care/cc_glossary.shtml
The number of definitions explains why most respondents should save
their breath. Natural language is naturally ambiguous. Meanwhile Usenet
is the perfect place to grab one meaning out of a dozen and argue over
the implications of that one meaning which of course is never the one
originally intended, as any reasonable, good faith reader would admit.
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton <·········@gmail.com> wrote:
> Martin P. Hellwig wrote:
> > Bill Atkins wrote:
> > <cut>
> >
> >>
> >> How do you define scalability?
> >>
> > http://www.google.com/search?hl=en&q=define%3Ascalability&btnG=Google+Search
> >
>
> Damn! Google can do that?! Omigod!!! Not joking, I never knew that;
You're welcome; we do have several little useful tricks like that.
> always used dictionary.com. Thx! I meant:
>
> > The ability to add power and capability to an existing system without
> > significant expense or overhead. www.yipes.com/care/cc_glossary.shtml
Excellent -- just the definition of "scalability" that Google and its
competitor live and die by ((OK, OK, I'm _not_ implying that such issues
as usability &c don't matter, by any means -- but, I live mostly in the
world of infrastructure, where scalability and reliability reign)).
> The number of definitions explains why most respondents should save
> their breath. Natural language is naturally ambiguous. Meanwhile Usenet
> is the perfect place to grab one meaning out of a dozen and argue over
> the implications of that one meaning which of course is never the one
> originally intended, as any reasonable, good faith reader would admit.
However, you and I are/were discussing exactly the same nuance of
meaning, either by a funny quirk of fate or because it's the one that
really matters in large-scale programming (and more generally,
large-scale systems). E.g., if your existing system can gracefully
handle your current traffic of, say, a billion queries of complexity X,
you want to be able to rapidly add a feature that will increase the
average query's complexity to (X+dX) and attract 20% more users, so
you'll need to handle 1.2 billion queries just as gracefully: i.e., you
need to be able to add power and capability to your existing system,
rapidly and reliably, just as that definition says.
When this is the challenge, your choice of programming language is not
the first order of business, of course -- your hardware and network
architecture loom large, and so does the structuring of your
applications and infrastructure software across machines and networks.
Still, language does matter, at a "tertiary" level if you will. Among
the potential advantages of Lisp is the fact that you could use Lisp
across almost all semantic levels ("almost" because I don't think "Lisp
machines" are a realistic option nowadays, so lower levels of the stack
would remain in C and machine language -- but those levels may probably
be best handled by a specialized squad of kernel-level and device-driver
programmers, anyway); among the potential advantages of Python, the fact
that (while not as suited as Lisp to lower-level coding, partly because
of a lack of good solid compilers to make machine language out of it),
it brings a powerful drive to uniformity, rather than a drive towards a
host of "domain-specific" Little Languages as is encouraged by Lisp's
admirably-powerful macro system.
One key axis of scalability here is, how rapidly can you grow the teams
of people that develop and maintain your software base? To meet all the
challenges and grasp all the opportunities of an exploding market,
Google has had to almost-double its size, in terms of number of
engineers, every year for the last few years -- I believe that doing so
while keeping stellar quality and productivity is an unprecedented feat,
and while (again!) the choice of language(s) is not a primary factor
(most kudos must go to our management and its approaches and methods, of
course, and in particular to the strong corporate identity and culture
they managed to develop and maintain), it still does matter. The
uniformity of coding style and practices in our codebase is strong.
We don't demand Python knowledge from all the engineers we hire: for any
"engineering superstar" worth the adjective, Python is really easy and
fast to pick up and start using productively -- I've seen it happen
thousands of times, both in Google and in my previous career, and not
just for engineers with a strong software background, but also for those
whose specialties are hardware design, network operations, etc, etc. The
language's simplicity and versatility allow this. Python "fits people's
brains" to an unsurpassed extent -- in a way that, alas, languages
requiring major "paradigm shifts" (such as pure FP languages, or Common
Lisp, or even, say, Smalltalk, or Prolog...) just don't -- they really
require a certain kind of mathematical mindset or predisposition which
just isn't as widespread as you might hope. Myself, I do have more or
less that kind of mindset, please note: while my Lisp and scheme are
nowadays very rusty, proficiency with them was part of what landed me my
first job, over a quarter century ago (microchip designers with a good
grasp of lisp-ish languages being pretty rare, and TI being rather
hungry for them at the time) -- but I must acknowledge I'm an exception.
Of course, the choice of Python does mean that, when we really truly
need a "domain specific little language", we have to implement it as a
language in its own right, rather than piggybacking it on top of a
general-purpose language as Lisp would no doubt afford; see
<http://labs.google.com/papers/sawzall.html> for such a DSLL developed
at Google. However, I think this tradeoff is worthwhile, and, in
particular, does not impede scaling.
Alex
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <lr87g.502$SV1.253@fe10.lga>
Alex Martelli wrote:
> Ken Tilton <·········@gmail.com> wrote:
>
>
>>Martin P. Hellwig wrote:
>>
>>>Bill Atkins wrote:
>>><cut>
>>>
>>>>How do you define scalability?
>>>>
>>>
>>>http://www.google.com/search?hl=en&q=define%3Ascalability&btnG=Google+Search
>>>
>>
>>Damn! Google can do that?! Omigod!!! Not joking, I never knew that,
>
>
> You're welcome; we do have several little useful tricks like that.
>
>
>>always used dictionary.com. Thx! I meant:
>>
>>
>>>The ability to add power and capability to an existing system without
>>>significant expense or overhead. www.yipes.com/care/cc_glossary.shtml
>
>
> Excellent -- just the definition of "scalability" that Google and its
> competitor live and die by ((OK, OK, I'm _not_ implying that such issues
> as usability &c don't matter, by no means -- but, I live mostly in the
> world of infrastructure, where scalability and reliability reign)).
>
>
>
>>The number of definitions explains why most respondents should save
>>their breath. Natural language is naturally ambiguous. Meanwhile Usenet
>>is the perfect place to grab one meaning out of a dozen and argue over
>>the implications of that one meaning which of course is never the one
>>originally intended, as any reasonable, good faith reader would admit.
>
>
> However, you and I are/were discussing exactly the same nuance of
> meaning, either by a funny quirk of fate or because it's the one that
> really matters in large-scale programming (and more generally,
> large-scale systems). E.g., if your existing system can gracefully
> handle your current traffic of, say, a billion queries of complexity X,
> you want to be able to rapidly add a feature that will increase the
> average query's complexity to (X+dX) and attract 20% more users, so
> you'll need to handle 1.2 billion queries just as gracefully: i.e., you
> need to be able to add power and capability to your existing system,
> rapidly and reliably, just as that definition says.
>
> When this is the challenge, your choice of programming language is not
> the first order of business, of course ...
Looks like dictionaries are no match for the ambiguity of natural
language. :) Let me try again: it is Python itself that cannot scale, as
in gain "new power and capability", and at least in the case of lambda
it seems to be because of indentation-sensitivity.
Is that not what GvR said?
By contrast, in On Lisp we see Graham toss off Prolog in Chapter 22 and
an object system from scratch in Chapter 25. Lite versions, to be sure,
but you get the idea.
My sig has a link to a hack I developed after doing Lisp for less than a
month, and without lambda (and to a lesser degree macros) it would be
half the tool it is. It adds a declarative paradigm to the CL object
system, and is built on nothing but ansi standard Lisp. Yet it provides
new power and capability. And that by an application programmer just
working on a nasty problem, never mind the language developer.
I just find it interesting that sexpr notation (which McCarthy still
wants to toss!) is such a huge win, and that indentation seems to be so
limiting.
-- your hardware and network
> architecture loom large, and so does the structuring of your
> applications and infrastructure software across machines and networks.
> Still, language does matter, at a "tertiary" level if you will. Among
> the potential advantages of Lisp is the fact that you could use Lisp
> across almost all semantic levels ("almost" because I don't think "Lisp
> machines" are a realistic option nowadays, so lower levels of the stack
> would remain in C and machine language -- but those levels may probably
> be best handled by a specialized squad of kernel-level and device-driver
> programmers, anyway); among the potential advantages of Python, the fact
> that (while not as suited as Lisp to lower-level coding, partly because
> of a lack of good solid compilers to make machine language out of it),
> it brings a powerful drive to uniformity, rather than a drive towards a
> host of "domain-specific" Little Languages as is encouraged by Lisp's
> admirably-powerful macro system.
>
> One key axis of scalability here is, how rapidly can you grow the teams
> of people that develop and maintain your software base?
I am with Brooks on the Man-Month myth, so I am more interested in /not/
growing my team. If Lisp is <pick a number, any number> times more
expressive than Python, you need exponentially fewer people.
In some parallel universe Norvig had the cojones to dictate Lisp to
Google and they listened, and in that universe... I don't know, maybe
GMail lets me click on the sender column to sort my mail? :)
> To meet all the
> challenges and grasp all the opportunities of an exploding market,
> Google has had to almost-double its size, in terms of number of
> engineers, every year for the last few years -- I believe that doing so
> while keeping stellar quality and productivity is an unprecedented feat,
> and while (again!) the choice of language(s) is not a primary factor
> (most kudos must go to our management and its approaches and methods, of
> course, and in particular to the strong corporate identity and culture
> they managed to develop and maintain), it still does matter. The
> uniformity of coding style and practices in our codebase is strong.
Well, you said it for me. Google hires the best and pays a lot. Hey, I
wrote great code in Cobol. So as much as you want to brag on yourself
and Google <g>, your success does not address:
Indentation-sensitivity: Is it holding Python back?
>
> We don't demand Python knowledge from all the engineers we hire: for any
> "engineering superstar" worth the adjective, Python is really easy and
> fast to pick up and start using productively -- I've seen it happen
> thousands of times, both in Google and in my previous career, and not
> just for engineers with a strong software background, but also for those
> whose specialties are hardware design, network operations, etc, etc. The
> language's simplicity and versatility allow this. Python "fits people's
> brains" to an unsurpassed extent -- in a way that, alas, languages
> requiring major "paradigm shifts" (such as pure FP languages, or Common
> Lisp, or even, say, Smalltalk, or Prolog...) just don't -- they really
> require a certain kind of mathematical mindset or predisposition which
> just isn't as widespread as you might hope.
Talk about Lisp myths. The better the language, the easier the language.
And the best programmers on a team get to develop tools and macrology
that empower the lesser lights, so (a) they have fun work that keeps
them entertained while (b) the drones who just want to get through the
day are insanely productive, too.
Another myth (or is this the same?) is this "pure FP" thing. Newbies can
and usually do code as imperatively as they wanna be. Until someone else
sees their code, tidies it up, and the light bulb goes on. But CL does
not force a sharp transition on anyone.
> Myself, I do have more or
> less that kind of mindset, please note: while my Lisp and scheme are
> nowadays very rusty, proficiency with them was part of what landed me my
> first job, over a quarter century ago (microchip designers with a good
> grasp of lisp-ish languages being pretty rare, and TI being rather
> hungry for them at the time) -- but I must acknowledge I'm an exception.
>
> Of course, the choice of Python does mean that, when we really truly
> need a "domain specific little language", we have to implement it as a
> language in its own right, rather than piggybacking it on top of a
> general-purpose language as Lisp would no doubt afford; see
> <http://labs.google.com/papers/sawzall.html> for such a DSLL developed
> at Google.
No lambdas? Static typing?! eewwwewww. :) Loved the movie, tho.
Come on, try just one meaty Common Lisp project at Google. Have someone
port Cells to Python. I got halfway done but decided I would rather be
doing Lisp. uh-oh. Does Python have anything like special variables? :)
Kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton <·········@gmail.com> wrote:
...
> Looks like dictionaries are no match for the ambiguity of natural
> language. :) Let me try again: it is Python itself that cannot scale, as
> in gain "new power and capability", and at least in the case of lambda
> it seems to be because of indentation-sensitivity.
In my opinion (and that of several others), the best way for Python to
grow in this regard would be to _lose_ lambda altogether, since named
functions are preferable (and it's an acknowledged Python design
principle that there should ideally be just one obvious way to perform a
task); GvR used to hold the same opinion, but changed his mind recently,
alas, so we'll keep the wart.
But, quite apart from the whole issue of whether it's desirable for
languages to change massively ("add new power and capability" meaning
new enriched features in the language itself), your whole argument is
bogus: it's obvious that _any_ fundamental design choice in an artefact
will influence the feasibility and desirability of future design choices
in future releases of that same, identical artefact. At a syntax-sugar
level, for example, Lisp's choice to use parentheses as delimiter means
it's undesirable, even unfeasible, to use the single character '(' as an
ordinary identifier in a future release of the language. Considering
this to mean that Lisp "cannot scale" is just as ridiculous as
considering that Python "cannot scale" by not having an elegant way to
make lambdas heavier and richer -- totally laughable and idiotic. ``An
unneeded feature "cannot" be added (elegantly) in future releases of the
language'' is just as trivial and acceptable for the unneeded feature
``allow ( as an ordinary single-character identifier'' as for the
unneeded feature ``allow unnamed functions with all the flexibility of
named ones''.
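The tradeoff Martelli argues here can be sketched in a few lines of Python (the names below are illustrative, not from the thread): a def and a lambda produce interchangeable function objects, the lambda differing only in lacking a name and being restricted to a single expression.

```python
# A named function and an anonymous one doing the same job.
def double_named(x):
    return 2 * x

double_anon = lambda x: 2 * x

# Both are ordinary function objects and behave identically when called.
assert double_named(21) == double_anon(21) == 42

# The only observable difference: the lambda carries no useful name.
assert double_anon.__name__ == '<lambda>'
```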
> By contrast, in On Lisp we see Graham toss off Prolog in Chapter 22 and
Oh, is that the same Graham who writes:
"""
A friend of mine who knows nearly all the widely used languages uses
Python for most of his projects. He says the main reason is that he
likes the way source code looks. That may seem a frivolous reason to
choose one language over another. But it is not so frivolous as it
sounds: when you program, you spend more time reading code than writing
it. You push blobs of source code around the way a sculptor does blobs
of clay. So a language that makes source code ugly is maddening to an
exacting programmer, as clay full of lumps would be to a sculptor.
"""
...? [[ I suspect that friend is in fact a common friend of mine and
Graham's, the guy you also mention later in your post, and who
introduced Graham and me when G recently came talk at Google (we had
"brushed" before, speaking in the same sessions at conferences and the
like, but had never "met", as in, got introduced and _talked_...;-). ]]
But, no matter, let's get back to Graham's point: significant
indentation is a large part of what gives Python its own special beauty,
uncluttered by unneeded punctuation. And while you, I, Graham, and that
common friend of ours, might likely agree that Lisp, while entirely
different, has its own eerie beauty, most people's aesthetics are poles
apart from that (why else would major pure-FP languages such as *ML and
Haskell entirely reject Lisp's surface syntax, willingly dropping the
ease of macros, to introduce infix operator syntax etc...? obviously,
their designers' aesthetics weigh parenthesized prefix syntax negatively,
despite said designers' undeniable depth, skill and excellence).
Alex
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87mzdurayc.fsf@rpi.edu>
·······@yahoo.com (Alex Martelli) writes:
> Ken Tilton <·········@gmail.com> wrote:
> ...
>> Looks like dictionaries are no match for the ambiguity of natural
>> language. :) Let me try again: it is Python itself that cannot scale, as
>> in gain "new power and capability", and at least in the case of lambda
>> it seems to be because of indentation-sensitivity.
>
> In my opinion (and that of several others), the best way for Python to
> grow in this regard would be to _lose_ lambda altogether, since named
> functions are preferable (and it's an acknowledged Python design
> principle that there should ideally be just one obvious way to perform a
> task); GvR used to hold the same opinion, but changed his mind recently,
> alas, so we'll keep the wart.
>
> But, quite apart from the whole issue of whether it's desirable for
> languages to change massively ("add new power and capability" meaning
> new enriched features in the language itself), your whole argument is
> bogus: it's obvious that _any_ fundamental design choice in an artefact
> will influence the feasibility and desirability of future design choices
> in future releases of that same, identical artefact. At a syntax-sugar
> level, for example, Lisp's choice to use parentheses as delimiter means
> it's undesirable, even unfeasible, to use the single character '(' as an
> ordinary identifier in a future release of the language. Considering
> this to mean that Lisp "cannot scale" is just as ridiculous as
> considering that Python "cannot scale" by not having an elegant way to
> make lambdas heavier and richer -- totally laughable and idiotic. ``An
> unneeded feature "cannot" be added (elegantly) in future releases of the
> language'' is just as trivial and acceptable for the unneeded feature
> ``allow ( as an ordinary single-character identifier'' as for the
> unneeded feature ``allow unnamed functions with all the flexibility of
> named ones''.
Not so infeasible:
(let ((|bizarrely(named()symbol| 3))
  (+ |bizarrely(named()symbol| 4))
;; => 7
And in any case, enforced indentation is a policy with vastly more
serious consequences than the naming of identifiers.
>> By contrast, in On Lisp we see Graham toss off Prolog in Chapter 22 and
>
> Oh, is that the same Graham who writes:
>
> """
> A friend of mine who knows nearly all the widely used languages uses
> Python for most of his projects. He says the main reason is that he
> likes the way source code looks. That may seem a frivolous reason to
> choose one language over another. But it is not so frivolous as it
> sounds: when you program, you spend more time reading code than writing
> it. You push blobs of source code around the way a sculptor does blobs
> of clay. So a language that makes source code ugly is maddening to an
> exacting programmer, as clay full of lumps would be to a sculptor.
> """
> ...? [[ I suspect that friend is in fact a common friend of mine and
> Graham's, the guy you also mention later in your post, and who
> introduced Graham and me when G recently came talk at Google (we had
> "brushed" before, speaking in the same sessions at conferences and the
> like, but had never "met", as in, got introduced and _talked_...;-). ]]
>
> But, no matter, let's get back to Graham's point: significant
> indentation is a large part of what gives Python its own special beauty,
> uncluttered by unneeded punctuation. And while you, I, Graham, and that
> common friend of ours, might likely agree that Lisp, while entirely
> different, has its own eerie beauty, most people's aesthetics are poles
> apart from that (why else would major pure-FP languages such as *ML and
> Haskell entirely reject Lisp's surface syntax, willingly dropping the
> ease of macros, to introduce infix operator syntax etc...? obviously,
> their designers' aesthetics weigh parenthesized prefix syntax negatively,
> despite said designers' undeniable depth, skill and excellence).
>
>
> Alex
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
Bill Atkins <············@rpi.edu> wrote:
...
> > ``allow ( as an ordinary single-character identifier'' as for the
> > unneeded feature ``allow unnamed functions with all the flexibility of
> > named ones''.
>
> Not so infeasible:
>
> (let ((|bizarrely(named()symbol| 3))
> (+ |bizarrely(named()symbol| 4))
>
> ;; => 7
Read again what I wrote: I very specifically said "ordinary
*single-character* identifier" (as opposed to "one of many characters
inside a multi-character identifier"). Why do you think I said
otherwise, when you just quoted what I had written? (Even just a
_leading_ ( at the start of an identifier may be problematic -- and just
as trivial as having to give names to functions, of course, see below).
> And in any case, enforced indentation is a policy with vastly more
> serious consequences than the naming of identifiers.
So far, what was being discussed here -- having to use an identifier
for an object, rather than keeping it anonymous -- is trivial.
Python practically enforces names for several kinds of objects, such as
classes and modules as well as functions ("practically" because you CAN
call new.function(...), type(...), etc, where the name is still there
but might e.g. be empty -- not a very practical alternative, though) --
so what? Can you have an unnamed macro in Lisp? Is being "forced" to
name it a "serious consequence"? Pah.
Anyway, I repeat: *any* design choice (in a language, or for that matter
any other artefact) has consequences. As Paul Graham quotes and
supports his unnamed friend as saying, Python lets you easily write code
that *looks* good, and, as Graham argues, that's an important issue --
and, please note, a crucial consequence of using significant
indentation. Alien whitespace eating nanoviruses are no more of a worry
than alien parentheses eating nanoviruses, after all.
Alex
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87ac9ubsfg.fsf@rpi.edu>
·······@yahoo.com (Alex Martelli) writes:
> Bill Atkins <············@rpi.edu> wrote:
> ...
>> > ``allow ( as an ordinary single-character identifier'' as for the
>> > unneeded feature ``allow unnamed functions with all the flexibility of
>> > named ones''.
>>
>> Not so infeasible:
>>
>> (let ((|bizarrely(named()symbol| 3))
>> (+ |bizarrely(named()symbol| 4))
>>
>> ;; => 7
>
> Read again what I wrote: I very specifically said "ordinary
> *single-character* identifier" (as opposed to "one of many characters
> inside a multi-character identifier"). Why do you think I said
> otherwise, when you just quoted what I had written? (Even just a
> _leading_ ( at the start of an identifier may be problematic -- and just
> as trivial as having to give names to functions, of course, see below).
Well, the same technique can obviously be used for:
(let ((|(| 3))
  (+ |(| 4))
;; => 7
The length of the identifier is irrelevant...
>> And in any case, enforced indentation is a policy with vastly more
>> serious consequences than the naming of identifiers.
>
> So far, what was being discussed here -- having to use an identifier
> for an object, rather than keeping it anonymous -- is trivial.
> Python practically enforces names for several kinds of objects, such as
> classes and modules as well as functions ("practically" because you CAN
> call new.function(...), type(...), etc, where the name is still there
> but might e.g. be empty -- not a very practical alternative, though) --
> so what? Can you have an unnamed macro in Lisp? Is being "forced" to
> name it a "serious consequence"? Pah.
Common Lisp does not support unnamed macros (how would these be
useful?), but nothing stops me from adding these. What use case do
you envision for anonymous macros?
> Anyway, I repeat: *any* design choice (in a language, or for that matter
> any other artefact) has consequences. As Paul Graham quotes and
> supports his unnamed friend as saying, Python lets you easily write code
> that *looks* good, and, as Graham argues, that's an important issue --
> and, please note, a crucial consequence of using significant
> indentation. Alien whitespace eating nanoviruses are no more of a worry
> than alien parentheses eating nanoviruses, after all.
It *is* an important issue, but it's also a subjective issue. I find
Lisp to be far prettier than any syntax-based language, so it's far
from an objective truth that Python code often looks good - or even at
all.
Plus, I can easily write code that looks good without using a language
that enforces indentation rules. Lisp's regular syntax lets Emacs do
it for me with a simple C-M-a C-M-q. What could be easier?
>
>
> Alex
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
Bill Atkins <············@rpi.edu> wrote:
...
> >
> > Read again what I wrote: I very specifically said "ordinary
> > *single-character* identifier" (as opposed to "one of many characters
> > inside a multi-character identifier"). Why do you think I said
> > otherwise, when you just quoted what I had written? (Even just a
> > _leading_ ( at the start of an identifier may be problematic -- and just
> > as trivial as having to give names to functions, of course, see below).
>
> Well, the same technique can obviously be used for:
>
> (let ((|(| 3))
>   (+ |(| 4))
> ;; => 7
>
> The length of the identifier is irrelevant...
But it cannot be a SINGLE CHARACTER, *just* the open parenthesis.
Wow, it's incredible to me that you STILL can't read, parse and
understand what I have so clearly expressed and repeated!
> Common Lisp does not support unnamed macros (how would these be
> useful?), but nothing stops me from adding these. What use case do
> you envision for anonymous macros?
None, just like there is none for anonymous functions -- there is
nothing useful I can do with anonymous functions that I cannot do with
named ones.
> > Anyway, I repeat: *any* design choice (in a language, or for that matter
> > any other artefact) has consequences. As Paul Graham quotes and
> > supports his unnamed friend as saying, Python lets you easily write code
> > that *looks* good, and, as Graham argues, that's an important issue --
> > and, please note, a crucial consequence of using significant
> > indentation. Alien whitespace eating nanoviruses are no more of a worry
> > than alien parentheses eating nanoviruses, after all.
>
> It *is* an important issue, but it's also a subjective issue. I find
> Lisp to be far prettier than any syntax-based language, so it's far
> from an objective truth that Python code often looks good - or even at
> all.
The undeniable truth, the objective fact, is that *to most programmers*
(including ones deeply enamored of Lisp, such as Graham, Tilton, Norvig,
...) Python code looks good; the Lisp code that looks good to YOU (and,
no doubt them), and palatable to me (I have spoken of "eerie beauty"),
just doesn't to most prospective readers. If you program on your own,
or just with a few people who share your tastes, then only your taste
matters; if you want to operate in the real world, maybe, as I've
already pointed out, to build up a successful firm faster than had ever
previously happened, this *DOESN'T SCALE*. Essentially the same issue
I'm explaining on the parallel subthread with Tilton, except that he
fully agrees with my aesthetic sense (quoting Tilton, "No argument. The
little Python I wrote while porting Cells to Python was strikingly
attractive") so this facet of the jewel needed no further belaboring
there.
>
> Plus, I can easily write code that looks good without using a language
> that enforces indentation rules. Lisp's regular syntax lets Emacs do
> it for me with a simple C-M-a C-M-q. What could be easier?
If you need to edit and reformat other people's code with Emacs to find
it "looks good", you've made my point: code exists to be read, far more
than it's written, and Python's design choice to keep punctuation scarce
and unobtrusive obviates the need to edit and reformat code that way.
Alex
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87bquaskap.fsf@rpi.edu>
·······@yahoo.com (Alex Martelli) writes:
> Bill Atkins <············@rpi.edu> wrote:
> ...
>> >
>> > Read again what I wrote: I very specifically said "ordinary
>> > *single-character* identifier" (as opposed to "one of many characters
>> > inside a multi-character identifier"). Why do you think I said
>> > otherwise, when you just quoted what I had written? (Even just a
>> > _leading_ ( at the start of an identifier may be problematic -- and just
>> > as trivial as having to give names to functions, of course, see below).
>>
>> Well, the same technique can obviously be used for:
>>
>> (let ((|(| 3))
>>   (+ |(| 4))
>> ;; => 7
>>
>> The length of the identifier is irrelevant...
>
> But it cannot be a SINGLE CHARACTER, *just* the open parenthesis.
>
> Wow, it's incredible to me that you STILL can't read, parse and
> understand what I have so clearly expressed and repeated!
Read my other post. It's incredible that you STILL haven't considered
the possibility that you're just wrong.
>
>> Common Lisp does not support unnamed macros (how would these be
>> useful?), but nothing stops me from adding these. What use case do
>> you envision for anonymous macros?
>
> None, just like there is none for anonymous functions -- there is
> nothing useful I can do with anonymous functions that I cannot do with
> named ones.
Sure there are.
Does Python have any support for closures? If so, ignore this point.
But if not, what about examples like this:
(defun make-window (window observer)
  ;; initialization code here
  (add-handler window 'close
               (lambda (event)
                 (notify observer event)))
  ;; more code
  )
Being able to pass around state with functions is useful.
There are also cases where a function is so trivial that the simplest
way to describe it is with its source code, where giving it a name and
putting it at the beginning of a function is just distracting and
time-consuming. E.g.:
(remove-if (lambda (name)
             (find #\- name :test #'char=))
           list-of-names)
What's the sense of giving that function its own name? It's much
clearer to simply write it in place. Yes, it's _possible_ to use
named functions, but in this case its functionality is so simple that
it's clearer to simply type it in place. Why is this expressiveness a
bad thing, aside from its power to wreck an indentation-significant
language?
>
>> > Anyway, I repeat: *any* design choice (in a language, or for that matter
>> > any other artefact) has consequences. As Paul Graham quotes and
>> > supports his unnamed friend as saying, Python lets you easily write code
>> > that *looks* good, and, as Graham argues, that's an important issue --
>> > and, please note, a crucial consequence of using significant
>> > indentation. Alien whitespace eating nanoviruses are no more of a worry
>> > than alien parentheses eating nanoviruses, after all.
>>
>> It *is* an important issue, but it's also a subjective issue. I find
>> Lisp to be far prettier than any syntax-based language, so it's far
>> from an objective truth that Python code often looks good - or even at
>> all.
>
> The undeniable truth, the objective fact, is that *to most programmers*
> (including ones deeply enamored of Lisp, such as Graham, Tilton, Norvig,
> ...) Python code looks good; the Lisp code that looks good to YOU (and,
> no doubt them), and palatable to me (I have spoken of "eerie beauty"),
> just doesn't to most prospective readers. If you program on your own,
> or just with a few people who share your tastes, then only your taste
> matters; if you want to operate in the real world, maybe, as I've
> already pointed out, to build up a successful firm faster than had ever
> previously happened, this *DOESN'T SCALE*. Essentially the same issue
> I'm explaining on the parallel subthread with Tilton, except that he
> fully agrees with my aesthetic sense (quoting Tilton, "No argument. The
> little Python I wrote while porting Cells to Python was strikingly
> attractive") so this facet of the jewel needed no further belaboring
> there.
And I'm sure Kelly Clarkson sounds better *to most listeners* but that
doesn't mean she's a better musician than Hendrix. The fact that most
people are used to ALGOL-like languages does not mean that ALGOL-like
languages are more aesthetically pleasing on their own merits.
>>
>> Plus, I can easily write code that looks good without using a language
>> that enforces indentation rules. Lisp's regular syntax lets Emacs do
>> it for me with a simple C-M-a C-M-q. What could be easier?
>
> If you need to edit and reformat other people's code with Emacs to find
> it "looks good", you've made my point: code exists to be read, far more
> than it's written, and Python's design choice to keep punctuation scarce
> and unobtrusive obviates the need to edit and reformat code that way.
That's not what I'm saying at all. My point is that I can write a
function, counting on Emacs to keep it indented for me, and then after
making a series of changes to it, a mere C-M-a C-M-q takes them into
account and bam, no-fuss indenting.
>
> Alex
>
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
Bill Atkins <············@rpi.edu> writes:
> Does Python have any support for closures? If so, ignore this point.
> But if not, what about examples like this:
>
> (defun make-window (window observer)
>   ;; initialization code here
>   (add-handler window 'close
>     (lambda (event)
>       (notify observer event)))
>   ;; more code)
Python has closures, but you can only read the closed-over variables,
not rebind them.
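A minimal sketch of that limitation and the usual workaround (closing over a mutable object, since rebinding the closed-over name itself is not possible; the function names here are made up for illustration):

```python
def make_counter():
    count = [0]  # mutable cell: rebinding 'count' itself inside 'increment' would fail
    def increment():
        count[0] += 1  # mutating the closed-over list is allowed; 'count = ...' is not
        return count[0]
    return increment

counter = make_counter()
print(counter())  # 1
print(counter())  # 2
```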
> Being able to pass around state with functions is useful.
>
> There are also cases where a function is so trivial that the simplest
> way to describe it is with its source code, where giving it a name and
> putting it at the beginning of a function is just distracting and
> time-consuming. E.g.:
>
> (remove-if (lambda (name)
>              (find #\- name :test #'char=))
>            list-of-names)
If I read that correctly, in Python you could use
filter(lambda name: '-' not in name, list_of_names)
or
[name for name in list_of_names if '-' not in name]
Both of these have become more natural for me than the Lisp version.
On Sat, 06 May 2006 21:19:58 -0400, Bill Atkins wrote:
> There are also cases where a function is so trivial that the simplest
> way to describe it is with its source code, where giving it a name and
> putting it at the beginning of a function is just distracting and
> time-consuming. E.g.:
>
> (remove-if (lambda (name)
>              (find #\- name :test #'char=))
>            list-of-names)
>
> What's the sense of giving that function its own name? It's much
> clearer to simply write it in place. Yes, it's _possible_ to use
> named functions, but in this case its functionality is so simple that
> it's clearer to simply type it in place. Why is this expressiveness a
> bad thing, aside from its power to wreck an indentation-significant
> language?
Well, you can do that with python's current,
non-indentation-significance-wrecking, lambda syntax, so I don't think
that's a particularly persuasive example. Note also that the only
limitation python has on where you can define functions is that you can't
define them inside expressions (they have to be statements), so you could
define a named function right before the call to remove-if, which removes
some of the advantage you get from the lambda (that is, the function
definition is not very far from when it's used).
Actually, I think the limitation on python that is operative here is not
significant whitespace, but the distinction between statements and
expressions.
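A quick sketch of the named-function placement described above (the names are hypothetical): `def` is a statement and so cannot appear inside an expression, but nothing stops it from sitting immediately before the call that uses it.

```python
# 'def' is a statement, not an expression, so it gets its own line --
# but it can be placed right next to its single point of use.
def keep(name):
    return '-' not in name

list_of_names = ["alpha", "beta-gamma", "delta"]
kept = [name for name in list_of_names if keep(name)]
print(kept)  # ['alpha', 'delta']
```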
(crossposts trimmed)
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87psiqy57k.fsf@rpi.edu>
I V <········@gmail.com> writes:
> On Sat, 06 May 2006 21:19:58 -0400, Bill Atkins wrote:
>> There are also cases where a function is so trivial that the simplest
>> way to describe it is with its source code, where giving it a name and
>> putting it at the beginning of a function is just distracting and
>> time-consuming. E.g.:
>>
>> (remove-if (lambda (name)
>>              (find #\- name :test #'char=))
>>            list-of-names)
>>
>> What's the sense of giving that function its own name? It's much
>> clearer to simply write it in place. Yes, it's _possible_ to use
>> named functions, but in this case its functionality is so simple that
>> it's clearer to simply type it in place. Why is this expressiveness a
>> bad thing, aside from its power to wreck an indentation-significant
>> language?
>
> Well, you can do that with python's current,
> non-indentation-significance-wrecking, lambda syntax, so I don't think
> that's a particularly persuasive example. Note also that the only
> limitation python has on where you can define functions is that you can't
> define them inside expressions (they have to be statements), so you could
> define a named function right before the call to remove-if, which removes
> some of the advantage you get from the lambda (that is, the function
> definition is not very far from when it's used).
>
> Actually, I think the limitation on python that is operative here is not
> significant whitespace, but the distinction between statements and
> expressions.
>
> (crossposts trimmed)
You're right, I was replying to Alex's assertion that "there is
nothing useful I can do with anonymous functions that I cannot do with
named ones."
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
Bill Atkins <············@rpi.edu> wrote:
...
> Does Python have any support for closures? If so, ignore this point.
Ignored, since closures are there.
> Being able to pass around state with functions is useful.
Sure, but naming the functions doesn't hamper that.
> There are also cases where a function is so trivial that the simplest
> way to describe it is with its source code, where giving it a name and
> putting it at the beginning of a function is just distracting and
> time-consuming. E.g.:
>
> (remove-if (lambda (name)
>              (find #\- name :test #'char=))
>            list-of-names)
>
> What's the sense of giving that function its own name? It's much
> clearer to simply write it in place. Yes, it's _possible_ to use
> named functions, but in this case its functionality is so simple that
> it's clearer to simply type it in place. Why is this expressiveness a
> bad thing, aside from its power to wreck an indentation-significant
> language?
Offering several "equivalently obvious" approaches to performing just
one same task is bad for many reasons: you end up with a bigger language
(with all the attending ills) with no extra power, and you diminish the
style uniformity of programs written in the language, hampering the
ability of many people to work in the same codebase with no "ownership".
Like most all design principles, "there should be one, and preferably
only one, obvious way to perform a task" is an _ideal_, unattainable in
its perfect form in this sublunar world; but I find it an extremely
helpful guideline to design, and it does pervade Python (many of the
planned improvements for the future Python 3.0, which will be allowed to
be backwards-incompatible with 2.*, involve *removing* old features
made, in this sense, redundant by enhancements in recent versions; my
only beef here is that _not enough_ is scheduled for removal, alas).
> people are used to ALGOL-like languages does not mean that ALGOL-like
> languages are more aesthetically pleasing on their own merits.
"on their own merits" can't be the case -- as Wittgenstein pointed out,
we ARE talking about the natural history of human beings. A frequently
observed aesthetic preference for X over Y says nothing about X and Y
``on their own merits'' (if such noumena could even be envisaged), but
it's empirically important about X and Y _when many human beings must
interact with them_.
Being "used to" some kinds of languages has nothing to do with it: when
I met Python I was most "used to" perl (and cognizant of Forth, scheme,
pre-common Lisp, sh, APL, ...), nevertheless Python immediately struck
me as allowing particularly beautiful coding. And I specifically named
many Lisp experts as acknowledging exactly the same thing, further
demolishing your "used to" strawman...
Alex
Bill Atkins wrote:
> Does Python have any support for closures? If so, ignore this point.
> But if not, what about examples like this:
>
> (defun make-window (window observer)
>   ;; initialization code here
>   (add-handler window 'close
>     (lambda (event)
>       (notify observer event)))
>   ;; more code)
>
> Being able to pass around state with functions is useful.
I agree and Python supports this. What is interesting is how
counter-intuitive many programmers find this. For example, one of my
colleagues (who is a reasonably average programmer) was completely
stumped by this:
>>> def right_partial(fn, *args):
...     def inner(*innerargs):
...         return fn(*(innerargs + args))
...     return inner
...
>>> square = right_partial(pow, 2)
>>> square(5)
25
>>> cube = right_partial(pow, 3)
>>> cube(5)
125
(for those of you unfamiliar with Python, the '>>>' and '...' mean
that I typed this in the interactive shell)
So I try to use this sort of pattern sparingly because many programmers
don't think of closures as a way of saving state. That might be because
it is not possible to do so in most mainstream languages.
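For comparison, the standard library's `functools.partial` (available since Python 2.5) binds arguments from the left, which is presumably why a right-binding helper like `right_partial` gets written by hand:

```python
from functools import partial

# partial(pow, 2) fixes the FIRST argument: it computes pow(2, y),
# i.e. powers of two -- not squares, as right_partial(pow, 2) would.
two_to_the = partial(pow, 2)
print(two_to_the(5))  # 32, i.e. 2 ** 5
```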
There are already some people in the Python community who think that
Python has already gone too far in supporting "complex" language
features and now imposes too steep a learning curve, i.e., you now have
to know a lot to be considered a Python expert. And there is a lot of
resistance to adding features that will raise the bar even higher.
> There are also cases where a function is so trivial that the simplest
> way to describe it is with its source code, where giving it a name and
> putting it at the beginning of a function is just distracting and
> time-consuming. E.g.:
>
> (remove-if (lambda (name)
>              (find #\- name :test #'char=))
>            list-of-names)
There are some ways to write it in Python (if I understand the code):
# preferred
[name for name in list_of_names if '-' not in name]
OR:
# 2nd choice
def contains_no_dash(name):
    return '-' not in name
filter(contains_no_dash, list_of_names)
# not recommended
filter(lambda name: '-' not in name, list_of_names)
> What's the sense of giving that function its own name? It's much
> clearer to simply write it in place. Yes, it's _possible_ to use
> named functions, but in this case its functionality is so simple that
> it's clearer to simply type it in place. Why is this expressiveness a
> bad thing, aside from its power to wreck an indentation-significant
> language?
There are a few people in the Python community who think that unnamed
functions are inherently inferior to named ones because the name
provides documentation. The majority probably just don't care about the
issue at all. I will say that I am not sure that I know what your LISP
code does. However, even if you didn't understand the Python code in my
named-function version, the name "contains_no_dash" might provide you
with a small clue.
Cheers,
Brian
<·····@sweetapp.com> wrote:
...
> > Being able to pass around state with functions is useful.
>
> I agree and Python supports this. What is interesting is how
> counter-intuitive many programmers find this. For example, one of my
Funny: I have taught/mentored a large number of people in Python, people
coming from all different levels along the axis of "previous knowledge
of programming in general", and closures are not among the issues where
I ever noticed a large number of people having problems.
> So I try to use this sort of pattern sparingly because many programmers
> don't think of closures as a way of saving state. That might be because
> it is not possible to do so in most mainstream languages.
I don't normally frame it in terms of "saving" state, but rather of
"keeping some amount of state around" -- which means more or less the
same thing but may perhaps be easier to digest (just trying to see what
could explain the difference between my experience and yours).
> There are already some people in the Python community who think that
> Python has already gone too far in supporting "complex" language
> features and now imposes too steep a learning curve, i.e., you now have
> to know a lot to be considered a Python expert. And there is a lot of
> resistance to adding features that will raise the bar even higher.
I might conditionally underwrite this, myself, but I guess my emphasis
is different from that of the real "paladins" of this thesis (such as
Mark Shuttleworth, who gave us all an earful about this when he
delivered a Keynote at Europython 2004).
I'm all for removing _redundant_ features, but I don't think of many
things on the paladins' hitlist as such -- closures, itertools, genexps,
etc, all look just fine to me (and I have a good track record of
teaching them...). I _would_ love to push (for 3.0) further
simplifications, e.g., I do hate it that
[ x for x in container if predicate(x) ]
is an exact synonym of the more legible
list( x for x in container if predicate(x) )
and the proposed
{1, 2, 3}
is an exact synonym of
set((1, 2, 3))
just to focus on a couple of redundant syntax-sugar ideas (one in
today's Python but slated to remain in 3.0, one proposed for 3.0). It's
not really about there being anything deep or complex about this, but
each and every such redundancy _does_ "raise the bar" without any
commensurate return. Ah well.
Alex
·······@yahoo.com (Alex Martelli) writes:
> I do hate it that
> [ x for x in container if predicate(x) ]
> is an exact synonym of the more legible
> list( x for x in container if predicate(x) )
Heh, I hate it that it's NOT an exact synonym (the listcomp leaves 'x'
polluting the namespace and clobbers any pre-existing 'x', but the
gencomp makes a new temporary scope).
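This is easy to check. Under Python 2 the listcomp's variable leaked into (and clobbered) the enclosing scope; after the fix that eventually landed in Python 3, both comprehension forms get their own scope:

```python
# Under Python 2, the list comprehension's 'x' would clobber this binding;
# with Python 3 semantics the comprehension variable stays local to it.
x = "outer"
squares = [x * x for x in range(3)]
assert x == "outer"  # the outer 'x' survives intact
print(squares)  # [0, 1, 4]
```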
> and the proposed
> {1, 2, 3}
> is an exact synonym of
> set((1, 2, 3))
There's one advantage that I can think of for the existing (and
proposed) list/dict/set literals, which is that they are literals and
can be treated as such by the parser. Remember a while back that we
had a discussion of reading expressions like
{'foo': (1,2,3),
'bar': 'file.txt'}
from configuration files without using (unsafe) eval. Aside from that
I like the idea of using constructor functions instead of special syntax.
Paul Rubin <·············@NOSPAM.invalid> wrote:
> ·······@yahoo.com (Alex Martelli) writes:
> > I do hate it that
> > [ x for x in container if predicate(x) ]
> > is an exact synonym of the more legible
> > list( x for x in container if predicate(x) )
>
> Heh, I hate it that it's NOT an exact synonym (the listcomp leaves 'x'
> polluting the namespace and clobbers any pre-existing 'x', but the
> gencomp makes a new temporary scope).
Yeah, that's gonna be fixed in 3.0 (can't be fixed before, as it would
break backwards compatibility) -- then we'll have useless synonyms.
> > and the proposed
> > {1, 2, 3}
> > is an exact synonym of
> > set((1, 2, 3))
>
> There's one advantage that I can think of for the existing (and
> proposed) list/dict/set literals, which is that they are literals and
> can be treated as such by the parser. Remember a while back that we
> had a discussion of reading expressions like
> {'foo': (1,2,3),
> 'bar': 'file.txt'}
> from configuration files without using (unsafe) eval. Aside from that
And as I recall I showed how to make a safe-eval -- that could easily be
built into 3.0, btw (including special treatment for builtin names of
types that are safe to construct). I'd be all in favor of specialcasing
such names in the parser, too, but that's a harder sell.
> I like the idea of using constructor functions instead of special syntax.
Problem is how to make _GvR_ like it too;-)
Alex
·······@yahoo.com (Alex Martelli) writes:
> Bill Atkins <············@rpi.edu> wrote:
> ...
> > > ``allow ( as an ordinary single-character identifier'' as for the
> > > unneeded feature ``allow unnamed functions with all the flexibility of
> > > named ones''.
> >
> > Not so infeasible:
> >
> > (let ((|bizarrely(named()symbol| 3))
> > (+ |bizarrely(named()symbol| 4))
> >
> > ;; => 7
>
> Read again what I wrote: I very specifically said "ordinary
> *single-character* identifier" (as opposed to "one of many characters
> inside a multi-character identifier"). Why do you think I said
> otherwise, when you just quoted what I had written? (Even just a
> _leading_ ( at the start of an identifier may be problematic -- and just
> as trivial as having to give names to functions, of course, see below).
bah...
[1]> (setq rtbl (copy-readtable))
#<READTABLE #x203A0F6D>
[2]> (set-syntax-from-char #\{ #\()
T
[3]> (set-syntax-from-char #\( #\a)
T
[4]> {defun ( {a) {1+ a))
(
[5]> {( 1)
2
[6]> {( 4)
5
[7]> {setq *readtable* rtbl)
#<READTABLE #x203A0F6D>
[8]> (1+ 1)
2
With readtable and reader macros you can change the syntax as you
wish.
ajr.
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <uHa7g.509$SV1.289@fe10.lga>
Alex Martelli wrote:
> Ken Tilton <·········@gmail.com> wrote:
> ...
>
>>Looks like dictionaries are no match for the ambiguity of natural
>>language. :) Let me try again: it is Python itself that cannot scale, as
>>in gain "new power and capability", and at least in the case of lambda
>>it seems to be because of indentation-sensitivity.
>
>
> In my opinion (and that of several others), the best way for Python to
> grow in this regard would be to _lose_ lambda altogether, since named
> functions are preferable (and it's an acknowledged Python design
> principle that there should ideally be just one obvious way to perform a
> task); GvR used to hold the same opinion, but changed his mind recently,
> alas, so we'll keep the wart.
Yes, I am enjoying watching lambda teetering on the brink. So it has
been re-upped for another tour? Go, lambda! Go, lambda!
>
> But, quite apart from the whole issue of whether it's desirable to
> languages to change massively ("add new power and capability" meaning
> new enriched features in the language itself), your whole argument is
> bogus: it's obvious that _any_ fundamental design choice in an artefact
> will influence the feasibility and desirability of future design choices
> in future releases of that same, identical artefact.
True but circular, because my very point is that () was a great design
choice in that it made macros possible and they made CL almost
infinitely extensible, while indentation-sensitivity was a mistaken
design choice because it makes for very clean code (I agree
wholeheartedly) but placed a ceiling on its expressiveness.
As for:
> At a syntax-sugar
> level, for example, Lisp's choice to use parentheses as delimiter means
> it's undesirable, even unfeasible, to use the single character '(' as an
> ordinary identifier in a future release of the language.
(defun |(| (aside) (format nil "Parenthetically speaking...~a." aside))
=> |(|
(|(| "your Lisp /is/ rusty.")
=> "Parenthetically speaking...your Lisp /is/ rusty.."
:) No, seriously, is that all you can come up with?
> Considering
> this to mean that Lisp "cannot scale" is just as ridiculous as
> considering that Python "cannot scale" by not having an elegant way to
> make lambdas heavier and richer -- totally laughable and idiotic.
Harsh. :) I demand satisfaction. See end of article.
> ``An
> unneeded feature "cannot" be added (elegantly) in future releases of the
> language'' is just as trivial and acceptable for the unneeded feature
> ``allow ( as an ordinary single-character identifier'' as for the
> unneeded feature ``allow unnamed functions with all the flexibility of
> named ones''.
>
>
>>By contrast, in On Lisp we see Graham toss off Prolog in Chapter 22 and
>
>
> Oh, is that the same Graham who writes:
So we are going to skip the point I was making about Common Lisp being
so insanely extensible? By /application/ programmers? Hell, for all we
know CL does have a BDFL, we just do not need their cooperation.
>
> """
> A friend of mine who knows nearly all the widely used languages uses
> Python for most of his projects. He says the main reason is that he
> likes the way source code looks.
No argument. The little Python I wrote while porting Cells to Python was
strikingly attractive. But it was a deal with the devil, unless Python
is content to be just a scripting language. (And it should be.)
OK, I propose a duel. We'll co-mentor this:
http://www.lispnyc.org/wiki.clp?page=PyCells
In the end Python will have a Silver Bullet, and only the syntax will
differ, because Python has a weak lambda, statements do not always
return values, it does not have macros, and I do not know if it has
special variables.
Then we can just eyeball the code and see if the difference is
interesting. These abstract discussions do tend to loop.
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton <·········@gmail.com> wrote:
...
> True but circular, because my very point is that () was a great design
> choice in that it made macros possible and they made CL almost
> infinitely extensible, while indentation-sensitivity was a mistaken
> design choice because it makes for very clean code (I agree
> wholeheartedly) but placed a ceiling on its expressiveness.
Having to give functions a name places no "ceiling on expressiveness",
any more than, say, having to give _macros_ a name.
> As for:
>
> > At a syntax-sugar
> > level, for example, Lisp's choice to use parentheses as delimiter means
> > it's undesirable, even unfeasible, to use the single character '(' as an
> > ordinary identifier in a future release of the language.
>
> (defun |(| (aside) (format nil "Parenthetically speaking...~a." aside))
> => |(|
> (|(| "your Lisp /is/ rusty.")
> => "Parenthetically speaking...your Lisp /is/ rusty.."
>
> :) No, seriously, is that all you can come up with?
Interestingly, the SECOND lisper to prove himself unable to read the
very text he's quoting. Reread carefully, *USE THE ***SINGLE***
CHARACTER* ... *AS AN ORDINARY IDENTIFIER*. What makes you read a
``PART OF'' that I had never written? You've shown how to use the
characters as *PART* of an identifier [[and I believe it couldn't be the
very start]], and you appear to believe that this somehow refutes my
assertion?
Are you ready to admit you were utterly wrong, and (while it is indeed
true that my Lisp is rusty) there is nothing in this exchange to show
it, as opposed to showing rustiness in your ability to understand
English? Or shall we move from polite although total dissent right on
to flamewars and namecalling?
The point is, OF COURSE any design choice places limitations on future
design choices; but some limitations are even DESIRABLE (a language
where *every* single isolated character could mean anything whatsoever
would not be "expressive", but rather totally unreadable) or at least
utterly trivial (syntax-sugar level issues most typically are). Wilfully
distorting some such limitation as meaning that one language "cannot scale"
(when EVERY language inevitably has SOME such limitations) is not even
funny, and clearly characterizes a polemist who is intent on proving a
preconceived thesis, as opposed to an investigator with any real
interest in ascertaining the truth of the matter.
> > Oh, is that the same Graham who writes:
>
> So we are going to skip the point I was making about Common Lisp being
> so insanely extensible? By /application/ programmers? Hell, for all we
> know CL does have a BDFL, we just do not need their cooperation.
Yes, we are, because the debate about why it's better for Python (as a
language used in real-world production systems, *SCALABLE* to extremely
large-scale ones) to *NOT* be insanely extensible and mutable is a
separate one -- Python's uniformity of style allows SCALABILITY of
teams, and teams-of-teams, which is as crucial in the real world as
obviously not understood by you (the law you misquoted was about adding
personnel to a LATE project making it later -- nothing to do with how
desirable it can be to add personnel to a large and growing collection
of projects, scaling and growing in an agile, iterative way to meet
equally growing needs and market opportunities).
This specific debate grew from your misuse of "scalable" to mean or
imply "a bazillion feechurz can [[and, implicitly, should]] be added to
a language, and therefore anything that stands in the way of feechuritis
is somehow holding the language back". That's bad enough, even though
in its contextual misuse of "scalable" it breaks new ground, and I don't
want to waste even more time re-treading *old* ground as to whether the
"*insane* extensibility" afforded by macros is a good or a bad thing in
a language to be used for real-world software production (as opposed to
prototyping and research).
> > """
> > A friend of mine who knows nearly all the widely used languages uses
> > Python for most of his projects. He says the main reason is that he
> > likes the way source code looks.
>
> No argument. The little Python I wrote while porting Cells to Python was
> strikingly attractive. But it was a deal with the devil, unless Python
> is content to be just a scripting language. (And it should be.)
It's hard to attribute feelings to a programming language, but, if you
really must, I'd say Python aspires to be *useful* -- if all you need is
"just a scripting language", it will be content to be one for you, and
if you need SCALE, well then, PYTHON IS SCALABLE, and will remain a
*SIMPLE, CLEAN, LITTLE AND POWERFUL LANGUAGE* (letting nobody do
anything INSANE to it;-) while scaling up to whatever size of project(s)
you need (including systems so large that they redefine the very concept
of "large scale" -- believe me, once in a while at a conference I make
the mistake of going to some talk about "large scale" this or that, and
invariably stagger out once again with the realization that what's
"large scale" to the world tends to be a neat toy-sized throwaway little
experiment to my current employer).
> OK, I propose a duel. We'll co-mentor this:
>
> http://www.lispnyc.org/wiki.clp?page=PyCells
>
> In the end Python will have a Silver Bullet, and only the syntax will
> differ, because Python has a weak lambda, statements do not always
> return values, it does not have macros, and I do not know if it has
> special variables.
>
> Then we can just eyeball the code and see if the difference is
> interesting. These abstract discussions do tend to loop.
As a SummerOfCode mentor, I'm spoken for, and can't undertake to mentor
other projects. I do agree that these discussions can be sterile, and
I'll be glad to see what comes of your "pycells" project, but until then
there's little we can do except agree to disagree (save that I'd like
you to acknowledge my point above, regarding what exactly I had said and
the fact that your alleged counterexample doesn't address at all what I
had said -- but, I'll live even without such an acknowledgment).
Alex
·······@yahoo.com (Alex Martelli) writes:
> > (|(| "your Lisp /is/ rusty.")
>
> Interestingly, the SECOND lisper to prove himself unable to read the
> very text he's quoting. Reread carefully, *USE THE ***SINGLE***
> CHARACTER* ... *AS AN ORDINARY IDENTIFIER*. What makes you read a
> ``PART OF'' that I had never written? You've shown how to use the
> characters as *PART* of an identifier [[and I believe it couldn't be the
> very start]], and you appear to believe that this somehow refutes my
> assertion?
The identifier there is a single paren. The vertical bars are used to
escape the paren, so that the reader doesn't get confused. The Pythonic
equivalent would be something like
\( = 5
where the backslash escapes the paren. In real Python you could say:
locals()['('] = 5
In Lisp you could get rid of the need to escape the paren if you
wanted, using suitable read macros. Whether that's a good idea is of
course a different matter.
> Yes, we are, because the debate about why it's better for Python (as a
> language used in real-world production systems, *SCALABLE* to extremely
> large-scale ones) to *NOT* be insanely extensible and mutable is a
> separate one -- Python's uniformity of style allows SCALABILITY of
> teams, and teams-of-teams, which is as crucial in the real world ...
My current take on Lisp vs Python is pretty close to Peter Norvig's
(http://www.norvig.com/python-lisp.html):
Python has the philosophy of making sensible compromises that make
the easy things very easy, and don't preclude too many hard
things. In my opinion it does a very good job. The easy things are
easy, the harder things are progressively harder, and you tend not
to notice the inconsistencies. Lisp has the philosophy of making
fewer compromises: of providing a very powerful and totally
consistent core. This can make Lisp harder to learn because you
operate at a higher level of abstraction right from the start and
because you need to understand what you're doing, rather than just
relying on what feels or looks nice. But it also means that in
Lisp it is easier to add levels of abstraction and complexity;
Lisp makes the very hard things not too hard.
>
> It's hard to attribute feelings to a programming language, but, if you
> really must, I'd say Python aspires to be *useful* -- if all you need is
> "just a scripting language", it will be content to be one for you, and
> if you need SCALE, well then, PYTHON IS SCALABLE, and will remain a
> *SIMPLE, CLEAN, LITTLE AND POWERFUL LANGUAGE* (letting nobody do
> anything INSANE to it;-) while scaling up to whatever size of project(s)
> you need (including systems so large that they redefine the very concept
> of "large scale" -- believe me, once in a while at a conference I make
> the mistake of going to some talk about "large scale" this or that, and
> invariably stagger out once again with the realization that what's
> "large scale" to the world tends to be a neat toy-sized throwaway little
> experiment to my current employer).
I've heard many times that your current employer uses Python for all
kinds of internal tools; I hadn't heard that it was used in Very Large
projects over there. I'd be interested to hear how that's been
working out, since the biggest Python projects I'd heard of before
(e.g. Zope) are, as you say, toy-sized throwaways compared to the
stuff done regularly over there at G.
Paul Rubin <·············@NOSPAM.invalid> wrote:
...
> > Yes, we are, because the debate about why it's better for Python (as a
> > language used in real-world production systems, *SCALABLE* to extremely
> > large-scale ones) to *NOT* be insanely extensible and mutable is a
> > separate one -- Python's uniformity of style allows SCALABILITY of
> > teams, and teams-of-teams, which is as crucial in the real world ...
>
> My current take on Lisp vs Python is pretty close to Peter Norvig's
> (http://www.norvig.com/python-lisp.html):
>
> Python has the philosophy of making sensible compromises that make
> the easy things very easy, and don't preclude too many hard
> things. In my opinion it does a very good job. The easy things are
> easy, the harder things are progressively harder, and you tend not
> to notice the inconsistencies. Lisp has the philosophy of making
> fewer compromises: of providing a very powerful and totally
> consistent core. This can make Lisp harder to learn because you
> operate at a higher level of abstraction right from the start and
> because you need to understand what you're doing, rather than just
> relying on what feels or looks nice. But it also means that in
> Lisp it is easier to add levels of abstraction and complexity;
> Lisp makes the very hard things not too hard.
Sure -- however, Python makes them not all that hard either. Peter and
I have uncannily similar tastes -- just last month we happened to meet
at the same Arizona Pueblo-ancestral-people ruins-cum-museum (we both
independently chose to take vacation during spring break week to take
the kids along, we both love the Grand Canyon region, we both love
dwelling on ancient cultures, etc -- so I guess the coincidence wasn't
TOO crazy -- but it still WAS a shock to see Peter walk into the museum
at the same time I was walking out of it!-). Yes, it IS easier to add
complexity in Lisp; that's a good summary of why I prefer Python.
> I've heard many times that your current employer uses Python for all
> kinds of internal tools; I hadn't heard that it was used in Very Large
> projects over there. I'd be interested to hear how that's been
> working out, since the biggest Python projects I'd heard of before
> (e.g. Zope) are, as you say, toy-sized throwaways compared to the
> stuff done regularly over there at G.
Sorry, but I'm not authorized to speak for my employer nor to reveal
details of our internal systems -- hey, I cannot even say how many
servers we have; the "corporate-approved metric" is ``several
thousands''...;-). The most that has been approved for external
communication you'll find in the "Python at Google" presentation that
Chris di Bona and Greg Stein hold at times in various venues, and that's
not saying much -- good thing I'm not the one giving those
presentations, as I might find it hard to stick to the approved line;-)
Alex
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <8764kive9g.fsf@rpi.edu>
·······@yahoo.com (Alex Martelli) writes:
> Ken Tilton <·········@gmail.com> wrote:
> ...
>> True but circular, because my very point is that () was a great design
>> choice in that it made macros possible and they made CL almost
>> infinitely extensible, while indentation-sensitivity was a mistaken
>> design choice because it makes for very clean code (I agree
>> wholeheartedly) but placed a ceiling on its expressiveness.
>
> Having to give functions a name places no "ceiling on expressiveness",
> any more than, say, having to give _macros_ a name.
>
>
>> As for:
>>
>> > At a syntax-sugar
>> > level, for example, Lisp's choice to use parentheses as delimiter means
>> > it's undesirable, even unfeasible, to use the single character '(' as an
>> > ordinary identifier in a future release of the language.
>>
>> (defun |(| (aside) (format nil "Parenthetically speaking...~a." aside))
>> => |(|
>> (|(| "your Lisp /is/ rusty.")
>> => "Parenthetically speaking...your Lisp /is/ rusty.."
>>
>> :) No, seriously, is that all you can come up with?
>
> Interestingly, the SECOND lisper to prove himself unable to read the
> very text he's quoting. Reread carefully, *USE THE ***SINGLE***
> CHARACTER* ... *AS AN ORDINARY IDENTIFIER*. What makes you read a
> ``PART OF'' that I had never written? You've shown how to use the
> characters as *PART* of an identifier [[and I believe it couldn't be the
> very start]], and you appear to believe that this somehow refutes my
> assertion?
Now I see what the problem is here - you just don't know what you're
talking about. The identifier in Ken's and my samples *is* a single
character identifier. The vertical bars tell the Lisp reader that
what's between them is exempt from other reading rules.
(symbol-name '|(| ) => "("
(length (symbol-name '|(| )) => 1
> Are you ready to admit you were utterly wrong, and (while it is indeed
> true that my Lisp is rusty) there is nothing in this exchange to show
> it, as opposed to showing rustiness in your ability to understand
> English? Or shall we move from polite although total dissent right on
> to flamewars and namecalling?
Believe it or not, _you_ got it wrong.
> The point is, OF COURSE any design choice places limitations on future
> design choices; but some limitations are even DESIRABLE (a language
> where *every* single isolated character could mean anything whatsoever
> would not be "expressive", but rather totally unreadable) or at least
> utterly trivial (syntax-sugar level issues most typically are). Wilfully
> distorting some such limitation as meaning that one language "can scale"
> (when EVERY language inevitably has SOME such limitations) is not even
> funny, and clearly characterizes a polemist who is intent on proving a
> preconceived thesis, as opposed to an investigator with any real
> interest in ascertaining the truth of the matter.
Having to name a variable "paren" instead of "(" is not a serious
restriction. I can't think of a single situation where being able to
do so would be useful.
That said, raw, out-of-the-box Common Lisp can accommodate you if you
both a) need variables named "(" and b) are unwilling to use the bar
syntax. Simply redefine the parenthesis characters in the readtable
(a matter of four function calls) and get this abomination:
{let {{( 3}}
{+ ( 5}}
Lisp places no restrictions on you, even when your goal is as silly as
this one.
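For the record, the "four function calls" mentioned above might look
something like this (a sketch only, using standard Common Lisp reader
functions; the exact recipe is my reconstruction, not a quote from the
thread):

```lisp
;; Sketch of the readtable surgery described above: make { and }
;; behave as list delimiters, then demote ( and ) to ordinary
;; constituent characters (same syntax class as the letter a).
(set-macro-character #\{ (get-macro-character #\())
(set-macro-character #\} (get-macro-character #\)))
(set-syntax-from-char #\( #\a)
(set-syntax-from-char #\) #\a)
```

After these calls, `{let {{( 3}} {+ ( 5}}` reads as the list that
`(let ((|(| 3)) (+ |(| 5))` would have read as before.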
>> > Oh, is that the same Graham who writes:
>>
>> So we are going to skip the point I was making about Common Lisp being
>> so insanely extensible? By /application/ programmers? Hell, for all we
>> know CL does have a BDFL, we just do not need their cooperation.
>
> Yes, we are, because the debate about why it's better for Python (as a
> language used in real-world production systems, *SCALABLE* to extremely
> large-scale ones) to *NOT* be insanely extensible and mutable is a
> separate one -- Python's uniformity of style allows SCALABILITY of
> teams, and teams-of-teams, which is as crucial in the real world as
> obviously not understood by you (the law you misquoted was about adding
> personnel to a LATE project making it later -- nothing to do with how
> desirable it can be to add personnel to a large and growing collection
> of projects, scaling and growing in an agile, iterative way to meet
> equally growing needs and market opportunities).
Buh? The project doesn't have to be late for Brooks's law to hold;
adding programmers, so goes Brooks reasoning, will always increase the
time required to complete the project because of various communication
issues.
> This specific debate grew from your misuse of "scalable" to mean or
> imply "a bazillion feechurz can [[and, implicitly, should]] be added to
> a language, and therefore anything that stands in the way of feechuritis
> is somehow holding the language back". That's bad enough, even though
> in its contextual misuse of "scalable" it breaks new ground, and I don't
> want to waste even more time re-treading *old* ground as to whether the
> "*insane* extensibility" afforded by macros is a good or a bad thing in
> a language to be used for real-world software production (as opposed to
> prototyping and research).
>
>
>> > """
>> > A friend of mine who knows nearly all the widely used languages uses
>> > Python for most of his projects. He says the main reason is that he
>> > likes the way source code looks.
>>
>> No argument. The little Python I wrote while porting Cells to Python was
>> strikingly attractive. But it was a deal with the devil, unless Python
>> is content to be just a scripting language. (And it should be.)
>
> It's hard to attribute feelings to a programming language, but, if you
> really must, I'd say Python aspires to be *useful* -- if all you need is
> "just a scripting language", it will be content to be one for you, and
> if you need SCALE, well then, PYTHON IS SCALABLE, and will remain a
> *SIMPLE, CLEAN, LITTLE AND POWERFUL LANGUAGE* (letting nobody do
> anything INSANE to it;-) while scaling up to whatever size of project(s)
> you need (including systems so large that they redefine the very concept
> of "large scale" -- believe me, once in a while at a conference I make
> the mistake of going to some talk about "large scale" this or that, and
> invariably stagger out once again with the realization that what's
> "large scale" to the world tends to be a neat toy-sized throwaway little
> experiment to my current employer).
You haven't given much justification for the claim that Python is a
particularly "scalable" language. Sure, Google uses it, Graham gave
it some props somewhere in the middle of his notoriously pro-Lisp
writings, and even Norvig has said good things about it.
Fair enough. But what does Python offer above any garbage-collected
language that makes it so scalable?
>
>> OK, I propose a duel. We'll co-mentor this:
>>
>> http://www.lispnyc.org/wiki.clp?page=PyCells
>>
>> In the end Python will have a Silver Bullet, and only the syntax will
>> differ, because Python has a weak lambda, statements do not always
>> return values, it does not have macros, and I do not know if it has
>> special variables.
>>
>> Then we can just eyeball the code and see if the difference is
>> interesting. These abstract discussions do tend to loop.
>
> As a SummerOfCode mentor, I'm spoken for, and can't undertake to mentor
> other projects. I do agree that these discussions can be sterile, and
> I'll be glad to see what comes of your "pycells" project, but until then
> there's little we can do except agree to disagree (save that I'd like
> you to acknowledge my point above, regarding what exactly I had said and
> the fact that your alleged counterexample doesn't address at all what I
> had said -- but, I'll live even without such an acknowledgment).
>
>
> Alex
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
Bill Atkins <············@rpi.edu> writes:
> Fair enough. But what does Python offer above any garbage-collected
> language that makes it so scalable?
I think what used to be Lisp culture now uses the *ML languages or
Haskell. It's only throwbacks (which includes me sometimes) who still
use Lisp. I've been wanting for a while to do something in ML but
just haven't worked up enough steam for it. Python really does make
small and medium-sized tasks easy, even compared with Lisp. I'm still
reserving judgement about how it is at large tasks.
Bill Atkins <············@rpi.edu> wrote:
...
> Believe it or not, _you_ got it wrong.
Acknowledged: Common Lisp is even MORE insane (note that the quote
"INSANELY extensible" is from Tilton) than I believed -- I'm pretty sure
that the Lisp dialects I used in 1979-1981 didn't go to such crazy
extremes, and neither did Scheme.
> Buh? The project doesn't have to be late for Brooks's law to hold;
> adding programmers, so goes Brooks reasoning, will always increase the
> time required to complete the project because of various communication
> issues.
And here is where we check if you're as gracious about admitting your
errors, as I am about mine. Brooks' law is:
"""Adding manpower to a late software project makes it later."""
These are Brooks' words, literally. OK so far?
Your claim, that adding programmers will ALWAYS increase the time, is
not just wrong, but utterly ridiculous. I can't put it better than the
wikipedia:
"""
Misconceptions
A commonly understood implication of Brooks' law is that it will be more
productive to employ a smaller number of very talented (and highly paid)
programmers on a project than to employ a larger number of less talented
programmers, since individual programmer productivity can vary greatly
between highly talented and efficient programmers and less talented
programmers. However, Brooks' law does not mean that starving a project
of resources by employing fewer programmers beyond a certain point will
get it done faster.
"""
Moreover, check out the research on Pair Programming: it scientifically,
empirically confirms that "two heads are better than one", which should
surprise nobody. Does this mean that there aren't "various
communication issues"? Of course there are, but there's no implied
weighting of these factors wrt the _advantages_ of having that second
person on the team (check the Pair Programming literature for long lists
of those advantages).
Only empirical research can tell us where the boundary is -- when
productivity is decreased by going from N to N+1. A lot depends on
imponderable issues such as personality meshes or clashes, leadership
abilities at hand, methodology used, etc. All of this is pretty
obvious, making your assertion that Brooks held otherwise actionable (as
libel, by Brooks) in many jurisdictions.
As it happens, I also have examples in which adding (a few carefully
selected) people to a software project that WAS (slightly) late put that
project back on track and made it deliver successfully on schedule.
Definitely not the "pointy-haired boss" management style of "throwing
warm bodies at the problem" that Brooks was fighting against, but,
consider: the project's velocity has suffered because [a] the tech lead
has his personal (usually phenomenal) productivity hampered by a painful
condition requiring elective surgery to abate, and [b] nobody on the
team is really super-experienced in the intricacies of cryptography, yet
some very subtle cryptographic work turns out to be necessary. One day,
the tech lead calls in -- the pain has gotten just too bad, he's going
for surgery, and will be out of the fray for at least one week.
I still remember with admiration how my Director reacted to the
emergency: he suspended two other projects, deciding that, if THEIR
deadlines were broken, that would be a lesser damage to the company than
if this one slipped any further; and cherrypicked exactly two people --
one incredibly flexible "jack of all trades" was tasked with getting up
to speed on the project and becoming the acting TL for it, and an
excellent cryptography specialist was tasked to dig deep into the
project's cryptography needs and solve them pronto.
So, We Broke Brooks' Law -- the cryptographer did his magic, and
meanwhile the JOAT ramped up instantly and took the lead (kudos to the
Jack's skills, to the clarity and transparency of the previous TL's
work, to the agile methodologies employed throughout, AND to the
uniformity of style of one language which will stay unnamed here)... and
the project delivered on time and within budget. We had one extra
person (two "replacements" for one TL's absence), yet it didn't make the
late software project even later -- it brought it back on track
perfectly well.
I have many other experiences where _I_ was that JOAT (and slightly
fewer ones where I was the specialist -- broadly speaking, I'm more of a
generalist, but, as needs drive, sometimes I do of necessity become the
specialist in some obscure yet necessary technology... must have
happened a dozen times over the 30 years of my careers, counting
graduate school...).
This set of experiences in no way tarnishes the value of Brooks' Law,
but it does *put it into perspective*: done JUST RIGHT, by sufficiently
brilliant management, adding A FEW people with exactly the right mix of
skills and personality to a late software project CAN save the bacon.
> Fair enough. But what does Python offer above any garbage-collected
> language that makes it so scalable?
Uniformity of style, facilitating egoless programming; a strong cultural
bias for simplicity and clarity, and against cleverness and obscurity;
"just the right constraints" (well, mostly;-), such as immutability at
more or less the right spots (FP languages, with _everything_ immutable,
have an edge there IN THEORY... but in practice, using them most
effectively requires a rare mindset/skillset, making it hard to "scale
up" teams due to the extra difficulty of finding the right people).
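A concrete illustration of that "immutability at more or less the right
spots" (my example, not Alex's): Python makes tuples and strings
immutable while leaving lists mutable.

```python
# Tuples (like strings) reject in-place mutation; lists allow it.
point = (1, 2, 3)
try:
    point[0] = 99      # raises TypeError: tuples are immutable
except TypeError:
    pass

coords = [1, 2, 3]
coords[0] = 99         # fine: lists are mutable
```

So tuples can safely serve as dict keys and shared constants, while
lists remain available where in-place updates are wanted.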
Alex
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87bquay2rv.fsf@rpi.edu>
·······@yahoo.com (Alex Martelli) writes:
> Bill Atkins <············@rpi.edu> wrote:
> ...
>> Believe it or not, _you_ got it wrong.
>
> Acknowledged: Common Lisp is even MORE insane (note that the quote
> "INSANELY extensible" is from Tilton) than I believed -- I'm pretty sure
> that the Lisp dialects I used in 1979-1981 didn't go to such crazy
> extremes, and neither did Scheme.
>
>> Buh? The project doesn't have to be late for Brooks's law to hold;
>> adding programmers, so goes Brooks reasoning, will always increase the
>> time required to complete the project because of various communication
>> issues.
>
> And here is where we check if you're as gracious about admitting your
> errors, as I am about mine. Brooks' law is:
>
> """Adding manpower to a late software project makes it later."""
>
> These are Brooks' words, literally. OK so far?
You are correct.
I posted too hastily. Here is what my paragraph ought to have said:
Buh? The project doesn't have to be late for Brooks's law to hold;
adding programmers *in the middle of a project*, so goes Brooks
reasoning, will always increase the time required to complete the
project because of various communication issues.
Agree?
> Your claim, that adding programmers will ALWAYS increase the time, is
> not just wrong, but utterly ridiculous. I can't put it better than the
> wikipedia:
> """
> Misconceptions
>
> A commonly understood implication of Brooks' law is that it will be more
> productive to employ a smaller number of very talented (and highly paid)
> programmers on a project than to employ a larger number of less talented
> programmers, since individual programmer productivity can vary greatly
> between highly talented and efficient programmers and less talented
> programmers. However, Brooks' law does not mean that starving a project
> of resources by employing fewer programmers beyond a certain point will
> get it done faster.
> """
>
> Moreover, check out the research on Pair Programming: it scientifically,
> empirically confirms that "two heads are better than one", which should
> surprise nobody. Does this mean that there aren't "various
> communication issues"? Of course there are, but there's no implied
> weighting of these factors wrt the _advantages_ of having that second
> person on the team (check the Pair Programming literature for long lists
> of those advantages).
>
> Only empirical research can tell us where the boundary is -- when
> productivity is decreased by going from N to N+1. A lot depends on
> imponderable issues such as personality meshes or clashes, leadership
> abilities at hand, methodology used, etc. All of this is pretty
> obvious, making your assertion that Brooks held otherwise actionable (as
> libel, by Brooks) in many jurisdictions.
Hahaha.
> As it happens, I also have examples in which adding (a few carefully
> selected) people to a software project that WAS (slightly) late put that
> project back on track and made it deliver successfully on schedule.
> Definitely not the "pointy-haired boss" management style of "throwing
> warm bodies at the problem" that Brooks was fighting against, but,
> consider: the project's velocity has suffered because [a] the tech lead
> has his personal (usually phenomenal) productivity hampered by a painful
> condition requiring elective surgery to abate, and [b] nobody on the
> team is really super-experienced in the intricacies of cryptography, yet
> some very subtle cryptographic work turns out to be necessary. One day,
> the tech lead calls in -- the pain has gotten just too bad, he's going
> for surgery, and will be out of the fray for at least one week.
>
> I still remember with admiration how my Director reacted to the
> emergency: he suspended two other projects, deciding that, if THEIR
> deadlines were broken, that would be a lesser damage to the company than
> if this one slipped any further; and cherrypicked exactly two people --
> one incredibly flexible "jack of all trades" was tasked with getting up
> to speed on the project and becoming the acting TL for it, and an
> excellent cryptography specialist was tasked to dig deep into the
> project's cryptography needs and solve them pronto.
>
> So, We Broke Brooks' Law -- the cryptographer did his magic, and
> meanwhile the JOAT ramped up instantly and took the lead (kudos to the
> Jack's skills, to the clarity and transparency of the previous TL's
> work, to the agile methodologies employed throughout, AND to the
> uniformity of style of one language which will stay unnamed here)... and
> the project delivered on time and within budget. We had one extra
> person (two "replacements" for one TL's absence), yet it didn't make the
> late software project even later -- it brought it back on track
> perfectly well.
>
> I have many other experiences where _I_ was that JOAT (and slightly
> fewer ones where I was the specialist -- broadly speaking, I'm more of a
> generalist, but, as needs drive, sometimes I do of necessity become the
> specialist in some obscure yet necessary technology... must have
> happened a dozen times over the 30 years of my careers, counting
> graduate school...).
>
> This set of experiences in no way tarnishes the value of Brooks' Law,
> but it does *put it into perspective*: done JUST RIGHT, by sufficiently
> brilliant management, adding A FEW people with exactly the right mix of
> skills and personality to a late software project CAN save the bacon.
>
>
>> Fair enough. But what does Python offer above any garbage-collected
>> language that makes it so scalable?
>
> Uniformity of style, facilitating egoless programming; a strong cultural
> bias for simplicity and clarity, and against cleverness and obscurity;
> "just the right constraints" (well, mostly;-), such as immutability at
> more or less the right spots (FP languages, with _everything_ immutable,
> have an edge there IN THEORY... but in practice, using them most
> effectively requires a rare mindset/skillset, making it hard to "scale
> up" teams due to the extra difficulty of finding the right people).
>
>
> Alex
Bill Atkins <············@rpi.edu> wrote:
...
> > And here is where we check if you're as gracious about admitting your
> > errors, as I am about mine. Brooks' law is:
> >
> > """Adding manpower to a late software project makes it later."""
> >
> > These are Brooks' words, literally. OK so far?
>
> You are correct.
>
> I posted too hastily. Here is what my paragraph ought to have said:
>
> Buh? The project doesn't have to be late for Brooks's law to hold;
> adding programmers *in the middle of a project*, so goes Brooks
> reasoning, will always increase the time required to complete the
> project because of various communication issues.
>
> Agree?
"What does one do when an essential software project is behind schedule?
Add manpower, naturally. As Figs 2.1 through 2.4 suggest, this may or
may not help".
How do you translate "may or may not help" into "will always increase
the time required", and manage to impute that translation to "Brooks
reasoning"? It *MAY* help -- Brooks says so very explicitly, and from
Fig 2.4, which he quotes just as explicitly, you may infer he has in mind
that the cases in which it may help are those in which the project was
badly understaffed to begin with (a very common case, as the rest of the
early parts of chapter 2 explains quite adequately).
I could further critique Brooks for his ignoring specific personal
factors -- some extremely rare guys are able to get up to speed on an
existing project in less time, and requiring less time from those
already on board, than 99.9% of the human race (the JOAT in my previous
example); and, another crucial issue is that sometimes the key reason a
project is late is due to lack of skill on some crucial _specialty_, in
which case, while adding more "generic" programmers "may or may not
help" (in Brooks' words), adding just the right specialist (such as the
cryptography expert in my previous example) can help a LOT. But, it
would be uncourteous to critique a seminal work in the field for not
anticipating such detailed, nuanced discussion (besides, in the next
chapter Brooks does mention teams including specialists, although if I
recall correctly he does not touch on the issue of a project which finds
out the need for a specialist _late_, as in this case). I'm still more
interested in critiquing your overgeneralization of Brooks' assertions
and reasoning.
Alex
Bill Atkins wrote:
> Buh? The project doesn't have to be late for Brooks's law to hold;
> adding programmers, so goes Brooks reasoning, will always increase the
> time required to complete the project because of various communication
> issues.
1. This is not what Brooks says. Brooks was talking about late
projects. Please provide a supporting quote if you wish to continue
to claim that "adding programmers will always increase the time
required to complete the project".
2. There has to be a mechanism where an organization can add
developers - even if it is only for new projects. Python advocates
would say that getting developers up to speed on Python is easy
because:
 - it fits most programmers' brains, i.e. it is similar enough to
   languages that most programmers have experience with, and the
   differences are usually perceived to be beneficial (exception:
people from a Java/C/C++ background often perceive dynamic
typing as a misfeature and have to struggle with it)
- the language is small and simple
- "magic" is somewhat frowned upon in the Python community i.e.
most code can be taken at face value without needing to learn a
framework, mini-language, etc. (but I think that the Python
community could do better on this point)
I'm sure that smarter people can think of more points.
> Fair enough. But what does Python offer above any garbage-collected
> language that makes it so scalable?
See above point - you can more easily bring programmers online in your
organization because most programmers find Python easily learnable.
And, as a bonus, it is actually a pretty flexible, powerful language.
Cheers,
Brian
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <871wv6b3y0.fsf@rpi.edu>
·····@sweetapp.com writes:
> Bill Atkins wrote:
>> Buh? The project doesn't have to be late for Brooks's law to hold;
>> adding programmers, so goes Brooks reasoning, will always increase the
>> time required to complete the project because of various communication
>> issues.
>
> 1. This is not what Brooks says. Brooks was talking about late
> projects. Please provide a supporting quote if you wish to continue
> to claim that "adding programmers will always increase the time
> required to complete the project".
The "always" in my claim should not be there, I admit. Brooks didn't
claim that.
I refer you to pages 17 - 18 of The Mythical Man-Month:
Since software construction is inherently a systems effort - an
exercise in complex interrelationships - communication effort is
great...Adding more men then lengthens, not shortens, the schedule.
It is totally absurd to assume that, simply because a project has not
yet passed its deadline, it will somehow become immune to the kinds of
things Brooks is talking about. His thesis is that adding programmers
to an already-in-progress project will cause a delay, because the new
programmers must be brought up to speed. It does not matter if the
project is eight weeks late or has only been active for a month. This
issue still remains:
The two new men, however competent and however quickly trained, will
require training in the task by one of the experienced men. If this
takes a month, 3 man-months will have been devoted to work not in
the original estimate. (p. 24, TMM-M)
Brooks's Law mentions only late projects, but the rest of his
discussion applies to adding programmers in the middle of *any*
project.
Is this really so radical an idea?
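The combinatorial core of Brooks's reasoning (my gloss: the
intercommunication formula from chapter 2 of The Mythical Man-Month)
is easy to sketch:

```python
# Brooks's communication-overhead intuition: if every pair of
# programmers must coordinate, channels grow quadratically, as
# n * (n - 1) / 2, while raw work capacity grows only linearly.
def channels(n):
    """Pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

growth = {n: channels(n) for n in (2, 5, 10, 20)}
# Doubling a 10-person team to 20 roughly quadruples the
# coordination burden: 45 -> 190 channels.
```

Whether the new members' contribution outweighs that added burden is
exactly the point under dispute in this thread.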
> 2. There has to be a mechanism where an organization can add
> developers - even if it is only for new projects. Python advocates
Obviously.
> would say that getting developers up to speed on Python is easy
> because:
>
> - it fits most programmers' brains, i.e. it is similar enough to
> languages that most programmers have experience with, and the
> differences are usually perceived to be beneficial (exception:
> people from a Java/C/C++ background often perceive dynamic
> typing as a misfeature and have to struggle with it)
> - the language is small and simple
> - "magic" is somewhat frowned upon in the Python community i.e.
> most code can be taken at face value without needing to learn a
> framework, mini-language, etc. (but I think that the Python
> community could do better on this point)
These are not things I look for in a programming language.
>
> I'm sure that smarter people can think of more points.
>
>> Fair enough. But what does Python offer above any garbage-collected
>> language that makes it so scalable?
>
> See above point - you can more easily bring programmers online in your
> organization because most programmers find Python easily learnable.
> And, as a bonus, it is actually a pretty flexible, powerful language.
>
> Cheers,
> Brian
>
Bill Atkins wrote:
> ·····@sweetapp.com writes:
>
>> Bill Atkins wrote:
>>> Buh? The project doesn't have to be late for Brooks's law to hold;
>>> adding programmers, so goes Brooks reasoning, will always increase the
>>> time required to complete the project because of various communication
>>> issues.
>> 1. This is not what Brooks says. Brooks was talking about late
>> projects. Please provide a supporting quote if you wish to continue
>> to claim that "adding programmers will always increase the time
>> required to complete the project".
>
> The "always" in my claim should not be there, I admit. Brooks didn't
> claim that.
>
> I refer you to pages 17 - 18 of The Mythical Man-Month:
>
> Since software construction is inherently a systems effort - an
> exercise in complex interrelationships - communication effort is
> great...Adding more men then lengthens, not shortens, the schedule.
>
> It is totally absurd to assume that, simply because a project has not
> yet passed its deadline, it will somehow become immune to the kinds of
> things Brooks is talking about.
Right. But only when a project is late does Brooks say that adding
programmers will always make it later (the claim that you made). In
other cases he says "Add manpower, ..., this may or may not help". That
seems intuitively obvious to me. If the programmers being added require
extensive training [1] by the team to become productive, and their
contribution to the project will be smaller than that amount (e.g. it
is a small or nearly completed project) then their net impact on the
project will be negative. If, OTOH, the new programmers are able to
quickly understand the project/organization/technologies and almost
immediately make useful contributions, then they are likely to have a
net positive impact.
>> 2. There has to be a mechanism where an organization can add
>> developers - even if it is only for new projects. Python advocates
>
> Obviously.
It's good that you agree. I think that the ability to add new
productive developers to a project/team/organization is at least part
of what Alex means by "scalability". I'm sure that he will correct me
if I am wrong.
>> [list of proposed Python advantages snipped]
> These are not things I look for in a programming language.
Fair enough. That doesn't mean that these advantages aren't important
to others or, in some situations, objectively important in the survival
of a project/organization.
For example, imagine that Google had used language X instead of Python
to develop their tools (assume they started with 10 expert X
programmers). Expert X programmers are Y percent more productive than
expert Python programmers. Now Google wants to grow aggressively and
needs 100 times more developer productivity (and expects to need even
more productivity in the future). If it is harder to find/hire/create
experts in language X than Python, then Y will have to be large to make
language X a better choice than Python. Also, if non-expert Python
programmers can be more productive than non-expert X programmers, then
Python also has an advantage. Eric Raymond claims that Python has very
high initial productivity and that becoming an expert is fairly easy.
BTW, I'm not saying that Common Lisp fits X in this example.
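The trade-off above can be put in back-of-envelope numbers (every
figure below is invented purely for illustration; nothing here is
data about any real language or company):

```python
# Hypothetical hiring trade-off: experts in language X are 50% more
# productive than Python experts, but far fewer X experts can be hired.
def team_output(experts, rate, others=0, other_rate=0.0):
    """Total productivity, in arbitrary 'expert-Python' units."""
    return experts * rate + others * other_rate

x_team = team_output(10, 1.5)                              # 10 X experts, nobody else to hire
py_team = team_output(10, 1.0, others=90, other_rate=0.6)  # easy to staff up
# Even with a 50% per-expert advantage, the X team's output (15.0)
# is dwarfed by the easily-grown Python team's (64.0).
```

On these made-up numbers, Y (the per-expert productivity edge of X)
would have to be enormous to offset the hiring advantage.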
Cheers,
Brian
[1] I'm considering introducing bugs or misdesigns that have to be
fixed as part of training for the purposes of this discussion. Also
the time needed to learn to coordinate with the rest of the team.
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87vesi5dar.fsf@rpi.edu>
·····@sweetapp.com writes:
> Bill Atkins wrote:
>> ·····@sweetapp.com writes:
>>
>>> Bill Atkins wrote:
>>>> Buh? The project doesn't have to be late for Brooks's law to hold;
>>>> adding programmers, so goes Brooks reasoning, will always increase the
>>>> time required to complete the project because of various communication
>>>> issues.
>>> 1. This is not what Brooks says. Brooks was talking about late
>>> projects. Please provide a supporting quote if you wish to continue
>>> to claim that "adding programmers will always increase the time
>>> required to complete the project".
>>
>> The "always" in my claim should not be there, I admit. Brooks didn't
>> claim that.
>>
>> I refer you to pages 17 - 18 of The Mythical Man-Month:
>>
>> Since software construction is inherently a systems effort - an
>> exercise in complex interrelationships - communication effort is
>> great...Adding more men then lengthens, not shortens, the schedule.
>>
>> It is totally absurd to assume that, simply because a project has not
>> yet passed its deadline, it will somehow become immune to the kinds of
>> things Brooks is talking about.
>
> Right. But only when a project is late does Brooks say that adding
> programmers will always make it later (the claim that you made). In
> other cases he says "Add manpower, ..., this may or may not help". That
> seems intuitively obvious to me. If the programmers being added require
> extensive training [1] by the team to become productive, and their
> contribution to the project will be smaller than that amount (e.g. it
> is a small or nearly completed project) then their net impact on the
> project will be negative. If, OTOH, the new programmers are able to
> quickly understand the project/organization/technologies and almost
> immediately make useful contributions, then they are likely to have a
> net positive impact.
There is another essay in TMM-M that discusses the difference between
essential complexity and accidental complexity. You might think
Python is really swell, and I might think Common Lisp is really swell,
but at the heart of it there is still what Brooks calls "essential
complexity" - the difficulty of mapping a complicated real-world
situation into a model a computer can handle. So I think using Python
or Lisp will help get rid of a lot of the accidental complexity that
would arise from using C or C++, but it can only do so much; there is
still a lot of complexity involved even in projects that are written
in very high-level languages. IMHO.
>>> 2. There has to be a mechanism where an organization can add
>>> developers - even if it is only for new projects. Python advocates
>>
>> Obviously.
>
> It's good that you agree. I think that the ability to add new
> productive developers to a project/team/organization is at least part
> of what Alex means by "scaleability". I'm sure that he will correct me
> if I am wrong.
>
>>> [list of proposed Python advantages snipped]
>> These are not things I look for in a programming language.
>
> Fair enough. That doesn't mean that these advantages aren't important
> to others or, in some situations, objectively important in the survival
> of a project/organization.
Sure, agreed.
> For example, imagine that Google had used language X instead of Python
> to develop their tools (assume they started with 10 expert X
> programmers). Expert X programmers are Y percent more productive than
> expert Python programmers. Now Google wants to grow aggressively and
> needs 100 times more developer productivity (and expects to need even
> more productivity in the future). If it is harder to find/hire/create
> experts in language X than Python, then Y will have to be large to make
> language X a better choice than Python. Also, if non-expert Python
> programmers can be more productive than non-expert X programmers, then
> Python also has an advantage. Eric Raymond claims that Python has very
> high initial productivity and that becoming an expert is fairly easy.
>
> BTW, I'm not saying that Common Lisp fits X in this example.
>
> Cheers,
> Brian
>
> [1] I'm considering introducing bugs or misdesigns that have to be
> fixed
> as part of training for the purposes of this discussion. Also the
> time needed to learn to coordinate with the rest of the team.
>
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
<·····@sweetapp.com> wrote:
...
> >> 2. There has to be a mechanism where an organization can add
> >> developers - even if it is only for new projects. Python advocates
> >
> > Obviously.
>
> It's good that you agree. I think that the ability to add new
> productive developers to a project/team/organization is at least part
> of what Alex means by "scaleability". I'm sure that he will correct me
> if I am wrong.
I agree with your formulation, just not with your spelling of
"scalability";-).
> [1] I'm considering introducing bugs or misdesigns that have to be
> fixed
> as part of training for the purposes of this discussion. Also the
Actually, doing it _deliberately_ (on "training projects" for new people
just coming onboard) might be a good training technique; what you learn
by finding and fixing bugs nicely complements what you learn by studying
"good" example code. I do not know of this technique being widely used
in real-life training, either by firms or universities, but I'd love to
learn about counterexamples.
> time needed to learn to coordinate with the rest of the team.
Pair programming can help a lot with this (in any language, I believe)
if the pairing is carefully chosen and rotated for the purpose.
Alex
From: Aaron Denney
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <slrne5smof.j0h.wnoise@ofb.net>
["Followup-To:" header set to comp.lang.functional.]
On 2006-05-07, ·····@sweetapp.com <·····@sweetapp.com> wrote:
> - it fits most programmers brains i.e. it is similar enough to
> languages that most programmers have experience with and the
> differences are usually perceived to beneficial (exception:
> people from a Java/C/C++ background often perceive dynamic
> typing as a misfeature and have to struggle with it)
It is a misfeature. It's just less of a misfeature than the typing of
Java/C/C++, etc.
--
Aaron Denney
-><-
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <ePb7g.334$pv7.229@fe08.lga>
Alex Martelli wrote:
> Ken Tilton <·········@gmail.com> wrote:
> ...
>
>>True but circular, because my very point is that () was a great design
>>choice in that it made macros possible and they made CL almost
>>infinitely extensible, while indentation-sensitivity was a mistaken
>>design choice because it makes for very clean code (I agree
>>wholeheartedly) but placed a ceiling on its expressiveness.
>
>
> Having to give functions a name places no "ceiling on expressiveness",
> any more than, say, having to give _macros_ a name.
>
>
>
>>As for:
>>
>>
>>> At a syntax-sugar
>>>level, for example, Lisp's choice to use parentheses as delimiter means
>>>it's undesirable, even unfeasible, to use the single character '(' as an
>>>ordinary identifier in a future release of the language.
>>
>>(defun |(| (aside) (format nil "Parenthetically speaking...~a." aside))
>>=> |(|
>>(|(| "your Lisp /is/ rusty.")
>>=> "Parenthetically speaking...your Lisp /is/ rusty.."
>>
>>:) No, seriously, is that all you can come up with?
>
>
> Interestingly, the SECOND lisper to prove himself unable to read the
> very text he's quoting. Reread carefully, *USE THE ***SINGLE***
> CHARACTER* ... *AS AN ORDINARY IDENTIFIER*. What makes you read a
> ``PART OF'' that I had never written? You've shown how to use the
> characters as *PART* of an identifier [[and I believe it couldn't be the
> very start]], and you appear to believe that this somehow refutes my
> assertion?
The function name here:
(|(| "Boy, your Lisp is rusty")
-> Boy, your Lisp is rusty.
...is exactly one (1) character long.
(length (symbol-name '|(|)) -> 1
Why? (symbol-name '|(|) -> "(" (No, the "s are not part of the name!)
If you want to argue about that, I will have to bring up the Lisp
readtable. Or did you forget that, too?
:)
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton <·········@gmail.com> wrote:
...
> Why? (symbol-name '|(|) -> "(" (No, the "s are not part of the name!)
>
> If you want to argue about that, I will have to bring up the Lisp
> readtable. Or did you forget that, too?
Mea culpa -- it wasn't in the Lisp(s) I used 25+ years ago, nor in
Scheme; I've never used Common Lisp in anger, and obviously "just
dabbling" gives one no good feel for how *INSANELY COMPLICATED* (or, if
you prefer, "insanely extensible") it truly is.
I personally think these gyrations are a good example of why Python, the
language that "fits your brain" and has simplicity among its goals, is
vastly superior for production purposes. Nevertheless, I admit I was
wrong!
Alex
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <cee7g.543$RO4.202@fe12.lga>
Alex Martelli wrote:
> Ken Tilton <·········@gmail.com> wrote:
> ...
>
>>Why? (symbol-name '|(|) -> "(" (No, the "s are not part of the name!)
>>
>>If you want to argue about that, I will have to bring up the Lisp
>>readtable. Or did you forget that, too?
>
>
> Mea culpa -- it wasn't in the Lisp(s) I used 25+ years ago, nor in
> Scheme; I've never used Common Lisp in anger, and obviously "just
> dabbling" gives one no good feel for how *INSANELY COMPLICATED* (or, if
> you prefer, "insanely extensible") it truly is.
>
> I personally think these gyrations...
Please. You are the only one gyrating. I was joking with both |(| and
the fact that its length is one, no matter what our eyes tell us.
And I hope to god you were joking with the objection that a Lisper could
not name a variable "(". You would not say something that daft just to
win a Usenet debate, would you? Omigod...I think you did! I mean, look
at the way you were jumping up and down and shouting and accusing Bill
and me of not understanding English and... well, you are an uber tech
geek, what did i expect? All is forgiven. But...
It is vastly more disappointing that an alleged tech genius would sniff
at the chance to take undeserved credit for PyCells, something probably
better than a similar project on which Adobe (your superiors at
software, right?) has bet the ranch. This is the Grail, dude, Brooks's
long lost Silver Bullet. And you want to pass?????
C'mon, Alex, I just want you as co-mentor for your star quality. Of
course you won't have to do a thing, just identify for me a True Python
Geek and she and I will take it from there.
Here's the link in case you lost it:
http://www.lispnyc.org/wiki.clp?page=PyCells
:)
peace, kenny
ps. flaming aside, PyCells really would be amazingly good for Python.
And so Google. (Now your job is on the line. <g>) k
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton wrote:
> It is vastly more disappointing that an alleged tech genius would sniff
> at the chance to take undeserved credit for PyCells, something probably
> better than a similar project on which Adobe (your superiors at
> software, right?) has bet the ranch. This is the Grail, dude, Brooks's
> long lost Silver Bullet. And you want to pass?????
>
> C'mon, Alex, I just want you as co-mentor for your star quality. Of
> course you won't have to do a thing, just identify for me a True Python
> Geek and she and I will take it from there.
>
> Here's the link in case you lost it:
>
> http://www.lispnyc.org/wiki.clp?page=PyCells
>
> :)
>
> peace, kenny
>
> ps. flaming aside, PyCells really would be amazingly good for Python.
> And so Google. (Now your job is on the line. <g>) k
Perhaps I'm missing something but what's the big deal about PyCells?
Here is 22-lines barebones implementation of spreadsheet in Python,
later I create 2 cells "a" and "b", "b" depends on a and evaluate all
the cells. The output is
a = negate(sin(pi/2)+one) = -2.0
b = negate(a)*10 = 20.0
=================== spreadsheet.py ==================
class Spreadsheet(dict):
    def __init__(self, **kwd):
        self.namespace = kwd
    def __getitem__(self, cell_name):
        item = self.namespace[cell_name]
        if hasattr(item, "formula"):
            return item()
        return item
    def evaluate(self, formula):
        return eval(formula, self)
    def cell(self, cell_name, formula):
        "Create a cell defined by formula"
        def evaluate_cell():
            return self.evaluate(formula)
        evaluate_cell.formula = formula
        self.namespace[cell_name] = evaluate_cell
    def cells(self):
        "Yield all cells of the spreadsheet along with current values and formulas"
        for cell_name, value in self.namespace.items():
            if not hasattr(value, "formula"):
                continue
            yield cell_name, self[cell_name], value.formula

import math
def negate(x):
    return -x
sheet1 = Spreadsheet(one=1, sin=math.sin, pi=math.pi, negate=negate)
sheet1.cell("a", "negate(sin(pi/2)+one)")
sheet1.cell("b", "negate(a)*10")
for name, value, formula in sheet1.cells():
    print name, "=", formula, "=", value
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87psiq48sm.fsf@rpi.edu>
"Serge Orlov" <···········@gmail.com> writes:
> Ken Tilton wrote:
>> It is vastly more disappointing that an alleged tech genius would sniff
>> at the chance to take undeserved credit for PyCells, something probably
>> better than a similar project on which Adobe (your superiors at
>> software, right?) has bet the ranch. This is the Grail, dude, Brooks's
>> long lost Silver Bullet. And you want to pass?????
>>
>> C'mon, Alex, I just want you as co-mentor for your star quality. Of
>> course you won't have to do a thing, just identify for me a True Python
>> Geek and she and I will take it from there.
>>
>> Here's the link in case you lost it:
>>
>> http://www.lispnyc.org/wiki.clp?page=PyCells
>>
>> :)
>>
>> peace, kenny
>>
>> ps. flaming aside, PyCells really would be amazingly good for Python.
>> And so Google. (Now your job is on the line. <g>) k
>
> Perhaps I'm missing something but what's the big deal about PyCells?
> Here is 22-lines barebones implementation of spreadsheet in Python,
> later I create 2 cells "a" and "b", "b" depends on a and evaluate all
> the cells. The output is
>
> a = negate(sin(pi/2)+one) = -2.0
> b = negate(a)*10 = 20.0
>
> =================== spreadsheet.py ==================
> class Spreadsheet(dict):
>     def __init__(self, **kwd):
>         self.namespace = kwd
>     def __getitem__(self, cell_name):
>         item = self.namespace[cell_name]
>         if hasattr(item, "formula"):
>             return item()
>         return item
>     def evaluate(self, formula):
>         return eval(formula, self)
>     def cell(self, cell_name, formula):
>         "Create a cell defined by formula"
>         def evaluate_cell():
>             return self.evaluate(formula)
>         evaluate_cell.formula = formula
>         self.namespace[cell_name] = evaluate_cell
>     def cells(self):
>         "Yield all cells of the spreadsheet along with current values and formulas"
>         for cell_name, value in self.namespace.items():
>             if not hasattr(value, "formula"):
>                 continue
>             yield cell_name, self[cell_name], value.formula
>
> import math
> def negate(x):
>     return -x
> sheet1 = Spreadsheet(one=1, sin=math.sin, pi=math.pi, negate=negate)
> sheet1.cell("a", "negate(sin(pi/2)+one)")
> sheet1.cell("b", "negate(a)*10")
> for name, value, formula in sheet1.cells():
>     print name, "=", formula, "=", value
>
I hope Ken doesn't mind me answering for him, but Cells is not a
spreadsheet (where did you get that idea?). It does apply the basic
idea of a spreadsheet to software - that is, instead of updating value
when some event occurs, you specify in advance how that value can be
computed and then you stop worrying about keeping it updated.
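The declarative inversion Bill describes can be sketched in a few lines. This is a toy in modern Python, not Ken's actual Cells API; the names Model, width, and area are illustrative only:

```python
class Model(object):
    """Toy computed-slot model: a slot is either a plain value or a rule
    (a callable taking the model) that recomputes from other slots on read."""
    def __init__(self, **slots):
        self.__dict__['_slots'] = slots   # bypass __setattr__ during init
    def __getattr__(self, name):
        slot = self._slots[name]
        # A rule is recomputed on every read, so it can never go stale.
        return slot(self) if callable(slot) else slot
    def __setattr__(self, name, value):
        self._slots[name] = value

# The rule for area is declared once, up front, and never updated by hand.
box = Model(width=3, height=4, area=lambda m: m.width * m.height)
assert box.area == 12
box.width = 10
assert box.area == 40  # recomputed automatically from the rule
```

Real Cells goes further (dependency tracking, change propagation through the object system), but the "declare how, not when" inversion is the core of it.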
Incidentally, is this supposed to be an example of Python's supposed
"aesthetic pleasantness"? I find it a little hideous, even giving you
the benefit of the doubt and pretending there are newlines between
each function. There's nothing like a word wrapped in pairs of
underscores to totally ruin an aesthetic experience.
P.S. Is this really a spreadsheet? It looks like it's a flat
hashtable...
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
Bill Atkins wrote:
> "Serge Orlov" <···········@gmail.com> writes:
>
> > Ken Tilton wrote:
> >> It is vastly more disappointing that an alleged tech genius would sniff
> >> at the chance to take undeserved credit for PyCells, something probably
> >> better than a similar project on which Adobe (your superiors at
> >> software, right?) has bet the ranch. This is the Grail, dude, Brooks's
> >> long lost Silver Bullet. And you want to pass?????
> >>
> >> C'mon, Alex, I just want you as co-mentor for your star quality. Of
> >> course you won't have to do a thing, just identify for me a True Python
> >> Geek and she and I will take it from there.
> >>
> >> Here's the link in case you lost it:
> >>
> >> http://www.lispnyc.org/wiki.clp?page=PyCells
> >>
> >> :)
> >>
> >> peace, kenny
> >>
> >> ps. flaming aside, PyCells really would be amazingly good for Python.
> >> And so Google. (Now your job is on the line. <g>) k
> >
> > Perhaps I'm missing something but what's the big deal about PyCells?
> > Here is 22-lines barebones implementation of spreadsheet in Python,
> > later I create 2 cells "a" and "b", "b" depends on a and evaluate all
> > the cells. The output is
> >
> > a = negate(sin(pi/2)+one) = -2.0
> > b = negate(a)*10 = 20.0
> >
> > =================== spreadsheet.py ==================
> > class Spreadsheet(dict):
> >     def __init__(self, **kwd):
> >         self.namespace = kwd
> >     def __getitem__(self, cell_name):
> >         item = self.namespace[cell_name]
> >         if hasattr(item, "formula"):
> >             return item()
> >         return item
> >     def evaluate(self, formula):
> >         return eval(formula, self)
> >     def cell(self, cell_name, formula):
> >         "Create a cell defined by formula"
> >         def evaluate_cell():
> >             return self.evaluate(formula)
> >         evaluate_cell.formula = formula
> >         self.namespace[cell_name] = evaluate_cell
> >     def cells(self):
> >         "Yield all cells of the spreadsheet along with current values and formulas"
> >         for cell_name, value in self.namespace.items():
> >             if not hasattr(value, "formula"):
> >                 continue
> >             yield cell_name, self[cell_name], value.formula
> >
> > import math
> > def negate(x):
> >     return -x
> > sheet1 = Spreadsheet(one=1, sin=math.sin, pi=math.pi, negate=negate)
> > sheet1.cell("a", "negate(sin(pi/2)+one)")
> > sheet1.cell("b", "negate(a)*10")
> > for name, value, formula in sheet1.cells():
> >     print name, "=", formula, "=", value
> >
>
> I hope Ken doesn't mind me answering for him, but Cells is not a
> spreadsheet (where did you get that idea?).
It's written on the page linked above, second sentence: "Think of the
slots as cells in a spreadsheet, and you've got the right idea". I'm
not claiming that my code is full PyCell implementation.
> It does apply the basic
> idea of a spreadsheet to software - that is, instead of updating value
> when some event occurs, you specify in advance how that value can be
> computed and then you stop worrying about keeping it updated.
The result is the same. Of course, I don't track dependencies in such a
tiny barebones example. But when you retrieve a cell you will get the
same value as with dependencies. Adding dependencies is left as an
exercise.
>
> Incidentally, is this supposed to be an example of Python's supposed
> "aesthetic pleasantness"?
Nope. This is an example that you don't need macros and
multi-statements. Ken writes: "While the absence of macros and
multi-statement lambda in Python will make coding more cumbersome". I'd
like to see Python code doing the same if the language had macros and
multi-statement lambda. Will it be more simple? More expressive?
> I find it a little hideous, even giving you
> the benefit of the doubt and pretending there are newlines between
> each function. There's nothing like a word wrapped in pairs of
> underscores to totally ruin an aesthetic experience.
I don't think anyone who is not a master of a language can judge
readability. You're just distracted by insignificant details, they
don't matter if you code in that language for many years. I'm not going
to tell you how Lisp Cell code looks to me ;)
> P.S. Is this really a spreadsheet? It looks like it's a flat
> hashtable...
Does it matter if it's flat or 2D?
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87mzducj27.fsf@rpi.edu>
"Serge Orlov" <···········@gmail.com> writes:
> Bill Atkins wrote:
>> "Serge Orlov" <···········@gmail.com> writes:
>>
>> > Ken Tilton wrote:
>> >> It is vastly more disappointing that an alleged tech genius would sniff
>> >> at the chance to take undeserved credit for PyCells, something probably
>> >> better than a similar project on which Adobe (your superiors at
>> >> software, right?) has bet the ranch. This is the Grail, dude, Brooks's
>> >> long lost Silver Bullet. And you want to pass?????
>> >>
>> >> C'mon, Alex, I just want you as co-mentor for your star quality. Of
>> >> course you won't have to do a thing, just identify for me a True Python
>> >> Geek and she and I will take it from there.
>> >>
>> >> Here's the link in case you lost it:
>> >>
>> >> http://www.lispnyc.org/wiki.clp?page=PyCells
>> >>
>> >> :)
>> >>
>> >> peace, kenny
>> >>
>> >> ps. flaming aside, PyCells really would be amazingly good for Python.
>> >> And so Google. (Now your job is on the line. <g>) k
>> >
>> > Perhaps I'm missing something but what's the big deal about PyCells?
>> > Here is 22-lines barebones implementation of spreadsheet in Python,
>> > later I create 2 cells "a" and "b", "b" depends on a and evaluate all
>> > the cells. The output is
>> >
>> > a = negate(sin(pi/2)+one) = -2.0
>> > b = negate(a)*10 = 20.0
>> >
>> > =================== spreadsheet.py ==================
>> > class Spreadsheet(dict):
>> >     def __init__(self, **kwd):
>> >         self.namespace = kwd
>> >     def __getitem__(self, cell_name):
>> >         item = self.namespace[cell_name]
>> >         if hasattr(item, "formula"):
>> >             return item()
>> >         return item
>> >     def evaluate(self, formula):
>> >         return eval(formula, self)
>> >     def cell(self, cell_name, formula):
>> >         "Create a cell defined by formula"
>> >         def evaluate_cell():
>> >             return self.evaluate(formula)
>> >         evaluate_cell.formula = formula
>> >         self.namespace[cell_name] = evaluate_cell
>> >     def cells(self):
>> >         "Yield all cells of the spreadsheet along with current values and formulas"
>> >         for cell_name, value in self.namespace.items():
>> >             if not hasattr(value, "formula"):
>> >                 continue
>> >             yield cell_name, self[cell_name], value.formula
>> >
>> > import math
>> > def negate(x):
>> >     return -x
>> > sheet1 = Spreadsheet(one=1, sin=math.sin, pi=math.pi, negate=negate)
>> > sheet1.cell("a", "negate(sin(pi/2)+one)")
>> > sheet1.cell("b", "negate(a)*10")
>> > for name, value, formula in sheet1.cells():
>> >     print name, "=", formula, "=", value
>> >
>>
>> I hope Ken doesn't mind me answering for him, but Cells is not a
>> spreadsheet (where did you get that idea?).
>
> It's written on the page linked above, second sentence: "Think of the
> slots as cells in a spreadsheet, and you've got the right idea". I'm
> not claiming that my code is full PyCell implementation.
Unfortunately, it's *nothing* like a full PyCell implementation. I
explained what Cells is above. It is not merely a spreadsheet - it is
an extension that allows the programmer to specify how the value of
some slot (Lisp lingo for "member variable") can be computed. It
frees the programmer from having to recompute slot values since Cells
can transparently update them. It has to do with extending the object
system, not with merely setting tables in a hash and then retrieving
them.
>> It does apply the basic
>> idea of a spreadsheet to software - that is, instead of updating value
>> when some event occurs, you specify in advance how that value can be
>> computed and then you stop worrying about keeping it updated.
>
> The result is the same. Of course, I don't track dependencies in such a
> tiny barebones example. But when you retrieve a cell you will get the
> same value as with dependencies. Adding dependencies is left as an
> exercise.
>
>>
>> Incidentally, is this supposed to be an example of Python's supposed
>> "aesthetic pleasantness"?
>
> Nope. This is an example that you don't need macros and
> multi-statements. Ken writes: "While the absence of macros and
> multi-statement lambda in Python will make coding more cumbersome". I'd
> like to see Python code doing the same if the language had macros and
> multi-statement lambda. Will it be more simple? More expressive?
FWIW (absolutely nothing, I imagine), here is my take on your
spreadsheet:
http://paste.lisp.org/display/19766
It is longer, for sure, but it does more and I haven't made any
attempt to stick to some minimum number of lines.
>> I find it a little hideous, even giving you
>> the benefit of the doubt and pretending there are newlines between
>> each function. There's nothing like a word wrapped in pairs of
>> underscores to totally ruin an aesthetic experience.
>
> I don't think anyone who is not a master of a language can judge
> readability. You're just distracted by insignificant details, they
> don't matter if you code in that language for many years. I'm not going
> to tell you how Lisp Cell code looks to me ;)
>
>> P.S. Is this really a spreadsheet? It looks like it's a flat
>> hashtable...
>
> Does it matter if it's flat or 2D?
>
Not really, because this is not Cells.
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <kBn7g.3$4q2.1@fe12.lga>
Serge Orlov wrote:
> Bill Atkins wrote:
>
>>"Serge Orlov" <···········@gmail.com> writes:
>>
>>
>>>Ken Tilton wrote:
>>>
>>>>It is vastly more disappointing that an alleged tech genius would sniff
>>>>at the chance to take undeserved credit for PyCells, something probably
>>>>better than a similar project on which Adobe (your superiors at
>>>>software, right?) has bet the ranch. This is the Grail, dude, Brooks's
>>>>long lost Silver Bullet. And you want to pass?????
>>>>
>>>>C'mon, Alex, I just want you as co-mentor for your star quality. Of
>>>>course you won't have to do a thing, just identify for me a True Python
>>>>Geek and she and I will take it from there.
>>>>
>>>>Here's the link in case you lost it:
>>>>
>>>> http://www.lispnyc.org/wiki.clp?page=PyCells
>>>>
>>>>:)
>>>>
>>>>peace, kenny
>>>>
>>>>ps. flaming aside, PyCells really would be amazingly good for Python.
>>>>And so Google. (Now your job is on the line. <g>) k
>>>
>>>Perhaps I'm missing something but what's the big deal about PyCells?
>>>Here is 22-lines barebones implementation of spreadsheet in Python,
>>>later I create 2 cells "a" and "b", "b" depends on a and evaluate all
>>>the cells. The output is
>>>
>>>a = negate(sin(pi/2)+one) = -2.0
>>>b = negate(a)*10 = 20.0
>>>
>>>=================== spreadsheet.py ==================
>>>class Spreadsheet(dict):
>>>    def __init__(self, **kwd):
>>>        self.namespace = kwd
>>>    def __getitem__(self, cell_name):
>>>        item = self.namespace[cell_name]
>>>        if hasattr(item, "formula"):
>>>            return item()
>>>        return item
>>>    def evaluate(self, formula):
>>>        return eval(formula, self)
>>>    def cell(self, cell_name, formula):
>>>        "Create a cell defined by formula"
>>>        def evaluate_cell():
>>>            return self.evaluate(formula)
>>>        evaluate_cell.formula = formula
>>>        self.namespace[cell_name] = evaluate_cell
>>>    def cells(self):
>>>        "Yield all cells of the spreadsheet along with current values and formulas"
>>>        for cell_name, value in self.namespace.items():
>>>            if not hasattr(value, "formula"):
>>>                continue
>>>            yield cell_name, self[cell_name], value.formula
>>>
>>>import math
>>>def negate(x):
>>>    return -x
>>>sheet1 = Spreadsheet(one=1, sin=math.sin, pi=math.pi, negate=negate)
>>>sheet1.cell("a", "negate(sin(pi/2)+one)")
>>>sheet1.cell("b", "negate(a)*10")
>>>for name, value, formula in sheet1.cells():
>>>    print name, "=", formula, "=", value
>>>
>>
>>I hope Ken doesn't mind me answering for him, but Cells is not a
>>spreadsheet (where did you get that idea?).
>
>
> It's written on the page linked above, second sentence: "Think of the
> slots as cells in a spreadsheet, and you've got the right idea". I'm
> not claiming that my code is full PyCell implementation.
Well it's the Miller Analogies Test, isn't it? <g> Very easy to leave out
a level of abstraction. Like I said in my other reply, /everyone/ leaves
this level out.
Your spreadsheet does not have slots ruled by functions, it has one slot
for a dictionary where you store names and values/formulas.
Go back to your example and arrange it so a and b are actual slots (data
members? fields?) of the spreadsheet class. You can just stuff numbers in a:
sheet1.a = 42
but b should be somehow associated with a rule when sheet1 is created.
As I said in the other post, also associate an on-change callback with
slots a and b.
When you create sheet1:
sheet1 = Spreadsheet(a=42,b=pow(self.a,2))
You should see the callbacks for a and b fire. Then when you:
sheet1.a = 7
You should see them fire again (and each should see old and new values
as one would hope).
The fun part is when the rule associated with b is just:
self.b = somefunction(self)
and somefunction happens to access slot a and it all still works.
One important feature above is that the rule is associated with the slot
when the instance is created, and different instances can have different
rules for the same slot. In effect, classes become programmable, hence
more reusable -- without this one is forever subclassing to satisfy
different functional requirements.
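A rough rendering of the spec above (hypothetical code, not Kenny's port, written in modern Python): rules and an on-change callback are attached per instance at creation, the callback fires with old and new values at creation and on every change, and a rule reads other slots through the instance:

```python
class Sheet(object):
    """Per-instance rules: each instance may pass a different rule for the
    same slot name, so behavior varies without subclassing."""
    def __init__(self, values, rules, on_change=None):
        self.__dict__.update(values=dict(values), rules=dict(rules),
                             on_change=on_change, cache={})
        self._recompute()  # callbacks fire for ruled slots at creation
    def _recompute(self):
        for name, rule in self.rules.items():
            old, new = self.cache.get(name), rule(self)
            if new != old:
                self.cache[name] = new
                if self.on_change:
                    self.on_change(name, old, new)
    def __getattr__(self, name):
        if name in self.cache:
            return self.cache[name]
        try:
            return self.values[name]
        except KeyError:
            raise AttributeError(name)
    def __setattr__(self, name, value):
        old = self.values.get(name)
        self.values[name] = value
        if self.on_change:
            self.on_change(name, old, value)
        self._recompute()  # ruled slots see the new value

changes = []
sheet1 = Sheet(values={'a': 42},
               rules={'b': lambda s: pow(s.a, 2)},
               on_change=lambda n, old, new: changes.append((n, old, new)))
sheet1.a = 7   # fires for a (42 -> 7) and then for b (1764 -> 49)
```

This toy blindly re-runs every ruled slot on any write; the point of real Cells is that it recomputes only actual dependents, which it discovers automatically.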
>
>
>
>>It does apply the basic
>>idea of a spreadsheet to software - that is, instead of updating value
>>when some event occurs, you specify in advance how that value can be
>>computed and then you stop worrying about keeping it updated.
>
>
> The result is the same.
Hopefully I have made clear the different levels. With your example, all
variables exist only in the universe of the Spreadsheet class. So you
have a good start on an /interpreter/ that works like the /extension/
Cells provide to Common Lisp.
Do you see the difference?
Apologies for all the pseudo-Python above, btw.
btw, I am not saying Python cannot do PyCells. I got pretty far on a
port from CL to Python before losing interest. You just take your code
above and move it to the metaclass level and, yes, track dependencies,
etc etc etc.
When that is done we can look at a working example and see how well
Python fared without macros and full-blown lambda.
hth, kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
On Sun, 07 May 2006 10:36:00 -0400, Ken Tilton <·········@gmail.com>
wrote:
>[...]
>
>Your spreadsheet does not have slots ruled by functions, it has one slot
>for a dictionary where you store names and values/formulas.
>
>Go back to your example and arrange it so a and b are actual slots (data
>members? fields?) of the spreadsheet class. You can just stuff numbers in a:
>
> sheet1.a = 42
>
>but b should be somehow associated with a rule when sheet1 is created.
>As I said in the other post, also associate an on-change callback with
>slots a and b.
I must be missing something - seems this should be easy using
__setattr__ and __getattr__. Then _literally_ there's just a
dict containing names and functions, but when you _use_ the
class it looks just like the above:
>[...]
>
>When that is done we can look at a working example and see how well
>Python fared without macros and full-blown lambda.
No lambda in the non-programmer-half-hour implementation below.
You need to define a named function for each cell to use as
a callback. Except for that what are Cells supposed to do that
the implementation below doesn't do?
"""PyCells.py"""
class Cell:
    def __init__(self, name, owner, callback):
        self.name = name
        self.callback = callback
        self.owner = owner
    def onchange(self, value):
        self.value = value
        self.callback(self, value)

class Cells:
    def __init__(self):
        #self.slots = {}
        #Oops, don't work so well with __setattr__:
        self.__dict__['slots'] = {}
    def __setattr__(self, name, value):
        self.slots[name].onchange(value)
    def __getattr__(self, name):
        return self.slots[name].value
    def AddCell(self, name, callback):
        self.slots[name] = Cell(name, self, callback)
***********
Sample use:
cells = Cells()

def acall(cell, value):
    cell.owner.slots['b'].value = value + 1
cells.AddCell('a', acall)

def bcall(cell, value):
    cell.owner.slots['a'].value = value - 1
cells.AddCell('b', bcall)

cells.a = 42
print cells.a, cells.b
cells.b = 24
print cells.a, cells.b
************************
David C. Ullrich
On Mon, 08 May 2006 08:05:38 -0500, David C. Ullrich
<·······@math.okstate.edu> wrote:
>[...]
>
>def acall(cell, value):
> cell.owner.slots['b'].value = value + 1
Needing to say that sort of thing every time
you define a callback isn't very nice.
New and improved version:
"""PyCells.py"""
class Cell:
    def __init__(self, name, owner, callback):
        self.name = name
        self.callback = callback
        self.owner = owner

    def onchange(self, value):
        self.value = value
        self.callback(self, value)

    def __setitem__(self, name, value):
        self.owner.slots[name].value = value

class Cells:
    def __init__(self):
        self.__dict__['slots'] = {}

    def __setattr__(self, name, value):
        self.slots[name].onchange(value)

    def __getattr__(self, name):
        return self.slots[name].value

    def AddCell(self, name, callback):
        self.slots[name] = Cell(name, self, callback)
Sample:
cells = Cells()

def acall(cell, value):
    cell['b'] = value + 1
cells.AddCell('a', acall)

def bcall(cell, value):
    cell['a'] = value - 1
cells.AddCell('b', bcall)

cells.a = 42
print cells.a, cells.b
cells.b = 24
print cells.a, cells.b
#OR you could give Cell a __setattr__ so the above
#would be cell.a = value - 1. I think I like this
#version better; in applications I have in mind I
#might be iterating over lists of cell names.
************************
David C. Ullrich
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <1LJ7g.17$u17.10@fe08.lga>
David C. Ullrich wrote:
> On Sun, 07 May 2006 10:36:00 -0400, Ken Tilton <·········@gmail.com>
> wrote:
>
>
>>[...]
>
>
> I must be missing something - seems this should be easy using
> __setattr__ and __getattr__. Then _literally_ there's just a
> dict containing names and functions, but when you _use_ the
> class it looks just like the above:
Ah, but "looks like" is not enough. Suppose you have a GUI class from
Tkinter. After a little more playing and fixing the huge gap described
in the next paragraph you decide, Cripes! Kenny was right! This is very
powerful. So now you want to subclass a Tkinter button and control
whether it is enabled with a rule (the huge gap, btw). But the enabled
flag of the super class is a native Python class slot. How would you
handle that with your faux object system? Had you truly extended the
Python class system you could just give the inherited slot a rule.
Speaking of which...
btw, you claimed "no lambda" but I did not see you doing a ruled value
anywhere, and that is where you want the lambda. And in case you are
thinking your callbacks do that:
No, you do not want on-change handlers propagating data to other slots,
though that is a sound albeit primitive way of improving
self-consistency of data in big apps. The productivity win with VisiCalc
was that one simply writes rules that use other cells, and the system
keeps track of what to update as any cell changes for you. You have that
exactly backwards: every slot has to know what other slots to update. Ick.
kenny
Ken Tilton <·········@gmail.com> writes:
> No, you do not want on-change handlers propagating data to other
> slots, though that is a sound albeit primitive way of improving
> self-consistency of data in big apps. The productivity win with
> VisiCalc was that one simply writes rules that use other cells, and
> the system keeps track of what to update as any cell changes for
> you. You have that exactly backwards: every slot has to know what
> other slots to update. Ick.
No no, that's fine and the way it should be: when you change a slot,
it should know who to update. And that's also the way it works in
Cells. The trick is that Cells takes care of that part for you: all
the *programmer* has to care about is what values a slot depends on --
Cells takes care of inverting that for you, which is important because
that's a job that a computer is much better at than a human.
On 08 May 2006 12:53:09 -0700, ···@conquest.OCF.Berkeley.EDU (Thomas
F. Burdick) wrote:
>Ken Tilton <·········@gmail.com> writes:
>
>> No, you do not want on-change handlers propagating data to other
>> slots, though that is a sound albeit primitive way of improving
>> self-consistency of data in big apps. The productivity win with
>> VisiCalc was that one simply writes rules that use other cells, and
>> the system keeps track of what to update as any cell changes for
>> you. You have that exactly backwards: every slot has to know what
>> other slots to update. Ick.
>
>No no, that's fine and the way it should be: when you change a slot,
>it should know who to update. And that's also the way it works in
>Cells. The trick is that Cells takes care of that part for you:
I'm glad you said that - this may be what he meant, but it seems
more plausible than what he actually said.
> all
>the *programmer* has to care about is what values a slot depends on --
>Cells takes care of inverting that for you, which is important because
>that's a job that a computer is much better at than a human.
Fine. I suppose that is better; if b is going to return a + 1
the fact that this is what b returns should belong to b, not
to a. So a has an update list including b, so when a's value
is set a tells b it needs to update itself.
If we're allowed to pass (at some point, to some constructor
or other) something like (b, a + 1, [a]), which sets up a
cell b that shall return a + 1, and where the [a] is used
in the constructor to tell a to add b to a's update list
then this seems like no big deal.
And doing that doesn't seem so bad - now when the programmer
is writing b he has to decide that it should return a + 1
and also explicitly state that b shall depend on a; this
is all nice and localized, it's still _b_ telling _a_ to
add b to a's update list, and the programmer only has
to figure out what _b_ depends on when he's writing _b_.
Doesn't seem so bad.
But of course it would be better to be able to pass just
something morally equivalent to (b, a + 1) to whatever
constructor and have the system figure out automatically
that since b returns a + 1 it has to add a to b's update
list. There must be some simple trick to accomplish that
(using Python, without parsing code). (I begin to see the
point to the comment about how the callbacks should fire
when things are constructed.) Exactly what the trick is I
don't see immediately.
In Cells do we just pass a rule using other cells to
determine this cell's value, or do we also include
an explicit list of cells that this cell depends on?
************************
David C. Ullrich
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <NUP7g.17$G22.12@fe11.lga>
David C. Ullrich wrote:
> On 08 May 2006 12:53:09 -0700, ···@conquest.OCF.Berkeley.EDU (Thomas
> F. Burdick) wrote:
>
>
>>Ken Tilton <·········@gmail.com> writes:
>>
>>
>>>No, you do not want on-change handlers propagating data to other
>>>slots, though that is a sound albeit primitive way of improving
>>>self-consistency of data in big apps. The productivity win with
>>>VisiCalc was that one simply writes rules that use other cells, and
>>>the system keeps track of what to update as any cell changes for
>>>you. You have that exactly backwards: every slot has to know what
>>>other slots to update. Ick.
>>
>>No no, that's fine and the way it should be: when you change a slot,
>>it should know who to update. And that's also the way it works in
>>Cells. The trick is that Cells takes care of that part for you:
>
>
> I'm glad you said that - this may be what he meant, but it seems
> more plausible than what he actually said.
There may be some confusion here because there are two places for code
being discussed at the same time, and two senses of propagation.
The two places for code are (1) the rule attached to A, which is
responsible for computing a value for A, and (2) a callback for A to be
invoked whenever A changes. Why the difference?
In Cells, A is a slot such as 'background-color'. Whenever that changes,
we have to do something more. On Mac OS9 it was "InvalidateRect" of the
widget. In Cells-Tk, it is:
(Tcl_interp "mywidget configure -background <new color>")
In my OpenGL GUI, it is to rebuild the display-list for the widget.
That is the same no matter what rule some instance has for the slot
background-color, and different instances will have different rules.
As for propagating, yes, Cells propagates automatically. More below on
that. What I saw in the example offered was a hardcoded on-change
callback that was doing /user/ propagation from A to B (and B to A! ...
doesn't that loop, btw? Anyway...)
>
>
>>all
>>the *programmer* has to care about is what values a slot depends on --
>>Cells takes care of inverting that for you, which is important because
>>that's a job that a computer is much better at than a human.
>
>
> Fine. I suppose that is better; if b is going to return a + 1
> the fact that this is what b returns should belong to b, not
> to a. So a has an update list including b, so when a's value
> is set a tells b it needs to update itself.
>
> If we're allowed to pass (at some point, to some constructor
> or other) something like (b, a + 1, [a]), which sets up a
> cell b that shall return a + 1, and where the [a] is used
> in the constructor to tell a to add b to a's update list
> then this seems like no big deal.
>
> And doing that doesn't seem so bad - now when the programmer
> is writing b he has to decide that it should return a + 1
> and also explicitly state that b shall depend on a; this
> is all nice and localized, it's still _b_ telling _a_ to
> add b to a's update list, and the programmer only has
> to figure out what _b_ depends on when he's writing _b_.
> Doesn't seem so bad.
>
> But of course it would be better to be able to pass just
> something morally equivalent to (b, a + 1) to whatever
> constructor and have the system figure out automatically
> that since b returns a + 1 it has to add a to b's update
> list. There must be some simple trick to accomplish that
> (using Python, without parsing code).
Right, you do not want to parse code. It really would not work as
powerfully as Cells, which notices any dynamic access to another cell
while a rule is running. So my rule can call a function on "self" (the
instance that owns the slot being calculated), and since self can have
pointers to other instances, the algorithm can navigate high and low
calling other functions before finally reading another ruled slot. You
want to track those.
> Exactly what the trick is I
> don't see immediately.
To compute a value for a slot that happens to have a rule associated
with it, have a little cell datastructure that implements all this and
associate the cell with the slot and store a pointer to the rule in the
cell. Then have a global variable called *dependent* and locally:
    currentdependent = *dependent*
    *dependent* = cell
    oldvalue = cell.value
    newvalue = call cell.rule, passing it the self instance
    *dependent* = currentdependent

    if newvalue not = oldvalue
        call on-change with the slot name, self, newvalue and oldvalue
        (the on-change needs to dispatch on as many arguments as
        the language allows. Lisp does it on them all)

In the reader on a slot (in your getattr) you need code that notices if
the value being read is mediated by a ruled cell, and if the global
*dependent* is non-empty. If so, both cells get a record of the other
(for varying demands of the implementation).
>
> In Cells do we just pass a rule using other cells to
> determine this cell's value, or do we also include
> an explicit list of cells that this cell depends on?
Again, the former. Just write the rule, the above scheme dynamically
figures out the dependencies. Note then that dependencies vary over time
because of different branches a rule might take.
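[The scheme described above can be sketched in a few lines of modern Python. This is a toy illustration of the *dependent* trick only; all names here (Cell, get, compute, set) are invented for the sketch and are not the real Cells API.]

```python
# Toy sketch of dynamic dependency tracking, in the spirit of the
# *dependent* scheme described above. Hypothetical names; not the
# real Cells library.

_dependent = None  # the cell whose rule is currently being evaluated

class Cell:
    def __init__(self, rule=None, value=None):
        self.rule = rule      # rule() -> value, or None for an input cell
        self.value = value
        self.users = set()    # cells that read this cell while computing

    def get(self):
        # Reading a cell while some rule runs records the dependency.
        if _dependent is not None:
            self.users.add(_dependent)
        return self.value

    def compute(self):
        # Bind *dependent* to this cell, run the rule, restore.
        global _dependent
        saved, _dependent = _dependent, self
        try:
            new = self.rule()
        finally:
            _dependent = saved
        changed = new != self.value
        self.value = new
        if changed:
            for user in list(self.users):
                user.compute()

    def set(self, value):
        # Writing an input cell propagates to whoever depends on it.
        if value != self.value:
            self.value = value
            for user in list(self.users):
                user.compute()

a = Cell(value=0)
b = Cell(rule=lambda: a.get() + 1)
b.compute()   # running b's rule records b as a user of a
c = Cell(rule=lambda: b.get() * 2)
c.compute()   # c is recorded as a user of b
a.set(42)     # recomputes b, which in turn recomputes c
```

Note that, as Kenny says, the dependencies are discovered by running the rules, not by parsing them, so they can shift as rules take different branches.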
I want to reassure the community that this (and the spreadsheet analogy
<g>) is not just my crazy idea. In 1992:
http://www.cs.utk.edu/~bvz/active-value-spreadsheet.html
"It is becoming increasingly evident that imperative languages are
unsuitable for supporting the complicated flow-of-control that arises in
interactive applications. This paper describes a declarative paradigm
for specifying interactive applications that is based on the spreadsheet
model of programing. This model includes multi-way constraints and
action procedures that can be triggered when constraints change the
values of variables."
Cells do not do multi-way constraints, btw. Nor partial constraints. Too
hard to program, because the system gets non-deterministic. That kinda
killed (well, left to a small niche) the whole research programme. I
have citations on that as well. :)
kenny
Ken,
I don't mean to sound dismissive of Cells and, being an 'evangelist' for
other obscure technologies myself, I understand the frustration of
people not 'getting it'. In this instance, though, I'm one of the
people not getting it.
How does Cells compare to the properties concept in Delphi, and .Net
languages?
In these languages you can write something like:
...
property boolean Enabled = { onread: readEnabled };

function boolean readEnabled( )
{
    return Text.HasSelection();
}
The above basically implements your example of "Cut" menu item being
enabled only if there is a selection. This can be generalised so that
the 'readEnabled' function is not hard coded and can be supplied on an
instance-by-instance basis rather than a class basis.
Is this equivalent to Cells?
Is Cells a superset of this functionality?
Thanks,
Brian.
Scream wrote:
> Ken,
>
> [...]
Well, first of all, you /will/ find a lot of prior art out there, and I
do cite a different Smalltalk capability in at least one place in my
various writings -- an :onChange capability, yes? Or maybe that was just
the QKS SmalltalkAgents I was using?
Anyway, your example will not scale performancewise. It is where we
started on Day One of Cells: we associated rules with slots and used a
little macrology and accessor magic to funcall the rule when the slot
was accessed.
But we were using this to arrange insanely detailed and complex GUI
layouts. Everybody depended on some aspect of somebody else, right
down to the size of a character being a function of the character and
the font metrics of the associated font.
So you /have/ to cache computations, and if you cache computations you
have to know when to update the cache.
etc etc etc
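[The caching point can be sketched with a dirty flag: an upstream change is cheap because it only marks dependents stale, and the possibly expensive rule reruns only when the value is actually read. A toy sketch with made-up names, not how Cells itself is implemented:]

```python
class LazyCell:
    """Cached ruled value: invalidate on write, recompute on read."""
    def __init__(self, rule=None, value=None):
        self.rule = rule
        self._value = value
        self._dirty = rule is not None   # ruled cells start uncomputed
        self.users = []                  # downstream cells to mark stale

    def get(self):
        if self._dirty:                  # recompute only when read
            self._value = self.rule()
            self._dirty = False
        return self._value

    def set(self, value):
        self._value = value
        for u in self.users:             # cheap: just flag, no recompute
            u._dirty = True

font_points = LazyCell(value=12)
char_width = LazyCell(rule=lambda: font_points.get() * 2)
font_points.users.append(char_width)

w1 = char_width.get()   # rule runs once here
font_points.set(20)     # no recomputation yet; char_width just goes stale
w2 = char_width.get()   # rule runs again and sees the new font size
```

A real system would also make the staleness propagate transitively; this sketch shows only the cache-plus-invalidation idea.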
>
> Is this equivalent to Cells?
No, but it is motivated by the same concern: making it easier for
developers to develop complex systems with interdependent state. Like I
said, it was where we started.
> Is Cells a superset of this functionality?
Not so much superset as "scalable implementation". And superset. :)
kenny
On Mon, 08 May 2006 18:46:57 -0400, Ken Tilton <·········@gmail.com>
wrote:
>
>
>David C. Ullrich wrote:
>>[...]
>
>There may be some confusion here because there are two places for code
>being discussed at the same time, and two sense of propagation.
>
>the two places for code are (1) the rule attached to A which is
>responsible for computing a value for A and (2) a callback for A to be
>invoked whenever A changes. Why the difference?
>
>In Cells, A is a slot such as 'background-color'. Whenever that changes,
>we have to do something more. On Mac OS9 it was "InvalidateRect" of the
>widget. In Cells-Tk, it is:
> (Tcl_interp "mywidget configure -background <new color>")
>
>In my OpenGL GUI, it is to rebuild the display-list for the widget.
>
>That is the same no matter what rule some instance has for the slot
>background-color, and different instances will have different rules.
>
>As for propagating, yes, Cells propagates automatically. More below on
>that. What I saw in the example offered was a hardcoded on-change
>callback that was doing /user/ propagation from A to B (and B to A! ...
>doesn't that loop, btw?
No, there's no looping in the example. Yes, the code determining what
b returns should be attached to b instead of to a, but the code I
gave does work as advertised. (I mean give me a break - I provided
a sample use so you could simply run it. Installing Python is not
hard.)
If you, um, look at the code you see that "cells.a = 42" triggers
cells.__setattr__, which fires a's callback; the callback then
reaches inside and sets the value of b _without_ going through
__setattr__, hence without triggering b's callback.
In Cells you can't have A depend on B and also B depend on A?
That seems like an unfortunate restriction - I'd want to be
able to have Celsius and Fahrenheit, so that setting either
one sets the other.
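[The Celsius/Fahrenheit pair can be kept mutually consistent with exactly the non-propagating-write idea under discussion: a guard flag lets each setter update its partner without the partner's setter firing back. A hypothetical sketch, not from the thread's code:]

```python
class Linked:
    """Two values kept consistent by inverse formulas, without looping."""
    def __init__(self):
        self._updating = False
        self._c = 0.0
        self._f = 32.0

    @property
    def celsius(self):
        return self._c

    @celsius.setter
    def celsius(self, value):
        self._c = value
        if not self._updating:          # guard against ping-pong
            self._updating = True
            self.fahrenheit = value * 9 / 5 + 32
            self._updating = False

    @property
    def fahrenheit(self):
        return self._f

    @fahrenheit.setter
    def fahrenheit(self, value):
        self._f = value
        if not self._updating:
            self._updating = True
            self.celsius = (value - 32) * 5 / 9
            self._updating = False

t = Linked()
t.celsius = 100       # also sets fahrenheit to 212
t.fahrenheit = 32     # also sets celsius back to 0
```

The cost, as noted below in the thread, is that the programmer must ensure the two rules really are inverses of each other.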
Of course if there are no loops it's easier to see how
you managed to do the stuff you were talking about elsewhere,
(at least sometimes) delaying execution until needed.
>[...]
>Cells do not do multi-way constraints, btw.
My _guess_ is that a "multi-way constraint" is something like
what's above, where A depends on B and B depends on A?
>Nor partial constraints. To
>hard to program, because the system gets non-deterministic. That kinda
>killed (well, left to a small niche) the whole research programme. I
>have citations on that as well. :)
>
>kenny
************************
David C. Ullrich
On Tue, 09 May 2006 05:35:47 -0500, David C. Ullrich
<·······@math.okstate.edu> wrote:
>On Mon, 08 May 2006 18:46:57 -0400, Ken Tilton <·········@gmail.com>
>wrote:
>[...]
>
>If you, um, look at the code you see that "cells.a = 42" triggers
>cells.__setattr__, which fires a's callback; the callback then
>reaches inside and sets the value of b _without_ going through
>__setattr__, hence without triggering b's callback.
>
>In Cells you can't have A depend on B and also B depend on A?
>That seems like an unfortunate restriction - I'd want to be
>able to have Celsius and Farenheit, so that setting either
>one sets the other.
Realized later that I hadn't thought this through.
I'd been assuming that of course we should be allowed to
have A and B depend on each other. Hence if a change in
A propagates to a change in B that change in B has to
be a non-propagating change - thought I was just so
clever seeing a way to do that.
But duh, if that's how things are then we can't have
transitive dependencies working out right; surely we
want to be able to have B depend on A and then C
depend on B...
(And also if A and B are allowed to depend on each
other then the programmer has to ensure that the
two rules are inverses of each other, which seems
like a bad constraint in general, something non-trivial
that the programmer has to get right.)
So fine, no loops. If anything, if we know that
there are no loops in the dependencies that simplifies
the rest of the programming, no need for the sort of
finagling described in the first paragraph above.
But this raises a question:
Q: How do we ensure there are no loops in the dependencies?
Do we actually run the whole graph through some algorithm
to verify there are no loops?
The simplest solution seems like adding the cells one
at a time, and only allowing a cell to depend on
previously added cells. It's clear that that would
prevent loops, but it's not clear to me whether or
not that disallows some non-looping graphs. A
math question the answer to which is not immediately
clear to me (possibly trivial, the question just
occurred to me this second):
Say G is a (finite) directed graph with no loops. Is it always
possible to order the vertices in such a way that
every edge goes from a vertex to a _previous_ vertex?
>[...]
************************
David C. Ullrich
David C Ullrich asked:
> Q: How do we ensure there are no loops in the dependencies?
>
> Do we actually run the whole graph through some algorithm
> to verify there are no loops?
The question you are asking is whether the dependency graph is a
"directed acyclic graph" (commonly called a DAG). One algorithm to
determine if it is, is called "topological sort". That algorithm tells
you where there are cycles in your graph, if any, and also tells you
the order
of the dependencies, i.e. if x is updated, what you have to update
downstream AND THE ORDER you have to perform the downstream
computations in. We use this algorithm for solving just the kind of
dataflow problems you are talking about in our circuit design tools.
Circuit designs have one-way dependencies that we want to sort and
resolve--similarly, we don't want cycles in our circuits except ones
that pass through clocked flip-flops. We solve such problems on
circuits with millions of gates, i.e. enough gates to represent the
CPU of your computer or a disk controller chip or a router.
I believe there are also algorithms that allow you to construct only
acyclic (the technical term for non-looping) graphs and don't require
you to enter the vertexes (vertices, if you prefer) of the graph in
any specific order, and in the worst case you can always run the
topological sort on any graph and determine if the graph is cyclic.
The area is well-studied and you can find a variety of algorithms that
solve most interesting graph problems as they all occur over-and-over
in numerous diverse fields.
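For concreteness, the topological-sort recipe Chris describes can be sketched in a few lines of Python (this is an editorial illustration, not any poster's actual code; Kahn's algorithm is one standard way to do it):

```python
# Kahn's algorithm: either produce an update order for a dependency
# graph, or report the vertices involved in cycles.
from collections import deque

def topo_sort(graph):
    """graph: dict mapping vertex -> list of vertices that depend on it.
    Returns (order, cyclic_vertices)."""
    indegree = {v: 0 for v in graph}
    for deps in graph.values():
        for d in deps:
            indegree[d] = indegree.get(d, 0) + 1
    ready = deque(v for v, n in indegree.items() if n == 0)
    order = []
    while ready:
        v = ready.popleft()
        order.append(v)
        for d in graph.get(v, ()):
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    # anything still with incoming edges sits on (or behind) a cycle
    cyclic = [v for v, n in indegree.items() if n > 0]
    return order, cyclic

# b and c depend on a; d depends on b and c -- acyclic
order, cyclic = topo_sort({'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []})
assert not cyclic and order[0] == 'a' and order[-1] == 'd'

# now d feeds back into a -- a cycle, so the sort flags it
order, cyclic = topo_sort({'a': ['b'], 'b': ['d'], 'd': ['a']})
assert set(cyclic) == {'a', 'b', 'd'}
```

The returned order is exactly the "what to update downstream, and in what order" information mentioned above.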
Hope this helps,
-Chris
*****************************************************************************
Chris Clark Internet : ·······@world.std.com
Compiler Resources, Inc. Web Site : http://world.std.com/~compres
23 Bailey Rd voice : (508) 435-5016
Berlin, MA 01503 USA fax : (978) 838-0263 (24 hours)
------------------------------------------------------------------------------
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <Fio8g.7$CQ6.0@fe11.lga>
Chris F Clark wrote:
> David C Ullrich asked:
>
>>Q: How do we ensure there are no loops in the dependencies?
>>
>>Do we actually run the whole graph through some algorithm
>>to verify there are no loops?
>
>
> The question you are asking is the dependency graph a "directed
> acyclic graph" (commonly called a DAG)? One algorithm to determine if
> it is, is called "topological sort". That algorithm tells you where
> there are cycles in your graph,
Yep. But with Cells the dependency graph is just a shifting record of
who asked who, shifting because all of a sudden some outlier data will
enter the system and a rule will branch to code for the first time, and
suddenly "depend on" some new other cell (new as in never before used
by this cell). This is not subject to static analysis because, in fact,
lexically everyone can get to everything else, what with closures,
first-class functions, runtime branching we cannot predict... fuggedaboutit.
So we cannot say, OK, here is "the graph" of our application model. All
we can do is let her rip and cross our fingers. :)
kenny
Kenny replied to me saying:
> Yep. But with Cells the dependency graph is just a shifting record of
> who asked who, shifting because all of a sudden some outlier data will
> enter the system and a rule will branch to code for the first time,
> and suddenly "depend on" some new other cell (new as in never
> before used by this cell). This is not subject to static analysis
> because, in fact, lexically everyone can get to everything else, what
> with closures, first-class functions, runtime branching we cannot
> predict... fuggedaboutit.
>
> So we cannot say, OK, here is "the graph" of our application
> model. All we can do is let her rip and cross our fingers. :)
Yes, if you have Turing completeness in your dependency graph, the
problem is unsolvable. However, it's like the static v. dynamic
typing debate, you can pick how much you want to allow your graph to
be dynamic versus how much "safety" you want. In particular, I
suspect that in many applications, one can compute the set of
potentially problematic dependencies (and that set will be empty).
It's just a matter of structuring and annotating them correctly. Just
like one can create type systems that work for ML and Haskell. Of
course, if you treat your cell references like C pointers, then you
get what you deserve.
Note that you can even run the analysis dynamically, recomputing
whether the graph is cycle free as each dependency changes. Most
updates have local effect. Moreover, if you have used topological
sort to compute an ordering as well as proving cycle-free-ness, the
edge is only potentially problematic when it goes from a later
vertex in the order to an earlier one. I wouldn't be surprised to
find efficient algorithms for calculating and updating a topological
sort already in the literature.
It is worth noting that in typical chip circuitry there are
constructions, generally called "busses" where the flow of information
is sometimes "in" via an edge and sometimes "out" via the same edge
and we can model them in a cycle-free manner.
If you want to throw up your hands and say the problem is intractable
in general, you can. However, in my opinion one doesn't have to give
up quite that easily.
-Chris
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <mao8g.5$CQ6.4@fe11.lga>
[Sorry, I missed this one originally.]
David C. Ullrich wrote:
> On Tue, 09 May 2006 05:35:47 -0500, David C. Ullrich
> <·······@math.okstate.edu> wrote:
>
>
>>On Mon, 08 May 2006 18:46:57 -0400, Ken Tilton <·········@gmail.com>
>>wrote:
>>[...]
>>
>>If you, um, look at the code you see that "cells.a = 42" triggers
>>cells.__setattr__, which fires a's callback; the callback then
>>reaches inside and sets the value of b _without_ going through
>>__setattr__, hence without triggering b's callback.
>>
>>In Cells you can't have A depend on B and also B depend on A?
>>That seems like an unfortunate restriction - I'd want to be
>>able to have Celsius and Farenheit, so that setting either
>>one sets the other.
Set Kelvin, and make Celsius and Fahrenheit functions of that. ie, There
is only one datapoint, the temperature. No conflict unless one creates one.
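As an editorial aside, the single-datapoint design is easy to see in plain Python (properties stand in for Cells here; the class name is invented):

```python
# One settable datapoint (kelvin); Celsius and Fahrenheit are always
# derived from it, so the two scales can never disagree.
class Temperature:
    def __init__(self, kelvin):
        self.kelvin = kelvin          # the single settable datapoint

    @property
    def celsius(self):                # derived, never stored
        return self.kelvin - 273.15

    @property
    def fahrenheit(self):             # derived from celsius
        return self.celsius * 9 / 5 + 32

t = Temperature(273.15)
assert t.celsius == 0.0 and t.fahrenheit == 32.0
t.kelvin = 373.15                     # one assignment updates both views
assert abs(t.celsius - 100.0) < 1e-9 and abs(t.fahrenheit - 212.0) < 1e-9
```

No conflict is possible because there is nothing to keep in sync: only kelvin is ever assigned.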
>
>
> Realized later that I hadn't thought this through.
>
> I'd been assuming that of course we should be allowed to
> have A and B depend on each other. Hence if a change in
> A propagates to a change in B that change in B has to
> be a non-propagating change - thought I was just so
> clever seeing a way to do that.
I think it could be arranged, if one were willing to tolerate a little
fuzziness: no, there would be no strictly correct snapshot at which
point everyone had their "right value". Instead, A changes so B
recomputes, B changes so A recomputes... our model has now come to life,
we just have to poll for OS events or socket data, and A and B never get
to a point where they are self-consistent, because one or the other
always needs to be recalculated.
I sometimes wonder if the physical universe is like that, explaining why
gravity slows time: it is not the gravity, it is the mass and we are
seeing system degradation as the matrix gets bogged down recomputing all
that matter.
[Cue Xah]
>
> But duh, if that's how things are then we can't have
> transitive dependencies working out right; surely we
> want to be able to have B depend on A and then C
> depend on B...
>
> (And also if A and B are allowed to depend on each
> other then the programmer has to ensure that the
> two rules are inverses of each other, which seems
> like a bad constraint in general, something non-trivial
> that the programmer has to get right.)
Right, when I considered multi-way dependencies I realized I would have
to figure out some new syntax to declare in one place the rules for two
slots, and that would be weird because in Cells it is the instance that
gets a rule at make-instance time, so i would really have to have some
new make-instance-pair capability. Talk about a slippery slope. IMO, the
big constraints research program kicked off by Steele's thesis withered
into a niche technology because they sniffed at the "trivial"
spreadsheet model of linear dataflow and tried to do partial and
multi-way dependencies. I call it "a bridge too far", and in my
experience of Cells (ten years of pretty intense use), guess what?, all
we need as developers is one-way, linear, fully-specified dependencies.
>
> So fine, no loops. If anything, if we know that
> there are no loops in the dependencies that simplifies
> the rest of the programming, no need for the sort of
> finagling described in the first paragraph above.
Actually, I do allow an on-change callback ("observer" in Cells
parlance) to kick off a toplevel, imperative state change to the model.
Two cells who do that to each other will run until one decides not to do
so. I solve some GUI situations (the classic being a scrollbar thumb and
the text offset, which each at different times control the other) by
having them simply set the other in an observer. On the second
iteration, B is setting A to the value A has already, so propagation
stops (a longstanding Cells feature).
These feel like GOTOs, by the way, and are definitely to be avoided
because they break the declarative paradigm of Cells in which I can
always look at one (anonymous!) rule and see without question from where
any value it might hold comes. (And observers define where they take
effect outside the model, but those I have to track down by slot name
using OO browsing tools.)
> But this raises a question:
>
> Q: How do we ensure there are no loops in the dependencies?
Elsewhere I suggested the code was:
(let ((*dependent* this-cell))
(funcall (rule this-cell) (object this-cell)))
It is actually:
(let ((*dependents* (list* this-cell *dependents*)))
(funcall (rule this-cell) (object this-cell)))
So /before/ that I can say:
(assert (not (find this-cell *dependents*)))
>
> Do we actually run the whole graph through some algorithm
> to verify there are no loops?
>
> The simplest solution seems like adding the cells one
> at a time, and only allowing a cell to depend on
> previously added cells. It's clear that that would
> prevent loops, but it's not clear to me whether or
> not that disallows some non-looping graphs.
As you can see, the looping is detected only when there is an actual
circularity, defined as a computation requiring its own computation as
an input.
btw, a rule /does/ have access to the prior value it computed, if any,
so the cell can be value-reflective even though the rules cannot be
reentrant.
> A
> math question the answer to which is not immediately
> clear to me (possibly trivial, the question just
> occurred to me this second):
>
> Say G is a (finite) directed graph with no loops. Is it always
> possible to order the vertices in such a way that
> every edge goes from a vertex to a _previous_ vertex?
I am just a simple application programmer, so I just wait till Cells
breaks and then I fix that. :)
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton <·········@gmail.com> writes:
>
> Set Kelvin, and make Celsius and Fahrenheit functions of that.
Or Rankine:-)
--
Robert Uhl <http://public.xdi.org/=ruhl>
Brought to you by 'Ouchies', the sharp, prickly toy you bathe with...
Ken Tilton <·········@gmail.com> writes:
> David C. Ullrich wrote:
>
> > But duh, if that's how things are then we can't have transitive
> > dependencies working out right; surely we
> > want to be able to have B depend on A and then C
> > depend on B...
> > (And also if A and B are allowed to depend on each
> > other then the programmer has to ensure that the
> > two rules are inverses of each other, which seems
> > like a bad constraint in general, something non-trivial
> > that the programmer has to get right.)
>
> Right, when I considered multi-way dependencies I realized I would
> have to figure out some new syntax to declare in one place the rules
> for two slots, and that would be weird because in Cells it is the
> instance that gets a rule at make-instance time, so i would really
> have to have some new make-instance-pair capability. Talk about a
> slippery slope. IMO, the big constraints research program kicked off
> by Steele's thesis withered into a niche technology because they
> sniffed at the "trivial" spreadsheet model of linear dataflow and
> tried to do partial and multi-way dependencies. I call it "a bridge
> too far", and in my experience of Cells (ten years of pretty intense
> use), guess what?, all we need as developers is one-way, linear,
> fully-specified dependencies.
It may also be that the bridge too far was in trying to do big,
multi-way constraints in a general-purpose manner. Cells provides you
with the basics, and you can build a special-purpose multi-way system
on top of it, much like you can use it as a toolkit for doing KR.
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <VfO7g.63$u17.58@fe08.lga>
Thomas F. Burdick wrote:
> Ken Tilton <·········@gmail.com> writes:
>
>
>>No, you do not want on-change handlers propagating data to other
>>slots, though that is a sound albeit primitive way of improving
>>self-consistency of data in big apps. The productivity win with
>>VisiCalc was that one simply writes rules that use other cells, and
>>the system keeps track of what to update as any cell changes for
>>you. You have that exactly backwards: every slot has to know what
>>other slots to update. Ick.
>
>
> No no, that's fine and the way it should be: when you change a slot,
> it should know who to update. And that's also the way it works in
> Cells. The trick is that Cells takes care of that part for you: all
> the *programmer* has to care about is what values a slot depends on --
> Cells takes care of inverting that for you, which is important because
> that's a job that a computer is much better at than a human.
Well, as long as we are being precise, an important distinction is being
obscured here: I was objecting to a slot "knowing" who to update thanks
to having been hardcoded to update certain other slots. When you say
"Cells takes care of that", it is important to note that it does so
dynamically at runtime based on actual usage of one slot by the rule for
another slot.
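The distinction Kenny draws, dependencies discovered at runtime from actual reads rather than hardcoded update lists, can be sketched in Python (editorial illustration; names invented):

```python
# No slot is hardcoded to update others. Instead, reads performed
# while a rule runs are recorded, so the dependency graph follows
# actual usage, branch by branch.
_watcher = []   # stack of cells currently evaluating their rule

class DynCell:
    def __init__(self, rule=None, value=None):
        self.rule, self._value = rule, value
        self.users = set()            # cells whose rules read this one

    def get(self):
        if _watcher:                  # record who is asking, dynamically
            self.users.add(_watcher[-1])
        if self.rule and self._value is None:
            _watcher.append(self)
            try:
                self._value = self.rule()
            finally:
                _watcher.pop()
        return self._value

    def set(self, value):
        self._value = value
        for user in list(self.users):
            user._value = None        # invalidate the cached value
            user.set(user.get())      # recompute and propagate onward

a = DynCell(value=1)
b = DynCell(rule=lambda: a.get() * 10)
assert b.get() == 10      # evaluating b records that it used a
a.set(2)                  # changing a pushes the change to b
assert b.get() == 20
```

The programmer only wrote b's rule; the inversion (a knowing to update b) happened at runtime, which is the point being made above.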
kt
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87odyatirq.fsf@rpi.edu>
Bill Atkins <············@rpi.edu> writes:
> Incidentally, is this supposed to be an example of Python's supposed
> "aesthetic pleasantness"? I find it a little hideous, even giving you
> the benefit of the doubt and pretending there are newlines between
> each function. There's nothing like a word wrapped in pairs of
> underscores to totally ruin an aesthetic experience.
I don't mean to suggest that your code in particular is hideous -
sorry if it came off that way. It's just that Python code seems
disappointingly un-pretty after all the discussion about its beauty in
this thread.
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <r8n7g.10$Q17.3@fe08.lga>
Serge Orlov wrote:
> Ken Tilton wrote:
>
>>It is vastly more disappointing that an alleged tech genius would sniff
>>at the chance to take undeserved credit for PyCells, something probably
>>better than a similar project on which Adobe (your superiors at
>>software, right?) has bet the ranch. This is the Grail, dude, Brooks's
>>long lost Silver Bullet. And you want to pass?????
>>
>>C'mon, Alex, I just want you as co-mentor for your star quality. Of
>>course you won't have to do a thing, just identify for me a True Python
>>Geek and she and I will take it from there.
>>
>>Here's the link in case you lost it:
>>
>> http://www.lispnyc.org/wiki.clp?page=PyCells
>>
>>:)
>>
>>peace, kenny
>>
>>ps. flaming aside, PyCells really would be amazingly good for Python.
>>And so Google. (Now your job is on the line. <g>) k
>
>
> Perhaps I'm missing something...
yes, but do not feel bad, everyone gets confused by the /analogy/ to
spreadsheets into thinking Cells /is/ a spreadsheet. In fact, for a
brief period I swore off the analogy because it was so invariably
misunderstood. Even Graham misunderstood it.
But it is such a great analogy! <sigh>
> but what's the big deal about PyCells?
> Here is 22-lines barebones implementation of spreadsheet in Python,
> later I create 2 cells "a" and "b", "b" depends on a and evaluate all
> the cells. The output is
>
> a = negate(sin(pi/2)+one) = -2.0
> b = negate(a)*10 = 20.0
Very roughly speaking, that is supposed to be the code, not the output.
So you would start with (just guessing at the Python, it has been years
since I did half a port to Python):
v1 = one
a = determined_by(negate(sin(pi/2)+v1))
b = determined_by(negate(a)*10)
print(a) -> -2.0 ;; this and the next are easy
print(b) -> 20
v1 = two ;; fun part starts here
print(b) -> 40 ;; of course a got updated, too
The other thing we want is (really inventing syntax here):
on_change(a,new,old,old-bound?) print(list(new, old, old-bound?)
Then the print statements Just Happen. ie, It is not as if we are just
hiding computed variables behind syntax and computations get kicked off
when a value is read. Instead, an underlying engine propagates any
assignment throughout the dependency graph before the assignment returns.
My Cells hack does the above, not with global variables, but with slots
(data members?) of instances in the CL object system. I have thought
about doing it with global variables such as a and b above, but never
really seen much of need, maybe because I like OO and can always think
of a class to create of which the value should be just one attribute.
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <Juo7g.6$sC4.4@fe09.lga>
Ken Tilton wrote:
>
>>
>> a = negate(sin(pi/2)+one) = -2.0
>> b = negate(a)*10 = 20.0
>
>
> Very roughly speaking, that is supposed to be the code, not the output.
> So you would start with (just guessing at the Python, it has been years
> since I did half a port to Python):
>
> v1 = one
Sorry, only later did I realize "one" was going in as a named
spreadsheet Cell. I thought maybe Python was a bit of a super-cobol and
had one as a keyword. <g>
kenny
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <0mp7g.600$4q2.70@fe12.lga>
Ken Tilton wrote:
>
>
> Serge Orlov wrote:
>
>> Ken Tilton wrote:
>>
>>> It is vastly more disappointing that an alleged tech genius would sniff
>>> at the chance to take undeserved credit for PyCells, something probably
>>> better than a similar project on which Adobe (your superiors at
>>> software, right?) has bet the ranch. This is the Grail, dude, Brooks's
>>> long lost Silver Bullet. And you want to pass?????
>>>
>>> C'mon, Alex, I just want you as co-mentor for your star quality. Of
>>> course you won't have to do a thing, just identify for me a True Python
>>> Geek and she and I will take it from there.
>>>
>>> Here's the link in case you lost it:
>>>
>>> http://www.lispnyc.org/wiki.clp?page=PyCells
>>>
>>> :)
>>>
>>> peace, kenny
>>>
>>> ps. flaming aside, PyCells really would be amazingly good for Python.
>>> And so Google. (Now your job is on the line. <g>) k
>>
>>
>>
>> Perhaps I'm missing something...
>
>
> yes, but do not feel bad, everyone gets confused by the /analogy/ to
> spreadsheets into thinking Cells /is/ a spreadsheet. In fact, for a
> brief period I swore off the analogy because it was so invariably
> misunderstood. Even Graham misunderstood it.
>
> But it is such a great analogy! <sigh>
>
>> but what's the big deal about PyCells?
>> Here is 22-lines barebones implementation of spreadsheet in Python,
>> later I create 2 cells "a" and "b", "b" depends on a and evaluate all
>> the cells. The output is
>>
>> a = negate(sin(pi/2)+one) = -2.0
>> b = negate(a)*10 = 20.0
>
>
> Very roughly speaking, that is supposed to be the code, not the output.
> So you would start with (just guessing at the Python, it has been years
> since I did half a port to Python):
>
> v1 = one
> a = determined_by(negate(sin(pi/2)+v1)
> b = determined_by(negate(a)*10)
> print(a) -> -2.0 ;; this and the next are easy
> print(b) -> 20
> v1 = two ;; fun part starts here
> print(b) -> 40 ;; of course a got updated, too
>
> The other thing we want is (really inventing syntax here):
>
> on_change(a,new,old,old-bound?) print(list(new, old, old-bound?)
>
> Then the print statements Just Happen. ie, It is not as if we are just
> hiding computed variables behind syntax and computations get kicked off
> when a value is read. Instead, an underlying engine propagates any
> assignment throughout the dependency graph before the assignment returns.
>
> My Cells hack does the above, not with global variables, but with slots
> (data members?) of instances in the CL object system.
here it is:
(in-package :cells)
(defmodel useless () ;; defmodel is CLOS defclass plus Cells wiring
  ((one :initform nil :accessor one :initarg :one)
   (a :initform nil :accessor a :initarg :a)
   (b :initform nil :accessor b :initarg :b)))

;; defobserver defines a CLOS method just the right way
(defobserver one (self new-value old-value old-value-bound-p)
  (print (list :observing-one new-value old-value old-value-bound-p)))

(defobserver a (self new-value old-value old-value-bound-p)
  (print (list :observing-a new-value old-value old-value-bound-p)))

(defobserver b (self new-value old-value old-value-bound-p)
  (print (list :observing-b new-value old-value old-value-bound-p)))

;; c-in and c? hide more Cells wiring. The long names are c-input and
;; c-formula, btw
(progn
  (print :first-we-make-a-useless-instance)
  (let ((u (make-instance 'useless
             :one (c-in 1) ;; we want to change it later, so wrap as input value
             :a (c? (- (+ (sin (/ pi 2)) (one self)))) ;; negate(sin(pi/2)+one)
             :b (c? (* (- (a self) 10)))))) ;; negate(a)*10
    (print :now-we-change-one-to-ten)
    (setf (one u) 10)))
#| output of the above
:first-we-make-a-useless-instance
(:observing-one 1 nil nil)
(:observing-a -2.0d0 nil nil)
(:observing-b -12.0d0 nil nil)
:now-we-change-one-to-ten
(:observing-one 10 1 t)
(:observing-a -11.0d0 -2.0d0 t)
(:observing-b -21.0d0 -12.0d0 t)
|#
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
From: Alexander Schmolck
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <yfshd41yeaj.fsf@oc.ex.ac.uk>
[trimmed groups]
Ken Tilton <·········@gmail.com> writes:
> yes, but do not feel bad, everyone gets confused by the /analogy/ to
> spreadsheets into thinking Cells /is/ a spreadsheet. In fact, for a brief
> period I swore off the analogy because it was so invariably misunderstood.
> Even Graham misunderstood it.
Count me in.
>
> But it is such a great analogy! <sigh>
>
> > but what's the big deal about PyCells?
> > Here is 22-lines barebones implementation of spreadsheet in Python,
> > later I create 2 cells "a" and "b", "b" depends on a and evaluate all
> > the cells. The output is
> > a = negate(sin(pi/2)+one) = -2.0
>
> > b = negate(a)*10 = 20.0
>
> Very roughly speaking, that is supposed to be the code, not the output. So you
> would start with (just guessing at the Python, it has been years since I did
> half a port to Python):
>
>
> v1 = one
> a = determined_by(negate(sin(pi/2)+v1)
> b = determined_by(negate(a)*10)
> print(a) -> -2.0 ;; this and the next are easy
> print(b) -> 20
> v1 = two ;; fun part starts here
> print(b) -> 40 ;; of course a got updated, too
>
do you mean 30?
I've translated my interpretation of the above to this actual python code:
from math import sin, pi

v1 = cell(lambda: 1)
a = cell(lambda: -(sin(pi/2)+v1.val), dependsOn=[v1])
b = cell(lambda: -a.val*10, dependsOn=[a],
         onChange=lambda *args: printChangeBlurp(name='b',*args))

print 'v1 is', v1
print 'a is', a       # -2.0 ;; this and the next are easy
print 'b is', b       # 20
v1.val = 2            # ;; fun part starts here
print 'v1 now is', v1
print 'b now is', b   # 30 ;; of course a got updated, too
I get the following printout:
v1 is 1
a is -2.0
b is [cell 'b' changed from <__main__.unbound object at 0xb4e2472c> to 20.0,
it was not bound]20.0
[cell 'b' changed from 20.0 to 30.0, it was bound ] v1 now is 2
b now is 30.0
Does that seem vaguely right?
> The other thing we want is (really inventing syntax here):
>
> on_change(a,new,old,old-bound?) print(list(new, old, old-bound?)
Is the above what you want (you can also dynamically assign onChange later
on, as required or have a list of procedures instead)?
>
> Then the print statements Just Happen. ie, It is not as if we are just hiding
> computed variables behind syntax and computations get kicked off when a value
> is read. Instead, an underlying engine propagates any assignment throughout
> the dependency graph before the assignment returns.
Updating on write rather than recalculating on read does in itself not seem
particularly complicated.
> My Cells hack does the above, not with global variables, but with slots (data
> members?) of instances in the CL object system. I have thought about doing it
> with global variables such as a and b above, but never really seen much of
> need, maybe because I like OO and can always think of a class to create of
> which the value should be just one attribute.
OK, so in what way does the quick 35 line hack below also completely miss your
point?
# (NB. for lispers: 'is' == EQ; '==' is sort of like EQUAL)

def printChangeBlurp(someCell, oldVal, newVal, bound, name=''):
    print '[cell %r changed from %r to %r, it was %s]' % (
        name, oldVal, newVal, ['not bound', 'bound '][bound]),

_unbound = type('unbound', (), {})() # just a unique dummy value

def updateDependents(dependents):
    seen = {}
    for dependent in dependents:
        if dependent not in seen:
            seen[dependent] = True
            dependent.recalculate()
            updateDependents(dependent._dependents)

class cell(object):
    def __init__(self, formula, dependsOn=(), onChange=None):
        self.formula = formula
        self.dependencies = dependsOn
        self.onChange = onChange
        self._val = _unbound
        for dependency in self.dependencies:
            if self not in dependency._dependents:
                dependency._dependents.append(self)
        self._dependents = []
    def __str__(self):
        return str(self.val)
    def recalculate(self):
        newVal = self.formula()
        if self.onChange is not None:
            oldVal = self._val
            self.onChange(self, oldVal, newVal, oldVal is not _unbound)
        self._val = newVal
    def getVal(self):
        if self._val is _unbound:
            self.recalculate()
        return self._val
    def setVal(self, value):
        self._val = value
        updateDependents(self._dependents)
    val = property(getVal, setVal)
'as
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87mzdt4ri5.fsf@rpi.edu>
Alexander Schmolck <··········@gmail.com> writes:
> [trimmed groups]
>
> Ken Tilton <·········@gmail.com> writes:
>
>> yes, but do not feel bad, everyone gets confused by the /analogy/ to
>> spreadsheets into thinking Cells /is/ a spreadsheet. In fact, for a brief
>> period I swore off the analogy because it was so invariably misunderstood.
>> Even Graham misunderstood it.
>
> Count me in.
>
>>
>> But it is such a great analogy! <sigh>
>>
>> > but what's the big deal about PyCells?
>> > Here is 22-lines barebones implementation of spreadsheet in Python,
>> > later I create 2 cells "a" and "b", "b" depends on a and evaluate all
>> > the cells. The output is
>> > a = negate(sin(pi/2)+one) = -2.0
>>
>> > b = negate(a)*10 = 20.0
>>
>> Very roughly speaking, that is supposed to be the code, not the output. So you
>> would start with (just guessing at the Python, it has been years since I did
>> half a port to Python):
>>
>>
>> v1 = one
>> a = determined_by(negate(sin(pi/2)+v1)
>> b = determined_by(negate(a)*10)
>> print(a) -> -2.0 ;; this and the next are easy
>> print(b) -> 20
>> v1 = two ;; fun part starts here
>> print(b) -> 40 ;; of course a got updated, too
>>
>
> do you mean 30?
>
> I've translated my interpretation of the above to this actual python code:
>
> from math import sin, pi
>
> v1 = cell(lambda: 1)
> a = cell(lambda: -(sin(pi/2)+v1.val), dependsOn=[v1])
> b = cell(lambda: -a.val*10, dependsOn=[a],
>          onChange=lambda *args: printChangeBlurp(name='b',*args))
> print 'v1 is', v1
> print 'a is', a # -2.0 ;; this and the next are easy
> print 'b is', b # 20
> v1.val = 2 # ;; fun part starts here
> print 'v1 now is', v1
> print 'b now is', b # 30 ;; of course a got updated, too
>
>
> I get the following printout:
>
> v1 is 1
> a is -2.0
> b is [cell 'b' changed from <__main__.unbound object at 0xb4e2472c> to 20.0,
> it was not bound]20.0
> [cell 'b' changed from 20.0 to 30.0, it was bound ] v1 now is 2
> b now is 30.0
>
> Does that seem vaguely right?
>
>> The other thing we want is (really inventing syntax here):
>>
>> on_change(a,new,old,old-bound?) print(list(new, old, old-bound?)
>
> Is the above what you want (you can also dynamically assign onChange later
> on, as required or have a list of procedures instead)?
>
>>
>> Then the print statements Just Happen. ie, It is not as if we are just hiding
>> computed variables behind syntax and computations get kicked off when a value
>> is read. Instead, an underlying engine propagates any assignment throughout
>> the dependency graph before the assignment returns.
>
> Updating on write rather than recalculating on read does in itself not seem
> particularly complicated.
>
>> My Cells hack does the above, not with global variables, but with slots (data
>> members?) of instances in the CL object system. I have thought about doing it
>> with global variables such as a and b above, but never really seen much of
>> need, maybe because I like OO and can always think of a class to create of
>> which the value should be just one attribute.
>
> OK, so in what way does the quick 35 line hack below also completely miss your
> point?
>
>
> # (NB. for lispers: 'is' == EQ; '==' is sort of like EQUAL)
>
> def printChangeBlurp(someCell, oldVal, newVal, bound, name=''):
>     print '[cell %r changed from %r to %r, it was %s]' % (
>         name, oldVal, newVal, ['not bound', 'bound '][bound]),
>
> _unbound = type('unbound', (), {})() # just an unique dummy value
>
> def updateDependents(dependents):
>     seen = {}
>     for dependent in dependents:
>         if dependent not in seen:
>             seen[dependent] = True
>             dependent.recalculate()
>             updateDependents(dependent._dependents)
>
> class cell(object):
>     def __init__(self, formula, dependsOn=(), onChange=None):
>         self.formula = formula
>         self.dependencies = dependsOn
>         self.onChange = onChange
>         self._val = _unbound
>         for dependency in self.dependencies:
>             if self not in dependency._dependents:
>                 dependency._dependents.append(self)
>         self._dependents = []
>     def __str__(self):
>         return str(self.val)
>     def recalculate(self):
>         newVal = self.formula()
>         if self.onChange is not None:
>             oldVal = self._val
>             self.onChange(self, oldVal, newVal, oldVal is not _unbound)
>         self._val = newVal
>     def getVal(self):
>         if self._val is _unbound:
>             self.recalculate()
>         return self._val
>     def setVal(self, value):
>         self._val = value
>         updateDependents(self._dependents)
>     val = property(getVal, setVal)
>
>
>
> 'as
Here's how one of the cells examples might look in corrupted Python
(this is definitely not executable):
class FallingRock:
    def __init__(self, pos):
        define_slot( 'velocity', lambda: self.accel * self.elapsed )
        define_slot( 'pos', lambda: self.accel * (self.elapsed ** 2) / 2,
                     initial_position = cell_initial_value( 100 ) )
        self.accel = -9.8

rock = FallingRock(100)
print rock.accel, rock.velocity, rock.pos
# -9.8, 0, 100

rock.elapsed = 1
print rock.accel, rock.velocity, rock.pos
# -9.8, -9.8, -9.8

rock.elapsed = 8
print rock.accel, rock.velocity, rock.pos
# -9.8, -78.4, -627.2
Make sense? The idea is to declare what a slot's value represents
(with code) and then to stop worrying about keeping different things
synchronized.
Here's another of the examples, also translated into my horrific
rendition of Python (forgive me):
class Menu:
    def __init__(self):
        define_slot( 'enabled',
                     lambda: focused_object( self ).__class__ == TextEntry and
                             focused_object( self ).selection )
Now whenever the enabled slot is accessed, it will be calculated based
on what object has the focus. Again, it frees the programmer from
having to keep these different dependencies updated.
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87d5ep4rbt.fsf@rpi.edu>
Bill Atkins <············@rpi.edu> writes:
> Alexander Schmolck <··········@gmail.com> writes:
>
>> [trimmed groups]
>>
>> Ken Tilton <·········@gmail.com> writes:
>>
>>> yes, but do not feel bad, everyone gets confused by the /analogy/ to
>>> spreadsheets into thinking Cells /is/ a spreadsheet. In fact, for a brief
>>> period I swore off the analogy because it was so invariably misunderstood.
>>> Even Graham misunderstood it.
>>
>> Count me in.
>>
>>>
>>> But it is such a great analogy! <sigh>
>>>
>>> > but what's the big deal about PyCells?
>>> > Here is 22-lines barebones implementation of spreadsheet in Python,
>>> > later I create 2 cells "a" and "b", "b" depends on a and evaluate all
>>> > the cells. The output is
>>> > a = negate(sin(pi/2)+one) = -2.0
>>>
>>> > b = negate(a)*10 = 20.0
>>>
>>> Very roughly speaking, that is supposed to be the code, not the output. So you
>>> would start with (just guessing at the Python, it has been years since I did
>>> half a port to Python):
>>>
>>>
>>> v1 = one
>>> a = determined_by(negate(sin(pi/2)+v1))
>>> b = determined_by(negate(a)*10)
>>> print(a) -> -2.0 ;; this and the next are easy
>>> print(b) -> 20
>>> v1 = two ;; fun part starts here
>>> print(b) -> 40 ;; of course a got updated, too
>>>
>>
>> do you mean 30?
>>
>> I've translated my interpretation of the above to this actual python code:
>>
>> from math import sin, pi
>> v1 = cell(lambda: 1)
>> a = cell(lambda:-(sin(pi/2)+v1.val), dependsOn=[v1])
>> b = cell(lambda: -a.val*10, dependsOn=[a],
>>          onChange=lambda *args: printChangeBlurp(name='b',*args))
>> print 'v1 is', v1
>> print 'a is', a # -2.0 ;; this and the next are easy
>> print 'b is', b # 20
>> v1.val = 2 # ;; fun part starts here
>> print 'v1 now is', v1
>> print 'b now is', b # 30 ;; of course a got updated, too
>>
>>
>> I get the following printout:
>>
>> v1 is 1
>> a is -2.0
>> b is [cell 'b' changed from <__main__.unbound object at 0xb4e2472c> to 20.0,
>> it was not bound]20.0
>> [cell 'b' changed from 20.0 to 30.0, it was bound ] v1 now is 2
>> b now is 30.0
>>
>> Does that seem vaguely right?
>>
>>> The other thing we want is (really inventing syntax here):
>>>
>>> on_change(a,new,old,old-bound?) print(list(new, old, old-bound?)
>>
>> Is the above what you want (you can also dynamically assign onChange later
>> on, as required or have a list of procedures instead)?
>>
>>>
>>> Then the print statements Just Happen. ie, It is not as if we are just hiding
>>> computed variables behind syntax and computations get kicked off when a value
>>> is read. Instead, an underlying engine propagates any assignment throughout
>>> the dependency graph before the assignment returns.
>>
>> Updating on write rather than recalculating on read does in itself not seem
>> particularly complicated.
>>
>>> My Cells hack does the above, not with global variables, but with slots (data
>>> members?) of instances in the CL object system. I have thought about doing it
>>> with global variables such as a and b above, but never really seen much of
>>> need, maybe because I like OO and can always think of a class to create of
>>> which the value should be just one attribute.
>>
>> OK, so in what way does the quick 35 line hack below also completely miss your
>> point?
>>
>>
>> # (NB. for lispers: 'is' == EQ; '==' is sort of like EQUAL)
>>
>> def printChangeBlurp(someCell, oldVal, newVal, bound, name=''):
>>     print '[cell %r changed from %r to %r, it was %s]' % (
>>         name, oldVal, newVal, ['not bound', 'bound '][bound]),
>>
>> _unbound = type('unbound', (), {})() # just an unique dummy value
>> def updateDependents(dependents):
>>     seen = {}
>>     for dependent in dependents:
>>         if dependent not in seen:
>>             seen[dependent] = True
>>             dependent.recalculate()
>>             updateDependents(dependent._dependents)
>> class cell(object):
>>     def __init__(self, formula, dependsOn=(), onChange=None):
>>         self.formula = formula
>>         self.dependencies = dependsOn
>>         self.onChange = onChange
>>         self._val = _unbound
>>         for dependency in self.dependencies:
>>             if self not in dependency._dependents:
>>                 dependency._dependents.append(self)
>>         self._dependents = []
>>     def __str__(self):
>>         return str(self.val)
>>     def recalculate(self):
>>         newVal = self.formula()
>>         if self.onChange is not None:
>>             oldVal = self._val
>>             self.onChange(self, oldVal, newVal, oldVal is not _unbound)
>>         self._val = newVal
>>     def getVal(self):
>>         if self._val is _unbound:
>>             self.recalculate()
>>         return self._val
>>     def setVal(self, value):
>>         self._val = value
>>         updateDependents(self._dependents)
>>     val = property(getVal, setVal)
>>
>>
>>
>> 'as
>
> Here's how one of the cells examples might look in corrupted Python
> (this is definitely not executable):
>
> class FallingRock:
>     def __init__(self, pos):
>         define_slot( 'velocity', lambda: self.accel * self.elapsed )
>         define_slot( 'pos', lambda: self.accel * (self.elapsed ** 2) / 2,
>                      initial_position = cell_initial_value( 100 ) )
>         self.accel = -9.8
>
> rock = FallingRock(100)
> print rock.accel, rock.velocity, rock.pos
> # -9.8, 0, 100
>
> rock.elapsed = 1
> print rock.accel, rock.velocity, rock.pos
> # -9.8, -9.8, -9.8
>
> rock.elapsed = 8
> print rock.accel, rock.velocity, rock.pos
> # -9.8, -78.4, -627.2
>
> Make sense? The idea is to declare what a slot's value represents
> (with code) and then to stop worrying about keeping different things
> synchronized.
>
> Here's another of the examples, also translated into my horrific
> rendition of Python (forgive me):
>
> class Menu:
>     def __init__(self):
>         define_slot( 'enabled',
>                      lambda: focused_object( self ).__class__ == TextEntry and
>                              focused_object( self ).selection )
>
> Now whenever the enabled slot is accessed, it will be calculated based
> on what object has the focus. Again, it frees the programmer from
> having to keep these different dependencies updated.
>
> --
> This is a song that took me ten years to live and two years to write.
> - Bob Dylan
Oh dear, there were a few typos:
class FallingRock:
    def __init__(self, pos):
        define_slot( 'velocity', lambda: self.accel * self.elapsed )
        define_slot( 'pos', lambda: self.accel * (self.elapsed ** 2) / 2,
                     initial_value = cell_initial_value( 100 ) )
        self.accel = -9.8
rock = FallingRock(100)
print rock.accel, rock.velocity, rock.pos
# -9.8, 0, 100
rock.elapsed = 1
print rock.accel, rock.velocity, rock.pos
# -9.8, -9.8, 90.2
rock.elapsed = 8
print rock.accel, rock.velocity, rock.pos
# -9.8, -78.4, -527.2
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
From: Alexander Schmolck
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <yfsbqu9y3rj.fsf@oc.ex.ac.uk>
Bill Atkins <············@rpi.edu> writes:
> Here's how one of the cells examples might look in corrupted Python
> (this is definitely not executable):
>
> class FallingRock:
>     def __init__(self, pos):
>         define_slot( 'velocity', lambda: self.accel * self.elapsed )
>         define_slot( 'pos', lambda: self.accel * (self.elapsed ** 2) / 2,
>                      initial_position = cell_initial_value( 100 ) )
>         self.accel = -9.8
>
> rock = FallingRock(100)
> print rock.accel, rock.velocity, rock.pos
> # -9.8, 0, 100
>
> rock.elapsed = 1
> print rock.accel, rock.velocity, rock.pos
> # -9.8, -9.8, -9.8
>
> rock.elapsed = 8
> print rock.accel, rock.velocity, rock.pos
> # -9.8, -78.4, -627.2
>
> Make sense?
No, not at all.
Why do you pass a ``pos`` parameter to the constructor you never use? Did you
mean to write ``cell_initial_value(pos)``?
Why is elapsed never initialized? Is the dependency computation only meant to
start once elapsed is bound? But where does the value '0' for velocity come
from then? Why would it make sense to have ``pos`` initially be completely
independent of everything else but then suddenly reset to something which is
in accordance with the other parameters?
What happens if I add ``rock.pos = -1; print rock.pos ``? Will I get an error?
Will I get -1? Will I get -627.2?
To make this more concrete, here is how I might implement a falling rock:
class FallingRock(object):
    velocity = property(lambda self: self.accel * self.elapsed)
    pos = property(lambda self: 0.5 * self.accel * self.elapsed**2)
    def __init__(self, elapsed=0):
        self.elapsed = elapsed
        self.accel = -9.8

rock = FallingRock()
print rock.accel, rock.velocity, rock.pos
# => -9.8 -0.0 -0.0

rock.elapsed = 1
print rock.accel, rock.velocity, rock.pos
# => -9.8 -9.8 -4.9

rock.elapsed = 9
print rock.accel, rock.velocity, rock.pos
# => -9.8 -88.2 -396.9
How would you like the behaviour to be different from that (and why)?
> The idea is to declare what a slot's value represents
> (with code) and then to stop worrying about keeping different things
> synchronized.
That's what properties (in python) and accessors (in lisp) are for -- if you
compute the slot-values on-demand (i.e. each time a slot is accessed) then you
don't need to worry about stuff getting out of synch.
So far I haven't understood what cells (in its essence) is meant to offer over
properties/accessors apart from a straightforward efficiency hack (instead of
recomputing the slot-values on each slot-access, you recompute them only when
needed, i.e. when one of the other slots on which a slot-value depends has
changed). So what am I missing?
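For what it's worth, that "straightforward efficiency hack" reading can be sketched in a few lines of plain Python: a property recomputes on every read, while a cached variant recomputes only after a write has invalidated it. (Toy `Temperature` example of my own, nothing to do with Cells' actual code.)

```python
class CachedTemperature(object):
    # celsius is an input; fahrenheit is derived, cached, and
    # invalidated whenever celsius is written
    def __init__(self, celsius=0.0):
        self._celsius = celsius
        self._fahrenheit = None   # None means "cache invalid"

    def get_celsius(self):
        return self._celsius

    def set_celsius(self, value):
        self._celsius = value
        self._fahrenheit = None   # invalidate the dependent value
    celsius = property(get_celsius, set_celsius)

    @property
    def fahrenheit(self):
        if self._fahrenheit is None:          # recompute only when stale
            self._fahrenheit = self._celsius * 9.0 / 5.0 + 32.0
        return self._fahrenheit
```

Reads between writes hit the cache; each write pays one invalidation and the next read pays one recompute.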
> Here's another of the examples, also translated into my horrific
> rendition of Python (forgive me):
>
> class Menu:
>     def __init__(self):
>         define_slot( 'enabled',
>                      lambda: focused_object( self ).__class__ == TextEntry and
>
OK, now you've lost me completely. How would you like this to be different in
behaviour from:
class Menu(object):
    enabled = property(lambda self: isinstance(focused_object(self), TextEntry)
                       and focused_object(self).selection)
???
> Now whenever the enabled slot is accessed, it will be calculated based
> on what object has the focus. Again, it frees the programmer from
> having to keep these different dependencies updated.
Again how's that different from the standard property/accessor solution as
above?
'as
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <O7A7g.59$sC4.54@fe09.lga>
Alexander Schmolck wrote:
> [trimmed groups]
>
> Ken Tilton <·········@gmail.com> writes:
>
>
>>yes, but do not feel bad, everyone gets confused by the /analogy/ to
>>spreadsheets into thinking Cells /is/ a spreadsheet. In fact, for a brief
>>period I swore off the analogy because it was so invariably misunderstood.
>>Even Graham misunderstood it.
>
>
> Count me in.
<g> But looking at what it says: "Think of the slots as cells in a
spreadsheet (get it?), and you've got the right idea. ", if you follow
the analogy (and know that slot means "data member" in other OO models)
you also know that Serge's Spreadsheet example would have scored a big
fat zero on the Miller Analogy Test. Serge in no way made slots in
Python classes behave like cells in a spreadsheet. He simply started
work on a Spreadsheet application, using Python classes along the way. Bzzt.
While everyone makes the mistake, it is only because few of us (me
included) read very carefully. Especially if they are more interested in
flaming than learning what someone is saying.
C'mon, people. I linked to Adobe's betting the ranch on such an idea. I
linked to Guy Steele's paper on the same idea. In which he marvelled
that it had not caught on. I could also link you to COSI over at STSCI,
presented at a Lisp Users Group Meeting in 1999 where they were jumping
up and down about the same thing. One of my users gets Cells because he
loved the KR system in Garnet. Look it up. I have more citations of
prior art. And, again, it has gone mainstream: Adobe has adopted the
paradigm.
Y'all might want to ease up on the pissing contest and learn something.
or not, I have been on Usenet before. :)
>
>
>>But it is such a great analogy! <sigh>
>>
>>>but what's the big deal about PyCells?
>>>Here is 22-lines barebones implementation of spreadsheet in Python,
>>>later I create 2 cells "a" and "b", "b" depends on a and evaluate all
>>>the cells. The output is
>>>a = negate(sin(pi/2)+one) = -2.0
>>
>>>b = negate(a)*10 = 20.0
>>
>>Very roughly speaking, that is supposed to be the code, not the output. So you
>>would start with (just guessing at the Python, it has been years since I did
>>half a port to Python):
>>
>>
>> v1 = one
>> a = determined_by(negate(sin(pi/2)+v1))
>> b = determined_by(negate(a)*10)
>> print(a) -> -2.0 ;; this and the next are easy
>> print(b) -> 20
>> v1 = two ;; fun part starts here
>> print(b) -> 40 ;; of course a got updated, too
>>
>
>
> do you mean 30?
>
> I've translated my interpretation of the above to this actual python code:
>
> from math import sin, pi
> v1 = cell(lambda: 1)
> a = cell(lambda:-(sin(pi/2)+v1.val), dependsOn=[v1])
> b = cell(lambda: -a.val*10, dependsOn=[a],
>          onChange=lambda *args: printChangeBlurp(name='b',*args))
> print 'v1 is', v1
> print 'a is', a # -2.0 ;; this and the next are easy
> print 'b is', b # 20
> v1.val = 2 # ;; fun part starts here
> print 'v1 now is', v1
> print 'b now is', b # 30 ;; of course a got updated, too
>
>
> I get the following printout:
>
> v1 is 1
> a is -2.0
> b is [cell 'b' changed from <__main__.unbound object at 0xb4e2472c> to 20.0,
> it was not bound]20.0
> [cell 'b' changed from 20.0 to 30.0, it was bound ] v1 now is 2
> b now is 30.0
>
> Does that seem vaguely right?
<g> You have a good start. But you really have to lose the manual wiring
of dependencies, for several reasons:
-- it is a nuisance to do
-- it will be a source of bugs
-- it will be kind of impossible to do, because (in case you missed
it), the rule should be able to call any function and establish a
dependency on any other cell accessed. So when coding a change to a
function, one would have to go track down any existing rule to change
its dependsOn declaration. Never mind the pain in the first place of
examining the entire call tree to see what else gets accessed.
-- it gets worse. I want you to further improve your solution by
handling rules such as this (I will just write Lisp):
(if (> a b)
    c  ;; this would be the "then" form
    d) ;; this is the 'else'
The problem here is that the rule always creates dependencies on a and
b, but only one of c and d. So you cannot write a dependsOn anyway
(never mind all the other reasons for that being unacceptable).
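To make the run-time detection point concrete, here is a rough sketch (my own toy names, not PyCells or the Cells source) of the usual trick: keep a stack of cells whose formulas are currently being evaluated, and let every read of a cell's value record a dependency on whichever cell is on top of the stack. The branch the rule actually takes then decides what gets depended on.

```python
_stack = []  # cells whose formulas are currently being evaluated

class Cell(object):
    def __init__(self, formula=None, value=None):
        self.formula = formula
        self._value = value          # None doubles as "needs recompute" here
        self._dependents = set()

    @property
    def value(self):
        if _stack:
            # Some rule is reading us right now: record the dependency
            # at run time -- no dependsOn declaration anywhere.
            self._dependents.add(_stack[-1])
        if self.formula is not None and self._value is None:
            _stack.append(self)
            try:
                self._value = self.formula()
            finally:
                _stack.pop()
        return self._value

    def set(self, value):
        self._value = value
        self._invalidate_dependents()

    def _invalidate_dependents(self):
        for d in list(self._dependents):
            if d._value is not None:
                d._value = None      # stale; recomputed lazily on next read
                d._invalidate_dependents()
```

With `r = Cell(formula=lambda: c.value if a.value > b.value else d.value)`, the first evaluation under a > b records dependencies on a, b, and c only; once `a.set(...)` flips the comparison, re-evaluation records d as well. (A real engine would also drop stale dependencies and would not abuse None as the "stale" marker.)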
>
>
>>The other thing we want is (really inventing syntax here):
>>
>> on_change(a,new,old,old-bound?) print(list(new, old, old-bound?)
>
>
> Is the above what you want (you can also dynamically assign onChange later
> on, as required or have a list of procedures instead)?
Your onChange seems to be working fine. One thing we are glossing over
here is that we want to use this to extend the object system. In that
case, as I said and as no one bothered to comprehend, we want /slots/ to
behave like spreadsheet cells. Not globals. And I have found that these
onChange deals are most sensibly defined on slots, not cell by cell.
That said, if you did work something up similar for Python classes, i
have no doubt you could do that.
Again, if anyone is reading and not looking to just have a flamewar,
they will recall i have already done once a partial port of Cells to
python. (I should go look for that, eh? It might be two computer systems
back in the closet though. <g>)
>
>
>>Then the print statements Just Happen. ie, It is not as if we are just hiding
>>computed variables behind syntax and computations get kicked off when a value
>>is read. Instead, an underlying engine propagates any assignment throughout
>>the dependency graph before the assignment returns.
>
>
> Updating on write rather than recalculating on read does in itself not seem
> particularly complicated.
<heh-heh> Well, there are some issues. B and C depend on A. B also
depends on C. When A changes, you have to compute C before you compute
B, or B will get computed with an obsolete value of C and be garbage.
And you may not know when A changes that there is a problem, because as
you can see from my example (let me change it to be relevant):
(if (> a d) c e)
It may be that during the prior computation a was <= d and did /not/
depend on c, but with the new value a is > d and a new code branch
will be taken leading to c.
It was not that hard to figure all that out (and it will be easier for
you given the test case <g>) but I would not say propagation is
straightforward. There are other issues as well, including handling
assignments to cells within observers. This is actually useful
sometimes, so the problem needs solving.
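One well-known way to get that ordering right is to update in dependency-height order: a cell's height is one more than the greatest height of the cells it reads, and an eager push walks dependents from low height to high, so C is always refreshed before B ever runs. A toy sketch with made-up names and static wiring, just to show the ordering (not how Cells itself is implemented):

```python
import heapq

class Node(object):
    def __init__(self, formula=None, value=None, deps=()):
        self.formula = formula
        self.deps = list(deps)
        self.dependents = []
        for d in self.deps:
            d.dependents.append(self)
        # height = longest chain down to an input cell; updating in
        # increasing height order means a rule never runs before the
        # cells it reads have been refreshed
        self.height = 1 + max([d.height for d in self.deps] or [-1])
        self.value = formula() if formula is not None else value

def set_value(node, value):
    node.value = value
    # min-heap keyed on height; id() breaks ties between equal heights
    heap = [(n.height, id(n), n) for n in node.dependents]
    heapq.heapify(heap)
    done = set()
    while heap:
        _, _, n = heapq.heappop(heap)
        if id(n) in done:
            continue                 # already recomputed this round
        done.add(id(n))
        n.value = n.formula()        # its deps are guaranteed fresh here
        for m in n.dependents:
            heapq.heappush(heap, (m.height, id(m), m))
```

With A an input, C computed from A, and B computed from A and C: setting A queues both C (height 1) and B (height 2); C pops first, so B is computed exactly once, against the fresh C, and any on-change callback on B fires with a consistent value instead of garbage.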
>
>
>>My Cells hack does the above, not with global variables, but with slots (data
>>members?) of instances in the CL object system. I have thought about doing it
>>with global variables such as a and b above, but never really seen much of
>>need, maybe because I like OO and can always think of a class to create of
>>which the value should be just one attribute.
>
>
> OK, so in what way does the quick 35 line hack below also completely miss your
> point?
What is that trash talking? I have not seen your code before, so of
course I have never characterized it as completely missing the point.
Spare me the bullshit, OK?
Alexander, you are off to a, well, OK start on your own PyCells. You
have not made a complete mess of the low-hanging fruit, but neither have
you done a very good job. Requiring the user to declare dependencies was
weak -- I never considered anything that (dare I say it?) unscaleable.
Like GvR with Python, I knew from day one that Cells had to be very simple
on the user. Even me, their developer. But do not feel too bad, the GoF
Patterns book described more prior art (I might have mentioned) and they
had explicit (and vague) subscribe/unsubscribe requirements.
As for the rest of your code, well, propagation should stop if a cell
recomputes the same value (it happens). And once you have automatic
dependency detection, well, if the rule is (max a b) and it turns out
that b is just 42 (you had cell(lambda: 1)... why not just cell(1) or
just 1), then do not record a dependency on b. (Another reason why the
user cannot code dependsOn -- it is determined at run time, not by
examination of the code.)
Now we need to talk about filters on a dependency....
kenny (expecting more pissing and less reading of the extensive on-line
literature on constraints)
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton wrote:
> Alexander Schmolck wrote:
> > [trimmed groups]
> >
> > Ken Tilton <·········@gmail.com> writes:
> >
> >
> >>yes, but do not feel bad, everyone gets confused by the /analogy/ to
> >>spreadsheets into thinking Cells /is/ a spreadsheet. In fact, for a brief
> >>period I swore off the analogy because it was so invariably misunderstood.
> >>Even Graham misunderstood it.
> >
> >
> > Count me in.
>
> <g> But looking at what it says: "Think of the slots as cells in a
> spreadsheet (get it?), and you've got the right idea. ", if you follow
> the analogy (and know that slot means "data member" in other OO models)
> you also know that Serge's Spreadsheet example would have scored a big
> fat zero on the Miller Analogy Test. Serge in no way made slots in
> Python classes behave like cells in a spreadsheet. He simply started
> work on a Spreadsheet application, using Python classes along the way. Bzzt.
>
> While everyone makes the mistake, it is only because few of us (me
> included) read very carefully. Especially if they are more interested in
> flaming than learning what someone is saying.
>
I don't really mean any disrespect here, but if an analogy is not
interpreted correctly by a large group of people, the analogy is crap,
not the people. Yes, I understood it, specifically because I have spent
enough time dinking around with cell functions in a spreadsheet to
understand what you meant.
Maybe it would help to change the wording to "functions with cell
references in a spreadsheet" instead of "cells in a spreadsheet". Yes,
you lose the quippy phrasing but as it is most people use spreadsheets
as "simple database with informal ad hoc schema" and mostly ignore the
more powerful features anyways, so explicit language would probably
help the analogy. I'm guessing if you made some vague allusions to how
"sum(CellRange)" works in most spreadsheets people would get a better
idea of what is going on.
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <0UG7g.8$u17.6@fe08.lga>
Adam Jones wrote:
> Ken Tilton wrote:
>
>>Alexander Schmolck wrote:
>>
>>>[trimmed groups]
>>>
>>>Ken Tilton <·········@gmail.com> writes:
>>>
>>>
>>>
>>>>yes, but do not feel bad, everyone gets confused by the /analogy/ to
>>>>spreadsheets into thinking Cells /is/ a spreadsheet. In fact, for a brief
>>>>period I swore off the analogy because it was so invariably misunderstood.
>>>>Even Graham misunderstood it.
>>>
>>>
>>>Count me in.
>>
>><g> But looking at what it says: "Think of the slots as cells in a
>>spreadsheet (get it?), and you've got the right idea. ", if you follow
>>the analogy (and know that slot means "data member" in other OO models)
>>you also know that Serge's Spreadsheet example would have scored a big
>>fat zero on the Miller Analogy Test. Serge in no way made slots in
>>Python classes behave like cells in a spreadsheet. He simply started
>>work on a Spreadsheet application, using Python classes along the way. Bzzt.
>>
>>While everyone makes the mistake, it is only because few of us (me
>>included) read very carefully. Especially if they are more interested in
>>flaming than learning what someone is saying.
>>
>
>
> I don't really mean any disrespect here, but if an analogy is not
> interpreted correctly by a large group of people, the analogy is crap,
> not the people.
No, I do not think that follows. I reiterate: people (including me!) read
too quickly, and this analogy has a trap in it: spreadsheets are /also/
software.
The analogy is fine and the people are fine, but as you suggest there is
a human engineering problem to be acknowledged.
btw, I have a couple of links to papers on similar art and they all use
the spreadsheet metaphor. It is too good not to, but...
> Yes, I understood it, specifically because I have spent
> enough time dinking around with cell functions in a spreadsheet to
> understand what you meant.
>
> Maybe it would help to change the wording to "functions with cell
> references in a spreadsheet" instead of "cells in a spreadsheet".
<g> We could do a study. I doubt your change would work, but, hey, that
is what studies are for.
I think probably the best thing to do with the human engineering problem
is attack the misunderstanding explicitly. "Now if you are like most
people, you think that means X. It does not." And then give an example,
and then again say what it is not.
Anyone who comes away from /that/ with the wrong idea just is not trying.
But I would not put that in the project synopsis, and that is all the
original confused poster read. Just not trying.
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
From: Boris Borcic
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <4461b9d5_3@news.bluewin.ch>
Ken Tilton wrote:
> "Now if you are like most
> people, you think that means X. It does not."
As far as natural language and understanding are concerned, "to mean" means
conformity to what most people understand, Humpty Dumpties notwithstanding.
Cheers.
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <vyn8g.2$%L2.1@fe12.lga>
Boris Borcic wrote:
> Ken Tilton wrote:
>
>> "Now if you are like most people, you think that means X. It does not."
>
>
> As far as natural language and understanding are concerned, "to mean"
> means conformity to what most people understand, Humpty Dumpties
> notwithstanding.
Nonsense. You are confusing that quality of natural language with most
people's quality of being sloppy readers, or in your case, a sloppy
thinker. Misapplying an analogy is not a question of usage -- when I
said spreadsheet and they thought of spreadsheets, so far so good,
right? -- it is just sloppiness and laziness.
I do it, too, all the time. :) Life is too short, we get by precisely by
using partial information.
You remind me of American educators' recent (past several decades, that
is) history of apologizing for asking students to work and softening the
curriculum until they all get A's.
Here is another analogy. Sometimes people hit the gas and think they hit
the brake pedal. They crash around a parking lot pushing the gas pedal
down harder and harder. Did they take out the brake pedal to avoid that
confusion? No, they put an interlock between the brake and the ignition key.
Same thing. :)
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
From: Boris Borcic
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <44621c10_1@news.bluewin.ch>
Ken Tilton wrote:
>
>
> Boris Borcic wrote:
>> Ken Tilton wrote:
>>
>>> "Now if you are like most people, you think that means X. It does not."
>>
>>
>> As far as natural language and understanding are concerned, "to mean"
>> means conformity to what most people understand, Humpty Dumpties
>> notwithstanding.
>
> Nonsense.
:)
> You are confusing that quality of natural language with most
> people's quality of being sloppy readers, or in your case, a sloppy
> thinker. Misapplying an analogy is not a question of usage -- when I
> said spreadsheet and they thought of spreadsheets, so far so good,
> right?
No, as Adam Jones pointed out. Like Bush speaking of "crUSAde" after 9/11.
> -- it just sloppiness and laziness.
>
> I do it, too, all the time. :)
Right.
Ken Tilton <·········@gmail.com> writes:
> ps. flaming aside, PyCells really would be amazingly good for Python. And
> so Google. (Now your job is on the line. <g>) k
Here's something I wrote this week, mostly as a mental exercise ;-)
The whole code is available at <http://www.iki.fi/~lrasinen/cells.py>,
I'll include a test example below. Feel free to flame away ;-)
(As for background: I like CL better as a language, but I also like Python
a lot. However, I was employed for 3 years as a developer and maintainer
in a Python data mining application, so I'm more fluent in Python than CL.)
The code is mostly based on Kenny's descriptions of Cells in the following
messages:
<················@fe12.lga>
<···············@fe11.lga>
<··············@fe08.lga>
In addition, I have looked at the CL source code briefly, but I'm not sure
if any concepts have survived to the Python version. Since Python's object
model is sufficiently different, the system is based on rules being
defined per-class (however, if you define a rule by hand in the __init__
function, it'll work also. I think; haven't tested).
I can possibly be persuaded to fix bugs in the code and/or to implement
new features ;-)
Features:
- Tracks changes to input cells dynamically (normal attributes are not tracked)
- Callbacks for changes (see caveats)
- Requires Python 2.4 for the decorator syntax (@stuff)
- Should calculate a cell only once per change (haven't tested ;-)
Caveats:
- The input cell callbacks are not called with the class instance
as the first argument, while the rule cell callback are. This
is mostly due to laziness.
- There is no cycle detection. If you write cyclic dependencies, you lose.
- There is very little error checking.
Example follows:
def x_callback(oldval, newval):
    print "x changed: %s => %s" % (oldval, newval)

class Test(cellular):
    def __init__(self):
        self.x = InputCell(10, callback=x_callback)

    def y_callback(self, oldval, newval):
        print "y changed: %s => %s" % (oldval, newval)

    def a_callback(self, oldval, newval):
        print "a changed: %s => %s" % (oldval, newval)

    def g_callback(self, oldval, newval):
        print "g changed: %s => %s" % (oldval, newval)

    @rule(callback=y_callback)
    def y(self):
        return self.x ** 2

    @rule(callback=a_callback)
    def a(self):
        return self.y + self.x

    @rule(callback=g_callback)
    def g(self):
        if self.x % 2 == 0:
            return self.y
        else:
            return self.a
$ python cells.py
y changed: __main__.unbound => 100
a changed: __main__.unbound => 110
g changed: __main__.unbound => 100
=============
x changed: 10 => 4
y changed: 100 => 16
a changed: 110 => 20
g changed: 100 => 16
=============
x changed: 4 => 5
y changed: 16 => 25
a changed: 20 => 30
g changed: 16 => 30
--
Lasse Rasinen
········@iki.fi
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <i6T9g.228$qU1.90@fe08.lga>
Lasse Rasinen wrote:
> Ken Tilton <·········@gmail.com> writes:
>
>
>>ps. flaming aside, PyCells really would be amazingly good for Python. And
>>so Google. (Now your job is on the line. <g>) k
>
>
> Here's something I wrote this week, mostly as a mental exercise ;-)
It's fun, right? But what you have is a complete wreck. :)
> The whole code is available at <http://www.iki.fi/~lrasinen/cells.py>,
> I'll include a test example below. Feel free to flame away ;-)
>
> (As for background: I like CL better as a language, but I also like Python
> a lot. However, I was employed for 3 years as a developer and maintainer
> in a Python data mining application, so I'm more fluent in Python than CL.)
>
> The code is mostly based on Kenny's descriptions of Cells in the following
> messages:
> <················@fe12.lga>
> <···············@fe11.lga>
> <··············@fe08.lga>
>
> In addition, I have looked at the CL source code briefly, but I'm not sure
> if any concepts have survived to the Python version. Since Python's object
> model is sufficiently different, the system is based on rules being
> defined per-class...
That will be a total disaster for PyCells, if true. But I do not think
it is. You just need a constructor that takes some slot initializers,
and initialize the slots to one of: a normal value; an InputCell itself
initialized with a starting value, if only nil; or a RuledCell itself
initialized with a lambda.
Trust me, you lose a vast amount of power unless different instances of
the same class can have different rules for the same slot.
>... (however, if you define a rule by hand in the __init__
> function, it'll work also. I think; haven't tested).
>
> I can possibly be persuaded to fix bugs in the code and/or to implement
> new features ;-)
PyCells looks like it will be a project for SoC2006, so you may as well
relax. But I understand if you want to keep going, it is great fun. btw,
I have met more than a few people who had done something like Cells
independently, and there are many full-blown similar implementations
around. Mine is just the best. <g> Kidding, i do not really know, there
are so many.
>
> Features:
> - Tracks changes to input cells dynamically (normal attributes are not tracked)
Ha! All your rules depend on the input cell itself! How about A depends
on B depends on C? :)
> - Callbacks for changes (see caveats)
> - Requires Python 2.4 for the decorator syntax (@stuff)
> - Should calculate a cell only once per change (haven't tested ;-)
Quite hard to test deliberately, but it happens "in nature". But it will
not happen until you do A->B->C. Once you have /that/ working, make A
the input, then have B and C both use A. But also have B use C, and
jiggle things around until A happens to think it should update B first,
then C. What happens is that B runs and uses C, but C has not been
updated yet. C is inconsistent with A, but is being used to calculate a
new value for B which does see the new value of A. Mismatch! B will get
sorted out in a moment when C gets recalculated and tells B to calculate
a second time, but meanwhile after the first recalculation of B the
on-change callback for that got invoked, missiles were launched, and
Moscow has been destroyed.
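[The update-ordering glitch described above can be made concrete with a small Python sketch. This is invented illustration code, not from any Cells implementation: A is an input, C = A * 10, B = A + C; if B is recomputed before C, B's on-change callback fires on an inconsistent value.]

```python
# Hypothetical sketch of the glitch: recomputing B before its
# dependency C briefly exposes an inconsistent value of B.

class Glitchy:
    def __init__(self):
        self.a = 1
        self.c = self.a * 10           # C depends on A
        self.b = self.a + self.c       # B depends on A and C
        self.b_history = []            # what B's on-change callback saw

    def set_a(self, value):
        self.a = value
        # Wrong order: B is recomputed before its dependency C.
        self.b = self.a + self.c       # uses a stale C!
        self.b_history.append(self.b)  # "missiles were launched"
        self.c = self.a * 10           # C catches up...
        self.b = self.a + self.c       # ...and B is sorted out a moment later
        self.b_history.append(self.b)

g = Glitchy()
g.set_a(2)
assert g.b_history == [12, 22]         # first value was inconsistent (2 + stale 10)
```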
>
> Caveats:
> - The input cell callbacks are not called with the class instance
> as the first argument, while the rule cell callback are. This
> is mostly due to laziness.
And unacceptable!
have fun. :)
kenny
ps. In the getattr for any Cell-mediated slot, look to see if "parent"
is non-nil. If so, set up a dependency. k
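[Kenny's postscript, setting up a dependency inside getattr while a rule runs, can be sketched roughly like this in Python. All names here (Cell, get, set, users) are invented for illustration, not the actual PyCells API, and invalidation does not cascade as a real system would.]

```python
# Hypothetical sketch: while a rule runs, a module-level "current cell"
# is set; any cell read during that window records the reader as a user.

_current = None                      # the cell whose rule is running now

class Cell:
    def __init__(self, value=None, rule=None):
        self.rule = rule
        self._value = value
        self.users = set()           # cells that depend on this one

    def get(self):
        global _current
        if _current is not None:     # a rule is running: record the edge
            self.users.add(_current)
        if self.rule is not None and self._value is None:
            prev, _current = _current, self
            try:
                self._value = self.rule()   # reads inside register deps
            finally:
                _current = prev
        return self._value

    def set(self, value):            # for input cells
        self._value = value
        for user in self.users:      # invalidate dependents (no cascade here)
            user._value = None

a = Cell(value=3)
b = Cell(rule=lambda: a.get() + 1)   # b discovers it depends on a
assert b.get() == 4
a.set(10)
assert b.get() == 11
```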
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton <·········@gmail.com> writes:
> > if any concepts have survived to the Python version. Since Python's object
> > model is sufficiently different, the system is based on rules being
> > defined per-class...
>
> That will be a total disaster for PyCells, if true. But I do not think it
> is. You just need a constructor that takes some slot initializers, and
> initialize the slots to one of: a normal value; an InputCell itself
> initialized with a starting value, if only nil; or a RuledCell itself
> initialized with a lambda.
Hmm, just tried it:
>>> class A(cells.cellular):
...     def __init__(self):
...         self.a = cells.InputCell(10)
...         self.b = cells.RuleCell(lambda self: self.a+1, self, None)
>>> a = A()
>>> a.a
10
>>> a.b
11
So it does work out-of-the-box ;-)
> PyCells looks like it will be a project for SoC2006, so you may as well
> relax.
You really want to start a SoC project on something that takes about two
weeks from an average Python programmer? What does the guy do for the rest
of the summer?
(I think I spent 4-5 hours on this actually sitting on the computer,
sandwiched between remodeling and cleaning and work. The rest of the two
weeks would be making it more robust ;-)
> > Features:
> > - Tracks changes to input cells dynamically (normal attributes are not tracked)
>
> Ha! All your rules depend on the input cell itself! How about A depends on
> B depends on C? :)
Oops. I'm sorry for the inaccurate terminology. They depend only on the
cells they use as inputs (their "children"), not only on InputCells.
(I use the parent-child terminology because as English is not my native
language, I had trouble remembering which depend* variable was which ;-)
> Quite hard to test deliberately, but it happens "in nature". But it will
> not happen until you do A->B->C. Once you have /that/ working, make A the
> input, then have B and C both use A. But also have B use C, and jiggle
> things around until A happens to think it should update B first, then C.
> What happens is that B runs and uses C, but C has not been updated yet. C
> is inconsistent with A, but is being used to calculate a new value for B
> which does see the new value of A. Mismatch! B will get sorted out in a
> moment when C gets recalculated and tells B to calculate a second time,
> but meanwhile after the first recalculation of B the on-change callback
> for that got invoked, missiles were launched, and Moscow has been
> destroyed.
If you check the testcase, you'll see there are such dependencies, and all
the callbacks fire just once (and in dependency-related order).
Furthermore, the timestamp mechanism SHOULD take care of those (if the
cell is older than its children, it gets recalculated before it will
provide any data, and thus C will get recalculated before B uses it).
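[The timestamp idea might look roughly like this in Python. A hedged sketch with invented names, not Lasse's actual code: each cell stamps itself when computed, and a rule cell freshens its children before answering, recomputing if any child is newer.]

```python
# Hypothetical sketch: pull-based recalculation ordered by timestamps,
# so C is refreshed before B reads it, and each cell computes once.

_clock = 0

def tick():
    global _clock
    _clock += 1
    return _clock

class TCell:
    def __init__(self, value=None, rule=None):
        self.rule = rule
        self._value = value
        self.deps = []                  # cells this rule reads
        self.stamp = tick()             # when we were last computed

    def set(self, value):               # for input cells
        self._value = value
        self.stamp = tick()

    def get(self):
        if self.rule is not None:
            for d in self.deps:
                d.get()                 # freshen children first
            if self._value is None or any(d.stamp > self.stamp
                                          for d in self.deps):
                self._value = self.rule()
                self.stamp = tick()
        return self._value

a = TCell(value=1)
c = TCell(rule=lambda: a.get() * 10)
c.deps = [a]
b = TCell(rule=lambda: a.get() + c.get())
b.deps = [a, c]
assert b.get() == 11    # 1 + 10
a.set(2)
assert b.get() == 22    # C refreshed before B reads it
```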
> ps. In the getattr for any Cell-mediated slot, look to see if "parent" is
> non-nil. If so, set up a dependency. k
Already done, see BaseCell.value() ;-)
--
Lasse Rasinen
········@iki.fi
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <44681F95.9040400@gmail.com>
Lasse Rasinen wrote:
> Ken Tilton <·········@gmail.com> writes:
>
>
>>>if any concepts have survived to the Python version. Since Python's object
>>>model is sufficiently different, the system is based on rules being
>>>defined per-class...
>>
>>That will be a total disaster for PyCells, if true. But I do not think it
>>is. You just need a constructor that takes some slot initializers, and
>>initialize the slots to one of: a normal value; an InputCell itself
>>initialized with a starting value, if only nil; or a RuledCell itself
>>initialized with a lambda.
>
>
> Hmm, just tried it:
>
> >>> class A(cells.cellular):
> ...     def __init__(self):
> ...         self.a = cells.InputCell(10)
> ...         self.b = cells.RuleCell(lambda self: self.a+1, self, None)
> >>> a = A()
> >>> a.a
> 10
> >>> a.b
> 11
>
> So it does work out-of-the-box ;-)
So why exactly did you say that the differences in the object model made
it impossible? I was really stunned by that claim. And you sounded so
confident. What went wrong there? It was trivial, right? How did you
miss that?
>
>
>>PyCells looks like it will be a project for SoC2006, so you may as well
>>relax.
>
>
> You really want to start a SoC project on something that takes about two
> weeks ...
You sound so confident. :)
Do you know the deliverables? I know you do not know Cells. You say you
looked at the code -- it does not show. I can also tell you have not
done much serious programming, or you would know that twelve weeks is
more like twelve minutes than three months.
A new test suite, documentation (a first, there is none now), a full
port of Cells in all their ten years of sophisticated evolution and
variety (no, not your stupid pet trick), and as a demo project an entire
cells-driven GUI, probably a port of my new Celtk (+ Cells Tk) work, all
in a language without macros, without special variables, with a more
modest OO system, and limited first class functions... oh, I think we'll
keep him busy. :)
Now since you are such a genius, maybe you can help with something.
Trust me on this: this is one place where macros would be able to hide a
ton of implementation wiring it does no one any good to look at, and
actually turns into a maintenance nightmare whenever Cells might get
revised.
Is there any experimental macro package out there for Python? Maybe a
preprocessor, at least? Or are there ways to actually hack Python to
extend the syntax? My honest guess is that Cells will port readily to
Python but leave everyone very interested in finding some way to hide
implementation boilerplate. Got anything on that?
kenny
Nothing you have described sounds that complicated, and you never come
up with concrete objections to other people's code (apart from the fact
that it took 10 years to write in Lisp, so it must be really hard).
Why are you running a SoC project for PyCells if you dislike the
language so much? People who do like Python can implement it if they
need it (and I haven't seen any good examples that they do).
Please don't force a student to create a macro system just to port a
system to Python, as it won't really be Python then. Use Pythonic
methodology instead. There are already plenty of ways to hide
complicated functionality, just not necessarily the way you want to do
it.
Cheers,
Ben
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <MD2ag.777$8K5.396@fe09.lga>
Ben wrote:
>
> Nothing you have described sounds that complicated, and you never come
> up with concrete objections to other people's code (apart from the fact
> that it took 10 years to write in Lisp, so it must be really hard).
Oh, now I have to spend an hour dissecting any code you people toss off
that does no more than pick the low-hanging fruit? I do not spend enough
time on Usenet already? :)
>
> Why are you running a SoC project for PyCells...
You do not even know what Cells are and have not taken the trouble to
understand, so I will save my breath. Pythonistas will love PyCells, I
promise. Please recall that it is not just me; there is a ton of prior
and current art.
> if you dislike the
> language so much.
There is a difference between disliking a language and thinking PyCells
might end up persuading folks that macros and/or true lambda might be
worth the trouble to extend the language.
Try to think a little more precisely, OK? Thx.
> People who do like Python can implement it if they
> need it (which I haven't seen any good examples that they do)
>
> Please don't force a student to create a macro system just to port a
> system to Python,
You are getting hysterical, sit down, breathe. I asked a question,
because (unlike you) I can see where this is going. But as you say...
> There are already plenty of ways to hide
> complicated functionality,
I know. And that is why the mentor is a Pythonista, not me. I made a
simple inquiry as to the options available should Python have trouble
hiding the wiring. Just looking ahead a little (as are the student and
mentor). Something wrong with thinking ahead a few moves?
You on the other hand have made up your mind about something you admit
you do not understand, have now ascribed to me a half dozen sentiments I
do not hold, and are feeling absolutely miserable because you think this
is a flamewar.
No, we are just discussing language syntax and how it impacts language
semantics, which has led inadvertently to a few of us starting an SoC
project to put together a Python version of a very successful dataflow
hack I did for Lisp.
I use it every day, and it just plain makes me smile. I wrote more code
than you can imagine Before Cells, and have now used them intensively
and in more ways than you can imagine since. Even if the wiring cannot
be hidden, the productivity win will be tremendous. Note that this
translates ineluctably to "programming will be more fun".
Even you will love them.
:)
kenny
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <dE3ag.34$RM1.15@fe12.lga>
Ken Tilton wrote:
>
>
> Ben wrote:
>
>>
>> Nothing you have described sounds that complicated, and you never come
>> up with concrete objections to other people's code (apart from the fact
>> that it took 10 years to write in Lisp, so it must be really hard).
>
>
> Oh, now I have to spend an hour dissecting any code you people toss-off
> that does no more than pick the low-hanging fruit? I do not spend enough
> time on Usenet already? :)
I want to clarify something. I did look at the code. It was the same
thing we had with Cells after four or five hours. Yet the author admitted
he had looked at the Cells source, so he should have known he had not
implemented, inter alia, synapses, kid-slotting, ephemerals, optional
laziness, and worst of all he had omitted the data integrity mechanism
encapsulated by with-integrity. In the next exchange we discover he
missed the ability to author Python instances individually, which he
mistakenly thought was impossible.
Exactly how much time am I supposed to spend on someone not willing to
spend enough time to understand Cells? I recognize, tho, a kindred
spirit more interested in writing their own code than reading and
understanding someone else's. :)
You too are more eager to flame me over misperceived slights to The
Sacred Python than to see Python get a wicked cool constraints package,
and I am wasting too much time on you. I recognize, tho, a fellow Usenet
flamewar enthusiast. :)
kenny
Ok, I'm sorry. This kind of discussion between two groups of people,
neither of whom know the other's language very well, just winds me
up something chronic! It wasn't your post as such, just reading through
most of the thread in one go.
That will teach me to post while cross :). Sorry for any offence.
Anything that makes programming more fun is good, and I hope the Lisp
true way explodes into my head at some point.
Cheers,
Ben
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <W4nag.20$UI3.14@fe09.lga>
Ben wrote:
> This kind of discussion between two groups of people,
> neither of whom know the other's language very well, just winds me
> up something chronic!
I must say, it is pretty funny how a flamewar turned into a pretty
interesting SoC project.
> Anything that makes programming more fun is good, and I hope the Lisp
> true way explodes into my head at some point.
Here is an excerpt of an excerpt from the famous Gears demo. I notice it
really makes concrete what Lispniks are talking about in re macros and
multi-line lambda (pity about the formatting):
(defmodel gears-demo (window)
  ((gear-ct :initform (c-in 1) :accessor gear-ct :initarg :gear-ct))
  (:default-initargs
      :title$ "Rotating Gear Widget Test"
    :kids (c? (the-kids
               (mk-stack (:packing (c?pack-self))
                 (mk-row ()
                   (mk-button-ex (" Add " (incf (gear-ct .tkw))))
                   (mk-button-ex ("Remove" (when (plusp (gear-ct .tkw))
                                             (decf (gear-ct .tkw)))))
                   (mk-entry :id :vtime
                             :md-value (c-in "10")))
                 (make-instance 'gears
                                :fm-parent *parent*
                                :width 400 :height 400
                                :timer-interval (max 1
                                                     (or (parse-integer (fm^v :vtime)
                                                                        :junk-allowed t)
                                                         0))))))))
Don't worry, Lispniks cannot read that either. It is a declarative
construction of a hierarchical GUI. That is such a common task, that I
have rolled up a bunch of GUI-building macrology so that just the stuff
specific to this GUI gets typed in. Since the Gears widget is a custom
widget I have no macrology for that, and the wiring shows in the
expression ":fm-parent *parent*" (which itself leverages Lisp special
variables).
And no, I cannot remember all my macrology. I can certainly read it and
easily modify my GUI, because all the wiring is hidden, but if I have to
build a new GUI I cut and paste from other GUIs.
Let's look at just one form, which I believe destroys Alex's whole case
for naming every lambda:
(mk-button-ex ("Remove" (when (plusp (gear-ct .tkw))
(decf (gear-ct .tkw)))))
"mk-button-ex" (a) makes fun of MS$ naming standards and (b) expands to:
(make-instance 'button
:fm-parent *parent*
:text "remove"
:on-command (c? (lambda (self)
(when (plusp (gear-ct .tkw))
(decf (gear-ct .tkw))))))
The above is what one really needs to write to stick something in my GUI
framework, but who wants to look at all of that when most of it is
boilerplate? I need ":fm-parent *parent*" on every label and widget
because of some internals requirements, I just do not want to look at it
or have to remember to code it all the time (the latter not being a huge
problem because I really am cutting/pasting when I build a new GUI).
Is mk-button-ex some mysterious new language construct that will make
multi-programmer projects collapse in a heap of programmer-specific
constructs inscrutable to anyone else on the team?
(a) mk-button-ex kinda tells you (1) it makes a button and (2) no, this
is not part of Common Lisp, so where is the confusion?
(b) control-alt-. in my IDE shows me:
(defmacro mk-button-ex ((text command) &rest initargs)
`(make-instance 'button
:fm-parent *parent*
:text ,text
:on-command (c? (lambda (self)
(declare (ignorable self))
,command))
,@initargs))
Looks a lot like the expansion, right? That is really important in
making macrology easy. Once one has mastered the syntax (` , @), writing
a macro gets as natural as writing out the code. (In case you are
wondering, in my little example I did not need any other customizations
on the button, so it is hard to make out what the initargs are doing up
there.) Here is how they would work (also getting a little fancier by
actually disabling the "Remove" button, not just making it do nothing
when pressed, if the gear count is zero):
(mk-button-ex ("Remove" (decf (gear-ct .tkw)))
:fore-color 'red ;; Tcl/Tk will understand
:enabled (c? (plusp (gear-ct .tkw))))
becomes:
(make-instance 'button
:fm-parent *parent*
:text "remove"
:on-command (c? (lambda (self)
(decf (gear-ct .tkw))))
:fore-color 'red
:enabled (c? (plusp (gear-ct .tkw))))
[ps. Do not try that at home, I invented the enabled thing. It really
should be the Tk syntax, which I forget.]
ie, I created mk-button-ex because, jeez, every button I put in a GUI I
/know/ needs its own label and its own command (and the parent thing),
but there are other options, too. They have to be supported if the macro
is to get used all the time (we want that), but I do not want to make
them positional arguments without identifying keywords, because then the
code would be unreadable as well as unwritable (from memory).
Needless to say, there is more macrology in the expansion. One bit of
fun is .tkw. Background: in the GUIs I roll, a widget always knows its
parent. I guess you noticed. Anyway, because of that, a rule can kick
off code to navigate to any other widget and get information, ie, it
pretty much has global scope and that means power. Now one place a rule
will often look is up the hierarchy, say to a containing radio group for
a radio button. So the first thing I wrote was:
(upper self radio-group) -> (container-typed self 'radio-group)
Hmmph. I am /always/ upper-ing off self, I am surprised I have not
written a macro so I can just do (^upper radio-group). Soon. But since I
often look to the containing window in rules, I was looking for
something insanely short, shorter than even parentheses would allow,
like ".tkw":
(define-symbol-macro .tkw (nearest self window))
Nearest is like "upper" except that it is inclusive of the starting
point of the search (and, no, I am not happy with the name <g>).
And so it goes with Lisp macros. All the tedious boilerplate is hidden,
in ways that cannot be done with functions. Oh, I skipped that point.
Look again at how the command (decf (gear-ct .tkw)) gets spliced into
the lambda form as so much source code:
From:
(mk-button-ex ("Remove" (decf (gear-ct .tkw))))
To:
(make-instance 'button
:fm-parent *parent*
:text "remove"
:on-command (c? (lambda (self)
(decf (gear-ct .tkw)))))
If mk-button-ex were a function, Lisp would try to evaluate
(decf (gear-ct .tkw))
which would be pretty sad because the "self" in there would not even
exist yet (the form is an /input/ to make-instance of what will become
"self" by the time the macro expansion code runs).
Which brings us to the idea of every multi-line lambda needing a name.
Does this:
(lambda (self)
  (when (plusp (gear-ct .tkw))
    (decf (gear-ct .tkw))))
Not sure what the Python would be, but maybe:
lambda (self):
if nearest(self,'window').gear_ct > 0:
nearest(self,'window').gear_ct = \
nearest(self,'window').gear_ct - 1
Does that need a name?
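[For what it is worth, a Python lambda only allows a single expression, so Kenny's sketch above would need either a def or an expression-only rewrite. A minimal runnable sketch, where Window and nearest() are invented stand-ins rather than any real GUI framework:]

```python
# Hypothetical stubs: Window and nearest() stand in for the framework.

class Window:
    gear_ct = 3                      # invented gear count

def nearest(widget, kind):
    # Stub for the framework's nearest(); assumes the widget we are
    # handed is already the window we are looking for.
    return widget

# The statement version needs a name (def):
def decrement_gear_ct(self):
    w = nearest(self, 'window')
    if w.gear_ct > 0:                # the (plusp ...) guard
        w.gear_ct = w.gear_ct - 1

# ...or the guard can be folded into an expression to stay anonymous
# (max() gives the same never-below-zero behavior as the guard):
on_command = lambda self: setattr(
    nearest(self, 'window'), 'gear_ct',
    max(0, nearest(self, 'window').gear_ct - 1))

w = Window()
decrement_gear_ct(w)                 # 3 -> 2
on_command(w)                        # 2 -> 1
assert w.gear_ct == 1
```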
kenny
[I trimmed some of the newsgroups away; this mostly concerns Python and Lisp]
Ken Tilton <·········@gmail.com> writes:
> Lasse Rasinen wrote:
> > Ken Tilton <·········@gmail.com> writes:
> >
> >>>if any concepts have survived to the Python version. Since Python's object
> >>>model is sufficiently different, the system is based on rules being
> >>>defined per-class...
> >>
> >>That will be a total disaster for PyCells, if true. But I do not think it
> >>is. You just need a constructor that takes some slot initializers, and
> >>initialize the slots to one of: a normal value; an InputCell itself
> >>initialized with a starting value, if only nil; or a RuledCell itself
> >>initialized with a lambda.
> > Hmm, just tried it:
> >
> > [snip example]
> >
> > So it does work out-of-the-box ;-)
>
> So why exactly did you say that the differences in the object model made
> it impossible? I was really stunned by that claim. And you sounded so
> confident. What went wrong there? It was trivial, right? How did you miss
> that?
Simple: I didn't think to try that before you asked.
I did not say the differences in the object model made it impossible; I
said the system is based on rules defined per-class. My goal was to
explore how one would go about defining data flow in Python, which is
why I concentrated on class-level definitions first.
The situation is similar to functions in classes. Normally you'd define
them like this:
class X:
    def function(self, ...):
        ...
However, you can just as well set them, even on a per-instance basis:
x = X()
x.another_function = lambda self, x: x + 2
I also think(*) that while one would have the option to define
per-instance rules, in a Python implementation one would structure the
code so that common rules would be class-related, and per-instance
rules would be used less frequently, only when you absolutely need
them.
(*) Unfounded Gut Feeling(TM); if your project is successful, we can
revisit this prediction in September ;-)
> >>PyCells looks like it will be a project for SoC2006, so you may as well
> >>relax.
> > You really want to start a SoC project on something that takes about two
> > weeks ...
>
> You sound so confident. :)
Indeed. Having reread the post I do think the tone was possibly a tad
arrogant. However:
> A new test suite, documentation (a first, there is none now), a full port
> of Cells in all their ten years of sophisticated evolution and variety
> (no, not your stupid pet trick), and as a demo project an entire
> cells-driven GUI, probably a port of my new Celtk (+ Cells Tk) work, all
> in a language without macros, without special variables, with a more
> modest OO system, and limited first class functions... oh, I think we'll
> keep him busy. :)
I did not know all this. The list above does sound like a full summer of
work ;)
I assumed that PyCells referred to the core dependency tracking module,
which (even now) does not sound like such a huge task, especially when
one has the reference implementation ;-)
As I said above, I was mostly concerned with exploring how the data flow
system would work, so I haven't implemented the various different types of
cells in Cells. So there would obviously be a bit more work if one is
implementing all that, but still, two caffeine and youth powered student
weeks can achieve a lot ;-)
What would "your stupid pet trick" be referring to? The use of the
__getattribute__ method (which is quite close to SLOT-VALUE-USING-CLASS),
or the use of @decorator syntax to reduce typing and redundancy?
> Now since you are such a genius, maybe you can help with something. Trust
> me on this: this is one place where macros would be able to hide a ton of
> implementation wiring it does no one any good to look at, and actually
> turns into a maintenance nightmare whenever Cells might get revised.
I'd probably use decorators a lot, since they let you play around with
function objects. If they are suitably chosen and designed, the interface
should stay pretty stable even while the wiring is changed.
(Decorator background:
The regular Python function definition would be
as follows:
def foo(x,y,z):
...
would be in CL (if CL were Lisp-1 with symbol-function-or-value) more or less:
(setf (symbol-function-or-value 'foo) (lambda (x y z) ...))
The decorated syntax would be:
@magic_decorator
def foo(x,y,z):
...
and the translation:
(setf (symbol-function-or-value 'foo)
(funcall magic-decorator #'(lambda (x y z)
...)))
)
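[To make the decorator idea above concrete, here is a hedged sketch (all names invented, not the actual PyCells API) of a decorator that tags methods as cell rules so a base class can collect them later, one Pythonic way to hide wiring without macros:]

```python
# Hypothetical sketch: @rule tags a method; the Model base class collects
# every tagged method into self.rules. Invented names, not real PyCells.

def rule(fn):
    fn.is_rule = True                # tag the function object
    return fn

class Model:
    def __init__(self):
        # gather the methods tagged by @rule from this class's own dict
        self.rules = {name: meth
                      for name, meth in type(self).__dict__.items()
                      if getattr(meth, 'is_rule', False)}

class Gears(Model):
    def __init__(self):
        self.gear_ct = 1
        super().__init__()

    @rule
    def title(self):
        return "Gears: %d" % self.gear_ct

g = Gears()
assert 'title' in g.rules
assert g.rules['title'](g) == "Gears: 1"
```

[One limitation worth noting: type(self).__dict__ only sees the class's own methods, not inherited ones; a fuller version would walk the MRO.]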
--
Lasse Rasinen
········@iki.fi
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <O15ag.337$RM1.120@fe12.lga>
Lasse Rasinen wrote:
> [I trimmed some of the newsgroups away; this mostly concerns Python and Lisp]
>
> Ken Tilton <·········@gmail.com> writes:
>
>
>>Lasse Rasinen wrote:
>>
>>>Ken Tilton <·········@gmail.com> writes:
>>>
>>>
>>>>>if any concepts have survived to the Python version. Since Python's object
>>>>>model is sufficiently different, the system is based on rules being
>>>>>defined per-class...
>>>>
>>>>That will be a total disaster for PyCells, if true. But I do not think it
>>>>is. You just need a constructor that takes some slot initializers, and
>>>>initialize the slots to one of: a normal value; an InputCell itself
>>>>initialized with a starting value, if only nil; or a RuledCell itself
>>>>initialized with a lambda.
>>>
>>>Hmm, just tried it:
>>>
>>>[snip example]
>>>
>>>So it does work out-of-the-box ;-)
>>
>>So why exactly did you say that the differences in the object model made
>>it impossible? I was really stunned by that claim. And you sounded so
>>confident. What went wrong there? It was trivial, right? How did you miss
>>that?
>
>
> Simple: I didn't think to try that before you asked.
>
> I did not say the differences in the object model made it impossible, I
> said the system is based on rules defined per-class.
Oh, please: "Since Python's object model is sufficiently different, the
system is based on rules being defined per-class".
> I also think(*) that while one would have the option to define
> per-instance rules, in a Python implementation one would structure the
> code so that common rules would be class-related,..
What has Python got to do with it? What you are describing is the way
every OO system other than a prototype-based system works. It is why OO
failed to deliver on the Grail of object reuse: every time you need
different behavior, you need a new subclass. Literal values only go so
far in making instances authorable. But when an instance can have a rule
with itself as an argument, and when instances exist in a runtime
hierarchy navigable up and down such that rules have effectively global
scope, yowza, now you have authorability, and now you have object reuse.
> and the per-instance
> rules would be used less frequently, used only when you absolutely need
> them.
You will see, but not until you have the capability. Then you will
discover you absolutely need them all the time. GUIs are dynamic things,
so you cannot just author a widget with a literal value, it has to be a
rule sensitive to the state of other GUI elements and the model itself,
which might be changing underfoot in response to I/O events.
I happened to be discussing this just now over in comp.lang.tcl,
comparing Cells with Actions, a kindred package motivated by this:
"Two large problems exist for developers of such user interfaces. One is
the need to constantly synchronize the controls with the ever-changing
state of the application or data. When no text is selected, for example,
the cut and copy buttons should be disabled.
"Another problem is that as programs evolve over time it can become
tedious and error prone to update the parts of the code that act upon
these controls."
Found here:
http://www.tcl.tk/community/tcl2004/Papers/BryanOakley/oakley.pdf
>
> (*) Unfounded Gut Feeling(TM); if your project is successful, we can
> revisit this prediction in September ;-)
No need for gut feelings. Anyone working in GUIs knows the problem
(stated by Mr Oakley above) and many partial solutions exist, such as
Tcl/Tk's builtin mechanisms for automatic state management. They called
the company ActiveState for a reason, you know. :)
One person did his homework. Vasilsi Margioulas sent me cells-gtk, a
two-week marriage of Cells and Gtk derived from my Cells-Tk effort. He
was curious whether Cells would be useful. He decided: yes. :)
>
>
>>>>PyCells looks like it will be a project for SoC2006, so you may as well
>>>>relax.
>>>
>>>You really want to start a SoC project on something that takes about two
>>>weeks ...
>>
>>You sound so confident. :)
>
>
> Indeed. Having reread the post I do think the tone was possibly a tad
> arrogant. However:
>
>
>>A new test suite, documentation (a first, there is none now), a full port
>>of Cells in all their ten years of sophisticated evolution and variety
>>(no, not your stupid pet trick), and as a demo project an entire
>>cells-driven GUI, probably a port of my new Celtk (+ Cells Tk) work, all
>>in a language without macros, without special variables, with a more
>>modest OO system, and limited first class functions... oh, I think we'll
>>keep him busy. :)
>
>
> I did not know all this. The list above does sound like a full summer of
> work ;)
>
> I assumed that PyCells referred to the core dependency tracking module
> which (even now) does not sound like a such huge task, especially when one
> has the reference implementation ;-)
I think the real problem here is that you have no idea how fast twelve
weeks can go while programming. The hourly beep on my wristwatch sounds
like a metronome.
As for Cells being so damn easy, well, yeah, that is how we get sucked
into these ten year projects. Do you know how long Knuth thought he
would spend on TeX? Look it up. I have high hopes for you as a
developer, you are blessed with that essential cluelessness as to how
hard things will get before you are done.
If you want to insist on how perfect your code is, please go find
ltktest-cells-inside.lisp in the source you downloaded and read the long
comment detailing the requirements I have identified for "data
integrity". Then (a) tell me how your code fails at integrity, (b) fix
it, and (c) tell me again how easy Cells is. :)
Having the reference implementation is the only thing that makes this
conceivably doable in a summer. What you are missing is something I have
often also gotten wrong: the core, cool functionality always comes easy
in the proof-of-concept stage. We make these ridiculous extrapolations
from that to a shipped product and come in five times over budget.
Not just the extra drudgery (doc, testing) of a finished product, but
also what happens when you scale a package to real-world requirements. I
knew my system was as bad as yours at data integrity for seven years
before I finally hit an application where it mattered (RoboCup, of all
things).
>
> As I said above, I was mostly concerned with exploring how the data flow
> system would work, so I haven't implemented the various different types of
> cells in Cells. So there would obviously be a bit more work if one is
> implementing all that, but still, two caffeine and youth powered student
> weeks can achieve a lot ;-)
See above. That is the fun proof-of-concept bit. And go back further to
my point about you really not understanding time in the software
dimension: two weeks /is/ twelve weeks!
>
> What "your stupid pet trick" would be referring to?
Just picking off the low-hanging fruit. You ducked all the hard issues.
I think in another article in this thread I demonstrated how you
destroyed Moscow.
> The use of the
> __getattribute__ method (which is quite close to SLOT-VALUE-USING-CLASS),
> or the use of @decorator syntax to reduce typing and redundancy?
No, that stuff is fine.
>
>
>>Now since you are such a genius, maybe you can help with something. Trust
>>me on this: this is one place where macros would be able to hide a ton of
>>implementation wiring it does no one any good to look at, and actually
>>turns into a maintenance nightmare whenever Cells might get revised.
>
>
> I'd probably use decorators a lot, since they let you play around with
> function objects. If they are suitably chosen and designed, the interface
> should stay pretty stable even while the wiring is changed.
>
> (Decorator background:
>
> The regular python Python function definition would be
> as follows:
>
> def foo(x,y,z):
>     ...
>
> would be in CL (if CL were Lisp-1 with symbol-function-or-value) more or less:
>
> (setf (symbol-function-or-value 'foo) (lambda (x y z) ...))
>
> The decorated syntax would be:
>
> @magic_decorator
> def foo(x,y,z):
>     ...
>
> and the translation:
> (setf (symbol-function-or-value 'foo)
>       (funcall magic-decorator #'(lambda (x y z)
>                                    ...)))
> )
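The decorator background quoted above can be checked concretely. The sketch below is hypothetical (`magic_decorator` is made up, here doubling the wrapped function's result); it just shows that the `@` line is ordinary function application, exactly as in the CL translation:

```python
# Hypothetical decorator: wraps a function so its result is doubled.
def magic_decorator(fn):
    def wrapper(*args, **kwargs):
        return 2 * fn(*args, **kwargs)
    return wrapper

@magic_decorator
def foo(x, y, z):
    return x + y + z

# The @ line is sugar for: foo = magic_decorator(foo)
# i.e. roughly (setf foo (funcall magic-decorator (lambda (x y z) ...)))
print(foo(1, 2, 3))  # -> 12, since (1 + 2 + 3) * 2
```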
Wow. I thought Python did not have macros.
This project is looking better all the time. Thx.
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton <·········@gmail.com> wrote:
+---------------
| Having the reference implementation is the only thing that makes this
| conceivably doable in a summer. What you are missing is something I have
| often also gotten wrong: the core, cool functionality always comes easy
| in the proof-of-concept stage. We make these ridiculous extrapolations
| from that to a shipped product and come in five times over budget.
+---------------
Or as Fred Brooks said it in "The Mythical Man-Month" [paraphrased],
if a program takes one unit of effort, a programming *system* takes
three units of effort, and a programming systems *product* takes nine
units of effort.
-Rob
-----
Rob Warnock <····@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
Ken Tilton <·········@gmail.com> writes:
> If you want to insist on how perfect your code is, please go find
> ltktest-cells-inside.lisp in the source you downloaded and read the long
> comment detailing the requirements I have identified for "data integrity".
> Then (a) tell me how your code fails at integrity, (b) fix it, and (c)
> tell me again how easy Cells is. :)
Found it and read it; it was most enlightening. I claim my system fulfills
the first three requirements(*), while most likely failing gloriously on
the last two.
I'll postpone (b) until I've had a chance to think it over(**), but in the
face of the evidence I'm willing to admit my earlier work estimates (which
should have had a smiley next to them anyway ;-) were in error.
I won't admit to destroying Moscow, though. See (*).
(*) All values depending on the changed cell are marked as invalid before
anything else is done; trying to access an invalid value forces a
recalculation which also marks the cell as valid again, and so on
recursively down to cells that have already converged to a proper value.
Setting input cells in callbacks will f*** that up, though.
(**) Which probably doesn't occur until July or something ;(
> > [Decorator example]
>
> Wow. I thought Python did not have macros.
Calling decorators macros is an insult to CL macros ;-)
All they do is make it a bit more convenient to apply transformations to
functions, or as the abstract in the original spec[1] says:
The current method for transforming functions and methods (for instance,
declaring them as a class or static method) is awkward and can lead to
code that is difficult to understand. Ideally, these transformations
should be made at the same point in the code where the declaration
itself is made.
> This project is looking better all the time. Thx.
In that case I have something else you might like: The "with" Statement[2]
From the abstract:
This PEP adds a new statement "with" to the Python language to make
it possible to factor out standard uses of try/finally statements.
In practice one can use it to implement some of the with-macros in a pretty
straightforward manner, especially those that are expanded into
(call-with-* (lambda () ,@body)). I believe the previously advertised
with-integrity macro could also be made to work in a satisfying manner.
To quote from the examples:
2. A template for opening a file that ensures the file is closed
when the block is left:
@contextmanager
def opened(filename, mode="r"):
    f = open(filename, mode)
    try:
        yield f
    finally:
        f.close()
Used as follows:
with opened("/etc/passwd") as f:
    for line in f:
        print line.rstrip()
Does that last part look familiar to you?
The problem with this is that this is Python 2.5 syntax, which currently
appears to be planned for release in late August, a bit late for SoC.
Alpha(*) versions are available, so if you want to take a chance and live
on the bleeding edge, you can probably gain from it.
(*) I don't keep a very close eye on the alpha releases, which is why I
didn't remember this yesterday. I like my programming tools steady
and stable ;-)
[1] http://www.python.org/dev/peps/pep-0318/#abstract
[2] http://www.python.org/dev/peps/pep-0343/
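For what it's worth, the PEP 343 example quoted above survives into released Pythons as `contextlib.contextmanager`. A self-contained sketch (Python 3 syntax, writing a throwaway temp file instead of reading /etc/passwd):

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def opened(filename, mode="r"):
    f = open(filename, mode)
    try:
        yield f          # the with-block body runs here
    finally:
        f.close()        # always runs, like unwind-protect's cleanup

path = os.path.join(tempfile.gettempdir(), "pep343_demo.txt")
with opened(path, "w") as f:
    f.write("hello\n")

with opened(path) as f:
    lines = [line.rstrip() for line in f]

print(lines)       # -> ['hello']
print(f.closed)    # -> True: the finally clause closed the file
os.remove(path)
```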
--
Lasse Rasinen
········@iki.fi
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <446A3148.4020801@gmail.com>
Lasse Rasinen wrote:
> Ken Tilton <·········@gmail.com> writes:
>
>
>>If you want to insist on how perfect your code is, please go find
>>ltktest-cells-inside.lisp in the source you downloaded and read the long
>>comment detailing the requirements I have identified for "data integrity".
>>Then (a) tell me how your code fails at integrity, (b) fix it, and (c)
>>tell me again how easy Cells is. :)
>
>
> Found it and read it; it was most enlightening. I claim my system fulfills
> the first three requirements(*), while most likely failing gloriously on
> the last two.
<sigh>
From #1: "recompute all and (for efficiency) only state computed off X
(directly or indirectly through some intermediate datapoint)"
Bzzzt! Another 47hrs, directions from a mentor, and a reference
implementation and you /still/ do not even understand the requirements,
let alone have a working port. The good news is that it is not an
integrity requirement that is being missed, it is the efficiency
requirement I snuck in there. The bad news is, see below.
Want to find the efficiency shortcoming yourself, or should I tell you?
You are entitled to the latter given the rules of the game (simulating a
pythonista student making off with five thousand undeserved dollars). :)
>
> I'll postpone (b) until I've had a chance to think it over(**), but in the
> face of the evidence I'm willing to admit my earlier work estimates (which
> should have had a smiley next to them anyway ;-) were in error.
Aw, shucks, then I will admit that, before today, I never actually
looked at your code. :)
Well, I followed the URL and glanced at it, but I missed the use of the
timestamp. Speaking of which, Holy Granularity, Batman! You use
time.time() to determine currency of a computation?!:
"time()
Return the time as a floating point number expressed in seconds
since the epoch, in UTC. Note that even though the time is always
returned as a floating point number, not all systems provide time with a
better precision than 1 second."
One /second/?!!!!! Exactly how slow is Python? I know you guys love that
issue as much as Lispniks. In un-compiled Lisp:
CTK(4): (loop repeat 2 do (print (get-internal-real-time)))
464033837
464033837
And you have no idea how slow PRINT is. btw, I thought Python was
portable. What is with the time module and "not all systems..."? Check
out the Lisp:
(defun zoom ()
  (loop with start = (get-internal-real-time)
        while (= start (get-internal-real-time))
        count 1 into cities-destroyed
        finally (format t "~a cities destroyed in 1/~a of a second"
                        cities-destroyed internal-time-units-per-second)))
internal-time-units-per-second is (from the standard):
"Constant Value:
A positive integer, the magnitude of which is
implementation-dependent. "
So we vary, too, but my Lisp has to tell me so I can normalize. Anyway,
running that repeatedly I get pretty wild variation. My high score is:
CTK(18): 11637 cities destroyed in 1/1000 of a second
My low was under a thousand! I guess I have to wait until we cross a
millisecond boundary:
(defun zoom ()
  (symbol-macrolet ((now (get-internal-real-time)))
    (loop with start = (loop for mid = now
                             while (= mid now)
                             finally (return now))
          while (= start now)
          count 1 into cities-destroyed
          finally (format t "~a cities destroyed in 1/~a of a second"
                          cities-destroyed
                          internal-time-units-per-second))))
Ok, now I am consistently taking out about 11.5k Russian cities. And you
need to fix your system.
Just use a counter -- does Python have bignums? If not, you'll have to
worry about wrapping. (The sound you hear is a project schedule
slipping. <g>)
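A sketch of that counter fix, assuming a minimal made-up Cell class rather than Lasse's actual code. Python integers are arbitrary-precision, so the bignum/wrapping worry is moot:

```python
from itertools import count

# One global, strictly increasing logical clock shared by all cells.
_pulse = count(1)

class Cell:
    """Minimal stand-in: a value plus the pulse at which it last changed."""
    def __init__(self, value=None):
        self.value = value
        self.changed_at = 0  # 0 = never set

    def set(self, value):
        self.value = value
        self.changed_at = next(_pulse)  # never ties, unlike time.time()

a, b = Cell(), Cell()
a.set(10)
b.set(20)
print(a.changed_at, b.changed_at)  # -> 1 2
assert b.changed_at > a.changed_at  # unambiguous ordering at any CPU speed
```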
>
> I won't admit to destroying Moscow, though. See (*).
Sorry, you actually /have/ violated the data integrity requirement. You
confess below to missing this one:
"a corollary: should a client observer SETF a datapoint Y, all the above
must happen with values current with not just X, but also with the value
of Y /prior/ to the change to Y."
Well, how can you claim integrity when some values do not get
recalculated until the world has moved on to state N+2, if you will?
State N+1 had the information that headed off the launch.
Bye bye, Kremlin.
The easiest way to construct such a scenario would be with an ephemeral
cell. As you know... oops. Maybe you do not. Well, spreadsheets are
kinda steady state in orientation. Given a world of other values, this
is what my value should be. But what about events? You have been using
your version of PyCells in real-world applications for a while... oops.
No you have not. Well, when you start trying to handle events from an
event loop, you will discover a need to model events. (Trust your
mentor.) You need a slot that can be assigned normally, propagate
according to the above rules and regulations, and then revert to a null
state but /not/ as a state change -- neither propagating nor
observer-notifying.
They are, like, ephemeral, aka, "fleeting".
Now as I said, I got away with such holes for /years/ in all sorts of
hairy applications of Cells before finally, well, falling down those
holes, but it does happen so the holes are really worth filling in.
And wait till you see what this does to your algorithm. :) I was even
tempted to ban observer writebacks, if you will, but I think someday I
will write an application driven by such things, periodically calling
Tcl_DoOneEvent to keep in touch with the outside world.
Which means we cannot propagate on the stack, as your system would.
<slip...slip...slip>
>
> (*) All values depending on the changed cell are marked as invalid before
> anything else is done; trying to access an invalid value forces a
> recalculation which also marks the cell as valid again, and so on
> recursively down to cells that have already converged to a proper value.
>
> Setting input cells in callbacks will f*** that up, though.
>
> (**) Which probably doesn't occur until July or something ;(
>
>
>>>[Decorator example]
>>
>>Wow. I thought Python did not have macros.
>
>
> Calling decorators macros is an insult to CL macros ;-)
> All they do is make it a bit more convenient to apply transformations to
> functions, or as the abstract in the original spec[1] says:
>
> The current method for transforming functions and methods (for instance,
> declaring them as a class or static method) is awkward and can lead to
> code that is difficult to understand. Ideally, these transformations
> should be made at the same point in the code where the declaration
> itself is made.
>
>
>>This project is looking better all the time. Thx.
>
>
> In that case I have something else you might like: The "with" Statement[2]
I had high hopes for that when I saw it, then was not sure. The good
news is that Serious Pythonistas will be doing this, not me. I will just
help them grok the requirements (and hope Ryan is taking notes <g>).
>
> From the abstract:
> This PEP adds a new statement "with" to the Python language to make
> it possible to factor out standard uses of try/finally statements.
>
> In practice one can use it to implement some of the with-macros in a pretty
> straightforward manner, especially those that are expanded into
> (call-with-* (lambda () ,@body)). I believe the previously advertised
> with-integrity macro could also be made to work in a satisfying manner.
>
> To quote from the examples:
>
> 2. A template for opening a file that ensures the file is closed
> when the block is left:
>
> @contextmanager
> def opened(filename, mode="r"):
>     f = open(filename, mode)
>     try:
>         yield f
>     finally:
>         f.close()
>
> Used as follows:
>
> with opened("/etc/passwd") as f:
>     for line in f:
>         print line.rstrip()
>
> Does that last part look familiar to you?
>
> The problem with this is that this is Python 2.5 syntax, which currently
> appears to be planned for release in late August, a bit late for SoC.
> Alpha(*) versions are available, so if you want to take a chance and live
> on the bleeding edge, you can probably gain from it.
That is up to the Pythonistas. As a rule, if there is advantage to be
had, I go with the bleeding edge until it proves to me it is unusable.
Thx for the heads up.
Nice job on the code, btw. A lot better than my first efforts.
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton wrote:
> Is there any experimental macro package out there for Python? Maybe a
> preprocessor, at least? Or are there ways to actually hack Python to
> extend the syntax?
Yes. I've just released EasyExtend that does this kind of job:
http://www.fiber-space.de/EasyExtend/doc/EE.html
It fits quite nicely with Python and is conceptually simple, safe, and
reasonably fast. Using EasyExtend, PyCells could be made its own language
(including Python) defined in a Python package (i.e. no C code or
complex build process is required). I would be interested in user
experience. I wouldn't consider EE "experimental", i.e. open to
severe changes. It still lacks some comfort, but it also improves
gradually in this respect.
Kay Schluehr wrote:
> http://www.fiber-space.de/EasyExtend/doc/EE.html
Well, I have not read that page yet, but the name "fiber space" reminds
me of old
memories, when I was doing less prosaic things than now. Old times ..
;)
Michele Simionato
Michele Simionato wrote:
> Kay Schluehr wrote:
> > http://www.fiber-space.de/EasyExtend/doc/EE.html
>
> Well, I have not read that page yet, but the name "fiber space" reminds
> me of old
> memories, when I was doing less prosaic things than now. Old times ..
> ;)
>
> Michele Simionato
But I guess at the time this stuff was taught to you as "fiber bundles",
right? Oh, yes. Old times ;)
Well, besides the cute rhyme on "cyberspace", I had this analogy in mind,
and that's why I called the extension languages "fibers". The
association is "fibers over a base space", or a base language as in this
case. The terms are not used strictly, of course; I could not even say
what triviality would mean in this context. But I am still seeking an
acceptable glue mechanism, which is beyond the scope of the current
first release :)
Kay
Alex Martelli wrote:
> Ken Tilton <·········@gmail.com> wrote:
> ...
>
>>True but circular, because my very point is that () was a great design
>>choice in that it made macros possible and they made CL almost
>>infinitely extensible, while indentation-sensitivity was a mistaken
>>design choice because it makes for very clean code (I agree
>>wholeheartedly) but placed a ceiling on its expressiveness.
>
>
> Having to give functions a name places no "ceiling on expressiveness",
> any more than, say, having to give _macros_ a name.
As was pointed out, if a Lisper wants anonymous macros, it's a cinch to
create a scheme to allow this. Whether or not it's a good idea in
general, if someone feels they need it, they can do it. An attitude like
that beats "No you can't do that, but you shouldn't do it anyway."
>>(defun |(| (aside) (format nil "Parenthetically speaking...~a." aside))
>>=> |(|
>>(|(| "your Lisp /is/ rusty.")
>>=> "Parenthetically speaking...your Lisp /is/ rusty.."
> Interestingly, the SECOND lisper to prove himself unable to read the
> very text he's quoting. Reread carefully, *USE THE ***SINGLE***
> CHARACTER* ... *AS AN ORDINARY IDENTIFIER*. What makes you read a
> ``PART OF'' that I had never written? You've shown how to use the
> characters as *PART* of an identifier [[and I believe it couldn't be the
> very start]], and you appear to believe that this somehow refutes my
> assertion?
Those vertical bars are just quoting characters, rather like the quoting
characters you'd undoubtedly need in Python to create a variable name
with a space in it.
> The point is, OF COURSE any design choice places limitations on future
> design choices; but some limitations are even DESIRABLE (a language
> where *every* single isolated character could mean anything whatsoever
> would not be "expressive", but rather totally unreadable) or at least
> utterly trivial (syntax-sugar level issues most typically are).
Yeah, a language like that would probably have some strange name like
"TeX." But we'd never know, because nobody would ever use it.
> Yes, we are, because the debate about why it's better for Python (as a
> language used in real-world production systems, *SCALABLE* to extremely
> large-scale ones) to *NOT* be insanely extensible and mutable is a
> separate one -- Python's uniformity of style allows SCALABILITY of
> teams, and teams-of-teams, which is as crucial in the real world as
> obviously not understood by you (the law you misquoted was about adding
> personnel to a LATE project making it later -- nothing to do with how
> desirable it can be to add personnel to a large and growing collection
> of projects, scaling and growing in an agile, iterative way to meet
> equally growing needs and market opportunities).
You seem to be using the most primitive definition of scalable that
there is, viz., if one coder can write one one-page program in a day, two
coders can write two (or more likely 1.8) one-page programs in a day.
Lispers tend to the view that *OF COURSE* most other decent languages
scale linearly, but what we want is force multipliers and exponential
scaling, not "throw money at it" scaling. Thus program writing programs,
program analyzing programs, compiled domain specific languages and great
quotes like
"I'd rather write programs to write programs than write programs."
"Programs that write programs are the happiest programs in the world."
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87wtcyy5i0.fsf@rpi.edu>
·······@yahoo.com (Alex Martelli) writes:
> Yes, we are, because the debate about why it's better for Python (as a
> language used in real-world production systems, *SCALABLE* to extremely
> large-scale ones) to *NOT* be insanely extensible and mutable is a
> separate one -- Python's uniformity of style allows SCALABILITY of
> teams, and teams-of-teams, which is as crucial in the real world as
> obviously not understood by you (the law you misquoted was about adding
> personnel to a LATE project making it later -- nothing to do with how
> desirable it can be to add personnel to a large and growing collection
> of projects, scaling and growing in an agile, iterative way to meet
> equally growing needs and market opportunities).
>
> This specific debate grew from your misuse of "scalable" to mean or
> imply "a bazillion feechurz can [[and, implicitly, should]] be added to
> a language, and therefore anything that stands in the way of feechuritis
> is somehow holding the language back". That's bad enough, even though
> in its contextual misuse of "scalable" it breaks new ground, and I don't
> want to waste even more time re-treading *old* ground as to whether the
> "*insane* extensibility" afforded by macros is a good or a bad thing in
> a language to be used for real-world software production (as opposed to
> prototyping and research).
It's interesting how much people who don't have macros like to put
them down and treat them as some arcane art that is too "*insane*"ly
powerful to be used well.
They're actually very straightforward and can often (shock of shocks!)
make your code more readable, without your efficiency taking a hit.
For example, at work I recently had to generate PDF reports in PHP.
Certain sections would need to be indented, and then once they were
done, I wanted to move back out to the previous level of indentation.
I ended up with stuff like this (totally made up on the spot, but
conveys the general idea):
function out_main_text() {
    $old_indent = $pdf->indentation;
    $pdf->indent_to( $pdf->indentation + 4 );
    out_header();
    out_facts();
    $pdf->set_indentation( $old_indent );
}

function out_header() {
    $old_indent = $pdf->indentation;
    $pdf->indent_to( $pdf->indentation + 4 );
    $pdf->write( "some text" );
    $pdf->set_indentation( $old_indent );
}

function out_facts() {
    $old_indent = $pdf->indentation;
    $pdf->indent_to( $pdf->indentation + 4 );
    out_some_subsection();
    out_another_subsection();
    $pdf->set_indentation( $old_indent );
}
Obviously, this is very much pseudocode. The point is that managing
indentation was a hassle, because each of these subfunctions indents
to a new position. This can pretty clearly get tedious, and is
definitely error-prone, especially when you consider that different
groups of functions are called depending upon the input and that some
of the functions might return early.
But why should I have to worry about any of this? Why can't I do:
(with-indentation (pdf (+ (indentation pdf) 4))
  (out-header)
  (out-facts))
and then within, say out-facts:
(with-indentation (pdf (+ (indentation pdf) 4))
  (write pdf "some text"))
More readable, and no bookkeeping to worry about. This is great! And
here's the macro:
(defmacro with-indentation ((pdf new-value) &body body)
  (let ((old-indent (gensym)))
    `(let ((,old-indent (indentation ,pdf)))
       (setf (indentation ,pdf) ,new-value)
       (unwind-protect (progn ,@body)
         (setf (indentation ,pdf) ,old-indent)))))
Bam, all of that bookkeeping, all of those potential errors have taken
care of themselves. WITH-INDENTATION will expand into code that uses
UNWIND-PROTECT to ensure that the indentation always gets returned to
its previous value, even if an exception occurs or the code within
calls RETURN. The WITH-INDENTATION call sets up an environment where
there is a new indentation level in effect, and then cleans it up when
it's done. I can nest these to my heart's content.
Obviously, to someone totally unfamiliar with Lisp, the contents of
that macro are pretty daunting. But you're crazy if you argue that
having WITH-INDENTATION around isn't an improvement over manually
ensuring that indentation gets saved and restored for every function
call.
I could even generalize this (as CLISP does) to this:
(letf (((indentation pdf) (+ 4 (indentation pdf))))
  (write "some text"))
Now I can use LETF to temporarily set any value at all for as long as
the code inside is running, and to restore it when it's done.
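(For the Python side of the thread: the save-body-restore shape that WITH-INDENTATION and LETF capture corresponds to a context manager, though LETF generalizes to any setf-able place while the sketch below hard-codes one attribute. The `Pdf` class here is made up for illustration.)

```python
from contextlib import contextmanager

class Pdf:
    """Made-up stand-in with a single mutable attribute."""
    def __init__(self):
        self.indentation = 0

@contextmanager
def with_indentation(pdf, new_value):
    old = pdf.indentation
    pdf.indentation = new_value
    try:
        yield pdf
    finally:
        pdf.indentation = old  # restored even on exception or early return

pdf = Pdf()
with with_indentation(pdf, pdf.indentation + 4):
    assert pdf.indentation == 4
    with with_indentation(pdf, pdf.indentation + 4):  # nests freely
        assert pdf.indentation == 8
print(pdf.indentation)  # -> 0, back where we started
```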
Macros rock.
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
Bill Atkins wrote:
> But why should I have to worry about any of this? Why can't I do:
>
> (with-indentation (pdf (+ (indentation pdf) 4))
> (out-header)
> (out-facts))
>
> and then within, say out-facts:
>
> (with-indentation (pdf (+ (indentation pdf) 4))
> (write pdf "some text"))
>
> More readable, and no bookkeeping to worry about. This is great! And
> here's the macro:
. [...]
Can you explain to a non-Lisper why macros are needed for this? I'm a
Smalltalker, and Smalltalk has no macros, nor anything like 'em, but the
equivalent of the above in Smalltalk is perfectly feasible, and does not
require a separate layer of semantics (which is how I think of true macros).
aPdf
withAdditionalIndent: 4
do: [ aPdf writeHeader; writeFacts ].
and
aPdf
withAdditionalIndent: 4
do: [ aPdf write: '... some text...' ].
Readers unfamiliar with Smalltalk may not find this any easier to read than
your Lisp code, but I can assure them that to any Smalltalker that code would
be both completely idiomatic and completely transparent. (Although I think a
fair number of Smalltalkers would choose to use a slightly different way of
expressing this -- which I've avoided here only in order to keep things
simple).
> Macros rock.
I have yet to be persuaded of this ;-)
-- chris
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87d5eq6rx1.fsf@rpi.edu>
"Chris Uppal" <···········@metagnostic.REMOVE-THIS.org> writes:
> Bill Atkins wrote:
>
>> But why should I have to worry about any of this? Why can't I do:
>>
>> (with-indentation (pdf (+ (indentation pdf) 4))
>> (out-header)
>> (out-facts))
>>
>> and then within, say out-facts:
>>
>> (with-indentation (pdf (+ (indentation pdf) 4))
>> (write pdf "some text"))
>>
>> More readable, and no bookkeeping to worry about. This is great! And
>> here's the macro:
> . [...]
>
> Can you explain to a non-Lisper why macros are needed for this ? I'm a
> Smalltalker, and Smalltalk has no macros, nor anything like 'em, but the
> equivalent of the above in Smalltalk is perfectly feasible, and does not
> require a separate layer of semantics (which is how I think of true macros).
>
> aPdf
> withAdditionalIndent: 4
> do: [ aPdf writeHeader; writeFacts ].
>
> and
>
> aPdf
> withAdditionalIndent: 4
> do: [ aPdf write: '... some text...' ].
>
> Readers unfamiliar with Smalltalk may not find this any easier to read than
> your Lisp code, but I can assure them that to any Smalltalker that code would
> be both completely idiomatic and completely transparent. (Although I think a
> fair number of Smalltalkers would choose to use a slightly different way of
> expressing this -- which I've avoided here only in order to keep things
> simple).
To be honest, all I know about Smalltalk are the parts of it that were
grafted onto Ruby. But if your example is doing the same as my code,
then I can't say that there's much of an advantage to using a macro
over this.
>
>> Macros rock.
>
> I have yet to be persuaded of this ;-)
My favorite macro is ITERATE, which was written at MIT to add a
looping mini-language to CL. There is a macro called LOOP that is
part of Common Lisp, but ITERATE improves upon it in lots of ways.
The cool thing about ITERATE is that it lets you express looping
concepts in a language designed explicitly for such a purpose, e.g.
(iter (for x in '(1 3 3))
      (summing x)) => 7

(iter (for x in '(1 -3 2))
      (finding x maximizing (abs x))) => -3

(iter (for x in '(a b c 1 d 3 e))
      (when (symbolp x)
        (collect x))) => (a b c d e)
This is a tiny, tiny chunk of what ITERATE offers and of the various
ways these clauses can be combined. But it should be obvious that
being able to express loops this way is going to make your code much
more concise and more readable.
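For comparison, rough Python equivalents of the three clauses, using ordinary run-time library calls (which is exactly the trade-off under discussion: ITERATE compiles its mini-language away; these do not):

```python
# (summing x)
total = sum([1, 3, 3])
print(total)  # -> 7

# (finding x maximizing (abs x))
worst = max([1, -3, 2], key=abs)
print(worst)  # -> -3

# (when (symbolp x) (collect x)), using strings in place of symbols
mixed = ['a', 'b', 'c', 1, 'd', 3, 'e']
letters = [x for x in mixed if isinstance(x, str)]
print(letters)  # -> ['a', 'b', 'c', 'd', 'e']
```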
But it gets even better: since ITERATE is a macro, the code passed to
it is error-checked and analyzed at compile-time, and then converted
to primitive Lisp forms - also at compile-time. What this means is
you get all this extra expressive power at absolutely no runtime cost.
At the risk of causing brains to explode, here is what the third ITER
form expands into:
(LET* ((#:LIST17 NIL)
       (X NIL)
       (#:RESULT16 NIL)
       (#:END-POINTER18 NIL)
       (#:TEMP19 NIL))
  (BLOCK NIL
    (TAGBODY
      (SETQ #:LIST17 '(A B C 1 D 3 E))
     LOOP-TOP-NIL
      (IF (ATOM #:LIST17) (GO LOOP-END-NIL))
      (SETQ X (CAR #:LIST17))
      (SETQ #:LIST17 (CDR #:LIST17))
      (IF (SYMBOLP X)
          (PROGN
            NIL
            (PROGN
              (SETQ #:TEMP19 (LIST X))
              (SETQ #:END-POINTER18
                    (IF #:RESULT16
                        (SETF (CDR #:END-POINTER18) #:TEMP19)
                        (SETQ #:RESULT16 #:TEMP19)))
              #:RESULT16))
          NIL)
      (GO LOOP-TOP-NIL)
     LOOP-END-NIL)
    #:RESULT16))
This is obviously not something any sane programmer would sit down and
write just to get an efficient loop. But this code is highly tuned;
for instance, it tracks the end of the list it's building up to save
time. I, as a programmer, get all this expressiveness and efficiency
for free. I think that's awesome.
There are lots more examples of macros. My second favorite macro to
use as an example are Peter Seibel's macros for processing binary files:
http://gigamonkeys.com/book/practical-parsing-binary-files.html
He builds a set of macros so that in the next chapter he can define an
ID3 tag reader and writer with:
(define-tagged-binary-class id3v2.3-frame ()
    ((id (frame-id :length 4))
     (size u4)
     (flags u2)
     (decompressed-size (optional :type 'u4 :if (frame-compressed-p flags)))
     (encryption-scheme (optional :type 'u1 :if (frame-encrypted-p flags)))
     (grouping-identity (optional :type 'u1 :if (frame-grouped-p flags))))
  (:dispatch (find-frame-class id)))
The macro generates code for methods that let you read and write ID3
files in a structured way - and it can of course be used for any other
kind of binary file. The :dispatch parameter at the end will jump to
a different class to parse the rest based on the result of executing
the expression attached to it.
There are lots more, too. Lispers and their environments are often
much more proficient at writing and modifying s-expressions than HTML,
so there are neat HTML generation macros like:
(<:html (<:head (<:title "foo"))
        (<:body (<:b :style "font-color: orange" "hello!")))
This example uses Marco Baringer's YACLML package. Again, we get this
expressiveness (no need to use closing tags, the ability to let our
editor indent or transpose tags and so on) for free. At compile-time,
these macros all get parsed down to much, much simpler code:
(PROGN
(WRITE-STRING "<html>
<head>
<title>
foo</title>
</head>
<body>
<b style=\"font-color: orange\">
"
*YACLML-STREAM*)
SOME-VARIABLE
(WRITE-STRING "</b>
</body>
</html>
"
*YACLML-STREAM*))
All of the tags have been streamlined down to a single stream
constant. Again we get more expressiveness but still keep killer
efficiency.
There are still more! _On Lisp_ has a lot of interesting ones, like
an embedded Prolog interpreter and compiler:
(<-- (father billsr billjr))
(?- (father billsr ?))
? = billjr
We have Marco Baringer's ARNESI package, which adds continuations to
Common Lisp through macros, even though Common Lisp does not include
them. The Common Lisp Object System (CLOS), although now part of
Common Lisp, could be (and originally was) built entirely from macros
[1]. That should give a rough idea of their power.
There are macros like DESTRUCTURING-BIND, which is part of Common
Lisp, but could just as easily have been written by you or me. It
handles destructuring of lists. A very simple case:
(destructuring-bind (a b c) '(1 2 3)
  (+ a b c)) => 6
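The Python counterpart of that simple case is ordinary sequence unpacking (full DESTRUCTURING-BIND also handles &optional, &key, and so on, which this does not):

```python
# (destructuring-bind (a b c) '(1 2 3) (+ a b c))
a, b, c = (1, 2, 3)
print(a + b + c)  # -> 6

# Nested patterns destructure too:
x, (y, z) = (1, (2, 3))
print(x, y, z)  # -> 1 2 3
```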
I hope this gives you a general idea of the coolness of macros.
[1] Strictly speaking, there are a couple of things that the Lisp
implementation has to take care of, but they are not absolutely
essential features.
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87mzdu5d0g.fsf@rpi.edu>
Bill Atkins <············@rpi.edu> writes:
> There are still more! _On Lisp_ has a lot of interesting ones, like
> an embedded Prolog interpreter and compiler:
>
> (<-- (father billsr billjr))
> (?- (father billsr ?))
>
> ? = billjr
Actually, these might not be implemented as macros.
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
Bill Atkins wrote:
> My favorite macro is ITERATE [...]
Thanks for the examples.
-- chris
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87d5em9dhp.fsf@qrnik.zagroda>
Followup-To: comp.lang.lisp
Bill Atkins <············@rpi.edu> writes:
> The cool thing about ITERATE is that it lets you express looping
> concepts in a language designed explicitly for such a purpose, e.g.
>
> (iter (for x in '(1 3 3))
>       (summing x)) => 7
>
> (iter (for x in '(1 -3 2))
>       (finding x maximizing (abs x))) => -3
>
> (iter (for x in '(a b c 1 d 3 e))
>       (when (symbolp x)
>         (collect x))) => (a b c d e)
While such macros indeed allow one to generate efficient code, I don't
find these examples convincing in terms of readability. The same
examples in my language, where iteration is based on higher-order
functions, are shorter and equally clear:
Sum [1 3 3]
=> 7
[1 (-3) 2]->MaximumBy Abs
=> -3
[#a #b #c 1 #d 3 #e]->Select (_ %Is SYMBOL)
=> [#a #b #c #d #e]
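For Python readers following the comparison, rough Python analogues of the three ITERATE examples can be written with builtins (illustrative only; the symbol type is approximated here with strings, since Python has no symbol type):

```python
# Analogue of (summing x) over '(1 3 3)
total = sum([1, 3, 3])                                # => 7

# Analogue of (finding x maximizing (abs x)) over '(1 -3 2)
extreme = max([1, -3, 2], key=abs)                    # => -3

# Analogue of collecting the symbols from '(a b c 1 d 3 e),
# using strings in place of Lisp symbols
symbols = [x for x in ['a', 'b', 'c', 1, 'd', 3, 'e']
           if isinstance(x, str)]                     # => ['a', 'b', 'c', 'd', 'e']

print(total, extreme, symbols)
```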
--
__("< Marcin Kowalczyk
\__/ ······@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/
From: Boris Borcic
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <4461bd09$1_6@news.bluewin.ch>
Bill Atkins wrote:
>
> It's interesting how much people who don't have macros like to put
> them down and treat them as some arcane art that are too "*insane*"ly
> powerful to be used well.
>
> They're actually very straightforward and can often (shock of shocks!)
> make your code more readable, without your efficiency taking a hit.
Not even efficiency of debugging? A real problem with macros is that run-time
tracebacks etc. list the macro expansions and not your macro'ed source code. And
that becomes an acute problem if you leave code for somebody else to update. Or
have Lisp IDEs made progress on that front?
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <9An8g.3$%L2.2@fe12.lga>
Boris Borcic wrote:
> Bill Atkins wrote:
>
>>
>> It's interesting how much people who don't have macros like to put
>> them down and treat them as some arcane art that are too "*insane*"ly
>> powerful to be used well.
>>
>> They're actually very straightforward and can often (shock of shocks!)
>> make your code more readable, without your efficiency taking a hit.
>
>
> Not even efficiency of debugging ? A real problem with macros is that
> run-time tracebacks etc, list macro outputs and not your macro'ed source
> code. And that becomes an acute problem if you leave code for somebody
> else to update. Or did lisp IDEs make progress on that front ?
AllegroCL now shows macros in the stack frame. Relatively recent
feature, and their IDE really stands out above the rest.
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Alex Martelli wrote:
> Having to give functions a name places no "ceiling on expressiveness",
> any more than, say, having to give _macros_ a name.
And what about having to give numbers a name?
> Yes, we are, because the debate about why it's better for Python (as a
> language used in real-world production systems, *SCALABLE* to extremely
> large-scale ones) to *NOT* be insanely extensible and mutable is a
> separate one -- Python's uniformity of style allows SCALABILITY of
> teams, and teams-of-teams
I think this kind of language scalability is most important for Google,
see below.
> if your need SCALE, well then, PYTHON IS SCALABLE, and will remain a
> *SIMPLE, CLEAN, LITTLE AND POWERFUL LANGUAGE* (letting nobody do
> anything INSANE to it;-) while scaling up to whatever size of project(s)
> you need (including systems so large...
But honestly, isn't this scalability a result of the data processing
model you use, which is language independent? My point is that when you
have such a well scaling data processing model, most languages scale
well on the "data" and "load" axes, so you pick your language based on
how well it scales on the "teams" and "features" axes.
You seem to claim that there is something in Python that lets it scale
well on "data" and "load", and is not related to "teams" and "features",
and also not related to Google's data processing model. Can you tell
me what it is?
Best regards
Tomasz
Tomasz Zielonka <···············@gmail.com> wrote:
> Alex Martelli wrote:
> > Having to give functions a name places no "ceiling on expressiveness",
> > any more than, say, having to give _macros_ a name.
>
> And what about having to give numbers a name?
Excellent style, in most cases; I believe most sensible coding guides
recommend it for most numbers -- cfr
<http://en.wikipedia.org/wiki/Magic_number_(programming)> , section
"magic numbers in code".
> > Yes, we are, because the debate about why it's better for Python (as a
> > language used in real-world production systems, *SCALABLE* to extremely
> > large-scale ones) to *NOT* be insanely extensible and mutable is a
> > separate one -- Python's uniformity of style allows SCALABILITY of
> > teams, and teams-of-teams
>
> I think this kind of language scalability is most important for Google,
> see below.
It's definitely very important for Google, just as for most organization
doing software development on a large scale.
> > if your need SCALE, well then, PYTHON IS SCALABLE, and will remain a
> > *SIMPLE, CLEAN, LITTLE AND POWERFUL LANGUAGE* (letting nobody do
> > anything INSANE to it;-) while scaling up to whatever size of project(s)
> > you need (including systems so large...
>
> But honestly, isn't this scalability a result of the data processing
> model you use, which is language independent? My point is that when you
> have such a well scaling data processing model, most languages scale
> well on the "data" and "load" axes, so you pick your language based on
> how well it scales on the "teams" and "features" axes.
There's one more axis, the "size of software project" one (in some unit
of measure such as function points). That is partly related to
"language level", but it's not _just_ about that -- e.g., Ruby's
language level is basically the same as Python's, but (in my modest
experience) it doesn't scale up quite as well to very large software
projects, as it's more prone to subtle coupling between modules (as it
makes it easy for one module to affect others indirectly, by making
modifications to fundamental globals such as "Object" and other built-in
types).
> You seem to claim that there is something in Python that lets it scale
> well on "data" and "load", and is not related to "teams" and "features",
> and also not related to Google's data processing model. Can you tell
> me what it is?
I don't think that (apart from whatever infrastructure Google may have
developed, and partly published in various whitepapers while partly
deciding not to discuss it publicly) Python's scaling on "data" and
"load", to use your terminology, should be intrinsically different
(i.e., due to language differences) from that of other languages with
similar characteristics, such as, say, Ruby or Smalltalk.
Alex
Alex Martelli wrote:
> Tomasz Zielonka <···············@gmail.com> wrote:
>
>> Alex Martelli wrote:
>> > Having to give functions a name places no "ceiling on expressiveness",
>> > any more than, say, having to give _macros_ a name.
>>
>> And what about having to give numbers a name?
>
> Excellent style, in most cases; I believe most sensible coding guides
> recommend it for most numbers -- cfr
><http://en.wikipedia.org/wiki/Magic_number_(programming)> , section
> "magic numbers in code".
I was a bit unclear. I didn't mean constants (I agree with you on
magic numbers), but results of computations, for example
(x * 2) + (y * 3)
Here (x * 2), (y * 3) and (x * 2) + 3 are anonymous numbers ;-)
Would you like it if you were forced to write it this way:
a = x * 2
b = y * 3
c = a * b
?
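Tomasz's contrast can be sketched concretely (example values for x and y are made up here, and the combining step is written as addition, matching his original expression):

```python
x, y = 5, 7

# Inline form: the intermediate results (x * 2) and (y * 3) stay anonymous
inline = (x * 2) + (y * 3)

# Forced-naming form: every intermediate result must carry a name
a = x * 2
b = y * 3
named = a + b

print(inline, named)  # => 31 31
```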
Thanks for your answers to my questions.
Best regards
Tomasz
Tomasz Zielonka wrote:
> (x * 2) + (y * 3)
>
> Here (x * 2), (y * 3) and (x * 2) + 3 are anonymous numbers ;-)
^^^^^^^^^^^
Of course it should be (x * 2) + (y * 3).
Best regards
Tomasz
Tomasz Zielonka <···············@gmail.com> wrote:
> Alex Martelli wrote:
> > Tomasz Zielonka <···············@gmail.com> wrote:
> >
> >> Alex Martelli wrote:
> >> > Having to give functions a name places no "ceiling on expressiveness",
> >> > any more than, say, having to give _macros_ a name.
> >>
> >> And what about having to give numbers a name?
> >
> > Excellent style, in most cases; I believe most sensible coding guides
> > recommend it for most numbers -- cfr
> ><http://en.wikipedia.org/wiki/Magic_number_(programming)> , section
> > "magic numbers in code".
>
> I was a bit unclear. I didn't mean constants (I agree with you on
> magic numbers), but results of computations, for example
Ah, good that we agree on _some_thing;-)
>
> (x * 2) + (y * 3)
>
> Here (x * 2), (y * 3) and (x * 2) + 3 are anonymous numbers ;-)
>
> Would you like if you were forced to write it this way:
>
> a = x * 2
> b = y * 3
> c = a * b
>
> ?
>
> Thanks for your answers to my questions.
I do not think there would be added value in having to name every
intermediate result (as opposed to the starting "constants", about which
we agree); it just "spreads things out". Fortunately, Python imposes no
such constraints on any type -- once you've written out the starting
"constants" (be they functions, numbers, classes, whatever), which may
require naming (either language-enforced, or just by good style),
instances of each type can be treated in perfectly analogous ways (e.g.,
calling callables that operate on them and return other instances) with
no need to name the intermediate results.
The Function type, by design choice, does not support any overloaded
operators, so the analogy of your example above (if x and y were
functions) would be using named higher-order-functions (or other
callables, of course), e.g.:
add_funcs( times_num(x, 2), times_num(y, 3) )
whatever HOF's add and times were doing, e.g.
def add_funcs(*fs):
    def result(*a):
        return sum(f(*a) for f in fs)
    return result

def times_num(f, k):
    def result(*a):
        return k * f(*a)
    return result
or, add polymorphism to taste, if you want to be able to use (e.g.) the
same named HOF to add a mix of functions and constants -- a side issue
that's quite separate from having or not having a name, but rather
connected with how wise it is to overload a single name for many
purposes (PEAK implements generic-functions and multimethods, and it or
something like it is scheduled for addition to Python 3.0; Python 2.*
has no built-in way to add such arbitrary overloads, and multi-dispatch
in particular, so you need to add a framework such as PEAK for that).
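A standalone check of Alex's sketch (the two HOFs are restated here so the snippet runs as-is; the functions x and y are invented example inputs, not from the thread):

```python
def add_funcs(*fs):
    # pointwise sum of several functions
    def result(*a):
        return sum(f(*a) for f in fs)
    return result

def times_num(f, k):
    # pointwise scaling of a function by a constant
    def result(*a):
        return k * f(*a)
    return result

# (x * 2) + (y * 3) lifted to the function level, applied pointwise
x = lambda t: t + 1     # example function
y = lambda t: t * t     # example function
h = add_funcs(times_num(x, 2), times_num(y, 3))
print(h(4))  # 2*(4+1) + 3*(4*4) = 10 + 48 = 58
```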
Alex
Ken Tilton <·········@gmail.com> writes:
> As for:
>
> > At a syntax-sugar
> > level, for example, Lisp's choice to use parentheses as delimiter means
> > it's undesirable, even unfeasible, to use the single character '(' as an
> > ordinary identifier in a future release of the language.
>
> (defun |(| (aside) (format nil "Parenthetically speaking...~a." aside))
> => |(|
> (|(| "your Lisp /is/ rusty.")
> => "Parenthetically speaking...your Lisp /is/ rusty.."
>
> :) No, seriously, is that all you can come up with?
Well, you have to quote your (s-as-identifiers. I tried a goofy hack
of a reader macro for ( to make this parse:
(let (( ( 10))) ( ))
[The spaces are just for readability, not necessary]
Alas, short of cps-transforming the whole reader, I can't see a
reasonable way to get that to parse as a single sexp that, when
evaluated, returns 10. What my reader macro gave me was five sexps:
\(, LET, \(, ((10)), NIL
I'll follow up with the Lisp code (on c.l.l only), in case I'm missing
something simple.
> OK, I propose a duel. We'll co-mentor this:
>
> http://www.lispnyc.org/wiki.clp?page=PyCells
>
> In the end Python will have a Silver Bullet, and only the syntax will
> differ, because Python has a weak lambda, statements do not always
> return values, it does not have macros, and I do not know if it has
> special variables.
I have no idea what the big problem with a multi-line lambda is in
Python, but I wonder if Cells wouldn't run against the same thing. I
often pass around anonymous formulas that eventually get installed in
a slot. Seems annoying to have to name every formula with a
labels-like mechanism.
···@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> Well, you have to quote your (s-as-identifiers. I tried a goofy hack
> of a reader macro for ( to make this parse:
>
> (let (( ( 10))) ( ))
>
> [The spaces are just for readability, not necessary]
>
> Alas, short of cps-transforming the whole reader, I can't see a
> reasonable way to get that to parse as a single sexp that, when
> evaluated, returns 10. What my reader macro gave me was five sexps:
>
> \(, LET, \(, ((10)), NIL
>
> I'll follow up with the Lisp code (on c.l.l only), in case I'm missing
> something simple.
And here it is. To keep it minimally simple, it doesn't handle dotted
pair notation. Also, unread-string is an ugly sbcl-specific hack that
causes this code to only work there, on string-streams. Whatever,
those are things that can easily be solved if someone can see how to
make this read the "right" parens as symbols.
(defun \(-reader (stream char)
  (declare (ignore char))
  (let ((contents (slurp stream)))
    (unread-string stream contents)
    (labels ((eof (error)
               (when (eql (stream-error-stream error) stream)
                 (unread-string stream contents)
                 (return-from \(-reader (intern "(")))))
      (handler-bind ((end-of-file #'eof))
        (loop for next-char = (next-nonwhite-char stream)
              until (char= next-char #\))
              do (unread-string stream (string next-char))
              collect (read stream t nil t))))))
(defun make-kwazy-readtable ()
  (let ((rt (with-standard-io-syntax (copy-readtable))))
    (set-macro-character #\( #'\(-reader nil rt)
    (set-macro-character #\) #'\)-reader nil rt)
    rt))

(defvar *kwazy-readtable* (make-kwazy-readtable))
(defun slurp (stream)
  (with-output-to-string (out)
    (loop for c = (read-char stream nil nil t)
          while c
          do (write-char c out))))

(defun next-nonwhite-char (stream)
  (loop for c = (read-char stream t nil t)
        unless (find c #(#\Space #\Tab #\Newline #\Return))
          return c))
;; Evil hack -- the correct way to do this is with a gray stream or
;; simple stream, but this is easier for a simple prototype, at least
;; on SBCL ;-)
(defun unread-string (stream string)
  (declare (type sb-impl::string-input-stream stream))
  (macrolet ((get-string (&rest r) (cons 'sb-impl::string-input-stream-string r))
             (get-current (&rest r) (cons 'sb-impl::string-input-stream-current r))
             (get-end (&rest r) (cons 'sb-impl::string-input-stream-end r)))
    (let ((left (subseq (get-string stream)
                        (get-current stream)
                        (get-end stream))))
      (setf (get-string stream) (concatenate 'string string left)
            (get-current stream) 0
            (get-end stream) (+ (length string) (length left)))
      stream)))
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <ndo7g.15$e22.8@fe10.lga>
Thomas F. Burdick wrote:
> Ken Tilton <·········@gmail.com> writes:
>
>
>>As for:
>>
>>
>>> At a syntax-sugar
>>>level, for example, Lisp's choice to use parentheses as delimiter means
>>>it's undesirable, even unfeasible, to use the single character '(' as an
>>>ordinary identifier in a future release of the language.
>>
>>(defun |(| (aside) (format nil "Parenthetically speaking...~a." aside))
>>=> |(|
>>(|(| "your Lisp /is/ rusty.")
>>=> "Parenthetically speaking...your Lisp /is/ rusty.."
>>
>>:) No, seriously, is that all you can come up with?
>
>
> Well, you have to quote your (s-as-identifiers. I tried a goofy hack
> of a reader macro for ( to make this parse:
>
> (let (( ( 10))) ( ))
>
> [The spaces are just for readability, not necessary]
>
> Alas, short of cps-transforming the whole reader, I can't see a
> reasonable way to get that to parse as a single sexp that, when
> evaluated, returns 10. What my reader macro gave me was five sexps:
>
> \(, LET, \(, ((10)), NIL
>
> I'll follow up with the Lisp code (on c.l.l only), in case I'm missing
> something simple.
Stop, you are scaring the pythonistas. Even Alex thought I was serious
with (|(| "Hi mom"). Ouch that is hard to type.
>
>
>>OK, I propose a duel. We'll co-mentor this:
>>
>> http://www.lispnyc.org/wiki.clp?page=PyCells
>>
>>In the end Python will have a Silver Bullet, and only the syntax will
>>differ, because Python has a weak lambda, statements do not always
>>return values, it does not have macros, and I do not know if it has
>>special variables.
>
>
> I have no idea what the big problem with a multi-line lambda is in
> Python, but I wonder if Cells wouldn't run against the same thing. I
> often pass around anonymous formulas that eventually get installed in
> a slot. Seems annoying to have to name every formula with a
> labels-like mechanism.
(a) Right.
(b) Bloated syntax will hurt in a linear way. The decomposition of
application complexity into so many tractable small rules is a nonlinear
win. (OK, painful coding of X does diminish one's use of X, but Cells
forces one to use Cells everywhere or not at all, so it comes with its
own press gang, er, discipline.)
(c) (b) might convince GvR to fix (a)
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton wrote:
> Python has a weak lambda, statements do not always
> return values, it does not have macros, and I do not know if it has
> special variables.
I am pretty much ignorant of Common Lisp, but I have the impression
special variables are the same as Scheme parameters, i.e. thread-local
dynamically scoped variables (feel free to correct me if I am
mistaken). If I am right, here is how you would emulate them in recent
versions of Python:
import threading, time

special = threading.local()
special.x = 0

def getx():
    return special.x

def set_x(value):
    special.x = value
    time.sleep(3-value) # thread-2 completes after thread-1
    print "%s is setting x to %s" % (threading.currentThread(), getx())

if __name__ == '__main__':
    print getx() # => 0
    threading.Thread(None, lambda : set_x(1)).start() # => 1
    threading.Thread(None, lambda : set_x(2)).start() # => 2
    time.sleep(3)
    print getx() # => 0
Michele Simionato
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <xcn8g.11$FO5.5@fe08.lga>
sross wrote:
>>I do wonder what would happen to Cells if I ever want to support
>>multiple threads. Or in a parallel processing environment.
>
>
> AFAIK It should be fine.
> In LW, SBCL and ACL all bindings of dynamic variables are thread-local.
>
Ah, I was guilty of making an unspoken segue: the problem is not with
the *dependent* special variable, but with the sequentially growing
numeric *datapulse-id* ("the ID") that tells a cell if it needs to
recompute its value. The ID is not dynamically bound. If threads T1 and
T2 each execute a toplevel, imperative assignment, two threads will
start propagating change up the same dependency graph... <shudder>
Might need to specify a "main" thread that gets to play with Cells and
restrict other threads to intense computations but no Cells?
Actually, I got along quite a while without an ID, I just propagated to
dependents and ran rules. This led sometimes to a rule running twice for
one change and transiently taking on a garbage value, when the
dependency graph of a Cell had two paths back to some changed Cell.
Well, Cells have always been reengineered in the face of actual use
cases, because I am not really smart enough to work these things out in
the abstract. Or too lazy or something. Probably all three.
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
From: Frank Goenninger DG1SBG
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <m2mzdpk0lo.fsf@pcsde001.local>
Ken Tilton <·········@gmail.com> writes:
> sross wrote:
>>>I do wonder what would happen to Cells if I ever want to support
>>>multiple threads. Or in a parallel processing environment.
>> AFAIK It should be fine.
>> In LW, SBCL and ACL all bindings of dynamic variables are thread-local.
>>
>
> Ah, I was guilty of making an unspoken segue: the problem is not with
> the *dependent* special variable, but with the sequentially growing
> numeric *datapulse-id* ("the ID") that tells a cell if it needs to
> recompute its value. The ID is not dynamically bound. If threads T1
> and T2 each execute a toplevel, imperative assignment, two threads
> will start propagating change up the same dependency
> graph... <shudder>
>
> Might need to specify a "main" thread that gets to play with Cells and
> restrict other threads to intense computations but no Cells?
Hmmm. I am wondering if a Cells Manager class could be the home for
all Cells. Each thread could then have its own Cells Manager...
>
> Actually, I got along quite a while without an ID, I just propagated
> to dependents and ran rules. This led sometimes to a rule running
> twice for one change and transiently taking on a garbage value, when
> the dependency graph of a Cell had two paths back to some changed
> Cell.
>
> Well, Cells have always been reengineered in the face of actual use
> cases, because I am not really smart enough to work these things out
> in the abstract. Or too lazy or something. Probably all three.
Nah. It's me asking again and again those silly questions about
real Cells usage in some real life apps ;-)
Frank
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <4Hw8g.72$%L2.71@fe12.lga>
Frank Goenninger DG1SBG wrote:
> Ken Tilton <·········@gmail.com> writes:
>>Ah, I was guilty of making an unspoken segue: the problem is not with
>>the *dependent* special variable, but with the sequentially growing
>>numeric *datapulse-id* ("the ID") that tells a cell if it needs to
>>recompute its value. The ID is not dynamically bound. If threads T1
>>and T2 each execute a toplevel, imperative assignment, two threads
>>will start propagating change up the same dependency
>>graph... <shudder>
>>
>>Might need to specify a "main" thread that gets to play with Cells and
>>restrict other threads to intense computations but no Cells?
>
>
> Hmmm. I am wondering if a Cells Manager class could be the home for
> all Cells. Each thread could the have its own Cells Manager...
You mean no shared Cells? Then there would not be a problem anyway. We
could split the difference: tag a cell with its thread, then let cells
read cells in other threads but not set them (in observers) and not have
them as dependencies... you know, it will probably work out fuzzily,
developers just will not have to code setups requiring unfuzziness.
>
> Nah. It's me asking again and again those silly questions about
> real Cells usage in some real life apps ;-)
No, you're OK, just gotta get those dangling parentheses all in one place,
:)
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton <·········@gmail.com> writes:
> Frank Goenninger DG1SBG wrote:
> > Ken Tilton <·········@gmail.com> writes:
> >>Ah, I was guilty of making an unspoken segue: the problem is not with
> >>the *dependent* special variable, but with the sequentially growing
> >>numeric *datapulse-id* ("the ID") that tells a cell if it needs to
> >>recompute its value. The ID is not dynamically bound. If threads T1
> >>and T2 each execute a toplevel, imperative assignment, two threads
> >>will start propagating change up the same dependency
> >>graph... <shudder>
> >>
> >>Might need to specify a "main" thread that gets to play with Cells and
> >>restrict other threads to intense computations but no Cells?
> > Hmmm. I am wondering if a Cells Manager class could be the home for
> > all Cells. Each thread could the have its own Cells Manager...
>
> You mean no shared Cells? then there would not be a problem anyway. We
> could split the difference: tag a cell with its thread, then let cells
> read cells in other threads but not set them (in observers) and not
> have them as dependencies... you know, it will probably work out
> fuzzily, developers just will not have to code setups requiring
> unfuzziness.
At the moment, there's just one big Cells world. I've actually
refactored that in the past, so that there could be several
non-interacting Cells worlds in one image. Multiple threads can then
do their own thing in their own world, and that's fine. If you want
two threads accessing the same Cells world, they need to serialize
their access to it -- basically, each Cells world needs a Great Big
Lock around each data pulse.
I've avoided actually hacking this in because I hate multithreaded
programming, and would rather tackle the problem using some
higher-level abstraction -- maybe multiple processes sharing their
data through Allegro Cache, and serializing their access with
transactions? The acache notion of a transaction maps very nicely to
the Cells notion of a consistent state between data pulses.
> > Nah. It's me asking again and again those silly questions about real
> > Cells usage in some real life apps ;-)
>
> No, you're OK, just gotta get those dangling parentheses all in one place,
>
> :)
Be sure to check that you're not inside a #+nil, too :-)
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <OaK8g.3$Hu1.2@fe11.lga>
Thomas F. Burdick wrote:
> Ken Tilton <·········@gmail.com> writes:
>
>
>>Frank Goenninger DG1SBG wrote:
>>
>>>Ken Tilton <·········@gmail.com> writes:
>>>
>>>>Ah, I was guilty of making an unspoken segue: the problem is not with
>>>>the *dependent* special variable, but with the sequentially growing
>>>>numeric *datapulse-id* ("the ID") that tells a cell if it needs to
>>>>recompute its value. The ID is not dynamically bound. If threads T1
>>>>and T2 each execute a toplevel, imperative assignment, two threads
>>>>will start propagating change up the same dependency
>>>>graph... <shudder>
>>>>
>>>>Might need to specify a "main" thread that gets to play with Cells and
>>>>restrict other threads to intense computations but no Cells?
>>>
>>>Hmmm. I am wondering if a Cells Manager class could be the home for
>>>all Cells. Each thread could the have its own Cells Manager...
>>
>>You mean no shared Cells? then there would not be a problem anyway. We
>>could split the difference: tag a cell with its thread, then let cells
>>read cells in other threads but not set them (in observers) and not
>>have them as dependencies... you know, it will probably work out
>>fuzzily, developers just will not have to code setups requiring
>>unfuzziness.
>
>
> At the moment, there's just one big Cells world. I've actually
> refactored that in the past, so that there could be several
> non-interacting Cells worlds in one image. Multiple threads can then
> do their own thing in their own world, and that's fine. If you want
> two threads accessing the same Cells world, they need to serialize
> their access to it -- basically, each Cells world needs a Great Big
> Lock around each data pulse.
Yeah, that would be a big blow to any value gotten out of
multithreading. I think. Never used it yet, so maybe it would still have
some value even if Cell propagation was single-threaded.
>
> I've avoided actually hacking this in because I hate multithreaded
> programming, and would rather tackle the problem using some
> higher-level abstraction -- maybe multiple processes sharing their
> data through Allegro Cache, and serializing their access with
> transactions? The acache notion of a transaction maps very nicely to
> the Cells notion of a consistent state between data pulses.
Really? Brilliant. I had great fun marrying Cells and AllegroStore, glad
to hear it's a fit.
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
"Michele Simionato" <·················@gmail.com> writes:
> Ken Tilton wrote:
> > I was not thinking about the thread issue (of which I know little). The
> > big deal for Cells is the dynamic bit:
> >
> > (let ((*dependent* me))
> > (funcall (rule me) me))
> >
> > Then if a rule forces another cell to recalculate itself, *dependent*
> > gets rebound and (the fun part) reverts back to the original dependent
> > as soon as the scope of the let is exited.
>
> Python 2.5 has a "with" statement (yes, the name is Lispish on purpose)
> that could be used to implement this. See
> http://www.python.org/dev/peps/pep-0343
You are mistaken. In particular, VAR doesn't have dynamic scope.
/Jon
--
'j' - a n t h o n y at romeo/charley/november com
jayessay wrote:
> "Michele Simionato" <·················@gmail.com> writes:
>
> > Ken Tilton wrote:
> > > I was not thinking about the thread issue (of which I know little). The
> > > big deal for Cells is the dynamic bit:
> > >
> > > (let ((*dependent* me))
> > > (funcall (rule me) me))
> > >
> > > Then if a rule forces another cell to recalculate itself, *dependent*
> > > gets rebound and (the fun part) reverts back to the original dependent
> > > as soon as the scope of the let is exited.
> >
> > Python 2.5 has a "with" statement (yes, the name is Lispish on purpose)
> > that could be used to implement this. See
> > http://www.python.org/dev/peps/pep-0343
>
> You are mistaken. In particular, VAR doesn't have dynamic scope.
>
I said "it could be used to implement this", and since in a previous
post of mine in this same thread I have shown how to implement
thread-local variables in Python, I figured people would be able to do
the exercise for themselves.
Michele Simionato
"Michele Simionato" <·················@gmail.com> writes:
> jayessay wrote:
> > "Michele Simionato" <·················@gmail.com> writes:
> >
> > > Ken Tilton wrote:
> > > > I was not thinking about the thread issue (of which I know little). The
> > > > big deal for Cells is the dynamic bit:
> > > >
> > > > (let ((*dependent* me))
> > > > (funcall (rule me) me))
> > > >
> > > > Then if a rule forces another cell to recalculate itself, *dependent*
> > > > gets rebound and (the fun part) reverts back to the original dependent
> > > > as soon as the scope of the let is exited.
> > >
> > > Python 2.5 has a "with" statement (yes, the name is Lispish on purpose)
> > > that could be used to implement this. See
> > > http://www.python.org/dev/peps/pep-0343
> >
> > You are mistaken. In particular, VAR doesn't have dynamic scope.
> >
>
> I said "it could be used to implement this", and since in a previous
> post on mine in this
> same thread I have shown how to implement thread local variables in
> Python I figured
> out people would be able to do the exercise for themselves.
I was saying that you are mistaken in that pep-0343 could be used to
implement dynamically scoped variables. That stands.
/Jon
--
'j' - a n t h o n y at romeo/charley/november com
jayessay wrote:
> I was saying that you are mistaken in that pep-0343 could be used to
> implement dynamically scoped variables. That stands.
Proof by counter example:
from __future__ import with_statement
import threading

special = threading.local()

def getvar(name):
    return getattr(special, name)

def setvar(name, value):
    return setattr(special, name, value)

class dynamically_scoped(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value
    def __context__(self):
        return self
    def __enter__(self):
        self.orig_value = getvar(self.name)
        setvar(self.name, self.value)
    def __exit__(self, Exc, msg, tb):
        setvar(self.name, self.orig_value)

if __name__ == '__main__': # test
    setvar("*x*", 1)
    print getvar("*x*") # => 1
    with dynamically_scoped("*x*", 2):
        print getvar("*x*") # => 2
    print getvar("*x*") # => 1
If you are not happy with this implementation, please clarify.
Michele Simionato
"Michele Simionato" <·················@gmail.com> writes:
> jayessay wrote:
> > I was saying that you are mistaken in that pep-0343 could be used to
> > implement dynamically scoped variables. That stands.
>
> Proof by counter example:
>
> from __future__ import with_statement
> import threading
>
> special = threading.local()
>
> def getvar(name):
>     return getattr(special, name)
>
> def setvar(name, value):
>     return setattr(special, name, value)
>
> class dynamically_scoped(object):
>     def __init__(self, name, value):
>         self.name = name
>         self.value = value
>     def __context__(self):
>         return self
>     def __enter__(self):
>         self.orig_value = getvar(self.name)
>         setvar(self.name, self.value)
>     def __exit__(self, Exc, msg, tb):
>         setvar(self.name, self.orig_value)
>
> if __name__ == '__main__': # test
>     setvar("*x*", 1)
>     print getvar("*x*") # => 1
>     with dynamically_scoped("*x*", 2):
>         print getvar("*x*") # => 2
>     print getvar("*x*") # => 1
>
> If you are not happy with this implementation, please clarify.
I can't get this to work at all - syntax errors (presumably you must
have 2.5? I only have 2.4). But anyway:
This has not so much to do with WITH as with relying on a special "global"
object which you must reference specially, which keeps track (more or
less) of its attribute values, and which you use as "faked up" variables.
Actually, you probably need to hack this a bit more to even get that, as
it doesn't appear to stack the values beyond a single level.
/Jon
--
'j' - a n t h o n y at romeo/charley/november com
From: Boris Borcic
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <44637e47_4@news.bluewin.ch>
jayessay wrote:
> "Michele Simionato" <·················@gmail.com> writes:
>
>> jayessay wrote:
>>> I was saying that you are mistaken in that pep-0343 could be used to
>>> implement dynamically scoped variables. That stands.
>> Proof by counter example:
>>
>> from __future__ import with_statement
>> import threading
>>
>> special = threading.local()
>>
>> def getvar(name):
>>     return getattr(special, name)
>>
>> def setvar(name, value):
>>     return setattr(special, name, value)
>>
>> class dynamically_scoped(object):
>>     def __init__(self, name, value):
>>         self.name = name
>>         self.value = value
>>     def __context__(self):
>>         return self
>>     def __enter__(self):
>>         self.orig_value = getvar(self.name)
>>         setvar(self.name, self.value)
>>     def __exit__(self, Exc, msg, tb):
>>         setvar(self.name, self.orig_value)
>>
>> if __name__ == '__main__': # test
>>     setvar("*x*", 1)
>>     print getvar("*x*") # => 1
>>     with dynamically_scoped("*x*", 2):
>>         print getvar("*x*") # => 2
>>     print getvar("*x*") # => 1
>>
>> If you are not happy with this implementation, please clarify.
>
> I can't get this to work at all - syntax errors (presumably you must
> have 2.5?, I only have 2.4). But anyway:
>
> This has not so much to do with WITH as relying on a special "global"
> object which you must reference specially, which keeps track (more or
> less) of its attribute values, which you use as "faked up" variable
> Actually you probably need to hack this a bit more to even get that as
> it doesn't appear to stack the values beyond a single level.
Actually there's no problem there. Hint: dynamically_scoped is a class that the
with statement will instantiate before (any) entry. OTOH, as it is written, I am
not convinced it will work in a multithreaded setting: isn't it the case that
all threads that import e.g. dynamically_scoped/getvar/setvar will act
without sync on the /single/ special object of the /single/ thread that
initialized the module?
But I'm not sure; it's been ages since I used Python threading.
>
>
> /Jon
>
jayessay wrote:
> "Michele Simionato" <·················@gmail.com> writes:
> I can't get this to work at all - syntax errors (presumably you must
> have 2.5?, I only have 2.4).
You can download Python 2.5 from www.python.org, but the important bit,
i.e. the use of threading.local to get thread-local variables, is
already there in Python 2.4.
'with' just gives you a nicer Lisp-like syntax.
> This has not so much to do with WITH as relying on a special "global"
> object which you must reference specially, which keeps track (more or
> less) of its attribute values, which you use as "faked up" variables.
> Actually you probably need to hack this a bit more to even get that as
> it doesn't appear to stack the values beyond a single level.
Yes, but it would not be difficult: I would just instantiate
threading.local inside the __init__ method of the dynamically_scoped
class, so each 'with' block would have its own variables (and I should
change getvar and setvar a bit).
I was interested in a proof of concept, to show that Python can emulate
Lisp special variables with no big effort.
Michele Simionato
"Michele Simionato" <·················@gmail.com> writes:
> I was interested in a proof of concept, to show that Python can
> emulate Lisp special variables with no big effort.
OK, but the sort of "proof of concept" given here is something you can
hack up in pretty much anything. So, I wouldn't call it especially
convincing in its effect and capability.
/Jon
--
'j' - a n t h o n y at romeo/charley/november com
From: Alexander Schmolck
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <yfs3bffuqj7.fsf@oc.ex.ac.uk>
jayessay <······@foo.com> writes:
> "Michele Simionato" <·················@gmail.com> writes:
>
> > I was interested in a proof of concept, to show that Python can
> > emulate Lisp special variables with no big effort.
>
> OK, but the sort of "proof of concept" given here is something you can
> hack up in pretty much anything.
Care to provide e.g. a java equivalent?
'as
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <r559g.16$Id.14@fe10.lga>
Alexander Schmolck wrote:
> jayessay <······@foo.com> writes:
>
>
>>"Michele Simionato" <·················@gmail.com> writes:
>>
>>
>>>I was interested in a proof of concept, to show that Python can
>>>emulate Lisp special variables with no big effort.
>>
>>OK, but the sort of "proof of concept" given here is something you can
>>hack up in pretty much anything.
>
>
> Care to provide e.g. a java equivalent?
I think the point is that, with the variable actually being just a
string and with dedicated new explicit functions required as
"accessors", well, you could hack that up in any language with
dictionaries. It is the beginnings of an interpreter, not Python itself
even feigning special behavior.
perhaps the way to go is to take the Common Lisp:
(DEFVAR *x*)
*x* = special_var(v=42) ;; I made this syntax up
that could make for cleaner code:
*x*.v = 1
print *x*.v -> 1
(Can we hide the .v?) But there is still the problem of knowing when to
revert a value to its prior binding when the scope of some WITH block is
left.
Of course that is what indentation is for in Python, so... is that
extensible by application code? Or would this require Python internals work?
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton <·········@gmail.com> writes:
> Alexander Schmolck wrote:
> > jayessay <······@foo.com> writes:
> >
> >>"Michele Simionato" <·················@gmail.com> writes:
> >>
> >>
> >>>I was interested in a proof of concept, to show that Python can
> >>>emulate Lisp special variables with no big effort.
> >>
> >>OK, but the sort of "proof of concept" given here is something you can
> >> hack up in pretty much anything.
> > Care to provide e.g. a java equivalent?
>
> I think the point is that, with the variable actually being just a
> string and with dedicated new explicit functions required as
> "accessors", well, you could hack that up in any language with
> dictionaries. It is the beginnings of an interpreter, not Python
> itself even feigning special behavior.
Exactly. Of course this is going to be totally lost on the intended
audience...
/Jon
--
'j' - a n t h o n y at romeo/charley/november com
From: Alexander Schmolck
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <yfsk68rt3cq.fsf@oc.ex.ac.uk>
Ken Tilton <·········@gmail.com> writes:
> Alexander Schmolck wrote:
> > jayessay <······@foo.com> writes:
> >
>
> >>"Michele Simionato" <·················@gmail.com> writes:
> >>
> >>
> >>>I was interested in a proof of concept, to show that Python can
> >>>emulate Lisp special variables with no big effort.
> >>
> >>OK, but the sort of "proof of concept" given here is something you can
> >> hack up in pretty much anything.
>
> > Care to provide e.g. a java equivalent?
>
>
> I think the point is that, with the variable actually being just a string and
> with dedicated new explicit functions required as "accessors", well, you could
> hack that up in any language with dictionaries.
Great -- so can I see some code? Can't be that difficult, it takes about 10-15
lines in python (and less in scheme).
> It is the beginnings of an interpreter, not Python itself even feigning
> special behavior.
>
>
> perhaps the way to go is to take the Common Lisp:
>
> (DEFVAR *x*)
>
> *x* = special_var(v=42) ;; I made this syntax up
>
> that could make for cleaner code:
>
> *x*.v = 1
>
> print *x*.v -> 1
>
> (Can we hide the .v?)
I'd presumably write special variable access as something like:
with specials('x','y','z'):
    special.x = 3 + 4
    special.y = special.x + 10
    ...
I haven't tested this because I haven't got the python 2.5 alpha and won't go
through the trouble of installing it for this usenet discussion, but I'm
pretty sure this would work fine (I'm sure someone else can post an
implementation or prove me wrong). I also can't see how one could sensibly
claim that this doesn't qualify as an implementation of dynamically scoped
variables. Doesn't look any worse to me than
(let (x y z)
  (declare (special x y z))
  ...)
-- in fact it looks better.
> But there is still the problem of knowing when to revert a value to its
> prior binding when the scope of some WITH block is left.
Can you explain what you mean by this statement? I'm not quite sure, but
I've got the impression you're possibly confused. Have you had a look at
<http://docs.python.org/dev/whatsnew/pep-343.html> or some other explanation
of the with statement?
> Of course that is what indentation is for in Python, so... is that extensible
> by application code?
The meaning of indentation? No.
> Or would this require Python internals work?
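One way to flesh out the specials/special interface sketched above (Python 3; the names `specials` and `special` come from the post, while the implementation details, including the `_MISSING` sentinel, are my own guess at what was intended):

```python
# A possible implementation of the specials()/special.x interface above.
# Hypothetical details: the _MISSING sentinel and save/restore via
# getattr/delattr are illustrative choices, not from the original post.
import threading
from contextlib import contextmanager

special = threading.local()   # per-thread storage for the "special" slots
_MISSING = object()           # marks names that had no previous binding

@contextmanager
def specials(*names):
    # Save whatever bindings exist for these names on entry...
    saved = {n: getattr(special, n, _MISSING) for n in names}
    try:
        yield
    finally:
        # ...and put them back (or remove the slot) on exit.
        for n, old in saved.items():
            if old is _MISSING:
                if hasattr(special, n):
                    delattr(special, n)
            else:
                setattr(special, n, old)

with specials('x', 'y'):
    special.x = 3 + 4
    special.y = special.x + 10
    assert special.y == 17
assert not hasattr(special, 'x') and not hasattr(special, 'y')
```

This is roughly the 10-15 lines of Python mentioned later in the thread; the with block plays the role of the (let ...) extent.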
Alexander Schmolck <··········@gmail.com> writes:
> Ken Tilton <·········@gmail.com> writes:
>
> > Alexander Schmolck wrote:
> > > jayessay <······@foo.com> writes:
> > >
> >
> > >>"Michele Simionato" <·················@gmail.com> writes:
> > >>
> > >>
> > >>>I was interested in a proof of concept, to show that Python can
> > >>>emulate Lisp special variables with no big effort.
> > >>
> > >>OK, but the sort of "proof of concept" given here is something you can
> > >> hack up in pretty much anything.
> >
> > > Care to provide e.g. a java equivalent?
> >
> >
> > I think the point is that, with the variable actually being just a string and
> > with dedicated new explicit functions required as "accessors", well, you could
> > hack that up in any language with dictionaries.
>
> Great -- so can I see some code? Can't be that difficult, it takes about 10-15
> lines in python (and less in scheme).
Do you actually need the code to understand this relatively simple concept???
/Jon
--
'j' - a n t h o n y at romeo/charley/november com
From: Alexander Schmolck
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <yfszmhmst9h.fsf@oc.ex.ac.uk>
jayessay <······@foo.com> writes:
> > Great -- so can I see some code? Can't be that difficult, it takes about 10-15
> > lines in python (and less in scheme).
>
> Do you actually need the code to understand this relatively simple concept???
Yes. I'd be genuinely curious to see how an implementation in Java, Pascal, C,
(or any other language that has little more than dictionaries) compares to
python and CL.
In my limited understanding I have trouble seeing how you'd do without either
unwind-protect/try-finally or reliable finalizers for starters.
'as
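For what it's worth, the restore step really does hinge on unwind-protect/try-finally: stripped of the 'with' sugar (PEP 343 defines 'with' in terms of try/finally), the core of the emulation is just the following Python 3 sketch (with_binding is a made-up helper name):

```python
# The bare save/bind/restore discipline, without the 'with' sugar.
# (Illustrative sketch; with_binding is not a standard-library function.)
import threading

special = threading.local()

def with_binding(name, value, thunk):
    """Call thunk() with `name` dynamically bound to `value`."""
    orig = getattr(special, name)       # save the outer binding
    setattr(special, name, value)       # install the new binding
    try:
        return thunk()
    finally:
        setattr(special, name, orig)    # restore, even if thunk raises

setattr(special, "*x*", 1)
assert with_binding("*x*", 2, lambda: getattr(special, "*x*")) == 2
assert getattr(special, "*x*") == 1
```

Any language with try/finally (Java), unwind-protect (Lisp), or deterministic destructors (C++) can express the same thing; without one of those, restoring the binding on a non-local exit is the hard part.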
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87k68qjjjq.fsf@qrnik.zagroda>
Ken Tilton <·········@gmail.com> writes:
> I think the point is that, with the variable actually being just
> a string and with dedicated new explicit functions required as
> "accessors", well, you could hack that up in any language with
> dictionaries. It is the beginnings of an interpreter, not Python
> itself even feigning special behavior.
If the semantics and the global structure of the code is right, only
you don't like the local concrete syntax, then the complaint is at
most as justified as complaints against Lisp parentheses.
--
__("< Marcin Kowalczyk
\__/ ······@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <yV49g.15$Id.13@fe10.lga>
Michele Simionato wrote:
> jayessay wrote:
>
>>I was saying that you are mistaken in that pep-0343 could be used to
>>implement dynamically scoped variables. That stands.
>
>
> Proof by counter example:
>
> from __future__ import with_statement
> import threading
>
> special = threading.local()
>
> def getvar(name):
>     return getattr(special, name)
>
> def setvar(name, value):
>     return setattr(special, name, value)
>
> class dynamically_scoped(object):
>     def __init__(self, name, value):
>         self.name = name
>         self.value = value
>     def __context__(self):
>         return self
>     def __enter__(self):
>         self.orig_value = getvar(self.name)
>         setvar(self.name, self.value)
>     def __exit__(self, Exc, msg, tb):
>         setvar(self.name, self.orig_value)
>
> if __name__ == '__main__': # test
>     setvar("*x*", 1)
>     print getvar("*x*") # => 1
>     with dynamically_scoped("*x*", 2):
>         print getvar("*x*") # => 2
>     print getvar("*x*") # => 1
>
> If you are not happy with this implementation, please clarify.
Can you make it look a little more as if it were part of the language,
or at least conceal the wiring better? I am especially bothered by the
double-quotes and having to use setvar and getvar.
In Common Lisp we would have:
(defvar *x*) ;; makes it special
(setf *x* 1)
(print *x*) ;;-> 1
(let ((*x* 2))
(print *x*)) ;; -> 2
(print *x*) ;; -> 1
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
From: Alexander Schmolck
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <yfsody3t4jd.fsf@oc.ex.ac.uk>
Ken Tilton <·········@gmail.com> writes:
> In Common Lisp we would have:
>
> (defvar *x*) ;; makes it special
> (setf *x* 1)
> (print *x*) ;;-> 1
> (let ((*x* 2))
> (print *x*)) ;; -> 2
> (print *x*) ;; -> 1
You seem to think that conflating special variable binding and lexical
variable binding is a feature and not a bug. What's your rationale?
'as
From: Duane Rettig
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <o01wuz2el9.fsf@franz.com>
Alexander Schmolck <··········@gmail.com> writes:
> Ken Tilton <·········@gmail.com> writes:
>
>> In Common Lisp we would have:
>>
>> (defvar *x*) ;; makes it special
>> (setf *x* 1)
>> (print *x*) ;;-> 1
>> (let ((*x* 2))
>> (print *x*)) ;; -> 2
>> (print *x*) ;; -> 1
>
> You seem to think that conflating special variable binding and lexical
> variable binding is a feature and not a bug. What's your rationale?
A bug is a non-conformance to spec. Kenny's statement was specifically
about Common Lisp, which has a spec. Now, what was your rationale for
it _being_ a bug?
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
From: Alexander Schmolck
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <yfsejyzt0d9.fsf@oc.ex.ac.uk>
Duane Rettig <·····@franz.com> writes:
> Alexander Schmolck <··········@gmail.com> writes:
>
> > Ken Tilton <·········@gmail.com> writes:
> >
> >> In Common Lisp we would have:
> >>
> >> (defvar *x*) ;; makes it special
> >> (setf *x* 1)
> >> (print *x*) ;;-> 1
> >> (let ((*x* 2))
> >> (print *x*)) ;; -> 2
> >> (print *x*) ;; -> 1
> >
> > You seem to think that conflating special variable binding and lexical
> > variable binding is a feature and not a bug. What's your rationale?
>
> A bug is a non-conformance to spec.
There is a world beyond specs, you know. If copies of Allegro CL accidentally
sent out death threats to the US president on a weekly basis, because someone
at Franz accidentally or purposefully left in some pranky debugging code, the
fact that this behaviour would likely neither violate the ANSI spec nor any
other specs that ACL officially purports to adhere to wouldn't make it any
less of a bug (or help to pacify your customers).
> Kenny's statement was specifically about Common Lisp
No Kenny's statement was about contrasting the way something is done in python
and the way something is done in common lisp (with the implication that the
latter is preferable). Of course the way something is done in common lisp is
almost tautologically in closer agreement with the ansi common lisp spec than
the way it is done in python, so agreement with the clhs is not a useful
criterion when talking about design features and misfeatures when contrasting
languages.
I thought it would have been pretty obvious that I was talking about language
design features and language design misfeatures. (Indeed the infamously post
hoc "It's a feature, not a bug" I was obviously alluding to doesn't make
much sense in a world where everything is tightly specified, because in it
nothing is post hoc.)
>, which has a spec.
Bah -- so does fortran. But scheme also has operational semantics.
> Now, what was your rationale for it _being_ a bug?
I just don't think the way special variable binding (or variable binding in
general[1]) is handled in common lisp is particularly well designed or
elegant.
Special variables and lexical variables have different semantics and using
convention and abusing[2] the declaration mechanism to differentiate between
special and lexical variables doesn't strike me as a great idea.
I can certainly think of problems that can occur because of it (E.g. ignoring
or messing up a special declaration somewhere; setf on a non-declared variable
anyone? There are also inconsistent conventions for naming (local) special
variables within the community (I've seen %x%, *x* and x)).
Thus I don't see having to use syntactically different binding and assignment
forms for special and lexical variables as inherently inferior.
But I might be wrong -- which is why I was asking for the rationale of Kenny's
preference. I'd be even more interested in what you think (seriously; should
you consider it a design feature (for reasons other than backwards
compatibility constraints), I'm pretty sure you would also give a justification
that would merit consideration).
'as
Footnotes:
[1] The space of what I see as orthogonal features (parallel vs. serial
binding, single vs. multiple values and destructuring vs non-destructuring
etc.) is sliced in what appear to me pretty arbitrary, non-orthogonal and
annoying (esp. superfluous typing and indentation) ways in CL.
[2] Generally declarations don't change the meaning of an otherwise
well-defined program. The special declaration does. It's also a potential
source of errors as the declaration forces you to repeat yourself and to
pay attention to two places rather than one.
From: Duane Rettig
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <o08xp6vq7e.fsf@franz.com>
Alexander Schmolck <··········@gmail.com> writes:
> Duane Rettig <·····@franz.com> writes:
>
>> Alexander Schmolck <··········@gmail.com> writes:
>>
>> > Ken Tilton <·········@gmail.com> writes:
>> >
>> >> In Common Lisp we would have:
>> >>
>> >> (defvar *x*) ;; makes it special
>> >> (setf *x* 1)
>> >> (print *x*) ;;-> 1
>> >> (let ((*x* 2))
>> >> (print *x*)) ;; -> 2
>> >> (print *x*) ;; -> 1
>> >
>> > You seem to think that conflating special variable binding and lexical
>> > variable binding is a feature and not a bug. What's your rationale?
>>
>> A bug is a non-conformance to spec.
>
> There is a world beyond specs, you know. If copies of allegro CL accidently
> sent out death-threats to the US president on a weekly basis, because someone
> at franz accidently or purposefully left in some pranky debugging code the
> fact that this behaviour would likely neither violate the ansi spec nor any
> other specs that ACL officially purports to adhere to wouldn't make it any
> less of a bug (or help to pacify your customers).
It wouldn't be a bug in Allegro CL, because it would never happen in an Allegro CL
that hasn't been enhanced with some kind of program. And although that program
itself could have a bug whereby such a threat were accidental, I would tend not
to call it accidental; I would tend to call it explicit, and thus not a bug but
an intended consequence of such explicit programming.
My reason for responding to you in the first place was due to your poor use
of the often misused term "bug". You could have used many other words or
phrases to describe the situation, and I would have left any of those alone.
For example:
>> Kenny's statement was specifically about Common Lisp
>
> No Kenny's statement was about contrasting the way something is done in python
> and the way something is done in common lisp (with the implication that the
> latter is preferable). Of course the way something is done in common lisp is
> almost tautologically in closer agreement with the ansi common lisp spec than
> the way it is done in python, so agreement with the clhs is not a useful
> criterion when talking about design features and misfeatures when contrasting
> languages.
>
> I thought it would have been pretty obvious that I was talking about language
> design features and language design misfeatures (Indeed the infamously post
> hoc, "It's a feature, not a bug" I was obviously alluding too doesn't make
> much sense in a world were everything is tightly specified, because in it
> nothing is post-hoc).
Whether it is preferable is a matter of opinion, and whether Kenny meant it
to imply preferability (I suspect so) or not has nothing to do with whether
it is a bug. Instead, you should call it a "design misfeature", which would
set the stage for a more cogent argument on the point, rather than on the
hyperbole. By the way, if you do call it a design misfeature, I would be
arguing against you, but that is another conversation.
>>, which has a spec.
>
> Bah -- so does fortran. But scheme also has operational semantics.
>
>> Now, what was your rationale for it _being_ a bug?
>
> I just don't think the way special variable binding (or variable binding in
> general[1]) is handled in common lisp is particularly well designed or
> elegant.
Then call it "misdesigned" or "inelegant".
> Special variables and lexical variables have different semantics and using
> convention and abusing[2] the declaration mechanism to differentiate between
> special and lexical variables doesn't strike me as a great idea.
Then call it a "bad idea".
> I can certainly think of problems that can occur because of it (E.g. ignoring
> or messing up a special declaration somewhere; setf on a non-declared variable
> anyone? There are also inconsistent conventions for naming (local) special
> variables within the community (I've seen %x%, *x* and x)).
Then call it "not fully standardized or normative".
> Thus I don't see having to use syntactically different binding and assignment
> forms for special and lexical variables as inherently inferior.
Then call it "inherently inferior".
> But I might be wrong -- which is why was asking for the rationale of Kenny's
> preference.
But you _didn't_ ask him what rationale he had for his _preference_, you
asked him his rationale for considering it not a _bug_.
> I'd be even more interested in what you think (seriously; should
> you consider it a design feature (for reasons other than backwards
> compatiblity constraints), I'm pretty sure you would also give a justification
> that would merrit consideration).
Well, OK, let's change the conversation away from "bug"-ness and toward any of
the other negatives we discussed above. I actually doubt that I can provide
a justification in a small space without first understanding who you are
and from what background you are coming, so let me turn it around and ask
you instead to knock down a straw-man:
You seem to be saying that pure lexical transparency is always preferable
to statefulness (e.g. context). Can we make that leap? If not, set me
straight. If so, tell me: how do we programmatically model those situations
in life which are inherently contextual in nature, where you might get
a small piece of information and must make sense of it by drawing on
information that is _not_ given in that information, but is (globally,
if you will) "just known" by you? How about conversations in English?
And, by the way, how do you really know I'm writing to you in English, and
not some coded language that means something entirely different?
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
From: Alexander Schmolck
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <yfs3bfeu7wv.fsf@oc.ex.ac.uk>
Duane Rettig <·····@franz.com> writes:
> My reason for responding to you in the first place was due to your poor use
> of the often misused term "bug". You could have used many other words or
> phrases to describe the situation, and I would have left any of those alone.
I'm happy to accept your terminology of bug (not conforming to a certain
specification) for the remainder of this discussion so that we can stop
quibbling over words.
[...]
> > I'd be even more interested in what you think (seriously; should
> > you consider it a design feature (for reasons other than backwards
> > compatiblity constraints), I'm pretty sure you would also give a justification
> > that would merrit consideration).
>
> Well, OK, let's change the conversation away from "bug"-ness and toward any of
> the other negatives we discussed above. I actually doubt that I can provide
> a justification in a small space without first understanding who you are
> and from what background you are coming, so let me turn it around and ask
> you instead to knock down a straw-man:
>
> You seem to be saying that pure lexical transparency is always preferable
> to statefulness (e.g. context).
No.
> Can we make that leap? If not, set me straight.
I think that in most contexts lexical transparency is desirable so that
deviations from lexical transparency ought to be well motivated. I also
believe that a construct that is usually used to establish a lexically
transparent binding shouldn't be equally used for dynamic bindings so that it
isn't syntactically obvious what's going on. I've already provided some
reasons why CL's design wrt. binding special and lexical variables seems bad
to me. I don't think these reasons were terribly forceful but as I'm not aware
of any strong motivation why the current behaviour would be useful I'm
currently considering it a minor wart.
To make things more concrete: What would be the downside of, instead of doing
something like:
(let ((*x* ...)) [(declare (special *x*))] ...) ; where [X] denotes maybe X
doing any of the below:
a) using a different construct e.g. (fluid-let ((*x* ...)) ...) for binding
special variables
b) having to use *...* (or some other syntax) for special variables
c) using (let ((q (special *x*) (,a ,b ,@c)) (values 1 2 '(3 4 5 6)))
(list q ((lambda () (incf *x*))) a b c)) ; => (1 3 3 4 (5 6))
(It's getting late, but hopefully this makes some vague sense)
> If so, tell me: how do we programmatically model those situations in life
> which are inherently contextual in nature, where you might get a small piece
> of information and must make sense of it by drawing on information that is
> _not_ given in that information, but is (globally, if you will) "just known"
> by you? How about conversations in English? And, by the way, how do you
> really know I'm writing to you in English, and not some coded language that
> means something entirely different?
We can skip that part.
'as
From: Duane Rettig
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <o07j4qmk9r.fsf@franz.com>
Alexander Schmolck <··········@gmail.com> writes:
> I think that in most contexts lexical transparency is desirable so that
> deviations from lexical transparency ought to be well motivated. I also
> believe that a construct that is usually used to establish a lexically
> transparent binding shouldn't be equally used for dynamic bindings so that it
> isn't syntactically obvious what's going on. I've already provided some
> reasons why CL's design wrt. binding special and lexical variables seems bad
> to me. I don't think these reasons were terribly forceful but as I'm not aware
> of any strong motivation why the current behaviour would be useful I'm
> currently considering it a minor wart.
>
> To make things more concrete: What would be the downside of, instead of doing
> something like:
>
> (let ((*x* ...)) [(declare (special *x*))] ...) ; where [X] denotes maybe X
Let's start with this. You seem to be saying that the above construct is inferior
to the alternatives you are about to suggest. Why? And since you are adding
an optional form, let's break it down into its separate situations:
1. (let ((*x* ...)) (declare (special *x*)) ...)
Here there is no question about the specialness of *x*; it is textually
obvious what the binding is - that it is not a lexical binding but a special
binding.
2. (let ((*x* ...)) ...)
[where there is no special declaration for *x* within the form]
Here, the issue is that it is not obvious that *x* is special (in this case,
it would have to already be a dynamic variable (what we internally call
"globally special")), because a special declaration within a lexical context
does not affect inner bindings. Perhaps this form is the one you are really
having trouble with.
> doing any of the below:
>
> a) using a different construct e.g. (fluid-let ((*x* ...)) ...) for binding
> special variables
Unless you also _remove_ the #2 case above, this seems no different than writing
a macro for the #1 case, above.
> b) having to use *...* (or some other syntax) for special variables
In fact, the spec does suggest precisely this (see
http://www.franz.com/support/documentation/8.0/ansicl/dictentr/defparam.htm,
in the Notes section), and to the extent that programmers obey the suggestion,
the textual prompting is present in the name.
> c) using (let ((q (special *x*) (,a ,b ,@c)) (values 1 2 '(3 4 5 6)))
> (list q ((lambda () (incf *x*))) a b c)) ; => (1 3 3 4 (5 6))
>
> (It's getting late, but hopefully this makes some vague sense)
Well, sort of; this seems simply like a sometimes-fluid-let, whose syntax could
easily be established by a macro (where destructurings of the form (special X)
could be specially [sic] treated).
Now if in the above example you would have trouble with (a) and/or (c)
based on the absence of a "lexical" declaration (i.e. one that would undo
the effect of a globally special declaration), thus guaranteeing that a
fluid-let or a "sometimes-fluid-let" would work, you should know that while
I was working on the Environments Access module I theorized and demonstrated
that such a declaration could be easily done within a conforming Common Lisp.
I leave you with that demonstration here (though it really is for
demonstration purposes only; I don't necessarily propose that CL should add
a lexical declaration to the language):
[This only works on Allegro CL 8.0]:
CL-USER(1): (defvar pie pi)
PIE
CL-USER(2): (compile (defun circ (rad) (* pie rad rad)))
CIRC
NIL
NIL
CL-USER(3): (circ 10)
314.1592653589793d0
CL-USER(4): (compile (defun foo (x) (let ((pie 22/7)) (circ x))))
FOO
NIL
NIL
CL-USER(5): (foo 10)
2200/7
CL-USER(6): (float *)
314.2857
CL-USER(7): (sys:define-declaration sys::lexical (&rest vars)
              nil
              :variable
              (lambda (declaration env)
                (declare (ignore env))
                (let* ((spec '(lexical t))
                       (res (mapcar #'(lambda (x) (cons x spec))
                                    (cdr declaration))))
                  (values :variable res))))
SYSTEM::LEXICAL
CL-USER(8): (compile (defun foo (x) (let ((pie 22/7)) (declare (sys::lexical pie)) (circ x))))
; While compiling FOO:
Warning: Variable PIE is never used.
FOO
T
NIL
CL-USER(9): (foo 10)
314.1592653589793d0
CL-USER(10):
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <cq99g.52$vH7.5@fe09.lga>
Alexander Schmolck wrote:
> Duane Rettig <·····@franz.com> writes:
>
>
>>Alexander Schmolck <··········@gmail.com> writes:
>>
>>
>>>Ken Tilton <·········@gmail.com> writes:
>>>
>>>
>>>>In Common Lisp we would have:
>>>>
>>>> (defvar *x*) ;; makes it special
>>>> (setf *x* 1)
>>>> (print *x*) ;;-> 1
>>>> (let ((*x* 2))
>>>> (print *x*)) ;; -> 2
>>>> (print *x*) ;; -> 1
>>>
>>>You seem to think that conflating special variable binding and lexical
>>>variable binding is a feature and not a bug. What's your rationale?
I will expand on my earlier "transparency" rationale with a further
rationale for transparency: I do not need no stinkin' rationale. A
special variable is still a variable. They should be set, read, and
bound (say, by "let") the same way as any other variable.
You need a rationale. It sounds as if you want some noisy syntax to
advertise the specialness. I do not think the Python community will
appreciate you messing up their pretty code.
You are right about one thing: specialness needs advertising. You know
what we do in Lisp? We obediently name special variables with bracketing
*s, like *this*. Too simple?
>>
>>A bug is a non-conformance to spec.
>
>
> There is a world beyond specs, you know. If copies of Allegro CL accidentally
> sent out death-threats to the US president on a weekly basis, because someone
> at Franz accidentally or purposefully left in some pranky debugging code, the
> fact that this behaviour would likely violate neither the ANSI spec nor any
> other specs that ACL officially purports to adhere to wouldn't make it any
> less of a bug (or help to pacify your customers).
>
>
>>Kenny's statement was specifically about Common Lisp
>
>
> No Kenny's statement was about contrasting the way something is done in python
> and the way something is done in common lisp (with the implication that the
> latter is preferable).
Close, but no. The question I was weighing in on was "has Michele
replicated special variables?". My implication was, "Not yet -- can you
match the transparency?", and it was an honest question; I do not know.
Again, transparency is a qualitative difference.
I liked your solution better, btw, because it does minimize the noise.
For fun, you should call the class ** instead of special, so we end up
with: **.b = 42
We'll understand. :)
> Of course the way something is done in common lisp is
> almost tautologically in closer agreement with the ansi common lisp spec than
> the way it is done in python, so agreement with the clhs is not a useful
> criterion when talking about design features and misfeatures when contrasting
> languages.
Again, no, it is not the spec, it is the highly-valued Python quality of
clean code. Also, the consistency of treating variables as variables,
regardless of some special/dynamic quality.
Some background. Lisp is a big language, and I am self-taught and do not
like to read; I grew up in Lisp in isolation. Not many Lispers in the
exercise yard. I discovered special variables only when we hired an old
hand who gently corrected a howler:
(let* ((old-x *x*))
  (setf *x* 42)
  ....
  (setf *x* old-x))
I still laugh at that. Anyway, as soon as I learned that, I was able to
make Cells syntax infinitely more transparent. And guess what? It also
made dependency identification automatic instead of cooperative, and
when I rebuilt a huge Cells-based app I discovered two or three cases
where I had neglected to publish a dependency.
It's a mystery, but somehow simpler syntax... oh, wait, this is
c.l.python, I am preaching to the choir.
>
> I just don't think the way special variable binding (or variable binding in
> general[1]) is handled in common lisp is particularly well designed or
> elegant.
See above. There is nothing like a concrete experience of implementing a
hairy library like Cells /without/ leveraging specials and then
converting to specials. Talk about an Aha! experience. I mean, bugs ran
screaming from their nests simply because of the implementation change--
we call that A Message From God that the design has taken a correct turn.
>
> Special variables and lexical variables have different semantics and using
> convention and abusing[2] the declaration mechanism to differentiate between
> special and lexical variables doesn't strike me as a great idea.
I know what you mean, but I like reading tea leaves, and I find it
fascinating that *this* somehow eliminates all ambiguity. Background:
don't know where I might find it, but I once saw a thread demonstrating
the astonishing confusion one could create with a special variable such
as a plain X (no *s). Absolutely mind-bogglingly confusing. Go back and
rename the special version *x*, and use *x* where you want to rebind it.
Result? Utterly lucid code. Scary, right?
>
> I can certainly think of problems that can occur because of it (E.g. ignoring
> or messing up a special declaration somewhere; setf on a non-declared variable
> anyone?
Sh*t, you don't respond to compiler warnings? Don't blame CL for your
problems. :)
> There are also inconsistent conventions for naming (local) special
> variables within the community (I've seen %x%, *x* and x)).
OK, you are in flamewar mode, now you are just making things up.
>
> Thus I don't see having to use syntactically different binding and assignment
> forms for special and lexical variables as inherently inferior.
DUDE! They are both variables! Why the hell /should/ the syntax be
different? "Oh, these are /cross-training/ sneakers. I'll wear them on
my hands." Hunh?
:)
kenny
Alexander Schmolck <··········@gmail.com> writes:
> > (defvar *x*) ;; makes it special
> > (setf *x* 1)
> > (print *x*) ;;-> 1
> > (let ((*x* 2))
> > (print *x*)) ;; -> 2
> > (print *x*) ;; -> 1
>
> You seem to think that conflating special variable binding and lexical
> variable binding is a feature and not a bug. What's your rationale?
I thought special variables meant dynamic binding, i.e.
(defvar *x* 1)
(defun f ()
  (print *x*) ;; -> 1
  (let ((*x* 3))
    (g)))
(defun g ()
  (print *x*)) ;; -> 3
That was normal behavior in most Lisps before Scheme popularized
lexical binding. IMO it was mostly an implementation convenience hack,
since it was implemented with a very efficient shallow binding cell.
That Common Lisp adopted Scheme's lexical bindings was considered a
big sign of CL's couthness. So I'm a little confused about what Ken
Tilton is getting at.
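For comparison (my own sketch, not from the thread): modern Python can express Paul's dynamic-binding example directly with the standard-library contextvars module, whose set/reset tokens play the role of let's bind-and-unbind.

```python
from contextvars import ContextVar

x = ContextVar("x", default=1)      # roughly (defvar *x* 1)

def g():
    return x.get()                  # sees the innermost dynamic binding

def f():
    token = x.set(3)                # roughly (let ((*x* 3)) (g))
    try:
        return g()
    finally:
        x.reset(token)              # the binding is undone on exit, as with let

assert x.get() == 1
assert f() == 3                     # g sees f's binding dynamically
assert x.get() == 1                 # the outer value is restored
```

Note the callee g picks up f's binding at run time, not through lexical scope, which is exactly the behavior Paul describes.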
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <1w99g.54$vH7.47@fe09.lga>
Paul Rubin wrote:
> Alexander Schmolck <··········@gmail.com> writes:
>
>>> (defvar *x*) ;; makes it special
>>> (setf *x* 1)
>>> (print *x*) ;;-> 1
>>> (let ((*x* 2))
>>> (print *x*)) ;; -> 2
>>> (print *x*) ;; -> 1
>>
>>You seem to think that conflating special variable binding and lexical
>>variable binding is a feature and not a bug. What's your rationale?
>
>
> I thought special variables meant dynamic binding, i.e.
>
> (defvar *x* 1)
> (defun f ()
>   (print *x*) ;; -> 1
>   (let ((*x* 3))
>     (g)))
> (defun g ()
>   (print *x*)) ;; -> 3
>
> That was normal behavior in most Lisps before Scheme popularized
> lexical binding. IMO it was mostly an implementation convenience hack,
> since it was implemented with a very efficient shallow binding cell.
> That Common Lisp adopted Scheme's lexical bindings was considered a
> big sign of CL's couthness. So I'm a little confused about what Ken
> Tilton is getting at.
Paul, there is no conflict between your example and mine, but I can see
why you think mine does not demonstrate dynamic binding: I did not
demonstrate the binding applying across a function call.
What might be even more entertaining would be a nested dynamic binding
with the same function called at different levels and before and after
each binding.
I just had the sense that this chat was between folks who fully grokked
special vars. Sorry if I threw you a curve.
kenny
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <zf79g.34$8U3.30@fe08.lga>
Alexander Schmolck wrote:
> Ken Tilton <·········@gmail.com> writes:
>
>
>>In Common Lisp we would have:
>>
>> (defvar *x*) ;; makes it special
>> (setf *x* 1)
>> (print *x*) ;;-> 1
>> (let ((*x* 2))
>> (print *x*)) ;; -> 2
>> (print *x*) ;; -> 1
>
>
> You seem to think that conflating special variable binding and lexical
> variable binding is a feature and not a bug. What's your rationale?
Transparency. That is where power comes from. I did the same things with
Cells. Reading a slot with the usual Lisp reader method transparently
creates a dependency on the variable. To change a variable and have it
propagate throughout the datamodel, Just Change It.
Exposed wiring means more work and agonizing refactoring.
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
From: Alexander Schmolck
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <yfswtcqssup.fsf@oc.ex.ac.uk>
Ken Tilton <·········@gmail.com> writes:
> Alexander Schmolck wrote:
> > Ken Tilton <·········@gmail.com> writes:
> >
>
> >>In Common Lisp we would have:
> >>
> >> (defvar *x*) ;; makes it special
> >> (setf *x* 1)
> >> (print *x*) ;;-> 1
> >> (let ((*x* 2))
> >> (print *x*)) ;; -> 2
> >> (print *x*) ;; -> 1
> > You seem to think that conflating special variable binding and lexical
>
> > variable binding is a feature and not a bug. What's your rationale?
>
> Transparency.
That's circular. You might be right, but you failed to provide a rationale
rather than just a restatement.
> That is where power comes from. I did the same things with Cells. Reading a
> slot with the usual Lisp reader method transparently creates a dependency on
> the variable.
Let me see if I understand it right -- if an instance of class A has a ruled
slot a that reads an instance of class B's slot b then it is noted somewhere
that A's a depends on b?
> To change a variable and have it propagate throughout the datamodel, Just
> Change It.
>
>
> Exposed wiring means more work and agonizing refactoring.
Well, you claim that in that instance Python suffers from exposed wiring and I
claim that CL suffers from a (minor) booby trap. You can't typically safely
ignore whether a variable is special as a mere wiring detail, or your code
won't work reliably (just as you can't typically safely ignore whether
something is rigged or not, even if booby-trapness is pretty transparent) --
it's as simple as that (actually it's a bit worse, because the bug can be hard
to detect, as lexical and special variables will result in the same behaviour
in many contexts).
So in the case of booby traps and special variables, I generally prefer some
exposed wiring (or strong visual clues) to transparency.
I'd like to see a demonstration that using the same binding syntax for special
and lexical variables buys you something apart from bugs.
'as
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <gI99g.56$vH7.40@fe09.lga>
Everything else responded to separately, but...
> I'd like to see a demonstration that using the same binding syntax for special
> and lexical variables buys you something apart from bugs.
Buys me something? Why do I have to sell simplicity, transparency, and
clean syntax on c.l.python?
kenny
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87fyjejgji.fsf@qrnik.zagroda>
Alexander Schmolck <··········@gmail.com> writes:
> I'd like to see a demonstration that using the same binding syntax
> for special and lexical variables buys you something apart from bugs.
There are 3 fundamental operations related to plain mutable variables:
A1. Making a new mutable variable with an initial value.
A2. Getting the current value.
A3. Setting the new value.
and 4 operations related to dynamically scoped variables:
B1. Making a new dynamic variable with an initial value.
B2. Getting the current value.
B3. Setting the new value.
B4. Local rebinding with a new initial value.
If you don't ever use B4, dynamic variables behave exactly like plain
variables. For this reason I see no point in distinguishing A2 from B2,
or A3 from B3. Dynamic variables are a pure extension of plain variables
by providing an additional operation.
Distinguishing the syntax of A1 and B1 is natural: somehow it must be
indicated what kind of variable is created.
Mutability is orthogonal to dynamic scoping. It makes sense to have a
variable which is like a plain variable but without A3, and a variable
which is like a dynamic variable but without B3, although it doesn't
provide anything new, only allows to express more constraints with a
potential for optimization. I won't consider them here.
Common Lisp does something weird: it uses the same syntax for A1 and B4,
where the meaning is distinguished by a special declaration. Here is
its syntax:
Directly named plain variables:
A1. (let ((name value)) body) and other forms
A2. name
A3. (setq name value), (setf name value)
First-class dynamic variables:
B1. (gensym)
B2. (symbol-value variable)
B3. (set variable value), (setf (symbol-value variable) value)
B4. (progv `(variable) `(value) body)
Directly named dynamic variables:
B1. (defvar name value), (defparameter name value)
B2. name
B3. (setq name value), (setf name value)
B4. (let ((name value)) body) and other forms
Dynamic variables in Lisp come in two flavors: first-class variables
and directly named variables. Directly named variables are always
global. You can convert a direct name to a first-class variable by
(quote name).
Plain variables have only the directly named flavor and they are
always local. You can emulate the first-class flavor by wrapping a
variable in a pair of closures or a closure with dual getting/setting
interface (needs a helper macro in order to be convenient). You can
emulate a global plain variable by wrapping a dynamic variable in a
symbol macro, ignoring its potential for local rebinding. You can
emulate creation of a new first-class variable by using a dynamic
variable and ignoring its potential for local rebinding, but this
can't be used to refer to an existing directly named plain variable.
In order to create a plain variable, you must be sure that its name is
not already used by a dynamic variable in the same scope.
So any essential functionality is possible to obtain, but the syntax
is very irregular.
--
__("< Marcin Kowalczyk
\__/ ······@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/
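Marcin's taxonomy can be sketched as a small Python class (hypothetical, for illustration only): a dynamic variable supports the three plain-variable operations (make, get, set) plus one extra, B4, a local rebinding that is undone when the scope exits.

```python
from contextlib import contextmanager

class DynamicVar:
    def __init__(self, value):      # A1/B1: make with an initial value
        self._stack = [value]

    def get(self):                  # A2/B2: read the current (innermost) value
        return self._stack[-1]

    def set(self, value):           # A3/B3: overwrite the current binding
        self._stack[-1] = value

    @contextmanager
    def rebind(self, value):        # B4: push a fresh binding for a scope
        self._stack.append(value)
        try:
            yield
        finally:
            self._stack.pop()

v = DynamicVar(1)
v.set(10)               # behaves exactly like a plain variable...
with v.rebind(2):       # ...until B4 is used
    assert v.get() == 2
    v.set(5)            # sets the *inner* binding only
assert v.get() == 10    # the outer binding is untouched
```

This illustrates Marcin's point that, absent B4, a dynamic variable is indistinguishable from a plain one.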
Alexander Schmolck <··········@gmail.com> writes:
> Ken Tilton <·········@gmail.com> writes:
>
> > In Common Lisp we would have:
> >
> > (defvar *x*) ;; makes it special
> > (setf *x* 1)
> > (print *x*) ;;-> 1
> > (let ((*x* 2))
> > (print *x*)) ;; -> 2
> > (print *x*) ;; -> 1
>
> You seem to think that conflating special variable binding and lexical
> variable binding is a feature and not a bug. What's your rationale?
And the particularly ugly, crappy, half baked python emulation is what?
A feature? Right.
/Jon
--
'j' - a n t h o n y at romeo/charley/november com
Alex Martelli wrote:
> ``An unneeded feature "cannot" be added (elegantly) in future releases
> of the language'' is just as trivial and acceptable for the unneded
> feature ``allow ( as an ordinary single-character identifier'' as for
> the unneded feature ``allow unnamed functions with all the flexibility
> of named ones''.
You can't be seriously claiming that these two features are equally
(un)needed. Anonymous functions come from the foundations of computer
science - the lambda calculus. They are just a natural step on the road to
higher-level languages. There are useful programming techniques, like
monadic programming, that are infeasible without anonymous functions.
Anonymous functions really add some power to the language.
On the other hand, what do you get by allowing ( as an identifier?
Significant whitespace is a good thing, but the way it is designed in
Python it has some costs. Can't you simply acknowledge that?
Best regards
Tomasz
Tomasz Zielonka <···············@gmail.com> wrote:
...
> higher level languages. There are useful programming techniques, like
> monadic programming, that are infeasible without anonymous functions.
> Anonymous functions really add some power to the language.
Can you give me one example that would be feasible with anonymous
functions, but is made infeasible by the need to give names to
functions? In Python, specifically, extended with whatever fake syntax
you favour for producing unnamed functions?
I cannot conceive of one. Wherever within a statement I could write the
expression
lambda <args>: body
I can *ALWAYS* obtain the identical effect by picking an otherwise
locally unused identifier X, writing the statement
def X(<args>): body
and using, as the expression, identifier X instead of the lambda.
> On the other hand, what do you get by allowing ( as an identifier?
Nothing useful -- the parallel is exact.
> Significant whitespace is a good thing, but the way it is designed in
> Python it has some costs. Can't you simply acknowledge that?
I would have no problem "acknowledging" problems if I agreed that any
exist, but I do not agree that any exist. Please put your coding where
your mouth is, and show me ONE example that would be feasible in a
Python enriched by unlimited unnamed functions but is not feasible just
because Python requires naming such "unlimited" functions.
Alex
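Alex's mechanical rewrite is easy to demonstrate concretely (the names below are made up for illustration): wherever a lambda appears as an expression, a def with an otherwise-unused name gives the identical result.

```python
pairs = [(2, "b"), (1, "a"), (3, "c")]

# lambda form, used directly as an expression
with_lambda = sorted(pairs, key=lambda p: p[0])

# the "otherwise locally unused identifier X" form Alex describes
def by_first(p):
    return p[0]

with_def = sorted(pairs, key=by_first)

assert with_lambda == with_def == [(1, "a"), (2, "b"), (3, "c")]
```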
On Sat, 06 May 2006 23:05:59 -0700, Alex Martelli wrote:
> Tomasz Zielonka <···············@gmail.com> wrote:
> ...
>> higher level languages. There are useful programming techniques, like
>> monadic programming, that are infeasible without anonymous functions.
>> Anonymous functions really add some power to the language.
>
> Can you give me one example that would be feasible with anonymous
> functions, but is made infeasible by the need to give names to
> functions? In Python, specifically, extended with whatever fake syntax
> you favour for producing unnamed functions?
Monads are one of those parts of functional programming I've never really
got my head around, but as I understand them, they're a way of
transforming what looks like a sequence of imperative programming
statements that operate on a global state into a sequence of function
calls that pass the state between them.
So, what would be a statement in an imperative language is an anonymous
function that gets added to the monad, and then, when the monad is run,
these functions get executed. The point being, that you have a lot of
small functions (one for each statement) which are likely not to be used
anywhere else, so defining them as named functions would be a bit of a
pain in the arse.
Actually, defining them as unnamed functions via lambdas would be annoying
too, although not as annoying as using named functions - what you really
want is macros, so that what looks like a statement can be interpreted as
a piece of code to be executed later.
Here, for instance, is an article about using scheme (and, in particular,
macros), to produce a fairly elegant monad implementation:
http://okmij.org/ftp/Scheme/monad-in-Scheme.html
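I V's description can be sketched as a tiny state monad in Python (my own rough rendering, not the Scheme implementation from the linked article): each "statement" is a function from a state to a (value, new-state) pair, and bind chains them so the state threads through behind the scenes.

```python
def unit(value):
    # wrap a plain value as an action that leaves the state alone
    return lambda state: (value, state)

def bind(action, f):
    # run one action, feed its value to f, run the resulting action
    def chained(state):
        value, state2 = action(state)
        return f(value)(state2)
    return chained

def get():               # read the threaded state
    return lambda state: (state, state)

def put(new_state):      # replace the threaded state
    return lambda state: (None, new_state)

# An "imperative-looking" program: increment the state, then
# return ten times its new value. Each step is a small function,
# which is why anonymous functions are so convenient here.
program = bind(get(), lambda n:
          bind(put(n + 1), lambda _:
          bind(get(), lambda m:
          unit(m * 10))))

assert program(4) == (50, 5)   # result 50, final state 5
```

Every lambda here is one "statement"; naming each of them would be exactly the pain in the arse I V mentions.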
I V <········@gmail.com> wrote:
...
> >> higher level languages. There are useful programming techniques, like
> >> monadic programming, that are infeasible without anonymous functions.
> >> Anonymous functions really add some power to the language.
> >
> > Can you give me one example that would be feasible with anonymous
> > functions, but is made infeasible by the need to give names to
> > functions? In Python, specifically, extended with whatever fake syntax
> > you favour for producing unnamed functions?
>
> Monads are one of those parts of functional programming I've never really
> got my head around, but as I understand them, they're a way of
> transforming what looks like a sequence of imperative programming
> statements that operate on a global state into a sequence of function
> calls that pass the state between them.
Looks like a fair enough summary to me (but, I'm also shaky on monads,
so we might want confirmation from somebody who isn't;-).
> So, what would be a statement in an imperative language is an anonymous
> function that gets added to the monad, and then, when the monad is run,
> these functions get executed. The point being, that you have a lot of
> small functions (one for each statement) which are likely not to be used
> anywhere else, so defining them as named functions would be a bit of a
> pain in the arse.
It seems to me that the difference between, say, a hypothetical:
monad.add( lambda state:
    temp = zipper(state.widget, state.zrup)
    return state.alteredcopy(widget=temp)
)
and the you-can-use-it-right now alternative:
def zipperize_widget(state):
    temp = zipper(state.widget, state.zrup)
    return state.alteredcopy(widget=temp)
monad.add(zipperize_widget)
is trivial to the point of evanescence. Worst case, you name all your
functions Beverly so you don't have to think about the naming; but you
also have a chance to use meaningful names (such as, presumably,
zipperize_widget is supposed to be here) to help the reader.
IOW, monads appear to me to behave just about like any other kind of
HOFs (for a suitably lax interpretation of that "F") regarding the issue
of named vs unnamed functions -- i.e., just about like the difference
between:
def double(f):
    return lambda *a: 2 * f(*a)
and
def double(f):
    def doubled(*a): return 2 * f(*a)
    return doubled
I have no real problem using the second form (with a name), and just
don't see it as important enough to warrant adding to the language (a
language that's designed to be *small*, and *simple*, so each addition
is to be seen as a *cost*) a whole new syntax form, 'lambda'.
((The "but you really want macros" debate is a separate one, which has
been held many times [mostly on comp.lang.python] and I'd rather not
repeat at this time, focusing instead on named vs unnamed...))
Alex
Alex Martelli wrote:
> Worst case, you name all your functions Beverly so you don't have to
> think about the naming
I didn't think about this, probably because I am accustomed to Haskell,
where you rather give functions different names (at the module top-level
you have no other choice). I just checked that it would work for nested
Beverly-lambdas (but could be quite confusing), but how about using more
than one lambda in an expression? You would have to name them
differently.
> but you also have a chance to use meaningful names (such as,
> presumably, zipperize_widget is supposed to be here) to help the
> reader.
[OK, I am aware that you are talking solely about lambdas in Python,
but I want to talk about lambdas in general.]
Sometimes body of the function is its best description and naming what
it does would be only a burden. Consider that the same things that you
place in a loop body in python, you pass as a function to a HOF in
Haskell. Would you propose that all loops in Python have the form:
def do_something_with_x(x):
    ...
    do something with x
for x in generator:
    do_something_with_x(x)
Also, having anonymous functions doesn't take your common sense away, so
you still "have a chance".
Best regards
Tomasz
Tomasz Zielonka <···············@gmail.com> wrote:
...
> Also, having anonymous functions doesn't take your common sense away, so
> you still "have a chance".
I've seen many people (presumably coming from Lisp or Scheme) code
Python such as:
myname = lambda ...
rather than the obvious Python way to do it:
def myname(...
((they generally do that right before they start whining that their
absurd choice doesn't let them put statements inside the "unnamed
function that they need to assign to a name")).
_THAT_ is what having many semantically overlapping (or identically
equivalent) ways to perform the same task does to people: it takes the
common sense away from enough of them that I'm statistically certain to
have to wrestle with some of them (be it as suppliers, people I'm trying
to help out on mailing lists etc, students I'm mentoring -- at least
being at Google means I don't have to fear finding such people as my
colleagues, but the memories and the scars of when I was a freelance
consultant are still fresh, and my heart goes out to the 99% of sensible
Pythonistas who don't share my good luck).
As long as Guido planned to remove lambda altogether in Python 3.0, I
could console myself with the thought that this frequent, specific
idiocy wasn't one I would have to wrestle with forever; now I know I
will have no such luck -- it's back to the dark ages. ((If I ever _DO_
find a language that *DOES* mercilessly refactor in pursuit of the ideal
"only one obvious way", I may well jump ship, since my faith in Python's
adherence to this principle which I cherish so intensely has been so
badly broken by GvR's recent decisions to keep lambdas, keep [<genexp>]
as an identical synonym for list(<genexp>), add {1,2,3} as an identical
synonym for set((1,2,3))...); though, being a greedy fellow, I'll
probably wait until all my Google options have vested;-)).
Alex
>If I ever _DO_ find a language that *DOES* mercilessly refactor in pursuit
> of the ideal "only one obvious way", I may well jump ship, since my faith in
> Python's adherence to this principle which I cherish so intensely has
> been so badly broken ...
The phrase "only one obvious way..." is nearly the most absurd
marketing bullshit I have ever heard; topped only by "it fits your
brain". Why are so many clearly intelligent and apparently
self-respecting hard-core software engineers repeating this kind of
claptrap? It sounds more like a religious cult than a programming
language community. If one of my students answered the question: "Why
use X for Y?" with "X fits your brain." or "There's only one obvious
way to do Y in X." I'd laugh out loud before failing them.
On 2006-05-08 02:51:22 -0400, ········@gmail.com said:
> The phrase "only one obvious way..." is nearly the most absurd
> marketing bullshit I have ever heard; topped only by "it fits your
> brain". Why are so many clearly intelligent and apparently
> self-respecting hard-core software engineers repeating this kind of
> claptrap?
Really should read "only one obvious way to people with a similar
background and little creativity" or "it fits your brain if you've
mostly programmed in algol syntax languages and alternative ideas make
said brain hurt."
trimmed to c.l.python and c.l.lisp
I V wrote:
> Monads are one of those parts of functional programming I've never really
> got my head around, but as I understand them, they're a way of
> transforming what looks like a sequence of imperative programming
> statements that operate on a global state into a sequence of function
> calls that pass the state between them.
This is a description of only one particular kind of monad - a state
monad. A generalisation of your statement would be something like this:
"they're a way of writing what looks like a sequence of imperative
programming statements that, depending on the monad, can have certain
computational side-effects (like operating on a global state) in a
purely functional way". But this doesn't explain much. If you want to
know more, there are some pretty good tutorials on
http://www.haskell.org/.
> So, what would be a statement in an imperative language is an anonymous
> function that gets added to the monad, and then, when the monad is run,
> these functions get executed.
A monad is a type; it isn't run. The thing you run can be called a
monadic action. You don't add functions to a monad (in this sense); you
build a monadic action from smaller monadic actions, gluing them with
functions - here's where anonymous functions are natural.
> The point being, that you have a lot of small functions (one for each
> statement) which are likely not to be used anywhere else, so defining
> them as named functions would be a bit of a pain in the arse.
Exactly!
> Actually, defining them as unnamed functions via lambdas would be annoying
> too, although not as annoying as using named functions - what you really
> want is macros, so that what looks like a statement can be interpreted as
> a piece of code to be executed later.
Haskell has one such "macro" - the do-notation syntax. But its
translation to ordinary lambdas is very straightforward, and the choice
between using the do-notation or lambdas with >>= is a matter of
style.
Best regards
Tomasz
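The desugaring Tomasz means can be sketched in Python with a Maybe-style monad, where None stands for failure (a loose sketch of my own; a faithful Maybe would also wrap the final result with return). The do-block "x <- a; rest" becomes bind(a, lambda x: rest).

```python
def bind(maybe_value, f):
    # Haskell's >>= for Maybe: short-circuit on failure (None)
    return None if maybe_value is None else f(maybe_value)

def safe_div(a, b):
    return None if b == 0 else a / b

# do x <- safe_div 10 2
#    y <- safe_div x 0    -- fails here, so the whole chain yields None
#    return (x + y)
failed = bind(safe_div(10, 2), lambda x:
         bind(safe_div(x, 0), lambda y:
         x + y))
assert failed is None

ok = bind(safe_div(10, 2), lambda x:
     bind(safe_div(x, 5), lambda y:
     x + y))
assert ok == 6.0
```

Each do-line becomes one lambda, which is why a language with do-notation or terse lambdas makes this style pleasant and a language with neither does not.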
Alex Martelli wrote:
> Tomasz Zielonka <···············@gmail.com> wrote:
> ...
>> higher level languages. There are useful programming techniques, like
>> monadic programming, that are infeasible without anonymous functions.
>> Anonymous functions really add some power to the language.
>
> Can you give me one example that would be feasible with anonymous
> functions, but is made infeasible by the need to give names to
> functions?
Perhaps you were speaking more about Python, and I was speaking more
generally. There are useful programming techniques that require using
many functions, preferably anonymous ones, but these techniques probably
won't fit Python very well anyway.
In Haskell when I write IO intensive programs, when I use monadic
parsing libraries, etc. I use many lambdas, sometimes disguised as a
do-notation bind syntax (do x <- a; ...). I wouldn't want to name
all those functions.
Here is the random page with Haskell code I found on haskell.org
http://haskell.org/haskellwiki/Sudoku
See how many uses of lambdas there are:
\... ->
and do-bindings, which are basically syntactic sugar for lambdas
and monadic bind operations:
do
  x <- something ...
Also, there are many anonymous functions created by using higher
order functions or by partial application.
I want to make it clear that monadic programming in Haskell is not only
something that we *have* to do in order to perform IO in a purely
functional program, but it's often something we *can* and *want* to do,
because it's powerful, convenient, etc.
Haskell programs without much monadic programming also contain many
anonymous functions.
Of course I wouldn't use monadic programming too often in Python, mostly
because I wouldn't have to (eg. for IO), but also because it would be
inconvenient and difficult in Python.
> In Python, specifically, extended with whatever fake syntax
> you favour for producing unnamed functions?
>
> I cannot conceive of one. Wherever within a statement I could write the
> expression
> lambda <args>: body
> I can *ALWAYS* obtain the identical effect by picking an otherwise
> locally unused identifier X, writing the statement
> def X(<args>): body
> and using, as the expression, identifier X instead of the lambda.
I know that. But the more such functions you use, the more cumbersome it
gets. Note also that I could use your reasoning to show that support
for expressions in the language is not neccessary.
>> On the other hand, what do you get by allowing ( as an indentifier?
>
> Nothing useful -- the parallel is exact.
What we are discussing here is language expressivity and power,
something very subtle and hard to measure. I think your logical
reasoning proves nothing here. After all, what is really needed
in a programming language? Certainly not indentation sensitivity :-)
0s and 1s should be enough.
>> Significant whitespace is a good thing, but the way it is designed in
>> Python it has some costs. Can't you simply acknowledge that?
>
> I would have no problem "acknowledging" problems if I agreed that any
> exist, but I do not agree that any exist. Please put your coding where
> your mouth is, and show me ONE example that would be feasible in a
> Python enriched by unlimited unnamed functions but is not feasible just
> because Python requires naming such "unlimited" functions.
You got me, monadic programming is infeasible in Python even with
lambdas ;-)
Best regards
Tomasz
Alex Martelli wrote:
> Tomasz Zielonka <···············@gmail.com> wrote:
> ...
>> higher level languages. There are useful programming techniques, like
>> monadic programming, that are infeasible without anonymous functions.
>> Anonymous functions really add some power to the language.
>
> Can you give me one example that would be feasible with anonymous
> functions, but is made infeasible by the need to give names to
> functions?
Perhaps you were speaking more about Python, and I was speaking more
generally. There are useful programming techniques that require using
many functions, preferably anonymous ones, but these techniques probably
won't fit Python very well anyway.
In Haskell when I write IO intensive programs, when I use monadic
parsing libraries, etc. I use many lambdas, sometimes disguised as a
do-notation bind syntax (do x <- a; ...). I wouldn't want to name
all those functions.
Here is a random page with Haskell code I found on haskell.org:
http://haskell.org/haskellwiki/Sudoku
See how many uses of lambdas there are:
\... ->
and do-bindings, which are basically a syntactic sugar for lambdas
and monadic bind operations:
do
  x <- something ...
Also, there are many anonymous functions created by using higher
order functions or by partial application.
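[The flavor of this style can be approximated in Python with a rough sketch (mine, not a translation of any monadic code): a small pipeline built from several tiny one-off functions, where a def statement and a name for each would add boilerplate without adding documentation value.]

```python
from functools import reduce

words = ["lambda", "def", "do", "bind"]

# Three throwaway functions, each used exactly once, each written inline:
total = reduce(lambda acc, n: acc + n,           # fold: sum the lengths
               map(lambda w: len(w),             # project: word -> length
                   filter(lambda w: len(w) > 2,  # select: skip short words
                          words)),
               0)
# total is 13: the lengths of "lambda", "def", and "bind" (6 + 3 + 4)
```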
I want to make it clear that monadic programming in Haskell is not only
something that we *have* to do in order to perform IO in a purely
functional program, but it's often something we *can* and *want* to do,
because it's powerful, convenient, etc.
Haskell programs without much monadic programming also contain many
anonymous functions.
Of course I wouldn't use monadic programming very often in Python, mostly
because I wouldn't have to (e.g. for IO), but also because it would be
inconvenient and difficult in Python.
> In Python, specifically, extended with whatever fake syntax
> you favour for producing unnamed functions?
>
> I cannot conceive of one. Wherever within a statement I could write the
> expression
> lambda <args>: body
> I can *ALWAYS* obtain the identical effect by picking an otherwise
> locally unused identifier X, writing the statement
> def X(<args>): body
> and using, as the expression, identifier X instead of the lambda.
I know that. But the more such functions you use, the more cumbersome it
gets. Note also that I could use your reasoning to show that support
for expressions in the language is not necessary.
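[Alex's claim of mechanical equivalence is easy to demonstrate with a minimal Python sketch: anywhere a lambda expression appears, a def statement with a throwaway name produces the identical effect.]

```python
pairs = [(1, "b"), (2, "a"), (3, "c")]

# lambda version: the key function lives inline, at its point of use
by_letter = sorted(pairs, key=lambda p: p[1])

# def version: identical effect, at the cost of a statement and a name
def _second(p):
    return p[1]

by_letter_named = sorted(pairs, key=_second)

assert by_letter == by_letter_named == [(2, "a"), (1, "b"), (3, "c")]
```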
>> On the other hand, what do you get by allowing ( as an identifier?
>
> Nothing useful -- the parallel is exact.
What we are discussing here is language expressivity and power,
something very subtle and hard to measure. I think your logical
reasoning proves nothing here. After all, what is really needed
in a programming language? Certainly not indentation sensitivity :-)
0s and 1s should be enough.
>> Significant whitespace is a good thing, but the way it is designed in
>> Python it has some costs. Can't you simply acknowledge that?
>
> I would have no problem "acknowledging" problems if I agreed that any
> exist, but I do not agree that any exist. Please put your coding where
> your mouth is, and show me ONE example that would be feasible in a
> Python enriched by unlimited unnamed functions but is not feasible just
> because Python requires naming such "unlimited" functions.
You got me, monadic programming is infeasible in Python even with
lambdas ;-)
Best regards
Tomasz
Alex Martelli wrote:
> I cannot conceive of one. Wherever within a statement I could write the
> expression
> lambda <args>: body
> I can *ALWAYS* obtain the identical effect by picking an otherwise
> locally unused identifier X, writing the statement
> def X(<args>): body
> and using, as the expression, identifier X instead of the lambda.
This is true, but with lambda it is easier to read:
http://www.frank-buss.de/lisp/functional.html
http://www.frank-buss.de/lisp/texture.html
It would be interesting to see how this would look in Python or some of
the other languages to which this troll thread was posted :-)
--
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
Frank Buss <··@frank-buss.de> wrote:
> Alex Martelli wrote:
>
> > I cannot conceive of one. Wherever within a statement I could write the
> > expression
> > lambda <args>: body
> > I can *ALWAYS* obtain the identical effect by picking an otherwise
> > locally unused identifier X, writing the statement
> > def X(<args>): body
> > and using, as the expression, identifier X instead of the lambda.
>
> This is true, but with lambda it is easier to read:
>
> http://www.frank-buss.de/lisp/functional.html
> http://www.frank-buss.de/lisp/texture.html
>
> It would be interesting to see how this would look in Python or some of
> the other languages to which this troll thread was posted :-)
Sorry, but I just don't see what lambda is buying you here. Taking just
one simple example from the first page you quote, you have:
(defun blank ()
  "a blank picture"
  (lambda (a b c)
    (declare (ignore a b c))
    '()))
which in Python would be:
def blank():
    "a blank picture"
    return lambda a, b, c: []
while a named-function variant might be:
def blank():
    def blank_picture(a, b, c): return []
    return blank_picture
Where's the beef, really? I find the named-function variant somewhat
more readable than the lambda-based variant, but even if your
preferences are the opposite, this is really such a tiny difference that
I can't see why so many bits should get wasted debating it (perhaps
it's one of Parkinson's Laws at work...).
Alex
Alex Martelli wrote:
> Sorry, but I just don't see what lambda is buying you here. Taking just
> one simple example from the first page you quote, you have:
>
> (defun blank ()
>   "a blank picture"
>   (lambda (a b c)
>     (declare (ignore a b c))
>     '()))
You are right, for this example it is not useful. But I assume you need
something like lambda for closures, e.g. from the page
http://www.frank-buss.de/lisp/texture.html :
(defun black-white (&key function limit)
  (lambda (x y)
    (if (> (funcall function x y) limit)
        1.0
        0.0)))
This function returns a new function, which is parametrized with the
supplied arguments, can be used later as a building block for other
functions, and itself wraps input functions. I don't know Python well
enough; maybe closures are possible with local named function
definitions, too.
--
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
Frank Buss <··@frank-buss.de> wrote:
> Alex Martelli wrote:
>
> > Sorry, but I just don't see what lambda is buying you here. Taking just
> > one simple example from the first page you quote, you have:
> >
> > (defun blank ()
> >   "a blank picture"
> >   (lambda (a b c)
> >     (declare (ignore a b c))
> >     '()))
>
> You are right, for this example it is not useful. But I assume you need
> something like lambda for closures, e.g. from the page
Wrong and unfounded assumption.
> http://www.frank-buss.de/lisp/texture.html :
>
> (defun black-white (&key function limit)
>   (lambda (x y)
>     (if (> (funcall function x y) limit)
>         1.0
>         0.0)))
>
> This function returns a new function, which is parametrized with the
> supplied arguments, can be used later as a building block for other
> functions, and itself wraps input functions. I don't know Python well
> enough; maybe closures are possible with local named function
> definitions, too.
They sure are, I gave many examples already all over the thread. There
are *NO* semantic advantages for named vs unnamed functions in Python.
Not sure what the &key means here, but omitting that:
def black_white(function, limit):
    def result(x, y):
        if function(x, y) > limit: return 1.0
        else: return 0.0
    return result
Alex
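[Alex's translation can be exercised directly; a small usage sketch follows, where the input function is invented purely for illustration, not taken from Frank's texture page.]

```python
def black_white(function, limit):
    def result(x, y):
        if function(x, y) > limit: return 1.0
        else: return 0.0
    return result

# The returned closure captures both `function` and `limit`.
brightness = lambda x, y: x + y         # hypothetical input function
bw = black_white(brightness, 1.0)

assert bw(1.0, 1.0) == 1.0   # 2.0 > 1.0
assert bw(0.2, 0.3) == 0.0   # 0.5 <= 1.0
```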
> There are *NO* semantic advantages for named vs unnamed functions in Python.
I feel that this conversation has glanced off the point. Let me try a
new approach:
There is the Pythonic way (whatever that is), and then The Lisp Way. I
don't know what the former is, but it has something to do with
indentation, and the (meaningless to me*) phrase "It fits your mind."
The Lisp way is quite specific: Pure Compositionality. Compositionality
encompasses and defines all aspects of Lisp, from the parens to
functional style to fundamental recursion to lambda, and even the
language itself is classically composed from the bottom up, and
compositionality enables us to create new complete languages nearly
trivially.
How this concept plays into the current conversation is not subtle:
LAMBDA forms serve to directly modify the forms in which they appear.
SORT is the example that comes to mind for me. If one says: (sort ...
#'(lambda (a b) ...)) [I realize that the #' is optional, I use it here
for emphasis that there is a function being formed.] the lambda form
composes, with sort, a new type of sort -- a sort of type <whatever the
lambda function does>. Thus, the semantics of this form are localized
to the sort expression, and do not leave it -- they are, indeed,
conceptually a part of the sort expression, and to require it/them to
be moved outside and given a name breaks the conceptual
compositionality -- that is, the compositional locality of the form.
Similarly, parens and the functional fact that every form returns a
value provide compositional locality and, perhaps more importantly in
practice, compositional *mobility* -- so that, pretty much anywhere in
Lisp where you need an argument, you can pick up a form and drop it in.
[Macros often break this principle, I'll get to those in a moment.]
This is something that no other language (except some dead ones, like
APL) was able to do, and these features provide incredible conceptual
flexibility -- again, I'll use the term "mobility" -- one can, in most
cases, literally move code as though it were a closed concept to
anywhere that that concept is needed.
Macros, as I have said, bear a complex relationship to this concept of
composition mobility and flexibility. The iteration macro, demonstrated
elsewhere in this thread, is an excellent example. But macros are more
subtly related to compositionality, and to the present specific
question, because, as you yourself said: All you need to do is make up
a name that isn't used....But how is one to find a name that isn't used
if one has macros? [Actually, in Lisp, even if we didn't have lambda we
could do this by code walking, but I'll leave that aside, because
Python can't do that, nor can it do macros.]
I do not hesitate to predict that Python will, sooner rather than
later, recognize the value of compositional flexibility and mobility,
that it will struggle against parentheses and lambdas, but that in the
end it will become Lisp again. They all do, or die.
===
[*] BA - Biographical Annotation: Yeah, I've programmed all those
things too for years and years and years. I also have a PhD in
cognitive psychology from CMU, where I worked on how people learn
complex skills, and specifically programming. When I say that "fits
your brain" is meaningless to me, I mean that in a technical sense:
If it had any meaning, I, of all people, would know what it means;
meaning that I know that it doesn't mean anything at all.
Alex Martelli wrote:
> Not sure what the &key means here, but omitting that
>
> def black_white(function, limit):
>     def result(x, y):
>         if function(x, y) > limit: return 1.0
>         else: return 0.0
>     return result
&key is something like keyword arguments in Python. And it looks like you
are right again (I've tested it in Python) and my assumption was wrong, so
the important thing is to support closures, which Python does, even with
local function definitions.
--
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
Frank Buss <··@frank-buss.de> wrote:
> Alex Martelli wrote:
>
> > Not sure what the &key means here, but omitting that
> >
> > def black_white(function, limit):
> >     def result(x, y):
> >         if function(x, y) > limit: return 1.0
> >         else: return 0.0
> >     return result
>
> &key is something like keyword arguments in Python. And looks like you are
Ah, thanks.
> right again (I've tested it in Python) and my assumption was wrong, so the
> important thing is to support closures, which Python does, even with local
> function definitions.
We do appear to entirely agree. In Python <= 2.4, where if is just a
statement (not an expression), you'd need some trick to get this effect
with a lambda, e.g.:
def black_white(function, limit, key=None):
    return lambda x, y: 1.0 * (function(x, y) > limit)
assuming it's important to get a float result -- the > operator per se
returns an int, so you can call float() on it, or multiply it by 1.0,
etc -- if you had two arbitrary colors, e.g.
def two_tone(function, limit, key=None, low=0.0, high=1.0):
    return lambda x, y: (low, high)[function(x, y) > limit]
which is a pretty obscure alternative. In Python >= 2.5, an if
expression has been added, but I'll leave you to judge if it's actually
an improvement (sigh)...:
def two_tone(function, limit, key=None, low=0.0, high=1.0):
    return lambda x, y: high if function(x, y) > limit else low
Personally, I'd rather use the named-function version. Anyway, they're
all semantically equivalent (sigh), and the key point is that the
semantics (building and returning functions on the fly) IS there,
whether the functions are named or unnamed, as we agree.
Alex
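[The semantic equivalence Alex asserts is easy to confirm with a quick sketch; the three constructors below mirror his variants, and the sample input function is invented for testing.]

```python
def bw_named(function, limit):
    def result(x, y):
        if function(x, y) > limit: return 1.0
        else: return 0.0
    return result

def bw_mult(function, limit):
    # bool result of > coerced to 0/1, made a float by the multiplication
    return lambda x, y: 1.0 * (function(x, y) > limit)

def bw_cond(function, limit):
    # Python >= 2.5 conditional expression
    return lambda x, y: 1.0 if function(x, y) > limit else 0.0

product = lambda x, y: x * y   # arbitrary test function
for maker in (bw_named, bw_mult, bw_cond):
    f = maker(product, 0.5)
    assert f(1.0, 1.0) == 1.0  # 1.0 > 0.5
    assert f(0.5, 0.5) == 0.0  # 0.25 <= 0.5
```

All three build and return a closure on the fly; only the spelling differs.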
Tomasz Zielonka wrote:
> On the other hand, what do you get by allowing ( as an identifier?
>
> Significant whitespace is a good thing, but the way it is designed in
> Python it has some costs. Can't you simply acknowledge that?
One can admit this, but what is it worth, and how should those costs be
evaluated? This problem is unsolvable because we cannot agree on a
common standard for the value of PL design. All we can do is make the
decisions more visible. Obviously Lispers want the boundary between
application-level and interpreter-level programming to be as low as
possible. It is still present, but making it invisible and expressing
language semantics through the language itself is regarded as of high
value. This makes application-level meta-programming as simple as it
could be. Others tend to separate the concerns/boundaries more strictly
and standardize the language, while implementing all features regarded
as necessary for application-level programming in a huge opaque runtime
and a vast amount of libraries. Application-level metaprogramming is
restricted to runtime reflection, metaclass protocols, annotations,
etc. From this point of view a programming language is basically an
interface to a virtual machine that is much like an operating system,
i.e. it supports basic functions and hides complexity. This is the way
Java and Python have gone. The benefit of the latter approach lies not
so much in creating new language capabilities as in considering the
language as just another application, where requirements engineering
and careful design can be done without sprawling in every possible
direction (see Forth as an example of this tendency). This way standard
libraries ("batteries included") become almost equally important, and
"language designer" is not a job that is done once. This is roughly my
interpretation of GvR's "design view" on PLs that Xah Lee obviously
doesn't get - Xah is not important here, because he never gets anything
right - but there are enough Lispers with quite some working brain
cells who seem to think that the best thing to do is giving a
programmer unlimited programming power.
Concluding remark: I'm not sure I want to defend one point of view all
the way. Although not a Lisper myself, I can clearly see that the DSL
topic is hot these days and the pendulum is swinging in the direction
of more liberty. I have mixed feelings about this, but I have my own
strong opinion of how DSLs *can* fit into the CPython design space, and
a working model that is to be published soon. As with the dialectic
double negation, there is no return to a former position.
From: Patrick May
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <m2iroiupef.fsf@Dagney.local>
·······@yahoo.com (Alex Martelli) writes:
> In my opinion (and that of several others), the best way for Python to
> grow in this regard would be to _lose_ lambda altogether, since named
> functions are preferable
Why? I find the ability to create unnamed functions on the fly
to be a significant benefit when coding in Common Lisp.
Regards,
Patrick
------------------------------------------------------------------------
S P Engineering, Inc. | The experts in large scale distributed OO
| systems design and implementation.
···@spe.com | (C++, Java, Common Lisp, Jini, CORBA, UML)
Patrick May wrote:
> ·······@yahoo.com (Alex Martelli) writes:
>> In my opinion (and that of several others), the best way for Python to
>> grow in this regard would be to _lose_ lambda altogether, since named
>> functions are preferable
>
> Why? I find the ability to create unnamed functions on the fly
> to be a significant benefit when coding in Common Lisp.
1. They don't add anything new to the language semantically, i.e. you
can always use a named function to accomplish the same task
as an unnamed one.
2. Giving a function a name acts as documentation (and a named
function is more likely to be explicitly documented than an unnamed
one). This argument is pragmatic rather than theoretical.
3. It adds another construction to the language.
Cheers,
Brian
<·····@sweetapp.com> wrote:
> Patrick May wrote:
> > ·······@yahoo.com (Alex Martelli) writes:
> >> In my opinion (and that of several others), the best way for Python to
> >> grow in this regard would be to _lose_ lambda altogether, since named
> >> functions are preferable
> >
> > Why? I find the ability to create unnamed functions on the fly
> > to be a significant benefit when coding in Common Lisp.
>
> 1. They don't add anything new to the language semantically, i.e. you
> can always use a named function to accomplish the same task
> as an unnamed one.
> 2. Giving a function a name acts as documentation (and a named
> function is more likely to be explicitly documented than an unnamed
> one). This argument is pragmatic rather than theoretical.
> 3. It adds another construction to the language.
Creating *FUNCTIONS* on the fly is a very significant benefit, nobody on
the thread is disputing this, and nobody ever wanted to take that
feature away from Python -- it's the obsessive focus on the functions
needing to be *unnamed* ones, that's basically all the debate. I wonder
whether all debaters on the "unnamed is a MUST" side fully realize that
a Python's def statement creates a function on the fly, just as much as
a lambda form does. Or maybe the debate is really about the distinction
between statement and expression: Python does choose to draw that
distinction, and while one could certainly argue that a language might
be better without it, the distinction is deep enough that nothing really
interesting (IMHO) is to be gleaned by the debate, except perhaps as
pointers for designers of future languages (and there are enough
programming languages that I personally see designing yet more of them
as one of the least important tasks facing the programming community;-).
Alex
From: Patrick May
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <m2ac9tvcm1.fsf@Dagney.local>
·······@yahoo.com (Alex Martelli) writes:
>> >> In my opinion (and that of several others), the best way for
>> >> Python to grow in this regard would be to _lose_ lambda
>> >> altogether, since named functions are preferable
>> >
>> > Why? I find the ability to create unnamed functions on the
>> > fly to be a significant benefit when coding in Common Lisp.
>>
>> 1. They don't add anything new to the language semantically,
>> i.e. you can always use a named function to accomplish the same
>> task as an unnamed one.
Sure, but it won't necessarily be as expressive or as convenient.
>> 2. Giving a function a name acts as documentation (and a named
>> function is more likely to be explicitly documented than an
>> unnamed one). This argument is pragmatic rather than
>> theoretical.
Using lambda in an expression communicates the fact that it will
be used only in the scope of that expression. Another benefit is that
declaration at the point of use means that all necessary context is
available without having to look elsewhere. Those are two pragmatic
benefits.
>> 3. It adds another construction to the language.
That's a very minimal cost relative to the benefits.
You haven't made your case for named functions being preferable.
Regards,
Patrick
------------------------------------------------------------------------
S P Engineering, Inc. | The experts in large scale distributed OO
| systems design and implementation.
···@spe.com | (C++, Java, Common Lisp, Jini, CORBA, UML)
Patrick May <···@spe.com> wrote:
...an alleged reply to me, which in fact quotes (and responds to) only
statements by Brian, without mentioning Brian...
Mr May, it seems that you're badly confused regarding Usenet's quoting
conventions. You may want to repeat your answer addressing specifically
the poster you ARE apparently answering. Nevertheless, I'll share my
opinions:
> Using lambda in an expression communicates the fact that it will
> be used only in the scope of that expression. Another benefit is that
> declaration at the point of use means that all necessary context is
> available without having to look elsewhere. Those are two pragmatic
> benefits.
You still need to look a little bit upwards to the "point of use",
almost invariably, to see what's bound to which names -- so, you DO
"have to look elsewhere", nullifying this alleged benefit -- looking at
the def statement, immediately before the "point of use", is really no
pragmatic cost when you have to go further up to get the context for all
other names used (are they arguments of this function, variables from a
lexically-containing outer function, assigned somewhere...), which is
almost always. And if you think it's an important pragmatic advantage
to limit "potential scope" drastically, nothing stops you from wrapping
functions just for that purpose around your intended scope -- me, I find
that as long as functions are always kept small (as they should be for a
host of other excellent reasons anyway), the "ambiguity" of scope being
between the def and the end of the containing function is nil (literally
nil when the statement right after the def, using the named function, is
a return, as is often the case -- pragmatically equivalent to nil when
the statements following the def are >1 but sufficiently few).
Your "pragmatic benefits", if such they were, would also apply to the
issue of "magic numbers", which was discussed in another subthread of
this unending thread; are you therefore arguing, contrary to widespread
opinion [also concurred in by an apparently-Lisp-oriented discussant],
that it's BETTER to have magic unexplained numbers appear as numeric
constants "out of nowhere" smack in the middle of expressions, rather
than get NAMED separately and then have the names be used? If you
really believe in the importance of the "pragmatic benefits" you claim,
then to be consistent you should be arguing that...:
return total_amount * 1.19
is vastly superior to the alternative which most everybody would deem
preferable,
VAT_MULTIPLIER = 1.19
return total_amount * VAT_MULTIPLIER
because the alternative with the magic number splattered inexplicably
smack in the middle of code "communicated the fact" that it's used only
within that expression, and makes all context available without having
to look "elsewhere" (just one statement up of course, but then this
would be identically so if the "one statement up" was a def, and we were
discussing named vs unnamed functions vs "magic numbers").
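[Alex's analogy in runnable form, a trivial sketch; the 1.19 rate comes from his own example.]

```python
VAT_MULTIPLIER = 1.19  # named once; the name documents the number

def gross_named(total_amount):
    return total_amount * VAT_MULTIPLIER

def gross_magic(total_amount):
    return total_amount * 1.19  # "magic number": visible here, explained nowhere

# Semantically identical, exactly like named vs unnamed functions:
assert gross_named(100.0) == gross_magic(100.0)
```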
> >> 3. It adds another construction to the language.
>
> That's a very minimal cost relative to the benefits.
To my way of thinking, offering multiple semantically equivalent ways
(or, perhaps worse, "nearly equivalent but with subtle differences"
ones) to perform identical tasks is a *HUGE* conceptual cost: I like
languages that are and stay SMALL and SIMPLE. Having "only one obvious
way to do it" is just an ideal, but that's no reason to simply abrogate
it when it can so conveniently be reached (my only serious beef with
Python is that it *HAS* abdicated the pursuit of that perfect design
principle by recent decisions to keep lambda, and to keep the syntax
[<genexp>] as an identical equivalent to list(<genexp>), in the future
release 3.0, which was supposed to simplify and remove redundant stuff
accreted over the years: suddenly, due to those decisions, I don't
really look forward to Python 3.0 as I used to - though, as I've already
mentioned, being a greedy fellow I'll no doubt stick with Python until
all my Google options have vested).
> You haven't made your case for named functions being preferable.
I think it's made at least as well as the case for using constant-names
rather than "magic numbers" numeric constants strewn throughout the
code, and THAT case is accepted by a wide consensus of people who care
about programming style and clarity, so I'm pretty happy with that.
Alex
Alex Martelli wrote:
>
> Your "pragmatic benefits", if such they were, would also apply to the
> issue of "magic numbers", which was discussed in another subthread of
> this unending thread; are you therefore arguing, contrary to widespread
> opinion [also concurred in by an apparently-Lisp-oriented discussant],
> that it's BETTER to have magic unexplained numbers appear as numeric
> constants "out of nowhere" smack in the middle of expressions, rather
> than get NAMED separately and then have the names be used? If you
> really believe in the importance of the "pragmatic benefits" you claim,
> then to be consistent you should be arguing that...:
>
> return total_amount * 1.19
>
> is vastly superior to the alternative which most everybody would deem
> preferable,
>
> VAT_MULTIPLIER = 1.19
> return total_amount * VAT_MULTIPLIER
>
> because the alternative with the magic number splattered inexplicably
> smack in the middle of code "communicated the fact" that it's used only
> within that expression, and makes all context available without having
> to look "elsewhere" (just one statement up of course, but then this
> would be identically so if the "one statement up" was a def, and we were
> discussing named vs unnamed functions vs "magic numbers").
Most languages allow `unnamed numbers'. The `VAT_MULTIPLIER' argument
is a strawman. Would you want to have to use a special syntax to name
the increment in a loop?
defnumber zero 0
defnumber one { successor (zero); }
for (int i = zero; i < limit; i += one) { ...}
If your language allows unnamed integers, unnamed strings, unnamed
characters, unnamed arrays or aggregates, unnamed floats, unnamed
expressions, unnamed statements, unnamed argument lists, etc., why
*require* a name for trivial functions?
Wouldn't all the other constructs benefit by having a required name as
well?
>
> To my way of thinking, offering multiple semantically equivalent ways
> (or, perhaps worse, "nearly equivalent but with subtle differences"
> ones) to perform identical tasks is a *HUGE* conceptual cost: I like
> languages that are and stay SMALL and SIMPLE.
Then why not stick with S and K combinators? There are few languages
SMALLER and SIMPLER.
Joe Marshall <··········@gmail.com> wrote:
...
> If your language allows unnamed integers, unnamed strings, unnamed
> characters, unnamed arrays or aggregates, unnamed floats, unnamed
> expressions, unnamed statements, unnamed argument lists, etc., why
> *require* a name for trivial functions?
I think it's reasonable to make a name a part of functions, classes and
modules because they may often be involved in tracebacks (in case of
uncaught errors): to me, it makes sense to let error-diagnosing
tracebacks display packages, modules, classes and functions/methods
involved in the chain of calls leading to the point of error _by name_.
I think it's reasonable to make a name a part of types for a different
reason: new types are rarely meant to be used "just once"; but also, if
during debugging any object is displayed, it's nice to be able to show,
as part of the display, "this object is of type X and ...", with X shown
as a name rather than as a complete (thus lengthy) description. (any
decent interactive shell/debugger will let you drill down into the
details as and when you need to, of course, but a well-chosen name can
be often sufficient during such interactive exploration/debugging
sessions, and therefore save time and effort).
This doesn't stop a programmer from using a meaningless name, of course,
but it does nudge things in the right direction.
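[The traceback point is concrete in Python: a function defined with def carries its own name for diagnostics, while every lambda reports only the placeholder <lambda>. A minimal sketch:]

```python
def compute_tax(amount):
    return amount * 0.19

anonymous = lambda amount: amount * 0.19

# The def-ed function's name appears usefully in tracebacks and repr():
assert compute_tax.__name__ == "compute_tax"
# Every lambda gets the same uninformative name:
assert anonymous.__name__ == "<lambda>"
```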
> Wouldn't all the other constructs benefit by having a required name as
> well?
I believe this is a delicate style call, but I agree with your
implication that a language should at least _allow_ any object to have a
name (even when such objects are more often constructed on the fly, they
could still usefully borrow the first [or, maybe, the latest] name
they're bound to, if any). If I was designing a language from scratch,
I'd probably have as the first few fields of any object _at least_...:
a cell pointing to the type object,
a utility cell for GC (reference count or generation-count +
markflag)
a cell pointing to the name object,
...rest of the object's value/state to follow...
Indeed, "given an object, how do I get its NAME" (for inspection and
debugging purposes) is the most frequently asked question on
comp.lang.python, and I've grown a bit tired of answering "you can't, an
object in general intrinsically ``has no name'', it might have many or
none at all, blah blah" -- yeah, this is technically true (in today's
Python), but there's no real reason why it should stay that way forever
(IMHO). If we at least ALLOWED named objects everywhere, this would
further promote the use of names as against mysterious "magic numbers",
since the programmer would KNOW that after
VAT_MULTIPLIER = 1.19
then displaying in a debugger or other interactive session that
PARTICULAR instance of the value 1.19 would show the name string
'VAT_MULTIPLIER' as well (or no doubt a more structured name constructed
on the fly, identifying package and module-within-package too).
As to what good practices should be more or less mandated by the
language, and what other good practices instead should be just gently
nudged towards, that's an interesting design question in each case; to
me, a cornerstone for answering it is generally _language simplicity_.
When mandating a certain good practice DETRACTS from language
simplicity, make it a matter of convention instead; when so mandating
ENHANCES language simplicity (by not needing the addition of some other
construct, otherwise unneeded in the language), go for the mandate.
Mandating names for _everything_ would complicate the language by
forcing it to provide builtin names for a lot of elementary building
blocks: so for most types of objects it's best to "gently nudge". For
functions, classes, modules, and packages, I think the naming is
important enough (as explained above) to warrant a syntax including the
name; better, therefore, not to complicate the language by providing
another different syntax in each case just to allow the name to be
omitted -- why encourage a practice that's best discouraged, at the price
of language simplicity? This DOES imply that some objects (functions,
modules, etc.) that are fundamental to the language (and needed to build others)
should be provided with a name, but then one tends to do that anyway:
what language *DOESN'T* provide (perhaps in some suitable "trigonometry"
module) elementary functions named (e.g.) sin, cos, tan, ..., to let the
user build richer ones on top of those?
Alex
Alex Martelli wrote:
> Joe Marshall <··········@gmail.com> wrote:
> ...
>> If your language allows unnamed integers, unnamed strings, unnamed
>> characters, unnamed arrays or aggregates, unnamed floats, unnamed
>> expressions, unnamed statements, unnamed argument lists, etc. why
>> *require* a name for trivial functions?
>
> I think it's reasonable to make a name a part of functions, classes and
> modules because they may often be involved in tracebacks (in case of
> uncaught errors): to me, it makes sense to let an error-diagnosing
> tracebacks display packages, modules, classes and functions/methods
> involved in the chain of calls leading to the point of error _by name_.
>
> I think it's reasonable to make a name a part of types for a different
> reason: new types are rarely meant to be used "just once"; but also, if
> during debugging any object is displayed, it's nice to be able to show,
> as part of the display, "this object is of type X and ...", with X shown
> as a name rather than as a complete (thus lengthy) description. (any
> decent interactive shell/debugger will let you drill down into the
> details as and when you need to, of course, but a well-chosen name can
> be often sufficient during such interactive exploration/debugging
> sessions, and therefore save time and effort).
Any time you want an anonymous function (or class, or type, or number)
it would be because that thing is sufficiently small and simple that the
best name for it is the code itself. In one game I worked on, there was
a function named canPerformAction_and_isNotActionInQueue. It was a
simple, one line function:
bool canPerformAction_and_isNotActionInQueue( Action action ) {
    return canPerformAction( action ) && !isActionInQueue( action );
}
There was no better, more abstract name, as the design required this
logic for a completely arbitrary reason -- so arbitrary it changed
multiple times in development. For a little while it was used in two
places. Then one of those places changed to have only the
isActionInQueue part. There was no useful abstraction to be made, and
it is in cases like these (which come up a lot when using functions as
parameters) where anonymous functions are a win.
-- MJF
M Jared Finder <·····@hpalace.com> wrote:
...
> Any time you want an anonymous function (or class, or type, or number)
> it would be because that thing is sufficiently small and simple that the
> best name for it is the code itself. In one game I worked on, there was
That's not what I see happen in practice in the real world -- please
check this thread for the guy who pointed me at some Lisp code of his to
draw pictures, and how each anonymous function his code returned had a
nice little comment (which would yield a perfectly suitable name, as I
showed in a Python translation of one of them). In the real world,
people don't choose anonymous functions only in these alleged cases
where anonymous is best -- if anonymous functions are available, they're
used in even more cases where naming would help (just as, again in the
real world, plenty of "magic numbers" sully the code which SHOULD be
named... but just don't GET named).
BTW, in your case canPerformQueuedAction seems a good name to me (I'd
probably eliminate the 'Action' part, since the _argument_ is an Action,
but I'd need to see more context to suggest the best name).
Alex
·····@mac.com (Alex Martelli) writes:
>> Any time you want an anonymous function (or class, or type, or number)
>> it would be because that thing is sufficiently small and simple that the
>> best name for it is the code itself.
> In the real world, people don't choose anonymous functions only in
> these alleged cases where anonymous is best
In the real world, people do a lot of things they shouldn't.
Any feature can be abused, and poor style is possible in any
language. I just checked my code for lambdas, and they are
exclusively short half-liners passed as parameters to higher order
functions. Naming them would only complicate the code, just like
naming (other) intermediate results would.
> if anonymous functions are available, they're used in even more
> cases where naming would help
Perhaps, but not necessarily. But how about the converse: if every
function must be named, they will be named even where naming them
hurts.
-k
--
If I haven't seen further, it is by standing in the footprints of giants
From: Stefan Nobis
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87ac9r1tem.fsf@snobis.de>
·····@mac.com (Alex Martelli) writes:
> if anonymous functions are available, they're used in even more
> cases where naming would help
Yes, you're right. But don't stop here. What about expressions? Many
people write very complex expressions that are hard to understand. A
good language should forbid such abuse and not allow expressions
with more than 2 or maybe 3 operators!
Forbid everything that might be abused and you have a perfect
language?
I like Python and I teach Python, and I'm also used to Java and C#. But
one of the best solutions to the problems of beginners and not-so-brilliant
programmers is something like the DrScheme environment:
choose your language level yourself. Everything (allowed constructs,
error messages, ...) will be adjusted to your chosen level. And
experts have everything they want and need at their fingertips. Code isn't
bloated and strewn with workarounds for language limitations (limitations
that are aimed at beginners and less capable programmers).
Why only adjust to the less capable people? Give everyone what they
need and want! :)
--
Stefan.
Stefan Nobis <······@gmx.de> wrote:
> ·····@mac.com (Alex Martelli) writes:
>
> > if anonymous functions are available, they're used in even more
> > cases where naming would help
>
> Yes, you're right. But don't stop here. What about expressions? Many
> people write very complex expressions that are hard to understand. A
> good language should forbid such abuse and not allow expressions
> with more than 2 or maybe 3 operators!
That would _complicate_ the language (by adding a rule). I repeat what
I've already stated repeatedly: a good criterion for deciding which good
practices a language should enforce and which ones it should just
facilitate is _language simplicity_. If the enforcement is done by
adding rules or constructs it's probably not worth it; if the
"enforcement" is done by NOT adding extra constructs it's a double win
(keep the language simpler AND push good practices).
Alex
M Jared Finder <·····@hpalace.com> wrote:
...
> Your reasoning, taken to the extreme, implies that an assembly language,
> by virtue of having the fewest constructs, is the best designed language
Except that the major premise is faulty! Try e.g.
<http://docs.sun.com/app/docs/doc/817-5477/6mkuavhrf#hic> and count the
number of distinct instructions -- general purpose, floating point,
SIMD, MMX, SSE, SSE2, OS support... there's *hundreds*, each with its
own rules as to what operand(s) are allowed plus variants such as (e.g.)
cmovbe{w,l,q} for "conditional move if below or equal" for word, long,
quadword (no byte variant) -- but e.g. cmpxchg{b,w,l,q} DOES have a byte
variant too, while setbe for "set if below or equal" ONLY has a byte
variant, etc, etc -- endless memorization;-).
When you set up your strawman arguments, try to have at least ONE of the
premises appear sensible, will you?-)
I never argued against keeping languages at a high level, of course
(that's why your so utterly unfounded argument would be a "strawman"
even if it WAS better founded;-).
> prone, code. I think the advantages of anonymous functions:
...
> e) making the language simpler to implement
Adding one construct (e.g., in Python, having both def and lambda with
vast semantic overlap, rather than just one) cannot "make the language
simpler to implement" -- no doubt this kind of "reasoning" (?) is what
ended up making the instruction-set architecture of the dominant
families of CPUs so bizarre, intricate, and abstruse!-)
Alex
Alex Martelli wrote:
> M Jared Finder <·····@hpalace.com> wrote:
> ...
>> Your reasoning, taken to the extreme, implies that an assembly language,
>> by virtue of having the fewest constructs, is the best designed language
>
> Except that the major premise is faulty! Try e.g.
> <http://docs.sun.com/app/docs/doc/817-5477/6mkuavhrf#hic> and count the
> number of distinct instructions -- general purpose, floating point,
> SIMD, MMX, SSE, SSE2, OS support... there's *hundreds*, each with its
> own rules as to what operand(s) are allowed plus variants such as (e.g.)
> cmovbe{w,l,q} for "conditional move if below or equal" for word, long,
> quadword (no byte variant) -- but e.g cmpxchg{b,w,l,q} DOES have a byte
> variant too, while setbe for "set if below or equal" ONLY has a byte
> variant, etc, etc -- endless memorization;-).
>
> When you set up your strawman arguments, try to have at least ONE of the
> premises appear sensible, will you?-)
>
> I never argued against keeping languages at a high level, of course
> (that's why your so utterly unfounded argument would be a "strawman"
> even if it WAS better founded;-).
>
>> prone, code. I think the advantages of anonymous functions:
> ...
>> e) making the language simpler to implement
>
> Adding one construct (e.g., in Python, having both def and lambda with
> vast semantic overlap, rather than just one) cannot "make the language
> simpler to implement" -- no doubt this kind of "reasoning" (?) is what
> ended up making the instruction-set architecture of the dominant
> families of CPUs so bizarre, intricate, and abstruse!-)
It sure can. First, let's cover the cost. I'll be measuring everything
in terms of lines of code, with the assumption that the code has been
kept readable.
Here's an implementation of lambda (anonymous functions) in Lisp based
on flet (lexically scoped functions):
(defmacro lambda (args &rest body)
  (let ((name (gensym)))
    `(flet ((,name ,args ,@body)) (function ,name))))
That's three lines of code to implement. An almost trivial amount.
Now by using anonymous functions, you can implement many other language
level features simpler. Looping can be made into a regular function
call. Branching can be made into a regular function call. Defining
virtual functions can be made into a regular function call. Anything
that deals with code blocks can be made into a regular function call.
By removing the special syntax and semantics from these language level
features and making them just plain old function calls, you can reuse the
same evaluator, optimizer, code parser, introspector, and other code
analyzing parts of your language for these (no longer) special
constructs. That's a HUGE savings, well over 100 lines of code.
Net simplification, at least 97 lines of code. For a concrete example
of this in action, see Smalltalk.
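MJF's point can be illustrated with a rough Python sketch (hypothetical helper names; Python of course still uses its built-in branching and looping underneath, so this shows only the call-site uniformity, not a real saving in the language core):

```python
def if_then_else(condition, then_thunk, else_thunk):
    # Branching as an ordinary function call: select a thunk and invoke it.
    return then_thunk() if condition else else_thunk()

def while_loop(cond_thunk, body_thunk):
    # Looping as an ordinary function call, driven by two thunks.
    while cond_thunk():
        body_thunk()

state = {'n': 0}
while_loop(lambda: state['n'] < 3,
           lambda: state.update(n=state['n'] + 1))
print(if_then_else(state['n'] == 3,
                   lambda: "looped three times",
                   lambda: "unexpected"))
```

Because every "control structure" here is just a function applied to anonymous functions, any tool that can analyze function calls can analyze these constructs too — the reuse MJF is claiming.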
-- MJF
M Jared Finder wrote:
> Alex Martelli wrote:
>> Stefan Nobis <······@gmx.de> wrote:
>>> ·····@mac.com (Alex Martelli) writes:
>>>
>>>> if anonymous functions are available, they're used in even more
>>>> cases where naming would help
>>>
>>> Yes, you're right. But don't stop here. What about expressions? Many
>>> people write very complex expressions that are hard to understand. A
>>> good language should forbid such abuse and not allow expressions
>>> with more than 2 or maybe 3 operators!
>>
>> That would _complicate_ the language (by adding a rule). I repeat what
>> I've already stated repeatedly: a good criterion for deciding which good
>> practices a language should enforce and which ones it should just
>> facilitate is _language simplicity_. If the enforcement is done by
>> adding rules or constructs it's probably not worth it; if the
>> "enforcement" is done by NOT adding extra constructs it's a double win
>> (keep the language simpler AND push good practices).
>
> Your reasoning, taken to the extreme, implies that an assembly language,
> by virtue of having the fewest constructs, is the best designed language
> ever.
Assembly languages don't have the fewest constructs; kernel languages such
as Core ML or Kernel-Oz do. In any case, I didn't read Alex's point as
being that simplicity was the only criterion on which to make decisions
about what practices a language should enforce or facilitate; just
"a good criterion".
However, IMHO anonymous lambdas do not significantly increase the complexity
of the language or of programs, and they can definitely simplify programs in
functional languages, or languages that use them for control constructs.
--
David Hopwood <····················@blueyonder.co.uk>
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <yYy8g.416$CQ6.327@fe11.lga>
Alex Martelli wrote:
> Stefan Nobis <······@gmx.de> wrote:
>
>
>>·····@mac.com (Alex Martelli) writes:
>>
>>
>>>if anonymous functions are available, they're used in even more
>>>cases where naming would help
>>
>>Yes, you're right. But don't stop here. What about expressions? Many
>>people write very complex expressions that are hard to understand. A
>>good language should forbid such abuse and not allow expressions
>>with more than 2 or maybe 3 operators!
>
>
> That would _complicate_ the language (by adding a rule). I repeat what
> I've already stated repeatedly: a good criterion for deciding which good
> practices a language should enforce and which ones it should just
> facilitate is _language simplicity_. If the enforcement is done by
> adding rules or constructs it's probably not worth it; if the
> "enforcement" is done by NOT adding extra constructs it's a double win
> (keep the language simpler AND push good practices).
Gosh, that looks like fancy footwork. But maybe I misunderstand, so I
will just ask you to clarify.
In the case of (all syntax imaginary and not meant to be Python):
if whatever = 42
    dothis
    do that
    do something else
else
    go ahead
    make my day
You do not have a problem with unnamed series of statements. But in the
case of:
treeTravers( myTree, lambda (node):
    if xxx(node)
        print "wow"
        return 1
    else
        print "yawn"
        return 0 )
...no, no good, you want a named yawnOrWow function? And though they
look similar, the justification above was that IF-ELSE was lucky enough
to get multiline branches In the Beginning, so banning it now would be
"adding a rule", whereas lambda did not get multiline In the Beginning,
so allowing it would mean "adding a construct". So by positing "adding a
rule or construct" as always bad (even if they enforce a good practice
such as naming an IF branch they are bad since one is /adding/ to the
language), the inconsistency becomes a consistency in that keeping IF
powerful and denying lambda the same power each avoids a change?
In other words, we are no longer discussing whether unnamed multi-line
statements are a problem. The question is, would adding them to lambda
mean a change?
Oh, yeah, it would. :)
hth, kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Alex Martelli wrote:
> I think it's reasonable to make a name a part of functions, classes and
> modules because they may often be involved in tracebacks (in case of
> uncaught errors): to me, it makes sense to let an error-diagnosing
> tracebacks display packages, modules, classes and functions/methods
> involved in the chain of calls leading to the point of error _by name_.
It seems to me that there's something of a circularity here. Either in your
logic if applied to language design in general, or as a self-perpetuating habit
when applied to Python in particular.
The assumption is that functions are in some sense rather "big" things -- that
each function performs a well-defined action which can be understood in
isolation. That may well be true in idiomatically written Python (I've never
cared for the look of the language myself, so I don't know what's considered
"normal"), but it isn't true in general. With the assumption, it makes sense
to say that every function /could/ have a name, and so, why not /give/ it a
name ? But without the assumption, when many little anonymous functions are
used, the idea of giving them all names appears pettifogging to the point of
idiocy. If the language and/or culture expects that all/most functions will be
named, then "little" functions (in my sense) won't be used; hence the
self-perpetuating habit. But I don't think that the underlying logic supports
that habit independently of its own self-perpetuating nature.
E.g. consider the Smalltalk code (assumed to be the body of a method):
aCollection
    do: [:each |
        each > 0 ifTrue: [^ true]].
^ false.
which iterates over a collection checking to see if any element is > 0. If so
then the method answers true ("^" -- spelled "return" in Java), otherwise it
answers false. In that code,
[^ true]
is syntactically and semantically an anonymous function, which is only invoked
if the antecedent is true (in point of fact the compiler inlines that function
away but I don't think that's relevant here). The passage beginning
[:each | ...
and reaching to the matching ] is also an anonymous function with one parameter
(each) which is applied to each element of the collection in turn. (In this
case it really is an anonymous function, even at the implementation level.)
What "name" would you give to either of them ? I don't believe that /any/ name
is possible, and certainly that no name is desirable.
In my working Smalltalk environment today, there are 60099 methods defined
across 3369 classes. In that codebase there are 38112 anonymous functions. Do
you really want to have to find names for them all ?
-- chris
Petr Prikryl wrote:
> for element in aCollection:
>     if element > 0:
>         return True
> return False
[I'm not sure whether this is supposed to be an example of some specific
language (Python ?) or just a generic illustration. I'll take it as the
latter, since it makes my point easier to express. I'll also exaggerate, just
a little...]
But now, in order to hack around the absence of a sensible and useful
feature -- /only/ in order to do so -- you have added two horrible new
complications to your language. You have introduced a special syntax to
express conditionals, and (worse!) a special syntax to express looping. Not
only does that add a huge burden of complexity to the syntax, and semantics, of
the language (and, to a lesser extent, its implementation), but it also throws
out any semblance of uniformity.
Once you've started down that route then you've lost all hope of user-defined
control structures which operate on a par with the built-in ones. And please
note that "control structures" are not limited to C-style "for" "while" etc.
E.g. in Java there's an unresolved, and irresolvable, tension between whether a
failing operation should return an error condition or throw an exception -- the
problem is that exceptions are (IMO and most other peoples' here) intended for
exceptional conditions, and for many operations "failure" is not exceptional.
One, highly effective, resolution to the problem is for the operation to be
parameterised with the action to take if it fails (defaulting to code to raise
an exception). In Java that approach, though technically possible, is totally
infeasible due to the pathetic overuse of unwanted syntax, and the underuse of
simple and powerful primitives.
E.g. can you add three-way comparisons (less-than, same-as, greater-than) to,
say, Python with corresponding three-way conditional control structures to
supplement "if" etc ? Are they on a semantic and syntactic par with the
existing ones ? In Smalltalk that is trivial (too trivial to be particularly
interesting, even), and I presume the same must be true of Lisp (though I
suspect you might be forced to use macros).
I should say that if your example /is/ in fact Python, then I believe that
language allows fairly deep hooks into the execution mechanism, so that at
least the "for" bit can be mediated by the collection itself -- which is better
than nothing, but nowhere near what I would call "good".
-- chris
Chris Uppal wrote:
> E.g. can you add three-way comparisons (less-than, same-as, greater-than) to,
> say, Python with corresponding three-way conditional control structures to
> supplement "if" etc ? Are they on a semantic and syntactic par with the
> existing ones ? In Smalltalk that is trivial (too trivial to be particularly
> interesting, even), and I presume the same must be true of Lisp (though I
> suspect you might be forced to use macros).
As an illustration, here's the definition and usage of such a numeric-if
in Lisp.
Using raw lambdas, it's ugly, but doable:
(defun fnumeric-if (value lt-body eq-body gt-body)
  (cond ((< value 0) (funcall lt-body))
        ((= value 0) (funcall eq-body))
        ((> value 0) (funcall gt-body))))

(fnumeric-if (- a b)
             (lambda () (print "a < b"))
             (lambda () (print "a = b"))
             (lambda () (print "a > b")))
A macro helps clean that up and make it look prettier:
(defmacro numeric-if (value lt-body eq-body gt-body)
  `(fnumeric-if ,value
     (lambda () ,lt-body)
     (lambda () ,eq-body)
     (lambda () ,gt-body)))

(numeric-if (- a b)
            (print "a < b")
            (print "a = b")
            (print "a > b"))
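For comparison, the same numeric-if can be written in Python as a plain function taking three thunks (a sketch with made-up names; Python has no macro layer to hide the lambdas, so the call sites stay as explicit as the "ugly" raw-lambda Lisp version):

```python
def numeric_if(value, lt_thunk, eq_thunk, gt_thunk):
    # Dispatch on the sign of `value`, invoking exactly one thunk.
    if value < 0:
        return lt_thunk()
    if value == 0:
        return eq_thunk()
    return gt_thunk()

a, b = 3, 5
print(numeric_if(a - b,
                 lambda: "a < b",
                 lambda: "a = b",
                 lambda: "a > b"))  # a - b is negative, so "a < b"
```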
-- MJF
"Chris Uppal" wrote:
> Petr Prikryl wrote:
>
> > for element in aCollection:
> >     if element > 0:
> >         return True
> > return False
>
> [I'm not sure whether this is supposed to be an example of some specific
> language (Python ?) or just a generic illustration. I'll take it as the
> latter, since it makes my point easier to express. I'll also exaggerate,
> just a little...]
Sorry, I do not know Smalltalk, but this was meant as the transcription
of your...
| E.g. consider the Smalltalk code (assumed to be the body of a method):
|
| aCollection
|     do: [:each |
|         each > 0 ifTrue: [^ true]].
| ^ false.
into Python
> But now, in order to hack around the absence of a sensible and useful
> feature -- /only/ in order to do so -- you have added two horrible new
> complications to your language. You have introduced a special syntax to
> express conditionals, and (worse!) a special syntax to express looping.
> Not only does that add a huge burden of complexity to the syntax, and
> semantics, of the language (and, to a lesser extent, its implementation),
> but it also throws out any semblance of uniformity.
I guess that it is not me who is confused here. The subject clearly
says that the thread is related to Python and to the lambda supported
by Python. It was only crossposted to other groups, and I did
not want to remove them -- other people may want to read
the thread in the other newsgroups.
So, I did not introduce any horrible syntax, nor a looping
construct that would look strange to people used to
classical procedural languages. The lambda syntax
in Python is the thing that could be viewed as a complication,
not the "for" loop or the "if" construction.
If you take any English speaking human (even the non-programmer),
I could bet that the Python transcription will be more understandable
than your Smalltalk example.
> E.g. in Java there's an unresolved, and irresolvable, tension between
> whether a failing operation should return an error condition or throw an
> exception [...].
It is more a design problem than a language problem. And it is also
an implementation problem (i.e., what is the cost of exceptions
in comparison with the other code). In Python, exceptions
are used intensively.
> E.g. can you add three-way comparisons (less-than, same-as, greater-than)
> to, say, Python with corresponding three-way conditional control structures
> to supplement "if" etc ? Are they on a semantic and syntactic par with the
> existing ones ? In Smalltalk that is trivial (too trivial to be particularly
> interesting, even), and I presume the same must be true of Lisp (though I
> suspect you might be forced to use macros).
Such a built-in function already exists in Python. But you could add it
by hand if it did not:
def cmp(x, y):
    if x < y:
        return -1
    if x == y:
        return 0
    return 1
and the "if" supplement (the "switch" or "case" command) could be
replaced easily in Python using the hash-table (dictionary) structure.
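The dictionary-as-switch idiom mentioned here can be sketched like this (illustrative names only; dict.get supplies the default branch that a "case" statement would have):

```python
def signal_action(color):
    # A dict of thunks standing in for a switch/case statement.
    cases = {
        'red':   lambda: 'stop',
        'amber': lambda: 'slow down',
        'green': lambda: 'go',
    }
    # Look up the matching thunk (or a default) and invoke it.
    return cases.get(color, lambda: 'unknown signal')()

print(signal_action('red'))
print(signal_action('blue'))
```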
> I should say that if your example /is/ in fact Python, then I believe that
> language allows fairly deep hooks into the execution mechanism, so that at
> least the "for" bit can be mediated by the collection itself -- which is
> better than nothing, but nowhere near what I would call "good".
It is a dual point of view. Should the collection be passive data, or not?
I believe that the "pure" object-oriented view (there are no functions,
only object methods) is not very practical and does not reflect
well a big part of the reality that is simulated by programs.
Python and C++, for example, allow mixing functions and objects.
You should try Python. The "for" construct iterates through
a sequence or through all values of a generator, thus making
the for loop much more generic than, for example, in C or other
languages.
Every language forms the way of thinking. Every language has its strong
and weak points. Every language has its followers and haters.
Not every language is practical enough to fly around the Earth
in the space ship.
pepr
(Sorry for my broken English.)
Alex Martelli wrote:
> Joe Marshall <··········@gmail.com> wrote:
> ...
> > The problem is that a `name' is a mapping from a symbolic identifier to
> > an object and that this mapping must either be global (with the
> > attendant name collision issues) or within a context (with the
> > attendant question of `in which context').
>
> Why is that a problem? Even for so-called "global" names, Python
> supports a structured, hierarchical namespace, so there can never be any
> collision between the "globals" of distinct modules (including modules
> which happen to have the same name but live in distinct packages or
> subpackages) -- I did mention that names could usefully be displayed in
> some structured form such as apackage.somemodule.thefunction but perhaps
> I was too tangential about it;-).
Can you refer to inner functions from the global context? Suppose I
have this Python code:
def make_adder(x):
    def adder_func(y):
        sum = x + y
        return sum
    return adder_func
Can I refer to the inner adder_func in any meaningful way?
>
>
> > Matthias Felleisen once suggested that *every* internal function should
> > be named. I just said `continuations'. He immediately amended his
> > statement with `except those'.
>
> If I used continuations (I assume you mean in the call/cc sense rather
> than some in which I'm not familiar?) I might feel the same way, or not,
> but I don't (alas), so I can't really argue the point either way for
> lack of real-world experience.
I meant continuations as in the receiver function in
continuation-passing-style. If you have a function that has to act
differently in response to certain conditions, and you want to
parameterize the behavior, then one possibility is to pass one or more
thunks to the function in addition to the normal arguments. The
function acts by selecting and invoking one of the thunks. A classic
example is table lookup. It is often the case you wish to proceed
differently depending upon whether a key exists in a table or not.
There are several ways to provide this functionality. One is to have a
separate `key-exists?' predicate. Another is to have a special return
value for `key not found'. Another is to throw an exception when a key
is not found. There are obvious advantages and drawbacks to all of
these methods. By using continuation-passing-style, we can
parameterize how the table lookup proceeds once it determines whether
or not the key is found. We have the lookup procedure take two thunks
in addition to the key. If the key is found, the first thunk is
invoked on the associated value. If the key is not found, the second
thunk is invoked. We can subsume all the previous behaviors:
(define (key-exists? key table)
  (lookup key table
          (lambda (value) #t)   ;; if found, ignore value, return true
          (lambda () #f)))      ;; if not found, return false.

(define (option1 key table)
  (lookup key table
          (lambda (value) value)
          (lambda () 'key-not-found)))

(define (option2 key table)
  (lookup key table
          (lambda (value) value)
          (lambda () (raise 'key-not-found-exception))))

(define (option3 key table default-value)
  (lookup key table
          (lambda (value) value)
          (lambda () default-value)))
The unnamed functions act in this regard much like a `local label'. We
wrap two chunks of code in a lambda and the lookup function `jumps' to
the appropriate chunk. (If the compiler knows about thunks, the
generated assembly code really will have local labels and jump
instructions. It can be quite efficient.)
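Joe's lookup protocol translates directly into Python (a sketch with hypothetical names, using a dict as the table):

```python
def lookup(key, table, if_found, if_not_found):
    # Parameterize what happens on hit and on miss with two callbacks.
    if key in table:
        return if_found(table[key])
    return if_not_found()

def key_exists(key, table):
    return lookup(key, table, lambda value: True, lambda: False)

def option3(key, table, default_value):
    return lookup(key, table, lambda value: value, lambda: default_value)

t = {'answer': 42}
print(key_exists('answer', t))      # True
print(option3('missing', t, None))  # None
```

As in the Scheme version, all the hit/miss behaviors are recovered by varying the two anonymous callbacks rather than by multiplying return conventions or exceptions.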
This may look odd and cumbersome, but with a little practice the
lambdas fade into the background and it becomes easy to read.
My point with Matthias, however, was that defining all these
continuations (the thunks) as named internal functions was not only
cumbersome, but it obscured the control flow. Notice:
(define (named-option3 key table default-value)
  (define (if-found value)
    value)
  (define (if-not-found)
    default-value)
  (lookup key table if-found if-not-found))
When we enter the function, we skip down to the bottom (past the
internal definitions) to run lookup, which transfers control to a
function defined earlier in the code.
There are many reasons to avoid this style in Python, so this probably
won't win you over, but my point is that there are times where
anonymous functions have an advantage over the named alternative and
that disallowing anonymous functions can be as cumbersome as
disallowing anonymous integers.
Joe Marshall <··········@gmail.com> wrote:
> Alex Martelli wrote:
> > Joe Marshall <··········@gmail.com> wrote:
> > ...
> > > The problem is that a `name' is a mapping from a symbolic identifier to
> > > an object and that this mapping must either be global (with the
> > > attendant name collision issues) or within a context (with the
> > > attendant question of `in which context').
> >
> > Why is that a problem? Even for so-called "global" names, Python
> > supports a structured, hierarchical namespace, so there can never be any
> > collision between the "globals" of distinct modules (including modules
> > which happen to have the same name but live in distinct packages or
> > subpackages) -- I did mention that names could usefully be displayed in
> > some structured form such as apackage.somemodule.thefunction but perhaps
> > I was too tangential about it;-).
>
> Can you refer to inner functions from the global context? Suppose I
> have this Python code:
>
> def make_adder(x):
>     def adder_func(y):
>         sum = x + y
>         return sum
>     return adder_func
>
> Can I refer to the inner adder_func in any meaningful way?
You can refer to one instance/closure (which make_adder returns), of
course -- you can't refer to the def statement itself (but that's a
statement, ready to create a function/closure each time it executes, not
a function, thus, not an object) except through introspection. Maybe I
don't understand what you mean by this question...
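For what it's worth, the closure that make_adder returns is an ordinary function object, so the kind of introspection Alex alludes to looks like this (a sketch, not from the original posts):

```python
def make_adder(x):
    def adder_func(y):
        return x + y
    return adder_func

add5 = make_adder(5)

# The returned closure is a first-class object you can poke at:
print(add5(3))                             # 8
print(add5.__name__)                       # adder_func
print(add5.__closure__[0].cell_contents)   # 5 -- the captured x
```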
> > If I used continuations (I assume you mean in the call/cc sense rather
> > than some in which I'm not familiar?) I might feel the same way, or not,
> > but I don't (alas), so I can't really argue the point either way for
> > lack of real-world experience.
>
> I meant continuations as in the receiver function in
> continuation-passing-style. If you have a function that has to act
> differently in response to certain conditions, and you want to
> parameterize the behavior, then one possibility is to pass one or more
> thunks to the function in addition to the normal arguments. The
Ah, OK, I would refer to this as "callbacks", since no
call-with-continuation is involved, just ordinary function calls; your
use case, while pretty alien to Python's typical style, isn't all that
different from other uses of callbacks which _are_ very popular in
Python (cfr the key= argument to the sort methods of list for a typical
example). I would guess that callbacks of all kinds (with absolutely
trivial functions) are the one use case which swayed Guido to keep lambda
(strictly limited to just one expression -- anything more is presumably
worth naming), as well as to add an if/else ternary-operator. I still
disagree deeply, as you guessed I would -- if I had to work with a
framework using callbacks in your style, I'd name my callbacks, and I
wish Python's functools module provided for the elementary cases, such
as:
def constant(k):
    def ignore_args(*a): return k
    return ignore_args

def identity(v): return v
and so on -- I find, for example, that to translate your
> (define (option3 key table default-value)
>   (lookup key table
>     (lambda (value) value)
>     (lambda () default-value)))
I prefer to use:
def option3(key, table, default_value):
    return lookup(key, table, identity, constant(default_value))
as being more readable than:
def option3(key, table, default_value):
    return lookup(key, table, lambda v: v, lambda: default_value)
After all, if I have in >1 place in my code the construct "lambda v: v"
(and if I'm using a framework that requires a lot of function passing
I'm likely to be!), the "Don't Repeat Yourself" (DRY) principle suggests
expressing the construct *ONCE*, naming it, and using the name.
By providing unnamed functions, the language aids and abets violations
of DRY, while having the library provide named elementary functions (in
the already-existing appropriate module) reinforces and strongly
supports DRY, which, IMHO, is a very good thing.
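As a runnable sketch of the named-elementary-functions style Alex is wishing for (note: Python's functools does not actually provide `identity` or `constant`; these are the hand-rolled versions from his post):

```python
def constant(k):
    # returns a function that ignores its arguments and yields k
    def ignore_args(*a):
        return k
    return ignore_args

def identity(v):
    return v

# The DRY payoff: one named helper, reused wherever a trivial
# callback is needed, instead of repeating "lambda v: v".
print(identity(42))                  # 42
print(constant(7)("any", "args"))    # 7
```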
Alex
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <ruo8g.8$CQ6.1@fe11.lga>
Ketil Malde wrote:
>
> Sometimes the best documentation is the code itself. Sometimes the
> best name for a function is the code itself.
Absolutely. When I take over someone else's code I begin by deleting all
the comments. Then I read the code. If a variable or function name makes
no sense (once I have figured out what they /really/ do) I do a global
change. Pretty soon the system is "documented". And I usually find a
couple of bugs as the renaming produces things like:
    count = count + weight
I think one good argument for anonymous functions is a hefty Cells
application, with literally hundreds of rules. The context is set by the
instance and slot name, and as you say, the rule speaks for itself:
(make-instance 'frame-widget
  :bounds (c? (apply 'rect-union (all-bounds (subwidgets self)))))
Why do I have to give that a name? And if the algorithm gets hairier,
well, why is the reader looking at my code? If the reader is debugging
or intending to modify the rule, they damn well better be looking at the
code, not the name and not the comments. (Never a problem with my code. <g>)
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
From: Takehiko Abe
Subject: comments (was Re: A critic of Guido's blog on Python's lambda)
Date:
Message-ID: <keke-1305061809430001@192.168.1.2>
Ken Tilton wrote:
> Absolutely. When I take over someone else's code I begin by deleting all
> the comments. Then I read the code.
Is this a great programming tip I've never heard before? Or
a mere joke?
From: Ken Tilton
Subject: Re: comments (was Re: A critic of Guido's blog on Python's lambda)
Date:
Message-ID: <Dek9g.308$Id.78@fe10.lga>
Takehiko Abe wrote:
> Ken Tilton wrote:
>
>
>>Absolutely. When I take over someone else's code I begin by deleting all
>>the comments. Then I read the code.
>
>
> Is this a great programming tip I've never heard before? Or
> a mere joke?
Somewhere in between. I do not recall ever sweeping through an entire
code base deleting all the comments, I just delete them from stretches
of code on which I have to work so I can see the damn code.
Recall that most people write horribly, and that natural language is
anyway naturally ambiguous. Note also that comments do not run, code
runs. Comments do not always (often? ever?) get maintained. Some
comments state the obvious, especially for folks from the
comment-every-line school, and do no more than obscure the landscape.
Finally, if I am now the maintainer of a project, guess what? If there
is any non-obvious code I need to understand it, and, yeah, I know how
to read code. As I said, if the code is not readable, rectification of
names is an entertaining and insanely productive way of studying code.
Just reading it puts one to sleep.
kenneth
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
From: Takehiko Abe
Subject: Re: comments (was Re: A critic of Guido's blog on Python's lambda)
Date:
Message-ID: <keke-1605060017170001@192.168.1.2>
Ken Tilton wrote:
> >>Absolutely. When I take over someone else's code I begin by deleting all
> >>the comments. Then I read the code.
> >
> > Is this a great programming tip I've never heard before? Or
> > a mere joke?
>
> Somewhere in between. I do not recall ever sweeping through an entire
> code base deleting all the comments, I just delete them from stretches
> of code on which I have to work so I can see the damn code.
Somehow I thought you wrote that you delete code without reading
them. But you didn't. It was in my head.
>
> Recall that most people write horribly, and that natural language is
> anyway naturally ambiguous. Note also that comments do not run, code
> runs. Comments do not always (often? ever?) get maintained. Some
> comments state the obvious, especially for folks from the
> comment-every-line school, and do no more than obscure the landscape.
> Finally, if I am now the maintainer of a project, guess what? If there
> is any non-obvious code I need to understand it, and, yeah, I know how
> to read code.
Code only tells you what it does. Sometimes it doesn't tell you (or it
is not obvious) why it does what it does.
Recently I put in lots of comments to explain why that ugly code is
necessary -- it's because of buggy API calls. (They are not my fault.)
And you delete them? I weep. You must give me a chance to explain!
> As I said, if the code is not readable, rectification of
> names is an entertaining and insanely productive way of studying code.
> Just reading it puts one to sleep.
>
> kenneth
>>>>> "Takehiko" == Takehiko Abe <····@gol.com> writes:
Takehiko> Ken Tilton wrote:
>> Absolutely. When I take over someone else's code I begin by
>> deleting all the comments. Then I read the code.
Takehiko> Is this a great programming tip I've never heard before?
Takehiko> Or a mere joke?
I've done it. I don't do it regularly, but I have Emacs lisp lying
around that does it for C code, from when I had to clean up a
co-worker's horror. It removes the comments, and then re-indents it.
--
"I said, `Shut up!' " Ms. Glass recalled ... " `You do not!
Oh my God! Oh my God! Oh my God!' So I went to Nina, my boss, and
said, `Oh my God! Oh my God! Oh my God!' "
--- Julie Salamon, in the New York Times
Joe Marshall wrote:
> Alex Martelli wrote:
> Most languages allow `unnamed numbers'.  The `VAT_MULTIPLIER' argument
> is a strawman.  Would you want to have to use a special syntax to name
> the increment in a loop?
>
>   defnumber zero 0
>   defnumber one { successor (zero); }
>
>   for (int i = zero; i < limit; i += one) { ...}
>
> If your language allows unnamed integers, unnamed strings, unnamed
> characters, unnamed arrays or aggregates, unnamed floats, unnamed
> expressions, unnamed statements, unnamed argument lists, etc., why
> *require* a name for trivial functions?
> Wouldn't all the other constructs benefit by having a required name as
> well?
>
Is this a Slippery Slope fallacious argument?
(http://c2.com/cgi/wiki?SlipperySlope)
"if python required you to name every function then soon it will
require you to name every number, every string, every immediate result,
etc. And we know that is bad. Therefore requiring you to name your
function is bad!!!! So Python is bad!!!!"
How about:
If Common Lisp lets you use unnamed function, then soon everyone will
start not naming their function. Then soon they will start not naming
their variable, not naming their magic number, not naming any of their
class, not naming any function, and then all Common Lisp program will
become one big mess. And we know that is bad. So allowing unnamed
function is bad!!!! So Common Lisp is bad!!!!!
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <z9V7g.119$u17.21@fe08.lga>
Pisin Bootvong wrote:
> Joe Marshall wrote:
>
>>Alex Martelli wrote:
>>Most languages allow `unnamed numbers'. The `VAT_MULTIPLIER' argument
>>is a
>>strawman. Would you want to have to use a special syntax to name the
>>increment
>>in loop?
>>
>> defnumber zero 0
>> defnumber one { successor (zero); }
>>
>> for (int i = zero; i < limit; i += one) { ...}
>>
>>If your language allows unnamed integers, unnamed strings, unnamed
>>characters, unnamed arrays or aggregates, unnamed floats, unnamed
>>expressions, unnamed statements, unnamed argument lists, etc. why
>>*require* a name for trivial functions?
>>Wouldn't all the other constructs benefit by having a required name as
>>well?
>>
>
>
> Is this a Slippery Slope fallacious argument?
> (http://c2.com/cgi/wiki?SlipperySlope)
>
> "if python required you to name every function then soon it will
> require you to name every number, every string, every immediate result,
> etc. And we know that is bad. Therefore requiring you to name your
> function is bad!!!! So Python is bad!!!!"
>
>
> How about:
>
> If Common Lisp lets you use unnamed function, then soon everyone will
> start not naming their function. Then soon they will start not naming
> their variable, not naming their magic number, not naming any of their
> class, not naming any function, and then all Common Lisp program will
> become one big mess. And we know that is bad. So allowing unnamed
> function is bad!!!! So Common Lisp is bad!!!!!
Funny you should mention that. Cells (obviously) have turned out to be a
gold mine for me in terms of development. And they have exactly one
non-limiting limitation: once you start using them, you have to use them
almost everywhere, because they create a different dataflow, or better
put, the dataflow replaces the control flow of imperative programming,
so there is no way to add a little subsection of functionality with
imperative code because all the action is over in dataflow-land.
I call it non-limiting because it is more like a healthy discipline: as
a consequence, all application semantics end up expressed as so many
discrete little cell rules. Which brings me to the punch line...
try to debug an app where all the code is in anonymous functions!!!
well, it was not a complete disaster because by hook or by crook one
could figure out which rule was at fault even in the worst case, and
most of the time there was not much question. But still...
well, it took me embarrassingly long to notice something. I never
actually coded (lambda (self) yada yada) for a rule. I always used a
macro: (c? yada yada)
This was great because it is more succinct and because once I had a
couple hundred of these I had little problem making serious overhauls to
the implementation. And of course if you know Lisp and macros...
duhhhhh! They operate on the code! So in two seconds I added a new slot
to a Cell called "code", and part of the macro expansion was to stuff
the source code into the code slot. ie...
The code: (c? (yada yada yada))
Becomes:
(make-c-dependent
  :code '((yada yada yada))
  :value-state :unevaluated
  :rule (c-lambda (yada yada yada)))
c-lambda? I have a few of those c? macros, so I "submacro" the necessary
lambda form:
(lambda (slot-c &aux (self (c-model slot-c)) (.cache (c-value slot-c)))
  (declare (ignorable .cache self))
  (yada yada yada))
I almost never have to look at that code slot (as I said, most of the
time I can tell from the instance class and the slot name which rule I
screwed up) but when I am stumped, I just inspect the source code. :)
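A rough Python analogue of stuffing a rule's source into a "code" slot next to the compiled closure (entirely my sketch, not actual Cells code; the class and helper names are invented):

```python
# Keep the source text of a rule next to the callable built from it,
# so a debugger can inspect the source later -- the same trick the
# c? macro pulls off in Lisp.
class CDependent:
    def __init__(self, code, rule):
        self.code = code    # the source text, for inspection
        self.rule = rule    # the compiled closure

def c(src):
    # stand-in for the c? macro; eval is used only for this sketch
    return CDependent(src, eval("lambda self: " + src))

rule = c("self * 2")
print(rule.rule(21))   # 42
print(rule.code)       # self * 2
```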
Oh, wait, this is not the "Should Python have macros" thread, is it?
:)
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Op 2006-05-09, Pisin Bootvong schreef <··········@gmail.com>:
>
> Joe Marshall wrote:
>> Alex Martelli wrote:
>> Most languages allow `unnamed numbers'. The `VAT_MULTIPLIER' argument
>> is a
>> strawman. Would you want to have to use a special syntax to name the
>> increment
>> in loop?
>>
>> defnumber zero 0
>> defnumber one { successor (zero); }
>>
>> for (int i = zero; i < limit; i += one) { ...}
>>
>> If your language allows unnamed integers, unnamed strings, unnamed
>> characters, unnamed arrays or aggregates, unnamed floats, unnamed
>> expressions, unnamed statements, unnamed argument lists, etc. why
>> *require* a name for trivial functions?
>> Wouldn't all the other constructs benefit by having a required name as
>> well?
>>
>
> Is this a Slippery Slope fallacious argument?
> (http://c2.com/cgi/wiki?SlipperySlope)
No it is not.
> "if python required you to name every function then soon it will
> require you to name every number, every string, every immediate result,
> etc. And we know that is bad. Therefore requiring you to name your
> function is bad!!!! So Python is bad!!!!"
I think this is a strawman. IMO requiring a function to be named can
make things cumbersome.
I don't suppose anyone thinks the following is good practice.
one = 1.
lst.append(one)
Yet this practice is forced upon you in a number of cases when
you require functions to be named. Look at the following:
def incr_cnt_by_one(obj):
    obj.cnt += 1

treat_all(lst, incr_cnt_by_one)
So the question I have is: Why is requiring me to give this function
a name considered a good thing, when it leads to a situation that
is considered bad practice in case of a number.
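For concreteness, here is the kind of higher-order `treat_all` Antoon seems to have in mind (the helper and the class are my assumptions, added to make the fragment runnable):

```python
# Hypothetical treat_all: apply a callable to every element.
def treat_all(lst, func):
    for obj in lst:
        func(obj)

class Obj:
    def __init__(self):
        self.cnt = 0

lst = [Obj(), Obj(), Obj()]
# With anonymous functions the one-shot callback needs no name:
treat_all(lst, lambda obj: setattr(obj, 'cnt', obj.cnt + 1))
print([obj.cnt for obj in lst])   # [1, 1, 1]
```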
--
Antoon Pardon
Antoon Pardon wrote:
> Op 2006-05-09, Pisin Bootvong schreef <··········@gmail.com>:
> > Is this a Slippery Slope fallacious argument?
> > (http://c2.com/cgi/wiki?SlipperySlope)
>
> No it is not.
>
> [...]
>
> So the question I have is: Why is requiring me to give this function
> a name considered a good thing, when it leads to a situation that
> is considered bad practice in case of a number.
>
> --
Slippery Slope::
"Argumentation that A is bad, because A might lead to B, and B
to C, and we all know C is very bad."
> Why is requiring me to give this function
> a name considered a good thing, when it leads to a situation that
> is considered bad practice in case of a number.
A === "requiring me to give function a name"
no B
C === "requiring me to give number a name"
"Argumentation that requiring one to give function a name is bad,
because that might lead to requiring one to give number a name, and we
all know that that is very bad."
Now you tell me which part of that is not Slippery slope argument.
-- Or are you trying to make a sarcastic joke? I'm sorry if I didn't
get it. --
Op 2006-05-09, Pisin Bootvong schreef <··········@gmail.com>:
>
> Antoon Pardon wrote:
>> Op 2006-05-09, Pisin Bootvong schreef <··········@gmail.com>:
>> > Is this a Slippery Slope fallacious argument?
>> > (http://c2.com/cgi/wiki?SlipperySlope)
>>
>> No it is not.
>>
>> [...]
>>
>> So the question I have is: Why is requiring me to give this function
>> a name considered a good thing, when it leads to a situation that
>> is considered bad practice in case of a number.
>>
>> --
>
> Slippery Slope::
> "Argumentation that A is bad, because A might lead to B, and B
> to C, and we all know C is very bad."
But I have seen no one here argue that requiring functions to be named
leads to requiring all variables to be named.
>> Why is requiring me to give this function
>> a name considered a good thing, when it leads to a situation that
>> is considered bad practice in case of a number.
>
> A === "requiring me to give function a name"
> no B
> C === "requiring me to give number a name"
>
> "Argumentation that requiring one to give function a name is bad,
> because that might lead to requiring one to give number a name, and we
> all know that that is very bad."
That is not the arguement I'm making.
The argument is that a particular practice is considered bad coding
(with a number given as the example), and that requiring a name for a
function almost makes such a practice inevitable (for certain functions
used as parameters).
--
Antoon Pardon
Pisin Bootvong wrote:
> Slippery Slope::
> "Argumentation that A is bad, because A might lead to B, and B
> to C, and we all know C is very bad."
For the Slippery Slope criticism to be applicable, there would have to be some
suggestion that removing anonymous functions /would actually/ (tend to) lead to
removing anonymous values in general. There was no such suggestion.
The form of the argument was more like reasoning by analogy: if context A has
features like context B, and in B some feature is known to be good (bad) then
the analogous feature in A is also good (bad). In that case an attack on the
validity of the argument would centre on the relevance and accuracy of the
analogy.
Alternatively the argument might be seen as a generalisation/specialisation
approach. Functions are special cases of the more general notion of values.
We all agree that anonymous values are a good thing, so anonymous functions
should be too. If you parse the argument like that, then the attack should
centre on showing that functions have relevant special features which are not
shared by values in general, and so that we cannot validly deduce that
anonymous functions are good.
-- chris
From: Patrick May
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <m264kguu4c.fsf@Dagney.local>
·····@mac.com (Alex Martelli) writes:
> ...an alleged reply to me, which in fact quotes (and responds to)
> only to statements by Brian, without mentioning Brian...
>
> Mr May, it seems that you're badly confused regarding Usenet's
> quoting conventions.
It seems that someone pisses in your cornflakes nearly every
morning.
For the record, I was attempting to respond to your post which I
only saw quoted in another message. Please excuse any accidental
misquoting.
>> Using lambda in an expression communicates the fact that it
>> will be used only in the scope of that expression. Another benefit
>> is that declaration at the point of use means that all necessary
>> context is available without having to look elsewhere. Those are
>> two pragmatic benefits.
>
> You still need to look a little bit upwards to the "point of use",
> almost invariably, to see what's bound to which names -- so, you DO
> "have to look elsewhere", nullifying this alleged benefit -- looking at
> the def statement, immediately before the "point of use", is really no
> pragmatic cost when you have to go further up to get the context for all
> other names used (are they arguments of this function, variables from a
> lexically-containing outer function, assigned somewhere...), which is
> almost always.
It appears that you write much longer functions than I generally
do. Requiring that all functions be named adds even more to the
clutter.
> And if you think it's an important pragmatic advantage to limit
> "potential scope" drastically, nothing stops you from wrapping
> functions just for that purpose around your intended scope
Or, I could just use a language that supports unnamed functions.
> Your "pragmatic benefits", if such they were, would also apply to the
> issue of "magic numbers",
That claim is, frankly, silly. A function is far more
understandable without a name than a value like 1.19 in isolation.
The situations aren't remotely comparable.
>> >> 3. It adds another construction to the language.
>>
>> That's a very minimal cost relative to the benefits.
>
> To my view of thinking, offering multiple semantically equivalent
> ways (or, perhaps worse, "nearly equivalent but with subtle
> differences" ones) to perform identical tasks is a *HUGE* conceptual
> cost: I like languages that are and stay SMALL and SIMPLE.
Like Scheme?
Regards,
Patrick
------------------------------------------------------------------------
S P Engineering, Inc. | The experts in large scale distributed OO
| systems design and implementation.
···@spe.com | (C++, Java, Common Lisp, Jini, CORBA, UML)
Patrick May <···@spe.com> wrote:
> ·····@mac.com (Alex Martelli) writes:
> > ...an alleged reply to me, which in fact quotes (and responds to)
> > only to statements by Brian, without mentioning Brian...
> >
> > Mr May, it seems that you're badly confused regarding Usenet's
> > quoting conventions.
>
> It seems that someone pisses in your cornflakes nearly every
> morning.
>
> For the record, I was attempting to respond to your post which I
> only saw quoted in another message. Please excuse any accidental
> misquoting.
Your message was an immediate followup to mine, but all the text you
quoted in it was by Brian (w/o mentioning him) -- you quoted no text
written by me.
> > Your "pragmatic benefits", if such they were, would also apply to the
> > issue of "magic numbers",
>
> That claim is, frankly, silly. A function is far more
> understandable without a name than a value like 1.19 in isolation.
> The situations aren't remotely comparable.
I think the comparability is definitely there. Somebody asked me about
translating a bunch of Lisp he'd written (he later admitted he had
misunderstood the power of python's def, and that it lets one do all
he's using unnamed functions for); well, each of those HOF's returns an
unnamed function *with a nice short comment explaining WHAT it does*.
The code would be of far worse quality without the nice short comments,
but it would be much better if the comments were turned into *NAMES*
(allowing easier inspection in interactive development, debugging
including examination of tracebacks, etc). What's the *POINT* of coding
(python syntax):
def blank(*a):
    " return a blank picture "
    return lambda *ignore_args: []
rather than:
def blank(*a):
    def blank_picture(*ignore_args): return []
    return blank_picture
and so forth? The former is obscure (ok, it's an anonymous function
taking and ignoring arbitrary args and returning an empty list, but WHY,
WHAT DOES IT MEAN?!), except for the explanatory comment; the latter
clearly defines the purpose of the returned-function by its name. The
situation is exactly parallel to "magic numbers", as in:
total *= 1.19
is entirely mysterious (OK, total, is being multiplied by 1.19, but WHY,
WHAT DOES IT MEAN?!), better is
# augment total by VAT
total *= 1.19
and better still
VAT_MULTIPLIER = 1.19
total *= VAT_MULTIPLIER
A comment is better than nothing (given that the 1.19 constant, or the
function ignoring its arguments and returning empty list, are mysterious
in their purpose), a name is better still.
> > cost: I like languages that are and stay SMALL and SIMPLE.
>
> Like Scheme?
Didn't want to trigger some flamewar;-), but, yes, if that was my only
choice, I'd much rather use small, simple Scheme than huge, complicated,
rich, powerful Common Lisp. ((But in this case I'm biased by early
experiences, since when I learned and used Lisp-ish languages there WAS
no Common Lisp, while Scheme was already there, although not quite the
same language level as today, I'm sure;-)).
Alex
Alex Martelli wrote:
yes, if that was my only
> choice, I'd much rather use small, simple Scheme than huge, complicated,
> rich, powerful Common Lisp. ((But in this case I'm biased by early
> experiences, since when I learned and used Lisp-ish languages there WAS
> no Common Lisp, while Scheme was already there, although not quite the
> same language level as today, I'm sure;-)).
Alas, today Scheme is not minimal at all. I mean, the document
describing the standard is short, but real implementations are pretty
rich.
I am also surprised by your claim in this thread that Scheme macros are
simpler than Common Lisp macros; perhaps you are not familiar with
syntax-case.
BTW, there is still research going on on macros, for instance look at
http://srfi.schemers.org/srfi-72/srfi-72.html which is pretty nice.
Just to bring some info in yet another useless usenet flamewar.
Michele Simionato
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <eF87g.503$SV1.75@fe10.lga>
Ken Tilton wrote:
>
> Come on, try just one meaty Common Lisp project at Google. Have someone
> port Cells to Python. I got halfway done but decided I would rather be
> doing Lisp. uh-oh. Does Python have anything like special variables? :)
Omigod. I scare myself sometimes. This would be a great Summer of Code
project. Port Cells (see sig) to Python. Trust me, this is Silver Bullet
stuff. (Brooks was wrong on that.)
If a strong Pythonista wants to submit a proposal, er, move fast. I am
mentoring through LispNYC: http://www.lispnyc.org/soc.clp
Gotta be all over Python metaclasses, and everything else pure Python.
PyGtk would be a good idea for the demo, which will involve a GUI mini
app. Just gotta be able to /read/ Common Lisp.
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
From: Cameron Laird
Subject: Python's DSLs (was: A critic of Guido's blog on Python's lambda)
Date:
Message-ID: <3mh4j3-9pm.ln1@lairds.us>
In article <·······························@yahoo.com>,
Alex Martelli <·······@yahoo.com> wrote:
.
.
.
>Of course, the choice of Python does mean that, when we really truly
>need a "domain specific little language", we have to implement it as a
>language in its own right, rather than piggybacking it on top of a
>general-purpose language as Lisp would no doubt afford; see
><http://labs.google.com/papers/sawzall.html> for such a DSLL developed
>at Google. However, I think this tradeoff is worthwhile, and, in
>particular, does not impede scaling.
>
>
>Alex
You lost me, Alex.
I recognize that most of this thread has been far away, in the
land of the anonymity of function definitions, and so on. I've
redirected follow-ups to clp.
On this one isolated matter, though, I'm confused, Alex: I sure
think *I* have been writing DSLs as specializations of Python,
and NOT as "a language in its own right". Have I been fooling
myself, or are you making the point that Lisp-based DSLs live in
a larger syntactic universe than Python's, or ...?
Alex Martelli wrote:
> Ken Tilton <·········@gmail.com> wrote:
> ...
> > But the key in the whole thread is simply that indentation will not
> > scale. Nor will Python.
>
> Absolutely. That's why firms who are interested in building *seriously*
> large scale systems, like my employer (and supplier of your free mail
> account), would never, EVER use Python,
So how much Python code runs when I check my gmail?
> nor employ in prominent
> positions such people as the language's inventor and BDFL, the author of
> the most used checking tool for it, and the author of the best-selling
> reference book about that language; and, for that matter, a Director of
> Search Quality who, while personally a world-renowned expert of AI and
> LISP, is on record as supporting Python very strongly, and publically
> stating its importance to said employer.
Doesn't Google also employ such people as the inventor of Limbo
programming language, one of the inventors of Dylan, and a Smalltalk
expert?
Joe Marshall <··········@gmail.com> wrote:
...
> Doesn't Google also employ such people as the inventor of Limbo
> programming language, one of the inventors of Dylan, and a Smalltalk
> expert?
...not to mention Lisp gurus (such as Peter Norvig), C++ gurus (such as
Matt Austern) and Java ones (such as Josh Bloch) [[and I'm certainly
forgetting many!]].
The difference, if any, is that gurus of Java, C++ and Python get to
practice and/or keep developing their respectively favorite languages
(since those three are the "blessed" general purpose languages for
Google - I say "general purpose" to avoid listing javascript for
within-browser interactivity, SQL for databases, XML for data
interchange, HTML for web output, &c, &c), while the gurus of Lisp,
Limbo, Dylan and Smalltalk don't (Rob Pike, for example, is one of the
architects of sawzall -- I already pointed to the whitepaper on that
special-purpose language, and he co-authored that paper, too).
Alex
>>>>> "Alex" == Alex Martelli <·····@mac.com> writes:
Alex> The difference, if any, is that gurus of Java, C++ and Python get to
Alex> practice and/or keep developing their respectively favorite languages
Alex> (since those three are the "blessed" general purpose languages for
Alex> Google - I say "general purpose" to avoid listing javascript for
Alex> within-browser interactivity, SQL for databases, XML for data
Alex> interchange, HTML for web output, &c, &c), while the gurus of Lisp,
Alex> Limbo, Dylan and Smalltalk don't (Rob Pike, for example, is one of the
Alex> architects of sawzall -- I already pointed to the whitepaper on that
Alex> special-purpose language, and he co-authored that paper, too).
That's crazy. Some of the key developers of Smalltalk continue to work
on the Squeak project (Alan Kay, Dan Ingalls, and I'm leaving someone
out, I know it...). So please remove Smalltalk from that list.
--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<······@stonehenge.com> <URL:http://www.stonehenge.com/merlyn/>
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!
Randal L. Schwartz wrote:
> >>>>> "Alex" == Alex Martelli <·····@mac.com> writes:
>
> Alex> The difference, if any, is that gurus of Java, C++ and Python get to
> Alex> practice and/or keep developing their respectively favorite languages
> Alex> (since those three are the "blessed" general purpose languages for
> Alex> Google - I say "general purpose" to avoid listing javascript for
> Alex> within-browser interactivity, SQL for databases, XML for data
> Alex> interchange, HTML for web output, &c, &c), while the gurus of Lisp,
> Alex> Limbo, Dylan and Smalltalk don't (Rob Pike, for example, is one of the
> Alex> architects of sawzall -- I already pointed to the whitepaper on that
> Alex> special-purpose language, and he co-authored that paper, too).
>
> That's crazy. Some of the key developers of Smalltalk continue to work
> on the Squeak project (Alan Kay, Dan Ingalls, and I'm leaving someone
> out, I know it...). So please remove Smalltalk from that list.
I thought it was clear that Alex was talking about "smalltalk gurus who
work for Google."
-Jonathan
On Fri, 05 May 2006 21:16:50 -0400, Ken Tilton wrote:
> The upshot of
> what he wrote is that it would be really hard to make semantically
> meaningful indentation work with lambda.
Pretty much correct. The complete thought was that it would be painful
all out of proportion to the benefit.
See, you don't need multi-line lambda, because you can do this:
def make_adder(x):
    def adder_func(y):
        sum = x + y
        return sum
    return adder_func
add5 = make_adder(5)
add7 = make_adder(7)
print add5(1) # prints 6
print add5(10) # prints 15
print add7(1) # prints 8
Note that make_adder() doesn't use lambda, and yet it makes a custom
function with more than one line. Indented, even.
You could also do this:
lst = [] # create empty list
def f(x):
    return x + 5
lst.append(f)
del(f) # now that the function ref is in the list, clean up temp name
print lst[0](1) # prints 6
Is this as convenient as the lambda case?
lst.append(lambda x: x + 7)
print lst[1](1) # prints 8
No; lambda is a bit more convenient. But this doesn't seem like a very
big issue worth a flame war. If GvR says multi-line lambda would make
the lexer more complicated and he doesn't think it's worth all the effort,
I don't see any need to argue about it.
> But the key in the whole thread is simply that indentation will not
> scale. Nor will Python.
This is a curious statement, given that Python is famous for scaling well.
I won't say more, since Alex Martelli already pointed out that Google is
doing big things with Python and it seems to scale well for them.
--
Steve R. Hastings "Vita est"
·····@hastings.org http://www.blarg.net/~steveha
Steve R. Hastings <·····@hastings.org> wrote:
...
> > But the key in the whole thread is simply that indentation will not
> > scale. Nor will Python.
>
> This is a curious statement, given that Python is famous for scaling well.
I think "ridiculous" is a better characterization than "curious", even
if you're seriously into understatement.
> I won't say more, since Alex Martelli already pointed out that Google is
> doing big things with Python and it seems to scale well for them.
And of course we're not the only ones. In fact, I believe that we're
not even among the firms which have reported their experiences in the
official "Python Success Stories" -- IBM, Industrial Light and Magic,
NASA, etc, etc, are there, but we aren't. I guess we just prefer to play
our cards closer to our chest -- after all, if our competitors choose to
use inferior languages, it's hardly to our advantage to change that;-).
Alex
Alex Martelli wrote:
> Steve R. Hastings <·····@hastings.org> wrote:
> ...
> > > But the key in the whole thread is simply that indentation will not
> > > scale. Nor will Python.
> >
> > This is a curious statement, given that Python is famous for scaling well.
>
> I think "ridiculous" is a better characterization than "curious", even
> if you're seriously into understatement.
>
When you consider that there was just a big flamewar on comp.lang.lisp
about the lack of standard mechanisms for both threading and sockets in
Common Lisp (with the lispers arguing that it wasn't needed) I find it
"curious" that someone can say Common Lisp scales well.
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87k68xy5gd.fsf@rpi.edu>
·······@verizon.net writes:
> Alex Martelli wrote:
>> Steve R. Hastings <·····@hastings.org> wrote:
>> ...
>> > > But the key in the whole thread is simply that indentation will not
>> > > scale. Nor will Python.
>> >
>> > This is a curious statement, given that Python is famous for scaling well.
>>
>> I think "ridiculous" is a better characterization than "curious", even
>> if you're seriously into understatement.
>>
>
> When you consider that there was just a big flamewar on comp.lang.lisp
> about the lack of standard mechanisms for both threading and sockets in
> Common Lisp (with the lispers arguing that it wasn't needed) I find it
> "curious" that someone can say Common Lisp scales well.
>
It's not all that curious. Every Common Lisp implementation supports
sockets, and most support threads. The "flamewar" was about whether
these mechanisms should be (or could be) standardized across all
implementations. It has little to do with CL's ability to scale well.
You simply use the socket and thread API provided by your
implementation; if you need to move to another, you write a thin
compatibility layer. In Python, since there is no standard and only
one implementation that counts, you write code for that implementation
the same way you write for the socket and thread API provided by your
Lisp implementation.
I still dislike the phrase "scales well," but I don't see how
differences in socket and thread API's across implementations can be
interpreted as causing Lisp to "scale badly." Can you elaborate on
what you mean?
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <Skz7g.58$sC4.5@fe09.lga>
·······@verizon.net wrote:
> Alex Martelli wrote:
>
>>Steve R. Hastings <·····@hastings.org> wrote:
>> ...
>>
>>>>But the key in the whole thread is simply that indentation will not
>>>>scale. Nor will Python.
>>>
>>>This is a curious statement, given that Python is famous for scaling well.
>>
>>I think "ridiculous" is a better characterization than "curious", even
>>if you're seriously into understatement.
>>
>
>
> When you consider that there was just a big flamewar on comp.lang.lisp
> about the lack of standard mechanisms for both threading and sockets in
> Common Lisp (with the lispers arguing that it wasn't needed) I find it
> "curious" that someone can say Common Lisp scales well.
>
We're talking about whether the language can grow to have new
capabilities, while you are talking about libraries, and specifically
whether different implementations have the same API. They all have
sockets, just not the same API, probably because, to be honest, that is
not something that belongs in a /language/ API.
But those of us who bounce from implementation to implementation see a
standard API as saving us some conditional compilation and (effectively)
rolling our own common API out of different implementations' socket
APIs, so a few socket gurus are working on a standard now.
And yes, they will be able to do this with Common Lisp as it stands.
Try to think a little more rigorously in these discussions, Ok?
Thx, kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
·······@verizon.net writes:
> Alex Martelli wrote:
> > Steve R. Hastings <·····@hastings.org> wrote:
> > ...
> > > > But the key in the whole thread is simply that indentation will not
> > > > scale. Nor will Python.
> > >
> > > This is a curious statement, given that Python is famous for scaling well.
> >
> > I think "ridiculous" is a better characterization than "curious", even
> > if you're seriously into understatement.
>
> When you consider that there was just a big flamewar on comp.lang.lisp
> about the lack of standard mechanisms for both threading and sockets in
> Common Lisp (with the lispers arguing that it wasn't needed) I find it
> "curious" that someone can say Common Lisp scales well.
You really need to get better at distinguishing between reality and
usenet flamewars. While some comp.lang.lispers were bitching back and
forth about this, others of us were in Hamburg listening to Martin
Cracauer from ITA talking about "Common Lisp in a high-performance
search environment". In case you aren't aware, ITA is the company
that makes the search engine behind Orbitz.
From: Anton Vredegoor
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <445efe7d$1@usenet.zapto.org>
·······@verizon.net wrote:
> When you consider that there was just a big flamewar on comp.lang.lisp
> about the lack of standard mechanisms for both threading and sockets in
> Common Lisp (with the lispers arguing that it wasn't needed) I find it
> "curious" that someone can say Common Lisp scales well.
In comp.lang.python there are often discussions about which is the best
web framework or what is the best gui. There seems to be some common
meme in these kinds of discussions and the lambda controversy. I'm even
ready to expand the concept even more and include documentation problems
and polymorphic typing.
So what is the big advantage of using parens then that is making people
give up documenting their code by naming functions? (See, I'm getting
into the right kind of lingo for discussing these kind of questions)
Well, there seems to be some advantage to conceptually decoupling a
function from what it is doing *now* (which can be easily named) and
what it is doing in some other situation. Naming things is only a
ballast and makes the mental model not "fit the brain" (introducing
pythonic terminology here for the lispers).
This is a lot like polymorphic functions. For example an adding function
sometimes adds integers and sometimes floats or complex numbers, and
it can be defined just once without specifying which type of parameters
it is going to get. I assume this to be a piece of cake for most lispers
and pythoneers, but possibly this could still confuse some static typers.
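For concreteness, a minimal sketch of such a polymorphic adder (the
names here are illustrative, not from any particular library): one
definition, no parameter types declared, and the same code handles
integers, floats, and complex numbers.

```python
# one definition serves every numeric type; Python dispatches on the
# operands at run time, no type declarations needed
def add(a, b):
    return a + b

print(add(1, 2))         # prints 3
print(add(1.5, 2.25))    # prints 3.75
print(add(1 + 2j, 3))    # prints (4+2j)
```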
An anonymous function is like a polymorphic function in that it is
possible to make the "mental model" about it polymorphic, instead of
just its parameters. This enables the lispers to just "take what it does
and paste it where that needs to be done" (inventing crypto speak here).
This is a very effective way of handling operations and it would
surprise me if not 99 percent of the Python coders do things mentally
this way too and only add names and documentation at the last possible
moment (mental compile time documentation procedure).
So here we're integrating mental models concerning polymorphism into the
way we talk and think about code, and naming things explicitly always
seems to be a burden.
But now we let the other side of our brain speak for a moment, it was
always the side that translated everything we wanted to say to each
other here into mental Unicode so that we can hear what the others are
saying (further diving into the linguistic pit I am digging here).
Yes, communication is what suffers from *not* naming things, and right
after it documentation and standardization. How else are we going to
communicate our findings verbally to the non coders and the trans coders?
Also naming functions and variables can help us create appropriate
mental models that 'fix' certain things in place and keep them in the
same state, because now they are 'documented'. This promotes people
being able to work together and also it enables measuring progress, very
important aspects for old world companies who won't understand the way
things are evolving (even if they seem to have roaring success at the
moment).
Not to say that I invented something new, it was always a theme, but now
it's a meme,(he, he), the conflict between the scripture and the
mysticism. It's such a pity that everyone understands some way or
another that mysticism is the way things work but that no one wants to
acknowledge it.
What am I doing here coding Python one might ask, well, the knowledge
has to be transferred to my brain first *somehow*, and until someone
finds a better way to do that or until there is so much procedural
information in my head that I can start autocoding (oh no) that seems to
be the better option.
Anton
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <wvW6g.1646$Ez6.507@fe09.lga>
Steve R. Hastings wrote:
> On Fri, 05 May 2006 21:16:50 -0400, Ken Tilton wrote:
>
>>The upshot of
>>what he wrote is that it would be really hard to make semantically
>>meaningful indentation work with lambda.
>
>
> Pretty much correct. The complete thought was that it would be painful
> all out of proportion to the benefit.
>
> See, you don't need multi-line lambda, because you can do this:
>
>
> def make_adder(x):
>     def adder_func(y):
>         sum = x + y
>         return sum
>     return adder_func
>
> add5 = make_adder(5)
> add7 = make_adder(7)
>
> print add5(1) # prints 6
> print add5(10) # prints 15
> print add7(1) # prints 8
>
>
> Note that make_adder() doesn't use lambda, and yet it makes a custom
> function with more than one line. Indented, even.
>
> You could also do this:
>
>
> lst = [] # create empty list
> def f(x):
>     return x + 5
> lst.append(f)
> del(f) # now that the function ref is in the list, clean up temp name
>
> print lst[0](1) # prints 6
>
>
> Is this as convenient as the lambda case?
>
> lst.append(lambda x: x + 7)
> print lst[1](1) # prints 8
>
>
> No; lambda is a bit more convenient. But this doesn't seem like a very
> big issue worth a flame war.
<g> Hopefully it can be a big issue and still not justify a flame war.
Mileages will always vary, but one reason for lambda is precisely not to
have to stop, go make a new function for this one very specific use,
come back and use it as the one lambda statement, or in C have an
address to pass. but, hey, what are editors for? :)
the bigger issue is the ability of a lambda to close over arbitrary
lexically visible variables. this is something the separate function
cannot see, so one has to have a function parameter for everything.
but is such lexical scoping even on the table when Python's lambda comes
up for periodic review?
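For what it's worth, a Python lambda already can close over lexically
visible variables; what it cannot do is contain statements. A minimal
sketch (illustrative names, not from the thread):

```python
def make_scaler(factor):
    # the lambda closes over 'factor' from the enclosing scope;
    # no extra function parameter is needed to pass it in
    return lambda x: x * factor

triple = make_scaler(3)
print(triple(7))   # prints 21
```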
> If GvR says multi-line lambda would make
> the lexer more complicated and he doesn't think it's worth all the effort,
> I don't see any need to argue about it.
Oh, no, this is just front porch rocking chair BS. But as an enthusiastic
developer I am sensitive to how design choices express themselves in
ways unanticipated. Did the neat idea of indentation-sensitivity doom
pythonistas to a life without the sour grapes of lambda?
If so, Xah's critique missed that issue and was unfair to GvR in
ascribing his resistance to multi-statement lambda to mere BDFLism.
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton <·········@gmail.com> writes:
> <g> Hopefully it can be a big issue and still not justify a flame war.
>
> Mileages will always vary, but one reason for lambda is precisely not
> to have to stop, go make a new function for this one very specific
> use, come back and use it as the one lambda statement, or in C have an
> address to pass. but, hey, what are editors for? :)
>
> the bigger issue is the ability of a lambda to close over arbitrary
> lexically visible variables. this is something the separate function
> cannot see, so one has to have a function parameter for everything.
>
> but is such lexical scoping even on the table when Ptyhon's lambda
> comes up for periodic review?
This is second-hand, as I don't actually follow Python closely, but
from what I've heard, they now have reasonable scoping rules (or maybe
they're about to, I'm not sure). And you can use def as a
Scheme-style inner define, so it's essentially a LABELS that gets the
indentation wrong. This means they have proper closures, just not
anonymous ones. And an egregiously misnamed lambda that should be
fixed or thrown out.
If Python gets proper macros it won't matter one bit that they only
have named closures, since you can macro that away in a blink of an
eye.
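A sketch of that named-closure workaround (illustrative names, not
from the thread): an inner def, used as a Scheme-style inner define,
stands in where a multi-line lambda would otherwise go.

```python
def sort_by_distance(points, origin):
    # the inner def is a named closure over 'origin', used exactly
    # once, in the spot a multi-line lambda would occupy
    def dist(p):
        return (p[0] - origin[0]) ** 2 + (p[1] - origin[1]) ** 2
    return sorted(points, key=dist)

print(sort_by_distance([(3, 3), (1, 0), (2, 2)], (0, 0)))
# prints [(1, 0), (2, 2), (3, 3)]
```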
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <xO27g.15$pv7.9@fe08.lga>
Thomas F. Burdick wrote:
> Ken Tilton <·········@gmail.com> writes:
>
>
>><g> Hopefully it can be a big issue and still not justify a flame war.
>>
>>Mileages will always vary, but one reason for lambda is precisely not
>>to have to stop, go make a new function for this one very specific
>>use, come back and use it as the one lambda statement, or in C have an
>>address to pass. but, hey, what are editors for? :)
>>
>>the bigger issue is the ability of a lambda to close over arbitrary
>>lexically visible variables. this is something the separate function
>>cannot see, so one has to have a function parameter for everything.
>>
>>but is such lexical scoping even on the table when Ptyhon's lambda
>>comes up for periodic review?
>
>
> This is second-hand, as I don't actually follow Python closely, but
> from what I've heard, they now have reasonable scoping rules (or maybe
> they're about to, I'm not sure). And you can use def as a
> Scheme-style inner define, so it's essentially a LABELS that gets the
> indentation wrong.
Cool. And I know how much you like labels/flet. :)
> This means they have proper closures, just not
> anonymous ones. And an egregiously misnamed lambda that should be
> fixed or thrown out.
>
> If Python gets proper macros it won't matter one bit that they only
> have named closures, since you can macro that away in a blink of an
> eye.
Ah, well, there we go again. Without sexpr notation, the lexer/parser
again will be "hard", and "hardly worth it": we get even more sour
grapes, this time about macros not being such a big deal.
One of the hardest things for a technologist to do is admit that a neat
idea has to be abandoned. Initial success creates a giddy
over-commitment to the design choice. After that, all difficulties get
brushed aside or kludged.
This would not be a problem for Python if it had stayed a scripting
language... well, maybe "no Macro!s" and "no real lambda!" and "no
continuations!" are GvR's way of keeping Python just a scripting language.
:)
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
[ I pruned the cross-posting down to a reasonable level ]
Ken Tilton <·········@gmail.com> writes:
> Thomas F. Burdick wrote:
>
> > This is second-hand, as I don't actually follow Python closely, but
> > from what I've heard, they now have reasonable scoping rules (or maybe
> > they're about to, I'm not sure). And you can use def as a
> > Scheme-style inner define, so it's essentially a LABELS that gets the
> > indentation wrong.
>
> Cool. And I know how much you like labels/flet. :)
Well, I love LABELS but I hate inner defines with an equal passion --
so for me it's a wash :-)
As much as I like nice low-level, close-to-the-machine mechanisms as
labels and lambda, sometimes you just want the high-level
expressiveness of tagbody/go, which Python doesn't have ... which in
my opinion is quite a crime to readability and the ability to
transcribe Knuth algorithms, which any engineer should find offensive
to their sensibilities.
> > This means they have proper closures, just not
> > anonymous ones. And an egregiously misnamed lambda that should be
> > fixed or thrown out.
> > If Python gets proper macros it won't matter one bit that they only
> > have named closures, since you can macro that away in a blink of an
> > eye.
>
> Ah, well, there we go again. Without sexpr notation, the lexer/parser
> again will be "hard", and "hardly worth it": we get even more sour
> grapes, this time about macros not being such a big deal.
>
> One of the hardest things for a technologist to do is admit that a
> neat idea has to be abandoned. Initial success creates a giddy
> over-commitment to the design choice. After then all difficulties get
> brushed aside or kludged.
Y'never know, they could always Greenspun their way to almost-sexps.
What with the way that selective pressure works, it's gonna be that or
die, so it is a possibility.
Also addressing the Python and scaling question is the
kamaelia.sourceforge.net project, whose objective is to solve the
problem of putting the BBC's vast archives on the web, and which uses
Python.
-- Pad.
Steve R. Hastings wrote:
> On Fri, 05 May 2006 21:16:50 -0400, Ken Tilton wrote:
> > The upshot of
> > what he wrote is that it would be really hard to make semantically
> > meaningful indentation work with lambda.
>
> Pretty much correct. The complete thought was that it would be painful
> all out of proportion to the benefit.
>
> See, you don't need multi-line lambda, because you can do this:
>
>
> def make_adder(x):
>     def adder_func(y):
>         sum = x + y
>         return sum
>     return adder_func
Now imagine you had to do this with every object.
def add_five(x):
    # return x + 5   <-- anonymous integer literal, not allowed!!!
    five = 5   # define it first
    return x + five
Think about the ramifications of every object having to have a name in
some environment, so that at the leaves of all expressions, only names
appear, and literals can only be used in definitions of names.
Also, what happens in the caller who invokes make_adder? Something like
this:
adder = make_adder(42)
Or perhaps even something like this
make_adder(2)(3) --> 5
Look, here the function has no name. Why is that allowed? If anonymous
functions are undesirable, shouldn't there be a requirement that the
result of make_adder has to be bound to a name, and then the name must
be used?
> Note that make_adder() doesn't use lambda, and yet it makes a custom
> function with more than one line. Indented, even.
That function is not exactly custom. What is custom are the environment
bindings that it captures. The code body comes from the program itself.
What about actually creating the source code of a function at run-time
and compiling it?
(let ((source-code (list 'lambda (list 'x 'y) ...)))
  (compile nil source-code))
Here, we are applying the compiler (available at run-time) to syntax
which represents a function. The compiler analyzes the syntax and
compiles the function for us, giving us an object that can be called.
Without that syntax which can represent a function, what do you pass to
the compiler?
If we didn't have lambda in Lisp, we could still take advantage of the
fact that the compiler can also take an interpreted function object and
compile that, rather than source code. So we could put together an
expression which looks like this:
(flet ((some-name (x y) ...)) #'some-name)
We could EVAL this expression, which would give us a function object,
which can then be passed to COMPILE. So we have to involve the
evaluator in addition to the compiler, and it only works because the
compiler is flexible enough to accept function objects in addition to
source code.
> No; lambda is a bit more convenient. But this doesn't seem like a very
> big issue worth a flame war. If GvR says multi-line lambda would make
> the lexer more complicated and he doesn't think it's worth all the effort,
> I don't see any need to argue about it.
I.e. GvR is the supreme authority. If GvR rationalizes something as
being good for himself, that's good enough for me and everyone else.
> I won't say more, since Alex Martelli already pointed out that Google is
> doing big things with Python and it seems to scale well for them.
That's pretty amazing for something that doesn't even have a native
compiler, and big mutexes in its interpreter core.
Look at "docs.python.org" in section 8.1 entitled "Thread State and
the Global Interpreter Lock":
"The Python interpreter is not fully thread safe. In order to support
multi-threaded Python programs, there's a global lock that must be held
by the current thread before it can safely access Python objects.
Without the lock, even the simplest operations could cause problems in
a multi-threaded program: for example, when two threads simultaneously
increment the reference count of the same object, the reference count
could end up being incremented only once instead of twice. Therefore,
the rule exists that only the thread that has acquired the global
interpreter lock may operate on Python objects or call Python/C API
functions. In order to support multi-threaded Python programs, the
interpreter regularly releases and reacquires the lock -- by default,
every 100 bytecode instructions (this can be changed with
sys.setcheckinterval())."
That doesn't mean you can't develop scalable solutions to all kinds of
problems using Python. But it does mean that the scalability of the
overall solution comes from architectural details that are not related
to Python itself. Like, say, having lots of machines linked by a fast
network, working on problems that decompose along those lines quite
nicely.
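The kind of decomposition meant here can be sketched in miniature
(toy names, nothing from any real system): each chunk could run on a
separate machine or process, and Python itself only supplies the glue.

```python
# a toy map/reduce decomposition: the per-chunk work is independent,
# so it scales by adding machines, not by speeding up the interpreter
def mapper(chunk):
    return sum(len(word) for word in chunk)

def reducer(partials):
    return sum(partials)

chunks = [["foo", "bar"], ["quux"], ["a", "bc", "def"]]
partials = [mapper(c) for c in chunks]   # in a real system: in parallel
print(reducer(partials))   # prints 16
```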
Kaz Kylheku wrote:
>
> Now imagine you had to do this with every object.
>
> def add_five(x):
>     # return x + 5   <-- anonymous integer literal, not allowed!!!
>     five = 5   # define it first
>     return x + five
>
I mentioned that as a slippery-slope fallacy in my other reply.
> [...]
> That doesn't mean you can't develop scalable solutions to all kinds of
> problems using Python. But it does mean that the scalability of the
> overall solution comes from architectural details that are not related
> to Python itself. Like, say, having lots of machines linked by a fast
> network, working on problems that decompose along those lines quite
> nicely.
Is there any language that allows scalability without any need to
design the underlying architecture?
Python doesn't obscure those architectures or become an obstacle to
using them. Python allows one to design a scalable architecture. So
Python IS scalable, isn't it? Only if Python prevented the up-scaling,
or made a scaled-up project unmanageable, could you say that Python is
not scalable.
On the 'team scalability' axis, Python is easy for the average
programmer to learn, so it is easier for a Python project to scale up.
The 'data scalability' axis is language-neutral; it depends on how you
architect your database, etc.
The 'user-requirement scalability' axis requires support from both the
infrastructure and the language:
No matter how scalable your language is, you cannot make a 100MHz/128MB
server serve 100,000 clients a second over the internet.
No matter how many servers and load balancers you have, you cannot
practically program Gmail purely in MS-DOS bat files.
Pisin Bootvong <··········@gmail.com> wrote:
+---------------
| No matter how scalable your language is, you cannot make a 100MHz/128MB
| server serve 100,000 client a second over the internet.
+---------------
Sure you can! That's ~1000 CPU cycles/request, which [assuming at least
a 100BASE-TX NIC] is plenty to service 100K *small* requests/s... ;-}
Of course, you might have to write it in assembler on bare metal,
but the good news is that with only a 1000 cycle budget, at least
the code won't be very large! ;-}
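A quick back-of-the-envelope check of that budget:

```python
# sanity-checking the cycle budget claimed above
clock_hz = 100 * 10**6          # 100 MHz CPU
requests_per_sec = 100 * 10**3  # 100K requests/s
cycles_per_request = clock_hz // requests_per_sec
print(cycles_per_request)       # prints 1000
```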
-Rob [someone who remembers 0.5 MIPS DEC PDP-10s being used
for >100 simultaneous commercial timesharing users]
-----
Rob Warnock <····@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
Rob Warnock wrote:
> Pisin Bootvong <··········@gmail.com> wrote:
> +---------------
> | No matter how scalable your language is, you cannot make a 100MHz/128MB
> | server serve 100,000 client a second over the internet.
> +---------------
>
> Sure you can! That's ~1000 CPU cycles/request, which [assuming at least
> a 100BASE-TX NIC] is plenty to service 100K *small* requests/s... ;-}
>
> Of course, you might have to write it in assembler on bare metal,
> but the good news is that with only a 1000 cycle budget, at least
> the code won't be very large! ;-}
>
Well, I was really asking about a service that serves something
complicated and useful though :-D
And don't forget to account for OS CPU time (well, maybe you can write
your own OS for it too :-D )
>
> -Rob [someone who remembers 0.5 MIPS DEC PDP-10s being used
> for >100 simultaneous commercial timesharing users]
>
> -----
> Rob Warnock <····@rpw3.org>
> 627 26th Avenue <URL:http://rpw3.org/>
> San Mateo, CA 94403 (650)572-2607
Pisin Bootvong <··········@gmail.com> wrote:
+---------------
| Rob Warnock wrote:
| > | No matter how scalable your language is, you cannot make a
| > | 100MHz/128MB server serve 100,000 client a second over the internet.
| > +---------------
| >
| > Sure you can! That's ~1000 CPU cycles/request, which [assuming at least
| > a 100BASE-TX NIC] is plenty to service 100K *small* requests/s... ;-}
|
| Well, I was really asking for a service that really service something
| complicate and useful though :-D
+---------------
If "only" being useful is enough, 1000 cycles is enough for a DNS server,
or an NTP server, or even a stub HTTP server that delivers some small
piece of real-time data, like a few realtime environmental sensors
[temperature, voltages, etc.].
+---------------
| > Of course, you might have to write it in assembler on bare metal,
| > but the good news is that with only a 1000 cycle budget, at least
| > the code won't be very large! ;-}
|
| And donot forget to account for OS CPU time (well may be you can write
| your own OS for it too :-D )
+---------------
Uh... What I meant by "bare metal" is *no* "O/S" per se, only a
simple poll loop servicing the attention flags[1] of the various
I/O devices -- a common style in lightweight embedded systems.
-Rob
[1] a.k.a. "interrupt request" bits, except with interrupts not enabled.
-----
Rob Warnock <····@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
Ken Tilton wrote:
> Oh, my, you are preaching to the herd (?!) of lemmings?! Please tell me
> you are aware that lemmings do not have ears. You should just do Lisp
> all day and add to the open source libraries to speed Lisp's ascendance.
> The lemmings will be liberated the day Wired puts John McCarthy on the
> cover, and not a day sooner anyway.
And then the 12th vanished Lisper returns and Lispers are not
suppressed anymore and won't be loosers forever. The world will be
united in the name of Lisp and Lispers will be leaders and honorables.
People stop worrying about Lispers as psychopaths and do not consider
them as zealots, equipped with the character of suicide bombers. No,
Lisp means peace and paradise.
From: Bill Atkins
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87hd43bme3.fsf@rpi.edu>
"Kay Schluehr" <············@gmx.net> writes:
> And then the 12th vanished Lisper returns and Lispers are not
> suppressed anymore and won't be loosers forever. The world will be
The mark of a true loser is the inability to spell 'loser.' Zing!
> them as zealots, equipped with the character of suicide bombers. No,
A very reasonable comparison. Yes, the more I think about it, we Lisp
programmers are a lot like suicide bombers.
Doofus.
--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
Bill Atkins wrote:
> "Kay Schluehr" <············@gmx.net> writes:
>
> > And then the 12th vanished Lisper returns and Lispers are not
> > suppressed anymore and won't be loosers forever. The world will be
>
> The mark of a true loser is the inability to spell 'loser.' Zing!
There is not much lost.
> > them as zealots, equipped with the character of suicide bombers. No,
>
> A very reasonable comparison. Yes, the more I think about it, we Lisp
> programmers are a lot like suicide bombers.
Allah Inschallah
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <y_27g.17$pv7.8@fe08.lga>
Kay Schluehr wrote:
> Ken Tilton wrote:
>
>
>>Oh, my, you are preaching to the herd (?!) of lemmings?! Please tell me
>>you are aware that lemmings do not have ears. You should just do Lisp
>>all day and add to the open source libraries to speed Lisp's ascendance.
>>The lemmings will be liberated the day Wired puts John McCarthy on the
>>cover, and not a day sooner anyway.
>
>
> And then the 12th vanished Lisper returns and Lispers are not
> suppressed anymore and won't be loosers forever. The world will be
> united in the name of Lisp and Lispers will be leaders and honorables.
> People stop worrying about Lispers as psychpaths and do not consider
> them as zealots, equipped with the character of suicide bombers. No,
> Lisp means peace and paradise.
>
"The Twelfth Vanished Lisper"? I love it. Must start a secret society....
:)
kenny
--
Cells: http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
Ken Tilton <·········@gmail.com> writes:
> "The Twelfth Vanished Lisper"? I love it. Must start a secret society....
Cool, can I join? Are you going to announce things on Usenet, set up
a mailing list, use cliki, or use the LispNYC page, or what? Should we start a
common-lisp.net project?
The first project should be an Eco-Eliza, that would kill.
On 2006-05-06, Thomas F. Burdick <···@conquest.OCF.Berkeley.EDU> wrote:
> Ken Tilton <·········@gmail.com> writes:
>
>> "The Twelfth Vanished Lisper"? I love it. Must start a secret society....
>
> Cool, can I join? Are you going to announce things on Usenet, set up
> a mailing list, use cliki, or use the LispNYC page, or what? Should we start a
> common-lisp.net project?
>
> The first project should be an Eco-Eliza, that would kill.
ok, you are a nail
marc
--
······@sdf.lonestar.org
SDF Public Access UNIX System - http://sdf.lonestar.org
Ken Tilton wrote:
> [...] The upshot of what [Guido] wrote is that it would be really hard to make
> semantically meaningful indentation work with lambda.
Haskell manages it.
--
David Hopwood <····················@blueyonder.co.uk>
From: Ken Tilton
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <haU6g.89$Ey5.85@fe12.lga>
David Hopwood wrote:
> Ken Tilton wrote:
>
>>[...] The upshot of what [Guido] wrote is that it would be really hard to make
>>semantically meaningful indentation work with lambda.
>
>
> Haskell manages it.
>
To be honest, I was having a hard time imagining precisely how
indentation broke down because of lambda. does text just sail out to the
right too fast?
kenny
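[Editorial aside: the indentation/lambda point under discussion can be made concrete. This is a minimal sketch of the actual Python restriction, not anything Guido or the posters wrote: a lambda body must be a single expression, so any callback containing statements forces a named `def`, and it is the interaction of statement blocks (which need indentation) with expression context that the thread is arguing about. The `data` list and `noisy_key` name are invented for illustration.]

```python
# Python's lambda is limited to one expression, so this is fine:
data = [{"name": "a", "score": 3}, {"name": "b", "score": 1}]
by_score = sorted(data, key=lambda d: d["score"])

# But a body with statements cannot be written inline as a lambda --
# something like `lambda d: print(d); return d["score"]` is a syntax
# error, because statements require an indented block, and an indented
# block cannot appear inside an expression. The workaround is a def:
def noisy_key(d):
    print("scoring", d["name"])  # a statement: not allowed in a lambda
    return d["score"]

by_score2 = sorted(data, key=noisy_key)
```

(Haskell sidesteps the problem because its layout rule applies uniformly inside expressions, which is presumably what Hopwood's "Haskell manages it" refers to.)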
From: Glyn Millington
Subject: Re: A critic of Guido's blog on Python's lambda
Date:
Message-ID: <87k68zybzg.fsf@nowhere.org>
Ken Tilton <·········@gmail.com> writes:
> kenny (wondering what to call a flock (?!) of lemmings)
Couldn't find it here:-
http://ojohaven.com/collectives/
So I would propose a "leap" of lemmings :-)
WAY OT! Sorry.
atb
Glyn
Reported for excessive crossposting.
--
John Bokma Freelance software developer
&
Experienced Perl programmer: http://castleamber.com/
From: Luc The Perverse
Subject: Re: [Reported] (was Re: A critic of Guido's blog on Python's lambda)
Date:
Message-ID: <lecvi3xjh7.ln2@loki.cmears.id.au>
"John Bokma" <····@castleamber.com> wrote in message
·······························@130.133.1.4...
> Reported for excessive crossposting.
Did u report yourself?
--
LTP
:)
"Xah Lee" <···@xahlee.org> writes:
> In this post, i'd like to deconstruct one of Guido's recent blog about
> lambda in Python.
Why couldn't you keep this to comp.lang.python where it would almost
be relevant? Before I pulled down the headers, I thought maybe
something interesting was posted to comp.lang.lisp.
Followups set.
--
http://www.david-steuber.com/
1998 Subaru Impreza Outback Sport
2006 Honda 599 Hornet (CB600F) x 2 Crash & Slider
It's OK. You only broke your leg in three places. Walk it off.