On Apr 21, 5:51 pm, ·······@gmail.com" <······@gmail.com> wrote:
> If you want to iterate through a list, should you use a recursive
> function or a loop?
Unless you have a very good reason for reinventing your own list
iteration, you should use what's already in the Common Lisp library.
So yes, one possibility is a loop. But not just any loop, but
specifically the for-as-in-list or for-as-on-list clauses of the LOOP
macro. Then there is DOLIST, the mapping applicator functions--MAPC,
MAPCAR, etc--and of course the sequences library, which combines
iteration with useful algorithms: REMOVE-IF, FIND, EVERY, ...
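For instance, a quick side-by-side sketch (purely illustrative, using a made-up
list of numbers):

;; LOOP with a for-as-in-list clause
(loop :for x :in '(1 2 3 4 5) :sum x)       ; => 15

;; DOLIST for explicit iteration with side effects
(dolist (x '(1 2 3 4 5)) (print x))

;; Mapping functions
(mapcar #'1+ '(1 2 3 4 5))                  ; => (2 3 4 5 6)

;; Sequence functions that combine iteration with an algorithm
(remove-if #'oddp '(1 2 3 4 5))             ; => (2 4)
(find 3 '(1 2 3 4 5))                       ; => 3
(every #'numberp '(1 2 3 4 5))              ; => T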
·······@gmail.com" <······@gmail.com> writes:
> If you want to iterate through a list, should you use a recursive
> function or a loop?
I think I should use a loop, unless a recursive solution is better. It
might be the other way around though.
Paul Donnelly wrote:
> ·······@gmail.com" <······@gmail.com> writes:
>
>
>>If you want to iterate through a list, should you use a recursive
>>function or a loop?
>
>
> I think I should use a loop, unless a recursive solution is better. It
> might be the other way around though.
I see it the other way, but it might be the opposite.
Meanwhile, to the OP: you said you want to iterate; that might be
a clue.
kt
--
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
"I've never read the rulebook. My job is to catch the ball."
-- Catcher Josh Bard after making a great catch on a foul ball
and then sliding into the dugout, which by the rules allowed the
runners to advance one base costing his pitcher a possible shutout
because there was a runner on third base.
"My sig is longer than most of my articles."
-- Kenny Tilton
In article
<····································@26g2000hsk.googlegroups.com>,
·······@gmail.com" <······@gmail.com> wrote:
> If you want to iterate through a list, should you use a recursive
> function or a loop?
Yes.
--
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
On Apr 21, 5:51 pm, ·······@gmail.com" <······@gmail.com> wrote:
> If you want to iterate through a list, should you use a recursive
> function or a loop?
Neither. Use higher-order functions like map, remove, and reduce.
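For example, a minimal sketch of how functions from this family compose (the
data is made up):

;; Sum of the squares of the even elements
(reduce #'+
        (mapcar (lambda (x) (* x x))
                (remove-if-not #'evenp '(1 2 3 4 5))))
;; => 20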
······@gmail.com wrote:
> If you want to iterate through a list, should you use a recursive
> function or a loop?
If the recursive function defines an iterative process, it might be as
good as a loop. If the loop defines a linear recursive process, it might
be as bad as the recursive function.
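To make that distinction concrete, here is a sketch (function names made up).
MY-LENGTH-ITER is written recursively but describes an iterative process: all
the work is carried in the accumulator and nothing is left pending after the
recursive call. MY-LENGTH-REC describes a linear recursive process: each call
still has an addition to perform after the inner call returns, so it needs
space proportional to the length of the list no matter how it is compiled.

(defun my-length-iter (list &optional (n 0))
  (if (null list)
      n
      (my-length-iter (cdr list) (1+ n))))   ; tail position, nothing pending

(defun my-length-rec (list)
  (if (null list)
      0
      (1+ (my-length-rec (cdr list)))))      ; the 1+ waits on the inner call

Whether MY-LENGTH-ITER actually runs in constant stack space in Common Lisp is
up to the implementation, as discussed below.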
·······@gmail.com" <······@gmail.com> writes:
> If you want to iterate through a list, should you use a recursive
> function or a loop?
In Common Lisp, always prefer a loop where practical unless you know
with relative certainty that the problem cannot grow very big. A
recursive function can run out of stack since CL does not guarantee
tail call elimination. Anyone who tells you otherwise is living in a
fantasy world or is telling you something implementation-dependent or
is confusing Lisp with Scheme.
Kent M Pitman <······@nhplace.com> wrote:
> ·······@gmail.com" <······@gmail.com> writes:
>
> > If you want to iterate through a list, should you use a recursive
> > function or a loop?
>
> In Common Lisp, always prefer a loop where practical unless you know
> with relative certainty that the problem cannot grow very big. A
> recursive function can run out of stack since CL does not guarantee
> tail call elimination.
Any pointers to the rationale for this decision?
····@stablecross.com (Bob Felts) wrote on Mon, 21 Apr 2008:
> Kent M Pitman <······@nhplace.com> wrote:
>> CL does not guarantee tail call elimination.
> Any pointers to the rationale for this decision?
Reduces flexibility in possible conforming implementations.
Common Lisp is a multi-paradigm language, and some folks want to use it but
are not especially interested in the functional programming style. CL wanted
to be able to accommodate those folks as well.
That said, the lack of a guarantee clearly has a negative impact on those who
_do_ want to program in a (purely? mostly?) functional style. But
fortunately, they have Scheme to choose from.
-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ···@geddis.org
Why settle for the lesser evil? Cthulhu for President.
····@stablecross.com (Bob Felts) writes:
> Kent M Pitman <······@nhplace.com> wrote:
>
> > ·······@gmail.com" <······@gmail.com> writes:
> >
> > > If you want to iterate through a list, should you use a recursive
> > > function or a loop?
> >
> > In Common Lisp, always prefer a loop where practical unless you know
> > with relative certainty that the problem cannot grow very big. A
> > recursive function can run out of stack since CL does not guarantee
> > tail call elimination.
>
> Any pointers to the rationale for this decision?
Stack debugging.
Kent M Pitman wrote:
> ····@stablecross.com (Bob Felts) writes:
>> Kent M Pitman <······@nhplace.com> wrote:
>>> A
>>> recursive function can run out of stack since CL does not guarantee
>>> tail call elimination.
>> Any pointers to the rationale for this decision?
>
> Stack debugging.
While technically correct, I don't think this really holds
water. In the absence of guaranteed tail call elimination,
iterative processes have to be written using loops, which
destroy exactly the same information as tail calls.
IMO the only real argument against guaranteed tail call elimination
is that it may be too expensive for some implementations.
There are some programming techniques that are only possible with
tail call elimination, yet have little to do with functional
programming. Implementing a state machine as a set of mutually
recursive functions comes to mind, and I'm sure there are others
as well.
--
Pertti
Pertti Kellomäki <················@tut.fi> writes:
> Kent M Pitman wrote:
>> ····@stablecross.com (Bob Felts) writes:
>>> Kent M Pitman <······@nhplace.com> wrote:
>>>> A
>>>> recursive function can run out of stack since CL does not guarantee
>>>> tail call elimination.
>>> Any pointers to the rationale for this decision?
>> Stack debugging.
>
> While technically correct, I don't think this really holds
> water. In the absence of guaranteed tail call elimination,
> iterative processes have to be written using loops, which
> destroy exactly the same information as tail calls.
>
> IMO the only real argument against guaranteed tail call elimination
> is that it may be too expensive for some implementations.
I think you're getting into the classic Scheme-vs-CL terminology
trap. These discussions have gone on periodically, and from my own
participation in them I shun the use of the phrases "tail call
elimination" or "tail-call optimization", because they are both
misnomers, and each has a different meaning in the Scheme and CL
worlds, so talking about them inevitably leads to talking at
cross-purposes.
We're not "eliminating" anything here; we are just exchanging a call
for a jump. And though the second term has not shown up in this
thread, the kind of "optimization" being done is not necessarily about
time efficiency; it is all about optimizing the use of stack space.
So I personally prefer the terms that came out of some of those
past discussions to describe the two _different_ styles of tail call
handling: for Scheme requirements: "space-efficient tail recursion",
and for CL, "tail-call merging".
So now the question becomes "why don't you use space-efficient tail
recursion for iterative tasks in CL"? And the answer is "because CL
implementations do not implement space-efficient tail recursion - they
only implement tail-call merging". And if you're wondering why this is, it
is because CL implements special variables and because it does not
implement CPS; both issues make space-efficient tail recursion very
hard.
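For instance, a contrived sketch of the special-variable issue (the names are
made up): the recursive call below sits in tail position lexically, but the
dynamic binding of *DEPTH* has to be unwound when the callee returns, so the
caller's frame cannot simply be reused.

(defvar *depth* 0)

(defun walk (list)
  (let ((*depth* (1+ *depth*)))   ; dynamic rebinding must be undone on return
    (when list
      (walk (cdr list)))))        ; looks like a tail call, but is not mergeable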
> There are some programming techniques that are only possible with
> tail call elimination, yet have little to do with functional
> programming. Implementing a state machine as a set of mutually
> recursive functions comes to mind, and I'm sure there are others
> as well.
"only possible with tail call elimination" is a strong statement.
Be sure of yourself before you tell people what they can't do; it
generally tends to make them more determined to prove you wrong, even
if indeed you are right to start with. Think Turing, and then if you
choose restate your point in terms that are correct.
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
Duane Rettig wrote:
> Pertti Kellomäki <················@tut.fi> writes:
> I think you're getting into the classic Scheme-vs-CL terminology
> trap. These discussions have gone on periodically, and from my own
> participation in them I shun the use of the phrases "tail call
> elimination" or "tail-call optimization", because they are both
> misnomers, and each have different meanings in the Scheme and CL
> worlds, so talking about them inevitably leads to cross-purposes.
[...]
> So I personally prefer the terms that came out of some of those
> past discussions to describe the two _different_ styles of tail call
> handling: for Scheme requirements: "space-efficient tail recursion",
> and for CL, "tail-call merging".
Yes, that sounds like more precise terminology, so let me rephrase:
IMO the only real argument against guaranteed space-efficient tail
recursion is that it may be too expensive for some implementations.
> And if you're wondering why this is, it
> is because CL implements special variables and because it does not
> implement CPS, both issues make space-eficient tail recursion very
> hard.
And I don't have any problem with this, it is a perfectly reasonable
design choice.
>> There are some programming techniques that are only possible with
>> tail call elimination, yet have little to do with functional
>> programming. Implementing a state machine as a set of mutually
>> recursive functions comes to mind, and I'm sure there are others
>> as well.
>
> "only possible with tail call elimination" is a strong statement.
> Be sure of yourself before you tell people what they can't do; it
> generally tends to make them more determined to prove you wrong, even
> if indeed you are right to start with. Think Turing, and then if you
> choose restate your point in terms that are correct.
This is in fact something where I would love to be proved wrong.
However, I am pretty certain that one cannot implement a state
machine in Common Lisp as a set of mutually recursive functions,
and be guaranteed that in every conforming implementation the
state machine runs in constant space.
I haven't looked at Kaz's TAILPROG macro, so it is entirely
possible that one can give a CL solution that is syntactically
sufficiently close to a set of recursive functions without actually
being one.
The larger point, though, is that one may care about tail calls
even if one is not writing in a purely functional style, as
someone commented earlier.
--
Pertti
Pertti Kellomäki <················@tut.fi> writes:
>>> There are some programming techniques that are only possible with
>>> tail call elimination, yet have little to do with functional
>>> programming. Implementing a state machine as a set of mutually
>>> recursive functions comes to mind, and I'm sure there are others
>>> as well.
>> "only possible with tail call elimination" is a strong statement.
>> Be sure of yourself before you tell people what they can't do; it
>> generally tends to make them more determined to prove you wrong, even
>> if indeed you are right to start with. Think Turing, and then if you
>> choose restate your point in terms that are correct.
>
> This is in fact something where I would love to be proved wrong.
> However, I am pretty certain that one cannot implement a state
> machine in Common Lisp as a set of mutually recursive functions,
> and be guaranteed that in every conforming implementation the
> state machine runs in constant space.
See below.
> The larger point, though, is that one may care about tail calls
> even if one is not writing in a purely functional style, as
> someone commented earlier.
Of course. But it's not a matter of possibility, but of convenience.
Here's a simple state machine with recursive functions; they are not
100% mutually recursive (i.e. as in a circular graph) but they
demonstrate the point that the recursion is finite and thus
manageable, and with some simple changes these functions can be made
truly mutually recursive to any controlled degree. I've shown some
example runs, as well.
You may also argue that this isn't a pure example because it contains
a loop. OK, well, you didn't specify that loops are not allowed, but
even if you had, it would be easy to merge the requirement for mutual
recursion and the removal of the loop - each function would carry a
second "level" argument and the size of that argument would determine
whether state3 throws to state1 or calls state1.
CL-USER(2): (shell "cat state.cl")
(defun state1 (states)
  (tagbody
   loop
     (when (car states)
       (setq states
             (catch 'state1
               (setq states (state2 (cdr states)))))
       (go loop))))

(defun state2 (states)
  (when (car states)
    (setq states
          (catch 'state2
            (setq states (state3 (cdr states))))))
  (cdr states))

(defun state3 (states)
  (if (car states)
      (throw 'state1 (cdr states))
      (throw 'state2 (cdr states))))
0
CL-USER(3): :cf state
;;; Compiling file state.cl
;;; Writing fasl file state.fasl
;;; Fasl write complete
CL-USER(4): :ld state
; Fast loading state.fasl
CL-USER(5): :tr state1 state2 state3
(STATE3 STATE2 STATE1)
CL-USER(6): (state1 '(t t t t t))
0[2]: (STATE1 (T T T T T))
1[2]: (STATE2 (T T T T))
2[2]: (STATE3 (T T T))
2[2]: returned-by-throwing to tag STATE1: (T T)
1[2]: returned-by-throwing to tag STATE1: (T T)
1[2]: (STATE2 (T))
2[2]: (STATE3 NIL)
2[2]: returned-by-throwing to tag STATE2: NIL
1[2]: returned NIL
0[2]: returned NIL
NIL
CL-USER(7): (state1 '(t t nil t nil))
0[2]: (STATE1 (T T NIL T NIL))
1[2]: (STATE2 (T NIL T NIL))
2[2]: (STATE3 (NIL T NIL))
2[2]: returned-by-throwing to tag STATE2: (T NIL)
1[2]: returned (NIL)
0[2]: returned NIL
NIL
CL-USER(8): (state1 '(t nil t nil t))
0[2]: (STATE1 (T NIL T NIL T))
1[2]: (STATE2 (NIL T NIL T))
1[2]: returned (T NIL T)
1[2]: (STATE2 (NIL T))
1[2]: returned (T)
1[2]: (STATE2 NIL)
1[2]: returned NIL
0[2]: returned NIL
NIL
CL-USER(9): (state1 '(t nil t nil t t))
0[2]: (STATE1 (T NIL T NIL T T))
1[2]: (STATE2 (NIL T NIL T T))
1[2]: returned (T NIL T T)
1[2]: (STATE2 (NIL T T))
1[2]: returned (T T)
1[2]: (STATE2 (T))
2[2]: (STATE3 NIL)
2[2]: returned-by-throwing to tag STATE2: NIL
1[2]: returned NIL
0[2]: returned NIL
NIL
CL-USER(10):
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
Duane Rettig wrote:
> Here's a simple state machine with recursive functions; they are not
> 100% mutually recursive (i.e. as in a circular graph) but they
> demonstrate the point that the recursion is finite and thus
> manageable, and with some simple changes these functions can be made
> truly mutually recursive to any controlled degree. I've shown some
> example runs, as well.
I should probably have been more explicit. I was referring to this
style of state machine implementation:
(defun state0 ()
  (case (read)
    ((eliminate) (state1))
    (t (error-state))))

(defun state1 ()
  (case (read)
    ((tail) (state2))
    (t (error-state))))

(defun state2 ()
  (case (read)
    ((tail) (state2))
    ((recursion) (success-state))
    (t (error-state))))

(defun success-state () 'success)

(defun error-state () 'utter-failure)
Each state is a function, whose body is basically a lookup
table mapping inputs to the next state. Here is a sample run
in sbcl.
* (state0)
eliminate tail recursion
SUCCESS
* (state0)
eliminate tail tail tail recursion
SUCCESS
* (state0)
eliminate recursion
UTTER-FAILURE
*
I am not going to debate the relative merits of this or other state
machine implementations. I am just pointing out that this is a very
clear and concise way to express a state machine in Common Lisp, but
there are no guarantees what the storage space behavior is going to be.
We have recently used a similar technique for compiled simulation of
processors. Each basic block of target code is mapped to a C function,
and we rely on gcc's limited tail call recognition to convert function
calls to jumps. This is not meant as an endorsement of C, which is just
as agnostic to tail calls as CL, but rather to point out that the state
machine implementation is not just an academic construct without real
life applicability.
--
Pertti
Pertti Kellomäki <················@tut.fi> writes:
> Duane Rettig wrote:
>> Here's a simple state machine with recursive functions; they are not
>> 100% mutually recursive (i.e. as in a circular graph) but they
>> demonstrate the point that the recursion is finite and thus
>> manageable, and with some simple changes these functions can be made
>> truly mutually recursive to any controlled degree. I've shown some
>> example runs, as well.
>
> I should probably have been more explicit.
That's precisely why I warned you to be more careful in stating
absolutes. Absolute messages like "impossible" are often a sign that
the speaker or writer has not thought through what is being
communicated.
> I was referring to this
> style of state machine implementation:
>
> (defun state0 ()
>   (case (read)
>     ((eliminate) (state1))
>     (t (error-state))))
>
> (defun state1 ()
>   (case (read)
>     ((tail) (state2))
>     (t (error-state))))
>
> (defun state2 ()
>   (case (read)
>     ((tail) (state2))
>     ((recursion) (success-state))
>     (t (error-state))))
>
> (defun success-state () 'success)
>
> (defun error-state () 'utter-failure)
That's a perfectly fine implementation, which should run in every
Common Lisp.
> Each state is a function, whose body is basically a lookup
> table mapping inputs to the next state. Here is a sample run
> in sbcl.
>
> * (state0)
> eliminate tail recursion
>
> SUCCESS
> * (state0)
> eliminate tail tail tail recursion
>
> SUCCESS
> * (state0)
> eliminate recursion
>
> UTTER-FAILURE
> *
It runs in Allegro CL as well. But it demonstrates nothing; Allegro
CL can tail merge or not (depending on the value of the debug
optimization quality), and this example works perfectly well under
both compilation situations. What is not shown is whether each state
was jumped to or called and then returned from. The only sure way to
show that would be to demonstrate a deep recursion and a failure at
that due to stack overflow.
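Something like the following would do it (a sketch that drives the functions
quoted above; the outcome is implementation- and optimization-dependent, which
is exactly the point):

(defun deep-run (n)
  "Feed STATE0 one ELIMINATE, N TAILs, and a RECURSION."
  (with-input-from-string
      (*standard-input*
       (with-output-to-string (s)
         (write-string "eliminate " s)
         (dotimes (i n) (write-string "tail " s))
         (write-string "recursion" s)))
    (state0)))

;; (deep-run 1000000) either returns SUCCESS in constant stack space or blows
;; the stack, depending on whether the self-call in STATE2 is merged.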
> I am not going to debate the relative merits of this or other state
> machine implementations. I am just pointing out that this is a very
> clear and concise way to express a state machine in Common Lisp, but
> there are no guarantees what the storage space behavior is going to be.
Right. As it turns out, there is no problem with your little example;
it may run in constant stack space or not, but it runs in every
conforming CL.
> We have recently used a similar technique for compiled simulation of
> processors. Each basic block of target code is mapped to a C function,
> and we rely on gcc's limited tail call recognition to convert function
> calls to jumps. This is not meant as an endorsement of C, which is just
> as agnostic to tail calls as CL, but rather to point out that the state
> machine implementation is not just an academic construct without real
> life applicability.
Of course. But you still haven't demonstrated your point, that
recursive-function state-machines in constant stack space are
impossible in CL.
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
Duane Rettig wrote:
> Right. As it turns out, there is no problem with your little example;
> it may run in constant stack space or not, but it runs in every
> conforming CL.
The possibility that it does not run in constant space means that
it is not a portable way to implement finite state machines in
programs that may run arbitrarily long.
--
Pertti
Pertti Kellomäki <················@tut.fi> writes:
> Duane Rettig wrote:
>> Right. As it turns out, there is no problem with your little example;
>> it may run in constant stack space or not, but it runs in every
>> conforming CL.
>
> The possibility that it does not run in constant space means that
> it is not a portable way to implement finite state machines in
> programs that may run arbitrarily long.
Exactly, but your little example doesn't show that. My example,
though convoluted, does show itself running in constant stack space.
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
Pertti Kellomäki <················@tut.fi> writes:
> Duane Rettig wrote:
> > Right. As it turns out, there is no problem with your little example;
> > it may run in constant stack space or not, but it runs in every
> > conforming CL.
>
> The possibility that it does not run in constant space means that
> it is not a portable way to implement finite state machines in
> programs that may run arbitrarily long.
You didn't reply to my previous example, so I don't know if you're not seeing
it or if you didn't think it suited you.
Is it important that these be top-level programs? (This is an easy fix.)
Is it important that the states also be callable in non-tail positions?
That is, are you really defining state machines, or are you defining things
that might be state machines or might not? (This is a more complicated
fix, but I'm pretty sure it is possible as well. The tricky part of doing
it well is how to understand the machine's state when it's called recursively,
since a typical state machine has some global state, and you want to know
if you've called a state recursively rather than in a tail position whether
it binds the machine's state, too, or whether it just works like a regular
state... it's not part of the classical definition of the state machine.)
Incidentally, FWIW, I do think it's a good thing for implementations
to support the option of what you want, but I observe that if you
really just want cross-calling of functions to leave no record of how
they got there due to a tail call, that's a nightmare to debug and I'm
forever grateful that CL implementations tend not to do that.
The real difficulty is that if you define tail calling to be just an
option that you can suppress, e.g., by some compiler switch, then what
you get is that some programs are pushing stack you don't want while
others are being debugged... the reason I like iteration constructs is
that usually I don't care about keeping information about where
something came from laterally, so use of loops where I'm just wanting
to do tail calls and other constructs where I want to remember stack
works well for me. At compile time, the annotation typically created in
CL by selecting which routines are recursive and which not seems to me to
do the best job of leaving things debuggable.
From: Pertti Kellomäki
Subject: Re: Iteration in lisp
Message-ID: <fuqf0k$i17$1@news.cc.tut.fi>
Kent M Pitman wrote:
> You didn't reply to my previous example, so I don't know if you're not seeing
> it or if you didn't think it suited you.
I did see it, and it was indeed a pretty neat way to express
state machines. However, it demonstrates that one can use
CL macros to express state machines in a way that syntactically
resembles a set of recursive functions. This is not something
I have ever doubted. It is, after all, pretty obvious that
one could implement a Scheme compiler as a CL macro and then
use Scheme idioms.
> Incidentally, FWIW, I do think it's a good thing for implementations
> to support the option of what you want, but I observe that if you
> really just want cross-calling of functions to leave no record of how
> they got there due to a tail call, that's a nightmare to debug and I'm
> forever grateful that CL implementations tend not to do that.
Yes, I agree that sometimes it would indeed be useful to have
the backtrace there.
I don't actually "want" anything. I am just trying to point out
in a rather long-winded way that the absence of guarantees of
how tail calls behave makes certain programming techniques
inapplicable. And these techniques do not only have to do with
pure functional programming.
--
Pertti
Pertti Kellomäki <················@tut.fi> writes:
> Kent M Pitman wrote:
> > You didn't reply to my previous example, so I don't know if you're not seeing
> > it or if you didn't think it suited you.
>
> I did see it, and it was indeed a pretty neat way to express
> state machines. However, it demonstrates that one can use
> CL macros to express state machines in a way that syntactically
> resembles a set of recursive functions. This is not something
> I have ever doubted. It is, after all, pretty obvious that
> one could implement a Scheme compiler as a CL macro and then
> use Scheme idioms.
>
> > Incidentally, FWIW, I do think it's a good thing for implementations
> > to support the option of what you want, but I observe that if you
> > really just want cross-calling of functions to leave no record of how
> > they got there due to a tail call, that's a nightmare to debug and
> > I'm forever grateful that CL implementations tend not to do that.
>
> Yes, I agree that sometimes it would indeed be useful to have
> the backtrace there.
>
> I don't actually "want" anything. I am just trying to point out
> in a rather long winded way that the absence of guarantees of
> how tail calls behave makes certain programming techniques
> inapplicable. And these techniques do not only have to do with
> pure functional programming.
I guess I don't see what you're saying or we disagree on the definition
of words.
To me, a "programming technique" is more than the mere "presence or
use of a particular primitive operator". To me, you are not inhibited
from the programming technique of doubly-linked lists by Lisp, even
though it doesn't primitively provide that functionality, since you
can quite easily write it.
Saying you want to be able to write your program a certain way and have
it mean something other than it does is a possible criticism of
nearly all languages, and hence vacuous.
I understood you originally to be saying that you thought you couldn't
use a particular style. I demonstrated use of the style to some degree
and asked some questions about what was missing.
If the essence of what you're wanting is "for what I want to be there
out-of-the-box", and it has to be "just so", then why not just use it
as it is provided in Scheme? CL does things differently, and for a
reason.
If the essence is more subtle, and there is some particular
qualitative aspect that CL is fighting, then I'd like to know what
that was, just for my general info. It might indeed be missing such an
aspect--I don't make claims that CL is the perfect language in all
ways. No language is. It suits me, but I have my own things I wish
it did differently. And I'm sure others do too, which is why I ask.
One thing CL is strong on, though, and better on than many (perhaps
most) languages, is the notion of both redefinability of basic ways it
does things (while staying within the base paradigm). And another is
its abstraction capability, and ability to overcome its out-of-the-box
weaknesses. So if you don't like the lack of global lexicals (to pick
a popular example), you can write that. If you don't like its
namespacing, you can load packages like Ron wrote that provide
alternatives. If you don't like its basic syntax, you can get or
write something like CGOL to change the syntax. If you want new
control forms, you can write them as macros. If you don't like its
class system, you can get a copy of AMOP and learn how to (admittedly
not strictly portably, but still in a widely implemented way) extend
or customize the class system.
So when you say it's missing a basic capability, I don't doubt that it
might be true. But I'd still like to know what that is. Because the
inability to write a state machine with functional syntax was not on
my list of CL weaknesses. And it will remain not on that list for me
because of Clarke's First Law [0] until you say what it is about that
which isn't doable so I can see if I believe it or if you're just
being defeatist and not realizing the power of the operations in front
of you.
If this really all reduces to just your summation statement there:
> I don't actually "want" anything.
then that's very sad. Because I had spent a lot of time listening
carefully to what you were saying on the assumption you had more of
substance to say. I'm still hoping you do and just haven't phrased it
in a way I understand, and that you'll try again to articulate it.
[0] = http://en.wikipedia.org/wiki/Clarke's_three_laws
From: Pertti Kellomäki
Subject: Re: Iteration in lisp
Message-ID: <fusog6$n9f$1@news.cc.tut.fi>
Kent M Pitman wrote:
> So when you say it's missing a basic capability, I don't doubt that it
> might be true. But I'd still like to know what that is. Because the
> inability to write a state machine with functional syntax was not on
> my list of CL weaknesses. And it will remain not on that list for me
> because of Clarke's First Law [0] until you say what it is about that
> which isn't doable so I can see if I believe it or if you're just
> being defeatist and not realizing the power of the operations in front
> of you.
>
> If this really all reduces to just your summation statement there:
>> I don't actually "want" anything.
> then that's very sad. Because I had spent a lot of time listening
> carefully to what you were saying on the assumption you had more of
> substance to say. I'm still hoping you do and just haven't phrased it
> in a way I understand, and that you'll try again to articulate it.
Ok, I'll give it one more shot, but I think then it is time
to leave the good folks in comp.lang.lisp in peace.
Early on in the thread there was this exchange:
Don Geddis wrote:
> ····@stablecross.com (Bob Felts) wrote on Mon, 21 Apr 2008:
>> Kent M Pitman <······@nhplace.com> wrote:
>>> CL does not guarantee tail call elimination.
>> Any pointers to the rationale for this decision?
>
> Reduces flexibility in possible conforming implementations.
>
> Common Lisp is a multi-paradigm language, and some folks want to use it but
> are not especially interested in the functional programming style. CL wanted
> to be able to accommodate those folks as well.
>
> That said, the lack of a guarantee clearly has a negative impact on those who
> _do_ want to program in a (purely? mostly?) functional style. But
> fortunately, they have Scheme to choose from.
>
> -- Don
>
While tail calls are of course important in a functional
style of programming, I have found them to be quite useful
as a general programming tool. Essentially a tail call is
an assignment of new values to a set of state variables,
followed by a jump. If the computation of the new values
does not have side-effects, one gets an atomic state change,
which is nice if you want to reason about invariants.
The state machine implementation was the simplest example
of non-functional programming that I could think of. Even
though the current state of the machine is not encoded in a
mutable variable, there is no requirement that any computation
in the function bodies be functional. It would in fact need
to be imperative in some sense in order to have any effect.
As I have stated previously, I have never doubted that one
could express state machines with functional syntax. This
is why I wrote:
> I haven't looked at Kaz's TAILPROG macro, so it is entirely
> possible that one can give a CL solution that is syntactically
> sufficiently close to a set of recursive functions without actually
> being one.
The fact that I have been trying to convey is that because of
the lack of guarantees of the behavior of tail calls, one cannot
map the states of a state machine to regular CL functions, map
state transitions to tail calls, and expect the program to run
in constant space.
This is not a criticism of CL, nor do I have any desire to turn
CL into a language where this would be true. Hence my comment
that I don't "want" anything, as in wanting CL to be something
that it is not.
--
Pertti
Pertti Kellomäki <················@tut.fi> writes:
> While tail calls are of course important in a functional
> style of programming, I have found them to be quite useful
> as a general programming tool. Essentially a tail call is
> an assignment of new values to a set of state variables,
> followed by a jump.
Which CL implements by allowing you to have lexical variables and GO.
There is a lot written about how GO is ugly. But given that there is
a straightforward, mechanical way of rewriting any set of GO
structures into a community-accepted tail-recursive set of mutually
recursive procedures (as by Scheme's letrec mediated by call/cc), this
argument against GO as a legitimate solution seems to me just an
argument about trivial syntax--any complexity problem inherent in GO
is also inherent in mutually recursive programs.
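The rewrite in the other direction is just as mechanical. A sketch (COUNTDOWN
is made up, purely to show the shape): the tail call rebinds the "state
variables" and jumps; PSETQ plus GO spells out the same two steps explicitly.

(defun countdown-rec (n acc)
  (if (zerop n)
      acc
      (countdown-rec (1- n) (cons n acc))))   ; rebind N and ACC, then "jump"

(defun countdown-go (n acc)
  (prog ()
   top
     (when (zerop n)
       (return acc))
     (psetq n   (1- n)
            acc (cons n acc))                 ; same parallel rebinding
     (go top)))

;; (countdown-rec 3 '()) => (1 2 3), and so does (countdown-go 3 '()).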
> If the computation of the new values does not have side-effects, one
> gets an atomic state change, which is nice if you want to reason
> about invariants.
I certainly do understand that the ability to use this paradigm out-of-the-box
is appealing to people. I'm assuming it's not the core of what you're saying,
though.
> The state machine implementation was the simplest example
> of non-functional programming that I could think of.
Ah, I thought you were seeking to define state machines as "functional",
hence part of my confusion. I'm not sure I totally understand the rest
of your points, but this detail seems critical. I'll try to muddle forth.
> Even though the current state of the machine is not encoded in a
> mutable variable,
Although it could be, by using CLOS appropriately.
> there is no requirement that any computation
> in the function bodies be functional. It would in fact need
> to be imperative in some sense in order to have any effect.
>
> As I have stated previously, I have never doubted that one
> could express state machines with functional syntax. This
> is why I wrote:
>
> > I haven't looked at Kaz's TAILPROG macro, so it is entirely
> > possible that one can give a CL solution that is syntactically
> > sufficiently close to a set of recursive functions without actually
> > being one.
>
> The fact that I have been trying to convey is that because of
> the lack of guarantees of the behavior of tail calls, one cannot
> map the states of a state machine to regular CL functions, map
> state transitions to tail calls, and expect the program to run
> in constant space.
Not to regular CL functions. But CL does not keep you from using
that style. Whether CLOS is "regular CL functions" or a set of
macrology on top of it is a point of view thing. It is certainly
possible to implement an alternate system of objects as a user
exercise.
> This is not a criticism of CL, nor do I have any desire to turn
> CL into a language where this would be true. Hence my comment
> that I don't "want" anything, as in wanting CL to be something
> that it is not.
But at its core, I thought I might (time permitting) try to write a
program to demonstrate that the capability you ... uh, ... don't want
... is probably there. But I couldn't figure out what you don't
want, so I couldn't thrust it upon you.
For example, I wrote something that showed it working for something
that looks like local functions. I allege it's possible to do the
same as what I wrote as a single form but in-the-large, with separated
functions... Even portably. Not by DEFUN, but by something that looks
like DEFUN ... in the same sense as DEFMETHOD looks like DEFUN and
defines something callable. The only extra detail you need is an
overarching form that defines the scope of the state machine, in order
to know what point in stack is possible to dispense with. Wouldn't
that satisfy your non-need? Or, back to my question about whether
this is a trivial matter of syntax, is it the need to repurpose
something defined with DEFUN? While that's admittedly a limitation of
CL, it doesn't strike me as much of a fatal one. In practice, the cost
of repairing it is just
(def-foo blah (x) (blah x))
for each part you want to connect to, which is a kind of O(1) cost to
the programmer that is generally bearable.
In article <·············@nhplace.com>,
Kent M Pitman <······@nhplace.com> wrote:
> Pertti Kellomäki <················@tut.fi> writes:
>
> > While tail calls are of course important in a functional
> > style of programming, I have found them to be quite useful
> > as a general programming tool. Essentially a tail call is
> > an assignment of new values to a set of state variables,
> > followed by a jump.
>
> Which CL implements by allowing you to have lexical variables and GO.
>
> There is a lot written about how GO is ugly. But given that there is
> a straightforward, mechanical way of rewriting any set of GO
> structures into a community-accepted tail-recursive set of mutually
> recursive procedures (as by Scheme's letrec mediated by call/cc), this
> argument against GO as a legitimate solution seems to me just an
> argument about trivial syntax--any complexity problem inherent in GO
> is also inherent in mutually recursive programs.
This is not a valid argument. There are straightforward, mechanical
ways of rewriting any construct in any language into another
Turing-complete language. It does not follow that all language
constructs are therefore "just an argument about trivial syntax".
This is not to say that your point about GO is not correct (I don't
think it is but that's beside the point), just that your argument
doesn't adequately support your position IMHO.
rg
Ron Garret <·········@flownet.com> writes:
> In article <·············@nhplace.com>,
> Kent M Pitman <······@nhplace.com> wrote:
>
> > Pertti Kellomäki <················@tut.fi> writes:
> >
> > > While tail calls are of course important in a functional
> > > style of programming, I have found them to be quite useful
> > > as a general programming tool. Essentially a tail call is
> > > an assignment of new values to a set of state variables,
> > > followed by a jump.
> >
> > Which CL implements by allowing you to have lexical variables and GO.
> >
> > There is a lot written about how GO is ugly. But given that there is
> > a straightforward, mechanical way of rewriting any set of GO
> > structures into a community-accepted tail-recursive set of mutually
> > recursive procedures (as by Scheme's letrec mediated by call/cc), this
> > argument against GO as a legitimate solution seems to me just an
> > argument about trivial syntax--any complexity problem inherent in GO
> > is also inherent in mutually recursive programs.
>
> This is not a valid argument. There are straightforward, mechanical
> ways of rewriting any construct in any language into another
> Turing-complete language. It does not follow that all language
> constructs are therefore "just an argument about trivial syntax".
>
> This is not to say that your point about GO is not correct (I don't
> think it is but that's beside the point), just that your argument
> doesn't adequately support your position IMHO.
Well, I admit I was speaking out-of-context about a more specific
debate that raged for years, and that I felt was ultimately put to bed
by this argument. The argument seemed to be that people would claim
to prefer a functional style because (they claimed) use of GO would
inherently create tangles of dataflow that compilers could not sort
out, and (they claimed) function calls would not. First, I doubt that
compilers can't sort out what GO does, since GO is very directly
related to what most compilers need to do. And second, I don't think
that function calling can avoid getting into a tangle (probably
because they are Turing-complete, as you say), but also I don't know
that the fact of the tangle is a barrier to anything material that the
compiler really needs to do... other than things that are already made
impossible by the Halting Problem, I mean.
But if your point is that there can be material issues related to
syntax that people can really care about, that's surely so. Matters of
syntax matter a lot to me in some contexts, and I didn't mean to
handwave those away. I just like it when people spell those out fairly
carefully so that I can learn more about them.
In particular, what Pertti had said that caught my eye was this phrase:
| There are some programming techniques that are only possible with
| tail call elimination,
I wanted to figure out what he meant by "technique"--whether this was
purely a matter of syntax, or whether it was a matter of capability.
Because achieving individual syntax is sometimes hard but achieving
capability is often possible in spite of it. And I believe CL has
tools powerful enough to offer stack-space control in mutually
recursive functions; you just have to write some scaffolding involving
occasional lexical return or dynamic throw in order to unwind the
stack, and you can hide that in macros pretty straightforwardly. Which
reduces it to a matter of syntax.
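One shape that scaffolding can take (a sketch with made-up PING/PONG states,
and using a driver loop rather than a throw) is a trampoline: each "state"
returns the next state as a closure instead of calling it, and a trivial
driver does the calling, so the stack never grows however long the machine
runs.

(defun trampoline (state)
  (loop :while state
        :do (setf state (funcall state))))

(defun ping (n)
  (when (plusp n)
    (lambda () (pong (1- n)))))    ; hand back the next state, don't call it

(defun pong (n)
  (when (plusp n)
    (lambda () (ping (1- n)))))

;; (trampoline (lambda () (ping 1000000))) runs in constant stack space.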
Whether I should say "matter of syntax" or "mere matter of syntax" is
perhaps the substance of your complaint?
In article <·············@nhplace.com>,
Kent M Pitman <······@nhplace.com> wrote:
> Ron Garret <·········@flownet.com> writes:
>
> > In article <·············@nhplace.com>,
> > Kent M Pitman <······@nhplace.com> wrote:
> >
> > > Pertti Kellomäki <················@tut.fi> writes:
> > >
> > > > While tail calls are of course important in a functional
> > > > style of programming, I have found them to be quite useful
> > > > as a general programming tool. Essentially a tail call is
> > > > an assignment of new values to a set of state variables,
> > > > followed by a jump.
> > >
> > > Which CL implements by allowing you to have lexical variables and GO.
> > >
> > > There is a lot written about how GO is ugly. But given that there is
> > > a straightforward, mechanical way of rewriting any set of GO
> > > structures into a community-accepted tail-recursive set of mutually
> > > recursive procedures (as by Scheme's letrec mediated by call/cc), this
> > > argument against GO as a legitimate solution seems to me just an
> > > argument about trivial syntax--any complexity problem inherent in GO
> > > is also inherent in mutually recursive programs.
> >
> > This is not a valid argument. There are straightforward, mechanical
> > ways of rewriting any construct in any language into another
> > Turing-complete language. It does not follow that all language
> > constructs are therefore "just an argument about trivial syntax".
> >
> > This is not to say that your point about GO is not correct (I don't
> > think it is but that's beside the point), just that your argument
> > doesn't adequately support your position IMHO.
>
> Well, I admit I was speaking out-of-context about a more specific
> debate that raged for years, and that I felt was ultimately put to bed
> by this argument. The argument seemed to be that people would claim
> to prefer a functional style because (they claimed) use of GO would
> inherently create tangles of dataflow that compilers could not sort
> out, and (they claimed) function calls would not.
I find that hard to believe, in part because:
> First, I doubt that
> compilers can't sort out what GO does, since GO is very directly
> related to what most compilers need to do.
Yes, exactly. So I have a hard time imagining anyone who had even half
a clue making this argument. OTOH, I have a very easy time envisioning
someone making the argument that using GO results in code that
*programmers* would have a hard time sorting out.
> And second, I don't think
> that function calling can avoid getting into a tangle (probably
> because they are Turing-complete, as you say)
That is beside the point. Even if function calls cannot avoid tangles
in all cases, it still might be the case that it can avoid tangles in
some cases where GO cannot.
> In particular, what Pertti had said that caught my eye was this phrase:
>
> | There are some programming techniques that are only possible with
> | tail call elimination,
Just shooting from the hip, here's an idiom:
(defun fooN (f ...)
  ...
  (funcall f ...))
that might not be so easy to convert into GO in an elegant manner,
particularly for large values of N, and even more particularly if new
fooN's could be created at run-time.
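A concrete instance of that shape, just to make the idiom explicit (SUM-CPS is
hypothetical): each call finishes by FUNCALLing a continuation it was handed,
and the continuations themselves are built at run time.

(defun sum-cps (list k)
  (if (null list)
      (funcall k 0)
      (sum-cps (cdr list)
               (lambda (rest) (funcall k (+ (car list) rest))))))

;; (sum-cps '(1 2 3) #'identity) => 6
;; Without tail-call merging, both the recursion and the chain of FUNCALLs
;; can use stack proportional to the length of the list.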
rg
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Iteration in lisp
Message-ID: <rem-2008apr26-012@yahoo.com>
> From: Ron Garret <·········@flownet.com>
> Just shooting from the hip, here's an idiom:
> (defun fooN (f ...)
> ...
> (funcall f ...))
> that might not be so easy to convert into GO in an elegant
> manner, particularly for large values of N, and even more
> particularly if new fooN's could be created at run-time.
This looks like a job for Superman^H^H^H^H^H^H^H^Htable-driven dispatch loop.
But actually I see no place where any fooN calls any fooM, so what's the point?
Where are the mutually recursive function-calls there??
Does the notation ... in the second line of the template hide the calls?
On Fri, 25 Apr 2008 16:13:40 +0200, Pertti Kellomäki
<················@tut.fi> wrote:
>
> While tail calls are of course important in a functional
> style of programming, I have found them to be quite useful
> as a general programming tool. Essentially a tail call is
> an assignment of new values to a set of state variables,
> followed by a jump. If the computation of the new values
> does not have side-effects, one gets an atomic state change,
> which is nice if you want to reason about invariants.
>
I agree that, from a provability point of view, invariants in iterations are a
lot easier to find using generator induction than with the more classic
Hoare logic or similar.
I recommend writing the algorithm inductively and then using transforms to
convert the code to a 'loopy' form for efficiency. Note that
'formal' logic and human logic differ somewhat in this respect.
--------------
John Thingstad
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Iteration in lisp
Message-ID: <rem-2008apr26-011@yahoo.com>
> From: Pertti Kellomäki <················@tut.fi>
> While tail calls are of course important in a functional
> style of programming,
In the light of what I say later, is that really a *given*?
Can you please find a Web page that clearly explains why this is
alleged to be true, why there's no better alternative which makes
this "important" use moot?
> I have found them to be quite useful as a general programming tool.
But, in the light of what I say later, are they the *right* tool??
Next, here comes the *biggie*:
> Essentially a tail call is an assignment of new values to a set
> of state variables, followed by a jump.
Now that you say that so clearly, I find myself strongly opposed to
any tail-recursion whatsoever! In fact I'm thinking that the
default behaviour for the compiler and interpretor when in
debugging mode is to issue a compiler warning whenever a tail call
is discovered. It should say something like:
** WARNING **
Tail call used to emulate long jump. Everyone knows long jump is
evil, and using tail calls to emulate long jumps clandestinely is
even more evil, like the difference between whether you wear a mask
when robbing a bank. But moreover, tail recursion wastes stack space
in debug mode, whereas a true long jump doesn't ever waste stack
space. Please use iteration instead if at all possible.
Now if you can prove me wrong by showing a simple easily-understood
example where tail-recursion is the only decent way to get the job
done, then I'll retract my compiler-warning suggestion.
Let me explain my logic more fully: Suppose you have a function
that makes two recursive function calls from within it. If the
result of the first call affects the behaviour of the second call,
then you're really doing sequential statements, and you really
should use an iterative construct. If the result of the first call
does *not* affect the behaviour of the second call, then you really
should be forking the process, running both calls in parallel, and
waiting for both to complete, instead of forcing the first to
complete before the second starts. In neither case is
tail-recursion the best technique.
Now suppose you have two mutually-recursive functions, each of
which calls the other but not itself. Thus in abstract the code is
like this:
(defun rec1 (statevector)
  (let ((newstate (changes1 statevector)))
    (rec2 newstate)))

(defun rec2 (statevector)
  (let ((newstate (changes2 statevector)))
    (rec1 newstate)))
This really should have been written as:
(defun loop12 (initialstatevector)
  (let ((statevector initialstatevector))
    (loop (setq statevector (changes1 statevector))
          (setq statevector (changes2 statevector)))))
Now suppose you have a haywire of a finite-state automaton you are
trying to emulate. Instead of writing a haywire of mutually
tail-recursive functions, you should write an explicit toplevel
dispatch-loop; see my proposed master loop here:
<http://groups.google.com/group/comp.lang.lisp/msg/c0d4538b33d72322?dmode=source>
= Message-ID: <·················@yahoo.com>
Having done that, it's easy to debug each of the state-change
functions independently, and it's relatively easy to switch between
table-driven and hand-coded finite-state-machines if you ever feel
so inclined.
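Something along these lines (a toy parity machine, invented for illustration):
the state is an explicit keyword, the loop is the single toplevel dispatch,
and because the table is consulted on every step it can be modified mid-run.

(defparameter *next-state*
  ;; state -> function of one input computing the next state
  (list (cons :even (lambda (bit) (if (zerop bit) :even :odd)))
        (cons :odd  (lambda (bit) (if (zerop bit) :odd  :even)))))

(defun run-parity (bits)
  (let ((state :even))
    (dolist (bit bits state)
      (setf state (funcall (cdr (assoc state *next-state*)) bit)))))

;; (run-parity '(1 0 1 1)) => :ODD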
> If the computation of the new values does not have side-effects,
> one gets an atomic state change, which is nice if you want to
> reason about invariants.
Passing around an explicit state object with two parts (the
primitive finite-state keyword, and the open-ended auxiliary data,
which is not really finite state) gives you the opportunity to
simply modify the finite-state keyword without needing to re-build
a whole new state object. And if for some reason you really do need
to create new states in a non-destructive way, so that parallel
processes can share most of the structure without experiencing each
other's side effects, then you can clone the state object with the
finite-state keyword changed but with the pointer to the auxiliary data
EQ to the original.
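A rough sketch of such a state object (the names here are invented):

(defstruct machine-state
  (keyword :start)   ; the primitive finite-state part
  (aux     nil))     ; open-ended auxiliary data

;; Destructive update: just change the keyword in place.
(defun advance! (state new-keyword)
  (setf (machine-state-keyword state) new-keyword)
  state)

;; Non-destructive clone: new keyword, AUX left EQ to the original's.
(defun advance (state new-keyword)
  (make-machine-state :keyword new-keyword
                      :aux (machine-state-aux state)))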
> The state machine implementation was the simplest example of
> non-functional programming that I could think of.
And IMO it was a bad example, a clear example where an explicit
dispatch loop instead of a haywire of cross-recursive calls is best.
Even in Scheme, where you're guaranteed constant-space tail calls,
I would prefer a dispatch loop. (But I never programmed in scheme,
so this is not a statement from direct experience, just based on
high-level principles.) (And I would never *outlaw* tail recursion,
so don't get mad at me for *that*. Heck, I didn't even want Red #2
food dye outlawed, just a prominent warning THIS PRODUCT CONTAINS
RED #2 WHICH IS KNOWN TO CAUSE CANCER. USE AT YOUR OWN RISK, rather
like the CHOKING HAZARD, NOT FOR ANYONE YOUNGER THAN 3 YEARS OLD)
TAIL RECURSION, HAZARDOUS TO YOUR STACK SPACE, USE AT YOUR OWN RISK.
On Sun, 27 Apr 2008 04:55:59 +0200, Robert Maas,
http://tinyurl.com/uh3t <·················@SpamGourmet.Com> wrote:
>
> And IMO it was a bad example, a clear example where an explicit
> dispatch loop instead of haywire of cross-recursive calls is best.
> Even in Scheme, where you're guaranteed constant-space tail calls,
> I would prefer a dispatch loop. (But I never programmed in scheme,
> so this is not a statement from direct experience, just based on
> high-level principles.) (And I would never *outlaw* tail recursion,
> so don't get mad at me for *that*. Heck, I didn't even want Red #2
> food dye outlawed, just a prominent warning THIS PRODUCT CONTAINS
> RED #2 WHICH IS KNOWN TO CAUSE CANCER. USE AT YOUR OWN RISK, rather
> like the CHOKING HAZARD, NOT FOR ANYONE YOUNGER THAN 3 YEARS OLD)
>
> TAIL RECURSION, HAZARDOUS TO YOUR STACK SPACE, USE AT YOUR OWN RISK.
Sorry to break in with a side note:
Maybe I am a bit dim, but to me the obvious implementation of a state machine
is just a tagbody and goto.
If you wish to make it more explicit you could:
(defmacro state-machine (&body body)
  `(tagbody ,@body))

(defmacro change-state (state)
  `(go ,state))

(defun reader ()
  (let (<state variables>)
    (locally (declare ...))   ; make sure the variables are on the stack
    (state-machine
      :prelude
      ...
      :start
      ...
      :read-next
      (change-state :start)
      ...)))
The class version is a waste of memory, too slow and too much work.
The recursive version has an unclear execution model if it uses dynamic
variables, etc.
Dispatch just adds overhead with no added clarity.
Seems to me people go to almost extreme lengths to avoid goto even when it
is the right abstraction.
A continuation mechanism could be used if you need to exit and resume the
machine from another point in the program (like the one in the library
cl-cont).
--------------
John Thingstad
"John Thingstad" <·······@online.no> writes:
> Sorry to break in with a side note:
>
> Maybe I am bit dim but to me the obvious implementation of a state
> machine is just a tagbody and goto.
> If you wish to make it more explicit you could:
>
> (defmacro state-machine (&body body)
> `(tagbody ,@body))
(defun parse-state-machine-body (body)
  (loop :with parameters = '()
        :with states = '()
        :with forms = '()
        :for state :in body
        :do (destructuring-bind (state-name state-lambda-list &body body) state
              (dolist (state-param state-lambda-list)
                (pushnew state-param parameters))
              (push (cons state-name state-lambda-list) states)
              (push state-name forms)
              (push `(progn ,@body) forms))
        :finally (return (values parameters states forms))))

(defmacro state-machine (&body body)
  (multiple-value-bind (state-parameters states body)
      (parse-state-machine-body body)
    `(let ,state-parameters
       (macrolet ((change-state (state-name &rest args)
                    (let ((state (find state-name ',states :key (function first))))
                      (if state
                          `(progn (psetf ,@(mapcan (function list) (rest state) args))
                                  (go ,(first state)))
                          `(error "No state named ~S" ',state-name)))))
         (tagbody ,@(reverse body))))))
(state-machine
  (prelude ()
    (print 'start)
    (change-state start 1))
  (start (count)
    (print count)
    (if (< count 4)
        (change-state start (1+ count))
        (change-state read-next "Hi")))
  (read-next (message)
    (print message)))
prints:
START
1
2
3
4
"Hi"
--> NIL
(let ((xx 42) (yy 18))
  (block gcd
    (state-machine
      (prelude () (change-state gcd xx yy))
      (gcd (x y)
        (cond ((= x y) (change-state done x))
              ((< x y) (change-state gcd (- y x) x))
              (t       (change-state gcd (- x y) y))))
      (done (result) (return-from gcd result)))))
--> 6
So, it would be nice to add a name to the state machine, using it to
name a block, and it is not clear from the syntax that all the state
parameters are in the scope of the whole state machine, shadowing of
course the surrounding lexical variables of the same name. We should
rename them with gensyms to avoid this. Otherwise, this is like
LABELS, but using only JMP instead of JSR. Of course, we could add
macrolets for each state name to be able to write (gcd (- y x) x)
instead of (change-state gcd (- y x) x). (Also, better handle
state-lambda-lists to accept full ordinary-lambda-lists).
--
__Pascal Bourguignon__ http://www.informatimago.com/
IMPORTANT NOTICE TO PURCHASERS: The entire physical universe,
including this product, may one day collapse back into an
infinitesimally small space. Should another universe subsequently
re-emerge, the existence of this product in that universe cannot be
guaranteed.
On Sun, 27 Apr 2008 12:39:36 +0200, Pascal Bourguignon
<···@informatimago.com> wrote:
>
> (defun parse-state-machine-body (body)
> (loop :with parameters = '()
> :with states = '()
...
> So, it would be nice to add a name to the state machine, using it to
> name a block, and it is not clear form the syntax that all the state
> parameters are in the scope of the whole state machine, shadowing of
> course the surrounding lexical variables of same name. We should
> rename them with gensyms to avoid this. Otherwise, this is like
> LABELS, but using only JMP instead of JSR. Of course, we could add
> macrolets for each state name to be able to write (gcd (- y x) x)
> instead of (change-state gcd (- y x) x). (Also, better handle
> state-lambda-lists to accept full ordinary-lambda-lists).
>
Seems unnecessarily complicated to me.
If the states are not entangled, why not just call a function instead?
--------------
John Thingstad
"John Thingstad" <·······@online.no> writes:
> On Sun, 27 Apr 2008 12:39:36 +0200, Pascal Bourguignon
> <···@informatimago.com> wrote:
>
>>
>> (defun parse-state-machine-body (body)
>> (loop :with parameters = '()
>> :with states = '()
> ...
>> So, it would be nice to add a name to the state machine, using it to
>> name a block, and it is not clear from the syntax that all the state
>> parameters are in the scope of the whole state machine, shadowing of
>> course the surrounding lexical variables of the same name. We should
>> rename them with gensyms to avoid this. Otherwise, this is like
>> LABELS, but using only JMP instead of JSR. Of course, we could add
>> macrolets for each state name to be able to write (gcd (- y x) x)
>> instead of (change-state gcd (- y x) x). (Also, better handle
>> state-lambda-lists to accept full ordinary-lambda-lists).
>>
>
> Seems unnecessarily complicated to me.
> If the states are not entangled, why not just call a function instead?
What do you mean exactly by "entangled"?
My state-machine macro makes O(1) use of the stack. This is why we
would use it in Common Lisp.
Basically, in Scheme you would just write a named let. But Scheme
named lets must be well balanced, while this state-machine macro
allows you to jump from one state to any other state. This is what I
would call "entangled" here.
--
__Pascal Bourguignon__ http://www.informatimago.com/
"Logiciels libres : nourris au code source sans farine animale."
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Iteration in lisp
Date:
Message-ID: <rem-2008may01-004@yahoo.com>
> From: "John Thingstad" <·······@online.no>
> Maybe I am a bit dim but to me the obvious implementation of a
> state machine is just a tagbody and goto.
If you're never going to change it mid-run, and if you don't need
to include any instrumentation of a general nature about the
process, such as a trace of state changes, then clearly that's the
simplest way to go.
But if you need to change the state-transition-matrix in the middle
of a run and *not* have to re-start from the top, then you need
something that has an explicit state variable and a single toplevel
loop and either table-driven or separate-functions for defining the
new state that follows any given state. The table can simply be
modified mid-run to show a new next-state, while a separate
function can be re-defined (in Lisp, not so easy in other
languages) at any time to return the new-state value differently
from the previous definition.
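For concreteness, a minimal sketch of such a table-driven loop (all of the
names below are illustrative, nothing more):

(defvar *transitions* (make-hash-table :test #'eq))

(defun run-machine (state aux &key trace)
  ;; Explicit state variable, single toplevel loop, table-driven dispatch.
  ;; An entry in *TRANSITIONS* can be replaced with (SETF (GETHASH ...))
  ;; mid-run; the loop sees the new definition on the very next step.
  (loop until (eq state :done)
        do (when trace (format t "~&state = ~S  aux = ~S~%" state aux))
           (setq state (funcall (gethash state *transitions*) aux))))

(setf (gethash :start *transitions*)
      (lambda (aux) (if (plusp (decf (car aux))) :start :done)))

;; (run-machine :start (list 3) :trace t) prints three :START transitions,
;; then falls out of the loop when the transition function returns :DONE.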
> If you wish to make it more explicit you could:
> (defmacro state-machine (&body body)
> `(tagbody ,@body))
> (defmacro change-state (state)
> `(go ,state))
[rest o details snipped]
If you want an easy way to recompile a new tagbody containing a new
state-change matrix, but you will be re-starting from the top
rather than picking up where you left off, this sounds optimal in
regard to simplicity.
> The class version is a waste of memory, too slow and too much work.
I'm not sure what you mean by "class version". Perhaps that's some
part of the thread I haven't seen yet.
> The recursive version has a unclear execution model if it uses
> dynamic variables etc..
I agree. But I can't think of any good reason to include dynamic
variables. More normally I envision a single global state-object
which contains all the embedded data about the current overall
state, which includes the basic state (modeled by the finite-state
machine) plus all the auxiliary data that is being computed by the
machine. For example, an SGML or XML parser might be modeled as a
finite-state machine (for parse mode: toplevel, inside tag, inside
attribute, inside numeric value, inside string, inside white space
between attributes, inside character/unicode entity, etc.) plus
auxiliary stacks (for emulating recursion into sub-expressions i.e.
text bounded by nested opening/closing tag pairs).
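For instance, the single state object for such a parser might look like this
sketch (the slot names are invented):

(defstruct parser-state
  (mode :toplevel)   ; formal FSM state: :toplevel, :in-tag, :in-attribute, ...
  (tag-stack '())    ; auxiliary stack emulating recursion into nested elements
  (output '()))      ; whatever the machine has accumulated so far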
> Dispatch just adds overhead with no added clarity.
Dispatch within a single loop allows instrumentation to be placed in
just one spot within the loop rather than duplicated once for each
state. Compare these:
  (setq state (cons 'state1 (make-auxilary-record ...)))
  (loop (trace-state state) (one-step state))

  (setq aux (make-auxilary-record ...))
  (tagbody state1 (trace-state1 aux)
           (...handcoded-conditional... (go state2) ... (go state4))
   state2  (trace-state2 aux)
           (...handcoded-conditional... (go state3) ... (go state4))
   state3  (trace-state3 aux)
           (...handcoded-conditional... (go state1) ... (go state5))
   state4  (trace-state4 aux)
           (...handcoded-conditional... (go state1) ... (go state3))
   state5  (trace-state5 aux)
           (...handcoded-conditional... (go state2) ... (go state5)))
See how much messier it is to duplicate the tracing for each state?
> Seems to me people go to almost extreme lengths to avoid goto
> even when it is the right abstraction.
I'm not one of those people. You should say that "some" people go
to almost extreme lengths ...
I like to instrument my code, and if I'm doing a whole lot of
things that are almost identical in some ways then I like to write
that identical stuff in just one place instead of many, assuming I
anticipate the potential duplication in advance of course. I make
liberal use of goto in my code when first developing it one line at
a time, and the goto remains until and unless I can refactor the
code to use some higher-level construct to replace it nicely.
One use for goto that never disappears is a common return point.
Typically I have two ways to exit, one when it finds what it's
looking for, and one where it exhausts the search space without
finding what it's looking for. If I want to return multiple values,
only one of which is different depending on success or failure,
then SETQ of the key difference variable then GO to a common
(return (values ...)) seems the best pattern to me. On the other
hand if I return a different number of values depending on how the
loop was exited, then I write (return (values ...)) individually at
each exit-from-loop point instead of using any GO there.
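The pattern looks roughly like this (a sketch; the search itself is just a
stand-in):

(defun find-first-even (list)
  ;; FOUND and POSITION are the only things that differ between the two
  ;; ways of leaving the loop; both exits GO to the one common return point.
  (let ((found nil) (position nil) (i 0))
    (tagbody
     scan
       (cond ((null list) (go exit))              ; exhausted: FOUND stays NIL
             ((evenp (car list))
              (setq found (car list) position i)  ; success: set the differing values
              (go exit)))
       (setq list (cdr list) i (1+ i))
       (go scan)
     exit
       (return-from find-first-even (values found position)))))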
> A continuation mechanism could be used if you need to exit and
> resume the machine from another point in the program (like the one
> in the library cl-cont).
Or pass the (rawstate . auxinfo) as parameter to the dispatch loop,
then when needing to temporarily exit just throw the current
machine state to achieve non-local immediate exit from the loop,
and then pass it as parameter next time the dispatch loop is called
from some other place in the higher-level application. One use for
this I can see is if the state transition matrix is being built
incrementally, with some cases not yet programmed, one case of the
function to compute next state might discover the next state can't
yet be determined, so it throws the previous state just before that
trouble, and the high-level program fixes the problem by changing
the state-handler for that particular old state to correctly
determine the new state, then on re-start it runs fine to next
state. (This is an alternative to fixing the problem from *inside*
the dynamic context, such as from inside a breakpoint, whereby the
dispatch loop never needs to re-start picking up where it left
off.) I guess you had some other useful application in mind, but I
presume my methodology would work for your application too.
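The shape of that is roughly the following sketch (the :SUSPEND tag and all
names are invented):

(defun run-until-stuck (state aux step-fn)
  ;; STEP-FN returns the next state, or does
  ;;   (throw :suspend (cons current-state aux))
  ;; when it cannot yet decide what comes next.  Either way the caller gets
  ;; back a (state . aux) pair it can later hand to RUN-UNTIL-STUCK again
  ;; to pick up where the machine left off.
  (catch :suspend
    (loop until (eq state :done)
          do (setq state (funcall step-fn state aux)))
    (cons :done aux)))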
·················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t)
writes:
> TAIL RECURSION, HAZARDOUS TO YOUR STACK SPACE, USE AT YOUR OWN RISK.
FUNCTION CALLS, HAZARDOUS TO YOUR STACK SPACE, USE AT YOUR OWN RISK.
Paul Donnelly <·············@sbcglobal.net> writes:
> ·················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t)
> writes:
>
> > TAIL RECURSION, HAZARDOUS TO YOUR STACK SPACE, USE AT YOUR OWN RISK.
>
> FUNCTION CALLS, HAZARDOUS TO YOUR STACK SPACE, USE AT YOUR OWN RISK.
This attempt at underscoring symmetry seems to me to be the right
approach to analyzing this problem. It highlights something behind
several of the issues dividing Scheme and (Common) Lisp. It's not
that one or the other is Right, but rather languages are designed
around the idea of making it easy and unobtrusive to do things that
people are intended to do and harder or more conspicuous to do things
that are intended to be not-so-common.
The question of whether tail recursion to accomplish a "jump" is
conspicuous or just something incidental is a point-of-view matter,
and there are people who think that if you want to do this, you should
do so more conspicuously so that your purpose isn't buried in
subtlety.
To offer a sense of balance, there are places where CL programmers
think it's equally obvious that (f (g) (h)) should involve
left-to-right evaluation of (g) and (h), and where Scheme folks think
that's too much to be left "to chance" and that you should have to
explicitly write something like:
((lambda (gval) (f gval (h))) (g))
or the equivalent to assure the order. [There are certainly
syntactically simpler ways to say it, but none of them are as simple
as (f (g) (h))].
So it's a legitimate practice done on both sides of the aisle to make
judgments about what goes without saying and what does not, and to
make the language syntax biased in that way. All languages do it,
really; sometimes they do it for intentional aesthetic reasons, but in
the end, they all must do it at some point or another. It's just not
possible to make all possible things in a language equally easy.
I continue to note that I'm not unsympathetic to those who like tail
recursion, I just think it comes at a cost, and I'm trying to say it's
got legit arguments on both sides, such that while some people may not
like what CL does, they shouldn't say it is a no-brainer that it
should be other than it is.
The present behavior was chosen knowingly by people well familiar with
the optimization in question, and NOT for reasons of backward
compatibility, since no program can really be dependent on running out
of stack space... they did it because they hadn't felt stack space to
be an insurmountable problem, while they believed that debugging
errant programs was a routine problem that could be majorly hampered
without good stack debugging.
There was at the time talk of using some sort of strange data
structure that offered, during debugging mode, a finite history of
recently elided calls at each stack level, etc. I don't recall the
details. But there was not real-world experience in a worked
implementation showing that this compensated enough that the user
community would be happy, so it wasn't assumed to be something the
language designers could cite as "proof" that the feature would be
accepted.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Iteration in lisp
Date:
Message-ID: <rem-2008may03-003@yahoo.com>
> > > TAIL RECURSION, HAZARDOUS TO YOUR STACK SPACE, USE AT YOUR OWN RISK.
> > FUNCTION CALLS, HAZARDOUS TO YOUR STACK SPACE, USE AT YOUR OWN RISK.
> From: Kent M Pitman <······@nhplace.com>
> This attempt at underscoring symmetry seems to me to be the right
> approach to analyzing this problem. It highlights something behind
> several of the issues dividing Scheme and (Common) Lisp. It's not
> that one or the other is Right, but rather languages are designed
> around the idea of making it easy and unobtrusive to do things that
> people are intended to do and harder or more conspicuous to do things
> that are intended to be not-so-common.
(This is the same principle as data compression!
Ideally cost should be proportional to surprise value,
computed as negative logarithm of frequency of use,
thereby achieving constant=balanced cost/value ratio.)
> The question of whether tail recursion to accomplish a "jump" is
> conspicuous or just something incidental is a point-of-view matter,
Good point.
> and there are people who think that if you want to do this, you
> should do so more conspicuously so that your purpose isn't buried
> in subtlety.
Another good point. The principle of "self-documenting code"
requires that the typical reader of your code can see at a glance
what's going on at the line-by-line level, using the inline
comments to describe the givens/does/results of each function/method
and to individually document any line of code whose syntax per se
isn't sufficient to make it clear what's really going on.
But each viewer of the code comes from a different background, and
has different expectations of what are typical ways that code can
process data, and hence different ideas of which coding tricks are
so subtle that they require inline comments. Whether a different
syntax to make the trick "obvious" is necessary, or even sufficient,
is a matter of opinion. There are three cases, for example:
- The concise code typical of Scheme is self-explanatory.
    (x y z)
- The more verbose code typical of Common Lisp is necessary to make it clear.
    (funcall x y z)
- Neither is sufficient; either would need a comment to explain it.
    <eitherAbove> ;X was the continuation passed from foo, a function to apply.
                  ;Y is the environment structure, and Z is the local data.
> To offer a sense of balance, there are places where CL
> programmers think it's equally obvious that (f (g) (h)) should
> involve left-to-right evaluation of (g) and (h), and where Scheme
> folks think that's too much to be left "to chance" and that you
> should have to explicitly write something like:
> ((lambda (gval) (f gval (h))) (g))
> or the equivalent to assure the order.
Yeah: (let* ((gval (g))
             (hval (h)))
        (f gval hval))
which ought to force sequential bindings, as opposed to:
      (let ((gval (g))
            (hval (h)))
        (f gval hval))
which would presumably allow parallel or backwards sequence of g,h calls.
I wasn't aware of this difference between Scheme and Common Lisp,
although I might have read about it once long ago. However it seems
to me this is a great opportunity for you to write another of your
great essays, this time on the **major** design choices that differ
between Scheme and Common Lisp. I'm aware of these:
- Single function/value cell, where (f x y) means (funcall f x y),
compared to separate function and value cells where the former
works only for the function cell while the latter works only for
the value cell of f. I greatly prefer the CL approach.
- Sequential execution of actual parameters to function in CL.
- Large built-in library in CL to make it a workhorse for D/P
compared to very small built-in essentials only in Scheme to
make it easy to teach the whole language in one semester.
You are probably aware of a few other major design differences?
> It's just not possible to make all possible things in a language
> equally easy.
Candidate for the koan repository!
> I continue to note that I'm not unsympathetic to those who like
> tail recursion, I just think it comes at a cost, and I'm trying
> to say it's got legit arguments on both sides, such that while
> some people may not like what CL does, they shouldn't say it is a
> no-brainer that it should be other than it is.
Well said. The buzzword is "tradeoff". For the benefit of doing one
kind of thing differently to be more efficient or easy to use, we
would have to pay the price of something else being less efficient
or more difficult to use. In the case of tail recursion optimizing
to avoid adding to the stack, we would lose a *lot* of benefit, as
somebody enumerated earlier in this thread (viewing stack frames
during debugging, special variables, non-local returns via
throw/catch or error restart, etc.).
> The present behavior was chosen knowingly by people well familiar
> with the optimization in question, and NOT for reasons of backward
> compatibility, since no program can really be dependent on running
> out of stack space... they did it because they hadn't felt stack
> space to be an insurmountable problem, while they believed that
> debugging errant programs was a routine problem that could be
> majorly hampered without good stack debugging.
How do Scheme programmers debug their code?? (Serious question, curious Robert.)
Actually I'm also curious how Forth programmers debug their code!
(If no experienced Forth programmers see this thread, kindly ignore.)
I personally develop+test line-at-a-time, which mostly makes
stack-frame debugging unnecessary, but every once in a while
stack-frame debugging of my code does turn out useful after all.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Iteration in lisp
Date:
Message-ID: <rem-2008may03-002@yahoo.com>
> > TAIL RECURSION, HAZARDOUS TO YOUR STACK SPACE, USE AT YOUR OWN RISK.
> From: Paul Donnelly <·············@sbcglobal.net>
> FUNCTION CALLS, HAZARDOUS TO YOUR STACK SPACE, USE AT YOUR OWN RISK.
There's a fundamental difference between tail recursion to emulate
iteration (my main point) and regular functions to build tools on
top of other tools. The stack depth needed for tail recursion is
limited only by the size of the list you're recuriterating down.
So if you have a program that works just fine with a thousand
items, it blows up if you try to use it with a trillion items. But
the stack depth needed for bottom-up tool building is determined by
your software design, so any given program will continue to work
without stack overflow no matter how large a dataset you work with.
Thus *you* (the programmer) have control over stack depth when
building tools on top of tools bottom-up by function calling, but
your user has control to break any software you write if you use
tail recursion.
Now regular recursion used responsibly, such as computing the
factorial via divide-and-conquer, or traversing a balanced binary
tree depth-first, etc., has intermediate behaviour, log(n) stack
depth, so the user has some ability to overflow your stack, but in
practice the user would need more data than the entire observable
universe just to overflow a medium-sized stack, so it ain't a problem.
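For example, the divide-and-conquer factorial mentioned above might be
written like this (a sketch):

(defun range-product (lo hi)
  ;; Product of the integers LO..HI by splitting the range in half, so the
  ;; stack depth is about log2(hi - lo) rather than (hi - lo).
  (cond ((> lo hi) 1)
        ((= lo hi) lo)
        (t (let ((mid (floor (+ lo hi) 2)))
             (* (range-product lo mid)
                (range-product (1+ mid) hi))))))

(defun factorial (n) (range-product 1 n))
;; (factorial 1000) recurses only about 10 levels deep on any branch.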
But what about depth-first exploration of a virtual universe far
larger than any actual universe? Log(n) might just overflow the
stack if n is a googolplex. Let me know if this is the case that
concerns you, and we can maybe discuss it by specific examples. For
example, what if we're exploring large prime numbers (a few hundred
bits/digits) for purpose of devising cryptographic keys? In that
case, stack depth is small enough not to be a problem. Can you
think of a problem domain where exploring a really really large
virtual space might overflow a decent-sized stack even with log(n)
stack depth? Exhaustively traversing the space wouldn't be a problem,
because the time n consumed would be more of a problem than the stack
depth log(n). It'd have to be random sampling of the space, where
time would be reasonable but stack depth would be too large.
You got a good example to discuss?
Kent M Pitman wrote:
> The real difficulty is that if you define tail calling to be just an
> option that you can suppress, e.g., by some compiler switch, then what
> you get is that some programs are pushing stack you don't want while
> others are being debugged...
Possibly totally without merit:
Given CL has funcall, how about introduction of a "special" variant of
funcall ("funtail" in the example below) for indicating your code
_requires_ call elimination to run as you intended?
Implementations supporting it would do it or error or warn if the
compiler can't guarantee the elimination.
Implementations that don't know about it would error out, given that
"funtail" would be meaningless to them. If you're writing code
that won't work as you intended unless call elimination takes place,
maybe spelling it out at the place of the tail call could be the lispy
way? [also: consider providing for elimination when functions are
passed in as arguments to functions and funcalled?] Possibly a high
debug setting could inhibit the elimination - but unlike the case where
the elimination is just considered an optimization, the compiler
could warn ("Not eliminating in funtail due to debug = 3").
e.g., to mutate Pertti's trivial function-style state machine:
(defun state0 ()
  (case (read)
    ((eliminate) (funtail #'state1))
    (t (funtail #'error-state))))

(defun state1 ()
  (case (read)
    ((tail) (funtail #'state2))
    (t (funtail #'error-state))))

(defun state2 ()
  (case (read)
    ((tail) (funtail #'state2))
    ((recursion) (funtail #'success-state))
    (t (funtail #'error-state))))

(defun success-state () 'success)
(defun error-state () 'utter-failure)
On Apr 25, 8:39 am, David Golden <············@oceanfree.net> wrote:
> Kent M Pitman wrote:
> > The real difficulty is that if you define tail calling to be just an
> > option that you can suppress, e.g., by some compiler switch, then what
> > you get is that some programs are pushing stack you don't want while
> > others are being debugged...
>
> Possibly totally without merit:
>
> Given CL has funcall, how about introduction of a "special" variant of
> funcall ("funtail" in the example below) for indicating your code
> _requires_ call elimination to run as you intended?
I like this, though not as much as my own proposal, which is that CL
compilers shouldn't tail-merge by default, but there should be a
simple way to declare a set of top-level `defun's, calls among which
should be tail-merged.
This means you can write your state machines in Scheme style with just
one additional declaration to say that that's what you're doing.
-- Scott
David Golden <············@oceanfree.net> writes:
> Kent M Pitman wrote:
>
>> The real difficulty is that if you define tail calling to be just an
>> option that you can suppress, e.g., by some compiler switch, then what
>> you get is that some programs are pushing stack you don't want while
>> others are being debugged...
>
>
> Possibly totally without merit:
>
> Given CL has funcall, how about introduction of a "special" variant of
> funcall ("funtail" in the example below) for indicating your code
> _requires_ call elimination to run as you intended?
It would be like the "become" keyword in Newsqueak, which behaved like a
return statement except it replaced the calling function completely,
even in the stack.
It's not even a matter of call elimination (as performed by a smart
compiler) anymore, it's a matter of explicit replacement, just like
  (tagbody
     (print (1+ (go quit)))
   quit)
and
  (block nil
    (print (1+ (return))))
don't entail worrying about (print (1+ ...)) except insofar as the
compiler might warn you that your code is strange.
(defun recurse-without-crashing ()
  (become (recurse-without-crashing)))
would be pretty cool by me.
Pertti Kellomäki wrote:
> Kent M Pitman wrote:
>> ····@stablecross.com (Bob Felts) writes:
>>> Kent M Pitman <······@nhplace.com> wrote:
>>>> A
>>>> recursive function can run out of stack since CL does not guarantee
>>>> tail call elimination.
>>> Any pointers to the rationale for this decision?
>>
>> Stack debugging.
>
> While technically correct, I don't think this really holds
> water. In the absence of guaranteed tail call elimination,
> iterative processes have to be written using loops, which
> destroy exactly the same information as tail calls.
No, they don't have to be written using loops.
You could, for example, decide to use a recursive version _in order_ to
have more debug information at runtime.
> There are some programming techniques that are only possible with
> tail call elimination, yet have little to do with functional
> programming. Implementing a state machine as a set of mutually
> recursive functions comes to mind, and I'm sure there are others
> as well.
It would be good if the decision whether you want more debug information
or not were orthogonal to whether you use recursion or iteration. The
latter should ideally depend on whatever makes the code easier to
understand. More often than not, iteration constructs reveal the intent
of the programmer more clearly, though, because they are typically more
specialized.
Pascal
--
1st European Lisp Symposium (ELS'08)
http://prog.vub.ac.be/~pcostanza/els08/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
Pascal Costanza wrote:
> Pertti Kellomäki wrote:
>> In the absence of guaranteed tail call elimination,
>> iterative processes have to be written using loops, which
>> destroy exactly the same information as tail calls.
>
> No, they don't have to be written using loops.
>
> You could, for example, decide to use a recursive version _in order_ to
> have more debug information at runtime.
But then you have a recursive process, not an iterative one.
My point is that if you want to portably guarantee that your program
can be run forever -- i.e. it runs in constant space -- the top
level repetition must be written as a loop.
A corollary of this is that if your top level is a state machine,
it cannot be implemented as a set of mutually recursive functions.
--
Pertti
Pertti Kellomäki wrote:
> Pascal Costanza wrote:
>
>> Pertti Kellomäki wrote:
>>
>>> In the absence of guaranteed tail call elimination,
>>> iterative processes have to be written using loops, which
>>> destroy exactly the same information as tail calls.
>>
>>
>> No, they don't have to be written using loops.
>>
>> You could, for example, decide to use a recursive version _in order_
>> to have more debug information at runtime.
>
>
> But then you have a recursive process, not an iterative one.
Right. If the process is spiritually iterative, elements processed
before me are irrelevant to debugging me.
kenny
--
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/
"I've never read the rulebook. My job is to catch the ball."
-- Catcher Josh Bard after making a great catch on a foul ball
and then sliding into the dugout, which by the rules allowed the
runners to advance one base costing his pitcher a possible shutout
because there was a runner on third base.
"My sig is longer than most of my articles."
-- Kenny Tilton
Pertti Kellomäki wrote:
> Pascal Costanza wrote:
>> Pertti Kellomäki wrote:
>>> In the absence of guaranteed tail call elimination,
>>> iterative processes have to be written using loops, which
>>> destroy exactly the same information as tail calls.
>>
>> No, they don't have to be written using loops.
>>
>> You could, for example, decide to use a recursive version _in order_
>> to have more debug information at runtime.
>
> But then you have a recursive process, not an iterative one.
>
> My point is that if you want to portably guarantee that your program
> can be run forever -- i.e. it runs in constant space -- the top
> level repetition must be written as a loop.
>
> A corollary of this is that if your top level is a state machine,
> it cannot be implemented as a set of mutually recursive functions.
Yes, that's of course correct.
Pascal
--
1st European Lisp Symposium (ELS'08)
http://prog.vub.ac.be/~pcostanza/els08/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Iteration in lisp
Date:
Message-ID: <rem-2008apr26-008@yahoo.com>
> From: Pascal Costanza <····@p-cos.net>
> You could, for example, decide to use a recursive version _in
> order_ to have more debug information at runtime.
I actually disagree! Your idea locks together the design of the
algorithm as tail-recursive calls with the amount of debugging
information you'll have when you hit a breakpoint. If you want to
change the amount of debugging information, such as more info
because you've realized you have a difficult-to-track bug, or less
info because you believe your code is now working and stable, you
need to change all your code to use a different call structure,
which will likely introduce new bugs.
I'd rather divorce those two concepts. Write a simple LOOP. If you
feel you need to maintain a stack of past states, directly CONS up
a past-state list as you go around the loop. When you are done
debugging, one semicolon in your source file, or (when NIL ...)
wrapped around the stack-pushing code, will stop building that
info. You can independently build whatever amount of past-state
info you need at *each* major program-loop in your application,
independently switching such CONSing on/off with a trivial edit,
and never introduce new bugs from such changes. You can even do
sorta like the Unix shell does, by having a limit on the number of
history entries, and have a counter that keeps track of how long
your emulated-stack is, and whenever it passes the upper-bound
threshold you chop the linked-list down to the lower-bound
threshold and reset the counter to that number. Or implement a
QUEUE (FIFO) API and simply pop from the end of the queue whenever
you've pushed one too many items into the queue, so then you need
only one threshold. So much more flexibility at changing the
quantity of various types of debug info if you emulate a history
list yourself instead of relying on stack frames to have that info.
You even have the option of pushing the entire history list onto a
GLOBAL meta-histories list whenever you return from such a large
function, so that even after you've returned you can still go back
to see why it returned the wrong value which you didn't discover
until sometime later.
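In code, the bounded-history idea might look like this sketch (the
thresholds are arbitrary):

(defvar *history* '())             ; emulated "stack" of past states
(defvar *history-length* 0)
(defparameter *history-upper* 120) ; chop when the history exceeds this ...
(defparameter *history-lower* 100) ; ... back down to this many entries

(defun note-state (state)
  (push state *history*)
  (when (> (incf *history-length*) *history-upper*)
    (setf *history* (subseq *history* 0 *history-lower*)
          *history-length* *history-lower*)))

(defun run (state step-fn)
  (loop until (eq state :done)
        do (note-state state)              ; one semicolon here turns it off
           (setq state (funcall step-fn state)))
  state)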
> It would be good if the decision whether you want more debug
> information or not were orthogonal to whether you use recursion or
> iteration. The latter should ideally depend on whatever makes the
> code easier to understand. More often than not, iteration
> constructs reveal the intent of the programmer more clearly,
> though, because they are typically more specialized.
Hey, that's sorta what I said just above! I should have read ahead
before composing my reply! "divorced" = "orthogonal"
Note that if you write your algorithm recursively, you can *still*
maintain a history of states and/or state changes in a global
history list.
thus spoke Pertti Kellomäki <················@tut.fi>:
>>>> A recursive function can run out of stack since CL does not
>>>> guarantee tail call elimination.
>>> Any pointers to the rationale for this decision?
>> Stack debugging.
> While technically correct, I don't think this really holds
> water. In the absence of guaranteed tail call elimination,
> iterative processes have to be written using loops, which
> destroy exactly the same information as tail calls.
Holds true for tail recursion, but tail calls are eliminated even when
no recursion occurs:
(defun foo1 ()
  (foo2))
[...]
(defun foo4 ()
  (foo5))
(defun foo5 ()
  (error "badness"))
Backtrace:
0: (FOO5)
1: (FOO5)[:EXTERNAL]
2: (SB-INT:SIMPLE-EVAL-IN-LEXENV (FOO1) #<NULL-LEXENV>)
3: (SWANK::EVAL-REGION "(foo1)
")
> IMO the only real argument against guaranteed tail call elimination
> is that it may be too expensive for some implementations.
It's not as reliable as one might think. It's hard to maintain tail
positions with LET bindings of dynamic variables, a tad similar to
fluid-let in some implementations of Scheme.
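A small illustration of that point (sketch):

(defvar *depth* 0)

(defun walk (list)
  (if (null list)
      *depth*
      (let ((*depth* (1+ *depth*)))  ; dynamic rebinding must be undone on return,
        (walk (cdr list)))))         ; so this call is NOT in tail position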
--
Even a pig will climb a tree when it is praised.
Pertti Kellomäki <················@tut.fi> writes:
> There are some programming techniques that are only possible with
> tail call elimination, yet have little to do with functional
> programming. Implementing a state machine as a set of mutually
> recursive functions comes to mind, and I'm sure there are others
> as well.
I only spent a few minutes on this, so it's possible it could be done better.
This is done with LispWorks 4.4.5 ...
(defmacro with-state-machine (bindings &body body-forms)
  (let* ((args-var (gensym "ARGS"))
         (state-var (gensym "STATE"))
         (exit-tag (gensym "EXIT"))
         (main-fn (gensym "MAIN"))
         (restart-tag (gensym "RESTART")))
    `(let ((,args-var '()) ,state-var)
       (block ,exit-tag
         (setq ,state-var ',main-fn)
         (tagbody
           ,restart-tag
           (labels (,@(mapcar #'(lambda (binding)
                                  `(,(car binding) (&rest args)
                                    (setq ,state-var ',(car binding))
                                    (setq ,args-var args)
                                    (go ,restart-tag)))
                              bindings))
             (return-from ,exit-tag
               (ecase ,state-var
                 ,@(mapcar #'(lambda (binding)
                               `(,(car binding)
                                 (apply #'(lambda ,@(cdr binding)) ,args-var)))
                           bindings)
                 (,main-fn () ,@body-forms)))))))))
(progn
  (compile (defun test (i m)
             (let ((k 0) (n 0) (d (- m 1)))
               (declare (fixnum i m k n d))
               (with-state-machine ((foo () (setq k 0) (incf n) (bar))
                                    (oof () (incf k) (bar))
                                    (bar () (cond ((= i 0) n)
                                                  (t
                                                   (decf i)
                                                   (if (= k d) (foo) (oof))))))
                 (bar)))))
  (list (mp:without-preemption (time (test 1000 100)))
        (mp:without-preemption (time (test 10000 100)))))
Timing the evaluation of (TEST 1000 100)
user time = 0.000
system time = 0.000
Elapsed time = 0:00:00
Allocation = 0 bytes standard / 0 bytes conses
0 Page faults
Timing the evaluation of (TEST 10000 100)
user time = 0.015
system time = 0.000
Elapsed time = 0:00:00
Allocation = 0 bytes standard / 0 bytes conses
0 Page faults
(10 100)
(Note, btw, that one detail that gets lost in this particular solution
is the option of it not being a state machine by adjusting the call to
not be in a tail position; even if you put these "function" calls in
non-tail positions, they will do tail calls (go's) rather than regular
calls. So that's still different than how Scheme, for example,
handles it. I didn't worry about that TOO much because your claim was
that a state machine couldn't be written, and state machines don't
have that property. As you can see above, a state machine can be written
that way.)
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Iteration in lisp
Date:
Message-ID: <rem-2008apr26-007@yahoo.com>
> From: Pertti Kellomäki <················@tut.fi>
> There are some programming techniques that are only possible with
> tail call elimination, yet have little to do with functional
> programming. Implementing a state machine as a set of mutually
> recursive functions comes to mind, and I'm sure there are others as
> well.
I personally would prefer to emulate a state machine as an explicit
dispatch loop, making heavy use of records of some type (tagged
lists, defstructs, CLOS objects, whatever).
Note in this proposed skeleton, the STATE variable is a record
which contains the formal 'state' of the "finite-state machine",
and the vector of variables related to that state.
There are constructors for various states that combine the formal
'state' with the initial values of the variables, accessors for
getting the formal 'state' and the various variables individually,
mutators for changing the value of a variable in an existing state
object. Apart from that there's a table mapping the formal 'state'
to a method that decides what to do next, based on the values of
those variables, and returns the resultant state object, which may
be the previous state object modified, or a newly constructed state
object, or some other previously-constructed state object that had
been embedded within the previous state object to be kept until
needed next.
(defun cpu (initialstate &aux (state initialstate) formalstate method newstate)
  (loop (setq formalstate (state-get-formalstate state))
        (setq method (formalstate-lookup-method formalstate))
        (setq newstate (funcall method (state-get-vectorvars state)))
        ;; Debugging printouts can be placed here to show the transition
        (setq state newstate)))
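To make the skeleton concrete, the helpers it assumes could be as simple as
the following (purely hypothetical definitions chosen to match the names
above; note that the loop as written never exits, so a real version also
needs a halting convention, e.g. a :HALT formal state the loop checks for):

;; A state is just a (formalstate . vectorvars) cons, and the per-state
;; methods live in a hash table so they can be redefined at any time.
(defvar *methods* (make-hash-table :test #'eq))

(defun state-get-formalstate (state) (car state))
(defun state-get-vectorvars  (state) (cdr state))
(defun formalstate-lookup-method (formalstate) (gethash formalstate *methods*))

(setf (gethash :count *methods*)
      (lambda (vars)                      ; count down, then ask to halt
        (if (plusp (decf (car vars)))
            (cons :count vars)
            (cons :halt vars))))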
By encapsulating *all* of the state within a single object, rather
than allow direct memory access via pointers, this is much more
theoretically clean than a regular Von Neumann CPU. But by doing
this in Lisp (or in Java, where it could be done equally well, at
greater pain getting it to compile), we can build nested structures
arbitrarily deep so that the state object can contain anything you
may need to pass around, even a hash table with "global variable
bindings" if you want to go that route.
The nice thing about doing it this way is that you can *call* each
method individually, with any data you feel will help test it for
correctness, and know that it will promptly *terminate* by
returning a new state object, rather than recursively call
everything else in the system without bound before returning.
Even in an application that is inherently strongly recursive, such
as a divide-and-conquer exhaustive-search-through-many-levels
algorithm, if there's a lot of new code to be written to handle
each level of recursion, it might be better to write each such
block of code as a standalone callable state-transition function,
and then after all those pieces are believed mostly debugged then
write the main recursive algorithm simply as a trivial skeleton
that simply calls a state-transition function whose return state
includes a flag telling whether to recurse or return next.
Whether the main skeleton function is a program loop that emulates
a stack via linked list, or a true recursive function, is then a
trivial decision. This is at present just a preliminary idea to try
someday, something I'm pretty sure would be a better way to do this
kind of algorithm instead of what I tried a couple years ago which
was to directly write a set of mutually recursive functions for a
deep-and-complicated divide-and-conquer exhaustive-search algorithm
which turned out to be too difficult to debug in that form. Some
day I plan to refactor the whole thing per my new idea here and
hopefully it'll be easy enough to debug that I can finally get the
project working. The lesson is that bottom-up development, first
build a totally self-contained (*) tool and get it fully working,
then use it to build a bigger tool on top of it, is better than
full-fledged recursive debugging, even though the code for the
latter looks prettier in theory.
* (except for the API you start with, consisting of vendor-supplied tools)
Bob Felts <····@stablecross.com> wrote:
+---------------
| Kent M Pitman <······@nhplace.com> wrote:
| > A recursive function can run out of stack since CL does not guarantee
| > tail call elimination.
|
| Any pointers to the rationale for this decision?
+---------------
In addition to what others have replied, there are many constructs
in CL for which it's either difficult to "do the right thing" with
a tail call or even know *what* "the right thing" would be if one
were trying to perform tail-call optimization, for example:
- Tail-calling out of blocks in which dynamic variables were rebound.
- Tail-calling out of blocks wrapped by UNWIND-PROTECT.
- Tail-calling out of blocks wrapped by HANDLER-CASE or HANDLER-BIND.
[IGNORE-ERRORS is the same as HANDLER-CASE.]
- Non-local exits from exception handlers or restarts.
- Heck, non-local exits, period!
And even in the simple cases, it's a debatable question whether the
tail-call optimization has to be *completely* "safe-for-space" or
merely "safe-for-stack". The difference has to do with references
to dynamically "dead" data which might appear to still be lexically
"live" due to the way closure environments are constructed. See the
plentiful published Scheme & ML literature for more on this.[1]
Also see <http://groups.google.com/group/comp.lang.lisp/browse_thread/
thread/d62865297bde3c41/8f9dcf58a00aca27?hl=en&lnk=st&q=#8f9dcf58a00aca27>
"CL Implementations and Tail-Call Elimination", from September 2007,
in which I developed some simple CL macrology for allowing "cooperating
tail-call groups" of functions to share a loop/trampoline emulation of
true tail-call optimization. [But note that this is still not necessarily
completely "safe-for-space", depending on your implementation, nor does it
"do the right thing" in the exceptional cases mentioned above!!]
-Rob
[1] A couple of refs to get you started:
http://flint.cs.yale.edu/flint/publications/escc.html
"Efficient and Safe-for-Space Closure Conversion".
Zhong Shao & Andrew Appel
[In ACM Transactions on Programming Languages and
Systems (TOPLAS), 22(1), pages 129-161, January 2000.]
ftp://ftp.ccs.neu.edu/pub/people/will/tail.ps.gz
"Proper tail recursion and space efficiency"
William D. Clinger
[In the Proceedings of the 1998 ACM Conference on Programming
Language Design and Implementation, June 1998, pages 174-185.]
-----
Rob Warnock <····@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
On Apr 21, 8:45 pm, ····@stablecross.com (Bob Felts) wrote:
> Kent M Pitman <······@nhplace.com> wrote:
>
> > ·······@gmail.com" <······@gmail.com> writes:
>
> > > If you want to iterate through a list, should you use a recursive
> > > function or a loop?
>
> > In Common Lisp, always prefer a loop where practical unless you know
> > with relative certainty that the problem cannot grow very big. A
> > recursive function can run out of stack since CL does not guarantee
> > tail call elimination.
>
> Any pointers to the rationale for this decision?
This isn't rationale for that decision, but look through January
archives of this newsgroup for the TAILPROG macro, by which I show
that tail recursion is just syntactic sugar for goto, and portable
tail recursion with syntax resembling FLET or LABELS can be obtained
from a fairly simple, small macro.
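For readers who have not seen TAILPROG: a related, plainly different trick,
the trampoline, also gives portable constant-stack mutual "tail calls" in a
few lines. The sketch below is not TAILPROG, just an illustration of the
general idea (all names invented):

(defun trampoline (thunk)
  ;; Keep invoking whatever comes back until it is no longer a closure.
  ;; (Limitation of the sketch: the final answer must not itself be a function.)
  (loop for result = (funcall thunk) then (funcall result)
        while (functionp result)
        finally (return result)))

(defun even-p* (n) (if (zerop n) t   (lambda () (odd-p* (1- n)))))
(defun odd-p*  (n) (if (zerop n) nil (lambda () (even-p* (1- n)))))

;; (trampoline (lambda () (even-p* 1000000)))  => T, in constant stack space.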
On Apr 21, 8:34 pm, Kent M Pitman <······@nhplace.com> wrote:
> ·······@gmail.com" <······@gmail.com> writes:
> > If you want to iterate through a list, should you use a recursive
> > function or a loop?
>
> In Common Lisp, always prefer a loop where practical unless you know
> with relative certainty that the problem cannot grow very big. A
> recursive function can run out of stack since CL does not guarantee
> tail call elimination. Anyone who tells you otherwise is living in a
> fantasy world or is telling you something implementation-dependent or
> is confusing Lisp with Scheme.
Even if you have tail recursion, you are still wasting productivity by
writing your own list iteration instead of using the existing library
functions.
On Tue, 22 Apr 2008 05:34:34 +0200, Kent M Pitman
<······@nhplace.com> wrote:
> ·······@gmail.com" <······@gmail.com> writes:
>
>> If you want to iterate through a list, should you use a recursive
>> function or a loop?
>
> In Common Lisp, always prefer a loop where practical unless you know
> with relative certainty that the problem cannot grow very big. A
> recursive function can run out of stack since CL does not guarantee
> tail call elimination. Anyone who tells you otherwise is living in a
> fantasy world or is telling you something implementation-dependent or
> is confusing Lisp with Scheme.
Fantasy world? Nearly all compilers seem to do tail call elimination with
the default settings.
You would have to set (declare (optimize (debug 3))) (on LispWorks, but as
long as speed > debug it should work on all) to turn it off. Yes, it won't
work on evaluated code and yes, the standard doesn't require it.
That said I generally think looping is a better idea for list traversal,
but for other reasons.
Tail recursion requires you to specify the arguments to be passed many
times: once for each call. This makes maintenance harder, since if you
change the type you would need to update ALL the calls. In a loop you would
usually only need one change. So using tail recursion rather than a loop
can introduce bugs in complex looping constructs.
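For example (a sketch): every recursive call below has to repeat the whole
argument list, while the LOOP version names each piece of state exactly once.

;; Tail-recursive: the argument list is repeated at every call site.
(defun sum-walk (list acc limit)
  (if (or (null list) (>= acc limit))
      acc
      (sum-walk (cdr list) (+ acc (car list)) limit)))

;; LOOP: each piece of state is introduced once, in one place.
(defun sum-loop (list limit)
  (loop for x in list
        sum x into acc
        until (>= acc limit)
        finally (return acc)))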
--------------
John Thingstad
"John Thingstad" <·······@online.no> writes:
> On Tue, 22 Apr 2008 05:34:34 +0200, Kent M Pitman
> <······@nhplace.com> wrote:
>
> > In Common Lisp, always prefer a loop where practical unless you know
> > with relative certainty that the problem cannot grow very big. A
> > recursive function can run out of stack since CL does not guarantee
> > tail call elimination. Anyone who tells you otherwise is living in a
> > fantasy world or is telling you something implementation-dependent or
> > is confusing Lisp with Scheme.
>
> Fantasy world? Nearly all compilers seem to do tail call elimination
> with the default settings.
My remarks were directed not at any user making a properly informed
choice, but at someone advising ("anyone who tells you") if they do
not properly qualify their remarks as non-portable. When I speak of
Common Lisp, I generally mean "any Common Lisp"; when I want to talk about
a particular implementation, I tend to identify that.
There are a great many ways one can legitimately program if one knows
the assumptions one is making and has examined the risks. But the
language definition is what it is, and pretending it's not what it is
for the sake of easier explanation is the fantasy to which I was
referring.
I have recently remarked that I don't like the default CL treatment of
SETQ and that I tend to program in a non-portable fashion. I don't disagree
that this will create problems. I don't recommend others ignore the fact
that the standard makes this issue dicey. I think as long as people know the
truth, they can make an informed decision.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Iteration in lisp
Date:
Message-ID: <rem-2008apr26-010@yahoo.com>
> From: Kent M Pitman <······@nhplace.com>
> When I speak of Common Lisp, I generally mean "any Common Lisp";
> when I want to talk about a particular implementation, I tend to
> identify that.
You're implying that's a good practice, being clear as to
*every*conforming*CL* vs. the CL which I happen to use this year,
and I agree it's good practice.
> I have recently remarked that I don't like the default CL
> treatment of SETQ and that I tend to program in a non-portable
> fashion.
I haven't seen that, and now that you mention it I'm very curious.
Would you possibly create a Web page where you have links to
anything you've posted on this topic plus maybe a little "glue"
(introductory) text, and then post the URL for your Web page here?
I can only guess that maybe you don't like the way that
interpretation or JIT-compilation of a SETQ form of an undeclared
unbound symbol automatically declares it SPECIAL? If not that, I
can't guess at all.
I try to make my CL code totally portable (except for
non-conforming implementations such as the crap I downloaded to my
Mac), but since I have no way to test it on anything except CMUCL I
can't really be sure it's portable. I do try as hard as possible to
read the documentation and *strictly* conform to what it says I am
guaranteed to be allowed to do, and avoid (usually anyway) using
"features" that I discover CMUCL also has which aren't in the
standard documentation. For example I totally avoid using COERCE
and any other political-compromise crocks I discover in my code,
and I use TYPE-OF only for interactive debugging (including
higher-level debugging aids I write). I'm curious how you
deliberately program in a non-portable way, especially after you
played such a large role in the ISO/ANSI standard.
Oh, and when I really do need to use a non-standard function,
something CMUCL has which isn't in the standard, I would like to
isolate that code to a special file that contains
advertised-non-portable code. But maybe I am not so diligent as I
should be, because EXT:RUN-PROGRAM is so immensely useful all over
the place, and it just naturally slips in lots of places.
Currently/recently I use it in:
2007-3-spamu.lisp: (ext:run-program "/usr/local/bin/readmsg"
2007-3-whois.lisp: (ext:run-program "/usr/bin/whois"
2007-4-lynx.lisp: (ext:run-program "/usr/local/bin/lynx"
2007-4-whois.lisp: (ext:run-program "/usr/bin/whois"
2007-4-whois.lisp: (ext:run-program "/usr/bin/whois"
2007-4-whois.lisp: (ext:run-program "/usr/bin/whois"
2007-5-dig.lisp: (ext:run-program "/usr/bin/dig" (list domain "mx")
2007-5-dig.lisp: (ext:run-program "/usr/bin/dig" (list hostname "a")
My older software developed from 2001 through appx. 2006, whereupon
just about everything related to fighting spam (filtering e-mail,
auto-complaining, etc.) was in one huge file, before I started
refactoring the files by putting each small module in a separate
sourcefile, none larger than 30k bytes (the maximum for copy+paste
up/download on my Mac), had a lot more uses of it, too many for you
to see, but with duplicates removed maybe you won't go blind:
2001.Nov.lisp: (ext:run-program "/bin/mv"
2001.Nov.lisp: (ext:run-program "/bin/sh" (list)
2001.Nov.lisp: (ext:run-program "/usr/bin/dig" (list domain "a")
2001.Nov.lisp: (ext:run-program "/usr/bin/dig" (list domain "mx")
2001.Nov.lisp: (ext:run-program "/usr/bin/mail" (list "-I" "-f" "tmpmbox")
2001.Nov.lisp: (ext:run-program "/usr/bin/mail" progargs :output t :input sichan)
2001.Nov.lisp: (ext:run-program "/usr/bin/telnet"
2001.Nov.lisp: (ext:run-program "/usr/bin/whois"
2001.Nov.lisp: (ext:run-program "/usr/local/bin/lynx"
2001.Nov.lisp: (ext:run-program "/usr/local/bin/readmsg"
2001.Nov.lisp: (ext:run-program "/usr/sbin/traceroute"
Hmmm, looking at that one example where I say (list) gives me the
idea that in addition to my eventual essay on the intention of
atomic data types (such as using unsigned integer to emulate a
bitmask or a boolean or a US-ASCII character code or a Unicode
codepoint or a UTF-8 token), maybe I should also write an essay on
expressing the intention of data by what constructor you use for
it. For example, in that call to ext:run-program, the second
parameter is a list of Unix shell parameters to pass to the called
program, and in this case I pass the empty list, so it makes sense
to say (list) or '() instead of NIL to express my deep intention of
conforming to the API for ext:run-program. Both of these essays
would become companions to your super essay about intention of
copying from a handle on a CONS cell, depending on whether that
CONS cell is considered an isolated cell or part of a proper list
or part of an assoc list or part of a binary tree etc.
I just had a horrible thought: What if the original implementors of
Lisp had recognized this problem of the same kind of pair-cell
being used to build several different kinds of data structures,
with all ambiguity as to what is the container and what is the
contained within each kind of structure, so they decided, instead of
having just one kind of CONS cell, to have at least four kinds, each with
its own flavor of constructor, and each printing with a different
flavor of dot:
(pair-cons a d)
=> (a .p. d)
(list-cons e1 (list-cons e2 nil))
=> (e1 .l. (e2 .l. nil))
= (e1 e2)
(a-cons (ac-cons k1 v1) (a-cons (ac-cons k2 v2) nil))
=> ((k1 .ac. v1) .a. ((k2 .ac. v2) .a. nil))
= #a((k1 . v1) (k2 . v2))
Corresponding different flavors of the LIST vararg-function:
(llist a b c)
(alist (ac-cons k1 v1) (ac-cons k2 v2))
I think the problem with intention that can't be determined by the
runtime system is not as bad a problem as the alternative. So long
as every newbie reads your 'intention' essay and really understands
it, I think we can live with a single type of CONS-cell that fails
to express the intention of the structure it's part of. Maybe it's
actually a good thing that newbies must deal with this runtime-type
ambiguity (regarding intention) very early on, so they get used to
the idea that in a flexible extendable system such as Lisp it's
common to re-use an old data type to emulate a new abstract data
type without needing to get the new intention formally registered
with the vendor, you just hack the new data type and express your
intention by the extra layer of software for your new ADT. In other
words, never call CAR or CDR or CONS directly from application-level
software. Call those primitives *only* from the implementation of a
new API. So what you do is: Design an ADT and describe an API for
it, then implement that API using CL primitives, and then from
application-level code you always call the API functions instead of
the underlying CL primitives.
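As a toy example of that discipline (a sketch, names invented): application
code calls only the queue API below, and the CONS/CAR/CDR calls are confined
to these few definitions.

(defun make-queue () (cons '() '()))          ; (head-of-list . last-cons)

(defun queue-empty-p (q) (null (car q)))

(defun enqueue (q item)
  (let ((cell (cons item '())))
    (if (queue-empty-p q)
        (setf (car q) cell (cdr q) cell)
        (setf (cddr q) cell (cdr q) cell))
    q))

(defun dequeue (q)
  (prog1 (caar q)
    (setf (car q) (cdar q))
    (when (null (car q)) (setf (cdr q) '()))))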
OK, I have a riddle for everyone to brainstorm about.
You want to build a list (the usual, CONS-cells CDR-linked),
where each element is either a tree or a list or an alist,
and you want to call (mapcar #'COPY-DO-THE-RIGHT-THING thatList),
where it does copy-list or copy-cons or copy-alist as appropriate.
Obviously it won't work the obvious way, as Kent's essay explains.
So how can you design things so that this *will* work??
·················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t) writes:
> > From: Kent M Pitman <······@nhplace.com>
> > [...]
> > I have recently remarked that I don't like the default CL
> > treatment of SETQ and that I tend to program in a non-portable
> > fashion.
>
> I haven't seen that, and now that you mention it I'm very curious.
> [...]
> I can only guess that maybe you don't like the way that
> interpretation or JIT-compilation of a SETQ form of an undeclared
> unbound symbol automatically declares it SPECIAL? If not that, I
> can't guess at all.
No, that's annoying, but I consider that almost a bug. Not a
violation of spec, just bad design, since having such things happen in
a pervasive way without users requesting it seems bad to me.
I was talking more about the notion that
  (progv '(x) '(3)
    (eval '(+ x 1)))
is not portably well-defined without doing
  (progv '(x) '(3)
    (eval '(locally (declare (special x)) (+ x 1))))
which I think is a bit silly and makes use of EVAL altogether too cumbersome.
I also think the inability to do
(setq x 1)
in a textbook is a travesty because it forces people to get into the
discussion of DEFVAR way too early. You want to be able to talk about
and test program fragments, and the inability to do that "portably" is
silly. I consider it unreasonable that an implementation ever
complain about this. Even if it only does a lexical assignment
instead of a special assignment, which I suppose one could do, it
still ought not complain. But if it doesn't do ANYTHING and it just
complains out of neurotic concern that somewhere there's an
implementation not supporting assignment, I think that's not useful to
the community.
But that's all just my personal opinion as a consumer of CL.
> OK, I have a riddle for everyone to brainstorm about.
> You want to build a list (the usual, CONS-cells CDR-linked),
> where each element is either a tree or a list or an alist,
> and you want to call (mapcar #'COPY-DO-THE-RIGHT-THING thatList),
> where it does copy-list or copy-cons or copy-alist as appropriate.
> Obviously it won't work the obvious way, as Kent's essay explains.
> So how can you design things so that this *will* work??
I guess you could just avoid conses and use some other struct type of
your own that kept the info. My question for you would be: what are
you REALLY trying to do? Why would you need to do this? I don't know
that I believe you that there is a well-defined case where you can
want this.
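A sketch of that kind of non-cons wrapper (names invented; and since CL has
no COPY-CONS, the tree case below uses COPY-TREE):

(defstruct tagged
  kind   ; one of :list, :alist, :tree
  data)

(defun copy-do-the-right-thing (x)
  (make-tagged :kind (tagged-kind x)
               :data (ecase (tagged-kind x)
                       (:list  (copy-list  (tagged-data x)))
                       (:alist (copy-alist (tagged-data x)))
                       (:tree  (copy-tree  (tagged-data x))))))

;; (mapcar #'copy-do-the-right-thing
;;         (list (make-tagged :kind :list  :data (list 1 2 3))
;;               (make-tagged :kind :alist :data (list (cons 'a 1)))))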
On Apr 26, 8:34 pm, Kent M Pitman <······@nhplace.com> wrote:
> I also think the inability to do
>
> (setq x 1)
>
> in a textbook is a travesty because it forces people to get into the
> discussion of DEFVAR way too early. You want to be able to talk about
> and test program fragments, and the inability to do that "portably" is
> silly. I consider it unreasonable that an implementation ever
> complain about this.
Hear, hear!
I don't understand how some implementors have convinced themselves
that emitting warnings in this case is acceptable. Yes, I know that
the standard doesn't really say what should happen and this is
technically an error. But it doesn't make sense to elevate the
standard above the precedent that has been set by existing
implementations when that precedent is important to users.
The standard is supposed to serve the user community. Some seem to
think it should be the other way around. I think this is the
fundamental problem.
I was really astounded by the unsympathetic response I got the last time
I raised this issue here. Nice to know you're on my side, at least :)
-- Scott
Scott Burson <········@gmail.com> writes:
> On Apr 26, 8:34 pm, Kent M Pitman <······@nhplace.com> wrote:
[I have to admit that my first reading of Kent's articles on this
subject came from a slightly different point of view, (since Allegro
CL indeed never warns when you simply setq a variable at the repl)
but because of this follow-on I have to speak up for the implementors,
perhaps speaking indirectly to Kent as well - it really depends on how
extensive his use of "ever" in this paragraph was meant to be...]
>> I also think the inability to do
>>
>> (setq x 1)
>>
>> in a textbook is a travesty because it forces people to get into the
>> discussion of DEFVAR way too early. You want to be able to talk about
>> and test program fragments, and the inability to do that "portably" is
>> silly. I consider it unreasonable that an implementation ever
>> complain about this.
>
> Hear, hear!
>
> I don't understand how some implementors have convinced themselves
> that emitting warnings in this case is acceptable. Yes, I know that
> the standard doesn't really say what should happen and this is
> technically an error. But it doesn't make sense to elevate the
> standard above the precedent that has been set by existing
> implementations when that precedent is important to users.
As I said in the bracketed comments above, I originally didn't read
much into this issue because Allegro CL doesn't warn you when you've
assigned to an undefined variable at the repl, since it is likely that
the person doing the typing is looking for somewhat of a "dwim"
experience, and we believe that warning in that case is not too
helpful. (I always got a kick out of anal programs, in any language,
that had error messages that say something like "Error: missing comma
after 'this'" [well, why doesn't the program give the programmer a
break and add the comma - if it's unambiguous, what's the harm? that
the wayward programmer will never learn his lesson???] ... sheesh)
I think that it is a more likely correlation that those lisps that
don't have an interpreter, or lisps which by default are set to
compile their forms, will be the ones which end up warning in this
way.
However, there is absolutely a very good reason for doing this warning
in other situations - namely, in compiled code. That reason is, in a
single word: "spelling". In reality, lisp programs which are
candidates for compilation are less likely to have (setq x 1) for a
form, and more likely to have a form that looks something like
(setq *long-variable-with-spelling-that-doesnt-match-its-defunition*
1) for which it may be hard to see the mistake (how long did it take
for you to find it?). In this case it is likely that no actual
variable looks like this one, and thus it is likely that this name is
a mistake. And for both categories of programmers: those of us whose
typing is painfully slow, and for those who type like lightning, the
result is always the same: some words inevitably end up misspelled.
Hence the reasoning for the warning.
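For concreteness, here is the kind of thing Duane means, with invented
variable names; many compilers that warn about assignments to undefined
variables will flag the SETQ in the second form at compile time because
of the typo:

  (defvar *frobnication-threshold* 10)

  (defun reset-threshold ()
    ;; Misspelled: "frobincation" -- this SETQs a brand-new, undeclared
    ;; variable instead of the one DEFVAR'd above, which is exactly the
    ;; kind of mistake a compile-time warning catches.
    (setq *frobincation-threshold* 10))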
> The standard is supposed to serve the user community. Some seem to
> think it should be the other way around. I think this is the
> fundamental problem.
That's precisely what we try to do. Some things we do achieve that
effect better than others. You can bet that most of the behaviors in
most of the CL implementations have been considered very carefully,
and to the extent that the behaviors are different, well, it's simply
due to different resting points on many tradeoff axes. The question
of whether to compile top-level forms automatically or to require the
user to explicitly call compile on them is a good example. There are
reasons on both sides of the argument, and some prefer one way and
others prefer the other way. Some consequences come more or less
directly out of such decisions, like the behavior of warnings on
setqs of undefined variables. I think your statement is much too
harsh. You should instead strive to understand _why_ the decisions
are made the way they are, and that will bring enlightenment through
many facets. Many implementors are very accessible and can shed light
on some of these tradeoffs, some on this very newsgroup.
> It really astounded me the unsympathetic response I got the last time
> I raised this issue here. Nice to know you're on my side, at least :)
I'd be interested to read that thread (hopefully I wasn't a
contributor to the non-sympathy, though I won't guarantee I will agree
with all of your arguments). Do you have a pointer to that thread?
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
Duane Rettig <·····@franz.com> writes:
> However, there is absolutely a very good reason for doing this warning
> in other situations - namely, in compiled code.
In my personal opinion (i.e., not an attempt to interpret the spec):
I agree that style warnings for all manner of things in compiled code
are fine, because they are suppressible ... with the caveat that
in implementations that implement eval by

  (defun eval (x)
    (funcall (compile nil `(lambda () ,x))))

I think they should suppress style warnings that might be given by
the file compiler or even perhaps explicit uses of COMPILE. Because
compile is sub-primitive here, the implementation shouldn't take its
incidental use of compilation as an excuse to overwhelm the user with
random output.
And interactively, with an immediate funcall, the user will find out
soon enough about the style problem.
My remarks were about the ability to do things like automatic
programming by cobbling together bits of code and EVAL'ing it.
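A program that EVALs such cobbled-together code can also muffle
muffleable warnings around the call itself; this is only a sketch
(QUIET-EVAL is an invented name, and whether a given implementation
signals its complaint as a muffleable WARNING is up to that
implementation):

  (defun quiet-eval (form)
    (handler-bind ((warning (lambda (w)
                              (let ((restart (find-restart 'muffle-warning w)))
                                (when restart (invoke-restart restart))))))
      (eval form)))

  ;; e.g. (quiet-eval '(setq x 1))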
> That reason is, in a
> single word: "spelling". In reality, lisp programs which are
> candidates for compilation are less likely to have (setq x 1) for a
> form, and more likely to have a form that looks something like
> (setq *long-variable-with-spelling-that-doesnt-match-its-defunition*
> 1) for which it may be hard to see the mistake (how long did it take
> for you to find it?). In this case it is likely that no actual
> variable looks like this one, and thus it is likely that this name is
> a mistake.
And even so, with EVAL, I'd rather not know and find out at runtime.
- - -
The reason for my remark was particularly aimed at two camps:
(1) The extreme camp that says "oops, you misspelled something. I've
helpfully autodeclared it pervasively special." I think this is
really definitely not good.
(2) The moderate but still unwanted-by-me situation of taking
TOPLEVEL-LISP-PROMPT> (setq x 3)
or a programmatic call to (eval '(setq x 3))
as an opportunity to lecture the person about how x is undeclared and
maybe you don't really mean for it to be special. As I said, I can tolerate
it being silently taken as lexical in an implementation that wants to offer
such support, even by default. But what I don't want to be told is "I'm not
going to do something cool and innovative, nor am I even going to do the
obvious only thing that's left to do in the absence of something cool and
innovative--I'm just going to tell you that Lisp is hard to use and that you
should have an unpleasant interactive experience."
I actually don't know which implementations do this. I just know people
complain that some do and seem not to like writing SETQ at toplevel.
I have found Lisp very hard to teach with DEFVAR having to be used
interactively in a teaching session, because that forces me to have to use
*'s or have non-*'d variables end up pervasively SPECIAL. So it's the cascade
of those two things that troubles me. I want to use non-*'d variables in
early teaching, and so I want to use SETQ on them, not DEFVAR.
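To spell out the cascade with invented names -- the reason plain-named
DEFVARs are dangerous in teaching material:

  (defvar x 10)         ; X is now pervasively special

  (defun show-x () x)   ; free reference: sees the current dynamic value of X

  (defun confusing ()
    (let ((x 2))        ; looks like an innocent local variable...
      (show-x)))        ; ...but since X is special, this LET rebinds it
                        ; dynamically, so (CONFUSING) returns 2, not 10.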
OK, all of this falls more into line with the way I had originally
read your article.
Kent M Pitman <······@nhplace.com> writes:
...
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
>
> (1) The extreme camp that says "oops, you misspelled something. I've
> helpfully autodeclared it pervasively special." I think this is
> really definitely not good.
>
right! A real pain.
> (2) The moderate but still unwanted-by-me situation of taking
>
> TOPLEVEL-LISP-PROMPT> (setq x 3)
>
> or a programmatic call to (eval '(setq x 3))
>
> as an opportunity to lecture the person about how x is undeclared and
> maybe you don't really mean for it to be special. As I said, I can
> tolerate
> it being silently taken as lexical in an implementation that wants to
> offer
> such support, even by default. But what I don't want to be told is "I'm
> not
> going to do something cool and innovative, nor am I even going to do the
> obvious only thing that's left to do in the absence of something cool and
> innovative--I'm just going to tell you that Lisp is hard to use and that
> you
> should have an unpleasant interactive experience."
>
> I actually don't know which implementations do this. I just know people
> complain that some do and seem not to like writing SETQ at toplevel.
Usually I would want defparameter at the toplevel REPL.
I use the history to retrieve it to re-test.
defvar is only for things you want to survive through the session.
As such it is almost as dangerous as declaring something defconstant.
(on my list of regrettable decisions)
--------------
John Thingstad
John Thingstad <·······@online.no> replied [to Kent]:
+---------------
| > I actually don't know which implementations do this. I just know people
| > complain that some do and seem not to like writing SETQ at toplevel.
|
| Usually I would want defparameter at the toplevel REPL.
| I use the history to retrieve it to re-test.
| defvar is only for things you want to survive through the session.
| As such it is almost as dangerous as declaring something defconstant.
+---------------
Agreed, which is why I defined my version of DEFLEX[1]
to use DEFPARAMETER semantics (similar in that way to Scheme's
top-level DEFINE) instead of DEFVAR semantics. [From time to
time I've thought of adding a DEFLEX/ONCE with DEFVAR semantics,
but sufficient motivation has never arisen. ;-} ]
Typing (DEFLEX FOO 13) is almost as short as typing (SETQ FOO 13),
but is a *lot* less dangerous if later code should rebind FOO! ;-}
And it's also more portable[2]...
-Rob
[1] http://rpw3.org/hacks/lisp/deflex.lisp
[2] The semantics, that is, assuming you load
the DEFLEX macro in your CL's init file.
-----
Rob Warnock <····@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
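For readers who don't want to chase the link, the general idea behind a
DEFLEX-style global lexical is a hidden special variable plus a symbol
macro; what follows is only a minimal sketch of that idea, not Rob's
actual code (see [1] above for the real thing):

  (defmacro deflex (name value &optional (doc nil docp))
    (let ((backing (intern (concatenate 'string
                                        "*STORAGE-FOR-DEFLEX-VAR-"
                                        (symbol-name name) "*")
                           (symbol-package name))))
      `(progn
         (defparameter ,backing ,value ,@(when docp (list doc)))
         (define-symbol-macro ,name ,backing))))

With that, (DEFLEX FOO 13) leaves FOO usable like a lexical: a later
(LET ((FOO 42)) ...) shadows the symbol macro lexically instead of
rebinding a special variable.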
Duane Rettig wrote:
> helpful. (I always got a kick out of anal programs, in any language,
> that had error messages that say something like "Error: missing comma
> after 'this'" [well, why doesn't the program give the programmer a
> break and add the comma - if it's unambiguous, what's the harm? that
> the wayward programmer will never learn his lesson???] ... sheesh)
Because while inserting a comma produces valid syntax, the editing error
might not have been omitting a comma. It usually is, but do we want
compilers guessing at which mistake we made? Without mentioning it?
People have had little success writing software that works, it may be
too soon for people to be writing software that figures out what
software other people meant to write.
kenny
-----
ECLM 2008 Resources: http://www.weitz.de/eclm2008/
Rant:
http://video.google.com/videoplay?docid=-1331906677993764413
The Lisp Way:
http://video.google.com/videoplay?docid=-9173722505157942928
Ken Tilton <·········@gmail.com> muses:
+---------------
| People have had little success writing software that works, it may
| be too soon for people to be writing software that figures out what
| software other people meant to write.
+---------------
Madness! That way lies SkyNet!! Consider feeding your hypothetical
super-DWIM to itself... Then stand back and prepare to be obsoleted! :-{
I can just hear[1] it now:
Super-DWIM: "Hmmm... I'm sure you didn't mean for this
undefined behavior to be here. Let's re-write
it into 'Launch all thermonuclear missiles!'.
Yes, there, that's much better..."
-Rob
[1] For some value of "hear" that probably involves a debugging trace
and a "sufficiently-smart" output formatter.
-----
Rob Warnock <····@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
On Apr 30, 12:15 am, Duane Rettig <·····@franz.com> wrote:
> Scott Burson <········@gmail.com> writes:
> > On Apr 26, 8:34 pm, Kent M Pitman <······@nhplace.com> wrote:
> >> I also think the inability to do
>
> >> (setq x 1)
>
> >> in a textbook is a travesty because it forces people to get into the
> >> discussion of DEFVAR way too early.
>
> > Hear, hear!
>
> > I don't understand how some implementors have convinced themselves
> > that emitting warnings in this case is acceptable.
>
> As I said in the bracketed comments above, I originally didn't read
> much into this issue because Allegro CL doesn't warn you when you've
> assigned to an undefined variable at the repl
Well, right. Allegro is one of the "existing implementations" I was
referring to.
> I think that it is a more likely correlation that those lisps that
> don't have an interpreter, or lisps which by default are set to
> compile their forms, will be the ones which end up warning in this
> way.
I agree that is a likely correlation.
> However, there is absolutely a very good reason for doing this warning
> in other situations - namely, in compiled code.
Absolutely agreed. This is not the issue.
> > The standard is supposed to serve the user community. Some seem to
> > think it should be the other way around.
>
> I think your statement is much too harsh.
It probably was, but some of these conversations have left me with
that feeling. When I argue that certain behavior, consistent with but
not mandated by the standard, is important for newbies (and therefore
tutorial writers) to be able to count on, and I get back "well, the
spec doesn't guarantee it, so you can't count on it, too bad", to me,
that is putting the proverbial cart before the horse.
But I have to admit I have not approached the CMUCL and SBCL
implementors about this. When the issue last came up -- the link is
below, as you request -- I wanted to first see whether I could drum up
any support on the newsgroup for such a change. (It seemed much
easier to lobby for a widely-desired change than for an idiosyncratic
request.) The result of that effort was so discouraging that I gave
up.
> > It really astounded me the unsympathetic response I got the last time
> > I raised this issue here.
>
> I'd be interested to read that thread (hopefully I wasn't a
> contributor to the non-sympathy, though I won't guarantee I will agree
> with all of your arguments). Do you have a pointer to that thread?
Here you go:
http://groups.google.com/group/comp.lang.lisp/browse_frm/thread/fcd63c187dadffe8/6b82159366b46a1d?lnk=gst&q=setq+redux#6b82159366b46a1d
I think the most important points I could make here are (1) how
valuable it could be to us if we could grow the CL community more
rapidly; (2) how important it is, therefore, to make CL as newbie-
friendly as possible; and (3) how much of a difference a detail like
this can make, because a newbie is not yet able to define macros or
whatever to make CL behave the way they want.
But I'm out of time for tonight. More to follow.
-- Scott
Scott Burson <········@gmail.com> writes:
> On Apr 30, 12:15 am, Duane Rettig <·····@franz.com> wrote:
>> Scott Burson <········@gmail.com> writes:
>> > The standard is supposed to serve the user community. Some seem to
>> > think it should be the other way around.
>>
>> I think your statement is much too harsh.
>
> It probably was, but some of these conversations have left me with
> that feeling. When I argue that certain behavior, consistent with but
> not mandated by the standard, is important for newbies (and therefore
> tutorial writers) to be able to count on, and I get back "well, the
> spec doesn't guarantee it, so you can't count on it, too bad", to me,
> that is putting the proverbial cart before the horse.
Agreed.
> But I have to admit I have not approached the CMUCL and SBCL
> implementors about this. When the issue last came up -- the link is
> below, as you request -- I wanted to first see whether I could drum up
> any support on the newsgroup for such a change. (It seemed much
> easier to lobby for a widely-desired change than for an idiosyncratic
> request.) The result of that effort was so discouraging that I gave
> up.
This is absolutely the wrong way to go about it. You'll seldom get
implementors' viewpoints from here, and that thread is no exception; I
don't know if any of the responders are also contributors to any
lisp implementations, but the spirit of their answers was clearly that
of the fellow-user point of view.
You _must_ go to your vendor first, if you want any kind of action.
You may indeed not get what you want, but at least you will have done
the right thing, and that's the best chance that you have of getting
things changed.
>> > It really astounded me the unsympathetic response I got the last time
>> > I raised this issue here.
>>
>> I'd be interested to read that thread (hopefully I wasn't a
>> contributor to the non-sympathy, though I won't guarantee I will agree
>> with all of your arguments). Do you have a pointer to that thread?
>
> Here you go:
>
> http://groups.google.com/group/comp.lang.lisp/browse_frm/thread/fcd63c187dadffe8/6b82159366b46a1d?lnk=gst&q=setq+redux#6b82159366b46a1d
Thanks.
> I think the most important points I could make here are (1) how
> valuable it could be to us if we could grow the CL community more
> rapidly; (2) how important it is, therefore, to make CL as newbie-
> friendly as possible; and (3) how much of a difference a detail like
> this can make, because a newbie is not yet able to define macros or
> whatever to make CL behave the way they want.
All agreed.
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
Scott Burson <········@gmail.com> writes:
> On Apr 26, 8:34 pm, Kent M Pitman <······@nhplace.com> wrote:
>> I also think the inability to do
>>
>> (setq x 1)
>>
>> in a textbook is a travesty because it forces people to get into the
>> discussion of DEFVAR way too early. You want to be able to talk about
>> and test program fragments, and the inability to do that "portably" is
>> silly. I consider it unreasonable that an implementation ever
>> complain about this.
>
> Hear, hear!
>
> I don't understand how some implementors have convinced themselves
> that emitting warnings in this case is acceptable. Yes, I know that
> the standard doesn't really say what should happen and this is
> technically an error. But it doesn't make sense to elevate the
> standard above the precedent that has been set by existing
> implementations when that precedent is important to users.
>
> The standard is supposed to serve the user community. Some seem to
> think it should be the other way around. I think this is the
> fundamental problem.
>
> It really astounded me the unsympathetic response I got the last time
> I raised this issue here. Nice to know you're on my side, at least :)
Write a CDR!
--
__Pascal Bourguignon__
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Iteration in lisp
Date:
Message-ID: <rem-2008may02-003@yahoo.com>
> > > I have recently remarked that I don't like the default CL
> > > treatment of SETQ and that I tend to program in a non-portable
> > > fashion.
> > I haven't seen that, and now that you mention it I'm very curious.
> > [...]
> > I can only guess that maybe you don't like the way that
> > interpretation or JIT-compilation of a SETQ form of an undeclared
> > unbound symbol automatically declares it SPECIAL? If not that, I
> > can't guess at all.
> From: Kent M Pitman <······@nhplace.com>
> No, that's annoying, but I consider that almost a bug. Not a
> violation of spec, just bad design, since having such things happen in
> a pervasive way without users requesting it seems bad to me.
It's a trade-off. When developing new code one line at a time,
every local variable in the eventual code is in fact a global
variable during the development/debugging session. It would be a
bit of a pain to need to explicitly declare every eventual-local
variable to be temporarily special during development and then
remember to remove all those special proclamations as soon as those
variables are safely encapsulated inside a PROG or LET* so that
they no longer need to be special. On the other hand, it might
actually not turn out so bad, because the extra effort of declaring
them at the start means a search-and-substitute of all those
proclamations will yield a set of un-proclamations to revert them
all back to unbound status whereupon any subsequent use of them as
free variables will be flagged as an error.
On the other hand, the main advantage of Lisp over other languages
(especially Java which is so similar in some other great ways) is
that you *don't* need to make a lot of declarations of stuff before
you can start to write-and-test a single line of code. You don't
need to embed a single line of code in a complete program before
you can test it. You don't need to declare the type and scope of
variables before you can use them. Requiring special-variable
proclamations before you can test a single line of code would seem
to be a step back toward those *other* languages.
Currently I don't bother to undo the automatic special-proclamation
when I'm done using a local as if global during line-at-a-time
development. I just do my best to eliminate all free references to
that symbol, and if I make a mistake then it's caught the next time
I start with a fresh core image and re-load that same code. Maybe I
should change my practice to keep track of all those temporary
locals and deliberately make them unbound and unspecial as soon as
I'm done using them as temporary globals? I'll think about it.
Meanwhile I'm sorta in favor of the current behaviour.
> I was talking more about the notion that
>   (progv '(x) '(3)
>     (eval '(+ x 1)))
> is not portably well-defined without doing
>   (progv '(x) '(3)
>     (eval '(locally (declare (special x)) (+ x 1))))
Oh, maybe I saw that many months ago and ignored it because I never
use PROGV in my code anyway. I just treat PROGV as a useless crock,
like COERCE. Why do *you* care if PROGV is usable or not? What do
*you* ever use it for in practice? (Yeah, I know, as a matter of
principle, since you are responsible for the HyperSpec, and were a
major player in the ANSI standardization, it sorta must feel bad
that a crock like PROGV leaked into the standard. But why not just
say it's "deprecated" in practice now?? Unless there's a really
good practical use for it?)
> which I think is a bit silly and makes use of EVAL altogether too cumbersome.
Agreed but only in that context. EVAL at the top level with
proclaimed special variables works just fine, unless you have
another bad example in mind. EVAL inside PROGV is a crock, we agree?
> I also think the inability to do
> (setq x 1)
> in a textbook is a travesty because it forces people to get into
> the discussion of DEFVAR way too early.
Hmm, now we're back to that topic I guessed at. (Reminds me of
Abbott and Costello, "Who's on First". Costello asks "why are we
back on first base". Abbott replies "because you mentionned his
name". Costello asks "Whose name?". Abbott says "Yes, that's right,
Who's name. "Who" is the name of the player on first.")
Anyway, that's a good point, that all implementations we ordinarily
use automatically proclaim free variables as special, but we can't
say that in a book about ANSI Common Lisp because that behaviour
isn't in the spec and some perverse dogmatist might implement a CL
that requires the user to explicitly issue the special proclamation
or equivalent defvar before using that free variable and then some
utter newbie would complain the book was wrong, and the newbie
would win the technical argument.
So if you ruled the world, what would be your solution?
Perhaps change the standard to have a global something like
*auto-special* with these possible values:
NIL (never auto-declare, require strict conformance)
:SILENT (silently proclaim special)
:WARN (current default behaviour, print warning to *error-output* etc.)
:QUERY (signal a CERROR with a restart for proclaiming special)
The default would then be :WARN in most implementations, but the
book could say to explicitly (setq *auto-special* :WARN) before
proceeding with the examples in the book, just in case that wasn't
the default in *your* implementation. (Which reminds me of the Bill
Cosby comedy routine about the monster that is in *your* town,
coming up *your* street, coming to *your* door, wanting to come in
and eat *you*. Here the pedantic Lisp implementation "eats" newbies!)
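Purely to illustrate the proposal (nothing like *AUTO-SPECIAL* exists in
any implementation; the names and behavior here are hypothetical), the
dispatch a toplevel might do on such a variable could look like:

  (defvar *auto-special* :warn)

  (defun note-free-setq (symbol)
    ;; What a REPL might do on (SETQ SYMBOL ...) of an undeclared variable.
    (ecase *auto-special*
      ((nil)   (error "~S is not declared special." symbol))
      (:silent (proclaim `(special ,symbol)))
      (:warn   (warn "Proclaiming ~S special." symbol)
               (proclaim `(special ,symbol)))
      (:query  (cerror "Proclaim ~S special and continue."
                       "~S is not declared special." symbol)
               (proclaim `(special ,symbol)))))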
> You want to be able to talk about and test program fragments, and
> the inability to do that "portably" is silly. I consider it
> unreasonable that an implementation ever complain about this.
I don't equate "complain" with "warn". If the standard said it'd
auto-proclaim special but might warn on *error-output* which is
normally *terminal-io*, then the book could simply warn that using
a free variable in these program fragments might result in such a
warning, which can be somewhat ignored for the moment, but should
be taken seriously later when more complete programs are presented.
> Even if it only does a lexical assignment instead of a special
> assignment, which I suppose one could do, it still ought not
> complain.
I'm not sure how that would make any sense.
First line of code to be tested directly in REP:
(setq foo (make-string 42 :initial-element #\x))
Second line of code to be tested directly in REP:
(setf (elt foo 3) #\-)
Third line of code to be tested directly in REP:
(subseq foo 0 6) => "xxx-xx"
In what lexical environment do you propose the symbol foo be a
variable, so that all three lines of code will work as expected?
> But if it doesn't do ANYTHING and it just complains out of
> neurotic concern that somewhere there's an implementation not
> supporting assignment, I think that's not useful to the community.
You mean assignment of free variables due to their being explicitly
mentioned by code entered at the REP? You mean neither assign the
value nor proclaim the symbol to be a special variable?
> But that's all just my personal opinion as a consumer of CL.
Your personal opinion counts a little bit more than most any other of us.
By the way, if I use SET (with quoted symbol) instead of SETQ, then
there's no complaint and no special declaration and no warning.
Unfortunately that would mean I'd need to go through my lines of
code and change them to SETQs at the moment when I compose the
sequence of lines of code into a PROG. Now if I compose them into a
LET* instead, then I need to get rid of the SETQs anyway, so
getting rid of SET ' instead would be no extra work, so if I
anticipate doing that I might consider doing the SET ' alternative.
BTW for beginners working code snippets from a book, I don't like
the SET ' alternative at all, too confusing too early.
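For the record, the SET-with-quoted-symbol trick looks like this at the
REP loop; SET just assigns the symbol's value cell, so no SPECIAL
proclamation comes into play (readers of the value can use SYMBOL-VALUE):

  (set 'foo 42)        ; => 42
  (symbol-value 'foo)  ; => 42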
> > OK, I have a riddle for everyone to brainstorm about.
> > You want to build a list (the usual, CONS-cells CDR-linked),
> > where each element is either a tree or a list or an alist,
> > and you want to call (mapcar #'COPY-DO-THE-RIGHT-THING thatList),
> > where it does copy-list or copy-cons or copy-alist as appropriate.
> > Obviously it won't work the obvious way, as Kent's essay explains.
> > So how can you design things so that this *will* work??
> I guess you could just avoid conses and use some other struct
> type of your own that kept the info. My question for you would be:
> what are you REALLY trying to do? Why would you need to do this?
I want to be able to mostly just CONS up structures to represent
parts of trees including various info present at each node, but I
want to exactly control which parts of the tree print and which
parts are hidden so that I can see the essential stuff near the top
of what I'm working on and not be bothered by the time and
screen-space it takes to print out deep stuff I'm not interested
in. My low-tech method is to hide each level of structure inside
the value cell of an uninterned symbol but another person suggested
defining a STRUCT type which is a reasonable alternative.
> I don't know that I believe you that there is a well-defined case
> where you can want this.
Wanting that print-behaviour is just my personal opinion as a
consumer of CL, as a solution to the problem of immense spew-out
when I try to trace something: it takes me too long to wade through
the spew to find what I need in order to know whether the code I wrote
is really working as I intended. The jargon for the problem is
"information overload". A related case is labels for nodes
(vertices) in a graph, where I use uninterned symbols whose
property list has all the various info needed at that node.
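Not the wrapper scheme being discussed, but for the stated goal of taming
deep trace spew, note that the standard printer controls already give
coarse cutoffs:

  (let ((*print-level* 2) (*print-length* 5))
    (print '(a (b (c (d (e)))) 1 2 3 4 5 6 7)))
  ;; prints something like (A (B #) 1 2 3 ...)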
On 3 Mai, 07:14, ·················@SpamGourmet.Com (Robert Maas,
http://tinyurl.com/uh3t) wrote:
> > From: Kent M Pitman <······@nhplace.com>
> > I was talking more about the notion that
> > (progv '(x) '(3)
> > (eval '(+ x 1)))
> > is not portably well-defined without doing
> > (progv '(x) '(3)
> > (eval '(locally (declare (special x)) (+ x 1))))
>
> Oh, maybe I saw that many months ago and ignored it because I never
> use PROGV in my code anyway. I just treat PROGV as a useless crock,
> like COERCE.
PROGV is not the issue here. Free variable references are the issue.
Isn't the behaviour of
  (let ((x 3))
    (declare (special x))
    (eval '(+ x 1)))
just as ill-defined?
Matthias
Matthias Benkard <··········@gmail.com> writes:
> On 3 Mai, 07:14, ·················@SpamGourmet.Com (Robert Maas,
> http://tinyurl.com/uh3t) wrote:
> > > From: Kent M Pitman <······@nhplace.com>
> > > I was talking more about the notion that
> > > (progv '(x) '(3)
> > > (eval '(+ x 1)))
> > > is not portably well-defined without doing
> > > (progv '(x) '(3)
> > > (eval '(locally (declare (special x)) (+ x 1))))
> >
> > Oh, maybe I saw that many months ago and ignored it because I never
> > use PROGV in my code anyway. I just treat PROGV as a useless crock,
> > like COERCE.
>
> PROGV is not the issue here. Free variable references are the issue.
> Isn't the behaviour of
>
> (let ((x 3))
> (declare (special x))
> (eval '(+ x 1)))
>
> just as ill-defined?
Thanks for making this point. Yes. PROGV is a red herring here.
The main reason that PROGV was used in my example is that if you were
building a simple scripting language or a rule-based language and
wanted to establish a few bindings and then allow a situation where
script-supplied expressions were evaluated in the context of those names,
you couldn't just do a simple eval on what the user provides you, and
that's a shame because while it's not done a whole lot, it's very appealing
to people trying to understand why Lisp is different than other languages
because other languages don't easily let you do that... and the way it's
defined right now, Lisp is also putting roadblocks in your way, so it
makes the point blurrier.
I rarely PROGV directly, and its syntax is kind of lousy, but it is
one of those subprimitive things that is sometimes useful for
implementing other facilities.
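Written out, the scripting-language scenario looks like this sketch
(EVAL-WITH-BINDINGS is an invented name; the LOCALLY (DECLARE (SPECIAL ...))
wrapper is exactly the part Kent would rather not need):

  (defun eval-with-bindings (names values expression)
    (progv names values
      (eval `(locally (declare (special ,@names)) ,expression))))

  ;; (eval-with-bindings '(x y) '(3 4) '(+ x y)) => 7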
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Nigerian/419 spammer harvested this address (was: Iteration in lisp)
Date:
Message-ID: <rem-2008jun11-003@yahoo.com>
> From: ·················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t)
I posted several articles under this address, but one of them got
harvested by a Nigerian spammer, who then sent me a spam to this
address. (Fortunately I discovered it before a second spam was sent
to it:) As a result, I've shut down this address so that I won't
get any more spam via this address. Any e-mail sent to this address
will be accepted by the server then just discarded without any
non-delivery notice. That's why I'm posting this warning, just in
case somebody saw any of my articles just now and wants to reply to
me privately If anybody wants to send private e-mail to me
regarding anything I've posted, you'll have to look around to find
some other variant address that I haven't yet disabled, or go to my
Web site and click on "Contact me".
As of a few days ago, Yahoo! Mail no longer provides any way to see
full headers of spam that comes in, so that's why the info isn't
included here. Previous recent Nigerian 419 spam of the same type
came from an IP number owned by Egyptian University, with a dropbox
at HotMail. Obviously neither Egypt nor MicroSoft is doing anything
to stop such spam.
Today in fact I received six spam, using four newly-harvested
SpamGourmet forwarding addresses, all of which I've now shut down:
·················@spamgourmet.com
···············@spamgourmet.com
···············@spamgourmet.com
···················@spamgourmet.com
···················@spamgourmet.com
···················@spamgourmet.com
In article <·················@yahoo.com>,
·················@spamgourmet.com (Robert Maas,
http://tinyurl.com/uh3t) wrote:
> I posted several articles under this address, but one of them got
> harvested by a Nigerian spammer, who then sent me a spam to this
> address.
You're surprised that spammers harvest addresses from Usenet?
--
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Nigerian/419 spammer harvested this address (was: Iteration in lisp)
Date:
Message-ID: <rem-2008jun11-005@yahoo.com>
> From: Barry Margolin <······@alum.mit.edu>
> You're surprised that spammers harvest addresses from Usenet?
Nope, I was just posting a notice to anyone who might have seen my
earlier article under that address and wanted to reply to me
privately at that address, that the address has been disabled, so
that any e-mail sent to that address will *appear* to have been
delivered but in fact it will be discarded without issuing a
Non-Delivery Notice. It's a courtesy to anyone who might want to
talk to me privately using an address that I posted for people to
reply, it's no longer an address that can be used.
Note that of the several tens of different variations of my
SpamGourmet address that I've posted to newsgroups, so-far only six
of them have been harvested by Nigerian 419 spammers (your recently
deceased relative has left you milions of dollars), and none have
been harvested by any other types of spammers (viagra/pharmacy,
trouble with PayPal/bank account, college/teen/child porn, meet
latino/black/Jewish singles, Comcast/AT&T, Obama/Clinton).
By the way, I also had to cut off a variant address that was sent
to an employer who had posted a job on CareerBuilder, but
CareerBuilder somehow got that address and started sending me their
regular spam, so I had to cut off that address too, so if that
employer wants to hire me I've lost that opportunity thanks to
CareerBuilder's criminal activities. (Yes, it's illegal to
intercept e-mail addressed to somebody else and use it for your own
purposes that aren't desired by either the sender or the intended
recipient. If anybody knows of a police agency that is actively
enforcing this kind of violation of the Computer Crime law, please
let me know by contacting me through my Web site.)
From: Raymond Wiker
Subject: Re: Nigerian/419 spammer harvested this address
Date:
Message-ID: <m2bq264d7a.fsf@RAWMBP.local>
··················@spamgourmet.com.remove (Robert Maas, http://tinyurl.com/uh3t) writes:
>> From: Barry Margolin <······@alum.mit.edu>
>> You're surprised that spammers harvest addresses from Usenet?
>
> Nope, I was just posting a notice to anyone who might have seen my
> earlier article under that address and wanted to reply to me
> privately at that address, that the address has been disabled, so
> that any e-mail sent to that address will *appear* to have been
> delivered but in fact it will be discarded without issuing a
> Non-Delivery Notice. It's a courtesy to anyone who might want to
> talk to me privately using an address that I posted for people to
> reply, it's no longer an address that can be used.
You need to devise a better solution than this - this method
just won't work. Spammers will pick up your new address as soon as you
post a message with it, and you cannot expect people to look for
messages from you about an address change before trying to email you.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Nigerian/419 spammer harvested this address
Date:
Message-ID: <rem-2008jun12-006@yahoo.com>
> From: Raymond Wiker <····@RawMBP.local>
> You need to devise a better solution than this - this method just
> won't work. Spammers will pick up your new address as soon as you
> post a message with it, and you cannot expect people to look for
> messages from you about an address change before trying to email
> you.
I have posted with several tens of variations on my SpamGourmet
address, and so-far only six of them have been harvested by the
Nigerian/419 spammer, none at all by any other spammers.
I had a much better system several years ago: I had a spam filter I wrote
myself, which distinguished between:
- E-mail from known penpals, automatically white-listed.
- E-mail from known sources of spam, automatically complained to
appropriate spam-complaint address there.
- E-mail from new sources, ask me whether it's spam or not, and
move the source to one of the first two classes.
But then the spam-friendly admin at Savvis complained to my ISP's
admin that they don't like my spam complaints, so my own admin
ordered me to stop running all that software.
Then a few months ago I got the idea to have a Web site that tells
about a keyword that can be put in Subject fields to divert them
into my good folder on Yahoo! Mail which I usually check several
times per day. But several people complained that putting a keyword
in the Subject field was too complicated.
Also, for the past several years I've had another method, whereby
you send e-mail and it's all mixed with spam, but then you fill out
a form on my Web site telling me what e-mail address you sent from,
so then in Yahoo! Mail I can explicitly search for your e-mail even
though it didn't have the keyword in the Subject field. People
complained they refuse to use my Web site to send me that instant
alert of e-mail.
So now I'm trying the SpamGourmet address, which spammers are
harvesting sometimes but not always. If the address hasn't yet been
harvested, then the keyword "SpamGourmet" in the To: field triggers
my Yahoo! Mail filter to put the message in my good folder, just
the same as if they are explicitly white-listed or have the keyword
in the Subject field.
Note that Yahoo! Mail allows only 15 filter-recipes, and I've used
all of them, so there are no spares to individually white-list
anybody new. So everyone new must use one of my SpamGourmet
addresses, or the keyword in Subject field, or instant-alert on my
Web site, or e-mail to a secret address that hasn't been harvested
because I haven't ever told anybody about it (but I check only
about once per month, because I never get any e-mail at it).
If you can think of anything better than all those methods listed
above (except the one my sysadmin ordered stopped) which are still
usable, let me know.
··················@spamgourmet.com.remove (Robert Maas,
http://tinyurl.com/uh3t) writes:
>> From: Raymond Wiker <····@RawMBP.local>
>> You need to devise a better solution than this - this method just
>> won't work. Spammers will pick up your new address as soon as you
>> post a message with it, and you cannot expect people to look for
>> messages from you about an address change before trying to email
>> you.
>
> I have posted with several tens of variations on my SpamGourmet
> address, and so-far only six of them have been harvested by the
> Nigerian/419 spammer, none at all by any other spammers.
>
> I had a much better system several years ago: I had a spam filter I wrote
> myself, which distinguished between:
> - E-mail from known penpals, automatically white-listed.
> - E-mail from known sources of spam, automatically complained to
> appropriate spam-complaint address there.
> - E-mail from new sources, ask me whether it's spam or not, and
> move the source to one of the first two classes.
> But then the spam-friendly admin at Savvis complained to my ISP's
> admin that they don't like my spam complaints, so my own admin
> ordered me to stop running all that software.
>
> Then a few months ago I got the idea to have a Web site that tells
> about a keyword that can be put in Subject fields to divert them
> into my good folder on Yahoo! Mail which I usually check several
> times per day. But several people complained that putting a keyword
> in the Subject field was too complicated.
>
> Also, for the past several years I've had another method, whereby
> you send e-mail and it's all mixed with spam, but then you fill out
> a form on my Web site telling me what e-mail address you sent from,
> so then in Yahoo! Mail I can explicitly search for your e-mail even
> though it didn't have the keyword in the Subject field. People
> complained they refuse to use my Web site to send me that instant
> alert of e-mail.
>
> So now I'm trying the SpamGourmet address, which spammers are
> harvesting sometimes but not always. If the address hasn't yet been
> harvested, then the keyword "SpamGourmet" in the To: field triggers
> my Yahoo! Mail filter to put the message in my good folder, just
> the same as if they are explicitly white-listed or have the keyword
> in the Subject field.
>
> Note that Yahoo! Mail allows only 15 filter-recipes, and I've used
> all of them, so there are no spares to individually white-list
> anybody new. So everyone new must use one of my SpamGourmet
> addresses, or the keyword in Subject field, or instant-alert on my
> Web site, or e-mail to a secret address that hasn't been harvested
> because I haven't ever told anybody about it (but I check only
> about once per month, because I never get any e-mail at it).
>
> If you can think of anything better than all those methods listed
> above (except the one my sysadmin ordered stopped) which are still
> usable, let me know.
Why make it so bloody complicated? Just don't use your real e-mail
address in the from line in posts to usenet and put something in your
signature that a human can understand, but is difficult for automatic
mail address harvesters to pick up.
On your e-mail, use one of the many very effective anti-spam solutions
out there. A well configured spamassassin setup is extremely effective,
especially when configured to use spam clearing houses and blacklists.
At any rate, please stop blabbering about your spam problem. It's not
like the rest of us haven't had to learn how to deal with it and there
is absolutely nothing you have contributed that hasn't already been done
and done in a far better way than your rather poorly cobbled together
and inconvenient solution.
--
tcross (at) rapttech dot com dot au
On Jun 13, 3:02 pm, Tim X <····@nospam.dev.null> wrote:
> Why make it so bloody complicated? Just don't use your real e-mail
> address in the from line in posts to usenet and put something in your
> signature that a human can understand, but is difficult for automatic
> mail address harvesters to pick up.
>
> On your e-mail, use one of the many very effective anti-spam solutions
> out there. A well configured spamassassin setup is extremely effective,
> especially when configured to use spam clearing houses and blacklists.
I'm pretty sure you can't get to Spam Assassin on Santa Clara county
busses, and besides, working solutions are incompatible with Robert's
Apple II dialup connection over a printer cable to a VT100 plugged
into a shared IBM PC/RT running cmucl 12 from which he's about to be
evicted. But I'm sure all the attention poured on him every time he
publicly complains about some insane problem of his own making doesn't
encourage him at all.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Nigerian/419 spammer harvested this address
Date:
Message-ID: <rem-2008jun13-002@yahoo.com>
> From: Tim X <····@nospam.dev.null>
> put something in your signature that a human can understand, but
> is difficult for automatic mail address harvesters to pick up.
That would be a bloody pain to have to remember every time I post
an article to copy&paste the signature at the end. I've been trying
to post a figlet signature at the end of every article, but as you
may have seen, sometimes I forget. I'll try really hard to remember
to do that at the end of this article, but if you check my past
articles you'll see I often forget.
So what if somebody sees an article where I forgot to paste a
signature at the end, and they want to e-mail me privately, but
they have no idea how to do it? IMO it's better to put the URL of
my Web site in the From: line, which is in the canned header that
is necessary to use to make posting work at all, so there's no way
I can post an article with it mistakenly missing.
> On your e-mail, use one of the many very effective anti-spam
> solutions out there.
There is no such thing available to me. The only effective
anti-spam solution would be a filter applied at the SMTP level to
refuse all e-mail that I don't want to receive. Yahoo! Mail doesn't
provide that service. SpamGourmet doesn't provide that service. My
own shell-account ISP doesn't provide that service. I do not have
access to any other MSP (Mail Service Provider) that provides that
service. If *you* are an admin at such a MSP and are willing to
provide me with free e-mail filtering service whereby I can
configure the filter any way I want, to protect me to the maximum
degree from e-mail I don't want accepted by your SMTP server to
then be forwarded to me, please offer that service. Otherwise, if
you won't provide me with free SMTP-level filtering, and you don't
know of any free service available elsewhere, then don't tell me
it's available and suggest I use it.
> A well configured spamassassin setup is extremely effective,
> especially when configured to use spam clearing houses and
> blacklists.
Yahoo! Mail doesn't provide that service. SpamGourmet doesn't
provide that service. My own Unix-shell ISP doesn't provide that
service. I do not have access to any other MSP (Mail Service
Provider) that provides that service. If *you* are an admin at such
a MSP and are willing to provide me with free e-mail filtering
service whereby I can configure the filter any way I want, to
protect me to the maximum degree from e-mail I don't want accepted
by your SMTP server to then be forwarded to me, please offer that
service. Otherwise, if you won't provide me with free SMTP-level
filtering, and you don't know of any free service available
elsewhere, then don't tell me it's available and suggest I use it.
> At any rate, please stop blabbering about your spam problem.
I'm not blabbering. I'm posting notices, in the appropriate
threads, in followup to the appropriate articles, where one of my
public-posted addresses inviting individual responses by e-mail has
been disabled so nobody should use it. If you aren't planning to
send me private e-mail in response to something I posted, then my
notices of e-mail address no-longer-usable are not relevant to you,
and you should skip over such articles.
> It's not like the rest of us haven't had to learn how to deal with
> it and there is absolutely nothing you have contributed that hasn't
> already been done and done in a far better way than your rather
> poorly cobbled together and inconvenient solution.
I've seen nothing suitable done by anyone else that is available to
me. If you provide recipient-configurable SMTP-level filtering of
incoming e-mail, which issues 5yz refusals to any unwanted e-mail,
and are willing to provide me with free access to that service,
please make me an offer. If you know of somebody else willing to
provide me with such free service, tell me about it. Otherwise
please stop bragging about how people like you with lots of money
have services that I can't afford and how I should somehow be
satisfied that *they* have services that I don't have. How can the
fact that *other* people have services that aren't available to me
be a reason for me *not* to "cobble together" something that is
available to me and works for me even if it isn't exactly what I'd
really want if I had enough money to pay for the full services
*they* enjoy?
Robert Maas, http://tinyurl.com/uh3t wrote:
> So what if somebody sees an article where I forgot to paste a
> signature at the end, and they want to e-mail me privately, but they
> have no idea how to do it?
Suggestion from an idiot: put a link to your homepage in your sig, and
put the instructions (and whatever other crap you feel like putting) on
that homepage.
--
Jack.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Nigerian/419 spammer harvested this address
Date:
Message-ID: <rem-2008jun14-002@yahoo.com>
> > So what if somebody sees an article where I forgot to past a
> > signature at the end, and they want to e-mail me privately, but they
> > have no idea how to do it?
> From: MrD <···········@jackpot.invalid>
> Suggestion from an idiot: put a link to your homepage in your
> sig, and put the instructions (and whatever other crap you feel
> like putting) on that homepage.
Apparently you missed where I said
"what if ... I forgot to past a signature at the end".
If I forgot to include the signature, then having the contact info
in in a Web site linked from the not-posted signature won't help
somebody find out how to contact me by e-mail or my private Web
site. For now, I'm going to continue to include the URL for my Web
site in my From: line, which can't possibly be forgotten to be
included each time I post.
"Robert Maas, http://tinyurl.com/uh3t"
<··················@spamgourmet.com.remove> wrote in message
······················@yahoo.com...
....
> Apparently you missed where I said
> "what if ... I forgot to past a signature at the end".
> If I forgot to include the signature, then having the contact info
> in in a Web site linked from the not-posted signature won't help
> somebody find out how to contact me by e-mail or my private Web
> site. For now, I'm going to continue to include the URL for my Web
> site in my From: line, which can't possibly be forgotten to be
> included each time I post.
I think that anyone who googles maas on newsgroups will find you if they
want to.
Robert Maas, http://tinyurl.com/uh3t wrote:
>> From: ·················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t)
>
> I posted several articles under this address,
Were thay all the same???
--
Jack.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Nigerian/419 spammer harvested this address
Date:
Message-ID: <rem-2008jun12-002@yahoo.com>
> From: MrD <···········@jackpot.invalid>
> >> From: ·················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t)
> > I posted several articles under this address,
> Were thay [sic] all the same???
Are you some kind of idiot who doesn't know how to use Google to
answer that kind of question yourself? Here are instructions for
idiots like you:
Enter this URL in the address bar of your browser:
http://groups.google.com/advanced_search?hl=en&q=&hl=en&
Press ENTER. You now see something like this:
Help | Sign in
[groups_medium.gif]
[dot_clear.gif]
Advanced Search
Find messages
with all of the words _________________________ [10 messages_]
[Sort by relevance] Google Search
with the exact phrase _________________________
with at least one of the words _________________________
without the words _________________________
Group Return only messages from the group at this location
______________________________
(Examples: groups.google.com/group/Google-SMS, comp.os.*)
Subject Return only messages where the subject contains
______________________________
Author Return only messages where the author is
______________________________
Where it says sort by relevance, change that to sort by date, like this:
[Sort by date_____] Google Search
Where it says author, fill it in like this:
Author Return only messages where the author is
·················@SpamGourmet.
(The last part is hidden because the text field is too small to
show the entire e-mail address. You should copy&paste the whole
address from the top of this article, not just the part visible in
the text field where I show it above.)
Back up near the top, where it says 10 messages, change that to 100, like this:
with all of the words _________________________ [100 messages]
Now click on the button that says "Google Search", and you get to
this URL:
http://groups.google.com/groups?as_q=&num=100&scoring=d&hl=en&as_epq=&as_oq=&as_eq=&as_ugroup=&as_usubject=&·····························@SpamGourmet.Com&lr=&as_drrb=q&as_qdr=&as_mind=1&as_minm=1&as_miny=1981&as_maxd=12&as_maxm=6&as_maxy=2008&safe=off
where you will see a list of 59 articles I posted under that
address, the 58 before my address was harvested, and the one
announcement that the address had been harvested so the address was
no longer usable. Try browsing those articles. You will see they
are all different, unless you are too stupid to realize that
blatantly obvious fact. No, they are not all the same!!!
It's too bad you didn't know how to use Google. But now you know,
because I showed you how, right? So you won't post such a stupid
question again, right?
·················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t) writes:
> OK, I have a riddle for everyone to brainstorm about.
> You want to build a list (the usual, CONS-cells CDR-linked),
> where each element is either a tree or a list or an alist,
> and you want to call (mapcar #'COPY-DO-THE-RIGHT-THING thatList),
> where it does copy-list or copy-cons or copy-alist as appropriate.
> Obviously it won't work the obvious way, as Kent's essay explains.
> So how can you design things so that this *will* work??
You have all you need to do that with Common Lisp:
(defstruct nlist elements) ; normal list
(defun nlist (&rest elements) (make-nlist :elements elements))
(defmethod copy ((self nlist))
  (make-nlist :elements (copy-list (nlist-elements self))))

(defstruct (alist (:copier nil)) pairs)
(defun alist (&rest pairs) (make-alist :pairs pairs))
(defmethod copy ((self alist))
  (make-alist :pairs (copy-alist (alist-pairs self))))

(defstruct llist lisp-lists)
(defun llist (&rest lists) (make-llist :lisp-lists lists))
(defmethod copy ((self llist))
  (make-llist :lisp-lists (mapcar (function copy-list) (llist-lisp-lists self))))

(defmethod copy ((self null)) 'nil)
(defmethod copy ((self cons)) (copy-tree self))

(setf *print-circle* t)

(let ((that-list (list (list 'a 'b 'c)
                       (nlist 'a 'b 'c)
                       (alist '(a . 1) '(b . 2) '(c . 3))
                       (llist '(a 1) '(b 2) '(c 3)))))
  (print (mapcar (function type-of) that-list))
  (values that-list
          (mapcar (function copy) that-list)))

prints: (CONS NLIST ALIST LLIST)
--> ((A B C)
     #S(NLIST :ELEMENTS (A B C))
     #S(ALIST :PAIRS ((A . 1) (B . 2) (C . 3)))
     #S(LLIST :LISP-LISTS ((A 1) (B 2) (C 3)))) ;
    ((A B C)
     #S(NLIST :ELEMENTS (A B C))
     #S(ALIST :PAIRS ((A . 1) (B . 2) (C . 3)))
     #S(LLIST :LISP-LISTS ((A 1) (B 2) (C 3))))
--
__Pascal Bourguignon__ http://www.informatimago.com/
"Klingon function calls do not have "parameters" -- they have
"arguments" and they ALWAYS WIN THEM."
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Iteration in lisp
Date:
Message-ID: <rem-2008apr28-001@yahoo.com>
> > OK, I have a riddle for everyone to brainstorm about.
> > You want to build a list (the usual, CONS-cells CDR-linked),
> > where each element is either a tree or a list or an alist,
> > and you want to call (mapcar #'COPY-DO-THE-RIGHT-THING thatList),
> > where it does copy-list or copy-cons or copy-alist as appropriate.
> > Obviously it won't work the obvious way, as Kent's essay explains.
> > So how can you design things so that this *will* work??
> From: Pascal Bourguignon <····@informatimago.com>
> You have all you need to do that with Common Lisp:
> (defstruct nlist elements) ; normal list
> (defun nlist (&rest elements) (make-nlist :elements elements))
> (defmethod copy ((self nlist))
> (make-nlist :elements (copy-list (nlist-elements self))))
[snipped a complete software module complete with testing, see at URL:]
<http://groups.google.com/group/comp.lang.lisp/msg/8538f72e89917c9b?dmode=source>
= Message-ID: <··············@hubble.informatimago.com>
Indeed that's exactly what I was fishing for, but I only expected a
brief statement of the gist of the method, not a completely debugged
software module. For newbies who might still need to see the gist:
Put a different wrapper around each intentional type of data, using
CLOS objects or DEFSTRUCTs etc., in somewhat the same way that Java
has class Integer which is essentially just a wrapper around an int
value. Write a generic function or CLOS method which dispatches on
those intentional types to do the appropriate copy-something
operation on the structure inside the wrapper. Then you can do:
(mapcar #'genericFunctionToCopyPerWrapper listOfMixedIntentionalTypes)
Of course any other function that needs to accept these wrapped
structures as input will also need to look inside the wrapper to
see the actual data, so this complicates other code slightly.
But a generic function that can accept either wrapped or
non-wrapped values can make the distinction transparent, allowing a
mix of wrapped and non-wrapped structures: the former where mixes
of intentional types are needed, the latter, which is most efficient,
where the intentional type is known a priori. I presume this is common
practice in Java, where a method might have two parameter
signatures that differ only by whether it takes Integer or int.
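Here is a hedged sketch of that last point (mine, not from the thread):
a small generic reader that accepts either the NLIST wrapper from
Pascal's module above or a raw list, so callers need not care which
form they were handed. The name LIST-ELEMENTS is hypothetical.

(defgeneric list-elements (object)
  (:documentation "Return the underlying list, whether OBJECT is wrapped or raw."))

(defmethod list-elements ((object nlist)) ; wrapped case
  (nlist-elements object))

(defmethod list-elements ((object list))  ; raw case, including NIL
  object)

;; (list-elements (nlist 'a 'b 'c)) => (A B C)
;; (list-elements '(a b c))         => (A B C)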
(let ((that-list (list (list 'a 'b 'c)
(nlist 'a 'b 'c)
(alist '(a . 1) '(b . 2) '(c . 3))
(llist '(a 1) '(b 2) '(c 3)))))
(print (mapcar (function type-of) that-list))
...)
Usual caveat for newbies: type-of is *great* for debugging.
Use it "all the time". But don't use it *in* code for dispatching.
Instead, use (if (typep object expectedType) ... ...)
or (typecase object (expTyp1 ...) (expTyp2 ...) ... (otherwise ...))
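For instance, a minimal illustration of that advice (assuming Pascal's
NLIST and ALIST structs above are loaded; DESCRIBE-ELEMENT is a
made-up name):

(defun describe-element (object)
  ;; Dispatch on type without calling TYPE-OF explicitly.
  (typecase object
    (nlist (format nil "wrapped list ~S" (nlist-elements object)))
    (alist (format nil "wrapped alist ~S" (alist-pairs object)))
    (cons  (format nil "raw cons structure ~S" object))
    (otherwise (format nil "something else: ~S" object))))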
I was actually hoping some newbie would take a stab at my riddle
and either show great understanding of Lisp or make a mistake and
then learn something and *then* understand Lisp better. Oh well.
By the way, there's a more crufty low-tech solution too, in case
anybody cares. It has the disadvantage that it sort of usurps a
theoretically possible data value for an out-of-band meaning.
Simply create a non-interned symbol (or other object for that
matter) which is known only to the data-wrapper module, and CONS
that, followed by the intentional type, in front of each data value
(or use a different non-interned symbol for each intentional type,
so that only one CONS is needed per value). The disadvantage is
that if you accidentally pass such a wrapped object to some function
built into Lisp, it'll process the whole thing instead of
signaling an error (mumbleObject is not of type mumbleWantedType)
that you can recognize as meaning you forgot to unwrap the data.
So using DEFSTRUCT or something to make a brand-new Lisp type, as
PascalB did above, is better in that way. But the crufty low-tech
way has its own advantage: you can load stuff in any sequence and
it still either works or signals an error that a function is not
defined.
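A minimal sketch of that marker idea (the names and helper functions
here are mine, not from the thread):

;; One uninterned marker per intentional type; only this module sees them.
(defvar *alist-marker* (make-symbol "ALIST-MARKER"))
(defvar *tree-marker*  (make-symbol "TREE-MARKER"))

(defun wrap-alist (alist) (cons *alist-marker* alist))
(defun wrap-tree  (tree)  (cons *tree-marker*  tree))

(defun copy-per-marker (object)
  ;; Dispatch on the marker in the CAR; anything unmarked is copied as a tree.
  (cond ((and (consp object) (eq (car object) *alist-marker*))
         (cons *alist-marker* (copy-alist (cdr object))))
        ((and (consp object) (eq (car object) *tree-marker*))
         (cons *tree-marker* (copy-tree (cdr object))))
        (t (copy-tree object))))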
Another low-tech way to wrap is simply to GENSYM or otherwise make
an uninterned symbol and have its value cell or property list
contain the original value being wrapped. (See my other thread
about *really* using non-interned symbols.)
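A sketch of that variant too, using the value cell (WRAP-VALUE and
UNWRAP-VALUE are hypothetical names):

(defun wrap-value (value)
  ;; Fresh uninterned symbol whose value cell holds the real data.
  (let ((wrapper (gensym "WRAPPED-")))
    (setf (symbol-value wrapper) value)
    wrapper))

(defun unwrap-value (object)
  ;; Accept either a wrapper symbol or a raw value.
  (if (and (symbolp object)
           (null (symbol-package object)) ; uninterned
           (boundp object))
      (symbol-value object)
      object))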
Note that the original riddle was "just" a riddle, since I don't
recall any time I've ever needed to mix more than one intentional
type of CONS-based structure in a list and then handle them
differently depending on type. At most I've had a function that
took either a wrapped (low-tech method such as an uninterned symbol)
or non-wrapped value and coerced it to be either wrapped or
not-wrapped as needed by the code that comes next. That allowed a
graceful transition between old code that passed a very large string
(which spews out for ten minutes if my code hits a breakpoint) and
new code that passed an uninterned symbol with that string as its
value (which allowed debugging without the spew-out). The first thing
the function does upon entry is coerce its parameter to a wrapped
string if it was a raw string, and the break package then sees that
when the function hits a breakpoint.
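In code, that entry-point coercion might look something like the
following sketch, which reuses the hypothetical WRAP-VALUE /
UNWRAP-VALUE helpers from above; PROCESS-DOCUMENT is a made-up name:

(defun process-document (doc)
  ;; Normalize: a raw string gets wrapped; an already-wrapped value passes through.
  (let ((wrapped (if (stringp doc) (wrap-value doc) doc)))
    ;; From here on, a breakpoint shows a short symbol instead of spewing
    ;; the whole string; unwrap only where the actual text is needed.
    (length (unwrap-value wrapped))))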
······@gmail.com wrote:
> If you want to iterate through a list, should you use a recursive
> function or a loop?
If you want to get from A to B, should you use a bike or a car?
Pascal
--
1st European Lisp Symposium (ELS'08)
http://prog.vub.ac.be/~pcostanza/els08/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
Pascal Costanza <··@p-cos.net> writes:
> ······@gmail.com wrote:
>> If you want to iterate through a list, should you use a recursive
>> function or a loop?
>
> If you want to get from A to B, should you use a bike or a car?
Or walk?
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
In article <··············@gemini.franz.com>,
Duane Rettig <·····@franz.com> wrote:
> Pascal Costanza <··@p-cos.net> writes:
>
> > ······@gmail.com wrote:
> >> If you want to iterate through a list, should you use a recursive
> >> function or a loop?
> >
> > If you want to get from A to B, should you use a bike or a car?
>
> Or walk?
Consult a map. (It had to be said.)
rg
Ron Garret <·········@flownet.com> writes:
> In article <··············@gemini.franz.com>,
> Duane Rettig <·····@franz.com> wrote:
>
> > Pascal Costanza <··@p-cos.net> writes:
> >
> > > ······@gmail.com wrote:
> > >> If you want to iterate through a list, should you use a recursive
> > >> function or a loop?
> > >
> > > If you want to get from A to B, should you use a bike or a car?
> >
> > Or walk?
>
> Consult a map. (It had to be said.)
Would that be a car map? or a list map? Or do I have it all backward?
--
Thomas A. Russ, USC/Information Sciences Institute
In article <···············@mid.individual.net>,
Pascal Costanza <··@p-cos.net> wrote:
> ······@gmail.com wrote:
> > If you want to iterate through a list, should you use a recursive
> > function or a loop?
>
> If you want to get from A to B, should you use a bike or a car?
>
>
> Pascal
A bike.
--
http://lispm.dyndns.org/
Pascal Costanza <··@p-cos.net> writes:
> ······@gmail.com wrote:
>> If you want to iterate through a list, should you use a recursive
>> function or a loop?
>
> If you want to get from A to B, should you use a bike or a car?
You should use the higher-order conveyances provided by your city: bus,
subway, etc.
Paul Donnelly <·············@sbcglobal.net> wrote:
> Pascal Costanza <··@p-cos.net> writes:
>
> > ······@gmail.com wrote:
> >> If you want to iterate through a list, should you use a recursive
> >> function or a loop?
> >
> > If you want to get from A to B, should you use a bike or a car?
>
> You should use the higher-order conveyances provided by your city: bus,
> subway, etc.
Wormhole?
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Iteration in lisp
Date:
Message-ID: <rem-2008apr26-009@yahoo.com>
> >> If you want to iterate through a list, should you use a recursive
> >> function or a loop?
> > If you want to get from A to B, should you use a bike or a car?
> From: Paul Donnelly <·············@sbcglobal.net>
> You should use the higher-order conveyances provided by your city: bus,
> subway, etc.
Unfortunately, where I reside (Santa Clara county of California),
buses generally run only once per half hour or hour (except for a
very few buses, none of which come within convenient walking
distance of where I reside). Even the light rail runs only once per
half hour at the time I need it, and some connections from one
public mode to another require too long a walk to make the transfer
before the second vehicle has already passed, so I have to wait a
half hour for the next one, and it's too long a walk to catch a
different vehicle on a different route. Accordingly, riding a
bicycle *to* the bus stop, then loading the bike onto the rack,
then riding the bus, then unloading the bike and riding it to the
next public vehicle, is the only practical way to get around.
Details for anyone curious: I need to get to Project Hired at 10 AM
on Tuesdays. So I need to ride the bike to catch the 32 bus, then at
the Santa Clara CalTrain station move the bike from the 32 to the 22
or 522, then at San Fernando or Diridon station ride the bike from
the 22 or 522 to the light rail. Except if I missed the 22 or 522,
and/or their bike racks were full so I have to take a later 22, then
I need to get off the 22 a mile before Diridon, at Race Street, and
load the bike onto the 63 bus, so I'm only 15 minutes late instead
of a half hour late.
After noon, I need to ride the bike to catch the light rail or the 63
or 65 bus; then, depending on which one I took, I end up on First
Street south of San Fernando, on First Street north of San Fernando,
or on 7th Street north of San Fernando, and a bike is the only
practical way to get from those three locations to my next intended
location.
Then in the evening, I need to get off the 22 or 522 bus at
Sunnyvale Avenue or Fair Oaks respectively (Sunnyvale Avenue is
closer to home, but the 522 doesn't stop there). If I get off at
Sunnyvale Avenue, I just ride home. If I get off at Fair Oaks, then
depending on the 55 bus schedule and the availability of a bike rack
I might load my bike on the bus or just ride all the way home from
there.
So now back to Lisp: Find a vendor-supplied utility that does the
main part of what you need, such as MAPCAR or LOOP or POSITION or
REMOVE etc., then add your own layer of extra code as glue to
connect the utilities to make it do *exactly* what you need. Does
everyone see the metaphor?
bus/lightrail = MAPCAR etc.
bike = that extra layer of code you write
travel from here to there = process input data to produce output data
public transit (with bike) = D/P (data processing)
Nice thing about Common Lisp: The bike rack is never full.
You can always add your extra code and never worry that it won't work later.
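To make the metaphor concrete (my example, not from the thread): the
library utilities do the heavy lifting, and a line or two of glue
adapts them to the exact job.

(defun names-of-adults (people)
  ;; PEOPLE is a list of (name . age) pairs.
  (mapcar #'car                                        ; glue: extract the names
          (remove-if (lambda (pair) (< (cdr pair) 18)) ; library: do the filtering
                     people)))

;; (names-of-adults '((alice . 30) (bob . 12) (carol . 45))) => (ALICE CAROL)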
Nit: Note that CPU is really CDPU (Central Data-Processing Unit)
DEC (Digital Equipment Corporation) had the name right:
PDP = Programmed Data-Processor