From: jurgen_defurne
Subject: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <4b2c9d32-83fb-4df9-8f2d-60696acee264@l16g2000yqo.googlegroups.com>
When I have the following macro

(defmacro m-a (name)
       `(defun ,name () (quote ,name)))

and the following loop

    (dolist (name '(a b c)) (m-a name))

instead of doing

    (m-a a)
    (m-a b)
    (m-a c)

the result is not what I expect (three functions called A, B and C),
but only one function called NAME. With a little thinking this is not
hard to understand.
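Macroexpanding one iteration makes this concrete (a sketch using the M-A
macro above): DOLIST binds NAME at run time, but the macro receives the
literal symbol NAME at expansion time, so every iteration expands to the
same DEFUN.

```lisp
(defmacro m-a (name)
  `(defun ,name () (quote ,name)))

;; Inside (dolist (name '(a b c)) (m-a name)) the macro call is
;; expanded once, before DOLIST ever runs, and it only ever sees
;; the symbol NAME -- never A, B or C:
(macroexpand-1 '(m-a name))
;; => (DEFUN NAME () (QUOTE NAME))
```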

Now, I have done this already in the past, and looking through my
code, it seems that I created two macros: one to do the real expansion,
and one to run the loop.

What I really want to do is to create a bunch of named functions which
have the same contents, but which have different constants embedded in
them. They must be named so I can call them literally, like with the
previous definitions :

    (a)
    (b)
    (c)

I can replace this with

    (dolist (name '(a b c)) (eval `(defun ,name () (quote ,name))))

When evaluating my options here, it seems that when using macros I
cannot do the compilation in one single construct. Digging further into
my old code, I see that I create a macro to do the expansion, but the
real work is done by some functions: they first expand the passed
list into another list, and this list is then plugged into the macro
code, which uses it to expand into a PROG containing all the necessary
DEFUNs.

The other option seems to be the above structure using 'eval'.

Any other options that you know of ?

Regards,

Jurgen

From: Pascal Costanza
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <729obbForvt3U1@mid.individual.net>
jurgen_defurne wrote:
> When I have the following macro
> 
> (defmacro m-a (name)
>        `(defun ,name () (quote ,name)))
> 
> and the following loop
> 
>     (dolist (name '(a b c)) (m-a name))
> 
> instead of doing
> 
>     (m-a a)
>     (m-a b)
>     (m-a c)
> 
> the result is not what I expect, three functions called a, b and c,
> but only one function called name. With a little thinking this is not
> hard to understand.
> 
> Now, I have done this already in the past, but looking through my
> code, I seems that I created two macros, one to do the real expansion,
> and one to run the loop, so that
> 
> What I really want to do is to create a bunch of named functions which
> have the same contents, but which have different constants embedded in
> them. They must be named so I can call them literally, like with the
> previous definitions :
> 
>     (a)
>     (b)
>     (c)
> 
> I can replace this with
> 
>     (dolist (name '(a b c)) (eval `(defun ,name () (quote ,name))))
> 
> When evaluating my options here, it seems that when using macros, I
> cannot do the compilation in one single construct. Digging further in
> my old code, I see that I create a macro to do the expansion, but the
> real work is done by some functions, which first expand the passed
> list into another list, and this list is then plugged into the macro-
> code, which uses it to expand into a prog containing all necessary
> defun's.
> 
> The other option seems to be the above structure using 'eval'.
> 
> Any other options that you know of ?

(dolist (name '(a b c))
   (setf (symbol-function name)
         (constantly name)))
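A quick sketch of what this gives (an added illustration, not part of the
original reply): each symbol's function cell ends up holding a CONSTANTLY
closure, so the functions are callable by their names.

```lisp
;; Install a closure returning the symbol itself into each
;; symbol's function cell:
(dolist (name '(a b c))
  (setf (symbol-function name)
        (constantly name)))

;; The symbols now name ordinary callable functions:
(a)          ;; => A
(funcall 'b) ;; => B
```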

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Pascal Costanza
Subject: Re: Strategies for defining named functions at load or compile time   ?
Date: 
Message-ID: <729oi2Fp2783U1@mid.individual.net>
Pascal Costanza wrote:
> jurgen_defurne wrote:
>> When I have the following macro
>>
>> (defmacro m-a (name)
>>        `(defun ,name () (quote ,name)))
>>
>> and the following loop
>>
>>     (dolist (name '(a b c)) (m-a name))
>>
>> instead of doing
>>
>>     (m-a a)
>>     (m-a b)
>>     (m-a c)
>>
>> the result is not what I expect, three functions called a, b and c,
>> but only one function called name. With a little thinking this is not
>> hard to understand.
>>
>> Now, I have done this already in the past, but looking through my
>> code, I seems that I created two macros, one to do the real expansion,
>> and one to run the loop, so that
>>
>> What I really want to do is to create a bunch of named functions which
>> have the same contents, but which have different constants embedded in
>> them. They must be named so I can call them literally, like with the
>> previous definitions :
>>
>>     (a)
>>     (b)
>>     (c)
>>
>> I can replace this with
>>
>>     (dolist (name '(a b c)) (eval `(defun ,name () (quote ,name))))
>>
>> When evaluating my options here, it seems that when using macros, I
>> cannot do the compilation in one single construct. Digging further in
>> my old code, I see that I create a macro to do the expansion, but the
>> real work is done by some functions, which first expand the passed
>> list into another list, and this list is then plugged into the macro-
>> code, which uses it to expand into a prog containing all necessary
>> defun's.
>>
>> The other option seems to be the above structure using 'eval'.
>>
>> Any other options that you know of ?
> 
> (dolist (name '(a b c))
>   (setf (symbol-function name)
>         (constantly name)))

...in general: See whether there is a "functional" API underneath the 
macro abstractions. Macro abstractions are typically defined for making 
it convenient at the user-level to use certain functionality, but for 
programmatically using it, you need first-class representations of 
entities (like the function for a symbol that is the result of 
evaluating an expression).

There are a few cases where the macro is the only accessible API, and 
then the best option is to use (eval some-expression), or if you are 
worried about efficiency maybe (funcall (compile nil `(lambda () 
,some-expression))) [but make sure that you compile only once, or as 
little as possible, at runtime].
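A minimal sketch of the compile-once advice, assuming a hash table keyed by
the source expression (the names *COMPILED-CACHE* and RUN-EXPRESSION are
illustrative choices, not from the post):

```lisp
;; Cache compiled thunks by their source expression (EQUAL test so
;; structurally identical expressions share one compilation).
(defvar *compiled-cache* (make-hash-table :test #'equal))

(defun run-expression (expression)
  "Compile EXPRESSION at most once, cache the function, then call it."
  (funcall (or (gethash expression *compiled-cache*)
               (setf (gethash expression *compiled-cache*)
                     (compile nil `(lambda () ,expression))))))

(run-expression '(+ 1 2)) ;; compiles on the first call, returns 3
(run-expression '(+ 1 2)) ;; reuses the cached function, returns 3
```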

Pascal

-- 
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Thomas A. Russ
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <ymi4oxrptnz.fsf@blackcat.isi.edu>
jurgen_defurne <··············@pandora.be> writes:

> When I have the following macro
> 
> (defmacro m-a (name)
>        `(defun ,name () (quote ,name)))
> 
> and the following loop
> 
>     (dolist (name '(a b c)) (m-a name))
....
> Now, I have done this already in the past, but looking through my
> code, I seems that I created two macros, one to do the real expansion,
> and one to run the loop, so that

Well, you could easily replace this with a single macro that does the
loop and creates everything you need.  Just have it use an &rest
argument:

(defmacro m-a (&rest names)
  `(progn ,@(loop for name in names
                  collect `(defun ,name () ',name))))

or if you want to stick with dolist:

(defmacro m-a (&rest names)
  (let ((forms nil))
   `(progn ,@(dolist (name names (nreverse forms))
                  (push `(defun ,name () ',name) forms)))))


Note that the NREVERSE is optional, since the order of definitions of
functions shouldn't really matter.

You would then invoke it with non-quoted arguments like this:

  (m-a a b c)

or even

  (m-a d)

and it would do what you want it to do.

There is no loss of functionality in going to unquoted arguments, since
in order for the macro expansion to work, the values must be known at
compile time anyway, and that would normally mean you are not computing
them.
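For instance, expanding the &rest version shows the single PROGN of DEFUNs
it produces (a sketch based on the LOOP-based definition above):

```lisp
(defmacro m-a (&rest names)
  `(progn ,@(loop for name in names
                  collect `(defun ,name () ',name))))

;; One macro call expands into all three definitions at once:
(macroexpand-1 '(m-a a b c))
;; => (PROGN (DEFUN A () 'A) (DEFUN B () 'B) (DEFUN C () 'C))

(m-a a b c)
(c) ;; => C
```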

Now as to why you prefer to write
   (a)
to
   'a
is entirely another matter, but I'll assume this is a distilled example
from some more complicated operation that you really care about.

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: D Herring
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <49c048d2$0$3339$6e1ede2f@read.cnntp.org>
jurgen_defurne wrote:
> When I have the following macro
> 
> (defmacro m-a (name)
>        `(defun ,name () (quote ,name)))
> 
> and the following loop
> 
>     (dolist (name '(a b c)) (m-a name))
> 
> instead of doing
> 
>     (m-a a)
>     (m-a b)
>     (m-a c)
> 
> the result is not what I expect, three functions called a, b and c,
> but only one function called name. With a little thinking this is not
> hard to understand.

Sometimes macros turn out to be the wrong tool for the job.  They are 
over-eager at compile-time.

Try replacing your macro with
(defun m-a (name)
   (setf (symbol-function name)
	(lambda () name)))

;; example usage
(m-a 'self-naming-function)
(dolist (name '(a b c)) (m-a name))

Later,
Daniel
From: Mark Wooding
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <87prgfkej6.fsf.mdw@metalzone.distorted.org.uk>
jurgen_defurne <··············@pandora.be> writes:

> When I have the following macro
>
> (defmacro m-a (name)
>        `(defun ,name () (quote ,name)))
>
> and the following loop
>
>     (dolist (name '(a b c)) (m-a name))

;; Hacky but easy.  Programs are lists, after all!
(progn . #.(mapcar (lambda (name) `(m-a ,name)) '(a b c)))

;; More tedious.
(macrolet ((frob (&rest things) 
             `(progn ,@(mapcar (lambda (name) `(m-a ,name))
                               things))))
  (frob a b c))

-- [mdw]
From: Tobias C. Rittweiler
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <877i2mne8v.fsf@freebits.de>
Mark Wooding <...> writes:

> ;; Hacky but easy.  Programs are lists, after all!
> (progn . #.(mapcar (lambda (name) `(m-a ,name)) '(a b c)))
>
> ;; More tedious.
> (macrolet ((frob (&rest things) 
>              `(progn ,@(mapcar (lambda (name) `(m-a ,name))
>                                things))))
>   (frob a b c))

The latter one is the idiomatic choice. It also copes much better with
program-analysing tools (which includes, for example, Slime.)

  -T.
From: Mark Wooding
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <87hc1pjsjz.fsf.mdw@metalzone.distorted.org.uk>
"Tobias C. Rittweiler" <···@freebits.de.invalid> writes:

> Mark Wooding <...> writes:
> > ;; More tedious.
> > (macrolet ((frob (&rest things) 
> >              `(progn ,@(mapcar (lambda (name) `(m-a ,name))
> >                                things))))
> >   (frob a b c))
>
> The latter one is the idiomatic choice. 

It is indeed -- right down to the name `frob'!

> It also copes much better with program-analysing tools (which
> includes, for example, Slime.)

That's true.

-- [mdw]
From: Alex Mizrahi
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <49c27137$0$90263$14726298@news.sunsite.dk>
 jd> Any other options that you know of ?

there are plenty of options:
 * normal macro
 * macro via macrolet
 * read-time evaluation (#.(loop ... ))
 * eval
 * using symbol-function

which is best.. it depends.

for this example, the symbol-function thingie seems cleanest.

read-time evaluation is pretty concise, but it is rarely used.

macrolet is somewhat more verbose, but pretty good too.

eval sucks.

one more option: inline, anonymous macro. that is essentially a macrolet,
but more concise. see this post by Kent Pitman:
http://groups.google.com/group/comp.lang.lisp/msg/4da9387c1b81573e?dmode=source

if you define macro called META as he does, you can write:

(meta `(progn ,@(loop for name in '(a b c) collect `(m-a ,name))))

or just inline it there:

(meta `(progn ,@(loop for name in '(a b c)
                      collect `(defun ,name () (quote ,name)))))

compare with:

(progn #.(loop for name in '(a b c)
               collect `(defun ,name () (quote ,name))))

you just replace #. with a meta.

but note what Pitman writes: "FWIW, the total number of times I have ever 
wanted this in my career is approximately five" -- that is, it's not worth 
the effort.

as for normal macros, you can use the mapmacro macro described by D Herring 
here:
http://groups.google.com/group/comp.lang.lisp/msg/408c58b9209d3971

(mapmacro m-a (a b c))

very concise, but again, introducing this mapmacro is not worth the effort. 
From: eric-and-jane-smith
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <Dcvwl.184698$6r1.24180@newsfe19.iad>
"Alex Mizrahi" <········@users.sourceforge.net> wrote in
······························@news.sunsite.dk: 

> which is best.. it depends.
> 
> for this example, symbol-function thingie seems being cleanest.
> 
> read-time evaluation is pretty concise, but it is rarely used.
> 
> macrolet is somewhat more verbose, but pretty good too.
> 
> eval sucks.

In what way does eval suck?  If it makes the code clearer, why wouldn't you 
use it?  It might not have any advantages when defining functions in a 
loop, but what about when defining macros or symbol macros in a loop?
From: John Thingstad
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <op.uq1xkbxkut4oq5@pandora.alfanett.no>
På Thu, 19 Mar 2009 18:23:47 +0100, skrev eric-and-jane-smith  
<·······@nospam.com>:

> "Alex Mizrahi" <········@users.sourceforge.net> wrote in
> ······························@news.sunsite.dk:
>
>> which is best.. it depends.
>>
>> for this example, symbol-function thingie seems being cleanest.
>>
>> read-time evaluation is pretty concise, but it is rarely used.
>>
>> macrolet is somewhat more verbose, but pretty good too.
>>
>> eval sucks.
>
> In what way does eval suck?  If it makes the code clearer, why wouldn't  
> you
> use it?  It might not have any advantages when defining functions in a
> loop, but what about when defining macros or symbol macros in a loop?

It copies the read environment. It does not share the lexical scope of its  
parent. It is slow to start. All in all, it is better to use 'compile.

--------------
John Thingstad
From: eric-and-jane-smith
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <YTBwl.114258$RJ7.55759@newsfe18.iad>
"John Thingstad" <·······@online.no> wrote in
······················@pandora.alfanett.no: 

> På Thu, 19 Mar 2009 18:23:47 +0100, skrev eric-and-jane-smith  
> <·······@nospam.com>:
> 
>> "Alex Mizrahi" <········@users.sourceforge.net> wrote in
>> ······························@news.sunsite.dk:
>>
>>> eval sucks.
>>
>> In what way does eval suck?  If it makes the code clearer, why
>> wouldn't  you
>> use it?  It might not have any advantages when defining functions in
>> a loop, but what about when defining macros or symbol macros in a
>> loop? 
> 
> It copies the read enviroment. Does not share the lexical scope of
> it's  parent. Is slow to start. In all it is better to use 'compile.

For example, suppose I have a list of names of symbol macros I want to 
define, and a list of information to define each of them.  How can I 
define them more elegantly than by building the define-symbol-macro forms 
in a loop and using eval on each of those forms in that loop?  It's 
clearer because when browsing the code you see the define-symbol-macro 
and that tells you what's happening.  The read environment and lexical 
scope don't matter much because the symbol macros are being defined 
globally.

What do you mean by slow to start?  How slow?  Like it could make a 1000-
line program take twice as long to compile?
From: Alex Mizrahi
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <49c3d1ae$0$90272$14726298@news.sunsite.dk>
 ??>> eval sucks.

 eaj> In what way does eval suck?

99.99% of cases where one could use eval are better served
with other tools (i've listed them in my message). (perhaps it
is 100% of cases, i don't know for sure.)

 eaj> If it makes the code clearer, why wouldn't you use it?

if i knew it made code clearer, i'd use it, but i do not
know of cases where it can make code clearer.

 eaj>   It might not have any advantages when defining functions in a loop,
 eaj> but what about when defining macros or symbol macros in a loop?

did you read anything in my post besides the "eval sucks" line? there are
plenty of ways to define something in a loop, not necessarily functions.

 eaj> For example, suppose I have a list of names of symbol macros I want to
 eaj> define, and a list of information to define each of them.  How can I
 eaj> define them more elegantly than by building the define-symbol-macro 
forms
 eaj> in a loop and using eval on each of those forms in that loop?

as i understand, you have something like this:

(loop for (sym . form) in '((a . 'a)
                            (b . 'b)
                            (c . 'c))
    do (eval `(define-symbol-macro ,sym ,form)))

you can also do this via META macro (essentially, a macrolet):

(meta `(progn ,@(loop for (sym . form) in '((a . 'a)
                                            (b . 'b)
                                            (c . 'c))
                      collect `(define-symbol-macro ,sym ,form))))

or via a read-time-evaluation:

#.(cons 'progn (loop for (sym . form) in '((a . 'a)
                                           (b . 'b)
                                           (c . 'c))
                      collect `(define-symbol-macro ,sym ,form)))

as you see, all three solutions are very similar; you can't say the eval
one is clearer. perhaps it has a few fewer tokens, but that is not a
significant difference.

however, there is a significant difference in the semantics.

put this thing in a file:

(loop for (sym . form) in '((a2 . 'a)
                            (b2 . 'b)
                            (c2 . 'c))
    do (eval `(define-symbol-macro ,sym ,form)))

(print a2)


and do a compile-file on it. SBCL says:

; compiling (LOOP FOR ...)
; compiling (PRINT A2)
;
;
; caught WARNING:
;   undefined variable: A2

it does not work, for an obvious reason -- the loop is not executed during 
compilation!
now load it. it still does not work:

The variable A2 is unbound.
   [Condition of type UNBOUND-VARIABLE]


now let's try read-time evaluation version:

; compiling (DEFINE-SYMBOL-MACRO A4 ...)
; compiling (DEFINE-SYMBOL-MACRO B4 ...)
; compiling (DEFINE-SYMBOL-MACRO C4 ...)
; compiling (PRINT A4)

; /home/alex/test.fasl written
; compilation finished in 0:00:00

it compiles perfectly and works when you load it. same thing with meta or a 
macrolet -- it compiles and works just fine.

you can fix the issue with EVAL compilation via eval-when:

(eval-when (:compile-toplevel :load-toplevel :execute)
  (loop for (sym . form) in '((a7 . 'a)
                              (b7 . 'b)
                              (c7 . 'c))
    do (eval `(define-symbol-macro ,sym ,form))))

then it works, but it is not as clear as it was, is it?

but even with this addition, the eval solution is inferior -- it does not
record the source code location. if you use either the read-time evaluation
or the macrolet solution, you can jump to the source location by pressing
M-. in SLIME, but if you use EVAL, it is absolutely impossible. this looks
like an unimportant issue, but it can be a HUGE pain when debugging complex
code.

so, you see, eval is messy, it does not play well with Common Lisp
compilation semantics, and it is better to avoid it -- usually there IS a
better solution. 
From: Kenneth Tilton
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <49c3d5b2$0$5912$607ed4bc@cv.net>
Alex Mizrahi wrote:
>  ??>> eval sucks.
> 
>  eaj> In what way does eval suck?
> 
> 99.99% of cases where one could use eval are better served
> with other tools (i've listed them in my message). (perhaps it
> is 100% of cases, i don't know for sure.)

No more "code as data"? McCarthy is gonna kill us!

kt
From: Pillsy
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <3b0318c9-d558-4cdb-9090-a12655780286@o11g2000yql.googlegroups.com>
On Mar 20, 1:43 pm, Kenneth Tilton <·········@gmail.com> wrote:
> Alex Mizrahi wrote:
[...]
> > 99.99% of cases where one could use eval are better served
> > with other tools (i've listed them in my message). (perhaps it
> > is 100% of cases, i don't know for sure.)

> No more "code as data"? McCarthy is gonna kill us!

EVAL isn't for using code as data, it's for using data as code! ;)

Cheers,
Pillsy
From: Kenneth Tilton
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <49c3ffc9$0$24466$607ed4bc@cv.net>
Pillsy wrote:
> On Mar 20, 1:43 pm, Kenneth Tilton <·········@gmail.com> wrote:
>> Alex Mizrahi wrote:
> [...]
>>> 99.99% of cases where one could use eval are better served
>>> with other tools (i've listed them in my message). (perhaps it
>>> is 100% of cases, i don't know for sure.)
> 
>> No more "code as data"? McCarthy is gonna kill us!
> 
> EVAL isn't for using code as data, it's for using data as code! ;)
> 

I think you are missing the chicken-egg consequences of finding one-way 
directionality in "code as data".

kt
From: Alex Mizrahi
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <49c3dcc1$0$90274$14726298@news.sunsite.dk>
 ??>> 99.99% of cases where one could use eval are better served
 ??>> with other tools (i've listed them in my message). (perhaps it
 ??>> is 100% of cases, i don't know for sure.)

 KT> No more "code as data"? McCarthy is gonna kill us!

code is data, but there are different ways to use/run this code:

  * macros
  * read-time evaluation
  * (coerce lambda-code 'function)
  * (compile nil lambda-code)
  * eval

eval is the most basic and primitive way, and usually it is
better to use a more specialized tool. 
From: Kenneth Tilton
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <49c3ef56$0$5916$607ed4bc@cv.net>
Alex Mizrahi wrote:
>  ??>> 99.99% of cases where one could use eval are better served
>  ??>> with other tools (i've listed them in my message). (perhaps it
>  ??>> is 100% of cases, i don't know for sure.)
> 
>  KT> No more "code as data"? McCarthy is gonna kill us!
> 
> code is data, but there are different ways to use/run this code:
> 
>   * macros
>   * read-time evaluation
>   * (coerce lambda-code 'function)
>   * (compile nil lambda-code)
>   * eval
> 
> eval is the most basic and primitive way, and usually it is
> better to use a more specialized tool. 
> 
> 

And when the code arrives as a string over a pipe or is read from a 
database? It is not clear to me your percentages are gonna hold up; you 
might wanna dial those back a hair. Well, OK, just noticed you have 
retreated to "usually"... maybe McCarthy will go easy on us. Whew!

kt
From: Alex Mizrahi
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <49c3fdb0$0$90265$14726298@news.sunsite.dk>
 KT> And when the code arrives as a string over a pipe
 KT>  or is read from a database?

yes.
it might be a right job for EVAL.

you see, we were discussing EVAL being used for an application's internal
purposes, like generating some code automatically, and it appears that
there are better ways to use such generated code in a program.

when eval is part of the problem's description, as in a read-EVAL-print
loop, or in RPC where you need to EVALuate code that comes over a pipe,
then it is the right tool.

if we take such cases into account, the percentage would be lower, maybe
99%, but still, i think people need automatic code generation MUCH more
frequently than they write a REPL or RPC.


am 
From: Kazimir Majorinc
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <op.uq3zppy91cfios@kazimir-pc>
> eval is the most basic and primitive way, and usually it is better to use  
> a more specialized tool.

One should use the most basic tool unless he really needs
specialization due to some optimization concern. Not the
other way around. I'm sure you understand that principle
and that you use it in your programs.


--
http://kazimirmajorinc.blogspot.com
From: Alex Mizrahi
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <49c639e6$0$90263$14726298@news.sunsite.dk>
 ??>> eval is the most basic and primitive way, and usually it is better
 ??>> to use a more specialized tool.

 KM> One should use the most basic tool unless he really needs
 KM> specialization due to some optimization concerns.

macros are used instead of eval not for "some optimization concerns", but
because they provide much cleaner semantics. and i can't say they are less
basic than eval -- both are conceptually simple constructs, doing slightly
different things.

sure, EVAL appeared first. early versions of LISP used dynamic state for
everything, so EVAL could do a macro's job, and there was no big need for
macros.

but later it appeared that as programs become more complex, doing
everything via dynamic state is too error-prone, as a function's behaviour
depends on how it is called, not only on the function itself. lexical
environments were a solution to this problem -- there is much less chance
to screw up. another thing is the structuring of a program as a whole,
rather than as some chaotic stream of commands, and the separation of read,
macroexpansion, compile and run times -- this helps to find errors early,
before the program executes, and the program is much more robust to small
changes. when a program works with dynamic state, the only way to check
whether it works is to run it, and to check all possible code paths you
need to run it through all possible code paths, something that might be
impossible due to combinatorial explosion. however, when you have a clear
structure of a program, it is possible to analyse it before execution and
detect a large class of errors without going through all possible paths.
e.g. if you see that a variable being referenced is not declared in a
lexical environment, it is definitely an error, whereas in a dynamically
scoped program that could be just a variable defined elsewhere, or the
analyser might not even recognize a variable if it sits inside a backquote
meant to be executed with EVAL.

macros do eval's job in lexically structured programs. eval simply does not
work there; macros do. they are not any more complex, they are just doing
something different -- they inject a piece of code into existing code,
rather than executing it immediately in the current dynamic state. what
makes macros more complex is macrolet -- lexically-scoped macros -- but
that is another story; you do not need to use macrolet. macro hygiene is
also a different story -- CL provides raw macros, just like EVAL.

so this stuff -- lexical environments, compilation and macros -- made it
possible to create more reliable programs, to program more easily and
therefore to develop larger programs in less time, and it allowed larger
teams, because the different pieces are more self-contained and require
less communication between team members.

yes, you can call these optimizations, but it is the same kind of
optimization that makes Lisp better than assembly language, and i betcha
most will agree that this optimization is absolutely necessary. (except,
maybe, assembly freaks.)

but if you're a hobbyist who has never worked with large/complex programs
and never worked in teams, you might see no point in using a more
structured programming style, and thus no point in avoiding eval. "heh, it
works for me, so it must be good." i think this is the way newlisp was
created -- people just rejected decades of experience and started making
something on their own, with their limited experience. it's actually
somewhat sad..

 KM>  Not other way around. I'm sure you understand that principle
 KM> and that you use it in your programs.

the principle is actually quite simple -- use what works best. you can't
make any other quality a dogma -- use the simplest, the fastest, the most
terse, or whatever; it just does not make sense in some situations. the
world is not one-dimensional 
From: Kazimir Majorinc
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <op.uq75kgdw1cfios@kazimir-pc>
On Sun, 22 Mar 2009 14:15:13 +0100, Alex Mizrahi


>
> yes, you can call this optimizations, but it is same kind of optimization
> that makes Lisp better than assembly
> language, i betcha most will agree that this optimization is absolutely
> necessary. (except, maybe, assembly freaks.)
>
> but if you're a hobbist, never worked with large/complex programs and  
> never
> worked in teams, you might
> see no point in using more structured programming style and thus no  
> point in
> avoiding eval.

The mistake is, I think, in the claim that use of
eval causes problems that competent programmers
cannot avoid, except by avoiding eval itself.
I fail to see arguments for that claim.

John McCarthy wrote in the Lisp 1.5 manual: "LISP differs
from most programming languages in three important ways....
Third, LISP can interpret and execute programs written
in the form of S-expressions."

Now, you say "eval sucks." I've read similar
claims on this newsgroup. Of course, McCarthy can
be wrong, but where is that departure from the
tradition recorded? Is there any published article
against eval, or is it purely a Usenet/IRC thing?



Blog:    http://kazimirmajorinc.blogspot.com
From: Thomas A. Russ
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <ymi1vsodwpw.fsf@blackcat.isi.edu>
"Kazimir Majorinc" <·····@email.address> writes:

> On Sun, 22 Mar 2009 14:15:13 +0100, Alex Mizrahi
> >
> > but if you're a hobbist, never worked with large/complex programs and
> > never
> > worked in teams, you might
> > see no point in using more structured programming style and thus no
> > point in
> > avoiding eval.
> 
> The mistake is, I think, in claim that use of
> eval causes problems that competent programmers
> cannot avoid, except by avoiding eval itself.
> I fail to see arguments for that claim.

Well, I don't think that there is a serious claim that EVAL causes
problems that competent programmers cannot avoid.  The general reaction
against EVAL is that it is often used by novices in ways that either can
cause problems or that are not the best way to solve a particular
problem.

So EVAL certainly has its place in a programmer's toolkit, but it also
has the potential for misuse.  So, especially for new lisp programmers,
there is a general guideline that one should avoid EVAL if possible.  It
is the "if possible" part that often goes missing.  But the use of EVAL
by novices is often a sign that something really should be done
differently.

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Thomas F. Burdick
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <7e7c7a60-1d18-47dc-868c-0bf30efba5fd@o11g2000yql.googlegroups.com>
On 23 mar, 17:34, ····@sevak.isi.edu (Thomas A. Russ) wrote:
> "Kazimir Majorinc" <·····@email.address> writes:
> > On Sun, 22 Mar 2009 14:15:13 +0100, Alex Mizrahi
>
> > > but if you're a hobbist, never worked with large/complex programs and
> > > never
> > > worked in teams, you might
> > > see no point in using more structured programming style and thus no
> > > point in
> > > avoiding eval.
>
> > The mistake is, I think, in claim that use of
> > eval causes problems that competent programmers
> > cannot avoid, except by avoiding eval itself.
> > I fail to see arguments for that claim.
>
> Well, I don't think that there is a serious claim that EVAL causes
> problems that competent programmers cannot avoid.  The general reaction
> against EVAL is that it is often used by novices in ways that either can
> cause problems or that are not the best way to solve a particular
> problem.
>
> So EVAL certainly has its place in a programmer's toolkit, but it also
> has the potential for misuse.  So, especially for new lisp programmers,
> there is a general guideline that one should avoid EVAL if possible.  It
> is the "if possible" part that often goes missing.  But the use of EVAL
> by novices is often a sign that something really should be done
> differently.

Assuming by "if possible" you mean something more like "if possible
with a reasonable amount of effort and without making a hash of
things", I completely agree, at least in Common Lisp. CL has a very nice
division of evaluation into different stages, and powerful tools for
intervening in those stages. This, combined with its impoverished
EVAL, makes this a pretty easy call most of the time. The fact that
this is Lisp and you can easily write special-purpose mini-
interpreters for sexprs that aren't CL:EVAL pushes one further from
CL:EVAL. CL gives you lots of special-purpose tools, and the ability to
invent more, that are more specialized to a certain task than EVAL is,
and using the more specialized tool is usually a win. Abusing a
specialized tool is generally a disaster, though, whether that tool be
defmacro or format ... building back up to the GP-language level can
make for evilness.

(Gratuitous example: a colleague sometimes jokingly threatens to
implement a WITH-BASIC macro, using FORMAT. I'm sure he could
implement a classic line-numbered basic without using EVAL. Both of us
know it's a terrible idea)
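
Such a special-purpose mini-interpreter can be sketched in a few lines
(arith-eval and its tiny operator set are made up for illustration, not
something from this thread):

```lisp
;; A tiny evaluator for arithmetic sexprs -- deliberately NOT CL:EVAL.
;; It knows only numbers, variables bound in an explicit alist,
;; and the operators +, - and *.
(defun arith-eval (expr env)
  (cond ((numberp expr) expr)
        ((symbolp expr)
         (or (cdr (assoc expr env))
             (error "Unbound variable: ~S" expr)))
        ((member (first expr) '(+ - *))
         (apply (first expr)
                (mapcar (lambda (e) (arith-eval e env)) (rest expr))))
        (t (error "Unknown operator: ~S" (first expr)))))

;; (arith-eval '(+ x (* 2 y)) '((x . 1) (y . 3))) => 7
```

Because the interpreter only understands what you teach it, nothing in the
evaluated data can reach into the host program the way a stray CL:EVAL can.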
From: Alex Mizrahi
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <49c81803$0$90276$14726298@news.sunsite.dk>
??>> but if you're a hobbist, never worked with large/complex programs and
??>> never worked in teams, you might see no point in using more structured
??>> programming style and thus no  point in avoiding eval.

KM> The mistake is, I think, in claim that use of
KM> eval causes problems that competent programmers cannot avoid, except by
KM> avoiding eval itself. I fail to see arguments for that claim.

it looks like you did not read what i wrote. EVAL works as advertised;
it is pretty good at the job it does. the problem is that it does not work
with the lexical environment; it simply cannot, by the definition of EVAL.
no matter how competent you are, it will not work that way.

once more, lexical environments are superior to dynamic environments, as
they make programs more reliable and easier to understand.

the argument that "sufficiently competent" programmers can overcome the
problems of dynamic state weirdness is laughable. yes, "sufficiently
competent" programmers can do that. they can also program in assembly
language, or directly in freaking machine code; there were actually people
who could do this. but the thing is that, given better tools, they could
write much better programs and develop them faster.

KM> John McCarthy wrote in Lisp 1.5 manual: "LISP differs
KM>  from most programming languages in three important ways....
KM> Third, LISP can interpret and execute programs written
KM> in the form of S-expressions."

true

KM> Now, you say "eval sucks."

huh, and you think that if you avoid eval you're not executing programs
written in the form of S-expressions?

macros work in the same way -- they take S-expressions and embed them into
the code, and later these S-expressions get executed. JUST LIKE IN EVAL'S
CASE. except that macros work at a slightly different time. it is not a
big difference.
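
to make the parallel concrete with the thread's original problem: the same
defining S-expressions that were fed to EVAL in a loop can instead be
produced by a macro at expansion time (define-self-namers is a made-up name):

```lisp
;; Expands into one DEFUN per name, all at macroexpansion time --
;; no EVAL involved, and the whole PROGN is compiled normally.
(defmacro define-self-namers (&rest names)
  `(progn
     ,@(mapcar (lambda (name) `(defun ,name () ',name)) names)))

(define-self-namers a b c)
;; (a) => A, (b) => B, (c) => C
```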

KM> Of course, McCarthy can be wrong,

I fail to see how exactly avoiding EVAL makes McCarthy wrong. we are still
using programs written in the form of S-expressions all the way, it is an
extremely useful feature.

KM>  but where is that departure from the tradition recorded?
KM>  Is there any published article
KM> against eval

you're looking for the wrong thing. there was a departure from dynamic
stuff to lexical stuff. once it happened, EVAL simply stopped being adequate
(for anything other than the REPL, RPC and stuff like that -- when you
actually need to evaluate some commands).

also you might want to read about FEXPRS vs MACROS:

http://www.nhplace.com/kent/Papers/Special-Forms.html

because that's how you were going to use eval, it seems.
Kent Pitman writes:
----
It should be clear from this discussion that FEXPR's are only safe if no 
part of their ``argument list'' is to be evaluated, and even then only when 
there is a declaration available in the environment in which they appear. 
Using FEXPR's to define control primitives will be prone to failure due to 
problems of evaluation context and due to their potential for confusing 
program-manipulating programs such as compilers and macro packages.

MACRO's on the other hand, offer a more straightforward and reliable 
approach to all of the things which we have said should be required of a 
mechanism for defining special forms. They can handle problems from implicit 
quoting to definition of control structure in a very straightforward way 
because they are functionally transparent; macro forms whose meaning is not 
understood may be expanded to produce forms whose meaning can be understood.
----

the stuff Pitman wrote about FEXPRS applies to _your_ use of EVAL.

it is perfectly fine to use EVAL for stuff like REPL or RPC, though.

KM> or it is purely Usenet/IRC thing?

no, it is not a Usenet thing, it is a common sense thing -- you know, it is 
actually hard to use a tool when it does not work.

do you think that we've lost connection with reality here?
From: Kazimir Majorinc
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <op.urandtt71cfios@kazimir-pc>
On Tue, 24 Mar 2009 00:15:04 +0100, Alex Mizrahi






>
> once more, lexical environments are superior to dynamic environments, as
> they make programs more reliable and easier to understand.

M... too dogmatic for my taste.




>
> macros work in same way -- they take S-expressions and embed it into the
> code, and later these and those S-expressions get executed. JUST LIKE IN
> EVAL'S CASE. except that macros work in slightly different time. it is  
> not a
> big difference.



There are significant differences between runtime and compile time.

Why does Photoshop process data at runtime and not at compile time?
Because the data is not known at the time the program is written. The same
goes for a program that, say, processes mathematical formulas. These
formulas are typically unknown at the time the program is written. But one
can write programs that define mathematical formulas as s-expressions, and
evaluate the formulas during runtime. Millions of these.

This is use of code as data. It cannot be done with macros. You need
eval, either the built-in one or one you develop on your own.

Macros serve a much more restricted purpose. They barely extend Lisp
syntax.
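
A minimal sketch of that runtime case, assuming the formula really does
arrive as data only while the program runs (formula-function is a made-up
name; COERCE of a lambda expression is one standard, EVAL-like mechanism
in Common Lisp):

```lisp
;; A formula that exists only as data at run time,
;; e.g. read from a file or typed in by the user:
(defvar *formula* (read-from-string "(+ (* x x) 1)"))

;; Turn the data into a callable function of x.
;; COMPILE could be used instead if the formula will be applied many times.
(defun formula-function (formula)
  (coerce `(lambda (x) ,formula) 'function))

;; (funcall (formula-function *formula*) 3) => 10
```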







> KM> Of course, McCarthy can be wrong,
>
> I fail to see how exactly avoiding EVAL makes McCarthy wrong. we are  
> still
> using programs written in the form of S-expressions all the way, it is an
> extremely useful feature.

What, you are trying to understand his sentence "Lisp system
CAN interpret or execute programs written as s-expressions."
in a trivial sense, just as a Fortran system can execute Fortran
programs, while avoiding the conclusion that he wrote about interpreting
and executing code stored as data?

It doesn't lead you very far, because in the next sentence he said
that Lisp is, in that respect, like machine code and not other
higher level languages.

McCarthy also wrote in 1980, and later in 1999 in his
"LISP - NOTES ON ITS PAST AND FUTURE", that eval is one of
the most important characteristics of Lisp, while he
mentioned macros as "fancy."








>
> also you might want to read about FEXPRS vs MACROS:
>
> http://www.nhplace.com/kent/Papers/Special-Forms.html

I have read it many times. I think not many people
read and understood what Pitman tried to say, especially
after fexprs were removed from Lisp. But yes, I think he is
largely wrong in his conclusions.






--
Blog:    http://kazimirmajorinc.blogspot.com
From: ······@nhplace.com
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <aa996c6d-0717-47b3-b164-47204a908401@f19g2000yqo.googlegroups.com>
On Mar 24, 7:26 am, "Kazimir Majorinc" <·····@email.address> wrote:

> McCarthy also wrote in 1980, and later in 1999 in his
> LISP-NOTES ON ITS PAST AND FUTURE" that eval is one of
> the most important characteristics of Lisp, while he
> mentioned macros as "fancy."
>
> > also you might want to read about FEXPRS vs MACROS:
> > http://www.nhplace.com/kent/Papers/Special-Forms.html
>
> I have read it many times. I think not many people
> read and understood what Pitman tried to say, especially
> after fexprs are removed from Lisp. But yes, I think he is
> largely wrong in his conclusions.

Can you clarify the referent of "he"/"his" in your final sentence
there?

I'm not sure I disagree with McCarthy about EVAL.  Although I don't
see macros as "fancy", I see them as just as obvious a consequence of
having a representation for programs as EVAL.

I'm not opposed to EVAL for situations like Alex suggests earlier
(REPL and RPC are the examples he cited).  Those are situations that
provoke a top-level evaluation with an expression received from
elsewhere that pretty clearly understands the evaluation environment.
But using EVAL in normal programs as a way of implementing special
forms, as we did in Maclisp with
 (defun if fexpr (body)
   (cond ((eval (car body))
          (eval (cadr body)))
         (t
          (eval (caddr body)))))
or something like that.. that's where I don't like EVAL.  Among other
things, in Maclisp, that definition did a special-binding of BODY if
run interpreted and only didn't special-bind it (instead it "compiled
away the name", which is not quite the same as "lexical binding" since
you had no closure capability) if you compiled it.  So it ran less
buggily compiled.  Lexical binding fixes some of the problems but it
can't fix the problem that IF is hard to compile, hard to code-walk,
etc. if it was defined by a user this way.  A fixed number of special
that's the problem.  And even then only in those cases where the call
to EVAL participates in the ordinary execution place as a way of
zipping in and out of quoted structure you didn't really need to have
used.
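
For contrast, a sketch of the macro analogue (MY-IF is a made-up name;
the real CL IF is of course a special operator), which the compiler can
expand and see through:

```lisp
;; Unlike the fexpr, this expands into code the compiler can analyze,
;; so lexical variables in the branches behave as expected.
(defmacro my-if (test then &optional else)
  `(cond (,test ,then)
         (t ,else)))

;; (let ((x 5)) (my-if (> x 3) 'big 'small)) => BIG
```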
From: Kazimir Majorinc
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <op.urip9agf1cfios@kazimir-pc>
On Thu, 26 Mar 2009 23:03:14 +0100, <······@nhplace.com> wrote:

>
> Can you clarify the referent of "he"/"his" in your final sentence
> there?
>

I'll try.

Fexprs shouldn't be compared generally, all fexprs vs
all macros, but rather fexprs vs equivalent macros.

(1) You wanted transparence of functionality and efficient
compilation; you couldn't get it from fexprs, but you could
from macros. However, SOME fexprs are "good": they allow the
two things you want. For example, if a program uses macro M1

     (defmacro M1 (...) <code1>)

one can construct equivalent program using "good" fexpr F1

    (deffexpr F1 (...) (eval <code1>))

where F1 inherits all the "good" properties of M1. F1 can be
explicitly fexpr-expanded. The compiler can, automatically or with
the programmer's help, decide to expand F1.

So, fexprs are as "good" as macros if equivalent macros exist.




(2) Sometimes "bad" fexprs provide functionality that macros cannot
provide that easily. For example, for lazy functions. One might dislike
"bad" fexprs. But equivalent macros do not exist.




(3) Here is where macros really hurt: if a program EVALs code
that contains macro calls, theoretically unlimited macroexpansion
time is needed, although evaluation of the expanded code can be short.

For example, under EVAL (push 0 A) needs 3-100 times more time than
(setq A (cons 0 A)) in current versions of LispWorks, Allegro, and
CLISP. With some more complicated macro, it could be MUCH worse. For
comparison, in Newlisp push is twice as fast as the other
expression.

It is hard to overemphasize how severe this problem is. So
severe that if one generates code to be evaluated during
runtime, he should actually avoid macros. And it is even
worse because in CL even basic operators, such as PUSH, are macros.

In these cases, fexprs are more efficient than equivalent macros.



Conclusion:

So, if we compare fexprs with equivalent macros, we'll
find that fexprs are always better than or equal to the
equivalent macros. And if fexprs are "bad" in the sense you
described in your article, and one doesn't find that
acceptable, he simply shouldn't use such fexprs. Whether he
should use macros instead is not even a question, because
equivalent macros do not exist.


(From an optimization point of view, expansion shouldn't
be decided on the basis of the whole language or program, but
independently for each individual occurrence of a special form.)


--
Blog:    http://kazimirmajorinc.blogspot.com
WWW:     http://www.instprog.com
From: Raffael Cavallaro
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <8d03d4b1-2a9d-44d8-bbca-7ac35a1e4dc5@o11g2000yql.googlegroups.com>
On Mar 28, 4:09 pm, "Kazimir Majorinc" <·····@email.address> wrote:

> (2) Sometimes "bad" fexprs provide functionality macros cannot
> that easily. For example, for lazy functions. One might dislike
> "bad" fexprs. But equivalent macros do not exist.

If this is a claim that one can't use Common Lisp macros to do lazy
evaluation, Marco Antoniotti's CLAZY is an existence proof to the
contrary.

 > It is hard to overemphasize how severe this problem is.

Oh I'd say you're doing a pretty good job of overemphasizing it.
From: Kazimir Majorinc
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <op.urklxrcs1cfios@kazimir-pc>
On Sun, 29 Mar 2009 06:33:01 +0200, Raffael Cavallaro  
<················@gmail.com> wrote:


>> (2) Sometimes "bad" fexprs provide functionality macros cannot
>> that easily. For example, for lazy functions. One might dislike
>> "bad" fexprs. But equivalent macros do not exist.
>
> If this is a claim that one can't use common lisp macros to do lazy
> evaluation Marco Antoniotti's CLAZY is an existence proof to the
> contrary.

It is not THAT claim.

"'bad' fexprs provide functionality macros cannot *that easily*",

Where 'bad' is described in the context of the post related to Kent  
Pitman's article.

--
Blog:    http://kazimirmajorinc.blogspot.com
WWW:     http://www.instprog.com
From: Kazimir Majorinc
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <op.urt17qsa1cfios@kazimir-pc>
On Sun, 29 Mar 2009 06:33:01 +0200, Raffael Cavallaro


>
>  > It is hard to overemphasize how severe this problem is.
>
> Oh I'd say you're doing a pretty good job of overemphasizing it.


I described it in more detail here:

http://kazimirmajorinc.blogspot.com/2009/04/on-macro-expansion-evaluation-and.html

Kent Pitman, I'd like to hear your opinion specifically,
since the main claim of the post refers to your argument pro
macros. But of course, it is a public discussion place.

--
Blog:    http://kazimirmajorinc.blogspot.com
WWW:     http://www.instprog.com
From: Thomas A. Russ
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <ymihc155i07.fsf@blackcat.isi.edu>
"Kazimir Majorinc" <·····@email.address> writes:

> On Sun, 29 Mar 2009 06:33:01 +0200, Raffael Cavallaro
> 
> 
> >
> >  > It is hard to overemphasize how severe this problem is.
> >
> > Oh I'd say you're doing a pretty good job of overemphasizing it.
> 
> 
> I described it in more details here:
> 
> http://kazimirmajorinc.blogspot.com/2009/04/on-macro-expansion-evaluation-and.html

Well, one can cleanly separate the evaluation control element and the
functional element of FEXPRs fairly simply.  Consider the following,
much faster implementation of your example in Common Lisp:

In the following, I break up the parts, for convenience, into a function
and a macro wrapper.  The function could always be in-lined in the macro
if desired:

(defun at-least-fn (n forms)
  (if (zerop n)
      t
      (loop for f in forms
            when (eval f) do (decf n)
            when (zerop n) return t
            finally (return nil))))

(defmacro at-least (n &rest es)
   `(at-least-fn ,n ',es))

If you try this formulation, I'm sure you'll find it much, much faster.
That's because there really isn't much that needs to be done in the
macro expansion, and for one-off applications it would be a mistake to
try to compute a complicated unrolling of the loop each time.

So the only thing that you need a macro for is to control the evaluation
of the arguments to what is essentially a function.
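
A quick sanity check of this formulation (assuming the AT-LEAST-FN and
AT-LEAST definitions above; note that because the forms are handed to
EVAL, they are evaluated in a null lexical environment):

```lisp
;; Using the AT-LEAST-FN / AT-LEAST definitions above:
(at-least 2 (= 1 1) (= 1 2) (= 2 2))  ; => T, two of the three forms are true
(at-least 3 (= 1 1) (= 1 2) (= 2 2))  ; => NIL, only two are true

;; Because the forms go through EVAL, lexical bindings are invisible:
;; (let ((x 1)) (at-least 1 (= x 1))) signals an unbound-variable error.
```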

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Raffael Cavallaro
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <78d8bb04-5a68-4d50-892d-b9a1edc83778@q16g2000yqg.googlegroups.com>
On Apr 3, 9:23 pm, ····@sevak.isi.edu (Thomas A. Russ) wrote:

> If you try this formulation, I'm sure you'll find it much, much faster.

In fact, by separating it this way, under CCL the test case runs twice
as fast as with newlisp. If you eliminate the macro by quoting the
expression list and just calling the function, it becomes 4x as fast
as newlisp.
From: Kazimir Majorinc
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <op.uruou8ov1cfios@kazimir-pc>
On Sat, 04 Apr 2009 03:23:36 +0200, Thomas A. Russ <···@sevak.isi.edu>  
wrote:

> (defun at-least-fn (n forms)
>   (if (zerop n)
>       t
>       (loop for f in forms
>             when (eval f) do (decf n)
>             when (zerop n) return t
>             finally (return nil))))

> (defmacro at-least (n &rest es)
>    `(at-least-fn ,n ',es))

> If you try this formulation, I'm sure you'll find it much, much faster.
> That's because there really isn't much that needs to be done in the
> macro expansion, and for one-off applications it would be a mistake to
> try to compute a complicated unrolling of the loop each time.
> So the only thing that you need a macro for is to control the evaluation
> of the arguments to what is essentially a function.



Yes, in my tests, 50%-1700% slower than Newlisp on Windows,
which is a significant improvement over the other two CL versions.

However, it doesn't work as a generalized version of the macro OR
any more. That was the starting condition of the problem.
From my first post, the test is:

;; Both Newlisp and Common Lisp
(let ((x 1) (y 2) (z 3) (n 3))
    (print (at-least n
                  (at-least (- n 1) (= x 7) (= y 2) (= z 3))
                  (at-least (- n n) nil nil nil nil)
                  (at-least (* 1 z) 1 (= 2 2) (let ((z 100))
                                                   (= z 1000))))))


--
Blog:    http://kazimirmajorinc.blogspot.com
WWW:     http://www.instprog.com
From: Raffael Cavallaro
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <d58fc1c2-0e87-4bbd-b85e-e6b5ac7c5215@u8g2000yqn.googlegroups.com>
On Apr 4, 3:10 am, "Kazimir Majorinc" <·····@email.address> wrote:

no matter how many times you go round in circles here or on your
private blog playground you can't change these simple facts:

1. newlisp's so called macros are broken by default.

2. it doesn't matter how quickly you can eval one-offs.

Your tests mean nothing because they're self-contradictory. If you're
going to run code once only, how long it takes to macroexpand doesn't
matter. If you're going to run code many times, you want it to be
compiled, not interpreted, and for that you want real macros, not
fexprs and an interpreter.

newlisp is a broken, slow language.
From: Raffael Cavallaro
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <449ba36f-d30a-4e6b-ae2d-f5f333d38dd6@o6g2000yql.googlegroups.com>
On Apr 3, 7:00 pm, "Kazimir Majorinc" <·····@email.address> wrote:

>
> I described it in more details here:
>
> http://kazimirmajorinc.blogspot.com/2009/04/on-macro-expansion-evalua...
>

You're making the mistake of thinking that common lisp code generated
at runtime will be evaled. It is just as likely, or more likely, if the
code is generated by common lisp macros, that the code will be
*compiled* and funcalled/applied.

You're overemphasizing this because fexprs only have an advantage if
both of these things are true:

1. you generate code at runtime
2. you only call that code a small number of times

Neither of these is the common case, and both *together* are even more
uncommon.

IOW, fexprs optimize for the uncommon case not the common case. In the
common case, where code is available before runtime and/or code will
be run many, many times, real macros and compilation are far faster.

Finally, your hand waving notwithstanding, newlisp's so-called macros
are broken by default as I demonstrated in this thread.
From: ······@nhplace.com
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <5d280cd4-ea66-4949-9e82-e52d0937db6c@l25g2000vba.googlegroups.com>
On Apr 3, 7:00 pm, "Kazimir Majorinc" <·····@email.address> wrote:

> http://kazimirmajorinc.blogspot.com/2009/04/on-macro-expansion-evalua...
>
> KentPitman, I'd like to hear your opinion specifically
> since main claim of the post refers on your argument pro
> macros. But of course - it is public discussion place.

You're calling EVAL and then doing timing.  I have to assume that
means you don't understand the difference between compile-time costs
and runtime costs.  EVAL does both parts.  That is, however it's
interpreted--lazily or ambitiously--it must necessarily do a
macroexpansion.  That should be factored out, and cannot be, precisely
because of the opacity to the compiler of what you are doing.  What
you'd like the compiler to rewrite this to is:
 (let* ((form '(at-least ...)) (expansion (macroexpand form)))
   (dotimes (i 177000) ;strange number
     (eval expansion)))
Of course, you can claim that your application needs to make these on
the fly, but you can't explain why it's ever so important to be
using EVAL.  I'm not so sure it is.  Closures would work better.
Moreover, what you are implicitly making an argument for is the idea
that macroexpansion has a runtime cost if you do it at runtime; but
that's of course true.  The answer is: don't do it at runtime.  For
the set of things for which you can do what you're doing, EVAL must
necessarily be called, and you're even presuming that EVAL is
implemented a certain way.  It might do
 (funcall (compile nil '(lambda ...)))
instead, and doing that COMPILE at runtime might be expensive, too.
But the main reason that you don't want AT-LEAST-N implemented as a
FEXPR isn't that it's not fast (no one ever argued that fexprs were
slow); it's that it's not semantically meaningful.  People can't write
code that interacts with it unless the set of special forms is fixed
and can be code-walked.  Macros offer an extension mechanism that can
be code-walked by a party that doesn't know about your syntactic
extensions.  It used to happen in MACLISP [see www.maclisp.info/pitmanual]
that some operators would be compiled in a way that had to do what you
were saying. This veritably forced all variables to be dynamic (which,
in compiled code, Maclisp didn't have by default--it was a semantic
mess), and so if you had
 (defun foo (x) (some-op (blah x)))
the compiler, when it couldn't compile some-op, had to turn it into
 (defun foo (x) (|SOME-OP FEXPR| '((blah x))))
but that meant that inside the SOME-OP handler, the lexical binding of x
wasn't visible.  That makes lexical variables useless or unreliable,
depending on how you look at it.

Also, beyond all of this, calling a fexpr a macro confuses things.
Macros are, in lisp, things that are expandable.  By definition, a
fexpr is not expandable.  At the 1980 Lisp conference, when I
presented the Special Forms paper, someone asked about this in the Q&A
and I didn't know the answer. I said simply "I thought hard about this
and it didn't seem like it would work."  Someone (Joachim Laubsch, I
think it was), came up to me after the talk and said "the answer you
were looking for was 'the halting problem.'"  And indeed that's
right.  The problem is that once you're left with a Turing machine
(which is what a fexpr is) executing arbitrary data (the body of the
fexpr), it's easy to show you can't always know what it will do in the
general case statically.  And that's bad.  The inability to know what
a form does tends to imply its non-compilability in the general case.
So yes,  you can patch that with compiler optimizers (which will look
a lot like macros) and then you'll be back where Lisp is, saying that
any macro can be backed up with a special form if the implementation
wants to allow it.  But that the macro is required, the special form
is not.

Calling fexprs macros confuses the sense.  In some other languages
like Teco or PERL, this may indeed count as macro processing.  But
it's not what lispers mean, and you'll confuse everyone by claiming
you're doing macros when you do fexprs because you'll falsify a lot of
perfectly good true statements by changing the definition of the word
after the statement has been made in good-faith understanding of the
correct and advertised meaning of the word.

Fexprs don't cause an individual point problem in a language.  Their
problem is how they behave as part of a complex ecology.  Languages
are ecologies, and the ecology of Lisp, which involves a lot of macro
processing, doesn't coexist well with them.  Some other languages,
which are only interpreted and never compiled, like for example TCL,
might do better with your strategy.

As Tobias is fond of quoting me about, and I'm glad he's picked up
this particular lesson, because it's most important: you never get
anything for free--life is all about trade-offs.
From: Kazimir Majorinc
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <op.ur2yofre1cfios@kazimir-pc>
Thanks for your answer.

--
Blog:    http://kazimirmajorinc.blogspot.com
WWW:     http://www.instprog.com
From: Alex Mizrahi
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <49ddc0b7$0$90273$14726298@news.sunsite.dk>
 KM> I described it in more details here:
 KM> 
http://kazimirmajorinc.blogspot.com/2009/04/on-macro-expansion-evaluation-and.html

 KM> However, it is not discussed what happens if same code
 KM> has to evaluated only once or few times.
 KM> It is not unusual  if the code is generated during runtime.

do you know any real world situation where you actually need to generate
code in a loop through millions and millions of iterations and cannot
parametrize it in any way?
i'd say it is unusual to the point that i've never seen anything like this.

it is a completely different thing if you generate some code at runtime
and then run it through many iterations.
From: Alex Mizrahi
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <49cce17d$0$90273$14726298@news.sunsite.dk>
??>> once more, lexical environments are superior to dynamic environments,
??>> as they make programs more reliable and easier to understand.

KM> M... too dogmatic for my taste.

huh? with lexical variables you'll be notified about undefined variables at
compile time; with dynamically-scoped variables, at run time, and moreover
only when you execute that branch of the program. thus, lexical environments
allow you to reliably detect a large number of errors before you run the
program, while with dynamic variables you can't be so sure.

do you think that detecting errors is not important? reliability is the
biggest problem of software development, and if we can improve reliability 
just a small bit -- it's a big win.

another issue is that with dynamic variables the behaviour of a program
depends on how it is called. so, it could be possible that if function foo
is called from bar it works correctly, but if it is called from baz, it
does not work. again, with dynamic state you have to test all possible
combinations, which might not be possible at all, while with lexical
variables it just works.
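
a minimal sketch of that caller-dependence (the names *MODE*, RENDER, BAR
and BAZ are made up for illustration):

```lisp
(defvar *mode* :plain)   ; dynamically scoped (special) variable

(defun render (s)
  (if (eq *mode* :loud) (string-upcase s) s))

(defun bar (s) (render s))                         ; (bar "hi") => "hi"
(defun baz (s) (let ((*mode* :loud)) (render s)))  ; (baz "hi") => "HI"

;; RENDER behaves differently depending on its caller, even though
;; RENDER's own code never changed -- the dynamic-state hazard.
```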

another important issue in software development is complexity -- at some
point a program becomes so complex that it cannot be developed further.
lexical variables assist here a lot -- they are fully encapsulated inside
functions, so you do not need to know the interactions between the
internals of different functions; you can concentrate on one function at a
time. with dynamic state, in general, you must know the whole program to
understand its behaviour in full, and at some point that becomes
prohibitive.

so, i've presented clear arguments. why do you discard them and call me
dogmatic?
do you think that program reliability is not important? or that people
should not write programs longer than 100 lines of code?

KM> There are significant differences between runtime and compile time.
KM> Why Photoshop process data during runtime and not during compile time?
KM> Because data is not known in the time program is written.
...
KM> It is use of the code as data. It cannot be done with macros. You need
KM> eval, one built in, or one you develop on your own.

yep, when dynamic evaluation is a part of the program description, you need
EVAL or something similar (like COMPILE, if the function is time-consuming,
or you're going to use it more than once).

but, on the other hand, if everything IS known at compile time, it makes
sense to do the transformation at compile time. using eval in this case
sucks, don't you agree? :)

KM> Macros serve to much more restricted purpose. They barely extend Lisp
KM> syntax.

well, you know, 99.9% of the time i'm not writing a REPL or a math formula
evaluator; i'm writing programs where all the code is known at compile
time. and macros are extremely helpful there.

i invite you to go and count how many evals in your code actually evaluate
something that is known only at runtime, preferably only in practical and
non-trivial programs rather than in weird experimentations. and then count
how many evals you have due to fexpr weirdness.

KM> What, you are trying to understand his sentence "Lisp system
KM> CAN interpret or execute programs written as s-expressions."
KM> in trivial sense,
KM>  just like Fortran system can execute Fortran programs,
KM>  but avoiding the conclusion that he wrote about interpreting and
KM> executing code stored as data?

there are two ways to interpret it:

 * metaprogramming, that is, programs that write programs and then
   result gets executed
 * evaluation of code that is known only in runtime

note that if you speak about runtime evaluation, it is not important
how exactly the data is represented -- it might be in the form of strings,
or in the form of S-expressions, or whatever; what is important is that
there is the possibility to execute something that was entered dynamically
or generated somehow.

but if you speak about metaprogramming, the data representation format IS
important, because it defines how feasible metaprogramming is -- it is much
more cumbersome to work with strings than with lists that are essentially
the program's AST.

McCarthy explicitly emphasised usage of a data representation format that
is easy to manipulate -- S-expressions -- which is why i think he was
thinking more about metaprogramming possibilities than about mere runtime
evaluation. well, maybe he meant a combination of metaprogramming and
runtime evaluation, kinda self-modifying code; as he was working on AI
stuff, it was considered cool in those days. (but nowadays it is clear that
it is more cool than practical.)

KM> It doesn't lead you very far, because in the next sentence he said
KM> that Lisp is, in that respect, like machine code and not other
KM> higher level languages.

let's check what he said: "Thus, like machine language, and unlike most
other high level languages, it can be used to generate programs for further 
executions."

this is clearly about metaprogramming, programs that write programs.

KM> McCarthy also wrote in 1980, and later in 1999 in his
KM> LISP-NOTES ON ITS PAST AND FUTURE" that eval is one of
KM> the most important characteristics of Lisp, while he
KM> mentioned macros as "fancy."

well, maybe EVAL was important in the old days, when it was a unique feature,
but now almost all shitty interpreters, like PHP and JS, have EVAL,
so it is not cool anymore.

metaprogramming is still pretty cool; very few languages have fully
functional metaprogramming.

 KM> I have read it many times. I think not many people
 KM> read and understood what Pitman tried to say, especially
 KM> after fexprs are removed from Lisp. But yes, I think he is
 KM> largely wrong in his conclusions.

how exactly? as far as i know, FEXPRs are still not safe in Newlisp --
they suffer from variable name clashes, don't they? so the part about
unsafety is correct. and they can't be analysed. so his conclusions
are largely true. 
From: Alex Mizrahi
Subject: FEXPRs
Date: 
Message-ID: <49ccefaa$0$90263$14726298@news.sunsite.dk>
Possibly i'm missing something about Newlisp's FEXPRs..
Can you please write something similar to a macro below in Newlisp?

(defmacro with-slot-accessors ((&rest slots) object &body body)
    (let ((object-var (gensym)))
      `(let ((,object-var ,object))
         (flet ,(loop for (fn sn) in slots
                      collecting `(,fn () (slot-value ,object-var ',sn)))
           ,@body))))

It can be used like this:

CL-USER> (defclass thing () ((x :initarg :x) (y :initarg :y)))

CL-USER>
(with-slot-accessors ((x1 x) (y1 y)) (make-instance 'thing :x 1 :y 2)
    (with-slot-accessors ((x2 x) (y2 y)) (make-instance 'thing :x 3 :y 4)
      (list (x1) (y1) (x2) (y2))))

(1 2 3 4)

(Normally i'd use symbol-macrolet, but i don't know if newlisp has
symbol-macros.)
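
[for comparison, the SYMBOL-MACROLET variant alluded to above might look like
this -- a sketch, with a starred name so it doesn't clash with the FLET
version:]

    (defmacro with-slot-accessors* ((&rest slots) object &body body)
      (let ((object-var (gensym)))
        `(let ((,object-var ,object))
           ;; each name expands in place to a SLOT-VALUE access,
           ;; so the body writes x1 rather than (x1)
           (symbol-macrolet
               ,(loop for (name slot) in slots
                      collecting `(,name (slot-value ,object-var ',slot)))
             ,@body))))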

As I remember, you proposed a special variable naming scheme to avoid clashes,
but it obviously won't work here, as both FEXPR invocations will share the
same variable names. Also, you wrote that contexts are "lexical scope
features", but looking at the documentation I fail to understand how exactly
contexts are lexical scope features, as they look just like packages in CL.
And a third option would be to do some substitutions in the macro code itself
on each macro invocation, but that is fucked beyond belief, as it amounts to
running an interpreter in an interpreter in an interpreter. 
From: Raffael Cavallaro
Subject: Re: FEXPRs
Date: 
Message-ID: <affb34d7-1fa6-42c3-bd92-e14dfa7490f6@n17g2000vba.googlegroups.com>
On Mar 27, 11:24 am, "Alex Mizrahi" <········@users.sourceforge.net>
wrote:

> As I remember you've proposed to use special variable naming scheme to avoid
> clashes, but it, obviously, won't work here, as both FEXPR invocations will
> share same variable names. Also you've wrote about contexts saying that they
> are "lexical scope features", but looking at documentation I fail to
> understand how exactly contexts are lexical scope features as they look just
> like packages in CL.

Right. The lack of lexical scope makes the nesting of macro calls
problematic if you're relying on a naming scheme. In the general case
it's impossible to know when you're nesting macros; if you make one
library call when you're inside another library call both could be
using the same macro and you'd be nesting that macro without knowing
it. You'd have to know the source code of every single library you use
intimately. ouch.
From: ······@nhplace.com
Subject: Re: FEXPRs
Date: 
Message-ID: <6162390e-e649-47fd-bc0f-f193021e222c@r36g2000vbr.googlegroups.com>
On Mar 27, 11:48 am, Raffael Cavallaro <················@gmail.com>
wrote:
> ... The lack of lexical scope makes the nesting of macro calls
> problematic if you're relying on a naming scheme. In the general case
> it's impossible to know when you're nesting macros; if you make one
> library call when you're inside another library call both could be
> using the same macro and you'd be nesting that macro without knowing
> it. You'd have to know the source code of every single library you use
> intimately. ouch.

This is more of a theoretical problem than a practical one.

For functions and macros in your own module, you can always know that
they don't run into this problem because you wrote them personally or
you cooperate in a closed group that uses a shared set of standards.
(If you're sharing a package with someone who doesn't have a tight
agreement with you about programming conventions, you're in all kinds
of problems, not just this.)

For functions and macros outside your own package, they should not be
rebinding variables or functions/macros in your package unless you've
told them it's ok in the documentation.  (This follows from an
analysis similar to that for the Prisoner's Dilemma, in my view--
cooperation without communication based on optimizing general good.  I
don't think there's a requirement that you not do this, but I think
most people would tell you it was silly to be binding others'
variables without knowing the effect.)  And so no conflicts can happen
here, either.

And so in practice the conflict does not occur.
From: Raffael Cavallaro
Subject: Re: FEXPRs
Date: 
Message-ID: <3b907f9c-5371-4b23-874b-149230a883ae@z14g2000yqa.googlegroups.com>
On Apr 6, 12:36 pm, ······@nhplace.com wrote:
> On Mar 27, 11:48 am, Raffael Cavallaro <················@gmail.com>
> wrote:
>
> > ... The lack of lexical scope makes the nesting of macro calls
> > problematic if you're relying on a naming scheme. In the general case
> > it's impossible to know when you're nesting macros; if you make one
> > library call when you're inside another library call both could be
> > using the same macro and you'd be nesting that macro without knowing
> > it. You'd have to know the source code of every single library you use
> > intimately. ouch.
>
> This is more of a theoretical problem than a practical one.
>
> For functions and macros in your own module, you can always know that
> they don't run into this problem because you wrote them personally or
> you cooperate in a closed group that uses a shared set of standards.
> (If you're sharing a package with someone who doesn't have a tight
> agreement with you about programming conventions, you're in all kinds
> of problems, not just this.)
>
> For functions and macros outside your own package, they should not be
> rebinding variables or functions/macros in your package unless you've
> told them it's ok in the documentation.  (This follows from an
> analysis similar to that for the Prisoner's Dilemma, in my view--
> cooperation without communication based on optimizing general good.  I
> don't think there's a requirement that you not do this, but I think
> most people would tell you it was silly to be binding others'
> variables without knowing the effect.)  And so no conflicts can happen
> here, either.
>
> And so in practice the conflict does not occur.

The example given didn't use namespaces/packages but merely a naming
convention. As you know, the chances of collisions are much more real
when the nested invocations are in the same namespace.
From: ······@nhplace.com
Subject: Re: FEXPRs
Date: 
Message-ID: <ccbc98fb-e9e1-444c-b1ab-56d9e24df68f@j8g2000yql.googlegroups.com>
On Apr 6, 12:56 pm, Raffael Cavallaro <················@gmail.com>
wrote:
> On Apr 6, 12:36 pm, ······@nhplace.com wrote:
>
> > And so in practice the conflict does not occur.
>
> The example given didn't use namespaces/packages but merely a naming
> convention. As you know, the chances of collisions are much more real
> when the nested invocations are in the same namespace.

Sure.  But my point is that the only place where namespaces/packages
should not be used is where the individual or a close associate is
capable of enforcing naming conventions.  It can't be an accident.

I even protect myself from myself-on-other-days with packages, for this
and other reasons.

In effect, why is this different than saying you can get in trouble by
doing
 (let ((x 3))
    ...
    (let ((x 4))
      ... use x ...))

and claiming that the use of x might be confused about which x it's
using?

The answer is "it's all the same code, and the person that wrote it is
responsible for knowing."

It doesn't seem like the same thing maybe.  But I think it is.
From: Pillsy
Subject: Re: FEXPRs
Date: 
Message-ID: <fbe402f4-1c8a-47c7-84fa-6555e9bc0a03@y7g2000yqa.googlegroups.com>
On Apr 10, 2:38 pm, ······@nhplace.com wrote:
[...]
> The answer is "it's all the same code, and the person that wrote it is
> responsible for knowing."

If my experience programming has taught me anything, it's that different
people are comfortable taking responsibility for knowing different things.
Cf. the nth iteration of the "static vs. dynamic typing" argument taking
place elsethread.

Cheers,
Pillsy
From: Raffael Cavallaro
Subject: Re: FEXPRs
Date: 
Message-ID: <c31c4385-c51b-463a-b33d-62191be6dc05@w9g2000yqa.googlegroups.com>
On Apr 10, 2:38 pm, ······@nhplace.com wrote:

> Sure.  But my point is that the only place where namespaces/packages
> should not be used is where the individual or a close associate is
> capable of enforcing naming conventions.

Absolutely. But again, I wasn't referring to what you believe, but to
what Kazimir M. wrote. He wrote that newlisp's so-called macros were
simpler, in part, because one could write them by simply using a
naming convention, without gensym (which the language doesn't
provide), without lexical scope (which the language doesn't provide),
and without namespaces (which the language does provide, but which K M
said were unnecessary if one uses a naming convention). I was simply
pointing out (along with others) that newlisp's so-called macros
(really fexprs) become significantly more complex, when one admits
that naming conventions won't cut it, and one uses namespaces and
gensym.
From: ······@nhplace.com
Subject: Re: FEXPRs
Date: 
Message-ID: <565ed51f-a534-41d6-9931-fe105d63a02d@37g2000yqp.googlegroups.com>
On Apr 10, 4:00 pm, Raffael Cavallaro <················@gmail.com>
wrote:
> On Apr 10, 2:38 pm, ······@nhplace.com wrote:
>
> > Sure.  But my point is that the only place where namespaces/packages
> > should not be used is where the individual or a close associate is
> > capable of enforcing naming conventions.
>
> Absolutely. But again, I wasn't referring to what you believe, but to
> what Kazimir M. wrote. He wrote that newlisp's so-called macros were
> simpler, in part, because one could write them by simply using a
> naming convention, without gensym (which the language doesn't
> provide), without lexical scope (which the language doesn't provide),
> and without namespaces (which the language does provide, but which K M
> said were unnecessary if one uses a naming convention). I was simply
> pointing out (along with others) that newlisp's so-called macros
> (really fexprs) become significantly more complex, when one admits
> that naming conventions won't cut it, and one uses namespaces and
> gensym.

I haven't had time to follow all the conversation but if you're saying
that hygiene systems can only function based on static knowledge and
fexprs are opaque to static knowledge, so they fight it, then indeed
I'm in agreement with that.  Thanks for the summary.
From: Alex Mizrahi
Subject: Re: FEXPRs
Date: 
Message-ID: <49ddbe9c$0$90273$14726298@news.sunsite.dk>
 p> For functions and macros in your own module, you can always know that
 p> they don't run into this problem because you wrote them personally or
 p> you cooperate in a closed group that uses a shared set of standards.

No, no naming scheme or convention can save you when you need to handle
nested FEXPR invocations (*): you need lexical variables (or similar
semantics) to handle this. You say Maclisp "compiled away" variable
names; that's one of the ways to deal with the problem.

Newlisp does not have this functionality; to handle such situations one needs
a really weird workaround. Say, to implement a thing like this:

;; introduces a local function called fn that executes form value when called.
(defmacro with-fn (fn value &body body)
   `(flet ((,fn () ,value)) ,@body))

one can't just write (this is not valid Newlisp code but something
semantically similar) (*):

(defexpf with-fn (fn value . body)
   (let ((fn (lambda () (eval value))))
    (eval body)))

but one needs to run the FEXPR code through a subst first:

(defexpr with-fn (fn value . body)
  (eval (subst value 'value
               `(let ((fn (lambda () (eval value))))
                  (eval body)))))

This is like manually compiling away the name: value is pasted in place of
the name, so there is no name clash anymore, but this stuff is weird and slow
like hell, and i think it is very error prone, as it is hard to do the
substitution correctly. Kazimir adds some gensyms on top of this, so there
are layers upon layers of complexity, and in order to execute just a single
FEXPR it has to generate a bunch of symbols and do substitutions in multiple
rounds.

So, in order to have safe FEXPRs one either has to:
 * introduce the concept of "compiled away" variables
 * introduce lexical variables, but in this case FEXPRs would be nearly
   useless, as they cannot operate on lexical scope
 * go through weird substitution rounds comparable in complexity to writing
   a sub-interpreter.

This makes me think that FEXPRs are conceptually broken because it is
not possible to make them work safely without introducing additional
concepts or complexities.

(I must say I have no FEXPR programming experience so this could be wrong.
It is just my current understanding of the problem.)

 p> (If you're sharing a package with someone who doesn't have a tight
 p> agreement with you about programming conventions, you're in all kinds
 p> of problems, not just this.)

I think it is a more quantitative question than a qualitative one -- with
FEXPRs you need MORE things to watch and communicate, and that increases
development complexity, hence it is a bad thing.

*: If defined in the simple way, the following won't work:
  (with-fn a 3
   (with-fn b 4
    (print a)))

This example will print 4 instead of 3 because the special variable "value"
was rebound. 
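
[for contrast, under the CL DEFMACRO version of with-fn above, the nested
example (written with calls, since FLET binds functions) expands into plain
lexical FLETs, so no capture is possible -- a sketch:]

    ;; Rough expansion of
    ;;   (with-fn a 3 (with-fn b 4 (print (a))))
    ;; under the CL DEFMACRO with-fn.  Each FLET is a separate
    ;; lexical binding, so (a) still returns 3.
    (flet ((a () 3))
      (flet ((b () 4))
        (print (a))))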
From: Mark Wooding
Subject: Re: FEXPRs
Date: 
Message-ID: <87myaoersf.fsf.mdw@metalzone.distorted.org.uk>
"Alex Mizrahi" <········@users.sourceforge.net> writes:

> Newlisp does not have this functionality, to handle such situations they
> need to make a really weird workaround. Say, if they need to implement thing
> like this:
>
> ;; introduces a local function called fn that executes form value when called.
> (defmacro with-fn (fn value &body body)
>    `(flet ((,fn () ,value)) ,@body))
>
> then can't just write (this is not a valid Newlisp code but something 
> semantically similar) (*):
>
> (defexpf with-fn (fn value . body)
>    (let ((fn (lambda () (eval value))))
>     (eval body)))

This certainly doesn't seem right.  In particular, it binds the symbol
FN rather than the symbol named by FN.

> but they need to run FEXPR code through a subst first:
>
> (defexpr with-fn (fn value . body)
>   (eval (subst value 'value
>                 `(let ((fn (lambda () (eval value))))
>                    (eval body))))

And this seems even more wrong.  Now VALUE is evaluated twice -- once by
your outer EVAL and once by the inner one.

Besides, you don't need more than one layer of evaluation here.

        (defexpr with-fn (fn value . body)
          (eval `(let ((,fn (lambda () ,value)))
                   ,@body)))

And this is true of every macro, of course.  Indeed, we can even write
this:

        (defexpr defmacro (name bvl . body)
          (eval `(defexpr ,name ,bvl
                   (eval (progn ,@body)))))

(You can make this do displacement if you really care.)  Notice in
particular that variable bindings made by the BODY are undone by the
time we invoke EVAL, so this doesn't introduce unexpected name bindings.

DEFMACRO is only slightly easier to work with -- compare the above with

        (defmacro defmacro (name bvl . body)
          `(defexpr ,name ,bvl
             (eval (progn ,@body))))

which is what I'd have written if I'd had DEFMACRO already lying around.

None of this means that I think FEXPRs are a good thing.

-- [mdw]
From: ······@nhplace.com
Subject: Re: FEXPRs
Date: 
Message-ID: <41ad62fa-5e8b-4b98-a75f-288bef3a55c1@s20g2000yqh.googlegroups.com>
On Apr 9, 5:23 am, "Alex Mizrahi" <········@users.sourceforge.net>
wrote:
> This makes me think that FEXPRs are conceptually broken because it is
> not possible to make them work safely without introducing additional
> concepts or complexities.

Mostly this is right, but with that acknowledged I'll add a couple
extra notes below.

> (I must say I have no FEXPR programming experience so this could be wrong.
> It is just my current understanding of the problem.)

FEXPRs are nothing more than syntactic sugar for functions that take a
single quoted list as an argument.  If you've done that, you've used
FEXPRs.  Count up the number of times you've suggested to someone that
his life can be made better by passing a quoted list [and sometimes
therefore using EVAL in the callee, though some fexprs just want
quoted data and don't use eval--such as TRACE, where (trace foo) is
syntactic sugar for (*trace '(foo))] and you'll know what the
experience of using FEXPRs is. :)
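
[That sugar can be sketched in CL itself; the names below are hypothetical,
and the real TRACE machinery is of course implementation-specific:]

    ;; Sketch of "fexpr as sugar": MY-TRACE hands its unevaluated
    ;; argument list to an ordinary function, mirroring how
    ;; (trace foo) is sugar for (*trace '(foo)).
    (defun %trace (names)                 ; hypothetical helper
      (dolist (n names)
        (format t "~&now tracing ~S" n)))

    (defmacro my-trace (&rest names)
      `(%trace ',names))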

That said, though, I once made a Lisp that leaned heavily on fexprs
and did fine with it because it had no compiler.  There is some reason
to believe (and I've heard it suggested) that if you make all
environments first-class and if you observe that compilation MOSTLY gets you
an O(1) speedup, then it may be worth just sacrificing the constant
factor of compiling at all and having only fexprs.  That wouldn't be
the language we have now.  But it would satisfy a niche.

So rather than saying they're not useful, I'd say they're really niche
and not general purpose, and they create an ecological burden on the
whole language such that the whole application probably has to be
niche. But for some scripting situations it might be good.  I don't
mean to stomp on them so completely that they never get thought of
again. But I think for the general purpose compilable language that CL
seeks to be, they have no place other than in bootstrapping the
initial set of special forms or in providing interpreter hints for
some macros that don't want to expand if a person has access to and
knowledge of interpreter data structures beyond the scope of CL.
From: Pillsy
Subject: Re: FEXPRs
Date: 
Message-ID: <ebe4663c-79f2-402a-9013-8cf97c4325c4@v15g2000yqn.googlegroups.com>
On Apr 10, 2:47 pm, ······@nhplace.com wrote:
[...]
> FEXPRs are nothing more than syntactic sugar for functions that take a
> single quoted list as an argument.  If you've done that, you've used
> FEXPRs.  Count up the number of times you've suggested to someone that
> his life can be made better by passing a quoted list [and sometimes
> therefore using EVAL in the callee, though some fexprs just want
> quoted data and don't use eval--such as TRACE, where (trace foo) is
> syntactic sugar for (*trace '(foo))] and you'll know what the
> experience of using FEXPRs is. :)
[...]
> So rather than saying they're not useful, I'd say they're really niche
> and not general purpose and they create an ecological burden on the
> whole language such that the whole application probably has to be
> niche. But for some scripting situations it might be good.

This is actually a pretty good description of how things are done in
Mathematica[1]; the reason is that there it's very, very common to want to
manipulate an expression in some circumstances (since it's a computer
algebra system) and to evaluate it in others. The balance between
whether you usually use code as data versus usually using code as code
is very different from what you want in, say, Common Lisp.

Cheers,
Pillsy

[1] Where they're called functions with the "Hold" attribute. My day
job involves Mathematica enough, and it's got enough in common with
Lisp that I find it's a useful example for conversations here[2].

[2] Although, depressingly, I find that the examples are usually of
the, "Look what Common Lisp got right and Mathematica totally
botched." This is a happy exception.
From: ······@nhplace.com
Subject: Re: FEXPRs
Date: 
Message-ID: <e743d510-5b2f-4e2b-b0a5-554882718eda@r37g2000yqn.googlegroups.com>
On Apr 10, 5:57 pm, Pillsy <·········@gmail.com> wrote:
> On Apr 10, 2:47 pm, ······@nhplace.com wrote:
> [...]
>
> > FEXPRs are nothing more than syntactic sugar for functions that take a
> > single quoted list as an argument.  If you've done that, you've used
> > FEXPRs.  Count up the number of times you've suggested to someone that
> > his life can be made better by passing a quoted list [and sometimes
> > therefore using EVAL in the callee, though some fexprs just want
> > quoted data and don't use eval--such as TRACE, where (trace foo) is
> > syntactic sugar for (*trace '(foo))] and you'll know what the
> > experience of using FEXPRs is. :)
> [...]
> > So rather than saying they're not useful, I'd say they're really niche
> > and not general purpose and they create an ecological burden on the
> > whole language such that the whole application probably has to be
> > niche. But for some scripting situations it might be good.
>
> This is actually a pretty good description of how things are done in
> Mathematica[1]; the reason there is it's very, very common to want to
> manipulate an expression in some circumstances (since it's a computer
> algebra system) and to evaluate it in others. The balance between
> whether you usually use code as data versus usually using code as code
> is very different from what you want in, say, Common Lisp.
>
> Cheers,
> Pillsy
>
> [1] Where they're called functions with the "Hold" attribute. My day
> job involves Mathematica enough, and it's got enough in common with
> Lisp that I find it's a useful example for conversations here[2].
>
> [2] Although, depressingly, I find that the examples are usually of
> the, "Look what Common Lisp got right and Mathematica totally
> botched." This is a happy exception.

I'm familiar with several other symbolic algebra systems, though not
Mathematica particularly.  The prevalence of a hold capability or of
an "inert", "unapplied", or "noun form" of an expression is common in
some form or another.  The noun form of an integral that is returned
from a symbolic algebra inquiry to actively integrate something for
which the answer is not known is very different from a Lisp expression
returning me a list of symbols.  Those symbols in the latter case
might not be code at all, even though in some cases they might look
like it.

But an important difference in such systems is that it's well-understood
that the code still _means_ what it means whether quoted or not.  In Lisp,
when symbols are quoted, they might not be code any more.  Things that look
like binding forms are not binding forms delayed; they are potentially just
a list of my friends' names or the random fragments of fuzz taken from my
pockets.  The opportunity for semantic confusion from lisp's quote is very
different from that of a lazily-evaluated or imperatively-inhibited quoted
expression.
From: Kazimir Majorinc
Subject: Re: FEXPRs
Date: 
Message-ID: <op.uriyeja81cfios@kazimir-pc>
In Newlisp, one needs to use the same thing as you do: gensym.
There is no built-in gensym, so one has to define one's own, and
there are at least two libraries for that. Here is how the code
might look in Newlisp:

         (letex ((i (gensym 'i))
                 (j (gensym 'j)))
                 (let ((i 1)
                       (j 2))
                      (println 'i "= " i ", " 'j "= " j)))

>     (let ((object-var (gensym)))
>       `(let ((,object-var ,object))
>         (flet ,(loop for (fn sn) in slots collecting `(,fn () (slot-value
> ,object-var ',sn)))
>    ,@body))))

Similarity is obvious.

I've gone a bit further and defined a few macros, so instead
of using brute-force gensym, I defined a library with genlet,
genfor, gendolist and similar constructs. I think Graham
described something similar.

    (genlet((i 1)
            (j 2))
           (println 'i "= " ... ))

Here is how it works on the worst example of the funarg
problem I know. The fexpr calls itself, passing its own
internal variable enclosed in a function.

;--------8<---- cut and paste in Newlisp IDE -------------------
(load "http://www.instprog.com/Instprog.default-library.lsp")

(set 'hard-example
   (lambda-macro(f)
       (genfor(i 1 3)
          (unless done          ; avoiding infinite
              (set 'done true)  ; recursion

              (hard-example (lambda(x)i)))  ; danger

          (println i " =>" (f i))))) ; which i will be printed?

(hard-example (lambda(x)x))

; 1 =>1
; 2 =>1
; 3 =>1
; 1 =>1
; 2 =>2
; 3 =>3

(exit)
;-------------8<---------------------------------------------

It works.

Details how gensym, genlet etc are defined:
http://kazimirmajorinc.blogspot.com/2009/03/gensym-and-genlet.html

Jeff Ober has also defined gensym:
http://static.artfulcode.net/newlisp/

--
Blog:    http://kazimirmajorinc.blogspot.com
WWW:     http://www.instprog.com
From: Raffael Cavallaro
Subject: Re: FEXPRs
Date: 
Message-ID: <b83bcd63-d421-4232-9525-87148212db00@c9g2000yqm.googlegroups.com>
On Mar 28, 7:04 pm, "Kazimir Majorinc" <·····@email.address> wrote:
> In Newlisp, one needs to use same thing as you do: gensym.
> There is no built in gensym, so one has to define his own, and
> there are at least two libraries for that.

Now this is special pleading. Your whole argument here and in your
blog post was about how much simpler newlisp macros are. But it turns
out they're simpler because they're broken by default. They can only
be fixed by using gensym (which isn't even included in the language,
testifying to its brokenness) which makes them more complex, just like
common lisp macros.
From: Kazimir Majorinc
Subject: Re: FEXPRs
Date: 
Message-ID: <op.urkd7udk1cfios@kazimir-pc>
On Sun, 29 Mar 2009 05:34:19 +0200, Raffael Cavallaro  
<················@gmail.com> wrote:

> On Mar 28, 7:04 pm, "Kazimir Majorinc" <·····@email.address> wrote:
>> In Newlisp, one needs to use same thing as you do: gensym.
>> There is no built in gensym, so one has to define his own, and
>> there are at least two libraries for that.
>
> Now this is special pleading. Your whole argument here and in your
> blog post was about how much simpler newlisp macros are.

My claim is that Newlisp macros are both simple and expressive.

The simplicity is the result of the Newlisp macro working directly,
instead of constructing code that will do the job. One can use a Newlisp
macro in that other, code-constructing way too - but one doesn't have to.

Expressiveness is the result of: (1) Newlisp macros are first-class
citizens, so they can be assigned as values, mapped and applied; anonymous
macros are also possible. (2) Newlisp macros are evaluated during runtime,
so they have access to some information unknown at compile time. (3)
Newlisp macros are identical to their own definitions, so they can be
analyzed, copied, even mutated during runtime.


> But it turns out they're simpler because they're broken by default.

Newlisp macros are simpler because they execute code directly; they do not
construct the code that will be executed. Newlisp macros are "broken by
default" only if one thinks that all macros need to use gensyms, or that
Newlisp macros cannot use gensyms or equivalent libraries. But that is not
true, not for Newlisp and not for CL either.

But if you believe in "broken by default", you can go to that blog post
with your and my solutions of AT-LEAST and construct an example of use
that can accidentally break my code - but not yours. Then I should either
fill that crack - or show that your CL code has a similar crack.


--
Blog:    http://kazimirmajorinc.blogspot.com
WWW:     http://www.instprog.com
From: Raffael Cavallaro
Subject: Re: FEXPRs
Date: 
Message-ID: <618c7147-0f9a-4b8b-b8f1-be72261cccf9@u39g2000yqu.googlegroups.com>
On Mar 29, 1:44 pm, "Kazimir Majorinc" <·····@email.address> wrote:

> But it you believe in "broken by default", you can go to that blogpost  
> with your and my solution of AT-LEAST and construct the example of use  
> that can accidentally break my - but not yours code. Then I should either  
> fill that crack - or show that your CL code has similar crack.


Here's your definition from your blog:

(define-macro (at-least at-least_n)
      (let ((at-least_en (eval at-least_n)))
           (doargs (at-least_i (zero? at-least_en))
                  (when (eval at-least_i)
                        (dec at-least_en)))
           (zero? at-least_en)))

Here's a test case that produces the wrong answer in newlisp but the
correct answer in common lisp using my macro which you have already in
your blog:

(let ((x 1) (y 2) (z 3) (n 1))
   (print (at-least n
                 (at-least (setq at-least_en 25)  (= y 2) (= z 3))
                 (at-least 0 t t t)
                 (at-least 0 t t t))))

this should clearly evaluate to t, since only 1 clause need be true for
the whole expression to be true, and there are two true clauses, the
second and third. In newlisp it evaluates to nil (i.e., false); using
my common lisp macro version of at-least it evaluates to t as it
should.

Near as I can tell, this failure is a direct result of your using a
naming scheme rather than gensyms which newlisp lacks by default. This
is what I meant by "broken by default."

It's no use arguing that your naming scheme will prevent collisions
because for this to be true clients of libraries would have to know
the forbidden naming conventions of all the libraries they use, and
they're pretty likely to use precisely these sorts of odd names when
writing their own macros for precisely the same reason.
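
[For reference, a gensym-based CL AT-LEAST can be written along these lines;
this is a sketch of the idea, not necessarily the blog's actual macro:]

    ;; True iff at least N of the forms evaluate true, stopping as
    ;; soon as N successes are seen.  The counter lives in a gensym,
    ;; so user code mentioning names like at-least_en cannot collide
    ;; with it.
    (defmacro at-least (n &rest forms)
      (let ((needed (gensym "NEEDED")))
        `(let ((,needed ,n))
           (or (zerop ,needed)
               ,@(loop for form in forms
                       collect `(and ,form (zerop (decf ,needed))))))))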
From: Kazimir Majorinc
Subject: Re: FEXPRs
Date: 
Message-ID: <op.urk6s5c71cfios@kazimir-pc>
On Mon, 30 Mar 2009 00:36:28 +0200, Raffael Cavallaro  
<················@gmail.com> wrote:

>
> Here's your definition from your blog:
>
> (define-macro (at-least at-least_n)
>       (let ((at-least_en (eval at-least_n)))
>            (doargs (at-least_i (zero? at-least_en))
>                   (when (eval at-least_i)
>                         (dec at-least_en)))
>            (zero? at-least_en)))
>
> Here's a test case that produces the wrong answer in newlisp but the
> correct answer in common lisp using my macro which you have already in
> your blog:
>
> (let ((x 1) (y 2) (z 3) (n 1))
>    (print (at-least n
>                  (at-least (setq at-least_en 25)  (= y 2) (= z 3))
>                  (at-least 0 t t t)
>                  (at-least 0 t t t))))


You changed at-least_en, although you knew that it is an internal variable.
It can happen accidentally, but there is a greater chance that one will
accidentally redefine at-least itself. But what about other cases?

We can assume one is a user of my Newlisp macros who is unwilling to
remember the naming convention. Hm ... I'll say for now that it is a
borderline case. But you are right here:

> It's no use arguing that your naming scheme will prevent collisions
> because for this to be true clients of libraries would have to know the
> forbidden naming conventions of all the libraries they use, and they're
> pretty likely to use precisely these sorts of odd names when writing
> their own macros for precisely the same reason.

Absolutely. But you speak about libraries here, not about individual
macros. Whatever one writes - functions, macros, variables - symbols will
accumulate. This problem should be solved with namespaces - they serve
exactly that purpose.

Gensyms are still sometimes needed, for example when a function or macro
calls itself, passing one of its local variables as a free variable. This
is not such a case. As far as I know, it has never been observed "in the
wild." But such cases are theoretically possible - and I have developed
tools and techniques for them.

But it is not one of these.

--
Blog:    http://kazimirmajorinc.blogspot.com
WWW:     http://www.instprog.com
From: Raffael Cavallaro
Subject: Re: FEXPRs
Date: 
Message-ID: <8acafc9d-7b5c-43b7-a59a-525ae9d5fbd2@e38g2000yqa.googlegroups.com>
On Mar 30, 12:01 am, "Kazimir Majorinc" <·····@email.address> wrote:
> Gensyms are still sometimes needed. Like in case function or macro calls  
> itself, passing one of its local variables as free variable. It is not  
> that case. As far as I know, it is never observed "in the wild." But, such  
> cases are theoretically possible - and I have developed tools and  
> techniques for that.
>
> But it is not one of these.

More special pleading. The standards of argument apply to me, but not
to you. You asked for an example where your macro is broken because of
newlisp's defaults, and I provided it. Somehow it doesn't count
according to you (why am I not surprised).
From: Kazimir Majorinc
Subject: Re: FEXPRs
Date: 
Message-ID: <op.urlkqleq1cfios@kazimir-pc>
On Mon, 30 Mar 2009 06:55:55 +0200, Raffael Cavallaro

I'm sorry that you think that way, because I really tried
to be honest with the pro and contra arguments.

OK.

--
Blog:    http://kazimirmajorinc.blogspot.com
WWW:     http://www.instprog.com
From: Kaz Kylheku
Subject: Re: FEXPRs
Date: 
Message-ID: <20090408082148.245@gmail.com>
On 2009-03-30, Kazimir Majorinc <·····@email.address> wrote:
> Gensyms are still sometimes needed. Like in case function or macro calls  
> itself, passing one of its local variables as free variable. It is not  
> that case. As far as I know, it is never observed "in the wild." But, such  
> cases are theoretically possible - and I have developed tools and  
> techniques for that.

Are you referring to the piles of nonsense in your blog, which convict you of
deep incompetence?  You are incapable of developing a tool that anyone with two
brain cells to rub together would want to use for any real work.
From: Pascal J. Bourguignon
Subject: Re: FEXPRs
Date: 
Message-ID: <87prg0b4lv.fsf@galatea.local>
"Kazimir Majorinc" <·····@email.address> writes:
> Newlisp macros are simpler because they execute code directly, they do
> not  construct the code that will be executed. 

Something that executes code directly, that doesn't construct the code
that will be executed, IS NOT a macro.
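
Using the m-a macro from the start of this thread, the distinction is
easy to see at a Common Lisp REPL:

```lisp
(defmacro m-a (name)
  `(defun ,name () (quote ,name)))

;; A macro runs at expansion time and returns CODE, which is only
;; then compiled or evaluated; the macro never executes the body itself:
(macroexpand-1 '(m-a a))
;; => (DEFUN A () (QUOTE A)), T
```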


> Newlisp macros are "broken by  default" [...]

They're not macros.  Perhaps you mean "FEXPR"?


-- 
__Pascal Bourguignon__
From: Kaz Kylheku
Subject: Re: FEXPRs
Date: 
Message-ID: <20090407122048.851@gmail.com>
On 2009-03-29, Kazimir Majorinc <·····@email.address> wrote:
> On Sun, 29 Mar 2009 05:34:19 +0200, Raffael Cavallaro  
><················@gmail.com> wrote:
>
>> On Mar 28, 7:04 pm, "Kazimir Majorinc" <·····@email.address> wrote:
>>> In Newlisp, one needs to use same thing as you do: gensym.
>>> There is no built in gensym, so one has to define his own, and
>>> there are at least two libraries for that.
>>
>> Now this is special pleading. Your whole argument here and in your
>> blog post was about how much simpler newlisp macros are.
>
> My claim is that Newlisp macros are both simple and expressive.

Your claim perpetuates incorrect terminology, since those macros are not
what everyone else in the Lisp world understands to be macros.

Simplicity isn't a virtue; it's usually a sign of immaturity.  Complex
solutions that work often evolve out of simple crap that, perhaps decades ago,
wasn't found adequate.

Expressiveness is worthwhile. Be that as it may, however expressive these
macros are, they are trapped in a piece-of-shit C project which makes a sad
travesty out of the Lisp family of languages.

Anyone whose first contact with the Lisp family of languages is newLISP should
be regarded as someone who has suffered braindamage which must now be carefully
and thoroughly reversed.
From: Kaz Kylheku
Subject: Re: FEXPRs
Date: 
Message-ID: <20090407123839.221@gmail.com>
On 2009-03-29, Kazimir Majorinc <·····@email.address> wrote:
> My claim is that Newlisp macros are both simple and expressive.

My claim is that Majorinc is both simple and excretive.
From: Pillsy
Subject: Re: FEXPRs
Date: 
Message-ID: <c0573207-1caf-45e0-b4d3-c095cf8ac6d2@s20g2000yqh.googlegroups.com>
On Mar 28, 7:04 pm, "Kazimir Majorinc" <·····@email.address> wrote:
> In Newlisp, one needs to use same thing as you do: gensym.
> There is no built in gensym, so one has to define his own, and
> there are at least two libraries for that.
[...]
> Details how gensym, genlet etc are  defined:
> http://kazimirmajorinc.blogspot.com/2009/03/gensym-and-genlet.html

From there:
] ; Once generated, symbols are elements of the symbols-table until
] ; explicitly deleted.

Those are some pretty crappy gensyms. Let me guess, in addition to all
the other things that Newlisp lacks, it also lacks uninterned symbols,
doesn't it?

Cheers,
Pillsy
From: Alex Mizrahi
Subject: Re: FEXPRs
Date: 
Message-ID: <49cfcb18$0$90276$14726298@news.sunsite.dk>
 KM> In Newlisp, one needs to use same thing as you do: gensym.
 KM> There is no built in gensym, so one has to define his own, and
 KM> there are at least two libraries for that. Here is how code
 KM> might look in Newlisp:

 KM>          (letex ((i (gensym 'i))
 KM>                  (j (gensym 'j)))
 KM>                  (let ((i 1)
 KM>                        (j 2))
 KM>                       (println 'i "= " i ", " 'j "= " j)))

no, Kazimir, the interesting thing here is not gensym but letex, which fakes 
lexical variables with code expansion in a terribly inefficient way.

so, if you call a macro like with-slots, it first takes the fexpr code and 
performs expansion on it, and only then executes it. this is pretty close 
to running fexpr code with a custom interpreter -- that is, inside the defunct 
Newlisp interpreter that does not support lexical variables you make a new 
interpreter that fakes something like lexical variables and runs FEXPR code 
there, which then runs the body with eval. this shit is really funny..

can you please make fully working code for something like with-slot-accessors 
(working with lists rather than objects, for example) so we can benchmark 
it?

 ??>>     (let ((object-var (gensym)))
 ??>>       `(let ((,object-var ,object))
 ??>>         (flet ,(loop for (fn sn) in slots collecting `(,fn ()
 ??>> (slot-value ,object-var ',sn)))   ,@body))))

 KM> Similarity is obvious.

well, gensym is not really critical in this macro. if we omit it, it will 
still sort of work..
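
For the record, here is a sketch of what the gensym does buy in a
Common Lisp macro of this shape; WITH-FIRST and its names are invented
for illustration, standing in for the with-slots-style macro quoted
above:

```lisp
(defmacro with-first ((fn list-form) &body body)
  "Bind FN as a local function returning the first element of LIST-FORM."
  (let ((list-var (gensym "LIST")))       ; fresh symbol: no capture,
    `(let ((,list-var ,list-form))        ; and LIST-FORM evaluated once
       (flet ((,fn () (first ,list-var)))
         ,@body))))

;; (let ((xs (list 1 2 3)))
;;   (with-first (hd xs) (hd)))   ; returns 1, even if BODY happens to
;;                                ; bind a variable spelled like the
;;                                ; macro's temporary
```

Omitting the gensym for a fixed temporary name "sort of works" until the
body binds that same name, at which point the wrong list is consulted.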
From: Kazimir Majorinc
Subject: Re: FEXPRs
Date: 
Message-ID: <op.urlzkai91cfios@kazimir-pc>
On Sun, 29 Mar 2009 21:25:07 +0200, Alex Mizrahi  
<········@users.sourceforge.net> wrote:

>
>  KM>          (letex ((i (gensym 'i))
>  KM>                  (j (gensym 'j)))
>  KM>                  (let ((i 1)
>  KM>                        (j 2))
>  KM>                       (println 'i "= " i ", " 'j "= " j)))
>

> no, Kazimir, interesting thing here is not gensym but letex, which fakes  
> lexical variables ...

I would agree that it is interesting, but not that it "fakes" lexical
variables. It provides the same functionality - i.e. it solves the funarg
problem - in this case WITHOUT restricting scope.

--
Blog:    http://kazimirmajorinc.blogspot.com
WWW:     http://www.instprog.com
From: Tobias C. Rittweiler
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <87wsagqerg.fsf@freebits.de>
"Kazimir Majorinc" <·····@email.address> writes:

> > eval is the most basic and primitive way, and usually it is better to
> > use a more specialized tool.
>
> One should use the most basic tool unless he really needs
> specialization due to some optimization concerns. Not
> other way around. I'm sure you understand that principle
> and that you use it in your programs.

You surely write your programs using LAMBDA only, as it's the most basic
thing, don't you?

  -T.
From: eric-and-jane-smith
Subject: Re: Strategies for defining named functions at load or compile time ?
Date: 
Message-ID: <V3Vwl.128$3g7.75@newsfe15.iad>
"Alex Mizrahi" <········@users.sourceforge.net> wrote in
······························@news.sunsite.dk: 

> it does not work for obvious reason -- loop is not executed during 
> compilation!

In my case it was fine, because the symbol-macros were used in different 
files than where they were defined.

In any case, I changed it just now to use read-time evaluation, and it's 
fine that way.  Just in case I ever want to use slime.
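
A minimal sketch of the read-time approach mentioned above, using the
standard #. reader macro (this assumes *read-eval* is true, its default;
the function bodies are placeholders, not the poster's actual code):

```lisp
;; #. evaluates the form while the file is being READ, so these
;; definitions exist before compilation of the rest of the file begins:
#.(progn
    (dolist (name '(a b c))
      (eval `(defun ,name () ',name)))
    nil)   ; NIL is what the reader inserts here; harmless at top level
```

After loading such a file, (a), (b), and (c) are callable by name, which
was the original poster's goal.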