From: Randall Randall
Subject: Runtime macros
Date: 
Message-ID: <42bedf36$1_4@alt.athenanews.com>
I've heard talk occasionally about "first class macros" in this newsgroup,
and in other places around the Lisp world, such as Paul Graham's essays
regarding Arc.  Since other people have complained that macros in Common
Lisp are already first class, I wonder what term they would like us to
use for macros that may be passed around as functions may be and used in
that way?  Perhaps "runtime macro"?

While I've read that runtime macros would be a problem to implement for
the compiler writer, it seems to me that it carries only the same
problems as having EVAL in the language.  In any case, if we had runtime
macros, an interesting further development of that would be if we
allowed macros to determine if they were replaced by their result in the
way that CL macros are now.  The macro could signal that the "normal"
processing should happen (in a runtime macro context), meaning that
the macro is evaluated and then its result is evaluated; or it could
signal that it is to be *replaced* by its result, so that macro
expansion would occur only once (or, more accurately, would never
occur again), as happens in CL now.

I don't know if this has been done in a Lisp before.  If so, could someone
point me in the direction of the Lisp in question?  If not, you may be
wondering why someone would want that.  In CL, you can put declarations
in the source to assist the compiler in creating optimized code.  Such
declarations clutter the code a lot, often mostly obscuring the actual
algorithm, and it would be nice to have a perfectly intelligent compiler
that always knew what kind of type would result from evaluating any form
or symbol (for those cases where the type didn't vary from run to run;
this is rare for me), but even a perfectly intelligent compiler wouldn't
always be able to figure out what type an argument to a function would
be, since it might depend on something outside the program (but known to
be a static type to the programmer).  Runtime macros with optional
replacement would allow DEFUN forms, for example, to register the "function"
as a macro rather than a function, and at runtime, the macro could examine
the actual arguments it got, in addition to their names, and provide
appropriate declarations in its output, and then ask the compiler to
replace it with the function in question.  The programmer need not write
declarations again (except for those cases where it wouldn't be possible
to figure out the proper declaration from a single example), and yet,
functions are more optimizable without the compiler having to do a lot
of reasoning about potential types.
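
To show the kind of clutter I mean, here's ordinary CL (nothing
hypothetical), with a two-line algorithm buried under its declarations:

  (defun dot3 (a b)
    (declare (optimize (speed 3) (safety 0))
             (type (simple-array double-float (3)) a b))
    (+ (* (aref a 0) (aref b 0))
       (* (aref a 1) (aref b 1))
       (* (aref a 2) (aref b 2))))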

No doubt I'll quickly be informed if my rambling doesn't make sense... :)


--
Randall Randall <·······@randallsquared.com>

From: Coby Beck
Subject: Re: Runtime macros
Date: 
Message-ID: <UgCve.77292$HI.65125@edtnps84>
"Randall Randall" <·······@randallsquared.com> wrote in message 
·················@alt.athenanews.com...
> be a static type to the programmer).  Runtime macros with optional
> replacement would allow DEFUN forms, for example, to register the 
> "function"
> as a macro rather than a function, and at runtime, the macro could examine
> the actual arguments it got, in addition to their names, and provide
> appropriate declarations in its output, and then ask the compiler to
> replace it with the function in question.  The programmer need not write
> declarations again (except for those cases where it wouldn't be possible
> to figure out the proper declaration from a single example), and yet,

It seems to me this last qualification dooms your idea in the way I was 
about to try to explain!  When is a single example *ever* going to tell you 
what to expect every time?  ISTM, if there are cases where you as a 
programmer can know that the first type seen is always the type, that can 
only be an artifact of some implementation detail about your whole system, 
e.g. hardware, or third-party code etc.  If this is the case, you should be 
able to determine these things at compile time.  If it really must wait 
until runtime, then just delay compilation until then (which is all you are 
doing in a round-about way with your plan anyway).

You could have a special defun-style form that checks whether it has yet 
been called; if this is the first time, it checks the types of its 
arguments, puts the declarations into the body of its code, compiles it, 
and then setfs the symbol-function so that next time it just proceeds.
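
Roughly, and only as a sketch (DEFUN-SPECIALIZING is a made-up name,
and it handles required arguments only):

  (defmacro defun-specializing (name (&rest args) &body body)
    ;; Install a stub: the first call inspects the actual argument
    ;; types, compiles the body with matching TYPE declarations,
    ;; installs the result under NAME, and re-invokes it.
    `(setf (symbol-function ',name)
           (lambda (&rest actual)
             (let ((decls (mapcar (lambda (var val)
                                    (list 'type (type-of val) var))
                                  ',args actual)))
               (compile ',name
                        (list* 'lambda ',args
                               (cons 'declare decls)
                               ',body))
               (apply ',name actual)))))

  ;; (defun-specializing add2 (a b) (+ a b))
  ;; (add2 1 2)  ; compiles ADD2 declared for the types it just saw

One caveat: TYPE-OF can return very specific types, so real code would
want to widen them (FIXNUM rather than (INTEGER 1 1), say).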

-- 
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")
From: Randall Randall
Subject: Re: Runtime macros
Date: 
Message-ID: <42bf16b5$1_2@alt.athenanews.com>
Coby Beck wrote:
> "Randall Randall" <·······@randallsquared.com> wrote in message 
> ·················@alt.athenanews.com...
> 
>>be a static type to the programmer).  Runtime macros with optional
>>replacement would allow DEFUN forms, for example, to register the 
>>"function"
>>as a macro rather than a function, and at runtime, the macro could examine
>>the actual arguments it got, in addition to their names, and provide
>>appropriate declarations in its output, and then ask the compiler to
>>replace it with the function in question.  The programmer need not write
>>declarations again (except for those cases where it wouldn't be possible
>>to figure out the proper declaration from a single example), and yet,
> 
> 
> It seems to me this last qualification dooms your idea in the way I was 
> about to try to explain!  

"dooms?"  :)

> When is a single example *ever* going to tell you 
> what to expect every time?  ISTM, if there are cases where you as a 
> programmer can know that the first type seen is always the type, that 
> can only be an artifact of some implementation detail about your whole 
> system, e.g. hardware, or third-party code etc.  

Is it really the case that most people write most functions such
that they can take, for example, an integer, cons, or array
for a given argument? I would expect to use CLOS methods for that,
in CL, but maybe I'm odd.

> If this is the 
> case, you should be able to determine these things at compile time.  

People talk about how smart the OCaml compiler is for being able to
determine *many* of these things at compile time, so my assumption
is that it's hard to do.  If there's some simple way to do it without
actually inspecting the values received, then the example is, as you
say, pointless.

> If it 
> really must wait until runtime, then just delay compilation until then (which 
> is all you are doing in a round-about way with your plan anyway).

Well, and adding a choice about whether to compile finally or
repeatedly.

> You could have a special defun-style form that checks whether it has yet 
> been called; if this is the first time, it checks the types of its 
> arguments, puts the declarations into the body of its code, compiles it, 
> and then setfs the symbol-function so that next time it just proceeds.

It's true that that's an alternative way of doing the example
use of what I was talking about.  I'll have to think about it
some more.

--
Randall Randall <·······@randallsquared.com>
From: Coby Beck
Subject: Re: Runtime macros
Date: 
Message-ID: <W5Jve.108620$on1.73042@clgrps13>
"Randall Randall" <·······@randallsquared.com> wrote in message 
·················@alt.athenanews.com...
> Coby Beck wrote:
>> "Randall Randall" <·······@randallsquared.com> wrote in message 
>> ·················@alt.athenanews.com...
>>
>>>be a static type to the programmer).  Runtime macros with optional
>>>replacement would allow DEFUN forms, for example, to register the 
>>>"function"
>>>as a macro rather than a function, and at runtime, the macro could 
>>>examine
>>>the actual arguments it got, in addition to their names, and provide
>>>appropriate declarations in its output, and then ask the compiler to
>>>replace it with the function in question.  The programmer need not write
>>>declarations again (except for those cases where it wouldn't be possible
>>>to figure out the proper declaration from a single example), and yet,
>
>> When is a single example *ever* going to tell you what to expect 
>> every time?  ISTM, if there are cases where you as a programmer can know 
>> that the first type seen is always the type, that can only be an artifact 
>> of some implementation detail about your whole system, e.g. hardware, or 
>> third-party code etc.
>
> Is it really the case that most people write most functions such
> that they can take, for example, an integer, cons, or array
> for a given argument? I would expect to use CLOS methods for that,
> in CL, but maybe I'm odd.

No, not so often, *but* this is something you usually know at programming 
time, which is usually before compile time.

>> You could have a special defun-style form that checks whether it has yet 
>> been called; if this is the first time, it checks the types of its 
>> arguments, puts the declarations into the body of its code, compiles it, 
>> and then setfs the symbol-function so that next time it just proceeds.
>
> It's true that that's an alternative way of doing the example
> use of what I was talking about.  I'll have to think about it
> some more.

I can't help thinking that this is a case of very premature optimization 
though.  Usually declarations are the kind of thing you do well after 
everything is settled and humming along and you need to fix a bottleneck. 
In such a case surely you would know why you might get one type or 
another...

-- 
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")
From: Randall Randall
Subject: Re: Runtime macros
Date: 
Message-ID: <42bf642a$1_2@alt.athenanews.com>
Coby Beck wrote:
> "Randall Randall" <·······@randallsquared.com> wrote in message 
> ·················@alt.athenanews.com...
>>Is it really the case that most people write most functions such
>>that they can take, for example, an integer, cons, or array
>>for a given argument? I would expect to use CLOS methods for that,
>>in CL, but maybe I'm odd.
> 
> 
> No, not so often, *but* this is something you usually know at programming 
> time, which is usually before compile time.

Well, sure, you know, but the main benefit to dynamic
typing, as I see it, is that you don't have to worry
about *declaring* to the compiler what every variable
and function's types are.  If you can get that benefit
and keep the benefits of static typing, I think a
statically typed lisp would be interesting.

>>It's true that that's an alternative way of doing the example
>>use of what I was talking about.  I'll have to think about it
>>some more.
> 
> I can't help thinking that this is a case of very premature optimization 
> though.  Usually declarations are the kind of thing you do well after 
> everything is settled and humming along and you need to fix a bottleneck. 
> In such a case surely you would know why you might get one type or 
> another...

I think I miscommunicated: I didn't mean that this would
help the programmer write code for which he didn't *know*
the types.  Rather, I thought it would allow the programmer
who is concerned about optimization to have everything
automatically declared in those parts of the program that
actually run (obviously if a macro was never run, in this
scheme, it would never examine the types it would have
received), which might convert a large number of unacceptably
slow programs into acceptably fast ones, with no extra work on
the part of the programmer, and no requirement to have very
smart compilers on the part of the implementors.

Of course, as you showed, it would be possible to do this
using symbol-function in Common Lisp.

I'm still not sure that macros that can specify when they're
expanded and what happens with the expansion are a bad idea,
though.

--
Randall Randall <·······@randallsquared.com>
From: Ray Dillinger
Subject: Re: Runtime macros
Date: 
Message-ID: <_l5we.2300$p%3.15145@typhoon.sonic.net>
Randall Randall wrote:

> I've heard talk occasionally about "first class macros" in this newsgroup,
> and in other places around the Lisp world, such as Paul Graham's essays
> regarding Arc.  Since other people have complained that macros in Common
> Lisp are already first class, I wonder what term they would like us to
> use for macros that may be passed around as functions may be and used in
> that way?  Perhaps "runtime macro"?

Ummm, I went down that path a few years ago, and started implementing
things.  What with one thing and another, once you decide you want
truly first-class macros, if you keep pushing, I think you inevitably
wind up with no macros at all - just a radically extended function
call semantics that allows first-class functions to do any of the
things that CL or scheme use macros for.

Certainly that's what happened with my toy-lisp.  I won't bore you
with all the details of how it's evolved from the time I was just
trying to make better macros, but what I've currently got has
lazy calling semantics (in that promises are passed instead of
computed arguments).

The macrology-like aspects of function calls happen because the
promises are transparent objects.  You can use accessors on them
to get the literal form that's being evaluated, or the environment
in which the form is scoped.  So you can pass an "invalid" call
to a function as an argument (for example a cond clause) and if
it unpacks the forms, manipulates them into shape using runtime
code, and then calls eval on its computed forms with the argument
environment, the "invalid" call never gets evaluated and the macro-
ish magic happens instead.
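
In plain CL terms, the promises look very roughly like this (a sketch;
CL has no first-class lexical environments, so a closure stands in for
the environment object my system actually carries):

  (defstruct promise
    form                 ; the literal source form, inspectable
    thunk                ; a closure that computes the form's value
    (value nil)
    (forced-p nil))

  (defmacro delay (form)
    `(make-promise :form ',form :thunk (lambda () ,form)))

  (defun force (p)
    (unless (promise-forced-p p)
      (setf (promise-value p) (funcall (promise-thunk p))
            (promise-forced-p p) t))
    (promise-value p))

  ;; (promise-form (delay (+ 1 2)))   => (+ 1 2)  ; inspect, don't force
  ;; (force (delay (+ 1 2)))          => 3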

Ordinary functions just evaluate their arguments from left to
right before continuing; but functions can be defined using
extra syntax in the lambda list with args (promises) they
don't evaluate, and use accessors on those to get the literal
forms of the arguments and environments of each argument, and
do any arbitrary thing with them.

When I first got it running it was doing the code manipulations
and "interpreting" every time a function that used it was called,
and I still have to generate code that does everything "the hard
way" so I can fall back to it if the optimizations' runtime
constraints are violated. But for a lot of cases (i.e., "sane
use") the runtime constraints are never violated and I never
have to fall back on the unoptimized versions.

The system inlines and constant-propagates fairly aggressively,
and as long as nobody mutates source code, inlining and
constant propagation get you a long way toward not having to
do the code manipulations on each call.  But the minute somebody
assigns to source code (appending a quoted list from source for
example) I have to throw out everything that depended on the
constant-sourcecode constraint and fall back on the unoptimized
versions.  That's a major pain, btw; it may get better if I
find a way to keep track of *which* source code is mutated and
don't worry about stuff that's not reachable from the argument
forms. But then I wind up doing reachability checks for every
macro use whenever source code is mutated, and that's also a
major pain.

Other than that, what's still expensive are the dynamic cases:
If the system can't prove that a variable is "pure" (i.e., never
mutated) then when that variable is called as a function, no
optimized forms are used.  Likewise when functions are called
from higher-order functions or the argument forms are dynamically
computed.

And, unfortunately, I haven't got any optimizations working
at all across module boundaries.  I've tried several times and
I'm still working on figuring out why it always messes up.  I'm
pretty sure it's a solvable problem, but I haven't solved it yet.

Anyway, the short version of the story is, it's working and it's
pretty cool -- but it's only cheap in the same cases that ordinary
macros are cheap.  Get outside those bounds and you watch all your
work of optimizing wind up in the toilet.

> While I've read that runtime macros would be a problem to implement for
> the compiler writer, it seems to me that it carries only the same
> problems as having EVAL in the language.

It's considerably more pervasive than that.  In any case where
you can't know whether a given function has "standard" or "extended"
semantics, you can't make optimizations that depend on one or
the other.  And that means all the interesting cases.  It makes
higher-order functions, closures, and functions stored in structures
substantially more expensive to call, and motivates me to build
multiple executable versions of the same function; one that I can
keep calling as long as none of its constraints are violated, and
one that I fall back to the minute the optimized form becomes
invalid.

>  In any case, if we had runtime
> macros, an interesting further development of that would be if we
> allowed macros to determine if they were replaced by their result in the
> way that CL macros are now.  

Been there, did that.  Mutable source code.  Not a good idea.
Use lazy semantics instead and let the macros get their arg
forms through promise accessors and work with explicit calls
to eval using the caller environment. Much cleaner, but still
messy as all getout.

> I don't know if this has been done in a Lisp before.  If so, could someone
> point me in the direction of the Lisp in question?  

Well, aside from the toy system I've got, which is not ready for
primetime, you could look at 3-Lisp, which I've only recently
been pointed at myself.

> If not, you may be
> wondering why someone would want that.  In CL, you can put declarations
> in the source to assist the compiler in creating optimized code.  Such
> declarations clutter the code a lot, often mostly obscuring the actual
> algorithm, and it would be nice to have a perfectly intelligent compiler
> that always knew what kind of type would result from evaluating any form
> or symbol (for those cases where the type didn't vary from run to run;

Um.  Haven't touched that; the toy system is "unityped" so far.
However, your idea is pretty darned ambitious.  I think one of
the only ways to do it is to provide alternate execution paths
for your "optimized" code and for situations that don't match
its assumed entry conditions in terms of types.  So in a case where
foo calls baz and bar, and baz and bar both call foobar and foobaz,
you could have a version of foo that you call if all three of its
arguments are integers, which then calls the integer versions of
baz and bar, which in turn call the integer versions of foobaz and
foobar, all with no further typechecks. But you'd also need a unitype
version of foo that gets called if a particular foo-call doesn't
qualify for the optimized treatment, and it would need more general
forms of baz and bar with typechecks, etc.
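
Sketched in CL (names hypothetical, and only foo shown; baz, bar,
foobaz and foobar would get the same two-version treatment):

  (defun foo/fixnum (a b c)            ; fast path, no checks inside
    (declare (fixnum a b c)
             (optimize (speed 3) (safety 0)))
    (the fixnum (+ a (* b c))))

  (defun foo/general (a b c)           ; unitype fallback
    (+ a (* b c)))

  (defun foo (a b c)
    (if (and (typep a 'fixnum) (typep b 'fixnum) (typep c 'fixnum))
        (foo/fixnum a b c)
        (foo/general a b c)))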

> No doubt I'll quickly be informed if my rambling doesn't make sense... :)

It makes sense... but it's a long road.  And it can work, but it
still sounds like you've got some pretty woolly ideas about how
and are likely to be disappointed to some extent by the harsh
and even probabilistic/heuristic realities.

The only way I've been able to get anywhere in terms of optimizing
this stuff is to just "assume" the best, but keep the unoptimized
versions around in case some dynamic situation violates the
assumptions so you can fall back on it.  There are multiple
versions of every function, source forms still in the system at
runtime, and lots of code that checks constraint integrity so I
can tell which versions to use.  When it works out it's nice, but
in the worst case, it's no faster than interpreted code (and in
fact somewhat slower because of the baroque function call semantics
and the fact that it keeps spending effort keeping track of
constraints and *trying* to get a grip on some optimizations it
can make...).

A few weeks ago I redefined 'cons' while a program was running,
replacing it with a "special" function that doesn't force its
second argument (good for making lazy structures, etc).  'cons'
is inlined everywhere, and there's a list of optimized functions
to invalidate a mile long attached to the binding.  So when the
binding changed, it went down the list invalidating every
compiled function that inlined cons, and the system ground
to a near-halt for several minutes as it tried to continue the
program using the "fallback" versions, and interleaved that
with recompiling everything.  (Function calls to unoptimized
functions trigger recompilation/reoptimization of that function
stochastically, about every tenth call...).  All the compilation
work done up to that point went down the toilet.  And to make
this work, you just have to be okay with throwing a lot of CPU
cycles down the toilet and starting over when somebody violates
a constraint like that.

				Bear
From: Kent M Pitman
Subject: Re: Runtime macros
Date: 
Message-ID: <u4qbi7vxh.fsf@nhplace.com>
Ray Dillinger <····@sonic.net> writes:

> Randall Randall wrote:
> 
> > I've heard talk occasionally about "first class macros" in this newsgroup,
> > and in other places around the Lisp world, such as Paul Graham's essays
> > regarding Arc.  Since other people have complained that macros in Common
> > Lisp are already first class, I wonder what term they would like us to
> > use for macros that may be passed around as functions may be and used in
> > that way?  Perhaps "runtime macro"?
> 
> Ummm, I went down that path a few years ago, and started implementing
> things.  What with one thing and another, once you decide you want
> truly first-class macros, if you keep pushing, I think you inevitably
> wind up with no macros at all - just a radically extended function
> call semantics that allows first-class functions to do any of the
> things that CL or scheme use macros for.

An interesting analysis [elided here for brevity, but worth a read in full].

I agree with your essential conclusion here.  Ultimately, the function
of macros is to manipulate the syntax tree for the purpose of
transforming an input syntax that is statically known at compile time
into something else that is statically analyzable at compile time.  If
you make that opaque, all you get is function calling, since all that
distinguishes function calling is that it's computation that by its
nature can't be statically analyzed.

But sometimes people have to work through it to see this.
From: Randall Randall
Subject: Re: Runtime macros
Date: 
Message-ID: <42c1bb34$1_4@alt.athenanews.com>
Ray Dillinger wrote:
> Randall Randall wrote:
> 
>> I've heard talk occasionally about "first class macros" in this 
>> newsgroup,
>> and in other places around the Lisp world, such as Paul Graham's essays
>> regarding Arc.  Since other people have complained that macros in Common
>> Lisp are already first class, I wonder what term they would like us to
>> use for macros that may be passed around as functions may be and used in
>> that way?  Perhaps "runtime macro"?
> 
> Ummm, I went down that path a few years ago, and started implementing
> things.  What with one thing and another, once you decide you want
> truly first-class macros, if you keep pushing, I think you inevitably
> wind up with no macros at all - just a radically extended function
> call semantics that allows first-class functions to do any of the
> things that CL or scheme use macros for.

[snip interesting explanation]

It appears that extending functions to be able to manipulate
their arguments in a macro fashion makes it hard to optimize,
and hard for the developer and compiler both to reason about.

In light of that, wouldn't keeping macros as a separate type
be more reasonable?  Pun not intended.

>> While I've read that runtime macros would be a problem to implement for
>> the compiler writer, it seems to me that it carries only the same
>> problems as having EVAL in the language.
> 
> 
> It's considerably more pervasive than that.  In any case where
> you can't know whether a given function has "standard" or "extended"
> semantics, you can't make optimizations that depend on one or
> the other.  And that means all the interesting cases.  It makes
> higher-order functions, closures, and functions stored in structures
> substantially more expensive to call, and motivates me to build
> multiple executable versions of the same function; one that I can
> keep calling as long as none of its constraints are violated, and
> one that I fall back to the minute the optimized form becomes
> invalid.

To me, it appears that this is because of your merging of
functions and macros, which isn't what I was talking about,
though perhaps I would have ended up there, as you say.

Rather, my suggestion was to still have argument source
modification limited to macros, and just allow macros to
indicate at what pass they should be replaced by the
modification.  The hope is to make things simpler, rather
than immensely more complex.

>>  In any case, if we had runtime
>> macros, an interesting further development of that would be if we
>> allowed macros to determine if they were replaced by their result in the
>> way that CL macros are now.  
> 
> Been there, did that.  Mutable source code.  Not a good idea.
> Use lazy semantics instead and let the macros get their arg
> forms through promise accessors and work with explicit calls
> to eval using the caller environment. Much cleaner, but still
> messy as all getout.

You've been explaining how that's difficult to do and
difficult to reason about, though... :)  It seems to me
that many benefits could be obtained merely by allowing
a macro to be expanded and then the result evaluated as
a normal course, and then have a method for producing
the behavior of CL macros, in which the macro is
replaced in the compiled source.  For example, we could
allow setting &whole to NIL to indicate the usual CL-like
behavior, and otherwise, the macro would be expanded and
the result run each time it was encountered.

>> I don't know if this has been done in a Lisp before.  If so, could 
>> someone
>> point me in the direction of the Lisp in question?  
> 
> 
> Well, aside from the toy system I've got, which is not ready for
> primetime, you could look at 3-Lisp, which I've only recently
> been pointed at myself.

Yes, I read said pointing at, and looked for a bit on Google
for it.  Other than a PDF accessible with a membership to the
ACM, and a bunch of references to an interim manual which no
one seems to have online, I haven't found much.

>> If not, you may be
>> wondering why someone would want that.  In CL, you can put declarations
>> in the source to assist the compiler in creating optimized code.  Such
>> declarations clutter the code a lot, often mostly obscuring the actual
>> algorithm, and it would be nice to have a perfectly intelligent compiler
>> that always knew what kind of type would result from evaluating any form
>> or symbol (for those cases where the type didn't vary from run to run;
> 
> 
> Um.  Haven't touched that; the toy system is "unityped" so far.
> However, your idea is pretty darned ambitious.  I think one of
> the only ways to do it is to provide alternate execution paths
> for your "optimized" code and for situations that don't match
> its assumed entry conditions in terms of types.  So in a case where
> foo calls baz and bar, and baz and bar both call foobar and foobaz,
> you could have a version of foo that you call if all three of its
> arguments are integers, which then calls the integer versions of
> baz and bar, which in turn call the integer versions of foobaz and
> foobar, all with no further typechecks. But you'd also need a unitype
> version of foo that gets called if a particular foo-call doesn't
> qualify for the optimized treatment, and it would need more general
> forms of baz and bar with typechecks, etc.

To make things easier, I'd assumed that this would be used only
in those cases where the programmer knew that types wouldn't
vary between evaluations of the code in question.  When the coder
believed that a function would only get a certain type of argument,
he could use, say, "defstatic" instead of "defun" to get a macro
that was replaced like the usual CL macro by a function, but upon
the first evaluation, rather than as part of compile time.  The
whole point is conciseness.
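
Concretely, and purely hypothetically ("defstatic" doesn't exist; think
of it as sugar over the SETF-of-SYMBOL-FUNCTION trick Coby described):

  (defstatic norm (x y)
    (sqrt (+ (* x x) (* y y))))

  ;; First evaluation: (norm 3.0d0 4.0d0) sees two DOUBLE-FLOATs,
  ;; compiles (lambda (x y) (declare (double-float x y)) ...) in its
  ;; place, and every later call runs the specialized code directly.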

>> No doubt I'll quickly be informed if my rambling doesn't make sense... :)
> 
> 
> It makes sense... but it's a long road.  And it can work, but it
> still sounds like you've got some pretty woolly ideas about how
> and are likely to be disappointed to some extent by the harsh
> and even probabilistic/heuristic realities.

Indeed.  I'm quite the newbie in this area; the only thing
that might save me is that I think my ideas are simpler to
implement than promise-based function evaluation.

--
Randall Randall <·······@randallsquared.com>
From: Kent M Pitman
Subject: Re: Runtime macros
Date: 
Message-ID: <ubr5qqbbh.fsf@nhplace.com>
Randall Randall <·······@randallsquared.com> writes:

> Ray Dillinger wrote:
> > Randall Randall wrote:
> >
> >> I've heard talk occasionally about "first class macros" in this
> >> newsgroup,
> >> and in other places around the Lisp world, such as Paul Graham's essays
> >> regarding Arc.  Since other people have complained that macros in Common
> >> Lisp are already first class, I wonder what term they would like us to
> >> use for macros that may be passed around as functions may be and used in
> >> that way?  Perhaps "runtime macro"?
> > Ummm, I went down that path a few years ago, and started implementing
> > things.  What with one thing and another, once you decide you want
> > truly first-class macros, if you keep pushing, I think you inevitably
> > wind up with no macros at all - just a radically extended function
> > call semantics that allows first-class functions to do any of the
> > things that CL or scheme use macros for.
> 
> [snip interesting explanation]
> 
> It appears that extending functions to be able to manipulate
> their arguments in a macro fashion makes it hard to optimize,
> and hard for the developer and compiler both to reason about.

What does "manipulate their arguments in a macro fashion" mean?
Anyone can use DESTRUCTURING-BIND.  There's nothing special about 
the destructuring that macros do.
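
For example, perfectly ordinary runtime code can take a form apart
exactly the way a macro body would:

  (destructuring-bind (op arg1 arg2) '(+ 1 2)
    (list op arg2 arg1))
  ;; => (+ 2 1)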

> In light of that, wouldn't keeping macros as a separate type
> be more reasonable?  Pun not intended.

I don't see what you mean.  Macros simply manipulate nested lists.  So
do programs.  Making these into a separate type would ruin the most
powerful thing about Lisp macros: that you use ordinary programs
you're already used to in writing macros.  In most other languages,
you have to learn a special macro or template language that is a
different kind of programming than you use in your daily life.  In
Lisp, you just apply the skills you already have from ordinary data
manipulation.  That's why so many people get excited about Lisp
macros--they are able to approach the problem as experts immediately
rather than having to take a new certificate course.  Making a separate
type would destroy all of this.

> >> While I've read that runtime macros would be a problem to implement for
> >> the compiler writer, it seems to me that it carries only the same
> >> problems as having EVAL in the language.
> > It's considerably more pervasive than that.  In any case where
> > you can't know whether a given function has "standard" or "extended"
> > semantics, you can't make optimizations that depend on one or
> > the other.  And that means all the interesting cases.  It makes
> > higher-order functions, closures, and functions stored in structures
> > substantially more expensive to call, and motivates me to build
> > multiple executable versions of the same function; one that I can
> > keep calling as long as none of its constraints are violated, and
> > one that I fall back to the minute the optimized form becomes
> > invalid.
> 
> To me, it appears that this is because of your merging of
> functions and macros, which isn't what I was talking about,
> though perhaps I would have ended up there, as you say.

There is NO DIFFERENCE between functions and macros other than that
macros do this one step [here I'm ignoring environment arguments].
All that's different is how they take their arguments originally.

 (foo-macro a b c) => (funcall (macro-function 'foo-macro) '(a b c))

whereas 

 (foo-function '(a b c)) => (funcall #'foo-function '(a b c))

That's not a lot of difference.
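
(To be precise, the standard's macro functions receive the whole call
form plus an environment, so the directly runnable version of the
first line, given some (defmacro foo-macro ...), is

  (funcall (macro-function 'foo-macro) '(foo-macro a b c) nil)

but the shape is the same: an ordinary function applied to unevaluated
list structure.)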

See http://www.nhplace.com/kent/Papers/Special-Forms.html for more detail.
 
> Rather, my suggestion was to still have argument source
> modification limited to macros, and just allow macros to
> indicate at what pass they should be replaced by the
> modification.

pass?

Common Lisp semantics are defined the same for interpreted and compiled
code.  Certainly you're not trying to destroy that.  That would take us
back a whole generation in programming languages.

So ... what's a pass? 

> The hope is to make things simpler, rather
> than immensely more complex.

I don't see how introducing this new concept possibly could make things 
simpler.

> >>  In any case, if we had runtime
> >> macros, an interesting further development of that would be if we
> >> allowed macros to determine if they were replaced by their result in the
> >> way that CL macros are now.
> > Been there, did that.  Mutable source code.  Not a good idea.
> > Use lazy semantics instead and let the macros get their arg
> > forms through promise accessors and work with explicit calls
> > to eval using the caller environment. Much cleaner, but still
> > messy as all getout.
> 
> You've been explaining how that's difficult to do and
> difficult to reason about, though... :)  It seems to me
> that many benefits could be obtained merely by allowing
> a macro to be expanded and then the result evaluated as
> a normal course, and then have a method for producing
> the behavior of CL macros,

I can't make sense of this, I think because you are not being
presentationally clear.

> in which the macro is
> replaced in the compiled source.  For example, we could
> allow setting &whole to NIL to indicate the usual CL-like

Setting &whole?  &whole is a symbolic marker.  Are you suggesting recycling
it as a variable?  That's already a bad plan since most compilers flag
assignments to any &anything as a likely syntax error.  But further, it's
not obvious how you'd scope this.  Or why you'd bother.

> behavior, and otherwise, the macro would be expanded and
> the result run each time it was encountered.

The use of the word "time" here is confusing. Do you mean each syntactic
location in the source form, or each dynamically encountered use at runtime?

> >> I don't know if this has been done in a Lisp before.  If so, could
> >> someone
> >> point me in the direction of the Lisp in question?
> > Well, aside from the toy system I've got, which is not ready for
> > primetime, you could look at 3-Lisp, which I've only recently
> > been pointed at myself.
> 
> Yes, I read said pointing at, and looked for a bit on Google
> for it.  Other than a PDF accessible with a membership to the
> ACM, and a bunch of references to an interim manual which no
> one seems to have online, I haven't found much.

Reading about 3lisp will certainly teach you to be more specific.

> >> If not, you may be
> >> wondering why someone would want that.  In CL, you can put declarations
> >> in the source to assist the compiler in creating optimized code.  Such
> >> declarations clutter the code a lot, often mostly obscuring the actual
> >> algorithm, and it would be nice to have a perfectly intelligent compiler
> >> that always knew what kind of type would result from evaluating any form
> >> or symbol (for those cases where the type didn't vary from run to run;
> > Um.  Haven't touched that; the toy system is "unityped" so far.
> > However, your idea is pretty darned ambitious.  I think one of
> > the only ways to do it is to provide alternate execution paths
> > for your "optimized" code and for situations that don't match
> > its assumed entry conditions in terms of types.  So in a case where
> > foo calls baz and bar, and baz and bar both call foobar and foobaz,
> > you could have a version of foo that you call if all three of its
> > arguments are integers, which then calls the integer versions of
> > baz and bar, which in turn call the integer versions of foobaz and
> > foobar, all with no further typechecks. But you'd also need a unitype
> > version of foo that gets called if a particular foo-call doesn't
> > qualify for the optimized treatment, and it would need more general
> > forms of baz and bar with typechecks, etc.
> 
> To make things easier, I'd assumed that this would be used only
> in those cases where the programmer knew that types wouldn't
> vary between evaluations of the code in question.  When the coder
> believed that a function would only get a certain type of argument,
> he could use, say, "defstatic" instead of "defun" to get a macro
> that was replaced like the usual CL macro by a function, but upon
> the first evaluation, rather than as part of compile time.  The
> whole point is conciseness.

Examples would help here, but I don't see what you're getting at.

> >> No doubt I'll quickly be informed if my rambling doesn't make sense... :)
> > It makes sense... but it's a long road.  And it can work, but it
> > still sounds like you've got some pretty woolly ideas about how
> > and are likely to be disappointed to some extent by the harsh
> > and even probabilistic/heuristic realities.
> 
> Indeed.  I'm quite the newbie in this area; the only thing
> that might save me is that I think my ideas are simpler to
> implement than promise-based function evaluation.

Ease of implementation is often a false lure in programming language design.
I suggest worrying about ease and correctness of end-user programming.
Languages are implemented very few times compared to end-user programs,
so making the implementation hard isn't a huge cost because it's not 
done that much.  The "inner loop" of programming is end-user programming,
not compiler implementation.
From: Robert Maas, see http://tinyurl.com/uh3t
Subject: Re: Runtime macros
Date: 
Message-ID: <REM-2005jul08-006@Yahoo.Com>
> From: Kent M Pitman <······@nhplace.com>
> In most other languages, you have to learn a special macro or
> template language that is a different kind of programming than you
> use in your daily life.  In Lisp, you just apply the skills you
> already have from ordinary data manipulation.  That's why so many
> people get excited about Lisp macros--they are able to approach the
> problem as experts immediately rather than having to take a new
> certificate course.

Ah, once again you hit the nail on the head (for example, your essay on
"The best of intentions" is the most insightful of your many). I'm
quoting you as a marker interface to highlight what you posted so I can
retrieve it later and so that anyone following my posts via Google will
see this particular "great quote" of yours too, also known in newsgroup
jargon as "carrying". When discussing various languages with newcomers,
I'll want to show them this quote of yours. Maybe "tagging" is the best
term to say what I'm doing to your quote.
From: Ray Dillinger
Subject: Re: Runtime macros
Date: 
Message-ID: <oPlwe.2400$p%3.15367@typhoon.sonic.net>
Randall Randall wrote:
> Ray Dillinger wrote:
> 
>> Randall Randall wrote:

> It appears that extending functions to be able to manipulate
> their arguments in a macro fashion makes it hard to optimize,
> and hard for the developer and compiler both to reason about.

Yah.  It makes it so you can't really prove much about the
code statically; all you can do is assume stuff which may or
may not be true, and then keep a close eye in case the code
does something you assumed it wouldn't do, so you can "back
out" of any particular assumption.  You wind up either doing
the very *simplest* and correct (unoptimized) thing, or you
risk spending a lot of time recompiling at runtime.

> In light of that, wouldn't keeping macros as a separate type
> be more reasonable?  Pun not intended.

Well, okay, a terminology thing; you're talking about First-
order (code transforming) entities which are also First-class
(values that exist at runtime).  By most lights, these are
functions - not actually macros as the term is commonly
understood.  What you want is to be able to store these
entities in structures, pass them as arguments, return them
from functions, call them from higher-order functions,
etc....  at runtime.

In order to do this, they have to be callable, creating
stack frames and control environments, just like functions
do, and be part of your flow of control, just like functions
are.  If you can call other functions from them, both kinds
of functions have to understand how to access their lexical
and dynamic scopes just as though called from another function.

Until fairly recently, I had two function types: standard
functions that used call-by-value and lexical scope, and
"special functions" or "mu functions" that used lazy
calling and dynamic scope.  I defined the one with
"lambda" and the other with "mu". But in the presence
of my mu functions, I was already running into the
optimization problems with lambda functions that I'd
alluded to, because those optimizations are based on
assumptions that mu functions violate.

Also, making them able to call each other involved
ever-escalating amounts of hair and ever more specialized
versions of higher-order functions and control constructs,
until I developed a "standard" kind of stack frame that
either kind of function could instantiate and from which
both kinds of called functions knew how to access
their inherited scopes.  The only way to do that was to
make the function responsible for evaluating its own
arguments so the calling site didn't have to know which
kind of function it was calling - i.e., make the "lazy
semantics" already implemented for mu functions apply to
standard functions as well.  This was simple; all I had
to do was use "mu" for every function, but put code in the
standard functions to evaluate all their arguments.
Call-by-value semantics are easy to simulate in a lazy
language; the converse is not true.
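
Using the DELAY/FORCE sketch from my earlier post, that simulation is
one FORCE per argument: a "strict" function just forces everything up
front.

  (defun strict-add (p q)        ; p and q arrive as promises
    (+ (force p) (force q)))

  ;; (strict-add (delay (+ 1 2)) (delay 4))  => 7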

And once I'd done that, they were "brothers under the skin"
so to speak, and there was no further point keeping them
separate.  I redefined "lambda" in terms of "mu", unified
the function types, and the semantics got a whole lot
cleaner.  I no longer had to have separate higher-order
functions for different function types, no longer had to
have alternate execution paths depending on whether the
caller or callee was a mu function or lambda function, no
longer had to have hairy code to access parent scopes
depending on the type of the stack frame, etc...  unifying
the function types made the implementation code a *LOT*
simpler, while simultaneously making the semantics a lot
more powerful. And, as I said, I wasn't getting much in
optimization benefits any more from the existence of a
separate class of lambda functions.

If you haven't yet seen the hair that comes from
maintaining separate function types for code-transforming
and not, it may not be obvious to you; it wasn't obvious
to me, anyway.  But now I feel sort of like the design
was inevitable given the goal of having entities at
runtime which are both first-class and first-order.

> Rather, my suggestion was to still have argument source
> modification limited to macros, and just allow macros to
> indicate at what pass they should be replaced by the
> modification.  The hope is to make things simpler, rather
> than immensely more complex.

:-)  Simpler.  <chuckle>  Simpler.  Right.  It's my
opinion that if you do anything that goes beyond CL
macros, you're going to be in territory more complicated
than what I've described, until you "come all the way"
to merging them completely.

>> Well, aside from the toy system I've got, which is not ready for
>> primetime, you could look at 3-Lisp, which I've only recently
>> been pointed at myself.
> 
> 
> Yes, I read said pointing at, and looked for a bit on Google
> for it.  Other than a PDF accessible with a membership to the
> ACM, and a bunch of references to an interim manual which no
> one seems to have online, I haven't found much.

Yah, that's annoying.  The basic thing of 3Lisp is the infinite
"reflective tower"  - which means basically that code which
transforms other code is considered part of a separate "layer"
of the language, and 3-lisp determines lazily how many layers
it needs, giving as much macroexpansion and code examination
as a program wants, depending on the particular program at
runtime.  But the implementation and semantics are, to put
it mildly, not simple.  Under the hood, it uses something
remarkably similar to the kind of extended functions I've
described - but does a better job of keeping the "layers"
of the language separate, which is extremely disciplined and
complex, but which may give opportunities to optimize that
I haven't realized.

> Indeed.  I'm quite the newbie in this area; the only thing
> that might save me is that I think my ideas are simpler to
> implement than promise-based function evaluation.

If you implement first-order entities that exist at runtime,
you have to call them somehow.  As far as I can tell, stack
frames and argument passing (making them functions) is the
only way.  And argument passing to first-order entities
involves passing the argument forms and environments - and
indeed, requires a separate environment for each form in the
argument so you can correctly match up subexpressions to
the environments in which they're to be evaluated.  Packaging
them into promises is kinda-optional, but given your stated
goal you're going to have to implement 99% of it anyway;
why not go the last yard?

				Bear
From: Randall Randall
Subject: Re: Runtime macros
Date: 
Message-ID: <1120600734.907877.74630@g43g2000cwa.googlegroups.com>
I'm reposting this, because Google Groups seems to show that it didn't
appear anywhere but on my news server.  :(  Sorry if you get two.

Ray Dillinger wrote:
> Randall Randall wrote:
>> It appears that extending functions to be able to manipulate
>> their arguments in a macro fashion makes it hard to optimize,
>> and hard for the developer and compiler both to reason about.
>
>
> Yah.  It makes it so you can't really prove much about the
> code statically; all you can do is assume stuff which may or
> may not be true, and then keep a close eye in case the code
> does something you assumed it wouldn't do, so you can "back
> out" of any particular assumption.  You wind up either doing
> the very *simplest* and correct (unoptimized) thing, or you
> risk spending a lot of time recompiling at runtime.
>
>> In light of that, wouldn't keeping macros as a separate type
>> be more reasonable?  Pun not intended.

As Kent Pitman pointed out in the guise of misunderstanding me,
I should have used a different word or phrase: "kind", "sort
of thing", since "type" is overloaded with a specific technical
meaning here.

> Well, okay, a terminology thing; you're talking about First-
> order (code transforming) entities which are also First-class
> (values that exist at runtime).


Yes.  However, the main use I'd proposed for them implicitly
depended on the evaluation of their arguments having already
happened, which I didn't notice.  After having thought
about it for a while, I think that Coby Beck had the right
idea:  just SETF the symbol-function, which is, of course,
doable now.
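
For the record, the doable-now version is just (HOT-SPOT being a
placeholder name):

  ;; Replace a definition at runtime with a compiled, declared one.
  (setf (symbol-function 'hot-spot)
        (compile nil '(lambda (x)
                        (declare (fixnum x) (optimize speed))
                        (* x x))))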

Your description of the pain of going all the way into
promises suggests to me that I'd rather focus elsewhere. :)

-- 
Randall Randall <·······@randallsquared.com>
From: Lars Brinkhoff
Subject: Re: Runtime macros
Date: 
Message-ID: <85br5p1wh4.fsf@junk.nocrew.org>
Randall Randall <·······@randallsquared.com> writes:
> Ray Dillinger wrote:
>> Well, aside from the toy system I've got, which is not ready for
>> primetime, you could look at 3-Lisp, which I've only recently been
>> pointed at myself.
> Yes, I read said pointing at, and looked for a bit on Google for it.
> Other than a PDF accessible with a membership to the ACM, and a
> bunch of references to an interim manual which no one seems to have
> online, I haven't found much.

http://repository.readscheme.org/ftp/papers/bcsmith-thesis.pdf