From: David
Subject: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <fU9Ob.1168$nC6.986@news-binary.blueyonder.co.uk>
Hi,

Is anyone here at all interested in programs to translate lisp into 
perl? I see there's a lisp interpreter on CPAN but I have implemented 
something a bit different.

Basically it's a program which can translate lisp programs (written in a 
rather scheme-like dialect - even has a single namespace) into efficient 
perl scripts. Perl has enough lisp-like features to be able to implement 
most things which can be done in lisp (lexical closures - hurrah!).

This lisp2perl translator/compiler works rather well. I've used it for a 
major project at work. It's also self hosting (used to compile itself).

I've put a bit more info, though not much, and the source on my website:-

http://www.hhdave.pwp.blueyonder.co.uk

Any thoughts or comments anyone?

-- David

From: Thomas F. Burdick
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <xcv7jzqjojn.fsf@famine.OCF.Berkeley.EDU>
David <······@SPAMAROONEY.blueyonder.co.uk> writes:

> Hi,
> 
> Is anyone here at all interested in programs to translate lisp into 
> perl? I see there's a lisp interpreter on CPAN but I have implemented 
> something a bit different.
> 
> Basically it's a program which can translate lisp programs (written in a 
> rather scheme-like dialect - even has a single namespace) into efficient 
> perl scripts. Perl has enough lisp-like features to be able to implement 
> most things which can be done in lisp (lexical closures - hurrah!).

I have a grungy Lisp->Perl compiler.  The major goal was to produce
readable Perl code, that looked like it might as well have been
written by a human, and could be maintained by a normal Perl hacker.
The lowest-level Lisp dialect is pretty much Perl-in-Lisp, then there
are enough macros and functions built on top of it to make it fairly
Lispy.
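
Just to give the flavour (these are not my real operator names, purely an
illustration), the bottom layer reads like Perl with parentheses:

    ;; invented names -- roughly one form per Perl construct
    (my $sum 0)
    (foreach $x @items
      (setf $sum (+ $sum $x)))

and the Lispy layer is nothing more than ordinary defmacros that expand
into forms like those.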

I have to say, I'm horrified that you're thinking of a scheme-like
Lisp.  Perl itself has 4 namespaces, and I preserved that at my Lisp
level (special-forms called FUNCTION, SCALAR, HASH, and ARRAY, but an
inferencing engine generally takes care of picking the correct
namespace for you).  Of course, my Lisp->Perl is hosted on Common
Lisp, so it was a pretty natural choice.
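
Roughly, it looks like this (surface syntax simplified here, but FUNCTION,
SCALAR, HASH and ARRAY are the real special-form names):

    (scalar count)                      ; -> $count
    (array items)                       ; -> @items
    (hash opts)                         ; -> %opts
    ((function push) (array items) (scalar count))  ; -> push(@items, $count)

With the inferencer you mostly just write (push items count) and it works
out which namespace each name lives in.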

> This lisp2perl translator/compiler works rather well. I've used it for a 
> major project at work. It's also self hosting (used to compile itself).
> 
> I've put a bit more info, though not much, and the source on my website:-
> 
> http://www.hhdave.pwp.blueyonder.co.uk
> 
> Any thoughts or comments anyone?

 - Since you're really compiling Scheme to Perl, I'd think scheme2perl
   would be a better name.  You might as well pick the most specific
   name (otherwise someone might think you mean elisp, for example :)

 - If you're at all interested in readability of the resulting Perl,
   don't offer gensym, just make-symbol (I accompanied mine with a
   with-gensyms macro that takes the symbol names from the vars that
   are bound to the new symbols).  The compiler has to pay attention
   to scoping anyway, so you can have it resolve the names.  Hmm, that
   wasn't a very comprehensible sentence, was it?  How about an
   example:

     (let ((a (make-symbol "a"))
           (b (make-symbol "a"))
           (c (make-symbol "a")))
       `(let ((,a 0))
          (let ((,b 100))
            (setf ,a (+ ,b 1)))
          (let ((,c "hi"))
            (print ,c))
          (print ,a)))
     => (let ((#1=#:a 0))
          (let ((#2=#:a 100))
            (setf #1# (+ #2# 1)))
          (let ((#3=#:a "hi"))
            (print #3#))
          (print #1#))
     => { my $a = 0;
          my $a2 = 100;
          $a = $a2 + 1;
          {
            my $a = "hi";
            print $a
          }
          print $a; }

 - I'm not sure how you can get better undefined-function error
   reporting if you use the scheme approach.  If you use multiple
   namespaces in your Lisp, the function namespace maps directly from
   Lisp to Perl, so you get normal Perl error reporting.

   One thing you might consider here is losing the Lisp1-ness, but
   keeping the Scheme-like handling (normal evaluation) of the first
   position in a form.  IE:

   (defun foo (x) ...)
   (let ((foo (lambda () ...)))
     ((complement (function foo)) ((var foo))))

   For an unadorned symbol in the first position, you could have the
   compiler infer the namespace based on the innermost lexical
   binding.  EG:

   (let ((x 1))
     (foo x)
     (let ((foo (lambda (x) ...)))
       (foo x)))
   <==>
   (let ((x 1))
     ((function foo) x)
     (let ((foo (lambda (x) ...)))
       ((var foo) x)))

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: David
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <LLDOb.600$pF2.340@news-binary.blueyonder.co.uk>
Thomas F. Burdick wrote:
> David <······@SPAMAROONEY.blueyonder.co.uk> writes:
> 
> 
>>Hi,
>>
>>Is anyone here at all interested in programs to translate lisp into 
>>perl? I see there's a lisp interpreter on CPAN but I have implemented 
>>something a bit different.
>>
>>Basically it's a program which can translate lisp programs (written in a 
>>rather scheme-like dialect - even has a single namespace) into efficient 
>>perl scripts. Perl has enough lisp-like features to be able to implement 
>>most things which can be done in lisp (lexical closures - hurrah!).
> 
> 
> I have a grungy Lisp->Perl compiler.  The major goal was to produce
> readable Perl code, that looked like it might as well have been
> written by a human, and could be maintained by a normal Perl hacker.
> The lowest-level Lisp dialect is pretty much Perl-in-Lisp, then there
> are enough macros and functions built on top of it to make it fairly
> Lispy.
> 
Producing readable, hackable perl code was not one of the goals of my 
program (as you might have guessed if you've seen any of the output!). I 
had sort of figured that when you started using macros of any degree of 
complexity then, whether you are translating to perl or not, you 
wouldn't want to see the entirety of the macro expanded code anyway. 
This is certainly the case with one largeish program I've written - the 
amount of expanded code is enormous compared to the un-macro-expanded 
code. Also, using (cond) statements, especially nested ones, produces 
horrible looking perl code. They compile down to lots of trinary ifs 
(a?b:c) in perl. The fact is, if I were programming in perl my program 
just wouldn't be structured like that, whereas in lisp it seems a 
natural thing to do.
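
For example, roughly (sketching the shape of the output, not pasting real
compiler output):

   (set! kind (cond ((< n 0) "negative")
                    ((= n 0) "zero")
                    (#t "positive")))
   => $kind = ($n < 0) ? "negative" : ($n == 0) ? "zero" : "positive";

Nest a couple of those and the perl gets unreadable fast.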

I guess readable code is nice to have if you are 'supposed to be' 
programming in perl, but really want to use Lisp :)

I'd be very interested in seeing the source code to your program if I may.

> I have to say, I'm horrified that you're thinking of a scheme-like
> Lisp.  Perl itself has 4 namespaces, and I preserved that at my Lisp
> level (special-forms called FUNCTION, SCALAR, HASH, and ARRAY, but an
> inferencing engine generally takes care of picking the correct
> namespace for you).  Of course, my Lisp->Perl is hosted on Common
> Lisp, so it was a pretty natural choice.
> 
I didn't intend it to be horrific! (well, not _too_ horrific :)
I know it seems an odd thing to use a 1 namespace language to translate 
to a 4 namespace language. The reason I'm doing the scheme thing as 
opposed to the common lisp thing is that I just kind of like the 1 
namespace approach. It seems to be a lot simpler and remove the 
necessity for ways of dealing with different kinds of bindings. I guess 
that's just personal preference. It does produce rather odd looking perl 
code (lots of '$fn->(...)'), but as I say, I don't really care about 
that. As long as it executes fast enough. I don't know if $fn->() 
executes any slower than &fn() - haven't checked.
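
(If I ever do check, something like this would settle it - an untested
sketch using the standard Benchmark module:

   use Benchmark qw(cmpthese);
   sub fn { $_[0] + 1 }
   my $fn = \&fn;
   cmpthese(-2, {
       'direct'   => sub { fn(42) },
       'indirect' => sub { $fn->(42) },
   });

I'd expect the difference to be lost in the noise anyway.)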

The other thing I find, with hashes and arrays and such, is that half 
the time in perl I end up using references to those things stored in 
scalars anyway. Particularly when I need nested structures.
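
For instance, in plain perl (nothing to do with the compiler) anything
nested pushes you towards references held in scalars anyway:

   my $person = { name => "Dave", scores => [ 1, 2, 3 ] };
   print $person->{scores}[1];    # prints 2

so mapping every lisp value onto a scalar doesn't lose as much as it might
sound.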
> 
>>This lisp2perl translator/compiler works rather well. I've used it for a 
>>major project at work. It's also self hosting (used to compile itself).
>>
>>I've put a bit more info, though not much, and the source on my website:-
>>
>>http://www.hhdave.pwp.blueyonder.co.uk
>>
>>Any thoughts or comments anyone?
> 
> 
>  - Since you're really compiling Scheme to Perl, I'd think scheme2perl
>    would be a better name.  You might as well pick the most specific
>    name (otherwise someone might think you mean elisp, for example :)
> 
Yeah, I know. It's just that, although it is scheme-like, it certainly 
isn't scheme. It doesn't conform to the standard. It has unschemish 
concepts of truth, for example, and the fundamental datatypes don't 
behave as they should. 'lisp' is a less specific term though, 
encompassing a multitude of similar, but distinct things. It's its own 
peculiar dialect of lisp :)

>  - If you're at all interested in readability of the resulting Perl,
>    don't offer gensym, just make-symbol (I accompanied mine with a
>    with-gensyms macro that takes the symbol names from the vars that
>    are bound to the new symbols).  The compiler has to pay attention
>    to scoping anyway, so you can have it resolve the names.  Hmm, that
>    wasn't a very comprehensible sentence, was it?  How about an
>    example:
> 
>      (let ((a (make-symbol "a"))
>            (b (make-symbol "a"))
>            (c (make-symbol "a")))
>        `(let ((,a 0))
>           (let ((,b 100))
>             (setf ,a (+ ,b 1)))
>           (let ((,c "hi"))
>             (print ,c))
>           (print ,a)))
>      => (let ((#1=#:a 0))
>           (let ((#2=#:a 100))
>             (setf #1# (+ #2# 1)))
>           (let ((#3=#:a "hi"))
>             (print #3#))
>           (print #1#))
>      => { my $a = 0;
>           my $a2 = 100;
>           $a = $a2 + 1;
>           {
>             my $a = "hi";
>             print $a
>           }
>           print $a; }
> 
I'm not quite sure I understand this at the moment. I'll have to think 
about it some more. If the generated perl looked like that wouldn't it 
clash with a variable called 'a'? I know I'm probably being thick here.

>  - I'm not sure how you can get better undefined-function error
>    reporting if you use the scheme approach.  If you use multiple
>    namespaces in your Lisp, the function namespace maps directly from
>    Lisp to Perl, so you get normal Perl error reporting.
> 
I know, and it would seem the obvious thing to do wouldn't it? I still 
like the single namespace though, but fear not - I thought of a solution 
[me: looks up solution in files of notes about the program...]
Ah, here we go: I'm planning to modify the compiler to keep track of 
lexical scope. That way, when it compiles a reference to an undefined 
variable it should know and generate a warning about it. This may be 
related to the gensym issue above. I guess generating symbols can be 
done if I keep track of scope. I'll have to think about that some more.

Incidentally, can you think of a good argument AGAINST the scheme single 
namespace approach?

>    One thing you might consider here is losing the Lisp1-ness, but
>    keeping the Scheme-like handling (normal evaluation) of the first
>    position in a form.  IE:
> 
>    (defun foo (x) ...)
>    (let ((foo (lambda () ...)))
>      ((complement (function foo)) ((var foo))))
> 
>    For an unadorned symbol in the first position, you could have the
>    compiler infer the namespace based on the innermost lexical
>    binding.  EG:
> 
>    (let ((x 1))
>      (foo x)
>      (let ((foo (lambda (x) ...)))
>        (foo x)))
>    <==>
>    (let ((x 1))
>      ((function foo) x)
>      (let ((foo (lambda (x) ...)))
>        ((var foo) x)))
> 
That is a thought. I suppose it would make it play nicer with 'normal' 
perl code. It would be particularly useful for using built in functions. 
At the moment I have to 'declare' those:-

(<perl-sub> print)
which expands to a (defmacro ...)

I guess the first thing to do in any case is to extend the compilation 
functions so that they keep track of lexical scope.
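By "keep track of lexical scope" I mean something along these lines (plain
scheme, sketched from scratch rather than lifted from lisp2perl; other
special forms - if, set!, lambda, define - would each need a clause like
the let one):

   ;; env is just the list of symbols lexically visible at this point
   (define (check-refs form env)
     (cond ((symbol? form)
            (if (not (memq form env))
                (display (list "warning: unbound variable" form))))
           ((not (pair? form)) #f)
           ((eq? (car form) 'quote) #f)
           ((eq? (car form) 'let)
            (for-each (lambda (binding) (check-refs (cadr binding) env))
                      (cadr form))
            (let ((vars (map car (cadr form))))
              (for-each (lambda (e) (check-refs e (append vars env)))
                        (cddr form))))
           (else
            (for-each (lambda (e) (check-refs e env)) form))))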
From: Thomas F. Burdick
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <xcvptdfit3e.fsf@famine.OCF.Berkeley.EDU>
David <······@SPAMAROONEY.blueyonder.co.uk> writes:

> Producing readable, hackable perl code was not one of the goals of my 
> program (as you might have guessed if you've seen any of the output!). I 
> had sort of figured that when you started using macros of any degree of 
> complexity then, whether you are translating to perl or not, you 
> wouldn't want to see the entirety of the macro expanded code anyway. 
> This is certainly the case with one largeish program I've written - the 
> amount of expanded code is enormous compared to the un-macro-expanded 
> code.

It does take some care when writing macros, but it's pretty much the
same as always being in "I might have to debug this macro" mode.  If
you use meaningful variable names, and prune unused branches, it helps
a lot.  Plus, if you allow the macros to insert comments into the
resulting code, the volume of Perl code isn't so daunting (kinda like
if you wrote it by hand).

> Also, using (cond) statements, especially nested ones, produces 
> horrible looking perl code. They compile down to lots of trinary ifs 
> (a?b:c) in perl. The fact is, if I were programming in perl my program 
> just wouldn't be structured like that, whereas in lisp it seems a 
> natural thing to do.

Right, whereas in Lisp you might write (setf x (cond ...)), in Perl,
you'd write:

  if (...) {... $x = 1;}
  elsif (...) {... $x = 2;}
  else {... $x = 3;}

In my compiler, the translators for compound expressions have three
modes: producing code for side-effect only, producing code used for
its value, and producing code to put the value in a specific location.
So, depending on context, cond might expand into if/elsif/else, or
trinary-if, or if it's complicated and/or nested, a call to an
anonymous lambda:

  (sub { if (...) {... return 1;}
         elsif (...) {... return 2;}
         else {... return 3;} })->();
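
The dispatch itself is nothing deep - a toy version (mode names and string
output invented for the example, not my actual code):

  (defun render-if (test then else mode &optional target)
    (ecase mode
      (:effect (format nil "if (~A) { ~A; } else { ~A; }" test then else))
      (:value  (format nil "(~A) ? (~A) : (~A)" test then else))
      (:assign (format nil "if (~A) { ~A = ~A; } else { ~A = ~A; }"
                       test target then target else))))

  ;; (render-if "$n > 0" "1" "-1" :value)  =>  "($n > 0) ? (1) : (-1)"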

> I guess readable code is nice to have if you are 'supposed to be' 
> programming in perl, but really want to use Lisp :)

Well, it wasn't so much "supposed to be", as much as the final product
had to be in Perl, so it would be easy to find someone later to
maintain it.  No one had any problem with me using whatever expert
development tools I wanted, as long as the output was maintainable
as-is.  But, pretty much, yeah :)

> I'd be very interested in seeing the source code to your program if I may.

I'm sitting on it, pending my thinking about how much time/effort it
would take to make it useful to the general public, and if there's a
market for it or not.  And it's a mess of unfactored hacks, because I
was concentrating on the systems I was supposed to be writing, not the
compiler itself.

> Thomas F. Burdick wrote:
>
> > I have to say, I'm horrified that you're thinking of a scheme-like
> > Lisp.  Perl itself has 4 namespaces, and I preserved that at my Lisp
> > level (special-forms called FUNCTION, SCALAR, HASH, and ARRAY, but an
> > inferencing engine generally takes care of picking the correct
> > namespace for you).  Of course, my Lisp->Perl is hosted on Common
> > Lisp, so it was a pretty natural choice.
>
> I didn't intend it to be horrific! (well, not _too_ horrific :)

Horrific because it's a Lisp->Perl compiler, but not because of the
namespace issue? :)

> I know it seems an odd thing to use a 1 namespace language to translate 
> to a 4 namespace language. The reason I'm doing the scheme thing as 
> opposed to the common lisp thing is that I just kind of like the 1 
> namespace approach. It seems to be a lot simpler and remove the 
> necessity for ways of dealing with different kinds of bindings. I guess 
> that's just personal preference. It does produce rather odd looking perl 
> code (lots of '$fn->(...)'), but as I say, I don't really care about 
> that. As long as it executes fast enough. I don't know if $fn->() 
> executes any slower than &fn() - haven't checked.

I'd imagine it is, but I wouldn't sweat an added indirection when
you're talking about a bytecode interpreter.

> The other thing I find, with hashes and arrays and such, is that half 
> the time in perl I end up using references to those things stored in 
> scalars anyway. Particularly when I need nested structures.

Certainly, references to hashes especially are important, in
particular for supporting defstruct.  But if you want to interact with
Perl builtins, being able to spread arrays is important.  I guess you
don't need all 4 namespaces for that, it just makes the resulting Perl
less crazy-looking.

> >>This lisp2perl translator/compiler works rather well. I've used it for a 
> >>major project at work. It's also self hosting (used to compile itself).

Oooh, just noticed this.  I'm glad I didn't try to go that route, I
was happy to have all of Common Lisp at my disposal when writing my
compiler.  You might want to reconsider this decision, if you find
yourself having implementation difficulties -- compilers are a lot
easier to write in big languages (like CL, or one of the big scheme
implementations' dialects with all the add-ons).

> >  - If you're at all interested in readability of the resulting Perl,
> >    don't offer gensym, just make-symbol (I accompanied mine with a
> >    with-gensyms macro that takes the symbol names from the vars that
> >    are bound to the new symbols).  The compiler has to pay attention
> >    to scoping anyway, so you can have it resolve the names.  Hmm, that
> >    wasn't a very comprehensible sentence, was it?  How about an
> >    example:
> > 
> >      (let ((a (make-symbol "a"))
> >            (b (make-symbol "a"))
> >            (c (make-symbol "a")))
> >        `(let ((,a 0))
> >           (let ((,b 100))
> >             (setf ,a (+ ,b 1)))
> >           (let ((,c "hi"))
> >             (print ,c))
> >           (print ,a)))
> >      => (let ((#1=#:a 0))
> >           (let ((#2=#:a 100))
> >             (setf #1# (+ #2# 1)))
> >           (let ((#3=#:a "hi"))
> >             (print #3#))
> >           (print #1#))
> >      => { my $a = 0;
> >           my $a2 = 100;
> >           $a = $a2 + 1;
> >           {
> >             my $a = "hi";
> >             print $a
> >           }
> >           print $a; }
>
> I'm not quite sure I understand this at the moment. I'll have to think 
> about it some more. If the generated perl looked like that wouldn't it 
> clash with a variable called 'a'? I know I'm probably being thick here.

The point is that a naive translation would be:

  { my $a1 = 0;
    { my $a2 = 100;
      $a1 = $a2 + 1;  # clash
      { my $a3 = "hi";
        print $a3;
      }
    }
    print $a1;
  }

If you named $a1, $a2, and $a3 all just $a, it would work, except for
the line labeled "clash", which refers to an $a from two different
scoping levels.  So you can name $a1 and $a3 plain old $a, and only
need to give $a2 a distinct name.  In $a3's scope, it is the only $a
variable used.
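
So the renaming rule is just: reuse the plain name unless an enclosing
scope already uses it *and* that outer variable is still referenced inside
the new scope.  A toy version of the name chooser (not my actual code,
argument names invented):

  (defun perl-name (base outer-names outer-still-referenced-p)
    ;; BASE is the wanted name, OUTER-NAMES the names taken by enclosing
    ;; scopes, OUTER-STILL-REFERENCED-P whether a same-named outer
    ;; variable is referenced inside the new scope
    (if (and (member base outer-names :test #'string=)
             outer-still-referenced-p)
        (loop for i from 2
              for candidate = (format nil "~A~D" base i)
              unless (member candidate outer-names :test #'string=)
                return candidate)
        base))

  ;; (perl-name "a" '("a") t)   => "a2"   ; the clashing binding
  ;; (perl-name "a" '("a") nil) => "a"    ; no live outer reference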

> >  - I'm not sure how you can get better undefined-function error
> >    reporting if you use the scheme approach.  If you use multiple
> >    namespaces in your Lisp, the function namespace maps directly from
> >    Lisp to Perl, so you get normal Perl error reporting.
>
> I know, and it would seem the obvious thing to do wouldn't it? I still 
> like the single namespace though, but fear not - I thought of a solution 
> [me: looks up solution in files of notes about the program...]
> Ah, here we go: I'm planning to modify the compiler to keep track of 
> lexical scope. That way, when it compiles a reference to an undefined 
> variable it should know and generate a warning about it. This may be 
> related to the gensym issue above. I guess generating symbols can be 
> done if I keep track of scope. I'll have to think about that some more.

Yeah, they're def related.

> Incidentally, can you think of a good argument AGAINST the scheme single 
> namespace approach?

You get to it yourself in a second :)

> >    One thing you might consider here is losing the Lisp1-ness, but
> >    keeping the Scheme-like handling (normal evaluation) of the first
> >    position in a form.
 [snip]
> That is a thought. I suppose it would make it play nicer with 'normal' 
> perl code. It would be particularly useful for using built in functions. 
> At the moment I have to 'declare' those:-
> 
> (<perl-sub> print)
> which expands to a (defmacro ...)

Yeah, that's a benefit of recognizing at least the function and
variable namespaces.  That way, you can easily use normal Perl
functions, and your functions aren't second-class citizens (eg, you
can write a module that Perl coders can use directly, normally).

> I guess the first thing to do in any case is to extend the compilation 
> functions so that they keep track of lexical scope.

That is the traditional thing to do when writing scheme compilers :-)

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: David
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <9s_Ob.178$ff5.141@news-binary.blueyonder.co.uk>
Thomas F. Burdick wrote:

> David <······@SPAMAROONEY.blueyonder.co.uk> writes:
> 
> 
>>Producing readable, hackable perl code was not one of the goals of my 
>>program (as you might have guessed if you've seen any of the output!). I 
>>had sort of figured that when you started using macros of any degree of 
>>complexity then, whether you are translating to perl or not, you 
>>wouldn't want to see the entirety of the macro expanded code anyway. 
>>This is certainly the case with one largeish program I've written - the 
>>amount of expanded code is enormous compared to the un-macro-expanded 
>>code.
> 
> 
> It does take some care when writing macros, but it's pretty much the
> same as always being in "I might have to debug this macro" mode.  If
> you use meaningful variable names, and prune unused branches, it helps
> a lot.  Plus, if you allow the macros to insert comments into the
> resulting code, the volume of Perl code isn't so daunting (kinda like
> if you wrote it by hand).
> 
> 
>>Also, using (cond) statements, especially nested ones, produces 
>>horrible looking perl code. They compile down to lots of trinary ifs 
>>(a?b:c) in perl. The fact is, if I were programming in perl my program 
>>just wouldn't be structured like that, whereas in lisp it seems a 
>>natural thing to do.
> 
> 
> Right, whereas in Lisp you might write (setf x (cond ...)), in Perl,
> you'd write:
> 
>   if (...) {... $x = 1;}
>   elsif (...) {... $x = 2;}
>   else {... $x = 3;}
> 
> In my compiler, the translators for compound expressions have three
> modes: producing code for side-effect only, producing code used for
> its value, and producing code to put the value in a specific location.
> So, depending on context, cond might expand into if/elsif/else, or
> trinary-if, or if it's complicated and/or nested, a call to an
> anonymous lambda:
> 
>   (sub { if (...) {... return 1;}
>          elsif (...) {... return 2;}
>          else {... return 3;} })->();
> 
> 
Sounds much like what I did, except that I had 2 modes, not 3. I did have 
to create an anonymous sub (lambda) in the perl in certain cases and 
immediately call it. That was cunning, I thought. My compiler doesn't do 
that in the case above though. I actually noticed (I think) that ifs 
which were forced to compile to the trinary operator (since the value 
was used) were somewhat faster than the statement-style (side-effect 
only) ones too. I suppose that's due to not creating a new scope in perl.

>>I guess readable code is nice to have if you are 'supposed to be' 
>>programming in perl, but really want to use Lisp :)
> 
> 
> Well, it wasn't so much "supposed to be", as much as the final product
> had to be in Perl, so it would be easy to find someone later to
> maintain it.  No one had any problem with me using whatever expert
> development tools I wanted, as long as the output was maintainable
> as-is.  But, pretty much, yeah :)
> 
> 
Fair enough. It was good that you managed to find a way of still using 
lisp despite the requirement for maintainability by a perl programmer.

>>I'd be very interested in seeing the source code to your program if I may.
> 
> 
> I'm sitting on it, pending my thinking about how much time/effort it
> would take to make it useful to the general public, and if there's a
> market for it or not.  And it's a mess of unfactored hacks, because I
> was concentrating on the systems I was supposed to be writing, not the
> compiler itself.
> 
> 
I know the feeling. There are lots of improvements I want to make to 
mine as well. For one thing I want to change the way the whole thing 
works as I've used a bit of a hack to get things like this to work:-

(define (f x)
	...some function of x...)

(defmacro my-macro (a b)
	(list 'foo a (f b)))

Basically, in order that the macro can use the function defined before 
it the function must be evaluated at compile time as well as runtime. I 
think the root of this problem is that I want to be able to translate 
the lisp code into a perl script which can be executed in the normal 
way. I don't want to have to read in the lisp code and translate and 
execute each form one by one. This means that the macro definitions must be 
executed at compile time (obviously), hence function definitions must be 
executed at compile time. I don't think there should really be such a 
distinction between compile time and runtime. I have found a solution 
though, which lies in having an intermediate lisp representation which 
is just fully macro expanded lisp code which is generated as a side 
effect of 'running' a lisp file. This means that you can't just compile 
the lisp to perl as such, you have to run it, and it gets compiled 
(partially) as a side effect. Separate modules could then be linked, and 
fully translated to perl, later. Of course, running a file is not a 
problem if all it does is define things (functions and macros).

Does this seem sane?

Even though lisp2perl isn't nearly where I'd like it to be I decided to 
just release it anyway. Where I work, even though I use this and people 
see its benefits, I think it would have been frowned upon if I'd spent 
loads of time on it instead of what I was using it for (what a 
surprise). I wrote much of it at home.

>>Thomas F. Burdick wrote:
>>
>>
>>>I have to say, I'm horrified that you're thinking of a scheme-like
>>>Lisp.  Perl itself has 4 namespaces, and I preserved that at my Lisp
>>>level (special-forms called FUNCTION, SCALAR, HASH, and ARRAY, but an
>>>inferencing engine generally takes care of picking the correct
>>>namespace for you).  Of course, my Lisp->Perl is hosted on Common
>>>Lisp, so it was a pretty natural choice.
>>
>>I didn't intend it to be horrific! (well, not _too_ horrific :)
> 
> 
> Horrific because it's a Lisp->Perl compiler, but not because of the
> namespace issue? :)
> 
> 
Well, quite. There is something 'unsettling' about the concept. Really 
I'm looking forward to the release of Perl 6, because then I could just 
compile lisp to parrot vm code and still get the benefits of using perl 
code from lisp. That doesn't help your perl generation of course. I'm 
surprised to find that someone else has done something similar.

>>I know it seems an odd thing to use a 1 namespace language to translate 
>>to a 4 namespace language. The reason I'm doing the scheme thing as 
>>opposed to the common lisp thing is that I just kind of like the 1 
>>namespace approach. It seems to be a lot simpler and remove the 
>>necessity for ways of dealing with different kinds of bindings. I guess 
>>that's just personal preference. It does produce rather odd looking perl 
>>code (lots of '$fn->(...)'), but as I say, I don't really care about 
>>that. As long as it executes fast enough. I don't know if $fn->() 
>>executes any slower than &fn() - haven't checked.
> 
> 
> I'd imagine it is, but I wouldn't sweat an added indirection when
> you're talking about a bytecode interpreter.
> 
> 
>>The other thing I find, with hashes and arrays and such, is that half 
>>the time in perl I end up using references to those things stored in 
>>scalars anyway. Particularly when I need nested structures.
> 
> 
> Certainly, references to hashes especially are important, in
> particular for supporting defstruct.  But if you want to interact with
> Perl builtins, being able to spread arrays is important.  I guess you
> don't need all 4 namespaces for that, it just makes the resulting Perl
> less crazy-looking.
> 
> 
I know what you mean. Interacting with built ins is something I haven't 
really solved nicely yet.

>>>>This lisp2perl translator/compiler works rather well. I've used it for a 
>>>>major project at work. It's also self hosting (used to compile itself).
> 
> 
> Oooh, just noticed this.  I'm glad I didn't try to go that route, I
> was happy to have all of Common Lisp at my disposal when writing my
> compiler.  You might want to reconsider this decision, if you find
> yourself having implementation difficulties -- compilers are a lot
> easier to write in big languages (like CL, or one of the big scheme
> implementations' dialects with all the add-ons).
> 
> 
>>> - If you're at all interested in readability of the resulting Perl,
>>>   don't offer gensym, just make-symbol (I accompanied mine with a
>>>   with-gensyms macro that takes the symbol names from the vars that
>>>   are bound to the new symbols).  The compiler has to pay attention
>>>   to scoping anyway, so you can have it resolve the names.  Hmm, that
>>>   wasn't a very comprehensible sentence, was it?  How about an
>>>   example:
>>>
>>>     (let ((a (make-symbol "a"))
>>>           (b (make-symbol "a"))
>>>           (c (make-symbol "a")))
>>>       `(let ((,a 0))
>>>          (let ((,b 100))
>>>            (setf ,a (+ ,b 1)))
>>>          (let ((,c "hi"))
>>>            (print ,c))
>>>          (print ,a)))
>>>     => (let ((#1=#:a 0))
>>>          (let ((#2=#:a 100))
>>>            (setf #1# (+ #2# 1)))
>>>          (let ((#3=#:a "hi"))
>>>            (print #3#))
>>>          (print #1#))
>>>     => { my $a = 0;
>>>          my $a2 = 100;
>>>          $a = $a2 + 1;
>>>          {
>>>            my $a = "hi";
>>>            print $a
>>>          }
>>>          print $a; }
>>
>>I'm not quite sure I understand this at the moment. I'll have to think 
>>about it some more. If the generated perl looked like that wouldn't it 
>>clash with a variable called 'a'? I know I'm probably being thick here.
> 
> 
> The point is that a naive translation would be:
> 
>   { my $a1 = 0;
>     { my $a2 = 100;
>       $a1 = $a2 + 1;  # clash
>       { my $a3 = "hi";
>         print $a3;
>       }
>     }
>     print $a1;
>   }
> 
> If you named $a1, $a2, and $a3 all just $a, it would work, except for
> the line labeled "clash", which refers to an $a from two different
> scoping levels.  So you can name $a1 and $a3 plain old $a, and only
> need to give $a2 a distinct name.  In $a3's scope, it is the only $a
> variable used.
> 
> 
>>> - I'm not sure how you can get better undefined-function error
>>>   reporting if you use the scheme approach.  If you use multiple
>>>   namespaces in your Lisp, the function namespace maps directly from
>>>   Lisp to Perl, so you get normal Perl error reporting.
>>
>>I know, and it would seem the obvious thing to do wouldn't it? I still 
>>like the single namespace though, but fear not - I thought of a solution 
>>[me: looks up solution in files of notes about the program...]
>>Ah, here we go: I'm planning to modify the compiler to keep track of 
>>lexical scope. That way, when it compiles a reference to an undefined 
>>variable it should know and generate a warning about it. This may be 
>>related to the gensym issue above. I guess generating symbols can be 
>>done if I keep track of scope. I'll have to think about that some more.
> 
> 
> Yeah, they're def related.
> 
> 
>>Incidentally, can you think of a good argument AGAINST the scheme single 
>>namespace approach?
> 
> 
> You get to it yourself in a second :)
> 
> 
>>>   One thing you might consider here is losing the Lisp1-ness, but
>>>   keeping the Scheme-like handling (normal evaluation) of the first
>>>   position in a form.
> 
>  [snip]
> 
>>That is a thought. I suppose it would make it play nicer with 'normal' 
>>perl code. It would be particularly useful for using built in functions. 
>>At the moment I have to 'declare' those:-
>>
>>(<perl-sub> print)
>>which expands to a (defmacro ...)
> 
> 
> Yeah, that's a benefit of recognizing at least the function and
> variable namespaces.  That way, you can easily use normal Perl
> functions, and your functions aren't second-class citizens (eg, you
> can write a module that Perl coders can use directly, normally).
> 
> 
Well yes, I know _that_ benefit of the 2 namespace approach. But I mean, 
forgetting about perl (ie if we were compiling to something else) can 
you think of any benefit of 2 namespaces?

More sane integration with normal perl code would definitely be good. 
I'll do something about it if/when I get chance. I just wish I had more 
time to work on it. Ho hum.

>>I guess the first thing to do in any case is to extend the compilation 
>>functions so that they keep track of lexical scope.
> 
> 
> That is the traditional thing to do when writing scheme compilers :-)
> 
Yeah, sounds like a good idea. I'm new at this :) It's a slightly unusual 
compilation target too.
From: Thomas F. Burdick
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <xcvk73nhx9p.fsf@famine.OCF.Berkeley.EDU>
David <······@SPAMAROONEY.blueyonder.co.uk> writes:

> Thomas F. Burdick wrote:
> 
> > I'm sitting on it, pending my thinking about how much time/effort it
> > would take to make it useful to the general public, and if there's a
> > market for it or not.  And it's a mess of unfactored hacks, because I
> > was concentrating on the systems I was supposed to be writing, not the
> > compiler itself.
>
> I know the feeling. There are lots of improvements I want to make to 
> mine as well. For one thing I want to change the way the whole thing 
> works as I've used a bit of a hack to get things like this to work:-
> 
> (define (f x)
> 	...some function of x...)
> 
> (defmacro my-macro (a b)
> 	(list 'foo a (f b)))
> 
> Basically, in order that the macro can use the function defined before 
> it the function must be evaluated at compile time as well as runtime.

Since your system is self-hosting, that shouldn't be a problem.  Do
you have something like Common Lisp's eval-when?  If not, you should
read the page in the spec

  http://www.lispworks.com/reference/HyperSpec/Body/s_eval_w.htm#eval-when

especially the Notes section at the bottom.  In CL, forms like defun
and defmacro (when they're toplevel) cause the function to be known at
compile time by using eval-when.  It also lets you write your own
forms of this type.
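
For instance, sticking with your f / my-macro example, wrapping the
definition in an eval-when is all it takes (the wrapper macro's name is
made up, the rest is plain CL):

  (defmacro define-at-compile-time (name args &body body)
    `(eval-when (:compile-toplevel :load-toplevel :execute)
       (defun ,name ,args ,@body)))

  (define-at-compile-time f (x) (* x x))

  (defmacro my-macro (a b)
    (list 'foo a (f b)))    ; F is callable while this file is compiled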

> I 
> think the root of this problem is that I want to be able to translate 
> the lisp code into a perl script which can be executed in the normal 
> way. I don't want to have to read in the lisp code and translate and 
> execute each form one by one. This means that the macro definitions must be 
> executed at compile time (obviously), hence function definitions must be 
> executed at compile time. I don't think there should really be such a 
> distinction between compile time and runtime.

Actually, I think the answer is probably having a clearer concept of
time(s) in your Lisp dialect.  When the compiler sees a toplevel form
that macroexpands into (eval-when (compile) ...), it should
recursively invoke itself, to deal with the eval-when, evaluate that
in the Perl interpreter, then get back to the job at hand.

> I have found a solution 
> though, which lies in having an intermediate lisp representation which 
> is just fully macro expanded lisp code which is generated as a side 
> effect of 'running' a lisp file. This means that you can't just compile 
> the lisp to perl as such, you have to run it, and it gets compiled 
> (partially) as a side effect. Separate modules could then be linked, and 
> fully translated to perl, later. Of course, running a file is not a 
> problem if all it does is define things (functions and macros).
> 
> Does this seem sane?

It seems messy.  You should be able to start Perl, load all the
functions that your macros use, compile the files defining and using
them, then quit Perl and load the resulting .pl files.  Using a
CL-like concept of times (compile/macroexpand, load, and eval) would
help keep things cleaner.

(In my system, the macros are run in the hosting Common Lisp, so my
issues were different.  You define Perl functions with perlisp:defun,
but they're not available at compile-time.  Macros can use functions
defined with common-lisp:defun).

> > Horrific because it's a Lisp->Perl compiler, but not because of the
> > namespace issue? :)
>
> Well, quite. There is something 'unsettling' about the concept. Really 
> I'm looking forward to the release of Perl 6

I wouldn't hold my breath.  And personally, given the
backwards-incompatibilities of Perl's past, I hope it never comes.

> > Yeah, that's a benefit of recognizing at least the function and
> > variable namespaces.  That way, you can easily use normal Perl
> > functions, and your functions aren't second-class citizens (eg, you
> > can write a module that Perl coders can use directly, normally).
>
> Well yes, I know _that_ benefit of the 2 namespace approach. But I mean, 
> forgetting about perl (ie if we were compiling to something else) can 
> you think of any benefit of 2 namespaces?

Oh, certainly.  In the realm of purely style issues, there's the
nicety of being able to have a type (eg, cons) whose constructor
function is the same as the type's name (cons), and being able to
stick an instance of this type in a variable of the same name, while
still being able to use the constructor function.  Eg:

  (let ((cons (assoc 'foo alist)))
    (if cons
        (setf (cdr cons) 'bar)
        (setf alist (cons (cons 'foo 'bar) alist))))

There are a lot of style issues like this.  But the big reason I bring
up something that I know Schemers would prefer to avoid, is that it's
important for supporting defmacro-style macros.  In a Lisp-1, you
worry about binding some name with a let, and in the body of that let,
having a macro expand into a call to that name.  This is the problem
that syntactic closures, and Scheme's pattern-matching macros were
invented to solve.  In a Lisp-2, there is still a *possibility* of
doing this, using flet/labels, but it's much less likely to happen.
If you add a package system, like in CL, the likelihood of a user
accidentally shadowing a global function definition is almost nil.
It's essentially a non-issue.
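
The classic illustration:

  (defmacro make-pair (x y)
    `(list ,x ,y))

  (let ((list 3))          ; user innocently reuses the name LIST
    (make-pair list 4))
  ;; Lisp-1: the expansion (list list 4) tries to call 3 -- boom
  ;; Lisp-2: the head position uses the function namespace, so this is
  ;;         just the ordinary LIST function applied to 3 and 4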

And remember, Lisp-2'ness is orthogonal to normal evaluation of the
first element in forms.  You could have a perfectly legit Lisp-2 that
would allow

  (flet ((f (x) (lambda (y) (+ x y))))
    ((f 10) 5))

> > That is the traditional thing to do when writing scheme compilers :-)
>
> Yeah, sounds like a good idea. I'm new at this :) It's a slightly unusual 
> compilation target too.

No kidding.  I definitely get a kick out of the fact that someone else
is doing it too.  BTW, you might be interested to know about Linj, a
Lisp->Java compiler that produces readable Java from a CL-like dialect
of Lisp.  It's not currently available, but its developer has given
presentations/demos at the last two International Lisp Conferences.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: David
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <PDCRb.440$Pw5.354@news-binary.blueyonder.co.uk>
Thomas F. Burdick wrote:
> David <······@SPAMAROONEY.blueyonder.co.uk> writes:
> 
> 
>>Thomas F. Burdick wrote:
>>
>>
>>>I'm sitting on it, pending my thinking about how much time/effort it
>>>would take to make it useful to the general public, and if there's a
>>>market for it or not.  And it's a mess of unfactored hacks, because I
>>>was concentrating on the systems I was supposed to be writing, not the
>>>compiler itself.
>>
>>I know the feeling. There are lots of improvements I want to make to 
>>mine as well. For one thing I want to change the way the whole thing 
>>works as I've used a bit of a hack to get things like this to work:-
>>
>>(define (f x)
>>	...some function of x...)
>>
>>(defmacro my-macro (a b)
>>	(list 'foo a (f b)))
>>
>>Basically, in order that the macro can use the function defined before 
>>it the function must be evaluated at compile time as well as runtime.
> 
> 
> Since your system is self-hosting, that shouldn't be a problem.  Do
> you have something like Common Lisp's eval-when?  If not, you should
> read the page in the spec
> 
>   http://www.lispworks.com/reference/HyperSpec/Body/s_eval_w.htm#eval-when
> 
> especially the Notes section at the bottom.  In CL, forms like defun
> and defmacro (when they're toplevel) cause the function to be known at
> compile time by using eval-when.  It also lets you write your own
> forms of this type.
> 
> 
>>I 
>>think the root of this problem is that I want to be able to translate 
>>the lisp code into a perl script which can be executed in the normal 
>>way. I don't want to have to read in the lisp code and translate and 
>>execute each form one by one. This means that the macro definitions must be 
>>executed at compile time (obviously), hence function definitions must be 
>>executed at compile time. I don't think there should really be such a 
>>distinction between compile time and runtime.
> 
> 
> Actually, I think the answer is probably having a clearer concept of
> time(s) in your Lisp dialect.  When the compiler sees a toplevel form
> that macroexpands into (eval-when (compile) ...), it should
> recursively invoke itself, to deal with the eval-when, evaluate that
> in the Perl interpreter, then get back to the job at hand.
> 
> 
>>I have found a solution 
>>though, which lies in having an intermediate lisp representation which 
>>is just fully macro expanded lisp code which is generated as a side 
>>effect of 'running' a lisp file. This means that you can't just compile 
>>the lisp to perl as such, you have to run it, and it gets compiled 
>>(partially) as a side effect. Separate modules could then be linked, and 
>>fully translated to perl, later. Of course, running a file is not a 
>>problem if all it does is define things (functions and macros).
>>
>>Does this seem sane?
> 
> 
> It seems messy.  You should be able to start Perl, load all the
> functions that your macros use, compile the files defining and using
> them, then quit Perl and load the resulting .pl files.  Using a
> CL-like concept of times (compile/macroexpand, load, and eval) would
> help keep things cleaner.
> 
> (In my system, the macros are run in the hosting Common Lisp, so my
> issues were different.  You define Perl functions with perlisp:defun,
> but they're not available at compile-time.  Macros can use functions
> defined with common-lisp:defun).
> 
> 
Hmmm. Yes, that does seem a sensible solution. Thanks for the pointers. 
I think I get the general idea of (eval-when). It's _sort of_ how I've 
implemented function definitions at the moment - some things macro 
expand to a special form which is evaluated at compile time as well as 
runtime. I might just implement something closer to the common lisp 
eval-when approach (seems a bit cleaner than what I did).

My main concern with all this was how to ensure that I can handle 
dependencies between lisp files. If a file of lisp code uses macros 
defined in another file then clearly the macro file must be loaded into 
the compiler before the first one can be compiled. I wanted to try and 
make sure that the required files would be recompiled as necessary (if 
they had changed) and then loaded.

Perhaps, though, the thing to do is, as you say, just to load the 
required files into the running lisp/perl interpreter by hand before 
attempting to compile something which uses them. I wondered how common 
lisp handles this sort of thing (essentially: building of projects) 
and it seems that it doesn't natively. I noticed that there are lisp 
libraries for handling this sort of problem, which make it possible to 
write build scripts (sort of like make files I guess).

I suppose, then, that the thing to do for a complex, multi file project 
is to have a lisp program whose job it is to build the thing, which it 
would do by compiling (if required) and loading the lisp source code in 
the required order so that macros are defined before they are used. 
Then, in my case, the ultimate product of the compilation (a perl 
script) would be generated by running that compilation program (with a 
repl, say).

Would that be a sensible approach, or am I missing something obvious?

I can see the point of 2 namespaces by the way (particularly when it 
comes to macros). I may change to that, or do something like check the 
lexical scope to implement some kind of hybrid approach. I thought there 
must be a reason for it.
From: Thomas F. Burdick
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <xcvhdyfesfx.fsf@famine.OCF.Berkeley.EDU>
David <······@SPAMAROONEY.blueyonder.co.uk> writes:

> My main concern with all this was how to ensure that I can handle 
> dependencies between lisp files. If a file of lisp code uses macros 
> defined in another file then clearly the macro file must be loaded into 
> the compiler before the first one can be compiled. I wanted to try and 
> make sure that the required files would be recompiled as necessary (if 
> they had changed) and then loaded.
> 
> Perhaps, though, the thing to do is, as you say, just to load the 
> required files into the running lisp/perl interpreter by hand before 
> attempting to compile something which uses them. I wondered how common 
> lisp handles this sort of thing (essentially: building of projects) 
> and it seems that it doesn't natively. I noticed that there are lisp 
> libraries for handling this sort of problem, which make it possible to 
> write build scripts (sort of like make files I guess).

Yeah, system construction is extra-standard in CL.  It's generally
done with a defsystem facility (eg, ASDF http://www.cliki.net/asdf).
You make a file containing a defsystem form, which is a declarative
way of describing the dependencies in the system.  If you want to
compile the system, you tell your defsystem to do so, and it figures
out what order things need to be compiled in, and when/if to load
files during compilation.
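
A minimal example (system and file names made up):

  ;; my-app.asd
  (asdf:defsystem :my-app
    :components ((:file "macros")
                 (:file "utils" :depends-on ("macros"))
                 (:file "main"  :depends-on ("utils"))))

  ;; then, at the REPL:
  ;; (asdf:operate 'asdf:load-op :my-app)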

> I suppose, then, that the thing to do for a complex, multi file project 
> is to have a lisp program whose job it is to build the thing, which it 
> would do by compiling (if required) and loading the lisp source code in 
> the required order so that macros are defined before they are used. 
> Then, in my case, the ultimate product of the compilation (a perl 
> script) would be generated by running that compilation program (with a 
> repl, say).
> 
> Would that be a sensible approach, or am I missing something obvious?

Sounds about right.  Remember also, you sometimes have to control the
order in which the resulting files are loaded.

> I can see the point of 2 namespaces by the way (particularly when it 
> comes to macros). I may change to that, or do something like check the 
> lexical scope to implement some kind of hybrid approach. I thought there 
> must be a reason for it.

Just like eval-when, most of the decisions made in the design of
Common Lisp are worth considering carefully; in a lot of ways, it's
the collected wisdom of an era of Lisp development.  It's far from
perfect, but it is very well designed.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Jens Axel Søgaard
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <400c7cef$0$177$edfadb0f@dread12.news.tele.dk>
David wrote:

> Well, quite. There is something 'unsettling' about the concept. Really 
> I'm looking forward to the release of Perl 6, because then I could just 
> compile lisp to parrot vm code and still get the benefits of using perl 
> code from lisp. That doesn't help your perl generation of course. I'm 
> surprised to find that someone else has done something similar.

Never say never:

   <http://okmij.org/ftp/Scheme/Scheme-in-Perl.txt>

   <http://www.venge.net/graydon/scm2p.html>

-- 
Jens Axel Søgaard
From: Lars Brinkhoff
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <85u12rw18v.fsf@junk.nocrew.org>
David <······@SPAMAROONEY.blueyonder.co.uk> writes:
> forgetting about perl (ie if we were compiling to something else)
> can you think of any benefit of 2 namespaces?

Suggested reading:
http://www.nhplace.com/kent/Papers/Technical-Issues.html

-- 
Lars Brinkhoff,         Services for Unix, Linux, GCC, HTTP
Brinkhoff Consulting    http://www.brinkhoff.se/
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <pan.2004.01.25.12.39.26.774237@knm.org.pl>
On Tue, 20 Jan 2004 00:18:17 +0000, David wrote:

> This means that the macro definitions must be executed at compile time
> (obviously), hence function definitions must be executed at compile time.
> I don't think there should really be such a distinction between compile
> time and runtime.

I think there should. Please read
<http://www.cs.utah.edu/plt/publications/macromod.pdf>

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Pascal Costanza
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <bv406o$pij$1@newsreader2.netcologne.de>
Marcin 'Qrczak' Kowalczyk wrote:

> On Tue, 20 Jan 2004 00:18:17 +0000, David wrote:
> 
>>This means that the macro definitions must be executed at compile time
>>(obviously), hence function definitions must be executed at compile time.
>>I don't think there should really be such a distinction between compile
>>time and runtime.
> 
> I think there should. Please read
> <http://www.cs.utah.edu/plt/publications/macromod.pdf>

Oh dear. This is just another one of those bad examples of unbreakable 
abstractions. Unbreakability sucks.

Generally speaking, layering or staging of approaches is probably a good 
idea. But there might be situations in which I need to mix up the 
layers/stages. If a proposed solution doesn't provide a back door for 
these things, it sucks IMHO.

In general, computer scientists tend to confuse description and 
prescription. I can describe a good solution and explain why it works 
and what's good about it. But prescribing that everybody else should do 
it exactly the same is the wrong conclusion. Especially when you don't 
give them a way out.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Joe Marshall
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <ad4aid5h.fsf@ccs.neu.edu>
Pascal Costanza <········@web.de> writes:

> Marcin 'Qrczak' Kowalczyk wrote:
>
>> On Tue, 20 Jan 2004 00:18:17 +0000, David wrote:
>>
>>>This means that the macro definitions must be executed at compile time
>>>(obviously), hence function definitions must be executed at compile time.
>>>I don't think there should really be such a distinction between compile
>>>time and runtime.
>> I think there should. Please read
>> <http://www.cs.utah.edu/plt/publications/macromod.pdf>
>
> Oh dear. This is just another one of those bad examples of unbreakable
> abstractions. Unbreakability sucks.
>
> Generally speaking, layering or staging of approaches is probably a
> good idea. But there might be situations in which I need to mix up the
> layers/stages. If a proposed solution doesn't provide a back door for
> these things, it sucks IMHO.

In this case, you don't want to mix up the layers or stages.  Really.

The macro/module system that Matthew is describing is one that allows
the system to determine what to load and when to load it in order to
ensure that you have the macro-expansion code in place *before* you
attempt to expand the macros that use it.

Mixing up the layers or stages means that you want to have circular
dependencies in your macro expansions, i.e., macro FOO needs the
QUASIQUOTE library to expand, but the QUASIQUOTE library is written
using the macro FOO.
From: Matthias Blume
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <m1hdyii9p6.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> Pascal Costanza <········@web.de> writes:
> 
> > Marcin 'Qrczak' Kowalczyk wrote:
> >
> >> On Tue, 20 Jan 2004 00:18:17 +0000, David wrote:
> >>
> >>>This means that the macro definitions must be executed at compile time
> >>>(obviously), hence function definitions must be executed at compile time.
> >>>I don't think there should really be such a distinction between compile
> >>>time and runtime.
> >> I think there should. Please read
> >> <http://www.cs.utah.edu/plt/publications/macromod.pdf>
> >
> > Oh dear. This is just another one of those bad examples of unbreakable
> > abstractions. Unbreakability sucks.
> >
> > Generally speaking, layering or staging of approaches is probably a
> > good idea. But there might be situations in which I need to mix up the
> > layers/stages. If a proposed solution doesn't provide a back door for
> > these things, it sucks IMHO.
> 
> In this case, you don't want to mix up the layers or stages.  Really.
>
> The macro/module system that Matthew is describing is one that allows
> the system to determine what to load and when to load it in order to
> ensure that you have the macro-expansion code in place *before* you
> attempt to expand the macros that use it.
> 
> Mixing up the layers or stages means that you want to have circular
> dependencies in your macro expansions, i.e., macro FOO needs the
> QUASIQUOTE library to expand, but the QUASIQUOTE library is written
> using the macro FOO.

I agree with Pascal:  being able to go only forward in time really sucks.

:-)
From: Joe Marshall
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <wu7emaek.fsf@comcast.net>
Matthias Blume <····@my.address.elsewhere> writes:

>
> I agree with Pascal:  being able to go only forward in time really sucks.

For you, there's continuations.


-- 
~jrm
From: Matthias Blume
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <m1n089xt2u.fsf@tti5.uchicago.edu>
Joe Marshall <·············@comcast.net> writes:

> Matthias Blume <····@my.address.elsewhere> writes:
> 
> >
> > I agree with Pascal:  being able to go only forward in time really sucks.
> 
> For you, there's continuations.

I just hope you are trying to continue (no pun intended) my attempt at
being sarcastic...
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <pan.2004.01.26.21.36.40.694333@knm.org.pl>
On Mon, 26 Jan 2004 22:18:20 +0100, Pascal Costanza wrote:

> Generally speaking, layering or staging of approaches is probably a good 
> idea. But there might be situations in which I need to mix up the 
> layers/stages. If a proposed solution doesn't provide a back door for 
> these things, it sucks IMHO.

The same module can be independently imported to multiple stages,
so I don't see a problem.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Pascal Costanza
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <bv44n7$3mi$1@newsreader2.netcologne.de>
Marcin 'Qrczak' Kowalczyk wrote:

> On Mon, 26 Jan 2004 22:18:20 +0100, Pascal Costanza wrote:
> 
>>Generally speaking, layering or staging of approaches is probably a good 
>>idea. But there might be situations in which I need to mix up the 
>>layers/stages. If a proposed solution doesn't provide a back door for 
>>these things, it sucks IMHO.
> 
> The same module can be independently imported to multiple stages,
> so I don't see a problem.

I haven't analyzed the issues in all details, but someone reported to me 
that my implementation of dynamically scoped functions wouldn't be 
possible because of the separation of stages in that module system.

I don't see how my code would cause any real problems, so if a module 
system doesn't allow me to write it, there's something wrong with that 
module system, and not with my code.

See my paper about DSF at http://www.pascalcostanza.de/dynfun.pdf that 
also includes the source code.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Joe Marshall
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <y8rtfl9i.fsf@ccs.neu.edu>
Pascal Costanza <········@web.de> writes:

> I haven't analyzed the issues in all details, but someone reported to
> me that my implementation of dynamically scoped functions wouldn't be
> possible because of the separation of stages in that module system.
>
> See my paper about DSF at http://www.pascalcostanza.de/dynfun.pdf that
> also includes the source code.

From a quick perusal of your code, I see nothing in it that wouldn't
easily work with the module system.
From: Pascal Costanza
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <bv652v$132$1@newsreader2.netcologne.de>
Joe Marshall wrote:

> Pascal Costanza <········@web.de> writes:
> 
> 
>>I haven't analyzed the issues in all details, but someone reported to
>>me that my implementation of dynamically scoped functions wouldn't be
>>possible because of the separation of stages in that module system.
>>
>>See my paper about DSF at http://www.pascalcostanza.de/dynfun.pdf that
>>also includes the source code.
> 
> From a quick perusal of your code, I see nothing in it that wouldn't
> easily work with the module system.

OK, I have checked my email archive to see what the issue was. Here it 
is: In MzScheme, there is obviously no way to access run-time values at 
compile time / macro expansion time. (I hope I have gotten the 
terminology right here.)

This means that you can't say this:

(define a 1)
(define-macro (foo x) `(+ ,a ,x))
(foo 2)

This will result in a "reference to undefined identifier" error, because 
the foo macro doesn't see the a reference.

In my implementation of dynamically scoped functions in Common Lisp, I 
make use of this collapsing of stages in order to be able to redefine 
existing functions as dynamically scoped functions, without changing 
their defined behavior. See my paper for an example of how I turn a CLOS 
generic function into a dynamically scoped one.

Even if I can go only forward in time, I like the fact that I can change 
the decisions I have made in the past. It escapes me why this should be 
an evil thing to do in programs.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Joe Marshall
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <r7xlfejj.fsf@ccs.neu.edu>
Pascal Costanza <········@web.de> writes:

> OK, I have checked my email archive to see what the issue was. Here it
> is: In MzScheme, there is obviously no way to access run-time values
> at compile time / macro expansion time. (I hope I have gotten the
> terminology right here.)
>
> This means that you can't say this:
>
> (define a 1)
> (define-macro (foo x) `(+ ,a ,x))
> (foo 2)
>
> This will result in a "reference to undefined identifier" error,
> because the foo macro doesn't see the a reference.

(define a 1)

(define-syntax foo
  (syntax-rules ()
   ((foo x) (+ a x))))

(foo 2) => 3

> In my implementation of dynamically scoped functions in Common Lisp, I
> make use of this collapsing of stages in order to be able to redefine
> existing functions as dynamically scoped functions, without changing
> their defined behavior. See my paper for an example of how I turn a CLOS
> generic function into a dynamically scoped one.

I can see what you are doing.  You're hijacking the dynamic scoping
mechanism of special variables to simulate dynamically scoped
functions.  To do this, you need a mapping between functions and
unique special variables, and that is kept in the hash table held in
*DYNSYMS*.  Now at compile time, if you encounter a DEFDYNFUN form,
you look in the table for the mapping and generate code that indirects
through the special variable.
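
In outline, the mechanism is something like this (a made-up sketch,
not the code from your paper -- only DEFDYNFUN and *DYNSYMS* are your
names; DYNSYM and DYNFLET are invented here for illustration):

(defvar *dynsyms* (make-hash-table :test #'eq)
  "Maps function names to the special variables that hold them.")

(defun dynsym (name)
  ;; Find or create the special variable backing NAME.  Note that
  ;; this table is consulted while the macros below expand.
  (or (gethash name *dynsyms*)
      (setf (gethash name *dynsyms*) (gensym (symbol-name name)))))

(defmacro defdynfun (name args &body body)
  (let ((var (dynsym name)))
    `(progn
       (defvar ,var (lambda ,args ,@body))
       (defun ,name (&rest args)
         ;; Indirect through the variable's current dynamic value.
         (apply (symbol-value ',var) args)))))

(defmacro dynflet (bindings &body body)
  ;; Dynamically rebind the functions named in BINDINGS within BODY.
  (let ((vars (mapcar (lambda (b) (dynsym (first b))) bindings)))
    `(let ,(mapcar #'list vars (mapcar #'second bindings))
       (declare (special ,@vars))
       ,@body)))

(defdynfun greet (name) (format nil "Hello, ~A" name))

(dynflet ((greet (lambda (name) (format nil "Hi, ~A" name))))
  (greet "world"))                        ; => "Hi, world"

(greet "world")                           ; => "Hello, world" again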

At runtime, though, you want to be able to change a regular function
into a dynamic one should you so desire.

There's no problem with doing this with Matthew's module system.

> Even if I can go only forward in time, I like the fact that I can
> change the decisions I have made in the past. It escapes me why this
> should be an evil thing to do in programs.

That's not a problem.  What is a problem is when you attempt to change
decisions in the future.  It doesn't work in Common Lisp, and it
doesn't work in Matthew's system.  Let me illustrate where the problem
crops up in both.

I have a `dotnet' interface layer that works by hacking the
macro-expansion facility to change identifiers like 

    System.Reflection.AssemblyName.class

into forms like

    (clr/find-class 'system 'reflection 'assemblyname)

Part of the macro has to recognize identifiers with dots in them and
split them, and depending on where the dots are expand them in
different ways:


Foobar.Baz  =>  (clr/find-static-method 'foobar 'baz)

.quux-field$  =>  (instance-field-getter 'quux-field)

(setf .quux-field$) => (instance-field-setter 'quux-field)


As should be obvious, utility functions such as `EMBEDDED-DOT-P' and
`SPLIT-ON-DOTS' *must* be present before we can expand the macros.
There is simply no way to get around that!  When compiling a
*different* file that uses this macro, you *must* ensure that the file
that defines EMBEDDED-DOT-P is loaded first.  This is a compile-time
dependency, but this dependency must be maintained.
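
For concreteness, the helpers are nothing exotic -- something along
these lines (a sketch, not the actual code; only the two names come
from above):

(defun embedded-dot-p (symbol)
  ;; True if SYMBOL's name contains a dot, e.g. FOOBAR.BAZ
  (find #\. (symbol-name symbol)))

(defun split-on-dots (symbol)
  ;; SYSTEM.REFLECTION.ASSEMBLYNAME => (SYSTEM REFLECTION ASSEMBLYNAME)
  (loop with name = (symbol-name symbol)
        for start = 0 then (1+ dot)
        for dot = (position #\. name :start start)
        collect (intern (subseq name start dot))
        while dot))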

It often happens that you accidentally introduce these dependencies
through incremental editing of files:  you load up the compiled files,
edit some code, recompile that file, edit some more, do more
recompiles etc.  At some point, you decide to recompile from scratch,
but all hell breaks loose because you added a macro whose expansion
depends on code that is compiled *after* the macro is used.  It only
accidentally worked during development because you never attempted to
recompile the utility file that expanded the macro, only those files
that used it.

What Matthew's module system does is ensure that code that the macro
*uses* at compile time is in scope at compile time and not
accidentally obtained from the runtime environment.  Errors such as
the above are caught sooner because incremental compilation and whole
world compilation are treated more uniformly.
From: Pascal Costanza
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <bv70ku$b9d$1@newsreader2.netcologne.de>
Joe Marshall wrote:

> At runtime, though, you want to be able to change a regular function
> into a dynamic one should you so desire.
> 
> There's no problem with doing this with Matthew's module system.

OK.

> As should be obvious, utility functions such as `EMBEDDED-DOT-P' and
> `SPLIT-ON-DOTS' *must* be present before we can expand the macros.
> There is simply no way to get around that!  When compiling a
> *different* file that uses this macro, you *must* ensure that the file
> that defines EMBEDDED-DOT-P is loaded first.  This is a compile-time
> dependency, but this dependency must be maintained.

OK.

> It often happens that you accidentally introduce these dependencies
> through incremental editing of files:  you load up the compiled files,
> edit some code, recompile that file, edit some more, do more
> recompiles etc.  At some point, you decide to recompile from scratch,
> but all hell breaks loose because you added a macro whose expansion
> depends on code that is compiled *after* the macro is used.  It only
> accidentally worked during development because you never attempted to
> recompile the utility file that expanded the macro, only those files
> that used it.

OK, I understand the problem better by now. I actually recall having it.

> What Matthew's module system does is ensure that code that the macro
> *uses* at compile time is in scope at compile time and not
> accidentally obtained from the runtime environment.  Errors such as
> the above are caught sooner because incremental compilation and whole
> world compilation are treated more uniformly.

OK, thanks for clearing up my misunderstanding of that paper.

But I still wonder why this works:

 > (define a 1)
 >
 > (define-syntax foo
 >   (syntax-rules ()
 >    ((foo x) (+ a x))))
 >
 > (foo 2) => 3

...while that doesn't:

 >>(define a 1)
 >>(define-macro (foo x) `(+ ,a ,x))
 >>(foo 2)
 >>
 >>This will result in a "reference to undefined identifier" error,
 >>because the foo macro doesn't see the a reference.

What's the difference here?


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Jens Axel Søgaard
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <4017b812$0$252$edfadb0f@dread12.news.tele.dk>
Pascal Costanza wrote:
> Joe Marshall wrote:

> But I still wonder why this works:
> 
>  > (define a 1)
>  >
>  > (define-syntax foo
>  >   (syntax-rules ()
>  >    ((foo x) (+ a x))))
>  >
>  > (foo 2) => 3
> 
> ...while that doesn't:
> 
>  >>(define a 1)
>  >>(define-macro (foo x) `(+ ,a ,x))
>  >>(foo 2)
>  >>
>  >>This will result in a "reference to undefined identifier" error,
>  >>because the foo macro doesn't see the a reference.
> 
> What's the difference here?

It doesn't work, by design.


The definition

     (define a 1)

defines a in the normal environment.

The macro use

     (define-macro (foo x) `(+ ,a ,x))

defines a transformer in the *transformer environment* not the
normal environment. Thus the reference to a (in the transformer
environment) is unbound.


The paper describes the rationale behind the division.


Note that define-macro is itself a library macro, implemented
in terms of syntax-case.

The documentation can be found here:

<http://download.plt-scheme.org/doc/206/html/mzlib/mzlib-Z-H-16.html#node_chap_16>

The source is found at:

<http://download.plt-scheme.org/scheme/plt/collects/mzlib/defmacro.ss>

(one or two screens)

-- 
Jens Axel Søgaard
From: Jens Axel Søgaard
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <4017bb33$0$252$edfadb0f@dread12.news.tele.dk>
Jens Axel Søgaard wrote:
> Pascal Costanza wrote:
>>  >>(define a 1)
>>  >>(define-macro (foo x) `(+ ,a ,x))

Oh. If you *really* want to use define-macro this way
you could put the definition of a in a module, and then
import the module in both the normal and the transformer
environment.
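
Roughly (an untested sketch; the module name is made up):

(module a-definitions mzscheme
  (provide a)
  (define a 1))

(require a-definitions)              ; normal environment
(require-for-syntax a-definitions)   ; transformer environment

(require (lib "defmacro.ss"))        ; define-macro comes from mzlib

(define-macro (foo x) `(+ ,a ,x))
(foo 2)                              ; => 3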

I prefer Joe's solution though.

-- 
Jens Axel Søgaard
From: Pascal Costanza
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <bv8loi$18eo$1@f1node01.rhrz.uni-bonn.de>
Jens Axel Søgaard wrote:

> Jens Axel Søgaard wrote:
> 
>> Pascal Costanza wrote:
>>
>>>  >>(define a 1)
>>>  >>(define-macro (foo x) `(+ ,a ,x))
> 
> Oh. If you *really* want use define-macro this way
> you could put the definition of a in a module, and then
> import the module in both the normal and the transformer
> environment.

OK, got it. Thanks a lot for your and Joe's posts on this.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Pascal Costanza
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <bv8t00$3t0$1@newsreader2.netcologne.de>
Pascal Costanza wrote:

> Marcin 'Qrczak' Kowalczyk wrote:
> 
>> On Tue, 20 Jan 2004 00:18:17 +0000, David wrote:
>>
>>> This means that the macro definitions must be executed at compile time
>>> (obviously) hence function definitions must be executed at compile time.
>>> I don't think there should really be such a distinction between compile
>>> time and runtime.
>>
>> I think there should. Please read
>> <http://www.cs.utah.edu/plt/publications/macromod.pdf>
> 
> Oh dear. This is just another one of those bad examples of unbreakable 
> abstractions. Unbreakability sucks.

I stand corrected on the approach taken in that paper. Sorry for 
misrepresenting it.


Pascal

-- 
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
From: Uri Guttman
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <x7vfnaqv8n.fsf@mail.sysarch.com>
>>>>> "D" == David  <······@SPAMAROONEY.blueyonder.co.uk> writes:

  D> This lisp2perl translator/compiler works rather well. I've used it
  D> for a major project at work. It's also self hosting (used to
  D> compile itself).

  D> http://www.hhdave.pwp.blueyonder.co.uk

  D> Any thoughts or comments anyone?

can it translate the entire emacs lisp library? will it integrate with
emacs? there is a perl in emacs port that sorta worked a bit. i think
the author dropped it a few years ago. if you integrate it with that and
make it work on lisp emacs, i bet more than a few emacs users would love
to break their lisp shackles.

uri

-- 
Uri Guttman  ------  ···@stemsystems.com  -------- http://www.stemsystems.com
--Perl Consulting, Stem Development, Systems Architecture, Design and Coding-
Search or Offer Perl Jobs  ----------------------------  http://jobs.perl.org
From: Ben Morrow
Subject: Re: Lisp2Perl - Lisp to perl compiler
Date: 
Message-ID: <bucnt9$17r$1@wisteria.csv.warwick.ac.uk>
Uri Guttman <···@stemsystems.com> wrote:
> >>>>> "D" == David  <······@SPAMAROONEY.blueyonder.co.uk> writes:
> 
>   D> This lisp2perl translator/compiler works rather well.
>
> can it translate the entire emacs lisp library? will it integrate with
> emacs? there is a perl in emacs port that sorta worked a bit. i think
> the author dropped it a few years ago. if you integrate it with that and
> make it work on lisp emacs, i bet more than a few emacs users would love
> to break their lisp shackles.

PleasePleasePleasePleasePlease :)

I love the functional concepts in Perl (I really can't see how any
language manages without closures :) but Lisp just makes my eyes go
funny.

Ben

-- 
For the last month, a large number of PSNs in the Arpa[Inter-]net have been
reporting symptoms of congestion ... These reports have been accompanied by an
increasing number of user complaints ... As of June,... the Arpanet contained
47 nodes and 63 links. [ftp://rtfm.mit.edu/pub/arpaprob.txt] * ···@morrow.me.uk