From: ·····················@gmail.com
Subject: newLISP is simple, terse, and well documented
Date: 
Message-ID: <6a295b19-d0d3-4701-b2ce-8587800125dd@g1g2000pra.googlegroups.com>
A Look at newLISP:
page 1: http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP
page 2: http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP/page2/
page 3: http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP/page3/

newLISP Home Page: http://www.newlisp.org/

Wikipedia article about newLISP: http://en.wikipedia.org/wiki/NewLISP

From: Tamas K Papp
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <6thhu3Fb00goU1@mid.individual.net>
On Sun, 18 Jan 2009 12:17:28 -0800, simple.language.yahoo wrote:

> A Look at newLISP:
> page 1: http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP
> page 2:
> http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP/page2/
> page 3:
> http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP/page3/
> 
> newLISP Home Page: http://www.newlisp.org/
> 
> Wikipedia article about newLISP: http://en.wikipedia.org/wiki/NewLISP

"In fact, there are several notable language-related differences which
bring newLISP much closer to Scheme than Common LISP. However, even in
comparison to Scheme, the dominating aspect of newLISP is
simplification. Just one example: newLISP replaces the various
equality operators in Common LISP and Scheme with a single
one. Therefore, you no longer need to remember the difference between
equal, eql, eq, equalp, =, string=, string-equal, char= and char-eq --
which is really how things should be."

This is where I stopped reading...  Cf. http://www.nhplace.com/kent/PS/EQUAL.html

It is not clear what newLISP is fixing, or how it improves on CL.  One
can easily define another equality operator in CL, and in fact
implement all of the fantastic new "features" of newLISP.  The fact
that nobody does so should be a hint.
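
For illustration, a single extensible equality operator along the lines
newLISP advertises takes only a few lines of CL.  This is a hypothetical
sketch; `equal?` is our own name, not a standard CL function:

```lisp
;; Hypothetical sketch: one equality operator that dispatches on type.
;; EQUAL? is a made-up name, not part of standard CL.
(defgeneric equal? (a b)
  (:documentation "A single, user-extensible equality test."))

(defmethod equal? (a b)
  (equal a b))            ; fallback: structural equality

(defmethod equal? ((a number) (b number))
  (= a b))                ; numeric equality across numeric types

(defmethod equal? ((a string) (b string))
  (string= a b))          ; case-sensitive string comparison

;; (equal? 1 1.0) => T, (equal? "foo" "foo") => T
```

And being a generic function, it stays open: anyone can add a method
for their own types, which a single built-in operator cannot offer.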

Tamas
From: ······@corporate-world.lisp.de
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <90287650-3bf4-485a-b06d-26dfe3238ca4@r41g2000prr.googlegroups.com>
On 18 Jan., 21:32, Tamas K Papp <······@gmail.com> wrote:
> On Sun, 18 Jan 2009 12:17:28 -0800, simple.language.yahoo wrote:
> > A Look at newLISP:
> > page 1:http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP
> > page 2:
> >http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP/page2/
> > page 3:
> >http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP/page3/
>
> > newLISP Home Page:http://www.newlisp.org/
>
> > Wikipedia article about newLISP:http://en.wikipedia.org/wiki/NewLISP
>
> "In fact, there are several notable language-related differences which
> bring newLISP much closer to Scheme than Common LISP.

http://www.newlisp.org/index.cgi?page=Differences_to_Other_LISPs

Does not sound much like Scheme.
From: Raffael Cavallaro
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <2009011816055043658-raffaelcavallaro@pasespamsilvousplaitmaccom>
On 2009-01-18 15:32:03 -0500, Tamas K Papp <······@gmail.com> said:

> It is not clear what newLISP is fixing, or how it improves on CL.

My impression of newLISP has always been that it "simplifies" common 
lisp by punting on things that common lisp takes the trouble to deal 
with thoughtfully. For example, newlisp integer operations can overflow 
because the language has "simplified" all that silly fixnum/bignum 
stuff. All a real programmer needs is fixed length integers that wrap, 
right?
-- 
Raffael Cavallaro, Ph.D.
From: ·················@gmail.com
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <9ac05ea1-ba50-47b8-8337-90e7df0629db@40g2000prx.googlegroups.com>
Raffael Cavallaro wrote:

> For example, newlisp integer operations can overflow
> because the language has "simplified" all that silly
> fixnum/bignum stuff.

The largest positive integer you can have in newLISP is
9 223 372 036 854 775 807, and the largest negative integer is
-9 223 372 036 854 775 808.
From: Raffael Cavallaro
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <200901181946328930-raffaelcavallaro@pasespamsilvousplaitmaccom>
On 2009-01-18 18:14:08 -0500, ·················@gmail.com said:

> The largest positive integer you can have in newLISP is
> 9 223 372 036 854 775 807, and the largest negative integer is
> -9 223 372 036 854 775 808.

And no one *ever* needs integers larger than that, so common lisp's 
automatic overflow detection and bignums are pointless, right? For 
example, the thousandth Fibonacci number is much smaller than newLISP's 
integers, right?

Ooops...

(defun fib (n &optional (b 1))
  (declare
   (ftype (function (integer fixnum) integer) fib)
   (optimize (speed 3) (safety 0) (compilation-speed 0) (space 0) (debug 0)))
  #+lispworks (declare (optimize (fixnum-safety 0)))
  (if (= 0 b)  ;calculate lucas numbers
    (case n
      ((0) 2)
      ((1) 1)
      (otherwise
       (if (evenp n)
         (let* ((k (/ n 2)) (l (fib k 0)) )
           (+ (* l l) (if (evenp k) -2 2)))
         (let* ((k (1- n)) (l (fib k 0)) (f (fib k 1)) )
           (/ (+ (* 5 f) l) 2)))))
    (case n  ;calculate fibonacci numbers
      ((0) 0)
      ((1) 1)
      (otherwise
       (if (evenp n)
         (let* ((k (/ n 2)) (l (fib k 0)) (f (fib k 1)))
           (* f l))
         (let* ((k (1- n)) (l (fib k 0)) (f (fib k 1)))
           (/ (+ f l) 2)))))))

* (fib 1000)

43466557686937456435688527675040625802564660517371780402481729089536555417949051890403879840079255169295922593080322634775209689623239873322471161642996440906533187938298969649928516003704476137795166849228875
*


they're called "bignums" for a reason ;^)

-- 
Raffael Cavallaro, Ph.D.
From: Pascal J. Bourguignon
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <874ozwumja.fsf@galatea.local>
Raffael Cavallaro <················@pas.espam.s.il.vous.plait.mac.com> writes:

> On 2009-01-18 18:14:08 -0500, ·················@gmail.com said:
>
>> The largest positive integer you can have in newLISP is
>> 9 223 372 036 854 775 807, and the largest negative integer is
>> -9 223 372 036 854 775 808.
>
> And no one *ever* needs integers larger than that, so common lisp's
> automatic overflow detection and bignums are pointless, right? For
> example, the thousandth Fibonacci number is much smaller than newLISP's
> integers, right?
>
> Ooops...
>
> * (fib 1000)
>
> 43466557686937456435688527675040625802564660517371780402481729089536555417949051890403879840079255169295922593080322634775209689623239873322471161642996440906533187938298969649928516003704476137795166849228875
> *
>
> they're 
>
> called "bignums" for a reason ;^)

Who needs a number that big?   Even Zimbabwe needs only 15 digits.

-- 
__Pascal Bourguignon__
From: Raffael Cavallaro
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <2009011821250811272-raffaelcavallaro@pasespamsilvousplaitmaccom>
On 2009-01-18 20:12:09 -0500, ···@informatimago.com (Pascal J. 
Bourguignon) said:

> Raffael Cavallaro <················@pas.espam.s.il.vous.plait.mac.com> writes:
> 
>> On 2009-01-18 18:14:08 -0500, ·················@gmail.com said:
>> 
>>> The largest positive integer you can have in newLISP is
>>> 9 223 372 036 854 775 807, and the largest negative integer is
>>> -9 223 372 036 854 775 808.
>> 
>> And no one *ever* needs integers larger than that, so common lisp's
>> automatic overflow detection and bignums are pointless, right? For
>> example, the thousandth Fibonacci number is much smaller than newLISP's
>> integers, right?
>> 
>> Ooops...
>> 
>> * (fib 1000)
>> 
>> 43466557686937456435688527675040625802564660517371780402481729089536555417949051890403879840079255169295922593080322634775209689623239873322471161642996440906533187938298969649928516003704476137795166849228875
>> *
>>
>> they're called "bignums" for a reason ;^)
> 
> Who needs a number that big?   Even Zimbabwe needs only 15 digits.

Apologies - I didn't realize that lisp numerics were only supposed to 
be used for financial calculations.  ;^)
-- 
Raffael Cavallaro, Ph.D.
From: Andrew Reilly
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <6tia71Fb3evaU1@mid.individual.net>
On Sun, 18 Jan 2009 21:25:08 -0500, Raffael Cavallaro wrote:

> Apologies - I didn't realize that lisp numerics were only supposed to be
> used for financial calculations.  ;^)

Sure, but outside of demonstrations of Fibonacci sequences, and perhaps 
some crypto libraries (that know deterministically that they'll be using 
large numbers, and can be coded accordingly), who realistically gets 
mileage from the smooth transition to bignums?  I was wondering that 
myself, recently.  Certainly with 32-bit machine integers and 30 or 31 
bit fixnums there could be some problems, but I think that I could live 
quite happily within the limits of a 62 or 63 bit fixnum on a 64-bit 
system.  The caveat would have to be that overflow trapped: silent wrong 
answers are unacceptable, of course.  It certainly seems that most 
compilers have "assume fixnums for integers" options on their performance 
knobs, and I would guess that *most* programs work just fine with that 
setting...

Cheers,

-- 
Andrew
From: Pascal Costanza
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <6tj2amFb393vU1@mid.individual.net>
Andrew Reilly wrote:
> On Sun, 18 Jan 2009 21:25:08 -0500, Raffael Cavallaro wrote:
> 
>> Apologies - I didn't realize that lisp numerics were only supposed to be
>> used for financial calculations.  ;^)
> 
> Sure, but outside of demonstrations of Fibonacci sequences, and perhaps 
> some crypto libraries (that know deterministically that they'll be using 
> large numbers, and can be coded accordingly), who realistically gets 
> mileage from the smooth transition to bignums?  I was wondering that 
> myself, recently.  Certainly with 32-bit machine integers and 30 or 31 
> bit fixnums there could be some problems, but I think that I could live 
> quite happily within the limits of a 62 or 63 bit fixnum on a 64-bit 
> system.  The caveat would have to be that overflow trapped: silent wrong 
> answers are unacceptable, of course.  It certainly seems that most 
> compilers have "assume fixnums for integers" options on their performance 
> knobs, and I would guess that *most* programs work just fine with that 
> setting...

640 KB ought to be enough for anybody.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: ······@corporate-world.lisp.de
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <98bb96bf-d278-488b-bd55-ae71c82ec211@r41g2000prr.googlegroups.com>
On 19 Jan., 11:17, Pascal Costanza <····@p-cos.net> wrote:
> Andrew Reilly wrote:
> > On Sun, 18 Jan 2009 21:25:08 -0500, Raffael Cavallaro wrote:
>
> >> Apologies - I didn't realize that lisp numerics were only supposed to be
> >> used for financial calculations.  ;^)
>
> > Sure, but outside of demonstrations of Fibonacci sequences, and perhaps
> > some crypto libraries (that know deterministically that they'll be using
> > large numbers, and can be coded accordingly), who realistically gets
> > mileage from the smooth transition to bignums?  I was wondering that
> > myself, recently.  Certainly with 32-bit machine integers and 30 or 31
> > bit fixnums there could be some problems, but I think that I could live
> > quite happily within the limits of a 62 or 63 bit fixnum on a 64-bit
> > system.  The caveat would have to be that overflow trapped: silent wrong
> > answers are unacceptable, of course.  It certainly seems that most
> > compilers have "assume fixnums for integers" options on their performance
> > knobs, and I would guess that *most* programs work just fine with that
> > setting...
>
> 640 KB ought to be enough for anybody.

Which would be 3200 KB in total.

>
> Pascal
>
> --
> My website:http://p-cos.net
> Common Lisp Document Repository:http://cdr.eurolisp.org
> Closer to MOP & ContextL:http://common-lisp.net/project/closer/
From: Alexander Lehmann
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <gl1l3d$1hi$1@online.de>
Pascal Costanza wrote:
> 640 KB ought to be enough for anybody.

I second that :-) Besides, don't "we all" praise the extensibility and most
notably the flexibility of Lisp? So why compare apples and oranges by
saying newLisp is better than Lisp? IMO, both newLisp and Lisp have their
rightful places, so why state that one improves on the other?
From: ··············@excite.com
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <0f8bf760-8133-43b4-a425-8d4c455122c7@r37g2000prr.googlegroups.com>
On Jan 18, 10:26 pm, Andrew Reilly <···············@areilly.bpc-
users.org> wrote:

> Sure, but outside of demonstrations of Fibonacci sequences, and perhaps
> some crypto libraries (that know deterministically that they'll be using
> large numbers, and can be coded accordingly), who realistically gets
> mileage from the smooth transition to bignums?

> --
> Andrew


One of my main uses of Lisp is to simulate the timing of systems that
will be implemented on embedded chips, and compare it with ideal timing
over the long term.

I implement the algorithm going on the chip, which keeps its values
within "normal" integer size limits, like 32 or 64 bits.

Then I implement a timing measure using the regular Lisp numeric stack,
where I freely accumulate fractions of seconds.

What happens is that I get a lot of rationals built from giant bignums.
These rationals are entirely accurate: there is no rounding or
truncation error.  I can let the simulations run for long periods and
validate the timing accuracy of the embedded algorithm.  For instance,
if a mistake in the embedded algorithm will drop a least significant
bit once a year, I pick that up easily.

So I get really valuable mileage out of bignums as large as the
Fibonacci example above, or even bigger, just used as parts of
rationals.  In the end I might convert to a float for final reporting,
but the zero loss during accumulation is what matters most.

How about numerical estimation of solutions to differential equations?
For instance, finite element analysis?  Any application that stands to
suffer from round-off error can benefit from bignums and rationals.
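
The exactness described above is just CL's rational arithmetic at work;
a minimal sketch (the function name `drift-demo` is made up for
illustration):

```lisp
;; Accumulate 1/10 of a second a million times, once as an exact
;; rational and once as a double float.  DRIFT-DEMO is a made-up name.
(defun drift-demo (n)
  (let ((exact  0)        ; rational accumulator: no rounding, ever
        (approx 0.0d0))   ; double-float accumulator: rounds each step
    (dotimes (i n)
      (incf exact  1/10)
      (incf approx 0.1d0))
    (values exact approx)))

;; (drift-demo 1000000) => 100000 exactly, while the float result
;; carries a small accumulated rounding error.
```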
From: Kaz Kylheku
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <20090119035445.698@gmail.com>
On 2009-01-19, Andrew Reilly <···············@areilly.bpc-users.org> wrote:
> On Sun, 18 Jan 2009 21:25:08 -0500, Raffael Cavallaro wrote:
>
>> Apologies - I didn't realize that lisp numerics were only supposed to be
>> used for financial calculations.  ;^)
>
> Sure, but outside of demonstrations of Fibonacci sequences, and perhaps 
> some crypto libraries (that know deterministically that they'll be using 
> large numbers, and can be coded accordingly), who realistically gets 
> mileage from the smooth transition to bignums?  I was wondering that 
> myself, recently.  Certainly with 32-bit machine integers and 30 or 31 
> bit fixnums there could be some problems, but I think that I could live 
> quite happily within the limits of a 62 or 63 bit fixnum on a 64-bit 
> system.  The caveat would have to be that overflow trapped: silent wrong 
> answers are unacceptable, of course.

What if overflow trapped, substituted the correct bignum value, and continued?

> It certainly seems that most 
> compilers have "assume fixnums for integers" options on their performance 
> knobs, and I would guess that *most* programs work just fine with that 
> setting...

It's not ``assume fixnums for integers''. It's ``assume that these
particular objects are fixnums''.

This means that a symbol, string or cons will be treated as an integer if you
pass it in.
From: Matthias Buelow
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <6tjd2tFb880lU1@mid.dfncis.de>
Andrew Reilly wrote:

> Sure, but outside of demonstrations of Fibonacci sequences, and perhaps 
> some crypto libraries (that know deterministically that they'll be using 
> large numbers, and can be coded accordingly), who realistically gets 
> mileage from the smooth transition to bignums

It doesn't matter; if only fixnums are available, then that's a
(somewhat arbitrarily defined) restriction that doesn't have to be
there, and that's enough reason to hate it.
From: John Thingstad
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <op.un0dmjugut4oq5@pandora.alfanett.no>
On Mon, 19 Jan 2009 04:26:25 +0100, Andrew Reilly  
<···············@areilly.bpc-users.org> wrote:

> On Sun, 18 Jan 2009 21:25:08 -0500, Raffael Cavallaro wrote:
>
>> Apologies - I didn't realize that lisp numerics were only supposed to be
>> used for financial calculations.  ;^)
>
> Sure, but outside of demonstrations of Fibonacci sequences, and perhaps
> some crypto libraries (that know deterministically that they'll be using
> large numbers, and can be coded accordingly), who realistically gets
> mileage from the smooth transition to bignums?  I was wondering that
> myself, recently.  Certainly with 32-bit machine integers and 30 or 31
> bit fixnums there could be some problems, but I think that I could live
> quite happily within the limits of a 62 or 63 bit fixnum on a 64-bit
> system.  The caveat would have to be that overflow trapped: silent wrong
> answers are unacceptable, of course.  It certainly seems that most
> compilers have "assume fixnums for integers" options on their performance
> knobs, and I would guess that *most* programs work just fine with that
> setting...
>
> Cheers,
>

To me it is more about being able to work at the 'right level' of
abstraction.  Is it acceptable to put artificial limits into general
libraries?  When trying to find large primes you easily have thousands
of digits.  To use Mathematica as an example, I like the program's
ability to find the appropriate algorithm automatically 'under the
hood', letting me focus on the math rather than numeric implementation
issues.

--------------
John Thingstad
From: Alex Mizrahi
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <4974b19f$0$90265$14726298@news.sunsite.dk>
 PJB> Who needs a number that big?   Even the Zimbabwe needs only 15 digits.

http://en.wikipedia.org/wiki/Hyperinflation#Examples_of_hyperinflation

Hungary
...
The highest denomination in mid-1946 was 100,000,000,000,000,000,000 pengő.
...
The overall impact of hyperinflation: On 18 August 1946, 
400,000,000,000,000,000,000,000,000,000 or 4×10^29 (four hundred octillion 
[short scale]) pengő became 1 forint.
From: Majorinc Kazimir
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <gl0odt$n42$1@ss408.t-com.hr>
Raffael Cavallaro wrote:

> And no one *ever* needs integers larger than that, so common lisp's 
> automatic overflow detection and bignums are pointless, right? For 
> example, the thousandth Fibonacci number is much smaller than newLISP's 
> integers, right?

Newlisp has an interface to the GMP library.

http://www.newlisp.org/code/modules/gmp.lsp.html
From: Mike
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <gl0pbj$7n0$1@nntp.motzarella.org>
On 2009-01-19, Majorinc Kazimir <·····@email.address> wrote:
> Raffael Cavallaro wrote:
>
>> And no one *ever* needs integers larger than that, so common lisp's 
>> automatic overflow detection and bignums are pointless, right? For 
>> example, the thousandth Fibonacci number is much smaller than newLISP's 
>> integers, right?
>
> Newlisp has an interface to the GMP library.
>
> http://www.newlisp.org/code/modules/gmp.lsp.html

Rather than a quasi-lisp that defines 'defun' as 'de', is there a
simple lisp that can be used as a static binary for system
administration and configuration instead of using cfengine(8)?
I like what Mark Burgess has put together (met him once too, briefly),
and the people around him.  That application will not work in my
environment, and I prefer to use lisp (not all of Common Lisp nor
the twists of Scheme, just simple LISP, and not all the baggage of
tcl (though I like tcl also)).

It has been odd enough for me to get accustomed to (first) and
(rest) instead of (car) and (cdr).

I want something that I can use cross platform, windows, mac, linux,
unix, to manage my boxes. I want something that I can compile from
source into a static binary so I can manipulate the registry on
windows or plists on mac. And I want to use lisp for this.

Mike
From: Rob Warnock
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <L7udnffojvRnZ-7UnZ2dnUVZ_rydnZ2d@speakeasy.net>
Mike <·····@mikee.ath.cx> wrote:
+---------------
| Rather than a quasi-lisp that defines 'defun' as 'de', is there a
| simple lisp that can be used as a static binary for system
| administration and and configuration ... ?
| ... I prefer to use lisp (not all of Common Lisp nor
| the twists of Scheme, just simple LISP ...)
...
| I want something that I can use cross platform, windows, mac, linux,
| unix, to manage my boxes. I want something that I can compile from
| source into a static binary so I can manipulate the registry on
| windows or plists on mac. And I want to use lisp for this.
+---------------

There's always Lisp500:

    http://www.modeemi.fi/~chery/lisp500/

Of course, it's slow, has rather opaque source, and is missing a few bits
[though it has quite a lot, for its size!], but it's a single source file
of only ~500 lines [plus a 5.8 K-line init file in Lisp, which could
presumably be hacked into the source as a string]. And its GC is at
least stable enough to handle (ACKERMANN 3 9)!  ;-}


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Kaz Kylheku
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <20090119040602.175@gmail.com>
On 2009-01-19, Majorinc Kazimir <·····@email.address> wrote:
> Raffael Cavallaro wrote:
>
>> And no one *ever* needs integers larger than that, so common lisp's 
>> automatic overflow detection and bignums are pointless, right? For 
>> example, the thousandth Fibonacci number is much smaller than newLISP's 
>> integers, right?
>
> Newlisp has an interface to the GMP library.
>
> http://www.newlisp.org/code/modules/gmp.lsp.html

This example suggests that a special + function is required to work
with these bignums, and that they are represented as character strings.

Why on Earth are you staining your credibility by defending this pitiful
garbage?
From: Tamas K Papp
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <6thkk5Fb1inbU1@mid.individual.net>
On Sun, 18 Jan 2009 16:05:50 -0500, Raffael Cavallaro wrote:

> On 2009-01-18 15:32:03 -0500, Tamas K Papp <······@gmail.com> said:
> 
>> It is not clear what newLISP is fixing, or how it improves on CL.
> 
> My impression of newLISP has always been that it "simplifies" common
> lisp by punting on things that common lisp takes the trouble to deal
> with thoughtfully. For example, newlisp integer operations can overflow
> because the language has "simplified" all that silly fixnum/bignum
> stuff. All a real programmer needs is fixed length integers that wrap,
> right?

Well, another way to look at it is that you get a built-in RNG for all
the arithmetic you do, at no extra cost :-)

Tamas
From: Cor Gest
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <87k58sfh4v.fsf@atthis.clsnet.nl>
Some entity, AKA Tamas K Papp <······@gmail.com>,
wrote this mindboggling stuff:
(selectively-snipped-or-not-p)


> It is not clear what newLISP is fixing, or how it improves on CL.

fixing stupidity is a laudible cause

Cor
-- 
    It is YOUR right to restrict MY rights, but ONLY in YOUR house
 If YOU try that in MY house, YOU better be prepared to be kicked out 
     The only difference between GOD and me is that GOD has mercy  
		     My other cheek is a .40 JHP
From: WalterGR
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <96d2f1a4-0bd6-4d77-a84b-b14eae210d8d@p2g2000prf.googlegroups.com>
On Jan 18, 1:17 pm, Cor Gest <····@clsnet.nl> wrote:
> Some entity, AKA Tamas K Papp <······@gmail.com>,
> wrote this mindboggling stuff:
> (selectively-snipped-or-not-p)
>
> > It is not clear what newLISP is fixing, or how it improves on CL.
>
> fixing stupidity is a laudible cause

Did you mean "laudable"?

WalterGR
From: Alberto Riva
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <gl0f5m$584k$1@usenet.osg.ufl.edu>
WalterGR wrote on 01/18/2009 05:29 PM:
> On Jan 18, 1:17 pm, Cor Gest <····@clsnet.nl> wrote:
>> Some entity, AKA Tamas K Papp <······@gmail.com>,
>> wrote this mindboggling stuff:
>> (selectively-snipped-or-not-p)
>>
>>> It is not clear what newLISP is fixing, or how it improves on CL.
>> fixing stupidity is a laudible cause
> 
> Did you mean "laudable"?

Or "laughable"?

Alberto
From: Cor Gest
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <87bpu3gbfk.fsf@atthis.clsnet.nl>
Some entity, AKA Alberto Riva <·····@nospam.ufl.edu>,
wrote this mindboggling stuff:
(selectively-snipped-or-not-p)


>>>> It is not clear what newLISP is fixing, or how it improves on CL.
>>> fixing stupidity is a laudible cause
>>
>> Did you mean "laudable"?
>
> Or "laughable"?

Ah, yes it is funny indeed.

Cor
-- 
    It is YOUR right to restrict MY rights, but ONLY in YOUR house
 If YOU try that in MY house, YOU better be prepared to be kicked out 
     The only difference between GOD and me is that GOD has mercy  
		     My other cheek is a .40 JHP
From: Cor Gest
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <87fxjfgbhf.fsf@atthis.clsnet.nl>
Some entity, AKA WalterGR <········@gmail.com>,
wrote this mindboggling stuff:
(selectively-snipped-or-not-p)

>> > It is not clear what newLISP is fixing, or how it improves on CL.
>>
>> fixing stupidity is a laudible cause
>
> Did you mean "laudable"?

why would you have people burn in cars?

Cor
-- 
    It is YOUR right to restrict MY rights, but ONLY in YOUR house
 If YOU try that in MY house, YOU better be prepared to be kicked out 
     The only difference between GOD and me is that GOD has mercy  
		     My other cheek is a .40 JHP
From: Majorinc Kazimir
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <gl0kiv$hc9$1@ss408.t-com.hr>
Tamas K Papp wrote:

> 
> It is not clear what newLISP is fixing, or how it improves on CL. 

In the "code=data" area, Newlisp offers several improvements
over CL and Scheme:

(1) Unrestricted eval, i.e. eval can access
     local variables. CL and Scheme evals cannot do that.
     Eval is, in my opinion, the single most important
     feature for code=data.

(2) Fexprs, i.e. "first-class" macros. Newlisp macros
     can be generated at runtime, passed as arguments
     to other macros or functions, applied like functions,
     and so forth.

(3) Newlisp has support for both dynamic and static scope.
     Local variables are dynamically scoped. The namespaces,
     called "contexts", are statically scoped. That allows
     significantly more expressive functions than in
     dialects with statically scoped local variables.

(4) Functions are lambda expressions, not the results of
     evaluating lambda expressions. That means a
     program can analyze and modify functions at
     runtime. And the same is the case for macros.

(5) Newlisp is an interpreter. That means, if you want
     speed, you are in the same league as Python or Perl.
     But if your code contains a lot of evals, Newlisp is
     faster than many other Lisps.

     Here are some benchmarks I did:

     http://kazimirmajorinc.blogspot.com/2008/12/speed-of-newlisp-eval-test-v100.html

So, for those who like Lisp because of code=data,
Newlisp is a promising choice. Of course, the languages
differ in many other respects, and not all of them
favor Newlisp. But it would be too ambitious to
summarize all the differences in one post.

The majority of people use it in some form of "scripting."


Another language that might be interesting to look at
is Pico Lisp. It maintains a similar spirit to Newlisp.
From: Tamas K Papp
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <6tidj9Fb3debU1@mid.individual.net>
On Mon, 19 Jan 2009 02:21:16 +0100, Majorinc Kazimir wrote:

> Tamas K Papp wrote:
> 
> 
>> It is not clear what newLISP is fixing, or how it improves on CL.
> 
> In "code=data" area, Newlisp offers several improvements over CL and
> Scheme:
> 
> (1) Unrestricted eval, i.e. eval can access
>      local variables. CL and Scheme evals cannot do that. Eval is, in my
>      opinion, the single most important feature for code=data.

I don't think Lisp programmers use eval that much, directly.  But if
you wanted some parametrized eval that uses local variables, it would
be possible to do it in CL.
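
One way to sketch such a parametrized eval in CL is PROGV, which
establishes dynamic bindings that EVAL can then see.  This is a rough
approximation, not newLISP's semantics, and `eval-with-bindings` is a
made-up name; handling of undeclared specials varies slightly between
implementations:

```lisp
;; Sketch: make selected "local" values visible to EVAL by binding
;; them dynamically with PROGV.  EVAL-WITH-BINDINGS is a made-up name.
(defun eval-with-bindings (form names values)
  (progv names values      ; dynamic bindings for NAMES during the eval
    (eval form)))

;; (eval-with-bindings '(+ x y) '(x y) '(1 2)) typically returns 3
```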

> (3) Newlisp has support for dynamic and static scope.
>      Local variables are dynamically scoped. The namespaces, called

Ouch.  I don't find pervasive dynamic scope an improvement over
lexical scope.  Quite the opposite.

> (4) Functions are lambda expressions, not the results of
>      evaluating lambda expressions. That means a program can
>      analyze and modify functions at runtime. And the same is the
>      case for macros.

Which again is a trivial improvement that you can easily replicate in
CL, should the need arise -- but it does not, in practice.  If I found
myself directly manipulating S-expressions that create functions
during runtime, I would be pretty sure I am doing something wrong, and
I should be using closures (and hide the machinery with macros).
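
For example, rather than consing up and EVALing a lambda S-expression
at runtime, the idiomatic CL move is a closure (`make-adder` is a
made-up example name):

```lisp
;; Idiomatic CL: the closure captures N at runtime; no S-expression
;; surgery or EVAL involved.  MAKE-ADDER is a made-up example name.
(defun make-adder (n)
  (lambda (x) (+ x n)))

;; (funcall (make-adder 5) 10) => 15
```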

> (5) Newlisp is an interpreter. That means, if you want
>      speed, you are in the same league as Python or Perl. But if your
>      code contains a lot of evals, Newlisp is faster than many other
>      Lisps.
> 
>      Here are some benchmarks i did:
> 
>      http://kazimirmajorinc.blogspot.com/2008/12/speed-of-newlisp-eval-test-v100.html

But why would your code contain a lot of evals?  I don't see the
point.  Here you try to sell a weakness (no compiler) as a feature --
nice try.

> So, for those who like Lisp because of code=data, Newlisp is a promising
> choice. Of course, the languages have many other different aspects, and

Again, you repeat this code=data thing as a buzzword and try to
portray newLisp in a good light.  But CL already treats code as data
--- what do you think macros are doing?
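
A macro receives its arguments as plain list structure and can take
them apart before anything is evaluated; a tiny sketch (`swap-args` is
a made-up example):

```lisp
;; SWAP-ARGS receives the form (OP A B) as data -- an ordinary list --
;; and returns new code with the arguments reversed.  Made-up example.
(defmacro swap-args (form)
  (destructuring-bind (op a b) form
    `(,op ,b ,a)))

;; (swap-args (- 10 4)) expands to (- 4 10) => -6
```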

> Another language that might be interesting to look at is Pico Lisp. It
> maintains a similar spirit to Newlisp.

You mean making brain-dead changes to CL?  Or "simplifying" the
language by clearing up things like the Great and Horrible Multitude
of Functions that Test for Equality?  Thanks, I think I will pass.

McCarthy has remarked that Lisp is a local optimum in the space of
programming languages.  Maybe CL is a local optimum in the space of
Lisps, too.  At least I find that "improved" dialects frequently
arise from grave misunderstandings and end up much worse than CL.
The exceptions that come to mind are Qi (esp. version 2) and maybe
Clojure (of which I know very little).

Tamas
From: Dimiter "malkia" Stanev
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <gl2tdn$vl0$1@news.motzarella.org>
> McCarthy has remarked that Lisp is a local optimum in the space of
> programming languages.  Maybe CL is a local optimum in the space of
> Lisps, too.  At least I find that "improved" dialects frequently
> arise from grave misunderstandings and end up much worse than CL.
> The exceptions that come to mind are Qi (esp. version 2) and maybe
> Clojure (of which I know very little).

Well said... Heh, So CL is really LOL! Nice! LOL as in Locally Optimum 
Lisp. No that can't be :)

But yes, it's the middle ground! I love it
From: Kaz Kylheku
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <20090125092757.36@gmail.com>
On 2009-01-19, Tamas K Papp <······@gmail.com> wrote:
> McCarthy has remarked that Lisp is a local optimum in the space of
> programming languages.

Damn, I somehow read that as OPIUM on the first scan! :)
From: Dan Weinreb
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <702212f7-f8aa-4f67-8010-f8635a184344@n2g2000vbl.googlegroups.com>
On Jan 18, 8:21 pm, Majorinc Kazimir <·····@email.address> wrote:
> Tamas K Papp wrote:
>
> In "code=data" area, Newlisp offers several improvements
> over CL and Scheme:

I would be interested to see some examples of what
problems you'd want to solve that work out better
in Newlisp than they do in, say, Common Lisp.

The way you have defined the language means that
you cannot write a "compiler" in the sense of
nearly all existing Common Lisp compilers.  In
order to be able to create and "apply" macros
during program execution, it's hard to see any
other way to implement Newlisp other than as
a classic Lisp interpreter, actually manipulating
things at the "S-expression" level.

There's nothing wrong with that per se, as long
as there's some important benefit to be gained
from it.  There are many useful applications of
programming languages that do not particularly
require a lot of speed, so that doesn't bother
me.

The interesting thing is whether Newlisp makes
it easier to solve real-world problems.  Just
saying "it's good for code=data" is too abstract
a claim without real examples to back it up.
They need to be examples of problems that would
be hard to solve in Common Lisp but are easier
to solve in Newlisp.

By the way, I'm sure Newlisp has cleaned up a
lot, but that's not particularly interesting
since it's so easy to see how to do that
if you don't need to retain compatibility.
Any new Lisp without back-compatibility
requirements would be far cleaner than CL.

In short, what problem is it that Newlisp
was created to solve?

Thank you!

-- Dan


>
> (1) Unrestricted eval, i.e. eval can access to
>      local variables. CL and Scheme evals cannot do that.
>      Eval is, in my opinion, single most important feature
>      for code=data.
>
> (2) Fexprs, i.e. "the first class" macros. Newlisp macros
>      can be generated during runtime, passed as arguments
>      to other macros or functions, applied like functions
>      and so forth.
>
> (3) Newlisp has support for dynamic and static scope.
>      Local variables are dynamically scoped. The namespaces,
>      called "contexts" are statically scoped. That allows
>      significantly more expressive functions than in
>      dialects with statically scoped local variables.
>
> (4) The Functions are lambda-expressions, not results of
>      the evaluation of lambda expressions. That means,
>      program can analyze and modify functions during
>      runtime. And the same is the case for macros.
>
> (5) Newlisp is an intepreter. That means, if you want
>      speed, you are in the same league as Python or Perl.
>      But, if your code contains lot of evals - Newlisp is
>      faster than many other Lisps.
>
>      Here are some benchmarks i did:
>
>      http://kazimirmajorinc.blogspot.com/2008/12/
>      speed-of-newlisp-eval-test-v100.html
>
> So, for those who like Lisp because of code=data,
> Newlisp is promising choice. Of course, the languages
> have many other different aspects, and not
> all are in favor of Newlisp. But it would be too
> ambitious to summarize all differences in one post.
>
> The majority of people use it in some form of "scripting."
>
> Another language that might be interesting to look at
> is Pico Lisp. It maintains similar spirit as Newlisp.
From: Tamas K Papp
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <6tjl4uFbbkh2U1@mid.individual.net>
On Mon, 19 Jan 2009 07:23:44 -0800, Dan Weinreb wrote:

> don't need to retain compatibility. Any new Lisp without
> back-compatibility requirements would be far cleaner than CL.

Why is "cleanliness" a goal?  Scheme is "clean", but I don't see that
buying anything (except for those writing implementations).  It is
good to have standardized solutions for common problems, so that
people understand each other's code.  In case one is not satisfied
with the existing solutions in CL, it is always possible to roll one's
own (cf iterate vs loop).

I think that zealously striving for a "clean" and "minimal" language
for its own sake just misses the point.  The article quoted by the OP
thinks that = as a replacement for eq, eql, equal, =, etc is somehow a
good thing, but fails to elaborate.

Tamas
From: ······@corporate-world.lisp.de
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <e71d16f5-0e17-449e-a51d-216f6a772d94@v18g2000pro.googlegroups.com>
On 19 Jan., 16:39, Tamas K Papp <······@gmail.com> wrote:
> On Mon, 19 Jan 2009 07:23:44 -0800, Dan Weinreb wrote:
> > don't need to retain compatibility. Any new Lisp without
> > back-compatibility requirements would be far cleaner than CL.
>
> Why is "cleanliness" a goal?

Why not? If you drive a car, do you want the instruments nicely lined
up, nicely visible, easy to use, with no visual distractions, etc.?
I'd say a well-designed cockpit helps the user. Simple,
clean methods are also easier to remember and lead to
less cognitive effort.

There is a cleaner dialect of CL just inside ANSI CL.
ISLisp is also a bit 'cleaner'.

'clean' does not mean that it has less power; it means
that the naming is regular, interfaces look similar,
mechanisms look similar, there are not twenty ways
to do the same thing (just a few), facilities don't overlap
in functionality while being different to use (structures
and classes), widely different interpretations by
implementations are not tolerated, etc.

Even if I like CL (which I do), I don't have to believe
that the CL design is finished. But pragmatics says
I take the language like it is and like implementations
have extended it.

But:

* CLOS should be wider used (conditions, pathnames, streams, i/o,
readtables, ...)
* get rid of structures, provide a simple DEFCLASS* macro,
  make SLOT-VALUE as fast as structure access
* make streams extensible
* get rid of LOOP and replace it with ITERATE
* steal the collection stuff from Dylan
* all built-in functions (minus primitive type-specialized functions)
  should be generic functions

etc etc.


> Scheme is "clean", but I don't see that
> buying anything (except for those writing implementations).  It is
> good to have standardized solutions for common problems, so that
> people understand each other's code.  In case one is not satisfied
> with the existing solutions in CL, it is always possible to roll one's
> own (cf iterate vs loop).
>
> I think that zealously striving for a "clean" and "minimal" language
> for its own sake just misses the point.  The article quoted by the OP
> thinks that = as a replacement for eq, eql, equal, =, etc is somehow a
> good thing, but fails to elaborate.
>
> Tamas
From: Tamas K Papp
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <6tk0lhFbcr7fU1@mid.individual.net>
On Mon, 19 Jan 2009 10:29:18 -0800, ······@corporate-world.lisp.de wrote:

> On 19 Jan., 16:39, Tamas K Papp <······@gmail.com> wrote:
>> On Mon, 19 Jan 2009 07:23:44 -0800, Dan Weinreb wrote:
>> > don't need to retain compatibility. Any new Lisp without
>> > back-compatibility requirements would be far cleaner than CL.
>>
>> Why is "cleanliness" a goal?
> 
> Why not? If you drive a car do you want the instruments nicely lined up,
> nicely visible, easy to use, no visual distractions, etc? I'd say a good
> designed cockpit helps the user. Simple, clean methods are also easier
> to remember and lead to less cognitive effort.

That I can agree with in general.  But my perception is that some
language designs have taken "cleanliness" too far, at the cost of the
usability of the language (Scheme comes to mind).

> Even if I like CL (which I do), I don't have to believe that the CL
> design is finished. But pragmatics says I take the language like it is
> and like implementations have extended it.

Indeed, but that is not how newLISP was started.  Its designers didn't
have anything specific in mind that they wanted to improve on, just
some general fuzzy ideas (like "code=data", as if CL didn't have
that).  You can see the result for yourself.

I am not saying CL cannot be improved.  Just that it is pretty hard to
improve on, and takes a lot of experience and conscious effort.

Tamas
From: ······@corporate-world.lisp.de
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <3842d2f1-9034-4b51-823c-1c3f9ec7ba3e@w24g2000prd.googlegroups.com>
On 19 Jan., 19:55, Tamas K Papp <······@gmail.com> wrote:
> On Mon, 19 Jan 2009 10:29:18 -0800, ······@corporate-world.lisp.de wrote:
> > On 19 Jan., 16:39, Tamas K Papp <······@gmail.com> wrote:
> >> On Mon, 19 Jan 2009 07:23:44 -0800, Dan Weinreb wrote:
> >> > don't need to retain compatibility. Any new Lisp without
> >> > back-compatibility requirements would be far cleaner than CL.
>
> >> Why is "cleanliness" a goal?
>
> > Why not? If you drive a car do you want the instruments nicely lined up,
> > nicely visible, easy to use, no visual distractions, etc? I'd say a good
> > designed cockpit helps the user. Simple, clean methods are also easier
> > to remember and lead to less cognitive effort.
>
> That I can agree with in general.  But my perception is that some
> language designs have taken "cleanliness" too far, at the cost of the
> usability of the language (Scheme comes to mind).

Scheme R4RS is perfectly usable. R5RS even more. For teaching,
writing algorithms, etc. It gets even more useful with added libraries
(see SLIB, SRFI). Generally I like Common Lisp more for application
development.

>
> > Even if I like CL (which I do), I don't have to believe that the CL
> > design is finished. But pragmatics says I take the language like it is
> > and like implementations have extended it.
>
> Indeed, but that is not how newLISP was started.  Its designers didn't
> have anything specific in mind that they wanted to improve on, just
> some general fuzzy ideas (like "code=data", as if CL didn't have
> that).  You can see the result for yourself.
>
> I am not saying CL cannot be improved.  Just that it is pretty hard to
> improve on, and takes a lot of experience and conscious effort.

Luckily there is now experience with some extensions (streams,
conditions in CLOS, MOP, etc.). In several areas one would just
write down what some of the implementations are doing anyway.

>
> Tamas
From: Kaz Kylheku
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <20090119105927.192@gmail.com>
On 2009-01-19, ······@corporate-world.lisp.de <······@corporate-world.lisp.de> wrote:
> On 19 Jan., 16:39, Tamas K Papp <······@gmail.com> wrote:
>> On Mon, 19 Jan 2009 07:23:44 -0800, Dan Weinreb wrote:
>> > don't need to retain compatibility. Any new Lisp without
>> > back-compatibility requirements would be far cleaner than CL.
>>
>> Why is "cleanliness" a goal?
>
> Why not? If you drive a car do you want the instruments nicely lined
> up, nicely visible, easy to use, no visual distractions, etc?

What I'm ``driving'' when I write code is not the programming language,
but the program.

As far as driving goes, the instrument panel of consumer vehicles
is compromised for idiots, so it makes a bad example.

Gauges such as vacuum and oil pressure are missing. 

If there is an oil pressure problem, there is only an ``idiot light'' that
comes on, when it may be too late. ``You may have lost all your oil! You have
ten seconds to pull over and stop the engine!''
From: Matthew D Swank
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <F59dl.165932$2w3.145039@newsfe19.iad>
On Mon, 19 Jan 2009 10:29:18 -0800, ······@corporate-world.lisp.de wrote:
...

> * all built-in functions (minus primitive type-specialized functions)
> should be generic functions
> 
> etc etc.
> 
> 

Should funcall _and_ apply both be generic functions?

Matt

-- 
Communicate!  It can't make things any worse.
From: Kaz Kylheku
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <20090119100200.900@gmail.com>
On 2009-01-19, Tamas K Papp <······@gmail.com> wrote:
> On Mon, 19 Jan 2009 07:23:44 -0800, Dan Weinreb wrote:
>
>> don't need to retain compatibility. Any new Lisp without
>> back-compatibility requirements would be far cleaner than CL.
>
> Why is "cleanliness" a goal? 

So that you could enter the temple, filthy infidel.
From: Kaz Kylheku
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <20090119100242.-297@gmail.com>
On 2009-01-19, Dan Weinreb <···@alum.mit.edu> wrote:
> On Jan 18, 8:21�pm, Majorinc Kazimir <·····@email.address> wrote:
>> Tamas K Papp wrote:
>>
>> In "code=data" area, Newlisp offers several improvements
>> over CL and Scheme:
>
> I would be interested to see some examples of what
> problems you'd want to solve that work out better
> in Newlisp than they do in, say, Common Lisp.

The things that work out in newlisp are the completely naive
newbie nonsense that has good reasons not to work.

Q: ``I have a function (lambda (x y) (+ x y)). I tried

    (setq f (lambda (x y) (+ x y)))
    (third f)

    expecting (+ X Y), but it didn't work. Waah!''

A:  ``U SHLD BE USING NEWLISP D00D!''

> The way you have defined the language means that
> you cannot write a "compiler" in the sense of
> nearly all existing Common Lisp compilers.

If you wrote a compiler for newLISP, and it actually achieved a bit of speed,
it would expose the performance weakness of the ``copy-nearly-everything''
memory management scheme.

That would make the author look like an (even bigger) fool, for having
sung odes in praise of the memory management scheme and its performance.

> In
> order to be able to create and "apply" macros
> during program execution, it's hard to see any
> other way to implement Newlisp other than as
> a classic Lisp interpreter, actually manipulating
> things at the "S-expression" level.

What ``classic''? Lisp has been compiled since 1961, which was probably before
Klutz was born.

> There's nothing wrong with that per se, as long
> as there's some important benefit to be gained
> from it.  There are many useful applications of
> programming languages that do not particularly
> require a lot of speed, so that doesn't bother
> me.

Since you have access to the evaluator at run-time, there is no strict
separation between compile time and run time. If you want compiled code to
expand a macro during its run time, you can do that.  This doesn't have to be
elevated into a fully-fledged language restriction.
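
A sketch of what that looks like in standard CL (only standard functions
used): code that is itself compiled can still invoke the expander and the
evaluator at run time when it actually needs to.

```lisp
(defmacro twice (form)
  `(progn ,form ,form))

(defun run-later (form)
  ;; Expand a macro call and evaluate the result at run time,
  ;; from inside otherwise-compiled code.
  (eval (macroexpand form)))

;; Expands TWICE at run time; prints HI two times.
(run-later '(twice (print 'hi)))
```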

> The interesting thing is whether Newlisp makes
> it easier to solve real-world problems.  Just
> saying "it's good for code=data" is too abstract

It's not good for code=data, because this is just a misunderstanding. It's
actually ``source code=structured data''.  Code in general /isn't/ data;
not in all of its representations. If you want only one representation for
code, the source code, then you're making a mistake which is on
the same level as ``source code = characters in a file''.

> a claim without real examples to back it up.
> They need to be examples of problems that would
> be hard to solve in Common Lisp but are easier
> to solve in Newlisp.

In GNU bash, you can eval code in a function, and that eval has access to
local variables.

I took ``advantage'' of this in a script recently.

But it was in a situation where I wanted a local macro. I didn't /want/ to
eval; I wanted to transform some syntax occurring in the function (just once,
when the function definition is processed).

The eval was just a hack to solve the problem, because of a lack of the proper
tools.

The vast majority of eval use is like this.

See, by the way, how I can admit to having worked on a Bash script, without
having to treat my cognitive dissonance by becoming some kind of 
shell scripting advocating nutjob.

It's okay to use something /and/ admit that it's a piece of shit, at the same
time.

If someone wants to script something with newLISP, that's one thing.  Just
don't come into comp.lang.lisp to be the whipping boy for it, and destroy your
credibility in the process, you know what I mean?

The true professional doesn't need to feel that every tool he is working
with is the best, because his pride is in the work, not in the tools.

> By the way, I'm sure Newlisp has cleaned up a
> lot, but that's not particularly interesting
> since it's so easy to see how to do that
> if you don't need to retain compatibility.
> Any new Lisp without back-compatibility
> requirements would be far cleaner than CL.

newLISP is anything but cleaner. In the areas of common functionality, newLISP
is a mess.

Proper garbage collection is cleaner than some ad-hoc scheme where some things
are copied and some things are allowed to be references.

Fixnums and bignums being subtyped from the same type and smoothly
interoperating is cleaner than (GMP:+ "1234..." "456...").

Lexical closures are cleaner than the context nonsense.

Macros are cleaner than fexprs.

Cons cells are cleaner than the corresponding contortion in newLISP.

List structure that has complete freedom to be circular if need be
is cleaner than one with restrictions. Restrictions are dirt because
they are something extra that needs to be said in the specification
(``by the way, you can't do this, that, or the other''), and though they
might save some implementation complexity in the language, working
around them adds complexity to programs.

Dynamic variables are cleaner in Common Lisp because they don't have to be
endowed with bizarre responsibilities in order to emulate lexical variables.


What is dirty and what is clean has to be judged through the proper experience.

Here is a very good and relevant Joel on Software article about this.

Making Wrong Code Look Wrong

http://www.joelonsoftware.com/articles/Wrong.html

Here he tells the story of how he worked in a bakery, and had to
learn to see what it means for something to be dirty.
From: ······@corporate-world.lisp.de
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <2c7e873e-ec85-42b8-8794-98e7813a996b@i20g2000prf.googlegroups.com>
On 19 Jan., 19:43, Kaz Kylheku <········@gmail.com> wrote:

...

> Q: ``I have a function (lambda (x y) (+ x y)). I tried
>
>     (setq f (lambda (x y) (+ x y)))
>     (third f)
>
>     expecting (+ X Y), but it didn't work. Waah!''
>
> A:  ``U SHLD BE USING NEWLISP D00D!''

No, you can still use Common Lisp:

CL-USER 23 > (setq f (lambda (x y) (+ x y)))
#<anonymous interpreted function 406000086C>

CL-USER 24 > (third (function-lambda-expression f))
(+ X Y)

...
From: Mark Wooding
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <87r62zns24.fsf.mdw@metalzone.distorted.org.uk>
Majorinc Kazimir <·····@email.address> writes:

> (1) Unrestricted eval, i.e. eval can access to local variables. CL and
>     Scheme evals cannot do that.

Indeed.  It's a tradeoff between efficient compilation and EVAL: if the
compiler is given the necessary freedom to arrange a function's
environment in the most efficient manner, then either EVAL will be
horrifically complicated or just won't be able to `see' lexically bound
variables.

Emacs Lisp does this, by the way.
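
To make the CL side of the tradeoff concrete (this is standard behaviour:
EVAL runs in the null lexical environment, so it sees dynamic bindings but
not lexical ones):

```lisp
;; EVAL cannot see lexically bound variables:
(let ((x 42))
  (eval 'x))        ; error: X is unbound

;; Dynamic (special) variables remain visible to EVAL:
(defvar *x*)        ; proclaims *X* special
(let ((*x* 42))
  (eval '*x*))      ; => 42
```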

>     Eval is, in my opinion, single most important feature for
>     code=data.

Ahh.  I think I'd put efficient and comprehensible macros and ready
availability of COMPILE rather higher.

> (2) Fexprs, i.e. "the first class" macros. 

The `F' stands for `funny'.  Interestingly, fexprs were the cause of
many of the problems that macros were introduced to fix -- most
notably, the very existence of fexprs that the compiler has no
built-in knowledge of makes it nearly impossible to compile correct
code.

The MACLISP interpreter used `fsubrs' -- built-in functions which
received an unevaluated argument list -- to implement what we'd now call
the various special forms.  The compiler needed to have special
knowledge of all of these fsubrs in order to compile programs properly
-- and the implementations needed to be kept in step between the
interpreter -- in PDP10 assembler -- and the compiler, written in
MACLISP.

>     Newlisp macros can be generated during runtime, passed as
>     arguments to other macros or functions, applied like functions and
>     so forth.

That's nice.  Err... why?

The idea of APPLYing a macro is strange, since APPLY, by definition,
passes a list of /evaluated/ arguments to the function.

> (3) Newlisp has support for dynamic and static scope.  Local variables
>     are dynamically scoped. The namespaces, called "contexts" are
>     statically scoped. That allows significantly more expressive
>     functions than in dialects with statically scoped local variables.

Of course, CL has both anyway, with lexical scoping as the (correct)
default.  Proper lexical scoping is, of course, the right solution to
the famous historical `funargs' problem.
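
Both scoping disciplines in a few lines of standard CL, for the record:

```lisp
;; Lexical by default: the closure correctly captures N,
;; which is exactly the funarg problem solved.
(defun make-adder (n)
  (lambda (x) (+ x n)))
(funcall (make-adder 3) 4)     ; => 7

;; Dynamic scope on request, via special variables:
(defvar *level* 0)             ; *LEVEL* is dynamically scoped
(defun show () *level*)
(let ((*level* 5))
  (show))                      ; => 5
```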

For better or for worse, Emacs Lisp is entirely dynamically scoped.  It
looks to me (though I didn't look hard, so I might be mistaken) that one
can somewhat emulate newLISP's contexts by messing with Emacs Lisp
obarrays.  Not that it's a lot of fun.  (I speculate that Emacs Lisp
systems run more code from more disparate places than newLISP systems,
and they seem to get by pretty well without any of this namespace
malarkey.)

> (4) The Functions are lambda-expressions, not results of the
>     evaluation of lambda expressions. That means, program can analyze
>     and modify functions during runtime. And the same is the case for
>     macros.

Oh, heavens.  Look, if you want Emacs Lisp, you know where to find it.
But it really isn't very new.

> (5) Newlisp is an intepreter. That means, if you want speed, you are
>     in the same league as Python or Perl.  But, if your code contains
>     lot of evals - Newlisp is faster than many other Lisps.

You don't have to be mad to write programs which make heavy use of EVAL
but, no, wait, you do.

>     Here are some benchmarks i did:

OK, let's just put the issue of whether the benchmark is useful to one
side for now.

I compiled newLISP v.10.0.1 from source, and ran it.  I got these
answers:

  421
  170
  409
  446

which I assume are milliseconds.  (This should help to calibrate my
machine against yours.)

I hacked the Common Lisp example code to make it be Emacs Lisp, which
involved replacing `setf' by `setq', and writing a `time' macro.

(defmacro time (form)
  `(let (before after carry diff func)
     (setq func (byte-compile `(lambda () ,',form)))
     (setq before (get-internal-run-time))
     (funcall func)
     (setq after (get-internal-run-time))
     (setq diff (time-subtract after before))
     (message "%d.%06d" (cadr diff) (caddr diff))))

(time (do ((i 0 (+ i 1)))
          ((= i 1000000))
        (progn (setq x 0)
               (setq x (+ x 1)))))

(time (do ((i 0 (+ i 1)))
          ((= i 1000000))
        '(progn (setq x 0)
                (setq x (+ x 1)))))

(time (do ((i 0 (+ i 1)))
          ((= i 1000000))
        (eval '(progn (setq x 0)
                      (setq x (+ x 1))))))

(time (do ((i 0 (+ i 1)))
          ((= i 1000000))
        (progn (setq q (eval 0))
               (setq q (+ x 1)))))

I got these answers:

  0.076005
  0.032002
  0.388024
  0.120007

which are /all/ better than I got from newLISP, by factors between 1.05
and 5.54.

The moral of the story is: if you're an EVAL-crazed idiot, or just enjoy
the slightly musty aroma of old Lisp systems, Emacs Lisp is just better
at it than the new boy.

Did I mention that Emacs Lisp has a byte-compiler?

> So, for those who like Lisp because of code=data, Newlisp is promising
> choice. Of course, the languages have many other different aspects,
> and not all are in favor of Newlisp. But it would be too ambitious to
> summarize all differences in one post.

Except that Emacs Lisp is more mature, is faster, addresses your
perverted `code=data' notions, ...

Oh, I'm told that it comes with a very powerful system for reading news
and mail (in fact, several different mail handling systems), and even
comes with a half-decent text editor.

-- [mdw]
From: Majorinc Kazimir
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <gl365l$5pk$1@ss408.t-com.hr>
Mark Wooding wrote:
> Majorinc Kazimir <·····@email.address> writes:
> 
>> (1) Unrestricted eval, i.e. eval can access to local variables. CL and
>>     Scheme evals cannot do that.
> 
> Indeed.  It's a tradeoff between efficient compilation and EVAL: if the
> compiler is given the necessary freedom to arrange a function's
> environment in the most efficient manner, then either EVAL will be
> horrifically complicated or just won't be able to `see' lexically bound
> variables.

So, CL chose an option that leads to speed, and Newlisp
chose the option that leads to expressive power. That is
exactly what I said.

> 
> Emacs Lisp does this, by the way.
> 
>>     Eval is, in my opinion, single most important feature for
>>     code=data.
> 
> Ahh.  I think I'd put efficient and comprehensible macros and ready
> availability of COMPILE rather higher.

So, you value speed more than expressive power. OK. I do not.

> 
>> (2) Fexprs, i.e. "the first class" macros. 
> 
> The `F' stands for `funny'.  Interestingly, fexprs were the cause of
> many of the problems that macros were introduced to fix, most notably
> the fact that very existence of fexprs the compiler doesn't have
> built-in knowledge about makes it nearly impossible to compile correct
> code.

So, CL chose the path that leads to speed, Newlisp
chose the path that leads to expressive power.


>> (3) Newlisp has support for dynamic and static scope.  Local variables
>>     are dynamically scoped. The namespaces, called "contexts" are
>>     statically scoped. That allows significantly more expressive
>>     functions than in dialects with statically scoped local variables.
> 
> Of course, CL has both anyway, with lexical scoping as the (correct)
> default.  Proper lexical scoping is, of course, the right solution to
> the famous historical `funargs' problem.

So, CL chose the path that leads to what? Safety? Newlisp
chose the path that leads to expressive power.

The funarg problem is, however, not a problem in Newlisp practice.

There are a few reasons for that. One reason is that people
simply use the support for lexical scope existing in the language.

Another reason is that the funarg problem is not really that
hard, even if one decides to use only dynamic scope.

>> (4) The Functions are lambda-expressions, not results of the
>>     evaluation of lambda expressions. That means, program can analyze
>>     and modify functions during runtime. And the same is the case for
>>     macros.
> 
> Oh, heavens.  Look, if you want Emacs Lisp, you know where to find it.
> But it really isn't very new.

So, where are we now? You gave up on CL and you are trying to
compare Newlisp with Emacs Lisp. Then it's back to fexprs vs. macros:
Newlisp has fexprs, Emacs has macros.

> 
>> (5) Newlisp is an intepreter. That means, if you want speed, you are
>>     in the same league as Python or Perl.  But, if your code contains
>>     lot of evals - Newlisp is faster than many other Lisps.
> 
> You don't have to be mad to write programs which make heavy use of EVAL
> but, no, wait, you do.

Mad? Yeah right pal, I have to go, good luck to you ... :)
From: Rainer Joswig
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <joswig-1E4D5A.23353822012009@news-europe.giganews.com>
In article <··················@metalzone.distorted.org.uk>,
 Mark Wooding <···@distorted.org.uk> wrote:

> Majorinc Kazimir <·····@email.address> writes:
> 
> > So, CL chose an option that leads to speed, and Newlisp chose the
> > option that leads to expressive power. That is exactly what I said.
> 
> Err..., no, MACLISP chose speed and Common Lisp followed.

Scheme chose correctness and Common Lisp followed.

FEXPRs are a runtime feature that makes debugging harder and
makes compilation next to useless.


If we have

(foo (some-code ...))

and foo gets it as literal code (s-expressions),
then it can do everything at runtime with it.
We don't know from looking at the source code and
we can't run an expander - because there is none.

If we have a compiled Lisp implementation, the compiler
can check for example the code for basic correctness
(syntax, ...), can replace constant code with the
result, etc.  BEFORE runtime. Since Lisp switched
to lexical semantics and easy to use compilers
I usually don't load code as a file into Lisp,
but compile it first and make the warnings
and errors of the compiler go away. Having code
that compiles cleanly without warnings and errors
is already a first step to working code.
I got rid of syntax errors, typos, expansion errors,
...
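
Concretely (the exact warning wording varies by implementation):

```lisp
;; COMPILE reports problems before the code ever runs.
;; A typical implementation warns here about the undefined
;; variable UNKNOWN-VAR at compile time, not at run time:
(compile nil '(lambda (x) (+ x unknown-var)))
```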

-- 
http://lispm.dyndns.org/
From: Kaz Kylheku
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <20090128120710.818@gmail.com>
On 2009-01-22, Rainer Joswig <······@lisp.de> wrote:
> In article <··················@metalzone.distorted.org.uk>,
>  Mark Wooding <···@distorted.org.uk> wrote:
>
>> Majorinc Kazimir <·····@email.address> writes:
>> 
>> > So, CL chose an option that leads to speed, and Newlisp chose the
>> > option that leads to expressive power. That is exactly what I said.
>> 
>> Err..., no, MACLISP chose speed and Common Lisp followed.
>
> Scheme chose correctness and Common Lisp followed.
>
> FEXPRs are a runtime feature that makes debugging harder and
> makes compilation next to useless.

Kent Pitman, _Special Forms in Lisp_, Conclusions:

``It should be clear from this discussion that FEXPR's are only safe
  if no part of their ``argument list'' is to be evaluated, and even
  then only when there is a declaration available in the environment
  in which they appear. Using FEXPR's to define control primitives will
  be prone to failure due to problems of evaluation context and due to
  their potential for confusing program-manipulating programs such as
  compilers and macro packages.''

``MACRO's on the other hand, offer a more straightforward and reliable
  approach to all of the things which we have said should be required
  of a mechanism for defining special forms.''

``It is widely held among members of the MIT Lisp community that FEXPR,
  NLAMBDA, and related concepts could be omitted from the Lisp language
  with no loss of generality and little loss of expressive power, and that
  doing so would make a general improvement in the quality and reliability
  of program-manipulating programs.'' 

  [Pitman, 1980]

The following paper looks like it promises a contrasting viewpoint, but
I don't have access to it:

  _Sometimes an FEXPR is better than a macro_, Z. Lichtman, ACM SIGART 
      Bulletin, Issue 97, July 1986.

We /do/ have the abstract, at least, which suggests that the reason a FEXPR is
sometimes better has to do with efficiency, not expressiveness --- ``despite
its negative aspects''! Even the author of this paper defending fexprs
recognizes that there are negative aspects:

``Common Lisp, which is becoming THE Lisp standard, does not support
  call by text (FEXPR mechanism in Mac/Franz Lisp). This effect can
  be obtained using macros. Based on the experience of converting an
  OPS5 implementation from Franz Lisp to Common Lisp, it is argued that
  sometimes call by text is needed for efficiency, despite its negative
  aspects. In the case of languages embedded in a Lisp system, using
  the macro alternative for call by text can cause macro expansion at
  execution time. This leads to a some-what less efficient implementation
  of the embedded language.'' 

  [Z. Lichtman, 1986]

Without having the full text of the paper, we can nevertheless easily
conjecture what the abstract is talking about. How would macro expansion
happen at execution time? You'd have to EVAL or COMPILE some form that is a
macro call, or contains one.  Sure, doing the macro expansion could be a waste
of time if the expansion is only evaluated once (or, more generally, a small
number of times). The expanded text of the macro might run no faster than the
original form being interpreted by the FEXPR, plus you pay for the cost of
the macroexpansion; in that case, no matter how many times the expansion is
re-evaluated, it never pays back the cost of macroexpansion.
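As a hedged sketch of the scenario the abstract seems to describe (the
function name RUN-EMBEDDED-RULE is hypothetical): an embedded language that
builds Lisp source at run time and EVALs it pays the macro-expansion cost on
every call, because any macro in the constructed form must be expanded
before it can be executed.

```lisp
;; Hypothetical sketch: each call constructs fresh source containing a
;; macro call (COND), so each EVAL must expand COND again at run time.
(defun run-embedded-rule (lhs rhs)
  (eval `(cond (,lhs ,rhs))))

(run-embedded-rule '(> 2 1) ''fired)   ; => FIRED
```

A fexpr-based embedded language would skip that expansion step and simply
interpret the forms it was handed.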

Macros are often written on the assumption that expansion time doesn't
matter, so a macro may spend many cycles and do a lot of consing; only the
efficiency of the code it generates is considered important.  Given that you
are EVAL'ing the code anyway, an interpretive alternative to macroexpansion
(call by source code) may be faster.

Indeed, a fexpr behaves a lot like an extension to the /interpreter/ for
handling a special form, whereas a macro is an extension to the /compiler/ for
handling a special form. So, in a sense, if code is purely interpreted, it
doesn't always make sense to invoke a /compiler/ extension to handle some
special form.
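The distinction can be sketched in CL itself (a hedged illustration only:
CL has no real fexprs, so the fexpr side is simulated by a function that
receives quoted source; FEXPR-UNLESS and MACRO-UNLESS are made-up names):

```lisp
;; Fexpr-like operator: receives its arguments as unevaluated source and
;; interprets them itself, every time it is called (interpreter-style).
(defun fexpr-unless (test-form body-form)
  (unless (eval test-form) (eval body-form)))

;; Macro: transforms the source once, before execution (compiler-style).
(defmacro macro-unless (test-form body-form)
  `(unless ,test-form ,body-form))

(fexpr-unless '(> 1 2) '(print "interpreted"))  ; decides at run time
(macro-unless (> 1 2) (print "expanded"))       ; expanded before running
```

In a purely interpreted setting the fexpr-like version does no code
generation at all, which is the efficiency point the abstract makes.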

Sometimes interpreting can even be outright faster than compiling.  This
happens, for instance, when straightforwardly compiled code generates a loop
that exceeds the size of a small cache on the processor, whereas encoding the
algorithm as a tiny interpreter plus a compact piece of data behaving as code
allows everything to fit into the cache.

But where is the paper that argues that fexprs are sometimes better than
macros because they are more flexible and powerful? Or easier to use, learn,
etc.? It sounds like Mr. Majorinc is confident enough that he should be able
to set about writing that paper. I'm looking forward to it!
From: Pascal J. Bourguignon
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <87iqo6juj8.fsf@galatea.local>
Kaz Kylheku <········@gmail.com> writes:

> The following paper looks like it promises a contrasting viewpoint, but
> I don't have access to it:
>
>   _Sometimes an FEXPR is beter than a macro_, Z. Lichtman, ACM SIGART 
>       Bulletin, Issue 97, July 1986.
>
> We /do/ have the abstract, at least, which suggests that the reason a FEXPR is
> sometimes better has to do with efficiency, not expressiveness --- ``despite
> its negative aspects''! Even the author of this paper defending fexprs
> recognizes that there are negative aspects:
>
> ``Common Lisp, which is becoming THE Lisp standard, does not support
>   call by text (FEXPR mechanism in Mac/Franz Lisp). This effect can
>   be obtained using macros. Based on the experience of converting an
>   OPS5 implementation from Franz Lisp to Common Lisp, it is argued that
>   sometimes call by text is needed for efficiency, despite its negative
>   aspects. In the case of languages embedded in a Lisp system, using
>   the macro alternative for call by text can cause macro expansion at
>   execution time. This leads to a some-what less efficient implementation
>   of the embedded language.'' 
>
>   [Z. Lichtman, 1986]
>
> Without having the full text of the paper paper, we can nevertheless easily
> conjecture what the Abstract might be talking about. How would macro-expansion
> happen at execution time? You'd have to EVAL or COMPILE some form that is a
> macro call, or contains one.  Sure, doing the macro expansion could be a waste
> of time, if the expansion is only evaluated once (or, more generally, a small
> number of times). The expanded text of the macro might run no faster than the
> original form being interpreted by the FREXP, plus you're pay for the cost of
> the macroexpansion, so it could be the case that no matter how many times the
> macroexpansion is re-evaluated, it does not pay for the cost of macroexpansion.


The embedded languages should do minimal compilation themselves.  Then
macros wouldn't have to be expanded at run-time.


> Macros are often written with the assumption that expansion time is not
> important, and so it could spend many cycles and do a lot of consing; the
> efficiency of the code that the macro generates is important.  Given that you
> are EVAL'ing this code already, an interpretive alternative to macroexpansion
> (call by source code) may be faster.
>
> Indeed, an frexp behaves a lot like an extension to /interpreter/ for handling
> a special form, whereas a macro is an extension to a /compiler/ for handling a
> special form. So, in a sense, if code is purely interpreted, it doesn't always
> make sense to be invoking a /compiler/ extension for the handling of some
> special form.
>
> Sometimes interpreting can even be outright faster than compiling.  This
> happens, for instance, when some straightforward compiled code generates a loop
> that exceeds the size of a small cache on the processor, whereas encoding the
> algorithm as a tiny interpreter plus a compact piece of data behaving as code,
> allows everything to fit into the cache.
>
> But where is the paper that argues that fexprs are sometimes better than macros
> because they are more flexible and powerful than macros? Or easier to use,
> learn, etc? It sounds like Mr.  Majorinc is confident enough that he should be
> able to set about writing that paper. I'm looking forward to it!

We're waiting...

-- 
__Pascal Bourguignon__
From: D Herring
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <497a3a1f$0$3339$6e1ede2f@read.cnntp.org>
Kaz Kylheku wrote:
> On 2009-01-22, Rainer Joswig <······@lisp.de> wrote:
>> In article <··················@metalzone.distorted.org.uk>,
>>  Mark Wooding <···@distorted.org.uk> wrote:
>>
>>> Majorinc Kazimir <·····@email.address> writes:
>>>
>>>> So, CL chose an option that leads to speed, and Newlisp chose the
>>>> option that leads to expressive power. That is exactly what I said.
>>> Err..., no, MACLISP chose speed and Common Lisp followed.
>> Scheme chose correctness and Common Lisp followed.
>>
>> FEXPRs are a runtime feature that makes debugging harder and
>> makes compilation next to useless.
> 
> Kent Pitman, _Special Forms in Lisp_, Conclusions:
> 
> ``It should be clear from this discussion that FEXPR's are only safe
>   if no part of their ``argument list'' is to be evaluated, and even
>   then only when there is a declaration available in the environment
>   in which they appear. Using FEXPR's to define control primitives will
>   be prone to failure due to problems of evaluation context and due to
>   their potential for confusing program-manipulating programs such as
>   compilers and macro packages.''
> 
> ``MACRO's on the other hand, offer a more straightforward and reliable
>   approach to all of the things which we have said should be required
>   of a mechanism for defining special forms.''
> 
> ``It is widely held among members of the MIT Lisp community that FEXPR,
>   NLAMBDA, and related concepts could be omitted from the Lisp language
>   with no loss of generality and little loss of expressive power, and that
>   doing so would make a general improvement in the quality and reliability
>   of program-manipulating programs.'' 
> 
>   [Pitman, 1980]
> 
> The following paper looks like it promises a contrasting viewpoint, but
> I don't have access to it:
> 
>   _Sometimes an FEXPR is beter than a macro_, Z. Lichtman, ACM SIGART 
>       Bulletin, Issue 97, July 1986.
> 
> We /do/ have the abstract, at least, which suggests that the reason a FEXPR is
> sometimes better has to do with efficiency, not expressiveness --- ``despite
> its negative aspects''! Even the author of this paper defending fexprs
> recognizes that there are negative aspects:
> 
> ``Common Lisp, which is becoming THE Lisp standard, does not support
>   call by text (FEXPR mechanism in Mac/Franz Lisp). This effect can
>   be obtained using macros. Based on the experience of converting an
>   OPS5 implementation from Franz Lisp to Common Lisp, it is argued that
>   sometimes call by text is needed for efficiency, despite its negative
>   aspects. In the case of languages embedded in a Lisp system, using
>   the macro alternative for call by text can cause macro expansion at
>   execution time. This leads to a some-what less efficient implementation
>   of the embedded language.'' 
> 
>   [Z. Lichtman, 1986]

I downloaded the paper.  Email me if you need a copy, but it wasn't 
much to read.  He cites Pitman's paper as a reference.

Lichtman's paper was based on experience with
2. Forgy, C.L. OPS5 Users Manual, CMU-CS-81-135 , Dept.
    of Computer Science, Carnegie-Mellon University,
    Pittsburgh, PA, 1981.

He demonstrated two cases where OPS5 ran 7-15% slower without FEXPRs.
- OPS5 loaded plain-text definitions of its functions for compilation; 
this was slowed by macro expansions.
- OPS5 passed all parameters by text; this sometimes triggered a 
"call-by-text" macroexpansion at runtime.

The conclusion:
``
       We showed that in some situations of languages embedded within 
Lisp, doing call by text through a macro call, instead of using an 
explicit call by text mechanism, causes macro expansion at runtime 
which is an undesired overhead.
       Therefore, in spite of some dangerous aspects of explicit call 
by text mechanisms such as FEXPR, &quote and nlambda, it should be 
reconsidered whether such an explicit mechanism be added to Common 
Lisp, for the sake of efficiency when call by text is required at run 
time.''

To me, it sounds like OPS5 could have used some refactoring or a 
better compiler (e.g. save fasls).  Call-by-text will always be slow, 
macroexpansions or no.  I don't believe the above problems affect 
other languages built on Lisp, such as [Open]Axiom/FriCAS.  Even toy 
CASs such as Fateman's MockMMA make an effort to replace text with 
symbols and structures ASAP.

For a better FEXPR, I would recommend Tcl -- EIAS (everything is a string).

- Daniel
From: William James
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <gl0fim0och@enews2.newsguy.com>
·····················@gmail.com wrote:

> A Look at newLISP:
> page 1:
> http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP page
> 2:
> http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP/page2/
> page 3:
> http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP/page3/
> 
> newLISP Home Page: http://www.newlisp.org/
> 
> Wikipedia article about newLISP: http://en.wikipedia.org/wiki/NewLISP

My suggestions for NewLisp:

1. Become lexically scoped.
2. Add hash tables (associative arrays).
From: ········@gmail.com
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <8f11067a-20b0-4c1b-9ba5-f94da0768a39@v5g2000pre.googlegroups.com>
·····················@gmail.com:
> A Look at newLISP:
> B SeeThatItsPERFECT
> C UseItAndForgetTheREST

newLISP seems perfect, the definite answer to all Commonly Lisped
questions.
But the name is simply awful.
Lispers hate camelCase.
Please call it new-lisp

thank-you-very-much
From: Kaz Kylheku
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <20090119033021.-341@gmail.com>
On 2009-01-18, ·····················@gmail.com <·····················@gmail.com> wrote, via Google Groups:
> A Look at newLISP:

ROFL!

I was disconnected from the machine where I normally read news,
so I was browsing the newsgroup using Google Groups.

I noticed that someone has been giving low Google Groups star ratings
to all of the intelligent criticisms posted by our comp.lang.lisp regulars
in this thread.

Is that you doing that? Do you actually have friends helping, or is it all sock
puppet accounts?

Do you people think anyone gives a flying leap about newLISP /or/ Google Groups
ratings?  

Yeah, you're really socking it to the infidels who dare criticize newLISP!
One star on Google Groups, three times over again---take THAT, you
nay-saying scumbags!


Here is what I think.

newLISP is a retarded heap of dung worked on by clueless monkeys.

As such, we shouldn't care about it, except that it is proactively harmful,
because it damages the view of the Lisp family of languages from the
perspective of any Lisp newcomer who may come across it as a point of first
contact.

Nobody should ever come into contact with newLISP, but newbies are more
susceptible to the damage it may inflict. I'm sure I could poke around in
newLISP without any permanent braindamage, but I fear for the vulnerable
noob.

The name may be ``new''LISP, but it's anything but new. It's not even state
of the art for 1960.  It represents an almost fifty-year backwards leap in
progress.

The author should write a lengthy apology for newLISP, and move on.

I don't know of a better way to put it: newLISP is to Lisp what Jack
Skellington's concept of Santa Claus is to the real Santa Claus.

Note that Jack let Santa Claus go and repented for being an asshole.

Will Klutz do the same one day?
From: ·····················@gmail.com
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <11905664-4f75-47b3-94ab-9a9dc31ab569@p2g2000prf.googlegroups.com>
The arguments used in this thread are not new. They were used over
2000 years ago when ancient Egyptian scribes were improving their
hieroglyphic alphabet. Some of the hieroglyphs represented consonants,
so they could be used the same way as phonetic letters. The idea of
using phonetic hieroglyphs ONLY was a total insanity according to the
scribes, because hieroglyphs representing words were much more terse;
they saved time and parchment. A professional scribe knew over 5000
hieroglyphs. From his point of view, someone who knew only the
phonetic hieroglyphs was an illiterate idiot wasting parchment.

The ancient Greeks also used fairly complex hieroglyphic script
(called Linear B) before Dorian invasion destroyed their civilization.
After the invasion there were no professional scribes left, so a new
script "for illiterate idiots" had to be invented. The new script was
entirely phonetic, and so simple that anyone could learn it in a few
days. All modern Europeans use a modified version of this script.

In my opinion, Common Lisp resembles the ancient hieroglyphs, as
well as cars with manual transmission (for professional drivers) and
obsolete Unix computers (for computer professionals)... Common Lisp
can do the work efficiently, but it is too complex for part-time
programmers. Most programmers alive today are part-time programmers.
Many of them spend as much time writing AutoLISP programs as driving a
car, but they do not consider themselves professional programmers or
professional drivers. Computer programming is just one of many skills
they need in everyday life. It seems that newLISP is simple enough
and terse enough for billions of part-time programmers. It is more
terse (clean) than Scheme and more mature than Pico Lisp.
From: Raffael Cavallaro
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <2009011919034716807-raffaelcavallaro@pasespamsilvousplaitmaccom>
On 2009-01-19 18:23:12 -0500, ·····················@gmail.com said:

> It seems that the newLISP is simple enough
> and terse enough for billions of part-time programmers. It is more
> terse (clean) than Scheme and more mature than Pico Lisp.

And more broken than either.

If you want small and clean use scheme. There are plenty of 
implementations including those specifically designed for scripting 
(such as gauche).

newlisp is just an ill conceived jumble which foists restrictions on 
the user to make the language implementor's job easier. This priority 
is backwards.

-- 
Raffael Cavallaro, Ph.D.
From: Tamas K Papp
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <6tkinjFbhhl8U1@mid.individual.net>
On Mon, 19 Jan 2009 15:23:12 -0800, simple.language.yahoo wrote:

> In my opinion the Common Lisp resembles the ancient hieroglyphs, as well
> as cars with manual transmission (for professional drivers), and

Yeah, whatever.

You still haven't shown any actual code to back up your points, yet
you hope to make up for that by bullshitting in metaphors.

I still don't know why eval should be used so extensively.  I still
haven't seen any examples where manipulating the S-expressions in
lambda forms is an advantage rather than something hairy and error-prone.
Etc.

I don't care about your stories.  You are discussing a fricking
programming language, support your arguments with code or get lost.

Tamas
From: Bruce Stephens
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <87tz7u3ke0.fsf@cenderis.demon.co.uk>
Tamas K Papp <······@gmail.com> writes:

[...]

> I still don't know why eval should be used so extensively.

It isn't, is it?  (Except in an implicit sort of way.  I mean a lisp
system which didn't evaluate anything would probably not be very
useful.  I mess about with elisp quite a bit, and I don't recall
seeing eval very often, and I'm fairly sure I haven't written any code
using it.  It's the kind of thing one uses interactively: eval-defun,
for example.)

> I still haven't seen any examples where manipulating the
> S-expressions in lambda forms is an advantage, not something hairy
> and error-prone.  Etc.

That strikes me as a false dichotomy.  Macros that manipulate sexprs
can be hairy and error-prone to write, but an advantage when they've
been written.  When written carefully they can be quite safe to
use---that's the advantage of sexprs.  Nevertheless, writing them
isn't (in my limited experience) especially common, but when you need
it, it's useful to be able to have it.

[...]
From: Alex Mizrahi
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <49775cba$0$90267$14726298@news.sunsite.dk>
 sly> obsolete Unix computers (for computer professionals)... Common Lisp
 sly> can do the work efficiently, but is too complex  for part-time
 sly> programmers.

what is complex about Common Lisp and how exactly does newLISP fix this?
i find CL quite simple at its core. there are lots of functions in the
standard library, some of them weird, but you do not need to learn them all.

on the other hand, newLISP is quite complicated due to ORO restrictions,
as one needs workarounds like symbols and contexts to implement even
basic stuff. and this stuff is actually pretty complex and not good for
the "part-time" programmers you worry about.

here's an example from newLISP documentation

-----
To avoid passing data objects by value copy, they can be enclosed in context 
objects and passed by reference. The following code snippet shows reference 
passing using a namespace default functor:

   (define (modify data value)
      (push value data))

   (set 'reflist:reflist '(b c d e f g))

   (modify reflist 'a)

   reflist:reflist => (a b c d e f g)
-----
wtf is a "namespace default functor" and why does one have to use one to
implement a relatively simple function?

it looks like newLISP features were structured around its interpreter
implementation -- whatever was easier to implement was included. this might
be simpler for those who write interpreters, but it is not good for those
who are trying to learn it.

you should be ashamed for supporting this shit, i suggest you do seppuku
to defend your reputation. 
From: ······@corporate-world.lisp.de
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <5c3c454b-8923-4a76-8a52-3ae0d0d05030@a12g2000pro.googlegroups.com>
On 21 Jan., 18:34, "Alex Mizrahi" <········@users.sourceforge.net>
wrote:
>  sly> obsolete Unix computers (for computer professionals)... Common Lisp
>  sly> can do the work efficiently, but is too complex  for part-time
>  sly> programmers.
>
> what is complex about Common Lisp and how exactly does newLISP fix this?
> i find CL being quite simple in it's core. there are lots of functions in
> standard
> library, some of them being weird, but you do not need to learn them all.
>
> on the other hand, newLISP is quite complicated due to ORO restrictions,
> as one needs workarounds like symbols and contexts to implement even
> basic stuff. and this stuff is actually pretty complex and not good for
> "part-time"
> programmers you worry about.
>
> here's an example from newLISP documentation
>
> -----
> To avoid passing data objects by value copy, they can be enclosed in context
> objects and passed by reference. The following code snippet shows reference
> passing using a namespace default functor:
>
>    (define (modify data value)
>       (push value data))
>
>    (set 'reflist:reflist '(b c d e f g))
>
>    (modify reflist 'a)
>
>    reflist:reflist => (a b c d e f g)
> -----
> wtf is "namespace default functor" and why one has to use them to implement
> a relatively simple function?
>
> it looks like newLISP features were structured around its interpreter
> implementation -- whatever
> was easier to implement was included. this might be simplier for ones who
> write interpreters,
> but it is not good for ones who are trying to learn it.
>
> you should be ashamed for supporting this shit, i suggest you doing a
> sepukku
> to defend your reputation.

ROTFL
From: Majorinc Kazimir
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <gl8nit$qu1$1@ss408.t-com.hr>
Alex Mizrahi wrote:
> 
> what is complex about Common Lisp and how exactly does newLISP fix this?
> i find CL being quite simple in it's core. there are lots of functions in 
> standard
> library, some of them being weird, but you do not need to learn them all.

It is code=data, eval and macros. CL/Scheme macros are
complex. Some CL users repeat all the time "if you use
eval, that means that you're n00b that didn't l3arn3d
h0w to use macros!"

Newlisp users use eval if it solves their problem. Their
eval understands local variables, so it is far more
frequently used.

And what if one WANTS macros? Newlisp macros are easier to
understand, more powerful (CL-ers claim the additional power is of
no use), and easier to write in practice.

> it looks like newLISP features were structured around its interpreter
> implementation -- whatever was easier to implement was included. this
> might be simplier for ones who write interpreters,
> but it is not good for ones who are trying to learn it.

The method might be as you describe, but not the result. This is why:

CL tries to serve two masters: to give programmers almost
as much power as interpreted Lisps, at almost as much
speed as Pascal (if they do not use eval). It is complicated
- complicated to design, implement, and use.

Newlisp is designed to be simple. Like Perl, Python, Ruby, and
Basic, it doesn't try to compete with C. That allowed
significant simplifications, and some of these also increase
expressive power. But Newlisp is primarily advertised as
simple to learn and use and good enough for many
'scripting' purposes.

> 
> you should be ashamed for supporting this shit, i suggest you doing a 
> sepukku
> to defend your reputation. 
> 

Yeah right ...

In many areas CL and Scheme standards and implementations
are equal to or better than Newlisp. In many areas, they are
just different. But if one puts *easy to learn and use*
and powerful *code=data* at the top of his priorities, Newlisp
is an excellent choice.
From: ······@corporate-world.lisp.de
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <498ba512-9c77-43b7-9763-ec04646dfcb1@r15g2000prh.googlegroups.com>
On Jan 22, 4:01 am, Majorinc Kazimir <·····@email.address> wrote:
> Alex Mizrahi wrote:
>
> > what is complex about Common Lisp and how exactly does newLISP fix this?
> > i find CL being quite simple in it's core. there are lots of functions in
> > standard
> > library, some of them being weird, but you do not need to learn them all.
>
> It is code=data, eval and macros. CL/Scheme macros are
> complex. Some CL users repeat all the time "if you use
> eval, that means that you're n00b that didn't l3arn3d
> h0w to use macros!"

One says that EVAL is often just not needed and that it makes the code
slower than it needs to be. In many cases it is better
to generate code at compile time and have it compiled than to generate
code at runtime. Using EVAL where it is not needed is
like driving without ever shifting into a higher gear.

> Newlisp users use eval if it solves their problem.

Which problem? Any practical problem? What kind
of problems are solved by Newlisp users this
way?

> Their
> eval understands local variables, so it is far more
> frequently used.

Stuff like that has been removed earlier in the history of Lisp,
because it makes the understanding of code harder - for
the human and the compiler.

Reason:

In a lexical Lisp I can see what code uses (reads/writes/binds) the
variables. In a Lisp that provides unrestricted EVAL with access
to local variables, many pieces of code can set the local variables,
and it is not obvious when and where that happens. It provides more
flexibility at the cost of rendering any non-trivial
amount of code much harder to understand.
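A minimal CL sketch of this point (assuming the symbol LEXICAL has no
global value): EVAL runs in a null lexical environment, so lexical
bindings remain analyzable from the code alone, while special variables
stay visible to evaluated code.

```lisp
(defvar *special* 10)   ; dynamic (special) variable

(let ((lexical 1))
  (list (eval '*special*)                ; 10: specials are visible to EVAL
        (ignore-errors (eval 'lexical))  ; NIL: the lexical binding is not
        lexical))                        ; 1: and EVAL could not touch it
```

Reading the LET is enough to know everything that can happen to LEXICAL;
with an EVAL that saw locals, no such guarantee would exist.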

> And what if one WANTS macros? Newlisp macros are easier to
> understand, more powerful (CL-ers claim no use
> of additional power) and easier to write in practice.

Because they are easier to use? Do you have a more specific reason?

For example I can take a piece of code in Common Lisp
and have it macro expanded and look at it. I can trace/debug
the macro expansion in isolation. Sounds easier
than a FEXPR which at runtime does code generation, where
code generation is mixed with execution.
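For instance (a minimal sketch; SWAP is a made-up example macro), the
expansion can be inspected in isolation with MACROEXPAND-1, with no
execution involved:

```lisp
;; A trivial macro, purely for demonstration.
(defmacro swap (a b)
  `(rotatef ,a ,b))

;; Inspect what the macro generates, without running anything:
(macroexpand-1 '(swap x y))   ; => (ROTATEF X Y), T
```

With a fexpr there is no separate artifact to inspect: the interpretation
of the arguments only exists while the call is running.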

>
> > it looks like newLISP features were structured around its interpreter
> > implementation -- whatever was easier to implement was included. this
> > might be simplier for ones who write interpreters,
> > but it is not good for ones who are trying to learn it.
>
> Method might be true, but not the result. This is why:
>
> CL tries to serve two masters, to give programmers almost
> as much power as interpreted Lisps, at almost as much
> speed as Pascal (if he does not use eval.) It is complicated
>   - complicated to design, implement, use.

Common Lisp provides support for both interpretation and compilation.
What it did was remove the semantic differences
between them. For example, before Common Lisp it was often the case
that a Lisp implementation used dynamic binding
in the interpreter and lexical binding for compiled code.
Common Lisp demands that the interpreter also use
lexical binding. It makes Lisp follow a simpler execution
model (more in line with the lambda calculus).

> Newlisp is designed to be simple.

I was using some Lisps like early Scheme implementations that
look quite a bit simpler to me.

> Like Perl, Python, Ruby,
> Basic, it doesn't try to compete with C. It allowed
> significant simplifications, and some of these also increase
> expressive power. But Newlisp is primarily advertised as
> simple to learn and use and good enough for many
> 'scripting' purposes."

I think a simple Lisp in the Scheme tradition is just
as easy to use for scripting and easier to understand.
Actually, a former co-worker used scsh (the Scheme shell)
for a while and then moved on to GNU CLISP for
scripting purposes, and liked CLISP a lot.

> > you should be ashamed for supporting this shit, i suggest you doing a
> > sepukku
> > to defend your reputation.
>
> Yeah right ...
>
> In many areas CL and Scheme standards and implementations
> are equal or better than Newlisp. In many areas, they are
> just different. But, if one puts *easy to learn and use*
> and powerful *code=data* on top of his priorities, Newlisp
> is excelent choice.

I would think that real newbies are much better starting with
Scheme (R5RS), since it has clearer semantics and is
more powerful at the same time.
From: Majorinc Kazimir
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <gla4i3$pd3$1@ss408.t-com.hr>
······@corporate-world.lisp.de wrote:



> 
>> Newlisp users use eval if it solves their problem.
> 
> Which problem? Any practical problem? What kind
> of problems are solved by Newlisp users this
> way?
> 

Any problem - eval is widely used in Newlisp.
Check my blog for about one hundred
uses of eval.

> 
> In a lexical Lisp I can see what code uses (reads/writes/binds) the
> variables.
> In a Lisp that provides unrestricted EVAL with access
> to local variables, many pieces of code can set the local variables
> and it is
> not obvious when and where it happens. It provides more
> flexibility at the cost of rendering any non-trivial
> amount of code to be much harder to understand.

If unrestricted eval exists, it doesn't mean you
must use it. If you do not like it, don't use it.
Lisp is not supposed to be a police-state language.

> 
>> And what if one WANTS macros? Newlisp macros are easier to
>> understand, more powerful (CL-ers claim no use
>> of additional power) and easier to write in practice.
> 
> Because they are easier to use? Do you have a more specific reason?

Yes. CL macros construct CODE that is inserted
into the original source and evaluated at runtime. Generally,
it is harder to write CL macros than functions; it is
one more level of abstraction. Not always, but generally.

Newlisp macros are just functions that do not evaluate their
arguments. They do not necessarily construct any code;
they just evaluate at runtime. Writing Newlisp macros
is no harder than writing functions.

Here is an example of a Newlisp macro:

(set 'at-least-two
     (lambda-macro ()
       (let ((c 0)
             (L $args))
         (dolist (i L (= c 2))
           (println i)
           (if (eval i)
               (inc c)))
         (>= c 2))))

(println (at-least-two (= 1 1)
                       (= 3 2)
                       (= 2 2)
                       (= 4 4)))

(exit)

at-least-two returns true if at least two arguments
evaluate to true, and nil otherwise. It has to be a
macro because it is "lazy." As you can see, it
really looks like a function.

Note: it is not the most terse or "intelligent"
macro I could write here; it is written to demonstrate
the fundamental difference between CL and Newlisp.
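For comparison, a hedged sketch of the same lazy operator as a conventional
CL macro (my own illustration, not from either language's documentation):
it generates counting code once, at expansion time, instead of interpreting
its arguments at run time.

```lisp
(defmacro at-least-two (&rest forms)
  (let ((c (gensym "COUNT")))
    ;; Expand into an OR chain that stops evaluating as soon as two
    ;; of the forms have come out true -- so it is equally "lazy".
    `(let ((,c 0))
       (or ,@(mapcar (lambda (f) `(when ,f (>= (incf ,c) 2)))
                     forms)))))

(at-least-two (= 1 1) (= 3 2) (= 2 2) (= 4 4))  ; => T
```

Here the difference is visible: the CL version is a code transformer
(note the backquote and GENSYM), while the Newlisp version above is an
ordinary function body that happens to receive unevaluated arguments.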

One can, however, write CL-style macros in
Newlisp, if that is what one wants.

> For example I can take a piece of code in Common Lisp
> and have it macro expanded and look at it. I can trace/debug
> the macro expansion in isolation. Sounds easier
> than a FEXPR which at runtime does code generation, where
> code generation is mixed with execution.

You can test it like any other function. No special
treatment required. I have unit tests for my macros
and my functions, so each time I change a macro and
load the library, the tests report "Macro 27 is bad" if
I did something wrong.

> 
> Common Lisp provides support for interpretation and compilation.
> What it did was removing the semantic differences
> between those. For example before Common Lisp it was often the case
> that the Lisp implementation was using dynamic binding
> in the interpreter and lexical binding for compiled code.

True, it was strange idea.

> 
> I think a simple Lisp in the Scheme tradition is just
> as easy to use for scripting and better understandable.
> Actually a former co-worker used scsh (the scheme shell)
> for a while and then moved on to use GNU CLISP for
> scripting purposes and liked CLISP a lot.

I do not want to put down anyone. CLISP is a nice,
mature implementation, with a lot of invested work,
without bugs, good bignums, etc. It was my
first Lisp. If the authors read this: thanks, guys.

Nevertheless, macros are the problem.

These are four rules that, to the best of my knowledge,
completely describe the syntax and semantics of Newlisp macros:
--------------------------------------
1) How do I define macros?
    Just like functions, but use define-macro instead of define.

2) How do I define anonymous macros?
    Just like functions, but use the keyword
    lambda-macro instead of lambda.

3) And how does such a macro work?
    (m expr1 ... exprn) = (f expr1 ... exprn)

    where m and f are a macro and a function defined
    in the same way, differing only in the keyword.

4) How do I test whether an object is a macro?
    (macro? object).

-------------------------------------
If you try to explain CL macro syntax and semantics,
including the whole concept of macroexpand time vs. compile
time vs. runtime and the concept of the non-first-class
citizen, and then macroexpand, macroexpand-1, macrolet,
macro-function, and *macroexpand-hook*, you'll certainly need
much more space and time.

Finally, about ease of use: challenge me. There are
people who have used CL for 20 years, a whole community
around it, lots of scientific papers, etc. Lispers are
proud of macros writing macros writing macros. Give me the
hardest, most sophisticated problem Lispers have solved
with a macro, and I'll solve it in Newlisp, so we can
discuss the solutions. As long as the problem is one of
technique, not the lack of an algorithm.
From: Majorinc Kazimir
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <gla67r$svk$1@ss408.t-com.hr>
Typo. It should be:

These are four rules that, to the best of my knowledge,
completely describe the syntax and semantics of Newlisp macros:
--------------------------------------
1) How do I define macros?
    Just like functions, but use define-macro instead of define.

2) How do I define anonymous macros?
    Just like functions, but use the keyword
    lambda-macro instead of lambda.

3) And how does such a macro work?
    (m expr1 ... exprn) = (f 'expr1 ... 'exprn)

    where m and f are a macro and a function defined
    in the same way, differing only in the keyword.

4) How do I test whether an object is a macro?
    (macro? object).

-------------------------------------
From: Matthias Buelow
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <6trqomFcg831U1@mid.dfncis.de>
Majorinc Kazimir wrote:

> Any problem - eval is widely used in Newlisp.
> Check my blog for about one hundred of
> uses of eval.

Eval is like GOTO in other languages (or GO in CL) -- it exists, it is
useful in some situations but shouldn't be overused.
From: Kaz Kylheku
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <20090128082651.499@gmail.com>
On 2009-01-22, Majorinc Kazimir <·····@email.address> wrote:
> ······@corporate-world.lisp.de wrote:
>
>
>
>> 
>>> Newlisp users use eval if it solves their problem.
>> 
>> Which problem? Any practical problem? What kind
>> of problems are solved by Newlisp users this
>> way?
>> 
>
> Any problem - eval is widely used in Newlisp.
> Check my blog for about one hundred of
> uses of eval.

Fair enough; no need to repost stuff here. Let's go to the blog.

Currently, the very first thing we see on your blog is some completely moronic
code that defines IF as a function.

  ;; Code from KM's blog:
  (set 'IF (lambda()
             (eval 
               ((args)
                 (- 5 (length 
                         (string 
                            (true? 
                               (eval (first (args)))))))))))

Here we see something completely laughable. The result of evaluating
the guard expression must first be normalized to a boolean using TRUE?

This is a useless stupidity that is eliminated in Lisp with the
concept of the generalized boolean.
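
To spell that out with a minimal CL sketch (standard behavior, nothing
newLISP-specific): any non-NIL value already counts as true, so no
TRUE?-style normalization is ever needed.

  ;; MEMBER returns the matching tail -- a generalized boolean;
  ;; any non-NIL object selects the first branch of IF.
  (if (member 3 '(1 2 3))
      'found
      'missing)
  ;; => FOUND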

Next, it's converted to a string, whose length is used to control evaluation!

Let me guess, the boolean produces either TRUE (symbol name four letters long)
or FALSE (five letters long).

Good grief! What is going on?  Has the aristocracy of stupidity now been
democratized, so you have to actively run for election to take office as the
crowned prince of morons?

You've taken Lisp back to before the day when McCarthy invented IF.

Now, the sane Lisp approach: no eval, no character processing of the printed
representation of atoms, no quoting of arguments to the resulting form:

  ;; for example, we have COND but not IF

  (defmacro if (antecedent consequent &optional alternate)
    `(cond (,antecedent ,consequent)
           (t ,alternate)))

Shall we keep going?
From: Rainer Joswig
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <joswig-0FFFDD.23101022012009@news-europe.giganews.com>
In article <············@ss408.t-com.hr>,
 Majorinc Kazimir <·····@email.address> wrote:

> > Because they are easier to use? Do you have a more specific reason?
> 
> Yes. CL macros construct CODE that will be inserted
> in original source and evaluated during runtime. Generally,
> it is harder to write CL macros than functions, it is
> one level of abstraction more. Not always, but generally.
> 
> Newlisp macros are just functions that do not evaluate
> arguments. They do not necessarily construct any code,
> they just evaluate during runtime. It is equally hard
> to write Newlisp macros as functions.
> 
> Here is example of Newlisp macro
> 
> (set 'at-least-two
>          (lambda-macro()
>            (let ((c 0)
>                  (L $args))
>               (dolist (i L (= c 2))
>                       (println i)
>                       (if (eval i)
>                           (inc c)))
>               (>= c 2))))
> 
> (println (at-least-two (= 1 1)
>                         (= 3 2)
>                         (= 2 2)
>                         (= 4 4)))
> 
> (exit)


I find this to be one of the examples why fexprs are bad.

I want the code to look like:

(let ((counter 0))
   (block nil


     (when (= 1 1)
       (incf counter))
      (when (= counter 2)
        (return t))


     ...  ; repeat above for each expression


     nil  ; default value
    ))

I write a macro to generate the code:

(defmacro at-least-two (&rest expressions)
  (let ((c (gensym)))
    `(let ((,c 0))
       (block nil
         ,@(loop for expression in expressions
                 collect `(when ,expression (incf ,c))
                 collect `(when (= ,c 2) (return t)))
         nil))))


Now I can look at the code:

CL-USER 62 > (pprint (macroexpand-1 '(at-least-two (= 1 1)
                                                   (= 3 2)
                                                   (= 2 2)
                                                   (= 4 4))))

(LET ((#:G14496 0))
  (BLOCK NIL
    (WHEN (= 1 1) (INCF #:G14496))
    (WHEN (= #:G14496 2) (RETURN T))
    (WHEN (= 3 2) (INCF #:G14496))
    (WHEN (= #:G14496 2) (RETURN T))
    (WHEN (= 2 2) (INCF #:G14496))
    (WHEN (= #:G14496 2) (RETURN T))
    (WHEN (= 4 4) (INCF #:G14496))
    (WHEN (= #:G14496 2) (RETURN T))
    NIL))

Heh, I can see the source, and it looks like my sketch above.

The macro generates totally simple, easy-to-understand
code that can be compiled just fine.

There is no need to use EVAL in any way.

The compiler checks that it is valid code as an added bonus.

> 
> at-least-two returns true if at least two arguments
> evaluate to true, and nil otherwise. It has to be
> macro because it is "lazy." As you can see, it
> really looks like function.
> 
> Note: it is not the most terse or "intelligent"
> macro I can do here, it is written to demonstrate
> fundamental difference between CL and Newlisp.

That's the style of the Lisps of the 60s, 70s and the
early 80s. I was using FEXPRs 20 years ago.
I don't miss them...

> This are four rules that, according to my best knowledge,
> completely describe syntax and semantics of Newlisp macros:
> --------------------------------------
> 1) How I define macros?
>     Just like functions, but use define-macro instead of define.
> 
> 2) How I define anoynomous macros?
>     Just like functions, but use keyword
>     lambda-macro instead of lambda.
> 
> 3) And, how such macro works?
>     (m expr1 ...exprn)=(f expr1 ... exprn)
> 
>     where m and f are macro and function defined
>     on the same way, difference only in keywords.
> 
> 4) How do I test whether object is macro?
>     (macro? object).
> 
> -------------------------------------
> If you try to explain CL macros syntax and semantics,
> including whole concept of macroexpand time vs compile
> time vs runtime and concept of the non-first class
> ciziten, and then macroexpand, macroexpand-1, macrolet,
> macro-function, hook you'll certainly need much more space
>   and time.

The problem is not learning the concept and
machinery of a macro. The problem is understanding written
code. Understanding code with lexical binding
and a separate macro phase is easier than understanding
code that uses fexprs, since fexprs allow
all kinds of non-local effects that only
appear at runtime. Most of the time this is not
needed. Learn to use lexical binding and macros
once; that should help you write better code
for the rest of your life.

> 
> Finally, about easy to use - challenge me. There are
> people who use CL for 20 years, whole community around
> it, lot of scientific papers etc. Lispers are proud for
> macros writing macros writing macros etc. Give me the
> hardest, the most sofisticated problem lispers solved
> with macro, and I'll solve it in Newlisp, so we can
> discuss solutions. As long as problem is in
> technique, not in the lack of aglorithm.

Heh, you can do all kinds of stuff with fexprs; you
just can't compile them in any useful way,
and you just can't control the non-locality
of the effects. If I write in Lisp

(lambda (some-function)
  (let ((a 10))
    ; do something
    (funcall some-function)
    ; do something
    ))

I know that some-function cannot change the value
of a. Only code in the lexical environment can change it.
I can hand out functions f that change a and make those
available to other functions g. Still, these functions f
have to have local lexical access to a.

Allowing non-lexical access to a is evil because
it makes program understanding much harder.

-- 
http://lispm.dyndns.org/
From: Raffael Cavallaro
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <2009012200474616807-raffaelcavallaro@pasespamsilvousplaitmaccom>
On 2009-01-21 22:01:41 -0500, Majorinc Kazimir <·····@email.address> said:

> Newlisp is designed to be simple

...minded?

;^)
-- 
Raffael Cavallaro, Ph.D.
From: Slobodan Blazeski
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <21ce8727-0d61-46a9-a7fc-b7a4a1bf8998@p36g2000prp.googlegroups.com>
On Jan 22, 6:47 am, Raffael Cavallaro
<················@pas.espam.s.il.vous.plait.mac.com> wrote:
> On 2009-01-21 22:01:41 -0500, Majorinc Kazimir <·····@email.address> said:
>
> > Newlisp is designed to be simple
>
> ...minded?
>
> ;^)
> --
> Raffael Cavallaro, Ph.D.

ROFL. Instant classic.
thank you
bobi
From: Kaz Kylheku
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <20090127190945.677@gmail.com>
On 2009-01-22, Majorinc Kazimir <·····@email.address> wrote:
> Alex Mizrahi wrote:
>> 
>> what is complex about Common Lisp and how exactly does newLISP fix this?
>> i find CL being quite simple in it's core. there are lots of functions in 
>> standard
>> library, some of them being weird, but you do not need to learn them all.
>
> It is code=data, eval and macros. CL/Scheme macros are
> complex. Some CL users repeat all the time "if you use
> eval, that means that you're n00b that didn't l3arn3d
> h0w to use macros!"

You've evidently chosen to suffer a language just because in it, eval behaves
in a particular way. No matter what anyone says, you just keep harping about
``code=data'' and ``eval that understands locals''.

You don't seem to realize that you can write an interpreter in Common Lisp for
a dialect which has the eval that you want.  Writing a Lisp interpreter
in Lisp is a student exercise. You have things like reading, printing,
representation of symbols and memory management taken care of already.

Bad technology like some crummy evaluator can easily be hacked up with the good
technology in a few evenings. In a week, you could cobble together something
that blows newLISP out of the water.

And, lastly, note that Lisp has dynamic variables. If you want eval to
understand local variables, use those!

We could say that just like Common Lisp, newLISP has an eval which doesn't
give you access to lexical variables. But Common Lisp /has/ lexical variables.

newLISP's access to local variables is easy to implement because they are
dynamic.
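
A minimal sketch of the same distinction in standard CL -- EVAL sees
dynamic (special) variables but not lexical ones:

  (defvar *x* 42)   ; dynamically bound (special)
  (eval '*x*)       ; => 42

  (let ((y 1))      ; lexically bound
    (eval 'y))      ; error: Y is unbound inside EVAL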

If newLISP had real lexical variables, the author of newLISP would probably
work out some excuse why his eval cannot access them. Put a newbie
perspective on it, like it's for ease of use or something. Or he'd point
to real Lisps and claim that they don't do it either.
From: Tamas K Papp
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <6tr7g3Fcbb01U1@mid.individual.net>
On Thu, 22 Jan 2009 04:01:41 +0100, Majorinc Kazimir wrote:

> It is code=data, eval and macros. CL/Scheme macros are complex. Some CL
> users repeat all the time "if you use eval, that means that you're n00b
> that didn't l3arn3d h0w to use macros!"

More abstract handwaving.  Still no code.  Must be a cultural trait
for the newLISP community.

Tamas
From: Alex Mizrahi
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <4978a274$0$90274$14726298@news.sunsite.dk>
 ??>> what is complex about Common Lisp and how exactly does newLISP fix
 ??>> this? i find CL being quite simple in it's core. there are lots of
 ??>> functions in standard library, some of them being weird, but you do
 ??>> not need to learn them all.

 MK> It is code=data, eval and macros. CL/Scheme macros are
 MK> complex.

eval and macros are rarely used even by professional users
 -- it is not something you do each day. if we are speaking about
non-professional ones, most of them do not need them at all,
and for the ones who do, eval is there.

 MK>  Some CL users repeat all the time "if you use eval, that means that
 MK> you're n00b that didn't l3arn3d h0w to use macros!"

and what is wrong with it? eval is inferior for most purposes, so if
you want to write good code, do not use it. still, if we are speaking
about "part-time programmers", they do not really need to write
good code, only code that works.

 MK> Newlisp users use eval if it solves their problem.

and CL users can use eval to solve their problems, so how
is it different? do they need blessing to use eval?

 MK>  Their eval understands local variables,

by making all variables special? this is _very_ bad, as it
breaks closures. closures are much more important
than eval, as they allow solving most problems that would
otherwise need eval in a much more elegant and safe way.
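
a minimal sketch of that point, in standard CL -- the closure captures
the variable directly, with no eval in sight:

  (defun make-adder (x)
    (lambda (y) (+ x y)))      ; closes over the lexical binding of X

  (funcall (make-adder 2) 3)   ; => 5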

 MK>  so it is far more frequently used.

if local variables in eval are such a big deal, is it that hard to declare
variables as special with defvar?

or, if you insist on "local variables", we can fix it with a simple macro 
like this:

(defmacro dlet ((a b) &body body)
    `(let ((,a ,b))
      (declare (special ,a))
      ,@body))

CL-USER> (dlet (x 2)
                       (dlet (y 3)
                          (eval '(+ x y))))
5


huh, eval sees local variables now! will this macro make CL easier to use?

 MK> Method might be true, but not the result.

are you sure you're not biased about the result? if you know the language
inside-out, it might seem simple and logical. but it is not like that to 
external observers.
if I compare newLISP to Scheme, i do not have reason to be biased toward
any of these languages, as i do not use any of them. yet, Scheme appears
well-designed to me, while newLISP appears to be weird.

and i've seen lots of examples like this. JavaScript and PHP are similar
in their role, style and heavy use of associative arrays, but they are very
different in their nature -- JavaScript was mostly designed as a language,
and PHP was designed as an interpreter (whatever was easier to code for
Rasmus Lerdorf). and the results are different -- JS is a pretty nice and
solid language (pretty similar to Lisp/Scheme, by the way, in everything
except syntax), while PHP is weird. JS is evolving by extension; PHP changes
all the time (in version 5, for example, they totally changed object
assignment/identity semantics, because the initial semantics were weird.)

by the way, i find PHP and newLISP similar in many ways -- both are
interpreter-centric, both do not support closures but some ad-hoc weirdness
instead (the new version of PHP supports closures in some weird way, though),
both have weird assignment and reference semantics and a half-baked memory
management model.

 MK>  This is why:

 MK> CL tries to serve two masters, to give programmers almost
 MK> as much power as interpreted Lisps, at almost as much
 MK> speed as Pascal (if he does not use eval.) It is complicated
 MK>   - complicated to design, implement, use.

if you think that CL is complicated in order to make it fast, and that eval
is deprecated for the same reason, you're wrong. it is just cleaner to do it
with macros and closures, and, incidentally, it is also faster. Common Lisp
is not complicated in its core. there are additional features -- such as
type declarations -- that make it quite complicated, but these features are
optional. (for example, i'm not really good at type declarations, despite
considering myself a professional CL programmer).

 MK> Newlisp is designed to be simple.

it does not look like that. if you want to make it simple, remove all the shit
like ORO, contexts and "default functors". they make the _implementation_
simple, not the language!

a good example of language which is really simple is JavaScript -- it has
very clean semantics with almost no questionable parts, everything serves
its purpose. and it is very easy to learn, lots of people have mastered it
without any problems.

 MK> it doesn't try to compete with C.

then why is ORO memory management important? i have the impression
that it is made that way to avoid GC overhead and its unpredictability.
if you look at it from the programmer's point of view, GC is strictly superior
to ORO, as GC works totally transparently without imposing any limitations.

if you worry about GC overhead, then it means that it is newLISP that
serves two masters. but it made the wrong compromises, it seems --
Common Lisp almost never sacrifices the programmer's convenience
for performance reasons, but newLISP sacrificed transparency in
the most critical part.

 MK> In many areas CL and Scheme standards and implementations
 MK> are equal or better than Newlisp. In many areas, they are
 MK> just different. But, if one puts *easy to learn and use*
 MK> and powerful *code=data* on top of his priorities, Newlisp
 MK> is excelent choice.

indeed, CL was not designed to be easy to learn. but Scheme was.
you haven't said anything about why newLISP is better than Scheme,
and i think that as a language Scheme is strictly superior to newLISP,
by leaps and bounds.
From: Majorinc Kazimir
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <glam60$41n$1@ss408.t-com.hr>
Alex Mizrahi wrote:

> 
> or, if you insist on "local variables", we can fix it with a simple macro 
> like this:
> 
> (defmacro dlet ((a b) &body body)
>     `(let ((,a ,b))
>       (declare (special ,a))
>       ,@body))
> 
> CL-USER> (dlet (x 2)
>                        (dlet (y 3)
>                           (eval '(+ x y))))
> 5
> 
> 
> huh, eval sees local variables now! will this macro make CL easier to use?
> 

The macro loses first-class status, but otherwise the first
definition is only a bit heavy; that's it. I must accept
90% of this point.

>  MK> Method might be true, but not the result.
> 
> if I compare newLISP to Scheme, i do not have reason to be biased toward
> any of these languages, as i do not use any of them. yet, Scheme appears
> well-designed to me, while newLISP appears to be weird.

But I switched from Scheme to Newlisp, so I should be biased the other way.
Eval, macros and mutable functions in Newlisp were the main reasons.

> 
> by the way, i find PHP and newLISP similar in many ways -- both are 
> interpreter-centric,
> both do not support closures but some ad-hoc weirdness instead (new version 
> of PHP supports
> closures in some weird way, though), both have weird assignments and 
> refereces semantics
> and half-baked memory management model.

In Newlisp, functions are lists, and you can store any kind of data
in them, in any way. So, if you need closures, you can have them.


> 
>  MK>  This is why:
> 
>  MK> CL tries to serve two masters, to give programmers almost
>  MK> as much power as interpreted Lisps, at almost as much
>  MK> speed as Pascal (if he does not use eval.) It is complicated
>  MK>   - complicated to design, implement, use.
> 
> if you think that CL is complicated to make it fast and eval is deprecated
> for same reason, you're wrong. it is just more clean way to do it with
> macros and closures, and, incidentally, it is also a faster way. 

CL macros are not clean; they are not first-class
citizens; one cannot even be sure whether they exist
or not. Now you see it, now you don't.

Closures are clean, but they are just an accidental
functional data type, objects turned inside out.
One can find dozens of such constructs floating around.

Eval is the essence of code = data; it just cannot
be different than it is.

Another advocate of eval:

http://blogs.oreilly.com/digitalmedia/2004/12/lisp-is-better-than-xml-but-wo.html

> Common Lisp
> is not complicated in its core. there are additional features -- such as 
> type
> declarations -- that make it quite complicated, but these features are 
> optional.
> (for example, i'm not really a good at type declarations despite i consider
> myself a professional CL programmer).
> 
>  MK> Newlisp is designed to be simple.
> 
> it does not look like that. if you want to make it simple, remove all shit
> like ORO, contexts and "default functors". it makes _implementation_ simple,
> not the language!

I guess the designer was pragmatic about ORO, but what he did is
important. ORO is in between GC and manual memory management.

I like ORO because I usually feel GC is a heavy monster that does
something behind my back, and I do not trust it. ORO gives
me more flexibility, and it appears that other users are
also satisfied. So, I think it was a step in a good direction.

ORO symbols are like generalized memory addresses. One cannot
store one object at two addresses. But one can store the address
of the object in 100 places. Once one realizes that, ORO is
natural.

> if you worry about GC overhead, then it means that it is newLISP who
> serves two masters. but it made wrong compromises, it seems --

Obviously it does. It cannot completely neglect speed.
However, speed has a much lower priority, so, if
we are not in a nitpicking mood, we should agree.
From: Kaz Kylheku
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <20090128100847.951@gmail.com>
On 2009-01-22, Majorinc Kazimir <·····@email.address> wrote:
> Alex Mizrahi wrote:
>
>> 
>> or, if you insist on "local variables", we can fix it with a simple macro 
>> like this:
>> 
>> (defmacro dlet ((a b) &body body)
>>     `(let ((,a ,b))
>>       (declare (special ,a))
>>       ,@body))
>> 
>> CL-USER> (dlet (x 2)
>>                        (dlet (y 3)
>>                           (eval '(+ x y))))
>> 5
>> 
>> 
>> huh, eval sees local variables now! will this macro make CL easier to use?
>> 
>
> Macro loses first class, but the first definition,
> is only bit heavy, otherwise that's it.

The idiotic processing of a symbol as a string in your newLISP implementation
of IF on your blog is what is heavy!!!

/You/ are heavy. 

(In some East European languages, the word for ``heavy'' is applied to someone
in a way that in English we say someone is ``dense'' or ``thick'').

A heavy bohunk has landed and the newsgroup is still shaking.

> I must accept 90% of this point.

Accepting 90% upfront is a good start, but you must work harder on accepting
the remaining 10%. It's okay, you have lots of time.

Fact is that newLISP has access in eval to local variables because they are
dynamic. In exactly the same way, Common Lisp also has access to dynamic
variables under eval.

newLISP's eval doesn't in fact let you access lexical variables, because
lexical variables do not exist in newLISP.  Common Lisp doesn't allow it for a
different reason, namely that eval takes place in a null lexical environment.

(It is recognized that the semantics of evaluation depends on a choice of
environment and that other choices are possible in principle).

The proper way to extend eval is to give it an environment parameter, so it
is not restricted to the null lexical environment. Then you can correctly do
things like this:

 (defun my-eval-wrapper (expr environment)
   (let ((lexical-variable 43))
     (extension:eval-with-environment expr environment)))

 (let ((lexical-variable 42))
   (my-eval-wrapper 'lexical-variable (extension:current-lexical-env)))

See, now the call to EVAL-WITH-ENVIRONMENT takes place in our wrapper function
MY-EVAL-WRAPPER. The lexical environment there has three bindings: EXPR,
ENVIRONMENT and LEXICAL-VARIABLE. But this is a different LEXICAL-VARIABLE from
the one in the expression!!!

The eval will still resolve to the correct LEXICAL-VARIABLE, because the
environment is properly passed through the wrapper down into
EVAL-WITH-ENVIRONMENT. So the result will be that 42 is returned:

 (let ((lexical-variable 42))
   (my-eval-wrapper 'lexical-variable (extension:current-lexical-env)))

   -> 42  ;; not 43!

That's the right way to do it.  Which is precisely why newLISP doesn't do it
this way.  The design principle in newLISP is: ``the wrong way or the
highway''.
From: Tamas K Papp
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <6ts844Fch7p8U1@mid.individual.net>
On Thu, 22 Jan 2009 21:49:57 +0100, Majorinc Kazimir wrote:

> In Newlisp, functions are lists and you can store any kind of data on
> any way in those. So, if you need closures, you can have it.

Ouch.  You clearly don't grok what closures are.

And of course you can have closures in any language if you fight hard
for it, eg you can implement CL in C and then "have" closures in the
latter.  Thinking about it, reimplementing CL in C is a much cleaner
way to get closures than struggling with the brain-dead newLISP.

The mere suggestion that one should emulate closures by fiddling with
lambda forms as lists reeks.

> CL macros are not clean, they are not the first class citizens, one
> cannot even be sure whether they exist or not. Now you see it, now you
> dont.

CL has this thing called a language specification.  Have a look at it,
you are in for a surprise.  Among other things, it will answer a lot
of questions you might have (or should have, given your level of
ignorance) about macros and how they work.

> Closures are clean, but they are just accidental functional data type,
> objects turned inside out. One can find dozens of such constructs
> floating around.

Closures are more fundamental and versatile than objects.  

> Eval is essence of code = data, it just cannot be different than it is.

Repeating your meaningless mantra won't make it true.

> speed has much lower level of priority, so, if we are not in nitpicking
> mood, we should agree.

It is true that speed is less important than programming convenience,
elegance and readability but newLisp doesn't buy you any of the latter
so the point is moot.  You are sacrificing speed for nothing.

Tamas
From: Rob Warnock
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <hqOdnQ9YvqBas-TUnZ2dnUVZ_tTinZ2d@speakeasy.net>
Tamas K Papp  <······@gmail.com> wrote:
+---------------
| And of course you can have closures in any language if you fight hard
| for it, eg you can implement CL in C and then "have" closures in the
| latter.  Thinking about it, reimplementing CL in C is a much cleaner
| way to get closures than strugging with the brain-dead newLISP.
+---------------

Yup, viz.:

    $ cat ctest.c
    #include <stdio.h>               /* for printf() */

    typedef unsigned long lispobj;   /* Or whatever your generic type is */

    typedef struct closure_s {
	lispobj (*func)(lispobj *, ...);
	lispobj *env;
    } closure_t, *closure_p;

    #define funcall_closure(closure, args...) \
      (((closure)->func)((closure)->env, ## args))

    #define CFUNCP lispobj (*)(lispobj *, ...)

    lispobj
    foo(lispobj *env, lispobj x, lispobj y)
    {
	return (env[0] * x) + (env[1] * y);
    }

    int
    main(void)
    {
	lispobj e[2] = {3, 17};               /* Create a small environment */
	closure_t c = {(CFUNCP)&foo, &e[0]};  /* Close FOO over it. */

	printf("%lu\n", funcall_closure(&c, 1, 1000));
	return 0;
    }
    $  make ctest
    cc -O -pipe   ctest.c  -o ctest
    $ ./ctest
    17003
    $ 

I've been playing these sorts of games for years, with great success!  ;-}

+---------------
| The mere suggestion that one should emulate closures
| by fiddling with lambda forms as lists reeks.
+---------------

Indeed!


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Kaz Kylheku
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <20090128181246.722@gmail.com>
On 2009-01-23, Rob Warnock <····@rpw3.org> wrote:
>     $ ./ctest
>     17003
>     $ 
>
> I've been playing these sorts of games for years, with great success!  ;-}

Who's winning today, and what is the score?
From: Rob Warnock
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <Y6adncPfxpE_B-TUnZ2dnUVZ_hKdnZ2d@speakeasy.net>
Kaz Kylheku  <········@gmail.com> wrote:
+---------------
| Rob Warnock <····@rpw3.org> wrote:
| >     $ ./ctest
| >     17003
| >     $ 
| >
| > I've been playing these sorts of games for years, with great success!  ;-}
| 
| Who's winning today, and what is the score?
+---------------

Seriously, having even ad-hoc closures in my "C toolbox" has been
a big win for me.

- There are some standard C library functions that are a lot less
  convenient if you *don't* have them [e.g., the "ftw(3)/nftw(3)"
  file tree walkers come to mind, or the "twalk(3)" library].

- It makes writing your own polymorphic/generic higher-order functions
  *much* easier.

- It makes writing GUI event callback functions easier.

Yes, you can fake it by requiring all your higher-order functions
to take both a "callback" function pointer and an opaque cookie
(which is passed to the "callback"), but you can't depend on all
the library functions you want to use doing this.


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Majorinc Kazimir
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <glbks2$ojm$1@ss408.t-com.hr>
Tamas K Papp wrote:
> On Thu, 22 Jan 2009 21:49:57 +0100, Majorinc Kazimir wrote:
> 
>> In Newlisp, functions are lists and you can store any kind of data on
>> any way in those. So, if you need closures, you can have it.
> 
> Ouch.  You clearly don't grok what closures are.

Closures? Move on ...

>> CL macros are not clean, they are not the first class citizens, one
>> cannot even be sure whether they exist or not. Now you see it, now you
>> dont.
> 
> CL has this thing called a language specification.  Have a look at it,
> you are in for a surprise.  Among other things, it will answer a lot
> of questions you might have (or should have, given your level of
> ignorance) about macros and how they work.

What, are you trying to fake that you have first-class
macros in CL? And to replace them with reading manuals?

Didn't work.
From: Helmut Eller
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <m2hc3q9uow.fsf@gmail.com>
* Tamas K Papp [2009-01-22 22:52+0100] writes:

> The mere suggestion that one should emulate closures by fiddling with
> lambda forms as lists reeks.

Why?  It seems to me that that is just the same trick as using the
symbol nil as the empty list.

Helmut
From: Kaz Kylheku
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <20090129072606.641@gmail.com>
On 2009-01-23, Helmut Eller <············@gmail.com> wrote:
> * Tamas K Papp [2009-01-22 22:52+0100] writes:
>
>> The mere suggestion that one should emulate closures by fiddling with
>> lambda forms as lists reeks.
>
>> Why?  It seems to me that that is just the same trick as using the
>> symbol nil as the empty list.

Look at that, an apparent Schemer pipes up to criticize NIL.

Don't you have something better to do, like look for places in your
code where you accidentally wrote list rather than (not (null? list)),
or (car list) instead of (if (not (null? list)) (car list))?

:)
From: Helmut Eller
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <m24ozp7sdt.fsf@gmail.com>
* Kaz Kylheku [2009-01-23 19:39+0100] writes:

> On 2009-01-23, Helmut Eller <············@gmail.com> wrote:
>> * Tamas K Papp [2009-01-22 22:52+0100] writes:
>>
>>> The mere suggestion that one should emulate closures by fiddling with
>>> lambda forms as lists reeks.
>>
>> Why?  It seems to me that that is just the same trick as using the
>> symbol nil as the empty list.
>
> Look at that, an apparent Schemer pipes up to criticize NIL.
>
> Don't you have something better to do, like look for places in your
> code where you accidentally wrote list rather than (not (null? list)),
> or (car list) instead of (if (not (null? list)) (car list))?

I'm not a Schemer; I'm an Emacs Lisp guy.  And I find it amusing that
both Schemers and CLers fail to write decent Emacs Lisp code exactly
because they prefer those other "obviously better" languages and can't
make the needed mental switch.

Helmut.
From: Thomas F. Burdick
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <3f7dc624-48f9-4958-8edf-8ca70cb559cd@r41g2000prr.googlegroups.com>
> >> * Tamas K Papp [2009-01-22 22:52+0100] writes:
> >>> The mere suggestion that one should emulate closures by fiddling with
> >>> lambda forms as lists reeks.

> > On 2009-01-23, Helmut Eller <············@gmail.com> wrote:
> >> Why?  It seems to me that that is just the same trick as using the
> >> symbol nil as the empty list.

> * Kaz Kylheku [2009-01-23 19:39+0100] writes:
> > Look at that, an apparent Schemer pipes up to criticize NIL.

Wow, whatever Kaz is smoking ... keep that shit away from me! Helmut
is "an apparent Schemer", that's really classic.

> > Don't you have something better to do, like look for places in your
> > code where you accidentally wrote list rather than (not (null? list)),
> > or (car list) instead of (if (not (null? list)) (car list))?

On 23 jan, 20:07, Helmut Eller <············@gmail.com> wrote:
> I'm not a Schemer; I'm an Emacs Lisp guy.  And I find it amusing that
> both Schemers and CLers fail to write decent Emacs Lisp code exactly
> because they prefer those other "obviously better" languages and can't
> make the needed mental switch.

Actually, I think it's an occupational hazard of Lisp languages in
general. It's so easy to mold your environment into something more
familiar that you risk doing so before you get a sense of style in
your new environment. But that doesn't mean that you never get a sense
of taste, just that the more you can adapt the environment to your
will, the longer it takes. Myself, I write elisp with a clear CL
accent, but correctly and in good style.

And yeah, funcallable lists can do a lot of things that closures can,
and some that they can't. If you throw in buffer-local variables,
you've got a very nice environment. But you really feel that they're
not closures when you look at something like cl-ppcre. Compiled
networks of closures ... well, they require closures to get the real
benefit.
From: Timofei Shatrov
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <497a25db.94969428@news.motzarella.org>
On Fri, 23 Jan 2009 18:39:14 +0000 (UTC), Kaz Kylheku <········@gmail.com> tried
to confuse everyone with this message:

>On 2009-01-23, Helmut Eller <············@gmail.com> wrote:
>> * Tamas K Papp [2009-01-22 22:52+0100] writes:
>>
>>> The mere suggestion that one should emulate closures by fiddling with
>>> lambda forms as lists reeks.
>>
>> Why?  It seems to me that that is just the same trick as using the
>> symbol nil as the empty list.
>
>Look at that, an apparent Schemer pipes up to criticize NIL.
>
>Don't you have something better to do, like look for places in your
>code where you accidentally wrote list rather than (not (null? list)),
>or (car list) instead of (if (not (null? list)) (car list))?

That would be "lst", not "list".

-- 
|Don't believe this - you're not worthless              ,gr---------.ru
|It's us against millions and we can't take them all... |  ue     il   |
|But we can take them on!                               |     @ma      |
|                       (A Wilhelm Scream - The Rip)    |______________|
From: Alex Mizrahi
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <4974bc08$0$90264$14726298@news.sunsite.dk>
 sly> newLISP Home Page: http://www.newlisp.org/

http://www.newlisp.org/MemoryManagement.html

"newLISP follows a one reference only (ORO) rule. ... As a result, each 
newLISP object only requires one reference."

Most modern programming languages (including languages of the Lisp
family) allow objects to be referenced freely, so programming in
newLISP is very different. For example, the ORO rule forbids creating
a graph of arbitrary structure in newLISP -- it is not possible to
create a circular list, I kid you not. [1] Thus, I think newLISP is
not good for general-purpose programming.

[1]: 
http://www.alh.net/newlisp/phpbb/viewtopic.php?t=458&sid=07b356982378072a89a4af46bddf5f4b 
From: Majorinc Kazimir
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <gl34h0$3cv$1@ss408.t-com.hr>
Alex Mizrahi wrote:
>  sly> newLISP Home Page: http://www.newlisp.org/
> 
> http://www.newlisp.org/MemoryManagement.html
> 
> "newLISP follows a one reference only (ORO) rule. ... As a result, each 
> newLISP object only requires one reference."
> 
> Most modern programming languages (including languages of Lisp family) allow 
> to reference objects freely,
> so programming in newLISP is very different. For example, ORO rule forbids 
> creating a graph of
> arbitary structure in newLISP -- it is not possible to create a circular 
> list, I kid you not. [1] Thus, I think
>  newLISP is not good for a general purpose programming.

It is easy to get around it, really.
My favorite way is - using symbols.

(set 'x (list 1 'x))
(set 'y (list 1 'y))
(set 'z (list 3 'z))

Symbols can be generated during
runtime. Symbols in Newlisp are
like generalized memory addresses.
From: Alex Mizrahi
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <4977588d$0$90263$14726298@news.sunsite.dk>
 MK> It is easy to get around it, really.
 MK> My favorite way is - using symbols.

 MK> (set 'x (list 1 'x))
 MK> (set 'y (list 1 'y))
 MK> (set 'z (list 3 'z))

you mean

  (set 'x (list 1 'y))

maybe?

this is pretty ugly, if you ask me. 
From: Thomas F. Burdick
Subject: Re: newLISP is simple, terse, and well documented
Date: 
Message-ID: <fcb43467-75c7-489d-968a-1bfa8b94c5c6@a12g2000pro.googlegroups.com>
As a response to a whole subsection of this thread: I bet you can
indeed compile a Lisp dialect with fexprs. With the same semantics,
and with very good performance.

Rainer has the right argument here: it's the semantics that are the
problem.