From: Nir Sullam
Subject: Newbie questions
Date: 
Message-ID: <FB0o5H.8HM@news2.new-york.net>
Hello everybody!

I am about to get my copy of Paul Graham's book (ANSI Common Lisp) and I am
very eager to start.

Still, I have some questions:

FIRST:

Here in Israel, we have thousands of C/C++ programmers, and the newspaper
want ads are filled with requests for that kind of programmer,
BUT I have never seen a want ad for a CLOS/Lisp programmer!

If CLOS is so powerful, and (I read that) Paul Graham sold his software (written
in CLOS) to Yahoo for 49 million US$, how come CLOS is so obscure in the
programming community?


SECOND:

I have Allegro CL 5.0 Lite and Lispworks Personal Edition.

Neither of the above can produce an EXE in the Windows environment.
Is there a program that can do this (even in a demo version)?  As I haven't even
started programming in CLOS, I won't be paying 600 US$ for a program
that I do not need yet.  How do CLOS beginners usually start programming?
CLISP's interface is still too primitive for me.

THIRD

I program AutoLISP in an IDE called VitalLISP and I have come to like the way it
works (automatic completion of symbol names).  I once downloaded FreeLISP
and then lost the copy. I remember that it reminded me of VitalLISP.
Can anybody tell me where I can find the latest version that Harlequin
released?

TIA

Nir sullam

sySOFT CAD Solutions
Haifa.

From: Kent M Pitman
Subject: Re: Newbie questions
Date: 
Message-ID: <sfw1zh126oy.fsf@world.std.com>
"Nir Sullam" <······@actcom.co.il> writes:

> Here in Israel , we have thousands of C\C++ programmers and so the
> newspapers want ads are filled with requests for this kind of programmers ,
> BUT never did I see a CLOS \ Lisp programmer want ad .!!!
> 
> If CLOS is so powerfull and (I read that) P Graham sold his software (in
> CLOS) to Yahoo in 49 Million U$, How come CLOS is so obscure in the
> programming community ?

For a whimsical way to see an approximate answer, try this:

In the above, substitute "uranium" for "CLOS", "atomic power" for "Lisp",
"coal" for "C", "oil" for "C++", and "fuel sources" for "programmers",
and see if it helps you understand the answer.

Is atomic power for everyday use?  In principle it probably could be.
Look at atomic subs, for example.  Pretty small-scale use of atomic
power.  But as a practical reality, atomic power just isn't used the
same as fossil fuels.  The market is not segmented that way.  Does
that mean that coal and oil are the future and uranium the past?  Not
nearly as clear.  Observing usage stats is sometimes predictive, but
not always. The quick and easy availability of coal and oil has turned
an artificial distinction into a situation where two "would-be
competitors" appear to be different markets.

Not by necessity but more by what is now custom, Lisp tends to get used in
highly leveraged situations where other solutions don't work.  Not so much
because it has to work that way, but because people usually reach for 
something quick and easy and prevalent for ordinary problems.  But C/C++
doesn't work for some sets of things because it doesn't scale well, and
since Lisp is designed to scale, it works better in really big situations.

Also, where Lisp is used, it often allows fewer people to do what
many would otherwise be needed to do.  This means you often don't see
large shops of Lisp programmers.

The analogy isn't perfect, though.  Contrary to prevailing myth, you
don't have to wear special clothing to handle Lisp.  And it isn't unhealthy
to other products living downstream.

Hope this helps.  Have a nice day.
From: Joshua Scholar
Subject: Re: Newbie questions
Date: 
Message-ID: <37326ec5.11965212@news.select.net>
On 01 May 1999 16:59:58 +0200, Lieven Marchand <···@bewoner.dma.be>
wrote:

>* advanced object system with multiple dispatch. You can forget half
>  of the Pattern book when you have these.
Then I'm going to have to figure out this multiple dispatch thing!

>
>* dynamic development environment that allows for incremental change 
>

I guess it saves a lot on building tools, testing programs and user
interfaces for them when you can just apply your functions manually.
I'm jealous.

>* closures in stead of function pointers or similar half baked measures.

I still don't see the difference between closures and objects.  A
closure is just a function or set of functions with some trapped
variables, right?

Josh Scholar
From: Lyman S. Taylor
Subject: Re: Newbie questions
Date: 
Message-ID: <372DDC4B.33470534@mindspring.com>
Joshua Scholar wrote:
> 
> On 01 May 1999 16:59:58 +0200, Lieven Marchand <···@bewoner.dma.be>
> wrote:
> 
> >* advanced object system with multiple dispatch. You can forget half
> >  of the Pattern book when you have these.
> Then I'm going to have to figure out this multiple dispatch thing!

  I think what he was trying to convey was not that you forget the 
  "patterns part" of the patterns book, but the OTHER half of the 
  book: the periodic C++ kludges that you need to implement 
  some of those patterns. 

  For example the Visitor pattern, to a certain extent, IS multiple dispatch. 
  You don't have to hack up some explicit representation for it. 
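
  As a rough sketch of what multiple dispatch looks like in CLOS (the
  classes and the generic function here are made up, purely for
  illustration):

   (defclass circle () ())
   (defclass square () ())
   (defclass screen () ())
   (defclass printer () ())

   (defgeneric draw (shape device))

   (defmethod draw ((s circle) (d screen))  (format t "circle on screen~%"))
   (defmethod draw ((s circle) (d printer)) (format t "circle on printer~%"))
   (defmethod draw ((s square) (d screen))  (format t "square on screen~%"))

   ;; The applicable method is chosen from the classes of BOTH arguments,
   ;; which is exactly what the Visitor pattern simulates by hand.
   (draw (make-instance 'circle) (make-instance 'printer))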
  
  [  It is a matter open to debate as to how much CLOS and a dynamic
     language make those patterns simple to implement.

     You can find one side of the debate here: 

         http://www.norvig.com/design-patterns/

     I also think the GoF (Gang of Four) probably picked a set of patterns 
     that includes a larger set of things that C++ doesn't do well.  Since
     those folks are one of their primary audiences, you might as well pick
     patterns that have a high "wow, doing that was so much more painful
     before" effect. :-) 
   ] 

> >* closures in stead of function pointers or similar half baked measures.
> 
> I still don't see the difference between closures and objects.  A
> closure is just a function or set of functions with some trapped
> variables, right?

   Closures are anonymous.  You don't have to enlarge your global namespace
   to use them.  I suppose you could use nested classes to avoid conflicts.
   However, that is using the outer class more as a "namespace" mechanism
   than as an "object" mechanism.  Those aren't necessarily the same thing. 

   (defun add-n (n)
     #'(lambda (x) (+ n x)))

   (mapcar (add-n 2) '(1 2 3 4))    ; => (3 4 5 6)

   In C++, the mapcar equivalent would invoke a prespecified function on
   the "closure object" to apply each value of the num-list to the
   "computation".  It would have to "know" what type of closure it is and
   what the "name" of that function is.

   In Lisp, the environment takes care of creating these sorts of "objects"
   and of throwing them away when they aren't needed anymore.  Therefore,
   there is less "overhead" the user needs to worry about.  Also, the
   function being passed is any function, closure or otherwise.  There is
   no carefully orchestrated mating dance you have to go through to use
   them.

   It isn't that they aren't substitutable for each other... it is the
   required drudgery involved.  On the flip side, closures make for awkward
   classes, too.
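
   To make that last point concrete, here is a sketch (made-up names, for
   illustration only) of a closure pressed into service as a one-off
   "object" by dispatching on a message argument; it works, but it is
   clumsier than a real class:

   (defun make-counter (n)
     (lambda (msg)
       (ecase msg
         (:inc   (incf n))
         (:value n))))

   (let ((c (make-counter 2)))
     (funcall c :inc)      ; => 3
     (funcall c :value))   ; => 3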

---

Lyman
From: Lieven Marchand
Subject: Re: Newbie questions
Date: 
Message-ID: <m3lnf6yr93.fsf@localhost.localdomain>
·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:

> >* dynamic development environment that allows for incremental change 
> >
> 
> I guess it saves a lot on building tools, testing programs and user
> interfaces for them when you can just apply your functions manually,
> I'm jealous.
> 

Not only that.  If you've made an error and try to do something that
doesn't make sense, like adding a number to a symbol or taking the car
of an atom, your program doesn't happily do it only to dump core some
five functions deeper.  Instead, you are put into a nice debugger where
you can inspect the whole call stack, the arguments of your functions,
etc., and after you've made the necessary adjustments you can even
continue the computation or restart from higher up the call stack.
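
As a sketch of that interaction (the function name is made up):

  (defun parse-age (string)
    (restart-case (parse-integer string)
      (use-value (v)
        :report "Supply an age to use instead."
        v)))

Calling (parse-age "forty-two") lands you in the debugger with the full
backtrace and a USE-VALUE restart on offer; choose it, enter 42, and the
computation carries on from that point as if nothing had happened.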

-- 
Lieven Marchand <···@bewoner.dma.be>
If there are aliens, they play Go. -- Lasker
From: Raymond Wiker
Subject: Re: Newbie questions
Date: 
Message-ID: <87g15ee8ss.fsf@foobar.orion.no>
·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:

> I still don't see the difference between closures and objects.  A
> closure is just a function or set of functions with some trapped
> variables, right?

        The Devil is in the details... In Common Lisp you can easily
define a closure that contains(?) two or more functions that access
a set of values. To do the same in C++, you have to

        1) define a struct/class to hold the closure data
        2) define classes for each of the functions in the closure
        3) instantiate an object for the closure data (from [1])
        4) instantiate objects for each of the classes from [2]. Each
           of these objects needs a reference to the closure object,
           passed via the constructor.

        Note that the classes defined in [1] and [2] are named
classes; in Common Lisp, there is no need to introduce classes for the 
functions, and the closure itself is anonymous.

        Compare the following:
        (defun f (a)
          (cons
            (lambda () (setq a (1+ a)))
            (lambda () (setq a (+ a a)))))

        (setq g (f 2))
        (funcall (car g))    ; => 3
        (funcall (car g))    ; => 4
        (funcall (cdr g))    ; => 8
        (funcall (cdr g))    ; => 16
        (setq h (f 2))       ; a second, independent closure over a fresh A
        (funcall (car h))    ; => 3
        (funcall (car h))    ; => 4
        (funcall (cdr h))    ; => 8
        (funcall (cdr h))    ; => 16

        with: 
#include <iostream>
using namespace std;

class F {
private:
  int a;

public:
  class Car {
  public:
    Car(F& f) : a(f.a) {}
    int operator()() { return ++a; }
  private:
    int& a;
  };

  class Cdr {
  public:
    Cdr(F& f) : a(f.a) {}
    int operator()() { return a *= 2; }
  private:
    int& a;
  };

  F(int aa) : a(aa), car(*this), cdr(*this) {}

  Car car;
  Cdr cdr;
};

int main()
{
  F g(2);
  F h(2);
  cout << g.car() << endl;
  cout << g.car() << endl;
  cout << g.cdr() << endl;
  cout << g.cdr() << endl;

  cout << h.car() << endl;
  cout << h.car() << endl;
  cout << h.cdr() << endl;
  cout << h.cdr() << endl;
  return 0;
}


-- 
Raymond Wiker, Orion Systems AS
+47 370 61150
From: Joshua Scholar
Subject: Re: Newbie questions
Date: 
Message-ID: <372e265f.2137369@news.select.net>
On 03 May 1999 12:39:47 +0200, Raymond Wiker <·······@orion.no> wrote:

>·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:
>
>> I still don't see the difference between closures and objects.  A
>> closure is just a function or set of functions with some trapped
>> variables, right?
>
>        The Devil is in the details... In Common Lisp you can easily
>define a closure that contains(?) two or more functions that access
>a set of values. To do the same in C++, you have to
>
>        1) define a struct/class to hold the closure data
>        2) define classes for each of the functions in the closure
>        3) instantiate an object for the closure data (from [1])
>        4) instantiate objects for each of the classes from [2]. Each
>           of these objects need a reference to the closure object,
>           passed via the constructor.
>
>        Note that the classes defined in [1] and [2] are named
>classes; in Common Lisp, there is no need to introduce classes for the 
>functions, and the closure itself is anonymous.
>
>        Compare the following:
>        (defun f(a)
>               (cons
>                  (lambda () (setq a (1+ a)))
>                  (lambda () (setq a (+ a a)))))
>
>        (setq g (f 2))
>        (funcall (car g))
>        (funcall (car g))
>        (funcall (cdr g))
>        (funcall (cdr g))
>        (setq h (f 2))
>        (funcall (car h))
>        (funcall (car h))
>        (funcall (cdr h))
>        (funcall (cdr h))
>

You're thinking in LISP.

There are lots of ways of doing this stuff in C++, none of which take
the 30 lines to define that yours did.  When we really want all the
generality of a closure (which we rarely do), we use a template class
that combines any member-function pointer with an object pointer to make
a closure that contains both, and (since all such templates are
derived from a single root) they can be passed around like anonymous
functions.

In any case, no matter what you do, the simple mapping from a closure
is: one object = one activation record.

#include <iostream>
using namespace std;

// IntClosure / IntClosureBase are defined further down in this post.

class f
{
    int a;
public:
    f(int _a) : a(_a) {}
    int operator++() { return ++a; }
    int Times2() { return a *= 2; }
};

int main()
{
  f g(2);

  // when you really want all the generality of a closure

  IntClosure<f> car(&g, &f::operator++);
  IntClosure<f> cdr(&g, &f::Times2);

  cout << car() << endl;
  cout << cdr() << endl;

  // Note that the type isn't stuck being specific to this template
  // instantiation

  IntClosureBase *anyClosure = &car;

  cout << (*anyClosure)() << endl;

  // but most of the time you just want functions
  cout << ++g << endl;
  cout << ++g << endl;
  cout << g.Times2() << endl;
  cout << g.Times2() << endl;

  // sometimes just having the member function pointers is enough
  int (f::*fn)();

  fn = &f::operator++;
  cout << (g.*fn)() << endl;
  fn = &f::Times2;
  cout << (g.*fn)() << endl;
  return 0;
}

Off the top of my head, the template for closures of this type
signature would look like this:

class IntClosureBase
{
public:
  virtual int operator()() = 0;
};

template <class T> class IntClosure : public IntClosureBase
{
    T *object;
    int (T::*function)();
public:
  IntClosure(T* _o, int (T::*_f)()) : object(_o), function(_f) {}

  virtual int operator()() { return (object->*function)(); }
};

We don't really call our template "Closures" but I'm being LISP
friendly.

Josh Scholar
From: Chris Double
Subject: Re: Newbie questions
Date: 
Message-ID: <wkaevltkg2.fsf@cnd.co.nz>
·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:

> There are lots of ways of doing this stuff in C++, none of which take
> the 30 line to define that you took.  
>
> [... example snipped... ]

One problem with your approach is returning the closure objects you
created:

IntClosure<f> afunc()
{
   f g(2);

   IntClosure<f> car(&g, &f::operator++);
   return car;
}

This would result in a dangling reference to the destroyed 'g' object,
as 'g' would have gone out of scope. 

I miss closures and lambda functions the most when using some of the
STL algorithms. For example:

class Person
{
public:
  string getName();
};

vector<Person*> people = ...;

// Find a person with the name 'John':
vector<Person*>::iterator it = find_if(
  people.begin(),
  people.end(),
  compose(
    bind2nd(equal_to<string>(), string("John")),
    mem_fun(&Person::getName)));

I would much prefer writing an anonymous function inline:

vector<Person*>::iterator it = find_if(
  people.begin(),
  people.end(),
  method(Person* p) { return p->getName() == "John"; } );

Excuse the ugly syntax.
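
For comparison, the same search in Common Lisp is just an inline
anonymous function (a sketch, assuming a list PEOPLE and a reader
PERSON-NAME, neither of which appears in the C++ above):

(find-if (lambda (p) (string= (person-name p) "John")) people)

;; or, using the :KEY argument:
(find "John" people :key #'person-name :test #'string=)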

The main problem with closure simulators in C++ is the memory
management and providing access to the local variables without having
to create a class which copies and stores them.

Chris.
From: Joshua Scholar
Subject: Re: Newbie questions
Date: 
Message-ID: <372ea615.34836343@news.select.net>
On 04 May 1999 18:31:25 +1200, Chris Double <·····@cnd.co.nz> wrote:

>·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:
>
>> There are lots of ways of doing this stuff in C++, none of which take
>> the 30 line to define that you took.  
>>
>> [... example snipped... ]
>
>One problem with your approach is returning the closure objects you
>created:
>
>IntClosure< afunc()
>{
>   f g(2);
> 
> 
>   IntClosure<f> car(g, &f::operator ++);
>   return car;
>}
>
>This would result in a dangling reference to the destructed 'g' object
>- as 'g' would have gone out of scope. 
>

I know that, I was just simplifying the model.  Most problems seem to
have fairly simple lifetimes, but I have templates that help me with
the ones that don't.  My most common use for counted pointers, by the
way, is sharing an object between threads (I made the counting
atomic).

I have yet to have a problem where I had to put all of these
templates together, but if you wanted all of the flexibility of GC
here, it would be:

class IntClosureBase : public Counted
{
public:
  virtual ~IntClosureBase() {}   // so CountedPtr can delete through the base
  virtual int operator()() = 0;
};

template <class T> class IntClosure : public IntClosureBase
{
    CountedPtr<T> object;
    int (T::*function)();
public:
  IntClosure(CountedPtr<T> _o, int (T::*_f)()) : object(_o), function(_f) {}

  virtual int operator()() { return ((*object).*function)(); }
};

class f : public Counted
{
    int a;
public:
    f(int _a) : a(_a) {}
    int operator++() { return ++a; }
};

CountedPtr<IntClosureBase> foo()
{
    CountedPtr<IntClosureBase> car =
        (IntClosureBase *)new IntClosure<f>(new f(2),
                                            &f::operator++);
    cout << (*car)() << endl;
    return car;
}


>...  compose(
>    bind2nd(equal_to<string>(), string("John")).
>    mem_fun(&Person::getName)) ..

I've never seen an expression like this.  Perhaps I misunderstand, but
it looks to me like you're fighting the language in order to get
anonymous functions.  FORGET ANONYMITY!  You're sacrificing readable
syntax just in order to not name something - that's a rip-off.  So the
function needs a name, so what?

Josh Scholar


P.S. In case it helps, here are the counted pointer classes and
templates (specialized for Windows multithreading):

#include <windows.h>   // LONG, InterlockedIncrement, InterlockedDecrement
#include <assert.h>

struct Counted
{
    LONG refCount;
    Counted() : refCount(0) {}
};

template <class T> class CountedPtr
{

  protected:
      T *letter;

  public:
      void decRefCount()
      {
          if (letter)
          {
              if ( InterlockedDecrement(&letter->refCount) <= 0)
              {
                  delete letter;
                  letter = NULL;
              }
          }
      }

      void incRefCount()
      {
          if (letter)
          {
              InterlockedIncrement(&letter->refCount);
          }
      }

      CountedPtr():letter(NULL){}
      CountedPtr(const CountedPtr<T> &w);
      CountedPtr(const T *w);

      operator T *(){ return letter; }
      operator const T *()const{ return letter; }

      CountedPtr<T> & operator=(const T *w);        
      CountedPtr<T> & operator=(const CountedPtr<T> &w);        

      operator bool() const
      { return letter!=NULL; }

      int operator !() const
      { return letter==NULL; }

      int operator==(const CountedPtr&r)
      { return letter == r.letter; }

      const T & operator*() const
      { return *letter; }

      T & operator*()
      { return *letter; }

      T * operator->()
      { assert(letter); return letter; }
      T * operator->()const
      { assert(letter); return letter; }

      ~CountedPtr();
};



template <class T>
CountedPtr<T>& CountedPtr<T>::operator=(const T *w)
{
    if (w) InterlockedIncrement(& (const_cast<T *>(w)->refCount));
    decRefCount();
    letter = const_cast<T *>(w);
    return *this;
}

template <class T>
CountedPtr<T>& CountedPtr<T>::operator=(const CountedPtr<T> &w)
{
    return *this = (const T *)w;
}

template <class T>
CountedPtr<T>::CountedPtr(const CountedPtr<T> &w)
  : letter(const_cast<T *>(w.letter))
{  incRefCount(); }

template <class T>
CountedPtr<T>::CountedPtr(const T *w)
  : letter((T *)w)
{  incRefCount(); }

template <class T>
CountedPtr<T>::~CountedPtr()
{  decRefCount(); }
From: Chris Double
Subject: Re: Newbie questions
Date: 
Message-ID: <wkvhe8slzs.fsf@cnd.co.nz>
·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:

> >...  compose(
> >    bind2nd(equal_to<string>(), string("John")).
> >    mem_fun(&Person::getName)) ..
> 
> I've never seen an expression like this.  Perhaps I misunderstand, but
> it looks to me like you're fighting the language in order to get
> anonymous functions, FORGET ANONYMITY! 

I don't think using standard library functions and idioms could be
called 'fighting the language'. With the exception of 'compose' of
course. 'compose' never made it into the standard but was in the
original STL and is provided by the SGI STL libraries. 'bind2nd',
'mem_fun' and 'equal_to' are in the standard and the sort of code
above is a reasonably common idiom in standard C++ - certainly not
'fighting the language'.

> You're sacrificing readable syntax just in order to not name
> something - that's a rip off.  So the function needs a name, so
> what?

The amount of overhead in writing a function or class member to
perform small operations can be larger than you realise. Every time I
want to do something like the above I'd have to write a function, give
it a name, and pollute some namespace somewhere to do it. It also removes
locality of code. The function to do the comparison for the specific
string won't be located near where it is used. If I want to add one to
every item in a vector I don't want to write a new function just to do
this. I want to be able to 'inline' some code to make it easier to
read:

vector<int> source = [...];
vector<int> destination;

transform(
  source.begin(),
  source.end(),
  back_inserter(destination),
  bind1st(plus<int>(), 1));

I could write an iterative loop to do the above but there is more
likelihood of error (getting loop termination incorrect, etc.). Better
to reuse the standard library components.

for(vector<int>::iterator it = source.begin();
    it != source.end();
    ++it)
{
  destination.push_back(*it + 1);
}
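
For what it's worth, the Lisp counterpart is short enough that the
question of naming a helper never comes up (a sketch, assuming SOURCE
is a list):

(mapcar #'1+ source)

;; or, keeping the result a vector:
(map 'vector #'1+ source)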

But getting back to the main subject. You mentioned in a previous post
that you didn't see a use for closures. Yet you have the counted
pointer and closure classes that you've created to try and emulate
some of the functionality. This sort of thing is much easier to use
and much more general if it is provided by the language itself. In
Dylan the transform code above is:

destination := map(curry(\+, 1), source);

Chris.
From: Joshua Scholar
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <37306705.3795745@news.select.net>
[Followup to comp.lang.lisp]


On 05 May 1999 06:55:35 +1200, Chris Double <·····@cnd.co.nz> wrote:
>...
>But getting back to the main subject. You mentioned in a previous post
>that you didn't see a use for closures. Yet you have the counted
>pointer and closure classes that you've created to try and emulate
>some of the functionality. This sort of thing is much easier to use
>and much more general if it is provided by the language itself. In
>Dylan the transform code above is:
>
>destination := map(curry(\+, 1), source);
>
>Chris.

You're right.  I first played with closures in languages that didn't
have object libraries (versions of CLisp and Scheme), and I got the
impression that people mostly used closures in places where a regular
object would do just fine.  Places where instead of passing a few
closures, you could pass a whole object and the code knows exactly
what object functions it wants to call.  Since there is a lot of
overlap, and closures take fewer keystrokes and less namespace, people
probably still use closures mostly in places where simple objects
would do.

Of course you can simulate any subset of the semantics of a closure
with objects, but since people started arguing that supporting a lot
of them took too many keystrokes I ended up trying to "prove" that I
could use templates to define objects that have ALL of the semantics
of a closure, including the anonymity of the functions and the
lifetime issue of a shared activation record.  

This wasn't my original point - I think that the exact combination of
semantics of a closure is rarely needed.

And for what little it's worth, the templates we use at work are not
called "Closures" I just renamed them in order to draw a parallel.
And then I felt a little sheepish that I had said "What are closures
for?" and then simply shown that I could create closures in C++. I
know you don't quite agree because you object to "polluting" the
namespace.  But my point of view on that is that what is named and
what isn't is always very arbitrary, depending on your approach and
style, and if it also depends on what language you are using, so be
it. The cost of a name is basically nil; I'm not going to make a big
deal out of it.

Josh Scholar
From: Steve Gonedes
Subject: Re: Newbie questions
Date: 
Message-ID: <m24slsasw1.fsf@KludgeUnix.com>
Chris Double <·····@cnd.co.nz> writes:
 
< I miss closures and lambda functions the most when using some of the
< stl algorithms. For example:
< 
< class Person
< {
<   string getName();
< };
< 
< vector<Person*> people = ...;
< 
< // Find a person with the name 'John':
< vector<Person*>::iterator it = find_if(
<   people.begin(),
<   people.end(),
<   compose(
<     bind2nd(equal_to<string>(), string("John")).
<     mem_fun(&Person::getName)));
< 
< I would much prefer writing an anonymous function inline:
< 
< vector<Person*>::iterator it = find_if(
<   people.begin(),
<   people.end(),
<   method(Person* p) { return p->getName() == "John"; } );
< 
< Excuse the ugly syntax.
< 
< The main problem with closure simulators in C++ is the memory
< management and providing access to the local variables without having
< to create a class which copies and stores them.
< 
< Chris.

You can get local functions with gcc. Can't imagine what I would do
without them.

void
build_families (List *fontlist)
{
  HashTable fonthash;
  void *hash_font (const char *file) {
     return puthash
      ((*function_table [font_file_type (file)].function) (file),
        fonthash);
  }
  InitializeHashTable (fonthash);
  map_list (fontlist, (void (*) (void *))hash_font);
}

I think that g++ allows this as well.
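
In Common Lisp, local functions that close over the enclosing
function's variables are standard; a rough sketch of the same shape
(the names here are made up, not taken from the C code above):

(defun build-table (items)
  (let ((table (make-hash-table :test #'equal)))
    (flet ((add-item (item)
             (setf (gethash item table) t)))
      (mapc #'add-item items))
    table))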
From: Raymond Wiker
Subject: Re: Newbie questions
Date: 
Message-ID: <87d80hdy1t.fsf@foobar.orion.no>
·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:

> On 03 May 1999 12:39:47 +0200, Raymond Wiker <·······@orion.no> wrote:
> >        Compare the following:
> >        (defun f(a)
> >               (cons
> >                  (lambda () (setq a (1+ a)))
> >                  (lambda () (setq a (+ a a)))))

        [ ... ]

> There are lots of ways of doing this stuff in C++, none of which take
> the 30 line to define that you took.  When we really want all the
> generality of a closure (which we rarely do) we use a template class
> that will combine any function pointer with an object pointer to make
> a closure that contains both and, (since all such templates are
> derived from a single root), they can be passed around like anonymous
> functions.
> 
> In any case, no matter what you do, the simple mapping from a closure
> is one object = one activation record.  
> 
> class f
> {
>     int a;
> public:
>     f(int _a):a(_a){}
>     int operator++() { return ++a; }
>     int Times2() { return a*=2; }
> };
> 
> void main()
> {
>   f g(2);
> 
> //when you really want all the generality of a closure
> 
>   IntClosure<f> car(g, &f::operator ++);
>   IntClosure<f> cdr(g, &f::Times2);
        
        As somebody else already noted, this may give you a dangling
reference if you allow g to go out of scope. My version had the same
problem, so I won't stress this (although it *obviously* means that
in the general case, you have to build some sort of memory management
into your closure classes.)

> 
>   cout << car() << endl;
>   cout << cdr() << endl;
> 
> //Note that the type isn't stuck being specific to this template 
> //instantiation
> 
> IntClosureBase *anyClosure = &car;
> 
>   cout << (*anyClosure)() << endl;
> 
> //but most of the time you just want functions
>   cout << ++g << endl;
>   cout << ++g << endl;
>   cout << g.Times2() << endl;
>   cout << g.Times2() << endl;
> 
> //sometimes just having the function pointers is enough
>   int (f::*fn)();
> 
>   fn = &f::operator ++();
>   cout << (g.*fn)() << endl;
>   fn = &f::Times2();
>   cout << (g.*fn)() << endl;  
> }
> 
> Off the top of my head, the template for closures of this type
> signature would look like this:
> 
> class IntClosureBase
> {
> public:
>   virtual int operator() = 0;
> };
> 
> template <class T> IntClosure : public IntClosureBase
> {
>     T *object;
>     int (T::*function)();   
> public:
>   IntClosure(T* _o,(T::*_f)()):object(_o),function(_f){}
> 
>   virtual int operator() { return (object->*function)(); }
> };

        Hum. If you count parentheses in the IntClosure* classes and
class f, you actually get a larger number than the 11 in my (borrowed) 
Lisp example. There goes another argument against Lisp :-)

        Also keep in mind that IntClosure is not sufficient; you have
to add similar classes if you want closures with other/different data
than just a single int, and also if you want "enclosed" functions with 
other operation signatures.

-- 
Raymond Wiker, Orion Systems AS
+47 370 61150
From: Joshua Scholar
Subject: Re: Newbie questions
Date: 
Message-ID: <372fb91b.39706364@news.select.net>
On 04 May 1999 10:44:14 +0200, Raymond Wiker <·······@orion.no> wrote:


>        Also keep in mind that IntClosure is not sufficient; you have
>to add similar classes if you want closures with other/different data
>than just a single int, and also if you want "enclosed" functions with 
>other operation signatures.

Which is an advantage if you believe in type checking!!!!

I WANT the compiler to make sure that every function is only used with
compatible parameters!

Joshua Scholar
From: Raymond Wiker
Subject: Re: Newbie questions
Date: 
Message-ID: <87aevldw62.fsf@foobar.orion.no>
·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:

> On 04 May 1999 10:44:14 +0200, Raymond Wiker <·······@orion.no> wrote:
> 
> 
> >        Also keep in mind that IntClosure is not sufficient; you have
> >to add similar classes if you want closures with other/different data
> >than just a single int, and also if you want "enclosed" functions with 
> >other operation signatures.
> 
> Which is an advantage if you believe in type checking!!!!
> 
> I WANT the compiler to make sure that every function is only used with
> compatable parameters!

        This is probably not very helpful, but anyway... Simple type
compatibility is the only thing C++ can check statically - for example,
you can have a function that expects as an argument a value from {1, 2,
5}. There's no way you can check this statically, which means that you
ultimately have to resort to run-time checking, which (as with
closures) is more painful to do in C++ than in Common Lisp.
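
        (In Common Lisp the {1, 2, 5} constraint is itself expressible
as a type, so the run-time check is a one-liner; a sketch:

        (deftype one-two-or-five () '(member 1 2 5))

        (defun frob (x)
          (check-type x one-two-or-five)
          (* x 10))

        CHECK-TYPE even establishes a STORE-VALUE restart if the check
fails.)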

        My point? I do not think that the advantages of a dynamic type 
system are worth trading in for a flawed static type system.
        
-- 
Raymond Wiker, Orion Systems AS
+47 370 61150
From: Bagheera, the jungle scout
Subject: Re: Newbie questions
Date: 
Message-ID: <7gnhjt$p9q$1@nnrp1.dejanews.com>
In article <··············@foobar.orion.no>,
  Raymond Wiker <·······@orion.no> wrote:
> ·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:
> > On 04 May 1999 10:44:14 +0200, Raymond Wiker <·······@orion.no> wrote:
> > >        Also keep in mind that IntClosure is not sufficient; you have
> > >to add similar classes if you want closures with other/different data
> > >than just a single int, and also if you want "enclosed" functions with
> > >other operation signatures.
> > Which is an advantage if you believe in type checking!!!!
> > I WANT the compiler to make sure that every function is only used with
> > compatable parameters!
>         This is probably not very helpful, but anyway... Simple type
> compatibility is the only C++ can check statically - for example, you
> can have a function that expects as an argument a value from {1, 2,
> 5}. There's no way you can check this statically, which means that you
> ultimately have to resort to run-time checking, which (as with
> closures) is more painful to do in C++ than in Common Lisp.

I think what Josh is getting at is that C++ prevents you
from stuffing an 8-byte value into a 2-byte parameter.  If you
don't have type checking for this sort of thing, you can easily
overflow your calling stack, which is a bad no-no. Basically it
is a matter of the program protecting itself from the programmer.

I'm sure there are lots of ways to shoot yourself in the foot with
Lisp.  C++ is a concerted effort to reduce that possibility of
self-destruction.  Sure, I can get around a lot of the constructs
that offer protection in C++, but a properly versed programmer is
not likely to "hang themselves" with the rope the language gives
them.

True, Lisp gives you the option of run-time program correction...
but sometimes requirements don't allow that comfort.

My whole take on it?

"Tastes Great....Less Filling"

--
Bagherra <·······@frenzy.com>
http://www.frenzy.com/~jaebear
  "What use is it to have a leader who walks on water
       if you don't follow in their footsteps?"

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    
From: Duane Rettig
Subject: Re: Newbie questions
Date: 
Message-ID: <4ogk0a72c.fsf@beta.franz.com>
Bagheera, the jungle scout <········@my-dejanews.com> writes:

> In article <··············@foobar.orion.no>,
>   Raymond Wiker <·······@orion.no> wrote:
> > ·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:
> > > On 04 May 1999 10:44:14 +0200, Raymond Wiker <·······@orion.no> wrote:
> > > >        Also keep in mind that IntClosure is not sufficient; you have
> > > >to add similar classes if you want closures with other/different data
> > > >than just a single int, and also if you want "enclosed" functions with
> > > >other operation signatures.
> > > Which is an advantage if you believe in type checking!!!!
> > > I WANT the compiler to make sure that every function is only used with
> > > compatable parameters!
> >         This is probably not very helpful, but anyway... Simple type
> > compatibility is the only C++ can check statically - for example, you
> > can have a function that expects as an argument a value from {1, 2,
> > 5}. There's no way you can check this statically, which means that you
> > ultimately have to resort to run-time checking, which (as with
> > closures) is more painful to do in C++ than in Common Lisp.
> 
> I think what josh is getting at is that in C++, it prevents you
> from stuffing an 8byte value into a two byte parameter.  If you
> don't have type checking for this sort of thing, you can easily
> overflow your calling stack, which is a bad no-no. Basically it
> is a matter of the program protecting itself from the programmer.

This sounds more like an anti-Forth argument, rather than an anti-lisp
argument.  There is no such danger of overflowing the call-stack in
lisp due to argument mismatches; there are far more efficient ways
to accomplish stack overflows ...

> I'm sure there are lots of ways to shoot yourself in the foot with
> Lisp.  C++ is a concerted effort to reduce that possibility of
> self-destruction.  Sure I can get around alot of the constructs
> that offer protection in C++, but a properly versed programmer is
> not likely to "hang themselves" with the rope the language gives
> them.

Interesting take.  I personally find lisp very robust.

> True, Lisp gives you the option of run-time program correction...

Not only run-time correction, but _programmed_ run-time correction...
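
A sketch of what that looks like (the function here is made up):

  (defun read-age (string)
    (restart-case (parse-integer string)
      (use-value (v) v)))

  (handler-bind ((parse-error (lambda (c)
                                (declare (ignore c))
                                (invoke-restart 'use-value 0))))
    (read-age "unknown"))    ; => 0, no debugger, no human in the loop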

> but sometimes requirements don't allow that comfort.

What times are those?

> My whole take on it?
> 
> "Tastes Great....Less Filling"

And my take:  If you shoot yourself in the foot in lisp, your foot
hurts.  If you shoot yourself in the foot in C++, you die.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Joshua Scholar
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <37326dd3.5537740@news.select.net>
[Followup to comp.lang.lisp]

On 04 May 1999 13:55:23 -0700, Duane Rettig <·····@franz.com> wrote:

>Bagheera, the jungle scout <········@my-dejanews.com> writes:
>
>> In article <··············@foobar.orion.no>,
>>   Raymond Wiker <·······@orion.no> wrote:
>> > ·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:
>> > > On 04 May 1999 10:44:14 +0200, Raymond Wiker <·······@orion.no> wrote:
>> > > >        Also keep in mind that IntClosure is not sufficient; you have
>> > > >to add similar classes if you want closures with other/different data
>> > > >than just a single int, and also if you want "enclosed" functions with
>> > > >other operation signatures.
>> > > Which is an advantage if you believe in type checking!!!!
>> > > I WANT the compiler to make sure that every function is only used with
>> > > compatable parameters!
>> >         This is probably not very helpful, but anyway... Simple type
>> > compatibility is the only C++ can check statically - for example, you
>> > can have a function that expects as an argument a value from {1, 2,
>> > 5}. There's no way you can check this statically, which means that you
>> > ultimately have to resort to run-time checking, which (as with
>> > closures) is more painful to do in C++ than in Common Lisp.
>> 
>> I think what josh is getting at is that in C++, it prevents you
>> from stuffing an 8byte value into a two byte parameter.  If you
>> don't have type checking for this sort of thing, you can easily
>> overflow your calling stack, which is a bad no-no. Basically it
>> is a matter of the program protecting itself from the programmer.
>
>This sounds more like an anti-Forth argument, rather than an anti-lisp
>argument.  There is no such danger of overflowing the call-stack in
>lisp due to argument mismatches; there are far more efficient ways
>to accomplish stack overflows ...
>
>> I'm sure there are lots of ways to shoot yourself in the foot with
>> Lisp.  C++ is a concerted effort to reduce that possibility of
>> self-destruction.  Sure I can get around alot of the constructs
>> that offer protection in C++, but a properly versed programmer is
>> not likely to "hang themselves" with the rope the language gives
>> them.
>
>Interesting take.  I personally find lisp very robust.
>
>> True, Lisp gives you the option of run-time program correction...
>
>Not only run-time correction, but _programmed_ run-time correction...
>
>> but sometimes requirements don't allow that comfort.
>
>What times are those?
>


Well, Bagheera didn't state the problem quite right.  The overall point
is that type checking saves you from tons and tons of late-night typos
and logic errors.

And the time is when you have to deliver working code in the morning
and there isn't enough budget for more than a week more of testing!
So any error that gets into the code is one too many.  It happens all
the time in the game business.

<evil voice> WELCOME TO MY WORLD. HA HA HA HA HA! </evil voice>


Passing the wrong parameter, parameters in the wrong order, the wrong
subfield etc. are common typos and often caught by the compiler -
especially if you design your class interfaces to catch as much as
possible.  In code that rarely runs or isn't expected to run under
normal conditions, this sort of correctness checking is very
important.

Joshua Scholar
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfwso9cxt0s.fsf@world.std.com>
·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:

> Well Bagheera didn't state the problem quite right.  The overall point
> is that type checking saves you from tons and tons of late night typos
> and logic errors.

Nothing in CL forbids you from type-declaring every variable.  Knock
yourself out.  Don't forget to send bug reports when the compiler
fails to use them well or flag problems, so your vendor will know
you care.
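
For instance (a sketch; the function is made up):

  (defun scale (x factor)
    (declare (type double-float x factor)
             (optimize (speed 3) (safety 1)))
    (* x factor))

How much checking or optimization a compiler does with those
declarations is up to the implementation.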

But the language itself already supports this.  It's simply up to the 
market to decide if this is what stands between it and success. I doubt
it is, but you're welcome to make the case otherwise.

> And the time is when you have to deliver working code in the morning
> and there isn't enough budget for more than a week more of testing!
> So any error that gets into the code is far too many.  It happens all
> the time in the game business.  

This is a pure engineering choice.  To the extent you want to make it,
you're right.  But there are many things in the world that you don't
know well enough at code-writing time and still have to code up.
If the very same compiler won't compile until the code is type-safe,
that code can't be delivered.  What is right depends on your need 
(based on your personal requirements, project requirements, and
customer needs).

I won't try to tell you not to write declarations if you won't try to
tell me that I must write them.  As to the question of how they are
used, the language is designed in a way that it doesn't affect the
semantics--it's purely a performance issue, to be sorted out by 
individual vendors as they see fit.  There are a lot of vendors.  Surely
one of them will care.  Or grab CMU CL and start from there.  It cared
a lot about declarations.

> <evil voice> WELCOME TO MY WORLD. HA HA HA HA HA! </evil voice>

> 
> Passing the wrong parameter, parameters in the wrong order, the wrong
> subfield etc. are common typos and often caught by the compiler -

Yes, already done.  But remember also that passing the wrong parameter
may not be fatal.  Error handlers may catch it.  The code may be going to
get updated before it's run.  The caller may be wrong or the callee.
I would not like to see any of these flexibilities removed.

> especially if you design your class interfaces to catch as much as
> possible.

Lisp is a language for being flexible.  That is why it has survived
all these years. I think it's fine for you to give up your flexibility
on a case by case basis, but please don't confuse your personal need
with a design theory.  None of the things you're saying are false but
neither are they "uniquely determined points of view".

> In code that rarely runs or isn't expected to run under
> normal conditions, this sort of correctness checking is very
> important.

You don't say what your point is.

This sounds like fodder for bug reports.  IMO, there is nothing wrong
with the language in this regard.  You're welcome to disagree, but
please be specific about what you'd like to see done instead so that
people (including myself) can evaluate whether your perfect world
infringes theirs.
From: Joachim Achtzehnter
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <ucvhe7l8td.fsf@soft.mercury.bc.ca>
Kent M Pitman <······@world.std.com> writes:
>
> ·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:
> 
> > Well Bagheera didn't state the problem quite right.  The overall
> > point is that type checking saves you from tons and tons of late
> > night typos and logic errors.
> 
> Nothing in CL forbids you from type-declaring every variable. Knock
> yourself out.  Don't forget to send bug reports when the compiler
> fails to use them well or flag problems, so your vendor will know
> you care.

It is true that one cannot fault the language for this. Nevertheless,
until vendors listen to and act on these 'bug reports' the problem
persists in practice. I agree strongly with Joshua that lack of static
type checking is one of the main disadvantages of (at least) the
commercial Common Lisp implementation I am familiar with.

> But the language itself already supports this.  It's simply up to
> the market to decide if this is what stands between it and
> success. I doubt it is, but you're welcome to make the case
> otherwise.

Well, I tend to disagree. Adding static type checking (optional of
course) would go a long way towards convincing experienced C++/Java
programmers to take another look at Lisp. Of course, I can be certain
only about my own opinion, others may disagree. I have heard a
representative of a Lisp vendor seriously argue that Lisp code must
look as simple as Java code to be competitive. IMHO, static type
checking is an order of magnitude more important than that. :-)

> I won't try to tell you not to write declarations if you won't try
> to tell me that I must write them.

Sure, there is no need to take away this flexibility.

> > In code that rarely runs or isn't expected to run under
> > normal conditions, this sort of correctness checking is very
> > important.
> 
> You don't say what your point is.

The point is probably this: A C++/Java compiler cannot catch all
errors, especially not design or logical errors, but at least it
catches most simple errors like typos, passing the wrong number of
arguments, passing a wrong argument, etc.  With existing Lisp
implementations many such errors are detected only at runtime even
when declarations are used. This is less problematic with mainline
code which is likely to be run by the developer anyway, but typos in
sections of the source code that are less frequently run have the
habit of crashing in the hands of a user, or the QA department if
you're lucky. Yes, you should test all your code, but the kind of bug
we're talking about is often introduced by changes that are so
'obvious' that many developers don't imagine a bug may have been
introduced.

Joachim

-- 
·······@kraut.bc.ca      (http://www.kraut.bc.ca)
·······@mercury.bc.ca    (http://www.mercury.bc.ca)
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfwg15bcks7.fsf@world.std.com>
Joachim Achtzehnter <·······@kraut.bc.ca> writes:

> > > Well Bagheera didn't state the problem quite right.  The overall
> > > point is that type checking saves you from tons and tons of late
> > > night typos and logic errors.
> > 
> > Nothing in CL forbids you from type-declaring every variable. Knock
> > yourself out.  Don't forget to send bug reports when the compiler
> > fails to use them well or flag problems, so your vendor will know
> > you care.
> 
> It is true that one cannot fault the language for this. Nevertheless,
> until vendors listen to and act on these 'bug reports' the problem
> persists in practise. I agree strongly with Joshua that lack of static
> type checking is one on the main disadvantages of (at least) the
> commercial Common Lisp implementation I am familiar with.

Then you should use the power of the marketplace.  If you cannot get
enough people together to say that this is more important than other
things, then perhaps it is not.  I mean this seriously.  It is very
easy for people to complain about things CL doesn't do.  I believe it
is helpful if you make a list of the things that are promised in the
next release of your favorite vendor and tell the vendor your decision
to purchase future copies is premised on their doing this thing
instead of something they've promised.  I think most people either say
this and then purchase anyway (meaning they weren't really serious) or
aren't willing to say what they'd give up (meaning they think the
system has infinite resources).

> > But the language itself already supports this.  It's simply up to
> > the market to decide if this is what stands between it and
> > success. I doubt it is, but you're welcome to make the case
> > otherwise.
> 
> Well, I tend to disagree. Adding static type checking (optional of
> course) would go a long way towards convincing experienced C++/Java
> programmers to take another look at Lisp.

I really honestly and not facetiously think that if you believe this with
certainty, then you might consider getting a Lisp and putting your own
money and reputation on the line over this.  Because the Lisp vendors are
doing this day in and day out, and they do not casually assign their
resources away from the thing you describe.  They might be wrong, of course,
but if you really believe it, there is a time-tested way to demonstrate
that.

Thought:

 Nothing in the world is "important" or "unimportant".
 Things are only "more important" and "less important" than other things.
 Say what you think is less important and you'll know what your leverage
 is to get this so-called more important thing.

> Of course, I can be certain
> only about my own opinion, others may disagree. I have heard a
> representative of a Lisp vendor seriously argue that Lisp code must
> look as simple as Java code to be competitive. IMHO, static type
> checking is an order of magnitude more important than that. :-)

So you mean you'd pay an order of magnitude more money?  Or what do
you mean by that?

> > I won't try to tell you not to write declarations if you won't try
> > to tell me that I must write them.
> 
> Sure, there is no need to take away this flexibility.
> 
> > > In code that rarely runs or isn't expected to run under
> > > normal conditions, this sort of correctness checking is very
> > > important.
> > 
> > You don't say what your point is.
> 
> The point is probably this: A C++/Java compiler cannot catch all
> errors, especially not design or logical errors, but at least it
> catches most simple errors like typos, passing the wrong number of
> arguments, passing a wrong argument, etc.  With existing Lisp
> implementations many such errors are detected only at runtime even
> when declarations are used. This is less problematic with mainline
> code which is likely to be run by the developer anyway, but typos in
> sections of the source code that are less frequently run have the
> habit of crashing in the hands of a user, or the QA department if
> you're lucky. Yes, you should test all your code, but the kind of bug
> we're talking about is often introduced by changes that are so
> 'obvious' that many developers don't imagine a bug may have been
> introduced.

Then maybe you should switch vendors.  Tell other vendors that's what
it would take to get you to switch.  But making this checking a
requirement of the language would just mean there are fewer
implementations (since it would define some implementations not to be
valid).  It would not suddenly create more resources to do these
changes.  Standards are about codifying de facto practice, not about
forcing practice.  The marketplace is where to make your feelings
known, either as a choosy purchaser or an aggressive vendor with a
vision.  If you are willing to be neither, then I suggest you might
ask yourself what it means to say you hold these beliefs strongly.  If
you think you are being one of those things (probably the choosy
customer) and you are finding it doesn't help, then perhaps you should
think about how to create a customer organization that better
represents your market segment, or think about whether there are
enough people for you really to be a market segment.

Markets are elusive.  Just because you can't prove one is there doesn't
mean it's not--I'm not saying it's an easy business.  But what I'm saying
is that wishing as you are doing doesn't make it so.  The game is all
about resources.  What resources vendors have and what products people give
financial resources in exchange for.  If you don't affect that, you will
affect nothing.  That is how markets work (and don't).

I apologize for the somewhat strained tone of this message.  I don't
mean to jump on you personally.  But this is kind of an exemplar of
something I hear from people from time to time that I never quite get
around to responding to and this was a good chance.
From: Joachim Achtzehnter
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <uciua7kvv4.fsf@soft.mercury.bc.ca>
Kent M Pitman <······@world.std.com> writes:
> 
> Then you should use the power of the marketplace.

Agreed. By posting my opinion about this matter I hope to influence my
current vendor who may have an interest in my continued interest in
their product, as well as my vendor's competitors who may want to get
my business.

> If you cannot get enough people together to say that this is more
> important than other things, then perhaps it is not.

Certainly. But this is no reason to refrain from expressing one's
opinion.

> I mean this seriously. It is very easy for people to complain about
> things CL doesn't do.

Well, sorry for coming across as somebody who is whining. I am
critical of all programming languages that I've come across so
far. This doesn't mean one can't get work done with them. But being
able to get work done with a programming language is also no reason
not to try improving it (or its implementations). I am currently using
Lisp and appreciate its advantages. Don't see why I should not express
my opinion about its weaknesses.

> > I have heard a representative of a Lisp vendor seriously argue
> > that Lisp code must look as simple as Java code to be
> > competitive. IMHO, static type checking is an order of magnitude
> > more important than that. :-)

> So you mean you'd pay an order of magnitude more money?  Or what do
> you mean by that?

The decision to use Lisp, and to pay money for an implementation and
how much, wasn't for me to make in this case. In any case I am talking
here about mindshare. How can Lisp grow its user base? This is not
simply a question of who is willing to pay how much for which
features. It is more a question of how many potential developers are
comfortable with the idea of developing in Lisp. The static typing
issue is important in my opinion, others may disagree.

> I apologize for the somewhat strained tone of this message.

No need to apologize, your response contained useful thoughts and was
quite civilized. But if I may say so, the same can't be said about
certain other posters in this thread. This is another point which can
easily put off some developers from considering a non-mainstream
language like Lisp. In C++ newsgroups one can have serious discussions
about possible language improvements, here one is attacked and called
an 'ignorant newbie' and worse names. Of course, one cannot blame the
language for this either... :-)

Joachim

-- 
·······@kraut.bc.ca      (http://www.kraut.bc.ca)
·······@mercury.bc.ca    (http://www.mercury.bc.ca)
From: ·························@thank.you
Subject: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to comp.lang.lisp])
Date: 
Message-ID: <3730c96a.978557761@news.earthlink.net>
On Wed, 05 May 1999 22:14:08 GMT, Joachim Achtzehnter
<·······@kraut.bc.ca> wrote:

>here about mindshare. How can Lisp grow its user base? This is not
>simply a question of who is willing to pay how much for which
>features. It is more a question of how many potential developers are
>comfortable with the idea of developing in Lisp. The static typing
>issue is important in my opinion, others may disagree.

I'm one of those who disagree.  I've talked to a lot of
people about Lisp, and about the possibility of using
it in place of languages such as C++.  None of them
ever mentioned static type checking to me as one of
their reasons for rejecting Lisp.  The overwhelmingly
most common reason is the perception that Lisp is a
special purpose language for academic projects where
efficiency doesn't matter.
From: Joshua Scholar
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to comp.lang.lisp])
Date: 
Message-ID: <3730dd05.8761061@news.select.net>
On Wed, 05 May 1999 23:36:48 GMT, ·························@thank.you
wrote:

>On Wed, 05 May 1999 22:14:08 GMT, Joachim Achtzehnter
><·······@kraut.bc.ca> wrote:
>
>>here about mindshare. How can Lisp grow its user base? This is not
>>simply a question of who is willing to pay how much for which
>>features. It is more a question of how many potential developers are
>>comfortable with the idea of developing in Lisp. The static typing
>>issue is important in my opinion, others may disagree.
>
>I'm one of those who disagree.  I've talked to a lot of
>people about Lisp, and about the possibility of using
>it in place of languages such as C++.  None of them
>ever mentioned static type checking to me as one of
>their reasons for rejecting Lisp.  The overwhelmingly
>most common reason is the perception that Lisp is a
>special purpose language for academic projects where
>efficiency doesn't matter.
>

This is like the old parable about measuring the Emperor's nose.  No
one has ever seen the Emperor, so any one person's opinion about the
length of his nose has no significance.  The joke is that you try to gain
significance by averaging the answers of many people.

I have actual reasons for not being able to use Lisp, and you're saying
those reasons are not important because the average programmer (whose
ignorance of what compilers can do is apparently greater than mine)
gives different reasons.

So I'd say a better question than "why don't people who don't know
anything about modern Lisp systems use them?" would be "what do Lisp
systems need in order to be useful to everyone who does know about
them?"

There is nothing wrong with a Lisp being unsuited to many companies,
except that the language could change so that it covers all needs.

Joshua Scholar
From: Gavin E. Gleason
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to comp.lang.lisp])
Date: 
Message-ID: <871zgt3ita.fsf@hasdrubal.nmia.com>
CMUCL allows you to specify types, and I imagine that the commercial 
compilers do as well.  Just because it is not specified in ANSI 
does not mean that it does not exist in most implementations... but 
it would be nice if it were specified.

	Gavin E. Gleason
From: Erik Naggum
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to comp.lang.lisp])
Date: 
Message-ID: <3135027926977088@naggum.no>
* ········@unm.edu (Gavin E. Gleason)
| CMUCL alows you to specify types, and I imagine that the commercial
| compilers do as well.  Just because it is not specified in ANSI does not
| mean that it does not exist in most implementations... but it would be
| nice if it were specified.

  could you give some examples of the types that CMUCL allows that are not
  in the ANSI standard?

#:Erik
From: Kent M Pitman
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to comp.lang.lisp])
Date: 
Message-ID: <sfwso994hpe.fsf@world.std.com>
Erik Naggum <····@naggum.no> writes:

> * ········@unm.edu (Gavin E. Gleason)
> | CMUCL alows you to specify types, and I imagine that the commercial
> | compilers do as well.  Just because it is not specified in ANSI does not
> | mean that it does not exist in most implementations... but it would be
> | nice if it were specified.
> 
>   could you give some examples of the types that CMUCL allows that are not
>   the ANSI standard?

I haven't used CMUCL, but from what I've heard about it, I'm guessing
he means it hyperoptimizes what type info you give.  My impression is
that most compilers don't go to the same intense effort, but could.
The cool thing about CMUCL is supposed to be exactly the fact that its
many interesting optimizations didn't require language changes to do
in a conforming implementation.
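
For instance (a sketch only -- I haven't used CMUCL, and the function
here is made up), the declarations below are plain ANSI Common Lisp;
what CMUCL is reputed to do is squeeze far more out of them than most
compilers bother to, including compile-time type warnings:

  (defun dot-product (a b)
    ;; Standard ANSI declarations; CMUCL exploits them aggressively,
    ;; other conforming compilers are free to largely ignore them.
    (declare (type (simple-array double-float (*)) a b)
             (optimize (speed 3) (safety 1)))
    (let ((sum 0d0))
      (declare (type double-float sum))
      (dotimes (i (length a) sum)
        (incf sum (* (aref a i) (aref b i))))))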
From: Dobes Vandermeer
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to  comp.lang.lisp])
Date: 
Message-ID: <3732A552.E5A6646E@mindless.com>
Joshua Scholar wrote:
> 
> On Wed, 05 May 1999 23:36:48 GMT, ·························@thank.you
> wrote:
> 
> >The overwhelmingly
> >most common reason is the perception that Lisp is a
> >special purpose language for academic projects where
> >efficiency doesn't matter.

> So I'd think a better question than why people who don't know anything
> about modern Lisp systems don't use them would be, what do Lisp
> systems need in order to be useful to everyone who does know about
> them.

I think it's the usual circular reinforcement problem: the reputation of
LISP attracts people writing academic projects where efficiency doesn't
matter, and thus LISP conforms ever more to those people, which
increases its reputation in that department.

Of course, efficiency is no longer an issue with LISP; the dynamic
compiler technology keeps up well enough with C++ or whatever.

More important issues are things like packaging; many lisp environments
do not compile executables, and many of the ones that do still don't
provide the full functionality of the interpreted environment, and
so your generated program can lack the features you typically desire
from a LISP program, like the ability to dynamically self-modify, or to
allow the user to type in custom algorithms.  This basically means that
a lot of useful programs are undeliverable without including the lisp
environment itself.

Other places where lisp falls severely short currently are the areas that
interest mainstream programmers the most: database access, sockets,
threading, IPC, and 3d graphics.  None of these interfaces are
standardised or even discussed, and while most implementations now have
some or all of these facilities, they are not cohesive in their
interface, and are sometimes poorly documented and incomplete to boot.

Some LISP implementations still do not interface well with other
languages, while "popular" languages (C/C++/Objective-C, Java, Ada,
Pascal) generally have a relatively safe way of communicating between
languages (although the interface definitions in Ada can be fairly
arduous).

LISP's object-oriented paradigm is powerful and yet... It's essentially
dynamic operator overloading, which is about as interesting to an object
modeller as a stack of bricks.  Although this is certainly only an
opinion and perhaps a trend, object orientation is centered on message
passing, while LISP's object orientation is based on function calls
(still).  I won't say it's not USEFUL, but it's not attractive, and it's
not a step forward unless you are writing "toy" applications.

This is only the beginning of a long list of language weaknesses in
LISP.  It's important to realise that each language has its own use, and
LISP as it stands is great for writing small tests of algorithms before
porting them to a real language.

CU
Dobes
From: Ian Wild
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to  comp.lang.lisp])
Date: 
Message-ID: <3732AEA9.1E854CFF@cfmu.eurocontrol.be>
Dobes Vandermeer wrote:
> 
> many lisp environment
> do not compile executables, and many of the ones that do still dont
> provide full functionality of the interpreter for the environment, and
> so your generated program can lack the features you typically desire
> from a LISP program, like the ability to dynamically self-modify, or
> allow the user to type in custom algorithms.  This basically means that
> a lot of useful programs are undeliverable without including the lisp
> environment itself.

So the reasoning would run "I'd like to use feature X in Lisp, but
can't without delivering a whole Lisp system, therefore I'll instead
write in a language which lacks X"?  Are you /sure/ people think
this way?  Sounds weird to me.


> Other places where lisp falls severely short currently is the areas that
> interest mainstream programmers the most: database access, sockets,
> threading, IPC, and 3d graphics.  None of these interfaces are
> standardised or even discussed, and while most implementations now have
> some or all of these facilities, they are not cohesive in their
> interface, and are sometimes poorly documented and incomplete to boot.

Compared to, say, ISO C which includes all the things you mention
in the standard.  No?  Well, maybe C++?  No?  Pascal?  Fortran?



> Some LISP implementations still do not interface well with other
> languages, while "popular" languages (C/C++/Objective-C, Java, Ada,
> Pascal) generally have a relatively safe way of communiating between
> languages (although the interface definitions in Ada can be fairly
> arduous).

You ever tried passing an Ada function as a Motif callback?
Or a Java method?  I think you mean "most systems have some
limited way of talking to C, and sometimes C will answer".


> LISP's object-oriented paradigm is powerful and yet... It's essentially
> dynamic operator overloading, which is about as interesting to an object
> modeller as a stack of bricks.  Although this is certainly only an
> opinion and perhaps a trend, object orientation is centered on message
> passing, while LISP's object orientation is based on function calls
> (still).  I won't say its not USEFUL, but its not attractive, and its
> not a step forward unless you are writing "toy" applications.

If you carefully restrict yourself to dispatching only on a single
parameter, the only difference I can see is that the verb comes before
the first argument, rather than after.  Is this /really/ enough to
demolish your "object modeller"'s world view?
From: Marco Antoniotti
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <lwaevh40ca.fsf@copernico.parades.rm.cnr.it>
Ian Wild <···@cfmu.eurocontrol.be> writes:

> > Some LISP implementations still do not interface well with other
> > languages, while "popular" languages (C/C++/Objective-C, Java, Ada,
> > Pascal) generally have a relatively safe way of communiating between
> > languages (although the interface definitions in Ada can be fairly
> > arduous).
> 
> You ever tried passing an Ada function as a Motif callback?
> Or a Java method?  I think you mean "most systems have some
> limited way of talking to C, and sometimes C will answer".

JNI is not for the faint of heart.

> > LISP's object-oriented paradigm is powerful and yet... It's essentially
> > dynamic operator overloading, which is about as interesting to an object
> > modeller as a stack of bricks.  Although this is certainly only an
> > opinion and perhaps a trend, object orientation is centered on message
> > passing, while LISP's object orientation is based on function calls
> > (still).  I won't say its not USEFUL, but its not attractive, and its
> > not a step forward unless you are writing "toy" applications.
> 
> If you carefully restrict yourself to dispatching only on a single
> parameter, the only difference I can see is that the verb comes before
> the first argument, rather than after.  Is this /really/ enough to
> demolish your "object modeller"'s world view?

I might be very stupid.  But I am doing the following.

I have a hierarchy of classes where I implement some manipulation
methods.

Then I have a separate hierarchy which implements some operation to be
dispatched in terms of the first one.

I can't escape doing the following

class Clazz
{
	type_t do_something(A a);
	type_t do_something(B b);
	type_t do_something(C c);
	type_t do_something(D d);
	...
	// You get the idea...
	// Suppose you also have A -> B -> C -> D as your hierarchy.

	void do_something_appropriate(A a)
	  {
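		// Overload resolution here happens at compile time, against
		// the declared type A -- not the run-time class of 'a'.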
		do_something(a);
	  }
}

Now, suppose you have code like

{
	Clazz clazz = new Clazz();
	B myB = new B();

	clazz.do_something_appropriate(myB);
}

This is Java-like. In C++ you have extra messiness due to 'virtual'
and pointers/references.

Now, which do_something overload do you expect gets called?

In Common Lisp it is exactly what you'd expect.

Cheers
	

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Marco Antoniotti
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <lw90b13zod.fsf@copernico.parades.rm.cnr.it>
Marco Antoniotti <·······@copernico.parades.rm.cnr.it> writes:

> Ian Wild <···@cfmu.eurocontrol.be> writes:
> 
> > > LISP's object-oriented paradigm is powerful and yet... It's essentially
> > > dynamic operator overloading, which is about as interesting to an object
> > > modeller as a stack of bricks.  Although this is certainly only an
> > > opinion and perhaps a trend, object orientation is centered on message
> > > passing, while LISP's object orientation is based on function calls
> > > (still).  I won't say its not USEFUL, but its not attractive, and its
> > > not a step forward unless you are writing "toy" applications.
> > 
	....
> {
> 	Clazz clazz = new Clazz();
> 	B myB = new B();
> 
> 	clazz.do_something_appropriate(myB);
> }
> 
> This is Java like. In C++ you have extra messiness due to 'virtual'
> and pointer/references.
> 
> Now, what do you expect gets printed?
> 
> In Common Lisp it is exactly what you'd expect.

Allow me to continue.

You claim that CLOS style object orientation is only for 'toy'
systems.

If you went along with my example and did some extra playing around,
you will have noticed that you soon introduce a lot of "tag"
statements (which, AFAIK, Java supports much better than C++) in order
to achieve your goals.  You can always claim that I designed my
hierarchies in a poor way, but that is begging the question.

In CL you do


        (defmethod do-something ((c clazz) (an-a a)) ...)
        (defmethod do-something ((c clazz) (a-b b)) ...)
        (defmethod do-something ((c clazz) (a-c c)) ...)

        (defmethod do-something-appropriate ((c clazz) (an-a a))
           (do-something c an-a))

end of story.

Now go ahead, extrapolate and maintain this and the C++ or Java
versions.  Better yet, add a new class to the a->b->c hierarchy. What
would you have to do, had you written the beast in C/C++? What do you
have to do if it was written in CL?
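
For instance (a sketch continuing the elided style above, with a
hypothetical new class d added under c), the whole change on the CL
side is:

        (defclass d (c) ())                     ; the new leaf class
        (defmethod do-something ((c clazz) (a-d d))
          ...)                                  ; one new method, nothing else

No existing code changes; callers of do-something-appropriate pick up
the new method through the normal dispatch.  In the Java version you
have to add another overload *and* some run-time tests to route calls
to it.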

Cheers


-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Ian Wild
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to    comp.lang.lisp])
Date: 
Message-ID: <3732D270.FF2C3C7B@cfmu.eurocontrol.be>
Marco Antoniotti wrote:
> 
> Marco Antoniotti <·······@copernico.parades.rm.cnr.it> writes:
> 
> > Ian Wild <···@cfmu.eurocontrol.be> writes:
> >
> > > > LISP's object-oriented paradigm is powerful and yet... It's essentially
> > > > dynamic operator overloading, which is about as interesting to an object
> > > > modeller as a stack of bricks.  Although this is certainly only an
> > > > opinion and perhaps a trend, object orientation is centered on message
> > > > passing, while LISP's object orientation is based on function calls
> > > > (still).  I won't say its not USEFUL, but its not attractive, and its
> > > > not a step forward unless you are writing "toy" applications.

Calumny!

I /responded/ to the above by suggesting that carefully avoiding
interesting CLOS features allows you to emulate the "attractive"
OO style the original author prefers.

The fact that your code needs CLOS features that C++ doesn't
have just goes to show you weren't careful enough. :-)
From: Marco Antoniotti
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to     comp.lang.lisp])
Date: 
Message-ID: <lw7lql3xqr.fsf@copernico.parades.rm.cnr.it>
Ian Wild <···@cfmu.eurocontrol.be> writes:

> Marco Antoniotti wrote:
> > 
> > Marco Antoniotti <·······@copernico.parades.rm.cnr.it> writes:
> > 
> > > Ian Wild <···@cfmu.eurocontrol.be> writes:
> > >
> > > > > LISP's object-oriented paradigm is powerful and yet... It's essentially
> > > > > dynamic operator overloading, which is about as interesting to an object
> > > > > modeller as a stack of bricks.  Although this is certainly only an
> > > > > opinion and perhaps a trend, object orientation is centered on message
> > > > > passing, while LISP's object orientation is based on function calls
> > > > > (still).  I won't say its not USEFUL, but its not attractive, and its
> > > > > not a step forward unless you are writing "toy" applications.
> 
> Calumny!
> 
> I /responded/ to the above by suggesting that carefully avoiding
> interesting CLOS features allows you to emulate the "attractive"
> OO style the original author prefers.

OOPS. I jumped a citation level. :{ Sorry

> The fact that your code needs CLOS features that C++ doesn't
> have just goes to show you weren't careful enough. :-)

.. in choosing the language I suppose :)

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Christopher R. Barry
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <87d80cydxm.fsf@2xtreme.net>
Ian Wild <···@cfmu.eurocontrol.be> writes:

> > Other places where lisp falls severely short currently is the areas that
> > interest mainstream programmers the most: database access, sockets,
> > threading, IPC, and 3d graphics.  None of these interfaces are
> > standardised or even discussed, and while most implementations now have
> > some or all of these facilities, they are not cohesive in their
> > interface, and are sometimes poorly documented and incomplete to boot.

To Joachim: which vendor poorly documents any of these interfaces?

> Compared to, say, ISO C which includes all the things you mention
> in the standard.  No?  Well, maybe C++?  No?  Pascal?  Fortran?

Java. :-) But seriously now, for the amount of typing time it takes
to pound out a decent Java app (typing speed seems to be the main
bottleneck with Java programming), you could have written it in Lisp
_and_ read the documentation for three different Lisp vendors' products
so that you could write your thin portable abstraction layer to all
their non-standard functionality that you're _actually_ using, not
just imagining that you'll have some need for the portability and then
using Java to write an app that you only get around to distributing
for Windows-only customers anyway.

Christopher
From: Christopher R. Barry
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <87aevgydqf.fsf@2xtreme.net>
······@2xtreme.net (Christopher R. Barry) writes:

> To Joachim: which vendor poorly documents any of these interfaces?

That should be Dobes Vandermeer, not Joachim. Apologies to
Joachim. I'd cancel the article and post a corrected one, but it's
probably already too late (3 minutes). That'll teach me to rush in the
morning before class....

Christopher
From: Erik Naggum
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <3135074736217837@naggum.no>
* Dobes Vandermeer <·····@mindless.com>
[ a _lot_ of ignorant crap deleted ]

  in all my years on USENET, I have seen a lot of destructive idiots post
  their favorite drivel in the guise of facts, and some even manage to look
  like they have a clue, but I cannot recall anyone quite so destructive or
  quite so willing to lie and misrepresent as Dobes Vandermeer of the aptly
  named MINDLESS.COM.  _nothing_ he says is true or even relevant to what
  Lisp can offer today or why this offer is not widely accepted.

  Lisp does have a problem: it doesn't come with a big enough stick with
  which to hit the ignorant fucks so they get out of their retarded state.

#:Erik
From: Dobes Vandermeer
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to    comp.lang.lisp])
Date: 
Message-ID: <37335E12.FADE956F@mindless.com>
Erik Naggum wrote:
> 
> * Dobes Vandermeer <·····@mindless.com>
> [ a _lot_ of ignorant crap deleted ]
> 
>   in all my years on USENET, I have seen a lot of destructive idiots post
>   their favorite drivel in the guise of facts, and some even manage to look
>   like they have a clue, but I cannot recall anyone quite so destructive or
>   quite so willing to lie and misrepresent as Dobes Vandermeer of the aptly
>   named MINDLESS.COM.  _nothing_ he says is true or even relevant to what
>   Lisp can offer today or why this offer is not widely accepted.
> 
>   Lisp does have a problem: it doesn't come with a big enough stick with
>   which to hit the ignorant fucks so they get out of their retarded state.

Fire it up.

CU
Dobes
From: Joachim Achtzehnter
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <ucg158wzuk.fsf@soft.mercury.bc.ca>
Dobes Vandermeer <·····@mindless.com> writes:
>
> LISP's object-oriented paradigm is powerful and yet... It's essentially
> dynamic operator overloading, which is about as interesting to an object
> modeller as a stack of bricks.  Although this is certainly only an
> opinion and perhaps a trend, object orientation is centered on message
> passing, while LISP's object orientation is based on function calls
> (still).  I won't say its not USEFUL, but its not attractive, and its
> not a step forward unless you are writing "toy" applications.

Sorry, but this doesn't make much sense to me; I must be missing your
point. Perhaps you should explain what you mean? CLOS generic
functions offer everything that C++ virtual methods, Java methods,
Eiffel features, etc. give you. What is it that makes generic
functions less useful than their more primitive counterparts in most
other OO languages?

Is it multiple dispatch? Well, other languages are envious of this
feature, so it can hardly be a disadvantage. Stroustrup himself says
in one of his books that multiple dispatch is the one major feature he
would have liked to add to C++, if only he had found the time to
integrate it without compromising the performance of virtual function
calls.

Is it the difference in syntax? This can hardly be responsible for
limiting the usefulness to toy applications, and again there are
others who recommend migrating to the more symmetric placement of
arguments, e.g. see Chris Date's recent book which proposes a database
language along these lines. And in case you haven't noticed: using the
STL with C++ methods would have been much less awkward if the target
argument of C++ method calls was in the natural place.

And unlike other parts of CL, CLOS even requires a fair amount of
static type checking by requiring consistent lambda lists in the
methods of a generic function, although some implementations could
provide more help with type checking than they do (at least warnings);
see my other postings on this subject.

Joachim

-- 
·······@kraut.bc.ca      (http://www.kraut.bc.ca)
·······@mercury.bc.ca    (http://www.mercury.bc.ca)
From: Dobes Vandermeer
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <373362FB.253DEA@mindless.com>
Joachim Achtzehnter wrote:
> 
> Dobes Vandermeer <·····@mindless.com> writes:
> >
> > LISP's object-oriented paradigm is powerful and yet... It's essentially
> > dynamic operator overloading, which is about as interesting to an object
> > modeller as a stack of bricks.  Although this is certainly only an
> > opinion and perhaps a trend, object orientation is centered on message
> > passing, while LISP's object orientation is based on function calls
> > (still).  I won't say its not USEFUL, but its not attractive, and its
> > not a step forward unless you are writing "toy" applications.
> 
> Sorry, but this doesn't make much sense to me, I must be missing your
> point. Perhaps you should explain what you mean? CLOS generic
> functions offer everything that C++ virtual methods, Java methods,
> Eiffel features, etc. give you. What is it that makes generic
> functions less useful than their more primitive counterparts in most
> other OO languages?

CLOS provides a world view where we have extended functions to overload
based on the class of its operators.  The object abstraction of data and
methods has been lost; CLOS methods are not methods, but functions, and
CLOS does not protect encapsulated data from the "outside world" of
functions and methods.  

A "better" object world view would allow for only acessing fields in the
object directly inside of methods for the object, thus providing the
mystical "encapsulation" that is currently underpromoted in CLOS.  I
hesitate to say that CLOS does not allow you to think in this way, yet I
will assert that typical examples I have encountered make no effort to
do so.

The syntax, although it is often trivialized by many people, is also
fairly important.  In a LISP function call, all arguments are weighted
equally, and so you are led to think about specialising based on any or
all of the parameters.  An object-oriented version, i.e. "(send-msg-to
obj (do-something a b c d))" shows you which object is being operated
upon, and although you could (easily) still specialize on the remaining
variables (i.e. a b c and d) to call a different method, the
message-passing paradigm is preserved.

CU
Dobes
From: Joachim Achtzehnter
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <uc3e18wjih.fsf@soft.mercury.bc.ca>
Dobes Vandermeer <·····@mindless.com> writes:
> 
> CLOS provides a world view where we have extended functions to
> overload based on the class of its operators.

You mean 'operands' presumably? The function itself is the operator
according to conventional use of these terms.

> The object abstraction of data and methods has been lost;

It has not been lost. In fact, it is much stronger than in most OO
languages. In typical use, all access to an object's state is via
functions, called accessors. The client of a class doesn't have to
know whether an attribute is stored as data (corresponding to a C++
data member), or is computed by a function. The implementation can be
changed from data member to function and vice versa without affecting
clients.
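
A small sketch of what I mean (the ACCOUNT class is invented for
illustration):

  ;; Version 1: BALANCE is an ordinary slot with an :accessor.
  (defclass account ()
    ((balance :initarg :balance :accessor balance)))

  ;; Version 2: the slot is gone and BALANCE is computed instead.
  ;; Client code that calls (balance acct) does not change at all.
  (defclass account ()
    ((entries :initform '() :accessor entries)))

  (defmethod balance ((a account))
    (reduce #'+ (entries a)))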

Lisp doesn't enforce information hiding in the same strict manner that
C++ does, which is simply a different philosophy. It has nothing to do
with abstraction, though. Also note that classes are not the primary
tool for information hiding in Lisp. Lisp uses classes as types, and
packages for modularity.

> CLOS methods are not methods, but functions,

C++ methods are also functions. Just think about it a little more:
When you call a C++ (non-static) method the outcome is a function (in
the mathematical sense) of the target object and all other method
arguments. C++ syntax simply puts one of the function arguments in a
different place from others. The argument singled out in this way is
the one used for dynamic dispatch. Note that nobody would have
proposed using this syntax if the more general case of multiple
dispatch had been considered from the start.

> and CLOS does not protect encapsulated data from the "outside world"
> of functions and methods.

Common Lisp lets you circumvent abstraction layers if you insist, but
this is also true in C++; it just requires a little more
effort. Abstraction doesn't require dictatorial enforcement.

> A "better" object world view would allow for only acessing fields in
> the object directly inside of methods for the object, thus providing
> the mystical "encapsulation" that is currently underpromoted in
> CLOS.

You may not realize that CLOS accessors are, in fact, methods. You can
choose not to generate any accessors, in which case the corresponding
slots are not considered part of the class' interface.

Encapsulation is not underpromoted; Lisp simply takes the more general
view that methods can be sent to more than one object. Look at binary
functions: the natural way to express the semantics of the binary
function + doesn't single out one argument over the other. In C++ and
other single-dispatch languages one must awkwardly make such functions
methods of one or the other argument.
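
A small sketch of what multiple dispatch looks like for such a binary
relation (the INTERVAL and POINT classes are invented for
illustration):

  (defclass interval ()
    ((lo :initarg :lo :reader lo)
     (hi :initarg :hi :reader hi)))
  (defclass point ()
    ((x :initarg :x :reader x)))

  (defgeneric overlaps-p (a b))   ; belongs to neither class

  (defmethod overlaps-p ((a interval) (b interval))
    (and (<= (lo a) (hi b)) (<= (lo b) (hi a))))
  (defmethod overlaps-p ((a interval) (b point))
    (<= (lo a) (x b) (hi a)))
  (defmethod overlaps-p ((a point) (b interval))
    (overlaps-p b a))             ; symmetry, with no "owning" object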

> The syntax, although it is often trivialized by many people, is also
> fairly important.

Syntax is a matter of taste.

> In a LISP function call, all arguments are weighted equally, and so
> you are led to think about specialising based on any or all of the
> parameters.

Yes. Why, in general, would you want to force us to single out one of
them?

> And object-oriented version, i.e. "(send-msg-to obj (do-something a
> b c d))" shows you which object is being operated upon,

But sometimes you operate on more than one!

> and although you could (easily) still specialize on the remaining
> variables (i.e. a b c and d) to call a different method,

Not as easily as you make it out; it is certainly much more painful
than with multiple dispatch. And on this point the whole issue of
covariant argument redefinition raises its head; you may want to read
some of the heated discussions in comp.lang.eiffel on this topic. As I
mentioned before, even Stroustrup believes multiple dispatch is a good
feature to have.

Joachim

-- 
·······@kraut.bc.ca      (http://www.kraut.bc.ca)
·······@mercury.bc.ca    (http://www.mercury.bc.ca)
From: Pierre R. Mai
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <87yaj0wh3l.fsf@orion.dent.isdn.cs.tu-berlin.de>
Dobes Vandermeer <·····@mindless.com> writes:

[other confusions elided]

> The syntax, although it is often trivialized by many people, is also
> fairly important.  In a LISP function call, all arguments are weighted
> equally, and so you are led to think about specialising based on any or
> all of the parameters.  And object-oriented version, i.e. "(send-msg-to
> obj (do-something a b c d))" shows you which object is being operated
> upon, and although you could (easily) still specialize on the remaining
> variables (i.e. a b c and d) to call a different method, the
> message-passing paradigm is preserved.

So in effect you are complaining that CLOS doesn't restrict your world 
view enough.  Interesting complaint.  This reminds me of the following 
two prayers (see Douglas Adams, Mostly Harmless):

Protect me from knowing what I don't need to know. Protect me from
even knowing that there are things to know that I don't know.  Protect
me from knowing that I decided not to know about the things that I
decided not to know about. Amen.

Lord, lord, lord. Protect me from the consequences of the above
prayer. Amen...

Regs, Pierre.

PS: Hmmm, somehow I get the feeling we are in the middle of another
Turing-Test again.  Craving for restricted world views might be what
AIs dream of at night... ;)

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Bill Newman
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <wnewmanFBH8x2.FAw@netcom.com>
Dobes Vandermeer (·····@mindless.com) wrote:
: Joachim Achtzehnter wrote:
: > 
: > Dobes Vandermeer <·····@mindless.com> writes:
: > >
: > > LISP's object-oriented paradigm is powerful and yet... It's essentially
: > > dynamic operator overloading, which is about as interesting to an object
: > > modeller as a stack of bricks.  Although this is certainly only an
: > > opinion and perhaps a trend, object orientation is centered on message
: > > passing, while LISP's object orientation is based on function calls
: > > (still).  I won't say its not USEFUL, but its not attractive, and its
: > > not a step forward unless you are writing "toy" applications.
: > 
: > Sorry, but this doesn't make much sense to me, I must be missing your
: > point. Perhaps you should explain what you mean? CLOS generic
: > functions offer everything that C++ virtual methods, Java methods,
: > Eiffel features, etc. give you. What is it that makes generic
: > functions less useful than their more primitive counterparts in most
: > other OO languages?

: CLOS provides a world view where we have extended functions to overload
: based on the class of its operators.  The object abstraction of data and
: methods has been lost; CLOS methods are not methods, but functions, and
: CLOS does not protect encapsulated data from the "outside world" of
: functions and methods.  

: A "better" object world view would allow for only acessing fields in the
: object directly inside of methods for the object, thus providing the
: mystical "encapsulation" that is currently underpromoted in CLOS.  I
: hesitate to say that CLOS does not allow you to think in this way, yet I
: will assert that typical examples I have encountered make no effort to
: do so.

: The syntax, although it is often trivialized by many people, is also
: fairly important.  In a LISP function call, all arguments are weighted
: equally, and so you are led to think about specialising based on any or
: all of the parameters.  And object-oriented version, i.e. "(send-msg-to
: obj (do-something a b c d))" shows you which object is being operated
: upon, and although you could (easily) still specialize on the remaining
: variables (i.e. a b c and d) to call a different method, the
: message-passing paradigm is preserved.

Sometimes "which object is being operated upon?" is not a meaningful
question. In my experience, it's not a meaningful question in any code
which can be written in a "functional", side-effect-free style.  I'm
currently working on a rather extensive rewrite of CMUCL so that it
can be built cleanly, and fairly recently I was messing around inside
its type system, which treats types themselves as classes, and uses
methods and inheritance to implement operations like union-of-types
and intersection-of-types and is-a-subtype-of-b?

The type system of CMUCL is not written in CLOS (perhaps since CLOS
wasn't available when it was originally written, perhaps because CLOS
is currently implemented as a sort of inefficient afterthought in
CMUCL). Showing the actual code would obfuscate the issue for a number
of reasons, but I can illustrate what I'm talking about with some
CLOS-ish code along the same lines. (Beware that I've always found
CLOS's syntax somewhat non-intuitive and this project has kept me from
writing CLOS for months.) Consider

  (DEFCLASS TYPE () ..)
  (DEFCLASS NUMBER-TYPE (TYPE) ..) 
  (DEFCLASS FLOAT-TYPE (NUMBER-TYPE) ..) 
  (DEFCLASS SINGLE-FLOAT-TYPE (FLOAT-TYPE) ..) ; e.g. (SINGLE-FLOAT 0.0 1.0)
  (DEFCLASS MEMBER-TYPE (TYPE) ; e.g. (MEMBER :YES :NO :MAYBE)
    ((MEMBERS :INITARG :MEMBERS :READER MEMBERS)))
  (DEFGENERIC TYPE-INTERSECTION (X Y)) ; X, Y both TYPEs
  (DEFGENERIC TYPE-UNION (X Y))        ; X, Y both TYPEs
  (DEFGENERIC TYPE-SUBTYPEP (X Y))     ; X, Y both TYPEs
  (DEFGENERIC OBJECT-HAS-TYPE? (X Y))  ; X any object, Y a TYPE

Which object is being operated on in TYPE-INTERSECTION or TYPE-UNION?
Which is better, overloading the first argument or the second?
Which of the methods below don't you like?

  (DEFMETHOD TYPE-SUBTYPEP ((X MEMBER-TYPE) (Y TYPE))
    (EVERY (LAMBDA (MEMBER) (OBJECT-HAS-TYPE? MEMBER Y)) (MEMBERS X)))
  (DEFMETHOD TYPE-INTERSECTION ((X TYPE) (Y MEMBER-TYPE))
    (MAKE-INSTANCE 'MEMBER-TYPE
                   :MEMBERS
                   (REMOVE-IF (LAMBDA (MEMBER)
                                (NOT (OBJECT-HAS-TYPE? MEMBER X)))
                              (MEMBERS Y))))
  (DEFMETHOD TYPE-INTERSECTION ((X MEMBER-TYPE) (Y MEMBER-TYPE))
    ..)
  (DEFMETHOD TYPE-UNION ((X MEMBER-TYPE) (Y MEMBER-TYPE))
    ..)

When you have side effects, then often it does make sense to talk
about what's *the* object and what's not. For example, if we say

  (DEFGENERIC LEARN (KNOWLEDGE-BASE FACT TIME-LIMIT))
  (DEFGENERIC UNLEARN (KNOWLEDGE-BASE FACT TIME-LIMIT))

then the KNOWLEDGE-BASE is likely to be the object.  However, (1)
multiple inheritance can still be useful (consider different classes
of FACTs..) and (2) I fail to see what CLOS loses by using general
syntax.  Why is

  kb->learn(f,tl)
  
better than 

  (LEARN KB F TL)?

I do see what I think are problems with CLOS, especially the large
number of levels of indirection it imposes on the implementation even
in the simplest cases. I can also understand other people who think
weak static type checking or lack of enforced information hiding
are problems. But criticizing CLOS for not imposing an asymmetric
"this argument is the object, the others are along for the ride"
semantics seems fairly silly.  CLOS evolved from systems which imposed
that asymmetry, it dropped that asymmetry in order to gain some very
useful generality (multiple inheritance), and (as above) it retains
the ability to describe algorithms which have that asymmetry. So I
just don't see the problem -- it honestly is a feature, not a bug.

(Others have pointed out that Bjarne Stroustrup himself has said that
multiple inheritance is very nice. I'll also point out that Scott
Meyers, in _Effective C++_ or _More Effective C++_, I forget which,
also spends a lot of time showing how to do some of this in C++.  You
write you could "easily" specialize on the other variables, but I
actually wouldn't characterize it as all that easy. It's obviously
doable, but it's also tedious and messy.)

  Bill Newman
  ·······@netcom.com
From: Dobes Vandermeer
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <37364A28.B0D585A9@mindless.com>
Bill Newman wrote:
> 
>
> Sometimes "which object is being operated upon?" is not a meaningful
> question. In my experience, it's not a meaningful question in any code
> which can be written in a "functional", side-effect-free style.

OK, I see your point here, but you yourself dictate it as a "functional"
style, not an object-oriented one.  The case you are referring to is one
where you have written a "function" that takes some arguments, does some
processing, and returns a value with no side effects.  This is a useful
functionality to have, but is it object-oriented?

> Which object is being operated on in TYPE-INTERSECTION or TYPE-UNION?

Both are; these functions should be implemented as functions, not as
methods on an object.

> Which is better, overloading the first argument or the second?

Overload neither; objects and classes are self-describing, so you can
easily write in a cond or case statement to perform special operations
in these cases.

> Which of the methods below don't you like?

>   (DEFMETHOD TYPE-SUBTYPEP ((X MEMBER-TYPE) (Y TYPE))
>     (EVERY (LAMBDA (MEMBER) (TYPE-SUBTYPEP MEMBER Y)) (MEMBERS X)))
>   (DEFMETHOD TYPE-INTERSECTION ((X TYPE) (Y MEMBER-TYPE))
>     (MAKE-INSTANCE 'MEMBER-TYPE
>                    :MEMBERS
>                    (REMOVE-IF (LAMBDA (MEMBER)
>                                 (NOT (OBJECT-HAS-TYPE? MEMBER X)))
>                               (MEMBERS X))))
>   (DEFMETHOD TYPE-INTERSECTION ((X MEMBER-TYPE) (Y MEMBER-TYPE))
>     ..)
>   (DEFMETHOD TYPE-UNION ((X MEMBER-TYPE) (Y MEMBER-TYPE))
>     ..)

These are all poor methods; they are in every way functions.

> When you have side effects, then often it does make sense to talk
> about what's *the* object and what's not. For example, if we say
> 
>   (DEFGENERIC LEARN ((KB KNOWLEDGE-BASE) (F FACT) (TL TIME-LIMIT)))
>   (DEFGENERIC UNLEARN ((KB KNOWLEDGE-BASE) (F FACT) (TL TIME-LIMIT)))

And this is a case where you are working with object-orientation because
you are using a code/data abstraction.  The previous examples took
several objects, and probably would have used methods to get the
information they needed from it, while these specifically operate upon
one object.

> then the KNOWLEDGE-BASE is likely to be the object.  However, (1)
> multiple inheritance can still be useful (consider different classes
> of FACTs..) and (2) I fail to see what CLOS loses by using general
> syntax.  Why is
> 
>   kb->learn(f,tl)
> 
> better than
> 
>   (LEARN KB F TL)?

Because (LEARN KB F TL) rapidly loses its meaning when you are not
careful.  I do not know the intimate details of CLOS, but say for example that
I write a method like:

(defmethod learn (kb (f my-fact) tl) ... )

This method specialises on the second object, but not the first.
Initially this seems like it may be a feature, because we can now
intercept any calls to "learn" our special type of fact.  On the other
hand, (LEARN KB F TL) is no longer equivalent to KB->(LEARN F TL), and
maybe instead decides to have side effects on F instead of KB (we don't
know, and there is no way to find out) while still using the same
generic function.  The other approach means that we do know, given an
object's API, whether it is that object or another that is most likely
to be operated on.  Rather than an implementation or flexibility issue,
it is about readability and psychology.

> I do see what I think are problems with CLOS, especially the large
> number of levels of indirection it imposes on the implementation even
> in the simplest cases. I can also understand other people who think
> weak static type checking or lack of enforced information hiding
> are problems. But criticizing CLOS for not imposing an asymmetric
> "this argument is the object, the others are along for the ride"
> semantics seems fairly silly.  

Perhaps my criticism there was tagged on for the ride, in response to
someone's trivialisation of it.  It is my personal belief that syntax
has more power over a programmer than anything else in a language, at
many levels.  Syntax decides how a programmer will use things,
regardless of how they CAN be used.  A syntax that places emphasis on a
particular object means that programmers will write in a way that uses
one object.  A syntax that places no emphasis leads to programs that use
any number of the objects, possibly in any number of ways.  The failure
to hide data means that programmers feel free to modify the data from
anywhere in their code, regardless of good practices etc.

> CLOS evolved from systems which imposed
> that asymmetry, it dropped that asymmetry in order to gain some very
> useful generality (multiple inheritance), and (as above) it retains
> the ability to describe algorithms which have that asymmetry. So I
> just don't see the problem -- it honestly is a feature, not a bug.

I fail to see how multiple inheritance is affected by the syntax in this
case.. perhaps I use the word in a different way than you do?

> (Others have pointed out that Bjarne Stroustrup himself has said that
> multiple inheritance is very nice. I'll also point out that Scott
> Meyers, in _Effective C++_ or _More Effective C++_, I forget which,
> also spends a lot of time showing how to do some of this in C++.  You
> write you could "easily" specialize on the other variables, but I
> actually wouldn't characterize it as all that easy. It's obviously
> doable, but it's also tedious and messy.)

You would specialise on the other variables exactly as CLOS does; I
don't see how it could possibly be any more messy than that.  You would
not even require a code change.

Another message has pointed out about 10 lines of code that change
functional syntax into message-passing syntax.  It's easy, and so are
other changes to make CLOS more object-oriented-ish.  The issue is not
difficulty of implementation; the difficulty is that it's not in the
standard, and nobody is going to implement it unless there is something
driving them to do so.  With the reputation of LISP as it is, nobody who
would complain about these things (except maybe me, for some reason)
even bothers with LISP, so I suspect word never reaches the ears of the
LISP publishers and standard-setters.

CU
Dobes
From: Frank A. Adrian
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <njtZ2.201$Rf6.49637@news.uswest.net>
I've been using OO languages for upwards of 15 years now (mainly C++ and
Smalltalk, but also having some FLAVORS and CLOS experience).  There are a
good many problems where sending a message to a single object does not make
sense.

Look at the double dispatch issue in arithmetic implementations in Smalltalk
and for arithmetic objects in C++.  Look at displaying different shape
objects on different devices.  Each requires a level of complexity when you
code it in a message passing style, most of which has to do with the fact
that you need to dispatch n*m cases through an additional n (or m) methods
on one class or the other.

With multimethods, you can dispatch the m*n cases directly (and quite often
get code reuse by sharing base cases).  It does simplify things...
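
In CLOS terms, a sketch (the shape and device classes here are
invented for illustration):

  (defclass shape () ())
  (defclass circle (shape) ())
  (defclass device () ())
  (defclass postscript-device (device) ())

  (defgeneric draw (shape device))

  ;; One shared base case covers the whole n*m grid...
  (defmethod draw ((s shape) (d device))
    (format t "generic rendering of ~s on ~s~%" s d))

  ;; ...and only the combinations that really differ get a method.
  (defmethod draw ((s circle) (d postscript-device))
    (format t "special PostScript path for ~s~%" s))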

faa

From: Dobes Vandermeer
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <37369241.C1F504F6@mindless.com>
"Frank A. Adrian" wrote:
> 
> I've been using OO languages for onwards of 15 years now (mainly C++ and
> Smalltalk, but also having some FLAVORS and CLOS experience).  There are a
> good many problems where sending a message to a single object does not make
> sense.
> 
> Look at the double dispatch issue in arithmetic implementations in Smalltalk
> and for arithmetic objects in C++.  Look at displaying different shape
> objects on different devices.  Each requires a level of complexity when you
> code them in a message passing style, most of which has to do with the fact
> that you need to dispatch n*m cases through an additional n (or m) methods
> on one or the other classes.
> 
> With multimethods, you can dispatch the m*n cases directly (and quite often
> get code reuse by sharing base cases).  It does simplify things...

I am beginning to suspect that I am inspecting the problem with a less
advanced perspective.. sort of a C++ bias on the whole issue, because I
see the issue as divided between two separate systems: functional code
and object-oriented stuff.  CLOS, I see, combines the two into one system,
where a method is "shared" between the objects it operates on.

The case of arithmetic operations and so forth I consider to be a
functional problem, and not something you typically solve in an
object-oriented fashion.  But from a "purist" object-oriented
perspective, where functions do not exist, only methods, of course CLOS
is a more valuable model, because you are obviously sending the message
to every argument as a sort of "committee" rather than to a single
object you want to affect.  From this perspective the CLOS method makes
a lot more sense.

I'll have to think about this view for a while.

Thanks
Dobes
From: Thomas A. Russ
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <ymiemkolxka.fsf@sevak.isi.edu>
Dobes Vandermeer <·····@mindless.com> writes:
> I am beginning to suspect that I am inspecting the problem with a less
> advanced perspective.. sort of a C++ bias on the whole issue, because I
> see the issue as divided between two seperate systems: functional code
> and object-oriented stuff.  CLOS I see conbines the two into one system,
> where a method is "shared" between the objects it operates on.

This is a good first step.

CLOS does use a different object model from C++.  I suspect that a lot
of (perhaps needless) confusion results from this not being clear.  Some
major points of difference:

   C++:  Objects encapsulate both data and functions
  CLOS:  Objects encapsulate data only

   C++:  Methods belong to objects
  CLOS:  Methods belong to generic functions.

This latter point is a big change in the view of a programming system.

Instead of organizing methods by the class to which their first argument
belongs, all methods are organized by the function that they implement.
This is a very different way of thinking about organizing code.  It
focuses on the operation as the organizational backbone, rather than on
the object.

There are a couple of interesting effects of this decision.

It makes less of a distinction between built-in objects like
DOUBLE-FLOAT or STRING and user defined objects.  You can easily define
new functions over combinations of built-in types.

There is no syntactic distinction between generic function (method)
calls and normal function calls.  Since they operate in much the same
way, it means that one could in principle change functions into generic
functions (and vice versa) without needing to change the calling code --
something you can't do for static versus non static member functions in
C++ or Java.

It makes it easy to add your own processing to a system that contains
objects provided by the system or by other software packages.  If you
received a C++ object library and wanted to add methods to those
objects, you would be forced to subclass the objects in order to do it.
In CLOS, you can define a generic function with methods that are
dispatched on existing objects defined by others, without needing to
disturb those objects or subclass them.
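
For example (a sketch; PRINT-BRIEFLY is an invented generic function,
not part of any standard or vendor library):

  ;; New behavior dispatched on built-in classes, with no subclassing
  ;; and no changes to the classes themselves.
  (defgeneric print-briefly (thing stream))

  (defmethod print-briefly ((x string) stream)
    (format stream "a string of length ~d" (length x)))

  (defmethod print-briefly ((x integer) stream)
    (format stream "the integer ~d" x))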



Historical note:  An earlier OO system for Lisp (FLAVORS) was very much
   based on the message passing paradigm.  It even used SEND to send
   messages to a single distinguished object.

I wonder if Kent Pitman or someone else would care to comment on some of
the background that went into deciding to use the CLOS/generic function
model rather than the object/message passing model for the Common Lisp
object system?

-- 
Thomas A. Russ,  USC/Information Sciences Institute          ···@isi.edu    
From: Kent M Pitman
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <sfwbtfsjb6e.fsf@world.std.com>
···@sevak.isi.edu (Thomas A. Russ) writes:

> Historical note:  An earlier OO system for Lisp (FLAVORS), was very much
>    based on the message passing paradigm.  It even used SEND to send
>    messages to a single distinguished object.
> 
> I wonder if Kent Pitman or someone else would care to comment on some of
> the background that went into deciding to use the CLOS/generic function
> model rather than the object/message passing model for the Common Lisp
> object system?

Multimethods were an influence of Xerox.  We had locked out the Xerox
crowd when we made CL, because CL was historically a defensive ploy to
keep Interlisp from killing the Maclisp community.  Interlisp was,
lamentably, pretty much wiped out in the process of making CL.  But
the Xerox crowd came to the table at the start of X3J13 (1986-1988)
with one of four full proposals for an object system.  Discussions
were had about which way to go and what resulted was a kind of hybrid
of something called LOOPS (from Xerox) and Flavors (from Lisp
Machines).  Among the various influences of LOOPS (not to be confused
with LOOP, btw) one of the most prominent was multimethod dispatching.
The Maclisp/LispM community had never had this and questioned whether
it was ever useful.  The Xerox guys insisted we had never wanted it
only because we had never had the option of using it.  In retrospect,
I think they were clearly right.

I don't know if that helps at all.  There is probably more that could be
said, but it's too late at night for me to remember much else.
'nite.
From: Kent M Pitman
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <sfwd8097fas.fsf@world.std.com>
Dobes Vandermeer <·····@mindless.com> writes:

> OK, I see your point here, but you yourself dictate it as a "functional"
> style, not an object-oriented one.

This presupposes a particular definition of object-oriented.

I claim that the original and proper definition of object-oriented
means "organized in a way such that an object's identity matters"
and is wholly neutral as to how objects are implemented.

Lisp was object-oriented before it had ANY way to program objects.
That other latter-day people have come in and added a new way to
define objects and have co-opted the term "object-oriented" for
that is a rude irrelevance.

I think a better name for this notion that Smalltalk and Java and
other systems have of everything being about messages to single
objects would be "object-centric" or even "self-centric".
From: Harley Davis
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <7h7luv$c3g$1@ffx2nh3.news.uu.net>
Kent M Pitman <······@world.std.com> wrote in message
····················@world.std.com...
> I think a better name for this notion that Smalltalk and Java and
> other systems have of everything being about messages to single
> objects would be "object-centric" or even "self-centric".

Or, better yet, "self-centered".

-- Harley
From: R. Matthew Emerson
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <87zp3d3723.fsf@nightfly.apk.net>
Dobes Vandermeer <·····@mindless.com> writes:
[...]
> It is my personal belief that syntax
> has more power over a programmer than anything else in a language, at
> many levels.  Syntax decides how a programmer will use things,
> regardless of how they CAN be used.

So, when the language hands you a syntactic hammer, you then feel
obliged to look at everything as a nail?  That doesn't sound like a very
winning argument for syntax to me.

[...]

> Another message has pointed out about 10 lines of code that change
> functional syntax into message-passing syntax.  It's easy, and so are
> other changes to make CLOS more object-oriented-ish.  The issue is not
> difficulty of implementation, the difficulty is that it's not in the
> standard, and nobody is going to implement it unless there is something
> driving them to do so.

You are complaining that Lisp provides too powerful of a model?

> With the reputation of LISP as it is, nobody who
> would complain about these things (except maybe me, for some reason)
> even bothers with LISP, so I suspect word never reaches the ears of the
> LISP publishers and standard-setters.

Yeah, thanks, you're all heart.  Very kind of you to talk to us
benighted lispers.

-matt
From: Tim Bradshaw
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <ey3vhe1ifhd.fsf@lostwithiel.tfeb.org>
* Dobes Vandermeer wrote:

> Another message has pointed out about 10 lines of code that change
> functional syntax into message-passing syntax.  It's easy, and so are
> other changes to make CLOS more object-oriented-ish.  

I'm sorry you took my code as an attempt to `make CLOS more
object-oriented-ish'.  It wasn't.  It was an attempt to show that
CLOS's model is a superset of the crippled message-passing /
single-dispatch idea that seems to be all you recognise as object
oriented.

> word never reaches the ears of the
> LISP publishers and standard-setters.

Word reaches them all right. About the last problem CLOS has is lack
of a message-passing syntax.

--tim
From: Dobes Vandermeer
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <373776E7.942C74B8@mindless.com>
Tim Bradshaw wrote:
> 
> * Dobes Vandermeer wrote:
> 
> > Another message has pointed out about 10 lines of code that change
> > functional syntax into message-passing syntax.  It's easy, and so are
> > other changes to make CLOS more object-oriented-ish.
> 
> I'm sorry you took my code as an attempt to `make CLOS more
> object-oriented-ish'.  It wasn't.  It was an attempt to show that
> CLOS's model is a superset of the crippled message-passing /
> single-dispatch idea that seems to be all you recognise as object
> oriented.

Yeah, the whole thing clicked for me reading a post by Frank Adrian
earlier; I see now how CLOS is in fact object-oriented in a different way
than I see object-orientation.  Your implementation was just more
single-object friendly.

CU
Dobes
From: Marco Antoniotti
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <lwzp3didw8.fsf@copernico.parades.rm.cnr.it>
Dobes Vandermeer <·····@mindless.com> writes:

> Bill Newman wrote:
> > 
> >
> > Sometimes "which object is being operated upon?" is not a meaningful
> > question. In my experience, it's not a meaningful question in any code
> > which can be written in a "functional", side-effect-free style.
> 
> OK, I see your point here, but you yourself dictate it as a "functional"
> style, not an object-oriented one.  The case you are referring to is one
> where you have written a "function" that takes some arguments, does some
> processing, and returns value with no side effects.  This is a useful
> functionality to have, but is object-oriented?

Does Common Lisp have Buddha's Nature? What is the sex of Java? How
many calls to malloc can you allocate to the tip of a needle? :)

One thing is sure.  You can do

	(defmethod endless-disquisition ((n nature) (s sex) (nt needle))
	   ....)

in Common Lisp.  You can't in any other language except Dylan.  :)

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Bill Newman
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <wnewmanFBJ20s.G85@netcom.com>
Dobes Vandermeer (·····@mindless.com) wrote:
: Bill Newman wrote:
: > 
: >
: > Sometimes "which object is being operated upon?" is not a meaningful
: > question. In my experience, it's not a meaningful question in any code
: > which can be written in a "functional", side-effect-free style.

: OK, I see your point here, but you yourself dictate it as a "functional"
: style, not an object-oriented one.  The case you are referring to is one
: where you have written a "function" that takes some arguments, does some
: processing, and returns value with no side effects.  This is a useful
: functionality to have, but is object-oriented?

: > Which object is being operated on in TYPE-INTERSECTION or TYPE-UNION?

: Both are; these functions should be implemented as functions, not as
: methods on an object.

: > Which is better, overloading the first argument or the second?

: Overload neither; objects and classes are self-describing, so you can
: easily write in a cond or case statement to perform special operations
: in these cases.

: > Which of the methods below don't you like?

: >   (DEFMETHOD TYPE-SUBTYPEP ((X MEMBER-TYPE) (Y TYPE))
: >     (EVERY (LAMBDA (MEMBER) (TYPE-SUBTYPEP MEMBER Y)) (MEMBERS X)))
: >   (DEFMETHOD TYPE-INTERSECTION ((X TYPE) (Y MEMBER-TYPE))
: >     (MAKE-INSTANCE 'MEMBER-TYPE
: >                    :MEMBERS
: >                    (REMOVE-IF (LAMBDA (MEMBER)
: >                                 (NOT (OBJECT-HAS-TYPE? MEMBER X)))
: >                               (MEMBERS X))))
: >   (DEFMETHOD TYPE-INTERSECTION ((X MEMBER-TYPE) (Y MEMBER-TYPE))
: >     ..)
: >   (DEFMETHOD TYPE-UNION ((X MEMBER-TYPE) (Y MEMBER-TYPE))
: >     ..)

: These are all poor methods, they are in every way functions.

: > When you have side effects, then often it does make sense to talk
: > about what's *the* object and what's not. For example, if we say
: > 
: >   (DEFGENERIC LEARN ((KB KNOWLEDGE-BASE) (F FACT) (TL TIME-LIMIT)))
: >   (DEFGENERIC UNLEARN ((KB KNOWLEDGE-BASE) (F FACT) (TL TIME-LIMIT)))

: And this is a case where you are working with object-orientation because
: you are using a code/data abstraction.  The previous examples took
: several objects, and probably would have used methods to get the
: information they needed from it, while these specifically operate upon
: one object.

: > then the KNOWLEDGE-BASE is likely to be the object.  However, (1)
: > multiple inheritance can still be useful (consider different classes
: > of FACTs..) and (2) I fail to see what CLOS loses by using general
: > syntax.  Why is
: > 
: >   kb->learn(f,tl)
: > 
: > better than
: > 
: >   (LEARN KB F TL)?

: Because (LEARN KB F TL) rapidly loses its meaning when you are not
: careful.  I do not know the intimates of CLOS, but say for example that
: I write a method like:

: (defmethod learn (kb (f my-fact) tl) ... )

: This method specialises on the second object, but not the first.  While
: initially this seems like it may be a feature because we can now
: intercept any calls to "learn" our special type of fact.  On the other
: hand, (LEARN KB F TL) is no longer equivalent to KB->(LEARN F TL), and
: maybe instead decides to have side effects on F instead of KB (we don't
: know, and there is no way to find out) while still using the same
: generic function.  The other approach means that we do know, given an
: object's API, whether it is that object or another that is most likely
: to be operated on.  Rather than an implementation or flexibility issue,
: it is about readability and psychology.

I believe that the way to deal with this is to have the generic
function definition specify carefully what it does, including which
objects can have side-effects or access to private object state. In my
experience, that works well, but of course there are lots of
programming cultures, lots of project sizes, and lots of project
timescales, so YMMV.

As to whether the object system requires side-effects to be limited
to a single object, I still believe that most side-effect-ful
operations tend to have a single preferred object, but I don't
think they *all* do. Consider

  (DEFGENERIC SYNCHRONIZE! ((C1 CACHE) (C2 CACHE)))
  (DEFGENERIC MERGE! ((C1 COLLECTION) (C2 COLLECTION)))
  (DEFGENERIC TRANSFER! ((TO ACCOUNT) (FROM ACCOUNT) SOMETHING))

Sometimes you can rewrite these as mutations on individual objects,
e.g. TRANSFER! might be broken into CREDIT! and DEBIT! if the
synchronization requirements weren't too complex.  But consider also..

  'Having one argument of an operation (the one designating
  the "object") special can lead to contorted designs. When
  several arguments are best treated equally, an operation is
  best represented as a nonmember function.'
    -- Stroustrup, _The C++ Programming Language_, 3d edition, p. 732
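
As a rough sketch of the TRANSFER! case above (the ACCOUNT class and its
BALANCE slot are invented here for illustration), the symmetric operation
side-effects both arguments, while the split version gives each mutation a
single preferred object:

  (defclass account ()
    ((balance :initarg :balance :accessor balance)))

  ;; Symmetric version: one operation mutating both arguments.
  (defmethod transfer! ((to account) (from account) amount)
    (decf (balance from) amount)
    (incf (balance to) amount))

  ;; Split version: each mutation has a single preferred object.
  (defmethod debit! ((acct account) amount)
    (decf (balance acct) amount))

  (defmethod credit! ((acct account) amount)
    (incf (balance acct) amount))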

: > I do see what I think are problems with CLOS, especially the large
: > number of levels of indirection it imposes on the implementation even
: > in the simplest cases. I can also understand other people who think
: > weak static type checking or lack of enforced information hiding
: > are problems. But criticizing CLOS for not imposing an asymmetric
: > "this argument is the object, the others are along for the ride"
: > semantics seems fairly silly.  

: Perhaps my criticism there was tagged on for the ride, in response to
: someone's trivialisation of it.  It is my personal belief that syntax
: has more power over a programmer than anything else in a language, at
: many levels.  Syntax decides how a programmer will use things,
: regardless of how they CAN be used.  A syntax that places emphasis on a
: particular object means that programmers will write in a way that uses
: one object.  A syntax that places no emphasis leads to programs that use
: any number of the objects, possibly in any number of ways.  The failure
: to hide data means that programmers feel free to modify the data from
: anywhere in their code, regardless of good practices etc.

There are two issues here. On the first issue, I still think the
symmetry of CLOS is a feature, not a bug. Multiple dispatch (not
multiple *inheritance* as I repeatedly and mistakenly wrote earlier..)
is nice.

On the second issue, I already said above that I can understand
criticism of CLOS's lack of enforced information hiding.  I'm
personally happy with that aspect of CLOS, but I also think it's a
case of "horses for courses": I'm not surprised that other people find
it distasteful, or inappropriate in some circumstances.

: > CLOS evolved from systems which imposed
: > that asymmetry, it dropped that asymmetry in order to gain some very
: > useful generality (multiple inheritance), and (as above) it retains
: > the ability to describe algorithms which have that asymmetry. So I
: > just don't see the problem -- it honestly is a feature, not a bug.

: I fail to see how multiple inheritance is affected by the syntax in this
: case.. perhaps I use the word in a different way than you do?

I screwed up and wrote multiple inheritance when I meant multiple
dispatch. Sorry..

: > (Others have pointed out that Bjarne Stroustrup himself has said that
: > multiple inheritance is very nice. I'll also point out that Scott
: > Meyers, in _Effective C++_ or _More Effective C++_, I forget which,
: > also spends a lot of time showing how to do some of this in C++.  You
: > write you could "easily" specialize on the other variables, but I
: > actually wouldn't characterize it as all that easy. It's obviously
: > doable, but it's also tedious and messy.)

: You would specialise on the other variables exactly as CLOS does, I
: don't see how it could possibly be any more messy than that.  You would
: not even require a code change.

Not for the problems that multiple dispatch (not multiple inheritance,
argh!)  is usually used to solve. In particular, you usually want
inheritance to work on each of the dispatched arguments, and once you
commit to that, you do end up writing some messy code (unless you're
using CLOS or Dylan, of course:-).

Multiple dispatch is not the same thing as multiple inheritance, but
it's like it, in that when you want it, you really want it, and when
you don't, it's hard to see why anyone would.:-|

  Bill Newman
  ·······@netcom.com
From: Gareth McCaughan
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <86pv48u72s.fsf@g.pet.cam.ac.uk>
Dobes Vandermeer wrote:

[someone else wrote:]
>> Sometimes "which object is being operated upon?" is not a meaningful
>> question. In my experience, it's not a meaningful question in any code
>> which can be written in a "functional", side-effect-free style.
> 
> OK, I see your point here, but you yourself dictate it as a "functional"
> style, not an object-oriented one.  The case you are referring to is one
> where you have written a "function" that takes some arguments, does some
> processing, and returns value with no side effects.  This is a useful
> functionality to have, but is object-oriented?

I don't see why it shouldn't be. It's purely a matter of terminology,
anyway. Whether or not you want to use the term "OO" when the "objects"
being worked on are immutable, it's clear that one can benefit from
information hiding, polymorphism and inheritance in that situation.
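
For example (a made-up sketch, not code from this thread), a generic
function over immutable objects still gets dispatch and inheritance for
free:

  ;; Immutable shapes: slots are filled at creation and never assigned again.
  (defclass shape () ())
  (defclass circle (shape) ((radius :initarg :radius :reader radius)))
  (defclass square (shape) ((side   :initarg :side   :reader side)))

  (defgeneric area (s)
    (:documentation "Return the area of S; no side effects on anything."))

  (defmethod area ((s circle)) (* pi (radius s) (radius s)))
  (defmethod area ((s square)) (* (side s) (side s)))

  ;; (area (make-instance 'circle :radius 2.0))  =>  12.566...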

>> Which object is being operated on in TYPE-INTERSECTION or TYPE-UNION?
> 
> Both are; these functions should be implemented as functions, not as
> methods on an object.

Why? Because they don't have side effects? Surely not; I can't believe
you would suggest that all accessors should be "functions, not methods"
(and presumably, as per your next comment, implemented using a COND or
a CASE!). Because the objects they're working on are immutable? So,
would your opinion suddenly reverse if the type system changed so that
the contents of these TYPE objects could legitimately be altered after
their creation? Because they can make new objects? (Why should that
make a difference?)

None of these reasons seems to me sufficient to say Thou Shalt Not
Implement These As Methods; so what *is* the problem? Is it, in fact,
simply that they involve multiple dispatch and you don't believe in
multiple dispatch?

>> Which is better, overloading the first argument or the second?
> 
> Overload neither; objects and classes are self-describing, so you can
> easily write in a cond or case statement to perform special operations
> in these cases.

Er, what? So much for object orientation...

> > Which of the methods below don't you like?
> 
> >   (DEFMETHOD TYPE-SUBTYPEP ((X MEMBER-TYPE) (Y TYPE))
> >     (EVERY (LAMBDA (MEMBER) (TYPE-SUBTYPEP MEMBER Y)) (MEMBERS X)))
> >   (DEFMETHOD TYPE-INTERSECTION ((X TYPE) (Y MEMBER-TYPE))
> >     (MAKE-INSTANCE 'MEMBER-TYPE
> >                    :MEMBERS
> >                    (REMOVE-IF (LAMBDA (MEMBER)
> >                                 (NOT (OBJECT-HAS-TYPE? MEMBER X)))
> >                               (MEMBERS X))))
> >   (DEFMETHOD TYPE-INTERSECTION ((X MEMBER-TYPE) (Y MEMBER-TYPE))
> >     ..)
> >   (DEFMETHOD TYPE-UNION ((X MEMBER-TYPE) (Y MEMBER-TYPE))
> >     ..)
> 
> These are all poor methods, they are in every way functions.

How do you want them to be implemented, then? Something like this?

    (defun type-intersection (x y)
      (cond
        ((member-type-p x)
         ...)
        ((numeric-type-p x)
         ...)
        ((enumerated-type-p x)
         ...)
        ((structured-type-p x)
         ...)
        ...))

If you really think this is a good way to write code, then I don't
understand why you're interested in OO at all.

>> then the KNOWLEDGE-BASE is likely to be the object.  However, (1)
>> multiple inheritance can still be useful (consider different classes
>> of FACTs..) and (2) I fail to see what CLOS loses by using general
>> syntax.  Why is
>> 
>> kb->learn(f,tl)
>> 
>> better than
>> 
>> (LEARN KB F TL)?
> 
> Because (LEARN KB F TL) rapidly loses its meaning when you are not
> careful.  I do not know the intimates of CLOS, but say for example that
> I write a method like:
> 
> (defmethod learn (kb (f my-fact) tl) ... )
> 
> This method specialises on the second object, but not the first.  While
> initially this seems like it may be a feature because we can now
> intercept any calls to "learn" our special type of fact.  On the other
> hand, (LEARN KB F TL) is no longer equivalent to KB->(LEARN F TL), and
> maybe instead decides to have side effects on F instead of KB (we don't
> know, and there is no way to find out) while still using the same
> generic function.  The other approach means that we do know, given an
> object's API, whether it is that object or another that is most likely
> to be operated on.  Rather than an implementation or flexibility issue,
> it is about readability and psychology.

This is just an issue of style.

There is nothing to stop a method on one object side-effecting
another (even in systems that, unlike CLOS, place heavy restrictions
on access to objects other than via their methods). So even in C++
you have no sort of guarantee that saying |kb->learn(f,tl)| won't
side-effect f. Maybe the first thing the |learn| method does is to
say |f->mangle_self()|. You can certainly adopt a *convention* that
a method never side-effects its arguments...

... but then, in CLOS, you can also adopt a convention that a generic
function never side-effects any argument other than the first. If that
turns out to be helpful.

What's the problem here?

> Perhaps my criticism there was tagged on for the ride, in response to
> someone's trivialisation of it.  It is my personal belief that syntax
> has more power over a programmer than anything else in a language, at
> many levels.  Syntax decides how a programmer will use things,
> regardless of how they CAN be used.  A syntax that places emphasis on a
> particular object means that programmers will write in a way that uses
> one object.  A syntax that places no emphasis leads to programs that use
> any number of the objects, possibly in any number of ways.  The failure
> to hide data means that programmers feel free to modify the data from
> anywhere in their code, regardless of good practices etc.

This reminds me very strongly of something Erik Naggum once said,
along the following lines. "Common Lisp doesn't enforce these
restrictions; you have to be polite. C++ does. The effect is that
decent people use Common Lisp, whereas thieves and bums use C++".
I think he exaggerated to make his point, but it is a point even so.

If you work in Common Lisp, you can agree that no one will make any
direct access to an object outside methods specialised on that object
as first argument; or that you won't use multiple dispatch at all;
or whatever you like. If such an agreement is helpful to your project
but the people on it are incapable of sticking to it, then you certainly
have problems, but they aren't with CLOS.

Sleazy programmers have just as much opportunity for sleaze in C++ as
they have in CLOS.

#define private public
#include "FooClass.h"

:-) (I think this particular observation may also be Erik Naggum's.)

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Dobes Vandermeer
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <373774E6.4729B777@mindless.com>
Gareth McCaughan wrote:
> 
> Dobes Vandermeer wrote:
> 
> [someone else wrote:]
>
> >> Which object is being operated on in TYPE-INTERSECTION or TYPE-UNION?
> >
> > Both are; these functions should be implemented as functions, not as
> > methods on an object.
> 
> Why? Because they don't have side effects? Surely not; I can't believe
> you would suggest that all accessors should be "functions, not methods"
> (and presumably, as per your next comment, implemented using a COND or
> a CASE!). Because the objects they're working on are immutable? So,
> would your opinion suddenly reverse if the type system changed so that
> the contents of these TYPE objects could legitimately be altered after
> their creation? Because they can make new objects? (Why should that
> make a difference?)

As I mentioned in another post I am starting to see what the meaning of
multiple dispatch is, and why it is useful.

On the other hand, it is my opinion that overloading (specialisation) in
general is a dangerous practice.  Nobody is perfect, especially not me,
and I am almost certainly going to define overlapping generics where I
did not intend to.  Grouping all of the possibilities into one function
means that there can be no doubt what is being done, and you only have
to specialise on one variable.  The rest you manually specialise on
inside the function.

> Is it, in fact,
> simply that they involve multiple dispatch and you don't believe in
> multiple dispatch?

I really didn't see what was meant by multiple dispatch, or why it was
useful before, but I think I see now that as an object model it is a
little bit clever.

> > > Which of the methods below don't you like?
> >
> > >   (DEFMETHOD TYPE-INTERSECTION ((X MEMBER-TYPE) (Y MEMBER-TYPE))
> > >     ..)
>
> How do you want them to be implemented, then? Something like this?
> 
>     (defun type-intersection (x y)
>       (cond
>         ((member-type-p x)
>          ...)
>         ((numeric-type-p x)
>          ...)
>         ((enumerated-type-p x)
>          ...)
>         ((structured-type-p x)
>          ...)
>         ...))
> 
> If you really think this is a good way to write code, then I don't
> understand why you're interested in OO at all.

What is type-intersection supposed to do?

I doubt I'd implement it as above, but I don't even know what it's for.

> This reminds me very strongly of something Erik Naggum once said,
> along the following lines. "Common Lisp doesn't enforce these
> restrictions; you have to be polite. C++ does. The effect is that
> decent people use Common Lisp, whereas thieves and bums use C++".
> I think he exaggerated to make his point, but it is a point even so.

My comment was not about enforcement but rather syntax and appearance,
so you have obviously missed what I was saying.  I have had good
experiences with (now Apple's) OpenStep and Objective-C programming. 
Objective-C does not enforce non-access to member variables from
non-member methods, but you will never find an example piece of code
that does so.  It will not even give a warning if you do it. 
Regardless, I still think of Objective-C as the best object system I
have ever encountered; if you have used Objective-C then you might see
how I consider object oriented programming can ideally be done.

CU
Dobes
From: Kenny Tilton
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <37378561.29482B99@liii.com>
> On the other hand, it is my opinion that overloading (specialisation) in
> general is a dangerous practice.  Nobody is perfect, especially not me,
> and I am almost certainly going to define overlapping generics where I
> did not intend to.  Grouping all of the possibilities into one function
> means that there can be no doubt what is being done, and you only have
> to specialise on one variable.  The rest you manually specialise on
> inside the function.

Aren't you saying you are willing to surrender expressive power and
endure the productivity loss of hassling through coding up multiple
dispatches, all because you don't think you can keep your code
straight?  <g> 

I am pushing multiple-inheritance and generic functions to the hilt on a
new commercial app. On rare occasions I find myself lost, but a little
navel-staring and consultation of taoist texts always shows me the
light.

Just as with my other programming, I follow certain self-imposed
guidelines to keep my head on straight, including the golden rule, viz.,
that if I can't follow it anymore I have probably made things too hard
and it's time to rethink fundamentally.

You know, I guess I can see the C++ philosophy of "if you can get it to
compile it's probably right" but my taste is for expressive power over
being protected from myself by a dumbed-down language.

I have full respect for anyone who works better under different
circumstances, BTW, but if you want to question Lisp, well, the power vs
save-me-from-myself thing will be coming up an awful lot. Might as well
just flag that as a given.

IMHO,

Ken

PS.

> Regardless, I still think of Objective-C as the best object system I
> have ever encountered; 

That has interfaces...don't you end up doing a lot of cutting and pasting?

KT
From: Pierre R. Mai
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <87n1zbvfbo.fsf@orion.dent.isdn.cs.tu-berlin.de>
Kenny Tilton <····@liii.com> writes:

> You know, I guess I can see the C++ philosophy of "if you can get it to
> compile it's probably right" but my taste is for expressive power over
> being protected from myself by a dumbed-down language.

That is more the philosophy of modern pure functional languages with
expressive type-systems.  I've had the nice opportunity to work with
one of those, and it mostly really works the way that "if you can get
it to type-check (i.e. compile), it's probably right".  Quite nice in
a way, though too static for my "taste".  But C++ doesn't even come
anywhere near that.  If I get my C++ code to compile (which really
isn't that problematic, unless you use templates, when usually all
hell breaks loose ;), it's usually time to start debugging...

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Bill Newman
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <wnewmanFBKq1I.GwI@netcom.com>
Dobes Vandermeer (·····@mindless.com) wrote:
: Gareth McCaughan wrote:
: > 
: > Dobes Vandermeer wrote:
: > > > Which of the methods below don't you like?
: > >
: > > >   (DEFMETHOD TYPE-INTERSECTION ((X MEMBER-TYPE) (Y MEMBER-TYPE))
: > > >     ..)
: >
: > How do you want them to be implemented, then? Something like this?
: > 
: >     (defun type-intersection (x y)
: >       (cond
: >         ((member-type-p x)
: >          ...)
: >         ((numeric-type-p x)
: >          ...)
: >         ((enumerated-type-p x)
: >          ...)
: >         ((structured-type-p x)
: >          ...)
: >         ...))
: > 
: > If you really think this is a good way to write code, then I don't
: > understand why you're interested in OO at all.

: What is type-intersection supposed to do?

: I doubt I'd implement it as above, but I don't even know what it's for.

The compiler wants to keep track of what it knows about the type
of various expressions. If it's compiling

  ;; Return a symbol in the keyword package whose name is X. (a
  ;; handy utility for manipulating keyword arguments, structure slot 
  ;; names, and so forth)
  (DECLAIM (FTYPE (FUNCTION ((OR STRING SYMBOL)) KEYWORD) KEYWORDIFY))
  (DEFUN KEYWORDIFY (X)
    (INTERN (IF (SYMBOLP X) 
              (SYMBOL-NAME X)
              X)
            (FIND-PACKAGE :KEYWORD)))

then it should be able to prove to itself that the first argument to
INTERN is necessarily a STRING, so that it can compile it as some
internal %INTERN-A-STRING% primitive operation without any further
type checking.  It knows that the type of SYMBOL-NAME is STRING. Since
as per KMP's suggestion I made sure that the DECLAIM preceded the
DEFUN, it knows that the type of X before the IF is (OR STRING
SYMBOL). Then since the SYMBOLP branch wasn't taken, the type of X in
the other SYMBOLP-is-false branch of the IF must be

  (TYPE-INTERSECTION (MAKE-TYPE '(OR STRING SYMBOL))
                     (TYPE-COMPLEMENT (MAKE-TYPE 'SYMBOL))).

Finally it can deduce that the type of the return value from the IF is
the union of the types of both branches

  (TYPE-UNION (MAKE-TYPE 'STRING)
              (TYPE-INTERSECTION (MAKE-TYPE '(OR STRING SYMBOL))
                                 (TYPE-COMPLEMENT (MAKE-TYPE 'SYMBOL)))).

So to get the right answer we want

  (TYPE-INTERSECTION (MAKE-TYPE '(OR STRING SYMBOL))
                     (TYPE-COMPLEMENT (MAKE-TYPE 'SYMBOL)))
  => #<TYPE STRING>

and

  (TYPE-UNION (MAKE-TYPE 'STRING) (MAKE-TYPE 'STRING))
  => #<TYPE STRING>.

It turns out that the natural way to implement these functions, and
related type-manipulation functions, is with multiple dispatch,
i.e. doing method dispatch based on the type of more than one argument
of a generic function, not just the first argument. It would also be
possible to implement it with big TYPECASE statements instead, as
Gareth McCaughan wrote above, but it would be ugly and hard to
maintain.

Incidentally, this may have been an unfortunate example to focus on in
that the arguments in question represent a compiler's knowledge of
types, and we're also arguing about dispatching on the type of generic
function arguments, and having to keep track of the distinction
between the two -- ahem -- types of types probably makes it
unnecessarily hard to think about the problem. Oh well..

Also incidentally, all this is only a cartoon version of the way that
the system actually works inside the particular compiler I'm working
with, for a number of reasons, because e.g. the real compiler mostly
manipulates classes instead of types, and because it uses its own
twisty little object system instead of CLOS. But it is a real
application, and if it were written de novo in ANSI Common Lisp
instead of ported from pre-ANSI "Spice Lisp", it would be appropriate to
write it in CLOS.

  Bill Newman
  ·······@netcom.com
From: Gareth McCaughan
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <86zp3b8upy.fsf@g.pet.cam.ac.uk>
Bill Newman wrote:

> It turns out that the natural way to implement these functions, and
> related type-manipulation functions, is with multiple dispatch,
> i.e. doing method dispatch based on the type of more than one argument
> of a generic function, not just the first argument. It would also be
> possible to implement it with big TYPECASE statements instead, as
> Gareth McCaughan wrote above, but it would be ugly and hard to
> maintain.

(I'd just like to mention, for the benefit of anyone joining the
thread here, that I said that precisely in order to point out
how horrible it would be. I wouldn't dream of suggesting that
this kind of thing should be done with CASE or TYPECASE or COND
unless it turns out to give a huge performance win, which seems
unlikely.)

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Pierre R. Mai
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <87pv48v2ch.fsf@orion.dent.isdn.cs.tu-berlin.de>
Dobes Vandermeer <·····@mindless.com> writes:

> On the other hand, it is my opinion that overloading (specialisation) in
> general is a dangerous practice.  Nobody is perfect, especially not me,

Generic functions are precisely not about overloading; that is, all
methods on a gf should implement semantically the _same_ operation.
In languages like C++, where (operator) overloading is "allowed", you
might implement both string concatenation and integer addition with
an operator +.  While this would be possible to do in CL by accident
(since both operations have the same arity), it is wrong, since both
operations are arguably semantically different.

> and I am almost certainly going to define overlapping generics where I
> did not intend to.  Grouping all of the possibilities into one function

If by overlapping generics you mean the above overloading scenario,
I can assure you that this turns out to be quite unproblematic in
practice, for much the same reasons that you don't get clashes on
functions names in general.  I actually find it interesting that you
already have an opinion on "dangerous practice", while you haven't
even learned to use CLOS yet.  One generally assumes that knowledge
about practice comes indeed from practicing.

> I really didn't see what was meant by multiple dispatch, or why it was
> useful before, but I think I see now that as an object model it is a
> little bit clever.

I would suggest you get yourself a copy of Sonya E. Keene's
"Object-Oriented Programmin in Common Lisp - A Programmer's Guide to
CLOS" to start to get an idea of what CLOS is really about.  Last time 
I looked (which was probably when I wrote a similar message for
someone else ;), it was available through Amazon.com.

It also helps to see languages as ecologies (as Kent Pitman put it so
nicely).  Since CLOS is a part/an extension of CL, it will help you to 
approach CLOS from the perspective of the problems it solves for
Common Lisp.  It will most probably not get you very far to consider
how CLOS solves the problems that C has, and conclude that Objective-C 
solves those problems "better".

> Regardless, I still think of Objective-C as the best object system I
> have ever encountered; if you have used Objective-C then you might see
> how I consider object oriented programming can ideally be done.

Considering the fact that the theory behind "Object-Orientation" is
still a much debated topic (if you can't cope with CLOS' approach,
just take a look at some of the pure/lazy functional programming
languages and their approaches ;), I'd think it's a bit premature to
assign attributes like "ideally"...

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: ··@spam.not
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <37389c21.312414287@news.pacbell.net>
On 11 May 1999 03:09:50 +0200, ····@acm.org (Pierre R. Mai)
wrote:

>In languages like C++, where (operator) overloading is "allowed", you
>might implement both string concatenation and integer addition with
>an operator +.  While this would be possible to do in CL by accident
>(since both operations have the same arity), it is wrong, since both
>operations are arguably semantically different.

"Arguably" is the key word there.  Most of us learned to add
by putting apples together.  Here are 3 apples, here are two
more, how many do we have?  Putting strings together is a
similar operation.  Here is a short string, here is another,
what do we have?  Of all the possible interpretations of
string addition, concatenation is in fact the one that
most appeals to common sense, and is in fact the best
usage of the addition operator with strings, unless you
already have another concatenation operator, in which
case it's better to reject the addition operator as an
error.  But in C++ there is no standard string concatenation
operator, and the choice of '+' is common and generally
accepted.
From: Kenny Tilton
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <3737CACD.F1B51E99@liii.com>
··@spam.not wrote:
> 
> On 11 May 1999 03:09:50 +0200, ····@acm.org (Pierre R. Mai)
> wrote:
> 
> >In languages like C++, where (operator) overloading is "allowed", you
> >might implement both string concatenation and integer addition with
> >an operator +.  While this would be possible to do in CL by accident
> >(since both operations have the same arity), it is wrong, since both
> >operations are arguably semantically different.
> 
> 
[snip]

> Of all the possible interpretations of
> string addition, concatenation is in fact the one that
> most appeals to common sense,

It seems to me, you may not be disagreeing with him.

I liked what he said because "+" as an alias for string concatenation
requires poetic license. If you mean 'concatenate', why not say
concatenate? You do not seem to dispute that a bit of metaphor is
required to get concatenation out of "+".

BTW, the interpretation of string addition that most appeals to my
common sense is "123" + "456" => "579", with an error for "hello, " +
"world".

Ken
From: Christopher R. Barry
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <8767602hub.fsf@2xtreme.net>
··@spam.not writes:

> On 11 May 1999 03:09:50 +0200, ····@acm.org (Pierre R. Mai)
> wrote:
> 
> >In languages like C++, where (operator) overloading is "allowed", you
> >might implement both string concatenation and integer addition with
> >an operator +.  While this would be possible to do in CL by accident
> >(since both operations have the same arity), it is wrong, since both
> >operations are arguably semantically different.
> 
> "Arguably" is the key word there.  Most of us learned to add
> by putting apples together.  Here are 3 apples, here are two
> more, how many do we have?

5.

> Putting strings together is a similar operation.  Here is a short
> string, here is another, what do we have?

2 strings.

> Of all the possible interpretations of string addition,
> concatenation is in fact the one that most appeals to common sense

If you walk into a first-grade classroom and ask, "what do you get
when you add an "apple" and 3", what kind of answers do you think you
would get? They would probably think in terms of placing an apple
object and a 3 object into a container like a basket, the 3 object
represented by something like a cardboard cutout of a "3".

Apparently in Java it would be "apple3". But with Java I find this
acceptable because the alternative is to use the ugly and less
readable C function syntax. I think using "+" this way in Lisp is
particularly unsafe, and particularly ugly and useless.

Christopher
From: Kent M Pitman
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <sfwd808jbhe.fsf@world.std.com>
The really important reason not to allow + to be overloaded as it is
in Java is that identities like x+y-y = x are not useful to
the compiler when you have "inconsistencies" around such as you have
for + because there is no general rule about what a good definition
is.

Java may be able to define a + operator to do string concat, but a -
operator is a lot more problematic.  Even if you could make "foobar" -
"bar" yield "foo", the problem is that commutativity and associativity
doesn't work.  x-y+y is not the same as x+y-y, for example, because...
well, it's hopefully not worth belaboring.
From: Harley Davis
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <7hcfuh$b6g$1@ffx2nh3.news.uu.net>
Kent M Pitman <······@world.std.com> wrote in message
····················@world.std.com...
> The really important reason not to allow + to be overloaded as it is
> in Java is that identities like x+y-y = x are not useful to
> the compiler when you have "inconsistencies" around such as you have
> for + because there is no general rule about what a good definition
> is.
>
> Java may be able to define a + operator to do string concat, but a -
> operator is a lot more problematic.  Even if you could make "foobar" -
> "bar" yield "foo", the problem is that commutativity and associativity
> doesn't work.  x-y+y is not the same as x+y-y, for example, because...
> well, it's hopefully not worth belaboring.

But Java is statically typed, so the compiler can perfectly well distinguish
the cases where "+" has all the properties you are talking about.  Or am I
missing something?

-- Harley
From: Kent M Pitman
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <sfwd8066sgc.fsf@world.std.com>
"Harley Davis" <··············@spamless_ilog.com> writes:

> Kent M Pitman <······@world.std.com> wrote in message
> ····················@world.std.com...
> > The really important reason not to allow + to be overloaded as it is
> > in Java is that identities like x+y-y = x are not useful to
> > the compiler when you have "inconsistencies" around such as you have
> > for + because there is no general rule about what a good definition
> > is.
> >
> > Java may be able to define a + operator to do string concat, but a -
> > operator is a lot more problematic.  Even if you could make "foobar" -
> > "bar" yield "foo", the problem is that commutativity and associativity
> > doesn't work.  x-y+y is not the same as x+y-y, for example, because...
> > well, it's hopefully not worth belaboring.
> 
> But Java is statically typed, so the compiler can perfectly well distinguish
> the cases where "+" has all the properties you are talking about.  Or am I
> missing something?

Well, my points were several.  One is that you do have to statically type
the language in order to be able to manage the identities since otherwise
different identities apply depending on the values and you can't do any
optimization.  Another is that even in the presence of typing, it is possible
to scroll your screen so the type declarations are out of sight and it's still
possible to look at a certain piece of code in isolation that is hard to
understand.  Yet another is that I guess I find it kind of visually gross
to see the arg signature be totally varied in ways that vary not only according
to use, but are kind of huffman-encoded in terms of knowing that certain args
being what they are tells you additional syntaxes are available that wouldn't
be if you used different args.  These are all just personal preferences on
my part, not absolutes.  But they are strong personal preferences.

I think the right way to "support" (and I use the term loosely) the
idea of people wanting to overload operators is to let them define the
operator in a different namespace (whether package or lexical module)
if they want to give it incompatible meaning.  But within any given
namespace, I'd like a given operator to have one purpose.  Otherwise,
I personally don't get enough "mileage" out of having given the thing
a name, since the "name" doesn't really act as a mark of quality in
any context--the name could do anything at any time and any name could
just be waiting to surprise me.  It sounds like it would work against
macros, for example.  (I have no experience that I can recall offhand
with macros in strongly typed languages, though, so I'm not sure what
that would look like...)
From: Joachim Achtzehnter
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <ucpv48jsd6.fsf@soft.mercury.bc.ca>
Dobes Vandermeer <·····@mindless.com> writes:
> 
> On the other hand, it is my opinion that overloading (specialisation)
> in general is a dangerous practice.

First, let us be careful with terminology. In C++ the term
'overloading' refers to a compile-time mechanism where the same
function or method name can be used for what really are different
functions, e.g. instead of

  void munge_A(A *arg);
  void munge_B(B *arg);

C++ allows you to write

  void munge(A *arg);
  void munge(B *arg);

But these functions are not selected based on the runtime type of the
arguments, e.g. if B is a subtype of A:

B b;
A *a = &b;
munge(a);  // calls munge(A *) !!!

Multiple dispatch in CLOS selects methods based on the dynamic,
runtime type of arguments, which in C++ is only supported for a single
argument: the target of virtual functions. So, to avoid confusion let
us talk about method specialization in CLOS, not overloading.
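
In Common Lisp the same fragment (MUNGE and the trivial class definitions
below are invented to mirror the C++ above) dispatches on the class of the
actual object, not on a declared type:

  (defclass a () ())
  (defclass b (a) ())    ; B is a subclass of A

  (defgeneric munge (arg))
  (defmethod munge ((arg a)) :munged-an-a)
  (defmethod munge ((arg b)) :munged-a-b)

  ;; The variable carries no static type; dispatch looks at the object.
  (let ((x (make-instance 'b)))
    (munge x))            ; => :MUNGED-A-B, the most specific method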

> Nobody is perfect, especially not me, and I am almost certainly
> going to define overlapping generics where I did not intend to.
> Grouping all of the possibilities into one function means that there
> can be no doubt what is being done, and you only have to specialise
> on one variable.  The rest you manually specialise on inside the
> function.

This is not really true if you think about it. Take the binary
operator example. Say you start with a method for the most general
type, say number-class. Then you add a subtype integer-class, and you
may want to specialize the integer-class case with a more efficient
method.

(defmethod binop ((left number-class) (right number-class)) ...)
(defmethod binop ((left integer-class) (right integer-class)) ...)

You still want to use the general method for mixed combinations of
arguments. In CLOS, the more general method will be called
automatically unless BOTH arguments are of type integer-class, and the
specialized method will only be called for the case it is designed
for, namely when both arguments are of type integer-class. In C++ the
binop methods on both classes must be able to deal with the mixed
case! Not all possibilities are in the same function!
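
Sketching that out (the classes and the keyword return values are invented;
only the method selection matters):

  (defclass number-class () ())
  (defclass integer-class (number-class) ())

  (defmethod binop ((left number-class) (right number-class))
    :general-case)
  (defmethod binop ((left integer-class) (right integer-class))
    :fast-integer-case)

  ;; Both arguments integer-class: the specialized method runs.
  ;; (binop (make-instance 'integer-class) (make-instance 'integer-class))
  ;;   => :FAST-INTEGER-CASE
  ;; Mixed arguments: CLOS falls back to the general method automatically.
  ;; (binop (make-instance 'integer-class) (make-instance 'number-class))
  ;;   => :GENERAL-CASE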

> > How do you want them to be implemented, then? Something like this?
> > 
> >     (defun type-intersection (x y)
> >       (cond
> >         ((member-type-p x)
> >          ...)
> >         ((numeric-type-p x)
> >          ...)
> >         ((enumerated-type-p x)
> >          ...)
> >         ((structured-type-p x)
> >          ...)
> >         ...))
> > 
> > If you really think this is a good way to write code, then I don't
> > understand why you're interested in OO at all.
> 
> What is type-intersection supposed to do?

The point being made here is that lack of multiple dispatch in C++
forces you sometimes into using switch statements, a technique which
was supposed to have been replaced by virtual function calls.

Joachim

-- 
·······@kraut.bc.ca      (http://www.kraut.bc.ca)
·······@mercury.bc.ca    (http://www.mercury.bc.ca)
From: Philip Lijnzaad
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <u77lqgm0ib.fsf@ebi.ac.uk>
On Mon, 10 May 1999 02:53:28 GMT, 
"Dobes" == Dobes Vandermeer <·····@mindless.com> writes:

Dobes> (defmethod learn (kb (f my-fact) tl) ... )
...
Dobes> On the other hand, (LEARN KB F TL) is no longer equivalent to
Dobes> KB->(LEARN F TL), 

why not? If learn(f,tl) is defined for a number of different types, you have
the same situation.

Dobes> and maybe instead decides to have side effects on F
Dobes> instead of KB 

it could have done that all along, and the same is true for the single
dispatch

Dobes> (we don't know, and there is no way to find out) while
Dobes> still using the same generic function.  

generic functions are like interfaces; they specify an API.

Dobes> The other approach means that we do know, given an object's API,
Dobes> whether it is that object or another that is most likely to be
Dobes> operated on.

do you mean: the object that is most likely to have its state changed?  I
don't know what the prevalent coding style in CLOS is (any takers?), but
would be surprised if people randomly change the state on just any of the
argument objects that takes their fancy. I suppose I would use explicit
setf's for most state changes, and would have most generics not change the
state of any of their objects, operating non-destructively where possible.

BTW, your 'most likely to be operated on' also suggests that you can't do
without additional rules in the form of stylistic conventions; same as in
CLOS. 

                                                                      Philip
-- 
The mail transport agent is not liable for any coffee stains in this message
-----------------------------------------------------------------------------
Philip Lijnzaad, ········@ebi.ac.uk | European Bioinformatics Institute
+44 (0)1223 49 4639                 | Wellcome Trust Genome Campus, Hinxton
+44 (0)1223 49 4468 (fax)           | Cambridgeshire CB10 1SD,  GREAT BRITAIN
PGP fingerprint: E1 03 BF 80 94 61 B6 FC  50 3D 1F 64 40 75 FB 53
From: Philip Lijnzaad
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <u7aevdmgny.fsf@ebi.ac.uk>
On Sun, 9 May 1999 18:05:26 GMT, 
"Bill" == Bill Newman <·······@netcom.com> writes:
...
Bill> CLOS evolved from systems which imposed
Bill> that asymmetry, it dropped that asymmetry in order to gain some very
Bill> useful generality (multiple inheritance), and (as above) it retains
                                  ^^^^^^^^^^^
              (You mean 'multiple argument dispatch' here? )

Bill> the ability to describe algorithms which have that asymmetry. So I
Bill> just don't see the problem -- it honestly is a feature, not a bug.

Bill> (Others have pointed out that Bjarne Stroustrup himself has said that
Bill> multiple inheritance is very nice. I'll also point out that Scott

               ^^^^^^^^^^^ (multiple argument dispatch ?)

Bill> Meyers, in _Effective C++_ or _More Effective C++_, I forget which,
Bill> also spends a lot of time showing how to do some of this in C++.  You
Bill> write you could "easily" specialize on the other variables, but I
Bill> actually wouldn't characterize it as all that easy. It's obviously
Bill> doable, but it's also tedious and messy.)

-- 
The mail transport agent is not liable for any coffee stains in this message
-----------------------------------------------------------------------------
Philip Lijnzaad, ········@ebi.ac.uk | European Bioinformatics Institute
+44 (0)1223 49 4639                 | Wellcome Trust Genome Campus, Hinxton
+44 (0)1223 49 4468 (fax)           | Cambridgeshire CB10 1SD,  GREAT BRITAIN
PGP fingerprint: E1 03 BF 80 94 61 B6 FC  50 3D 1F 64 40 75 FB 53
From: Bill Newman
Subject: Re: Reasons for rejecting CLOS
Date: 
Message-ID: <wnewmanFBJ0sE.EE7@netcom.com>
Philip Lijnzaad (········@ebi.ac.uk) wrote:
: On Sun, 9 May 1999 18:05:26 GMT, 
: "Bill" == Bill Newman <·······@netcom.com> writes:
: ...
: Bill> CLOS evolved from systems which imposed
: Bill> that asymmetry, it dropped that asymmetry in order to gain some very
: Bill> useful generality (multiple inheritance), and (as above) it retains
:                                   ^^^^^^^^^^^
:               (You mean 'multiple argument dispatch' here? )

: Bill> the ability to describe algorithms which have that asymmetry. So I
: Bill> just don't see the problem -- it honestly is a feature, not a bug.

: Bill> (Others have pointed out that Bjarne Stroustrup himself has said that
: Bill> multiple inheritance is very nice. I'll also point out that Scott

:                ^^^^^^^^^^^ (multiple argument dispatch ?)

: Bill> Meyers, in _Effective C++_ or _More Effective C++_, I forget which,
: Bill> also spends a lot of time showing how to do some of this in C++.  You
: Bill> write you could "easily" specialize on the other variables, but I
: Bill> actually wouldn't characterize it as all that easy. It's obviously
: Bill> doable, but it's also tedious and messy.)


Yes, of course, thank you. (Argh! where was my brain??)

  Bill Newman
  ·······@netcom.com
From: Christopher C Stacy
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <x8lu2toy0n3.fsf@world.std.com>
Dobes Vandermeer <·····@mindless.com> writes:
> More important issues are things like packaging; many lisp environment
> do not compile executables

Huh?  Both major commercial Common Lisp vendors (for Windows and UNIX)
have compilers that produce executables.  At least one of them can also
produce DLLs.
From: Kent M Pitman
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <sfw90b0eb70.fsf@world.std.com>
Christopher C Stacy <······@world.std.com> writes:

> Dobes Vandermeer <·····@mindless.com> writes:
> > More important issues are things like packaging; many lisp environment
> > do not compile executables
> 
> Huh?  Both major commercial Common Lisp vendors (for Windows and UNIX)
> have compilers that produce executables.  At least one of them can also
> produce DLLs.

Maybe he only meant to say "many lisp environments produced for casual
use are suitable only for casual use, and only those lisp environments
that intend to be commercial quality address this commercial concern".
;-)

I agree. Quite a lot of Lisp implementations address this issue, some
in more turnkey ways than others.  That some implementations don't is
merely a commercial choice--if those implementations find a following,
their users presumably are able to cope without this.  At some point
maybe they'll add the feature too, or maybe they'll lose customers.
That's the way the market works.  The market is intentionally and
appropriately pluralistic.  Saying that some implementation doesn't
suit your need is just saying you should be shopping; saying no
implementation suits your need is just saying there's a market
opportunity waiting for you.
From: ··@spam.not
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <37338f44.46977780@news.pacbell.net>
On Fri, 7 May 1999 23:01:39 GMT, Kent M Pitman
<······@world.std.com> wrote:

>Maybe he only meant to say "many lisp environments produced for casual
>use are suitable only for casual use, and only those lisp environments
>that intend to be commercial quality address this commercial concern".

Being able to build an executable program which can run
on different computers that don't have Lisp is not just a
commercial concern.  Even if you just want to write
educational software for your kids, you want them to be
able to run it on their own computer.  The reason why most
people never even bother to learn Lisp is because they
perceive it as being an elephant when they want a tiger.
Having to pay $3000 to be able to build simple executable
programs is part of that perception.
From: Kent M Pitman
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <sfwhfpnv4dg.fsf@world.std.com>
··@spam.not writes:

> On Fri, 7 May 1999 23:01:39 GMT, Kent M Pitman
> <······@world.std.com> wrote:
> 
> >Maybe he only meant to say "many lisp environments produced for casual
> >use are suitable only for casual use, and only those lisp environments
> >that intend to be commercial quality address this commercial concern".
> 
> Being able to build an executable program which can run
> on different computers that don't have Lisp is not just a
> commercial concern.  Even if you just want to write
> educational software for your kids, you want them to be
> able to run it on their own computer.  The reason why most
> people never even bother to learn Lisp is because they
> perceive it as being an elephant when they want a tiger.
> Having to pay $3000 to be able to build simple executable
> programs is part of that perception.

This supposes that you have to pay $3000 to build a simple executable.
You do not.  This is WILDLY far off the mark and grossly misleading to
suggest as a straw man. I can think of almost no commercial
implementations full of goodies that cost this much.  Most
high-quality commercial implementations have at least one offering in
the $500-600 range, and those implementations usually have an
accompanying free version [in which you cannot dump images not because
it wouldn't be fun but because they NEED the revenue to be able to
continue supplying cheapskates (not saying that's you, but I believe
they're out there) with free implementations because they never want
to ever pay a dime in support of whoever brought them
happiness/success].  Vendors have to feed themselves.  But there are
numerous smaller/freer lisps out there, probably most of which have
the capability you ask for and don't cost a dime.  If you really think
that it costs $3000 to build a simple executable, I suggest the
possibility that you have not done your homework before speaking.  And
if you don't think it does, I SUGGEST YOU CONSIDER THE POSSIBILITY
THAT EXTREME STATEMENTS LIKE THE ONE YOU MADE ABOVE ARE AS MUCH OR
MORE THE CAUSE OF A PERPETUATED BELIEF THAT LISP IS TOO EXPENSIVE FOR
PERSONAL USE THAN THE ACTUAL FACT OF THE MATTER.

It does cost more to make a commercial redistributable app, and I think
there are some real issues there. But you have specifically said that
is not your issue.  And I just don't think that in the space of "for
personal use" Lisps, your criticism is even remotely fair.

People can, with their kids, teach them to click on the "lisp" icon
and then teach them to type (load "...").  That's all that's needed.
If that is not a fair exchange for a free implementation, I have to
say your public awaits your contribution of something better.  If kids
don't have the attention span required to do a two-step loadup instead
of a one-step loadup, they are not candidates to ever be lisp
programmers.  I daresay the path to being a serious programmer
involves more than just a few such steps, and anyone not willing to
take them might as well start lining up right now for a job in another
industry with fewer steps leading to it rather than diving in on the
mistaken assumption that everything will be handed to them on a silver
platter for free with no burden that they do even the slightest thing.
Geez, even getting my linux to run at my house (and it wasn't wholly
free; there was a substantial media cost) took me a billion (ok, only
probably a hundred) recompiles of the kernel (which I should remember
to go delete sometime :-).  Was THAT a barrier to linux success?
I just don't think so.
From: Christopher R. Barry
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <87zp3ftlep.fsf@2xtreme.net>
Kent M Pitman <······@world.std.com> writes:

> People can, with their kids, teach them to click on the "lisp" icon
> and then teach them to type (load "...").  That's all that's needed.

You could just make a .BAT file that does your vendor's equivalent of
"lisp -eval '(load "...")", and then put that on the Windows desktop
with a cute icon, or do the equivalent thing for a Mac or your X
window manager. A 3-year-old might have trouble typing (load "...")
(especially if "( )" are labeled as "[ ]" like on my keyboard (-; ),
but clicking an icon and then using a cute Garnet app with a mouse or
trackball should be doable.

> Geez, even getting my linux to run at my house (and it wasn't wholly
> free-there was a substantial media cost) took me a billion (ok, only
> probably a hundred) recompiles of the kernel (which I should
> remember to go delete sometime :-).  Was THAT a barrier to linux
> success?  I just don't think so.

I've only had my own PC since 1997 (before then all I had ever done
with computers was use a little AOL and play a little DOOM on my
friends' DOS/Windows PCs), and about 6 months after I got it I tried
to install Linux. I finally succeeded in getting the partitions done
and my PS/2 mouse working with X and PPP working and the 10 other
show-stoppers that took 3-5 hours apiece of tearful frustration, but
there were still all kinds of problems like "netscape bus errors" that
crashed it and netscape's fonts being ugly as hell (no decent
font-server installed yet) and the netscape 24-bit color bug. In the
end I gave up and kept using a mix of Windows 95 and NT until sometime
in early 1998 when I finally got sick enough of Windows that I
installed Debian again and forced myself to use only it (and I
painfully worked my way through K&R's _The C Programming Language_ at
this time and became quite a vi-user/C-hacker for a while <shudder>).

Other friends of mine have tried to do Linux and failed (even with the
occasional hand-holding and guidance from me and with other luxuries
I didn't have (like installing from a CD-ROM instead of via FTP)). The point of all
this is that it _IS_ a barrier to Linux success, if you interpret
Linux success to mean getting the largest user-base possible by
migrating Windows-users.

But hey, it's free software... except RedHat charges $50 per
tech-supported license and I hear only has robots on tech-support duty
and replying to emails.

Christopher
From: Paolo Amoroso
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <3736abe3.1792805@news.mclink.it>
On Sat, 08 May 1999 19:15:57 GMT, ······@2xtreme.net (Christopher R. Barry)
wrote:

> But hey, it's free software... except RedHat charges $50 per
> tech-supported license and I hear only has robots on tech-support duty
> and replying to emails.

The Red Hat support "entity" with which I recently interacted by email
behaved like a human being. If it was a robot, I'd like to see its source
code :-)


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Christopher R. Barry
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <87aevc2j3e.fsf@2xtreme.net>
·······@mclink.it (Paolo Amoroso) writes:

> On Sat, 08 May 1999 19:15:57 GMT, ······@2xtreme.net (Christopher R. Barry)
> wrote:
> 
> > But hey, it's free software... except RedHat charges $50 per
> > tech-supported license and I hear only has robots on tech-support duty
> > and replying to emails.
> 
> The Red Hat support "entity" with which I recently interacted by email
> behaved like a human being. If it was a robot, I'd like to see its source
> code :-)

I've been hearing for a while now how they've intended to invest the
$20M they got from Intel and others primarily to improve their tech
support. So maybe things are starting to get better....

Christopher
From: Sam Steingold
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <m3pv4bl3mw.fsf@eho.eaglets.com>
>>>> In message <·················@news.pacbell.net>
>>>> On the subject of "Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])"
>>>> Sent on Sat, 08 May 1999 02:48:05 GMT
>>>> Honorable ··@spam.not writes:
 >>
 >> Having to pay $3000 to be able to build simple executable
 >> programs is part of that perception.

for CLISP (from the impnotes.html):

There are two different ways to make CLISP "executables" for Windows
platforms.

 * Associate the "mem" extension with
     "c:\clisp\lisp.exe -m 10M -M %s".
 * Associate the "fas" extension with
     "c:\clisp\lisp.exe -m 10M -M c:\clisp\lispinit.mem -i %s".
   Alternatively, you may want to have a function main in your files
   and associate the "fas" extension with
     "c:\clisp\lisp.exe -m 10M -M c:\clisp\lispinit.mem -i %s -x (main)".

Clicking on a compiled lisp file (with the "fas" extension) will load
the file (thus executing all the code in the file), while clicking on
CLISP's memory image (with the "mem" extension) will start CLISP with
the given memory image.

The function

  (lisp:saveinitmem &optional (filename "lispinit.mem")
                    &key :quiet :init-function)

saves the running CLISP's memory to a file.  If the :quiet argument is
not nil, the startup banner and the good-bye message will be
suppressed.  The :init-function argument specifies a function that will
be executed at startup of the saved image.  The starting package of the
new image is the one in which you were when you invoked
lisp:saveinitmem.
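
For concreteness, a minimal sketch of building such an image from a
running CLISP session (the file names and the entry function here are
made up for illustration; the keyword arguments are just the ones
described above):

  ;; Load the (hypothetical) compiled application, define a
  ;; (hypothetical) entry point, then dump a memory image that
  ;; runs it at startup.
  (load "myapp.fas")

  (defun start-myapp ()
    (format t "~&Hello from the saved image!~%"))

  (lisp:saveinitmem "myapp.mem"
                    :quiet t                      ; no banner/good-bye
                    :init-function #'start-myapp) ; run at image startup

With the "mem" association above, double-clicking myapp.mem should then
start CLISP on that image and call the init function.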

CLISP is covered by the GPL, but you may distribute commercial applications.
Read the license.


-- 
Sam Steingold (http://www.goems.com/~sds) running RedHat6.0 GNU/Linux
Micros**t is not the answer.  Micros**t is a question, and the answer is Linux,
(http://www.linux.org) the choice of the GNU (http://www.gnu.org) generation.
Why do we want intelligent terminals when there are so many stupid users?
From: Erik Naggum
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <3135136169957202@naggum.no>
* ··@spam.not
| Being able to build an executable program which can run on different
| computers that don't have Lisp is not just a commercial concern.

  but if you want to run it on such computers, won't they _get_ a Lisp?

  please remember that C executables are _not_ standalone unless you go to a
  very serious effort and bloat your executables tremendously.  the whole
  idea with shared libraries is to capitalize on common parts of what was
  once in a number of executables, and it has made life seriously simpler
  for large libraries.  the idea is no different than what was once done on
  mainframes, with massive amounts of good stuff in the operating system so
  each program wouldn't need it.

  I regard the .fasl files as Lisp's "executables", and the fact that I
  might need to run them from inside a "real" Lisp executable that does
  what would otherwise be shared libraries is a meaningless quibble --
  people seem to accept having to run multiple programs to start their
  applications already.  if it really is such a big deal, making a .fasl
  file become an .exe file that did this on its own seems like such a
  no-brainer I really wonder why people think it's a show-stopper that
  others haven't done it for them.  (yes, I assume this is under Windows
  -- Unix people are used to so much weird shit being "executable", in
  practice whatever the system call execve(2) is happy with.)

#:Erik
From: Kent M Pitman
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <sfwg157v48j.fsf@world.std.com>
Erik Naggum <····@naggum.no> writes:

> * ··@spam.not
> | Being able to build an executable program which can run on different
> | computers that don't have Lisp is not just a commercial concern.
> 
>   but if you want to run it on such computers, won't they _get_ a Lisp?
>   ...
>   I regard the .fasl files as Lisp's "executables",

Excellent points.

Certainly within the freeware arena, they are no less painful to start
than x windows is to get running on linux.  Some people might think
starting a program from a shell instead of clicking with a mouse means
it's not a "real" program, but by that argument, linux out of the box
is such a system.  I still have to type startx manually because I haven't
found the place to edit in the thing that says xdm should run by default
at startup.  [hints by private e-mail are welcome]. 
From: Kent M Pitman
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <sfwwvyhlm76.fsf@world.std.com>
I wrote:

> Certainly within the freeware arena, they are no less painful to start
> than x windows is to get running on linux.  Some people might think
> starting a program from a shell instead of clicking with a mouse means
> it's not a "real" program, but by that argument, linux out of the box
> is such a system.  I still have to type startx manually because I haven't
> found the place to edit in the thing that says xdm should run by default
> at startup.  [hints by private e-mail are welcome]. 

I've gotten about two dozen replies to this by now and should be all set.
Thanks everyone!
From: Erik Naggum
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <3135138431868226@naggum.no>
* Kent M Pitman <······@world.std.com>
| At some point maybe they'll add the feature too, or maybe they'll lose
| customers.  That's the way the market works.  The market is intentionally
| and appropriately pluralistic.  Saying that some implementation doesn't
| suit your need is just saying you should be shopping; saying no
| implementation suits your need is just saying there's a market
| opportunity waiting for you.

  this assumes that the issue in question is sufficiently important that
  the open market will be where the decisions are made.  I'm wary of the
  call for the market to decide issues that are generally irrelevant -- it
  would mean I had to choose from a million different products and would
  spend more time figuring out which irrelevant feature was supported by
  which miniscule vendor's product than it would take to build it on my own
  -- sort of like how the Microsoft third-party market works for people who
  cannot or legitimately refuse to program the incredible insanity Microsoft
  calls products.  to get to the point where the market can satisfy all
  sorts of small issues would mean such a huge demand for stupid things
  that _any_ small issue would get attention.  I don't want a world like
  that.  it's one of the many reasons I don't subscribe to the Microsoft
  world view and its model of competition between umpteen broken things
  that have to be made so cheaply, because of the fierce competition, that
  they could not possibly be of any quality.

  neither competition nor the market are any better than the customers and
  their decisions.  I think the Jenny Jones case is relevant in this regard
  -- to pick randomly from the current news.  there's a _huge_ market for
  talk shows that drag people's personal life into the public despite the
  many bad consequences and the many good reasons not to air such shows.
  the people who ensure that this market exists are ultimately the viewers
  who buy the products that are advertised during those shows, which means
  that the shows exist because of business decisions of the advertisers.
  there are similar remote relationships between most products and their
  markets, and most of the interrelationships between market and marketing
  are amazingly unpalatable and even downright ugly, and the more mass
  market you get, the uglier it gets.

  I'm frankly not sure it's a good idea to call forth these forces without
  a very good grasp of the repercussions.  I prefer to work with the people
  who have already done some significant amount of good work over going off
  and doing my own thing (creating a new market), or choosing somebody else.  only
  if my current vendor does something really stupid that I cannot live with
  will I feel like going elsewhere.  this obviously holds only for products
  whose acquisition carries a meaningful investment with them -- like
  learning to use them or the relationship with the developers.  should the
  acquisition be free of such investment, the cost of choosing something
  else will be low enough that the market will begin to work for small
  issues.  otherwise, it will work only for issues larger than the cost of
  changing product.  and do we really want a competition between products
  so similar that any small issue will be bigger than the cost of changing?

#:Erik
From: Kent M Pitman
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <sfwemkrv441.fsf@world.std.com>
Erik Naggum <····@naggum.no> writes:

> 
> * Kent M Pitman <······@world.std.com>
[my blathering about markets deleted]
>   this assumes that the issue in question is sufficiently important that
>   the open market will be where the decisions are made.

Yes, I agree this is an issue.  I don't think the market will decide small
things well--only coarse-grain things.  But many of these discussions
are over "What's Really Important" and I do think the market's failure to
spawn new variant products over certain issues is a quasi-proof that those were
not the key things people saw as important.
From: Dobes Vandermeer
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to    comp.lang.lisp])
Date: 
Message-ID: <3736538E.6EB5A69F@mindless.com>
Kent M Pitman wrote:
> 
> Christopher C Stacy <······@world.std.com> writes:
> 
> > Dobes Vandermeer <·····@mindless.com> writes:
> > > More important issues are things like packaging; many lisp environment
> > > do not compile executables
> >
> > Huh?  Both major commercial Common Lisp vendors (for Windows and UNIX)
> > have compilers that produce executables.  At least one of them can also
> > produce DLLs.
> 
> Maybe he only meant to say "many lisp environments produced for casual
> use are suitable only for casual use, and only those lisp environments
> that intend to be commercial quality address this commercial concern".

Which LISP environments are those?

Surf to the Allegro 5.0 for UNIX site; it doesn't even mention the
ability to generate executables, only the Windows version does.

LispWorks suffers from the same symptoms, perhaps.  It could be that
they assume that you know they generate executables, but LispWorks only
has a brief note in the Windows LispWorks FAQ regarding the ability to
generate DLLs.

Do they really generate executables?  I was under the impression they
didn't.

CU
Dobes
From: Espen Vestre
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to    comp.lang.lisp])
Date: 
Message-ID: <w6d8099y38.fsf@wallace.nextel.no>
Dobes Vandermeer <·····@mindless.com> writes:

> Do they really generate executables?  I was under the impression they
> didn't.

You seem to be 'under the impression' of a lot of things, e.g. you start
a flame war on the fundamentals of CLOS but later admit that you don't 
really know it well.

(hint: The next time it might be better to ask critical questions
 instead of making bombastic and ill-founded claims)

(Yes, it's very easy to generate standalone executables with ACL 5.0)

-- 

  (espen)
From: Dobes Vandermeer
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to    comp.lang.lisp])
Date: 
Message-ID: <37364ED9.C4BB152E@mindless.com>
Christopher C Stacy wrote:
> 
> Dobes Vandermeer <·····@mindless.com> writes:
> > More important issues are things like packaging; many lisp environment
> > do not compile executables
> 
> Huh?  Both major commercial Common Lisp vendors (for Windows and UNIX)
> have compilers that produce executables.  At least one of them can also
> produce DLLs.

Huh?  There are more than two Common Lisp vendors, and Allegro only
started producing executables under Windows late last year when they
released Allegro 5.0.  Allegro 3.0.2 which I have been using at school
for the last term does not.  I suspect that it is not alone in this
bracket.

CU
Dobes
From: Christopher C Stacy
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to     comp.lang.lisp])
Date: 
Message-ID: <x8lyaixmw3i.fsf@world.std.com>
Well you can "suspect" anything you want, but please don't waste
everybody's time by insisting that your ignorance is our problem.
Personally, I'm solving your problem by excluding your messages from
my news feed from now on.
From: Dobes Vandermeer
Subject: Re: Apology (was Reasons for rejecting Lisp)
Date: 
Message-ID: <37367FD8.347911B7@mindless.com>
Christopher C Stacy wrote:
> 
> Personally, I'm solving your problem by excluding your messages from
> my news feed from now on.

I'm sorry if I have been too argumentative;  I do like and use LISP or I
wouldn't be here.  I am perhaps opinionated and stubborn, and it seems
like I am putting down LISP at every turn;  it's not my intention to
downplay CL but rather to be honest about where I see its weaknesses and
where it could be improved.  People have strong arguments in favor of
the direction that LISP has gone (sometimes a little too strong) and
generally they are not wrong.

The reality of the situation is that I am happy to learn LISP.  As a pet
project I am in the process of implementing my own Lisp dynamic
compiling environment, and in doing so learning the intimate details of the
language (as well as Intel x86 assembly, sigh).  I appreciate the level
of quality that products like LispWorks and Allegro CL provide, and when
I become rich I will probably buy myself a copy of each to develop other
hobby apps in.

Of course, I'd suggest to anyone else that if you are going to post a
message about how ignorant I am, or how much of a liar, etc. etc., either
back it up with useful dialogue or send it to me personally.  I don't
really mind a flame war or whatever you are looking to flog, but there
are a lot of respectable people on the list who simply can't have their
time wasted even more by your useless responses.

Also, this isn't comp.lang.lisp.advocacy, but if you want to run about
in a land of denial and spout angry sermons at anyone who challenges
your special world of LISP, then you could try creating that newsgroup.
If someone asks on the group "What do you think leads people away from
LISP?" then I figure I'm fair game to lay down my case; the one reflects
the many, and if my opinions are so ignorant and off-base, then probably
there are heaps of people who share them.  The question you should ask
yourself, perhaps, is "Why doesn't he know that ALL LISP environments
generate executables like I do?".  The answer will probably be educational
and entertaining, much in the way it can be entertaining to discover
that Canadians in fact neither live in igloos nor drive dog sleds to get
around over the snow-covered tundra.

CU
Dobes
From: Kent M Pitman
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to     comp.lang.lisp])
Date: 
Message-ID: <sfwbtft7evy.fsf@world.std.com>
Dobes Vandermeer <·····@mindless.com> writes:

> Christopher C Stacy wrote:
> > 
> > Dobes Vandermeer <·····@mindless.com> writes:
> > > More important issues are things like packaging; many lisp environment
> > > do not compile executables
> > 
> > Huh?  Both major commercial Common Lisp vendors (for Windows and UNIX)
> > have compilers that produce executables.  At least one of them can also
> > produce DLLs.
> 
> Huh?  There are more than two Common Lisp vendors,

As far as I know (though I don't use it myself),
MCL has been producing executables for many years.

Harlequin LispWorks has been producing executables for years.
It doesn't let you do this in its free version.
One can hardly blame them.

I suspect that Eclipse CL can produce executables, since the way it works
it would be hard for it not to.

I'm pretty sure Corman CL can produce executables.

I don't know about CLISP or GCL.

Still, all told, I think it's not as bad as you suggest.

> and Allegro only
> started producing executables under Windows late last year when they
> released Allegro 5.0. 

Because of being at Harlequin for so long, I didn't have much access
to Allegro for a long time, so I don't know for sure about this.
Maybe a Franz person will comment.  But even if it is true, your
statement "do not compile" was in the present tense.  If you know they
are presently doing so, your statement would be factually in error if
you meant it to apply to this implementation.

> Allegro 3.0.2 which I have been using at school
> for the last term does not.

There is an unaccounted-for Allegro 4 in here, I assume.

> I suspect that it is not alone in this bracket.

Old versions of things often don't do things.  You have some ethical
obligation to be up-to-date before you start disparaging existing
implementations.  The vendors have been working on cleaning up their
act on this over the last several years and deserve credit for this.

Can you in fact name any commercial CL (i.e., one that charges you to
purchase it) that does not produce executables?  Presently I mean--not
"at some time in the past". I'm not saying there isn't one.  I'm just
curious if there is such a one.  I don't know of one.
From: Dobes Vandermeer
Subject: Re: LISP implementations (was Reasons for Rejecting Lisp)
Date: 
Message-ID: <37367AD2.486FC23B@mindless.com>
Kent M Pitman wrote:
> 
> Dobes Vandermeer <·····@mindless.com> writes:
> 
> > Christopher C Stacy wrote:
> > >
> > > Dobes Vandermeer <·····@mindless.com> writes:
> > > > More important issues are things like packaging; many lisp environment
> > > > do not compile executables
> > >
> > > Huh?  Both major commercial Common Lisp vendors (for Windows and UNIX)
> > > have compilers that produce executables.  At least one of them can also
> > > produce DLLs.
> >
> > Huh?  There are more than two Common Lisp vendors,
> 
> As far as I know (though I don't use it myself),
> MCL has been producing executables for many years.
> 
> Harlequin LispWorks has been producing executables for years.
> It doesn't let you do this in its free version.

I wouldn't expect that, although it probably deserves a mention on the
feature list.  It is easy to assume that something listed as an
interpreter is not also a compiler (to executable).

> Still, all told, I think it's not as bad as you suggest.

GCL compiles to ANSI C code, like Eclipse.
CLISP does not compile to executable, but it will compile to a .fas
format which may or may not be machine-code (?)

Typical "major" features of LISP environments look like:

Some generate to C: 
Eclipse (although it also has an interpreter)
ECoLisp
GCL
Star Sapphire Common LISP (lists a "beta" lisp-to-C system)
CLiCC (minimal free system)

Non-compiling interpreters: 
(most small shareware apps) 
Star Sapphire Common LISP (commercial)
RefLISP (minimal free system)
jlisp (minimal free system)
CLISP (supports both modes)
PowerLISP (I think supports both modes)
CMUCL (supports both modes)

Dynamic compiler: 
Gold Hill
Allegro CL
LispWorks
Mac CL
Corman LISP
Medley 2.0 (Xerox)*
CLISP(?)
PowerLISP
CMUCL

Compiles to exe: 
Gold Hill
Allegro CL
LispWorks
Mac CL
CMUCL (?)
[ Corman LISP* ]

Software Engineer (by Raindrop Software)

Interestingly, Corman LISP does NOT generate actual executables, but
rather plays a clever game by creating an image file and copying the
lisp environment's .exe to the same basename as the image, which causes
it to load that image on startup instead of the standard one.  (Unless
this changed very recently without me noticing)  I actually think this
merits mention as almost generating an executable, because at least the
implementation has concern for packaging issues.  Ideally it would have
done a "self-executing" image approach, like WinZip does, by embedding
its executable in the top of the image.

I can't find out without mailing Xerox whether Medley 2.0 can generate
executables, so you could include that in the last category as well.

I omitted a few incomplete or non-CL entries from the list.  You can
take a look at it yourself if you want to really research the topic:
http://www.elwood.com/alu/table/systems.htm

Regardless, there are about 4 commercial implementations that do in fact
appear to generate actual executables, out of a list of about 10 total
(commercial) vendors, and 5 total implementations out of a total
(listed) 

Correct me if I am wrong!

CU
Dobes
From: Bill Newman
Subject: Re: LISP implementations (was Reasons for Rejecting Lisp)
Date: 
Message-ID: <wnewmanFBJ3uE.Jxy@netcom.com>
Dobes Vandermeer (·····@mindless.com) wrote:
: Dynamic compiler: 
...
: CMUCL

No one seems to have mentioned what class CMUCL is in.

CMUCL lets you dump its memory image to disk and reload it into a
fresh CMUCL, or you can just load FASL files into a fresh CMUCL. It's rather
UNIX-centric, so the implementors might consider it a strange question
if you asked "but can you generate a real executable file": the answer
might be that

  #!/bin/sh
  /usr/bin/cmucl -core /usr/local/lib/myprog.core -eval "(startup)"

or

  #!/bin/sh
  /usr/bin/cmucl -eval '(load "/usr/local/lib/myprog/main.fasl")'

*is* a real executable file.

It works pretty well for me, anyway. The only potential disadvantage
would be startup overhead, and it's much less than a second in both cases,
which is good enough for me.

  Bill Newman
  ·······@netcom.com
From: Peter Van Eynde
Subject: Re: LISP implementations (was Reasons for Rejecting Lisp)
Date: 
Message-ID: <slrn7jg3qu.mkf.pvaneynd@mail.inthan.be>
On Mon, 10 May 1999 18:11:01 GMT, Bill Newman wrote:
>  #!/bin/sh
>  /usr/bin/cmucl -core /usr/local/lib/myprog.core -eval "(startup)"
>
>or
>
>  #!/bin/sh
>  /usr/bin/cmucl -eval '(load "/usr/local/lib/myprog/main.fasl")'
>
>*is* a real executable file.

In Linux 2.2.x, with the binfmt_misc module, any .x86f file is an executable binary.

pvaneynd:~$ less /usr/doc/cmucl/examples/Demos/register-lisp-as-executables.sh 
#!/bin/sh
echo ':lisp:E::x86f::/usr/bin/lisp-start:'  > /proc/sys/fs/binfmt_misc/register
# this should only work for root under linux 2.1.XX or later
# now you can do "chmod a+x hello.x86f" and
# ./hello.x86f
# from your favorite shell.

pvaneynd:~$ less /usr/doc/cmucl/examples/Demos/lisp-start                      
#!/bin/sh
/usr/bin/lisp -load $1

There is even a demo of how to use a lisp-server to wait for commands, so you 
avoid the startup-delay of cmucl...

pvaneynd:~$ less /usr/doc/cmucl/examples/Demos/Start-up-server.lisp 
(in-package :user)

(format t "THIS A A HUDGE SECURITY RISC. CHANGE THE PASSWORD!!!~%~%")
(setf mp::*idle-process* mp::*initial-process*)
(mp::start-lisp-connection-listener :port 6789 :password "Clara")

;;; now you can telnet in and do:
#|
$telnet localhost 6789
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Enter password: "Clara"

CMU Common Lisp Experimental 18a+ release x86-linux 2.2.0 cvs, \
running on slartibartfast
Send bug reports and questions to your local CMU CL maintainer,
or to ··@snoopy.mv.com,
or to ·······@uia.ac.be or ··············@uia.ua.ac.be
or to ··········@cons.org. (prefered)

type (help) for help, (quit) to exit, and (demo) to see the demos

Loaded subsystems:
    Python 1.0, target Intel x86
    CLOS based on PCL version:  September 16 92 PCL (f)
* (+ 1 1)

2
* (quit)
Connection closed by foreign host.
|#

Groetjes, Peter

-- 
It's logic Jim, but not as we know it. | ········@debian.org for pleasure,
"God, root, what is difference?",Pitr  | ········@inthan.be for more pleasure!
From: ··@spam.not
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to     comp.lang.lisp])
Date: 
Message-ID: <373899cb.246280082@news.pacbell.net>
On Mon, 10 May 1999 04:00:33 GMT, Kent M Pitman
<······@world.std.com> wrote:

>Harlequin LispWorks has been producing executables for years.
>It doesn't let you do this in its free version.
>One can hardly blame them.

They could make the executable announce that it's a demo
version built from a demo version of LispWorks.  A crippled
version is a very poor substitute for a full evaluation
version.  Nags, etc., are a much more reasonable way to
prevent commercial use of a demo version than to cripple
the product, because the missing features of a crippled
product prevent a valid evaluation.  To keep the nags from
bothering those who would rather have the crippled version,
the nags can just be put in those features that would have
been missing in the crippled version.

A good example of nags is the latest version of ISE Eiffel,
which can be downloaded from www.eiffel.com.  It's a full
version with all features and no time limit, but with some
nags to make it useless for commercial work.  It does
have more nags than really needed for that purpose.  The
vendors should be careful to strike the right balance in
that kind of thing, because too much nagging can be
annoying and defeat the purpose of trying to get more sales.
ISE Eiffel should only have the one nag when you start an
executable built with the demo version.  All the other nags
should be removed, because they only serve to annoy,
and that one nag in the executable more than does the
job of motivating people to buy ISE Eiffel instead of using
the free version.

Those who prefer C++ over Lisp because they want more
efficient executables might find Eiffel a good compromise.
You also don't have to pay ISE nearly as much money as
you would have to pay one of the major Lisp vendors.  And
the browser in ISE Eiffel is wonderful, once you learn how
to use all its features.  And it's probably a lot easier to
sell your boss on the idea of using Eiffel than Lisp, by
talking about the "design by contract" features, etc.,
and showing how reasonable and efficient the executables
are.
From: Hartmann Schaffer
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to     comp.lang.lisp])
Date: 
Message-ID: <7tDZ2.29589$134.334637@tor-nn1.netcom.ca>
In article <···············@world.std.com>,
	Kent M Pitman <······@world.std.com> writes:
> ...
> As far as I know (though I don't use it myself),
> MCL has been producing executables for many years.
> 
> Harlequin LispWorks has been producing executables for years.
> It doesn't let you do this in its free version.
> One can hardly blame them.
> 
> I suspect that Eclipse CL can produce executables, since the way it works
> it would be hard for it not to.
> 
> I'm pretty sure Corman CL can produce executables.
> 
> I don't know about CLISP or GCL.

CLISP doesn't.

GCL compiles to C, and afaik C produces executables

> ...

Hartmann Schaffer
From: Pierre R. Mai
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to     comp.lang.lisp])
Date: 
Message-ID: <871zgox55i.fsf@orion.dent.isdn.cs.tu-berlin.de>
··@inferno.nirvananet (Hartmann Schaffer) writes:

> GCL compiles to C, and afaik C produces executables

No, although GCL uses C (and gcc) for its compilation step (thus making
it not a very fast compiler), it just loads the obtained object
code directly into the Lisp image.  AFAIK, there is no method of
generating "standalone" executables via gcl (i.e. you just dump out
images, like you do with most other Lisps).  That is, the "only"
difference in this regard between GCL and e.g. CMU CL is that GCL
uses GCC as its low-level assembler, and its object files are
called *.o (+ a couple of other files, IIRC, needed for fixups and
data, since *.o files couldn't hold all the information needed),
whereas CMU CL has its own assembler, and its files are called
*.x86f (on x86 platforms).

Just because something uses a C compiler somewhere doesn't imply it
will magically fit the C/Unix world-view of an executable.

If you want that, you should take a look at Eclipse or EcoLisp, which
compile to "standalone" C applications (i.e. they are made to fit the
C/Unix world-view).

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Tim Bradshaw
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to     comp.lang.lisp])
Date: 
Message-ID: <ey3yaixig68.fsf@lostwithiel.tfeb.org>
* Dobes Vandermeer wrote:

> Huh?  There are more than two Common Lisp vendors, and Allegro only
> started producing executables under Windows late last year when they
> released Allegro 5.0.  Allegro 3.0.2 which I have been using at school
> for the last term does not.  I suspect that it is not alone in this
> bracket.

I don't know if it (3.x) produces executables as such but it certainly
produces things that are standalone enough that they fit on a floppy
including all the runtime support & GUI stuff.  Until recently I
worked for some people who did exactly that: we had a paper at last
year's LUGM about exactly this.

--tim
From: Dobes Vandermeer
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup  to     comp.lang.lisp])
Date: 
Message-ID: <37376C77.8075FFF6@mindless.com>
Tim Bradshaw wrote:
> 
> * Dobes Vandermeer wrote:
> 
> > Huh?  There are more than two Common Lisp vendors, and Allegro only
> > started producing executables under Windows late last year when they
> > released Allegro 5.0.  Allegro 3.0.2 which I have been using at school
> > for the last term does not.  I suspect that it is not alone in this
> > bracket.
> 
> I don't know if it (3.x) produces executables as such but it certainly
> produces things that are standalone enough that they fit on a floppy
> including all the runtime support & GUI stuff.  Until recently I
> worked for some people who did exactly that: we had a paper at last
> year's LUGM about exactly this.

Yeah, you generate an image file and then pass that as an argument to
the LISP.EXE in a shortcut, AFAIK.

You lose the power to accept command-line parameters and similar, but
that's not a huge loss.  Unfortunately the interface to do so is a little
difficult to use properly; there are a few options and so forth.  I
imagine you could figure it out and use it fairly easily after some
practice and research.

CU
Dobes
From: Marco Antoniotti
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup   to     comp.lang.lisp])
Date: 
Message-ID: <lwd808j999.fsf@copernico.parades.rm.cnr.it>
Dobes Vandermeer <·····@mindless.com> writes:

	...
> 
> You lose the power to accept command-line parameters and similar, but
> thats not a huge loss.  Unfortunately the interface to do so is a little
> difficult to use properly, there are a few options and so forth.. I
> imagine you could figure it out and use it fairly easily after some
> practice and research.
> 

Command Line parameters...... on a Mac?

Do you realize that Java specifies that programs such as

	public class IAmNotPortable {
	  public static void main(String[] args) {
	    ...
	  }
        }

are not portable?

Cheers


-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Fernando Mato Mira
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup    to     comp.lang.lisp])
Date: 
Message-ID: <37380541.9403D632@iname.com>
Marco Antoniotti wrote:

> Command Line parameters...... on a Mac?

MacOS X?
From: Tim Bradshaw
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to   comp.lang.lisp])
Date: 
Message-ID: <ey390azla78.fsf@lostwithiel.tfeb.org>
* Dobes Vandermeer wrote:

> LISP's object-oriented paradigm is powerful and yet... It's essentially
> dynamic operator overloading, which is about as interesting to an object
> modeller as a stack of bricks.  Although this is certainly only an
> opinion and perhaps a trend, object orientation is centered on message
> passing, while LISP's object orientation is based on function calls
> (still).  I won't say its not USEFUL, but its not attractive, and its
> not a step forward unless you are writing "toy" applications.

Please, read some history.  Lisp's object-orientation is based on
function-calling *now*.  Once upon a time when Lisp was younger there
were systems based on explicit message passing.  Unfortunately message
passing is not as general as function calling because you can only
pass messages to one object at a time without twisting your brain so
hard it breaks.

And if you like message passing such a lot just implement it, it's not
exactly hard:

    (defmacro define-message (name &optional (signature '(arg)))
      `(progn
	(unless (fboundp ',name)
	  (defgeneric ,name ,signature))
	(defvar ,name #',name)
	',name))

    (declaim (inline send))
    (defun send (thing message &rest args)
      (declare (dynamic-extent args))
      (apply thing message args))

--tim
From: Lyman S. Taylor
Subject: Re: Reasons for rejecting Lisp (was Re: Newbie questions [Followup to    comp.lang.lisp])
Date: 
Message-ID: <3734B6EF.FD835DDD@mindspring.com>
Tim Bradshaw wrote:
...
>         (defvar ,name #',name)
...
>     (declaim (inline send))
>     (defun send (thing message &rest args)
>       (declare (dynamic-extent args))
>       (apply thing message args))

"Message" is a function. Therefore, shouldn't that last line read 

         (apply  message thing args )

 or alternatively just make send a macro since this is really just
 "sugar" to hide the generic function. [ I presume that defmethod was
 being used to define the "methods" ] 

       (defmacro  send ( thing message &rest args )
         `(,message ,thing ,@args ) )
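
 For what it's worth, a tiny usage sketch of that macro version (the
 class and generic-function names here are invented, and it assumes the
 "methods" are defined with defmethod as presumed above):

       ;; (send fido speak "Hello") macroexpands to (speak fido "Hello"),
       ;; i.e. an ordinary generic-function call.
       (defclass dog () ())

       (defmethod speak ((self dog) greeting)
         (format t "~&~A, woof!~%" greeting))

       (let ((fido (make-instance 'dog)))
         (send fido speak "Hello"))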


 Another approach (as opposed to layering on top of generic functions) would
 be for objects to have "message" slots. and then "send" would look more like


       (defmacro  send  ( thing message &rest args )
          `(funcall (,message ,thing) ,@args ))

        
 The slot definition being something akin to 

           (msg :initform #'(lambda (...args..) ... body ... )
                :type  function 
                :reader msg
                :allocation :class )

 Where you'd define the messages "inside" the class definition; similar to 
 Java's style.   So you truly would have "members that are functions". 
 You'd lose the "next-method" mechanism, so this isn't quite what you'd 
 have to do.  Or even prefer to do. :-)
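
 For illustration, a self-contained sketch of that slot-based variant
 (the class and message names are invented, and it relies on the
 funcall-based send macro above):

       (defclass greeter ()
         ;; the "message" is just a class-allocated slot holding a function
         ((greet :initform #'(lambda (name) (format t "~&Hello, ~A!~%" name))
                 :type function
                 :reader greet
                 :allocation :class)))

       ;; (send (make-instance 'greeter) greet "world") expands to
       ;; (funcall (greet (make-instance 'greeter)) "world").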

 Additionally, if you didn't want to "overload" a name multiple times
 within a single class definition, this should work.  However, there is a
 disconnect between a desire for a "function name" with multiple argument
 signatures and CLOS.  So this isn't really "new".  That's not really about
 "message passing", but language support for "name mangling".  Those are
 two different things.  Smalltalk doesn't have "name mangling" and I'm 
 pretty sure most consider it "message passing". :-)


P.S.  Netscape insists on printing backquotes as forward quotes. I cut and
      pasted back into a Listener, so I think there really are backquotes
      leading the body of those macros. If not please replace at your end.

--

Lyman
From: Raymond Toy
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <4nk8unb0hc.fsf@rtp.ericsson.se>
>>>>> "Joachim" == Joachim Achtzehnter <·······@kraut.bc.ca> writes:

    Joachim> persists in practice. I agree strongly with Joshua that lack of static
    Joachim> type checking is one of the main disadvantages of (at least) the
    Joachim> commercial Common Lisp implementation I am familiar with.

Get another implementation?  I'm not that familiar with commercial
implementations on this topic, but certainly CMUCL can do a lot of
static type checking, roughly equivalent to any C/C++ compiler, and,
in some ways, a lot more.
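
For example, a minimal sketch of the kind of mismatch CMUCL will flag
at compile time (the function name is made up, and the exact wording of
the warning varies):

    (declaim (ftype (function (fixnum fixnum) fixnum) add-counts))
    (defun add-counts (a b)
      (+ a b))

    (defun broken-caller ()
      ;; A string where a FIXNUM was declared; compiling this file makes
      ;; CMUCL warn about the type mismatch instead of leaving it for a
      ;; runtime error.
      (add-counts "one" 2))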

    Joachim> Well, I tend to disagree. Adding static type checking (optional of
    Joachim> course) would go a long way towards convincing experienced C++/Java
    Joachim> programmers to take another look at Lisp. Of course, I can be certain
    Joachim> only about my own opinion, others may disagree. I have heard a

I always thought C++/Java programmers were turned off by the syntax,
not the lack of type checking.  As a C/C++ programmer, I rather enjoy
not having to type everything when writing in Lisp.

    Joachim> arguments, passing a wrong argument, etc.  With existing Lisp
    Joachim> implementations many such errors are detected only at runtime even
    Joachim> when declarations are used. This is less problematic with mainline

Get a better implementation?

    Joachim> you're lucky. Yes, you should test all your code, but the kind of bug
    Joachim> we're talking about is often introduced by changes that are so
    Joachim> 'obvious' that many developers don't imagine a bug may have been
    Joachim> introduced.

I would say your software engineering process is broken in this case,
and no language will protect you from this kind of problem.

Ray
From: Joshua Scholar
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <3730d086.5562143@news.select.net>
On 05 May 1999 18:44:31 -0400, Raymond Toy <···@rtp.ericsson.se>
wrote:

>
>    Joachim> you're lucky. Yes, you should test all your code, but the kind of bug
>    Joachim> we're talking about is often introduced by changes that are so
>    Joachim> 'obvious' that many developers don't imagine a bug may have been
>    Joachim> introduced.
>

It's not about 'obviousness' so much as pressure and lack of time.

>I would say your software engineering process is broken in this case,

Welcome to the real world.  The only time I've ever seen an office
where the programmers had all the time they needed for designing,
documenting, debugging and other 'engineering' process tasks was in a
government job.  But then I work in the games industry which is
entertainment after all - flakiness and ego are expected to be main
ingredients of both management and engineering here.  Not that I'm
complaining - for one thing I fit right in :)

>and no language will protect you from this kind of problem.
>
>Ray

Oh, a language can help by making testing less necessary.  It's a good
thing for it to be doing in any case.

Joshua Scholar
From: Erik Naggum
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <3134940176823334@naggum.no>
* ·····@removethisbeforesending.cetasoft.com (Joshua Scholar)
| Welcome to the real world.  The only time I've ever seen an office where
| the programmers had all the time they needed for designing, documenting,
| debugging and other 'engineering' process tasks was in a government job.

  I love it when people think theirs is the only corner of the world that
  is worthy of the trademark Real World.  one usually doesn't need any more
  evidence to discard people than that.

  I have had the opportunity to do everything I have wanted to do for my
  client, and have been able to spend all the time I have wanted on what I
  think is important to the end goal: that I shall be able to leave the
  project and it shall continue to be operational for a long time to come.
  only when my management has decided on some particular deadline for some
  particular feature has this idyllic image been disturbed.  this is not a
  government project -- it's a financial news agency's distribution system,
  where we care more about accuracy and fault tolerance than anything else.

#:Erik
From: Sam Steingold
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <m390b2qnfn.fsf@eho.eaglets.com>
>>>> In message <················@naggum.no>
>>>> On the subject of "Re: Newbie questions [Followup to comp.lang.lisp]"
>>>> Sent on 06 May 1999 00:42:56 +0000
>>>> Honorable Erik Naggum <····@naggum.no> writes:
 >> * ·····@removethisbeforesending.cetasoft.com (Joshua Scholar)
 >> | Welcome to the real world.  The only time I've ever seen an office where
 >> | the programmers had all the time they needed for designing, documenting,
 >> | debugging and other 'engineering' process tasks was in a government job.
 >> 
 >>   I love it when people think theirs is the only corner of the world that
 >>   is worthy of the trademark Real World.  one usually doesn't need any more
 >>   evidence to discard people than that.
 >> 
 >>   I have had the opportunity to do everything I have wanted to do for my
 >>   client, and have been able to spend all the time I have wanted on what I
 >>   think is important to the end goal...

Is this why your CLEmacs (http://sourcery.naggum.no/emacs/clemacs.html)
project has been dormant for almost 2 years now? :-)

Apparently it's not important to the end goal...

[No, you do not owe anyone anything, and I am not blaming you for
anything, and I am not whining &c &c &c.]

-- 
Sam Steingold (http://www.goems.com/~sds) running RedHat6.0 GNU/Linux
Micros**t is not the answer.  Micros**t is a question, and the answer is Linux,
(http://www.linux.org) the choice of the GNU (http://www.gnu.org) generation.
I may be getting older, but I refuse to grow up!
From: Bagheera, the jungle scout
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <7gsdu8$2nt$1@nnrp1.deja.com>
In article <················@news.select.net>,
  ·····@removethisbeforesending.cetasoft.com (Joshua Scholar) wrote:
> On 05 May 1999 18:44:31 -0400, Raymond Toy <···@rtp.ericsson.se>
> wrote:
> >I would say your software engineering process is broken in this case,
>
> Welcome to the real world.  The only time I've ever seen an office
> where the programmers had all the time they needed for designing,
> documenting, debugging and other 'engineering' process tasks was in a
> government job.  But then I work in the games industry which is
> entertainment after all - flakiness and ego are expected to be main
> ingredients of both management and engineering here.  Not that I'm
> complaining - for one thing I fit right in :)

must've been a good govt. job.
I've been working for the government for several years now
and I've only had 1 position where your situation held true.
But that was a group filled with young, idealistic, crusading
programmers that had little control enforced over them from
management.  Maybe the younger generation just knows a better
way of getting the work done?

In general, though, all the jobs I have had have been flaky, with
egomaniacal managers and leads.  I find this kind of atmosphere too
difficult to be productive in.  Chaos begets chaos.

--
Bagherra <·······@frenzy.com>
http://www.frenzy.com/~jaebear
  "What use is it to have a leader who walks on water
       if you don't follow in their footsteps?"

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    
From: Joshua Scholar
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <373226af.537252@news.select.net>
On Thu, 06 May 1999 15:53:46 GMT, Bagheera, the jungle scout
<········@my-dejanews.com> wrote:

>In article <················@news.select.net>,
>  ·····@removethisbeforesending.cetasoft.com (Joshua Scholar) wrote:
>> On 05 May 1999 18:44:31 -0400, Raymond Toy <···@rtp.ericsson.se>
>> wrote:
>> >I would say your software engineering process is broken in this case,
>>
>> Welcome to the real world.  The only time I've ever seen an office
>> where the programmers had all the time they needed for designing,
>> documenting, debugging and other 'engineering' process tasks was in a
>> government job.  But then I work in the games industry which is
>> entertainment after all - flakiness and ego are expected to be main
>> ingredients of both management and engineering here.  Not that I'm
>> complaining - for one thing I fit right in :)
>
>must've been a good govt. job.
>I've been working for the government for several years now
>and I've only had 1 position where your situation held true.
>But that was a group filled with young, idealistic, crusading
>programmers that had little control enforced over them from
>management.  Maybe the younger generation just knows a better
>way of getting the work done?
>
>In general, though, all the jobs I have had have been flaky, with ego
>maniacal managers and leads.  I find this kind of atmosphere too
>difficult to be productive in.  Chaos begets chaos.

Two caveats:
1. I didn't say WHICH government.  The department was Agriculture
Canada.
2. The manager I met was a flaky, egomaniacal man who insisted on
meticulous software engineering.  But the only reason he could get
away with all he did (he was having programmers reinvent the wheel and
be WAY too fancy instead of buying off-the-shelf packages) was that he
didn't have to turn a profit.

Joshua Scholar
From: Christopher Browne
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <4KfY2.2160$Xs1.357331@news1.giganews.com>
On Wed, 05 May 1999 17:34:40 GMT, Joachim Achtzehnter
<·······@kraut.bc.ca> wrote:  
>Kent M Pitman <······@world.std.com> writes:
>>
>> ·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:
>> 
>> > Well Bagheera didn't state the problem quite right.  The overall
>> > point is that type checking saves you from tons and tons of late
>> > night typos and logic errors.
>> 
>> Nothing in CL forbids you from type-declaring every variable. Knock
>> yourself out.  Don't forget to send bug reports when the compiler
>> fails to use them well or flag problems, so your vendor will know
>> you care.
>
>It is true that one cannot fault the language for this. Nevertheless,
>until vendors listen to and act on these 'bug reports' the problem
>persists in practice. I agree strongly with Joshua that lack of static
>type checking is one of the main disadvantages of (at least) the
>commercial Common Lisp implementation I am familiar with.

I agree strongly that the lack of static type checking is perceived
*by you and Joshua* as a main disadvantage of CL implementations.

One of the major problems with C++ is that it requires static
typechecking, which means that building generic functions requires the
contortions of templates, which have taken years to settle down enough
that compilers could coherently implement them.  (And it is not at all
clear that template-based code is portable between different
template-implementation approaches.)  

The fact that CL doesn't *force* you to restrict the manipulable types
at the time that you write the code means that you don't need
templates; you just need to go write CL code, and use it on whatever
datatypes you need to use it on.
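
To make that concrete, a small made-up example: one definition covers
integers, floats, ratios (whatever + and * accept), with no template
machinery in sight:

    (defun sum-of-squares (sequence)
      (reduce #'+ sequence :key (lambda (x) (* x x))))

    ;; (sum-of-squares '(1 2 3))    => 14
    ;; (sum-of-squares '(1.5 2.5))  => 8.5
    ;; (sum-of-squares #(1/2 1/3))  => 13/36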

You're not likely to get agreement that the problem *that you
perceive* represents a problem *in fact.*

>> But the language itself already supports this.  It's simply up to
>> the market to decide if this is what stands between it and
>> success. I doubt it is, but you're welcome to make the case
>> otherwise.
>
>Well, I tend to disagree. Adding static type checking (optional of
>course) would go a long way towards convincing experienced C++/Java
>programmers to take another look at Lisp. Of course, I can be certain
>only about my own opinion, others may disagree. I have heard a
>representative of a Lisp vendor seriously argue that Lisp code must
>look as simple as Java code to be competitive. IMHO, static type
>checking is an order of magnitude more important than that. :-)

Adding in a requirement to put semicolons at the ends of statements,
and changing over to C++ syntax, might also go a long way towards
convincing C++/Java advocates to take another look at Lisp.

By the way, if static type checking is, as you say, an order of
magnitude more important than simplicity of appearance, would you then
pay an order of magnitude more for the Lisp implementation?  

Money talks; if the Lisp vendors figure out that they can make ten
times as much (a decimal order of magnitude) in sales as a result of
implementing this functionality, then I don't think they'll sit back
on their haunches with some "religious" attitude about not doing it
because they think it's a silly idea; they'll say:

"Cool!  If we invest $100,000 paying a developer or two to add static
type checking, and when our sales increase by an order of magnitude,
from $10M to $100M, this $100K investment will pay off handsomely!"

[Hint: The fact that they haven't done so means that they obviously
consider this "enhancement" to *not* be of that much economic value.]

>> I won't try to tell you not to write declarations if you won't try
>> to tell me that I must write them.
>
>Sure, there is no need to take away this flexibility.
>
>> > In code that rarely runs or isn't expected to run under
>> > normal conditions, this sort of correctness checking is very
>> > important.
>> 
>> You don't say what your point is.
>
>The point is probably this: A C++/Java compiler cannot catch all
>errors, especially not design or logical errors, but at least it
>catches most simple errors like typos, passing the wrong number of
>arguments, passing a wrong argument, etc.  With existing Lisp
>implementations many such errors are detected only at runtime even
>when declarations are used. This is less problematic with mainline
>code which is likely to be run by the developer anyway, but typos in
>sections of the source code that are less frequently run have the
>habit of crashing in the hands of a user, or the QA department if
>you're lucky. Yes, you should test all your code, but the kind of bug
>we're talking about is often introduced by changes that are so
>'obvious' that many developers don't imagine a bug may have been
>introduced.

I have a whopping lot more problems with things other than type
declarations.

Feel free to believe whatever you want about the "order of magnitude"
importance of static type checking.  If you lose credibility as a
result of this belief, that's not my problem.
-- 
Where do you *not* want to go today? "Confutatis maledictis, flammis
acribus addictis"  (<http://www.hex.net/~cbbrowne/msprobs.html>
········@ntlug.org- <http://www.ntlug.org/~cbbrowne/lsf.html>
From: Joachim Achtzehnter
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <ucd80erqrh.fsf@soft.mercury.bc.ca>
········@news.hex.net (Christopher Browne) writes:
> 
> One of the major problems with C++ is that it requires static
> typechecking, which means that building generic functions requires
> the contortions of templates, which have taken years to settle down
> enough that compilers could coherently implement them.

Yes, and they are still not as flexible as they should be. There is a
lot of research going on in the area of virtual types and
genericity. Don't be surprised to see a revision of C++ templates in
the future, or the emergence of a new language. In fact, the
discussion about adding genericity to Java has prompted a lot of
activity in this area of research. The point with all this is that
languages that are alive tend to learn from experience and improve
over time. I could say something about dead languages but this would
have an undesired effect in this newsgroup :-) 

Note: this last sentence was meant to be a joke! Keep in mind that I
am using Lisp myself.

> Money talks; if the Lisp vendors figure out that they can make ten
> times as much (a decimal order of magnitude) in sales as a result of
> implementing this functionality, then I don't think they'll sit back
> on their haunches with some "religious" attitude about not doing it
> because they think it's a silly idea; they'll say:
> 
> "Cool!  If we invest $100,000 paying a developer or two to add static
> type checking, and when our sales increase by an order of magnitude,
> from $10M to $100M, this $100K investment will pay off handsomely!"

I don't share your belief in the power of the market to lead us to
paradise. If the market had this power, would Lisp be the fringe
language it is? Would Microsoft be the most successful software
company?

> Feel free to believe whatever you want about the "order of
> magnitude" importance of static type checking.

Definitely.

> If you lose credibility as a result of this belief, that's not my
> problem.

Indeed.

Joachim

-- 
·······@kraut.bc.ca      (http://www.kraut.bc.ca)
·······@mercury.bc.ca    (http://www.mercury.bc.ca)
From: Pierre R. Mai
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <87emktycnn.fsf@orion.dent.isdn.cs.tu-berlin.de>
Joachim Achtzehnter <·······@kraut.bc.ca> writes:

> I don't share your belief in the power of the market to lead us to
> paradise. If the market had this power, would Lisp be the fringe
> language it is? Would Microsoft be the most successful software
> company?

*IT IS BECAUSE THE MARKET HAS THIS POWER* that Microsoft is the most
"successful" software company.  It is the direct consequence of the
fact that most participants in the shrink-wrapped software market
don't really care about quality, safety, ease-of-use and TCO, their
whining notwithstanding.  Free markets work like democracies: Given
responsible, ethical and intelligent customers / citizens, they can
create paradise.  Of course, in most mass markets / communities, those
customers / citizens are in a stark minority, and thus paradise is
unattainable[1].

Back to Common Lisp:

If Lisp is about anything, it's about doing the right thing, or at
least trying very very hard to do so, and recognizing that to achieve
this, you have to move very carefully, yet move nonetheless.  Much
thought and philosophical analysis has gone into nearly all of the
features you'll find in the current ANSI CL spec, and I expect the
same of any other new ingredient that might be added to that
self-same standard in the future.

That high standard of doing the right thing is the single most
outstanding reason that I'm working with CL.  It is not the dynamism,
the machismo, or the nice parentheses.  It is that I care about doing
the right thing, it is that my customers expect this of me (or else
they will not be my customers), and I trust Common Lisp to help me
achieve this.

So before we add some kind of (mandatory) compile-time type-checking
to Lisp, I'd want to be very sure that we are doing the right thing
here, that we've understood the issues at hand, the limitations,
the users' expectations, and the ability of implementations to
successfully implement this reliably.

Well, and C++/Java-style "type-checking" strikes me as being most
obviously not the right thing.

IMNSHO, IF C++ or Java programmers *really* cared about type-checking
at compile-time, they wouldn't be using those languages in the first
place.  To care about type-checking, you'd want to have a sensible and
expressive type-system to start with.  So you'd want at least something
like Ada, which also has bounds checking and a lot of other stuff you'd
want, or much better yet, you'd want something like the current crop of
typed pure functional languages, like Haskell or OPAL, etc., which not
only have expressive type systems and compile-time type-checking but
also type inference, and, at least in the case of OPAL, an algebraic
property language that allows you to specify further properties of
your defined operations, to be checked at compile-time where possible.

That would be the course of action for C++ programmers that *really*
cared about compile-time type-checking.

Since the above-mentioned languages don't have huge numbers of new
recruits either, it seems to me that most C++ programmers care about
compile-time type-checking like they care about OOP, or speed, or
portability, or most other things for that matter: It is a
must-have-it-and-exactly-this-way feature, yet they don't really know
what it is, or care about the underlying semantics or problems.  As
long as the compiler acts like it's doing something, everything's just
fine.  The sayings "Lisp programmers know the value of everything and
the cost of nothing" vs. "C++ programmers know the cost of everything,
yet the value of nothing" come to mind.  And since the time of that
saying, I think CL programmers at least have come a long way toward
knowing the cost of things, yet I still see no movement in the C++
community toward caring about the value of things.

Since I've heard in this discussion that C++-style type-checking is
mostly helpful in uncovering simple mistakes in argument order, etc.
(which strikes me as being mostly unrelated to type-checking), I'd
suggest using environments that reduce the likelihood of MAKING those
mistakes.  Although I've not been prone to making those mistakes even
in languages like C++, in my Lisp environment, not only have I got the
arglist and documentation of every Lisp operator -- whether built-in or
user-defined -- at my fingertips, but also the complete Hyperspec, all
just a keypress away.  So if I'm only a tiny bit in doubt about
argument order, or semantics, I look them up.  And guess what, I'm
quite a bit better at doing that kind of checking than any compiler
I've yet seen (besides the SSC of course ;).

Of course I'd still be very willing to listen to the ideas and designs
of someone who *REALLY* cares about type-checking at compile-time.  But
I fear that since the type systems of modern pure functional languages
are still a much researched area, meaningfully type checking a language
like CL seems to me to be worthy of a number of PhDs before the design 
space is clearly understood.

Of course, a much more "simplistic" kind of type-checking can be
attained with CMU CL today... Not that I see most former C++ users
flocking to CMU CL instead of other CL implementations though...
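
As an illustration only (this sketch is not from the original post;
the function name is invented and the exact diagnostics vary by
implementation), the kind of declaration-driven checking meant here
looks roughly like this:

;;; Proclaim a function type; CMU CL's compiler uses such declarations
;;; for compile-time type inference.
(declaim (ftype (function (string) (integer 0)) count-vowels))

(defun count-vowels (s)
  "Count the vowels in the string S."
  (count-if (lambda (c) (find c "aeiou")) s))

(defun demo ()
  ;; Compiling this can draw a compile-time warning from CMU CL, since
  ;; the constant 42 is visibly not a STRING.
  (count-vowels 42))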

Regs, Pierre.

Footnotes: 
[1]  Note that these ideas are over 2000 years old...

[2]  In reality of course, it's the languages' communities that care,
and not the language itself ;)

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Joachim Achtzehnter
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <uc4slpsngi.fsf@soft.mercury.bc.ca>
····@acm.org (Pierre R. Mai) writes:
> 
> Much thought and philosophical analysis has gone into nearly all of
> the features you'll find in the current ANSI CL spec, and I expect
> the same of any other new ingredient that might be added to that
> self-same standard in the future... So before we add some kind of
> (mandatory) compile-time type-checking to Lisp, I'd want to be very
> sure that we are doing the right thing here

If you read the discussion in this thread carefully you will find that
not everybody, and certainly not me, was asking for a language
change. The suggestion was for implementations to make use of the type
information that can already be specified using existing language
features. I am referring here to types specified using the declare
form, type specifiers of slots, argument types in generic functions,
and perhaps other things I can't think of at the moment.

Especially in the CLOS area, where the language already requires a lot
more type checking than in other parts of CL, I find the behaviour of
the commercial implementation I'm using rather puzzling. Take this
example:

(defclass foo () ())
(defgeneric bla ((self foo) (num integer)))
(defmethod bla ((self foo) arg2 arg3))

The last form raises an error because the method bla doesn't conform
to the generic function's signature. Good.

But when I do this:

(defun bob () (bla (make-instance 'foo) 1 2))

which includes an incorrect call to the same generic function, there is
no error and no warning. The environment clearly has the information
to warn me about this error here and now, but it doesn't. Instead, I
or the QA department get an exception when (bob) is called at runtime.

So, I'm not asking for any deep change in Lisp, the language, just
some early warnings from the implementation's compiler about my
frequent blunders :-(

> Well, and C++/Java-style "type-checking" strikes me as being most
> obviously not the right thing.

Well, there is room for improvement, but I wouldn't discount it as
harshly as you do.

> you'd want something like the current crop of typed pure functional
> languages, like Haskell or OPAL, etc.,

You may want to add Cecil and Idea to this list.

> That would be the course of action for C++ programmers that *really*
> cared about compile-time type-checking.

C++ has changed a lot over time because the community was not afraid
to learn from others. If this tradition continues, you can expect the
results of research on these languages to make their way into C++ at some
point. In contrast, the Lisp community, if this newsgroup is any
indication, seems to totally discount the value of static typing. This
can mean a number of things, of course, including the possibility that
I'm totally off-base. :-)

> The sayings "Lisp programmers know the value of everything and the
> cost of nothing" vs. "C++ programmers know the cost of everything,
> yet the value of nothing" come to mind.

You may also want to consider Guy Steele's statement suggesting that
programming language design is an evolutionary process.

> Since I've heard in this discussion that C++-style type-checking is
> mostly helpful in uncovering simple mistakes in argument order, etc.
> (which strikes me as being mostly unrelated to type-checking), I'd
> suggest using environments that reduce the likelihood of MAKING those
> mistakes.  Although I've not been prone to making those mistakes even
> in languages like C++, in my Lisp environment, not only have I got the
> arglist and documentation of every Lisp operator -- whether built-in or
> user-defined -- at my fingertips, but also the complete Hyperspec, all
> just a keypress away.  So if I'm only a tiny bit in doubt about
> argument order, or semantics, I look them up.

This is a misunderstanding of what I was trying to express by the term
'simple error'. When referring to simple mistakes I am talking
precisely about situations when I have absolutely NO DOUBT about
what I'm doing, but get it wrong anyway, like forgetting an argument
in a call, accidentally passing the wrong variable, etc. etc.

> Of course, a much more "simplistic" kind of type-checking can be
> attained with CMU CL today... Not that I see most former C++ users
> flocking to CMU CL instead of other CL implementations though...

Others have mentioned CMU CL as well, I'll take a look at this.
Unfortunately, it won't help in my current situation where we depend
on third-party packages which are not available for CMU CL, not to
mention a heavy investment in our own existing code.

Joachim

-- 
·······@kraut.bc.ca      (http://www.kraut.bc.ca)
·······@mercury.bc.ca    (http://www.mercury.bc.ca)
From: Erik Naggum
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <3135028813767635@naggum.no>
* Joachim Achtzehnter <·······@kraut.bc.ca>
| In contrast, the Lisp community, if this newsgroup is any indication,
| seems to totally discount the value of static typing.

  that's funny -- I read it exactly the opposite way.  Lispers care about
  typing, including static type information, and because they care, they
  know what kind of costs are involved in them relative to the benefits and
  why the C++ model is so fundamentally braindamaged as to become totally
  unpalatable and useless.  _because_ we value static type information, but
  also know the costs, we have decided against anal-retentive tools, but
  would use tools that can utilize such information productively.  however,
  the kinds of mistakes that you seem to think are so important do not in
  fact occur often enough to be a significant problem, so the value would
  lie in optimization across function calls.  this is dangerous territory
  in an environment where you can change function definitions dynamically.

#:Erik
From: Joshua Scholar
Subject: What platforms available for CMU CL (was Newbie questions)
Date: 
Message-ID: <37324786.8944894@news.select.net>
On Fri, 07 May 1999 01:00:48 GMT, Joachim Achtzehnter
<·······@kraut.bc.ca> wrote:

>
>Others have mentioned CMU CL as well, I'll take a look at this.
>Unfortunately, it won't help in my current situation where we depend
>on third-party packages which are not available for CMU CL, not to
>mention a heavy investment in our own existing code.
>
>Joachim

As far as I could tell by looking at CMUCL documentation, CMUCL is for
UNIX only.  Is there a Windows version?

Joshua Scholar
From: Christopher R. Barry
Subject: Re: What platforms available for CMU CL (was Newbie questions)
Date: 
Message-ID: <87hfppy4f3.fsf@2xtreme.net>
·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:

> As far as I could tell by looking at CMUCL documentation, CMUCL is for
> UNIX only.  Is there a Windows version?

Nope. (Someone once mentioned a few of the cmucl-imp people are
interested in one.)

Try:
http://www.franz.com
http://www.harlequin.com
http://www.corman.net (Free with source-code + 30-day shareware IDE).
http://clisp.cons.org (GNU GPL)

Or my personal recommendation:
http://www.debian.org (get a _real_ OS)

then visit franz.com to get the Linux Trial Edition and also grab the
Debian CMU CL packages. If you like visual IDEs there is also
Harlequin's new Linux Personal Edition.

Christopher
From: Thomas A. Russ
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <ymihfpolqrl.fsf@sevak.isi.edu>
Joachim Achtzehnter <·······@kraut.bc.ca> writes:
> (defclass foo () ())
> (defgeneric bla ((self foo) (num integer)))
> (defmethod bla ((self foo) arg2 arg3))
> 
> The last form raises an error because the method bla doesn't conform
> to the generic function's signature. Good.
> 
> But when I do this:
> 
> (defun bob () (bla (make-instance 'foo) 1 2))
> 
> which includes an incorrect call to the same generic function, there is
> no error and no warning. The environment clearly has the information
> to warn me about this error here and now, but it doesn't. Instead, I
> or the QA department get an exception when (bob) is called at runtime.

What's really puzzling about this particular behavior is that (at
least in Allegro Common Lisp and Macintosh Common Lisp) in the
following example with non-generic functions, the compiler does
produce a warning:

USER> (defun foo (x y) (+ x y))
FOO
USER> (compile 'foo)
FOO
NIL
NIL
USER> (defun bar (x y z) (foo x y z))
BAR
USER> (compile 'bar)
; While compiling BAR:
Warning: FOO should be given exactly 2 arguments.  It was given 3 arguments.
Problem detected when processing
       (FOO X Y ...)
inside (BLOCK BAR (FOO X Y ...))
inside (PROGN (BLOCK BAR (FOO X Y ...)))

BAR
T
T

----------------------------------------

Is there something about generic functions that makes it harder to do
the same sort of argument count analysis?  

-- 
Thomas A. Russ,  USC/Information Sciences Institute          ···@isi.edu    
From: Tim Bradshaw
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <ey3d80blb4f.fsf@lostwithiel.tfeb.org>
* Joachim Achtzehnter wrote:
> Others have mentioned CMU CL as well, I'll take a look at this.
> Unfortunately, it won't help in my current situation where we depend
> on third-party packages which are not available for CMU CL, not to
> mention a heavy investment in our own existing code.

If you care that deeply about static checking of things like argument
counts, I'm pretty surprised you haven't taken one of the
publicly-available who-calls things and modified it to warn you
about all this stuff.  Lisp is remarkably flexible in this regard.
Test harnesses are another good thing that you can write pretty
trivially in Lisp but are way hard in, say, C++.

--tim
From: Joachim Achtzehnter
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <m2hfpmsp4x.fsf@wizard.kraut.bc.ca>
Tim Bradshaw <···@tfeb.org> writes:
> 
> If you care that deeply about static checking of things like
> argument counts, I'm pretty surprised you haven't taken one of the
> publicly-available who-calls things and modified it to warn you
> about all this stuff.

Sounds interesting. What are these 'who-calls things' and where can I
find more details about them?

Joachim
From: Tim Bradshaw
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <ey36762mqug.fsf@lostwithiel.tfeb.org>
* Joachim Achtzehnter wrote:

> Sounds interesting. What are these 'who-calls things' and where can I
> find more details about them?

They are tools that let you ask questions like `who calls this
function'.  I think to do this really right you need a code-walker,
and I'm not sure that these things really have one. Of course, a
code-walker can really do any static type-checking you want anyway. I
think there is at least one at the CMU archive.
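
For the flavour of the idea only (this sketch is not from the thread,
and it is not a real code-walker: no macroexpansion, no handling of
special forms or fancy lambda lists), a poor-man's checker can simply
read a file, note DEFUN arities, and warn about calls that disagree:

;;; Collect the arity of every DEFUN with a plain (required-args-only)
;;; lambda list found among FORMS.
(defun collect-defun-arities (forms)
  (let ((arities (make-hash-table)))
    (dolist (form forms arities)
      (when (and (consp form) (eq (first form) 'defun))
        (let ((lambda-list (third form)))
          (when (and (listp lambda-list)
                     (null (intersection lambda-list lambda-list-keywords)))
            (setf (gethash (second form) arities)
                  (length lambda-list))))))))

;;; Warn about any list whose head names a known DEFUN but whose length
;;; disagrees with its arity.  Crude: it descends into quoted data and
;;; binding lists just as happily as into real code.
(defun check-call-arities (form arities)
  (when (consp form)
    (let ((arity (and (symbolp (first form))
                      (gethash (first form) arities))))
      (when (and arity (/= arity (length (rest form))))
        (warn "~S called with ~D argument~:P but defined with ~D."
              (first form) (length (rest form)) arity)))
    (dolist (subform form)
      (check-call-arities subform arities))))

;;; Read all top-level forms from a file and run the check over them.
(defun check-file-arities (pathname)
  (let* ((forms (with-open-file (in pathname)
                  (loop for form = (read in nil in)
                        until (eq form in)
                        collect form)))
         (arities (collect-defun-arities forms)))
    (dolist (form forms)
      (check-call-arities form arities))))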

--tim
From: Bill Newman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <wnewmanFBH9rw.GMG@netcom.com>
Joachim Achtzehnter (·······@kraut.bc.ca) wrote:
: > Of course, a much more "simplistic" kind of type-checking can be
: > attained with CMU CL today... Not that I see most former C++ users
: > flocking to CMU CL instead of other CL implementations though...

: Others have mentioned CMU CL as well, I'll take a look at this.
: Unfortunately, it won't help in my current situation where we depend
: on third-party packages which are not available for CMU CL, not to
: mention a heavy investment in our own existing code.

I like Lisp a lot, and I have a lot of C and C++ programming
experience, and I have become quite fond of CMUCL's typing. I'm also
somewhat horrified by a few things about it, and also by some recent
changes to it, but that's another story -- by and large I like it and I
use it regularly. (and someday I hope its build process will be clean
enough that I can patch it without too much pain:-)

If you want to understand why Lispers consider C/C++ typing
simplistic, I'd recommend taking a look at something like ML.  Even if
you don't intend to use it (I don't myself), it'll broaden your mind.

: -- 
: ·······@kraut.bc.ca      (http://www.kraut.bc.ca)
: ·······@mercury.bc.ca    (http://www.mercury.bc.ca)
From: Erik Naggum
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <3135025319883397@naggum.no>
* Joachim Achtzehnter <·······@kraut.bc.ca>
| There is a lot of research going on in the area of virtual types and
| genericity.  Don't be surprised to see a revision of C++ templates in the
| future, or the emergence of a new language.  In fact, the discussion
| about adding genericity to Java has prompted a lot of activity in this
| area of research.  The point with all this is that languages that are
| alive tend to learn from experience and improve over time.

  I tend to think of this in terms of young people who learn something that
  older people have known for decades.  the world is getting increasingly
  complex, so the effort needed to get up to speed also increases and young
  people need to work harder to catch up.  and when they do, they have a
  steeper "gradient" than the older people they catch up with, so it is
  only natural that they want to continue with their catch-up speed and go
  on to do new stuff at a higher pace than they think the old people do.
  that's why you find new languages picking up a lot of _experimental_
  stuff that is "new" in some sense of the word, but which older people
  know to be junk, because it's something they discarded long ago, and
  people who catch up don't see where people backtracked after going wrong,
  only where they went and decided to proceed.  similarly, new languages
  will do a lot of "research" and get a lot of funding for concepts and
  ideas that have previously been discarded.  however, to make this fly,
  they have to call it something else, the same way people who want to
  "circumvent" patents have to do _something_ clever on their own that lets
  them use somebody else's inventions.  except that regurgitated research
  uses somebody else's money by fooling people who don't know that it
  didn't work the last time around.

  and with all these new languages and regurgitated research, progress is
  _actually_ moving a lot slower than it would have if people could just
  stick to using other people's inventions instead of optimizing for their
  own degrees and for funding fun research and publicity hunters.

| I don't share your belief in the power of the market to lead us to
| paradise.  If the market had this power, would Lisp be the fringe language
| it is?  Would Microsoft be the most successful software company?

  this is so mind-bogglingly over-simplified an attitude that I can't begin
  to answer it, but Microsoft has succeeded because it moved technical
  issues into _irrelevant_ positions.  people do _not_ buy Microsoft's
  shitware because they want quality, robustness, investment protection, or
  the like, they buy it out of fear of not being able to keep up with the
  competitors for their manpower and with companies they exchange files
  with.  Microsoft did, however, offer something of relevance a good number
  of years ago: they gave the suits a computer on their own desk that the
  computer people didn't control.  _that_ was the relevant criterion that
  propelled Microsoft into their leading role in the minds of the people
  who decide life or death in business: the suits.  like evolution, any
  irrelevant property can mutate without control, and some day it might
  prove to be relevant once sufficiently advanced.

  I read Kent Pitman's incessant argumentation for the "market" to be a
  strong voice to let the vendors know that certain issues are _relevant_
  to their customers, because the market only decides what's relevant, the
  irrelevant is taken for granted, and can be _anything_ that doesn't get
  relevant one way or another.  Kent's trying to influence that, and lots
  of people are trying to let people know what kind of irrelevant issues
  caused them to purchase virus distribution vehicles and security holes
  from Microsoft along with the relevant ego-stroking abilities for suits.

#:Erik
From: Tim Bradshaw
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <ey3g157lc23.fsf@lostwithiel.tfeb.org>
* Joachim Achtzehnter wrote:
> The point is probably this: A C++/Java compiler cannot catch all
> errors, especially not design or logical errors, but at least it
> catches most simple errors like typos, passing the wrong number of
> arguments, passing a wrong argument, etc.  With existing Lisp
> implementations many such errors are detected only at runtime even
> when declarations are used. This is less problematic with mainline
> code which is likely to be run by the developer anyway, but typos in
> sections of the source code that are less frequently run have the
> habit of crashing in the hands of a user, or the QA department if
> you're lucky. Yes, you should test all your code, but the kind of bug
> we're talking about is often introduced by changes that are so
> 'obvious' that many developers don't imagine a bug may have been
> introduced.

Again, I want to say: this is a good theoretical point, but do you
know of any evidence that it causes large Lisp systems to be less
robust than large C++ systems.  I know of none, but I have not looked
that hard.

--tim
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfw7lqjywmw.fsf@world.std.com>
Tim Bradshaw <···@tfeb.org> writes:

> 
> * Joachim Achtzehnter wrote:
> > The point is probably this: A C++/Java compiler cannot catch all
> > errors, especially not design or logical errors, but at least it
> > catches most simple errors like typos, passing the wrong number of
> > arguments, passing a wrong argument, etc.

As do many Lisp compilers.

> > With existing Lisp
> > implementations many such errors are detected only at runtime even
> > when declarations are used.

Most commercial Lisp compilers I've dealt with will catch ill-named
functions, wrong number of arguments [except with APPLY, which C
mostly can't even do as gracefully as Lisp, and in which it surely
can't catch errors any better than Lisp], etc.
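
For instance (an illustration, not from the original post), a direct
call exposes its argument count to the compiler, while an APPLY call
leaves nothing to check statically:

(defun two-args (a b)
  (list a b))

;; Most Lisp compilers can warn here: the call site visibly passes three
;; arguments to a function defined with two.
(defun direct-call ()
  (two-args 1 2 3))

;; Nothing to check at compile time: the argument list exists only at
;; run time, so any mismatch surfaces as a runtime error, in Lisp or C.
(defun indirect-call (args)
  (apply #'two-args args))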

> > This is less problematic with mainline
> > code which is likely to be run by the developer anyway, but typos in
> > sections of the source code that are less frequently run have the
> > habit of crashing in the hands of a user, or the QA department if
> > you're lucky. Yes, you should test all your code, but the kind of bug
> > we're talking about is often introduced by changes that are so
> > 'obvious' that many developers don't imagine a bug may have been
> > introduced.
> 
> Again, I want to say: this is a good theoretical point, but do you
> know of any evidence that it causes large Lisp systems to be less
> robust than large C++ systems.  I know of none, but I have not looked
> that hard.

I think in fact just the opposite.  Speaking only anecdotally here,
it's assumed that type matching means things work.  I'm not so sure.
It gives one almost a false sense of confidence:

This code:

 (defvar *foo* (- most-positive-fixnum 1))
 (defun foo () (* *foo* 2))

works fine undeclared in Lisp but in C the equivalent code, properly
type declared, would do modular arithmetic.  The types would match
but the effect would be wrong.  Now, in "properly type-declared code"
you might see that the function was declared fixnum all over the place
but that wouldn't make it right--it would just mean you were asking
the compiler to trust you that the data was not going to be out of
bounds, which isn't a good thing to trust in this case.  The CMU
compiler actually will probably put in a type-check to make sure
that the declaration is not violated, but such type checks do cost
and many people feel pressured not to have them.  Further, and this is
the really insidious thing about type checks in practice, there is
ENORMOUS pressure to turn
 (defun foo (x) (+ x 2))
into 
 (defun foo (x) (declare (fixnum x)) (+ x 2))
to make it "more efficient" as if somehow the generic (+ x 2) was
in fact less efficient.  (+ x 2) is maximally efficient when you don't
know if x is going to be a fixnum or not.  Adding the declaration makes
it more efficient ONLY IF you happen to know x will not be other than
a fixnum; if you don't know that, it isn't "more efficient", but rather
it is "broken".  The real problem with type declarations is not the
mathematical proof techniques associated with them, it is the willingness
to ignore or hand-wave away the very real societal tendencies of people
to force people with access to type declarations to over-aggressively
apply narrow type boundaries to problems, turning every program in the
world into a metaphor for the Y2K bug because each such program has its
own little time bomb in it waiting for data to become big enough to 
overflow and cause a problem.  To say that people don't overaggressively
seek these little "shortcuts" (sacrificing the future for the present)
is to deny that there is any cost to dealing with Y2K, and to somehow
say that "good programmers would never make shortsighted decisions".
From: Joachim Achtzehnter
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <m2k8uispa9.fsf@wizard.kraut.bc.ca>
Kent M Pitman <······@world.std.com> writes:
>
> [on the subject of detecting bugs early via static type checking]
>
> Tim Bradshaw <···@tfeb.org> writes:
> >
> > Again, I want to say: this is a good theoretical point, but do you
> > know of any evidence that it causes large Lisp systems to be less
> > robust than large C++ systems.  I know of none, but I have not
> > looked that hard.

The evidence I have is from my own experience over the past three
years. Again and again, bugs found by QA were the result of
programming errors which would not have passed the compiler had this
been C++ or Java.

Before anybody jumps on me again for all the wrong reasons: I am not
trying to put down Lisp here, I am impressed by the language. Anybody
who has seriously used C++, however, cannot fail to notice that
existing commercial implementations of CL (unlike CMU CL apparently)
miss many opportunities to detect errors early by not making full use
of type information. I agree with Erik that the errors that go
undetected are usually not the kind of bugs that are difficult to
fix. But isn't it precisely in the area of routine, repetitive tasks
where computers are supposed to help us out? I am fully prepared to
take responsibility for deep, logical errors, but I appreciate tools
that help me detect simple blunders quickly.

> I think in fact just the opposite.  Speaking only anecdotally here,
> it's assumed that type matching means things work.  I'm not so sure.

Is a program that passes type checks guaranteed to be correct? Of
course not! Seems I still haven't been able to make myself
understood. This is not about proving programs correct, and I
certainly don't regard static type checking as a silver bullet. Static
type checking will not guarantee correct programs; it can, however,
very easily detect simple programming errors. Why? Because the simple
errors I'm talking about have a high probability of violating type
checks.

> [section about disadvantages of type declarations in arithmetic
> functions omitted]

The type declarations I have in mind were of a very different
kind. The system I am working on is based on a complex,
object-oriented information model. Functions, mostly generic
functions, operate on instances of well-defined types from the
model. Passing an instance of a non-conforming type is almost
certainly an error. These errors are usually easy to fix when looking
at the symptoms, but they are a major frustration when they slip
through initial tests and make it into QA or beyond. What makes this
so frustrating is that it would be so easy for a good compiler to
detect. [Yes, I'm complaining about a vendor's implementation, but
then again, you seem to be defending this weakness as perfectly
acceptable. In that light, I don't think my response is inappropriate
in this newsgroup.]

Joachim
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfw7lqiz3ej.fsf@world.std.com>
Joachim Achtzehnter <·······@kraut.bc.ca> writes:

> > I think in fact just the opposite.  Speaking only anecdotally here,
> > it's assumed that type matching means things work.  I'm not so sure.
> 
> Is a program that passes type checks guaranteed to be correct? Of
> course not!

I think you mean to say "of course it is not provably correct" but it isn't
so clear whether you mean to say "of course it is known not to be correct
by the person who compiled it".  I allege (absent proof) that a large
number of people believe "absence of compilation warnings" means "correct",
and further that "compilation warning" means "user is directed to change
his program in a way that muffles warning".  I think both of these are
bad practices.

I also can't help but feel that people who think they are promised that the
compiler will statically catch a certain class of errors feel more 
comfortable failing to QA their programs.

You are making a great deal of your arguments about how you personally
would use compiler information.  I am making arguments not about how I
personally would use compiler information, but how I believe people
really do use compiler information.  Neither of us has the data to back up
our claims, so it will have to just rest there.

> > [section about disadvantages of type declarations in arithmetic
> > functions omitted]
> 
> The type declarations I have in mind were of a very different
> kind. The system I am working on is based on a complex,
> object-oriented information model. Functions, mostly generic
> functions, operate on instances of well-defined types from the
> model. Passing an instance of a non-conforming type is almost
> certainly an error. These errors are usually easy to fix when looking
> at the symptoms, but they are a major frustration when they slip
> through initial tests and make it into QA or beyond. What makes this
> so frustrating is that it would be so easy for a good compiler to
> detect. [Yes, I'm complaining about a vendor's implementation, but
> then again, you seem to be defending this weakness as perfectly
> acceptable. In that light, I don't think my response is inappropriate
> in this newsgroup.]

You are welcome to think that, however I will keep saying you are asking
the wrong place every time I have the energy to say it.

I believe it will HARM the Lisp community to require it.  It will only
make it harder than it already is to reach "CL-hood" for an
implementation, and mean there will be fewer Lisps.  I allege (and
for this there is considerable data) that it is HARD to get a CL
together and there are lots of people who decline to try.  It is
important to the community and important to the users that there be
vendors able to make implementations of known quality, but it is less
important that every vendor be required to be at the same quality
because it is quality/price upon which people compete.  If someone
wants to market a high-quality Lisp, at corresponding cost, they can
and should do that, offering whatever you want.  But it is just not
necessary for everyone to do this.  And certainly you don't want to
legislate that all implementations must have a certain quality because,
like the pressure/volume constraint for gases, that effectively
legislates that all implementations have a certain price.

Pluralism is about tolerating people doing and needing different things
than you.  The net is pluralistic.  The market is pluralistic.  The thing
that destroys pluralism is the insistence that not only one thing but all
things in the market must meet your needs.  That destroys diversity.
And that, I feel, is bad.
From: Pierre R. Mai
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <87n1zew8jl.fsf@orion.dent.isdn.cs.tu-berlin.de>
Joachim Achtzehnter <·······@kraut.bc.ca> writes:

> The type declarations I have in mind were of a very different
> kind. The system I am working on is based on a complex,
> object-oriented information model. Functions, mostly generic
> functions, operate on instances of well-defined types from the
> model. Passing an instance of a non-conforming type is almost
> certainly an error. These errors are usually easy to fix when looking

Hmm, what in your context are "non-conforming types"?  Conforming to
what?  To the available methods as in:

(defclass foo () ())
(defclass bar () ())

(defgeneric frobme (instance arg))

(defmethod frobme ((instance foo) arg)
  (frobnicate instance arg))

(defun foobar ()
  (let ((x (make-instance 'bar)))
    (frobme x 42) ; Major lossage will occur since x is not of type foo?
    ...))

In that case I don't think issuing warnings is a sound tactic for a
general purpose tool like a compiler, since the compiler cannot know,
and shouldn't assume whether an applicable method for BAR will be
available at run-time.  If in your particular application the set of
known methods is available at some specific point in time, you might
be able to walk your code at that point, and determine whether all
calls of FROBME will have applicable methods (though this will
probably involve a fair amount of analysis I'd imagine).
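
As a rough sketch of that "check at a known point in time" idea (not
from the original post; the helper name is invented), one could probe
the generic function once you believe all of its methods are loaded.
COMPUTE-APPLICABLE-METHODS needs actual arguments, so this builds a
throwaway instance; a more serious tool would inspect method
specializers via the MOP instead:

;;; Does the generic function named GF-NAME have an applicable method
;;; when its first argument is a default-initialized instance of
;;; CLASS-NAME and the remaining arguments are EXTRA-ARGS?
(defun gf-handles-class-p (gf-name class-name &rest extra-args)
  (let ((probe (make-instance class-name))) ; assumes no required initargs
    (not (null (compute-applicable-methods
                (fdefinition gf-name)
                (cons probe extra-args))))))

;; With the FROBME/BAR definitions above:
;;   (gf-handles-class-p 'frobme 'bar 42) => NIL, no method specializes BAR
;;   (gf-handles-class-p 'frobme 'foo 42) => T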

If I've misunderstood your problem, I'm sorry, and would be interested to
see a simplified example of your exact problem, so that maybe something
could be worked out for it.  I'm not arguing against doing more analysis
on programs, to make them more robust.  In fact I think CL is one of the
most powerful languages in that area, allowing the programmer, as Tim
Bradshaw noted, to easily implement a whole host of interesting analysis
tools, to grovel over code.

I'm mostly arguing against the call for these analyses to be included
into general-purpose tools like the compiler or even (though you
specifically didn't call for that) into the language.  The problem I
see here is that most of these analyses are only useful or tractable
if you make some project- and/or programmer-specific assumptions
somewhere.  So if you include them in an implementation, you'll
either have to alienate one part of your community, by putting out
spurious warnings, or another part of your community, by not warning
against clear (to them) errors, or most probably both.  Or you have to 
include an infinite number of tuning knobs, to let each user adjust
the analysis framework to suit his specific needs, which will most
likely still not please all of your users, and will carry with it a
non-trivial amount of investment in implementation complexity.

I'd rather see a move in the ANSI standard to expose a suitable
substrate for these kinds of analysis, like e.g. providing a
standardized code-walking facility, and/or providing hooks to get at
the compilers' analysis results, like e.g. call-site type information, 
etc.  The details of such a solution could most likely be obtained by
looking at the functionality that is already available in most
implementations, and standardizing on a useful subset/sideset of
this.

This would IMHO be far more useful to a far wider audience, wouldn't
codify some particular programming style as correct practice, and
would have the advantage of being built on a more solid foundation.

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Tim Bradshaw
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <ey33e16m9ww.fsf@lostwithiel.tfeb.org>
* Joachim Achtzehnter wrote:

> The type declarations I have in mind were of a very different
> kind. The system I am working on is based on a complex,
> object-oriented information model. Functions, mostly generic
> functions, operate on instances of well-defined types from the
> model. Passing an instance of a non-conforming type is almost
> certainly an error. These errors are usually easy to fix when looking
> at the symptoms, but they are a major frustration when they slip
> through initial tests and make it into QA or beyond. What makes this
> so frustrating is that it would be so easy for a good compiler to
> detect. [Yes, I'm complaining about a vendor's implementation, but
> then again, you seem to be defending this weakness as perfectly
> acceptable. In that light, I don't think my response is inappropriate
> in this newsgroup.]

I'm not sure what you mean by `non-conforming' here.

If you mean something like calling a generic function with the wrong
signature, then it's reasonable to hope that an implementation might
complain about that. But I agree with Kent that you should ask your
vendor about this, not complain here, and that it certainly should
*not* be required that an implementation detect this. It's hard enough
to get a CL working without putting extra fences in the way.

If, however, what you mean is calling a GF with an argument of a class
for which there is no method, then that is almost certainly *not*
detectable without extensions to the language.  Consider this
fragment:

    (defgeneric foo (x))

    (defmethod foo ((x frobly))
      ...)

    (defun blob (y)
      (declare (type fribly y))	; FRIBLY is not a subtype of FROBLY
      (foo y))

Then I believe that this `error' can *not* be detected at compile
time.  It can't be detected because it's not an error: there could be
other methods defined on FOO almost anywhere.

In order to be able to detect this you have to be able to make some
kind of closed-world assumption about FOO, which CLOS doesn't let you
do.  This would typically be done by some kind of sealing declaration;
Dylan has mechanisms to do this.  It seems to me that you could also
get some similar kind of checking by extending the declarations that
you had in the GF:

    (defgeneric foo (x)
      (declare (type fribly-super x)))

Which is not portable CL but is allowed as an extension.

This is even more a case where you should be talking to your vendor.
Sealing declarations or just type declarations in GFs are both things
that could be provided as an extension to the language by a vendor if
there was enough demand.

I think sealing of this kind might be something interesting -- Dylan
has made a big thing of it after all -- but it's pretty hard to get
right I suspect, and I'd be really unwilling to see something
standardised without real implementations first, and I do not know of
any right now.

--tim

  
From: Lyman S. Taylor
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <3735F9B4.3612763C@mindspring.com>
Tim Bradshaw wrote:
...
> 
> If, however, what you mean is calling a GF with an argument of a class
> for which there is no method, then that is almost certainly *not*
> detectable without extensions to the language. 
...
>     (defun blob (y)
>       (declare (type fribly y)) ; FRIBLY is not a subtype of FROBLY
>       (foo y))
> 
> Then I believe that this `error' can *not* be detected at compile
> time.  It can't be detected because it's not an error: there could be
> other methods defined on FOO almost anywhere.

  Well, it could be detected, in most cases, at compile time without any
  language extensions, if "compile time" meant compiling the WHOLE program
  all at once.  However, to get this sort of omniscient knowledge the
  analysis has to be omnipresent.   [ Even whole program analysis cannot help 
  you with classes and methods that are defined dynamically as the program 
  runs. Of course, it is rather "bad form" to make static references
  to what will be dynamically generated. ]
   
  It is a tradeoff.  CLOS can allow one to incrementally refine
  a program while it is running.  C++ can have glacially slow compile
  times, but you will have type-checked the heck out of your program.
  [ I think vendors could provide "automated inspection" tools that
    are akin to the "tree shakers" some provide.  At some point you
    can say "all of these files (or functions) are supposed to be a coherent
    system; go confirm this".  I wouldn't expect that to finish in a rapid
    fashion.  But as a periodic sanity check before passing code
    off to QA it probably would work well.  Or run it overnight when
    you go home. :-) ] 

  Lisp does not absolutely require that you compose all of your 
  definitions for the convenience of the compiler.  If the 
  equivalent of gcc's -Wall -pedantic flags were available, in
  some cases they might generate more false positives than
  useful information when run over an incremental piece of a "program".

> I think sealing of this kind might be something interesting -- Dylan
> has made a big thing of it after all -- but it's pretty hard to get
> right I suspect, 

   For both the implementor and the users. :-)   In some sense sealing is 
   a mechanism for turning off dynamism where you don't need it.  Turning
   it off partially, in some places but not everywhere, can be tricky.
   
   Even Dylan doesn't guarantee to catch all sealing violations at
   compile time.  [ Again dynamic class and method creation/additions involve
   dynamic checks to see that the closed-world assumptions are not
   violated.]  However, an aggressive Dylan implementation can catch many 
   "static" violations. 


----

Lyman
From: Thomas A. Russ
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <ymig155kk5s.fsf@sevak.isi.edu>
Joachim Achtzehnter <·······@kraut.bc.ca> writes:

> The type declarations I have in mind were of a very different
> kind. The system I am working on is based on a complex,
> object-oriented information model. Functions, mostly generic
> functions, operate on instances of well-defined types from the
> model. Passing an instance of a non-conforming type is almost
> certainly an error.

Unfortunately, I don't think you could even have the sort of type
checking that you want while using GENERIC functions.  Defgeneric syntax
doesn't allow the specification of type restrictions on the input
parameters.  It wouldn't make sense for it to do so, since additional
methods could be added that make the generic function work for
additional type combinations.

This seems to be a case where the flexibility of the language is what
prevents a check from being done at compile time.  Languages like C++
don't have to deal with the possibility of having additional methods or
functions defined after the compilation and link cycle has been
completed.

CLOS allows you to later define a method for handling "an instance of a
non-conforming type", thus making it no longer an error.
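
A tiny illustration of that point (not from the original post):

(defgeneric describe-thing (x))

(defmethod describe-thing ((x integer))
  (format t "an integer: ~D~%" x))

;; (describe-thing "hello")  ; right now: NO-APPLICABLE-METHOD error

(defmethod describe-thing ((x string))
  (format t "a string: ~A~%" x))

;; (describe-thing "hello")  ; now perfectly legal -- no compiler could
;; have known, earlier, that the call was going to stay "wrong"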

-- 
Thomas A. Russ,  USC/Information Sciences Institute          ···@isi.edu    
From: Bill Newman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <wnewmanFBHB8r.J49@netcom.com>
Kent M Pitman (······@world.std.com) wrote:
: Tim Bradshaw <···@tfeb.org> writes:

: > > This is less problematic with mainline
: > > code which is likely to be run by the developer anyway, but typos in
: > > sections of the source code that are less frequently run have the
: > > habit of crashing in the hands of a user, or the QA department if
: > > you're lucky. Yes, you should test all your code, but the kind of bug
: > > we're talking about is often introduced by changes that are so
: > > 'obvious' that many developers don't imagine a bug may have been
: > > introduced.
: > 
: > Again, I want to say: this is a good theoretical point, but do you
: > know of any evidence that it causes large Lisp systems to be less
: > robust than large C++ systems.  I know of none, but I have not looked
: > that hard.

: I think in fact just the opposite.  Speaking only anecdotally here,
: it's assumed that type matching means things work.  I'm not so sure.
: It gives one almost a false sense of confidence:

: This code:

:  (defvar *foo* (- most-positive-fixnum 1))
:  (defun foo () (* *foo* 2))

: works fine undeclared in Lisp but in C the equivalent code, properly
: type declared, would do modular arithmetic.  The types would match
: but the effect would be wrong.  Now, in "properly type-declared code"
: you might see that the function was declared fixnum all over the place
: but that wouldn't make it right--it would just mean you were asking
: the compiler to trust you that the data was not going to be out of
: bounds, which isn't a good thing to trust in this case.  The CMU
: compiler actually will probably put in a type-check to make sure
: that the declaration is not violated, but such type checks do cost
: and many people feel pressured not to have them.  Further, and this is
: the really insidious thing about type checks in practice, there is
: ENORMOUS pressure to turn
:  (defun foo (x) (+ x 2))
: into 
:  (defun foo (x) (declare (fixnum x)) (+ x 2))
: to make it "more efficient" as if somehow the generic (+ x 2) was
: in fact less efficient.  (+ x 2) is maximally efficient when you don't
: know if x is going to be a fixnum or not.  Adding the declaration makes
: it more efficient ONLY IF you happen to know x will not be other than
: a fixnum; if you don't know that, it isn't "more efficient", but rather
: it is "broken".  The real problem with type declarations is not the
: mathematical proof techniques associated with them, it is the willingness
: to ignore or hand-wave away the very real societal tendencies of people
: to force people with access to type declarations to over-aggressively
: apply narrow type boundaries to problems, turning every program in the
: world into a metaphor for the Y2K bug because each such program has its
: own little time bomb in it waiting for data to become big enough to 
: overflow and cause a problem.  To say that people don't overaggressively
: seek these little "shortcuts" (sacrificing the future for the present)
: is to deny that there is any cost to dealing with Y2K, and to somehow
: say that "good programmers would never make shortsighted decisions".

I think it depends somewhat on the problem domain. I've worked a lot on a
Lisp program to play the game of Go, a game played on a NxN board,
where N is traditionally 19 and the complexity of the game is
something nasty (EXPTIME complete maybe?). There are a lot of values
which just can't get bigger than N+1 or NxN+1, and it's just never
going to overflow a machine word, sorry. There are large sections of
the code which would just work if you told the computer to just use
machine word arithmetic, and I put a considerable amount of time into
declarations which would let CMUCL prove to itself that everything was
OK without type checking, and by and large in retrospect that time was
not well spent, in that it didn't help me catch bugs and didn't give
me anything I wouldn't have gotten for free by just using machine
words.

This may not be a common situation, but it's probably not
pathologically uncommon either. Offhand I can't think of any other
problems which have complexity properties like this to give a
reasonably rigorous guarantee that we'll never work with a problem
size of 2^30 or even 2^16, but people working with code which iterates
over the contents of a network packet or the contents of a cipher
block probably have something of the same feeling.

  Bill Newman
  ·······@netcom.com
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfwd80ani5c.fsf@world.std.com>
·······@netcom.com (Bill Newman) writes:

> I think it depends somewhat on the problem domain. I've worked a lot on a
> Lisp program to play the game of Go, a game played on a NxN board,
> where N is traditionally 19 and the complexity of the game is
> something nasty (EXPTIME complete maybe?). There are a lot of values
> which just can't get bigger than N+1 or NxN+1, and it's just never
> going to overflow a machine word, sorry.

No need to apologize.  I didn't make the claim this wasn't so.
The whole point of declarations in the language is to accommodate
this.  The whole point of programmable access to the language is to make
sure you don't have to overwhelm your program with declarations.

> There are large sections of
> the code which would just work if you told the computer to just use
> machine word arithmetic,

And the language provides the ability to do this several ways.
I'll give examples of two of them here.  Others might involve the
use of readmacros or code-walking or other such things.

[1]

Using no package surgery but different names:

(declaim (inline +&))
(defun +& (x y)
  (the fixnum (cl:+ (the fixnum x) (the fixnum y))))

(declaim (inline *&))
(defun *& (x y)
  (the fixnum (cl:* (the fixnum x) (the fixnum y))))
...etc.

----- OR -----

[2]

Using package surgery:

(defpackage "MY-STUFF"
  (:use "CL")
  (:shadow "+" "*" ...))

(defun + (&rest args)
  (the fixnum (apply #'cl:+ args)))

(define-compiler-macro + (&rest args)
  (case (length args)
    ((0) 0)
    ((1) `(the fixnum ,(car args)))
    (otherwise
         `(the fixnum (cl:+ (the fixnum ,(car  args))
			    (+ ,@(cdr args)))))))
...etc.

It's also easy in both of these to add a conditional check-type in
order to get type-checking and then to remove it when you're ready
to deploy.

##### N.B. I didn't test this code.
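
One way to arrange such a removable check (a sketch in the same
untested spirit, building on variant [1] above; the flag and macro
names are invented) is to guard the CHECK-TYPE behind a compile-time
flag so a deployment build expands it away entirely:

(eval-when (:compile-toplevel :load-toplevel :execute)
  (defvar *paranoid* t))   ; set to NIL for the deployed build

(defmacro checked-fixnum (form)
  (if *paranoid*
      `(let ((value ,form))
         (check-type value fixnum)
         value)
      `(the fixnum ,form)))

(declaim (inline +&))
(defun +& (x y)
  (the fixnum (cl:+ (checked-fixnum x) (checked-fixnum y))))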

> and I put a considerable amount of time into
> declarations which would let CMUCL prove to itself that everything was
> OK without type checking, and by and large in retrospect that time was
> not well spent, in that it didn't help me catch bugs and didn't give
> me anything I wouldn't have gotten for free by just using machine
> words.
 
I'm not 100% clear about what you're saying here, but would be
interested if you'd elaborate.  What do you mean "using machine
words"?

> This may not be a common situation, but it's probably not
> pathologically uncommon either. Offhand I can't think of any other
> problems which have complexity properties like this to give a
> reasonably rigorous guarantee that we'll never work with a problem
> size of 2^30 or even 2^16, but people working with code which iterates
> over the contents of a network packet or the contents of a cipher
> block probably have something of the same feeling.

I don't really have a theory of this either.  NOTE: I am NOT saying
compilers shouldn't type-check what they can.  I think it's good for
them, too.  That's the kind of vendor I patronize personally.  However,
I think it's reasonable and appropriate for there to be other levels
of service that still call themselves Common Lisp.

I prefer a Conservative/Libertarian CL, not a Liberal/Socialist one.
From: Bill Newman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <wnewmanFBIzys.D4H@netcom.com>
Kent M Pitman (······@world.std.com) wrote:
: ·······@netcom.com (Bill Newman) writes:

: > I think it depends somewhat on the problem domain. I've worked a lot on a
: > Lisp program to play the game of Go, a game played on a NxN board,
: > where N is traditionally 19 and the complexity of the game is
: > something nasty (EXPTIME complete maybe?). There are a lot of values
: > which just can't get bigger than N+1 or NxN+1, and it's just never
: > going to overflow a machine word, sorry.

: No need to apologize.  I didn't make the claim this wasn't so.
: The whole point of declarations in the language is to accommodate
: this.  The whole point of programmable access to the language is to make
: sure you don't have to overwhelm your program with declarations.

: > There are large sections of
: > the code which would just work if you told the computer to just use
: > machine word arithmetic,

: And the language provides the ability to do this several ways.
: I'll give examples of two of them here.  Others might involve the
: use of readmacros or code-walking or other such things.

: Using no package surgery but different names:

: (declaim (inline +&))
: (defun +& (x y)
:   (the fixnum (cl:+ (the fixnum x) (the fixnum y))))

: (declaim (inline *&))
: (defun *& (x y)
:   (the fixnum (cl:* (the fixnum x) (the fixnum y))))
: ...etc.

Actually, I don't mind typing out declarations explicitly.  I've heard
other people complain about how declarations are visual clutter that
obscure the core code, but after years of C/C++ I'm used to it, and
besides, with syntax highlighting, it seems like nearly a non-issue.

The problem was that I had to go to considerable trouble to convince
the compiler (CMUCL) that it could safely use fixed-width arithmetic
in the termination tests for loops.

  (DO ((I 0 (1+ I)))
      ((>= I UPPER-LIMIT))
    (DO-SOMETHING I))

No matter what interval you tell CMUCL that I is in, it can't
prove to itself that (1+ I) is in the same interval.
Mostly these days I declare indices like this to be
of some restricted type like FIXNUM/2:

  (DEFTYPE FIXNUM/2 () `(MOD ,(FLOOR MOST-POSITIVE-FIXNUM 2)))

That way, even if CMUCL isn't sure that I hasn't left the interval, at
least it knows it hasn't become a bignum. But it took me a lot of
experimentation before I settled down to this, and it still feels like
a slightly clumsy solution.  In this problem domain, as in others
where you're working with numbers which index into objects which fit
into the machine's memory, that experimentation seems to've been a
deadweight loss for using Lisp: as far as I can tell, it hasn't added
to the reliability or flexibility of this particular program in any
meaningful way compared to just letting the machine use internal,
modulo-wordsize arithmetic and not worrying about it.

: > and I put a considerable amount of time into
: > declarations which would let CMUCL prove to itself that everything was
: > OK without type checking, and by and large in retrospect that time was
: > not well spent, in that it didn't help me catch bugs and didn't give
: > me anything I wouldn't have gotten for free by just using machine
: > words.
:  
: I'm not 100% clear about what you're saying here, but would be
: interested if you'd elaborate.  What do you mean "using machine
: words"?

I meant doing what C and C++ do: using the machine's native arithmetic
instructions and hoping either that the programmer doesn't cause
overflow (which is pretty realistic in this problem domain, and some
others) or that the programmer is prepared to deal with overflow if it
happens (which, as you correctly pointed out in the article I was
responding to, is often unrealistic for real programs in problem
domains where overflow is an issue).

: I prefer a Conservative/Libertarian CL, not a Liberal/Socialist one.

That suits me, too. 

  Bill Newman
  ·······@netcom.com
From: Gareth McCaughan
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <86vhe0u8we.fsf@g.pet.cam.ac.uk>
Bill Newman wrote:

> The problem was that I had to go to considerable trouble to convince
> the compiler (CMUCL) that it could safely use fixed-width arithmetic
> in the termination tests for loops.
> 
>   (DO ((I 0 (1+ I)))
>       ((>= I UPPER-LIMIT))
>     (DO-SOMETHING I))

(declaim (optimize (speed 3)))

(defun zog (upper-limit)
  (declare (fixnum upper-limit))
  (do ((i 0 (1+ i)))
      ((>= i upper-limit))
    (declare (fixnum i))
    (prin1 i)))

compiles to the following on my box.

[First of all, some function-entry stuff.]

05798B10:       .ENTRY ZOG(upper-limit)      ; (FUNCTION (FIXNUM) NULL)
      28:       POP   DWORD PTR [EBP-8]
      2B:       LEA   ESP, [EBP-32]

      2E:       CMP   ECX, 4
      31:       JNE   L2
      33:       TEST  EDX, 3
      39:       JNE   L3
      3B:       MOV   [EBP-12], EDX

[Now, set I to 0.]

      3E:       XOR   EBX, EBX               ; No-arg-parsing entry point

[And go to the termination test.]

      40:       JMP   L1

[I think the next bit is the function-call sequence implementing (prin1 i).]

      42: L0:   MOV   [EBP-16], EBX
      45:       MOV   ESI, ESP
      47:       SUB   ESP, 12
      4A:       MOV   EDX, EBX
      4C:       MOV   EAX, [#x5798B0C]
      52:       MOV   ECX, 4
      57:       MOV   [ESI-4], EBP
      5A:       MOV   DWORD PTR [ESI-8], 91851639
      61:       MOV   EBP, ESI
      63:       PUSH  91851639
      68:       JMP   DWORD PTR [EAX+5]
      6B:       NOP

      6C:       NOP
      6D:       NOP
      6E:       NOP
      6F:       NOP
 
;;; [12] (PRIN1 I)

      70:       .LRA
      74:       NOP
      75:       NOP
      76:       NOP
      77:       MOV   ESP, EBX
      79:       MOV   EBX, [EBP-16]

[So, we've finished calling PRIN1. Now update the loop variable.]

      7C:       ADD   EBX, 4

[Here's the termination test.]

      7F: L1:   CMP   EBX, [EBP-12]
      82:       JL    L0

[That was pretty painless. What follows is function-exit stuff.]

      84:       MOV   EDX, 83886091
      89:       MOV   EBX, [EBP-8]
      8C:       MOV   ECX, [EBP-4]
      8F:       ADD   EBX, 2
      92:       MOV   ESP, EBP
      94:       MOV   EBP, ECX
      96:       JMP   EBX

[And here's what happens if ZOG is called improperly.]

      98: L2:   BREAK 10                     ; Error trap
      9A:       BYTE  #x02
      9B:       BYTE  #x19                   ; INVALID-ARGUMENT-COUNT-ERROR
      9C:       BYTE  #x4D                   ; ECX
      9D: L3:   BREAK 10                     ; Error trap
      9F:       BYTE  #x02
      A0:       BYTE  #x0A                   ; OBJECT-NOT-FIXNUM-ERROR
      A1:       BYTE  #x8E                   ; EDX

Seems to be treating I as a fixnum very happily indeed.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Raymond Toy
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <4nn1zckc01.fsf@rtp.ericsson.se>
>>>>> "Bill" == Bill Newman <·······@netcom.com> writes:
    Bill> The problem was that I had to go to considerable trouble to convince
    Bill> the compiler (CMUCL) that it could safely use fixed-width arithmetic
    Bill> in the termination tests for loops.

    Bill>   (DO ((I 0 (1+ I)))
    Bill>       ((>= I UPPER-LIMIT))
    Bill>     (DO-SOMETHING I))

    Bill> No matter what interval you tell CMUCL that I is in, it can't
    Bill> prove to itself that (1+ I) is in the same interval.

Well, if it could, then the compiler would be broken because (1+ I)
can't be in the same interval as I, because you've added 1 to it. :-)
(Assuming I is an integer.)

CMUCL can't determine the type because the above macroexpands to
something that has a (setq i ...) in it.  This is a known deficiency
in the compiler that I wish I knew enough to remove, since CMUCL does
a good job if you don't use setq.
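
Roughly speaking (a sketch of the general shape, not any particular
implementation's exact expansion), the DO above turns into something
like:

  (block nil
    (let ((i 0))
      (tagbody
       top
         (when (>= i upper-limit) (return nil))
         (do-something i)
         (setq i (1+ i))   ; this SETQ is what defeats the type inference
         (go top))))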

    Bill> Mostly these days I declare indices like this to be
    Bill> of some restricted type like FIXNUM/2

    Bill>   (DEFTYPE FIXNUM/2 () `(MOD ,(FLOOR MOST-POSITIVE-FIXNUM 2))).

    Bill> That way, even if CMUCL isn't sure that I hasn't left the interval, at
    Bill> least it knows it hasn't become a bignum. But it took me a lot of

You can always add (declare (fixnum i) (optimize (speed 3) (safety 0)))
to the loop.  Just don't lie about I being a fixnum! :-)

Ray
From: Joshua Scholar
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <3732ae09.21977728@news.select.net>
I want to make it clear that I haven't been trying to argue that C++
is better than LISP.  

What happened is that I stumbled into a thread on comp.ai where a few
people, a couple of whom I know enough about to respect, were arguing
that LISP is more appropriate than C++ for AI.

Now in my work (game programming) I don't really have a choice, I more
or less have to use C++.  

What I wanted to find out is what exactly were the reasons that some
preferred LISP so that I could abstract those strengths or methods
into my work, under the limitations I work under.

Unfortunately I got sidetracked by an evangelist who was more
interested in convincing me to use a specific language than in talking
about programming techniques and I also ended up trying to defend and
explain the limitations I work under - none of which is actually an
attack on the usefulness of LISP.  The strongest thing I have to say
is that my company is not going to be willing to let its programmers
use LISP, that some of the reasons for this are quite valid and that I
haven't been given enough ammunition to challenge anyone's opinion on
the matter.

So everyone who thinks they've been defending LISP from a detractor, you can
calm down; that's not what is going on.

Joshua Scholar
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfw6768gm48.fsf@world.std.com>
·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:

> What I wanted to find out is what exactly were the reasons that some
> preferred LISP so that I could abstract those strengths or methods
> into my work, under the limitations I work under.

http://world.std.com/~pitman/PS/Hindsight.html

It's not a full list, but it might be helpful to you.

(Someone noted after I wrote this article, that I'd failed to mention
automatic storage management (gc) in the article; kind of a glaring
omission.  Oh well..)
From: Paolo Amoroso
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <3731af6c.76124@news.mclink.it>
On Wed, 05 May 1999 02:33:23 GMT, ·····@cetasoft.com (Joshua Scholar)
wrote:

> Now in my work (game programming) I don't really have a choice, I more
> or less have to use C++.  

By the way, the Web site of Franz Inc. http://www.franz.com/ mentions that
their Common Lisp system is also being used for game development. The site
provides information about a couple of such projects.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Paul Rudin
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <m3so9cgjlq.fsf@shodan.demon.co.uk>
Kent M Pitman <······@world.std.com> writes:


> Nothing in CL forbids you from type-declaring every variable. 

It might be nice to have some standard, succinct syntax for this.
Maybe something like:
 
(defun  foo\integer (lyst\cons x\integer ...)
  ...)
From: Vassil Nikolov
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <7gvbjb$mt1$1@nnrp1.deja.com>
In article <··············@shodan.demon.co.uk>,
  Paul Rudin <·····@shodan.demon.co.uk> wrote:
> Kent M Pitman <······@world.std.com> writes:
>
> > Nothing in CL forbids you from type-declaring every variable.
>
> It might be nice to have some standard, succinct syntax for this.
> Maybe something like:
>
> (defun  foo\integer (lyst\cons x\integer ...)
>   ...)

I don't think we need a new standard for that.  The DEFMETHOD
syntax: ``(parameter class)'' can be reused, so that the above
example becomes:

  (defun (foo integer) ((lyst cons) (x integer) ...)
    ...)

And the really good thing is that one doesn't have to change
the implementation to do that (if one wants that syntax): the
_user_ can do their own version of DEFUN which expands the above
into something like:

  (progn
    (declaim (ftype (function (cons integer ...) integer) foo))
    (cl:defun foo (lyst x ...)
      (declare (type cons lyst)
               (type integer x))
      ...))

(I am neither for nor against such syntax, at the time of this
writing; just want to show it can be done without the need for
a new compiler for the new syntax.)
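
A minimal sketch of such a user-level DEFUN (not Vassil's actual code;
it assumes every parameter is written as a (name type) pair and ignores
&optional/&key/&rest):

  (eval-when (:compile-toplevel :load-toplevel :execute)
    (shadow 'defun))                ; our DEFUN, not CL:DEFUN

  (defmacro defun (name-and-type params &body body)
    (destructuring-bind (name return-type) name-and-type
      (let ((arg-names (mapcar #'first params))
            (arg-types (mapcar #'second params)))
        `(progn
           (declaim (ftype (function ,arg-types ,return-type) ,name))
           (cl:defun ,name ,arg-names
             (declare ,@(mapcar (lambda (n ty) `(type ,ty ,n))
                                arg-names arg-types))
             ,@body)))))

  ;; e.g.  (defun (foo integer) ((lyst cons) (x integer))
  ;;         (+ (length lyst) x))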

--
Vassil Nikolov <········@poboxes.com> www.poboxes.com/vnikolov
(You may want to cc your posting to me if I _have_ to see it.)
   LEGEMANVALEMFVTVTVM  (Ancient Roman programmers' adage.)

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    
From: Paul Rudin
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <m3u2toaa5i.fsf@shodan.demon.co.uk>
Vassil Nikolov <········@poboxes.com> writes:

> In article <··············@shodan.demon.co.uk>,
>   Paul Rudin <·····@shodan.demon.co.uk> wrote:
> > Kent M Pitman <······@world.std.com> writes:
> >
> > > Nothing in CL forbids you from type-declaring every variable.
> >
> > It might be nice to have some standard, succinct syntax for this.
> > Maybe something like:
> >
> > (defun  foo\integer (lyst\cons x\integer ...)
> >   ...)
> 
> I don't think we need a new standard for that.  The DEFMETHOD
> syntax: ``(parameter class)'' can be reused, so that the above
> example becomes:
> 
>   (defun (foo integer) ((lyst cons) (x integer) ...)
>     ...)

For this to be valid surely the standard would need to be modified?

I agree that your example is more in keeping with the style of CL and
therefore probably better. I was, however, deliberately vague about
exactly what the syntax given should be equivalent to. There are a
number of things that might be desirable, and you might want different
syntaxes (is that the plural form of syntax?) for these and
combinations thereof.

> 
> And the really good thing is that one doesn't have to change
> the implementation to do that (if one wants that syntax): the
> _user_ can do their own version of DEFUN which expands the above
> into something like:
> 
>   (progn
>     (declaim (ftype (function (cons integer ...) integer) foo))
>     (cl:defun foo (lyst x ...)
>       (declare (type cons lyst)
>                (type integer x))
>       ...))

Yes, picking your own syntax is fine up to a point, but other code may
have used the same syntax for a different purpose, and you then run
into problems if you want to combine your code with it.
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfwaevgebl2.fsf@world.std.com>
Paul Rudin <·····@shodan.demon.co.uk> writes:

> Vassil Nikolov <········@poboxes.com> writes:
> 
> > In article <··············@shodan.demon.co.uk>,
> >   Paul Rudin <·····@shodan.demon.co.uk> wrote:
> > > Kent M Pitman <······@world.std.com> writes:
> > >
> > > > Nothing in CL forbids you from type-declaring every variable.
> > >
> > > It might be nice to have some standard, succinct syntax for this.
> > > Maybe something like:
> > >
> > > (defun  foo\integer (lyst\cons x\integer ...)
> > >   ...)
> > 
> > I don't think we need a new standard for that.  The DEFMETHOD
> > syntax: ``(parameter class)'' can be reused, so that the above
> > example becomes:
> > 
> >   (defun (foo integer) ((lyst cons) (x integer) ...)
> >     ...)
> 
> For this to be valid surely the standard would need to be modified?

I'm not sure this was meant to be a question, but if so, the answer is,
I think, "no".

(shadow 'defun)
(defmacro defun ... stuff involving cl:defun ...)

Making the syntax be (foo integer) is bad, though.
There is an issue a lot of people don't talk about but that many of
us old-timers know which is that Meta-. (definition lookup in Emacs/Zmacs)
really wants the cadr position of definition forms to be "names".
Unless you're planning to make dispatch dependent on the return value
(which you can really only do in a strongly typed language and which
I'd argue is conceptually ill-defined even then), then (foo integer)
is not the name.  A better syntax is
 (defun foo ((list cons) (x integer) ... &return integer) ...)

Btw, DEFSTRUCT violates the rule about the CADR being just a name
and causes lots of problems in text editors.
From: Mike McDonald
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <7gvse1$2bi$1@spitting-spider.aracnet.com>
In article <···············@world.std.com>,
	Kent M Pitman <······@world.std.com> writes:

> Making the syntax be (foo integer) is bad, though.
> There is an issue a lot of people don't talk about but that many of
> us old-timers know which is that Meta-. (definition lookup in Emacs/Zmacs)
> really wants the cadr position of definition forms to be "names".

  That sounds like a pretty poor reason. It's not that hard to get emacs to
take the car if the cadr is a list. In Zmacs, you'd just have your new defun
macro make a call to that function that registers the definition. (Sorry I
can't remember the name of it and my Schema sources are off line at the
moment.)

  Mike McDonald
  ·······@mikemac.com
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfwvhe4e99s.fsf@world.std.com>
·······@mikemac.com (Mike McDonald) writes:

> In article <···············@world.std.com>,
> 	Kent M Pitman <······@world.std.com> writes:
> 
> > Making the syntax be (foo integer) is bad, though.
> > There is an issue a lot of people don't talk about but that many of
> > us old-timers know which is that Meta-. (definition lookup in Emacs/Zmacs)
> > really wants the cadr position of definition forms to be "names".
> 
>   That sounds like a pretty poor reason. It's not that hard to get emacs to
> take the car if the cadr is a list. In zemacs, you'd just have your new defun
> macro make a call to that function that registers the definition. (Sorry I
> can't remember the name of it and my Schema sources are off line at the
> moment.)

I don't agree.  It's not just Emacs but all potential programs that
operate on this.  And it's not just cadr, but a table that the program
would have to magically have and which there is no mechanism to
provide. 

On the Lisp Machine, there is an elaborate protocol for informing the
system of how to extract the name from the other stuff.  But
I have to say that I think it's infinitely better just to have that
be unnecessary.

Lisp has gotten a huge amount of its power not out of "picking the
best or most interesting way to do things" but out of "picking a known,
predictable way of doing something".  The power comes NOT from the goodness
of the choice, but from the lack of infinite quibbling over it.
I claim that this is an example of such.  Pushing for a change is
pushing for a life of infinite quibbling.

Don't accept that taking cadr to get the name is the right thing because
we've always done it that way.  Accept that we've always done it that way
because it's really the right thing.

JMO.
From: Mike McDonald
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <7h00ki$3o7$1@spitting-spider.aracnet.com>
In article <···············@world.std.com>,
	Kent M Pitman <······@world.std.com> writes:
> ·······@mikemac.com (Mike McDonald) writes:

> Lisp has gotten a huge amount of its power not out of "picking the
> best or most interesting way to do things" but out of "picking a known,
> predictable way of doing something".  The power comes NOT from the goodness
> of the choice, but from the lack of infinite quibbling over it.
> I claim that this is an example of such.  Pushing for a change is
> pushing for a life of infinite quibbling.

  I wasn't arguing that the standard should be changed to that but that if one
wanted to make their own "extension", that's a fairly reasonable way to
represent it.

> Don't accept that taking cadr to get the name is the right thing because
> we've always done it that way.  Accept that we've always done it that way
> because it's really the right thing.

  But there is precedent for the CADR being a list, namely DEFSTRUCT.
(Didn't one of the New/Old Flavors also have the CADR to defmethod being a
list at times? Seems like I remember it having something to do with specifying
before or after methods. Gnuemacs didn't like them either but a simple hack to
ctags fixed that.)

  Mike McDonald
  ·······@mikemac.com
From: Erik Naggum
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <3135131876721505@naggum.no>
* ·······@mikemac.com (Mike McDonald)
| But there is precedent for the CADR being a list, namely DEFSTRUCT.

  don't forget (DEFUN (SETF ...) ...)

#:Erik
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfwlnezv6nd.fsf@world.std.com>
·······@mikemac.com (Mike McDonald) writes:

>   But there is precedent for the CADR being a list, namely DEFSTRUCT.

DEFSTRUCT is always an editing disaster IMO when it is a list, not because
it's a list but because the list is not the name.

> (Didn't one of the New/Old Flavors also have the CADR to defmethod being a
> list at times?

I haven't said anything about the problem being that it is a list.
Indeed flavors did have (defmethod (frob foo) ...) and
(defmethod (frob foo :after) ...)
but that was *better* than what we have now because the name was the
whole list.  It's actually more confusing now where the name is spliced
into the top level of the defmethod.  But at least with the present
defmethod, foo is also arguably a name.

> Seems like I remember it having somethingto do with specifying
> before or after methods. Gnuemacs didn't like them either but a 
> simple hack to ctags fixed that.)

But at least it was a general-purpose hack to the notion of names that
allows the cadr to be the name, even if it takes list syntax to jump over
it and find the other end.  (There is some problematic part if whitespace
has multiple spaces or tabs, and they have to be canonicalized by editors
that use textual representations, but that can also be dealt with.) My
point is that the same single fix makes it work for (defun (setf foo) ...)
where (setf foo) is in the cadr and is the name.
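
For reference, the (DEFUN (SETF ...) ...) case in question looks like
this (a sketch with a hypothetical accessor named LOOKUP):

  (defvar *table* (make-hash-table))

  (defun lookup (key)
    (gethash key *table*))

  (defun (setf lookup) (new-value key)
    (setf (gethash key *table*) new-value))

After that, (setf (lookup 'color) 'red) works, and a definition-finding
tool has to treat the list (setf lookup) in the cadr as the name.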

This is all just a personal opinion, mind you, not a law.  But it's a
personal opinion I know others hold as well and I have not seen
written down anywhere in recent times, though we all talked about it
long ago.  It is not something you can't violate--it's just something
I personally recommend you don't and I won't vote in favor of when it
comes to that in committees.

(I'd like to see old-style DEFSTRUCT syntax flushed, but I won't vote
to flush it without first deprecating it and providing a long
transition period.  But in the creation of new operators, where there
is no compatibility issue involved, I'll push for things that make
life simpler, not because it's impossible to do the other way but
because it introduces needless and avoidable pain to do it the other
way.  As such, this particular piece of simplicity is one I think is
important and worth pushing for where possible.)
From: Paul Rudin
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <m3r9os9mvg.fsf@shodan.demon.co.uk>
Kent M Pitman <······@world.std.com> writes:

> Paul Rudin <·····@shodan.demon.co.uk> writes:
> 
> > Vassil Nikolov <········@poboxes.com> writes:
> > 
> > > In article <··············@shodan.demon.co.uk>,
> > >   Paul Rudin <·····@shodan.demon.co.uk> wrote:
> > > > Kent M Pitman <······@world.std.com> writes:
> > > >
> > > > > Nothing in CL forbids you from type-declaring every variable.
> > > >
> > > > It might be nice to have some standard, succinct syntax for this.
> > > > Maybe something like:
> > > >
> > > > (defun  foo\integer (lyst\cons x\integer ...)
> > > >   ...)
> > > 
> > > I don't think we need a new standard for that.  The DEFMETHOD
> > > syntax: ``(parameter class)'' can be reused, so that the above
> > > example becomes:
> > > 
> > >   (defun (foo integer) ((lyst cons) (x integer) ...)
> > >     ...)
> > 
> > For this to be valid surely the standard would need to be modified?
> 
> I'm not sure this was meant to be a question, but if so, the answer is,
> I think, "no".

OK I didn't express myself clearly. I know that you can modify the
default definition of defun, but I meant for the above to be valid in
the absence of such modification. 
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfwk8ujv67r.fsf@world.std.com>
Paul Rudin <·····@shodan.demon.co.uk> writes:

> OK I didn't express myself clearly. I know that you can modify the
> default definition of defun, but I meant for the above to be valid in
> the absence of such modification. 

Maybe I didn't express myself well either.  Just about nothing you can do
in the language is valid in the absence of accommodation by other programs.
You can't do (defun foo ...) without assuring you are in a namespace that
doesn't have FOO already there.  Saying that (defun foo ...) is valid always
assumes you have done this check.  Sometimes you "implement" this by
saying (shadow 'foo) and sometimes by (in-package "SOMETHING-ELSE") and
sometimes by (fmakunbound 'foo) but always you do something, even if
the something is the null thing because nothing is required.  

Even to bind a variable as in (let ((x 3)) ...) is invalid if the "..."
needs to access an outer x.   But we don't say you can't access both
x's at once without a change to the language, we just write
 (let ((x 2))
   (flet ((outer-x () x))
     (let ((x 3))
       (+ x (outer-x)))))
and suddenly we are on our way again without a need to involve the
language design committee.

You didn't merely say "for this to be valid you'd have to add a call
to shadow" you made the following claim (quoting whole paragraph):

> For this to be valid surely the standard would need to be modified?

Had you not made your statement this strong, I would not have bothered
to reply at all.

I just want to make it very clear that the language is a lot more 
accommodating than that, and as a rule, I doubt there's much of anything
for which the above statement is true, in the sense that the language 
permits you to build whole new languages inside it (unlike many other
languages).  And I get frustrated when people push back on the language to
solve problems they could solve themselves, especially when the "solution"
is about 1-line long.
From: Paul Rudin
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <m3pv4b9ta4.fsf@shodan.demon.co.uk>
Kent M Pitman <······@world.std.com> writes:


> 
> I just want to make it very clear that the language is a lot more 
> accommodating than that, and as a rule, I doubt there's much of anything
> for which the above statement is true, in the sense that the language 
> permits you to build whole new languages inside it (unlike many other
> languages).  And I get frustrated when people push back on the language to
> solve problems they could solve themselves, especially when the "solution"
> is about 1-line long.

I used the initial pseudo-code fragment as an example of the kind of
thing that it would be nice to do. It would be nice IMHO to have some
such succinct syntax in a number of different situations, not just at the
top of a defun form.

Most problems could be solved by people in one way or another, but
then what is the point of standardising stuff? When something is
likely to be widely used it's worth standardising. I may be entirely
wrong, but I suspect that if CL incorporated a standard, succinct
syntax for associating types with symbols and checking the types of
objects then it would get used a lot more than the current, somewhat
verbose, ways of doing this.
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfwaevfyx7v.fsf@world.std.com>
Paul Rudin <·····@shodan.demon.co.uk> writes:

> Most problems could be solved by people in one way or another, but
> then what is the point of standardising stuff?

This is a good question.  Here is my take on the answer:

Standardizing is what you do when you have not only just solved it yourself
but shared your solution around and you have good "current practice"
reason to think you have the "de facto" standard so that you can document
it to create normative effect.

Standardizing is what you do when you have many people doing certain things
in ways that those people all mutually agree should be brought together
under one cover and preferably have figured out what cover that should be.
Sometimes, with enough community spirit behind it, it's sufficient for them
to just wish the standards body would pick one, but mostly that's a bad
idea.  We had to do it with CLOS because it was so central to everything
else.  But, for example, we did not do that with FFI vs RPC because it wasn't
the committee's role to say what industry should decide--it's industry's
role to say what it wants and it is standardization efforts' role to react
to that.

Even CLIM, as an example, is not in my opinion "de facto standard enough"
to become a real standard because (a) not everyone accepts it as window
system of choice and (b) even those who use it feel it should change in
many ways that being standard would keep it from doing.

> When something is
> likely to be widely used it's worth standardising.

Not that my word is the final one, but I don't really agree.  I think
it's mostly worth standardizing when it's been in use either in its
present form or comparable form from more than one vendor or widely
among users (e.g., as with DEFSYSTEM).

It's not really that I think NO other such things are
possible.  Sometimes you can get a feature in that doesn't hurt things
and helps some others.  But largely I don't say that's the "purpose"
of standardizing; it's just more something you tolerate sometimes
because you can.

> I may be entirely
> wrong, but I suspect that if CL incorporated a standard, succinct
> syntax for associating types with symbols and checking the types of
> objects then it would get used a lot more than the current, somewhat
> verbose, ways of doing this.

I don't understand.  Declarations do this.  What's verbose is to
repeat the type with every use.  But in any case, since you can
personally extend the language to have the feature, it's easy for you
to gather together a community to speak to this point if you think
that's so. I doubt it is so.  But I would be convinced by numbers.

Note that I *would* be amenable to a proposal for a more general
mechanism that allowed users to associate generally additional
declaration information at any point in a code-walk (as was in CLTL2)
and to retrieve that under other circumstances.  We knew a lot of
people wanted that and didn't withdraw it for ANSI CL out of spite--we
just couldn't make the mechanics work in a way we were confident about
in the time allotted because it was untested.  There is a great
temptation in standards work to standardize things that are not
tested, but it's pretty risky.  Sometimes you get lucky, as with CLOS
and the condition system.  Sometimes you end up doing very flakey things.
A number of things in CL that are painful and weird (like the type
inheritance rules for arrays with various element types and storage
modes) are traceable to last-minute decisions that were not heavily
tested in practice before the original language design was rolled out.
From: Paul Rudin
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <m3k8ui9dg8.fsf@shodan.demon.co.uk>
Kent M Pitman <······@world.std.com> writes:

> Paul Rudin <·····@shodan.demon.co.uk> writes:

[snippity-snip]

> 
> > I may be entirely
> > wrong, but I suspect that if CL incorporated a standard, succinct
> > syntax for associating types with symbols and checking the types of
> > objects then it would get used a lot more than the current, somewhat
> > verbose, ways of doing this.
> 
> I don't understand.  Declarations do this.  What's verbose is to
> repeat the type with every use. 

Sure, declarations do it; but I'd like syntax that allows the
association between a symbol and its type to be declared at the
point in the code where it (the symbol) was introduced. Having to
name the symbol again in a subsequent declaration is (to my way of
thinking) somewhat cumbersome, as well as increasing the effort
required of anyone subsequently trying to understand the code.


There are lots of different forms with which it would be nice to be
able to do this: defun, lambda, let, let*, flet, labels,
destructuring-bind, multiple-value-bind, do, do*, dolist, loop....
From: Rob Warnock
Subject: Standardization  [was: Re: Newbie questions...]
Date: 
Message-ID: <7h8qs4$91jtr@fido.engr.sgi.com>
Kent M Pitman  <······@world.std.com> wrote:
+---------------
| There is a great temptation in standards work to standardize things
| that are not tested, but it's pretty risky.
+---------------

Too true. Unfortunately, the groups which standardize communications
hardware, protocols, & data formats seem to have forgotten this. Ethernet
and HIPPI were about the last to have a working implementation *before*
standardization. In that arena at least, the usual case these days is
design-by-standardization-committee, *then* implementation. (*sigh*)

Some have attributed this trend to a competitive/political principle
whimsically called by yours truly "maximum mutual disadvantage". That is,
if there are no working implementations before the standard gets written,
then no one vendor has a head-start advantage over another.

Common Lisp is indeed fortunate to have been hammered out before this
trend became as strong as it is today...


-Rob

-----
Rob Warnock, 8L-855		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		FAX: 650-933-0511
Mountain View, CA  94043	PP-ASEL-IA
From: Erik Naggum
Subject: Re: Standardization  [was: Re: Newbie questions...]
Date: 
Message-ID: <3135419859061714@naggum.no>
* ····@rigden.engr.sgi.com (Rob Warnock)
| Some have attributed this trend to a competitive/political principle
| whimsically called by your truly "maximum mutual disadvantage".

  it started when some new suits at ISO wanted ISO to be more visible in
  the market and feared "competition" from fairly stupid companies that
  pushed their highly inferior "de facto" standards.  there used to be
  directives that made it virtually impossible to have "invention by
  committee" in ISO -- but these were changed.

  this is another case where competition should only be engaged in by
  people motivated by a sense of their own strength, not fear of the
  perceived strength of random others, and should not be engaged in at
  all by people who believe in sports-style competition, where there is
  one winner and everybody else are necessarily losers until they can
  beat the winner.  this is not one of those things the market can sort
  out, either, except that those who portray themselves as losers if
  they aren't the only winner will hopefully vanish completely when they
  are no longer winners.

#:Erik
From: Paul Rudin
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <m3ogjv9st4.fsf@shodan.demon.co.uk>
Kent M Pitman <······@world.std.com> writes:
> 
> Even to bind a variable as in (let ((x 3)) ...) is invalid if the "..."
> needs to access an outer x.   But we don't say you can't access both
> x's at once without a change to the language, we just write
>  (let ((x 2))
>    (flet ((outer-x () x))
>      (let ((x 3))
>        (+ x (outer-x)))))


[BTW, why would you want a flet here? what's wrong with:

(let ((x 2))
 (let ((outer-x x)
       (x 3))
   (+ x outer-x)))

?]
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfwbtfvyxz6.fsf@world.std.com>
Paul Rudin <·····@shodan.demon.co.uk> writes:

> Kent M Pitman <······@world.std.com> writes:
> > 
> > Even to bind a variable as in (let ((x 3)) ...) is invalid if the "..."
> > needs to access an outer x.   But we don't say you can't access both
> > x's at once without a change to the language, we just write
> >  (let ((x 2))
> >    (flet ((outer-x () x))
> >      (let ((x 3))
> >        (+ x (outer-x)))))
> 
> 
> [BTW, why would you want a flet here? what's wrong with:
> 
> (let ((x 2))
>  (let ((outer-x x)
>        (x 3))
>    (+ x outer-x)))

If you ask that, you have to ask why I have bindings at all.
I meant this to stand for the more general case of:

 (let ((x ...))
    ... lots of code that might read or assign x ...
    (flet ((outer-x () x)
           (set-outer-x (y) (setq x y)))
      (let ((x ...))
        ... more code that reads or writes x or outer-x...
      ))
    ... more code reading or assigning x ...)

My point was not that you could copy x around but that in fact there
are in-language ways of referring to the original x.  Just in case
you needed to.
From: Marco Antoniotti
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <lw67634wvp.fsf@copernico.parades.rm.cnr.it>
Kent M Pitman <······@world.std.com> writes:

> Paul Rudin <·····@shodan.demon.co.uk> writes:
> 
> > Vassil Nikolov <········@poboxes.com> writes:
> > 
> > > In article <··············@shodan.demon.co.uk>,
> > >   Paul Rudin <·····@shodan.demon.co.uk> wrote:
> > > > Kent M Pitman <······@world.std.com> writes:
> > > >
> > > > > Nothing in CL forbids you from type-declaring every variable.
> > > >
> > > > It might be nice to have some standard, succinct syntax for this.
> > > > Maybe something like:
> > > >
> > > > (defun  foo\integer (lyst\cons x\integer ...)
> > > >   ...)
> > > 
> > > I don't think we need a new standard for that.  The DEFMETHOD
> > > syntax: ``(parameter class)'' can be reused, so that the above
> > > example becomes:
> > > 
> > >   (defun (foo integer) ((lyst cons) (x integer) ...)
> > >     ...)
> > 
> > For this to be valid surely the standard would need to be modified?
> 
> I'm not sure this was meant to be a question, but if so, the answer is,
> I think, "no".
> 
> (shadow 'defun)
> (defmacro defun ... stuff involving cl:defun ...)
> 
> Making the syntax be (foo integer) is bad, though.
> There is an issue a lot of people don't talk about but that many of
> us old-timers know which is that Meta-. (definition lookup in Emacs/Zmacs)
> really wants the cadr position of definition forms to be "names".
> Unless you're planning to make dispatch dependent on the return value
> (which you can really only do in a strongly typed language and which
> I'd argue is conceptually ill-defined even then), then (foo integer)
> is not the name.  A better syntax is
>  (defun foo ((list cons) (x integer) ... &return integer) ...)

What about the

 (defun foo (list x)
   (declare (type cons list)
            (type integer x)
            (values integer))
   <body>)

I have always wondered why this is not an accepted solution (modulo
syntax of course). I sort of understand that having simply VALUES as a
'declaration identifier' (3.3.3 ANSI CL) may cause some problems at
the DECLAIM/PROCLAIM level (it wouldn't be clear what the declaration
would apply to), but the idea seems correct.  CMUCL has an
implementation of this scheme.

> Btw, DEFSTRUCT violates the rule about the CADR being just a name
> and causes lots of problems in text editors.

Well, also the

	(defmethod zut :after ((....

wreaks havoc in cl-indent in Emacs.

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfwiua3v5p2.fsf@world.std.com>
Marco Antoniotti <·······@copernico.parades.rm.cnr.it> writes:

> What about the
> 
>  (defun foo (list x)
>    (declare (type cons list)
>             (type integer x)
>             (values integer))
>    <body>)
> 
> I have always wondered why this is not an accepted solution (modulo
> syntax of course). I sort of understand that having simply VALUES as a
> 'declaration identifier' (3.3.3 ANSI CL) may cause some problems at
> the DECLAIM/PROCLAIM level (it wouldn't be clear what the declaration
> would apply to), but the idea seems correct.  CMUCL has an
> implementation of this scheme.

So does Genera.  It's not valid CL, because you're not naming what you're
declaring.  VALUES is intended for use in FTYPE declarations and in
THE expressions.  But it was commonplace in Genera to see both this and
ARGLIST declarations, which were both very handy for Control-Shift-A,
the thing that tells you the args and values of a function.

 (defun my-open (frob &rest open-args)
   (declare (arglist frob &key element-type direction) 
	    (values stream))
   (apply #'open (frob->filename frob) open-args))

> > Btw, DEFSTRUCT violates the rule about the CADR being just a name
> > and causes lots of problems in text editors.
> 
> Well, also the
> 	(defmethod zut :after ((....
> > wreaks havoc in cl-indent in Emacs.

I don't like this defmethod syntax, but I don't think it would be a problem
for cl-indent if they implemented indent the way I implemented it in Teco.
They ported a lot of my and others' Teco libraries to gnu-emacs when they
brought it up, but they left behind a lot of features.  The Teco-based
indenter assumed anything starting with (def... ) should do body-indent
on the first line after the (def...) that was not a continuation line of
a subform that started on the first line.  [There were ways to override
this if you had definitions that didn't match the default, but this was
not a problem. def expressions vary widely in the number of forms they
have, and having a rule like "body starts at 2nd or 3rd form" is a bad one.
The right DEFAULT rule is "definitions have junk on the first line and
body on the rest of the lines".  Incidentally, this rule even accommodates
defstruct in its full-blown form.]

I think I might have the code somewhere to the old teco-based indenter.
Maybe sometime for grins I'll publish it.  Teco was a wonder to behold.
From: Lieven Marchand
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <m36762tmfr.fsf@localhost.localdomain>
Kent M Pitman <······@world.std.com> writes:

> I think I might have the code somewhere to the old teco-based indenter.
> Maybe sometime for grins I'll publish it.  Teco was a wonder to behold.

To make the circle round, are there Teco docs available to write a
Teco emulator for Emacs? ;-)

-- 
Lieven Marchand <···@bewoner.dma.be>
If there are aliens, they play Go. -- Lasker
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfw6762z36e.fsf@world.std.com>
Lieven Marchand <···@bewoner.dma.be> writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > I think I might have the code somewhere to the old teco-based indenter.
> > Maybe sometime for grins I'll publish it.  Teco was a wonder to behold.
> 
> To make the circle round, are there Teco docs available to write a
> Teco emulator for Emacs? ;-)

There is, but I'm not sure about the intellectual property ownership.  Note well,
you'd need ITS Teco, not DEC Teco.  DEC Teco was a pale shadow of ITS Teco
and could never possibly have accommodated Emacs.  ITS Teco was to DEC Teco
like CL is to Lisp 1.5.  Every character was a command.  And every character
could be modified by : or @ (which is where format gets the idea, btw). Many
characters did different things based on number of arguments they got [either
one or two].  DEC Teco had only a fraction of this.  Not to mention many fewer
q-registers.  ITS Teco had one q-register [built-in storage name] per 
keyboard key in ASCII+control+meta, plus an extended namespace of variables
and on and on.  It would be a lot of work.  Not sure whom to ask.  Maybe try
alt.sys.pdp10, actually, rather than the teco newsgroup.  Not sure if most
people on the teco group would know what ITS Teco was.  The document you
want to ask for is called "TECORD >".  Anyone who doesn't recognize it
under that name doesn't know the right one.
From: Tim Bradshaw
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <ey34slmmbc9.fsf@lostwithiel.tfeb.org>
* Lieven Marchand wrote:

> To make the circle round, are there Teco docs available to write a
> Teco emulator for Emacs? ;-)

Emacs already has a teco emulator, of course. I think it's a copy of a
copy of the DEC one, so it's probably not very like the ITS one.

--tim
From: Paolo Amoroso
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <3735a973.1168674@news.mclink.it>
On 09 May 1999 15:14:16 +0200, Lieven Marchand <···@bewoner.dma.be> wrote:

> To make the circle round, are there Teco docs available to write a
> Teco emulator for Emacs? ;-)

I seem to have read that a Teco emulator for Emacs is already available.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Marco Antoniotti
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <lwwvyhid6c.fsf@copernico.parades.rm.cnr.it>
Kent M Pitman <······@world.std.com> writes:

> Marco Antoniotti <·······@copernico.parades.rm.cnr.it> writes:
> 
> > What about the
> > 
> >  (defun foo (list x)
> >    (declare (type cons list)
> >             (type integer x)
> >             (values integer))
> >    <body>)
> > 
> > I have always wondered why this is not an accepted solution (modulo
> > syntax of course). I sort of understand that having simply VALUES as a
> > 'declaration identifier' (3.3.3 ANSI CL) may cause some problems at
> > the DECLAIM/PROCLAIM level (it wouldn't be clear what the declaration
> > would apply to), but the idea seems correct.  CMUCL has an
> > implementation of this scheme.
> 
> So does Genera.  It's not valid CL, because you're not naming what you're
> declaring.  VALUES is intended for use in FTYPE declarations and in
> THE expressions.  But it was commonplace in Genera to see both this and
> ARGLIST declarations, which were both very handy for Control-Shift-A,
> the thing that tells you the args and values of a function.

I understand the problems with a top-level

(declaim (values integer symbol))  ; VALUES of WHAT?!?

And you could always rewrite

	(defun foo (list x)
	   (declare (ftype (function (cons integer) integer) foo))
	   <body>)

But would it help?

I think this could be a nice addition to ANSI. Implementation cost is
zero. Add

(declaim (declaration values))

somewhere in the implementation and be done with it.  Of course, the
"implementable" semantics should be well defined.


> 
>  (defun my-open (frob &rest open-args)
>    (declare (arglist frob &key element-type direction) 
> 	    (values stream))
>    (apply #'open (frob->filename frob) open-args))
> 
> > > Btw, DEFSTRUCT violates the rule about the CADR being just a name
> > > and causes lots of problems in text editors.
> > 
> > Well, also the
> > 	(defmethod zut :after ((....
> > wreaks havoc in cl-indent in Emacs.
> 
> I don't like this defmethod syntax, but I don't think it would be a problem
> for cl-indent if they implemented indent the way I implemented it in Teco.
> They ported a lot of my and others' Teco libraries to gnu-emacs when they
> brought it up, but they left behind a lot of features.  The Teco-based
> indenter assumed anything starting with (def... ) should do body-indent
> on the first line after the (def...) that was not a continuation line of
> a subform that started on the first line.  [There were ways to override
> this if you had definitions that didn't match the default, but this was
> not a problem. def expressions vary widely in the number of forms they
> have, and having a rule like "body starts at 2nd or 3rd form" is a bad one.
> The right DEFAULT rule is "definitions have junk on the first line and
> body on the rest of the lines".  Incidentally, this rule even accommodates
> defstruct in its full-blown form.]
> 
> I think I might have the code somewhere to the old teco-based indenter.
> Maybe sometime for grins I'll publish it.  Teco was a wonder to
> behold.

Well, I never felt the thrill of TECO programming (which makes me
think of your age, given mine :) )

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Tim Bradshaw
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <ey3aevflarl.fsf@lostwithiel.tfeb.org>
* Marco Antoniotti wrote:

>  (defun foo (list x)
>    (declare (type cons list)
>             (type integer x)
>             (values integer))
>    <body>)

> I have always wondered why this is not an accepted solution (modulo
> syntax of course). I sort of understand that having simply VALUES as a
> 'declaration identifier' (3.3.3 ANSI CL) may cause some problems at
> the DECLAIM/PROCLAIM level (it wouldn't be clear what the declaration
> would apply to), but the idea seems correct.  CMUCL has an
> implementation of this scheme.

There is a slightly trivial but unfortunate problem with this in that
Symbolics CL (and perhaps also the other LispM flavours of Lisp) uses
a VALUES declaration in a different way -- namely to tell the system
what the *names* of the value(s) that the function returns are.  This
is used pervasively by the many tools that tell you things about the
function you're looking at.

I guess it's no longer a real problem that Genera does this, except
that there are still large Lisp systems which have this convention in
them.

--tim
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfw90azyx4e.fsf@world.std.com>
Tim Bradshaw <···@tfeb.org> writes:

> There is a slightly trivial but unfortunate problem with this in that
> Symbolics CL (and perhaps also the other LispM flavours of Lisp) uses
> a VALUES declaration in a different way -- namely to tell the system
> what the *names* of the value(s) that the function returns are.

Oh, drat.  You're right.  I said something else about values and was wrong.
Thanks for (even inadvertently) correcting me.
From: Marco Antoniotti
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <lwvhe1id0s.fsf@copernico.parades.rm.cnr.it>
Kent M Pitman <······@world.std.com> writes:

> Tim Bradshaw <···@tfeb.org> writes:
> 
> > There is a slightly trivial but unfortunate problem with this in that
> > Symbolics CL (and perhaps also the other LispM flavours of Lisp) uses
> > a VALUES declaration in a different way -- namely to tell the system
> > what the *names* of the value(s) that the function returns are.
> 
> Oh, drat.  You're right.  I said something else about values and was wrong.
> Thanks for (even inadvertently) correcting me.

Well. Too bad for Genera.  As an alternative we could overload RETURN.

	(defun zot (x y z)
	  (declare (type integer z)
	           (type (mod 256) x y)
		   (return integer integer))
	  <body>)

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Sunil Mishra
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <efyogjvijbl.fsf@cleon.cc.gatech.edu>
Marco Antoniotti <·······@copernico.parades.rm.cnr.it> writes:

> Well, also the
> 
> 	(defmethod zut :after ((....
> 
> wreaks havoc in cl-indent in Emacs.
> 
> Cheers

They seem to have gotten a fix for this in xemacs. I was *quite* pleased
:-) (Though I wonder if it will deal with arbitrary method combinations
correctly...)

Sunil
From: Pierre R. Mai
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <874sloxx3l.fsf@orion.dent.isdn.cs.tu-berlin.de>
Paul Rudin <·····@shodan.demon.co.uk> writes:

> Yes, picking your own syntax is fine up to a point, but other code may
> have used the same syntax for a different purpose, and you then run
> into problems if you want to combine your code with it.

But with Vassil's solution, the package system would solve those
problems for you[1], whereas with reader-macros, you'd have to do
much more to keep out of trouble.

The only problem the package system can't solve IMHO is that code
not written by you will most likely not have the amount of type
declarations you'd like, which in some circumstances will weaken
your implementation's ability to do type-inference and type-checking
considerably.  But short of changing the standard to require type
declarations, no solution will help you here.  Of course you can
still declaim ftype's for those functions from the outside, but I'd
consider this practice fairly rude...

BTW: Vassil's syntax (optionally extended to allow specifying a
return type) is not a lot of work to implement, since you can most
probably snarf a defun-lambda-list-parser from either CMUCL or PCL.

Regs, Pierre.

Footnotes: 
[1]  In fact I'd claim that this area of use is one of the main
strengths of the package system. See the Symbolics Lisp machines,
which for a long time had a large number of different dialects
available at the same time, in the same address space...

You'd just define a package "RUDIN-CL", which would import&re-export
all the CL symbols, with the exception of defun&co., which would get
their own definitions in RUDIN-CL.  In fact in a previous discussion
some time ago, Tim Bradshaw posted a nice macro, which makes the task
of defining dialect packages a lot nicer.  Just search via DejaNews
for Message-ID <···············@todday.aiai.ed.ac.uk>.
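
A sketch of such a dialect package (the package name is the one from
the footnote; the DO-EXTERNAL-SYMBOLS loop stands in for spelling out
every CL export, and RUDIN-CL::DEFUN would then be given its own macro
definition along the lines sketched earlier in the thread):

  (defpackage "RUDIN-CL"
    (:use "COMMON-LISP")
    (:shadow "DEFUN"))

  (in-package "RUDIN-CL")

  ;; Re-export everything COMMON-LISP exports; for DEFUN this exports
  ;; the shadowing RUDIN-CL::DEFUN instead of CL:DEFUN.
  (do-external-symbols (s "COMMON-LISP")
    (export (find-symbol (symbol-name s) "RUDIN-CL") "RUDIN-CL"))

Client code then does (:use "RUDIN-CL") instead of (:use "COMMON-LISP")
and picks up the typed DEFUN without any other changes.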

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Sam Steingold
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <m3yaj0lfco.fsf@eho.eaglets.com>
>>>> In message <············@nnrp1.deja.com>
>>>> On the subject of "Re: Newbie questions [Followup to comp.lang.lisp]"
>>>> Sent on Fri, 07 May 1999 18:32:11 GMT
>>>> Honorable Vassil Nikolov <········@poboxes.com> writes:
 >> 
 >> I don't think we need a new standard for that.  The DEFMETHOD
 >> syntax: ``(parameter class)'' can be reused, so that the above
 >> example becomes:
 >> 
 >>   (defun (foo integer) ((lyst cons) (x integer) ...)
 >>     ...)

what about optional arguments with default values?

-- 
Sam Steingold (http://www.goems.com/~sds) running RedHat6.0 GNU/Linux
Micros**t is not the answer.  Micros**t is a question, and the answer is Linux,
(http://www.linux.org) the choice of the GNU (http://www.gnu.org) generation.
Trespassers will be shot.  Survivors will be prosecuted.
From: Pierre R. Mai
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <871zgsxwsc.fsf@orion.dent.isdn.cs.tu-berlin.de>
Sam Steingold <···@goems.com> writes:

> what about optional arguments with default values?

It is still possible to do this along those lines, but IMHO it gets a
bit messy:

(defun myfun ((a double-float) &optional ((b double-float) 0.0d0) (x 0.1d0))
  (* a b x))

The same might apply to &key args, &aux and even &rest, if one so
likes.  One could even extend the syntax to allow the specification of 
a return type:

(defun myfun (integer 42 42) ((a (integer 1 42)) (b (integer 1 42)))
  (* a b))

Regs, Pierre.

PS: I'd agree with anyone who might find the above syntax slightly
perplexing.  I still prefer (declare ...) inside the function body.
The only addition I think would be nice is a "values" declaration
like CMU CL's, which would allow me to declare the return type of my
function within the body as well.

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfw4sloh04f.fsf@world.std.com>
····@acm.org (Pierre R. Mai) writes:

> Sam Steingold <···@goems.com> writes:
> 
> > what about optional arguments with default values?
> 
> It is still possible to do this along those lines, but IMHO it gets a
> bit messy:
> 
> (defun myfun ((a double-float) &optional ((b double-float) 0.0d0) (x 0.1d0))
>   (* a b x))
> 
> The same might apply to &key args,

Except that keywords already use this, as in:

 ((lambda (&key ((:fu foo) 3 foo-p)) foo) :fu 4) => 4

so you'd do

 ( ... &key (((:b bee) integer) 3 b-p) ...)
or 
 ( ... &key ((:b bee integer) 3 b-p) ... )
or
 &key (bee 3 b-p integer)
but any way you cut it, it's messy... and somewhat inconsistent.

Declaration syntax is less messy, IMO.
From: Bill Newman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <wnewmanFBHAEt.HpK@netcom.com>
Kent M Pitman (······@world.std.com) wrote:
: ····@acm.org (Pierre R. Mai) writes:

: > Sam Steingold <···@goems.com> writes:
: > 
: > > what about optional arguments with default values?
: > 
: > It is still possible to do this along those lines, but IMHO it gets a
: > bit messy:
: > 
: > (defun myfun ((a double-float) &optional ((b double-float) 0.0d0) (x 0.1d0))
: >   (* a b x))
: > 
: > The same might apply to &key args,

: Except that keywords already use this, as in:

:  ((lambda (&key ((:fu foo) 3 foo-p)) foo) :fu 4) => 4

: so you'd do

:  ( ... &key (((:b bee) integer) 3 b-p) ...)
: or 
:  ( ... &key ((:b bee integer) 3 b-p) ... )
: or
:  &key (bee 3 b-p integer)
: but any way you cut it, it's messy... and somewhat inconsistent.

: Declaration syntax is less messy, IMO.

I personally hate typing names more than once, a la

  (DEFUN FOO (..) ..)
  (DECLAIM (FTYPE (..) FOO)).

because it ends up being one more maintenance headache when names
change. (Does anyone else change the names and calling conventions of
things a lot when they're setting up a program?)  For about two years
I've been using my own

  (DEFMACRO DEF-TYPED-FUN (FNN
                           (LL          ; lambda list
                            LLT         ; LL types (suitable for DECLAIM FTYPE)
                            &OPTIONAL
                            (RETURN-TYPE T)))

and also DEF-TYPED-VAR along the same lines, largely to avoid this.
(Why do I add so many declarations to programs when they're unstable
enough that they're likely to change?  I use CMUCL, which considers[1]
type declarations to be assertions, and as an old C++ programmer I
find this quite helpful -- I never got used to being blindsided by
mistakes that the compiler used to catch!)
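
Filling in the elided part, a DEF-TYPED-FUN with that interface might
look like this (a sketch, not Bill's actual macro; the &BODY parameter
is an assumption, since the fragment above omits it):

  (defmacro def-typed-fun (fnn (ll llt &optional (return-type t)) &body body)
    `(progn
       (declaim (ftype (function ,llt ,return-type) ,fnn))  ; DECLAIM first
       (defun ,fnn ,ll ,@body)))

  ;; e.g.  (def-typed-fun scale ((x n) (double-float fixnum) double-float)
  ;;          (* x n))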

  Bill Newman
  ·······@netcom.com

[1] Except -- argh!! -- it sometimes doesn't anymore, since it's undergone
    a lot of maintenance and not everyone considers this guarantee as 
    important (and brilliant) as I do.
From: Kent M Pitman
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <sfwg156njb3.fsf@world.std.com>
·······@netcom.com (Bill Newman) writes:

>   (DEFUN FOO (..) ..)
>   (DECLAIM (FTYPE (..) FOO)).

I recommend the DECLAIM precede, btw.  If the compiler doesn't know to
compile the DEFUN in the way you mention, the DECLAIM may not be able
to produce the effect after-the-fact.
From: Erik Naggum
Subject: Re: Newbie questions
Date: 
Message-ID: <3134853317084191_-_@naggum.no>
* ·····@removethisbeforesending.cetasoft.com (Joshua Scholar)
| The overall point is that type checking saves you from tons and tons of
| late night typos and logic errors.

  well, type checking is necessary, but it appears that you don't know the
  difference between compile-time and run-time type checking, and assume
  that without compile-time type checking, there wouldn't be any.  _that_
  would be a source of tons and tons of typos and logic errors.  however,
  the ridiculously simple-minded static type checking in C++ restricts you
  to a single line of inheritance, has no universal supertype, and offers
  no way to know the type of an object at run-time except by embedding it
  in a class and using RTTI.  that is sufficiently inconvenient that the
  customary way to deal with multiple types of return values is to use an
  "invalid value", like the infamous NULL pointer.

| Passing the wrong parameter, parameters in the wrong order, the wrong
| subfield etc. are common typos and often caught by the compiler -
| especially if you design your class interfaces to catch as much as
| possible.  In code that rarely runs or isn't expected to run under normal
| conditions, this sort of correctness checking is very important.

  it sounds like you think you're telling people something new.  why?  this
  is so obvious that it's been taken care of in much better ways than
  requiring the programmer to declare the types of _all_ objects _explicitly_, which is,
  unsurprisingly, a major source of typos and logic errors, not to mention
  recompilation and header file (interface) changes that need to propagate
  to other team members.

  oh, by the way, since I see your favorite argument coming: knowing C++ is
  part of growing up.  discarding C++ is a matter of course once you have
  grown up.  explicit type declarations are useful for new programmers the
  same way kids' bicycles have training wheels.  few kids argue against
  their removal.

#:Erik
From: Tim Bradshaw
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <ey37lqoyoub.fsf@lostwithiel.tfeb.org>
* Joshua Scholar wrote:

> Passing the wrong parameter, parameters in the wrong order, the wrong
> subfield etc. are common typos and often caught by the compiler -
> especially if you design your class interfaces to catch as much as
> possible.  In code that rarely runs or isn't expected to run under
> normal conditions, this sort of correctness checking is very
> important.

This is a good argument.  It would be more convincing if there were
empirical evidence that C++ systems are more robust than CL ones
developed in similar circumstances.  Although I'm working from small
samples, I'd say the evidence is that C++ systems are several orders
of magnitude less robust than CL ones.

Also, of course, CL supports type declarations considerably more
precise than C++, and compilers are free to check those at compile
time.  They typically don't, I guess because it turns out empirically
that this stuff doesn't help you much.

--tim
From: Raymond Toy
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <4nvhe7bufj.fsf@rtp.ericsson.se>
>>>>> "Tim" == Tim Bradshaw <···@tfeb.org> writes:

    Tim> Also, of course, CL supports type declarations considerably more
    Tim> precise than C++, and compilers are free to check those at compile
    Tim> time.  They typically don't, I guess because it turns out empirically
    Tim> that this stuff doesn't help you much.

CMUCL can handle these precise type declarations, and I've found that
it can help out quite a bit.  If you declare something as, say,
(integer 0 42), it might be possible for the compiler to deduce that
all operations always result in a fixnum, and therefore use only
fixnum operators instead of resorting to a generic integer arithmetic
routine. 
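For example (a small sketch; exactly how much the compiler can prove
depends on the CMUCL version and the OPTIMIZE settings):

    (defun sum-small (x y)
      (declare (type (integer 0 42) x y)
               (optimize (speed 3) (safety 1)))
      ;; x + y is provably in [0, 84], so this can compile down to
      ;; fixnum arithmetic with no generic dispatch or overflow check.
      (+ x y))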

Ray
From: Joshua Scholar
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <3730fc1d.5104246@news.select.net>
On 05 May 1999 08:09:48 +0100, Tim Bradshaw <···@tfeb.org> wrote:

>* Joshua Scholar wrote:
>
>> Passing the wrong parameter, parameters in the wrong order, the wrong
>> subfield etc. are common typos and often caught by the compiler -
>> especially if you design your class interfaces to catch as much as
>> possible.  In code that rarely runs or isn't expected to run under
>> normal conditions, this sort of correctness checking is very
>> important.
>
>This is a good argument.  It would be more convincing if there was
>empirical evidence that C++ systems are more robust than CL ones
>developed in similar circumstances.  Although I'm working from small
>samples, I'd say the evidence is that C++ systems are several orders
>of magnitude less robust than CL ones.
>

Well, as I said in my last message, this whole subject is a sort of
deceptive detour away from my actual question.  I wasn't trying to say
that CL is inferior in general, I was just sidetracked into defending
the limitations I have in my job as a game programmer - when that
wasn't my original question at all.

C++ has quite enough power to support all kinds of robust paradigms,
you just need to have the right tools and practices.  My own personal
programming style emphasizes keeping myself out of trouble and
building the tools it takes to represent my problems abstractly.  Many
programmers, for instance, find writing templates to be a forbiddingly
hard process - it was hard for me too for the first few years, but I
kept forcing myself to do it until it became second nature.

So I don't need empirical evidence that average programmers can write
robust C++, all I need to know is that the three programmers in my
office can.

You understood that my point was that when you don't have time to test
all of your control paths, having even a weak form of automatic code
verification is much more of a necessity than a luxury.  I keep hearing
that LISP supports building extensive layers on top of the language.
Perhaps you can build an advanced compile-time type checking and
constraint verification system on CL.  Lint for CL.

If you want to support low-level programmers (like game programmers
and device driver programmers) then you could also add:
1. A more limited extension to the object system that supports inline
methods (and therefore static linking).
2. A Modula-like section of unsafe operations and memory management.
Sometimes you need to get away from Nanny.  For this part you would
probably need the compiler source.

I hear people say you should use the right tool for the job, but there
is no reason that one language couldn't be eventually extended to
cover all needs.  And since the LISP community has always played with
features that the rest of the world considered too exotic or too
dangerous, it doesn't sound impossible that y'all could extend LISP
again.  This time, for once, it would involve extending LISP downward
toward meeting the lower level languages instead of upward toward the
experimental horizons.  It wouldn't do any harm, right?

Oh, and while you're at it, why not define a standard parser for
mathematical notation input.  Lots of people are used to mathematical
notations and want to be able to program with them.

Joshua Scholar
From: Philip Lijnzaad
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <u7u2tr3nze.fsf@ebi.ac.uk>
On Wed, 05 May 1999 07:54:24 GMT, 
"Joshua" == Joshua Scholar <·····@removethisbeforesending.cetasoft.com> writes:

Joshua> I was just sidetracked into defending the limitations I have in my
Joshua> job as a game programmer - when that wasn't my original question at
Joshua> all.

sorry to continue the side-track, and maybe this has been mentioned before,
but I seem to remember that Super Mario was developed/prototyped using a Lisp
system written/sold/resold? by Allegro Inc. And I would imagine that in
general Lisp is the perfect match for the games area, due to its immense
flexibility. You would have your high level code spit out assembler or so.

Joshua> C++ has quite enough power to support all kinds of robust paradigms,
Joshua> you just need to have the right tools and practices.  

Many programs/programmers seem to do this by building their own dynamic
typing systems etc. into C or C++, which sounds like a bad case of re-use to
me.

Joshua> Many programmers, for instance, find writing templates to be a
Joshua> forbiddingly hard process - it was hard for me too for the first few
Joshua> years, but I kept forcing myself to do it until it became second
Joshua> nature.

(is C++ the right language if you need a few years to master it? Hell, even
lambda, mapcar, and closures are easier)

Joshua> Oh, and while you're at it, why not define a standard parser for
Joshua> mathematical notation input.  Lots of people are used to mathematical
Joshua> notations and want to be able to program with them.

I remember seeing one a while ago on this newsgroup (or comp.lang.scheme);
I'm sure it's floating around somewhere, I'll try and see if I can find it.

Cheers,
                                                                      Philip
-- 
The mail transport agent is not liable for any coffee stains in this message
-----------------------------------------------------------------------------
Philip Lijnzaad, ········@ebi.ac.uk | European Bioinformatics Institute
+44 (0)1223 49 4639                 | Wellcome Trust Genome Campus, Hinxton
+44 (0)1223 49 4468 (fax)           | Cambridgeshire CB10 1SD,  GREAT BRITAIN
PGP fingerprint: E1 03 BF 80 94 61 B6 FC  50 3D 1F 64 40 75 FB 53
From: Johan Kullstam
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <ud80fsjt6.fsf@res.raytheon.com>
·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:

> Oh, and while you're at it, why not define a standard parser for
> mathematical notation input.  Lots of people are used to mathematical
> notations and want to be able to program with them.

lisp has infix syntax.  while this may seem to be catering to the
machine, it is a small price to pay in order to reap the benefits of a
regular syntax.  (writing the reader/parser is easier with infix, but
that's not the reason for using it.)  you can make your own
`operators' (since they're all functions).  

C++ restricts you to overloading the existing set of operators or
making functions (which are infix in C++ too).  what if *two* (or
more) kinds of multiplication make sense?  e.g., let A and B be 3x3
matrices.  you might want to multiply them as matrices, or you may
wish to multiply element-wise.  matlab has * and .* operators to
handle this.  you can overload the * operator in C++ but good luck in
introducing a .* operator.  ok, so you make a function or cast the
matrices to element-wise beasts and * away on that.
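in lisp the two products are simply two ordinary functions with whatever
names you like (m* and m.* here are made up for the example), and both sit
in the usual operator slot:

   (defun m* (a b)                        ; matrix product
     (let* ((n (array-dimension a 0))
            (k (array-dimension a 1))
            (m (array-dimension b 1))
            (c (make-array (list n m) :initial-element 0)))
       (dotimes (i n c)
         (dotimes (j m)
           (dotimes (p k)
             (incf (aref c i j)
                   (* (aref a i p) (aref b p j))))))))

   (defun m.* (a b)                       ; element-wise product
     (let ((c (make-array (array-dimensions a))))
       (dotimes (i (array-total-size a) c)
         (setf (row-major-aref c i)
               (* (row-major-aref a i) (row-major-aref b i))))))

   ;; (m.* a (m* a b))  -- no operator-overloading gymnastics required.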

so far it's just silly notations, but what happens when you want to
scale up and use templates or macros?  it is here that the fact that
*everything* is infix in lisp makes a big difference.  the
function/operator occupies the same slot (i.e., first in a list) no
matter what it does.  the lack of myriad special cases and syntax
contortions really pays off in the end since the macros can be more
ambitious in power and scope.

the anonymous functions and closures are also part of this.  it's not
a matter of namespace pollution so much as awkward syntax.  imagine a
C dialect in which you had no direct number constants, but were
obligated to first define a constant before you could use one.

instead of

   int x,y;

   ....

   y += x + 3

you had to do

   const int three = 3;
   int x,y;

   ....

   y += x + three;

it works the same.  but it's awkward.  the definition of three wanders
away from the point where you need it.  and what if someone else made

   const double three = 3.0;

somewhere else?

why shouldn't you be able to just write a function in the same way you
can just stick in a regular number constant?  having to name every
function is just as tedious as having to name every numerical
constant.  it's exactly the same.
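the lisp analogue, for comparison (double-it is a made-up name):

   ;; naming the function first, like the named constant above:
   (defun double-it (x) (* 2 x))
   (mapcar #'double-it '(1 2 3))           ; => (2 4 6)

   ;; or just writing it where you need it, like a literal 3:
   (mapcar (lambda (x) (* 2 x)) '(1 2 3))  ; => (2 4 6)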

in C and C++, functions are very much second class citizens.  in lisp
they are bandied about just like any other kind of data.  this allows
a level of abstraction - you can program at a higher level.

in mathematical notation, you can just introduce a function.  many
times you operate on the functions as if they were mere elements.
consider, for example, functional analysis (to say nothing about more
esoteric abstraction such as you find in algebraic topology).

we could turn right around and ask of C++, why not define a standard
parser for mathematical notation input - like arbitrary functions.
Lots of people are used to mathematical notations - like functions -
and want to be able to program with them - just as easily as with
numbers or arrays of numbers.

-- 
johan kullstam
From: Rob Warnock
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <7gr3tt$6pq14@fido.engr.sgi.com>
Johan Kullstam  <········@ne.mediaone.net> wrote:
+---------------
| lisp has infix syntax.
+---------------

I think you meant to say, "Lisp has PREFIX syntax".

(At least, I *hope* that's what you meant to say...)


-Rob

-----
Rob Warnock, 8L-855		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		FAX: 650-933-0511
Mountain View, CA  94043	PP-ASEL-IA
From: Christopher R. Barry
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <87lnf2ye50.fsf@2xtreme.net>
····@rigden.engr.sgi.com (Rob Warnock) writes:

> Johan Kullstam  <········@ne.mediaone.net> wrote:
> +---------------
> | lisp has infix syntax.
> +---------------
> 
> I think you meant to say, "Lisp has PREFIX syntax".
> 
> (At least, I *hope* that's what you meant to say...)

I believe I remember seeing someone use an infix syntax package (in
this group?) and it looked something like #I(2 + foo * pi ...).

Christopher
From: Rob Warnock
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <7grqp0$6ui3i@fido.engr.sgi.com>
Christopher R. Barry <······@2xtreme.net> wrote:
+---------------
| ····@rigden.engr.sgi.com (Rob Warnock) writes:
| > Johan Kullstam  <········@ne.mediaone.net> wrote:
| > | lisp has infix syntax.
| > +---------------
| > 
| > I think you meant to say, "Lisp has PREFIX syntax".
| > (At least, I *hope* that's what you meant to say...)
| 
| I believe I remember seeing someone use an infix syntax package (in
| this group?) and it looked something like #I(2 + foo * pi ...).
+---------------

Uh... That's what Lisp *can* do, if *you* choose to write a reader macro
function that implements an infix parser and bind it to a dispatch character
(in this case, "I", which is undefined by default in standard CL) for the
non-terminating dispatching macro character "#" in *your* application program.
But it's not built into Lisp. Lisp's native syntax (in the absence of macro
characters) is prefix:

	(+ 2 (* foo pi))
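For the curious, a toy sketch of the mechanism (this is *not* the real
infix package; it handles only a single binary operation, just to show
where such a thing plugs in):

        (defun sharp-i-reader (stream subchar arg)
          (declare (ignore subchar arg))
          ;; Read the raw list, e.g. (2 + x), and rewrite A OP B => (OP A B).
          (destructuring-bind (a op b) (read stream t nil t)
            (list op a b)))

        (set-dispatch-macro-character #\# #\I #'sharp-i-reader)

        ;; after which '#I(2 + x) reads as (+ 2 X)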


-Rob

-----
Rob Warnock, 8L-855		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		FAX: 650-933-0511
Mountain View, CA  94043	PP-ASEL-IA
From: Raymond Toy
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <4nhfpqbfpt.fsf@rtp.ericsson.se>
>>>>> "Rob" == Rob Warnock <····@rigden.engr.sgi.com> writes:

    Rob> Christopher R. Barry <······@2xtreme.net> wrote:
    Rob> | I believe I remember seeing someone use an infix syntax package (in
    Rob> | this group?) and it looked something like #I(2 + foo * pi ...).
    Rob> +---------------

    Rob> Uh... That's what Lisp *can* do, if *you* choose to write a reader macro
    Rob> function that implements an infix parser and bind it to a dispatch character

There is the infix package in the CMU Lisp archives....

Ray
From: Christopher R. Barry
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <87r9ouxdxq.fsf@2xtreme.net>
····@rigden.engr.sgi.com (Rob Warnock) writes:

> Christopher R. Barry <······@2xtreme.net> wrote:
>
> | I believe I remember seeing someone use an infix syntax package (in
> | this group?) and it looked something like #I(2 + foo * pi ...).
> +---------------
> 
> Uh... That's what Lisp *can* do

That's kinda what I was getting at.

> Lisp's native syntax (in the absence of macro characters) is prefix:
> 
> 	(+ 2 (* foo pi))

How enlightening.

Christopher
From: Hrvoje Niksic
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <87d80bbkuq.fsf@pc-hrvoje.srce.hr>
······@2xtreme.net (Christopher R. Barry) writes:

> I believe I remember seeing someone use an infix syntax package (in
> this group?) and it looked something like #I(2 + foo * pi ...).

Yes; the package INFIX written by Mark Kantrowitz supports the obvious
infix notation, as well as a bunch of extensions.  For instance:

> '#I( x^^2 + y^^2 )
(+ (EXPT X 2) (EXPT Y 2))

> '#I(if x<y<=z then f(x)=x^^2+y^^2 else f(x)=x^^2-y^^2)
(IF (AND (< X Y) (<= Y Z))
    (SETF (F X) (+ (EXPT X 2) (EXPT Y 2)))
    (SETF (F X) (- (EXPT X 2) (EXPT Y 2))))

Unfortunately, the license for use is apparently non-commercial:

;;; Copyright (c) 1993 by Mark Kantrowitz. All rights reserved.
;;;
;;; Use and copying of this software and preparation of derivative works
;;; based upon this software are permitted, so long as the following
;;; conditions are met:
;;;      o no fees or compensation are charged for use, copies, 
;;;        distribution or access to this software
;;;      o this copyright notice is included intact.
;;; This software is made available AS IS, and no warranty is made about
;;; the software or its performance.
[... warranty disclaimer ...]
From: Johan Kullstam
Subject: Re: Newbie questions [Followup to comp.lang.lisp]
Date: 
Message-ID: <m2g15awilg.fsf@sophia.axel.nom>
····@rigden.engr.sgi.com (Rob Warnock) writes:

> Johan Kullstam  <········@ne.mediaone.net> wrote:
> +---------------
> | lisp has infix syntax.
> +---------------
> 
> I think you meant to say, "Lisp has PREFIX syntax".
> 
> (At least, I *hope* that's what you meant to say...)

oh yeah, thanks for fixing my think-o.

-- 
                                           J o h a n  K u l l s t a m
                                           [········@ne.mediaone.net]
                                              Don't Fear the Penguin!
From: Erik Naggum
Subject: Re: Newbie questions
Date: 
Message-ID: <3134844303370291@naggum.no>
* Bagheera, the jungle scout <········@my-dejanews.com>
| I think what josh is getting at is that in C++, it prevents you from
| stuffing an 8byte value into a two byte parameter.  If you don't have
| type checking for this sort of thing, you can easily overflow your
| calling stack, which is a bad no-no.  Basically it is a matter of the
| program protecting itself from the programmer.

  this is not what static types are about.  having static types _allows_ a
  compiler to skimp on (some) run-time type checking in the belief that all
  or most of the necessary type checking can be performed compile-time.
  this is, of course, as ridiculous a desire as wanting the compiler to
  prove the program free of any other bug, and has confused people to no
  end about what types are.

  the important property of static types is not what would happen if you
  took it away, but what happens when you introduce it: you can take away
  some fences that you presume the compiler took care of, but the only
  actual consequence is that you leave open the whole area of precisely the
  kind of bugs you mistakenly associate with languages and compilers with
  dynamic types.  even in the machine language, there are types and
  associated operations on them.  some of the types may look very much the
  same when stored as bits in machine memory, but that is really an
  artifact of a context-dropping human viewer -- the archetypical example
  is the C notion of a pointer and its interchangeability with integers of
  some size.  the stored pointer doesn't become an integer any more than a
  parked car becomes a source of money when broken into by a thief -- in
  both cases, it's how you use it that matters, and we have laws and
  regulations that say that parked cars are just that and not sources of
  money.  if you willfully transgress these laws, you can turn a parked car
  into a source of money, but most people know they're doing something
  wrong when they do this, and even the lowliest thief who knows this is
  morally superior to the C programmer who abuses the bits of a pointer as
  an integer and takes it for granted.

  any form of representation is subject to interpretation by a conscious
  mind that imputes meaning to the inherently meaningless.  if, however,
  you partition the otherwise confusing representation space into various
  categories and make certain that you know which category something is of
  even at run-time, you don't get into the kinds of silly troubles that
  C/C++ fans think static typing got them out of: they never happen in the
  first place.  or, to be more precise, any powerful system will allow you
  to break the rules, but it has to be on purpose, not by silly mistake,
  which is, unfortunately, what C/C++ give you absent static type checks.

  so, "if you don't have type checking for this sort of thing", you move
  that type checking into the run-time, instead, and you _can't_ overflow
  your calling stack.  I'd argue that _dynamic_ typing is how a program
  protects itself from the programmer, and static typing is what makes a whole
  lot of errors and problems _arise_.

| I'm sure there are lots of ways to shoot yourself in the foot with Lisp.

  how did you acquire your certainty about this when you know nothing about
  what to do if you don't do static typing?  (hint: if you want to destroy the
  world through Lisp, you have to work hard at it.  if you want to destroy
  the world through C/C++, simply dereference a stray pointer or _forget_ to
  check for NULL, which is _manual_ run-time type checking.)

| C++ is a concerted effort to reduce that possibility of self-destruction.

  it must be Slobodan Milosevic's favorite programming language, then.

| True, Lisp gives you the option of run-time program correction... but
| sometimes requirements don't allow that comfort.

  you know, I'm a little irritated by the ignorant newbies who come here
  and on other lists and ask all sorts of inane questions, but it beats
  having to deal with ignorants who think they know enough to make blanket
  statements.

#:Erik
From: Joshua Scholar
Subject: Re: Newbie questions [Followup-to comp.lang.lisp]
Date: 
Message-ID: <37337945.8467494@news.select.net>
[Followup-to comp.lang.lisp]

>....
>| C++ is a concerted effort to reduce that possibility of self-destruction.
>
>  it must be Slobodan Milosevic's favorite programming language, then.
>

Oh, grow up.

>| True, Lisp gives you the option of run-time program correction... but
>| sometimes requirements don't allow that comfort.
>
>  you know, I'm a little irritated by the ignorant newbies who come here
>  and on other lists and ask all sorts of inane questions, but it beats
>  having to deal with ignorants who think they know enough to make blanket
>  statements.
>
>#:Erik

Once again, grow up.

Joshua Scholar
From: Erik Naggum
Subject: Re: Newbie questions
Date: 
Message-ID: <3134852503194596_-_@naggum.no>
* ·····@removethisbeforesending.cetasoft.com (Joshua Scholar)
| Oh, grow up.
:
| Once again, grow up.

  it is amusing to give simple people something trivial to hold on to and
  watch them completely miss everything more challenging than kindergarten
  responses.  you were tested, Joshua, and you failed, unsurprisingly.
  next time, do yourself a favor and consider getting the jokes and the
  ridicule and respond to the issues you now conveniently elided.  that
  would _show_ us that you have yourself grown up.

  it appears that you belong in comp.ai, still.  followups redirected.

#:Erik
From: Joachim Achtzehnter
Subject: Re: Newbie questions
Date: 
Message-ID: <uchfpqrrje.fsf@soft.mercury.bc.ca>
Russ McManus <···············@gs.com> writes:
> 
> > Joachim Achtzehnter <·······@kraut.bc.ca> writes:
> >
> > It didn't bring down the whole application, I'll give you that. But
> > similar behaviour can be achieved with statically typed languages. The
> > point is that static typing can catch certain simple bugs earlier.
> 
> I disagree that you can do this in practice in C++.  Have fun trying.
> Trimmed comp.ai.

If there is one thing that gets me up a tree when discussing
programming languages, it is this pretentious belief some people have
that some requirement, feature, or behaviour can only be achieved
using their favourite language.

Contrary to what you say, isolating failures is not at all difficult
to achieve in C++ or other mainstream languages. Of course, if you
only look at monolithic systems typically emanating from Redmond, then
you are sure to crash the whole system when something goes wrong. As
an example of where we are heading, consider distributed systems
(using CORBA and other technologies). More and more systems are built
as collections of components where every component does one thing and
one thing only. And this isn't exactly new either: On certain
mainframe systems of the past it was commonplace to implement every
transaction type as a separate program.

Anyway, this is all beside the point. How did we get distracted into
this discussion? The topic was whether more static type checking in
Common Lisp implementations would be a worthwhile improvement or
not. Nobody, at least not me, was claiming that other languages are
better than Lisp.

Joachim

-- 
·······@kraut.bc.ca      (http://www.kraut.bc.ca)
·······@mercury.bc.ca    (http://www.mercury.bc.ca)
From: Joachim Achtzehnter
Subject: Re: Newbie questions
Date: 
Message-ID: <uc90b2rpj6.fsf@soft.mercury.bc.ca>
Russ McManus <···············@gs.com> writes:
>
> [talking failure isolation]
> 
> I didn't say it was impossible in C++, just very difficult, to the
> point where most people don't do it.  I don't hear you contradicting
> this point.

It is difficult only if you approach the design with the preconceived
notion that a system must be a single, monolithic program. But as I
said, my point wasn't to compare Lisp with C++. I was proposing
something that I consider an improvement for Lisp implementations (and
you are welcome to disagree). Even if Lisp is better than every other
programming language under the sun, we might still be able to improve
it, no?

> Nuff said.

Yes.

Joachim

-- 
·······@kraut.bc.ca      (http://www.kraut.bc.ca)
·······@mercury.bc.ca    (http://www.mercury.bc.ca)
From: Fred Gilham
Subject: Re: Newbie questions
Date: 
Message-ID: <u73e187rzp.fsf@snapdragon.csl.sri.com>
Joachim Achtzehnter <·······@kraut.bc.ca> writes:
> 
> This is not why most people want static type checking. The real big
> advantage of static type checking is neither a matter of enabling
> compilers to optimize code, nor to prove complete correctness, but to
> catch simple errors early. It is the difference between being told by
> the compiler that I've made a typo versus the bug blowing up in front
> of a user at runtime.

Here's a true story.  I was demonstrating a distributed application at
a `demo-days' event.  It took about 5 minutes to bring the whole
application up from scratch.

While doing it for one potential customer, I made a typo that invoked
behavior I hadn't handled.  The application went into the debugger
with a `segmentation violation' error.

Imagine what would have happened if this were a C or C++ program.  But
this was lisp.  I just made a few conciliatory noises and re-invoked
the main entry point of the application.  Total time: 10 seconds.  I'm
not even sure the person watching the demo realized it had crashed.

Note that this was a (simple) logic error that static typing would not
have caught.

The moral is that lisp is DIFFERENT from other languages, and things
you need to give great attention to with other languages are
relatively unimportant in lisp.

Unless I'm mistaken, it's already possible to declare everything in
lisp.  It's also possible to give optimization settings that will warn
you about type problems at compile time (at least, CMUCL will do
this).  You can get run-time checks with appropriate optimization
settings, and disable them selectively if you need speed.  You can get
the best of both styles of programming.  So I don't see what the fuss
is about.
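For instance (a sketch; exactly which type problems get flagged at compile
time depends on the implementation, CMUCL being the most aggressive):

  ;; Development-wide setting: full run-time checks everywhere.
  (declaim (optimize (safety 3) (speed 1)))

  (defun norm (v)
    ;; Trade checks for speed selectively, in one hot function only.
    (declare (type (simple-array double-float (*)) v)
             (optimize (speed 3) (safety 0)))
    (let ((sum 0d0))
      (declare (double-float sum))
      (dotimes (i (length v) (sqrt sum))
        (incf sum (* (aref v i) (aref v i))))))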

-- 
Fred Gilham                                     ······@csl.sri.com
How many Analytic Philosophers does it take to change a light bulb? 
None---it's a pseudo-problem. Light bulbs give off light (hence the
name). If the bulb was broken and wasn't giving off light, it wouldn't
be a 'light bulb' now would it? (Oh, where has rigor gone?!)
From: Joachim Achtzehnter
Subject: Re: Newbie questions
Date: 
Message-ID: <ucbtfwwzgy.fsf@soft.mercury.bc.ca>
Fred Gilham <······@snapdragon.csl.sri.com> writes:
> 
> Unless I'm mistaken, it's already possible to declare everything in
> lisp.

Yes, I have pointed this out a few times already. I am not asking for a
language change, only for implementations to make USE of this
information to warn me about problems earlier rather than later.

> It's also possible to give optimization settings that will warn
> you about type problems at compile time (at least, CMUCL will do
> this).

CMUCL indeed provides a lot of the warnings I am looking for, I played
with it last night. Too bad that the commercial implementation I'm
stuck with for now doesn't! :-(

> You can get run-time checks with appropriate optimization
> settings, and disable them selectively if you need speed.  You can get
> the best of both styles of programming.  So I don't see what the fuss
> is about.

Well, as you can see we are not so far apart after all. :-)  These
'appropriate optimization settings' are all I want.

Joachim

-- 
·······@kraut.bc.ca      (http://www.kraut.bc.ca)
·······@mercury.bc.ca    (http://www.mercury.bc.ca)
From: Kent M Pitman
Subject: Re: Newbie questions
Date: 
Message-ID: <sfw7lqktywl.fsf@world.std.com>
Joachim Achtzehnter <·······@kraut.bc.ca> writes:

> Fred Gilham <······@snapdragon.csl.sri.com> writes:
> > 
> > Unless I'm mistaken, it's already possible to declare everything in
> > lisp.
> 
> Yes, I have pointed this out a few times already. I am not asking for a
> language change, only for implementations to make USE of this
> information to warn me about problems earlier rather than later.

Right.  But it is long-established that the newsgroup is not a good place
to report bugs in individual implementations.  And you're basically saying
that this is an implementation bug.  The right place to report implementation
deficiencies is to your vendor, not to the world.
From: Raymond Toy
Subject: Re: Newbie questions
Date: 
Message-ID: <4nso95jf7e.fsf@rtp.ericsson.se>
>>>>> "Joachim" == Joachim Achtzehnter <·······@kraut.bc.ca> writes:

    Joachim> CMUCL indeed provides a lot of the warnings I am looking for, I played
    Joachim> with it last night. Too bad that the commercial implementation I'm
    Joachim> stuck with for now doesn't! :-(

Why not use CMUCL to get the warnings you want and put in the
appropriate declarations?[1]  When your vendor is ready to understand and
make use of declarations, your code will be ready.

Ray

Footnotes: 
[1]  CMUCL does a lot of type inferencing, so using CMUCL to figure
out the needed declarations will probably be close to the minimum
number of declarations needed.
From: Kent M Pitman
Subject: Re: Newbie questions
Date: 
Message-ID: <sfwiua87v7s.fsf@world.std.com>
[ replying to comp.lang.lisp only
  http://world.std.com/~pitman/pfaq/cross-posting.html ]

I know it's almost too much to hope for, but I nevertheless wish
this discussion would be moved off of comp.lang.lisp.

If people want to debate the merits of a good AI language, then I think
comp.ai is adequate for that.  Indeed, questions about what the "right 
language" is are always best answered in a forum appropriate to the need.
See my pfaq entry above for a longer-winded explanation of why I think
it's a bad idea to drag down comp.lang.lisp in this discussion.

If this were a comp.lang.lisp-only discussion, there are things I would say
relevant to this topic, but I don't want to get involved in some vast 
multi-forum flamefest whose purpose seems more to consume resources than
to hear a coherent answer.

JMO.
From: Bagheera, the jungle scout
Subject: Re: Newbie questions
Date: 
Message-ID: <7gnh41$opj$1@nnrp1.dejanews.com>
In article <···············@world.std.com>,
  Kent M Pitman <······@world.std.com> wrote:
> [ replying to comp.lang.lisp only
>   http://world.std.com/~pitman/pfaq/cross-posting.html ]
>
> I know it's almost too much to hope for, but I nevertheless wish
> this discussion would be moved off of comp.lang.lisp.
>
> If people want to debate the merits of a good AI language, then I think
> comp.ai is adequate for that.  Indeed, questions about what the "right
> language" is are always best answered in a forum appropriate to the need.
> See my pfaq entry above for a longer-winded explanation of why I think
> it's a bad idea to drag down comp.lang.lisp in this discussion.
>
> If this were a comp.lang.lisp-only discussion, there are things I would say
> relevant to this topic, but I don't want to get involved in some vast
> multi-forum flamefest whose purpose seems more to consume resources than
> to hear a coherent answer.
>
> JMO.

but the people on comp.ai would have us believe that this is the correct
forum (see recent threads on comp.ai).

I think it is a conspiracy to undermine the two groups
(shifty look)

The proper forum for language debates is undoubtedly comp.programming

--
Bagherra <·······@frenzy.com>
http://www.frenzy.com/~jaebear
  "What use is it to have a leader who walks on water
       if you don't follow in their footsteps?"

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    
From: Christopher R. Barry
Subject: Re: Newbie questions
Date: 
Message-ID: <871zgwo8oa.fsf@2xtreme.net>
Kent M Pitman <······@world.std.com> writes:

> I know it's almost too much to hope for, but I nevertheless wish
> this discussion would be moved off of comp.lang.lisp.
> 
> If people want to debate the merits of a good AI language, then I think
> comp.ai is adequate for that.

Sadly, it isn't. "LISP can't do such and such that C++ can" is all you
hear. There are very few knowledgeable Lisp types reading comp.ai, or
posting at any rate.

> Indeed, questions about what the "right language" is are always best
> answered in a forum appropriate to the need.

If you discuss Lisp in any forum other than comp.lang.lisp you'll just
get people that talk about "LISP being slow" and "LISP has no STL or
virtual functions or interfaces" and blah blah....

> If this were a comp.lang.lisp-only discussion, there are things I
> would say relevant to this topic, but I don't want to get involved
> in some vast multi-forum flamefest whose purpose seems more to
> consume resources than to hear a coherent answer.

Things were turning entirely into a discussion of Lisp vs. C++ and the
only group I know of that has people knowledgeable enough of both to
make statements based on extensive experience with each instead of
hearsay and what they remember reading about "LISP" in some magazine
article 10 years ago is comp.lang.lisp.

comp.ai should have been removed from the delivery headers though.

Christopher
From: Sunil Mishra
Subject: Re: Newbie questions
Date: 
Message-ID: <efyhfpszfzo.fsf@whizzy.cc.gatech.edu>
······@2xtreme.net (Christopher R. Barry) writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > I know it's almost too much to hope for, but I nevertheless wish
> > this discussion would be moved off of comp.lang.lisp.
> > 
> > If people want to debate the merits of a good AI language, then I think
> > comp.ai is adequate for that.
> 
> Sadly, it isn't. "LISP can't do such and such that C++ can" is all you
> hear. There are very few knowledgeable Lisp types reading comp.ai, or
> posting at any rate.

I used to read comp.ai. Got tired of the inane topics and the general
disregard for the state of the art in AI research. (Some called this
position elitist when the issue was brought up, IIRC.) There were far too
many posters with little knowledge of the history of AI making just plain
inaccurate statements. One thing that struck me was the number of posts
that contained little more than private musings based on nothing but
uninformed intuition. In short, too little signal, too much noise. Having
it moderated might improve things... We'll see.

Aside from my personal reasons for avoiding comp.ai, if you think about it,
AI is not really tied to lisp that strongly. Lisp (or any other language)
should ideally not be a topic of discussion if your concern is AI. The
issue should be techniques. Anyone going down the path of "mine is better
than yours" should definitely be rebuked, but (as Kent has so appropriately
pointed out) handing the problem to comp.lang.lisp is certainly not the
solution. Nothing as far as I can tell will change a closed mind.

> > Indeed, questions about what the "right language" is are always best
> > answered in a forum appropriate to the need.
> 
> If you discuss Lisp in any forum other than comp.lang.lisp you'll just
> get people that talk about "LISP being slow" and "LISP has no STL or
> virtual functions or interfaces" and blah blah....
> 
> > If this were a comp.lang.lisp-only discussion, there are things I
> > would say relevant to this topic, but I don't want to get involved
> > in some vast multi-forum flamefest whose purpose seems more to
> > consume resources than to hear a coherent answer.
> 
> Things were turning entirely into a discussion of Lisp vs. C++ and the
> only group I know of that has people knowledgeable enough of both to
> make statements based on extensive experience with each instead of
> hearsay and what they remember reading about "LISP" in some magazine
> article 10 years ago is comp.lang.lisp.

As true as that may be, I doubt any of them would be really interested in
what we have to say. I have found a similar attitude to the one the C++
advocate had over here. I had a short discussion about lisp with a student well
versed in C/C++, and the functional style really bugged him. He could not
believe that anyone could do decent software engineering without type
checking, and nothing I said would make a difference to him. It seems as
though breaking out of that school of thought is *really* hard. Are there
any software engineers (research oriented or otherwise) that take the lisp
model of writing a program seriously, other than for prototyping etc? (This 
is not a rhetorical question.)

Sunil
From: Joshua Scholar
Subject: Re: Newbie questions
Date: 
Message-ID: <373a2f04.5901948@news.select.net>
On Tue, 4 May 1999 14:41:59 GMT, Kent M Pitman <······@world.std.com>
wrote:

>[ replying to comp.lang.lisp only
>  http://world.std.com/~pitman/pfaq/cross-posting.html ]
>
>I know it's almost too much to hope for, but I nevertheless wish
>this discussion would be moved off of comp.lang.lisp.
>
>If people want to debate the merits of a good AI language, then I think
>comp.ai is adequate for that.  Indeed, questions about what the "right 
>language" is are always best answered in a forum appropriate to the need.
>See my pfaq entry above for a longer-winded explanation of why I think
>it's a bad idea to drag down comp.lang.lisp in this discussion.
>
>If this were a comp.lang.lisp-only discussion, there are things I would say
>relevant to this topic, but I don't want to get involved in some vast 
>multi-forum flamefest whose purpose seems more to consume resources than
>to hear a coherent answer.
>
>JMO.

This IS a comp.lang.lisp only discussion now.  comp.ai just became
moderated and for now they will not accept ANY cross posted messages.
Also unless the problem is restated in some way I'm sure that the
comp.ai moderators would not allow this discussion back.

Joshua Scholar
From: Joshua Scholar
Subject: Re: Newbie questions
Date: 
Message-ID: <373c2fde.6120653@news.select.net>
OOOOOOOOOOOOPs

I responded to a very old message without realizing it.  Sorry.

Joshua Scholar
From: Tim Bradshaw
Subject: Re: Newbie questions
Date: 
Message-ID: <ey36769t601.fsf@lostwithiel.tfeb.org>
* Joshua Scholar wrote:

> You're thinking in LISP.  

> There are lots of ways of doing this stuff in C++, none of which take
> the 30 line to define that you took.  When we really want all the
> generality of a closure (which we rarely do) we use a template class
> that will combine any function pointer with an object pointer to make
> a closure that contains both and, (since all such templates are
> derived from a single root), they can be passed around like anonymous
> functions.

I can't understand your definition, which I elided, but each of these
things needs a separate declaration right? You need to say for each
thing which other things you want it to close over and what the types
are and so on for a hundred miles.  You can use templates to make it
only 50 miles, perhaps.

In Lisp you don't need any of that, functions just close over
everything that is lexically apparent to them, there is no syntax at
all.  In Lisp functions are instances of anonymous classes which you
define on the fly.
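For example (make-adder is just an illustrative name):

  (defun make-adder (n)
    ;; the returned function closes over N -- no declaration needed.
    (lambda (x) (+ x n)))

  (funcall (make-adder 3) 4)   ; => 7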

--tim
From: Johan Kullstam
Subject: Re: Newbie questions
Date: 
Message-ID: <ug15eqofy.fsf@res.raytheon.com>
·····@removethisbeforesending.cetasoft.com (Joshua Scholar) writes:

> On Sat, 01 May 1999 06:14:09 GMT, ······@2xtreme.net (Christopher R.
> Barry) wrote:
> 
> >>...  But C/C++
> >> doesn't work for some sets of things because it doesn't scale well, and
> >> since Lisp is designed to scale, it works better in really big situations.
> 
> I REALLY should know better than to fall for the bait...
> 
> C++ scales very well, thank you.  C doesn't - that was the point of
> adding object oriented extensions.

it may have been the point, but imho it hasn't succeeded.

just to take one point, compare lisp macros to C++ templates or
preprocessor macros.  lisp wins this contest handily.  

in C++ templates are rock stupid and can only be used if you could
have mechanically swapped out the types in your editor and stamped out
multiple copies.  the situatation for the different types must be
*exactly* the same.

lisp macros let you inspect and digest the arguments and do different
things depending on circumstances.  if an algorithm shares 99% (or 1%)
of the same stuff, you can make a macro to share what you can and
special case what you cannot.
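a tiny example of the kind of argument inspection templates can't do
(square is a made-up name):

   (defmacro square (x)
     ;; look at the argument: fold constants at macroexpansion time,
     ;; otherwise bind it once to avoid evaluating X twice.
     (if (numberp x)
         (* x x)
         (let ((tmp (gensym)))
           `(let ((,tmp ,x)) (* ,tmp ,tmp)))))

   ;; (square 5)     expands to 25
   ;; (square (f y)) expands to something like (let ((#:g42 (f y))) (* #:g42 #:g42))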

i think macros (amongst other things) are key to scaling.

-- 
johan kullstam
From: Lieven Marchand
Subject: Re: Newbie questions
Date: 
Message-ID: <m3n1zmyrjp.fsf@localhost.localdomain>
Johan Kullstam <········@ne.mediaone.net> writes:

> in C++ templates are rock stupid and can only be used if you could
> have mechanically swapped out the types in your editor and stamped out
> multiple copies.  the situation for the different types must be
> *exactly* the same.
> 

Actually, C++ templates are Turing complete. 

Yes, somebody actually implemented a Turing machine with them.

Then again, somebody did the same with vi macros.

-- 
Lieven Marchand <···@bewoner.dma.be>
If there are aliens, they play Go. -- Lasker
From: Pierre R. Mai
Subject: Re: Newbie questions
Date: 
Message-ID: <87yaj5zym5.fsf@orion.dent.isdn.cs.tu-berlin.de>
Lieven Marchand <···@bewoner.dma.be> writes:

> Actually, C++ templates are Turing complete. 
> 
> Yes, somebody actually implemented a Turing machine with them.
> 
> Then again, somebody did the same with vi macros.

Yes, they are Turing complete (and have a syntax, if used to that
effect, that makes Turing machines look appealing in comparison),
but you'll still have huge problems using them to do more than
simple renaming operations on the C++ template itself.  That IMHO
is the most hilarious thing about templates: In the effort to
create a very restricted kind of "macro system", C++ ended up with
a Turing complete macro language that still isn't overly useful for
usage in C++.

If they had written some sophisticated m4 macrology[1], it might have
been more useful, and less work...  IMHO templates are just bondage &
discipline all over again.

Regs, Pierre.

Footnotes: 
[1]  Not that I condone use of m4 for this purpose ;)  Today I'm more
convinced than ever, that "Thou shalt not have a macro language other
than thine language itself", or words to this effect...

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: ·························@thank.you
Subject: Re: Newbie questions
Date: 
Message-ID: <372df5e2.793333683@news.earthlink.net>
On 03 May 1999 09:21:05 -0400, Johan Kullstam
<········@ne.mediaone.net> wrote:

>lisp macros let you inspect and digest the arguments and do different
>things depending on circumstances.  if an algorithm shares 99% (or 1%)
>of the same stuff, you can make a macro to share what you can and
>special case what you cannot.

I'm new to Lisp but am interested in macros.  I'm wondering
if Lisp macros can be invoked by code that is in a
non-standard syntax, such that the macro processes the
syntax to make it work.  For example, can a macro be called
by its name followed by a sequence of whitespace delimited
arguments with some defined pattern to end the list of
arguments, and the number of arguments found would
determine the macro processing?  Or can a macro be
invoked by an arbitrary expression in an arbitrary syntax,
such that the parts of the expression would be the macro
arguments?  The macro definition would have to define the
syntax, and there would have to be some way for the compiler
to recognize that syntax from knowledge of that macro.

And can you do things like rename the parenthesis characters
so some other character or symbol would represent them in
ordinary Lisp syntax?

Or if not that kind of stuff, what kind of stuff can you do
with Lisp macros?
From: Barry Margolin
Subject: Re: Newbie questions
Date: 
Message-ID: <jjoX2.310$jw4.25789@burlma1-snr2>
In article <··················@news.earthlink.net>,
 <·························@thank.you> wrote:
>I'm new to Lisp but am interested in macros.  I'm wondering
>if Lisp macros can be invoked by code that is in a
>non-standard syntax, such that the macro processes the
>syntax to make it work.  For example, can a macro be called
>by its name followed by a sequence of whitespace delimited
>arguments with some defined pattern to end the list of
>arguments, and the number of arguments found would
>determine the macro processing?  Or can a macro be
>invoked by an arbitrary expression in an arbitrary syntax,
>such that the parts of the expression would be the macro
>arguments?  The macro definition would have to define the
>syntax, and there would have to be some way for the compiler
>to recognize that syntax from knowledge of that macro.

A macro's parameters are the subexpressions of the expression of which it's
the head.  For instance, in the expression (push <expression> <place>), the
PUSH macro will be given the unevaluated expressions <expression> and
<place> as its parameters.  Since the function definition containing the
macro invocation will already have been processed lexically by the reader,
all the objects will have been read into internal data types.

This means that a macro can define arbitrary semantics to its syntax, but
it can't change the lexical nature of the language.  For an example of a
macro that defines a pretty complex syntax of its own, see LOOP.
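To make "given the unevaluated expressions" concrete, here is a trivial
macro of our own (WITH-LOGGING is a made-up name):

  (defmacro with-logging (form)
    ;; FORM arrives as the unevaluated expression, e.g. the list (+ 1 2);
    ;; we can quote it, take it apart, and splice it into the expansion.
    `(let ((result ,form))
       (format t "~&~S => ~S~%" ',form result)
       result))

  ;; (with-logging (+ 1 2)) prints "(+ 1 2) => 3" and returns 3.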

>And can you do things like rename the parenthesis characters
>so some other character or symbol would represent them in
>ordinary Lisp syntax?

These things can be done using reader macros.  These are functions that are
invoked by the reader when it encounters a specific character during
lexical analysis.  See the I/O chapter of CltL or the CLHS for details.

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Stig Hemmer
Subject: Re: Newbie questions
Date: 
Message-ID: <ekv7lqpao5g.fsf@kallesol.pvv.ntnu.no>
·························@thank.you writes:
> I'm new to Lisp but am interested in macros.  I'm wondering if Lisp
> macros can be invoked by code that is in a non-standard syntax, such
> that the macro processes the syntax to make it work.

[My answer is about Common Lisp, not Lisp in general]

A macro cannot change the basic syntax of parentheses, whitespace etc.
It is still damn powerful and very, very useful.

A macro _character_ on the other hand, can change anything.

So you cannot say
(with-angle-brackets  <+ 2 <* 3 4>>)

But you _can_ say
?<+ 2 <* 3 4>>
by properly defining the meaning of "?"

For example, I have defined
?(func-name arg1 arg2)
to mean
(my-funcall #'func-name arg1 arg2)
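A sketch of how the "?" above can be wired up (MY-FUNCALL is my own
function; the rest is illustrative and only handles the ?(...) form,
not the angle-bracket example):

 (defun question-mark-reader (stream char)
   (declare (ignore char))
   ;; Read the following list and rewrite (f a b) => (my-funcall #'f a b).
   (destructuring-bind (fn &rest args) (read stream t nil t)
     `(my-funcall (function ,fn) ,@args)))

 (set-macro-character #\? #'question-mark-reader)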

Stig Hemmer,
Jack of a Few Trades.
From: Paolo Amoroso
Subject: Re: Newbie questions
Date: 
Message-ID: <372f431c.53560@news.mclink.it>
On Mon, 03 May 1999 19:30:33 GMT, ·························@thank.you
wrote:

> Or if not that kind of stuff, what kind of stuff can you do
> with Lisp macros?

You may check the best source on Common Lisp macros:

   "On Lisp - Advanced Techniques for Common Lisp"
   Paul Graham
   Prentice Hall, 1994
   ISBN 0-13-030552-9


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Pierre R. Mai
Subject: Re: Newbie questions
Date: 
Message-ID: <87g15e16xq.fsf@orion.dent.isdn.cs.tu-berlin.de>
Johan Kullstam <········@ne.mediaone.net> writes:

> lisp macros let you inspect and digest the arguments and do different
> things depending on circumstances.  if an algorithm shares 99% (or 1%)
> of the same stuff, you can make a macro to share what you can and
> special case what you cannot.
> 
> i think macros (amongst other things) are key to scaling.

Yes, and to connect to the other thread, parts of the MOP, Open
Implementation and Aspect Oriented Programming seem indeed to be the
extensions of that principle[1].

Regs, Pierre.

Footnotes: 
[1]  Although I find the names a bit hypey, I guess these days
you have to call Lisp macros "Dynamic Compilation Optimizers" or
"Semantic Oriented Transformations" to get your point across, with all 
the mainstream hype around.

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Erik Naggum
Subject: Re: Newbie questions
Date: 
Message-ID: <3134574475962391@naggum.no>
* "Nir Sullam" <······@actcom.co.il>
| Here in Israel, we have thousands of C/C++ programmers and so the
| newspapers want ads are filled with requests for this kind of
| programmers, BUT never did I see a CLOS / Lisp programmer want ad.

  you're comparing a commodities market with a luxury market.  why?

| If CLOS is so powerfull and (I read that) P Graham sold his software (in
| CLOS) to Yahoo in 49 Million U$, How come CLOS is so obscure in the
| programming community?

  because it's a luxury.

  essentially, any damn fool can pretend to be a C/C++ programmer, as you
  will very quickly find out if you _place_ one of those ads and try to
  sort one or two good guys out from the liars and incompetent fucks who
  apply for the job.  if you want to apply for jobs where the requirement
  is Common Lisp, you will equally quickly find out that you can't fool
  anyone.  likewise, you can find a job at a hamburger joint without any
  skills whatsoever, but if you want to look at how you produce equipment
  used in hamburger joints that should be simple enough that any unskilled
  person can operate it without causing himself damage or producing bad food,
  you look at the end of the market that Common Lisp is good at helping --
  you don't see many ads for hamburger joint equipment designers, either.

  with all those ads for C/C++, it means companies are _desperate_ to hire
  people who seem to know C/C++.  why do you think this is?  it's because
  C/C++ are job-creating languages.  let's take the development of the
  telephone as an example: initially, there were human operators, and all
  this worked great: but the more people wanted phones, the more people
  were required to be operators, and the companies were scrambling for
  operators, offering a lot of people work.  then better technology came to
  the rescue and released the work force tied up in operators to do other
  necessary tasks.  think of C/C++ programmers as human telephone
  operators -- the more successful they are, the more of them are needed
  (it is no accident the languages come from the biggest telephone company
  in the United States, either).  in contrast, think of the Common Lisp
  programmers as designers of the automated telephone switching system, and
  the more successful they are, the less investment will be in human operators.

| None of the above can export to EXE in the Windows Environment,

  you're assuming that this is what you will need to produce.  why?

| How do CLOS beginners usually start programming?

  they play with their Lisp environments, and use Emacs as their main
  interface to this environment unless they have a visual something that
  does the same.  it seems that you simply use your tools inefficiently,
  but you should investigate how to interface several powerful tools with
  each other to make them work as one.  they don't have to come as one from
  the Single Vendor to work as one if they were designed well.  (this may
  well be very foreign to a Microsoft victim. :)

  all programming languages are not alike.  that you have found interest in
  Common Lisp sets you out from the crowd.  don't let the crowd run you
  down just because you have found something much better.  however, it is
  essential to succeeding "on your own" that you can talk to people who
  know how to do it.  the only upside of a commodities market is that lots
  of people use the commodity, and that they can talk a lot among
  themselves.  so try to find people in your community that you can talk to
  about Lisp.  posting here is a great start in this regard.

  oh, by the way, C/C++ are quite interesting languages from a marxist
  point of view, too (it being May 1 and all): they are so bad that any
  professional who wants to do interesting work needs to learn new tools
  all the time and have the employer pay for it.  the means of production
  are thus removed from the hands of the owners into the hands of the
  workers in a very new way.  if the employers were at all smart, they would
  not use languages that removed _all_ their investments in their people
  this way, because it is as unhealthy for a market with disproportionate
  power in the hands of employees as it is with disproportionate power in
  the hands of employers.  shortly, therefore, smart employers will figure
  it all out and look for stable, solid languages where programmers don't
  need to be paid to acquire this week's skill set simply in order _not_ to
  quit for the job that requires it...  C/C++ are very bad for business,
  which is another reason why you see so many companies hiring: it's such a
  terrible tool that managers who used to or taught how to manage industry
  will throw people are failing projects.

  advertising is a symptom of insufficient demand or insufficient supply.
  in many ways, competition itself stems from insufficiency in solving a
  problem.  also remember that quality can never compete with quantity.
  please remember this when you use the consequences of competition as a
  measure of success: it never is.  success is when you have no competition
  and you ensure that you match the demand.  then you can go and talk to
  people with a confidence that the desperately competing people can't.  it
  is also sage advice to remain sufficiently above it all that you don't
  become a pawn in the games of the marketers.  in other words: resist the
  temptations of the mass market in any area where it matters to you.

#:Erik
From: Philip Morant
Subject: Newbie questions
Date: 
Message-ID: <54EC8A7BBA7FD111AE40006097706C7A10D5B0@TREBBIANO>
This sounds like prejudice and bigotry to me. Hmm, am I the first person
to think this?
I've dallied with functional languages before. I did LISP, and I did ML.
LISP was interesting, but ultimately impractical. The left and right
parenthesis keys wore out on my keyboard before I finished my second
program. Give me C++ any day. This sensible language spreads the load
out much more evenly across _all_ the number-keys in the top row of my
keyboard.



From: Pierre R. Mai
Subject: Re: Newbie questions
Date: 
Message-ID: <87d80i16hd.fsf@orion.dent.isdn.cs.tu-berlin.de>
Philip Morant <········@edina.co.uk> writes:

> I've dallied with functional languages before. I did LISP, and I did ML.
> LISP was interesting, but ultimately impractical. The left and right
> parenthesis keys wore out on my keyboard before I finished my second
> program. Give me C++ any day. This sensible language spreads the load
> out much more evenly across _all_ the number-keys in the top row of my
> keyboard.

Hmm, I think you're a month and 2 days late, but just for the fun of
it: 

If the spread of wear on the keys of your keyboard is the sole concern 
in your programming, then either you are the sad victim of very shoddy 
keyboards, or you have not understood what programming[1] is all about ;)

Regs, Pierre.

(Hmm, maybe I should get that saying trademarked and put on some nice
mugs.  Anyone want to pre-order?)

Footnotes: 

[1] That is in the sense of system and program construction, and not
    some of the other things people currently seem to think programming
    is, or might be...

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Christopher R. Barry
Subject: Re: Newbie questions
Date: 
Message-ID: <87lnf6nhk9.fsf@2xtreme.net>
Philip Morant <········@edina.co.uk> writes:

> The left and right parenthesis keys wore out on my keyboard before I
> finished my second program. Give me C++ any day. This sensible
> language spreads the load out much more evenly across _all_ the
> number-keys in the top row of my keyboard.

You should swap "( )" and "[ ]", even if you are not a Lisp
programmer, since the parenthesis keys are used far more. Even in C
and C++ with all the "[ ]" and "{ }", a quick hack to count the
occurrence of each character in a large source tree shows that the
parentheses comfortably beat out the others combined. I regret not
having done this earlier.
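
For anyone who wants to repeat the experiment, here is a minimal sketch
of such a hack in Common Lisp (not the exact one I ran -- you supply the
list of files yourself):

  ;; Count how often each character occurs across a list of files and
  ;; print the counts, most frequent first.
  (defun count-chars (files)
    (let ((counts (make-hash-table)))
      (dolist (file files)
        (with-open-file (in file :direction :input)
          (loop for char = (read-char in nil nil)
                while char
                do (incf (gethash char counts 0)))))
      (let ((pairs '()))
        (maphash (lambda (char count) (push (cons char count) pairs)) counts)
        (dolist (pair (sort pairs #'> :key #'cdr))
          (format t "~S: ~D~%" (car pair) (cdr pair))))))

  ;; e.g. (count-chars (directory "*.lisp")) -- how DIRECTORY treats
  ;; wildcards varies a little between implementations.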

If you are using X, add the following to your .Xmodmap (some unixes
have a different file like .xmodmaprc, I think):

  !!! Lisp parenthesis bindings
  !!! =========================
  keysym 9 = 9 bracketleft
  keysym 0 = 0 bracketright
  keysym bracketleft = parenleft braceleft
  keysym bracketright = parenright braceright

Then run "$ xmodmap .Xmodmap", or restart X. Manually running "$
xmodmap SOME_FILE" where SOME_FILE contains the above will always work
if you can't figure out your system's xmodmap startup file.

Christopher
From: Espen Vestre
Subject: Re: Newbie questions
Date: 
Message-ID: <w6n1zll46w.fsf@wallace.nextel.no>
······@2xtreme.net (Christopher R. Barry) writes:

> You should swap "( )" and "[ ]", even if you are not a Lisp
> programmer, since the parenthesis keys are used far more. 

My Norwegian keyboard favours Lisp over the C-syntax family of
languages, since {[]} are almost completely inaccessible ;-)

Having to type shift-8 and shift-9 isn't a big deal though, I
think my stiff left arm is caused by EscapeMetaAltControlShifts
- ingenious ideas for remapping emacs control combinations are
very welcome (is it possible, on a sun keyboard, to use the
caps lock as an ordinary modifier?).

-- 

  espen
From: Christopher R. Barry
Subject: Re: Newbie questions
Date: 
Message-ID: <876768oi1v.fsf@2xtreme.net>
Espen Vestre <··@nextel.no> writes:

> Having to type shift-8 and shift-9 isn't a big deal though, I
> think my stiff left arm is caused by EscapeMetaAltControlShifts
> - ingenious ideas for remapping emacs control combinations are
> very welcome (is it possible, on a sun keyboard, to use the
> caps lock as an ordinary modifier?).

Swapping Control_L and Caps_Lock is extremely common. If you want to
do that, then [copy and paste from the xmodmap manpage]:

       One of the more irritating differences  between  keyboards
       is  the  location  of  the Control and Shift Lock keys.  A
       common use of xmodmap is to swap these two  keys  as  fol-
       lows:

            !
            ! Swap Caps_Lock and Control_L
            !
            remove Lock = Caps_Lock
            remove Control = Control_L
            keysym Control_L = Caps_Lock
            keysym Caps_Lock = Control_L
            add Lock = Caps_Lock
            add Control = Control_L

I personally made Caps_Lock run "complete", though I've yet to
massively overhaul completion.el or write my own thing from scratch
like I've been meaning to do Any Day Now. I originally wanted to make
Caps_Lock do "complete" and shift-Caps_Lock do the actual
uppercase-lock, but making shift-CapsLock distinguishable from plain
old Caps_Lock to X seems all but impossible. I instead in extremely
kludgy fashion swapped F6 and Caps_Lock. Bleah.

Christopher
From: Espen Vestre
Subject: Re: Newbie questions
Date: 
Message-ID: <w6n1zjer6b.fsf@wallace.nextel.no>
······@2xtreme.net (Christopher R. Barry) writes:

> Swapping Control_L and Caps_Lock is extremely common. If you want to
> do that, then [copy and paste from the xmodmap manpage]:
                                         ^^^^^^^^^^^^^^^
*blush*, and I always complain when people stick to their prejudices
instead of investigating the possiblities ;-) (my prejudice being
the false assumption that the caps lock key is "different")

Thank you! (now let's see what my left little finger thinks of this).

-- 

  espen
From: Erik Naggum
Subject: Re: Newbie questions
Date: 
Message-ID: <3134735930848643@naggum.no>
* Philip Morant <········@edina.co.uk>
| This sounds like prejudice and bigotry to me.

  some people see something they can hate whenever they get the chance.

| LISP was interesting, but ultimately impractical.  The left and right
| parenthesis keys wore out on my keyboard before I finished my second
| program.  Give me C++ any day.  This sensible language spreads the load
| out much more evenly across _all_ the number-keys in the top row of my
| keyboard.

  I'm truly relieved.  I actually appreciate it when the people who hate me
  are certified nut cases.

#:Erik
From: Brent A Ellingson
Subject: Re: Newbie questions
Date: 
Message-ID: <7gl4mp$11c4$2@node2.nodak.edu>
Erik Naggum (····@naggum.no) wrote:
: * Philip Morant <········@edina.co.uk>

: | LISP was interesting, but ultimately impractical.  The left and right
: | parenthesis keys wore out on my keyboard before I finished my second
: | program.  Give me C++ any day.  This sensible language spreads the load
: | out much more evenly across _all_ the number-keys in the top row of my
: | keyboard.

:   I'm truly relieved.  I actually appreciate it when the people who hate me
:   are certified nut cases.

It is actually important to me that I believe Philip was joking when he 
wrote this.  Despite all the evidence to the contrary, I've managed to
convince myself that no-one in the world is that clueless.  It will
destroy my world construct if people I respect take the above comments
by Philip seriously.

Erik, please don't take him seriously.

-- 
Brent Ellingson (········@badlands.NoDak.edu)
"It is amazing how complete is the delusion that beauty is goodness." 
                                                 -- Leo Tolstoy
From: Erik Naggum
Subject: Re: Newbie questions
Date: 
Message-ID: <3134765208417212@naggum.no>
* ········@badlands.NoDak.edu (Brent A Ellingson)
| It is actually important to me that I believe Philip was joking when he
| wrote this.  Despite all the evidence to the contrary, I've managed to
| convince myself that no-one in the world is that clueless.

  the evidence to the contrary is overwhelming: there is no limit to
  cluelessness, and no reason to presume cluelessness is a joke: failing to
  realize that the clueless aren't joking is how most politicians manage to
  pull their tricks on all of us.

| It will destroy my world construct if people I respect take the above
| comments by Philip seriously.

  that's harsh, but I think you should just convince yourself otherwise.

#:Erik
From: Philip Morant
Subject: RE: Newbie question
Date: 
Message-ID: <54EC8A7BBA7FD111AE40006097706C7A10F94D@TREBBIANO>
No, dudes, there's a serious point. Here:
There's not such a huge difference between all these different computer
landwiches.  After all, they all get compiled down to machine code
eventually.  Sure, some landwiches bring useful abstractions to the
science, and, naturally, some more so than others.  To anyone who reads
this newsgroup, LISP seems to be the biz because of the quality and
usefulness of the said abstractions (by the way, what is all this stuff
about Turing-completeness?).  But, given that it all boils down to
machine instructions in the end, what price do we pay for all the
convenience we gain?  There has to be one.
	C and C++ map more closely to the Chicken Pulley Unit's
instruction set than any other high-level landwich.  You could write a
LISP compiler in C, but you'd have quite a problem if you wanted to
write a C compiler using LISP.  Isn't it clear, if you think a priori
(and, recalling a previous argument on comp.lang.lisp, 13 March 1999,
started by David Hanley, I think that a posteriori reasoning is less
convincing here), that C++ is always going to be the landwich of choice
for writing fast code?  Try asking the folk down at Counterpane.  They
count every one of the CPU clock cycles that make their algorithm go.
See if they'll switch from Assembly landwich to LISP.
	Eventually, there will come a point in human history (if we
don't greenhouse ourselves all to death first) where all the necessary
functionality has been provided.  People will only write new code to
indulge themselves.  When this happens, the only serious work will be
the optimisation of what has already been done.  Programs will be edited
until they can be formally verified, and then all the dead wood will be
taken out.  It seems to me that the more abstractions that a program
uses, the more work will be involved in optimising it.



From: Espen Vestre
Subject: Re: Newbie question
Date: 
Message-ID: <w6iua9jd9n.fsf@wallace.nextel.no>
Philip Morant <········@edina.co.uk> writes:

> 	Eventually, there will come a point in human history (if we
> don't greenhouse ourselves all to death first) where all the necessary
> functionality has been provided.  People will only write new code to
> indulge themselves.

I've heard this argument, or variants of it, for 20 years now.

It has been partly responsible for scaring good students away
from CS studies, because some smartasses thought that the demand for
good programmers would *sink* (while all the existing mediocre 
programmers were out there IRL providing Y2K trouble and other 
kinds of "functionality").

But you're certainly giving it a new twist: You're trying to use
it as an argument for writing *less* *abstract* code!

I can only *sigh* :-[

-- 

  espen
From: Marc Battyani
Subject: Re: Newbie question
Date: 
Message-ID: <5FE01BE0B101EDC4.28588D0FD09C0DAE.D1BBFC522F86B2F0@library-proxy.airnews.net>
Philip Morant <········@edina.co.uk> wrote in message
···········································@TREBBIANO...
...
> (and, recalling a previous argument on comp.lang.lisp, 13 March 1999,
> started by David Hanley, I think that a posteriori reasoning is less
> convincing here), that C++ is always going to be the landwich of choice
> for writing fast code?  Try asking the folk down at Counterpane.  They
> count every one of the CPU clock cycles that make their algorithm go.
> See if they'll switch from Assembly landwich to LISP.
...
If you are into really fast code you can't beat the VHDL language, which
translates directly to hardware.
The LISP + VHDL association is hard to beat!

You can also see C++ as a portable assembly language and generate it from
LISP.
We use LISP to convert lots of legacy C++ objects into COM objects.
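
Just to give the flavour, here is a toy sketch of generating C++ text
from an s-expression -- not our actual tool, and the spec format is made
up for the example:

  ;; Toy example: emit a C++ struct from an s-expression description.
  ;; SPEC is of the form (name (slot-name c-type) ...), all strings here.
  (defun emit-c++-struct (spec &optional (stream *standard-output*))
    (destructuring-bind (name &rest slots) spec
      (format stream "struct ~A {~%" name)
      (dolist (slot slots)
        (format stream "    ~A ~A;~%" (second slot) (first slot)))
      (format stream "};~%")))

  ;; (emit-c++-struct '("Point" ("x" "double") ("y" "double")))
  ;; prints:
  ;;   struct Point {
  ;;       double x;
  ;;       double y;
  ;;   };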

Marc Battyani
From: Ian Wild
Subject: Re: Newbie question
Date: 
Message-ID: <372EE23E.BFBF5D7D@cfmu.eurocontrol.be>
Philip Morant wrote:
> 
> You could write a
> LISP compiler in C, but you'd have quite a problem if you wanted to
> write a C compiler using LISP.

Haven't you got this arse about face?

I can't see how you'd even /start/ writing a Lisp
compiler in C...
From: Marco Antoniotti
Subject: Re: Newbie question
Date: 
Message-ID: <lwlnf52ekm.fsf@copernico.parades.rm.cnr.it>
Philip Morant <········@edina.co.uk> writes:

	...

> 	C and C++ map more closely to the Chicken Pulley Unit's
> instruction set than any other high-level landwich.  You could write a
> LISP compiler in C, but you'd have quite a problem if you wanted to
> write a C compiler using LISP.

Are you kidding or clueless?  I hope the first. Otherwise may the
NAGGUM be unleashed over you! :)

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Paul Rudin
Subject: Re: Newbie question
Date: 
Message-ID: <m3k8uomzw9.fsf@shodan.demon.co.uk>
Philip Morant <········@edina.co.uk> writes:


>                              (by the way, what is all this stuff
> about Turing-completeness?).

Turing completeness is the property of being able to implement the
same algorithms as a Turing machine. There are good reasons to believe
that this is the limit on the algorithms that can possibly be
implemented on a computer.

[Incidentally, this is the smallest set of functions that includes the
zero function, the successor function and the projection functions, and
is closed under composition, (primitive) recursion, and minimisation.]


Most computer languages in use today are Turing complete, so if you
can do it in one language, you can do it in another; it's just a
question of how much effort is involved. IIRC, one interesting
exception is the Charity language, programs in which are guaranteed to
terminate.
From: Tim Bradshaw
Subject: Re: Newbie question
Date: 
Message-ID: <ey33e1dt3t4.fsf@lostwithiel.tfeb.org>
* Philip Morant wrote:
> 	C and C++ map more closely to the Chicken Pulley Unit's
> instruction set than any other high-level landwich.  You could write a
> LISP compiler in C, but you'd have quite a problem if you wanted to
> write a C compiler using LISP.

Symbolics had C, Fortran and I think pascal compilers written in, erm,
Lisp.

I was really disappointed when I found they had an X implementation,
only to discover when looking at the source that it was just MIT X,
compiled with their C compiler.

--tim
From: Christopher Browne
Subject: Re: Newbie question
Date: 
Message-ID: <gXOX2.647$YB4.39157@news2.giganews.com>
On 04 May 1999 13:30:47 +0100, Tim Bradshaw <···@tfeb.org> wrote:
>* Philip Morant wrote:
>> 	C and C++ map more closely to the Chicken Pulley Unit's
>> instruction set than any other high-level landwich.  You could write a
>> LISP compiler in C, but you'd have quite a problem if you wanted to
>> write a C compiler using LISP.
>
>Symbolics had C, Fortran and I think pascal compilers written in, erm,
>Lisp.
>
>I was really disappointed when I found they had an X implementation,
>only to discover when looking at the source that it was just MIT X,
>compiled with their C compiler.

An X implementation written in a Lisp would indeed be extremely
interesting.  Definitely would need some interesting compilation
techniques to stay fast; it would doubtless trample some bugs out of
existence.
-- 
"Microsoft builds product loyalty on the part of network administrators and
consultants, [these are] the only people who really count in the Microsoft
scheme of things. Users are an expendable commodity."  -- Mitch Stone 1997
········@hex.net- <http://www.hex.net/~cbbrowne/lsf.html>
From: Christian Lynbech
Subject: Re: Newbie question
Date: 
Message-ID: <ofbtg0c52j.fsf@tbit.dk>
>>>>> "Christopher" == Christopher Browne <········@news.hex.net> writes:

Christopher> An X implementation written in a Lisp ... would doubtless
Christopher> trample some bugs out of existence.

But wouldn't such a thing (a bug free X) render non-trivial amounts of
C programs useless :-)

As it has been said: you can promote a bug to a feature by documenting
it.


---------------------------+--------------------------------------------------
Christian Lynbech          | Telebit Communications A/S                       
Fax:   +45 8628 8186       | Fabrikvej 11, DK-8260 Viby J
Phone: +45 8628 8177 + 28  | email: ···@tbit.dk --- URL: http://www.telebit.dk
---------------------------+--------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
                                        - ·······@hal.com (Michael A. Petonic)
From: Marco Antoniotti
Subject: Re: Newbie question
Date: 
Message-ID: <lwbtg0x6ww.fsf@copernico.parades.rm.cnr.it>
········@news.hex.net (Christopher Browne) writes:

> On 04 May 1999 13:30:47 +0100, Tim Bradshaw <···@tfeb.org> wrote:
> >* Philip Morant wrote:
> >> 	C and C++ map more closely to the Chicken Pulley Unit's
> >> instruction set than any other high-level landwich.  You could write a
> >> LISP compiler in C, but you'd have quite a problem if you wanted to
> >> write a C compiler using LISP.
> >
> >Symbolics had C, Fortran and I think pascal compilers written in, erm,
> >Lisp.
> >
> >I was really disappointed when I found they had an X implementation,
> >only to discover when looking at the source that it was just MIT X,
> >compiled with their C compiler.
> 
> An X implementation written in a Lisp would indeed be extremely
> interesting.  Definitely would need some interesting compilation
> techniques to stay fast; it would doubtless trample some bugs out of
> existence.

Ahem!  CLX?  Remember that in the beginning there were TWO
implementations of X, C/Xlib and CL/CLX.

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Pierre R. Mai
Subject: Re: Newbie question
Date: 
Message-ID: <87emkvzqlb.fsf@orion.dent.isdn.cs.tu-berlin.de>
Marco Antoniotti <·······@copernico.parades.rm.cnr.it> writes:

> Ahem!  CLX?  Remember that in the beginning there were TWO
> implementations of X, C/Xlib and CL/CLX.

Weren't we talking about the server side of X?  Or have I again missed 
something important?

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Christopher Browne
Subject: Re: Newbie question
Date: 
Message-ID: <8KfY2.2163$Xs1.357331@news1.giganews.com>
On 05 May 1999 10:22:23 +0200, Marco Antoniotti
<·······@copernico.parades.rm.cnr.it> wrote:  
>········@news.hex.net (Christopher Browne) writes:
>> An X implementation written in a Lisp would indeed be extremely
>> interesting.  Definitely would need some interesting compilation
>> techniques to stay fast; it would doubtless trample some bugs out of
>> existence.
>
>Ahem!  CLX?  Remember that in the beginning there were TWO
>implementations of X, C/Xlib and CL/CLX.

Isn't that merely a CL implementation of Xlib?

That's hardly an "X implementation written in a Lisp;" that's merely a
wrapper for the X protocol written in a Lisp, and is not a
*remarkably* interesting thing.

-- 
Where do you *not* want to go today? "Confutatis maledictis, flammis
acribus addictis"  (<http://www.hex.net/~cbbrowne/msprobs.html>
········@ntlug.org- <http://www.ntlug.org/~cbbrowne/lsf.html>
From: Marco Antoniotti
Subject: Re: Newbie question
Date: 
Message-ID: <lwvhe6uxcv.fsf@copernico.parades.rm.cnr.it>
········@news.hex.net (Christopher Browne) writes:

> On 05 May 1999 10:22:23 +0200, Marco Antoniotti
> <·······@copernico.parades.rm.cnr.it> wrote:  
> >········@news.hex.net (Christopher Browne) writes:
> >> An X implementation written in a Lisp would indeed be extremely
> >> interesting.  Definitely would need some interesting compilation
> >> techniques to stay fast; it would doubtless trample some bugs out of
> >> existence.
> >
> >Ahem!  CLX?  Remember that in the beginning there were TWO
> >implementations of X, C/Xlib and CL/CLX.
> 
> Isn't that merely a CL implementation of Xlib?
> 
> That's hardly an "X implementation written in a Lisp;" that's merely a
> wrapper for the X protocol written in a Lisp, and is not a
> *remarkably* interesting thing.

I stand corrected.  However, besides being interesting, why should an
X *server* be built in CL at this time in history?

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Tim Bradshaw
Subject: Re: Newbie question
Date: 
Message-ID: <ey3hfpnlc89.fsf@lostwithiel.tfeb.org>
* Marco Antoniotti wrote:

> I stand corrected.  However, beside being interesting, why should a
> X *server* be built in CL at this time in history?

No reason.  The original point was that someone said you couldn't
write a C compiler in Lisp easily and I pointed out that not only had
this been done (and Fortran, pascal), but that the compiler was
competent enough to compile the MIT X server, which presumably worked
reasonably well considering the HW it ran on.

--tim
From: William Deakin
Subject: Re: Newbie question
Date: 
Message-ID: <3731B547.83D7B02B@pindar.com>
Hmm. If you did want to write a CL compiler in C, is there a formal grammar
(LALR etc) out there somewhere? I had a look and couldn't find one.
From: Erik Naggum
Subject: Re: Newbie question
Date: 
Message-ID: <3135018767208296@naggum.no>
* William Deakin <·····@pindar.com>
| If you did want to write a CL compiler in C, is there a formal grammar
| (LALR etc) out there somewhere?  I had a look and couldn't find one.

  you won't find one, either.  formal grammars are useful for languages
  that are designed to be unreadable and with constructs designed to be
  impossible to represent at run-time.  Lisp is designed to be readable and
  source code to be representable at run-time, and so has a very different
  way of thinking about its grammar.  considering that all you will ever do
  with a grammar is associate the first element of a list (a symbol) with a
  rule for dealing with the rest of the list, you might as well design the
  whole system using a recursive descent "parser".

  Common Lisp's syntax is designed to be LL(1), and objects are read in
  their entirety before any semantics are associated with them, quite
  unlike languages designed to keep the sources unreadable to programs
  written in those languages.
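
  purely as a hypothetical sketch of what such a recursive descent
  "parser" amounts to -- a table mapping the first symbol of a form to a
  handler, roughly like this:

  (defvar *handlers* (make-hash-table :test #'eq))

  (defmacro define-handler (name (form) &body body)
    `(setf (gethash ',name *handlers*) (lambda (,form) ,@body)))

  (defun walk (form)
    (cond ((atom form) (list :literal form))
          ((gethash (first form) *handlers*)
           (funcall (gethash (first form) *handlers*) form))
          (t (list :call (first form) (mapcar #'walk (rest form))))))

  ;; one "rule" per operator, e.g. a handler for IF:
  (define-handler if (form)
    (destructuring-bind (op test then &optional else) form
      (declare (ignore op))
      (list :if (walk test) (walk then) (walk else))))

  ;; (walk '(if (> x 0) (print x) nil))
  ;; => (:IF (:CALL > ((:LITERAL X) (:LITERAL 0)))
  ;;         (:CALL PRINT ((:LITERAL X)))
  ;;         (:LITERAL NIL))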
  
#:Erik
From: Pierre R. Mai
Subject: Re: Newbie question
Date: 
Message-ID: <87iua5yex1.fsf@orion.dent.isdn.cs.tu-berlin.de>
William Deakin <·····@pindar.com> writes:

> Hmm. If you did want to write a CL compiler in C, is there a formal grammar
> (LALR etc) out there somewhere? I had a look and couldn't find one.

Given the complexity of the Common Lisp reader, and its flexibility
(reader macros, read-time evaluation, etc.), I strongly doubt that any
parser generator will be able to handle this task out of the box. Of
course I haven't tried to do this, so I might be mistaken... ;)
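
Just to illustrate (standard textbook examples, nothing exotic):
read-time evaluation and user-defined read macros alone already put the
reader well outside what a static grammar describes:

  ;; read-time evaluation: this reads as the object 3, not as a form
  #.(+ 1 2)

  ;; a user-defined read macro: after this, [1 2 (+ 1 2)] reads as
  ;; (LIST 1 2 (+ 1 2))
  (set-macro-character #\[
    (lambda (stream char)
      (declare (ignore char))
      (cons 'list (read-delimited-list #\] stream t))))
  (set-macro-character #\] (get-macro-character #\)))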

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Tim Bradshaw
Subject: Re: Newbie questions
Date: 
Message-ID: <ey34sltt5dp.fsf@lostwithiel.tfeb.org>
* Philip Morant wrote:
> I've dallied with functional languages before. I did LISP, and I did ML.
> LISP was interesting, but ultimately impractical. The left and right
> parenthesis keys wore out on my keyboard before I finished my second
> program. Give me C++ any day. This sensible language spreads the load
> out much more evenly across _all_ the number-keys in the top row of my
> keyboard.

Actually, what most Lisp people do nowadays is simply take all those
useless function keys that modern keyboards have and define them as
paren keys.  Only a few weeks training is needed to even out the wear
patterns by round-robining on all those spare keys.  I have 30 or 40
extra function keys on my keyboard to solve this problem.  I believe
people are also working on `hot-swappable' keyboards: if one of the
paren keys fails, it's dynamically mapped to an unused key (one of the
function keys usually), and you can carry right on typing while an
engineer is called to swap out the failed key.

The tragic mistake made by the lisp machines was not to realise that
large numbers of general-purpose keys can be used for Lisp quite
efficiently given a sufficiently smart keyboard compiler, and instead
to provide special hardware-support for really good paren keys.  They
were fun to use, and the keyboards fitted on a single desk unlike
modern ones, but they were obviously ultimately doomed.

Of course, C++ will probably have this kind of dynamic keyboard
technology within a few years.

--tim
From: Link Davis
Subject: Re: Newbie questions
Date: 
Message-ID: <7gn2a0$9p0$1@nntp5.atl.mindspring.net>
Ever use voice activation?  It works, and it's faster than trying to type
everything, such as the hard-to-hit keys you're talking about.  You can also
program Dragon Dictate to paste in several lines of code.  Beats either
typing it from scratch or cut-n-paste.

From: Lieven Marchand
Subject: Re: Newbie questions
Date: 
Message-ID: <m3k8uqyr12.fsf@localhost.localdomain>
"Nir Sullam" <······@actcom.co.il> writes:

> Here in Israel , we have thousands of C\C++ programmers and so the
> newspapers want ads are filled with requests for this kind of programmers ,
> BUT never did I see a CLOS \ Lisp programmer want ad .!!!
> 
> If CLOS is so powerfull and (I read that) P Graham sold his software (in
> CLOS) to Yahoo in 49 Million U$, How come CLOS is so obscure in the
> programming community ?

I think you've answered your own question in a way. Newspaper
advertising is a mass medium, appropriate for reaching mass
audiences. The Lisp programming community is not that large, so job
announcements get distributed through other channels. I have
personally received job announcements by email, from friends or from
people who were asked by their management to find additional Lisp
programmers.

I've never seen newspaper ads for brain surgeons but I'm fairly
certain that a competent brain surgeon has no trouble finding a job.

-- 
Lieven Marchand <···@bewoner.dma.be>
If there are aliens, they play Go. -- Lasker
From: Georges KO
Subject: Re: Newbie questions
Date: 
Message-ID: <372dca9c.0@bctpewww1>
> Here in Israel , we have thousands of C\C++ programmers and so the
> newspapers want ads are filled with requests for this kind of programmers ,
> BUT never did I see a CLOS \ Lisp programmer want ad .!!!

    The same here in Taiwan, where you can add VB, assembly and
hardware stuff as well. tw.bbs.comp.lang is full of BC++, VC++, etc.
articles; one won't find Lisp stuff (haven't seen the word for ages).
But, anyway, that's not going to stop me learning it because it's
great and ...

> If CLOS is so powerfull and (I read that) P Graham sold his software (in
> CLOS) to Yahoo in 49 Million U$, How come CLOS is so obscure in the
> programming community ?

    ... even if you cannot use it as your primary language, as it
really opens your mind to a different way of thinking and solving
problems. Though I haven't used Common Lisp yet in any project (only
SIOD and Emacs Lisp), I have made sure that people in my department
have heard about it, especially management. As I may do some kind of
official presentation of it in the future, I think it would be a good
idea to invite people from marketing, sales and field engineering to
come.
From: Nick Levine
Subject: Re: Newbie questions
Date: 
Message-ID: <372EEC83.E457A14E@harlequin.co.uk>
> I program AutoLISP in an IDE called VitalLISP and I came to like the way it
> works. (automatic completion of symbols names) - I once downloaded FreeLISP
> and then lost the copy. I can remeber that it reminded me of VitalLISP -
> Can anybody tell me where can I find the latest version that Harlquin
> released ?

Freelisp was withdrawn last year and replaced by the LispWorks Personal
Edition, see:

    http://www.harlequin.com/products/ads/lisp/

Meta-Control-i completes symbols by looking them up in package tables.
Meta-/ is dynamic completion, looking symbols up in the current editor buffer.

- nick