Hi everybody,
This is a request for your opinion on the topic in the subject line.
I'm working on Machine Learning and I would like to know your opinion about what
is the best language in which to implement ML (and, in general, AI) software. The
discussion is focused on software for researchers, not on commercial tools.
My workgroup is now discussing what is best, but we haven't made a decision
yet. We are mainly interested in:
- Existing implemented ML algorithms.
- Easy maintenance (easy to understand and suitable for group work).
- Language standardization.
- Multiplatform support.
- Interfaces with graphic environments.
- Software engineering methodologies developed.
We think both of them have their advantages and disadvantages. Perhaps your
experience could help us.
I know there are other good languages for AI implementations, but we want
to restrict the discussion to Lisp and C++. Of course, you are free to answer
this message to defend your favourite language. Any opinion is welcome.
Thank you in advance for your help.
Carlos Cid.
#-------------------------------------------#--------------------------------#
# JOSE CARLOS CID VITERO (Charlie) ITIG. | http://www.eis.uva.es/~charlie #
# Grupo de Ciencias Cognitivas Aplicadas y | ··············@dali.eis.uva.es #
# Vision por Computador. | #
# Dpt. Ingenieria de Sistemas y Automatica. | Tlf : (983) 42-33-55 42-33-58 #
# Escuela Tecnica Superior de Ingenieros | Fax : (983) 42-33-10 42-33-58 #
# Industriales. Universidad de Valladolid. | Paseo del cauce S/N. 47011 VA #
#-------------------------------------------#--------------------------------#
I am responding to Carlos Cid's request. I am basing my opinions on my
own experience in doing something much like what he wants to do.
Carlos Cid wrote:
>
> My workgroup is now discussing what is best, but we haven't made a decision
> yet. We are mainly interested in:
>
> - Existing implemented ML. algorithms.
A lot is available for Lisp. See the CMU-AI archive.
> - An easy maintenance (easy to understand and easy for group working).
Programming language theory experts say that Lisp has the advantage over
C++ here.
> - Language standardization.
Common Lisp is an ANSI standard and many implementations have attained
or nearly attained that standard.
> - Multiplatform support.
> - Interface with graphic environments.
The basic Common Lisp language is implemented on many platforms and
Common Lisp code runs on all of them. If you want a fancy interface to
run on any Unix or Linux machine, that is no problem. If you want a fancy
interface to run on any Windows(TM) machine, that is no problem. If you
want a single fancy interface to run on both Unix and Windows(TM), then
you need to do a bit more work (look into CLIM or Tcl/Tk, and see the Lisp
FAQ), but it is possible.
I suppose this is true of C++ also.
If a text-based interface is good enough, then that is perfectly portable.
> - Software Engineering methodologies developed.
I'm not sure what you are referring to here. Lisp is a very old and mature
language which is being used in industry all over the place. C++ is
relatively young, but it is widely used.
Lisp is a lot of fun. It's not a drag to program.
--
Benjamin Shults                 Email: ·······@math.utexas.edu
Department of Mathematics       Phone: (512) 471-7711 ext. 208
University of Texas at Austin   WWW:   http://www.ma.utexas.edu/users/bshults
Austin, TX 78712 USA            FAX:   (512) 471-9038 (attn: Benjamin Shults)
From: George Van Treeck
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <3252DB5E.5495@sybase.com>
Carlos Cid wrote:
> - Existing implemented ML. algorithms.
A lot more LISP code out there -- much of it outdated though. Most
new stuff is in C++.
> - An easy maintenance (easy to understand and easy for group working).
Only very weird people think LISP is readable... Writing obtuse
code is equally easy in either language. LISP is better for
prototyping and C++ better for production.
> - Language standardization.
Toss up.
> - Multiplatform support.
C++ is on many more vendors' platforms. Hardly any systems vendors
provide LISP -- have to rely on some small company. Due to the
much larger market for C++, it tends to be more optimized, bug free,
etc.
> - Interface with graphic environments.
Portable GUI frameworks are available for both LISP and C++.
Personally, I would use Java's AWT for making 100% portable GUI
code. It can call out to C++ code for the compute-intensive
portions. Java is about as fast as LISP, so you might be able
to write the whole thing in Java.
> - Software Engineering methodologies developed.
C++ wins hands down. Go to any book store with a computer section.
Many books on methodologies with examples in C++ -- nothing with
examples in LISP.
You forgot one other important category -- performance. If
your ML code is very compute intensive, e.g., GA, neural nets,
etc., then C++ is the only way to go.
-George
From: Juliana L Holm
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <52v7nr$ao6@portal.gmu.edu>
George Van Treeck (······@sybase.com) wrote:
: C++ wins hands down. Go to any book store with computer section.
: Many books on methodologies with examples in C++ -- nothing with
: examples in LISP.
LISP is a very useful language for learning about AI. It forces you to
code symbolically.
--
---------------------------------------------------------------------------
Julie Holm (ENTP)| DoD #1604 AMA#397939 UKMC# 0001 VOC# 4672
·····@gmu.edu | 1985 Virago 700 "Maureen"
|*** Nasty Girlie Gang Armourer, Shopping Consultant,
| and Travel Agent!!!
| I'm home at http://osf1.gmu.edu/~jholm
---------------------------------------------------------------------------
-----BEGIN GEEK CODE BLOCK-----
Version: 3.1
GCS/L/IT ·@ s-:- a+ C++$ UX/H/O/S+>+++$ p+ L E---() W++(+++) N+++ o? K-
w$ O M-- V PS++(--) ··@ Y+ PGP- t+(+++) !5 !X R tv-- b++ DI+ D--- G e++
h---- r+++ x++++
------END GEEK CODE BLOCK------
From: ········@ohstpy.mps.ohio-state.edu
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <1996Oct3.001857.9970@ohstpy>
In article <··········@portal.gmu.edu>, ·····@osf1.gmu.edu (Juliana L Holm) writes:
> George Van Treeck (······@sybase.com) wrote:
> : C++ wins hands down. Go to any book store with computer section.
> : Many books on methodologies with examples in C++ -- nothing with
> : examples in LISP.
>
> LISP is a very useful language for learning about AI. It forces you to
> code symbolically.
I am using a combination of C++ and Lisp in a current game I am writing.
The game is coded in C++, while I use a customized Lisp interpreter
for the AI. The C++ program feeds lisp commands into the interpreter
which then returns commands.
I am also able to use the interpreter to reprogram itself (i.e. learn).
I am new to Lisp myself, but I am impressed with its ability to bind
functions dynamically.
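To give a flavour of the dynamic binding I mean, here is a minimal sketch
(the function and data names are invented for illustration, not taken from
my actual game):

  ;; Sketch only: redefine an AI behaviour while the program is running.
  (defun attack-strategy (unit)
    "Default behaviour: always advance."
    (list :advance unit))

  ;; Later, the host program can install a new definition without
  ;; recompiling anything else; existing callers pick it up immediately.
  (setf (symbol-function 'attack-strategy)
        (lambda (unit)
          (if (> (getf unit :damage 0) 50)
              (list :retreat unit)
              (list :advance unit))))

  ;; (attack-strategy '(:name "scout" :damage 80))
  ;;   => (:RETREAT (:NAME "scout" :DAMAGE 80))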
-G
From: John P DeMastri
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <3253E7CE.219C@ils.nwu.edu>
········@ohstpy.mps.ohio-state.edu wrote:
>
> I am also able to use the interpreter to reprogram itself (i.e. learn).
>
> I am new to Lisp myself, but I am impressed with its ability to bind
> functions dynamically.
>
This kind of self modifying code can be EXTREMELY difficult to debug and
especially maintain, so these features of Lisp should be used with the
utmost caution...
John DeMastri
From: ········@ohstpy.mps.ohio-state.edu
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <1996Oct3.135843.9974@ohstpy>
In article <·············@ils.nwu.edu>, John P DeMastri <········@ils.nwu.edu> writes:
> ········@ohstpy.mps.ohio-state.edu wrote:
>>
>> I am also able to use the interpreter to reprogram itself (i.e. learn).
>>
>> I am new to Lisp myself, but I am impressed with its ability to bind
>> functions dynamically.
>>
>
> This kind of self modifying code can be EXTREMELY difficult to debug and
> especially maintain, so these features of Lisp should be used with the
> utmost caution...
Well, I should have clarified that the routines can be reprogrammed by
the C++ program which calls them (the interpreter isn't directly
reprogramming itself yet, though eventually it will). At any rate, I haven't really
tried any of that yet, since I am still getting basic stuff working.
My point was that generic functions can be changed without having to
recompile the code. My ultimate objective is to have user-programmable
doctrines (using some sort of natural-language interface) to
control fleets and other necessary but repetitive maintenance tasks.
I also defined a new defun and defmacro set of functions which
call defun and defmacro, but also keep a list of function/macro names
which can be accessed later and written to disk during shutdown...
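In outline, the wrapper is something like the following (a simplified
sketch, not the actual code; MY-DEFUN and *DEFINED-NAMES* are invented
names):

  ;; A DEFUN wrapper that records each name it defines so the list can be
  ;; written to disk at shutdown.
  (defvar *defined-names* '()
    "Names defined through MY-DEFUN, most recent first.")

  (defmacro my-defun (name lambda-list &body body)
    `(progn
       (pushnew ',name *defined-names*)
       (defun ,name ,lambda-list ,@body)))

  (defun save-defined-names (pathname)
    "Write the registry out, e.g. during shutdown."
    (with-open-file (out pathname :direction :output :if-exists :supersede)
      (print *defined-names* out)))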
Oh well, I am using Lisp because these kinds of tasks would be a nightmare
in C/C++, and I enjoy being able to play with AI modifications
directly while the host program is running.
(Sorry for the waste of bandwidth...)
-G
>
> John DeMastri
George Van Treeck <······@sybase.com> writes:
>
> Carlos Cid wrote:
> > - Existing implemented ML. algorithms.
>
> A lot more LISP code out there -- much of it outdated though. Most
> new stuff is in C++.
>
> > - An easy maintenance (easy to understand and easy for group working).
>
> Only very wierd people think LISP is readable... Writing obtuse
A misleading statement, to put it mildly. Consider this: I know C far
better than I know Lisp. Yet, after an hour of looking at the
compiler for CMUCL, I was able to make some enhancements to it that
help the compiler produce significantly better numeric code. I can't
imagine being able to make any significant change to gcc after an hour
of study, for example. The CMUCL compiler code is very well written
and extremely clear.
> code is equally easy in either language. LISP is better for
> prototyping and C++ better for production.
As Paul Graham says in "On Lisp", Lisp is like living in a rich
country, where you have to work to keep thin, but that is better than
living in a poor country where thinness is a matter of course.
Ray
George Van Treeck wrote:
>
> Carlos Cid wrote:
> > - Existing implemented ML. algorithms.
>
> A lot more LISP code out there -- much of it outdated though. Most
> new stuff is in C++.
>
> > - An easy maintenance (easy to understand and easy for group working).
>
> Only very wierd people think LISP is readable... Writing obtuse
> code is equally easy in either language. LISP is better for
> prototyping and C++ better for production.
Why?
>
> > - Language standardization.
>
> Toss up.
I have to disagree. Common Lisp is ANSI standardized, and there are many fine
implementations that support the standard. My experience and understanding of C++ is
that the commonality between implementations is far less than that guaranteed by ANSI
Common Lisp. Consider, for example, that ANSI Common Lisp defines run-time typing,
exception handling, stream I/O, automatic memory management, iteration, parsing
readers, advanced formatted output, file system interaction, complex number
arithmetic, non-overflowing integer arithmetic, polymorphic sequence and set
operations, ...
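To make that concrete, here are a few of those facilities in a few lines of
portable code (purely illustrative):

  (defun demo ()
    ;; non-overflowing integer arithmetic: 2^200 silently becomes a bignum
    (format t "2^200 = ~D~%" (expt 2 200))
    ;; complex number arithmetic
    (format t "(sqrt -1) = ~A~%" (sqrt -1))
    ;; exception (condition) handling
    (handler-case (/ 1 0)
      (division-by-zero () (format t "caught a division by zero~%")))
    ;; polymorphic sequence operations work on lists and vectors alike
    (format t "~A~%" (remove-if #'oddp #(1 2 3 4 5))))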
Some C++ implementations and/or class libraries provide many of these features (as do
Java implementations/class libraries), but I believe that there is no approved ANSI
specification for these in C++. Am I wrong?
>
> > - Multiplatform support.
>
> C++ is on many more vendor's platforms. Hardly any systems vendors
> provide LISP -- have to rely on some small company. Due to the
> much larger market for C++, it tends to be more optimized, bug free,
> etc.
My PERSONAL experience is that commercial Common Lisp implementations are of higher
quality and have far fewer significant bugs than commercial C++ implementations --
better even than C implementations from some large vendors. Perhaps other people have
had a different experience?
>
> > - Interface with graphic environments.
>
> Portable GUI frameworks are available for both LISP and C++.
> Personally, I would use Java's AWT for making a 100% portable GUI
> code. It can call out to C++ code for the compute intensive
> portions. Java is about as fast as LISP, so you might be able
> to write the whole thing in Java.
>
> > - Software Engineering methodologies developed.
>
> C++ wins hands down. Go to any book store with computer section.
> Many books on methodologies with examples in C++ -- nothing with
> examples in LISP.
It is sometimes said that those who can't do, teach; those that can't teach, write
books; those that can't write, give seminars.
If we are to judge by teaching at leading computer science universities, we should all
be writing in Scheme -- a dialect of Lisp that some might argue does not have direct
support for OO methodologies at all, and yet which is often used to teach the core
concepts of OO programming.
(If we are to judge by seminars, then you must use Java/Powerbuilder/VisualBasic.)
Some issues in favor of Common Lisp as providing direct support for OO methodologies
are:
- Methods and classes are themselves first-class objects. This is tremendously
important for creating tools which analyse the relationships between objects.
- Direct support for multi-methods (see the sketch after this list).
- Direct support for objectizing method-combination semantics.
- No syntactic distinction between method invocation and other function calls.
- A Meta-Object Protocol which provides the ability to change the way the object
system works. Admittedly, this is not part of the ANSI standard, but most
implementations provide some support for this de-facto standard.
- A macro facility which provides support for defining new languages which provide
explicit, enforced support for a PARTICULAR methodology. Some commercial products
(CAD systems, knowledge servers, etc.) are actually implemented by Lisp macros
which provide and enforce the methodology used by the product, sometimes combined
with portable modifications to the standard reader that provide a non-lispy syntax
for these macros.
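As a small illustration of the multi-method point above -- dispatch on the
classes of more than one argument -- here is a minimal sketch (class and
method names are invented for illustration only):

  (defclass rule-base () ())
  (defclass neural-net () ())

  (defgeneric combine (a b)
    (:documentation "Combine two learning components."))

  (defmethod combine ((a rule-base) (b rule-base))
    :merge-rule-sets)

  (defmethod combine ((a rule-base) (b neural-net))
    :seed-network-from-rules)

  (defmethod combine ((a neural-net) (b rule-base))
    (combine b a))   ; dispatching on BOTH argument classes makes this easy

  ;; (combine (make-instance 'rule-base) (make-instance 'neural-net))
  ;;   => :SEED-NETWORK-FROM-RULES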
An issue against Common Lisp as providing direct support for OO methodologies is that,
as defined by ANSI, methods may be syntactically separated from the objects which
specialize their behavior. This upsets some people's notions of encapsulation. Of
course, the macro facility can be used to define new constructions which provide such
enforced syntactic encapsulation. (I spent many years on a commercial knowledge based
engineering system which did this.) (To be fair, there are other encapsulation issues
which some people have with Common Lisp. These are discussed in the book I cite,
below.)
There are other languages (Sather) which are intended to provide very explicit support
for certain OO methodologies.
An excellent reference on the issues involved in OO programming in Common Lisp is:
"Object Oriented Programming, the CLOS Perspective", edited by Andreas Paepcke.
MIT Press, 1993.
Here are the articles it includes (CLOS is the Common Lisp Object System):
-An Introduction to CLOS
-CLOS in Context: The Shape of the Design Space
-User-Level Language Crafting: Introducing the CLOS Metaobject Protocol
-Metaobject Protocols: Why We Want Them and What Else They Can Do
-Metaobject Programming in CLOS
-The Silica Window System: The Metalevel Approach Applied More Broadly
-CLOS and C++
-CLOS, Eiffel, and Sather
-CLOS and Smalltalk
-Documenting Protocols in CLOS: Keeping the Promise of Reuse
-CLOS & LispView: Users' Experiences Distilled
-Using CLOS to Implement a Hybrid Knowledge Representation Tool
-TICLOS: An Implementation of CLOS for the Explorer Family
-Efficient Method Dispatch in PCL
Note that there are several comparisons with other languages, including C++.
>
> You forgot one other important category -- performance. If
> you ML code is very compute intensive, e.g., GA, neural nets,
> etc. then C++ is the only way to go.
Why?
>
> -George
"Howard R. Stearns" <······@elwoodcorp.com> writes:
>I have to disagree. Common Lisp is ANSI standardized, and there are many fine
>implementations that support the standard. My experience and understanding of C++ is
>that the commonality between implementations is far less than that guaranteed by ANSI
>Common Lisp.
He's right. ANSI C code is quite portable, and Common LISP code is
quite portable, but C++ is still in flux. There's lots of little
stuff that's gone in since the ARM, and various compilers have various
subsets of the new features. The ANSI committee needs to get the
standard out the door while people still care about it.
Java may be the next AI language. Java is actually closer to LISP
than is generally realized.
John Nagle
···@intentionally.blank-see.headers wrote:
> In article <················@elwoodcorp.com> "Howard R. Stearns"
> <······@elwoodcorp.com> writes:
[snip]
> Some C++ implementations and/or class libraries provide many of these
> features (as do Java implementations/class libraries), but I believe that
> there is no approved ANSI specification for these in C++. Am I wrong?
>
> The ANSI C++ standard provides a lot of that functionality in one way
> or another (but geared towards entirely different application areas).
> ANSI C++ doesn't have to bother with standardizing a lot of external
> interfaces because other people are doing the work: POSIX, X11,
> Win32/Open32, and lots of smaller APIs are expressed in C already.
> [...]
Howard Stearns is correct: there is as yet no ANSI C++ standard; the
proposed standard is in progress, and -- based on my reading of some
relevant newsgroups -- implementations which correctly and completely
implement the current draft of the standard are at best rare.
It is common for C++ programmers (including myself) to speak of "the ANSI
C++ standard" but that is shorthand for "the current draft working paper
of the standardization committee."
--
·····@brecher.reno.nv.us (Steve Brecher)
In article <················@best.best.com>,
···@intentionally.blank-see.headers wrote:
> portable, machine independent platform, for many more applications
> (e.g., WWW, UI, semi-numerical), CL is, in practical terms,
> considerably less well standardized than C++, since those applications
> require facilities that simply aren't addressed by the current CL
> standard.
Which facilities are you thinking of and where are these
"standards" for C++?
Greetings,
Rainer Joswig
From: Erik Naggum
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <3053419544367442@naggum.no>
(Non-Lisp groups removed from Newsgroups header. It used to include
comp.ai, comp.ai.genetic, comp.ai.neural-nets, and comp.lang.c++.)
[···@intentionally.blank-see.headers] (believed to be Thomas Breuel)
| The ANSI C++ standard ...
When was it approved?
| Unfortunately, a number of CommonLisp's APIs are a tad outdated,
| including its I/O and file system interfaces ...
Could you substantiate this? It is difficult to uncover what you actually
think when you only provide vague conclusions.
#\Erik
--
I could tell you, but then I would have to reboot you.
From: George Van Treeck <······@sybase.com>
Date: Wed, 02 Oct 1996 14:15:10 -0700
> - An easy maintenance (easy to understand and easy for group working).
Only very weird people think LISP is readable...
This is a ridiculous and inflammatory statement.
> - Interface with graphic environments.
Portable GUI frameworks are available for both LISP and C++.
Personally, I would use Java's AWT for making a 100% portable GUI
code. It can call out to C++ code for the compute intensive
portions. Java is about as fast as LISP, so you might be able
to write the whole thing in Java.
On what do you base this last statement? Do you have benchmarks? Are
you comparing Java to interpreted or compiled Lisp? I would be amazed
if Java were as fast as compiled Common Lisp code.
From: Erik Naggum
Subject: Re: Java vs. CL speed (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <3053420345771365@naggum.no>
(Newsgroups trimmed to comp.lang.lisp. Removed newsgroups: comp.ai,
comp.ai.genetic, comp.ai.neural-nets, and comp.lang.c++.)
[···@intentionally.blank-see.headers] (Believed to be Thomas Breuel)
| One thing that makes generating efficient numerical Java code so much
| easier is that numbers are not objects, so passing them among compiled
| functions, including structure accessors, without boxing is easier to
| generate code for than in Lisp, where a compiler has to worry about the
| possibility of boxed calls. Sadly, all Lisp compilers seem to throw in
| the towel in that area, apart from some half-hearted special cases.
I have used CLISP, GCL, WCL, and CMUCL extensively. They all represent
fixnums as unboxed machine words. With proper declarations, they generate
code that neither expects nor returns boxed fixnums. CMUCL is faster for
unchecked fixnum-only functions than C is. (That is, when C does not do
bignums, does not detect overflow, etc, neither does the Common Lisp code,
and the Common Lisp code wins by about 10% in performance.)
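For reference, declared fixnum-only code of the kind under discussion looks
roughly like this (a sketch; what the compiler actually emits depends on the
implementation):

  (defun sum-below (n)
    ;; With these declarations, compilers such as CMUCL keep N, I, and SUM
    ;; in raw machine words -- no boxing, no overflow checks.
    (declare (type fixnum n)
             (optimize (speed 3) (safety 0)))
    (let ((sum 0))
      (declare (type fixnum sum))
      (dotimes (i n sum)
        (declare (type fixnum i))
        (setq sum (the fixnum (+ sum i))))))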
Methinks you have never actually programmed numerical code in Common Lisp.
#\Erik
--
I could tell you, but then I would have to reboot you.
From: George Van Treeck
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <325B0122.1BCF@sybase.com>
Carl L. Gay wrote:
>
> From: George Van Treeck <······@sybase.com>
> Date: Wed, 02 Oct 1996 14:15:10 -0700
>
> > - An easy maintenance (easy to understand and easy for group working).
>
> > Only very weird people think LISP is readable...
>
> This is a ridiculous and inflammatory statement.
It's only ridiculous and inflammatory to those in love with Lisp.
It's like telling a mother her child is ugly. Those without
vested interest will simply nod and read on.
>
> > - Interface with graphic environments.
>
> Portable GUI frameworks are available for both LISP and C++.
> Personally, I would use Java's AWT for making a 100% portable GUI
> code. It can call out to C++ code for the compute intensive
> portions. Java is about as fast as LISP, so you might be able
> to write the whole thing in Java.
>
> On what do you base this last statement? Do you have benchmarks? Are
> you comparing Java to interpreted or compiled Lisp? I would be amazed
> if Java were as fast as compiled Common Lisp code.
I base the statement on some background in writing interpreters and
compilers. The methods for byte-code interpreters and JIT
compilers are pretty mature, and there won't be big differences.
Java JIT compilers are appearing now on Macs, PCs, and some UNIX
workstations. It won't be long until Java JIT compilers are
prevalent on all vendors' computers and performance is in
the same ballpark.
Bottom line: The proof of which is better (C++ or Lisp) is to
look at the commercially successful applications. If Lisp is such
a hot language, how come most applications are all written in C and
C++? Is it that Lisp is not used because it's difficult to read
and maintain? Is Lisp's problem lack of performance, portability,
what? Must be some major weaknesses.
-George
············@wildcard.demon.co.uk (Cyber Surfer) writes:
>In article <·············@sybase.com>
> ······@sybase.com "George Van Treeck" writes:
>> Bottom line: The proof of which is better (C++ or Lisp) is to
>> look at the commercially successful applications. If Lisp is such
>> a hot language, how come most applications are all written in C and
>> C++? Is that that Lisp is not used because it's difficult to read
>> and maintain? Is Lisp's problem lack of performance, portability,
>> what? Must be some major weaknesses.
Maybe not. People don't always choose the best language to write in.
Often that choice is highly influenced by perception, ignorance about
Lisp, lack of Lisp software in the organization, or peer pressure.
Paul Graham has a nice discussion of "why Lisp?" in the first six
pages of his book, _ANSI Common Lisp_. He notes:
Programming languages teach you not to want what they cannot provide.
You have to think in a language to write programs in it, and it's
hard to want something you can't describe.
Graham also notes: GNU Emacs, Autocad, and Interleaf are all written
in Lisp.
--David Finton
> Graham also notes: GNU Emacs, Autocad, and Interleaf are all written
> in Lisp.
This may be slightly misleading. I don't know about Autocad,
but GNU Emacs and Interleaf are both written in C. But in
both these cases, the C code also has an embedded Lisp (or
Lisp-ish, in case of GNU Emacs) interpreter. Thus once the
basic system is up, various extensions written in Lisp
can be loaded and executed. It is doubtful that in these
two cases, the basic system itself would have been usably
fast if all of it was written in Lisp -- but it is certainly
the case that the usage of Lisp as an extension language
has proved to be a very powerful and flexible extension mechanism
in both. It would have been much more cumbersome and
error-prone to try to provide C or C++ as a user-level
dynamic extension language to the base system (it is
possible to do this, e.g. in "plug-in"s such as found in
Netscape Navigator, but these interfaces are nowhere near as
flexible, powerful and convenient as GNU Emacs and Interleaf user
extensions.)
From: Bruce Tobin
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <32624224.12B6@iwaynet.net>
Mukesh Prasad wrote:
>
>
> This may be slightly misleading. I don't know about Autocad,
> but GNU Emacs and Interleaf are both written in C. But in
> both these cases, the C code also has an embedded Lisp (or
> Lisp-ish, in case of GNU Emacs) interpreter. Thus once the
> basic system is up, various extensions written in Lisp
> can be loaded and executed. It is doubtful that in these
> two cases, the basic system itself would have been usably
> fast if all of it was written in Lisp -- but it is certainly
> the case that the usage of Lisp as an extension language
> has proved to be a very powerful and flexible extension mechanism
> in both.
This is itself misleading. I don't know about
Interleaf, but GNU Emacs is written in Lisp. The
distribution, for reasons of portability, includes
C source for the Lisp interpreter. Unfortunately
C compilers are a lot more common than Lisp
compilers, making this mode of distribution a
practical necessity. An Emacs editor written from
scratch in Common Lisp would be faster, not
slower-- take a look at Hemlock.
From: ········@wat.hookup.net
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <53u15n$edi@nic.wat.hookup.net>
In <·············@dma.isg.mot.com>, Mukesh Prasad <·······@dma.isg.mot.com> writes:
>> Graham also notes: GNU Emacs, Autocad, and Interleaf are all written
>> in Lisp.
>
>This may be slightly misleading. I don't know about Autocad,
>but GNU Emacs and Interleaf are both written in C. But in
>both these cases, the C code also has an embedded Lisp (or
>Lisp-ish, in case of GNU Emacs) interpreter. Thus once the
>basic system is up, various extensions written in Lisp
>can be loaded and executed. It is doubtful that in these
>two cases, the basic system itself would have been usably
>fast if all of it was written in Lisp -- but it is certainly
>the case that the usage of Lisp as an extension language
>has proved to be a very powerful and flexible extension mechanism
>in both. It would have been much more cumbersome and
>error prone, to try to provide C or C++ as a user-level
>dynamic extension language to the base system (it is
>possible to do this, e.g. in "plug-in"s such as found in
>Netscape Navigator, but these interfaces are nowhere near as
>flexible, powerful and convenient as GNU Emacs and Interleaf user
>extensions.)
It seems to me that the C code is solely to provide the elisp and bytecode
interpreter. Granted, Elisp has lots of datatypes and functions designed to
make editing tasks written in Elisp easier, but the editor itself seems to be
written in Elisp.
Hartmann Schaffer
From: Juliana L Holm
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <53j7j9$evu@portal.gmu.edu>
Erik Naggum (····@naggum.no) wrote:
: [George Van Treeck]
: | That's a very good point! If you're a professor in some ivory tower
: | you can afford to savor your favorite computer language. If you
: | develop software that must be sold to put food on your plate, then you
: | become much more conservative! Your definition of best becomes more
: | dominated in terms of what is the "safe" choice. And the safe choice
: | is one that is popular -- has support from many vendors, books, lots of
: | staff you hire that already know the language, etc. That means your
: | favorite is something commercially successful. You know that if it's
: | commercially successful, that there is a high probability you can get
: | the project out the door with sufficient quality to sell it.
: in other words, the specific _language_ you use is utterly irrelevant. it
: could be Cobol or APL or C or Java or whatever, as long as it is "safe" and
: all those other parameters are present. you argued that C and C++ were
: better languages because they were commercially successful. in fact, you
: have argued for the irrelevancy of the choice of programming language
: compared to other factors that completely overshadow language merit.
When you're in school, the best language may be the one that illustrates
the concepts that you are learning best. In school you might use
Prolog, LISP, whatever. The point is to learn the concepts.
When you are in business you use the language they already have. Or you
use the language that they can hire programmers who already know. Or
that the existing programmers can "get up to speed" on best. Rarely,
you can do an evaluation of several different development platforms to
decide what languages and tools you will use. But your evaluation must
include considering not only the choice of language for the AI portion
of the project, but also the GUI, the interaction with other systems,
the external functions, the interaction with the database.
Portability between platforms. Scalability. Speed. How stable is the
company that sells the product(s) you will choose. How much money is
budgeted. How important is the system to the company. What is the
support like, how much do classes cost. Etc. Etc. Etc.
It becomes very complex.
--
---------------------------------------------------------------------------
Julie Holm (ENTP)| DoD #1604 AMA#397939 UKMC# 0001 VOC# 4672
·····@gmu.edu | 1985 Virago 700 "Maureen"
|*** Nasty Girlie Gang Armourer, Shopping Consultant,
| and Travel Agent!!!
| I'm home at http://osf1.gmu.edu/~jholm
---------------------------------------------------------------------------
-----BEGIN GEEK CODE BLOCK-----
Version: 3.1
GCS/L/IT ·@ s-:- a+ C++$ UX/H/O/S+>+++$ p+ L E---() W++(+++) N+++ o? K-
w$ O M-- V PS++(--) ··@ Y+ PGP- t+(+++) !5 !X R tv-- b++ DI+ D--- G e++
h---- r+++ x++++
------END GEEK CODE BLOCK------
From: Will
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <53j8n1$gop@news.one.net>
In article <·············@sybase.com>, George Van Treeck <······@sybase.com> wrote:
>Name some commercial applications written in Common Lisp. A
>couple of expert system shells is about it.
Abuse, a recently released video game, is written in Lisp.
There is a web server written by someone at MIT which is in Lisp.
The products are there, they just aren't as obvious. When do you
see a sticker on a box saying, "Programmed in C++, language of
champions!!" That type of thing just doesn't happen, it doesn't matter
what language you're talking about.
Cheers~
Will
From: Matt Grice
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <3260dca7.5106933@news>
·····@one.net (Will) wrote:
>In article <·············@sybase.com>, George Van Treeck <······@sybase.com> wrote:
>
>>Name some commercial applications written in Common Lisp. A
>>couple of expert system shells is about it.
>
>Abuse, which is a recently released video game is written in Lisp.
No it isn't. Abuse has an embedded LISP interpreter, and the various
game objects are controlled via LISP scripts. This is less than 20%
of the total source code of Abuse.
It seems to be a common mistake in this thread to say something which
has an embedded interpreter for language 'x' is 'written in X.' LISP
scripts may be essential to the operation of each program mentioned,
but almost none I have seen mentioned are primarily written in LISP.
I don't think this concept is too hard to grasp, but perhaps most do
not have direct knowledge of the products they are mentioning.
Dozens of windows apps have embedded basic interpreters... doesn't
mean they are written in basic, as I am sure everyone understands. On
another note, I keep hearing about the Apple LISP compiler - are many
mac applications written in LISP?
In article <················@news>, ······@iastate.edu (Matt Grice) wrote:
> Dozens of windows apps have embedded basic interpreters... doesn't
> mean they are written in basic, as I am sure everyone understands. On
> another note, I keep hearing about the Apple LISP compiler - are many
> mac applications written in LISP?
No, Apple has (again) done a bad job. Thank god, "Digitool" now ensures
the ongoing development of Macintosh Common Lisp. Their new versions
(PPC MCL 4.0 and MCL 3.1) are out maybe this month. The biggest
and most important contribution from Digitool until now is the port
of MCL to the PowerPC, so that Lisp developers on the Mac again
have adequate speed on the newer machines.
Some commercial software (public and in-house) has been written with MCL and
a lot more has been developed or prototyped with the help of
MCL. Just see the customer list of Digitool (http://www.digitool.com/)
and you will read a lot of familiar names. If I remember correctly,
Apple did early work on what we now see as "Apple Guide" (the online
help system) in MCL. There are surely more examples. Even
Microsoft (!!!) had a user-interface mock-up of
Microsoft Word written in MCL.
Greetings,
Rainer Joswig
Matt Grice wrote:
>
> ·····@one.net (Will) wrote:
>
> >In article <·············@sybase.com>, George Van Treeck <······@sybase.com> wrote:
> >
> >>Name some commercial applications written in Common Lisp. A
> >>couple of expert system shells is about it.
> >
> >Abuse, which is a recently released video game is written in Lisp.
>
> No it isn't. Abuse has a LISP embedded interpreter, and the various
> game objects are controlled via LISP scripts. This is less than 20%
> of the total source code of Abuse.
>
> It seems to be a common mistake in this thread to say something which
> has an embedded interpreter for language 'x' is 'written in X.'
Stop and think about what you just said.
Somebody who already had a competency in C went to the trouble of
cross-linking a second language into the application because it
brought something to the table not found in C. Doesn't that tell
you something?
I'm a FORTRAN hack, myself. So you two guys can club each other
into pink mush and I have nothing invested emotionally.
And I accept your point... that having a *percentage* of an
application written in language X does not entitle someone to
call it an "X application".
But from a view of formal logic, it seems that, in the process
of making your point, you have conceded something all of us
already knew: "There is no one 'best' language. Each has its
strong and weak points. Where there is an excess of hardware
resources, you can afford to be inefficient in some places and
have the luxury of living in a one-language paradigm. Where hardware
is dear (how much does it cost to put a 20-cents-more-expensive
CPU in 10 million Nintendos?), sometimes you have to cherry-pick
the best technology for each sub-task and glue them together."
--
Glen Clark
····@clarkcom.com
George Van Treeck wrote:
> For example, Microsoft
> OEMs it's FORTRAN and COBOL.
You're saying that PowerStation 4.0 was developed out-of-house?
Didn't know that. Who actually wrote it?
--
Glen Clark
····@clarkcom.com
In article <·············@sybase.com>, George Van Treeck
<······@sybase.com> wrote:
> What language do you think the product comprising Microsoft Office
> or Claris Works are written in? What about the latest version
> Autodesk's 3D software? Relational DBMS from Oracle, Sybase and
> Informix? Electronics design tools from Cadence, Mentor Graphics,
> Viewlogic, etc.?
Oh, "Electronics design tools" do like Lisp.
> Name some commercial applications written in Common Lisp. A
> couple of expert system shells is about it.
We have gone through this before. There is quite a lot
software written in and running in Lisp. See the
customer list of Digitool (at http://www.digitool.com/).
Pretty impressive.
Ford designs cars with Lisp software, Boeing designs
turbines with Lisp software, the latest games
on the market have been designed with Lisp software
(Crash Bandicoot for Sony and Super Mario 64 for
Nintendo), TI designs 586 compatible processors
with Lisp software, Swissair eliminates double
bookings with Lisp software, commercial development
environments use Lisp software (just got a
letter from SUN about their C++ development environment,
which editor are they using? XEmacs!), Apple
has prototyped a new multimedia environment
in Lisp (SK8), Siemens has developed natural
language translation software in Lisp (Metal,
recently redone in C/C++), ...
To give you an example, we are a small company. Scripting
on Unix is being done in scsh (the Scheme shell). We have
developed some accounting stuff for in-house purposes
in MCL (Macintosh Common Lisp). This enables us to be
very flexible, and we can produce bills on the first
of each month.
Another example: here in Hamburg there is software that
guides people through the local public transport
system. It computes optimal routes through the huge
system of buses, trains, underground trains and ships
of a city with 1.5 million people. Its algorithms
have been scaled to handle the complete transport
system (and not some subset) and to use
additional constraints (handicaps, no tunnels,
short walking distances between stations, etc.).
It has been developed to support the phone support staff
of the HVV (the local Hamburg public transport company). If you
go to the central station, you will see people standing
in front of a 21" touch screen. The system displays
the routes graphically and gives information about times, etc.
You can print out your personal plan, etc.
People will never notice, but the software is written
completely in Common Lisp. Some of the basic algorithms
have been developed on Macintosh Common Lisp running
on a PowerBook 180 (a friend of mine has done work
on that). A company has brought it to market. Former
Cobol programmers were happily hacking in Lisp.
Now it runs in Lucid Common Lisp on SPARCstations.
You don't know of any Lisp software out there? Maybe it's
your fault. Even President Clinton's web site
is partly run on Lisp machines. Have a look at
how the White House distributes electronic
documents over the Web: it runs on a system
that combines forms processing via the Web and
via email, using document classification,
built on an object-oriented database.
Yes, running on a Lisp machine.
> It's not myths. At Digital, a CAD tool was written in Lisp and dumped
> because it Lisp was just too slow. Lisp can't be used in real-time
> applications, because you need to guarantee response time.
How, then, has Lucent built its high-speed switching system
on a Lisp architecture? Ever heard of the company Gensym?
> A 911
> call needs to routed over the phone lines and can't afford to wait
> for Lisp program to take a coffee brake (garbage collect for some
> indeterminate period). And the code size is too large to be
> quickly activated and used for small utility applications. Perl,
> Tcl, C/C++ are used instead.
You have no idea how Lisp systems work? How can I type
in a Lisp-based editor like Emacs? Either it takes too long
to be activated or it is constantly garbage collecting
for long times. ;-) Still, I can type very fluently in it.
I have used my Lisp machine's editor (Zmacs) a couple of
times when "tools" under Unix were unusable. Under
Zmacs I can still edit a 100 MB file and do cut and
paste with 20 MB parts. Technology from 15 years ago.
> Virtually all new commercial applications are developed in C/C++.
Besides all the other applications that are being
developed in Pascal (ever heard of Delphi?) or Basic
(ever heard of Visual Basic?) or Java or SmallTalk
(did you hear that IBM sells and uses SmallTalk?)
> Many in-house applications are being developed in VisualBasic
> or some 4GL/GUI tool like PowerBuilder, because it requires
> less technical skill ($25/hr labor for VisualBASIC programmer
> vs. $60/hr for C++ programmer) and development is quicker
> using something like PowerBuilder. Lisp could never
> cut it in that market either.
AutoCad is a good counter example. A lot of AutoCad software
is being done in Lisp. A new CAD system from Germany
(FelixCad) also uses Lisp and claims AutoCad compatibility.
Glen Clark <····@clarkcom.com> wrote in article
<·················@clarkcom.com>...
> Erik Naggum wrote:
>
> > I hope you see that your line of argumentation is utterly without merit and
> > relevance -- the sorry fact is that it is only useful as long as your
> > audience already agrees with you before you started to argue. since your
> > audience is largely made up of people who have never looked seriously at
> > any other programming languages, they won't even stop to think that your
> > invalid argument is invalid. and _that's_ how C and C++ manage to live on,
> > regardless of their obvious costs and problems.
>
> Wow! Actually, I have no knowledge of LISP, so I can't comment
> much on the substance. But I can't help but notice the vigor.
> Even though it is not his native cause, I wonder if we could
> hire him as a mercenary for comp.lang.fortran.vs.C which has
> been raging at a different place on your radio dial since before
> the Serbs hated the Croats. Such vigor! ;)
I never knew that C++ could be used for AI programming. Anyway, if it can
be, it must be very tedious. LISP was created with AI programming as
its objective, so it should be the best suited.
But the most complete AI programming package I've found so far comes from
NASA. It's called CLIPS (derived from LISP???)
Eddy
In article <···························@intnet.mu.intnet.mu> "Eddy Young" <·····@arbornet.org> writes:
:But, the most complete AI programming package I've found so far comes from
:the NASA. It's called CLIPS (derived from LISP???)
CLIPS stands for C Language Integrated Production System.
--Norm
Eddy,
> LISP was created with AI programming as objective
I seem to recall that it was intended to process arbitrary lists --
witness both the name (LISt Processor) and its primitives (car, cdr,
...).
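For example (illustrative only):

  (car '(a b c))    ; => A        the first element
  (cdr '(a b c))    ; => (B C)    the rest of the list
  (cons 'a '(b c))  ; => (A B C)  building a list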
Cheers,
Felix.
In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:
> [George Van Treeck]
>
> | That's a very good point! If you're a professor in some ivory tower
> | you can afford to savor your favorite computer language. If you
> | develop software that must be sold to put food on your plate, then you
> | become much more conservative! Your definition of best becomes more
> | dominated in terms of what is the "safe" choice. And the safe choice
> | is one that is popular -- has support from many vendors, books, lots of
> | staff you hire that already know the language, etc. That means your
> | favorite is something commercially successful. You know that if it's
> | commercially successful, that there is a high probability you can get
> | the project out the door with sufficient quality to sell it.
>
> in other words, the specific _language_ you use is utterly irrelevant. it
> could be Cobol or APL or C or Java or whatever, as long as it is "safe" and
> all those other parameters are present. you argued that C and C++ were
> better languages because they were commercially successful. in fact, you
> have argued for the irrelevancy of the choice of programming language
> compared to other factors that completely overshadow language merit.
Most of the **crashing** software I'm forced to use is written in C
or C++.
In article <·············@comp.uark.edu>, ······@comp.uark.edu wrote:
> George Van Treeck wrote:
>
> > Bottom line: The proof of which is better (C++ or Lisp) is to
> > look at the commercially successful applications. If Lisp is such
> > a hot language, how come most applications are all written in C and
> > C++? Is that that Lisp is not used because it's difficult to read
> > and maintain? Is Lisp's problem lack of performance, portability,
> > what? Must be some major weaknesses.
>
> Ah, yeah. By that measure, COBOL is the best language, followed closely
> by Fortran 77, and then Visual Basic.
Actually, the first example that jumped to my mind of "inferior products
dominating the market" was VCRs--Beta vs. VHS. I'm not implying that C++
is inferior to Lisp, but it's clear that you can't always argue from
market success backwards to which product is superior....there are too
many other factors to consider that have very little to do with the actual
quality of the product.
Just noticed the distribution list on this thread, too...apologies to
those who think it should have died a long time ago.
--
--Michael Wolfe
--The Institute for the Learning Sciences
--Northwestern University
·······@ils.nwu.edu
> > Ah, yeah. By that measure, COBOL is the best language, followed closely
> > by Fortran 77, and then Visual Basic.
My solution to this dispute is to look at the career ads in the newspapers
and on the web. You will notice that C++ is being used to build MANY more
applications than LISP. (Actually I haven't seen a job ad for LISP). C++
is far more flexible than LISP in terms of what can be done, is much better
supported, and if you're really hurting for hyper-fast code you can still
write inner loop functions in assembly language.
I'll sum up:
Learn LISP and be unemployed.
nh
From: Jim Veitch
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <326662F7.5C87@franz.com>
Go look at the jobs offered at our Web page. Subscribe to
··········@cs.cmu.edu and ············@cs.cmu.edu. One problem for
companies using Lisp is that all the competent Lisp people are being
lured away to work in companies to implement Java, implement dynamic
network languages, implement dynamic Web servers, etc., for huge
salaries (these jobs don't normally list Lisp as a requirement, it's just
that if Lisp people apply they seem to have what it takes). Makes it
tough for an ordinary Lisp company to find people. I have yet to hear
of an unemployed Lisp user except maybe one in the U.K.
Jim.
Neil Henderson wrote:
> My solution to this dispute is to look at the career ads in the newspapers
> and on the web. You will notice that C++ is being used to build MANY more
> applications than LISP. (Actually I haven't seen a job ad for LISP). C++
> is far more flexible than LISP in terms of what can be done, is much better
> supported, and if you're really hurtin for hyper-fast code you can still
> write inner loop functions in assemby language.
>
> I'll sum up:
> Learn LISP and be unemployed.
>
> nh
From: David Brabant [SNI]
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <54l7hd$8o9@iliana.csl.sni.be>
> My solution to this dispute is to look at the career ads in the newspapers
> and on the web. You will notice that C++ is being used to build MANY more
> applications than LISP. (Actually I haven't seen a job ad for LISP). C++
> is far more flexible than LISP in terms of what can be done, is much better
^^^^^^^^^^^^^^^^^^^^^^^
I guess that you never wrote a program in Lisp. Nor in C++. I'm a long-time
Lisp programmer and a C++ veteran too. Writing an AI program in C++
is like using a screwdriver to drive nails into a board. Sure, that's
a lot more fun than using a hammer, but less efficient.
> supported, and if you're really hurtin for hyper-fast code you can still
> write inner loop functions in assemby language.
>
> I'll sum up:
> Learn LISP and be unemployed.
David
--
David BrabaNT, | E-mail: ·············@csl.sni.be
Siemens Nixdorf [SNI], | CIS: 100337,1733
Centre Software de Liège, | X-400: C=BE;A=RTT;P=SCN;O=SNI;OU1=LGG1;OU2=S1
2, rue des Fories, | S=BRABANT;G=DAVID
4020 Liège, BELGIUM | HTTP: www.sni.de www.csl.sni.be/~david
In article <··········@iliana.csl.sni.be>, ·············@csl.sni.be (David
Brabant [SNI]) wrote:
> > My solution to this dispute is to look at the career ads in the newspapers
> > and on the web. You will notice that C++ is being used to build MANY more
> > applications than LISP. (Actually I haven't seen a job ad for LISP). C++
> > is far more flexible than LISP in terms of what can be done, is much better
> ^^^^^^^^^^^^^^^^^^^^^^^
>
> Guess that you never wrote a program in Lisp. Nor in C++. I'm a long
> time Lisp programmer and a C++ veteran too. Writing AI program in C++
> is like using a screwdriver to drive in nails into board. Sure, that's
> a lot more fun than using a hammer, but less efficient.
>
A topic that hasn't been discussed in this thread is the difference in
problem-solving techniques between languages such as Lisp and C++, or
perhaps you could say "mind set." IMHO, programmers who "know a language"
are missing out on a lot of rich knowledge which is embodied in other
languages. Building a C++ application is just one way to solve a problem.
Not necessarily the right way, but a way.
It is important to be aware of different programming paradigms. C++,
Pascal, Smalltalk, Lisp, etc. all provide a specific view of the world.
Prolog (which I've never been able to grasp), Forth, assembler, Perl,
Basic, pick your favorite set of languages. They are all just simple
tools. Although most of my programming is in C++ these days (due to
customer requirements), I can still use the patterns and metaphors
provided by languages such as Lisp or Perl to understand what needs to be
done and then slog through the C++ to make it happen.
So, don't just learn C++. Don't just learn Lisp. Quality professional
programmers should have at least 3 or 4 really different languages with
which they are proficient. It makes picking up the next language (like
scripting, string processing, or who knows what we'll be using in 15
years) that much easier.
Ray
> > supported, and if you're really hurtin for hyper-fast code you can still
> > write inner loop functions in assemby language.
> >
> > I'll sum up:
> > Learn LISP and be unemployed.
>
> David
>
> --
> David BrabaNT, | E-mail: ·············@csl.sni.be
> Siemens Nixdorf [SNI], | CIS: 100337,1733
> Centre Software de Liège, | X-400: C=BE;A=RTT;P=SCN;O=SNI;OU1=LGG1;OU2=S1
> 2, rue des Fories, | S=BRABANT;G=DAVID
> 4020 Liège, BELGIUM | HTTP: www.sni.de www.csl.sni.be/~david
--
Raymond Cote, VP Product Development ·······@apsol.com
Appropriate Solutions, Inc.
In article <··········@iliana.csl.sni.be>, ·············@csl.sni.be (David Brabant [SNI]) wrote:
>> applications than LISP. (Actually I haven't seen a job ad for LISP). C++
>> is far more flexible than LISP in terms of what can be done, is much better
> ^^^^^^^^^^^^^^^^^^^^^^^
>
> Guess that you never wrote a program in Lisp. Nor in C++. I'm a long
> time Lisp programmer and a C++ veteran too. Writing AI program in C++
> is like using a screwdriver to drive in nails into board. Sure, that's
> a lot more fun than using a hammer, but less efficient.
Actually, it's worse than that. You can always turn a screwdriver around and
use the handle to pound nails in.
No, writing software in C++ is more like driving rivets into a steel beam with
a toothpick.
And you have to manually remove all the rivets with a paperclip when you're
done with the beam. And if you miss one the building will come down on your
head.
And a whole host of other Bad Evil things.
>David
Adam
> My solution to this dispute is to look at the career ads in the newspapers
> and on the web. You will notice that C++ is being used to build MANY more
> applications than LISP.
There are more ads specifically requesting C++ coders, but this does
nothing to elucidate the efficacy of programming in the two languages.
For one thing, it might mean it takes more C++ coders. :)
> (Actually I haven't seen a job ad for LISP). C++
> is far more flexible than LISP in terms of what can be done,
In what way? Both languages are Turing-complete.
>is much better
> supported,
True. Hopefully, the lisp companies will eventually catch up in this
department.
> and if you're really hurtin for hyper-fast code you can still
> write inner loop functions in assemby language.
You can do that in lisp too.
dave
From: ········@wat.hookup.net
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <54620q$r59@nic.wat.hookup.net>
In <··························@paris>, "Neil Henderson" <······@worldchat.com> writes:
>> > Ah, yeah. By that measure, COBOL is the best language, followed closely
>> > by Fortran 77, and then Visual Basic.
>
>My solution to this dispute is to look at the career ads in the newspapers
>and on the web. You will notice that C++ is being used to build MANY more
>applications than LISP. (Actually I haven't seen a job ad for LISP). C++
>is far more flexible than LISP in terms of what can be done, is much better
>supported, and if you're really hurtin for hyper-fast code you can still
>write inner loop functions in assemby language.
>
>I'll sum up:
> Learn LISP and be unemployed.
>
>nh
>
>
I submit that Lisp is a better vehicle to learn programming concepts. Once
you know those, it's essentially a question of picking up the paradigms and
syntax of the language actually used to get going (later it is a question of
overcoming your frustration about being stuck with a language that puts up
roadblocks).
I am very frustrated when I see people taking programming classes where the
whole class content is going over the syntax and a little bit of semantics
of one language. Graduates from classes like that all too often end up knowing
nothing about programming.
Hartmann Schaffer
From: George Van Treeck
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <32682FEE.2DCC@sybase.com>
Please make all continuing responses on this topic to this note.
That way responses won't be cross-posted to comp.ai.genetic and
comp.ai.neuralnets (the only two comp newsgroups I read). These
two areas do not use symbolic processing at all. They are
computationally intensive, using primarily C and C++. In other
words, take your smelly dead horse discussion elsewhere!
········@wat.hookup.net wrote:
> I submit that Lisp is a better vehicle to learn programming concepts. Once
> you know those, it's essentially a question of picking up the paradigms and
> syntax of tha actually used language to get going (later it is a question of
> overcoming your frustration about being stuck with a language that puts up
> roadblocks).
>
> I am very frustrated when I see people taking programming classes where the
> whole class contents is going over the syntax and a little bit of semantics
> of one language. Graduates from classes like that all too often end up knowing
> nothing about programming.
········@wat.hookup.net wrote
> I submit that Lisp is a better vehicle to learn programming concepts. Once
> you know those, it's essentially a question of picking up the paradigms and
> syntax of the language actually used to get going (later it is a question of
> overcoming your frustration about being stuck with a language that puts up
> roadblocks).
Would you have any recommendations for books and/or approaches for a
long-time C/C++ programmer who would like to "move up" to Lisp? I
downloaded a trial of Franz's Allegro, but haven't done anything with it
yet (but it looks fascinating).
-- Dave Sieber
·······@terminal-impact.com
http://www.terminal-impact.com
Neil Henderson wrote:
> C++ is far more flexible than LISP in terms of what can be done, is much
> better supported, and if you're really hurtin for hyper-fast code you can
> still write inner loop functions in assemby language.
1) Generally you can do that in LISP too.
2) How is C++ "far more flexible"? Overall I actually found LISP more
flexible and, on average, much easier to use.
3) It certainly is much better supported.
> Learn LISP and be unemployed.
My first "real" job was a LISP job.
--
Dave Newton | TOFU | (voice) (970) 225-4841
Symbios Logic, Inc. | Real Food for Real People. | (fax) (970) 226-9582
2057 Vermont Dr. | | ············@symbios.com
Ft. Collins, CO 80526 | The World Series diverges! | (Geek joke.)
In article <·············@symbiosNOJUNK.com>,
Dave Newton <············@symbiosNOJUNK.com> wrote:
>
>> Learn LISP and be unemployed.
>
> My first "real" job was a LISP job.
>
At ILS and other research-oriented places I know about, we are having
serious problems finding great Lisp programmers. My friends in
research-oriented industry organizations tell me the same. We post ads
regularly on the net, and the number of responses we get isn't huge. So from
our perspective it is more of a supply problem than a demand problem, despite
what the industry as a whole might look like.
In <··········@news.acns.nwu.edu> ······@ils.nwu.edu (Kenneth D.
Forbus) writes:
>
>In article <·············@symbiosNOJUNK.com>,
> Dave Newton <············@symbiosNOJUNK.com> wrote:
>
>At ILS and other research-oriented places I know about, we are having
>serious problems finding great Lisp programmers. My friends in
>research-oriented industry organizations tell me the same. We post ads
>regularly on the net, and the number of responses we get isn't huge. So from
>our perspective it is more of a supply problem than a demand problem, despite
>what the industry as a whole might look like.
Perhaps Ken is overgeneralizing from his data. It seems more likely
that there is a shortage of great Lisp programmers WHO WANT TO LIVE IN
CHICAGO.
Steven Vere
But if that is true, then given how the population of the US is distributed,
wouldn't it stand to reason that there can't be that many great Lisp
programmers anywhere?
In article <··········@sjx-ixn10.ix.netcom.com>,
····@ix.netcom.com(Steven Vere) wrote:
> Perhaps Ken is overgeneralizing from his data. It seems more likely
>that there is a shortage of great Lisp programmers WHO WANT TO LIVE IN
>CHICAGO.
>
>Steven Vere
Actually, Ken made an important point in private email to me (which I
hope he doesn't mind me adapting for public consumption) that the fact
of Lisp's neglect (per the major flame war ongoing elsewhere on this
list) can and ought to be addressed at the educational level. It used
to be the case, at CMU and Penn to take the only examples that I know,
that people were taught some variously required C clone,
whether the clone of choice at the moment was C, C++, Pascal, or
whatnot, and ALSO Lisp (or Scheme). There used to be a belief among
CS educators that aside from its obvious utility, simplicity, and
elegance, Lisp was the language wherein the *real* PRINCIPLES of
ALGORITHMS and computing -- that is, what was relevant in computer
science (above the level of the hardware) -- were best demonstrated.
With the advent of the emphasis on pretty interfaces, superpower
desktop computing, and Microsoft millionaires, it became less and less
relevant to actually know anything at all about computer SCIENCE; all
you have to be able to do in the real world is do rectangular graphics
fast enough to make your recipe organizer look cool enough to sell a
million copies.
As a result, what the educational system seems to be turning out is
people who can do pretty much that and that's pretty much it. We've
been trying to hire a good **C++** programmer in the Pittsburgh area,
but 95% of the applicants we've had can't actually think in
computational terms; they can, narrowly speaking, program in C++, but
who cares? Note that I'm not even looking for a lisp programmer.
Oddly, though, the ones that are good also seem to know lisp.
The point being that I believe that Ken's (side effect) point about
the educational system failing is exactly right, and moreover, that
learning lisp and learning to think about computing are highly
correlated for good reason, and that it's largely the responsibility of
that system to resolve this problem, not ours.
'Jeff
Jeff Shrager (·······@neurocog.lrdc.pitt.edu) wrote:
: Actually, Ken made an important point in private email to me (which I
: hope he doesn't mind me adapting for public consumption) that the fact
: of Lisp's neglect (per the major flame war ongoing elsewhere on this
: list) can and ought to be addressed at the educational level. It used
: to be the case, at CMU and Penn to take the only examples that I know,
: that although people were taught some variously required C clone,
: whether the clone of choice at the moment was C, C++, Pascal, or
: whatnot, and ALSO Lisp (or scheme). There used to be a belief among
: CS educators that aside from its obvious utility, simplicity, and
: elegance, Lisp was the language wherein the *real* PRINCIPLES of
: ALGORITHMS and computing -- that is, what was relevant in computer
: science (above the level of the hardware) -- was best demonstrated.
: With the advent of the emphasis on pretty interfaces, superpower
: desktop computing, and Microsoft millionaires, it became less and less
: relevant to actually know anything at all about computer SCIENCE; all
: you have to be able to do in the real world is do rectangular graphics
: fast enough to make your recipe organizer look cool enough to sell a
: million copies.
: As a result, what the educational system seems to be turning out is
: people who can do pretty much that and that's pretty much it. We've
: been trying to hire a good **C++** programmer in the Pittsburgh area,
: but 95% of the applications we've had can't actually think in
: computational terms; they can, narrowly speaking, program in C++, but
: who cares? Note that I'm not even looking for a lisp programmer.
: Oddly, though, the ones that are good also seem to know lisp.
: The point being that I believe that Ken's (side effect) point about
: the educational system failing is exactly right, and moreover, that
: learning lisp and learning to think about computing are highly
: correlated for good reason, and that its largely the responsibility of
: that system to resolve this problem, not ours.
: 'Jeff
--
*******************begin r.s. response******************
lisp is the premier
recursive
language,
yes?
noticed,
when playing with
pc-lisp
(i think it was...)
that
'back' recursion
does not work,
(by the standards
of e.g. the 'c' programming language),
but,
that
'front' recursion
does...
is this a historic
feature of the antiquated
character of the language?
(by the way...
i love lisp...)
in interpreted form...
lisp can be used as a
command line calculator...
for (ms pc dr)dos pc,
(286 good),
lisp interpreters are
freely available
(lisp like)
xlisp(various versions)...
and
pc-lisp
.
(each of these is very good.)
*******************end r.s. response********************
Ralph Silverman
········@bcfreenet.seflin.lib.fl.us
In article <··········@nntp.seflin.lib.fl.us>,
Ralph Silverman <········@bcfreenet.seflin.lib.fl.us> wrote:
> noticed, when playing with pc-lisp (i think it was...) that
>'back' recursion does not work, (by the standards of e.g. the 'c'
>programming language), but, that 'front' recursion does...
What's back recursion?
--
== Seth Tisue <·······@nwu.edu> http://www.cs.nwu.edu/~tisue/
poetry is the god's own
language
yes?
noticed
when reading your
post
(i'm sure it was...)
that
couldn't figure out
what you could possibly be
talking
about
(by the standards
of e.g. programming languages
anyway...),
but,
that
the formatting
was nice...
is this a historic
feature of the antiquated
character of the language?
(by the way...
i love poetry...)
in interpretation...
poetry can be used as a
jackhammer...
for (ms pc dr)dos pc,
(286 good),
unprintable words are,
freely available
(lisp like)
yea!, four letters even,
and, even
it is said by some
that:
"poetry is the god's own
language
yes?
...
In article <··········@usenet.srv.cis.pitt.edu>,
Jeff Shrager <·······@neurocog.lrdc.pitt.edu> wrote:
> poetry is the god's own
> language
> yes?
>
> noticed
> when reading your
> post
> (i'm sure it was...)
> that
> couldn't figure out
> what you could possibly be
> talking
> about
> (by the standards
> of e.g. programming languages
> anyway...),
Text
does not
become
poetry
just because
you
indent
the
lines
strangely.
On
the
other
hand
if this
quote
poetry
unquote
was generated
by a
computer
program
that would be
interesting
if only in that
human poets
could take
heart
in seeing that
computers
are not
going to
capture the
poetry market
(such as it is)
any
time
soon.
:
-)
d
j
From: William Paul Vrotney
Subject: Re: Great Lisp Programmer Shortage?
Date:
Message-ID: <vrotneyDzrwnI.IF1@netcom.com>
In article <··········@usenet.srv.cis.pitt.edu>
·······@neurocog.lrdc.pitt.edu (Jeff Shrager) writes:
> poetry is the god's own
> language
> yes?
>
> noticed
> when reading your
> post
> (i'm sure it was...)
> that
> couldn't figure out
> what you could possibly be
> talking
> about
> (by the standards
> of e.g. programming languages
> anyway...),
> but,
> that
> the formatting
> was nice...
>
Parenthetically, Dot lisps
nth years ago
I expressed ... you recall
lispers as we were
pushed as it was ^
into sea, plus pus, as it is ... ^
these sad values that you find @@@@@@
would return as they have @@@@@@@@
@ . . @
^
William Shakestree \___/
--
William P. Vrotney - ·······@netcom.com
: > Oddly, though, the ones that are good also seem to know lisp.
: Ever heard of the Sapir-Worf hypothesis?
Indeed. However, my guess is that it's more a function of the better
schools teaching lisp and the smarter students going to the better
schools. Programmers right out of high school (which is more common
than one'd think -- summer jobs: flipping hamburgers v. pushing bits)
seem to be a toss-up; some are simply C lusers -- sort of the
computational equivalent of slackers -- but others seem to be so into
computers that they've gone off and learned everything they can; these
folks are actually quite good, and generally (as per my hypothesis)
know lisp too.
Of course, I'm probably not the best person to be assessing this as if
someone knows lisp I am immediately biased positively toward them. :)
Cheers,
'Jeff
Jeff Shrager (·······@neurocog.lrdc.pitt.edu) wrote:
: : > Oddly, though, the ones that are good also seem to know lisp.
: : Ever heard of the Sapir-Worf hypothesis?
: Indeed. However, my guess is that it's more a function of the better
: schools teaching lisp and the smarter student go to the better
: schools. Programmers right out of highschool (which is more common
: than one'd think -- summer jobs: flipping hambergers v. pushing bits)
: seem to be a toss-up; some are simply C lusers -- sort of the
: computational equivalent of slackers -- but others seem to be so into
: computers that they've gone off and learned everything they can; These
: folks are actually quite good, and generally (as per my hypothesis)
: know lisp too.
: Of course, I'm probably not the best person to be assessing this as if
: someone knows lisp I am immediately biased positively toward them. :)
: Cheers,
: 'Jeff
--
******************begin r.s. response*************************
re.
"...c lusers..."
(posting cited above)
those wishing to use modern (1980s)
implementations of
lisp
on (ms dr pc)dos pcs (286 good)
might find useful
xlisp (various releases)
and
pc-lisp
...
both these are freely available...
and, technically, shareware,
i guess...
but that was long ago...
both these are interpreted
(so far as known to present poster...)
and remarkably
well made and sophisticated!!!
******************end r.s. response***************************
Ralph Silverman
········@bcfreenet.seflin.lib.fl.us
········@bcfreenet.seflin.lib.fl.us (Ralph Silverman) writes:
>Jeff Shrager (·······@neurocog.lrdc.pitt.edu) wrote:
>: : > Oddly, though, the ones that are good also seem to know lisp.
>: : Ever heard of the Sapir-Worf hypothesis?
It's "Whorf". No connection with Star Trek involved.
The "strong" Whorf-Sapir hypothesis ("POSSIBLE to think" = "possible to
say") is false.
The "weak" Whorf-Sapir hypothesis ("EASY to think" = "easy to say")
is pretty much obvious.
> those wishing to use modern (1980s)
> implementations of
> lisp
> on (ms dr pc)dos pcs (286 good)
> might find useful
> xlisp (various releases)
> and
> pc-lisp
> ...
> both these are freely available...
> and, technically, shareware,
> i guess...
> but that was long ago...
> both these are interpreted
> (so far as known to present poster...)
> and remarkably
> well made and sophisticated!!!
In case Ralph Silverman hasn't noticed,
- we're in the late 90's now, so "1980s" systems no longer count
as modern
- xlisp _has_ a compiler.
- CLISP is a more "modern" system. It's free.
- CMU Common Lisp is an exceptionally good free system. It manages
to have a native code compiler _and_ be interactive. It is
available for Linux; I don't know about other 80*86 operating systems.
- GNU Common Lisp (was AKCL) is a very good free system. It too
manages to have a native code compiler and be interactive. It is
available for Linux; I don't know about other 80*86 operating systems.
- Gambit is a full Scheme implementation, which compiles to very good
native code. It's free. I use it on a Mac.
- In fact we have more modern Lisp/Scheme compilers than you can
shake a stick at.
--
Mixed Member Proportional---a *great* way to vote!
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
Richard A. O'Keefe <··@goanna.cs.rmit.edu.au> wrote:
+---------------
| The "strong" Whorf-Sapir hypothesis ("POSSIBLE to think" = "possible to
| say") is false.
| The "weak" Whorf-Sapir hypothesis ("EASY to think" = "easy to say")
| is pretty much obvious.
+---------------
True, but to me the more interesting/useful form is the contrapositive
of the "weak" version [the "pretty-strong" version?] -- "If it can't
be said easily in any language one knows, it's quite difficult for one
to think or reason about it" -- which I would claim *does* have considerable
applicability to the art of programming [not to mention a myriad of
personal/religious/philosophical contemplations].
After all, one of the beauties of Lisp/Scheme programming is that if
you're having trouble trying to cleanly express something, you can
CHANGE THE LANGUAGE to make it easier to express.
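[A minimal Common Lisp sketch of the idea, purely for illustration:
Common Lisp has no WHILE loop, and adding one is a three-line macro.]

;; WHILE is not a standard Common Lisp operator; this defines it.
(defmacro while (test &body body)
  `(do () ((not ,test)) ,@body))

;; Once defined, it reads like any built-in form:
;; (let ((i 0))
;;   (while (< i 5)
;;     (print i)
;;     (incf i)))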
-Rob
-----
Rob Warnock, 7L-551 ····@sgi.com
Silicon Graphics, Inc. http://reality.sgi.com/rpw3/
2011 N. Shoreline Blvd. Phone: 415-933-1673 FAX: 415-933-0979
Mountain View, CA 94043 PP-ASEL-IA
In article <············@goanna.cs.rmit.edu.au>, ··@goanna.cs.rmit.edu.au
says...
>
>
>>: : Ever heard of the Sapir-Worf hypothesis?
>
Yes, "Language limits thought"
The artificial language Loglan was invented to test this hypothesis.
Daniel L. Bates wrote:
> > > : : Ever heard of the Sapir-Worf hypothesis?
>
> Yes, "Language limits thought"
You mean "BASIC damages your brain"? :-)
Michael Wein
Hi!
RS> on (ms dr pc)dos pcs (286 good)
RS> might find useful
RS> xlisp (various releases)
RS> and
RS> pc-lisp
Actually xlisp and PC-Lisp are not of much use anymore. There are much
better free Lisps around, for example CLISP and the ACL/PC Web Release.
xlisp and pc-lisp are both long outdated. Almost every newer book on Lisp
uses Common Lisp or Scheme, so you should use one of those languages and
not an old-fashioned dynamically scoped lisp.
Ok, once upon a time I used xlisp much, I even hacked in the sources to get
some more functionality (added a really stupid funcall facility into xlisp
1.4) and data types, but today there are better systems available.
BTW: CLISP has an interpreter and a bytecode-compiler and ACL/PC is a
native-code-compiler. Both are _much_ faster than xlisp and pc-lisp.
bye, Georg
On Fri, 1 Nov 1996, Georg Bauer wrote:
> Actually xlisp and PC-Lisp are not of much use anymore. There are much
> better free Lisps around, for example CLISP and the ACL/PC Web Release.
> xlisp and pc-lisp are both long outdated. Almost every newer book on Lisp
> uses Common Lisp or Scheme, so you should use one of those languages and
> not an oldfashioned dynamic scoping lisp.
>
[snip]
>
> BTW: CLISP has a interpreter and a bytecode-compiler and ACL/PC is a
> native-code-compiler. Both are _much_ faster than xlisp and pc-lisp.
>
> bye, Georg
OK, so what should *I* do? I've used xlisp and pc-lisp and found
them rather poor, just as you said.
CLISP is almost what I want, but I really want support for windoze.
ACL/PC is probably exactly what I want, but when I installed it on
my PC (OK, it is a 386 with only 4Mb memory) I had to leave it for
about an hour before *anything* had happened. I've never got to the
stage where it's actually finished opening all its windows and you
finally get to do something with it.
Any suggestions?
Cheers,
Bill.
In article <·······································@granby>
·······@unix.ccc.nottingham.ac.uk "WJ Bland" writes:
> CLISP is almost what I want, but I really want support for windoze.
> ACL/PC is probably exactly what I want, but when I installed it on
> my PC (OK, it is a 386 with only 4Mb memory) I had to leave it for
> about an hour before *anything* had happened. I've never got to the
> stage where it's actually finished opening all its windows and you
> finally get to do something with it.
In my experience, even 8 MB is not enough for ACL/PC. I tried it on
my 8 MB 386, and it just thrashed. I didn't wait for it to stop!
Perhaps it would've eventually settled down, but I wouldn't want to
see the machine thrash every time I use ACL/PC.
> Any suggestions?
Try 16 MB. It runs beautifully with that kind of memory, under
Windows 3.11. I've not yet tried it with Win95 and 16 MB.
With NT 3.51 with 32 MB, you can also run a _real_ memory hog,
like VC++ 4.0. ;-)
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
···········@ms3.maus.westfalen.de (Georg Bauer) writes:
> Actually xlisp and PC-Lisp are not of much use anymore. There are much
> better free Lisps around, for example CLISP and the ACL/PC Web Release.
> xlisp and pc-lisp are both long outdated. Almost every newer book on Lisp
> uses Common Lisp or Scheme, so you should use one of those languages and
> not an oldfashioned dynamic scoping lisp.
On the contrary, xlisp (as a component of xlispstat) has a significant
user base in the statistics community. There's a lot of software out
there and active research projects that rely on it. And gradually,
xlisp is coming to look a lot more like Common Lisp. Check out the
xlispstat Web sites at UCLA and the University of Minnesota, if you're
interested.
Rob St. Amant
Hi!
RSA>On the contrary, xlisp (as a component of xlispstat) has a significant
RSA>user base in the statistics community.
Sorry, I forgot. Of course, there are several projects built upon XLISP,
one is the extension language of AutoCAD. They are of course of use. And
XLISP has one important plus: the interpreter source is easy to understand
and extend. Actually to me it was a lot more understandable and extendable
than most other systems that were designed specially for this purpose ;-)
I didn't want to degrade the value of XLISP. And as I can read, it seems
I really need an update on XLISP.
bye, Georg
···········@ms3.maus.westfalen.de (Georg Bauer) wrote in
comp.lang.lisp <···················@ms3.maus.de>:
> Hi!
Lo!
RS>> on (ms dr pc)dos pcs (286 good)
^^^^^^^^^^^^^^^^^^^^^^^^^^^ !!!!!!!!!!!!!
[...]
> Actually xlisp and PC-Lisp are not of much use anymore. There are much
> better free Lisps around, for example CLISP and the ACL/PC Web Release.
I agree that CLISP is better than xlisp (for being an almost CLtL2
implementation), but it does not run on a 286.
> xlisp and pc-lisp are both long outdated.
I haven't heard any news about xlisp since version 2.1g (May 1994),
but at least since version 1.5 it is lexically scoped (I suppose
longer, but I don't have older ones around).
> Almost every newer book on Lisp uses Common Lisp or Scheme, so you should
> use one of those languages and not an oldfashioned dynamic scoping lisp.
> Ok, once upon a time I used xlisp much, I even hacked in the sources to
> get some more functionality (added a really stupid funcall facility into
> xlisp 1.4) and data types, but today there are better systems available.
Is there really such a huge difference between 1.4 and 1.5?
I remember having to hack inverse trig functions into early xlisps,
but not such fundamental things as funcall.
What I'm doing with it is to add some special stuff I want to play with
(e.g. serial I/O), because it is easily extended.
Xlisp is AFAIK the most CL like lisp one can get for a <=286.
> BTW: CLISP has a interpreter and a bytecode-compiler and ACL/PC is a
> native-code-compiler. Both are _much_ faster than xlisp and pc-lisp.
At least xlisp-stat has a compiler, too (but requires Win32s which
excludes real mode).
Ralf
In article <···········@elefant.Jena.Thur.De>,
·············@elefant.Jena.Thur.De (Ralf Muschall) wrote:
> What I'm doing with it is to add some special stuff I want to play with
> (e.g. serial I/O), bacause it is easily extended.
>
> Xlisp is AFAIK the most CL like lisp one can get for a <=286.
Do you really still use a 286?
> > BTW: CLISP has a interpreter and a bytecode-compiler and ACL/PC is a
> > native-code-compiler. Both are _much_ faster than xlisp and pc-lisp.
> At least xlisp-stat has a compiler, too (but requires Win32s which
> excludes real mode).
Hmm, have you guys read the Franz web site news?
Seems like our PC/Linux friends have a nice
Christmas present from Franz. Check it out!
Rainer Joswig
Hi!
RM>I agree that CLISP is better than xlisp (for being an almost CLtL2
RM>implementation), but it does not run on a 286.
Hmm. I buried my last 286-machine some time ago - it was the machine of my
dad, he upgraded to a marvelous fast 386/20 ;-)
RM>I haven't heard any news about xlisp since version 2.1g (May 1994),
RM>but at least since version 1.5 it is lexically scoped (I suppose
RM>longer, but I don't have older ones around).
Sorry, I left XLISP with 1.4. That XLISP is lexically scoped is new to me.
I really should get a current version and have a look at it.
RM>Is there really such a huge difference between 1.4 and 1.5?
Yes, if 1.5 is lexically scoped, then there are big differences. 1.4 was
dynamically scoped, if I remember correctly.
RM>I remember having to hack inverse trig functions into early xlisps,
RM>but not such fundamental things as funcall.
I did have to. Ok, it may be that I don't remember the number correctly, so it
_can_ be 1.3 or earlier, but I think it was 1.4.
RM>What I'm doing with it is to add some special stuff I want to play with
RM>(e.g. serial I/O), bacause it is easily extended.
Definitely.
RM>Xlisp is AFAIK the most CL like lisp one can get for a <=286.
That's true, too. But there is S88, a really useful Scheme system with
native-code compiler and an assembler, that needs only a 8086 and can make
use of expanded memory. That's what I use on my sub-notebook (an 80186
machine). It's not very good documented, but it's a full R4RS-Scheme and it
creates _fast_ code. It's really funny what nice things you can find on the
net ...
RM>At least xlisp-stat has a compiler, too (but requires Win32s which
RM>excludes real mode).
Nice. I remember my first tries at building a compiler for XLISP (using the
McCarthy book). Funny, but definitely not usable (actually it compiled
function definitions to prog-constructs, and because of the nature of XLISP
those prog-constructs ran slower than the original functions ;-) )
bye, Georg
Ralf Muschall (·············@elefant.Jena.Thur.De) wrote:
: > xlisp and pc-lisp are both long outdated.
: I haven't heard any news about xlisp since version 2.1g (May 1994),
: but at least since version 1.5 it is lexically scoped (I suppose
: longer, but I don't have older ones around).
: ...
: Is there really such a huge difference between 1.4 and 1.5?
I always thought that xlisp 1.5 was never in the wild. I thought that 1.5
was the version AutoDESK licensed from David for AutoLISP, and that 1.6 was
the next release. (I've got both but no 1.5)
If there's a 1.5 in the wild I'd like to get my hands on it...
And I thought that lexical scoping came with 2.0, but I can be wrong.
---
Reini Urban <······@xarch.tu-graz.ac.at> http://xarch.tu-graz.ac.at/~rurban/
(defun tail (l n)
  (cond ((zerop n) l)
        (t (tail (cdr l) (1- n)))))
From: William Sobel
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <uu3rj3r8y.fsf@nj.rpsi.com>
········@saturn.cs.waikato.ac.nz (Bernhard Pfahringer) writes:
>
> In article <··········@news.acns.nwu.edu>,
> Kenneth D. Forbus <······@ils.nwu.edu> wrote:
> [snip...]
>
> What incentive does a C++ programmer have for learning Lisp?
Emacs...
>
> cheers Bernhard
> --
> -----------------------------------------------------------------------------
> Bernhard Pfahringer
> ···············@cs.waikato.ac.nz http://www.ai.univie.ac.at/~bernhard (still)
Will
"Neil Henderson" <······@worldchat.com> writes:
>> > Ah, yeah. By that measure, COBOL is the best language, followed closely
>> > by Fortran 77, and then Visual Basic.
That sounds about right ... and C and C++ are close behind.
>My solution to this dispute is to look at the career ads in the newspapers
>and on the web.
Interesting choice. Have you considered that many promising jobs are
not listed in newspaper ads? Or, alternatively, that many of the ads
in the newspaper are not necessarily promising?
>You will notice that C++ is being used to build MANY more
>applications than LISP. (Actually I haven't seen a job ad for LISP).
I think you mean "C++ is far more often found as a job requirement for
the types of jobs advertised in newspapers than is Lisp". And I don't
doubt it --- my experience is the same. I'm surprised that COBOL
doesn't crush them both into tiny pieces as far as number of
job offerings go ... but then, maybe times are changing.
>C++ is far more flexible than LISP in terms of what can be done,
<bzzt> Sorry, thanks for playing. C++ *is* better for certain,
restricted applications, mostly having to do with hardware access,
speed, etc --- but then you can do even better going to assembler, as
you note below. But assembler is so expensive to code that it is
not cost-effective to code most large applications in assembler ---
instead, 99% will be in C or C++, and the rest in assembler.
C++ is a good but difficult language, and it is harder to code large,
complex applications with interesting behaviors in C++ than it is in
Lisp. Period. So, for most "advanced experimental software" (such as
my thesis project, Nicole) system designers don't really have a
choice: they must turn to a powerful language that supports rapid
prototyping and protects the programmer from a whole class of errors
not relevant to the design and testing of the behaviors of the
program.
One such language is Lisp. It allows the fast construction of
programs with highly complex high-level behaviors. If the performance
is good, great; if not, once the prototype has been built, the
principles behind the program can be translated into another
language. (I have already considered how to translate Nicole into C++
or Java, and I'm positive it can be done. But I would rather
graduate than rewrite my entire system in a language that might be
faster, but would certainly be more difficult to program and extend.)
True, both C/C++ and Lisp are roughly Turing equivalent, as is
assembler, so any one can fill in for any other. But how hard is it
to program the desired behavior? Each language has its uses. For
example, some of the software engineering firms I have dealt with
(most of which develop final applications in C++) recommend building
*prototypes* in a language that is *not* suitable for the final
deliverable. Why? Two reasons. First, it's faster to use a rapid
prototyping environment; second, it prevents the client (or management
or other force of nature) from being seduced by a sexy prototype and
then demanding that the prototype system be completed and delivered
"rather than going through the time and effort to create a separate
deliverable system from scratch". Of course, a prototype forced into
service as a deliverable is almost always a bad idea, so here a rapid
prototyping system serves both a functional and political role in the
software development process!
>[C/C++] is much better supported,
True for some areas, not for others. Lisp is pretty well supported
too, you know --- there are not that many first-rank vendors (but then
there aren't *really* that many first-rank vendors for C/C++ either)
but there is a massive body of software/freeware/wisdom available *if*
you know where to look...
>and if you're really hurtin for hyper-fast code you can still
>write inner loop functions in assembly language.
True. And if you're really hurtin for hyper-complex behavior, embed
yerself a primitive Lisp or Prolog interpreter in your system for
higher-level functions and use C/C++/assembler for the grunt work in
the code.
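[For a sense of scale, here is a hedged sketch -- in Common Lisp itself,
names invented for illustration -- of how small the core of such a
"primitive Lisp" evaluator can be. An interpreter embedded in a C system
would have the same shape, just with more bookkeeping.]

;; TINY-EVAL handles numbers, variables, QUOTE, IF, and calls to a few
;; arithmetic primitives; the environment is an association list.
(defun tiny-eval (form env)
  (cond ((numberp form) form)
        ((symbolp form) (cdr (assoc form env)))
        ((eq (first form) 'quote) (second form))
        ((eq (first form) 'if)
         (if (tiny-eval (second form) env)
             (tiny-eval (third form) env)
             (tiny-eval (fourth form) env)))
        (t (apply (ecase (first form)
                    (+ #'+) (- #'-) (* #'*) (< #'<))
                  (mapcar (lambda (x) (tiny-eval x env))
                          (rest form))))))

;; (tiny-eval '(if (< x 10) (* x 2) 99) '((x . 3)))  =>  6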
>I'll sum up:
> Learn LISP and be unemployed.
>nh
Now that's just silly. I know Lisp like the back of my hand, and based
on that I can and will get a really good job, most likely in research,
working on "advanced experimental software" that is simply too hard to
code in anything else. But I also know C, C++, <buzz> Java <buzz>,
FORTRAN, and Pascal, plus enough Prolog, Modula, (Visual) Basic,
assembler, Ada, shell scripts and what have you to hit the ground
running in whatever environment I am faced with.
"Learn Lisp and be unemployed" --- pfui. Let *me* sum up:
Learn how to program in several paradigms (procedural,
object-oriented, functional, dataflow), learn the most
common languages of each, and learn when each tool is
useful to do a specific job. Learn these three things,
grasshopper, and you will be employed anywhere you want.
For the record, if I was writing a Windows95 application, I'd probably
do it in C++. But there are alternatives ... such as Pascal, VB or
even Java ... that I would consider, based on the complexity of the
program, the intended market, and my available resources.
In any case, interesting conversation. I've got to get back to work
--- I'm building a C-based communications interface to link up several
Lisp-based AI simulators and cognitive systems remotely over a
network, and the fact that C lacks an online interpreter with a good
debugger is making the error I'm encountering particularly difficult
to unravel...
-Anthony
--
Anthony G. Francis, Jr. | cen.taur \'sen-.to.(*)r\ n
AI/Cognitive Science Group | [ME, fr. L Centaurus, fr. Gk Kentauros]
College of Computing | 1: one of a race fabled to be half man
Georgia Institute of Technology | and half horse and to dwell in the
Atlanta, Georgia 30332-0280 | mountains of Thessaly.
Phone: (404) 894-5612 | 2: nickname for this AI grad student
Net: ·······@cc.gatech.edu - http://www.cc.gatech.edu/ai/students/centaur/
Organization: SEFLIN Free-Net - Broward
Distribution:
Jeff Shrager (·······@neurocog.lrdc.pitt.edu) wrote:
: poetry is the god's own
: language
: yes?
: noticed
: when reading your
: post
: (i'm sure it was...)
: that
: couldn't figure out
: what you could possibly be
: talking
: about
: (by the standards
: of e.g. programming languages
: anyway...),
: but,
: that
: the formatting
: was nice...
: is this a historic
: feature of the antiquated
: character of the language?
: (by the way...
: i love poetry...)
: in interpretation...
: poetry can be used as a
: jackhammer...
: for (ms pc dr)dos pc,
: (286 good),
: unprintable words are,
: freely available
: (lisp like)
: yea!, four letters even,
: and, even
: it is said by some
: that:
: "poetry is the god's own
: language
: yes?
: ...
--
******************begin r.s response***********************
a) re.
back, front recursion
simple, really;
recursion
is
call by subprogram from within itself
yes?
that effective action of subprogram
preceding
recursive call
is
front recursion
;
that effective action of subprogram
following
recursive call
is
back recursion...
lisp is older than
algol60(revised) (@1963)
and
the recursion supporting runtime
apparatus, generally, used with such,
(stack), is derived from algol...
evidently, the recursion supporting
runtime apparatus of lisp is different,
in function, primarily regarding
back recursion...
******************end r.s. response************************
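[To illustrate the terminology above with a hedged Common Lisp sketch --
"front" recursion doing its work before the recursive call, "back"
recursion doing it afterwards:]

;; "Back" recursion: the multiply happens AFTER the recursive call
;; returns, so pending work accumulates on the stack.
(defun fact-back (n)
  (if (zerop n)
      1
      (* n (fact-back (1- n)))))

;; "Front" recursion: the work is done BEFORE the recursive call,
;; which is then in tail position and needs no pending stack frames.
(defun fact-front (n &optional (acc 1))
  (if (zerop n)
      acc
      (fact-front (1- n) (* n acc))))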
Ralph Silverman
········@bcfreenet.seflin.lib.fl.us
From: ········@wat.hookup.net
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <53lvmp$r9n@nic.wat.hookup.net>
In <·············@sybase.com>, George Van Treeck <······@sybase.com> writes:
>> ...
>> if C and C++ are so hot, how come they haven't always been? how did they
>> become hot? how did they overcome the "how come most applications are all
>> written in Fortran and Cobol" argument? how does Java face the same
>> argument?
>
>Applications written many years ago were done in FORTRAN and COBOL.
>Today, many large computer vendors and software companies don't
>even sell COBOL because it's too small a market! They OEM COBOL
>from some small compiler company. FORTRAN and COBOL are sold
>mostly to maintain legacy software. For example, Microsoft
>OEMs it's FORTRAN and COBOL.
>
>Virtually all new commercial applications are developed in C/C++.
>If you worked in the "real-world" (companies that develop
>commercial software) you know that. Look in the newspaper for
>programming jobs. Count the number of ads wanting C/C++ vs.
>FORTRAN/COBOL.
But there was a time when proposals to do software in C were rejected because
"everybody writes in Pascal / FORTRAN / COBOL", i.e. the same argument you are
using now to establish the superiority of C (or do you have another explanation
why you need FORTRAN and COBOL for the legacy software?).
>Many in-house applications are being developed in VisualBasic
>or some 4GL/GUI tool like PowerBuilder, because it requires
>less technical skill ($25/hr labor for VisualBASIC programmer
>vs. $60/hr for C++ programmer) and development is quicker
>using something like PowerBuilder. Lisp could never
>cut it in that market either.
If the $60/hr person can do the job in a day where the $25/hr person needs a
week, who is cheaper?
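(Assuming an eight-hour day and a five-day week purely for illustration:
the $60/hr programmer costs roughly $60 x 8 = $480 for the job, while the
$25/hr programmer costs roughly $25 x 40 = $1,000 -- about twice as much,
before counting the four extra days of waiting.)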
>-George
Hartmann Schaffer
From: kransom
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <325E796D.F43@franz.com>
Howard R. Stearns wrote:
> I guess I'm confused. I was under the impression that the really hard
> parts of three of products you mention ARE written in Lisp. (From
> Autodesk, Oracle, Viewlogic). I would add Parametric Technologies and
> Cognition. Can anyone who actually knows tell us? Anyway, check out
> the customer pages of vendors such as Franz and Digitool.
Cadence have an embedded extension Lisp, HLDS are remarketing through
Mentor and all their stuff is in Lisp, Viewlogic have a set of silicon
compilers (SILCSYN) implemented in Lisp. Yes, and Parametric and
Cognition are also using Lisp. Check it out at www.franz.com.
George Van Treeck wrote:
> Virtually all new commercial applications are developed in C/C++.
> If you worked in the "real-world" (companies that develop
> commercial software) you know that. Look in the newspaper for
> programming jobs. Count the number of ads wanting C/C++ vs.
> FORTRAN/COBOL.
>
> Many in-house applications are being developed in VisualBasic
> or some 4GL/GUI tool like PowerBuilder, because it requires
> less technical skill ($25/hr labor for VisualBASIC programmer
> vs. $60/hr for C++ programmer) and development is quicker
> using something like PowerBuilder. Lisp could never
> cut it in that market either.
>
Geez, chill out guys! Saying that Lisp sucks because people can't build
database apps quickly like they can with PowerBuilder is a bit like
saying that a whale is a miserable excuse for an animal because it
doesn't fly as well as a sea gull. (Yeah, OK, let's see how well the sea
gull does below a 40-foot water depth.)
The point is that each of these languages has a value and purpose which
is appropriate for completely different circumstances. Which language is
best suited for building turnkey database apps quickly? PowerBuilder.
Which language is best suited for modelling and simulating complex
problem domains? Lisp.
I'm always amazed when people leap from isolated questions like those
above to the grand pronouncement that "xxxx is better than yyyy in all
cases that REALLY matter" with such religious fervor and righteousness.
Regards,
Chuck
______________________________________________________________________
Charles E. Matthews
Software consulting in knowledge
Synergistic Technologies based systems and object oriented
······@infonet.isl.net analysis and design
George,
> George Van Treeck wrote:
> If Lisp is such a hot language, how come most applications are all written in C and
> C++?
COMMON LISP is big, slow, expensive, and somewhat esoteric. It is a
great language to program in for many applications. However, the
programming language is invisible to the end-user. LISP gives the
end-user nothing for the extra resources it requires.
Compiled LISP can approach other languages in speed. But that imposes
constraints that restrict most of the functions that make LISP
desirable in the first place.
LISP is still my favorite language by far. I programmed in LISP for
several years. Mostly simulations but also a simple schematics editor.
Gary
Gary,
>Gary Brown wrote:
>
> > If Lisp is such a hot language, how come most applications are all written in C and
> > C++?
>
> COMMON LISP is big, slow, expensive, and somewhat esoteric. It is a
> great language to
> program in for many applications. However, the programming language is
> invisible to
> end-user. LISP gives the end-user nothing for the extra resources it
> requires.
>
Well, I am not sure about this. Consider the case of software agents
--which is directly linked to AI if you bear in mind that GOOD agents
are genetically evolutive (ideally). When dealing with networked agents,
you just cannot let pointer arithmetic (an automatic feature of C/C++)
get in the way, for obvious security reasons. LISP, on the other hand,
is one language that manages this pretty well, as do others such as
SafeTCL (another interpreted language!!).
What I actually believe is that interpreted languages and compiled
languages have very different application domains. Interpreted languages
work wonders when used as tiny specialized scripts.
Thomas
From: ········@wat.hookup.net
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <53rl90$f3q@nic.wat.hookup.net>
In <··········@Godzilla.cs.nwu.edu>, ·····@cs.nwu.edu (Seth Tisue) writes:
> ...
>a day when they were the dominant languages, and C and Smalltalk (the
>original source of the OOP ideas in C++) were funny little obscure
I always thought C++'s OO concept was derived from Simula67
Hartmann Schaffer
> > ...
> >a day when they were the dominant languages, and C and Smalltalk (the
> >original source of the OOP ideas in C++) were funny little obscure
>
> I always thaught C++'s OO concept was derived from Simula67
>
In his book, The Design and Evolution of C++, Bjarne mentions lots of
influence from both these languages, but C++ seems to borrow more directly
from Simula than Smalltalk.
For example, C++'s inheritance mechanism is more similar to Simula than
Smalltalk.
Daniel Pflager
From: Bjarne Stroustrup
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <DzBI0H.MyH@research.att.com>
"Daniel Pflager" <······@ix.netcom.com> writes:
>
> > > ...
> > >a day when they were the dominant languages, and C and Smalltalk (the
> > >original source of the OOP ideas in C++) were funny little obscure
> >
> > I always thaught C++'s OO concept was derived from Simula67
> >
>
> In his book, The Design and Evolution of C++, Bjarne mentions lots of
> influence from both these languages, but C++ seems to borrow more directly
> from Simula than Smalltalk.
>
> For example, C++'s inheritence mechanism is more similar to Simula than
> Smalltalk.
Yes. For some reason, there is a tendency to exaggerate the influence of
Smalltalk on C++ (possibly because of the pure-OOP hype and the relative
obscurity of Simula).
I don't list Smalltalk among C++'s ancestor languages (see the chart on
page 6 of D&E) whereas C and Simula are listed as the primary and direct
ancestors. Simula provided my original inspiration in both the area of
language features and in the area of programming/design techniques.
- Bjarne
Bjarne Stroustrup, AT&T Research, http://www.research.att.com/~bs/homepage.html
>It seems to me that the C code is solely to provide the elisp and bytecode
>interpreter. Granted, Elisp has lots of datatypes and functions designed to
>make editing tasks written in Elisp easier, but the editor itself seems to be
>written in Elisp.
>
>Hartmann Schaffer
I would have to disagree. The C code for byte code
interpreting is small compared to the rest
of the emacs C code. Also, most of the core editor
algorithms were in C the last time I looked.
Elisp provides several "facilities" and "hooks"
on top, not at the core level.
Though I am sure rms placed as much functionality
in Lisp as he felt he reasonably could.
* Mukesh Prasad <·······@dma.isg.mot.com>
| I would have to disagree.
on what basis do you make conclusions about Emacs? do you know the code
intimately? well, I do.
| The C code for byte code interpreting is small compared to the rest of
| the emacs C code.
ahem, the issue was the Lisp machine, not the byte code interpreter. if
you insist on being silly and twisting people's words, do not expect to be
respected.
what distinguishes a C program from a Lisp program? first of all, it has
functions defined in a C-like fashion, and makes function calls in the
normal C way. in Emacs, 90% of the C code consists of Lisp-like C-code and
Lisp-like calls to such functions. second, if a C program contains
assembly code for efficiency, it is no less a C program -- it just has some
sections of itself that are hand-compiled because that would speed things
up or do something that is hard to do in C. (and believe you me, C is
_not_ the all-purpose language it is believed to be.)
90% of the C code in Emacs is effectively hand-compiled Lisp.
how can I say that? because I'm working on an Emacs Lisp to C compiler for
these things, such that (1) one may write in Lisp instead of the tortuous
output of the imaginary compiler that exists today, and (2) one may compile
functions in any package that turns out to need the speed.
| Also, most of the core editor algorihtms were in C the last time I
| looked.
you haven't looked at all, have you, Mukesh?
| Elisp provides several "facilities" and "hooks" on top, not at the core
| level.
Emacs is written in Emacs Lisp. the display system and the Lisp machine is
written in C for two reasons: (1) speed, and (2) portability, respectively.
| Though I am sure rms placed as much functionality in Lisp as he felt he
| reasonably could.
it is better to say that as little of the functionality is in C as possible
because it sacrifices (1) readability, and the purpose of free software is
to make people able to learn from other programmers, (2) flexibility, as it
takes much more effort to be flexible in C than in Lisp.
(note that XEmacs goes the other way: with more and more code hidden in
impenetrable C code with "abstract types" implemented solely in C and used
only by Lisp through an "API". one _might_ say that Emacs is written in
Lisp because its programmers think in Lisp, whereas XEmacs is written in C
because its programmers think in C.)
#\Erik
--
I could tell you, but then I would have to reboot you.
> Bottom line: The proof of which is better (C++ or Lisp) is to
> look at the commercially successful applications. If Lisp is such
> a hot language, how come most applications are all written in C and
> C++?
I would even set aside the question of MOST successful applications and
just ask someone to show me ONE successful Lisp application. By "show
me" I mean something I can download and try. By successful I mean
something I would use regularly. I have lots of programs written in C
that meet those criteria but none in Lisp.
I think it is much easier to prototype in Lisp than in C. You don't have
to worry about memory management, you can make incremental changes to
a running program, and usually you get to use nice high-level interfaces
onto the machine's resources. Also, your mistakes don't crash the machine.
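[A hedged illustration of what "incremental changes to a running program"
means in practice; the listener prompt and the function name are invented
for the example:]

;; At a Lisp listener, a function can be redefined while the rest of
;; the program keeps running; the next call picks up the new code.
? (defun price-label (x) (format nil "~a dollars" x))
? (price-label 5)                                  ; => "5 dollars"
? (defun price-label (x) (format nil "$~,2f" x))   ; redefined live
? (price-label 5)                                  ; => "$5.00"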
But in my experience Lisp can be much slower than C. Especially for
character processing. I once wrote a program to read some ascii and
decode it into binary. C was much faster at that than Lisp. Of course,
you can always write low level things like that in C and call them from
your Lisp program.
Another big problem with Lisp is you can't make a small, stand-alone binary
with it. The smallest application you can build is usually about a megabyte.
Hard drives are so large now that this may finally not be such a problem
anymore. (I'm referring now to my experience with Common Lisp. I'm sure
Scheme systems can make smaller binaries.)
So can anyone point me to a useful application, written in Lisp, that I can
download and try? I can show you plenty done in C.
In article <··························@default>, "Dan Winkler"
<······@tiac.net> wrote:
> I would even set aside the question of MOST successful applications and
> just ask someone to show me ONE successful Lisp application. By "show
> me" I mean something I can download and try. By successful I mean
> something I would use regularly. I have lots of programs written in C
> that meet those criteria but none in Lisp.
Well, I'm using XEmacs daily. It is my preferred editor on Unix
(maybe there is a version for Windows).
If you had a Lisp machine you could download CL-HTTP from
MIT AI Lab. Well, on a Lisp machine you don't need to download
it, you can directly install from the FTP server. It is
a web server written in Common Lisp - I use it daily. It is also
available in source with ports for MCL, ACL and LispWorks.
You may want to look at WebMaker from Harlequin. It translates
FrameMaker docs into HTML. Well, it has been used by
Apple to translate all their "Inside Macintosh" volumes.
If you have a Mac you may look at "Interaction/IP" written
by Terje Norderhaug (·····@in-Progress.com):
I am proud to say that the application has developed quite a bit from its
conception as one of the first threaded forums on the web back in 1994.
Interaction is now a solid framework for advanced web services, with chat
rooms and shopping as two manifestations. A high number of web sites use
Interaction every day for purposes such as:
* Visitor Entertainment
* On-line customer support
* Open discussions
* Virtual Cafe's
* Socializing
* Dating services
* Distance meetings
* Intranet Groupware
* Dynamic websites
...
> But in my experience Lisp can be much slower than C. Especially for
> character processing. I once wrote a program to read some ascii and
> decode it into binary. C was much faster at that than Lisp.
Depends on how you write this stuff in Lisp. To see how to get
reasonable performance even with an object-oriented design
you may want to look at the sources of CL-HTTP. This may
give you an idea of what real-world Common Lisp code can look like.
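[One hedged, self-contained example of "how you write this stuff": with
a couple of type declarations and an optimization setting, a good Common
Lisp compiler will open-code the character loop below much as a C
compiler would.]

;; Count spaces in a string; the SIMPLE-STRING and FIXNUM declarations
;; let the compiler generate a tight, C-like inner loop.
(defun count-spaces (s)
  (declare (type simple-string s)
           (optimize (speed 3) (safety 0)))
  (let ((n 0))
    (declare (type fixnum n))
    (dotimes (i (length s) n)
      (when (char= (schar s i) #\Space)
        (incf n)))))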
> Of course,
> you
> can always write low level things like that in C and call them from your
> Lisp
> program.
If we can't do it in Lisp, then something would be wrong.
Complete operating systems have been written in Lisp.
> Another big problem with Lisp is you can't make a small, stand-alone binary
> with it. The smallest application you can build is usually about a
> megabyte.
> Hard drives are so large now that this may finally not be such a problem
> anymore. (I'm referring now to my experience with Common Lisp. I'm sure
> Scheme systems can make smaller binaries.)
True.
> So can anyone point me to a useful application, written in Lisp, that I can
> download and try? I can show you plenty done in C.
If you happen to have a running Common Lisp or Scheme, there is
plenty of Lisp software out there. See ftp.digitool.com
for MCL specific stuff, or see the CMU AI repository,
see the Scheme repository, ...
Rainer Joswig
[Rainer Joswig]
| Well, I'm using XEmacs daily. It is my preferred editor on Unix
| (maybe there is a version for Windows).
FWIW, GNU Emacs runs on Windows 95 and NT, too.
| > Another big problem with Lisp is you can't make a small, stand-alone
| > binary with it. The smallest application you can build is usually
| > about a megabyte. Hard drives are so large now that this may finally
| > not be such a problem anymore. (I'm referring now to my experience
| > with Common Lisp. I'm sure Scheme systems can make smaller
| > binaries.)
|
| True.
but... Wade L. Hennessey's WCL uses shared libraries to produce very small
binaries (smaller than C++ with GCC). granted, the shared libraries are
enormous, but enormous shared libraries don't stop people who use other
enormous shared libraries from pointing to their small, not-so-stand-alone
binaries and gloat. with WCL, Common Lisp programmers can do the same if
they wish. WCL runs on SPARCs with SunOS and Solaris. it seems not to be
maintained. <URL:ftp://cdr.stanford.edu/pub/wcl/>
#\Erik
--
I could tell you, but then I would have to reboot you.
In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:
> but... Wade L. Hennessey's WCL uses shared libraries to produce very small
> binaries (smaller than C++ with GCC). granted, the shared libraries are
> enormous, but enormous shared libraries don't stop people who use other
> enormous shared libraries from pointing to their small, not-so-stand-alone
> binaries and gloat. with WCL, Common Lisp programmers can do the same if
> they wish. WCL runs on SPARCs with SunOS and Solaris. it seems not to be
> maintained. <URL:ftp://cdr.stanford.edu/pub/wcl/>
Right, there is also CLICC, a Common Lisp to C Compiler.
You can compile a subset (the dynamism removed) of
Common Lisp directly to C. I never have used
it, but it should generate small binaries, too.
Then there was the expensive Lisp-to-C compiler
from Chestnut, I wonder how big the generated
applications were (minimum that is).
MCL also uses shared libraries.
Rainer Joswig
Hi!
EN>enormous, but enormous shared libraries don't stop people who use other
EN>enormous shared libraries from pointing to their small,
EN>not-so-stand-alone binaries and gloat.
Actually the Cygnus port of GNU GCC for Win32 does use a 3MB shared library
plus an additional 1.5 MB static runtime library, not to mention the size of
the compilers themselves. That's room enough for a decent Common Lisp, I
think.
Actually the Allegro CL/PC only uses 6 MB for its image file, and
there is a full compiler in it and a full interactive environment. That's
far superior compared with GCC, I think. So everybody who wants small
programs should use Allegro CL/PC instead of GCC under Win32 :-)
bye, Georg
Hi!
RJ>Well, I'm using XEmacs daily. It is my preferred editor on Unix
RJ>(maybe there is a version for Windows).
Yes, Pearl sells a Lucid Emacs port for Windows.
bye, Georg
···········@ms3.maus.westfalen.de (Georg Bauer) writes:
>
> RJ>Well, I'm using XEmacs daily. It is my preferred editor on Unix
> RJ>(maybe there is a version for Windows).
>
> Yes, Pearl sells a Lucid Emacs port for Windows.
Pearl's Win-Emacs also comes in a free (nagware) version, which is
the same as the commercial version except that a nag screen pops
up every half-hour or so. The free version is widely used (I used
it for over a year before upgrading to the commercial version).
Win-Emacs is downloadable via
http://www.pearlsoft.com/
-Bill
George Van Treeck <······@sybase.com> writes:
>That's a very good point! If you're a professor in some ivory tower
>you can afford to savor your favorite computer language.
There are very few ivory towers left, and professors don't get to live
in them. (You have to be an "Entrepreneur" for that, which is today's
euphemism for a rich criminal. But I digress...)
Tertiary education in this country faces continuing funding cuts
and a government which is actively hostile to the idea of "social good"
or anything except "private enterprise" doing anything. (For a critique
of what undiluted pursuit of the almighty dollar has actually accomplished,
read the book "The Coming Plague". Consider also the fact that an
*advocate* of private health stated in a TV interview here that private
health is 6 times more expensive than public health. And it doesn't make
you 6 times healthier either.)
The pervasive attitude is "I don't want to know anything if it isn't in
the final exam; how dare you put anything in the final exam if it isn't
relevant to industry; so what you _used_ to work in a software company,
you don't have a PC on your desk, you know _nothing_ nya nya".
In short, today's professors are stuck with the languages their students
are willing to pay to learn. Just this year, a very good place that was
doing a great job of teaching people, using Ada, was forced to switch to
C++ as an introductory language (now _there's_ a nightmare; whatever the
merits of C++ for experts, it's hell for novices), even though Ada 95 is
superior on every metric I'm aware of except popularity. That was a bad
day for education in this country.
>If you
>develop software that must be sold to put food on your plate, then
>you become much more conservative! Your definition of best becomes
>more dominated in terms of what is the "safe" choice.
I am fully in agreement that a software engineer chooses a language
to reduce risk and lifecycle costs.
>And the safe choice is one that is popular --
But there are *many* popular languages, not just C.
There are a lot of applications where Visual Basic or Delphi, to name
just two popular languages, might be a much better choice than C.
>has support from many vendors,
>books, lots of staff you hire that already know the lanugage, etc.
These are independent properties. For example, Perl has lots of books
(some of them very bad, but none that quite matches the badness of most
C books) and a lot of people know it, but there is only one implementation.
>That means your favorite is something commercially successful.
Yes, but the fact that a *tool* sells well doesn't mean that a
*product* made using that tool will work. Phillips-head screws
are commercially successful, but other kinds of screws are still
deservedly popular (and indeed there is a much better head that
has been available for a couple of years know but has not yet
reached popularity; check through the last decade's Scientific
American and Discover if you are interested).
>You know that if it's commercially successful, that there is a
>high probability you can get the project out the door with sufficient
>quality to sell it.
This is absolutely false. C and C++ are commercially successful.
There have been *numerous* failed projects using those languages.
In fact the C code I see follows a "Sturgeon's Square Law".
Sturgeon's Law: "90% of _everything_ is crud."
Sturgeon's Square Law: "99% of C programs are crud."
Just to give two examples:
for many years it was a popular passtime for UNIX-haters to try
$ANY_UNIX_UTILITY </vmstat
Many UNIX utilities (written in C) would crash.
Just the other day I reported a bug to Sun via our technical
support group. The UNIX editor 'ed' has been around since
the 1970s, something like 20 years now. I made a one-character
typing mistake in a common command, and ed dumped core. Guess
what language ed is written in?
>> in other words, the "commercially successful" argument is a statement of
>> shallowness on the part of he who uses that argument.
>It is not shallowness. It's pragmatism.
Pragmatism has no such tunnel vision. Following the other sheep into
the freezing-works-bound truck is not pragmatism. Pragmatism relies
on experiments, measurements, trying things out, not taking things on
authority. Professional programming doesn't take _any_ tool for
granted, but makes a serious attempt to estimate costs and risks for
_all_ choices.
>It's not myths. At Digital, a CAD tool was written in Lisp and dumped
>because it Lisp was just too slow. Lisp can't be used in real-time
>applications, because you need to guarantee response time. A 911
>call needs to routed over the phone lines and can't afford to wait
>for Lisp program to take a coffee brake (garbage collect for some
>indeterminate period).
I've got some bad news for you. Several emergency phone systems
in several countries have seriously misbehaved.
I've got some even worse news for you: Ericsson, a major international
telecoms company, has developed a programming language for real-time
distributed programming of things like telephone switches. It's called
Erlang. Lisp programmers would find it _much_ more familiar than C
programmers.
>And the code size is too large to be
>quickly activated and used for small utility applications. Perl,
>Tcl, C/C++ are used instead.
Oh the same tired old lie. Lisp code is as small as you want it
to be. Perl, with *none* of my code in it, is larger than most of
the Lisp systems I have ever used.
>Virtually all new commercial applications are developed in C/C++.
>If you worked in the "real-world" (companies that develop
>commercial software) you know that.
I worked in a real-world company that developed commercial software.
We used C, assembler, two Lisps, and Prolog, plus a bit of Awk and
shell scripts. Perhaps because we weren't developing *mass market*
code for PCs, you will dismiss us as "not real world". But we made
millions of dollars a year and stayed in business until we were bought
out and the purchaser put some of our key programmers onto other tasks.
>Look in the newspaper for
>programming jobs. Count the number of ads wanting C/C++ vs.
>FORTRAN/COBOL.
The ads do not tell you how much work is being done in any
particular language. They only tell you how much work is NOT
finished because they still need people.
>Many in-house applications are being developed in VisualBasic
>or some 4GL/GUI tool like PowerBuilder, because it requires
>less technical skill ($25/hr labor for VisualBASIC programmer
>vs. $60/hr for C++ programmer) and development is quicker
>using something like PowerBuilder. Lisp could never
>cut it in that market either.
It could be the implementation language underneath such a tool,
though. Some of the earliest 4GLs _were_ developed in Lisp,
but at a time when the market didn't believe in 4GLs.
--
Mixed Member Proportional---a *great* way to vote!
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
Mukesh Prasad <·······@dma.isg.mot.com> writes:
>
> >It seems to me that the C code is solely to provide the elisp and bytecode
> >interpreter. Granted, Elisp has lots of datatypes and functions designed to
> >make editing tasks written in Elisp easier, but the editor itself seems to be
> >written in Elisp.
> >
> >Hartmann Schaffer
>
> I would have to disagree. The C code for byte code
> interpreting is small compared to the rest
> of the emacs C code. Also, most of the core editor
> algorithms were in C the last time I looked.
> Elisp provides several "facilities" and "hooks"
> on top, not at the core level.
On my Sparc 5 running Sparc Linux:
cd /usr/share/emacs/19.34/
spot:/usr/share/emacs/19.34$ du lisp
du lisp
239 lisp/term
19722 lisp
spot:/usr/share/emacs/19.34$
20 Megs seems like quite a lot to me.
I also believe that 20 megabytes of Lisp "facilities" and
"hooks" have way more functionality than 20 megabytes of C code.
Erik Naggum <····@naggum.no> writes:
>
> ... in Emacs, 90% of the C code consists of Lisp-like C-code
> and Lisp-like calls to such functions. ...
> 90% of the C code in Emacs is effectively hand-compiled Lisp.
The C-implemented e-lisp primitives are certainly not 100%
compatible with those implemented in e-lisp. For example,
one cannot advise (or redefine) them because there are often
C-level calls to such primitives and such C-level calls do
not call indirectly through the symbol-function cell. Such
"hardcoded" calls make it extremely difficult to implement
extensions to Emacs (witness the convoluted code due to the hoops
that had to be jumped through to do things like ange-ftp, etc).
There are probably only a very small number of e-lisp primitives
that would be adversely affected performance-wise by calling
through the function-cell. However, there are a much larger number
of such primitives where it would be useful to be able to advise
these primitives as Emacs evolves (this is especially true for
any object-oriented extensions where one wants to lift up some
of the primitives to generic methods, etc). Unfortunately RMS
has not been sympathetic to these arguments (perhaps due to his
qualms about introducing OO techniques into Emacs). This is in
contrast to the XEmacs developers, who have been much more
open-minded (cf. much of the code implemented by Ben Wing).
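To make the indirection issue concrete, here is a minimal C++ analogy (my
own toy code, invented purely for illustration -- it is obviously not Emacs
source): a call made through a mutable function pointer (the analogue of the
symbol-function cell) can be "advised" by swapping the pointer, while a
hardcoded direct call bypasses any such replacement.

    #include <cstdio>

    // The "primitive" as originally defined.
    static int plain_length(const char *s) {
        int n = 0;
        while (s[n] != '\0') ++n;
        return n;
    }

    // The "function cell": indirect calls go through this pointer, so
    // replacing it changes the behaviour of every indirect caller.
    static int (*length_cell)(const char *) = plain_length;

    // Analogous to Lisp code calling through the symbol's function cell.
    static int indirect_caller(const char *s) { return length_cell(s); }

    // Analogous to C code that calls the primitive directly, bypassing
    // the cell -- advice never sees this call.
    static int hardcoded_caller(const char *s) { return plain_length(s); }

    // "Advice": wraps the original primitive with extra behaviour.
    static int advised_length(const char *s) {
        std::printf("advice: length called on \"%s\"\n", s);
        return plain_length(s);
    }

    int main() {
        length_cell = advised_length;                  // install the advice
        std::printf("%d\n", indirect_caller("abc"));   // advice runs
        std::printf("%d\n", hardcoded_caller("abc"));  // advice is skipped
        return 0;
    }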
-Bill
From: Erik Naggum
Subject: Re: E-lisp primitives unextendible [was: Lisp versus C++ for AI. software]
Date:
Message-ID: <3055832195897961@naggum.no>
* Bill Dubuque <···@martigny.ai.mit.edu>
| The C-implemented e-lisp primitives are certainly not 100% compatible
| with those implemented in e-lisp. For example, one cannot advise (or
| redefine) them because there are often C-level calls to such primitives
| and such C-level calls do not call indirectly through the symbol-function
| cell. Such "hardcoded" calls make it extremely difficult to implement
| extensions to Emacs (witness the convoluted code due to the hoops that had to
| be jumped through to do things like ange-ftp, etc).
FWIW, there is also the byte code machine, which performs a number of
operations directly without calling any functions.
#\Erik
--
Those who do not know Lisp are doomed to reimplement it.
····@ix.cs.uoregon.edu (Carl L. Gay) writes:
> From: George Van Treeck <······@sybase.com>
> [...] Java is about as fast as LISP [...]
> On what do you base this last statement? Do you have benchmarks? Are
> you comparing Java to interpreted or compiled Lisp? I would be amazed
> if Java were as fast as compiled Common Lisp code.
It strikes me that a lot of the issues in optimizing Lisp compilers
would also apply to Java compilers, and the performance would be
similar. So I wouldn't be amazed.
Unfortunately (or perhaps fortunately :-), there aren't too many
platforms on which to do this comparison. On UNIX, where there are
several good Lisp compilers, there are not yet any released Java
JIT compilers (Guava is in beta, and Sun's not yet out at all), only
interpreters. On Windows 95/NT, there are at least three Java
compilers (Borland, Symantec, Asymetrix), but AFAIK only one serious Lisp
compiler to compare to (Franz). On the Mac, I think Metrowerks and
Symantec (?) are the only Java compilers already released, but I'm not
personally aware of any Power-PC native Lisp compiler (or is
Digitool's MCL now PPC native? Pardon my ignorance).
- Marty
Lisp Programming Resources: <http://www.apl.jhu.edu/~hall/lisp.html>
Java Programming Resources: <http://www.apl.jhu.edu/~hall/java/>
From: Marty Hall
Subject: Re: JIT Java compilers for Unix (was: Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <x5g23uverf.fsf@rsi.jhuapl.edu>
Simon Leinen <·····@switch.ch> writes:
> Marty Hall wrote
> > On UNIX, where there are several good Lisp compilers, there are not
> > yet any released Java JIT compilers (Guava is in beta, and Sun's not
> > yet out at all), only interpreters.
>
> Sorry for becoming off-topic, but some JIT Java compilers for Unix
> have recently become available:
>
> SGI has a Java development environment called CosmoCode whose latest
> release includes a JIT compiler. You can download the beta version
> from http://www.sgi.com/Products/cosmo/beta/
>
> Then there was an announcement for a commercial product called "Guava"
> (not to be confused with another, free, product with the same name) in
> comp.lang.java.programmer <···················@floyd.sw.oz.au> which
> is a JDK-compatible run-time system with JIT compilation for Solaris 2.5.
I hadn't realized that the SGI product included a JIT compiler. Note
that the Guava you mentioned is the same one I mentioned, and is still
in beta.
Anyhow, even though Carl is very knowledgeable about Lisp and
knowledgeable about Java, on this particular issue ("amazed" if
[compiled] Java were as fast as compiled Lisp), I beg to differ. I see
no technical reason why Java need be any slower. Right now, I still
think it is relatively hard to get good numbers for comparison, since
there are few (no?) platforms for which there is more than one
commercial non-beta Java compiler *and* more than one commercial Lisp
compiler.
- Marty
Lisp Programming Resources: <http://www.apl.jhu.edu/~hall/lisp.html>
Java Programming Resources: <http://www.apl.jhu.edu/~hall/java/>
From: John David Stone
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <udranea44f.fsf@post.math.grin.edu>
···@intentionally.blank-see.headers writes:
> In article <······················@brecher.reno.nv.us>,
> ·····@brecher.reno.nv.us (Steve Brecher) writes:
>
> Vendors have been
> working hard to implement the "draft" ANSI C++ standard, there are
> numerous textbooks explaining it, ...
Very amusing. The best part of the draft standard, as opposed to
existing implementations of C++, is the Standard Template Library. Of the
scores of introductory C++ textbooks currently on the market, I'll bet
there aren't half a dozen that so much as tell the student what's _in_ the
Standard Template Library, let alone how to use it correctly. Accurate
coverage of function objects is even rarer. And far more textbooks give
complete C++ source code for the bubble sort (!) than mention the sort()
function from the algorithm library.
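For the record, the whole thing fits in a few lines. A minimal sketch
(assuming a draft-standard STL implementation) of the two items such
textbooks skip -- sort() from the algorithm library, with a function object
as its comparator:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // A function object: a callable comparator, exactly the kind of thing
    // most introductory texts never show.
    struct ByAbsoluteValue {
        bool operator()(int a, int b) const {
            return (a < 0 ? -a : a) < (b < 0 ? -b : b);
        }
    };

    int main() {
        std::vector<int> v;
        v.push_back(-7); v.push_back(3); v.push_back(-1); v.push_back(5);

        // One call to the standard sort() -- no hand-rolled bubble sort.
        std::sort(v.begin(), v.end(), ByAbsoluteValue());

        for (std::vector<int>::iterator it = v.begin(); it != v.end(); ++it)
            std::cout << *it << ' ';
        std::cout << '\n';          // prints: -1 3 5 -7
        return 0;
    }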
--
====== John David Stone - Lecturer in Computer Science and Philosophy =====
============== Manager of the Mathematics Local-Area Network ==============
============== Grinnell College - Grinnell, Iowa 50112 - USA ==============
========== ·····@math.grin.edu - http://www.math.grin.edu/~stone/ =========
John David Stone <·····@math.grin.edu> wrote:
> ···@intentionally.blank-see.headers writes:
>
> > In article <······················@brecher.reno.nv.us>,
> > ·····@brecher.reno.nv.us (Steve Brecher) writes:
> >
> > Vendors have been
> > working hard to implement the "draft" ANSI C++ standard, there are
> > numerous textbooks explaining it, ...
Please be careful with quotes/attributions, folks. The "Vendors..."
statement was written by ···@intentionally.blank-see.headers in a followup
in which he did in fact quote me, but John David Stone's followup fails to
note that he (John David Stone) is quoting selectively. The inclusion of
"> In article ... Steve ... writes:" in the quote was unnecessary and
misleading -- which is not to say I think it was other than an innocent
error.
> Very amusing. The best part of the draft standard, as opposed to
> existing implementations of C++, is the Standard Template Library. Of the
> scores of introductory C++ textbooks currently on the market, I'll bet
> there aren't half a dozen that so much as tell the student what's _in_ the
> Standard Template Library, let alone how to use it correctly. Accurate
> coverage of function objects is even rarer. And far more textbooks give
> complete C++ source code for the bubble sort (!) than mention the sort()
> function from the algorithm library.
I originally pointed out that the C++ standard is in process -- not final
-- because it is in fact an issue in my own work with C++. My vendor is
indeed working hard (as best I can tell :) but I don't as yet have an
implementation of the current draft standard nor of the Standard Template
Library that I consider appropriate for production use. The last time I
used LISP was eons before there was a CL, but -- if standardization is the
only criterion -- CL would have to get the nod over C++ right now.
--
·····@brecher.reno.nv.us (Steve Brecher)
On 04 Oct 1996 15:03:44 -0500, John David Stone <·····@math.grin.edu> wrote:
>> Vendors have been
>> working hard to implement the "draft" ANSI C++ standard, there are
>> numerous textbooks explaining it, ...
>
> Very amusing. The best part of the draft standard, as opposed to
>existing implementations of C++, is the Standard Template Library. Of the
>scores of introductory C++ textbooks currently on the market, I'll bet
>there aren't half a dozen that so much as tell the student what's _in_ the
>Standard Template Library, let alone how to use it correctly. Accurate
>coverage of function objects is even rarer. And far more textbooks give
>complete C++ source code for the bubble sort (!) than mention the sort()
>function from the algorithm library.
Well, I won't go deep into discussing STL (in my opinion it's really pretty good
and does what it was intended for - to show that templates and generic programming
can be used to make C++ libraries more flexible and useful - but it's too weak
and non-obvious to use to be included in the standard), but isn't the purpose of
the textbooks on C++ to teach the student to program in C++ first? And the bubble
sort, while being one of the worst sorting methods, can be used just fine to
demonstrate many of the features of C++ (and of most other procedural/hybrid
languages). I think that the standard library (streams, STL, etc.) should be taught
well after the first principles, if at all; at least its current usability is
pretty low compared with, say, MFC and OWL.
Oleg
From: Simon Leinen
Subject: JIT Java compilers for Unix (was: Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <aaranelxao.fsf_-_@switch.ch>
[Followup-to: set to comp.lang.java.programmer.]
> On UNIX, where there are several good Lisp compilers, there are not
> yet any released Java JIT compilers (Guava is in beta, and Sun's not
> yet out at all), only interpreters.
Sorry for becoming off-topic, but some JIT Java compilers for Unix
have recently become available:
SGI has a Java development environment called CosmoCode whose latest
release includes a JIT compiler. You can download the beta version
from http://www.sgi.com/Products/cosmo/beta/
Then there was an announcement for a commercial product called "Guava"
(not to be confused with another, free, product with the same name) in
comp.lang.java.programmer <···················@floyd.sw.oz.au> which
is a JDK-compatible run-time system with JIT compilation for Solaris 2.5.
--
Simon.
From: William Paul Vrotney
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <vrotneyDyu1rz.K91@netcom.com>
In article <··············@post.math.grin.edu> John David Stone <·····@math.grin.edu> writes:
>
> Very amusing. The best part of the draft standard, as opposed to
> existing implementations of C++, is the Standard Template Library. Of the
> ...
STL is *not* good at all for doing AI software. There are no symbols, no
singly linked lists, no conses, and nothing for building heterogeneous
structures. You can try imitating some of this with STL's version of doubly
linked lists and virtual functions, but it feels like you are forcing your
algorithms to be awkward because of the limitations of C++. Iterators seem
to fit better with arrays but make list operations awkward.
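To make the point concrete, here is a rough sketch (my own toy code, not
from any library) of what even a tiny heterogeneous list costs in C++,
compared with the Lisp one-liner (list 1 "two" 3):

    #include <iostream>
    #include <string>

    // A crude heterogeneous "cons cell": a hand-written tagged structure
    // with explicit allocation (and, in real code, explicit cleanup).
    struct Cell {
        enum Tag { INT, STRING } tag;
        int         i;     // valid when tag == INT
        std::string s;     // valid when tag == STRING
        Cell       *next;  // the "cdr"

        Cell(int v, Cell *n)                : tag(INT), i(v), next(n) {}
        Cell(const std::string &v, Cell *n) : tag(STRING), i(0), s(v), next(n) {}
    };

    static void print_list(const Cell *c) {
        for (; c != 0; c = c->next) {
            if (c->tag == Cell::INT) std::cout << c->i << ' ';
            else                     std::cout << c->s << ' ';
        }
        std::cout << '\n';
    }

    int main() {
        // Build the list (1 "two" 3) -- already far noisier than the Lisp.
        Cell *lst = new Cell(1, new Cell(std::string("two"), new Cell(3, 0)));
        print_list(lst);
        return 0;   // the cells are leaked; real code needs manual cleanup
    }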
--
William P. Vrotney - ·······@netcom.com
* Erik Naggum wrote:
> | Unfortunately, a number of CommonLisp's APIs are a tad outdated,
> | including its I/O and file system interfaces ...
> Could you substantiate this? It is difficult to uncover what you actually
> think when you only provide vague conclusions.
Obvious I/O type things that CL is missing are random access to files
and some kind of networking API. Of course everyone has these things,
but it's a mild annoyance that you have to write glue code each time.
(of course they might be very hard to standardise...)
--tim
From: Erik Naggum
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <3053681322232040@naggum.no>
[Tim Bradshaw]
| Obvious I/O type things that CL is missing are random access to files
I must have missed something. Is not `file-position' random access?
| (of course they might be very hard to standardise...)
`file-position' was already present in CLtL1 in 1984.
#\Erik
--
I could tell you, but then I would have to reboot you.
From: John David Stone
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <ud7mp2hpq7.fsf@post.math.grin.edu>
Oleg Moroz writes:
> Well, I won't go deep into discussing STL, ... but isn't the
> purpose of the textbooks on C++ to teach the student to program in C++
> first?
Under the draft ANSI standard, STL is part of C++. It's just as
much part of C++ as, say, cout or pow(). A C++ textbook that doesn't cover
the STL is about as useless as one that doesn't mention streams or the math
library.
One of the first lessons that novice programmers should learn is
the one about not re-inventing the wheel. C++ textbooks that spend half of
their pages laboriously constructing poorly-designed versions of classes
that are already in the STL are setting a very bad example.
> And the bubble sort, while being one of the worst sorting
> methods, can be used just nice to demonstrate many of the C++ (and most
> other procedural/hybrid languages) features.
True. So can insertion sort, selection sort, merge sort, or quick
sort, any of which would be a better choice of example than bubble sort --
provided of course that students also learn that they should use sort()
instead in most cases.
--
====== John David Stone - Lecturer in Computer Science and Philosophy =====
============== Manager of the Mathematics Local-Area Network ==============
============== Grinnell College - Grinnell, Iowa 50112 - USA ==============
========== ·····@math.grin.edu - http://www.math.grin.edu/~stone/ =========
In article <··············@post.math.grin.edu> John David Stone <·····@math.grin.edu> writes:
>
> One of the first lessons that novice programmers should learn is
>the one about not re-inventing the wheel. C++ textbooks that spend half of
>their pages laboriously constructing poorly-designed versions of classes
>that are already in the STL are setting a very bad example.
Point taken. However, example code for classes that are already in
standards is relatively easy to find, and textbook authors don't want to
reinvent the wheel, either. Until someone writes a book with better
examples, authors won't change their behavior (and then they'll just try to
copy the better book :-). Plus, if an author came up with really useful
example classes, he/she could probably make more money by *not* publishing
them in a textbook before they had already been implemented by someone who
would pay real money for the privilege.
David Seibert
From: George Van Treeck
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <325AF986.606C@sybase.com>
William Paul Vrotney wrote:
>
> STL is *not* good at all for doing AI software. There are no symbols, no
> singly liked lists, no conses, and nothing for building heterogeneous
> structures.
Symbols, linked lists, consing, etc. are a wasted effort by people
who know little about neuropsych and figure they could use
introspection to deduce brain functions. All programs that
use such methods have been complete failures at such simple things
as recognizing hand-written characters, voice recognition, etc.
I haven't seen any interesting software written in a symbol
processing language in the last 7 years.
The only software (and hardware) that has ever had any success at
all with these "intelligence" problems is software that emulates
neural structures to some degree. And NONE of that software
does it "symbolically." That software is number crunching
software -- generally written in a language like C or C++.
And even if you were to do symbolic processing for say,
parsing natural language text, a language like Prolog is
far superior to Lisp!
-George
In article <·············@sybase.com>, George Van Treeck
<······@sybase.com> wrote:
> Symbols, linked lists, consing, etc. are a wasted effort by people
> who know little about neuropsych and figure they could use
> introspection to deduce and brain functions. All programs that
> use such methods have been complete failures at such simple things
> as recognizing a hand written characters, voice recognition, etc.
"Simple things"?
> I haven't seen any interesting software written in a symbol
> processing language in last 7 years.
That might be *your* problem. I have seen such software.
> And even if you were to do symbolic processing for say,
> parsing natural language text, a language like Prolog is
> far superior to Lisp!
Hear, Hear. You should have told this all the people
who wrote successful AI software in Lisp.
Obviously you have no idea of AI and AI software development.
From: Tim Menzies
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <53mjac$28s@mirv.unsw.edu.au>
In article <·································@news.lavielle.com> ······@lavielle.com (Rainer Joswig) writes:
>In article <·············@sybase.com>, George Van Treeck
><······@sybase.com> wrote:
>
>> Symbols, linked lists, consing, etc. are a wasted effort by people
>> who know little about neuropsych and figure they could use
>> introspection to deduce and brain functions. All programs that
>> use such methods have been complete failures at such simple things
>> as recognizing a hand written characters, voice recognition, etc.
>
>"Simple things"?
>
>> I haven't seen any interesting software written in a symbol
>> processing language in last 7 years.
perhaps you should read:
Paradigms of Artificial Intelligence Programming: Case
Studies in Common Lisp
http://www.harlequin.com/books/norvig/paip.html:
Peter Norvig
A book published by Morgan Kaufmann, 1992.
Paperbound, xxviii + 946 pages, ISBN
1-55860-191-0.
Over 8 million pages sold!
now this text contains nothing "new": but my goodness it does lots of
"old" things so succinctly!!! e.g. Bobrow's ENTIRE PhD thesis in less
than 33 pages, with full source code and comments
--
Dr. Tim Menzies rm EE339 |
····@cse.unsw.edu.au | And for the tourist who
,-_|\ www.cse.unsw.edu.au/~timm | really wants to get away
/ \ AI Dept, School of Computer | from it all- safaris in
\_,-._* Science &Engineering, Uni NSW | Vietnam.
v Sydney, Australia, 2052 | -- Newsweek, late 1960s
+61-2-93854034(p)93855995(f) |
From: Felix Kasza [MVP]
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <32613c48.29744280@babya.shd.de>
Rainer,
> Hear, Hear. You should have told this all the people
> who wrote successful AI software in Lisp.
> Obviously you have no idea of AI and AI software development.
I have no intention to join a pissing contest.
Rather, I will gladly admit to my ignorance in AI matters: "successful
AI software"? Does that mean there's a piece of software out there that
survives the Turing test and everything else you can throw at it?
I don't care what language it's written in, but unless something drastic
happened in the past six months, "successful AI software" is rather
far-out a concept.
Cheers,
Felix.
From: Erik Naggum
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <3054236357395185@naggum.no>
[Felix Kasza]
| Rather, I will gladly admit to my ignorance in AI matters: "successful
| AI software"? Does that mean there's a piece of software out there
| that survives the Turing test and everything else you can throw at it?
no, that is not what it means.
| I don't care what language it's written in, but unless something drastic
| happened in the past six months, "successful AI software" is rather
| far-out a concept.
yes, your concept of AI is rather far-out. this cannot be attributed to
Artificial Intelligence as an area of research, however.
it is true, on the other hand, that some of the ambitious speeches were
overly optimistic and thus hurt the field over time, but they also produced
strong interest and enthusiasm in their time. ironically, the research
produced most of the fuel used to discredit it later, by discovering just
how hard their problems were. previously, people didn't know, and would
listen to their hype, which they did, in large numbers, even.
in moral terms: who is to blame for believing something that turns out to
be false? my guess is that the intense hatred for AI in some quarters can
be attributed mostly to the desire to externalize the feeling of stupidity
in having believed the hype to begin with.
as a general comment: if we are to discredit all fields that have had any
hype at some point in their life that turned out to be optimistic pep talks
more than the conservative statements of truth that seem to be required of
AI by AI-haters, nothing would be left. if we allow fields to use hype to
generate interest (as is done for Java, C++, WWW), we must allow for it in
the past, as well. anybody can have 20-20 hindsight.
#\Erik
--
I could tell you, but then I would have to reboot you.
From: Glen Clark
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <32618AFA.436C98E6@clarkcom.com>
Erik Naggum wrote:
> my guess is that the intense hatred for AI in some quarters can
> be attributed mostly to the desire to externalize the feeling of stupidity
> in having believed the hype to begin with.
Who hates AI? Can you give some examples. This is not a bait.
It is a serious question.
--
Glen Clark
····@clarkcom.com
From: Erik Naggum
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <3054283453467870@naggum.no>
[Glen Clark]
| Who hates AI? Can you give some examples. This is not a bait.
| It is a serious question.
I don't want to broadcast a professor's name, but his reaction to anything
that resembles AI has been so strongly negative that the whole department
has not done any research into AI techniques at all for the past 15-20
years, despite research in areas where AI researchers have gained important
progress, such as image processing. he's unfortunately not alone.
#\Erik
--
I could tell you, but then I would have to reboot you.
From: Cyber Surfer
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <845276810snz@wildcard.demon.co.uk>
In article <·················@clarkcom.com>
····@clarkcom.com "Glen Clark" writes:
> Erik Naggum wrote:
>
> > my guess is that the intense hatred for AI in some quarters can
> > be attributed mostly to the desire to externalize the feeling of stupidity
> > in having believed the hype to begin with.
>
> Who hates AI? Can you give some examples. This is not a bait.
> It is a serious question.
Mrs Thatcher, ex Prime Minister of Great Britain (an ironic name
for an island - er, set of islands - if ever there was one). It
was allegedly a comment by Marvin Minsky about "AI"s someday
wanting to keep us as pets that prompted Mrs Thatcher to kill
the Alvy (? I'm not sure of the name...) project. For those who
don't know, that was a big AI project in this country.
I hope this is a serious answer, even if I can't provide many
details. All I know about it comes from watching TV, so don't be
too surprised if some of them are wrong! On the other hand, I do
remember the project being cancelled, whatever the reason may
have actually been.
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
From: Stephen Wolstenholme
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <326559b8.3542463@news>
On Mon, 14 Oct 96 07:06:50 GMT, ············@wildcard.demon.co.uk
(Cyber Surfer) wrote:
>In article <·················@clarkcom.com>
> ····@clarkcom.com "Glen Clark" writes:
>
>> Erik Naggum wrote:
>>
>> > my guess is that the intense hatred for AI in some quarters can
>> > be attributed mostly to the desire to externalize the feeling of stupidity
>> > in having believed the hype to begin with.
>>
>> Who hates AI? Can you give some examples. This is not a bait.
>> It is a serious question.
>
>Mrs Thatcher, ex Prime Minister of Great Britain (an ironic name
>for an island - er, set of islands - if ever there was one). It
>was allegedly a comment by Marvin Minsky about "AI"s someday
>wanting to keep us as pets that prompted Mrs Thatcher to kill
>the Alvy (? I'm not sure of the name...) project. For those who
>don't know, that was a big AI project in this country.
>
I think the Alvey project came to an end when it ran out of money
after its initial phase. Wasn't it European rather than just British?
No doubt Mrs Thatcher did have some involvement in not providing the
money.
Steve
--------------------------------------------------------------------------
Stephen Wolstenholme, Cheadle Hulme, Cheshire, UK
·····@tropheus.demon.co.uk
Author of NeuroDiet.
Windows 95 Neural Network Health & Fitness Diet Planner
http://www.simtel.net/pub/simtelnet/win95/food/ndiet10.zip
ftp://ftp.simtel.net/pub/simtelnet/win95/food/ndiet10.zip
From: Cyber Surfer
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <845409993snz@wildcard.demon.co.uk>
In article <················@news>
·····@tropheus.demon.co.uk "Stephen Wolstenholme" writes:
> I think the Alvey project came to an end when it ran out of money
> after it's initial phase. Wasn't it European rather that just British?
> No doubt Mrs Thatcher did have some involvement in not providing the
> money.
Well, it's her alleged attitude to AI that interests me, not
so much what part she played in killing Alvey. I find it more
significant that she's given credit for the killing - but perhaps
that's just anti-Thatcher propaganda? I dunno.
In these post-Thatcher days, it's very hard to tell. Almost
nobody who knows anything about it can be objective. Today,
the pro/anti "European" issue is more relevant - but that's
a whole different matter! Let's not get into _that_, please.
Of course, just withholding money can successfully kill a
project. That may be what was referred to. As for the cause,
I dunno. Did she actually meet Minsky? If so, what _did_
he say to her? Etc, etc.
A more interesting question might be: who makes more money from
lecture tours, Minsky or Thatcher? ;-) No, don't answer that!
I doubt that anybody cares.
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
From: Felix Kasza [MVP]
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <32665ffa.11776373@babya.shd.de>
Erik,
> my guess is that the intense hatred for AI in some quarters
I do hope that my original post doesn't classify me as an AI-hater. My
original question remains unanswered, however. All my dictionaries tell
me that "intelligence" refers to either the household meaning of
intelligence (a whole field of research on its own) or to the gathering
(and product thereof) of information -- as in "military intelligence".
I therefore expect the name "Artificial Intelligence" to refer to either
any kind of mechanism that displays "intelligent behaviour" in the first
sense, or that makes up pseudo-facts and disguises them to look like
information (second sense).
The WWW hype you refer to is quite different; after all, the name is
much less of a claim than "Artificial Intelligence".
To summarize, I fail to see how any current product can claim to be
"intelligent"; this served as my argument to Rainer Joswig, who in a
most impolite manner told another poster, "Obviously you have no idea of
AI and AI software development.".
All I wanted was to show Rainer that he was just as clueless, or he would
realize that AI doesn't exist, with or without Lisp.
Cheers,
Felix.
From: Erik Naggum
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <3054755627141139@naggum.no>
* Felix Kasza <······@mailbag.shd.de>
| I therefore expect the name "Artificial Intelligence" to refer to either
| any kind of mechanism that displays "intelligent behaviour" in the first
| sense, or that makes up pseudo-facts and disguises them to look like
| information (second sense).
this reminds me of the etymology of two words in ordinary English that
differ quite remarkably from their perceived meaning. "muscle" comes from
the Latin "musculus", dimunitive of "mus", our "mouse". this is allegedly
because muscles show movement much like small mice, contracting and
stretching as they do when they move. would you raise an eyebrow should
someone complain loudly that muscles are emphatically _not_ small mice? a
"compilation" (a collection of writings) comes from the Latin "compilare",
which means "to plunder". because some authors plundered the writings of
others and put them together as their own 400-odd years ago, today's
"compile" means "to collect". I would find it surprising if somebody who
has duplicated software in violation of license agreements (I think a
"pirate" is an evil man who kills and robs at sea, and I refuse to use it
about computer users, regardless of their crimes, alleged or real), would
defend himself by arguing that the programs were "plundered" to begin with.
the real question, of course, is which dictionary you use. used wrongly, a
dictionary of etymology will make a man a source of much ridicule. used
wrongly by a patient, a medical dictionary will do nothing but frustrate
the medical doctors who have to deal with that patient. your looking up
"intelligence" alone in a dictionary shares many of these qualities.
#\Erik
--
I could tell you, but then I would have to reboot you.
In article <·················@babya.shd.de>, ······@mailbag.shd.de (Felix
Kasza [MVP]) wrote:
> I therefore expect the name "Artificial Intelligence" to refer to either
> any kind of mechanism that displays "intelligent behaviour" in the first
> sense, or that makes up pseudo-facts and disguises them to look like
> information (second sense).
As you may guess, you are not the first German who doesn't
understand it. Again, read the literature about AI and
you will see that making human-like thinking beings
is not the top priority of most AI researchers.
Instead they have completely different goals.
The debate about the term "Artificial Intelligence" is
old, as are the misunderstandings caused by these words.
The German translation is "Kuenstliche Intelligenz", which
is even worse, because it misses the meaning
of "intelligence". Some people prefer "Wissensbasierte Systeme"
(knowledge-based systems), "Kognitive Systeme" (cognitive systems),
etc., depending on the field they are working in.
> To summarize, I fail to see how any current product can claim to be
> "intelligent"; this served as my argument to Rainer Joswig, who in a
> most impolite manner told another poster, "Obviously you have no idea of
> AI and AI software development.".
Yes, and I may have to say that again. If you want to discuss
this topic, some prior knowledge of this area may be required.
For example it would be worth knowing something about the
history of "AI" and the various developments that have taken
place. As you can imagine, a field with thousands of researchers
is very diverse and many opinions do exist about what
AI is. Also knowing a bit about AI software in industry may
help (hint: for some years there have been conferences
devoted to this topic.).
Example:
People are working on a new generation of ships: ships
that have diagnosis systems on board, ships that can
do navigation (for example building a simple convoy, etc.).
Who cares whether the ship will pass the Turing test?
It may be sufficient that it can safely find its way
on the ocean.
Example two:
There are systems which gather data from the
high-voltage power lines all over the country.
If an interruption happens, these systems
try to identify the location of the problem.
Obviously you need some diagnosis capability.
This makes it possible to send repair teams directly to
the place where the cause of the power outage
is, without spending time searching for it. Who
cares if this system will pass the Turing test?
Example three:
The European Union has to translate huge amounts
of text. Many texts can be translated with
some support from computers (for example see
the Metal system from Siemens). Who cares
if the machine translation software will pass
the Turing test? Still it does useful work.
Example four:
American Express has to look at every money
transaction of their customers. Most of
them are pretty trivial and can be
checked very fast. But some of these transactions
may require more background knowledge about
the customer. Remember, a company like
American Express has to maximize customer
satisfaction. The complicated decisions are being
handled by some Lisp machines. I don't care
whether these machines will pass the Turing test.
Still they have to decide whether a money transaction
is o.k.
Are these examples not AI software? If not, why?
> All I wanted was to show Rainer that he was just as clueless, or he would
> realize that AI doesn't exist, with or without Lisp.
Your remark is a stupid one. You may have a completely
different definition of AI than I have. Which basic
books about AI have you read? (Seems like reading
books is a bit out of fashion and posting is easier.)
For a modern introduction to AI you may look into the latest
book by Russell/Norvig. Or read Mark Stefik's book about
knowledge systems. Email me, in case you need an
ISBN Number. Some books about Expert Systems, Machine Translation,
Image Processing, Automated Reasoning, Knowledge Representation,
Robotics, Planning may give you an idea what AI is about.
Rainer Joswig
In article <·················@babya.shd.de>, ······@mailbag.shd.de (Felix
Kasza [MVP]) wrote:
> Rainer,
>
> > Hear, Hear. You should have told this all the people
> > who wrote successful AI software in Lisp.
>
> > Obviously you have no idea of AI and AI software development.
>
> I have no intention to join a pissing contest.
You already did.
> Rather, I will gladly admit to my ignorance in AI matters: "successful
> AI software"? Does that mean there's a piece of software out there that
> survives the Turing test and everything else you can throw at it?
Again, obviously you have no idea of AI and AI software development.
Who cares about the "Turing test"? Very few people (maybe
Mr. Loebner).
> I don't care what language it's written in, but unless something drastic
> happened in the past six months, "successful AI software" is rather
> far-out a concept.
Read literature about AI, get some idea what AI is about (I can
give you a hint: it is a very diverse field with many different
ideas), and then you will see how silly your questions are.
Greetings,
Rainer Joswig
> Cheers,
> Felix.
From: Espen Vestre
Subject: Re: standardization (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <w6hgo13h65.fsf@pilt.online.no>
In article <·································@news.lavielle.com> ······@lavielle.com (Rainer Joswig) writes:
In article <·············@sybase.com>, George Van Treeck
<······@sybase.com> wrote:
> I haven't seen any interesting software written in a symbol
> processing language in last 7 years.
That might be *your* problem. I have seen such software.
Yesterday's announcement (crossposted to this group) of KR-96 should
be enough to convince anyone that symbol-based AI research is alive
and doing well.
Ah, and I couldn't resist adding a little extra remark: I consider
Perl5 also to be a symbol processing language. It reinvents lisp
in quite ugly ways, but it definitely has a very large impact
through its extensive use in e.g. WWW applications.
--
Espen Vestre
Telenor Online AS
Norway
* George Van Treeck wrote:
> If you go to an engineering manager and propose a product, do
> you think the manager will agree to using a language that no
> one else uses to get product out the door? Get real.
Someone has to, some time. Someone must have been the first to use
C/C++/Cobol.
> It's not myths. At Digital, a CAD tool was written in Lisp and dumped
> because Lisp was just too slow. Lisp can't be used in real-time
> applications, because you need to guarantee response time. A 911
> call needs to be routed over the phone lines and can't afford to wait
> for a Lisp program to take a coffee break (garbage collect for some
> indeterminate period). And the code size is too large to be
> quickly activated and used for small utility applications. Perl,
> Tcl, C/C++ are used instead.
You should read at least some papers on real-time systems and GC
before posting something like this. Pausing for an indeterminate
period is just the sort of thing that using garbage collected
languages can avoid, but which is hard to avoid in a traditional
malloc/free type system.
To give more content to this thread -- does anyone have real figures
for GC overheads for real programs (in Lisp or any other GC'd
languages, not just real-time stuff). XEmacs has some kind of profiler
now, and I ran that for a few days, and it spends only about 2-3% of
its time in GC, despite using large & consy elisp packages like gnus &
VM.
--tim
From: Carlos Cid
Subject: Final Decision (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <32621AF7.6B1@eis.uva.es>
Hi,
Thanks everybody for your help. I have collected your replies to this message
and I am posting them by request.
Some of these replies were not posted to the newsgroups, so I'll try to cite
only the opinion, not the sender. To all those senders: I hope you don't
mind that I have cited your opinion.
I have only included replies to my message; you can follow the rest of the
discussion on the newsgroups.
At the end of this message you can find our final decision.
=============================================================================
THIS IS THE ORIGINAL MESSAGE
=============================================================================
Hi everybody,
This is a request for your opinion about the subject theme.
I'm working on Machine Learning and I wish to know your opinion about what
is the best language to implement ML. (in general AI.) software. The
discussion is focused on software for researchers, not for comercial tools.
My workgroup is now discussing what is best, but we haven't take a decission
yet. We are mainly interested in:
- Existing implemented ML. algorithms.
- An easy maintenance (easy to understand and easy for group working).
- Language standarization.
- Multiplatform support.
- Interface with graphic enviroments.
- Software Engineering methodologies developed.
We think both of them have its advantages and disadvantages. Perhaps your
experience could help us.
I know there are other good languages for AI. implementations, but we want
to restrict the discussion to Lisp and C++. Of course, you are free to aswer
this message to defend your favourite language. Any opinion is welcome.
Thank you in advance for your help.
Carlos Cid.
=============================================================================
THOSE ARE YOUR REPLIES
=============================================================================
You should also consider using functional languages such as ML (not
machine learning, but the ML language, like SML -standard ML- or
Ocaml).
=============================================================================
=============================================================================
Hi,
I know that you want to restrict the discussion to C++ or Lisp, but let me
introduce you to another language that, in my opinion, mixes the best of both:
TCL/TK. This language is interpreted and works with the concept of
lists, like LISP. Tk is the visual interface. Both Tk and the
interpreter exist for Unix, Mac and Windows with the same visual
interface objects, so the code is close to fully portable. Moreover,
you can call C++ libraries from TCL/TK. So you can build a C++ library that
does the hard stuff and compile it on different platforms, and the
rest of the interface can be in TCL/TK, which will run on many platforms
too.
This is my point of view; I don't consider myself an expert in either
of those languages. I hope the information can help you.
=============================================================================
=============================================================================
Dear Carlos,
I went into AI wanting to do everything in LISP or
Prolog as these are the AI languages, I thought.
Since then I have had my mind changed on this subject.
Lisp and Prolog are high level languages. A high-level language
means the compiler makes a lot of assumptions about what you are
going to be doing, in order to make programming a lot easier.
I have used a high-level language in a business environment,
and I think it's a great idea.
HOWEVER. In AI research you will quickly find that your
high level language has made the _wrong_ assumptions about
what you are going to do. Lisp and Prolog were not designed
with neural nets and genetic algorithms in mind, for example.
What you want is a programming environment that lets you specify your own
assumptions about your future programming needs, in order to facilitate
future programming. But it should also allow you to _change_ those
assumptions as necessary at a lower level.
Voila. C++. A fixed high level language is OK in a business environment,
but at the cutting edge of technology you need C++. For "facilitating
assumptions" read "class libraries".
=============================================================================
=============================================================================
Hello. A while back I worked in the Computer Science dept. at the
University of Aberdeen. There, the feeling amongst the ML people
was that lisp/prolog was good for prototyping code, but for anything
using large datasets, C was the language choice. I suppose like all
software engineering, once the technology stabilises, it will be ported
from lisp/prolog to C (ie Progol, c4.5 etc etc).
=============================================================================
=============================================================================
There aren't any good answers, unfortunately. It depends on how
good your researchers are, how high performance the implementation
needs to be, how much money you want to spend, etc. C++ is a pain
to develop in, difficult to learn, and difficult to port, but it
is efficient and there is a lot of software for it. Lisp
is a bit easier to learn and develop in, but whatever OS APIs it
has are proprietary, it is tricky to get good performance out
of it, and it is very expensive.
One language to consider is Objective-C, preferably using a garbage
collector like Boehm's: it combines most of the advantages of C and
Lisp, and it is cross-platform (the OpenStep libraries for Windows and
UNIX). It is simple, efficient, powerful, and free.
=============================================================================
=============================================================================
"ILOG Talk" is a Lisp dedicated to the development in Lisp with C++
libraries (including your extensions). This certainly a way to get
the best of both worlds.
cc> - Existing implemented ML. algorithms.
Dunno. If there are any in C++, then you get them for free in ILOG Talk too.
cc> - An easy maintenance (easy to understand and easy for group
cc> working).
There is some kind of support for groupware modular development. So far
a training session is the most appropriate approach to groupware in
ILOG Talk.
cc> - Language standarization.
ILOG Talk is close to ISLISP (the ISO standard for LISP), with many
extensions.
cc> - Multiplatform support.
Plenty of Unixes, plus Windows NT and Windows 95.
cc> - Interface with graphic enviroments.
Any of the C++ (and C) libraries you want! We recommend ILOG Views,
which is a C++ graphic library widely ported, efficient, and powerful.
Check http://www.ilog.fr/ for more about ILOG Views.
cc> - Software Engineering methodologies developed.
Object-oriented modeling is best addressed by ILOG Power Classes.
Please ask ····@ilog.fr for more information about ILOG Power Classes.
cc> We think both of them have its advantages and disadvantages. Perhaps your
cc> experience could help us.
Almost all our ILOG Talk customers are Lisp programmers now enjoying
using C++ libraries. Most of them still write no C++ code, some now
code in both languages.
ILOG Talk is not Common Lisp (smaller footprint, modular runtime,
simpler, etc), but it is definitely in the same family of Lisp
dialects (an extension of the ISO standard). You can get it for free
on Linux (1), or buy it for Unix and Windows (2).
I hope you'll enjoy it!
(1) Please check ftp://ftp.ilog.fr/pub/Products/Talk/linux/ilog-talk-3.2.tar.gz
or ftp://sunsite.unc.edu/pub/Linux/Incoming/ilog-talk-3.2.tar.gz on sunsite
or any sunsite mirror
(2) Please check http://www.ilog.fr/Products/Talk/ and/or ask ····@ilog.com
=============================================================================
=============================================================================
I am responding to Carlos Cid's request. I am basing my opinions on my
own experience in doing something much like what he wants to do.
Carlos Cid wrote:
>
> My workgroup is now discussing what is best, but we haven't take a decission
> yet. We are mainly interested in:
>
> - Existing implemented ML. algorithms.
A lot is available for Lisp. See the CMU-AI archive.
> - An easy maintenance (easy to understand and easy for group working).
Programming language theory experts say that Lisp has the advantage over
C++ here.
> - Language standarization.
Common Lisp is an ANSI standard and many implementations have attained
or nearly attained that standard.
> - Multiplatform support.
> - Interface with graphic enviroments.
The basic common lisp language is implemented on many platforms and
common lisp code runs on all of them. If you want a fancy interface to
run an any Unix or Linux machine, that is no problem. If you want a
fancy
interface to run on any Windows(TM) machine, that is no problem. If you
want a single fancy interface to run on both Unix and Windows(TM) then
you need to do a bit more work (look into CLIM or TCL/Tk, and see the
Lisp
FAQ) but it is possible.
I suppose this is true of C++ also.
If a text-based interface is good enough, then that is perfectly
portable.
> - Software Engineering methodologies developed.
I'm not sure what you are referring to here. Lisp is a very old and
mature language which is being used in industry all over the place.
C++ is relatively young but it is widely used.
Lisp is a lot of fun. It's not a drag to program.
=============================================================================
=============================================================================
Carlos Cid wrote:
> - Existing implemented ML. algorithms.
A lot more LISP code out there -- much of it outdated though. Most
new stuff is in C++.
> - An easy maintenance (easy to understand and easy for group working).
Only very weird people think LISP is readable... Writing obtuse
code is equally easy in either language. LISP is better for
prototyping and C++ better for production.
> - Language standarization.
Toss up.
> - Multiplatform support.
C++ is on many more vendors' platforms. Hardly any systems vendors
provide LISP -- have to rely on some small company. Due to the
much larger market for C++, it tends to be more optimized, bug free,
etc.
> - Interface with graphic enviroments.
Portable GUI frameworks are available for both LISP and C++.
Personally, I would use Java's AWT for making a 100% portable GUI
code. It can call out to C++ code for the compute intensive
portions. Java is about as fast as LISP, so you might be able
to write the whole thing in Java.
> - Software Engineering methodologies developed.
C++ wins hands down. Go to any book store with computer section.
Many books on methodologies with examples in C++ -- nothing with
examples in LISP.
You forgot one other important category -- performance. If
your ML code is very compute intensive, e.g., GA, neural nets,
etc. then C++ is the only way to go.
=============================================================================
=============================================================================
C>I'm working on Machine Learning and I wish to know your opinion about what
C>is the best language to implement ML...
Either way, try not to use both languages. You may spend more
time on integration than research. I've worked on a project
that had to interface LISP and C++ (LISP:intelligence, c++:simulator).
The end statistics was that we spent more than 50% of labor hour
on LISP-C++ interface and close to 70% of code was written to
transoform data structure back and fourth between the 2 languages
on multiple platforms.
Project was a success, after we re-implemented the design (yes,
we actually had a design, thanks to the insistence/threat of the
customer) in a single language.
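To give an idea of what that glue looks like, here is a toy sketch (invented
here for illustration, not code from the actual project) of the kind of
marshalling that piles up: flattening a C++ structure into an s-expression
string that the Lisp reader can ingest. Multiply this by every shared data
structure, in both directions, and the 70% figure stops being surprising.

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    // A structure the C++ simulator wants to hand to the Lisp side.
    struct Reading {
        std::string sensor;
        double      value;
    };

    // Serialize a vector of readings as an association list, e.g.
    //   (("sonar" . 3.5) ("camera" . 0.25))
    static std::string to_sexp(const std::vector<Reading> &rs) {
        std::ostringstream out;
        out << '(';
        for (std::vector<Reading>::size_type i = 0; i < rs.size(); ++i) {
            if (i) out << ' ';
            out << "(\"" << rs[i].sensor << "\" . " << rs[i].value << ')';
        }
        out << ')';
        return out.str();
    }

    int main() {
        std::vector<Reading> rs;
        Reading a = { "sonar", 3.5 };
        Reading b = { "camera", 0.25 };
        rs.push_back(a);
        rs.push_back(b);
        std::cout << to_sexp(rs) << '\n';
        return 0;
    }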
=============================================================================
THE FINAL DECISION
=============================================================================
We finally decided to use C (not C++) for our implementation. Before telling
you why, I think you should know a little bit more about my project.
We wish to build a Machine Learning system and we wish to apply it to a voice
recognition problem. The workgroup is interested in any kind of ML
algorithm, but my personal work is restricted to the ID3 family (proposed
by Quinlan); other kinds of algorithms should be added to this system in
the future.
I have been searching for existing implementations of those algorithms. I
have found lots of them in Lisp, Prolog, C, and C++. We chose three:
- MLC++, a ML system with a complete library of ML algorithms
implemented in C++ at Silicon Graphics.
- A Lisp implementation of C4.5 by Ray Mooney at the University of
Texas.
- The implementation of C4.5 by Quinlan in C at the University of
Sydney.
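For readers who don't know the ID3 family: its core is just entropy and
information gain computed over attribute splits. A tiny illustrative sketch
of that kernel (my own toy numbers and code, not taken from any of the three
implementations above; shown in C++, but it is essentially identical in C):

    #include <cmath>
    #include <cstdio>

    // Entropy of a binary class split: H = -p*log2(p) - q*log2(q).
    static double entropy(int pos, int neg) {
        double total = pos + neg;
        double h = 0.0;
        if (pos > 0) { double p = pos / total; h -= p * std::log(p) / std::log(2.0); }
        if (neg > 0) { double q = neg / total; h -= q * std::log(q) / std::log(2.0); }
        return h;
    }

    int main() {
        // Toy data set: 9 positive and 5 negative examples overall, split
        // by some attribute into two branches with (6+/2-) and (3+/3-).
        double before = entropy(9, 5);
        double after  = (8.0 / 14.0) * entropy(6, 2) + (6.0 / 14.0) * entropy(3, 3);
        // ID3 chooses the attribute with the largest information gain.
        std::printf("gain = %.3f bits\n", before - after);
        return 0;
    }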
Now let me reply to my own questions, based on my personal experience, your opinions and
the information I have found:
> - Existing implemented ML. algorithms.
Of course, I have found good implementations in all languages.
> - An easy maintenance (easy to understand and easy for group working).
We thought that C++ would be the best because of its philosophy (as an object
oriented language), but we have found that this language is not mature
enough and is changing continuously, trying to converge with other formal
object oriented languages.
We think that Lisp is easier to understand than C, but C is better for
group working. The advantage of Lisp is not very real because most people
(at least in my environment) know a lot of C but rarely any Lisp.
> - Language standarization.
Lisp loses here. Although a good standard exists (Common Lisp), you need
to use some other layers (for example CLOS to use an object oriented Lisp)
and there isn't a good standard for those layers.
C++ has a good new standard (ANSI), but C++ compilers either do not implement
the standard 100% or are too expensive and run on a more expensive
computer. Perhaps the GNU compiler will be a solution in the future.
C is a very mature and stable language and a good standard exists (ANSI).
C compilers usually have their own dialect, but will compile standard ANSI C
code if you wish.
> - Multiplatform support.
[NOTE: We really want to build a system kernel suitable to run on a
personal computer and on a UNIX workstation.]
You can find compilers for all these languages for a PC or a Unix WS, but
the problems come back to language standardization. If you use C++ or Lisp
you will probably have to reimplement parts of the code to port it from a WS
to a PC.
My personal experience porting C code demonstrates (at least for me) that
C is the best (of course for a system kernel, not for GUIs).
> - Interface with graphic enviroments.
Lisp GUIs have very bad performance, but we never said performance was
important. C++ is interesting because a lot of visual programming tools
exist. The advantage of C is that you can always find a way to build a GUI
and good graphics libraries. All of them are suitable.
> - Software Engineering methodologies developed.
Well, at this point I have found that object oriented methodologies are
in a good state of the art now, but some experts have told me that the
methodology used should not depend on the implementation language and
we have to choose the best one for the problem type.
Finally we chose C because:
- I have been programming in C for seven years.
- Quinlan works on it, he is continuously increasing ID3's
capabilities, and he uses C.
- We can easily build a system kernel for both a PC and a WS.
On a WS we have very good performance. The PC version of the
software is very "transportable" because lots of people have one
and it is a cheap platform.
- In the future, developers may wish to work in C++ using the MLC++
implementation. It's very easy to include my C code in their C++
system.
Thanks again for your opinions.
#-------------------------------------------#--------------------------------#
# JOSE CARLOS CID VITERO (Charlie) ITIG. | http://www.eis.uva.es/~charlie #
# Grupo de Ciencias Cognitivas Aplicadas y | ··············@dali.eis.uva.es #
# Vision por Computador. | #
# Dpt. Ingenieria de Sistemas y Automatica. | Tlf : (983) 42-33-55 42-33-58 #
# Escuela Tecnica Superior de Ingenieros | Fax : (983) 42-33-10 42-33-58 #
# Industriales. Universidad de Valladolid. | Paseo del cauce S/N. 47011 VA #
#-------------------------------------------#--------------------------------#
"Dan Winkler" <······@tiac.net> writes:
> > Bottom line: The proof of which is better (C++ or Lisp) is to
> > look at the commercially successful applications. If Lisp is such
> > a hot language, how come most applications are all written in C and
> > C++?
>
> I would even set aside the question of MOST successful applications
> and just ask someone to show me ONE successful Lisp application. By
> "show me" I mean something I can download and try. By successful I
> mean something I would use regularly. I have lots of programs
> written in C that meet those criteria but none in Lisp.
How do you feel about Java?
-- Harley Davis
-------------------------------------------------------------------
Harley Davis net: ·····@ilog.com
Ilog, Inc. tel: (415) 944-7130
1901 Landings Dr. fax: (415) 390-0946
Mountain View, CA, 94043 url: http://www.ilog.com/
There are a lot of applications developed using (IDL)/Lisp from Concentra Corp.
This company has a product called ICAD, which is very popular with Boeing, Pratt
and Whitney, GE and many more. I do not know if they have any application you
can download and use, but you can visit their Web page. I have been using their
product for a few months now and it is working well. It is very expensive
though. Their URL is WWW.concentra.com.
--Milind
······@lavielle.com (Rainer Joswig) writes:
> In article <················@naggum.no>, Erik Naggum <····@naggum.no> wrote:
>
> > but... Wade L. Hennessey's WCL uses shared libraries to produce very small
> > binaries (smaller than C++ with GCC). granted, the shared libraries are
> > enormous, but enormous shared libraries don't stop people who use other
> > enormous shared libraries from pointing to their small, not-so-stand-alone
> > binaries and gloat. with WCL, Common Lisp programmers can do the same if
> > they wish. WCL runs on SPARCs with SunOS and Solaris. it seems not to be
> > maintained. <URL:ftp://cdr.stanford.edu/pub/wcl/>
>
> Right, there is also CLICC, a Common Lisp to C Compiler.
> You can compile a subset (the dynamism removed) of
> Common Lisp directly to C. I never have used
> it, but it should generate small binaries, too.
>
> Then there was the expensive Lisp-to-C compiler
> from Chestnut, I wonder how big the generated
> applications were (minimum that is).
>
> MCL also uses shared libraries.
Ilog Talk seems to be the only extant commercial Lisp system which
compiles to C and generates and uses shared libraries.
-- Harley Davis
-------------------------------------------------------------------
Harley Davis net: ·····@ilog.com
Ilog, Inc. tel: (415) 944-7130
1901 Landings Dr. fax: (415) 390-0946
Mountain View, CA, 94043 url: http://www.ilog.com/
Gary Brown <······@thebrowns.ultranet.com> writes:
> Compiled LISP can approach other languages in speed. But, that imposes
> constraints that restrict most of the functions that make LISP desirable
> in the first place.
I'm confused by this. Could you give some examples of where compiling
your code restricts you? In the case of Common Lisp, most people I
know *always* run compiled, even during development. Some Common Lisps
(e.g. Allegro CL/Windows) only compile, and never run anything
interpreted.
- Marty
Lisp Programming Resources: <http://www.apl.jhu.edu/~hall/lisp.html>
From: thomas hennes <······@pobox.oleane.com>
Date: Sat, 19 Oct 1996 03:48:28 +0100
Well, i am not sure about this. Consider the case of software agents
--which is directly linked to AI if you bear in mind that GOOD agents
are genetically evolutive (ideally). When dealing with networked agents,
you just cannot let pointer arithmetic (an automatic feature of C/C++)
get into the way, for obvious security reasons. LISP, on the other hand,
is one language that manages this pretty well, as do others such as
SafeTCL (another interpreted language!!).
Another?
What i actually believe is that interpreted languages and compiled
languages have very different application domains. Interpreted languages
work wonders when used as tiny specialized scripts.
No, Common Lisp is not an interpreted language.
Yes, Lisp interpreters exist.
All commercial Common Lisps that I'm aware of are compiled by default.
Even if you type in a "tiny specialized script" it is usually compiled
before it is executed.
Harumph. :)
Carl L. Gay (····@ix.cs.uoregon.edu) wrote:
: [...]
: No, Common Lisp is not an interpreted language.
: Yes, Lisp interpreters exist.
: All commercial Common Lisps that I'm aware of are compiled by default.
: Even if you type in a "tiny specialized script" it is usually compiled
: before it is executed.
: Harumph. :)
--
***************begin r.s. response***************
lisp
is one of the earliest
high level languages,
dating to the 1950s...
certainly,
in early implementations,
lisp
was available primarily
as an interpreted language...
historically, lisp pioneered
routine availability of
recursion
to the programmer...
later...
so generally available in
the widespread tradition dating
to algol60(revised)...
for those with systems capable
of supporting (ms dr pc)dos
(286 good)...
versions of lisp interpreters
are freely available as shareware...
(lisp like)
xlisp
and
pc-lisp;
each of these is remarkable
and very well made...
(xlisp is available in a
variety of releases...)
***************end r.s. response*****************
Ralph Silverman
········@bcfreenet.seflin.lib.fl.us
Ralph Silverman (········@bcfreenet.seflin.lib.fl.us) wrote:
: [...]
--
******************begin r.s. response******************
during the time of the early
development of
lisp
,
limitations,
now virtually taken for granted,
were not necessarily accepted...
self-modification of software
particularly,
was thought, by some,
to be a design goal for
advanced computer
programming languages...
in early, interpreted forms,
lisp
supported such use...
various aspects of
self-modification were programmed
into systems...
******************end r.s. response********************
Ralph Silverman
········@bcfreenet.seflin.lib.fl.us
From: Ben Sauvin
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <326D6473.4500@csql.mv.com>
Ralph Silverman wrote:
> : ***************begin r.s. response***************
> : lisp
> : is one of the earliest
> : high level languages,
> : dating to the 1950s...
> [...]
> ******************end r.s. response********************
> Ralph Silverman
> ········@bcfreenet.seflin.lib.fl.us
Ralph, please forgive me (this is NOT a flame), but I GOTTA know: what
text formatting language or utility are you using? :)
PowerLisp, for Macs, has an interpreter. You can, however, explicitly
compile a routine, in which case it executes faster.
Fred B<····@mailhost.ais.net>
Carl L. Gay (····@ix.cs.uoregon.edu) wrote:
: [...]
: No, Common Lisp is not an interpreted language.
: Yes, Lisp interpreters exist.
: All commercial Common Lisps that I'm aware of are compiled by default.
: Even if you type in a "tiny specialized script" it is usually compiled
: before it is executed.
: Harumph. :)
--
***************begin r.s. response****************
early,
sophisticated,
programming languages...
available, then,
primarily in
interpreted
form,
such as,
lisp
and
apl
,
fostered interactive development
and testing of software components
...
generally,
compilation,
if available,
would be for creation of
a
deliverable
of the final product...
***************end r.s. response******************
Ralph Silverman
········@bcfreenet.seflin.lib.fl.us
"Neil Henderson" <······@worldchat.com> writes:
> You will notice that C++ is being used to build MANY more
> applications than LISP.
True.
> (Actually I haven't seen a job ad for LISP).
See the last entry in the "Internet Resources" section of
<http://www.apl.jhu.edu/~hall/lisp.html> for several recent Lisp job ads.
>C++ is far more flexible than LISP in terms of what can be done
Obviously, this depends on what you are trying to do. Give an example
of the kinds of things you want to do, then some meaningful
comparisons can be drawn. For many (but not all) of the things *I*
typically do, Lisp is more flexible.
- Marty
····@ix.cs.uoregon.edu (Carl L. Gay) writes:
> [...]
> No, Common Lisp is not an interpreted language.
>
> Yes, Lisp interpreters exist.
>
> All commercial Common Lisps that I'm aware of are compiled by default.
> Even if you type in a "tiny specialized script" it is usually compiled
> before it is executed.
>
> Harumph. :)
This is not the first time this assumption about LISP has been made.
I think many people make the mistaken assumption that LISP is
interpreted because it's interactive and dynamically linked, and
confuse the two concepts. They look at the "batch"
(edit-compile-link-run) model of conventional languages like C or
Fortran and identify that with compilation. After all, what language
other than LISP is interactive and compiled? (Forth is the only one I
can think of.)
All the more reason that LISP is an essential component of a
programming education.
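Just as a small illustration of that interactive-but-compiled style, here
is a minimal REPL-style sketch (any Common Lisp with incremental
compilation; the function name is made up, and the exact printed results
vary by implementation):
  (defun greet (name)                     ; compiled on the spot in many CLs
    (format nil "Hello, ~A" name))
  (greet "world")                         ; => "Hello, world"
  (defun greet (name)                     ; redefine interactively...
    (format nil "Bonjour, ~A" name))
  (greet "world")                         ; => "Bonjour, world"
No separate edit-compile-link-run cycle is needed; each definition is
(re)compiled as it is entered.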
--
Liam Healy
··········@nrl.navy.mil
In article <··············@apogee.nrl.navy.mil>
··········@nrl.navy.mil "Liam Healy" writes:
> This is not the first time this assumption about LISP has been made.
> I think many people make the mistaken assumption that LISP is
> interpreted because it's interactive and dynamically linked, and
> confuse the two concepts. They look at the "batch"
> (edit-compile-link-run) model of conventional languages like C or
> Fortran and identify that with compilation. After all, what language
> other than LISP is interactive and compiled? (Forth is the only one I
> can think of.)
This is why I so often recommend to people 'Writing Interactive
Compilers and Interpreters', by P.J. Brown, ISBN 0 471 27609 X,
ISBN 0471 10072 pbk. I sometimes wonder how many people really know
what can be done using computers, what an interactive computer is,
or how to use one, should they find themselves sitting in front
of one. Far too many people are spiritually still wearing white
coats, queuing up with packs of punched cards, and sitting at
teletypes.
> All the more reason that LISP is an essential component of a
> programming education.
Or any other truly interactive language with an incremental
compiler. For example, Smalltalk, Prolog - even Forth. Sadly,
interactive Basic interpreters seem to be rather less common
on the machines I use, as even Basic has succumbed to the "batch"
approach. On the other hand, it's called "visual", so perhaps
we shouldn't worry...
Yep, in the language of MS, "visual" means "batch". George Orwell
should be spinning in his (now multimedia?) grave...
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
Cyber Surfer wrote:
[...]
> This is why I so often recommend to people is 'Writing Interactive
> Compilers and Interpreters', by P.J. Brown, ISBN 0 471 27609 X,
> ISBN 0471 10072 pbk.
Yes, this is a truly excellent book. Aside from the subject matter,
the style is a model for other authors of technical books.
--
Ian Johnston, Contracting at UBS, Zurich
Hacked electronic address to defeat junk mail; please edit when replying
Liam Healy <··········@nrl.navy.mil> writes:
>After all, what language
>other than LISP is interactive and compiled?
Pop. Pop-2 came out in the late 60s, I think. Pop11 is still alive
and well (see comp.lang.pop). I don't think there has ever been a
Pop implementation that had an interpreter. Since Pop-2 was developed
for AI programming, it comes as no surprise that Pop and Lisp are very
very similar, in everything except syntax.
--
Mixed Member Proportional---a *great* way to vote!
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
Liam Healy (··········@nrl.navy.mil) wrote:
: [...]
: This is not the first time this assumption about LISP has been made.
: I think many people make the mistaken assumption that LISP is
: interpreted because it's interactive and dynamically linked, and
: confuse the two concepts. They look at the "batch"
: (edit-compile-link-run) model of conventional languages like C or
: Fortran and identify that with compilation. After all, what language
: other than LISP is interactive and compiled? (Forth is the only one I
: can think of.)
: All the more reason that LISP is an essential component of a
: programming education.
: --
: Liam Healy
: ··········@nrl.navy.mil
--
***************begin r.s. response********************
i guess a
'dynamic compiler'
for a language is a fancy
kind of interpreter...
whatever the name sounds like;
yes???
after all,
when a program actually has been
compiled and linked
successfully,
it runs from binary ...
NOT A SCRIPT!!!!!!!!!!!!!!!!!!!!!!!!!!
^^^^^^^^^^^^
(a program compiled and linked
properly is not, routinely,
recompiled at
runtime
...
a program requiring
anything like that
^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^
is interpreted!!!
)
***************end r.s. response**********************
Ralph Silverman
········@bcfreenet.seflin.lib.fl.us
Ralph Silverman <<········@bcfreenet.seflin.lib.fl.us>> wrote:
<CUT talk of LISP>
> ***************begin r.s. response********************
>
> i guess a
> 'dynamic compiler'
> for a language is a fancy
> kind of interpreter...
> whatever the name sounds like;
> yes???
>
> after all,
> when a program actually has been
> compiled and linked
> successfully,
> it runs from binary ...
> NOT A SCRIPT!!!!!!!!!!!!!!!!!!!!!!!!!!
> ^^^^^^^^^^^^
>
> (a program compiled and linked
> properly is not, routinely,
> recompiled at
> runtime
> ...
> a program requiring
> anything like that
> ^^^^^^^^^^^^^^^^^^
> ^^^^^^^^^^^^^^^^^^
> is interpreted!!!
> )
> ***************end r.s. response**********************
What about object code that is relocatable? Is that
interpreted? It sure ain't straight block load binary
to fixed address and jmp...
What about something such as TAOS? A single object code loadable
onto multiple heterogeneous processors, with translation
taking place at load time? i.e. a single object translated
on demand for a specific processor? By your above logic,
interpreted.
What about Intel code running on an Alpha? Is that
interpreted? Or compiled? It was compiled with regard
to an Intel, but is now acting (to some extent) as
instructions for an interpreter.... By your above logic,
an existence as both compiled and interpreted....
Hmmm
In article <··········@nntp.seflin.lib.fl.us>
········@bcfreenet.seflin.lib.fl.us "Ralph Silverman" writes:
> i guess a
> 'dynamic compiler'
> for a language is a fancy
> kind of interpreter...
> whatever the name sounds like;
> yes???
No. You might like to read 'Writing Interactive Compilers and
Interpreters'. Alternately, take a look at any Forth 'interpreter',
and you'll find that there's a _compiler_. The 'interpreter'
is the address interpreter, but in some Forths, the object code
will be native code. Not that you'll often need to know this!
Forth does an excellent job of hiding such details, and revealing
them when you need them.
In that sense, Lisp is very similar. So are many Basic 'interpreters'.
> after all,
> when a program actually has been
> compiled and linked
> successfully,
> it runs from binary ...
> NOT A SCRIPT!!!!!!!!!!!!!!!!!!!!!!!!!!
> ^^^^^^^^^^^^
Not necessarily. Concepts like 'compiling' come from certain
language implementations, and their (batch) environments.
There are also C interpreters, but few people call C an
interpreted language. Some Pascal and C compilers are so fast
that you barely see them running, and there are incremental
compilers for Lisp and other languages, too. There are even
a few incremental C/C++ compilers!
'Linking' is another idea that's tied to 'batch' environments.
I find it sad that so-called 'visual' environments perpetuate
such archaic practices. They're so common that some people,
like yourself, mistake them for the _only_ way to compile.
> (a program compiled and linked
> properly is not, routinely,
> recompiled at
> runtime
> ...
> a program requiring
> anything like that
> ^^^^^^^^^^^^^^^^^^
> ^^^^^^^^^^^^^^^^^^
> is interpreted!!!
> )
This is only because of a great deal of stagnation among
compilers for languages like Pascal, C/C++, etc. In case
you've not noticed this, Java is changing this. JIT is just
another name for what PJ Brown called a "throw away compiler".
Tao Systems have used this technique in an _operating system_.
Wake up and smell the coffee (pun intended). Anyway, it'll
get much harder for you to resist pretty soon, coz you'll find
that software using such compiler tech will be available to
_you_, on your machine (whatever that is). If not, then your
opinions will be irrelevant, assuming that they are not already,
as there's already too much interest to stop it from happening.
I don't like to use the 'inevitable' to describe technology,
but I might compare it to a steamroller. ;-) I've been using
this kind of compiler for years, so it's rather satisfying to
see 'mainstream' developers discovering it too. At last we can
move on! Hurrah! It'll be very hard to deny it works when so
many people are using it.
Please forgive me if I sound a little smug...
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
In article <··········@world.std.com> ··@world.std.com "Jeff DelPapa" writes:
> Hell, there are some machine codes that qualify under that metric. If
> the machine uses some sort of microcode, the "native" machine code
> counts as interpreted. (also there are a small number of C systems
> that have limited incremental compilation. (if nothing else the ones
> for lispm's could do this.) -- One of the lispm vendors even had a
> dynamically compiling FORTRAN)
Let's not forget the interpreter that is _very_ popular with C
programmers: printf. Common Lisp programmers have format, which
kicks printf's butt, but at a price. Not everyone wants to print
a number in Roman numerals, and those that do might be happy with
some library code. Still, dynamic linking can handle that...
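For what it's worth, the Roman-numeral case is not hypothetical -- it is a
standard FORMAT directive. A minimal sketch, using only ANSI CL:
  (format nil "~@R" 1996)     ; => "MCMXCVI"  (Roman numerals)
  (format nil "~R" 1996)      ; => "one thousand nine hundred ninety-six"
  (format nil "~:D" 1234567)  ; => "1,234,567"  (grouped digits)
printf has nothing comparable built in, which is part of the "price"
mentioned above: format is effectively a small interpreted language of
its own.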
It seems to me that Lisp programmers can be more ambitious, in
that they reach for higher goals. I've not yet had a chance to
compare MFC with Garnet, but I suspect that it might be an unfair
comparison. Which tool looks "better" may depend on what you wish
to do, and how well it helps you accomplish it.
My own experience is that when I use Lisp, I consider techniques
that I dismiss as "too expensive" when I use C++. The truth is that
it just looks like too much work in C++! While you look at small
programming examples, it may seem that Lisp's advantages are small.
When you multiply the size of the code by several orders of magnitude,
Lisp begins to make a very significant difference, not only to the
amount of time to develop something, but in many other ways, too.
Unfortunately, if you've never experienced this yourself, you might
find it hard to believe.
I'm still impressed by it today. I keep thinking, "This shouldn't
be this easy, or should it?" Well, yes it should! It's the pain of
using C++ that fools me into thinking that some things are just
inherently difficult. If I think too much about how I'd do something
in Lisp, when that option isn't available to me (yes, it is possible
for that to happen), I get seriously frustrated.
Imagine that the only forms of control flow available in C were
'if' and 'goto', or if you didn't have 'struct' and 'typedef'.
That isn't even close to how ANSI C or C++ look to me, compared
to Lisp. Of course, I might be just as happy with ML, as Lisp's
syntax and relaxed attitude to type declarations isn't really
what attracts me to the language. I just discovered Lisp earlier.
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
>: > All commercial Common Lisps that I'm aware of are compiled by default.
>: > Even if you type in a "tiny specialized script" it is usually compiled
>: > before it is executed.
Humm. I've used a fair number of Common Lisps (mostly commercial
ones) that don't work that way. They have an interpreter as well
as a compiler, and interpretation is the default. However, most
Common Lisp implementations have a compiler that compiles to native
code and that can compile source files to object files -- and that
should be compiler-like enough to satisfy almost everyone. ^_^
In any case, Lisp is not an interpreted or compiled _language_.
_Implementations_ of Lisp might be interpreters or compilers or
some combination.
-- jd
>In article <··········@nntp.seflin.lib.fl.us>,
>Ralph Silverman <········@bcfreenet.seflin.lib.fl.us> wrote:
>> after all,
>> when a program actually has been
>> compiled and linked
>> successfully,
>> it runs from binary ...
>> NOT A SCRIPT!!!!!!!!!!!!!!!!!!!!!!!!!!
>> ^^^^^^^^^^^^
This view is so far out of date that it wasn't even true in the SIXTIES.
I'll mention two examples from the 70s. Both of them apply to the B6700,
a machine which was so compiler-oriented that
- it didn't have an assembler. Period. NONE. There were about 20
instructions in the (very capable for its time) operating system that
weren't generated by the ESPOL compiler, and they were hand translated
binary stuck in an array.
- the average Joe program couldn't be a compiler either; object files
had an "is-a-compiler" bit which could only be set by an operator
in the control room "blessing" the program.
Example 1. Student Fortran compiler.
The Auckland University "STUFOR" compiler was an Algol program that
read student Fortran programs, generated *native* code for them,
and called that native code.
(The original student Pascal compiler for the CDC machines did
much the same thing. Pascal, remember, is a 60s language. Wirth
reported that compiling to native code directly and jumping to it
was significantly faster than using the operating system's native
linker.)
Example 2. The REDUCE symbolic algebra system.
REDUCE is written in a Lisp dialect called PSL.
When typing at the PSL system, you could ask it to compile
a file. It did this by
reading the file,
generating _Algol_ source code,
calling the Algol compiler to generate native code.
Since the B6700 operating system let code in one object file
*dynamically* call code from another object file, problem solved.
Now just think about things like
- VCODE (a package for dynamically generating native code; you specify
the code in a sort of abstract RISC and native code is generated for
SPARC, MIPS, or something else I've forgotten -- I have VCODE and a
SPARC but not the other machines it supports).
- dynamic linking, present in UNIX System V Release 4, Win32, OS/2, VMS,
...
Run-time native code generation has been used in a lot of interactive
programming languages including BASIC (for which "throw-away compiling"
was invented), Lisp, Pop, Smalltalk, Self, Prolog, ML, Oberon, ...
The original SNOBOL implementation compiled to threaded code, but SPITBOL
compiled to native code dynamically.
The B6700 is in fact the only computer I've ever had my hands on where
dynamic code generation was difficult. (Ok, so flushing the relevant
part of the I-cache is a nuisance, but it is _only_ a nuisance.)
Oh yes, the punchline is this: on the B6700, "scripts" in their job
control language (called the WorkFlow Language, WFL) were in fact
*compiled*, so just because something is a script, doesn't mean it
can't be or isn't compiled.
--
Mixed Member Proportional---a *great* way to vote!
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
Richard A. O'Keefe <··@goanna.cs.rmit.edu.au> wrote in article
<············@goanna.cs.rmit.edu.au>...
> >In article <··········@nntp.seflin.lib.fl.us>,
> >Ralph Silverman <········@bcfreenet.seflin.lib.fl.us> wrote:
> >> after all,
> >> when a program actually has been
> >> compiled and linked
> >> successfully,
> >> it runs from binary ...
> >> NOT A SCRIPT!!!!!!!!!!!!!!!!!!!!!!!!!!
> >> ^^^^^^^^^^^^
>
I believe compiling means translating from one language to another. It does
not have to be a translation into native binary code.
Chris
Chris wrote:
> Richard A. O'Keefe <··@goanna.cs.rmit.edu.au> wrote in article
> <············@goanna.cs.rmit.edu.au>...
> [...]
> I believe compiling means translating from one language to another. It does
> not have to be a translation into native binary code.
> Chris
Uh, I think that compiling means to translate anything into machine code.
Interpreted means to identify a string value with a corresponding machine
code instruction.
>Uh I think that Compiling means to translate anything into machine code.
>Interpreted means to identify a string value with an according machine
>code instruction.
----------
I don't think so. The compiler usually translates to assembly, then the
assembler translates to machine code. Sometimes the assembler is built in,
sometimes it may translate in one pass.
What about the Java compiler? Is the code in machine language? Isn't it a
compiler?
Chris
Chris wrote:
>
> >Uh I think that Compiling means to translate anything into machine code.
> >Interpreted means to identify a string value with an according machine
> >code instruction.
> ----------
> I don't think so. The compiler usually translate to assembly, then
> the assembler translates to machine code. Sometimes the assembler is
> built in, sometimes it may translate in one pass.
> What about the Java compiler ? Is the code in machine language ?
> Isn't it a compiler ?
The compiler doesn't necessarily translate to assembly; the ones
that do are multipass compilers. Single pass compilers translate
source code directly to object code.
Java compiles to a bytecode; this is either interpreted by a program
acting as a virtual processor, or it is the instruction set of a real
processor. In both senses the bytecode has been compiled. In only
one sense is the code strictly in machine language. The Java virtual
machine is an interpreter; it is executing idealised machine code.
From: J. Christian Blanchette
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <3286D467.EA4@ivic.qc.ca>
>
> What about the Java compiler ? Is the code in machine language ? Isn't it a
> compiler ?
Java source code is plain Unicode text (.java); it is compiled into .class
binaries compatible with the "Java Virtual Machine". The JVM is usually
implemented as an interpreter, although a program could convert Java bytecodes
into native machine code.
Jas.
"I take pride as the king of illiterature."
- K.C.
In article <·············@earthlink.net> ···@earthlink.net "Bull Horse" writes:
> Uh, I think that compiling means to translate anything into machine code.
> Interpreted means to identify a string value with a corresponding machine
> code instruction.
I'd use a more general definition, and say that "compiling" is
when you translate from one form into another. It might not even
be a one-way translation. I recall reading that QuickBasic code
exists in one of 3 different forms, depending on the state of the
"interpreter" (running, editing, and a 3rd state).
If we want to, we can imagine that a typical compiler will translate
one string (e.g. source code) into another (e.g. machine code). Not
all compilers use strings, however. Compiler theory is just a special
case for strings! We can even represent trees and other data structures
as strings. The reverse is also possible.
Isn't this fun?
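To make that concrete, here is a minimal sketch of "compiling" in this
broad sense: an arithmetic expression tree is translated into a different
form (a toy instruction list), and a separate function interprets the
result. The names are made up for illustration; nothing here produces
native code:
  (defun compile-expr (expr)
    "Translate EXPR, e.g. (+ 1 (* 2 3)), into stack-machine instructions."
    (if (atom expr)
        (list (list 'push expr))
        (destructuring-bind (op a b) expr
          (append (compile-expr a)
                  (compile-expr b)
                  (list (list 'apply-op op))))))
  (defun run-stack-code (code)
    "Interpret the instruction list produced by COMPILE-EXPR."
    (let ((stack '()))
      (dolist (insn code (first stack))
        (ecase (first insn)
          (push     (push (second insn) stack))
          (apply-op (let ((b (pop stack)) (a (pop stack)))
                      (push (funcall (second insn) a b) stack)))))))
  ;; (compile-expr '(+ 1 (* 2 3)))
  ;;   => ((PUSH 1) (PUSH 2) (PUSH 3) (APPLY-OP *) (APPLY-OP +))
  ;; (run-stack-code (compile-expr '(+ 1 (* 2 3))))  => 7
The translation step is a compiler by the definition above, even though
what it emits is just another Lisp data structure.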
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
From: Dan Mercer
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <55qi3l$j5p@dawn.mmm.com>
Bull Horse (···@earthlink.net) wrote:
: Chris wrote:
: > Richard A. O'Keefe <··@goanna.cs.rmit.edu.au> wrote in article
: > <············@goanna.cs.rmit.edu.au>...
: > [...]
: > I believe compiling means translating from one language to another. It
: > does not have to be a translation into native binary code.
: > Chris
: Uh, I think that compiling means to translate anything into machine code.
: Interpreted means to identify a string value with a corresponding machine
: code instruction.
Then what does YACC stand for? (Yet Another Compiler-Compiler). Lex, yacc,
and cfront all compile input into C source code, which must be further
compiled into machine code.
--
Dan Mercer
Reply To: ········@mmm.com
"Compiled" vs "Interpreted" are merely words -- if
any programming language which can be compiled into a byte-code
is to be called a "compiled" language, we should drop altogether
the concept of an "interpreted" language, since for
any given programming language, a byte-code and a byte-code
compiler can be found.
But if one has to have this distinction, Lisp should
fall into the "interpreted" category, since the
"compiled" byte-code is interpreted by sofware, not
the hardware. I don't know about the Lisp
machines though, do they (or did they) have hardware
instructions corresponding one-to-one with read/eval/print?
In article <·············@dma.isg.mot.com>, Mukesh Prasad
<·······@dma.isg.mot.com> wrote:
> "Compiled" vs "Interpreted" are merely words -- if
> any programming language which can be compiled into a byte-code
> is to be called a "compiled" language, we should drop altogether
> the concept of an "interpreted" language, since for
> any given programming language, a byte-code and a byte-code
> compiler can be found.
Ahh, you guys still don't get it.
A compiler-based system ->
Language A compiles to language B.
How language B runs (native, as byte code, ...) is not
of any importance. What is important is that language A gets
compiled into a target language (B).
An interpreter-based system ->
Language A is being read by an interpreter.
This results in some internal representation
for programs of the language A.
This internal representation then is being interpreted.
The internal representation is a direct mapping
from the original source.
To give you an example:
A Lisp compiler takes expressions in the Lisp
language and compiles them, for example, to PowerPC
machine code. The PowerPC processor then executes
this machine code. This is a compiler-based system
(like Macintosh Common Lisp).
A Lisp compiler takes expressions in the Lisp
language and compiles them to Ivory (!!)
instructions. The Ivory code is then
interpreted by a virtual machine running
on a DEC Alpha processor.
This is still a compiler-based system
(like Open Genera from Symbolics).
A Lisp interpreter takes expressions in
the Lisp language, interns them and
executes this interned representation
of the program. Still, every statement
has to be examined (is it a macro,
is it a function call, is it a constant, ...).
Still, macro expansion
will happen for every loop cycle, over and
over. You can get the original expression back
and change it, because there is a one-to-one
relation between the original source
and the interned representation.
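A minimal sketch of that last point, in plain Common Lisp (the names are
invented, and note that some interpreters cache expansions, so the
"expanded over and over" behaviour is implementation-dependent):
  (defparameter *expansions* 0)
  (defmacro counted-square (x)
    (incf *expansions*)          ; runs at macro-expansion time only
    `(* ,x ,x))
  (defun sum-of-squares (n)
    (let ((acc 0))
      (dotimes (i n acc)
        (incf acc (counted-square i)))))
  (compile 'sum-of-squares)      ; expands COUNTED-SQUARE once, now
  (let ((before *expansions*))
    (sum-of-squares 1000)
    (- *expansions* before))     ; => 0: no expansion at run time
A naive interpreter that re-expands the macro call on each evaluation
would instead bump *EXPANSIONS* on every loop iteration.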
> But if one has to have this distinction, Lisp should
> fall into the "interpreted" category, since the
> "compiled" byte-code is interpreted by sofware, not
> the hardware.
Ahh, no.
There are also tons of compilers that generate optimized
machine code for CISC and RISC architectures.
>I don't know about the Lisp
> machines though, do they (or did they) have hardware
> instructions corresponding one-to-one with read/eval/print?
Sure not.
Read a basic computer science book, where they explain the
difference (get: Structure and Interpretation of
Computer Programs, by Abelson & Sussman, MIT Press).
Rainer Joswig
From: Patrick Juola
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <55sq8s$ok0@news.ox.ac.uk>
In article <·············@dma.isg.mot.com> Mukesh Prasad <·······@dma.isg.mot.com> writes:
>"Compiled" vs "Interpreted" are merely words -- if
>any programming language which can be compiled into a byte-code
>is to be called a "compiled" language, we should drop altogether
>the concept of an "interpreted" language, since for
>any given programming language, a byte-code and a byte-code
>compiler can be found.
>
>But if one has to have this distinction, Lisp should
>fall into the "interpreted" category, since the
>"compiled" byte-code is interpreted by sofware, not
>the hardware. I don't know about the Lisp
>machines though, do they (or did they) have hardware
>instructions corresponding one-to-one with read/eval/print?
You're palming a card. C, a compiled language by anyone's reckoning,
has no (single) hardware instruction corresponding to printf. Or even
necessarily to ||, because of the complex semantics of evaluation.
As a matter of fact, many of us consider a language that *does*
correspond 1-1 with a set of hardware instructions to be an
assembly language.... and the advantage of "compiled" languages
is that they let one get *away* from this level of detail.
Patrick
From: Ken Bibb
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <kbibb.847382207@shellx>
In <·············@dma.isg.mot.com> Mukesh Prasad <·······@dma.isg.mot.com> writes:
>"Compiled" vs "Interpreted" are merely words -- if
>any programming language which can be compiled into a byte-code
>is to be called a "compiled" language, we should drop altogether
>the concept of an "interpreted" language, since for
>any given programming language, a byte-code and a byte-code
>compiler can be found.
>But if one has to have this distinction, Lisp should
>fall into the "interpreted" category, since the
>"compiled" byte-code is interpreted by sofware, not
>the hardware.
This is not necessarily the case. Most modern lisps allow you to create
binary executables.
--
Ken Bibb "If the boundary breaks I'm no longer alone
·····@arastar.com Don't discourage me
·····@best.com Bring out the stars/On the first day"
·····@csd.sgi.com David Sylvian--"The First Day"
In article <·············@dma.isg.mot.com>,
Mukesh Prasad <·······@dma.isg.mot.com> wrote:
>But if one has to have this distinction, Lisp should
>fall into the "interpreted" category, since the
>"compiled" byte-code is interpreted by sofware, not
>the hardware.
There are many Lisp compilers which compile to native code, not byte
code. (It seems that many have been pointing this out on this thread
for some time, to no avail...)
--
== Seth Tisue <·······@nwu.edu> http://www.cs.nwu.edu/~tisue/
In article <··········@Godzilla.cs.nwu.edu>,
Seth Tisue <·····@cs.nwu.edu> wrote:
>In article <·············@dma.isg.mot.com>,
>Mukesh Prasad <·······@dma.isg.mot.com> wrote:
>>But if one has to have this distinction, Lisp should
>>fall into the "interpreted" category, since the
>>"compiled" byte-code is interpreted by sofware, not
>>the hardware.
>
>There are many Lisp compilers which compile to native code, not byte
>code. (It seems that many have been pointing this out on this thread
>for some time, to no avail...)
Not only that, many modern lisps use incremental compilation. A good
test is to use DEFUN to define a function and then immediately use
SYMBOL-FUNCTION to look at the definition of that function. If you are
using an interpreted lisp, you will see the lambda expression that you
specified with DEFUN, but if your lisp has incremental compilation
(such as Macintosh Common Lisp) you will instead see only that the
value is a compiled function. On systems that are not incrementally
compiled, you can probably still use (COMPILE 'FOO) in order to compile
an individual function, which may be confirmed by again using
(SYMBOL-FUNCTION 'FOO) to view the function definition.
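For anyone who wants to try the test just described, a minimal sketch
(any ANSI Common Lisp; FOO is just an example name):
  (defun foo (x) (* x x))
  (symbol-function 'foo)
  ;; interpreted Lisp: the lambda expression or an interpreted-function
  ;; object; incrementally compiling Lisp (e.g. MCL): a compiled function
  (compile 'foo)                 ; force compilation where it isn't automatic
  (compiled-function-p (symbol-function 'foo))   ; => T
  ;; (disassemble 'foo) will then show machine code on implementations
  ;; that compile to native code.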
If you want more proof that some lisps are truly compiled, consider the
fact that Macintosh Common Lisp has separate versions for the PowerPC
and for 68K Macs. The 68K version will run on either type of machine
(using the system's 68K interpreter when running on the PowerPC), but
the PowerPC version will not run on a 68K machine at all (because it's
native PowerPC code and it won't run on 68K Macs). The same goes for
compiled files and saved applications that you can produce with the two
versions.
If you compile a lisp file (foo.lisp) with MCL 3.0 (the 68K version),
you get a fast-loading file (foo.fasl) that contains native 68K code.
If you compile the same file using MCL 3.9PPC, you get a file foo.pfsl
that contains native PowerPC code and is not usable on a 68K Mac. If
you use SAVE-APPLICATION to produce a double-clickable application
program, the resulting program will not run on a 68K Mac if it was
produced by the PowerPC version of MCL.
If programs were being "compiled" to byte-code, there obviously would
be no need for all this -- just re-implement the byte-code interpreter
on the PowerPC.
--
Dave Seaman ·······@purdue.edu
++++ stop the execution of Mumia Abu-Jamal ++++
++++ if you agree copy these lines to your sig ++++
++++ see http://www.xs4all.nl/~tank/spg-l/sigaction.htm ++++
From: Jim Balter
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <jqbE0Iu9B.Awv@netcom.com>
In article <··········@godzilla.cs.nwu.edu>,
Seth Tisue <·····@cs.nwu.edu> wrote:
>In article <·············@dma.isg.mot.com>,
>Mukesh Prasad <·······@dma.isg.mot.com> wrote:
>>But if one has to have this distinction, Lisp should
>>fall into the "interpreted" category, since the
>>"compiled" byte-code is interpreted by sofware, not
>>the hardware.
>
>There are many Lisp compilers which compile to native code, not byte
>code. (It seems that many have been pointing this out on this thread
>for some time, to no avail...)
Ignorance memes are highly resistant and mutate readily.
--
<J Q B>
> A language can be called "interpreted", in a non-technical but widely
> used sense, when the implementations most commonly used for production
> applications fail to compile to native code. The Basic dialect used in
> Visual Basic, for example, fits this criterion (though it may not for
> much longer); Lisp doesn't.
A language could be called "compiled" when the code that runs is different
from the source code, and "interpreted" when the source is translated at
each execution.
When the target is native code, we call it a native compiler.
Erik Naggum wrote:
>
> as has already been mentioned here at the myth-debunking central, many Lisp
> systems "interpret" code by compiling it first, then executing the compiled
> code.
So that's different from the "sleight of hand" I mentioned? Are you
a one-person myth-debunking central, or a myth-creation one?
> however, the amazing thing is that Lisp systems that don't have
> `eval' (Scheme), force the programmers who need to evaluate expressions at
I was expecting people to cite Scheme, T, Nil et al,
but it didn't happen in hordes (though there were very
interesting arguments on the order of "I never use Eval,
therefore Lisp is not interpreted" and "Until more than
N Basic compilers exist, Basic will be an interpreted
language and Lisp will be a compiled language...")
One interesting thing is, I have never seen C mentioned
as a variant of BCPL, and I have seldom seen Pascal
referred to as a kind of Algol. And nobody calls
C++ "a kind of C" anymore. Yet Scheme is even now
a "Lisp system"!
Perhaps instead of drawing fine lines in the sand about
distinctions between interpreted and compiled, and
trying to make being "compiled" the holy grail of Lisp
systems, the Lisp community should have instead tried
to see how well Lisp does as an Internet language!
Nobody cares if Java is an interpreted language, as long as
it does what they need done.
Or on second thoughts, perhaps Lisp could become a Smalltalk-like
language -- a source of several ideas, instead
of something in limbo, always with a small
but vocal minority needing to defend it by claiming
it is not interpreted and such.
From: William Paul Vrotney
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <vrotneyE0r1nr.46@netcom.com>
In article <·············@dma.isg.mot.com> Mukesh Prasad
<·······@dma.isg.mot.com> writes:
>
> Erik Naggum wrote:
> >
> > as has already been mentioned here at the myth-debunking central, many Lisp
> > systems "interpret" code by compiling it first, then executing the compiled
> > code.
>
> So that's different from the "sleigh of hand" I mentioned? Are you
> a one-person myth-debunking central, or a myth-creation one?
>
>
> Perhaps instead of drawing fine lines in the sand about
> distinctions between interpreted and compiled, and
> trying to make being "compiled" the holy grail of Lisp
> systems, the Lisp community should have instead tried
> to see how well Lisp does as an Internet language!
> Nobody cares if Java is an interpreted language, as long as
> it does what they need done.
>
Check out
http://www.ai.mit.edu/projects/iiip/doc/cl-http/home-page.html = Common Lisp
Hypermedia Server
> Or on second thoughts, perhaps Lisp could become a Smalltalk
> like language -- a source of several ideas, instead
> of something in a limbo with always having a small
> but vocal minority needing to defend it by claiming
> it is not interpreted and such.
This reads as though Lisp has a mind of its own. Lisp is good for AI, I
didn't know it was that good! :-)
--
William P. Vrotney - ·······@netcom.com
From: Erik Naggum
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <3056786391534295@naggum.no>
* Mukesh Prasad
| Yet Scheme is even now a "Lisp system"!
it's interesting to see just how little you know of what you speak.
Schemers call Scheme a Lisp system. many Schemers become irate when you
try to tell them that Scheme is not a Lisp.
| Or on second thoughts, perhaps Lisp could become a Smalltalk like
| language -- a source of several ideas, instead of something in a limbo
| with always having a small but vocal minority needing to defend it by
| claiming it is not interpreted and such.
this "source of several ideas" thing has been an ongoing process since its
inception. I'm surprised that you don't know this. people learn from Lisp
(then go off to invent a new syntax) all the time, all over the place.
when I was only an egg, at least I knew it. Mukesh Prasad may want to
investigate the option of _listening_ to those who know more than him,
instead of making a fool out of himself.
#\Erik
--
Please address private replies to "erik". Mail to "nobody" is discarded.
> it's interesting to see just how little you know of what you speak.
> Schemers call Scheme a Lisp system. many Schemers become irate when you
> try to tell them that Scheme is not a Lisp.
Now if Scheme were a wild success, they would become
irate if you called it Lisp. Amazing how
it works, is it not?
(But I will admit I don't know enough Scheme to debate
how it is or is not Lisp -- I was just making
a point about human behavior...)
> | Or on second thoughts, perhaps Lisp could become a Smalltalk like
> | language -- a source of several ideas, instead of something in a limbo
> | with always having a small but vocal minority needing to defend it by
> | claiming it is not interpreted and such.
> this "source of several ideas" thing has been an ongoing process since its
> inception. I'm surprised that you don't know this. people learn from Lisp
> (then go off to invent a new syntax) all the time, all over the place.
Not too many people, Erik. The Emacs stuff that you _do_ know
about, is more an exception, not a rule. Out here, C and C++ rule,
and Java seems to be on the rise. Lisp is not really
in the picture anymore. A pity, but it made too many promises
and for some reason didn't deliver. I personally suspect it is
because it makes things too easy and encourages lax discipline,
but I may be wrong.
> when I was only an egg, at least I knew it. Mukesh Prasad may want to
Hmmm... If you depend upon things you knew as an egg (i.e.
never bothered to actually learn), no wonder you come out
with the proclamations you do!
> investigate the option of _listening_ to those who know more than him,
> instead of making a fool out of himself.
Many times, the only way not to appear a fool to fools is to
join in their foolishness.
From: William Paul Vrotney
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <vrotneyE0svpw.B41@netcom.com>
In article <·············@dma.isg.mot.com> Mukesh Prasad
<·······@dma.isg.mot.com> writes:
> Not too many people, Erik. The Emacs stuff that you _do_ know
> about, is more an exception, not a rule. Out here, C and C++ rule,
> and Java seems to be on the rise. Lisp is not really
> in the picture anymore. A pity, but it made too many promises
^
|
Sure, you think it is a pity. What hypocrisy!
Yes you are right Emacs, Lisp and AI are the exception. So why are *you*
posting your predictable opinions to such exceptional news groups? Is it
because you want to advance Emacs, Lisp or AI? ... I don't think so ...
--
William P. Vrotney - ·······@netcom.com
In article <·············@dma.isg.mot.com>
·······@dma.isg.mot.com "Mukesh Prasad" writes:
> Now if Scheme were a wild success, they would become
> irate if you called it Lisp. Amazing how
> it works, is it not?
This is a hypothetical question. It might not happen that way at
all, and my personal opinion is that this is more likely. Whatever
Scheme's success may be, I think that Scheme and CL programmers
can live together far more harmoniously than C and Pascal programmers
ever could. So I doubt anybody will make a fuss.
Besides, you didn't say anything about Scheme being more successful
than CL, did you? ;-)
> (But I will admit I don't know enough Scheme to debate
> how it is or is not Lisp -- I was just making
> a point about human behavior...)
Noted. It's your point, not mine. While C and Pascal programmers
may behave that way, I've not noticed CL and Scheme programmers
being so childish. Ah, but using Lisp is a sign of maturity! ;-)
> Not too many people, Erik. The Emacs stuff that you _do_ know
> about, is more an exception, not a rule. Out here, C and C++ rule,
> and Java seems to be on the rise. Lisp is not really
> in the picture anymore. A pity, but it made too many promises
> and for some reason didn't deliver. I personally suspect it is
> because it makes things too easy and encourages lax discipline,
> but I may be wrong.
Yes, you may be wrong. My suspicion, based on the opinions of
C++ programmers that I've seen posted to UseNet, is that some
people just like programming to be _difficult_, and refuse to
use anything "too easy". In fact, they'll go further, and claim
that such tools can't be used.
This is curious behaviour, considering the evidence to the
contrary. However, this evidence is frequently drawn to their
attention, in such discussions, and the issue is forgotten.
> Hmmm... If you depend upon things you knew as an egg (i.e.
> never bothered to actualy learn,) no wonder you come out
> with the proclamations you do!
What have you learned, eh? C'mon, quote some references that
support your assertions. Then we might have something to discuss.
> > investigate the option of _listening_ to those who know more than him,
> > instead of making a fool out of himself.
>
> Many times, the only way not to appear a fool to fools is to
> join in their foolishness.
As I'm doing. ;-) I could just ignore you, but it's more fun
this way. We're playing the "who knows more about compilers"
game, in which nobody scores any points (none that count, anyway),
there are no prizes (just the survival or death of certain memes),
and we can all walk away thinking, "Well, that showed him!"
It would be very childish, if this kind of stupidity didn't affect
the nature of the software we use and the tools used to create it.
The effect may be small, but every meme and every head those memes
live in plays its part.
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
From: Erik Naggum
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <3056908648072943@naggum.no>
* Mukesh Prasad
| > when I was only an egg, at least I knew it. Mukesh Prasad may want to
|
| Hmmm... If you depend upon things you knew as an egg (i.e. never bothered
| to actualy learn,) no wonder you come out with the proclamations you do!
the expression "I am only an egg" refers to Robert A. Heinlein's legendary
"Stranger on a Strange Land". so does the word "grok", which I assume is
even more unfamiliar to you, both as a concept and as a word. it was first
published in 1961. at that time I was _literally_ only an egg, but that is
not what the expression refers to.
#\Erik
--
Please address private replies to "erik". Mail to "nobody" is discarded.
Erik Naggum wrote:
[snip]
> the expression "I am only an egg" refers to Robert A. Heinlein's legendary
> "Stranger on a Strange Land". so does the word "grok", which I assume is
> even more unfamiliar to you, both as a concept and as a word. it was first
> published in 1961. at that time I was _literally_ only an egg, but that is
> not what the expression refers to.
> #\Erik
We digress a bit, but "I am only an egg" was actually an
expression of politeness and humility - concepts you apparently
have yet to grok :-)
First you assume I don't know Lisp so you can
get away with erroneous statements, then you assume
I haven't read book xyz -- and you always
manage to pick the wrong topic. Are you
always this lucky in life too? Here is a free
hint -- embed references to Joyce's works (get a copy of
Ulysses) if you want to talk about commonly read
things that I haven't read.
/Mukesh
In article <·············@dma.isg.mot.com>
·······@dma.isg.mot.com "Mukesh Prasad" writes:
> First you assume I don't know Lisp so you can
> get away with erroneous statements, then you assume
This appears to be a very reasonable conclusion (not an assumption),
based on your views. Until you can demonstrate your superior knowledge
and wisdom on this subject, it should be safe to assume that your
views are based on ignorance - which we've been trying to help you
change. Of course, if you prefer to either remain ignorant, or
appear that way, then I don't know what we can do for you.
You might as well take your views to a place (newsgroup? whatever?)
where they'll be better appreciated/tolerated.
> I haven't read book xyz -- and you always
> manage to pick the wrong topic. Are you
Which book was that? Are you saying that you've read SIOCP? What,
if anything, did you learn from it? Please tell us, so that we
can avoid insulting your intelligence <ahem>, and so that you
can grant us the same courtesy.
> always this lucky in life too? Here is a free
> hint -- embed references to Joyce's works (get a copy of
> Ulysses) if you want to talk about commonly read
> things that I haven't read.
So you _have_ read SIOCP? Excellent. Did you understand the last
two chapters of the book? Have you also read PJ Brown's book, and
did you understand his definitions of "compiler" and "interpreter"?
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
From: Erik Naggum
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <3057306460618975@naggum.no>
* Mukesh Prasad
| First you assume I don't know Lisp ...
it's an undeniable, irrefutable _fact_ that you don't know Lisp or anything
you have been saying about interpreters in this thread. the proof is in
your own articles. if you did know Lisp, you could not have said anything
of what you have said. it would be a crime of logic to conclude anything
_other_ than that you do not know what you're talking about.
#\Erik
--
Please address private replies to "erik". Mail to "nobody" is discarded.
Mukesh Prasad (·······@dma.isg.mot.com) wrote:
> Or on second thoughts, perhaps Lisp could become a Smalltalk
> like language -- a source of several ideas, instead
> of something in a limbo with always having a small
> but vocal minority needing to defend it by claiming
> it is not interpreted and such.
You need to realize that Lisp is the second oldest high-level programming
language. It has always been "a source of several ideas", in that every
functional language has had its roots in Lisp, and lots of stuff has been
carried over into other types of languages as well.
Greetings,
Jens.
--
Internet: ···········@bbn.hp.com Phone: +49-7031-14-7698 (TELNET 778-7698)
MausNet: [currently offline] Fax: +49-7031-14-7351
PGP: 06 04 1C 35 7B DC 1F 26 As the air to a bird, or the sea to a fish,
0x555DA8B5 BB A2 F0 66 77 75 E1 08 so is contempt to the contemptible. [Blake]
Jens Kilian wrote:
[snip]
> language. It has always been "a source of several ideas", in that every
> functional language has had its roots in Lisp, and lots of stuff has
That's true. Yet, for whatever reasons, none of the functional
languages have matched even the popularity of Lisp
itself, much less surpass it to become one of the highly popular
languages.
In article <············@dma.isg.mot.com>
·······@dma.isg.mot.com "Mukesh Prasad" writes:
> That's true. Yet, for whatever reasons, none of the functional
> languages have matched even the popularity of Lisp
> itself, much less surpass it to become one of the highly popular
> languages.
15 years ago, the same might've been said about OOP. All
you're telling us is where we currently stand in the history
of functional languages. I think we already know that.
Once again, you're trying to confuse the issue by misrepresenting
the facts. It doesn't help your argument. In fact, you appear
to be rather "clueless" when it comes to compile theory. While
Richard Gabriel has suggested some excellent books on the subject,
I'd recommend that you start with something more basic, and which
specifically explains how compiler theory relates to _interactive_
language systems.
You, on the other hand, have not given any references for
your sources of information. Where's _your_ credibility?
C'mon, put up or shut up. ;-)
It could just be that you're confusing "interactive" with
"interpreted". The two are _not_ the same, as you would be
aware by now if you were paying attention. So, kindly go away
and start reading (and hopefully _learning_ from) some of
these books, and then come back when you can refrain from
trying to teach your grandmother to suck eggs.
The difference between ignorance and stupidity is that an
ignorant person can be educated. Well, we've done our best
to help you in this regard. The next bit is all down to you.
Good luck!
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
In article <·············@dma.isg.mot.com>
·······@dma.isg.mot.com "Mukesh Prasad" writes:
> I was expecting people to cite Scheme, T, Nil et al,
> but it didn't happen in hordes (though there were very
> interesting arguments on the order of "I never use Eval,
> therefore Lisp is not interpreted" and "Until more than
> N Basic compilers exist, Basic will be an interpreted
> language and Lisp will be a compiled language...")
Check my email to you, and you'll find Scheme mentioned.
EVAL is a relic from an ancient time.
> One interesting thing is, I have never seen C mentioned
> as a variant of BCPL, and I have seldom seen Pascal
> referred to as a kind of Algol. And nobody calls
> C++ "a kind of C" anymore. Yet Scheme is even now
> a "Lisp system"!
Scheme is just a dialect of Lisp. VB is a dialect of Basic,
but who calls it Basic? The name distinguishes it from other
dialects.
Sadly, some people simply refer to Common Lisp as "Lisp".
This can be confusing. If you want a short name, refer to
it as CL. When I refer to Lisp, I include CL, Scheme, Dylan,
and anything else that is Lisp-like. Check the Lisp FAQ
for examples.
> Perhaps instead of drawing fine lines in the sand about
> distinctions between interpreted and compiled, and
> trying to make being "compiled" the holy grail of Lisp
> systems, the Lisp community should have instead tried
> to see how well Lisp does as an Internet language!
> Nobody cares if Java is an interpreted language, as long as
> it does what they need done.
I'm one of the few Lisp programmers who'd like to see Lisp
become more popular. I guess most people are happy with Lisp
the way it is.
> Or on second thoughts, perhaps Lisp could become a Smalltalk
> like language -- a source of several ideas, instead
> of something in a limbo with always having a small
> but vocal minority needing to defend it by claiming
> it is not interpreted and such.
Actually, it's only fools like yourself calling it interpreted
that cause problems. You're confusing implementation details
with the language. Read a few books on compiler theory, esp
books like Brown's. <hint, hint>
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
Cyber Surfer wrote:
[snip]
>Actually, it's only fools like yourself calling it interpreted
>that cause problems. You're confusing implementation details
>with the language. Read a few books on compiler theory, esp
>books like Brown's. <hint, hint>
[snip]
Now what was the provocation for all these direct insults?
I am not singling you out -- but it seems a lot of
modern-day Lispers have been resorting to such tactics.
Well, at least it makes me glad I am not doing any more Lisp,
I might have been working with people like this!
In article <·············@dma.isg.mot.com>
·······@dma.isg.mot.com "Mukesh Prasad" writes:
> Cyber Surfer wrote:
> [snip]
> >Actually, it's only fools like yourself calling it interpreted
> >that cause problems. You're confusing implementation details
> >with the language. Read a few books on compiler theory, esp
> >books like Brown's. <hint, hint>
> [snip]
>
> Now what was the provocation for all these direct insults?
> I am not singling you out -- but it seems a lot of
> modern-day Lispers have been resorting to such tactics.
> Well, at least it makes me glad I am not doing any more Lisp,
> I might have been working with people like this!
_You're_ the one making a fool of yourself. Go away and read
some books about compiler theory - _interactive_ compilers, that
is. You seem unable to understand some very basic ideas.
Why not start with an excellent introduction, "Writing Interactive
Compilers and Interpreters". P.J. Brown, ISBN 0 471 27609 X, ISBN
0471 100722 pbk. John Wiley & Sons Ltd. Please come back after
reading this fine book. _Then_ we may have something to discuss.
Meanwhile, I humbly suggest that you're trying to teach your
grandmother to suck eggs...Read Brown's book and you'll begin
to understand why.
Martin Rodgers
Enrapture Limited
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
Mukesh Prasad <·······@dma.isg.mot.com> writes:
>Ah, "compiling to native code" brings up a different issue,
>that of whether or not you want to allow eval in the
>language. If you do, there are some sleigh of hands
>involved (like hiding an interpreter in your "compiled"
>executable.)
Wrong. A *compiler* in the executable will do fine.
What's more, a *dynamically linked* compiler will also do fine,
so no space need actually be taken up in the object file.
For example, in DEC-10 Prolog, the compiler was a "shared segment"
which was swapped in when you loaded a file and swapped out again
when it had finished.
Take Oberon as another example. Looks like a stripped down Pascal.
Acts like a stripped down Pascal, _except_ it dynamically loads
new modules, and in one implementation, the module loader generates
new native code from a machine-independent compiled format.
>If you don't, is it Lisp?
Well, it might be Scheme...
--
Mixed Member Proportional---a *great* way to vote!
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
In article <············@goanna.cs.rmit.edu.au>, ··@goanna.cs.rmit.edu.au (Richard A. O'Keefe) writes:
|> Mukesh Prasad <·······@dma.isg.mot.com> writes:
|> >Ah, "compiling to native code" brings up a different issue,
|> >that of whether or not you want to allow eval in the
|> >language. If you do, there is some sleight of hand
|> >involved (like hiding an interpreter in your "compiled"
|> >executable.)
|>
|> Wrong. A *compiler* in the executable will do fine.
|> What's more, a *dynamically linked* compiler will also do fine,
|> so no space need actually be taken up in the object file.
Just a follow-up to the above: in a Lisp I implemented for the old
IBM 360/370 line, eval just called the compiler, ran the native code
produced, marked the function for garbage collection, then returned the
values. BTW, the reason for this was that I detested the way other
Lisps had/enjoyed differences between compiler and eval semantics.
Jeff Barnett
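For anyone unfamiliar with the technique described above, a minimal
Common Lisp sketch of an eval built on top of the compiler might look
like the following. This is an illustration only - the name
compiling-eval is invented here, and it is not Barnett's 360/370 code:

  ;; Compile the form into a temporary function, call it, and let the
  ;; function object be garbage collected afterwards.
  (defun compiling-eval (form)
    (funcall (compile nil `(lambda () ,form))))

  ;; (compiling-eval '(+ 1 (* 2 3)))  =>  7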
Jeff Barnett wrote:
> Just a follow-up to the above: in a Lisp I implemented for the old
> IBM 360/370 line, eval just called the compiler, ran the native code
> produced, marked the function for garbage collection, then returned the
> values. BTW, the reason for this was that I detested the way other
> Lisps had/enjoyed differences between compiler and eval semantics.
> Jeff Barnett
This is a good technique -- and as has been pointed out
earlier, on modern operating systems even
this is not necessary because dynamic linking
is available on most systems, so the language
processor can simply be made available as
a dynamic library at minimal overhead. No
reason even to "run" the compiler, when
you can just call it in your own address space.
(Though, of course, this does not mean that one is free
from the obligation of making the language
processor available except in particular dialects...)
Richard A. O'Keefe wrote:
>
>[snip]
> ... A *compiler* in the executable will do fine.
> What's more, a *dynamically linked* compiler will also do fine,
> so no space need actually be taken up in the object file.
Correct, "hiding an interpreter" was an example strategy.
But you must hide *something* in the run-time environment,
and invoke it at run-time. As opposed to doing it
all at compile time.
Whether or not it takes up extra space, and whether
or not it is dynamically linked, is an operating
system dependent issue, and is not relevant from
a language point of view. (But if you are just
trying to get people to be interested in Lisp, it is
an actual issue of concern. But laying and defending a false
foundation in order to raise interest, does not give
one a good start.)
> >If you don't, is it Lisp?
> Well, it might be Scheme...
Sure. So why not say what you mean, instead
of talking about Lisp and switching to
a particular dialect in the middle?
From: Warren Sarle
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <E0zDAL.5xo@unx.sas.com>
Please stop cross-posting these interminable programming threads to
irrelevant newsgroups.
In article <·············@dma.isg.mot.com>, Mukesh Prasad <·······@dma.isg.mot.com> writes:
|> Richard A. O'Keefe wrote:
|> >
|> >[snip]
|> ... A *compiler* in the executable will do fine.
|> > What's more, a *dynamically linked* compiler will also do fine,
|> > so no space need actually be taken up in the object file.
|>
|> Correct, "hiding an interpreter" was an example strategy.
|> But you must hide *something* in the run-time environment,
|> and invoke it at run-time. As opposed to doing it
|> all at compile time.
|>
|> Whether or not it takes up extra space, and whether
|> or not it is dynamically linked, is an operating
|> system dependent issue, and is not relevant from
|> a language point of view. (But if you are just
|> trying to get people to be interested in Lisp, it is
|> an actual issue of concern. But laying and defending a false
|> foundation in order to raise interest, does not give
|> one a good start.)
|>
|> > >If you don't, is it Lisp?
|> > Well, it might be Scheme...
|>
|> Sure. So why not say what you mean, instead
|> of talking about Lisp and switching to
|> a particular dialect in the middle?
|>
--
Warren S. Sarle SAS Institute Inc. The opinions expressed here
······@unx.sas.com SAS Campus Drive are mine and not necessarily
(919) 677-8000 Cary, NC 27513, USA those of SAS Institute.
*** Do not send me unsolicited commercial or political email! ***
(Patrick Juola) wrote:
: You're palming a card. C, a compiled language by anyone's reckoning,
: has no (single) hardware instruction corresponding to printf.
Printf is an interpreter.
--
<---->
From: Patrick Juola
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <56erik$omv@news.ox.ac.uk>
In article <··········@james.freenet.hamilton.on.ca> ·····@james.freenet.hamilton.on.ca (Scott Nudds) writes:
>(Patrick Juola) wrote:
>: You're palming a card. C, a compiled language by anyone's reckoning,
>: has no (single) hardware instruction corresponding to printf.
>
> Printf is an interpreter.
Bingo! Thank you for so eloquently pouncing on the point I've been
trying to make for some time now.
Patrick
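The same point can be made with Lisp's FORMAT: the control string is a
tiny language interpreted by software every time the call runs, just as
printf interprets "%d" and "%s" at run time. A one-line illustration
(mine, not part of the original exchange):

  (format nil "~{~A~^, ~}" '(lisp c prolog))
  ;; => "LISP, C, PROLOG"
  ;; The ~{ ~A ~^ ~} directives are "executed" by the FORMAT
  ;; interpreter when the call is made.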
Richard A. O'Keefe (··@goanna.cs.rmit.edu.au) wrote:
: Mukesh Prasad <·······@dma.isg.mot.com> writes:
: >"Compiled" vs "Interpreted" are merely words
: So? Are you claiming Humpty Dumpty's privilege?
: There are important issues concerning binding time.
: >But if one has to have this distinction, Lisp should
: >fall into the "interpreted" category, since the
: >"compiled" byte-code is interpreted by sofware, not
: >the hardware.
: This is complete and utter bull-dust.
: The RUCI Lisp system I used on a DEC-10 in the early 80s compiled
: Lisp to *native* DEC-10 instructions.
: The Franz Lisp system I used on a VAX-11/780 around the same time
: compiled Lisp to *native* VAX instructions.
: The Gambit Scheme system I use on a 680x0 Macintosh compiles
: Scheme (a dialect of Lisp) to *native* 680x0 instructions.
: The Lisp system I use on a SPARC compiles Common Lisp
: to *native* SPARC instructions.
: Even the PSL system I used on a B6700 back in the 70s compiled
: PSL (a dialect of Lisp) to *native* B6700 instructions.
: The T system I used to use on our Encore Multimaxes compiled
: T and Scheme (dialects of Lisp) to *native* NS32k instructions.
: There are or have been *native-code* Lisp compilers for all the
: major machines, from Univac 1108s, IBM 360s, all the way up to
: Connection Machines and beyond.
: >I don't know about the Lisp
: >machines though, do they (or did they) have hardware
: >instructions corresponding one-to-one with read/eval/print?
: Which Lisp machines do you mean? CONS? CADR? LMI? Symbolics?
: Xerox (1108, 1109, 1185, ...)? The Japanese ones? The European ones?
: The plain fact of the matters is that Lisp
: - *CAN* be interpreted by fairly simple interpreters
: - *CAN* be compiled to very efficient native code on any reasonable
: modern machine
: [If you can compile a language to byte codes, you can compile it to
: native code by treating the byte codes as "macros", and running an
: optimiser over the result. This has actually been used as a route
: for developing a native code compiler.]
: --
: Mixed Member Proportional---a *great* way to vote!
: Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
--
*********************begin r.s. response*******************
technical sophistication
of this old time programmer
is undeniable ...
and, certainly,
ideas of people like this
are of the greatest value here ...
however,
while this master has had his nose
in his terminal since the 1970s
he may have missed out on some of
the pernicious political trends
^^^^^^^^^^^^^^^^
in his poor field ...
compiled (or assembled) 'native code'
is like 'real money' ...
someone always wants to debase it ...
as though 'store coupons' were as good
as cash ...
a subtle slide of programming
from a 'native code' standard
(real programming) to an
interpreted standard is obviously
intended!!!
*********************end r.s. response*********************
Ralph Silverman
········@bcfreenet.seflin.lib.fl.us
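O'Keefe's bracketed remark above - that byte codes can be treated as
"macros" and expanded into native code - can be sketched in a few lines
of Common Lisp. This is a hypothetical toy (the names expand-bytecode
and compile-bytecodes are invented for the example), not any particular
system:

  ;; Each byte code expands into an equivalent Lisp form; the whole
  ;; expansion is then handed to the ordinary native-code compiler.
  (defun expand-bytecode (op)
    (ecase (first op)
      (:push `(push ,(second op) stack))
      (:add  `(push (+ (pop stack) (pop stack)) stack))
      (:mul  `(push (* (pop stack) (pop stack)) stack))))

  (defun compile-bytecodes (ops)
    (compile nil `(lambda ()
                    (let ((stack '()))
                      ,@(mapcar #'expand-bytecode ops)
                      (pop stack)))))

  ;; (funcall (compile-bytecodes '((:push 2) (:push 3) (:add) (:push 4) (:mul))))
  ;; => 20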
In article <············@goanna.cs.rmit.edu.au> ··@goanna.cs.rmit.edu.au (Richard A. O'Keefe) writes:
Mukesh Prasad <·······@dma.isg.mot.com> writes:
>"Compiled" vs "Interpreted" are merely words
>But if one has to have this distinction, Lisp should
>fall into the "interpreted" category, since the
>"compiled" byte-code is interpreted by sofware, not
>the hardware.
This is complete and utter bull-dust.
... and O'Keefe goes on to list a myriad Lisp systems that compiled to
native code. But Prasad's comment is "bull-dust" for a more basic
reason: By his definition EVERY language
(A) run on a microcoded machine or
(B) compiled for a 68K Mac but run on a PPC Mac
is Interpreted because
(A) on a microcoded machine, what we call "machine code" _is_ just
a byte code that is interpreted by an interpreter written in the 'real'
machine language of this machine, microcode.
(B) on a PPC Mac, 68k machine code is treated as a byte code and
executed by an interpreter.
So the same _binary object code_ can be actual machine code or a byte
code, depending on what machine you run it on. So the notion of a
_language_ being "interpreted" or "compiled" makes no sense. A
particular _implementation_ on a particular _computer_ down to a
particular _level of abstraction_ (e.g., 'down to 68K machine code')
can be "interpreted" or "compiled", but not a language.
Lou Steinberg wrote:
> particular _implementation_ on a particular _computer_ down to a
> particular _level of abstraction_ (e.g., 'down to 68K machine code')
> can be "interpreted" or "compiled", but not a language.
So what on earth is this thread about? Have you
read the topic heading?
You may not be aware of this (actually, you are obviously
not) but books on programming languages tend to divide
languages into two categories, "interpreted" and "compiled".
I repeat, *languages*, not *implementations*.
Since its inception, Lisp has been placed by programming
language theorists in the "interpreted" category.
The language itself, not any particular implementation.
However, Lisp systems have improved in technology.
In the early days, Lisp interpreters directly interpreted
the original source. An obvious improvement was
to "compact" the source code and to get rid of
comments, spaces etc prior to interpretation. But
this does not make the language "compiled".
Another improvement was to replace the original
source code by more compact and easy to interpret
"byte code". The function to do this is called
"compile", hence confusing the typical Lisp user
already.
To confuse matters more, the newer versions of the
"compile" function are more sophisticated, and can generate
machine code into which the interpreter transfers
the flow of control via a machine level jump
instruction. The confusion of the typical modern
day Lisp user is complete at this point!
However, having a function called "compile" doesn't
make language a compiled language.
An interpreted language is one which necessitates baggage
at run-time to interpret it. A compiled language
is one which doesn't. Lisp -- due to the nature
of the language definition -- necessitates baggage at
run-time, even with modern "compile" functions
which can generate machine code.
I will try once more (but not much more, this thread
has not attracted knowledgeable responses or
intelligent, unbiased discourse) to explain this -- if the
Lisp language _itself_ is to be deemed "compiled" (irrespective
of any implementation of it), then by that definition,
all languages must be deemed "compiled languages".
For any given language, things which have been
done to Lisp can be done. Thus that language's
definition does not make the language "interpreted"
any more than Lisp is.
>So the same _binary object code_ can be actual machine code or a byte
>code, depending on what machine you run it on. So the notion of a
>_language_ being "interpreted" or "compiled" makes no sense. A
You should read some books on Computer Science. It is
actually a matter of definition, not "sense". It will
only make sense if you are familiar with the definitions.
Otherwise, you might as well look at a book of mathematics
and claim the term "factors" must have something to
do with "fact"s, because that is how you choose to
understand it.
"Interpreted" and "compiled", when applied to
languages, have specific meanings.
>particular _implementation_ on a particular _computer_ down to a
>particular _level of abstraction_ (e.g., 'down to 68K machine code')
>can be "interpreted" or "compiled", but not a language.
This and other such complicated gems occurring in
this thread, are neither compiled nor interpreted, but
simple and pure BS, arising out of ignorance, bias
and lack of clear thinking.
In article <·············@dma.isg.mot.com>,
Mukesh Prasad <·······@dma.isg.mot.com> wrote:
>An interpreted language is one which necessitates baggage
>at run-time to interpret it. A compiled language
>is one which doesn't. Lisp -- due to the nature
>of the language definition -- necessitates baggage at
>run-time, even with modern "compile" functions
>which can generate machine code.
In order to convince anyone of this, you will have to:
1) Define "baggage". Your whole argument depends on what exactly
"baggage" consist of.
2) Tell us exactly what it is about the Lisp language definition
that requires this runtime "baggage".
3) Support your definitions of "interpreted language" and "compiled
language" with citations from reputable computer science
textbooks. Your definitions certainly don't match what I was
taught in school.
--
== Seth Tisue <·······@nwu.edu> http://www.cs.nwu.edu/~tisue/
Seth Tisue wrote:
>
> >An interpreted language is one which necessitates baggage
> >at run-time to interpret it. A compiled language
> >is one which doesn't. Lisp -- due to the nature
> >of the language definition -- necessitates baggage at
> >run-time, even with modern "compile" functions
> >which can generate machine code.
>
> In order to convince anyone of this, you will have to:
Why would I want to do this on (a newly redirected)
comp.lang.lisp, where everybody already knows
Lisp? On other newsgroups, there may
exist people who want to know facts about
the language, and may deserve protection from
dis-information before they spend inordinate
amounts of time on learning a new language.
Here, everybody presumably knows Lisp enough
to make up their own minds.
Mukesh Prasad <·······@dma.isg.mot.com> writes:
>However, Lisp systems have improved in technology.
>In the early days, Lisp interpreters directly interpreted
>the original source.
Lisp 1.5 had a compiler.
No "mainstream" Lisp interpreter has _ever_ "directly interpreted
the original source".
(I have seen an interpreter for a C-like language that did exactly
that. It was in a book by Herbert Schildt, and as you might expect
it was seriously inefficient.)
Lisp interpreters deal with _abstract syntax trees_.
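For example (an illustration of mine, not O'Keefe's): READ turns the
source text into a tree of lists and symbols, and it is that tree, not
the characters, which the interpreter walks:

  (read-from-string "(defun square (x) (* x x))")
  ;; => (DEFUN SQUARE (X) (* X X))
  ;; i.e. a nested list - an abstract syntax tree - not a string.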
>An obvious improvement was
>to "compact" the source code and to get rid of
>comments, spaces etc prior to interpretation. But
>this does not make the language "compiled".
Once again, _all_ Lisp systems since 1.5 and before have
been based on abstract syntax trees, and most of them have
had _both_ an interpreter walking these trees (for debugging)
_and_ a compiler generating code (for execution).
>Another improvement was to replace the original
>source code by more compact and easy to interpret
>"byte code". The function to do this is called
>"compile", hence confusing the typical Lisp user
>already.
I once used a byte coded system, a Xerox 1108.
Thing was, the byte codes WERE THE NATIVE INSTRUCTION SET
of the machine. There was microcode underneath, but
there was microcode underneath the IBM 360 and M68000,
and nobody ever slammed BAL/360 for being an "interpreted"
language.
And again: the PSL system I was using in the 70s compiled
to *native* *instructions*.
>To confuse matters more, the newer versions of the
>"compile" function are more sophisticated, and can generate
>machine code into which the interpreter transfers
>the flow of control via a machine level jump
>instruction. The confusion of the typical modern
>day Lisp user is complete at this point!
Maybe you are confused, but Lisp users are not.
As far as a Lisp user is concerned, the question is
simply "do I get fine grain debugging, or do I get
high performance".
>However, having a function called "compile" doesn't
>make language a compiled language.
No, but in the 60s and 70s the mainstream Lisp
systems included the ability to compile to native machine
instructions, and this facility was *routinely* used.
Tell me this, and tell me honestly:
what properties does Scheme have (or lack)
compared with Fortran
that make Scheme "interpreted" and Fortran "compiled".
When you are answering this, consider the fact that I use
a Scheme compiler which is a "batch" compiler producing
code that often outperforms C, and the fact that I have
read a Fortran interpreter that was used to execute student
programs.
>An interpreted language is one which necessitates baggage
>at run-time to interpret it.
You haven't defined what "interpret" means.
Using any of several reasonable criteria, this makes Fortran and
C interpreted languages.
>A compiled language
>is one which doesn't. Lisp -- due to the nature
>of the language definition -- necessitates baggage at
>run-time, even with modern "compile" functions
>which can generate machine code.
Ah, but that "baggage" you are ranting about
- need not occupy any space in the executable form of the program
- need not take any *time* at run time unless it is actually *used*
- exists for C on UNIX, Windows, VMS, and other modern operating systems.
What am I getting at with that last point?
This:
A C program may *at run time* construct new code,
cause it to be compiled,
and cause it to become part of the running program.
There isn't any special _syntax_ for this, but it's in the _API_.
Presumably you know about Windows DLLs; in UNIX SVr4 look for
'dlopen' in the manuals; in VMS I don't know what it's called but
I've used a program that used it. The CMS operating system has
had a dynamic LOAD command for a very long time.
However, calling compilation interpreting simply because it
happens at run time is a bizarre abuse of the English language.
>I will try once more (but not much more, this thread
>has not attracted knowledgeable responses or
It has. People have tried to tell you that Lisp systems have
been generating native machine code for decades, since very early
days indeed. You keep calling them interpreters.
>intelligent, unbiased discourse)
You appear to accept only people who agree with you as unbiased.
>to explain this -- if the
>Lisp language _itself_ is to be deemed "compiled"
With the notable exception of yourself, most people have
been arguing that it is EXTREMELY SILLY to call *ANY* language
"compiled" or "interpreted".
What people have been saying is that mainstream Lisp *SYSTEMS*
have since the earliest days offered compilers for the Lisp
language.
>(irrespective
>of any implementation of it),
Nobody in his right mind would so deem *any* programming language.
*Any* programming language can be interpreted.
Just about all of them *have* been. (There is at least one C++ interpreter.)
*Some* programming languages are hard to compile; APL2 springs to mind
because the *parsing* of an executable line can vary at run time. Even
so, there are *systems* that can reasonably be called APL compilers.
Another language that I would _hate_ to have to compile is M4; once
again the syntax can change while the program is executing.
>>So the same _binary object code_ can be actual machine code or a byte
>>code, depending on what machine you run it on. So the notion of a
>>_language_ being "interpreted" or "compiled" makes no sense. A
>You should read some books on Computer Science.
Lou Steinberg has probably read a *lot* of them.
>It is actually a matter of definition, not "sense". It will
>only make sense if you are familiar with the definitions.
>Otherwise, you might as well look at a book of mathematics
>and claim the term "factors" must have something to
>do with "fact"s, because that is how you choose to
>understand it.
This paragraph does not suggest any very profound acquaintance
with _either_ computing _or_ mathematics. One of the problems
that plagues both disciplines is that terminology and notation
are used differently by different authors. Most of the serious
mathematics and formal methods books I have include, BECAUSE
THEY NEED TO, a section explaining the notation they use.
>"Interpreted" and "compiled", when applied to
>languages, have specific meanings.
There are no *standard* meanings for those terms when applied to
languages. Why don't you cite the definitions and where you
found them?
For now, I've searched my fairly full bookshelves, and failed to
find any such definition. Amusingly, I did find the following
paragraph in the classic
Compiler Construction for Digital Computers
David Gries
Wiley International Edition, 1971.
At the beginning of Chapter 16, "Interpreters", we find this:
We use the term _interpreter_ for a program which performs
two functions:
1. Translates a source program written in the source language
(e.g. ALGOL) into an internal form; and
2. Executes (interprets, or simulates) the program in this
internal form.
The first part of the interpreter is like the first part of
a multi-pass cmpiler, and we will call it the "compiler".
The issue as Gries saw it back in 1971 (and I repeat that this is a
classic textbook) was between an internal form executed by software
and an internal form executed by hardware.
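Gries's two functions can be shown in miniature. The toy below is my
own sketch (toy-translate and toy-execute are invented names), not
anything from his book: phase 1 translates the text into an internal
form, phase 2 executes that form.

  ;; 1. Translate source text into an internal form (an s-expression).
  (defun toy-translate (text)
    (values (read-from-string text)))

  ;; 2. Execute (interpret, or simulate) the internal form.
  (defun toy-execute (form)
    (cond ((numberp form) form)
          ((eq (first form) '+) (reduce #'+ (mapcar #'toy-execute (rest form))))
          ((eq (first form) '*) (reduce #'* (mapcar #'toy-execute (rest form))))
          (t (error "unknown form: ~S" form))))

  ;; (toy-execute (toy-translate "(+ 1 (* 2 3))"))  =>  7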
The question is, what are you trying to do in this thread?
Are you
(A) trying to make some substantive point about Lisp compared with
other languages?
In that case you have been proved wrong. There is no _useful_
sense in which Lisp is more "interpreted" than C, in existing
practical implementations of either.
(B) trying to make a _terminological_ point with no consequences for
what _happens_ to a program, only for what it is _called_?
In that case, you should be aware that you are NOT using
words in a way intelligible to people who have actually
*worked* on compilers, and you should explicitly state what
definitions you are using and where you got them.
The only point I care about is (A), whether there is any intrinsic
inefficiency in Lisp compared with C, and the answer is NO, there
isn't:
A programming language that is widely accepted as a Lisp dialect
(Scheme) not only _can_ be "batch" compiled like C, I routinely
use just such a compiler and get the same or better performance
out of it. (This is the Stalin compiler.)
In all the "popular" operating systems these days: DOS, Windows,
most modern UNIX systems, VMS, CMS, it is possible for a C
program to dynamically construct new code which then becomes
part of the running program.
Any book which states or implies that "Lisp" (not otherwise
specified as to dialect) is intrinsically "interpreted" is not
to be discarded lightly, but to be hurled
with great force.
--
Mixed Member Proportional---a *great* way to vote!
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
Richard A. O'Keefe wrote:
> What am I getting at with that last point?
> This:
>
> A C program may *at run time* construct new code,
> cause it to be compiled,
> and cause it to become part of the running program.
>
> There isn't any special _syntax_ for this, but it's in the _API_.
> Presumably you know about Windows DLLs; in UNIX SVr4 look for
> 'dlopen' in the manuals;
I'm confused on your point here - DLLs aren't compiled or "constructed"
at load time. I could, I suppose, spawn off a compile/link process and
create a DLL then explicitly load it, but I don't think that counts.
> In all the "popular" operating systems these days: DOS, Windows,
> most modern UNIX systems, VMS, CMS, it is possible for a C
> program to dynamically construct new code which then becomes
> part of the running program.
I used to do that on my TRaSh-80 by writing machine language into
string space. Worked pretty well. That counts. Dynamically-linked libraries
don't, IMHO. If you want to modify machine code inside a code or data
segment that probably _would_ count, but it's pretty much a pain to do.
--
Dave Newton | TOFU | (voice) (970) 225-4841
Symbios Logic, Inc. | Real Food for Real People. | (fax) (970) 226-9582
2057 Vermont Dr. | | ············@symbios.com
Ft. Collins, CO 80526 | The World Series diverges! | (Geek joke.)
In article <·············@symbiosNOJUNK.com>
············@symbiosNOJUNK.com "Dave Newton" writes:
> I used to do that on my TRaSh-80 by writing machine language into
> string space. Worked pretty well. That counts. Dynamically-linked libraries
> don't, IMHO. If you want to modify machine code inside a code or data
> segment that probably _would_ count, but it's pretty much a pain to do.
I also POKEed machine code into string space, as well as writing
Basic code that wrote assembly source code. My first compiler took
the display memory map and wrote the code for a program to load
it back in, with the load address set for the memory map. Yes,
I used a TRS-80! I learned a lot from that "mess" of a machine,
perhaps _because_ of all its faults.
While I've not yet written machine code to a data segment and then
created a code alias for it, it doesn't look hard to do. The "hard"
part is the bit that writies the machine code. Having written an
assembler in Forth, and various Forth compilers, I think I understand
the principles. Still, all I'm saying is that I _could_ do it if
I ever had a good reason to. So far, I haven't.
However, I may prefer doing it that way to writing - let's say - C
source code and then using a C compiler, esp if the app/util/whatever
has to be delivered to a client who almost certainly won't have a
C compiler or any other development tool. Whether this is preferable
to writing a bytecode interpreter, and compiling to bytecodes, will
likely depend on the requirements of the program in which you're
embedding this code. If the compiled code won't survive after the
program stops running, then using machine code may actually be
_easier_ than bytecodes.
Alternatively, there's threaded code, but if the addresses are direct,
i.e. direct threading, then a change to the program will, like the
machine code approach, require all the code from your compile to be
recompiled, thus updating the addresses to match the runtime.
Complicated, isn't it? ;-) Even writing about it can be messy, but
I find it easier to write the code than to write _about_ the code.
PJ Brown explained it much better than I can!
So, I'm not disagreeing with either of you. Richard is right, you
can count a DLL, _if_ you have the tools to create a DLL handy when
your program runs. If not, then your point will be valid.
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
Richard A. O'Keefe wrote:
> A C program may *at run time* construct new code,
> cause it to be compiled,
> and cause it to become part of the running program.
True, but extremely contrived and misleading. Yes, it is possible
to execute "cc" from a C program. No, it is not necessary
for C programs to bundle "cc" with the C executable.
The case you are talking about is no different from
a particular C program needing to execute "ls", and
therefore needing "ls" to be bundled with it. There
are no intrinsic language features requiring such bundling. With
Lisp, there are, which is the basic difference. (You
may choose not to use those particular language features,
but that is your own business.)
> With the notable exception of yourself, most people have
> been arguing that it is EXTREMELY SILLY to call *ANY* language
> "compiled" or "interpreted".
Actually, nobody has been arguing along the lines of "this thread
is meaningless, because there is no such thing as an
'interpreted language'." Now that, I would have
considered honest and unbiased opinion. This argument
is only pulled out, somehow, in _defense_ of the thread!
> A programming language that is widely accepted as a Lisp dialect
> (Scheme) not only _can_ be "batch" compiled like C, I routinely
> use just such a compiler and get the same or better performance
> out of it. (This is the Stalin compiler.)
This thread is not about Scheme, but Lisp.
In article <·············@dma.isg.mot.com>
·······@dma.isg.mot.com "Mukesh Prasad" writes:
> The case you are talking about is no different from
> a particular C program needing to execute "ls", and
> therefore needing "ls" to be bundled with it. There
> are no intrinsic language features requiring such bundling. With
> Lisp, there are, which is the basic difference. (You
> may choose not to use those particular language features,
> but that is your own business.)
No. Full Common Lisp includes EVAL and COMPILE, etc. You don't
necessarily get these functions when the code is delivered, but
that'll depend on the _implementation_. This is the same mistake
you were making earlier.
Also, not all Lisps are CL. The Scheme language doesn't include
EVAL, altho some implementations may do so. It appears that
Gambit C supports this, and this is a Scheme compiler that produces
C source code. However, you should not make the mistake of
generalising: this is only one compiler, after all.
Some Basics have an EVAL$ function that takes a string (you guessed
that, I bet) and feeds it into the interpreter. Not all Basics
support this, and some of them don't compile the code into
tokenised form and then interpret it. Visual Basic doesn't work
this way, and it would be very costly if it did, and would be
worse if it supported EVAL$. On the other hand, VB apps that link
to 3 MB of libraries are not unheard of.
Generalisations are dangerous. At best, they can make you look
a fool. I'll leave the worst case as an exercise for the reader.
> Actually, nobody has been arguing along the lines of "this thread
> is meaningless, because there is no such thing as an
> 'interpreted language'." Now that, I would have
> considered honest and unbiased opinion. This argument
> is only pulled out, somehow, in _defense_ of the thread!
See PJ Brown's book for my answer. ;-) Let me know when you've
read it...
> > A programming language that is widely accepted as a Lisp dialect
> > (Scheme) not only _can_ be "batch" compiled like C, I routinely
> > use just such a compiler and get the same or better performance
> > out of it. (This is the Stalin compiler.)
>
> This thread is not about Scheme, but Lisp.
Scheme is a dialect of Lisp. It's perfectly valid to refer to
it in this thread. See the Lisp FAQ.
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
From: J. A. Durieux
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <E12n2F.Gz2@cs.vu.nl>
In article <·············@dma.isg.mot.com>,
Mukesh Prasad <·······@dma.isg.mot.com> wrote:
>This thread is not about Scheme, but Lisp.
This thread is not about cows, but mammals, right?
J. A. Durieux wrote:
>
> In article <·············@dma.isg.mot.com>,
> Mukesh Prasad <·······@dma.isg.mot.com> wrote:
>
> >This thread is not about Scheme, but Lisp.
>
> This thread is not about cows, but mammals, right?
Right. Well, you can talk about cows to support
a viewpoint, as long as you don't make statements
like "all mammals have four legs and two horns".
From: Warren Sarle
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <E16q1G.773@unx.sas.com>
In article <·············@dma.isg.mot.com>, Mukesh Prasad <·······@dma.isg.mot.com> writes:
|> J. A. Durieux wrote:
|> >
|> > In article <·············@dma.isg.mot.com>,
|> > Mukesh Prasad <·······@dma.isg.mot.com> wrote:
|> >
|> > >This thread is not about Scheme, but Lisp.
|> >
|> > This thread is not about cows, but mammals, right?
|>
|> Right. Well, you can talk about cows to support
|> a viewpoint, as long as you don't make statements
|> like "all mammals have four legs and two horns".
You forgot to crosspost this stupid thread to sci.bio.bovine
--
Warren S. Sarle SAS Institute Inc. The opinions expressed here
······@unx.sas.com SAS Campus Drive are mine and not necessarily
(919) 677-8000 Cary, NC 27513, USA those of SAS Institute.
*** Do not send me unsolicited commercial or political email! ***
From: Ian Cresswell
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <1432@leopold.win-uk.net>
>|> Right. Well, you can talk about cows to support
>|> a viewpoint, as long as you don't make statements
>|> like "all mammals have four legs and two horns".
>
>You forgot to crosspost this stupid thread to sci.bio.bovine
I hate to disagree with you Warren ... this should be crossposted
to comp.bio.bse too :-)
-----------------------------------------------------
(Dr) Ian Cresswell
Research Coordinator, School of Computer Science, UCE
www.ibmpcug.co.uk/~leopold
PLEASE REMOVE COMP.AI.NEURAL-NETS FROM THIS THREAD.
THANK YOU VERY MUCH.
Gregory E. Heath ·····@ll.mit.edu The views expressed here are
M.I.T. Lincoln Lab (617) 981-2815 not necessarily shared by
Lexington, MA (617) 981-0908(FAX) M.I.T./LL or its sponsors
02173-9185, USA
Mukesh Prasad (·······@dma.isg.mot.com) wrote:
[...]
> "Interpreted" and "compiled", when applied to
> languages, have specific meanings.
_Any_ programming language can be implemented by an interpreter or a compiler.
It just doesn't make sense to speak about "compiled languages" vs "interpreted
languages". I take it that you have never heard about C interpreters?
[...]
> This and other such complicated gems occurring in
> this thread, are neither compiled nor interpreted, but
> simple and pure BS, arising out of ignorance, bias
> and lack of clear thinking.
*Plonk*
Pot, kettle, black etc.
Jens.
--
Internet: ···········@bbn.hp.com Phone: +49-7031-14-7698 (TELNET 778-7698)
MausNet: [currently offline] Fax: +49-7031-14-7351
PGP: 06 04 1C 35 7B DC 1F 26 As the air to a bird, or the sea to a fish,
0x555DA8B5 BB A2 F0 66 77 75 E1 08 so is contempt to the contemptible. [Blake]
From: Carl Donath
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <328B7F64.59E2B600@cci.com>
Jens Kilian wrote:
> _Any_ programming language can be implemented by an interpreter or a compiler.
> It just doesn't make sense to speak about "compiled languages" vs "interpreted
> languages". I take it that you have never heard about C interpreters?
This does not take into account languages (Lisp, APL) where the program
may generate functions and execute them. A compiler could only do this
if the compiled program included a compiler to compile and execute the
generated-on-the-fly instructions, which is difficult and/or silly.
In a phrase, "self-modifying code".
--
----------------------------------------------------------------------
-- ···@nt.com ----- ········@rpa.net ----- ········@mailbox.syr.edu --
----------------------------------------------------------------------
In article <·················@cci.com>, Carl Donath <···@cci.com> wrote:
>Jens Kilian wrote:
>> _Any_ programming language can be implemented by an interpreter or a compiler.
>> It just doesn't make sense to speak about "compiled languages" vs "interpreted
>> languages". I take it that you have never heard about C interpreters?
>
>This does not take into account languages (Lisp, APL) where the program
>may generate functions and execute them. A compiler could only do this
>if the compiled program included a compiler to compile and execute the
>generated-on-the-fly instructions, which is difficult and/or silly.
Most modern Lisp environments include a Lisp compiler precisely so
that code can be generated on the fly and compiled. It is neither
difficult nor silly, it is actually a rather common practice.
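A minimal sketch of the practice described here (my own example - the
name make-scaler is invented - assuming an implementation whose COMPILE
produces native code):

  ;; Build a specialised function from run-time data, then compile it.
  (defun make-scaler (factor)
    (compile nil `(lambda (x) (* ,factor x))))

  ;; (funcall (make-scaler 3) 14)  =>  42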
>In a phrase, "self-modifying code".
>
>--
>----------------------------------------------------------------------
>-- ···@nt.com ----- ········@rpa.net ----- ········@mailbox.syr.edu --
>----------------------------------------------------------------------
--
Christopher R. Eliot, Senior Postdoctoral Research Associate
Center for Knowledge Communication, Department of Computer Science
University of Massachusetts, Amherst. (413) 545-4248 FAX: 545-1249
·····@cs.umass.edu, http://rastelli.cs.umass.edu/~ckc/people/eliot/
From: Jim Balter
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <328CE557.7794@netcom.com>
CHRISTOPHER ELIOT wrote:
>
> In article <·················@cci.com>, Carl Donath <···@cci.com> wrote:
> >Jens Kilian wrote:
> >> _Any_ programming language can be implemented by an interpreter or a compiler.
> >> It just doesn't make sense to speak about "compiled languages" vs "interpreted
> >> languages". I take it that you have never heard about C interpreters?
> >
> >This does not take into account languages (Lisp, APL) where the program
> >may generate functions and execute them. A compiler could only do this
> >if the compiled program included a compiler to compile and execute the
> >generated-on-the-fly instructions, which is difficult and/or silly.
>
> Most modern Lisp environments include a Lisp compiler precisely so
> that code can be generated on the fly and compiled. It is neither
> difficult nor silly, it is actually a rather common practice.
A compiler is just another component of a runtime system.
Code is just another form of data.
Eval is just another function.
Compilation is translation, interpretation is execution.
There really isn't much more worth saying, so please let this silly
thread die.
--
<J Q B>
Jim Balter wrote:
> A compiler is just another component of a runtime system.
> Code is just another form of data.
> Eval is just another function.
> Compilation is translation, interpretation is execution.
This is nomenclature (as I was saying originally), but if at
run-time you need to lex and parse the language, to me that is
interpretation.
If to you it isn't, then there is no such thing as
an "interpreted language" by your definitions, because
all the techniques used in Lisp can be applied to any
given language, as I was saying originally.
What was so difficult to understand about either of these
two very simple points?
I know I'm going to regret joining in here.
Mukesh Prasad <·······@dma.isg.mot.com> writes:
>This is nomenclature (as I was saying originally,) but if at
>run-time you need to lex and parse the language, to me that is
>interpretation.
Nomenclature needs to be useful.
I think the point of contention here is your use of the word "need".
It is true that most dialects of Lisp have historically provided
implementation independent ways of lexing, parsing, and compiling the
language. These are *extensions*. They do not affect the semantics
of the language, nor do they categorize the *language* (as opposed to
a given implementation) as interpreted or compiled. (Of course,
however, the presence of these extensions has had an effect on what
programs were chosen to be written in Lisp --- but let's not get
confused by sociological artifacts.)
To see what I mean, consider the following assertion (ignoring the
fact that it needs some qualifications in order to be universally
true):
Any Lisp program which "needs" to use READ, EVAL, or COMPILE has
functionality such that a translation into another language (e.g. C)
would require external calls to a parser or a
compiler. (e.g. exec'ing cc).
Now, if you can make some claims about a program that, when written in
Lisp, needs to use these extensions but, when written in C, doesn't
need to use these, *then* I'll grant you that "interpreted" is a
property of the language and not the program (or the language
implementation).
>If to you it isn't, then there is no such thing as
>an "interpreted language" by your definitions, because
>all the techniques used in Lisp can be applied to any
>given language, as I was saying originally.
>What was so difficult to understand about either of these
>two very simple points?
The fact that "Interpreted language" vs. "Compiled language" doesn't
strike me (and probably others as well) as a useful distinction. On
the other hand, knowing whether a given implementation of a language
is interpreted (for example, Sabre C) will likely give me some
expectations (e.g. performance, debugging environment) which might or
might not turn out to be true. Additionally, if you told me that a
program called out to the compiler to generate new code (or was
self-modifying, or required lexing or parsing of user input, or ...)
that *might* tell me something interesting about the program (although
to me it seems much less useful than the first case. We're always
"interpreting" user input; if it happens that the "interpreter" we use
for the input is the same as the language the program is written in,
well, that just seems like an interesting coincidence, but not
necessarily something important.)
I hope this clears up why your remarks seem difficult to understand to
some readers of this newsgroup.
(One final comment: I suppose one can argue that it is valid to
interpret "Interpreted language" as a statistical comment about the
majority of implementations of a given programming language. In the
case of Lisp, though, even then I believe that it would be incorrect
to call it an "interpreted language" since the large majority of Lisp
implementations are compiled.)
Michael Greenwald wrote:
> Any Lisp program which "needs" to use READ, EVAL, or COMPILE has
> functionality such that a translation into another language (e.g. C)
> would require external calls to a parser or a
> compiler. (e.g. exec'ing cc).
This covers a fundamental issue excellently, and I
certainly have no problems with this. My point was simply
that Lisp chose to provide this particular power
as a feature of the language, which made "requires
interpretation" a property of the language and not
the program.
(Actually, originally there were lots of features
in early versions of Lisp, e.g. dynamic scoping, which
were thought to require interpretation. One by one many of
these have been knocked down. But I am not up
on the micro issues of compilation versus
implementation, or standards.)
The presence of this power resulted in programs
which depended on this power. Same as the
presence of pointers in a language like C
results in their being used, and all
the attendant benefits and problems.
I personally used to know somebody who wrote a compiler
in such a way that the result of the parsing phase was
not a typical AST but a fully executable s-expression,
which when executed, would generate the target machine code.
(The compiler was for some other language.)
In the early days of Lisp, such things
were commonplace and deemed clever.
Now lately there have been efforts to denigrate
such tactics. Rightly so -- but this is
not the same as having removed them from
the language. There is an important difference.
> The fact that "Interpreted language" vs. "Compiled language" doesn't
> strike me (and probably others as well) as a useful distinction. On
> the other hand, knowing whether a given implementation of a language
> is interpreted (for example, Sabre C) will likely give me some
> expectations (e.g. performance, debugging environment) which might or
Yes, but arguing about the whole language because of the
expectations people get when they hear "interpreted",
is more political than technical. It is distasteful
to see a whole branch of computer science people start
placing more emphasis on the politics of something
rather than the technical accuracy of it.
I don't think it works either. An intelligent
person interested in Lisp will soon realize
that the over-emphasis on "lisp is not an
interpreted language" is to cover something
up. (Ever heard "C is not an interpreted
language" being debated hotly?)
If one had to pick something to tell newcomers,
it should have been "such and such implementation
of Lisp will generate code to match
that generated for an equivalent program by
a C compiler -- never mind whether it is compiled
or interpreted". (That's true for some implementations,
isn't it?)
Or if this claim could not be made, at least "this
is an excellent language for trying out new ideas
quickly," or even "this is a very comfortable
language to work in".
> I hope this clears up why your remarks seem difficult to understand to
> some readers of this newsgroup.
I can understand why they would be "difficult to palate".
In article <············@dma.isg.mot.com>
·······@dma.isg.mot.com "Mukesh Prasad" writes:
> Michael Greenwald wrote:
>
> > Any Lisp program which "needs" to use READ, EVAL, or COMPILE has
> > functionality such that a translation into another language (e.g. C)
> > would require external calls to a parser or a
> > compiler. (e.g. exec'ing cc).
>
> This covers a fundamental issue excellently, and I
> certainly have no problems with this. My point was simply
> that Lisp chose to provide this particular power
> as a feature of the language, which made "requires
> interpretation" a property of the language and not
> the program.
EVAL and COMPILE can be thought of as features of an interactive
language. Are you still confusing "interactive" with "interpreted"?
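As a minimal illustration (this snippet is not from the original post), an
interactive Common Lisp session can compile definitions on the fly and still
end up running native code:
(defun square (x) (* x x))
(compile 'square)                  ; replace the definition with compiled code
(compiled-function-p #'square)     ; => T in most implementations
(funcall (compile nil '(lambda (n) (+ n 1))) 41)   ; => 42, compiled then called
Nothing here requires an interpreter; it only requires that the compiler be
callable from a running session.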
> (Actually, originally there were lots of features
> in early versions of Lisp, e.g. dynamic scoping, which
> were thought to require interpretation. One by one many of
> these have been knocked down. But I am not up
> on the micro issues of compilation versus
> implementation, or standards.)
_Interactive_. See Brown's book (yet again). It'll explain it all,
in very simple terms - using _Basic_ to illustrate the techniques.
> The presence of this power resulted in programs
> which depended on this power. Same as the
> presence of pointers in a language like C
> results in their being used, and all
> the attendant benefits and problems.
Lisp programmers can choose not to use READ, EVAL, and COMPILE
in their programs. If implementations include them in the runtime,
then blame the implementors for making that choice irrelevant, or
to put it another way, for "removing" the benefits of that choice.
BTW, the first commercial Lisp that I used could load and unload
the various code modules, so that if you didn't do any I/O for a
while, the system could unload that module, freeing the memory.
This was on a machine without VM, so it was a very tasty feature!
Today, Windows apps can do the same thing, just by optionally
loading or unloading a DLL. How many Lisps for Windows support
this feature? I dunno, as I've not yet found one. I don't think
of this as an inherent fault of Lisp (Common Lisp, as I don't
yet know of a commercial Scheme with a native code compiler).
Instead, I'd ask the vendors why they've neglected this, and
whether they'd consider supporting it at some point in the future.
That sounds a hell of a lot more constructive than your approach,
which appears to be to criticise Lisp based on what you _think_
it is, but as people have pointed out to you, most definitely
_is not_. If it were so easy to play games with the truth, then
we might all claim that "Language <X> has problem <Y>", where
<X> is any language you choose to slag off, and <Y> is any problem
that you think you know more about than the people reading your
UseNet posts.
Indeed, this game has been played to death many times in the past,
with regular failures. The reason is simple: most people playing
that game are playing against much more experienced players,
and some of them are armed with tactical nuclear weapons (e.g. CS
papers, apps, compilers - many freely available over the Internet),
while your arguments will be like short sticks (unverifiable and
easy to break).
> I personally used to know somebody who wrote a compiler
> in such a way that the result of the parsing phase was
> not a typical AST but a fully executable s-expression,
> which when executed, would generate the target machine code.
> (The compiler was for some other language.)
>
> In the early days of Lisp, such things
> were commonplace and deemed clever.
Such things might be said about _any_ language. Take Basic...
Perhaps someday we'll all look back at C, Pascal, Java, etc,
and say the same things. Well, maybe. Whether or not such
statements will be _fair_ is another matter. This isn't
intended as a criticism of any language. It's more a comment
on how compiler theory changes over the years, and how that
changes our perception of languages designed within the
constraints (no pun intended) of the time, i.e. the limits
of compiler theory.
When you say, "early says of Lisp", when exactly do you mean?
Late 50s? Early 60s? What year is it now? Could it be that
you're talking about something that, if you're right, no longer
has any meaning to modern programmers, except as an odd bit of
history? If so, then it might be worth mentioning the lack
of hardware support for stacks, and asking if that had any
effect on language design at the time. I've read that Fortran
used to modify the code so that the return address was stored
in the code of the called function. Yuk.
Does Fortran code still do this? I don't think so. So I'm
only mentioning this as a way of making a point about compilers
and language design. There are better ways of implementing
functions, and modern Fortran dialects and implementations
now use them.
The same can be said about Lisp and many of the "features"
that you're so critical of. The one that you described sounds
like a classic incremental compiler technique, and not at all
unique to Lisp. The only detail that might tie it to Lisp was
the use of an s-expr, but tokenised source code or some other
representation might just as easily have been used, if the
language were not Lisp. Tokenised Basic code was very popular
at one time.
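The idea is easy to sketch. The following toy fragment (an illustration, not
the compiler described above) shows a "parser" returning an executable
s-expression which, when evaluated, emits symbolic instructions:
(defvar *emitted* '())

(defun emit (&rest instruction)
  (push instruction *emitted*))

(defun parse-addition (a b)
  "Return an s-expression that, when evaluated, emits code for A + B."
  `(progn (emit 'push ,a)
          (emit 'push ,b)
          (emit 'add)))

;; (eval (parse-addition 1 2))
;; *emitted* => ((ADD) (PUSH 2) (PUSH 1))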
> Now lately there have been efforts to denigrate
> such tactics. Rightly so -- but this is
> not the same as having removed them from
> the language. There is an important difference.
Why "rightly so"? Who has done so? The "throw away" compiler
idea is still in use, and probably more popular now than
it has ever been in the past - and rightly so, IMHO. It gives
developers a degree of CPU independence, but I suspect that's
something that some chip vendors, and perhaps even some compiler
vendors too, would like to discourage. Vested interests.
> > The fact that "Interpreted language" vs. "Compiled language" doesn't
> > strike me (and probably others as well) as a useful distinction. On
> > the other hand, knowing whether a given implementation of a language
> > is interpreted (for example, Sabre C) will likely give me some
> > expectations (e.g. performance, debugging environment) which might or
>
> Yes, but arguing about the whole language because of the
> expectations people get when they hear "interpreted",
> is more political than technical. It is distasteful
> to see a whole branch of computer science people start
> placing more emphasis on the politics of something
> rather than the technical accuracy of it.
s/interpreted/interactive
> I don't think it works either. An intelligent
> person interested in Lisp will soon realize
> that the over-emphasis on "lisp is not an
> interpreted language" is to cover something
> up. (Ever heard "C is not an interpreted
> language" being debated hotly?)
s/interpreted/interactive
C _is_ an interpreted language, _if_ you use an interpreter.
Interactive? I dunno. You're right this is not hotly debated,
but let's pause for a moment and wonder why...
Why are so few Basics interactive these days? I'd say it's
because some people don't like interactive systems, and perhaps
confuse the word with "interpreted". You, for example.
> If one had to pick something to tell newcomers,
> it should have been "such and such implementation
> of Lisp will generate code to match
> that generated for an equivalent program by
> a C compiler -- never mind whether it is compiled
> or interpreted". (That's true for some implementations,
> isn't it?)
<ahem>
s/interpreted/interactive
You've yet to answer this point. Care to try? If you read
Brown's book first, then you might possibly have a clearer
idea of the distinction I'm making. It's a very basic one
(pun intended!), but you seem to be missing it.
> Or if this claim could not be made, at least "this
> is an excellent language for trying out new ideas
> quickly," or even "this is a very comfortable
> language to work in".
I suspect there's a degree of elitism at work. Make a system
too easy to use, and all kinds of people will use it. Make it
hard to use, and it'll be much more exclusive. Hmmm. I'm not
naming any particular language here, as I think that _all_
programming languages do this to some extent! They just do
it in different ways.
It might be unkind to suggest that you're clueless, but I
will say that you're perhaps a little out of your depth here.
As we discussed by email, you could do a little research to
bring your knowledge of Lisp up to date. Then we might have an
interesting discussion...based on reality, rather than myth.
I hope you'll take this as constructive criticism, as I really
do feel that you have something to offer here, if only your
information were more complete. If I were less generous, then
I might suspect that your intention is simply to spread
anti-Lisp memes. You seem to have swallowed a great number
of them already! It's time for a stomach pump. It won't hurt,
and you'll feel much better afterward.
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
From: Jim Balter
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <32920790.7E6E@netcom.com>
Mukesh Prasad wrote:
>
> Jim Balter wrote:
>
> > A compiler is just another component of a runtime system.
> > Code is just another form of data.
> > Eval is just another function.
> > Compilation is translation, interpretation is execution.
>
> This is nomenclature (as I was saying originally,) but if at
> run-time you need to lex and parse the language, to me that is
> interpretation.
>
> If to you it isn't, then there is no such thing as
> an "interpreted language" by your definitions, because
> all the techniques used in Lisp can be applied to any
> given language, as I was saying originally.
>
> What was so difficult to understand about either of these
> two very simple points?
The difficult thing to understand is why you are still on about this,
other than as some sort of ego gratification at the expense of these
newsgroups.
--
<J Q B>
In article <·············@netcom.com> ···@netcom.com "Jim Balter" writes:
> Mukesh Prasad wrote:
> >
> > Jim Balter wrote:
> >
> > > A compiler is just another component of a runtime system.
> > > Code is just another form of data.
> > > Eval is just another function.
> > > Compilation is translation, interpretation is execution.
> >
> > This is nomenclature (as I was saying originally,) but if at
> > run-time you need to lex and parse the language, to me that is
> > interpretation.
> >
> > If to you it isn't, then there is no such thing as
> > an "interpreted language" by your definitions, because
> > all the techniques used in Lisp can be applied to any
> > given language, as I was saying originally.
> >
> > What was so difficult to understand about either of these
> > two very simple points?
>
> The difficult thing to understand is why you are still on about this,
> other than as some sort of ego gratification at the expense of these
> newsgroups.
>
> --
> <J Q B>
>
Now that's *ironic*..
Not long ago, you were claiming that this "distinction" was
almost a test for one's competence in computing.....
What goes around, comes around eh?
Or is this just another example of the "fragmentation" I keep
drawing your (and others') attention to (which you seem
determined to conceive as 'hypocrisy').
http://www.uni-hamburg.de/~kriminol/TS/tskr.htm
--
David Longley
From: Jim Balter
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <3293485B.50DE@netcom.com>
David Longley wrote:
>
> In article <·············@netcom.com> ···@netcom.com "Jim Balter" writes:
>
> > Mukesh Prasad wrote:
> > >
> > > Jim Balter wrote:
> > >
> > > > A compiler is just another component of a runtime system.
> > > > Code is just another form of data.
> > > > Eval is just another function.
> > > > Compilation is translation, interpretation is execution.
> > >
> > > This is nomenclature (as I was saying originally,) but if at
> > > run-time you need to lex and parse the language, to me that is
> > > interpretation.
> > >
> > > If to you it isn't, then there is no such thing as
> > > an "interpreted language" by your definitions, because
> > > all the techniques used in Lisp can be applied to any
> > > given language, as I was saying originally.
> > >
> > > What was so difficult to understand about either of these
> > > two very simple points?
> >
> > The difficult thing to understand is why you are still on about this,
> > other than as some sort of ego gratification at the expense of these
> > newsgroups.
> >
> > --
> > <J Q B>
> >
> Now that's *ironic*..
>
> Not long ago, you were claiming that this "distinction" was
> almost a test for one's competence in computing.....
Oh, I "said that", did I? I'm afraid that your fragmented mental
defects make you incapable of comprehending what I say, so you should
limit yourself to direct quotation.
> What goes around, comes around eh?
>
> Or is this just another example of the "fragmentation" I keep
> drawing your (and others') attention to (which you seem
> determined to conceive as 'hypocrisy').
Longley, you are apparently too mentally defective to understand that,
since you apply your standards to others but not to yourself, you are
in *moral* error. Hypocrisy is a matter of *ethics*, which are
apparently beyond your sociopathic grasp. But if I point out that
I'm not talking science here, you will criticize me for being in a
muddle, which just further demonstrates your not-quite-sane autistic
mental defect. But that is just another example of your fragmentation,
I suppose (which can only prove your point through affirmation of the
consequent, which would only prove your point about humans having
trouble with logic through affirmation of the consequent, which ... oh,
never mind.)
--
<J Q B>
In article <·············@netcom.com> ···@netcom.com "Jim Balter" writes:
>
> The difficult thing to understand is why you are still on about this,
> other than as some sort of ego gratification at the expense of these
> newsgroups.
>
> --
> <J Q B>
>
David Longley wrote:
>
> In article <·············@netcom.com> ···@netcom.com "Jim Balter" writes:
>
> > Mukesh Prasad wrote:
> > >
> > > Jim Balter wrote:
> > >
> > > > A compiler is just another component of a runtime system.
> > > > Code is just another form of data.
> > > > Eval is just another function.
> > > > Compilation is translation, interpretation is execution.
> > >
> > > This is nomenclature (as I was saying originally,) but if at
> > > run-time you need to lex and parse the language, to me that is
> > > interpretation.
> > >
> > > If to you it isn't, then there is no such thing as
> > > an "interpreted language" by your definitions, because
> > > all the techniques used in Lisp can be applied to any
> > > given language, as I was saying originally.
> > >
> > > What was so difficult to understand about either of these
> > > two very simple points?
> >
> > The difficult thing to understand is why you are still on about this,
> > other than as some sort of ego gratification at the expense of these
> > newsgroups.
> >
> > --
> > <J Q B>
> >
<DL>
>> Now that's *ironic*..
>
>> Not long ago, you were claiming that this "distinction" was
>> almost a test for one's competence in computing.....
<JB>
>Oh, I "said that", did I? I'm afraid that your fragmented mental
>defects make you incapable of comprehending what I say, so you should
>limit yourself to direct quotation.
<DL>
>> What goes around, comes around eh?
>>
>> Or is this just another example of the "fragmentation" I keep
>> drawing your (and others') attention to (which you seem
>> determined to conceive as 'hypocrisy').
<JB>
>Longley, you are apparently too mentally defective to understand that,
>since you apply your standards to others but not to yourself, you are
>in *moral* error. Hypocrisy is a matter of *ethics*, which are
>apparently beyond your sociopathic grasp. But if I point out that
>I'm not talking science here, you will criticize me for being in a
>muddle, which just further demonstrates your not-quite-sane autistic
>mental defect. But that is just another example of your fragmentation,
>I suppose (which can only prove your point through affirmation of the
>consequent, which would only prove your point about humans having
>trouble with logic through affirmation of the consequent, which ... oh,
>never mind.)
I think the above is a bit of a muddle, but, as I keep trying to
point out - that's folk psychology for you...
http://www.uni-hamburg.de/~kriminol/TS/tskr.htm
What I have tried to point out is that we do best when folk
psychological notions are eschewed altogether, except to describe
these *as* heuristics.. (in itself quite a difficult concept for
many to grasp..but then, perhaps you have to be a psychologist to
begin to understand how psychologists empirically describe these
behaviours without giving them a normative status.)
What I have had to say about "Fragmentation" is a consequence of
the failure of Leibniz's Law within intensional contexts, which
dominate folk psychological language. This in turn, I propose, is
characteristic of the way in which we are constrained to work
outside of the extensional stance.
'Suppose that each line of the truth table for the
conjunction of all [of a person's] beliefs could be
checked in the time a light ray takes to traverse the
diameter of a proton, an approximate "supercycle" time,
and suppose that the computer was permitted to run for
twenty billion years, the estimated time from the "big bang" dawn
of the universe to the present. A belief
system containing only 138 logically independent
propositions would overwhelm the time resources of this
supermachine.'
C. Cherniak (1986)
Minimal Rationality p.93
'Cherniak goes on to note that, while it is not easy to
estimate the number of atomic propositions in a typical
human belief system, the number must be vastly in excess
of 138. It follows that, whatever its practical benefits
might be, the proposed consistency-checking algorithm is
not something a human brain could even approach. Thus,
it would seem perverse, to put it mildly, to insist that
a person's cognitive system is doing a bad job of
reasoning because it fails to periodically execute the
algorithm and check on the consistency of the person's
beliefs.'
S. Stich (1990)
The Fragmentation of Reason p.152
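A rough back-of-the-envelope check of Cherniak's figure (the constants below
are round assumed values, not Cherniak's own):
(let* ((supercycle (/ 1.7d-15 3.0d8))              ; light crossing a proton, ~5.7e-24 s
       (age-of-universe (* 2.0d10 365.25 24 3600)) ; twenty billion years, in seconds
       (available-cycles (/ age-of-universe supercycle)))
  (list available-cycles                           ; ~1.1e41 supercycles
        (float (expt 2 138) 1d0)))                 ; ~3.5e41 truth-table lines
;; 138 logically independent propositions already exhaust the machine's budget.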
'I should like to see a new conceptual apparatus of a
logically and behaviourally straightforward kind by
which to formulate, for scientific purposes, the sort of
psychological information that is conveyed nowadays by
idioms of propositional attitude.'
W V O Quine (1978)
In the extract from Cherniak, the point being made is that as the
number of discrete propositions increases, the number of possible
combinations increases dramatically, or, as Shafir and Tversky (1992) say:
'Uncertain situations may be thought of as disjunctions
of possible states: either one state will obtain, or
another....
Shortcomings in reasoning have typically been attributed
to quantitative limitations of human beings as
processors of information. "Hard problems" are typically
characterized by reference to the "amount of knowledge
required," the "memory load," or the "size of the search
space"....Such limitations, however, are not sufficient
to account for all that is difficult about thinking. In
contrast to many complicated tasks that people perform
with relative ease, the problems investigated in this
paper are computationally very simple, involving a
single disjunction of two well defined states. The
present studies highlight the discrepancy between
logical complexity on the one hand and psychological
difficulty on the other. In contrast to the "frame
problem" for example, which is trivial for people but
exceedingly difficult for AI, the task of thinking
through disjunctions is trivial for AI (which routinely
implements "tree search" and "path finding" algorithms)
but very difficult for people. The failure to reason
consequentially may constitute a fundamental difference
between natural and artificial intelligence.'
E. Shafir and A. Tversky (1992)
Thinking through Uncertainty: Nonconsequantial Reasoning
and Choice
Cognitive Psychology 24,449-474
From a pattern recognition or classification stance, it is known that
as the number of predicates increases, the number of linearly separable
functions becomes proportionately smaller as is made clear by the
following extract from Wasserman (1989) when discussing the concept of
linear separability:
'We have seen that there is no way to draw a straight
line subdividing the x-y plane so that the exclusive-or
function is represented. Unfortunately, this is not an
isolated example; there exists a large class of
functions that cannot be represented by a single-layer
network. These functions are said to be linearly
inseparable, and they set definite bounds on the
capabilities of single-layer networks.
Linear separability limits single-layer networks to classification
problems in which the sets of points (corresponding to input values)
can be separated geometrically. For our two-input case, the separator
is a straight line. For three inputs, the separation is performed by a
flat plane cutting through the resulting three-dimensional space. For
four or more inputs, visualisation breaks down and we must mentally
generalise to a space of n dimensions divided by a "hyperplane", a
geometrical object that subdivides a space of four or more
dimensions.... A neuron with n binary inputs can have 2 exp n
different input patterns, consisting of ones and zeros. Because each
input pattern can produce two different binary outputs, one and zero,
there are 2 exp 2 exp n different functions of n variables.
As shown [below], the probability of any randomly selected function
being linearly separable becomes vanishingly small with even a modest
number of variables. For this reason single-layer perceptrons are, in
practice, limited to simple problems.
   n      2 exp 2 exp n    Number of Linearly Separable Functions
   1                  4                                         4
   2                 16                                        14
   3                256                                       104
   4             65,536                                     1,882
   5     4.3 x 10 exp 9                                    94,572
   6    1.8 x 10 exp 19                                 5,028,134
P. D. Wasserman (1989)
Linear Separability: Ch2. Neural Computing Theory and Practice
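The n = 2 row is small enough to verify by brute force. The sketch below is
not from Wasserman; the weight and bias grid of -2..2 is an assumption that
happens to be wide enough for two inputs:
(defun threshold-fn (w1 w2 b)
  "Boolean function computed by a single threshold unit."
  (lambda (x1 x2) (if (> (+ (* w1 x1) (* w2 x2) b) 0) 1 0)))

(defun truth-table (fn)
  (list (funcall fn 0 0) (funcall fn 0 1)
        (funcall fn 1 0) (funcall fn 1 1)))

(defun count-separable-functions ()
  "Count the distinct Boolean functions of two inputs a single unit can compute."
  (let ((tables '()))
    (loop for w1 from -2 to 2 do
      (loop for w2 from -2 to 2 do
        (loop for b from -2 to 2 do
          (pushnew (truth-table (threshold-fn w1 w2 b)) tables
                   :test #'equal))))
    (length tables)))

;; (count-separable-functions) => 14 of the 2 exp 2 exp 2 = 16 functions;
;; the two missing ones are exclusive-or and its negation.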
In later sections evidence is presented in the context of clinical vs.
actuarial judgment that human judgement is severely limited to
processing only a few variables. Beyond that, non-linear fits become
more frequent. This is discussed later in the context of connectionist
'intuitive', inductive inference and constraints on short-term or
working memory span (cf. Kyllonen & Christal 1990 - "Reasoning
Ability Is (Little More Than) Working-Memory Capacity?!"), but it is
worth mentioning here that in the epilogue to their expanded re-print
of their 1969 review of neural nets 'Perceptrons - An Introduction to
Computational Geometry', after reiterating their original criticism
that neural networks had only been shown to be capable of solving 'toy
problems', ie problems with a small number of dimensions, using 'hill
climbing' algorithms, Minsky and Papert (1988) effectively did a
'volte face' and said:
'But now we propose a somewhat shocking alternative:
Perhaps the scale of the toy problem is that on which,
in physiological actuality, much of the functioning of
intelligence operates. Accepting this thesis leads into
a way of thinking very different from that of the
connectionist movement. We have used the phrase "society
of mind" to refer to the idea that mind is made up of a
large number of components, or "agents," each of which
would operate on the scale of what, if taken in
isolation, would be little more than a toy problem.'
M Minsky and S Papert (1988) p266-7
and a little later, which is very germane to the fragmentation of
behaviour view being advanced in this volume:
'On the darker side, they [parallel distributed
networks] can limit large-scale growth because what any
distributed network learns is likely to be quite opaque
to other networks connected to it.'
ibid p.274
This *opacity* of aspects, or elements, of our own behaviour to
ourselves is central to the theme being developed in this volume,
namely that a science of behaviour must remain entirely extensional
and that there can not therefore be a science or technology of
psychology to the extent that this remains intensional (Quine
1960,1992). The discrepancy between experts' reports of the
information they use when making diagnoses (judgments) is reviewed in
more detail in a later section, however, research reviewed in Goldberg
1968, suggests that even where diagnosticians are convinced that they
use more than additive models (ie use interactions between variables -
which statistically may account for some of the non-linearities),
empirical evidence shows that in fact they only use a few linear
combinations of variables (cf. Nisbett and Wilson 1977, in this
context).
As an illustration of methodological solipsism (intensionalism) in
practice consider the following which neatly contrasts subtle
difference between the methodological solipsist approach and that of
the methodological or 'evidential' behaviourist.
Several years ago, a prison psychologist sought the views of prison
officers and governors as to who they considered to be 'subversives'.
Those considered 'subversive' were flagged 1, those not considered
subversive were flagged 0. The psychologist then used multiple
regression to predict this classification from a number of other
behavioural variables. From this he was able to produce an equation
which predicted subversiveness as a function of 4 variables: whether
or not the inmate had a firearms offence history, the number of
reports up to arrival at the current prison, the number of moves up to
arrival where the inmate had stayed more than 28 days, and the number
of inmate assaults up to arrival.
Note that the dependent variable was binary, the inmate being
classified as 'subversive' or 'not subversive'. The prediction
equation, which differentially weighted the 4 variables, therefore
predicted the dependent variable as a value between 0 and 1. Now the
important thing to notice here is that the behavioural variables were
being used to predict something which is essentially a propositional
attitude, ie the degree of certainty of the officers' beliefs that
certain inmates were subversive.
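Purely to make the mechanics concrete, a fitted equation of that kind is
applied like this (the coefficients below are invented placeholders, not the
weights from the study described above):
(defun subversiveness-score (firearms-offence-p reports moves assaults)
  "Weighted linear combination of four behavioural variables; illustrative weights only."
  (+ 0.05                                   ; intercept (invented)
     (* 0.20 (if firearms-offence-p 1 0))   ; firearms offence history
     (* 0.01 reports)                       ; reports up to arrival
     (* 0.03 moves)                         ; moves of more than 28 days up to arrival
     (* 0.08 assaults)))                    ; inmate assaults up to arrival

;; (subversiveness-score t 12 3 1) => ~0.54, read as a predicted degree of
;; "subversiveness"; a plain linear fit can of course stray outside 0..1.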
The methodological solipsist may well hold that the officer's beliefs
are what are important, however, the methodological behaviourist would
hold that what the officers thought was just *an approximation of what
the actual measures of inmate behaviour represented*, ie his thoughts
were just vague, descriptive terms for inmates who had lots of
reports, assaulted inmates and had been moved through lots of prisons,
and were probably in prison for violent offences. What the officers
thought was not perhaps, all that important, since we could just go to
the records and identify behaviours which are characteristic of
troublesome behaviour and then identify inmates as a function of those
measures (cf. Williams and Longley 1986).
In the one case the concern is likely to be with developing better and
better predictors of what staff THINK, and in the other, it becomes a
matter of simply recording better measures of classes of behaviour and
empirically establishing functional relations between those classes.
In the case of the former, intensional stance, one becomes interested
in the *psychology* of those exposed to such factors (ie those exposed
to the behaviour of inmates, and what they *vaguely or intuitively
describe it as)*. From the extensional stance (methodological
behaviourist) defended in these volumes, such judgments can only be a
**function** of the data that staff have had access to. From the
extensional stance, one is simply interested in recording *behaviour*
itself and deducing implicit relations. Ryle (1949) and many
influential behaviourists since (Quine 1960), have, along with Hahn
(1933) suggested that this is our intellectual limit anyway:
'It is being maintained throughout this book that when
we characterize people by mental predicates, we are not
making untestable inferences to any ghostly processes
occurring in streams of consciousness which we are
debarred from visiting; we are describing the ways in
which those people conduct parts of their predominantly
public behaviour.'
G. Ryle
The Concept of Mind (1949)
Using regression technology as outlined above is essentially how
artificial neural network software is used to make classifications, in
fact, there is now substantial evidence to suggest that the two
technologies are basically one and the same (Stone 1986), except that
in neural network technology, the regression variable weights are
opaque to the judge, cf. Kosko (1992):
'These properties reduce to the single abstract property
of *adaptive model-free function estimation*: Intelligent
systems adaptively estimate continuous functions from
data without specifying mathematically how outputs
depend on inputs...A function f, denoted f: X -> Y, maps
an input domain X to an output range Y. For every
element x in the input domain X, the function f uniquely
assigns the element y to the output range Y.. Functions
define causal hypotheses. Science and engineering paint
our pictures of the universe with functions.
B. Kosko (1992)
Neural Networks and Fuzzy Systems: A Dynamical Systems
Approach to Machine Intelligence p 19.
The rationale behind Sentence Management as outlined in the paper
'What are Regimes?' (Longley 1992) and in section D below, is that the
most effective way to bring about sustained behaviour change is not
through specific, formal training programmes, but through a careful
strategy of apposite allocation to activities which *naturally require
the behavioural skills* which an inmate may be deficient in. This
depends on standardised recording of activity and programme behaviour
*throughout sentence* which will provide a *historical and actuarial
record of attainment*. This will provide differential information to
guide management's decisions as how best to help inmates lead a
constructive life whilst in custody, and, hopefully, after release.
Initially, it will serve to support actuarial analysis of behaviour as
a practical working, inmate, and management, information system. In
time, it should provide data to enable managers to focus resources
where they are most required (ie provide comprehensive regime
profiles, which highlight strong and weak elements). Such a system is
only interested in what inmates 'think' or 'believe' to the extent
that what they 'think' and 'believe' are specific skills which the
particular activities and programmes require, and which can therefore
be systematically assessed as criteria of formative behaviour
profiling. What is required for effective decision making and
behaviour management is a history of behavioural performance in
activities and programmes, much like the USA system of Grade Point
Averages and attendance. All such behaviours are the natural skills
required by the activities and programmes, and all such assessment is
criterion reference based.
The alternative, intensional approach, of asking staff to identify
risk factors from the documented account of the offence, and
subsequently asking staff to look out for them in the inmate's prison
behaviour may well only serve to shape inmates to inhibit
(conditionally suppress) such behaviour, especially if their
progression through the prison system is contingent on this. However,
from animal studies of acquisition-extinction-reacquisition, there is
no evidence that such behaviour inhibition is likely to produce a
*permanent* change in the inmate's behaviour in the absence of the
inmate *learning new behaviours*. Such an approach is also blind to
base rates of behaviours. Only through a system which encouraged the
acquisition of *new* behaviours can we expect there to be a change in
risk, and even this would have to be *actuarially* determined. For a
proper estimate of risk, one requires a system where inmates can be
assessed with respect to standard demands of the regime. The standard
way to determine risk factors was to derive these from *statistical*
*analysis,* not from *clinical (intensional) judgement*.
Much of the rationale for this stance can be deduced from the
following. Throughout the 20th century, psychologists' evaluation of
the extent to which reasoning can be formally taught has been
pessimistic. From Thorndike (1913) through Piaget (see Brainerd 1978)
to Newell (1980) it has been maintained that:
'the modern.....position is that learned problem-solving
skills are, in general, idiosyncratic to the task.'
A. Newell 1980.
Furthermore, it has been argued that whilst people may in fact use
abstract inferential rules, these rules can not be formally taught to
any significant degree. They are learned instead under natural
conditions of development and cannot be improved by formal
instruction. This is essentially Piaget's position.
The above is, in fact, how Nisbett et al (1987) opened their SCIENCE
paper '*Teaching Reasoning*'. Reviewing the history of the concept of
formal discipline which looked to the use of latin and the classics to
train the 'muscles of the mind', Nisbett et. al provided some
empirical evidence on the degree to which one class of inferential
rules can be taught. They describe these rules as 'a family of
pragmatic inferential rule systems that people induce in the context
of solving recurrent everyday problems'. These include "causal
schemas", "contractual schemas" and "statistical heuristics". The
latter are clearly instances of inductive rather than deductive
inference.
Nisbett et. al. clearly pointed out that the same can not be said for
the teaching of deductive inference (i.e. formal instruction in
deductive logic or other syntactic rule systems). With respect to the
teaching of logical reasoning, Nisbett et. al. had the following to
say:
'Since highly abstract statistical rules can be taught
in such a way that they can be applied to a great range
of everyday life events, is the same true of the even
more abstract rules of deductive logic? We can report no
evidence indicating that this is true, and we can
provide some evidence indicating that it is not.....In
our view, when people reason in accordance with the
rules of formal logic, they normally do so by using
pragmatic reasoning schemas that happen to map onto the
solutions provided by logic.'
ibid. p.628
Such 'causal schemas' are known as 'intensional heuristics' (Agnoli
and Krantz 1989) and have been widely studied in psychology since the
early 1970s, primarily by research psychologists such as Tversky and
Kahneman (1974), Nisbett and Ross (1980), Kahneman, Slovic and Tversky
(1982), Holland et. al (1986) and Ross and Nisbett (1991).
A longitudinal study by Lehman and Nisbett (1990) looked at
differential improvements in the use of such heuristics in college
students classified by different subject groups. They found
improvements in the use of statistical heuristics in social science
students, but no improvement in conditional logic (such as the Wason
selection task). Conversely, the natural science and humanities
produced significant improvements in conditional logic. Interestingly,
there were no changes in students studying chemistry. Whilst the
authors took the findings to provide some support for their thesis
that reasoning can be taught, it must be appreciated that the findings
at the same time lend considerable support to the view that each
subject area inculcates its own particular type of reasoning, even in
highly educated individuals. That is, the data lend support to the
thesis that training in particular skills must look to training for
transfer and application within particular skill areas. This is
elaborated below in the context of the system of Sentence Management.
Today, formal modelling of such intensional processes is researched
using a technology known as 'Neural Computing' which uses inferential
statistical technologies closely related to regression analysis.
However, such technologies are inherently inductive. They take samples
and generalise to populations. They are at best pattern recognition
systems.
Such technologies must be contrasted with formal deductive logical
systems which are algorithmic rather than heuristic (extensional
rather than intensional). The algorithmic, or computational, approach
is central to classic Artificial Intelligence and is represented today
by the technology of relational databases along with rule and
Knowledge Information Based System (KIBS) which are based on the First
Order Predicate Calculus, the Robinson Resolution Principle (Robinson
1965,1979) and the long term objectives of automated reasoning (e.g.
Wos et. al 1992 and the Japanese Fifth Generation computing project) -
see Volume 2 and 3.
The degree to which intensional heuristics can be suppressed by
training is now controversial (Kahneman and Tversky 1983; Nisbett and
Ross 1980; Holland et al. 1986; Nisbett et al 1987; Agnoli and Krantz
1989; Gladstone 1989; Fong and Nisbett 1991; Ploger and Wilson 1991;
Smith et al 1992). In fact, the degree to which they are or are not
may be orthogonal to the main theme of this paper, since the main
thrust of the argument is that behaviour science should look to
deductive inferential technology, not inductive inference. Central to
the controversy, however, is the degree to which the suppression is
sustained, and the degree of generalisation and practical application
of even 'statistical heuristics'. For example, Ploger and Wilson
(1991) said in commentary on the 1991 Fong and Nisbett paper:
'G. T. Fong and R. E. Nisbett argued that, within the
domain of statistics, people possess abstract rules;
that the use of these rules can be improved by training;
and that these training effects are largely independent
of the training domain. Although their results indicate
that there is a statistically significant improvement in
performance due to training, they also indicate that,
even after training, most college students do not apply
that training to example problems.
D. Ploger & M. Wilson
Statistical reasoning: What is the role of inferential rule training?
Comment on Fong and Nisbett.
Journal of Experimental Psychology General; 1991 Jun Vol
120(2) 213-214
Furthermore, Gladstone (1989) criticises the stance adopted by the
same group in an article in American Psychologist (1988):
'[This paper]' criticizes the assertion by D. R. Lehman
et al. that their experiments support the doctrine of
formal discipline. The present author contends that the
work of Lehman et al. provides evidence that one must
teach for transfer, not that transfer occurs
automatically. The problems of creating a curriculum and
teaching it must be addressed if teachers are to help
students apply a rule across fields. Support is given to
E. L. Thorndike's (1906, 1913) assessment of the general
method of teaching for transfer.'
R. Gladstone (1989)
Teaching for transfer versus formal discipline.
American Psychologist; 1989 Aug Vol 44(8) 1159
What this research suggests is that whilst improvements can be made by
training in formal principles (such as teaching the 'Law of Large
Numbers'), this does not in fact contradict the stance of Piaget and
others that most of these inductive skills are in fact learned under
natural lived experience ('Erlebnis' and 'Lebenswelt', Husserl 1952, or
'Being-in-the-world' Heidegger 1928). Furthermore, there is evidence
from short term longitudinal studies of training in such skills that
not only is there a decline in such skills after even a short time,
but there is little evidence of application of the heuristics to novel
problem situations outside the training domain. This is the standard
and conventional criticism of 'formal education'. Throughout this
work, the basic message seems to be to focus training on specific
skills acquisition which will not so much generalise to novel
contexts, but find application in other, similar if not identical
contexts.
Most recently, Nisbett and colleagues have looked further at the
criteria for assessing the efficacy of cognitive skills training:
'A number of theoretical positions in psychology
(including variants of case-based reasoning, instance-
based analogy, and connectionist models) maintain that
abstract rules are not involved in human reasoning, or
at best play a minor role. Other views hold that the use
of abstract rules is a core aspect of human reasoning.
The authors propose 8 criteria for determining whether
or not people use abstract rules in reasoning. They
examine evidence relevant to each criterion for several
rule systems. There is substantial evidence that several
inferential rules, including modus ponens, contractual
rules, causal rules, and the law of large numbers, are
used in solving everyday problems. Hybrid mechanisms
that combine aspects of instance and rule models are
considered.'
E. E. Smith, C. Langston and R. E. Nisbett:
The case for rules in reasoning.
Cognitive Science; 1992 Jan-Mar Vol 16(1) 1-40
We use rules, it can be argued, when we apply extensionalist
strategies which are of course, by design, domain specific. Note that
in the history of logic it took until 1879 to discover Quantification
Theory. Furthermore, research on deductive reasoning itself suggests
strongly that the view developed in this volume is sound:
'Reviews 3 types of computer program designed to make
deductive inferences: resolution theorem-provers and
goal-directed inferential programs, implemented
primarily as exercises in artificial intelligence; and
natural deduction systems, which have also been used as
psychological models. It is argued that none of these
methods resembles the way in which human beings usually
reason. They [humans] appear instead to depend, not on
formal rules of inference, but on using the meaning of
the premises to construct a mental model of the relevant
situation and on searching for alternative models of the
premises that falsify putative conclusions.'
P. N. Johnson-Laird
Human and computer reasoning.
Trends in Neurosciences; 1985 Feb Vol 8(2) 54-57
'Contends that the orthodox view in psychology is that
people use formal rules of inference like those of a
natural deduction system. It is argued that logical
competence depends on mental models rather than formal
rules. Models are constructed using linguistic and
general knowledge; a conclusion is formulated based on
the model that maintains semantic information, expresses
it parsimoniously, and makes explicit something not
directly stated by the premise. The validity of the
conclusion is tested by searching for alternative models
that might refute the conclusion. The article summarizes
a theory developed in a 1991 book by P. N. Johnson-Laird
and R. M. Byrne.'
P. N. Johnson-Laird & R. M. Byrne
Precis of Deduction.
Behavioral and Brain Sciences; 1993 Jun Vol 16(2) 323-
380
That is, human reasoning tends to focus on content or intension. As
has been argued elsewhere, such heuristic strategies invariably suffer
as a consequence of their context specificity and constraints on
working memory capacity.
--
David Longley
There is a logical possibility that in restricting the subject matter
of psychology, and thereby the deployment of psychologists, to what
can only be analysed and managed from a Methodological Solipsistic
(cognitive) perspective, one will render some very significant results
of research in psychology irrelevant to applied *behaviour* science
and technology, unless taken as a vindication of the stance that
behaviour is essentially context specific. As explicated above,
intensions are not, in principle, amenable to quantitative analysis.
They are, in all likelihood, only domain or context specific. A few
further examples should make these points clearer.
Many Cognitive Psychologists study 'Deductive Inference' from the
perspective of 'psychologism', a doctrine, which, loosely put,
equates the principles of logic with those of thinking. Yet the work
of Church (1936), Post (1936) and Turing (1937) clearly established
that the principles of 'effective' computation are not psychological,
and can in fact be mechanically implemented. However, researchers in
'Cognitive Science' such as Johnson-Laird and Byrne (1992) have
reviewed 'mental models' which provide an account for some of the
difficulties and some of the errors observed in human deductive
reasoning (Wason 1966). Throughout the 1970s, substantial empirical
evidence began to accumulate to refute the functionalist (Putnam 1967)
thesis that human cognitive processes were formal and computational.
Even well educated subjects it seems, have considerable difficulty
with relatively simple deductive Wason Selection tasks such as the
following:
_____ _____ _____ _____
| | | | | | | |
| A | | T | | 4 | | 7 |
|_____| |_____| |_____| |_____|
Where the task is to test the rule "if a card has a vowel on one side
it has an even number on the other".
Or in the following:
_____ _____ _____ _____
| | | | | | | |
| A | | 7 | | D | | 3 |
|_____| |_____| |_____| |_____|
where subjects are asked to test the rule 'each card that has an A on
one side will have a 3 on the other'. In both problems they can only
turn over a maximum of two cards to ascertain the truth of the rule.
Similarly, the majority have difficulty with the following,
similar problem, where the task is to reveal up to two hidden
halves of the cards to ascertain the truth or falsehood of the rule
'whenever there is a O on the left there is a O on the right':
_____________ _____________ ____________ ____________
| ||||||| | ||||||| ||||||| | ||||||| |
| O ||||||| | ||||||| ||||||| O | ||||||| |
|______||||||| |______||||||| |||||||______| |||||||______|
(a) (b) (c) (d)
Yet computer technology has no difficulty with these examples of the
application of basic deductive inference rules (modus ponens and modus
tollens). The above require the application of the material
conditional. [1] is tested by turning cards A and 7, [2] by turning
cards A and 7, and [3] by turning cards (a) and (d). Logicians, and
others trained in the formal rules of deductive logic often fail to
solve such problems:
'Time after time our subjects fall into error. Even some
professional logicians have been known to err in an
embarrassing fashion, and only the rare individual takes
us by surprise and gets it right. It is impossible to
predict who he will be. This is all very puzzling....'
P. C. Wason and P. N. Johnson-Laird (1972)
Psychology of Reasoning
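For contrast, the mechanical version of the first problem is a few lines of
code (a sketch, not from the original text): turn a card exactly when its
visible face is a vowel (modus ponens) or an odd number (modus tollens):
(defun vowel-p (face)
  (and (characterp face) (member face '(#\A #\E #\I #\O #\U))))

(defun odd-number-p (face)
  (and (integerp face) (oddp face)))

(defun cards-to-turn (visible-faces)
  "Faces whose hidden sides could falsify the rule vowel -> even number."
  (remove-if-not (lambda (face)
                   (or (vowel-p face) (odd-number-p face)))
                 visible-faces))

;; (cards-to-turn '(#\A #\T 4 7)) => (#\A 7)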
Furthermore, there is impressive empirical evidence that formal
training in logic does not generalise to such problems (Nisbett et al
1987). Yet why is this so if, in fact, human reasoning is, as the
cognitivists, have claimed, essentially logical and computational?
Wason (1966) also provided subjects with numbers which increased in
series, asking them to identify the rule. In most cases, the simple
fact that the examples shared nothing more than an increasing
progression was overlooked, and whatever hypotheses subjects had
formed were held onto even though the actual rule was subsequently
made clear. This persistence
of belief, and rationalisation of errors despite debriefing and
exposure to contrary evidence, is well documented in psychology, and
is a phenomenon which methodologically is, as Popper makes clear in
the leading quote to this paper, at odds with the formal advancement
of knowledge. Here is what Sir Karl Popper (1965) had to say about
this matter:
'My study of the CONTENT of a theory (or of any
statement whatsoever) was based on the simple and
obvious idea that the informative content of the
CONJUNCTION, ab, of any two statements, a, and b, will
always be greater than, or at least equal to, that of
its components.
Let a be the statement 'It will rain on Friday'; b the
statement 'It will be fine on Saturday'; and ab the
statement 'It will rain on Friday and it will be fine on
Saturday': it is then obvious that the informative
content of this last statement, the conjunction ab, will
exceed that of its component a and also that of its
component b. And it will also be obvious that the
probability of ab (or, what is the same, the probability
that ab will be true) will be smaller than that of
either of its components.
Writing Ct(a) for 'the content of the statement a', and
Ct(ab) for 'the content of the conjunction a and b', we
have
(1) Ct(a) <= Ct(ab) >= Ct(b)
This contrasts with the corresponding law of the
calculus of probability,
(2) p(a) >= p(ab) <= p(b)
where the inequality signs of (1) are inverted. Together
these two laws, (1) and (2), state that with increasing
content, probability decreases, and VICE VERSA; or in
other words, that content increases with increasing
IMprobability. (This analysis is of course in full
agreement with the general idea of the logical CONTENT
of a statement as the class of ALL THOSE STATEMENTS
WHICH ARE LOGICALLY ENTAILED by it. We may also say that
a statement a is logically stronger than a statement b
if its content is greater than that of b - that is to
say, if it entails more than b.)
This trivial fact has the following inescapable
consequences: if growth of knowledge means that we
operate with theories of increasing content, it must
also mean that we operate with theories of decreasing
probability (in the sense of the calculus of
probability). Thus if our aim is the advancement or
growth of knowledge, then a high probability (in the
sense of the calculus of probability) cannot possibly be
our aim as well: THESE TWO AIMS ARE INCOMPATIBLE.
I found this trivial though fundamental result about
thirty years ago, and I have been preaching it ever
since. Yet the prejudice that a high probability must be
something highly desirable is so deeply ingrained that
my trivial result is still held by many to be
'paradoxical'.
K. Popper
Truth, Rationality, and the Growth of Knowledge
Ch. 10, p 217-8
CONJECTURES AND REFUTATIONS (1965)
Modus tollens and the extensional principle that a compound event can
only be less probable than (or as probable as) its component events taken independently
is fundamental to the logic of scientific discovery, and yet this,
along with other principles of extensionality (deductive logic) seem
to be principles which are in considerable conflict with intuition, as
Kahneman and Tversky (1983) demonstrated with their illustration of
the 'Linda Problem'. In conclusion, the above authors wrote, twenty
years after Wason's experiments on deductive reasoning and Popper's
(1965) remarks in 'Conjectures and Refutations':
'In contrast to formal theories of belief, intuitive
judgments of probability are generally not extensional.
People do not normally analyse daily events into
exhaustive lists of possibilities or evaluate compound
probabilities by aggregating elementary ones. Instead,
they use a limited number of heuristics, such as
representativeness and availability (Kahneman et al.
1982). Our conception of judgmental heuristics is based
on NATURAL ASSESSMENTS that are routinely carried out as
part of the perception of events and the comprehension
of messages. Such natural assessments include
computations of similarity and representativeness,
attributions of causality, and evaluations of the
availability of associations and exemplars. These
assessments, we propose, are performed even in the
absence of a specific task set, although their results
are used to meet task demands as they arise. For
example, the mere mention of "horror movies" activates
instances of horror movies and evokes an assessment of
their availability. Similarly, the statement that Woody
Allen's aunt had hoped that he would be a dentist
elicits a comparison of the character to the stereotype
and an assessment of representativeness. It is
presumably the mismatch between Woody Allen's
personality and our stereotype of a dentist that makes
the thought mildly amusing.. Although these assessments
are not tied to the estimation of frequency or
probability, they are likely to play a dominant role
when such judgments are required. The availability of
horror movies may be used to answer the question "What
proportion of the movies produced last year were horror
movies?", and representativeness may control the
judgement that a particular boy is more likely to be an
actor than a dentist.
The term JUDGMENTAL HEURISTIC refers to a strategy -
whether deliberate or not - that relies on a natural
assessment to produce an estimation or a prediction.
...Previous discussions of errors of judgement have
focused on deliberate strategies and on
misinterpretations of tasks. The present treatment calls
special attention to the processes of anchoring and
assimilation, which are often neither deliberate nor
conscious. An example from perception may be
instructive: If two objects in a picture of a three-
dimensional scene have the same picture size, the one
that appears more distant is not only seen as "really"
larger but also larger in the picture. The natural
computation of real size evidently influences the (less
natural) judgement of picture size, although observers
are unlikely to confuse the two values or to use the
former to estimate the latter.
The natural assessments of representativeness and
availability do not conform to the extensional logic of
probability theory.'
A. Tversky and D. Kahneman
Extensional Versus Intuitive Reasoning:
The Conjunction Fallacy in Probability Judgment.
Psychological Review Vol 90(4) 1983 p.294
The study of Natural Deduction (Gentzen 1935; Prawitz 1971; Tennant
1990) as a psychological process (1983) is really just the study of
the performance of a skill (like riding a bicycle in fact), which
attempts to account for why some of the difficulties with deduction
per se occur. The best models here may turn out to be connectionist,
where each individual's model ends up being almost unique in its fine
detail. There is a problem for performance theories, as Johnson-Laird
and Byrne (1991) point out:
'A major difficulty for performance theories based on
formal logic is that people are affected by the content
of a deductive system..yet formal rules ought to apply
regardless of content. That is what they are: rules that
apply to the logical form of assertions, once it has
been abstracted from their content.'
P. N. Johnson-Laird and R. M. J. Byrne (1991)
Deduction p.31
The theme of this volume up to this point has been that methodological
solipsism is unlikely to reveal much more than the shortcomings and
diversity of social and personal judgment and the context specificity
of behaviour. It took until 1879 for Frege to discover the Predicate
Calculus (Quantification Theory), and a further half century before
Church (1936), Turing (1937) and others laid the foundations for
computer and cognitive science through their collective work on
recursive function theory. From empirical evidence, and developments
in technology, it looks like human and other animal reasoning is
primarily inductive and heuristic, not deductive and algorithmic.
Human beings have considerable difficulties with the latter, and this
is normal. It has taken considerable intellectual effort to discover
formal, abstract, extensional principles, often only with the support
of logic, mathematics and computer technology itself. The empirical
evidence, reviewed in this volume is that extensional principles are
not widely applied except in specific professional capacities which
are domain-specific. In fact, the simple observation that the discovery
of such principles required considerable effort should perhaps make us
more ready to accept that they are unlikely to be spontaneously
applied in everyday reasoning and problem solving.
For further coverage of the 'counter-intuitive' nature of deductive
reasoning (and therefore its low frequency in everyday practice) see
Sutherland's 1992 popular survey 'Irrationality', or Plous (1993) for
a recent review of the psychology of judgment and decision making. For
a thorough survey of the rise (and possibly the fall) of Cognitive
Science, see Putnam 1986, or Gardner 1987. The latter concluded his
survey of the Cognitive Revolution within psychology with a short
statement which he referred to as the 'computational paradox'. One
thing that Cognitive Science has shown us is that the computer or
Turing Machine is not a good model of how people reason, at least not
in the Von-Neumann Serial processing sense. Similarly, people do not
seem to think in accordance with the axioms of formal, extensional
logic. Instead, they learn rough-and-ready heuristics which they
try to apply to problems in a very rough, approximate way.
Accordingly, Cognitive Science may well turn to the work of Church,
Turing and other mathematical logicians who, in the wake of Frege,
have worked to elaborate what effective processing is. We will then be
faced with the strange situation of human psychology being of little
practical interest, except as a historical curiosity, an example of
pre-Fregean logic and pre-Church (1936) computation. Behaviour science
will pay as little attention to the 'thoughts and feelings' of 'folk
psychology' as contemporary physics does to quaint notions of 'folk
physics'. For some time, experimental psychologists working within the
information processing (computational) tradition have been working to
replace concepts such as 'general reasoning capacity' with more
mechanistic notions such as 'Working Memory' (Baddeley 1986):
'This series of studies was concerned with determining
the relationship between general reasoning ability (R)
and general working-memory capacity (WM). In four
studies, with over 2000 subjects, using a variety of
tests to measure reasoning ability and working-memory
capacity, we have demonstrated a consistent and
remarkably high correlation between the two factors. Our
best estimates of the correlation between WM and R were
.82, .88, .80 and .82 for studies 1 through 4
respectively.
...
The finding of such a high correlation between these two
factors may surprise some. Reasoning and working-memory
capacity are thought of differently and they arise from
quite different traditions. Since Spearman (1923),
reasoning has been described as an abstract, high level
process, eluding precise definition. Development of good
tests of reasoning ability has been almost an art form,
owing more to empirical trial-and-error than to a
systematic delineation of the requirements such tests
must satisfy. In contrast, working memory has its roots
in the mechanistic, buffer-storage model of information
processing. Compared to reasoning, short-term storage
has been thought to be a more tractable, demarcated
process.'
P. C. Kyllonen & R. E. Christal (1990)
Reasoning Ability Is (Little More Than) Working-Memory
Capacity
Intelligence 14, 389-433
Such evidence stands well with the logical arguments of Cherniak which
were introduced in Section A, and which are implicit in the following
introductory remarks of Shinghal (1992) on automated reasoning:
'Suppose we are given the following four statements:
1. John awakens;
2. John brings a mop;
3. Mother is delighted, if John awakens and cleans his room;
4. If John brings a mop, then he cleans his room.
The statements being true, we can reason intuitively to
conclude that Mother is delighted. Thus we have deduced
a fact that was not explicitly given in the four
statements. But if we were given many statements, say a
hundred, then intuitive reasoning would be difficult.
Hence we wish to automate reasoning by formalizing it
and implementing it on a computer. It is then usually
called automated theorem proving. To understand
computer-implementable procedures for theorem proving,
one should first understand propositional and predicate
logics, for those logics form the basis of the theorem
proving procedures. It is assumed that you are familiar
with these logics.'
R. Shinghal (1992)
Formal Concepts in Artificial Intelligence: Fundamentals
Ch.2 Automated Reasoning with Propositional Logic p.8
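By way of illustration only, Shinghal's little example can be
mechanised in a few lines. The sketch below (Python, with fact and
rule names of my own invention, not Shinghal's) simply applies forward
chaining until nothing new follows:

# A minimal forward-chaining sketch of the example above (illustrative
# only). Facts and rules are encoded by hand; a real theorem prover
# would work from a formal syntax for propositional logic instead.

facts = {"john_awakens", "john_brings_mop"}

# Each rule is (set of premises, conclusion).
rules = [
    ({"john_brings_mop"}, "john_cleans_room"),                   # statement 4
    ({"john_awakens", "john_cleans_room"}, "mother_delighted"),  # statement 3
]

def forward_chain(facts, rules):
    """Repeatedly apply the rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print("mother_delighted" in forward_chain(facts, rules))   # -> True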
Automated report writing and automated reasoning drawing on actuarial
data are fundamental to the PROBE project. In contrast to such work
using deductive inference, Gluck and Bower (1988) have modelled human
inductive reasoning using artificial neural network technology (which
is heuristic, based on constraint satisfaction/approximation or 'best
fit', rather than being 'production rule' based). That is, it is
unlikely that anyone spontaneously reasons using truth-tables or the
Resolution Rule (Robinson 1965). Furthermore, Rescorla (1988), perhaps
the dominant US spokesman for research in Pavlovian Conditioning, has
drawn attention to the fact that Classical Conditioning should perhaps
be seen as the experimental modelling of inductive inferential
'cognitive' heuristic processes. Throughout this paper, it is being
argued that such inductive inferences are in fact best modelled using
artificial neural network technology, and that such processing is
intensional, with all of the traditional problems of intensionality:
'Connectionist networks are well suited to everyday
common sense reasoning. Their ability to simultaneously
satisfy soft constraints allows them to select from
conflicting information in finding a plausible
interpretation of a situation. However, these networks
are poor at reasoning using the standard semantics of
classical logic, based on truth in all possible models.'
M. Derthick (1990)
Mundane Reasoning by Settling on a Plausible Model
Artificial Intelligence 46,1990,107-157
and perhaps even more familiarly:
'Induction should come with a government health warning.
A baby girl of sixteen months hears the word 'snow' used
to refer to snow. Over the next months, as Melissa
Bowerman has observed, the infant uses the word to refer
to: snow, the white tail of a horse, the white part of a
toy boat, a white flannel bed pad, and a puddle of milk
on the floor. She is forming the impression that 'snow'
refers to things that are white or to horizontal areas
of whiteness, and she will gradually refine her concept
so that it tallies with the adult one. The underlying
procedure is again inductive.'
P. N. Johnson-Laird (1988)
Induction, Concepts and Probability p.238: The Computer
and The Mind
Connectionist systems, it is claimed, do not represent knowledge as
production rules, ie as well-formed formulae represented in the syntax
of the predicate calculus (using conditionals, modus ponens, modus
tollens and the quantifiers), but as connection weights between
activated predicates in a parallel distributed network:
'Lawful behavior and judgments may be produced by a
mechanism in which there is no explicit representation
of the rule. Instead, we suggest that the mechanisms
that process language and make judgments of
grammaticality are constructed in such a way that their
performance is characterizable by rules, but that the
rules themselves are not written in explicit form
anywhere in the mechanism.'
D E Rumelhart and D McClelland (1986)
Parallel Distributed Processing Ch. 18
Such systems are function-approximation systems, and are
mathematically a development of Kolmogorov's Mapping Neural Network
Existence Theorem (1957). Such networks consist of three layers of
processing elements. Those of the bottom layer simply distribute the
input vector (a pattern of 1s and 0s) to the processing elements of
the second layer. The processing elements of this middle or hidden
layer implement a *'transfer function'* (more on this below). The top
layer consists of output units.
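Purely as a sketch of the architecture just described (my own
illustration, not Hecht-Nielsen's; the logistic transfer function and
the random weights are arbitrary choices for the example), such a
three-layer mapping network can be written as:

import numpy as np

def sigmoid(z):
    """One common choice of hidden-layer transfer function."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 4, 8, 2           # layer sizes chosen arbitrarily
W1 = rng.normal(size=(n_hidden, n_in))    # input -> hidden weights
W2 = rng.normal(size=(n_out, n_hidden))   # hidden -> output weights

def forward(x):
    """The bottom layer distributes the input vector, the hidden layer
    applies the transfer function, the top layer gives the outputs."""
    hidden = sigmoid(W1 @ x)
    return W2 @ hidden

print(forward(np.array([1.0, 0.0, 1.0, 1.0])))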
An important feature of Kolmogorov's Theorem is that it is not
constructive. That is, it is not algorithmic or 'effective'. Since the
proof of the theorem is not constructive, we do not know how to
determine the key quantities of the transfer functions. The theorem
simply tells us that such a three layer mapping network must exist. As
Hecht-Nielsen (1990) remarks:
'Unfortunately, there does not appear to be too much
hope that a method of finding the Kolmogorov network
will be developed soon. Thus, the value of this result
is its intellectual assurance that continuous vector
mappings of a vector variable on the unit cube
(actually, the theorem can be extended to apply to any
COMPACT, ie, closed and bounded, set) can be implemented
EXACTLY with a three-layer neural network.'
R. Hecht-Nielsen (1990)
Kolmogorov's Theorem
Neurocomputing
That is, we may well be able to find weight-matrices which capture or
embody certain functions, but we may not be able to say 'effectively'
what the precise equations are which algorithmically compute such
functions. This is often summarised by statements to the effect that
neural networks can model or fit solutions to sample problems, and
generalise to new cases, but they cannot provide a rule as to how
they make such classifications or inferences. Their ability to do so
is distributed across the weightings of the whole weight matrix of
connections between the three layers of the network. The above is to
be contrasted with the fitting of linear discriminant functions to
partition or classify an N-dimensional space (N being a direct
function of the number of classes or predicates). Fisher's
discriminant analysis (and the closely related linear multiple
regression technology) arrives at the discriminant function
coefficients through the Gaussian method of Least Mean Squares, each b
value and the constant being arrived at deductively via the solution
of simultaneous equations. Function approximation, or the
determination of hidden layer weights or connections, is instead based
on recursive feedback; elsewhere within behaviour science this is
known as 'reinforcement', the differential strengthening or weakening
of connections depending on feedback or knowledge of results. Kohonen
(1988), commenting on "Connectionist Models" in contrast to
conventional, extensionalist relational databases, writes:
'Let me make it completely clear that one of the most
central functions coveted by the "connectionist" models
is the ability to solve *implicitly defined relational
structures*. The latter, as explained in Sect. 1.4.5,
are defined by *partial relations*, from which the
structures are determined in a very much similar way as
solutions to systems of algebraic equations are formed;
all the values in the universe of variables which
satisfy the conditions expressed as the equations
comprise, by definition, the possible solutions. In the
relational structures, the knowledge (partial
statements, partial relations) stored in memory
constitutes the universe of variables, from which the
solutions must be sought; and the conditions expressed
by (eventually incomplete) relations, ie, the "control
structure" [9.20] correspond to the equations.
Contrary to the conventional database machines which
also have been designed to handle such relational
structures, the "connectionist" models are said to take
the relations, or actually their strengths into account
statistically. In so doing, however they only apply the
Euclidean metric, or the least square loss function to
optimize the solution. This is not a very good
assumption for natural data.'
T. Kohonen (1988)
Ch. 9 Notes on Neural Computing
In Self-Organisation and Associative Memory
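To make the contrast drawn above concrete, the sketch below (mine; the
data, learning rate and iteration count are invented for the example)
obtains regression weights once deductively, by solving the
least-squares normal equations, and once by 'recursive feedback', ie
repeated small adjustments driven by the error signal:

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                  # 100 cases, 3 predictors
true_b = np.array([2.0, -1.0, 0.5])
y = X @ true_b + 0.1 * rng.normal(size=100)    # noisy criterion

# Deductive route: solve the least-squares normal equations directly.
b_exact = np.linalg.solve(X.T @ X, X.T @ y)

# 'Recursive feedback' route: strengthen or weaken each weight a little
# after every pass, in proportion to the error signal it receives.
b_iter = np.zeros(3)
for _ in range(2000):
    error = y - X @ b_iter
    b_iter += 0.001 * (X.T @ error)

print(np.round(b_exact, 3), np.round(b_iter, 3))   # both approach true_b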
Throughout the 1970s Nisbett and colleagues studied the use of
probabilistic heuristics in real world human problem solving,
primarily in the context of Attribution Theory (H. Kelley 1967, 1972).
Such inductive as opposed to deductive heuristics of inference do
indeed seem to be influenced by training (Nisbett and Krantz 1983,
Nisbett et al. 1987). Statistical heuristics are naturally applied in
everyday reasoning if subjects are trained in the Law of Large
Numbers. This is not surprising, since application of such heuristics
is an example of response generalisation - which is how psychologists
have traditionally studied the vicissitudes of inductive inference
within Learning Theory. As Wagner (1981) has pointed out, we are
perfectly at liberty to use the language of Attribution Theory as an
alternative, this exchangeability of reference system being an
instance of Quinean Ontological Relativity, where what matters is not
so much the names in argument positions, or even the predicates
themselves, but the *relations* (themselves at least two-place
predicates) which emerge from such systems.
Under most natural circumstances, inductive inference is irrational
(cf. Popper 1936, Kahneman et al. 1982, Dawes, Faust and Meehl 1989,
Sutherland 1992). This is because it is generally based on
unrepresentative sampling (drawing on the 'availability' and
'representativeness' heuristics), and this is so simply because that
is how data in a structured culture often naturally presents itself.
Research has therefore demonstrated that human inference is seriously
at odds with formal deductive logical reasoning, and the algorithmic
implementation of those inferential processes by computers (Church
1936, Post 1936, Turing 1936). One of the main points of this paper is
that we generally turn to the formal deductive technology of
mathematico-logical method (science) to compensate for the heuristics
and biases which typically characterise natural inductive inference.
Where possible, we turn to *relational databases and 4GLs* (recursive
function theory and mathematical logic) to provide descriptive, and
deductively valid pictures of individuals and collectives.
This large and unexpected body of empirical evidence from decision
theory, cognitive experimental social psychology and Learning Theory
began accumulating in the mid to late 1970s (cf. Kahneman, Tversky and
Slovic 1982, Putnam 1986, Stich 1990), and began to cast serious doubt
on the viability of the 'computational theory' of mind (Fodor
1975, 1980) which was basic to functionalism (Putnam 1986). That is,
the substantial body of empirical evidence which accumulated within
Cognitive Psychology itself suggested that, contrary to the doctrine
of functionalism, there exists a system of independent, objective
knowledge and reasoning against which we can judge human and other
animal cognitive processing. However, it gradually became appreciated
that the digital computer is not a good model of human information
processing, at least not unless this is conceived in terms of 'neural
computing' (also known as 'connectionism' or 'Parallel Distributed
Processing'). The application of formal rules of logic and mathematics
to the analysis of behaviour solely within the language of formal
logic is the professional business of Applied Behaviour Scientists.
Outside of the practice of those professional skills, the scientist
himself is as prone to the irrationality of intensional heuristics as
are laymen (Wason 1966). Within the domain of formal logic applied to
the analysis of behaviour, the work undertaken by applied scientists
is impersonal. The scientists' professional views are dictated by the
laws of logic and mathematics rather than personal opinion
(heuristics).
Applied psychologists, particularly those working in the area of
Criminological Psychology, are therefore faced with a dilemma. Whilst
many of their academic colleagues are *studying* the heuristics and
biases of human cognitive processing, the applied psychologist is
generally called upon to do something quite different, yet is largely
prevented from doing so for lack of relational systems to provide the
requisite distributional data upon which to use the technology of
algorithmic decision making. In the main, the applied criminological
psychologist, as behaviour scientist, is called upon to bring about
behaviour change, rather than to better understand or explicate the
natural heuristics of cognitive (clinical) judgement. To the applied
psychologist, the low correlation between self-report and actual
behaviour, the low consistency of behaviour across situations, the low
efficacy of prediction of behaviours such as 'dangerousness' on the
basis of clinical judgment, and the fallibility of assessments based
on interviews, are all testament to the now *well documented
unreliability of intensional heuristics (cognitive processes) as data
sources, and we have already pointed to why this is so.* Yet
generally, psychologists can rely on no other sources, since the
existing Inmate Information Systems are inadequate. Thus, whilst applied
psychologists know from research that they must rely on distributional
data to establish their professional knowledge base, and that they
must base their work with individuals (whether prisoners, governors or
managers) on extensional analysis of such knowledge bases, *they
neither have the systems available nor the influence to have such
systems established, despite powerful scientific evidence (Dawes,
Faust and Meehl 1989) that their professional services in many areas
depend on the existence and use of such systems.* What applied
psychologists have learned therefore is to eschew intensional
heuristics and look instead to the formal technology of extensional
analysis of observations of behaviour. The fact that training in
formal statistics and deductive logic is difficult, particularly the
latter, makes this a challenge, since most of the required skills are
only likely to be applicable when sitting in front of a computer
keyboard (Holland et al. 1986). It is particularly challenging in that
the information systems are generally inadequate to allow
professionals to do what they are trained to do.
Over the past five years (1988-1993), a programme has been developed
which is explicitly naturalistic in that it seeks to record
inmate/environment (regime) interactions. This system is the
PROBE/Sentence Management system. It breaks out of solipsism by making
all assessments of behaviour, and all inmate targets, *RELATIVE to
predetermined requirements of the routines and structured activities
defined under function 17 of the annual Governors Contract*. It is by
design a 'formative profiling system' which is 'criterion referenced'.
The alternative, intensional heuristics, which are the mark of natural
human judgement (hence our rich folk psychological vocabulary of
metaphor) have to be contrasted with extensional analysis and
judgement using technology based on the deductive algorithms of the
First Order Predicate Calculus (Relational Database Technology). This
is not only coextensive with the 'scope and language of science'
(Quine 1954) but is also, to the best of our knowledge from research
in Cognitive Psychology, an effective compensatory system to the
biases of natural intensional, inductive heuristics (Agnoli and Krantz
1989). Whilst a considerable amount of evidence suggests that training
in formal logic and statistics is not in itself sufficient to suppress
usage of intensional heuristics in any enduring sense, ie that
generalisation to extra-training contexts is limited, there is
evidence that judgement can be rendered more rational by training in
the use of extensional technology. The demonstration by Kahneman and
Tversky (1983) that subjects generally fail to apply the extensional
conjunction rule in probability (that a conjunction is always equally
or less probable than either of its conjuncts), and that this too is
generally resistant to counter-training, is another example, this
time within probability theory (a deductive system), of the failure
of extensional rules in applied contexts. Careful use of I.T. and
principles of
deductive inference (e.g. semantic tableaux, Herbrand models, and
Resolution methods) promise, within the limits imposed by Godel's
Theorem, to keep us on track if we restrict our technology to the
extensional.
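A small worked example (mine, with arbitrary figures) shows what the
extensional conjunction rule demands: however the probability space is
set up, the conjunction of two events can never be more probable than
either event taken alone.

import random

random.seed(0)

# For any two events A and B, P(A and B) <= P(A) and P(A and B) <= P(B).
# Check this over a few randomly generated finite probability spaces.
for _ in range(5):
    weights = [random.random() for _ in range(8)]
    total = sum(weights)
    p = [w / total for w in weights]            # a probability distribution
    A = set(random.sample(range(8), 4))
    B = set(random.sample(range(8), 4))
    p_A = sum(p[i] for i in A)
    p_B = sum(p[i] for i in B)
    p_AB = sum(p[i] for i in A & B)
    assert p_AB <= p_A and p_AB <= p_B
    print(round(p_A, 3), round(p_B, 3), round(p_AB, 3))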
Before leaving the concept of Methodological Solipsism, here's how one
commentator reviewed the situation in the context of the work of
perhaps psychology's best known radical behaviourist:
'Meanings Are Not 'In the Head'
Skinner has developed a case for this claim in the book,
VERBAL BEHAVIOR (1957), and elsewhere, where he
maintains that meaning, rather than being a property of
an utterance itself, is to be found in the nature of the
relationship between occurrence of the utterance and its
context. It is important enough to put in his own words.
..meaning is not properly regarded as a property either
of a response or a situation but rather of the
contingencies responsible for both the topography of
behavior and the control exerted by stimuli. To take a
primitive example, if one rat presses a lever to obtain
food when hungry while another does so to obtain water
when thirsty, the topographies of their behaviors may be
indistinguishable, but they may be said to differ in
meaning: to one rat pressing the lever 'means food'; to
the other it 'means' water. But these are aspects of the
contingencies which have brought behavior under the
control of the current occasion. Similarly, if a rat is
reinforced with food when it presses the lever in the
presence of a flashing light but with water when the
light is steady, then it could be said that the flashing
light means food and the steady light means water, but
again these are references not to some property of the
light but to the contingencies of which the lights have
been parts.
The same point may be made, but with many more
implications, in speaking of the meaning of verbal
behavior. The over-all function of the behavior is
crucial. In an archetypal pattern a speaker is in
contact with a situation to which a listener is disposed
to respond but with which he is not in contact. A verbal
response on the part of the speaker makes it possible
for the listener to respond appropriately. For example,
let us suppose that a person has an appointment, which
he will keep by consulting a clock or a watch. If none
is available, he may ask someone to tell him the time,
and the response permits him to respond effectively...
*The meaning of a response for the speaker* includes the
stimulus which controls it (in the example above, the
setting on the face of a clock or watch) and possibly
aversive aspects of the question, from which a response
brings release. *The meaning for the listener* is close
to the meaning the clock face would have if it were
visible to him, but it also includes the contingencies
involving the appointment, which make a response to the
clock face or the verbal response probable at such a
time..
One of the unfortunate implications of communication
theory is that the meanings for speaker and listener are
the same, that something is made common to both of them,
that the speaker conveys an idea or meaning, transmits
information, or imparts knowledge, as if his mental
possessions then become the mental possessions of the
listener. There are no meanings which are the same in
the speaker and listener. Meanings are not independent
entities...
Skinner, 1974, pp.90-2
One does not have to take Skinner's word alone, however,
for much current philosophical work also leads to the
conclusion that meanings are not in the head. The issue
extends beyond the problem of meaning construed as a
linguistic property to the problem of intensionality and
the interpretation of mentality itself. While the
reasoning behind this claim is varied and complex,
perhaps an analogy with machine functions can be helpful
here. A computer is a perfect example of a system that
performs meaningless syntactic operations. The
electrical configuration of the addressable memory
locations is just formal structures, without semantic
significance to the computer either as numbers or as
representations of numbers. All the computer does is
change states automatically as electrical current runs
through its circuits. Despite the pure formality of its
operations, however, the computer (if designed and
programmed correctly) will be truth-preserving across
computations: ask the thing to add 2 + 2 and it will
give you a 4 every time. But the numerical meanings we
attach to the inputs and outputs do not enter into and
emanate from the computer itself. Rather, they remain
outside the system, in the interpretations that we as
computer users assign to the inputs and outputs of the
machine's operations. Now, if one is inclined to a
computational view of mind, then by analogy much the
same thing holds for the organic computational systems
we call our brains. Meanings are not in them, but exist
in the mode through which they in their functioning
stand to the world.
Ironies begin to mount here. Brentano's claim that
'Intentionality' is the mark of the mental is now widely
accepted. Intentionality in its technical sense has to
do with the meaningfulness, the semantic context of
mental states. But the argument is now made that
cognitive operations and their objects are formal and
syntactic only, and do not themselves have semantic
context (e.g. see Putnam, 1975; Fodor, 1980; and Stich,
1983, for a range of contributions to this viewpoint).
Semantic issues do not concern internal mental
mechanisms but concern the mode of relation between
individuals and their worlds. Such issues are not really
psychological at all, it is claimed, and are relegated
to other fields of inquiry for whatever elucidation can
be brought to them. For example, while belief is a
canonical example of a mental, intentional state, Stich
says, 'believing that p is an amalgam of historical,
contextual, ideological, and perhaps other
considerations' (1983, p.170). The net result of these
recent moves in cognitive psychology and the philosophy
of mind seems to be that the essence of mentality - its
meaningfulness - is in the process of being disowned by
modern mentalism! But Stich's ashbin of intentionality -
historical and contextual considerations - is exactly
what behaviorism seeks to address. Can it be that
BEHAVIORISM will be the instrument called for final
explication of Brentano's thesis of the mental? One's
head spins to think it.
R. Schnaitter (1987)
Knowledge as Action: The Epistemology of Radical Behaviorism
In B. F. Skinner Consensus and Controversy
Eds. S. Modgil and C. Modgil
The reawakening of interest in connectionism in the early to mid 1980s
can indeed be seen as a vindication of the basic principles of
behaviourism. What is psychological may well be impenetrable, for any
serious scientific purposes, not because it is in any way a different
kind of 'stuff', but because structurally it amounts to no more than
an n-dimensional weight space, idiosyncratic and context specific to
each and every one of us.
--
David Longley
It will help if an idea of what we mean by 'clinical' and 'actuarial'
judgement is provided. The following is taken from an early review
(Meehl 1954), and from a relatively recent review of the status of
'Clinical vs. Actuarial Judgement' by Dawes, Faust and Meehl (1989):
'One of the major methodological problems of clinical
psychology concerns the relation between the "clinical"
and "statistical" (or "actuarial") methods of
prediction. Without prejudging the question as to
whether these methods are fundamentally different, we
can at least set forth the main difference between them
as it appears superficially. The problem is to predict
how a person is going to behave. In what manner should
we go about this prediction?
We may order the individual to a class or set of classes
on the basis of objective facts concerning his life
history, his scores on psychometric tests, behavior
ratings or check lists, or subjective judgements gained
from interviews. The combination of all these data
enables us to CLASSIFY the subject; and once having made
such a classification, we enter a statistical or
actuarial table which gives the statistical frequencies
of behaviors of various sorts for persons belonging to
the class. The mechanical combining of information for
classification purposes, and the resultant probability
figure which is an empirically determined relative
frequency, are the characteristics that define the
actuarial or statistical type of prediction.
Alternatively, we may proceed on what seems, at least,
to be a very different path. On the basis of interview
impressions, other data from the history, and possibly
also psychometric information of the same type as in the
first sort of prediction, we formulate, as a psychiatric
staff conference, some psychological hypothesis
regarding the structure and the dynamics of this
particular individual. On the basis of this hypothesis
and certain reasonable expectations as to the course of
other events, we arrive at a prediction of what is going
to happen. This type of procedure has been loosely
called the clinical or case-study method of prediction'.
P. E. Meehl (1954)
The Problem: Clinical vs. Statistical Prediction
'In the clinical method the decision-maker combines or
processes information in his or her head. In the
actuarial or statistical method the human judge is
eliminated and conclusions rest solely on empirically
established relations between data and the condition or
event of interest. A life insurance agent uses the
clinical method if data on risk factors are combined
through personal judgement. The agent uses the actuarial
method if data are entered into a formula, or tables and
charts that contain empirical information relating these
background data to life expectancy.
Clinical judgement should not be equated with a clinical
setting or a clinical practitioner. A clinician in
psychiatry or medicine may use the clinical or actuarial
method. Conversely, the actuarial method should not be
equated with automated decision rules alone. For
example, computers can automate clinical judgements. The
computer can be programmed to yield the description
"dependency traits", just as the clinical judge would,
whenever a certain response appears on a psychological
test. To be truly actuarial, interpretations must be
both automatic (that is, prespecified or routinized) and
based on empirically established relations.'
R. Dawes, D. Faust & P. Meehl (1989)
Clinical Versus Actuarial Judgement Science v243, pp
1668-1674 (1989)
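As a toy illustration of what 'entering data into a formula, or tables
and charts' amounts to (the weights and outcome rates below are
entirely invented, not taken from Dawes, Faust and Meehl), an
actuarial rule is simply a prespecified, mechanical combination of
coded observations followed by a table look-up:

# An invented actuarial rule: fixed weights applied mechanically to
# coded observations, the resulting score then looked up in a table of
# empirically established relative frequencies (figures made up here).

WEIGHTS = {"prior_convictions": 0.8, "age_under_25": 1.2, "employed": -0.9}

OUTCOME_TABLE = [          # (minimum score, observed outcome rate)
    (2.0, 0.65),
    (1.0, 0.40),
    (0.0, 0.20),
    (float("-inf"), 0.10),
]

def actuarial_prediction(case):
    score = sum(WEIGHTS[k] * case[k] for k in WEIGHTS)
    for threshold, rate in OUTCOME_TABLE:
        if score >= threshold:
            return score, rate

case = {"prior_convictions": 2, "age_under_25": 1, "employed": 0}
print(actuarial_prediction(case))   # same inputs always give the same answer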
As long ago as 1941, Lundberg made it clear that any argument between
those committed to the 'clinical' (intuitive) stance and those arguing
for the 'actuarial' (statistical) was a pseudo-argument, since all the
clinician could possibly be basing his or her decision on was his or
her limited experience (database) of past cases and outcomes.
'I have no objection to Stouffer's statement that "if
the case-method were not effective, life insurance
companies hardly would use it as they do in
supplementing their actuarial tables by a medical
examination of the applicant in order to narrow their
risks." I do not see, however, that this constitutes a
"supplementing" of actuarial tables. It is rather the
essential task of creating specific actuarial tables. To
be sure, we usually think of actuarial tables as being
based on age alone. But on the basis of what except
actuarial study has it been decided to charge a higher
premium (and how much) for a "case" twenty pounds
overweight, alcoholic, with a certain family history,
etc.? These case-studies have been classified and the
experience for each class noted until we have arrived at
a body of actuarial knowledge on the basis of which we
"predict" for each new case. The examination of the new
case is for the purpose of classifying him as one of a
certain class for which prediction is possible.'
G. Lundberg (1941)
Case Studies vs. Statistical Methods - An Issue Based
on Misunderstanding. Sociometry v4 pp379-83 (1941)
A few years later, Meehl (1954), drawing on the work of Lundberg
(1941) and Sarbin (1941) in reviewing the relative merits of clinical
vs. statistical prediction (judgement), reiterated the point that all
judgements about an individual are always referenced to a class; they
are therefore always probability judgements.
'No predictions made about a single case in clinical
work are ever certain, but are always probable. The
notion of probability is inherently a frequency notion,
hence statements about the probability of a given event
are statements about frequencies, although they may not
seem to be so. Frequencies refer to the occurrence of
events in a class; therefore all predictions, even those
that from their appearance seem to be predictions about
individual concrete events or persons, have actually an
implicit reference to a class....it is only if we have a
reference class to which the event in question can be
ordered that the possibility of determining or
estimating a relative frequency exists.. the clinician,
if he is doing anything that is empirically meaningful,
is doing a second-rate job of actuarial prediction.
There is fundamentally no logical difference between the
clinical or case-study method and the actuarial method.
The only difference is on two quantitative continua,
namely that the actuarial method is more EXPLICIT and
more PRECISE.'
P. Meehl (1954)
Clinical vs. Statistical Prediction:
A Theoretical Analysis and a Review of the Evidence
There has, unfortunately, over the years, been a strong degree of
resistance to the actuarial approach. It must be appreciated, however,
that the technology to support comprehensive actuarial analysis and
judgment has only been physically available since the 1940s with the
invention of the computer. Practically speaking, it has only been
available on the scale we are now discussing since the late 1970s with
the development of sophisticated DBMS's (databases with query
languages based on the Predicate Calculus; Codd 1970; Gray 1984;
Gardarin and Valduriez 1989, Date 1992), and the development and mass
production of powerful and cheap microcomputers. Minsky and Papert
(1988) in their expanded edition of 'Perceptrons' (basic pattern
recognition systems) in fact wrote:
'The goal of this study is to reach a deeper
understanding of some concepts we believe are crucial to
the general theory of computation. We will study in
great detail a class of computations that make decisions
by weighting evidence.....The people we want most to
speak to are interested in that general theory of
computation.'
M. L. Minsky & S. A. Papert (1969,1990)
Perceptrons p.1
The 'general theory of computation' is, as elaborated elsewhere,
'Recursive Function Theory' (Church 1936, Kleene 1936, Turing 1937),
and is essentially the approach being advocated here as evidential
behaviourism, or eliminative materialism which eschews psychologism
and intensionalism. Nevertheless, as late as 1972, Meehl still found
he had to say:
'I think it is time for those who resist drawing any
generalisation from the published research, by
fantasising about what WOULD happen if studies of a
different sort WERE conducted, to do them. I claim that
this crude, pragmatic box score IS important, and that
those who deny its importance do so because they just
don't like the way it comes out. There are few issues in
clinical, personality, or social psychology (or, for
that matter, even in such fields as animal learning) in
which the research trends are as uniform as this one.
Amazingly, this strong trend seems to exert almost no
influence upon clinical practice, even, you may be
surprised to learn, in Minnesota!...
It would be ironic indeed (but not in the least
surprising to one acquainted with the sociology of our
profession) if physicians in nonpsychiatric medicine
should learn the actuarial lesson from biometricians and
engineers, whilst the psychiatrist continues to muddle
through with inefficient combinations of unreliable
judgements because he has not been properly instructed
by his colleagues in clinical psychology, who might have
been expected to take the lead in this development.
I understand (anecdotally) that there are two other
domains, unrelated to either personality assessment or
the healing arts, in which actuarial methods of data
combination seem to do at least as good a job as the
traditional impressionistic methods: namely, meteorology
and the forecasting of security prices. From my limited
experience I have the impression that in these fields
also there is a strong emotional resistance to
substituting formalised techniques for human judgement.
Personally, I look upon the "formal-versus-judgmental"
issue as one of great generality, not confined to the
clinical context. I do not see why clinical
psychologists should persist in using inefficient means
of combining data just because investment brokers,
physicians, and weathermen do so. Meanwhile, I urge
those who find the box score "35:0" distasteful to
publish empirical studies filling in the score board
with numbers more to their liking.'
P. E. Meehl (1972)
When Shall We Use Our Heads Instead of the Formula?
PSYCHODIAGNOSIS: Collected Papers (1971)
In 1982, Kahneman, Slovic and Tversky, in their collection of papers
on (clinical) judgement under conditions of uncertainty, prefaced the
book with the following:
'Meehl's classic book, published in 1954, summarised
evidence for the conclusion that simple linear
combinations of cues outdo the intuitive judgements of
experts in predicting significant behavioural criteria.
The lasting intellectual legacy of this work, and of the
furious controversy that followed it, was probably not
the demonstration that clinicians performed poorly in
tasks that, as Meehl noted, they should not have
undertaken. Rather, it was the demonstration of a
substantial discrepancy between the objective record of
people's success in prediction tasks and the sincere
beliefs of these people about the quality of their
performance. This conclusion was not restricted to
clinicians or to clinical prediction:
People's impressions of how they reason, and how well
they reason, could not be taken at face value.'
D. Kahneman, P. Slovic & A. Tversky (1982)
Judgment Under Conditions of Uncertainty: Heuristics and
Biases
Earlier, in 1977, reviewing the Attribution Theory literature on
individuals' access to the reasons for their behaviours, Nisbett
and Wilson (1977) summarised the work as follows:
'................................... there may be little
or no direct introspective access to higher order
cognitive processes. Ss are sometimes (a) unaware of the
existence of a stimulus that importantly influenced a
response, (b) unaware of the existence of the response,
and (c) unaware that the stimulus has affected the
response. It is proposed that when people attempt to
report on their cognitive processes, that is, on the
processes mediating the effects of a stimulus on a
response, they do not do so on the basis of any true
introspection. Instead, their reports are based on a
priori, implicit causal theories, or judgments about the
extent to which a particular stimulus is a plausible
cause of a given response. This suggests that though
people may not be able to observe directly their
cognitive processes, they will sometimes be able to
report accurately about them. Accurate reports will
occur when influential stimuli are salient and are
plausible causes of the responses they produce, and will
not occur when stimuli are not salient or are not
plausible causes.'
R. Nisbett & T. Wilson (1977)
Telling More Than We Can Know: Verbal Reports on Mental
Processes
Such rules of thumb or attributions are, of course, the intensional
heuristics studied by Tversky and Kahneman (1973), or the 'function
approximations' computed by neural network systems discussed earlier
as connection weights (both in artificial and real neural nets, cf.
Kandel's work with Aplysia).
Mathematical logicians such as Putnam (1975, 1988), Elgin (1990) and
Devitt (1990) have long been arguing that psychologists may, as
Skinner (1971, 1974) argued consistently, be looking for their data in
the wrong place. Despite the empirical evidence from research in
psychology on the problems of self-report, and a good deal more drawn
from decision making in medical diagnosis, the standard means of
obtaining information for 'reports' on inmates for purposes of review,
and the standard means of assessing inmates for counselling, is the
clinical interview. In the Prison Service this makes little sense,
since it is possible to directly observe behaviour under relatively
natural conditions of everyday activities. The clinical interview is
still the basis of much of the work of the Prison Psychologist,
despite the literature on the fallibility of self-reports; indeed the
fallibility and unwitting distortions of those making judgements in
such contexts have been consistently documented within psychology:
'The previous review of this field (Slovic, Fischoff &
Lichtenstein 1977) described a long list of human
judgmental biases, deficiencies, and cognitive
illusions. In the intervening period this list has both
increased in size and influenced other areas of
psychology (Bettman 1979, Mischel 1979, Nisbett & Ross
1980).'
H. Einhorn and R. Hogarth (1981)
The following are also taken from the text:
'If one considers the rather typical findings that
clinical judgments tend to be (a) rather unreliable (in
at least two of the three senses of that term), (b) only
minimally related to the confidence and amount of
experience of the judge, (c) relatively unaffected by
the amount of information available to the judge, and
(d) rather low in validity on an absolute basis, it
should come as no great surprise that such judgments are
increasingly under attack by those who wish to
substitute actuarial prediction systems for the human
judge in many applied settings....I can summarize this
ever-growing body of literature by pointing out that
over a very large array of clinical judgment tasks
(including by now some which were specifically selected
to show the clinician at his best and the actuary at his
worst), rather simple actuarial formulae typically can
be constructed to perform at a level no lower than that
of the clinical expert.'
L. R. Goldberg (1968)
Simple models or simple processes?
Some research on clinical judgments
American Psychologist, 1968, 23(7) p.483-496
'The various studies can thus be viewed as repeated
sampling from a uniform universe of judgement tasks
involving the diagnosis and prediction of human
behavior. Lacking complete knowledge of the elements
that constitute this universe, representativeness cannot
be determined precisely. However, with a sample of about
100 studies and the same outcome obtained in almost
every case, it is reasonable to conclude that the
actuarial advantage is not exceptional but general and
likely to encompass many of the unstudied judgement
tasks. Stated differently, if one poses the query:
"Would an actuarial procedure developed for a particular
judgement task (say, predicting academic success at my
institution) equal or exceed the clinical method?", the
available research places the odds solidly in favour of
an affirmative reply. "There is no controversy in social
science that shows such a large body of qualitatively
diverse studies coming out so uniformly....as this one"
(Meehl, J. Person. Assess. 50, 370, 1986).'
The distinction between collecting observations and integrating them
is further brought out vividly by Meehl (1986):
'Surely we all know that the human brain is poor at
weighting and computing. When you check out at a
supermarket you don't eyeball the heap of purchases and
say to the clerk, "well it looks to me as if it's about
$17.00 worth; what do you think?" The clerk adds it up.
There are no strong arguments....from empirical
studies.....for believing that human beings can assign
optimal weight in equations subjectively or that they
apply their own weights consistently.'
P. Meehl (1986)
Causes and effects of my disturbing little book
J Person. Assess. 50,370-5,1986
'Distributional information, or base-rate data, consist
of knowledge about the distribution of outcomes in
similar situations. In predicting the sales of a new
novel, for example, what one knows about the author, the
style, and the plot is singular information, whereas
what one knows about the sales of novels is
distributional information. Similarly, in predicting the
longevity of a patient, the singular information
includes his age, state of health, and past medical
history, whereas the distributional information consists
of the relevant population statistics. The singular
information consists of the relevant features of the
problem that distinguish it from others, while the
distributional information characterises the outcomes
that have been observed in cases of the same general
class. The present concept of distributional data does
not coincide with the Bayesian concept of a prior
probability distribution. The former is defined by the
nature of the data, whereas the latter is defined in
terms of the sequence of information acquisition.
The tendency to neglect distributional information and
to rely mainly on singular information is enhanced by
any factor that increases the perceived uniqueness of
the problem. The relevance of distributional data can be
masked by detailed acquaintance with the specific case
or by intense involvement with it........
The prevalent tendency to underweigh or ignore
distributional information is perhaps the major error of
intuitive prediction. The consideration of
distributional information, of course, does not
guarantee the accuracy of forecasts. It does, however,
provide some protection against completely unrealistic
predictions. The analyst should therefore make every
effort to frame the forecasting problem so as to
facilitate utilising all the distributional information
that is available to the expert.'
A. Tversky & D. Kahneman (1983)
Extensional Versus Intuitive Reasoning: The Conjunction
Fallacy in Probability Judgment Psychological Review
v90(4) 1983
'The possession of unique observational capacities
clearly implies that human input or interaction is often
needed to achieve maximal predictive accuracy (or to
uncover potentially useful variables) but tempts us to
draw an additional, dubious inference. A unique capacity
to observe is not the same as a unique capacity to
predict on the basis of integration of observations. As
noted earlier, virtually any observation can be coded
quantitatively and thus subjected to actuarial analysis.
As Einhorn's study with pathologists and other research
shows, greater accuracy may be achieved if the skilled
observer performs this function and then steps aside,
leaving the interpretation of observational and other
data to the actuarial method.'
R. Dawes, D. Faust and P. Meehl (1989)
ibid.
--
David Longley
what is this crap david and what's it doing in an msdos newsgroup?
kindly keep it to the ai groups please instead of spreading it all over the
place. if we're interested we'll subscribe to comp.ai.*
nik
·····@longley.demon.co.uk (David Longley) spammed :
|> It will help if an idea of what we mean by 'clinical' and 'actuarial'
|> judgement is provided. The following is taken from a an early (Meehl
|> 1954), and a relatively recent review of the status 'Clinical vs.
|> Actuarial Judgement' by Dawes, Faust and Meehl (1989):
|>
a whole bunch of crap pasted out of some book or something.
|> --
|> David Longley
|>
--
* putting all reason aside you exchange what you've *
* got for a thing that's hypnotic and strange *
In article <··········@lyra.csx.cam.ac.uk>
·····@thor.cam.ac.uk "G.P. Tootell" writes:
> what is this crap david and what's it doing in an msdos newsgroup?
> kindly keep it to the ai groups please instead of spreading it all over the
> place. if we're interested we'll subscribe to comp.ai.*
>
> nik
If you want to know what it is I suggest you *read* it. As to *why* it's
posted to these groups, I would have thought that was quite clear. Headers
from the original abusive material have been retained so that my response
to the nonsense from Balter is APPROPRIATELY circulated.
>
>
> ·····@longley.demon.co.uk (David Longley) spammed :
>
> |> It will help if an idea of what we mean by 'clinical' and 'actuarial'
> |> judgement is provided. The following is taken from a an early (Meehl
> |> 1954), and a relatively recent review of the status 'Clinical vs.
> |> Actuarial Judgement' by Dawes, Faust and Meehl (1989):
> |>
>
> a whole bunch of crap pasted out of some book or something.
>
*That's* abusive too - I suggest you make an effort to understand the
following (taking a brief break from blindly defending "netiquette").
'If we are limning the true and ultimate structure of
reality, the canonical scheme for us is the austere
scheme that knows no quotation but direct quotation and
no propositional attitudes but only the physical
constitution and behavior of organisms.'
W.V.O Quine
Word and Object 1960 p 221
For:
'Once it is shown that a region of discourse is not
extensional, then according to Quine, we have reason to
doubt its claim to describe the structure of reality.'
C. Hookway
Logic: Canonical Notation and Extensionality
Quine (1988)
The problem with intensional (or common sense or 'folk') psychology
has been clearly spelled out by Nelson (1992):
'The trouble is, according to Brentano's thesis, no such
theory is forthcoming on strictly naturalistic, physical
grounds. If you want semantics, you need a full-blown,
irreducible psychology of intensions.
There is a counterpart in modern logic of the thesis of
irreducibility. The language of physical and biological
science is largely *extensional*. It can be formulated
(approximately) in the familiar predicate calculus. The
language of psychology, however, is *intensional*. For
the moment it is good enough to think of an
*intensional* sentence as one containing words for
*intensional* attitudes such as belief.
Roughly what the counterpart thesis means is that
important features of extensional, scientific language
on which inference depends are not present in
intensional sentences. In fact intensional words and
sentences are precisely those expressions in which
certain key forms of logical inference break down.'
R. J. Nelson (1992)
Naming and Reference p.39-42
and explicitly by Place (1987):
'The first-order predicate calculus is an extensional
logic in which Leibniz's Law is taken as an axiomatic
principle. Such a logic cannot admit 'intensional' or
'referentially opaque' predicates whose defining
characteristic is that they flout that principle.'
U. T. Place (1987)
Skinner Re-Skinned P. 244
In B.F. Skinner Consensus and Controversy
Eds. S. Modgil & C. Modgil
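A crude sketch (my own, using Quine's stock example of quotation
rather than belief) of the point being made here: substitution of
co-referring names is truth-preserving in an extensional context, but
not in a context which is really about the name rather than the thing
named.

# Extensional context: a predicate applied to the referent itself.
# 'Hesperus' and 'Phosphorus' are two names for the same object (Venus).
referent = {"Hesperus": "Venus", "Phosphorus": "Venus"}

def is_a_planet(obj):
    return obj == "Venus"

# Co-referring names can be exchanged without changing the truth value.
assert is_a_planet(referent["Hesperus"]) == is_a_planet(referent["Phosphorus"])

# Quotational (non-extensional) context: a 'predicate' of the NAME.
def begins_with_h(name):
    return name.startswith("H")

# Exchanging co-referring names now changes the truth value.
print(begins_with_h("Hesperus"), begins_with_h("Phosphorus"))   # True False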
--
David Longley
G.P. Tootell wrote:
>
> what is this crap david and what's it doing in an msdos newsgroup?
> kindly keep it to the ai groups please instead of spreading it all over the
> place. if we're interested we'll subscribe to comp.ai.*
>
> nik
Good God no!
We don't have a clue what he's babbling about either. We thought
he came with you.
--
Glen Clark
····@clarkcom.com
Glen Clark wrote:
>
> G.P. Tootell wrote:
> >
> > what is this crap david and what's it doing in an msdos newsgroup?
> > kindly keep it to the ai groups please instead of spreading it all over the
> > place. if we're interested we'll subscribe to comp.ai.*
> >
> > nik
>
> Good God no!
>
> We don't have a clue what he's babbling about either. We thought
> he came with you.
>
> --
> Glen Clark
> ····@clarkcom.com
Hi All,
Half of this series of threads appears to be the consequence of
someone leaving their browser logged-on and unattended in an institution
for the insane. Maybe there should be a new separate newsgroup,
comp.ai.bedlam
or
comp.ai.babel
or
comp.ai.institutionalized
Perhaps it is a new form of therapy developed by Microsoft to increase
sales figures: give each inmate a copy of InterNet Exploder and turn
them loose. I would have preferred to be forewarned, however.
Good Luck,
Michael D. Kersey
In article <·············@hal-pc.org>
········@hal-pc.org "Michael D. Kersey" writes:
> Hi All,
>
> Half of this series of threads appears to be the the consequence of
> someone leaving their browser logged-on and unattended in an institution
> for the insane. Maybe there should be a new separate newsgroup,
>
> comp.ai.bedlam
> or
> comp.ai.babel
> or
> comp.ai.institutionalized
>
> Perhaps it is a new form of therapy developed by Microsoft to increase
> sales figures: give each inmate a copy of InterNet Exploder and turn
> them loose. I would have preferred to be forewarned, however.
>
> Good Luck,
> Michael D. Kersey
>
Now *that's* quite a nice illustrative example of the fallibility
of common-sense or folk-psychology as a theoretical (or practical)
perspective.
--
David Longley
This is a critique of a stance in contemporary psychology, and
for want of a better term, I have characterised that as
"intensional". As a corrective, I outline what may be described
as "The Extensional Stance" which draws heavily on the
naturalized epistemology of W.V.O Quine.
Full text is available at:
http://www.uni-hamburg.de/~kriminol/TS/tskr.htm
A: Methodological Solipsism
'A cognitive theory with no rationality restrictions is
without predictive content; using it, we can have
virtually no expectations regarding a believer's
behavior. There is also a further metaphysical, as
opposed to epistemological, point concerning rationality
as part of what it is to be a PERSON: the elements of a
mind - and, in particular, a cognitive system - must FIT
TOGETHER or cohere.......no rationality, no agent.'
C. Cherniak (1986)
Minimal Rationality p.6
'Complexity theory raises the possibility that formally
correct deductive procedures may sometimes be so slow as
to yield computational paralysis; hence, the "quick but
dirty" heuristics uncovered by the psychological
research may not be irrational sloppiness but instead
the ultimate speed-reliability trade-off to evade
intractability. With a theory of nonidealized
rationality, complexity theory thereby "justifies the
ways of Man" to this extent.'
ibid p.75-76
The establishment of coherence or incoherence depends on a commitment
to clear and accurate recording and analysis of observations and their
relations within a formal system. Unfortunately, biological
constraints on both neuron conduction velocity and storage capacity
impose such severe constraints on human processing capacity that we
are restricted to using heuristics rather than recursive functions.
There can be no doubt that non-human computers, at least with respect
to the propositional calculus and the first order predicate calculus
with monadic predicates, ie systems which are decidable and have
decision procedures untainted by Godel's Theorem (1931), offer a far
more reliable way of analysing information than intuitive judgment.
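To illustrate what such a decision procedure amounts to (a sketch of
my own, not taken from any of the texts cited), a brute-force
truth-table check decides any formula of the propositional calculus
mechanically:

from itertools import product

def is_tautology(formula, n_vars):
    """Decide a propositional formula by exhaustive truth tables.
    `formula` is a Python function taking one boolean per variable."""
    return all(formula(*values)
               for values in product([False, True], repeat=n_vars))

# ((p -> q) and p) -> q, ie modus ponens as a formula, is a tautology.
print(is_tautology(lambda p, q: not ((not p or q) and p) or q, 2))   # True
# p -> q on its own is not.
print(is_tautology(lambda p, q: not p or q, 2))                      # False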
The primary reason for writing this volume is to locate the programme
of behaviour assessment and management referred to as 'Sentence
Management' within contemporary research and development in cognitive
and behaviour science. It is also in part motivated by the author
having been in a position for some time where he has been required to
both teach, train, and support applied criminological psychologists in
the use of deductive inference (computer and relational database 4GL
programming) as well as inductive (inferential statistical) inference
in an applied setting. This responsibility has led to a degree of
bewilderment. Some very influential work in mathematical logic this
century has suggested that certain domains of concern simply do not
fall within the 'scope and language of science' (Quine 1956). That
work suggests, in fact, that psychological idioms, as opposed to
behavioural terms, belong to a domain which is resistant to the tools
of scientific analysis since they flout a basic axiom which is a
precondition for valid inference.
Whilst this point has been known to logicians for nearly a century,
empirical evidence in support of this conclusion began to accumulate
throughout the 1970s and 1980s as a result of work in Decision Theory
in psychology and medicine (Kahneman, Slovic and Tversky 1982; Arkes
and Hammond 1986). This work provided a substantial body of empirical
evidence that human judgement is not adequately modelled by the axioms
of subjective probability theory (ie Bayes Theorem, cf. Savage 1954,
Cooke 1991), or formal logic (Wason 1966, Johnson-Laird and Wason
1972), and that in all areas of human judgement, quite severe errors
of judgement were endemic, probably due to basic neglect of base rates
(prior probabilities or relative frequencies of behaviours in the
population, see Eddy 1982 for a striking example of the
misinterpretation of the diagnostic contribution of mammography in
breast cancer). Instead, the evidence now strongly suggests that
judgements are usually little more than guesses, or 'heuristics' which
are prone to well understood biases such as 'anchoring',
'availability', and 'representativeness'. This work has progressively
undermined the very foundations of Cognitive Science, which takes
rationality and substitutivity as axiomatic.
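Eddy's mammography example turns on a base-rate calculation of the
kind sketched below; the figures used (1% prevalence, 80% sensitivity,
10% false-positive rate) are illustrative assumptions rather than
Eddy's exact numbers, but they show how modest the posterior
probability remains once the base rate is respected:

# Bayes' Theorem with an explicit base rate (illustrative figures only).
prevalence = 0.01        # P(cancer): the easily neglected base rate
sensitivity = 0.80       # P(positive test | cancer)
false_positive = 0.10    # P(positive test | no cancer)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_cancer_given_positive = sensitivity * prevalence / p_positive

print(round(p_cancer_given_positive, 3))   # about 0.075, far lower than intuition suggests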
It is bewildering how difficult it is to teach deductive reasoning
skills effectively if the classical, functionalist stance of
contemporary cognitive psychology is in fact true. Yet the literature
on teaching skills in deductive reasoning suggests that these *are* in
fact very difficult skills to teach. Most significantly, it is
notoriously difficult to teach such skills with the objective of
having them *applied to practical problems*. What seems to happen is
that, despite efforts to achieve the contrary, such skills are both
acquired and applied as intensional, inductive heuristics, rather than
as a set of formal logical rules.
This volume is therefore to be taken as a rationale for both a
programme of inmate management and assessment referred to as 'Sentence
Management' (which is both historically descriptive and deductive
rather than projective and inductive in approach), and for the format
of the current MSc 'Computing and Statistics' module which is part of
the MSc in Applied Criminological Psychology, designed to provide
formal training in behaviour science for new psychologists working
within the English Prison Service. The 'Computing and Statistics'
module could in fact be regarded as a module in 'Cognitive Skills' for
psychologists. The format adopted is consistent with the
recommendations of researchers such as Nisbett and Ross (1980),
Holland et al. (1986), and Ross and Nisbett (1991). Elaboration of the
substantial Clinical vs. Actuarial dimension can be found in section C
below.
At the heart of 20th century logic there is a very interesting problem
(illuminated over the last three decades by W. V. O. Quine (1956, 1960,
1990, 1992)), which seems to be divisive with respect to the
classification and analysis of 'mental' (or psychological) phenomena
as opposed to 'physical' phenomena. The problem is variously known as
Brentano's Thesis (Quine 1960), 'the problem of intensionality', or
'the content-clause problem'.
"The keynote of the mental is not the mind it is the
content-clause syntax, the idiom 'that p'".
W. V. O. Quine
Intension
The Pursuit of Truth (1990) p.71
It is the thesis of this volume that the solution to this problem
renders psychology and behaviour science two very different subjects
with entirely different methods and domains of application. The
problem is reflected in differences in how language treats certain
classes of terms. One class is the 'extensional' and the other
'intensional'. This volume therefore sets out the relevant
contemporary research background and outlines the practical
implications which these logical classes have for the applied,
practical work of criminological psychologists. We will begin with a
few recent statements on the implications of Brentano's Thesis for
psychology as a science. The basic conclusion is that there can be no
scientific analysis, ie no reliable application of the laws of logic
or mathematics to psychological phenomena, because psychological
phenomena flout the very axioms which mathematical, logical and
computational processes must assume for valid inference. From the fact
that quantification is unreliable within intensional contexts, it
follows that both p and not-p can be held true of the same
proposition, and from any system which tolerates such inconsistency
any conclusion whatsoever can be inferred. The thrust of this volume is
that bewilderment vanishes once one appreciates that the subject
matter of Applied Criminological Psychology is exclusively that of
behaviour, and that its methodology is exclusively deductive and
analytical. This is taken to be a clear vindication of Quine's 1960
dictum that:
'If we are limning the true and ultimate structure of
reality, the canonical scheme for us is the austere
scheme that knows no quotation but direct quotation and
no propositional attitudes but only the physical
constitution and behavior of organisms.'
W. V. O. Quine
Word and Object (1960), p. 221
For:
'Once it is shown that a region of discourse is not
extensional, then according to Quine, we have reason to
doubt its claim to describe the structure of reality.'
C. Hookway
Logic: Canonical Notation and Extensionality
Quine (1988)
The problem with intensional (or common sense or 'folk') psychology
has been clearly spelled out by Nelson (1992):
'The trouble is, according to Brentano's thesis, no such
theory is forthcoming on strictly naturalistic, physical
grounds. If you want semantics, you need a full-blown,
irreducible psychology of intensions.
There is a counterpart in modern logic of the thesis of
irreducibility. The language of physical and biological
science is largely *extensional*. It can be formulated
(approximately) in the familiar predicate calculus. The
language of psychology, however, is *intensional*. For
the moment it is good enough to think of an
*intensional* sentence as one containing words for
*intensional* attitudes such as belief.
Roughly what the counterpart thesis means is that
important features of extensional, scientific language
on which inference depends are not present in
intensional sentences. In fact intensional words and
sentences are precisely those expressions in which
certain key forms of logical inference break down.'
R. J. Nelson (1992)
Naming and Reference p.39-42
and explicitly by Place (1987):
'The first-order predicate calculus is an extensional
logic in which Leibniz's Law is taken as an axiomatic
principle. Such a logic cannot admit 'intensional' or
'referentially opaque' predicates whose defining
characteristic is that they flout that principle.'
U. T. Place (1987)
Skinner Re-Skinned, p. 244
In B.F. Skinner Consensus and Controversy
Eds. S. Modgil & C. Modgil
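For readers unfamiliar with the principle Place refers to, Leibniz's
Law (substitutivity of identicals 'salva veritate') can be stated
schematically as follows; this is a standard textbook formulation, not
a quotation from Place:

    % Leibniz's Law, schematically: if a and b are identical, then
    % whatever holds of a holds of b (and vice versa).
    \[
      a = b \;\rightarrow\; \bigl(\varphi(a) \leftrightarrow \varphi(b)\bigr)
    \]
    % Extensional contexts are those in which this inference is valid;
    % intensional ('referentially opaque') contexts are those in which it fails.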
The *intension* of a sentence is its 'meaning', or the property it
conveys. It is sometimes used almost synonymously with the
'proposition' or 'content' communicated. The *extension* of a term or
sentence on the other hand is the CLASS of things of which the term or
sentence can be said to be true. Thus, things belong to the same
extension of a term or sentence if they are members of the same
designated class, whilst things (purportedly) share the same intension
if they share the same property. Here's how Quine (1987)
makes the distinction:
'If it makes sense to speak of properties, it should
make clear sense to speak of sameness and differences of
properties; but it does not. If a thing has this
property and not that, then certainly this property and
that are different properties. But what if everything
that has this property has that one as well, and vice
versa? Should we say that they are the same property? If
so, well and good; no problem. But people do not take
that line. I am told that every creature with a heart
has kidneys, and vice versa; but who will say that the
property of having a heart is the same as that of having
kidneys?
In short, coextensiveness of properties is not seen as
sufficient for their identity. What then is? If an
answer is given, it is apt to be that they are identical
if they do not just happen to be coextensive, but are
necessarily coextensive. But NECESSITY, q.v., is too
hazy a notion to rest with.
We have been able to go on blithely all these years
without making sense of identity between properties,
simply because the utility of the notion of property
does not hinge on identifying or distinguishing them.
That being the case, why not clean up our act by just
declaring coextensive properties identical? Only because
it would be a disturbing breach of usage, as seen in the
case of the heart and kidneys. To ease that shock, we
change the word; we speak no longer of properties, but
of CLASSES......
We must acquiesce in ordinary language for ordinary
purposes, and the word 'property' is of a piece with it.
But also the notion of property or its reasonable
facsimile that takes over, since these contexts never
hinge on distinguishing coextensive properties. One
instance among many of the use of classes in mathematics
is seen under DEFINITION, in the definition of number.
For science it is classes SI, properties NO.'
W. V. O. Quine (1987)
Classes versus Properties
QUIDDITIES:
It has been argued quite convincingly by Quine (1956, 1960) that the
scope and language of science is entirely extensional, that the
intensional is purely attributive, instrumental or creative, and that
there cannot be a universal language of thought or 'mentalese', since
such a system would presume determinate translation relations.
Different languages are different systems of behaviour which may
achieve similar ends, but they do not support direct, determinate
translation relations. This is Quine's (1960) 'Indeterminacy of
Translation Thesis'. Despite its import, we frequently behave 'as if'
it is legitimate to translate (substitute) directly, and we do this
not only within our own language, as illustrated below, but also
within our own thinking.
This profound point of mathematical logic can be made very clear with
a simple but representative example. The subset of intensional idioms
with which we are most concerned in our day-to-day dealings with
people are the so-called 'propositional attitudes' (saying that,
remembering that, believing that, knowing that, hoping that, and so
on). If we report that someone 'said that' he hated his father, we
often do not report verbatim, ie precisely, what was said. Instead, we
frequently 'approximate' the
'meaning' of what was said and consider this legitimate on the grounds
that the 'meaning' is preserved.
Unfortunately, this assumes that in contexts of propositional
attitude ('says that', 'thinks that', 'believes that', and, quite
pertinently, 'knows that', etc.) we are free to substitute terms or
phrases which are otherwise co-referential, as we can extensionally
with 7+3=10 and 5+5=10. That is, it assumes that inference within
intensional contexts is valid. Yet nobody would report, if Oedipus
said that he wanted to marry Jocasta, that he said he wanted to marry
his mother! The problem with intensional idioms is that they cannot be
substituted for one another whilst preserving the truth functionality
of the contexts within which they occur. In fact, they can only be
directly quoted verbatim, ie as behaviours. Now substitutivity of
co-referential identicals 'salva veritate' (Leibniz's Law) is a basic
extensional axiom of first order logic, and it is a law which
underpins all valid inference. One of the objectives of this paper is
therefore to specify in practical detail how we propose to develop a
system for inmate reporting which does not flout Leibniz's Law but
takes it as central. This is an inversion of current practice in
significant areas of the work of applied psychologists. Whilst the
example cited above is a simple one, it is nevertheless highly
representative of much of the problematic work of practising
psychologists who, often unaware of the constraints which logicians
have identified on dealing with the intensional, are as a consequence
more often 'creative' in their dealings with inmates and in their
report writing than is generally appreciated, even though the 'Puzzle
About Belief' and the behaviour of modal contexts are well documented
(Church 1954, Kripke 1979).
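The substitution point can be put in concrete, if deliberately
artificial, terms. The sketch below (all names and structures are
illustrative, not part of the source material) treats an extensional
context as ordinary evaluation, where co-referential expressions are
interchangeable, and a propositional-attitude context as a record keyed
by the verbatim sentence, where they are not:

    # A minimal sketch of referential transparency vs. opacity.

    # Extensional context: '7+3' and '5+5' denote the same number, so any
    # truth-functional context treats them interchangeably (salva veritate).
    assert (7 + 3) == (5 + 5) == 10
    assert ((7 + 3) % 2 == 0) == ((5 + 5) % 2 == 0)

    # Intensional context: a report of what was said is keyed by the
    # sentence actually produced (the verbatim behaviour), not by what
    # the sentence refers to.
    says = {
        ("Oedipus", "I want to marry Jocasta"): True,
        ("Oedipus", "I want to marry my mother"): False,
    }
    # 'Jocasta' and 'my mother' are co-referential, yet substitution changes
    # the truth value of the report: Leibniz's Law fails inside 'says that'.
    assert says[("Oedipus", "I want to marry Jocasta")] != \
           says[("Oedipus", "I want to marry my mother")]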
Dretske (1980) put the issue as follows:
'If I know that the train is moving and you know that
its wheels are turning, it does not follow that I know
what you know just because the train never moves without
its wheels turning. More generally, if all (and only) Fs
are G, one can nonetheless know that something is F
without knowing that it is G. Extensionally equivalent
expressions, when applied to the same object, do not
(necessarily) express the same cognitive content.
Furthermore, if Tom is my uncle, one can not infer (with
a possible exception to be mentioned later) that if S
knows that Tom is getting married, he thereby knows that
my uncle is getting married. The content of a cognitive
state, and hence the cognitive state itself, depends
(for its identity) on something beyond the extension or
reference of the terms we use to express the content. I
shall say, therefore, that a description of a cognitive
state, is non-extensional.'
F. I. Dretske (1980)
The Intentionality of Cognitive States
Midwest Studies in Philosophy 5,281-294
For the discipline of psychology, the above logical analyses can be
taken either as a vindication of 20th century behaviourism/physicalism
(Quine 1960,1990,1992) or as a knockout blow to 20th century
'Cognitivism' and psychologism (methodological solipsism).
'One may accept the Brentano thesis as showing the
indispensability of intentional idioms and the
importance of an autonomous science of intention, or as
showing the baselessness of intentional idioms and the
emptiness of a science of intention. My attitude, unlike
Brentano's, is the second. To accept intentional usage
at face value is, we saw, to postulate translation
relations as somehow objectively valid though
indeterminate in principle relative to the totality of
speech dispositions. Such postulation promises little
gain in scientific insight if there is no better ground
for it than that the supposed translation relations are
presupposed by the vernacular of semantics and
intention.'
W. V. O. Quine
The Double Standard
Flight from Intension
Word and Object (1960), p218-221
In response to these mounting problems, Jerry Fodor published an
influential paper in 1980 entitled 'Methodological Solipsism
Considered as a Research Strategy for Cognitive Psychology'. In that
paper he proposed that Cognitive Psychology adopt a stance which
restricts itself to the explication of the ways in which subjects make
sense of the world from their 'own particular point of view'. This was
to be contrasted with the objectives of 'Naturalistic Psychology' or
'Evidential Behaviourism'.
Methodological Solipsism, as opposed to Methodological Behaviourism,
takes 'cognitive processes', mental contents (meanings/propositions)
or 'propositional attitudes' of folk/commonsense psychology at face
value. It accepts that there is a 'Language of Thought' (Fodor 1975),
that there is a universal 'mentalese' which natural languages map
onto, and which express thoughts as 'propositions'. It examines the
apparent causal relations and processes of 'attribution' between these
processes and other psychological processes which have propositional
content. It accepts what is known as the 'formality condition', ie
that thinking is a purely formal, syntactic, computational affair
which therefore has no room for semantic notions such as truth or
falsehood. Such computational processes are therefore indifferent to
whether beliefs are about the world per se (can be said to have a
reference), or are just the views of the belief holder (ie may be
purely imaginary). Technically, this amounts to beliefs not being
subject to 'existential or universal quantification' (where
'existential' refers to the logical quantifier ∃, 'there exists at
least one', and 'universal' refers to the quantifier ∀, 'for all').
Methodological Solipsism looks to the *relations* between beliefs 'de
dicto', which are opaque to the holder (he may believe that the
Morning Star is the planet Venus, but not believe that the Evening
Star is the planet Venus, and therefore believe different things of
the Morning and Evening Stars). Methodological Solipsism does not
concern itself with the transparency of beliefs, ie their referential,
or 'de re' status. Some further examples of what all this entails
might be helpful here, since the implications of Methodological
Solipsism are both subtle and far ranging. The critical notions in
what follow are 'transfer of training', 'generalisation decrement',
'inductive vs. deductive inference', and the distinction between
'heuristics' and 'algorithms'.
Here is how Fodor's paper was summarised in abstract:
'Explores the distinction between 2 doctrines, both of
which inform theory construction in much of modern
cognitive psychology: the representational theory of
mind and the computational theory of mind. According to
the former, propositional attitudes are viewed as
relations that organisms bear to mental representations.
According to the latter, mental processes have access
only to formal (nonsemantic) properties of the mental
representations over which they are defined. The
following claims are defended: (1) The traditional
dispute between rational and naturalistic psychology is
plausibly viewed as an argument about the status of the
computational theory of mind. (2) To accept the
formality condition is to endorse a version of
methodological solipsism. (3) The acceptance of some
such condition is warranted, at least for that part of
psychology that concerns itself with theories of the
mental causation of behavior. A glossary and several
commentaries are included.'
J A Fodor (1980)
Methodological solipsism considered as a research
strategy in cognitive psychology.
Massachusetts Inst of Technology
Behavioral and Brain Sciences; 1980 Mar Vol 3(1) 63-109
Some of the commentaries, particularly those by Loar and Rey, clarify
what is admittedly a difficult, but substantial, view which is widely
held by graduate psychologists.
'If psychological explanation is a matter of describing
computational processes, then the references of our
thoughts do not matter to psychological explanation.
This is Fodor's main argument.....Notice that Fodor's
argument can be taken a step further. For not only are
the references of our thoughts not mentioned in
cognitive psychology; nothing that DETERMINES their
references, like Fregian senses, is mentioned
either....Neither reference nor reference-determining
sense have a place in the description of computational
processes.'
B. F. Loar
Ibid p.89
Not all of the commentaries were as formal, as the following
commentary from one of the UK's most eminent logicians makes clear:
'Fodor thinks that when we explain behaviour by mental
causes, these causes would be given "opaque"
descriptions "true in virtue of the way the agent
represents the objects of his wants (intentions,
beliefs, etc.) to HIMSELF" (his emphasis). But what an
agent intends may be widely different from the way he
represents the object of his intention to himself. A man
cannot shuck off the responsibility for killing another
man by just 'directing his intention' at the firing of a
gun:
"I press a trigger - Well, I'm blessed!
he's hit my bullet with his chest!"'
P. Geach
ibid p80
The Methodological Solipsist's stance is clearly at odds with what is
required to function effectively as an APPLIED Criminological
Psychologist if 'functional effectiveness' is taken to refer to
intervention in the behaviour of an inmate with reference to his
environment. Here's how Fodor contrasted Methodological Solipsism with
the naturalistic approach:
'..there's a tradition which argues that - epistemology
to one side - it is at best a strategic mistake to
attempt to develop a psychology which individuates
mental states without reference to their environmental
causes and effects...I have in mind the tradition which
includes the American Naturalists (notably Peirce and
Dewey), all the learning theorists, and such
contemporary representatives as Quine in philosophy and
Gibson in psychology. The recurrent theme here is that
psychology is a branch of biology, hence that one must
view the organism as embedded in a physical environment.
The psychologist's job is to trace those
organism/environment interactions which constitute its
behavior.'
J. Fodor (1980) ibid. p.64
Here is how Stich (1991) reviewed Fodor's position ten years on:
'This argument was part of a larger project. Influenced
by Quine, I have long been suspicious about the
integrity and scientific utility of the commonsense
notions of meaning and intentional content. This is not,
of course, to deny that the intentional idioms of
ordinary discourse have their uses, nor that the uses
are important. But, like Quine, I view ordinary
intentional locutions as projective, context sensitive,
observer relative, and essentially dramatic. They are
not the sorts of locutions we should welcome in serious
scientific discourse. For those who share this Quinean
scepticism, the sudden flourishing of cognitive
psychology in the 1970s posed something of a problem. On
the account offered by Fodor and other observers, the
cognitive psychology of that period was exploiting both
the ontology and the explanatory strategy of commonsense
psychology. It proposed to explain cognition and certain
aspects of behavior by positing beliefs, desires, and
other psychological states with intentional content, and
by couching generalisations about the interactions among
those states in terms of their intentional content. If
this was right, then those of us who would banish talk
of content in scientific settings would be throwing out
the cognitive psychological baby with the intentional
bath water. On my view, however, this account of
cognitive psychology was seriously mistaken. The
cognitive psychology of the 1970s and early 1980s was
not positing contentful intentional states, nor was it
(adverting) to content in its generalisations. Rather, I
maintained, the cognitive psychology of the day was
"really a kind of logical syntax (only psychologized).
Moreover, it seemed to me that there were good reasons
why cognitive psychology not only did not but SHOULD not
traffic in intentional states. One of these reasons was
provided by the Autonomy argument.'
Stephen P. Stich (1991)
Narrow Content meets Fat Syntax
in MEANING IN MIND - Fodor And His Critics
and writing with others in 1991, even more dramatically:
'In the psychological literature there is no dearth of
models for human belief or memory that follow the lead
of commonsense psychology in supposing that
propositional modularity is true. Indeed, until the
emergence of connectionism, just about all psychological
models of propositional memory, except those urged by
behaviorists, were comfortably compatible with
propositional modularity. Typically, these models view a
subject's store of beliefs or memories as an
interconnected collection of functionally discrete,
semantically interpretable states that interact in
systematic ways. Some of these models represent
individual beliefs as sentence like structures - strings
of symbols that can be individually activated by their
transfer from long-term memory to the more limited
memory of a central processing unit. Other models
represent beliefs as a network of labelled nodes and
labelled links through which patterns of activation may
spread. Still other models represent beliefs as sets of
production rules. In all three sorts of models, it is
generally the case that for any given cognitive episode,
like performing a particular inference or answering a
question, some of the memory states will be actively
involved, and others will be dormant......
The thesis we have been defending in this essay is that
connectionist models of a certain sort are incompatible
with the propositional modularity embedded in
commonsense psychology. The connectionist models in
question are those that are offered as models at the
COGNITIVE level, and in which the encoding of
information is widely distributed and subsymbolic. In
such models, we have argued, there are no DISCRETE,
SEMANTICALLY INTERPRETABLE states that play a CAUSAL
ROLE in some cognitive episodes but not others. Thus
there is, in these models, nothing with which the
propositional attitudes of commonsense psychology can
plausibly be identified. If these models turn out to
offer the best accounts of human belief and memory, we
shall be confronting an ONTOLOGICALLY RADICAL theory
change - the sort of theory change that will sustain the
conclusion that propositional attitudes, like caloric
and phlogiston, do not exist.'
W. Ramsey, S. Stich and J. Garon (1991)
Connectionism, eliminativism, and the future of folk
psychology.
The implications here are that progress in applying psychology will be
impeded if psychologists persist in trying to talk about, or use,
psychological (intensional) phenomena within a framework (evidential
behaviourism) which inherently resists quantification into such terms.
Without bound, extensional predicates we cannot reliably use the
predicate calculus, and without the predicate (functional) calculus we
cannot formulate lawful relationships, statistical or determinate.
In the following pages, I hope to be able to explicate how dominant
the methodologically solipsistic approach is within psychological
research and practice, and how that work can only have a critically
negative impact on the practical work of the Applied Criminological
Psychologist. In the main, the following looks to the study of how
people spontaneously use socially conditioned (induced) intensional
heuristics, and how these are at odds with what we now know to be
formally optimal (valid) from the stance of the objective
(extensional) sciences. It argues that the primary objective of the
applied psychologist must be the extensional analysis of observations
of behaviour (Quine 1990) and that any intervention or advice must be
based exclusively on such data if what is provided is to be classed as
a professional service. To attempt to understand or describe behaviour
without reference to the environment within which it occurs is, it is
argued, at best to understand and describe behaviour only partially, a
point made long ago by Brunswik and Tolman (1933). To do otherwise is
to treat self-assessment/report as a valid and reliable source of
behavioural data, whilst a substantial body of evidence from Cognitive
Psychology, some of which is reviewed in this paper, suggests such a
stance is a very fundamental error. Like 'folk physics', 'folk
psychology' has been documented and found wanting. The last section of
this paper outlines a technology for directly recording and
extensionally analysing inmate/regime interactions or relations,
thereby providing a practical direction to shape the work of Applied
Criminological Psychology.
The following pages cite some examples of research which look at the
use of intensional heuristics from a methodologically solipsistic
stance. The first looks at the degree to which intensional heuristics
can be trained, and is a development of work published by Nisbett and
Krantz (1983). The concept of response generalisation, ie the transfer
of training to new problems, is the key issue in what follows. However,
as Nisbett and Wilson (1977) clearly pointed out, subjects' awareness
should not be given undue weight when assessing such training's
efficacy; instead, the criterion should be testing for change by
differential placement in contexts which require such skills.
'Ss were trained on the law of large numbers in a given
domain through the use of example problems. They were
then tested either on that domain or on another domain
either immediately or after a 2-wk delay. Strong domain
independence was found when testing was immediate. This
transfer of training was not due simply to Ss' ability
to draw direct analogies between problems in the trained
domain and in the untrained domain. After the 2-wk
delay, it was found that (1) there was no decline in
performance in the trained domain and (2) although there
was a significant decline in performance in the
untrained domain, performance was still better than for
control Ss. Memory measures suggest that the retention
of training effects is due to memory for the rule system
rather than to memory for the specific details of the
example problems, contrary to what would be expected if
Ss were using direct analogies to solve the test
problems.'
Fong G. T. & Nisbett R. E. (1991)
Immediate and delayed transfer of training effects in
statistical reasoning.
Journal of Experimental Psychology General; 1991 Mar Vol
120(1) 34-45
Note that the authors report a decline in performance after the delay,
a point taken up and critically discussed by Ploger and Wilson (1991).
Upon reanalysing Fong and Nisbett's results, these authors
concluded:
'The data in this study suggest the following argument:
Most college students did not apply the LLN [Law of
Large Numbers] to problems in everyday life. When given
brief instruction on the LLN, the majority of college
students were able to remember that rule. This led to
some increase in performance on problems involving the
LLN. **Overall, most students could state the rule with
a high degree of accuracy, but failed to apply it
consistently. The vast majority of college students
could memorize a rule; some applied it to examples, but
most did not.**
Fong and Nisbett (1991) concluded their article with the
suggestion that "inferential rule training may be the
educational gift that keeps on giving" (p.44). It is
likely that their educational approach may be successful
for relatively straightforward problems that are in the
same general form as the training examples. We suspect,
however, that for more complex problems, rule training
might be less effective. **Students may remember the
rule, but fail to understand the relevant implications.
In such cases, students may accept the gift, but it will
not keep on giving.'**
D. Ploger and M. Wilson
J Experimental Psychology: General, 1991,120,2,213-214
(My emphasis)
This criticism is repeated by Reeves and Weisberg (1993):
'G. T. Fong and R. E. Nisbett claimed that human problem
solvers use abstract principles to accomplish transfer
to novel problems, based on findings that Ss were able
to apply the law of large numbers to problems from a
different domain from that in which they had been
trained. However, the abstract-rules position cannot
account for results from other studies of analogical
transfer that indicate that the content or domain of a
problem is important both for retrieving previously
learned analogs (e.g., K. J. Holyoak and K. Koh, 1987;
M. Keane, 1985, 1987; B. H. Ross, 1989) and for mapping
base analogs onto target problems (Ross, 1989). It also
cannot account for Fong and Nisbett's own findings that
different-domain but not same-domain transfer was
impaired after a 2-wk delay. It is proposed that the
content of problems is more important in problem solving
than supposed by Fong and Nisbett.'
L. M. Reeves and R. W. Weisberg
Abstract versus concrete information as the basis for
transfer in problem Solving: Comment on Fong and Nisbett
(1991).
Journal of Experimental Psychology General 1993 Mar
Vol122(1) 125-128
The above authors concluded their paper:
'Accordingly, we urge caution in development of an
abstract-rules approach in analogical problem solving at
the expense of domain or exemplar-specific information.
Theories in deductive reasoning have been developed that
give a more prominent role to problem content (e.g.
Evans, 198; Johnson-Laird, 1988; Johnson-Laird & Byrne,
1991) and thus better explain the available data; the
evidence suggests that problem solving theories should
follow this trend.'
Ibid p.127
The key issue is not whether students (or inmates) can learn
particular rules, or strategies of behaviour, since such behaviour
modification is quite fundamental to training any professional;
rather, **the issue is how well such rules are in fact applied outside
the specific training domain where they are learned**, which, writ
large, means the specialism within which they belong. This theme runs
throughout this paper in different guises. In some places the emphasis
is on 'similarity metrics', in others, 'synonymy', 'analyticity' and
'the opacity of the intensional'. Throughout, the emphasis is on
transfer of training and the fragmentation of all skill learning which
is fundamental to the rationale for the system of Sentence Management
which will be explicated and discussed in the latter parts of this
paper.
Fong et al. (1990), having reviewed the general neglect of base-rate
information and the overemphasis on case-specific information in parole
decision making, went on to train probation officers in the use of the
law of large numbers. This training increased probation officers' use
of base rates when making predictions about recidivism, but this is a
specialist, context-specific skill.
'Consider a probation officer who is reviewing an
offender's case and has two primary sources of
information at his disposal: The first is a report by
another officer who has known the offender for three
years; and the second is his own impressions of the
offender based on a half-hour interview. According to
the law of large numbers, the report would be considered
more important than the officer's own report owing to
its greater sample size. But research suggests that
people will tend to underemphasize the large sample
report and overemphasize the interview. Indeed, research
on probation decisions suggests that probation officers
are subject to exactly such a bias (Gottfredson and
Gottfredson, 1988; Lurigio, 1981)'
G. T. Fong, A. J. Lurigio & L. J. Stalans (1990)
Improving Probation Decisions Through Statistical Training
Criminal Justice and behavior 17,3,1990, 370-388
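The sample-size point in the passage above can be illustrated with a
short simulation. The sketch below uses purely illustrative parameters
(nothing here is drawn from the Fong et al. data); it compares
estimates of a 'true' rate of some target behaviour based on a handful
of observations with estimates based on a much larger sample:

    # A minimal simulation of the law-of-large-numbers point; parameter
    # values are assumptions chosen for illustration.
    import random
    import statistics

    random.seed(0)
    true_rate = 0.3   # the offender's 'true' rate of the behaviour of interest

    def observed_rate(n_observations):
        # Estimate the rate from n independent observations.
        hits = sum(random.random() < true_rate for _ in range(n_observations))
        return hits / n_observations

    interview = [observed_rate(5) for _ in range(1000)]    # half-hour impression
    report = [observed_rate(500) for _ in range(1000)]     # three years of records

    # The small-sample estimates scatter far more widely around the true rate:
    print(statistics.pstdev(interview))    # roughly 0.2
    print(statistics.pstdev(report))       # roughly 0.02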
However, it is important to evaluate the work of Nisbett and
colleagues in the context of their early work, which is clearly in the
tradition of research on the fallibility of 'intuitive' human
judgement. Their work illustrates the conditions under which formal
discipline, or cognitive skills, can be effectively inculcated, and
which classes of skills are
relatively highly resistant to training. Such training generalises
most effectively to apposite situations, many of which will be
professional contexts. A major thesis of this volume is that for
extensional skills to be put into effective practice, explicit
applications must be made salient to elicit and sustain the
application of such skills. Formal, logical skills are most likely to
be applied within contexts such as actuarial analysis, which comprise
the application of professional skills in information technology. Such
a system is outlined, with illustrative practical examples, as a
framework for applied behaviour science in the latter part of this
volume.
Recently, Nisbett and colleagues (1992), in defending their stance
against the conventional view that there may in fact be little in the
way of formal rule learning, have suggested criteria for resolving the
question of whether or not explicit rule following is fundamental to
reasoning and, if so, under what circumstances:
'A number of theoretical positions in psychology -
including variants of case-based reasoning, instance-
based analogy, and connectionist models - maintain that
abstract rules are not involved in human reasoning, or
at best play a minor role. Other views hold that the use
of abstract rules is a core aspect of human reasoning.
We propose eight criteria for determining whether or not
people use abstract rules in reasoning, and examine
evidence relevant to each criterion for several rule
systems. We argue that there is substantial evidence
that several different inferential rules, including
modus ponens, contractual rules, causal rules, and the
law of large numbers, are used in solving everyday
problems. We discuss the implications for various
theoretical positions and consider hybrid mechanisms
that combine aspects of instance and rule models.'
E. Smith, C. Langston and R. Nisbett (1992)
The Case for Rules in Reasoning, Cognitive Science 16, 1-40
Whilst the above, particularly the degree to which training must be
'taught for transfer', is clearly relevant to the training of
psychologists in the use of deductive and actuarial technology
(computing and statistics), it is also relevant to work in the domain
of cognitive skills, and, from the evidence that cognitive skills
should be treated no differently to any other behavioural skills, the
argument is relevant to any other skill training, whether part of
inmate programmes or staff training.
For instance, in some of the published studies (e.g. Porporino et al.
1991), pre- to post-course changes (difference scores) in cognitive
skills have been presented as evidence for the efficacy of such
programmes in conjunction with the more critical (albeit to date,
quantitatively less impressive) measures of changes in reconviction
rate. Clearly one must ask whether one is primarily concerned to bring
about a change in cognitive behaviour, and/or a change in other
behaviours. In the transfer of training and reasoning studies by
Nisbett and colleagues, the issues are acknowledged to be highly
dependent on the types of heuristics being induced, and the
conventional position (which is being represented in this volume) is,
as pointed out above, still contentious, although the view being
expressed here remains the *conventional* one. The issue is one of
*generalisation* of skills to novel tasks or situations, ie situations
other than the training tasks. To what extent does generalisation in
practice occur, if at all? These issues, and the research in
experimental psychology (outside the relatively small area of
criminological psychology), are cited here as clear empirical
illustrations of *the opacity of the intensional*. The conventional
view, as Fong and Nisbett (1991) clearly state, is that:
'A great many scholars today are solidly in the
concrete, empirical, domain-specific camp established by
Thorndike and Woodworth (1901), arguing that people
reason without the aid of abstract inferential rules
that are independent of the content domain.'
Thus, whilst Nisbett and colleagues have provided some evidence for
the induction of (statistical) heuristics, they acknowledge that there
is a problem attempting to teach formal rules (such as those of the
predicate calculus) which are not 'intuitively obvious'. This issue is
therefore at the heart of the question of resourcing specific, ie
special, inmate programmes which are 'cognitively' based,
and which adhere to the conventional 'formal discipline' notion. Such
investment must be compared with investment in the rest of inmate
activities which can be used to monitor and shape behaviour under the
relatively natural conditions of the prison regime. There, the natural
demands of the activities are focal, and the 'programme' element rests
in apposite allocation and clear description of what the activity area
requires/offers in terms of behavioural skills.
There is a logical possibility that in restricting the subject matter
of psychology, and thereby the deployment of psychologists, to what
can only be analysed and managed from a Methodologically Solipsistic
(cognitive) perspective, one will render some very significant results
of research in psychology irrelevant to applied *behaviour* science
and technology, unless taken as a vindication of the stance that
behaviour is essentially context-specific. As explicated above,
intensions are not, in principle, amenable to quantitative analysis.
They are, in all likelihood, only domain- or context-specific. A few
further examples should make these points clearer.
--
David Longley
David Longley wrote:
>
> This is a a critique of a stance in contemporary psychology, and
> for want of a better term, I have characterised that as
> "intensional". As a corrective, I outline what may be described
> as "The Extensional Stance" which draws heavily on the
> naturalized epistemology of W.V.O Quine.
And hundreds of lines more of off-topic stuff. I apologize to these
groups for provoking the obviously disturbed Mr. Longley into posting
this inappropriate material. He has been doing so for over a year in
comp.ai.philosophy, but apparently has decided to branch out.
Hopefully he can reel himself back in and limit himself once again
to comp.ai.philosophy, where we are used to him and his autistic ways,
and have learned to tolerate his particular flavor of spam.
--
<J Q B>
In article <·············@netcom.com> ···@netcom.com "Jim Balter" writes:
> David Longley wrote:
> >
> > This is a a critique of a stance in contemporary psychology, and
> > for want of a better term, I have characterised that as
> > "intensional". As a corrective, I outline what may be described
> > as "The Extensional Stance" which draws heavily on the
> > naturalized epistemology of W.V.O Quine.
>
> And hundreds of lines more of off-topic stuff. I apologize to these
> groups for provoking the obviously disturbed Mr. Longley into posting
> this inappropriate material. He has been doing so for over a year in
> comp.ai.philosophy, but apparently has decided to branch out.
> Hopefully he can reel himself back in and limit himself once again
> to comp.ai.philosophy, where we are used to him and his autistic ways,
> and have learned to tolerate his particular flavor of spam.
>
> --
> <J Q B>
>
I am indeed "disturbed", and I think many others should be too.
However, there's nothing "autistic" about what I have been
attempting to draw attention to, and if it draws the attention of
even a few third generation programmers and a few folk in AI, I
think the intrusion will have been justified.
There *is* an applied context for all that I have posted so far, and
it's an important one (however construed), not
only for psychologists, but also for programmers (of both sorts).
So long as Balter does not follow up with yet more of his inane
pseudo-intellectual rubbish, I'll let this thread end with this
posting.
REGIMES, ACTIVITIES & PROGRAMMES:
WHAT WORKS & WHAT CAN BE EFFECTIVELY MANAGED:
An Illustrative Analysis with Implications for Viable Behaviour Science
1. What Works? Cognitive Skill Programmes or Structured Regimes &
Attainment?
'The primary reason for the impact of "What Works?" is
the extraordinary gap between the claims of success made
by proponents of various treatments and the reality
revealed by good research.'
Robert Martinson (1976)
California Research at the Crossroads
Crime & Delinquency, April 1976, pp.63-73
It may help to look closely at some recent views and analyses of 'What
Works' in the area of inmate rehabilitation. Here is what Martinson
(1976) had to say about the common defences of the 'efficacy' of
programmes:
'Palmer's critique of "What Works?" is a strong defense
of what was best in the California tradition of
"recidivism only" research; it is also a stubborn
refusal to take the step forward from that kind of
thinking to the era of "social planning" research.
The primary reason for the impact of "What Works?" is
the extraordinary gap between the claims of success made
by proponents of various treatments and the reality
revealed by good research.
Palmer bases his critique on grounds of research
methods. In doing so, he makes an interpretation error
by construing as "studies" the "efforts" Martinson
mentions in his conclusion. In fact, "effort" represents
an independent variable category; this use of the term
does not justify Palmer's statement that Martinson
inaccurately described individual studies, whose results
have been favorable or partially favorable, as being few
and isolated exceptions. The table in which Palmer
tabulates 48 percent of the research studies as having
at least partially positive results is meaningless; it
includes findings from studies of "intervention" as
dissimilar as probation placement and castration. Palmer
does not understand the difficulties of summarising a
body of research findings. The problem lies in drawing
together often conflicting findings from individual
studies which differ in the degree of reliance that can
be placed on their conclusions. It is essential to weigh
the evidence and not merely count the studies. The real
conclusion of "What Works?" is that the addition of
isolated treatment elements to a system in which a given
flow of offenders has generated a gross rate of
recidivism has very little effect in changing this rate
of recidivism.
To continue the search for treatment that will reduce
the recidivism rate of the "middle base expectancy"
group or that will show differential effects for that
group is to become trapped in a dead end. The essence of
the new "social planning" epoch is a change in the
dependent variable from recidivism to the crime rate
(combined with cost). The public does not care whether a
program will demonstrate that the experimental group
shows a lower recidivism rate than a control group;
rather, it wants to know whether the program reduced the
overall crime rate. To ask "which methods work best for
which types of offenders and under what conditions or in
what types of settings" is to impose the narrowest of
questions on the research for knowledge. The economists,
too, do not live in Palmer's world of "types" of
offenders. To them, recidivism is an aspect of
occupational choice strengthened by the atrophy of
skills and opportunity for legitimate work that occurs
during the stay in prison.
The aim of future research will be to create the
knowledge needed to reduce crime. It must combine the
analytical skills of the economist, the jurisprudence of
the lawyer, the sociology of the life span, and the
analysis of systems. Traditional "evaluation" will play
a modest but declining role.'
Robert Martinson
California Research at the Crossroads
Crime & Delinquency, April 1976, pp.63-73 (my emphasis)
Martinson's sanguine recommendation is for more work and less
rhetoric, and that work will, as he says, depend on our establishing,
and analysing the results of, better systems. The same cautious
remarks were made by Lab and Whitehead (1990) in response to the
analysis of Andrews et al. As recently as 1990, researchers
attempting to identify 'what works' from meta-analyses of published
research on programmes produced the following (it should be noted that
the analysis offered here presents a significantly different picture
to that presented by Nuttall in the 1992 seminar referenced above (see
Annex C)):
'Even without applications of the principles of risk and
need, the behavioral aspect of the responsivity
principle was supported. The mean phi coefficient for
behavioral service was 0.29 (N=41) compared with an
average phi of .04 (N=113) for nonbehavioral
interventions overall and with 0.07 (N=83) for
nonbehavioral treatments when criminal sanctions were
excluded...
We Were Not Nice to Guided Group Interaction &
Psychodynamic Therapy:
We reported empirical tests of guided group interaction
and psychodynamic therapy that yielded negative mean phi
estimates and, in response, Lab and Whitehead cited
rhetoric favorable to treatments that even their review
had found ineffective. Ideally, research findings have
the effect of increasing or decreasing confidence in an
underlying theory. Reversing that ideal of the
relationship between theory and research, Lab and
Whitehead use theory to refute research findings
unfavorable to treatments that they apparently prefer,
and they use theory to reject research findings
favorable to treatments that apparently they find less
attractive.'
Andrews et al. (1990)
A Human Science Approach or More Punishments and
Pessimism: A Rejoinder to Lab and Whitehead -
Criminology, 28,3 1990 419-429
Note that of the 154 tests of correctional treatment surveyed by
Andrews et al. (1990), the division into juvenile and adult across the
four types of treatment was as follows:
                                        JUVENILE  ADULT    a  b  c  d  e
                                                           -  -  0  +  +
1. CRIMINAL SANCTIONS                       26      4      3  1
2. INAPPROPRIATE CORRECTIONAL SERVICE       31      7      1  3  2  1
3. UNSPECIFIED CORRECTIONAL SERVICE         29      3      1  1  1
4. APPROPRIATE CORRECTIONAL SERVICE         45      9      1  8
                                          -----   ----
                                           131     23

a = Significant NEGATIVE Phi
b = Negative Phi
c = 0 Phi
d = Positive Phi
e = Significant POSITIVE Phi
The authors classified all of their studies into one of the above four
types to bring home their point that if programmes are analysed as to
whether or not they are appropriately targeted etc., it becomes easier
to ascertain whether anything does in fact work.
A negative Phi coefficient indicates that, if anything, the control
group did better than the treatment group. A significant negative Phi
indicates that this trend was statistically significant, ie that the
treatment group did WORSE than the control group.
Of the 23 adult programmes, 9 resulted in positive significant Phi
coefficients, ie where the treatment groups did better than controls.
These 9 studies are examined more closely below. Each of the 9
studies is listed along with its Phi coefficient. Listed also are the
percentages reconvicting in the treatment and control groups. Size of
these two groups is also listed, as is the setting of the programme.
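The Phi coefficients quoted for the studies below can be reproduced,
up to sign (which depends only on how the 2x2 table is coded), from the
reconviction percentages and group sizes. The following sketch is
illustrative only; it is simply the standard Phi calculation for a 2x2
table, not a procedure taken from Andrews et al.:

    # Phi coefficient for a 2x2 (treatment/control x reconvicted/not) table.
    # Rounding the published percentages to whole counts means reproduced
    # values may differ slightly from those quoted.
    from math import sqrt

    def phi(treated_n, treated_rate, control_n, control_rate):
        a = round(treated_n * treated_rate)    # treated, reconvicted
        b = treated_n - a                      # treated, not reconvicted
        c = round(control_n * control_rate)    # control, reconvicted
        d = control_n - c                      # control, not reconvicted
        return (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))

    # Study 1 below (Walsh 1985): 24% of 50 vs. 44% of 50 reconvicted.
    print(abs(round(phi(50, 0.24, 50, 0.44), 2)))   # 0.21, as quoted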
A. Appropriateness Uncertain on Targets/Style
1. Walsh A. (1985)     An evaluation of the effects of adult basic
                       education on rearrest rates amongst probationers.
                       J. of Offender Counselling, Services &
                       Rehabilitation
Phi = 0.21
24% Treatment Group (N=50) Reconvicted vs. 44% of Control Group (N=50)
Setting = COMMUNITY
B. Structured One-on-one Paraprofessional/peer Program
2. Andrews D.A (1980)  Some experimental investigations of the
                       principles of differential association through
                       deliberate manipulations of the structure of
                       service systems.
                       American Sociological Review 45, 448-462
Phi = 0.15
15% of Treatment Group (N=72) Reconvicted vs. 28% of Controls (N=116)
Setting = COMMUNITY
C. Intensive Structured Skill Training
3. Ross R.R, (1988)    Reasoning and rehabilitation.
   Fabiano E.A &       International Journal of Offender Therapy and
   Ewles C.D           Comparative Criminology
Phi = 0.52
18% of Treatment Group (N=22) reoffended vs. 70% of control group
(N=23)
Setting = COMMUNITY
4. Same Study
Phi = 0.31
18% of treatment group (N=22) reoffended vs. 47% of control group
(N=17)
Setting = COMMUNITY
5. Dutton D.G (1986)   The outcome of court-mandated treatment for wife
                       assault: A quasi-experimental evaluation.
                       Violence and Victims 1:163-175
Phi = 0.43
4% of treatment group (N=50) reoffended vs. 40% of control group
(N=50)
Setting = COMMUNITY
D. Appropriately Matched According to Risk/Responsivity or Need
Systems
6. Baird S.C, (1979) Project Report #14: A Two Year Follow-up
Heinz R.C. & Bureau of Community Corrections, Wisconsin
Bemus B.J Department of Health and Social Services
Phi = 0.17
16% of treatment group (N=184) reoffended vs. 30% of control group
(N=184)
Setting = COMMUNITY
7. Andrews D.A, (1986) The risk principle of case classification: An
Kiessling J.J, evaluation with young adult probationers.
Robinson D. & Canadian Journal of Criminology 28,377-396
Mickus S.
Phi = 0.31
33% of treatment group (N=54) reoffended vs. 75% of control group
(N=12)
Setting = COMMUNITY
8. Andrews D.A, (1980) Program structure and effective correctional
& Kiessling J.J practices: A summary of the CaVIC research
Phi = 0.82
0% of treatment group (N=11) reoffended vs. 80% of control group
(N=10)
Setting = COMMUNITY
9. Same study
Phi = 0.27
31% of treatment group (N=34) reoffended vs. 58% of control group
(N=23)
Setting = COMMUNITY
NOTE - all 9 studies which had positive Phi coefficients for the adult
programmes were conducted in the COMMUNITY, not in custodial
settings. All 9 programmes were classified as 'Probation, Parole,
Community' (PPC)
In fact, Andrews et al. (1990) say:
'The minor but statistically significant adjusted main
effect of setting is displayed in column six of Table 1.
This trend should not be overemphasized, but the
relatively weak performance of appropriate correctional
service in residential facilities is notable from Table
2 (mean phi estimate of .20 compared with .35 for
treatment within community settings, F[1/52] = 5.89,
p<.02). In addition, inappropriate service performed
particularly poorly in residential settings compared
with community settings (-.15 versus -.04, F[1/36] =
3.74, p<.06). Thus, it seems that institutions and
residential settings may dampen the positive effects of
appropriate service while augmenting the negative impact
of inappropriate service. This admittedly tentative
finding does not suggest that appropriate correctional
services should not be applied in institutional and
residential settings. Recall that appropriate service
was more effective than inappropriate service in all
settings.'
Andrews et. al (1990) ibid p384
In England, policy in the areas of both inmate programmes and Sentence
Planning is focused primarily on convicted adult males serving long
sentences. The data cited above speaks for itself. One may or may not
agree with Andrews et al. in their interpretation of these results.
Cited below are some further figures drawn from the Andrews paper
which should help clarify implications for the thesis being developed
in this paper.
Note that there were only 23 Adult studies. Of these, only 5 were in
residential or institutional settings. Of these 5, 4 produced negative
Phi coefficients, three of them significant (-.18,-.17, and -.14).
The fifth programme, which was the only one the authors classed as
appropriate, produced a non-significant Phi of 0.09.
This suggests that, at least in terms of the available adult studies,
the only significant finding is that adult programmes in
institutional/residential settings have, if any effect, only a
*deleterious* one on likelihood to reconvict!
Here is the summary of average Phi coefficients from all of the
studies examined in the Andrews et al. meta-analysis:
                               CORRECTIONAL SERVICE
                              Criminal     Inapp.      Unspec.    Appropriate
                              Sanctions
Sample of Studies
  Whitehead & Lab            -.04 (21)   -.11 (20)   .09 (16)    .24 (30)
  Sample 2                   -.13 ( 9)   -.02 (18)   .17 (16)    .37 (24)
Justice System
  Juvenile                   -.06 (26)   -.07 (31)   .13 (29)    .29 (45)
  Adult                      -.12 ( 4)   -.03 ( 7)   .13 ( 3)    .34 ( 9)
Year of Publication
  Before 1980s               -.16 (10)   -.09 (22)   .17 (11)    .24 (33)
  1980s                      -.02 (20)   -.03 (16)   .11 (21)    .40 (21)
Quality of Research Design
  Weaker                     -.07 (21)   -.04 (10)   .15 (18)    .32 (26)
  Stronger                   -.07 ( 9)   -.08 (22)   .11 (14)    .29 (28)
Setting
  Community                  -.05 (24)   -.14 (31)   .12 (27)    .35 (37)
  Institution/Res.           -.14 ( 6)   -.15 ( 7)   .21 ( 5)    .20 (17)
Behavioral Intervention
  No                         -.07 (24)   -.14 (31)   .12 (27)    .35 (37)
  Yes                            -       -.09 ( 2)   .23 ( 1)    .31 (38)
Overall Mean Phi             -.07 (30)   -.06 (38)   .13 (32)    .30 (54)
  S.D.                        .14         .15         .16         .19
Mean Phi Adjusted for        -.08 (30)   -.07 (38)   .10 (32)    .32 (54)
  Other Variables
In summary, the 9 adult programmes which had positively significant
Phi coefficients were all run in the COMMUNITY; they were not run in
prisons. In fact, of the 5 adult institutional/residential programmes,
3 produced significant NEGATIVE Phi coefficients, 1 produced a non-
significant negative Phi, and the fifth a Phi of 0.09. Three of the 5
were classed by Andrews et al. under CRIMINAL SANCTIONS (two with
significant negative Phi coefficients), and 1 under INAPPROPRIATE
CORRECTIONAL SERVICE (also a significant negative Phi). The only 1 to
be classified under APPROPRIATE CORRECTIONAL SERVICE was a study by
Grant & Grant (1959) entitled 'A Group Dynamics approach to the
treatment of nonconformists in the navy', a programme which produced a
Phi of 0.09 (treated group 29% reconvicted (N=135) vs. 38% of the
untreated (N=141)).
On the juvenile side, there were 30 institutional/residential
programmes (there were only 35 institutional or residential
programmes in the meta-analysis of 154 programmes). These 30 are
presented below:
                                                     a  b  c  d  e
JUVENILE                                             -  -  0  +  +
CRIMINAL SANCTIONS                           3       2  1
INAPPROPRIATE CORRECTIONAL SERVICE           6       2  2  2
UNSPECIFIED CORRECTIONAL SERVICE             5       4  1
APPROPRIATE CORRECTIONAL SERVICE            16       1  1  4 10
                                           ----
                                            30
Of these 30 programmes, there were 11 which produced significant
positive Phi coefficients, distributed as follows:
A. Token Economies
5 significant positive Phi coefficients,
(Note that there were also two programmes with negative phi
coefficients in this area, one significantly negative)
B. Individual/Group Counselling
1 positive significant Phi
C. Intensive Structured Skill Training
2 significant positive Phi coefficients
D. Structured one-on-one Paraprofessional/Peer Program
1 positive significant Phi
E. Family Therapy
1 positive significant Phi
UNSPECIFIED CORRECTIONAL SERVICE
Months served in programme
1 positive significant Phi
The implications for the efficacy (and therefore the resourcing) of
adult programmes within residential or institutional settings are not
promising on the basis of the studies reviewed by Andrews et al.
(1990).
As far as rehabilitation for adult prisoners is concerned, their study
provides little direct evidence to support a renewed faith in
rehabilitative programmes as conventionally implemented.
What their study can be taken to suggest as worthwhile is improvement
in how we structure what we do, so that we can begin to
work towards apposite allocation of inmates to appropriate activities
or settings.
Over recent years, there have been some moves to introduce formal
'Cognitive Skills' programmes, both in the Canadian Correctional
System (Porporino et al. 1991) and, more recently, within the English
system. Empirical studies to date have focused on very small numbers
in treatment and 'comparison' groups, and have produced equivocal
results when the dependent variable is taken as reconviction rate.
2. What Works in Special Cognitive Skills Programmes
In brief, and based on the *published data* at the time of writing
(1993), the efficacy of the Canadian Cognitive Skills Training cannot
be described as robust, nor can it be said that the programme's
content per se significantly influences recidivism. The objective here
is not to be negative; there may be further unpublished evidence which
puts the programme in a more favourable light. However, I think it
important to point out that, on the basis of the evidence reported in
the Porporino et al. (1991) paper, we should look at the claims made for
the efficacy of the Cognitive Skills programme with a degree of
caution. The published results can in fact be taken to suggest
something equally positive if considered from the alternative
perspective of Sentence Management as outlined in "The Implications of
Recent Research for The Development of Programmes and Regimes" (1992).
The Porporino et al. paper suggests that those who are motivated to
change (those who volunteered) did almost as well as those who
actually participated in the Cognitive Skills programme. If this is
true, it would seem to be further justification for adopting the
Attainment-based Sentence Management system as an infrastructure for
Inmate Programmes. If further evidence can be drawn upon to
substantiate the published claims for the efficacy of 'Cognitive
Skills', that evidence could be used to support the proposed strategy
of an integrated use of the natural demands of all activities and
routines to inculcate new skills in social behaviour and problem
solving. Sentence Management is designed to provide a prison service
with a means of integrating all of the assessment systems currently in
use across activities. It is important to appreciate that the criteria
it looks to assess inmates with respect to are the very criteria which
activity supervisors are already using to assess inmates, be these NVQ
Performance Criteria or the 'can do' statements of an RSA English
course. Attainment Criteria per se cannot therefore be dismissed
lightly. The Sentence Management system is designed to enable staff
throughout the system to pull together assessment material *in a
common format*; it has not been designed to ask anything new of such
staff, although they can add additional criteria to those they already
use if they wish.
Effective programmes must produce evidence of behaviour change, and
not merely self-report, or changes in verbal behaviour. For this, one
requires measures of attainment with respect to the preset skill
levels which programme staff have been contracted to deliver. All
programmes must have predetermined goals or objectives and these can
be specified independently of any participating inmates. If there is
evidence that the special programmes approach has special merit which
exempts it from the remarks made so far (which, from the review of
programmes below, must be viewed with caution), we should not lose
sight of the fact that special programmes are likely to be seen as
treatment programmes, and that they can only occupy inmates for a
small proportion of their time in custody. If there is evidence to
justify the efficacy of special programmes addressing how inmates
think, we should look carefully to what education and other skill
based programmes are designed to deliver. There is much to be said for
adopting an approach to inmate programmes which is education, rather
than treatment based, and one which looks to all that the regime has
to offer as an infrastructure.
The following is how the Canadian group describe the objectives of
their 'Cognitivist' approach:
'The basic assumption of the cognitive model is that the
offender's thinking should be a primary target for
offender rehabilitation. Cognitive skills, acquired
either through life experience or through intervention,
may serve to help the individual relate to his
environment in a more socially adaptive fashion and
reduce the chances of adopting a pattern of criminal
conduct.
Such a conceptualization of criminal behaviour has
important implications for correctional programming. It
suggests that offenders who are poorly equipped
cognitively to cope successfully must be taught rather
than treated. It suggests that emphasis be placed on
teaching offenders social competence by focusing on:
thinking skills, problem-solving and decision making;
general strategies for recognizing problems, analyzing
them, conceiving alternative non-criminal solutions to
them;
ways of thinking logically, objectively and rationally
without overgeneralizing, distorting facts, or
externalizing blame;
calculating the consequences of their behaviour - to
stop and think before they act;
to go beyond an egocentric view of the world and
comprehend and consider the thoughts and feelings of
other people;
to improve interpersonal problem-solving skills and
develop coping behaviours which can serve as effective
alternatives to anti-social or criminal behaviour;
to view frustrations as problem-solving tasks and not
just as personal threats;
to develop a self-regulatory system so that their pro-
social behaviour is not dependent on external control.
to develop beliefs that they can control their life;
that what happens to them depends in large measure on
their thinking and the behaviour it leads to.
To date we have been able to examine the outcome of 40
offenders who had been granted some form of conditional
release and were followed up in the community for at
least six months. On average, the follow up period was
19.7 months. We also gathered information on the outcome
of a comparison group of 23 offenders who were selected
for Cognitive Skills Training but had not participated.
These offenders did not differ from the program
participants on a number of characteristics and were
followed-up for a comparable period of time.
........................................offenders in the
treatment group were re-admitted for new convictions at
a lower rate than the comparison group during the
follow-up period. Specifically, only 20% of the
treatment group were re-admitted for new convictions
compared to 30% of the offenders in the comparison
group. It is interesting to note that the number of
offenders who were returned to prison without new
convictions (eg technical violations, day-parole
terminations) is similar yet marginally larger in the
treatment group. It is possible that the Cognitive
Skills Training participants may be subjected to closer
monitoring because of expectations regarding the
program.'
Porporino, Fabiano and Robinson
Focusing on Successful Reintegration:Cognitive Skills
Training for Offenders July 1991
'Fragments of Behaviour: The Extensional Stance', extracted from 'A
System Specification for PROfiling BEhaviour', presents a substantial
body of evidence drawn from mainstream research in the psychology of
reasoning which reveals that many of the above statements are in fact
a highly contentious set of propositions on the basis of established
empirical data.
Not only is their theoretical stance dubious on the basis of
mainstream research, but the authors tell us that seven of the 23 in
the comparison group were reconvicted for a new offence, whilst eight
of the 40 offenders in the treatment group were reconvicted for a new
offence. However, looking at returns to prison for violations of
parole etc., the authors say:
'It is interesting to note that the number of offenders
who were returned to prison without new convictions (eg
technical violations, day-parole terminations) is
similar yet marginally larger in the treatment group'.
Furthermore, when the authors compared the predicted reconviction rate
(52%) for these groups with the actual rates (20% and 30% for the
treatment and comparison groups respectively), the low rate of
reconviction in the comparison group led them to conclude:
'motivation for treatment in and of itself may be
influential in post-release success'.
In fact, the conclusion can be stated somewhat more strongly. Imagine
this was a drugs trial. The comparison group, like the treatment
group, are all volunteers. They all wanted to be in the programme; they
all, effectively, wanted to take the tablets. Some, however, didn't
get to join the programme, they didn't 'get to take the tablets', but
other than that did not differ from the treatment group. In the
Porporino study, those inmates comprised the comparison group. When
the reconviction data came in, it showed that those in the comparison
group were pretty much like those in the treatment group. The
treatment, ie the 'Cognitive Skills' training, had virtually no
effect. The comparison group is remarkably like the treatment group in
not being reconvicted for a new offence. In fact, if five of the
comparison group had reconvicted rather than seven, the reconviction
rates would have been virtually identical (20% vs. 21.7%).
                                          TREATMENT       COMPARISON
Readmissions with New Convictions         20%   (8/40)    30.4%  (7/23)
Readmissions without New Convictions      25%   (10/40)   21.7%  (5/23)
No Readmissions                           55%   (22/40)   47.9%  (11/23)
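The arithmetic can be checked directly from the counts in the table; a
minimal sketch (the counts are the table's own, and 0.52 is the predicted
reconviction rate quoted above):

    (defun rate (readmitted group-size)
      ;; percentage of the group, e.g. (rate 8 40) => 20.0
      (* 100.0 (/ readmitted group-size)))

    (rate 8 40)    ;=> 20.0     treatment, new convictions
    (rate 7 23)    ;=> ~30.4    comparison, new convictions
    (rate 10 40)   ;=> 25.0     treatment, readmitted without new convictions
    (rate 5 23)    ;=> ~21.7    comparison, readmitted without new convictions

    ;; reconvictions expected at the predicted 52% base rate
    (* 0.52 40)    ;=> ~20.8    vs. 8 actually observed in the treatment group
    (* 0.52 23)    ;=> ~12.0    vs. 7 actually observed in the comparison group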
Quite apart from the fact that the numbers being analyzed are
extremely small, it is questionable for the authors to take these
figures as justifying the claim that Cognitive Skills (ie an intensive
8-12 week course focusing on what inmates 'think', and apparently on
changing 'attitudes' rather than teaching new or different behaviours)
is causally efficacious in bringing about reductions in recidivism.
The comparison group, it must be appreciated, were all volunteers,
only differing from the treatment group in that they did not get to
participate in the programme. Yet only 30% of them (7/23) were
reconvicted for a new offence, compared to 20% (8/40) in the treatment
group. Compared with the expected reconviction rate for those in
either group (52%), one might reasonably be led to the conclusion that
those in the comparison group did very well, even relative to those
who actually participated in the programme. The above pattern of
results casts some doubt on how important the content of the Cognitive
Skills Programme was at all. The fact that the percentages in the 'No
Readmissions' and the 'Readmissions without New Convictions'
categories are so similar lends support to this view.
These (Canadian) studies have also presented evidence for short term
longitudinal changes in 'Cognitive Skills' performance for those
participating in the programme (and somewhat surprisingly, sometimes
in the comparison groups). These changes may however be comparable in
kind to the changes observed in the more formal education studies
surveyed by Nisbett et al. (1987). The whole notion of formal training
in abstract 'Cognitive Skills' might in fact be profitably critically
evaluated in the context of such research programmes, along with the
more substantial body of research into the heuristics and biases of
natural human judgment ('commonsense') in the absence of
distributional data. Other studies, e.g. McDougall, Barnett, Ashurst
and Willis (1987), although more sensitive to some of the
methodological constraints in evaluating such programs, still give
much greater weight to their conclusions than seems warranted by
either the design of their study or their actual data. For instance,
in the above study, an anger control course resulted in a
'significant' difference in institutional reports in a three month
follow up period at the p<0.05 level using a sign test. However, apart
from methodological problems, acknowledged by the authors, the
suggestion that the *efficacious component* was cognitive must, in the
light of the arguments of Meehl (1967;1978) and others, on the simple
logic of hypothesis testing, be considered indefensible. On the basis
of their design, one might (cautiously) suggest that there is some
evidence that participating in the programme had some effect
(possibly, as p<0.05), but precisely what it was within the course
which was efficacious can not be said given the design of the study.
As readers will come to appreciate, this is a pervasive problem in
social science research, and is yet another example of 'going beyond
the information given' (Bruner 1957;1974). The force of Meehl's and
Lakatos' arguments in the light of such failures to refrain from
inductive speculation on the basis of minimal evidence should not be
treated lightly. It is a problem which has reached epidemic
proportions in psychology, as many of the leading statisticians now
lament (Guttman 1985, Cohen 1990); the above studies are in fact quite
representative of the general failure of psychologists as a group to
appreciate the limits of the propositional as opposed to the predicate
calculus as a basis for their methodology. Most of the designs of
experiments adopted do not allow researchers to draw the conclusions
that they do from their studies. In the above study, the best one
could say is that behaviour improved for those inmates who
participated in a program. Logically, one simply cannot say more.
3. An Alternative: Sentence Management & Actuarial Analysis of
Attainment
At the same time that 'Cognitive Skills' programmes are being
developed in the English Prison system, an attempt is being made to
introduce a naturalistic approach to behavioural skill development and
assessment ('cognitive skills' being but one class of these
behaviours). Such skills are generally taught within education, as
elements of particular Vocational Training or Civilian Instructor
supervised training courses, NVQs or even some of the domestic
activities such as wing cleaning. This is the system of 'Sentence
Management' which looks to inculcate skills under the relatively
natural conditions of inmate activities and the day to day routines.
Systematic work over the past three years has generated a timetable
for the deployment of psychologists whereby attainment data can be
routinely collected from all areas of the regimes on a weekly basis,
automatically analyzed and converted into inmate reports, used to
generate incentive levels, and used to help staff identify norms and
outliers suitable as candidates for behavioural contract negotiation
and
monitoring. Through an explicit and auditable combination of
continuous assessment of behaviour, target negotiation, contracting
and apposite allocation of inmates, the system aims to maximise
transfer of skills acquisition by teaching for transfer (Gladstone
1989), and compensating for deficits.
The Porporino empirical data is quite consistent with the argument
that 'volunteers' for programmes make up a sub-population of inmates
motivated to attain who, simply because they are 'attainers', show a
difference in reconviction rate when compared to baseline predicted
rates. That is, what is observed in studies such as that by Porporino
et al (1991) in all likelihood has nothing to do with "Cognitive
Skills" course content. Rather, the pattern in the data substantially
supports the rationale behind the system of "Sentence Management"
outlined here, in the 1992 Directorate of Inmate Programmes Senior
Management seminar report and as empirically illustrated in volume 2
of "A System Specification for PROfiling BEhaviour" (Longley 1995).
This behaviour profiling and assessment system, outlined below, is
specifically designed to provide behaviour scientists and their
managers with a formal behaviour management infrastructure which
provides an explicit professional role for behaviour scientists in the
measurement of positive "attainment" inculcated through the natural
contingencies afforded by the regime which has been selected by
Governors and their senior management teams.
As a behaviour profiling and assessment system it is designed to
shadow the structure of the regime, providing staff with routine
feedback on how the regime is operating through sensitive measures of
positive attainment. On this basis, the system provides the sine qua
non for effective inmate sentence management (and Key Performance
Indicator Monitoring), accommodating the expertise of all activity
supervisors contributing to the regime, rather than risking a
divisive or disproportionate focus on (innovative, but as yet
unproven) special programmes for inmates - an example being the
contrast between traditional educational courses and the innovative
Cognitive or Thinking Skills programmes.
See also:
"Fragments of Behaviour: The Extensional Stance"
http://www.uni-hamburg.de/~kriminol/TS/tskr.htm
--
David Longley
In article <············@longley.demon.co.uk>, David Longley
<·····@longley.demon.co.uk> writes
>In article <··········@lyra.csx.cam.ac.uk>
> ·····@thor.cam.ac.uk "G.P. Tootell" writes:
>
>> what is this crap david and what's it doing in an msdos newsgroup?
>> kindly keep it to the ai groups please instead of spreading it all over the
>> place. if we're interested we'll subscribe to comp.ai.*
>>
>> nik
>
>If you want to know what it is I suggest you *read* it. As to *why* it's
>posted to these groups, I would have thought that was quite clear. Headers
>from the original abusive material have been retained so that my response
>to the nonsense from Balter is APPROPRIATELY circulated.
>>
>>
>> ·····@longley.demon.co.uk (David Longley) spammed :
>>
>> |> It will help if an idea of what we mean by 'clinical' and 'actuarial'
>> |> judgement is provided. The following is taken from a an early (Meehl
>> |> 1954), and a relatively recent review of the status 'Clinical vs.
>> |> Actuarial Judgement' by Dawes, Faust and Meehl (1989):
>> |>
>>
>> a whole bunch of crap pasted out of some book or something.
>>
>*That's* abusive too - I suggest you make an effort to understand the
>following (taking a brief break from blindly defending "netiquette").
>
> 'If we are limning the true and ultimate structure of
What is limning?
> reality, the canonical scheme for us is the austere
What do you mean by the "canonical scheme for us"?
> scheme that knows no quotation but direct quotation and
> no propositional attitudes but only the physical
What is a propositional attitude?
> constitution and behavior of organisms.'
>
> W.V.O Quine
> Word and Object 1960 p 221
>
>For:
>
> 'Once it is shown that a region of discourse is not
> extensional, then according to Quine, we have reason to
What is extensional? Who is Quine?
> doubt its claim to describe the structure of reality.'
>
> C. Hookway
> Logic: Canonical Notation and Extensionality
> Quine (1988)
>
>The problem with intensional (or common sense or 'folk') psychology
>has been clearly spelled out by Nelson (1992):
>
> 'The trouble is, according to Brentano's thesis, no such
> theory is forthcoming on strictly naturalistic, physical
> grounds. If you want semantics, you need a full-blown,
> irreducible psychology of intensions.
What?
>
> There is a counterpart in modern logic of the thesis of
> irreducibility. The language of physical and biological
> science is largely *extensional*. It can be formulated
> (approximately) in the familiar predicate calculus. The
> language of psychology, however, is *intensional*. For
> the moment it is good enough to think of an
> *intensional* sentence as one containing words for
> *intensional* attitudes such as belief.
>
> Roughly what the counterpart thesis means is that
What is the counterpart thesis?
> important features of extensional, scientific language
What?
> on which inference depends are not present in
> intensional sentences. In fact intensional words and
> sentences are precisely those expressions in which
> certain key forms of logical inference break down.'
>
> R. J. Nelson (1992)
> Naming and Reference p.39-42
>
>and explicitly by Place (1987):
>
> 'The first-order predicate calculus is an extensional
> logic in which Leibniz's Law is taken as an axiomatic
> principle. Such a logic cannot admit 'intensional' or
> 'referentially opaque' predicates whose defining
What is referentially opaque?
Very few people use these terms. You don't impress anyone with
your long words, mate. Either you want everyone to understand you,
in which case you will use simpler English, or you don't, in which
case why are you sending me and several hundred other programmers
this message? We are programmers, not English students.
Keep it simple so I can understand you, or go away.
> characteristic is that they flout that principle.'
>
> U. T. Place (1987)
> Skinner Re-Skinned P. 244
> In B.F. Skinner Consensus and Controversy
> Eds. S. Modgil & C. Modgil
>
--
David Williams
In article <················@smooth1.demon.co.uk>
···@smooth1.demon.co.uk "David Williams" writes:
> What is limning?
> What do you mean by the "canonical scheme for us"?
> What is a propositional attitude?
> What is extensional? Who is Quine?
> What is the counterpart thesis?
> What is referentially opaque?
> Either you want everyone to understand you
> in which case you will use simpler English or you don't, in which
> case why are you sending me and several hundred other programmers
> this message? We are programmers not English students.
> --
> David Williams
>
The issues are fundamentally computational, as many familiar with
the basis of programming will no doubt appreciate. As to specific
answers to the above, they are explained in the text and
references at:
http://www.uni-hamburg.de/~kriminol/TS/tskr.htm
'Humans did not "make it to the moon" (or unravel the
mysteries of the double helix or deduce the existence of
quarks) by trusting the availability and
representativeness heuristics or by relying on the
vagaries of informal data collection and interpretation.
On the contrary, these triumphs were achieved by the use
of formal research methodology and normative principles
of scientific inference. Furthermore, as Dawes (1976)
pointed out, no single person could have solved all the
problems involved in such necessarily collective efforts
as space exploration. Getting to the moon was a joint
project, if not of 'idiots savants', at least of savants
whose individual areas of expertise were extremely
limited - one savant who knew a great deal about the
propellant properties of solid fuels but little about
the guidance capabilities of small computers, another
savant who knew a great deal about the guidance
capabilities of small computers but virtually nothing
about gravitational effects on moving objects, and so
forth. Finally, those savants included people who
believed that redheads are hot-tempered, who bought
their last car on the cocktail-party advice of an
acquaintance's brother-in-law, and whose mastery of the
formal rules of scientific inference did not notably
spare them from the social conflicts and personal
disappointments experienced by their fellow humans. The
very impressive results of organised intellectual
endeavour, in short, provide no basis for contradicting
our generalizations about human inferential
shortcomings. Those accomplishments are collective, at
least in the sense that we all stand on the shoulders of
those who have gone before; and most of them have been
achieved by using normative principles of inference
often conspicuously absent from everyday life. Most
importantly, there is no logical contradiction between
the assertion that people can be very impressively
intelligent on some occasions or in some domains and the
assertion that they can make howling inferential errors
on other occasions or in other domains.'
R. Nisbett and L. Ross (1980)
Human Inference: Strategies and Shortcomings of Social
Judgment
--
David Longley
: The issues are fundamentally computational, as many familiar with
: the basis of programming will no doubt appreciate. As to specific
: answers to the above, they are explained in the text and
: references at:
...whatever. Everybody, just put this guy in your kill file. Let's get on
with normal, rational discussion...
GreG
: --
: David Longley
Balter Abuse (2)
<JB>
>Oh, I "said that", did I? I'm afraid that your fragmented mental
>defects make you incapable of comprehending what I say, so you should
>limit yourself to direct quotation.
<DL>
>> What goes around, comes around eh?
>
>> Or is this just another example of the "fragmentation" I keep
>> drawing your (and others') attention to (which you seem
>> determined to conceive as 'hypocrisy').
>>
><JB>
>Longley, you are apparently too mentally defective to understand that,
>since you apply your standards to others but not to yourself, you are
>in *moral* error. Hypocrisy is a matter of *ethics*, which are
>apparently beyond your sociopathic grasp. But if I point out that
>I'm not talking science here, you will criticize me for being in a
>muddle, which just further demonstrates your not-quite-sane autistic
>mental defect. But that is just another example of your fragmentation,
>I suppose (which can only prove your point through affirmation of the
>consequent, which would only prove your point about humans having
>trouble with logic through affirmation of the consequent, which ... oh,
>never mind.)
>
>--
><J Q B>
As to my being sociopathic or whatever - is this really likely
given the context of my work? Is it not more likely that you
still have not grasped what these issues are really all about?
Whilst the following is written with prison inmates in mind, the
same principles and system are being advocated for education &
training programmes more generally. Whilst some of the material
within this extract bears on this thread, it is posted in the
hope that it will elicit wider evaluation and discussion -
hopefully, what emerges from these fragments is an empirical
conclusion, not a particular ideology.
BEHAVIOUR MODIFICATION: SENTENCE MANAGEMENT & PLANS
'No predictions made about a single case in clinical work are
ever certain, but are always probable. The notion of
probability is inherently a frequency notion, hence
statements about the probability of a given event are
statements about frequencies, although they may not seem to
be so. Frequencies refer to the occurrence of events in a
class; therefore all predictions; even those that from their
appearance seem to be predictions about individual concrete
events or persons, have actually an implicit reference to a
class....it is only if we have a reference class to which the
event in question can be ordered that the possibility of
determining or estimating a relative frequency exists.....
the clinician, if he is doing anything that is empirically
meaningful, is doing a second-rate job of actuarial
prediction. There is fundamentally no logical difference
between the clinical or case-study method and the actuarial
method. The only difference is on two quantitative continua,
namely that the actuarial method is #more explicit# and #more
precise#.'
P. E. Meehl (1954)
Clinical versus Statistical Prediction
A Theoretical Analysis and a Review of the Evidence
This section outlines the second phase of PROBE's technology, that of
PROgramming BEhaviour. Monitoring behaviour is one essential function
of PROBE, and the major developments to date have been outlined in
Section 2. Effective *control* of behaviour, on the other hand, requires
staff and inmates to make use of that information in the interests of
programming or shaping behaviour in a pro-social (non-delinquent)
direction. This is what the Sentence Management and Planning system,
covered in this section is designed to provide. Further technical
details can be found in *Volumes 1 & 2* of this system specification.
If there is to be any change in an inmate's behaviour after release,
there will need to be a change in behaviour from the time he was
convicted, either through acquisition of new behaviours or simple
maturation (as in the age-report rate function). In ascertaining the
characteristic behaviour of classes, it is not that we make
predictions of future behaviour, but that we describe behaviour
characteristic of classes. This is clearly seen in discriminant
analysis and regression in general. We analyse the relationship
between one class and others, and, providing that an individual can be
allocated to one class or another, we can say, as a consequence of his
class membership, what other characteristics are likely to be the case
as a function of that class membership. Temporality, i.e. pre-diction,
has nothing to do with it.
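To make that concrete, here is a minimal sketch of what the actuarial use of
class membership amounts to (the reference classes and rates below are
placeholders of mine, not PROBE data): the estimate for an individual is
simply a lookup of the relative frequency recorded for the class to which he
has been allocated.

    (defparameter *reconviction-rates*
      ;; hypothetical reference classes and their observed relative frequencies
      '((:class-a . 0.52)
        (:class-b . 0.30)
        (:class-c . 0.12)))

    (defun actuarial-estimate (reference-class)
      ;; the 'prediction' is nothing more than the relative frequency
      ;; recorded for the class an individual is allocated to
      (cdr (assoc reference-class *reconviction-rates*)))

    (actuarial-estimate :class-b) ;=> 0.30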
Any system which provides a record of skill acquisition during
sentence must therefore be an asset in the long term management of
inmates towards this objective. However, research in education and
training, perhaps the most practical areas of application of Learning
Theory, clearly endorses the conclusions drawn in *Volume 1* on the
context specificity of the intensional. Some of the most influential
models of cognitive processing in the early to mid 1970s took context
as critical for encoding and recall of memory (Tulving and Thompson
1972). Generalisation Theory, ie that area of research which looks at
transfer-of-training has almost unequivocally concluded that learning
is context specific. Empirical research supports the logical
conclusion that skill acquisition does not readily transfer from one
task to another. This is another illustration of the failure of
substitutivity in psychological contexts. In fact, many of the
attractive notions of intensionalism, so
characteristic of cognitivism, may reveal themselves to be vacuous on
closer analysis:
'Generalizability theory (Cronbach, Gleser, Nanda & Rajaratnam
1972; see also, Brennan, 1983; Shavelson, Webb, & Rowley,
1989) provides a natural framework for investigating the
degree to which performance assessment results can be
generalised. At a minimum, information is needed on the
magnitude of variability due to raters and to the sampling of
tasks. Experience with performance assessments in other
contexts such as the military (e.g. Shavelson, Mayberry, Li &
Webb, 1990) or medical licensure testing (e.g. Swanson,
Norcini, & Grosso, 1987) suggests that there is likely
substantial variability due to task. Similarly,
generalizability studies of direct writing assessments that
manipulate tasks also indicate that the variance component
for the sampling of tasks tends to be greater than for the
sampling of raters (Breland, Camp, Jones, Morris, & Rock,
1987; Hieronymus & Hoover 1986).
Shavelson, Baxter & Pine (1990) recently investigated the
generalizability of performance across different hands-on
performance tasks such as experiments to determine the
absorbency of paper towels and experiments to discover the
reactions of sowbugs to light and dark and to wet and dry
conditions. Consistent with the results of other contexts,
Shavelson et al. found that performance was highly task
dependent. The limited generalizability from task to task is
consistent with research in learning and cognition that
emphasizes the situation and context-specific nature of
thinking (Greeno, 1989).'
R. L. Linn, E. L. Baker & S. B. Dunbar (1991)
Complex, Performance-Based Assessment:
Expectations and Validation Criteria
Educational Researcher, vol 20, 8, pp15-21
Intensionalists, holding that what happens inside the head matters, ie
that intension determines extension, appeal to our common, folk
psychological intuitions to support arguments for the merits of
abstract cognitive skills. However, such strategies are not justified
on the basis of educational research (see also *Volume 1*):
'Critics of standardized tests are quick to argue that such
instruments place too much emphasis on factual knowledge and
on the application of procedures to solve well-structured
decontextualized problems (see e.g. Frederiksen 1984). Pleas
for higher order thinking skills are plentiful. One of the
promises of performance-based assessments is that they will
place greater emphasis on problem solving, comprehension,
critical thinking, reasoning, and metacognitive processes.
These are worthwhile goals, but they will require that
criteria for judging all forms of assessment include
attention to the processes that students are required to
exercise.
It should not simply be assumed, for example, that a hands-on
scientific task encourages the development of problem solving
skills, reasoning ability, or more sophisticated mental
models of the scientific phenomenon. Nor should it be assumed
that apparently more complex, open-ended mathematics problems
will require the use of more complex cognitive processes by
students. The report of the National Academy of Education's
Committee that reviewed the Alexander-James (1987) study
group report on the Nation's Report Card (National Academy of
Education, 1987) provided the following important caution in
that regard:
It is all too easy to think of higher-order skills
as involving only difficult subject matter as, for
example, learning calculus. Yet one can memorize
the formulas for derivatives just as easily as
those for computing areas of various geometric
shapes, while remaining equally confused about the
overall goals of both activities.
(p.54)
The construction of an open-ended proof of a
theorem in geometry can be a cognitively complex
task or simply the display of a memorized sequence
of responses to a particular problem, depending on
the novelty of the task and the prior experience of
the learner. Judgments regarding the cognitive
complexity of an assessment need to start with an
analysis of the task; they also need to take into
account student familiarity with the problems and
the ways in which students attempt to solve them.'
ibid p. 19
As covered at length in Section 1.3, skills do not seem to generalise
well. Dretske (1980) put the issue as follows:
'If I know that the train is moving and you know that its
wheels are turning, it does not follow that I know what you
know just because the train never moves without its wheels
turning. More generally, if all (and only) Fs are G, one can
nonetheless know that something is F without knowing that it
is G. Extensionally equivalent expressions, when applied to
the same object, do not (necessarily) express the same
cognitive content. Furthermore, if Tom is my uncle, one can
not infer (with a possible exception to be mentioned later)
that if S knows that Tom is getting married, he thereby knows
that my uncle is getting married. The content of a cognitive
state, and hence the cognitive state itself, depends (for its
identity) on something beyond the extension or reference of
the terms we use to express the content. I shall say,
therefore, that a description of a cognitive state, is non-
extensional.'
F. I. Dretske (1980)
The Intentionality of Cognitive States
Midwest Studies in Philosophy 5,281-294
As noted above, this is corroborated by transfer of training research:
'Common descriptions of skills are not, it is concluded, an
adequate basis for predicting transfer. Results support J.
Fotheringhame's finding that core skills do not automatically
transfer from one context to another.'
C. Myers
Core skills and transfer in the youth training schemes:
A field study of trainee motor mechanics.
Journal of Organizational Behavior;1992 Nov Vol13(6) 625-632
'G. T. Fong and R. E. Nisbett (1993) claimed that human
problem solvers use abstract principles to accomplish
transfer to novel problems, based on findings that Ss were
able to apply the law of large numbers to problems from a
different domain from that in which they had been trained.
However, the abstract-rules position cannot account for
results from other studies of analogical transfer that
indicate that the content or domain of a problem is important
both for retrieving previously learned analogs (e.g., K. J.
Holyoak and K. Koh, 1987; M. Keane, 1985, 1987; B. H. Ross,
1989) and for mapping base analogs onto target problems
(Ross, 1989). It also cannot account for Fong and Nisbett's
own findings that different-domain but not same-domain
transfer was impaired after a 2-wk delay. It is proposed that
the content of problems is more important in problem solving
than supposed by Fong and Nisbett.'
L. M. Reeves & R. W. Weisberg
Abstract versus concrete information as the basis for
transfer in problem solving: Comment on Fong and Nisbett (1991).
Journal of Experimental Psychology General; 1993 Mar Vol
122(1) 125-128
'Content', recall, is a cognate of 'intension' or 'meaning'. A major
argument for the system of Sentence Management is that if we wish to
expand the range of an individual's skills (behaviours), we can do no
better than to adopt *effective* (ie algorithmic) practices to guide
placements of inmates into activities based on actuarial models of
useful relations which exist between skills, both positive and
negative. We are unlikely to identify these other than through
empirical analyses. These should identify where such skills will be
naturally acquired and practised. As discussed at length in Section 1
and in Volume 1, there is now overwhelming evidence that behaviour is
context specific. Given that conclusion, which is supported by social
role expectations (see any review of Attribution Theory), we are well
advised to approach all attempts at behaviour engineering via inmate
programmes and activities with this fully understood. Furthermore,
within the PROBE project at least, we have no alternative but to
eschew psychological, ie intensional (cognitive), processes because,
as we have seen (Section 1.3), valid inference is logically
impossible in principle within such contexts.
The work on Sentence Planning and Management represents work on the
second phase of PROBE's development between 1990 and 1994. The work on
Sentence Planning is a direct development of the original CRC
recommendations, and comprises records 33 and 34 of the PROBE system.
Sentence Management, comprising records 30,31 and 32 is designed as an
essential substrate, or support structure, for Sentence Planning.
3.1 SENTENCE MANAGEMENT
'I wish I had said that', said Oscar Wilde in applauding one
of Whistler's witticisms. Whistler, who took a dim view of
Wilde's originality, retorted, 'You will, Oscar; you will.'
This tale reminds us that an expression like 'Whistler said
that' may on occasion serve as a grammatically complete
sentence. Here we have, I suggest, the key to a correct
analysis of indirect discourse, an analysis that opens a lead
to an analysis of psychological sentences generally
(sentences about propositional attitudes, so-called), and
even, though this looks beyond anything to be discussed in
the present paper, a clue to what distinguishes psychological
concepts from others.'
D. Davidson (1969)
On Saying That p.93
'Finding right words of my own to communicate another's
saying is a problem of translation. The words I use in the
particular case may be viewed as products of my total theory
(however vague and subject to correction) of what the
originating speaker means by anything he says: such a theory
is indistinguishable from a characterization of a truth
predicate, with his language as object language and mine as
metalanguage. The crucial point is that there will be equally
acceptable alternative theories which differ in assigning
clearly non-synonymous sentences of mine as translations of
his same utterance. This is Quine's thesis of the
indeterminacy of translation.'
ibid. p.100
'Much of what is called for is to mechanize as far as
possible what we now do by art when we put ordinary English
into one or another canonical notation. The point is not that
canonical notation is better than the rough original idiom,
but rather that if we know what idiom the canonical notation
is for, we have as good a theory for the idiom as for its
kept companion.'
D. Davidson (1967)
Truth and Meaning
Delinquency, *simply construed*, is a failure to co-operate with
some social requirements. As an alternative to a purely custodial
model, the following outlines a positive incentive approach to
structuring time in custody. It is designed to map on to all elements
of inmate programmes, providing a systematic way of collating and
managing progress as reported by experts, which is analysed
objectively to produce reports based on actual behaviour rather than
casual judgement. It is, by design, a system which will allow
management of behaviour to be based on individual merit and
performance.
Regime & Sentence Management as a POSITIVE Behaviour Management System
An R & D Proposal
Introduction
Research between 1989 and 1991 led to the conclusion that Sentence
Planning will require a fundamental, systematic, and nationally
implemented information base, and that this can most efficiently be
derived from the management of inmate activities throughout the
estate. According to this view, Sentence Planning needs to be
supported by a system of 'Sentence Management' which focuses on the
structure and functions of available and potential inmate activities.
In this way, Sentence Planning would be integrated with the Regime
Monitoring System, effectively developing within the framework of
the 'accountable regime'. This implies that the most effective way to
launch Sentence Planning is not as an additional task grafted onto the
regime, but as a natural development and improvement of inmate review
and reporting practices.
The system specified below is efficient and cost-effective with the
potential infrastructure to support and integrate several initiatives
which have begun since the re-organisation. Although not covered in
this note, two of the most significant are Prisoners Pay, and The
Place of Work in the Regime.
In broad outline, what is proposed has much in common with the
Department of Education and Science's 1984 initiative Records of
Achievement and has the benefit of using this nationally implemented
programme in behaviour assessment as a source of best practice. Whilst
the initiative outlined below is an independent development which took
its cue from recommendations published in the 1984 HMSO CRC Report,
from which the PROBE (PROfiling Behaviour) project developed, results
of R&D work over the past 6 years are reassuringly compatible with the
work done throughout the English education system during the same
period. In this context, what is outlined below focuses on what the
Department of Education and Science referred to as Formative Profiling
(continuous assessment and interactive profiling involving the inmate
throughout his career) rather than Summative Profiling (which provides
a review somewhat akin to the parole review, or more locally, Long
Term Reviews). In all that follows, the recommendations of the 1984
HMSO CRC Report are seen to be integrally related.
Broad Outline
The system, for national implementation, across all sentence groups
can be specified as a 5 step cycle:
1. Inmates are observed under natural conditions of activities.
2. Observed behaviour is rated and recorded (continuous assessment).
3. Profiles of behaviour become the focus for interview dialogues/contracts.
4. Inmates are set targets based on the behaviour ratings/observations.
5. Elements of problem behaviour are addressed by apposite allocation.
Some immediate comments follow.
With little intrusion into the running of Inmate Activities, behaviour
which is central to these activities can be monitored and recorded
more directly to identify levels of inmate competence across the range
of activities. The records of competence would guide the setting and
auditing of individual targets.
Targets will be identified within the Activity Areas supported by the
regime. This requires continuous assessment of inmates within
activities, and the setting of targets based on a set of possible
attainments drawn from those activities. Such attainment profiles
would serve to identify and audit targets and would enable allocation
staff to judge the general standard of attainment within and across
activities, thereby enhancing both target-setting and auditing.
The frequency of behaviour assessment within activities and routines,
and the auditing of the whole process must be driven by what is
practicable. The system requires assessment of attainment to be
undertaken monthly, in order to ensure standardisation in collection
of Regime Monitoring data. Targets set are to be based on observations
of behaviour which are already fundamental to the running of
activities and routines, and the progress in achieving targets will be
discussed with the inmate, guiding allocation to activities within and
between prisons. These steps are in accordance with the policy
guidelines. Whilst the targets set will be individual, and when
collated will comprise a set of short and long term objectives
defining the 'Sentence Plan', they will fall into some broad areas
(social behaviour, health, performance at work, and so on).
By making more systematic use of the information which is already
being used to select, deselect and manage inmates within activities
and with respect to routines, Sentence Planning will become a natural
co-ordinating feature of the prison's regime.
Specific programmes for problem behaviour (e.g. sex offenders) can be
seen as particular inmate activities with their own, more intensive
assessment, activity and target setting procedures explicitly designed
to address problem behaviour. Development of, and allocation to such
programmes will be integrated with other activities. These programmes
are seen as both drawing on and informing 'Risk Assessment'.
Specific Details
Fundamental to the system outlined above is the fact that classes of
behaviour (as opposed to properties of inmates) are taken as the basic
data. These classes of behaviour are demanded by activities and
routines, and should serve as basic data for Regime Monitoring.
Observations of inmate behaviour are observations of an inmate's level
of attainment with respect to characteristics that staff responsible
for the activities have specified in advance as essential to the task.
Activities and routines have a structure quite independent of the
particular inmates who are subject to the demands of activities and
routines. Perhaps the defining feature of Sentence Management is that
it comprises a process of objective continuous assessment, where what
are assessed are levels of attainment with respect to pre-set aims and
objectives, themselves defining activities and routines. Since the
focus is on classes of behaviour rather than attributes of inmates,
all of the assessments are of progress with respect to pre-determined
classes of behaviour which are requirements of activities and
routines.
Attainment Areas
Each activity area can be specified in terms of classes of behaviour
which the activity requires. These classes of behaviour are basic
skill areas which are fundamental to the nature of the activity, which
in combination account for activities being distinguishable from each
other. These basic skill areas will be referred to as Attainment
Areas. They need to be carefully selected as they will be taken to be
the defining features of the activity. From this point of view, any
part of the daily routines should be specifiable in these terms, and
staff should be encouraged to think about how best their area of
inmate supervision could be so sub-classified. Whilst the
identification of Attainment Areas may, at first glance seem a
demanding or unfamiliar task, it is soon appreciated that the
identification of Attainment Areas is in fact a pre-requisite to the
establishment of any activity in prison, be it an education course,
industrial activity or simple housework.
Attainment Criteria
Each Attainment Area can be further classified into up to five levels
of attainment. These are levels of the same skill, progressing from a
low level of competence to a high level of competence. These must be
described in a series of direct statements, specifying particular
skills of graded sophistication which can be observed, and checked as
having been observed. Levels of competence are therefore NOT to be
specified as a scale from LOW to HIGH, but rather as a series of
specific, and observable behaviours. These are the Attainment Criteria
of the activity or routine. Just as Attainment Areas are naturally
identified by staff who design activities, so too are Attainment
Criteria natural pre-requisites for day to day supervision.
Competence Checklists (SM-1s)
For each set of Attainment Areas, the Attainment Criteria comprise a
COMPETENCE CHECKLIST, against which performance can be monitored.
Competence Checklists are referred to within the system as SM-1s.
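Purely by way of illustration (the activity, Attainment Areas and Attainment
Criteria below are invented, not drawn from any actual SM-1), a Competence
Checklist is no more than an activity, its Attainment Areas, and for each
area an ordered series of observable criteria:

    (defparameter *example-sm-1*
      ;; hypothetical checklist: one activity, two Attainment Areas, each with
      ;; graded, observable Attainment Criteria (lowest to highest)
      '(:activity "Light assembly workshop"
        :areas ((:area "Use of hand tools"
                 :criteria ("Identifies tools by name"
                            "Uses tools safely under supervision"
                            "Uses tools safely without supervision"
                            "Instructs others in safe use"))
                (:area "Quality of finished work"
                 :criteria ("Completes simple items with rework"
                            "Completes simple items to standard"
                            "Completes complex items to standard")))))

    (defun highest-criterion-met (area-name levels-observed checklist)
      ;; return the highest Attainment Criterion ticked for AREA-NAME, given
      ;; how many levels have been observed (continuous assessment against
      ;; specific observable behaviours, not a LOW-to-HIGH scale)
      (let* ((area (find area-name (getf checklist :areas)
                         :key (lambda (a) (getf a :area)) :test #'string=))
             (criteria (getf area :criteria)))
        (nth (1- levels-observed) criteria)))

    (highest-criterion-met "Use of hand tools" 2 *example-sm-1*)
    ;=> "Uses tools safely under supervision"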
Record of Targets (SM-2s)
Targets are identified using a second form, referred to as SM-2.
Targets will generally be identified from the profile of Attainment
Criteria within Activities (Competence Checklists, being completed on
a monthly basis, provide a record of progress). But Targets may also be
identified outside of standard activities, based on an analysis of
what is available within the Regime Digest, or Directory, which will be
a natural product of the process of defining Attainment Areas and
Attainment Criteria, and of the printing of the Competence Checklists.
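Continuing the illustrative checklist sketched above (names invented), a
target recorded on an SM-2 can be thought of as simply the next Attainment
Criterion above an inmate's current level in some Attainment Area:

    (defun candidate-target (area-name current-level checklist)
      ;; the next (as yet unmet) criterion above CURRENT-LEVEL is a natural
      ;; candidate for negotiation as a target; returns NIL if the inmate is
      ;; already at the top level for that area
      (let* ((area (find area-name (getf checklist :areas)
                         :key (lambda (a) (getf a :area)) :test #'string=))
             (criteria (getf area :criteria)))
        (nth current-level criteria)))  ; levels count from 1, NTH from 0

    (candidate-target "Use of hand tools" 2 *example-sm-1*)
    ;=> "Uses tools safely without supervision"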
The two forms, ATTAINMENTS (SM-1) and RECORD OF TARGETS (SM-2)
comprise the building blocks of the system. These forms are now
available as final drafts (and will incidentally be machine readable).
Both forms are designed to be stored in the third element of the
system, the inmate's Sentence Management Dossier. This is simply a
'pocket file' to hold the sets of the two forms, and the proposal is
that the Head of Inmate Activities and his staff be responsible for
maintaining the system.
Through an analysis of the SM-1s both within and across activity
areas, Heads of Inmate Activities would have a better picture of the
structure of the activities, and of the relative progress of inmates
within activities. With inmates actively involved in the process of
target negotiation, and with the system being objective, problems of
confidentiality, so characteristic of subjective reports, would become
substantially reduced. Whilst the system can run as a paper system,
once computerised, the data collected via SM-1s and SM-2s will form
the basis of automated reports.
Relationship to the Regime Monitoring System
The proposed procedure for recording Sentence Management is intimately
related to Regime Monitoring, as it is largely based on the same
Reporting Points within Activity areas making up the RMS. This will be
even more apparent when Regime Monitoring embraces more activities
than it does at present. It also holds the promise of providing a more
qualitative measure of regime delivery in that the record of
attainments will be an objective record of achievement.
The design of the SM-1 form enables the capture of the basic data required
for maintenance of the Regime Monitoring System (RMS). The form
provides an efficient means of collecting such data since each SM-1
records an inmate's daily attendance in the activity via a 1-28 day
register covering each morning and afternoon session attended.
Since the form is designed to record attendance and attainment data
each month, it implicitly allows the number of hours to be calculated
for each inmate, each reporting point, and at a higher level of
aggregation to produce data on the number of inmates for each activity
area, sub-establishment and so on.
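A minimal sketch of that aggregation (the field names and the session length
are illustrative assumptions, not part of the SM-1 specification):

    (defparameter *session-hours* 2.5) ; assumed length of one half-day session

    (defun sm1-hours (sm1)
      ;; SM1 is one month's return for one inmate at one Reporting Point,
      ;; e.g. (:inmate "A1234" :reporting-point :workshop-1 :sessions-attended 38)
      (* *session-hours* (getf sm1 :sessions-attended)))

    (defun activity-hours (sm1s reporting-point)
      ;; total hours recorded at one Reporting Point across all inmates
      (loop for sm1 in sm1s
            when (eq (getf sm1 :reporting-point) reporting-point)
              sum (sm1-hours sm1)))

    (activity-hours
     '((:inmate "A1234" :reporting-point :workshop-1 :sessions-attended 38)
       (:inmate "B5678" :reporting-point :workshop-1 :sessions-attended 30))
     :workshop-1)
    ;=> 170.0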
In terms of paperwork, this is not a demanding task, and in
capitalising on what is already done at Reporting Points (where daily
logs are maintained already) it promises to be an efficient and
accurate way of collecting the required data.
For a Reporting Point with 15 inmates, the system would require 15 SM-
1s to be completed and returned to the Head of Inmate Activities each
month. As mentioned above, the design of the forms renders them
potentially able to be processed by an Optical Mark Reader, allowing
the data to be converted to computer storable data, thereby making the
whole system easier to manage and audit.
Fundamental to the design of the SM-1 is the fact that the Attainment
Criteria are generated by staff who will be using them, each SM-1
being tied specifically to an activity. The content of the form is
'user definable'.
More than one SM-1 form will be completed per inmate per month since
the inmate will be assessed at more than one Reporting Point. To
record behaviour in daytime activities and domestically on the wings,
one SM-1 would be completed each month as a record of attainment at
the allocated work/education Reporting Point, and another on the
wings, the latter providing an assessment of the inmate's level of co-
operation/contribution to the general running of the routines, though
not necessarily contributing to the overall Regime Monitoring figures.
Although inextricably linked to the Regime Monitoring System (RMS),
the focus is at a more fundamental level of the regime - the recording
of attainment levels of individual inmates - with the RMS data being
logically compiled or deduced from those individual assessments. In
defining Attainment Areas and Attainment Criteria by staff supervising
the Reporting Point, in consultation with the HIA, the SM-1s and SM-2s
would allow staff to define the nature and objectives of the Reporting
Points, storing them within the proposed Sentence Management System to
serve as the basic statements for any subsequent computer profiling
of the inmate's progress as well as serving as the basic material for
a local and national directory or digest of activities and their
curricula.
Costs and Benefits
The cost of an Optical Mark Reader (OMR: the machine to read the
contents of the forms directly into a computer) to automate the
storage of the attainments data would be in the order of 8,000 per
prison, and probably substantially less if the systems were bought in
bulk. A small system to hold and analyse the data (including
appropriate software) would be in the order of 6000. We suggest that
the management of the Sentence Management System would most naturally
rest with the HIA, who would naturally liaise with other relevant
functional managers.
This relatively simple monitoring system would provide both Sentence
Management and Regime Monitoring information in one system.
Furthermore, the system could be Wide Area Networked (WAN), with each
AREA of 8-14 prisons being polled automatically by the Area Managers'
systems at HQ, these in turn being polled by a central system. The
system would be able directly to provide regime providers with
information bearing on their areas of concern.
This communications improvement is something which has already been
proposed to improve the efficiency of Regime Monitoring.
With data being collated once a month via SM-1s, weekly data would
only be available retrospectively once a month. Nevertheless, this may
well be a small price to pay for a substantial reduction in data
handling and the provision of a far more useful system. Such a system
would make Regime Monitoring a naturally emergent indicator of
Sentence Management, and could be implemented using much of the
already installed infrastructure for running and auditing inmate
activities.
A significant benefit is in the potential for automatic machine-
generated reports of inmate progress. These could save many thousands
of officer-hours. The practicality of such reports is already being
demonstrated in HMP Parkhurst.
Coverage of Non-Standard Inmate Activities
The SM-1 form is designed to allow all staff to formally assess any
programme of activity in a standard manner (ie, marking whether
behaviour in the activity matches the attainment criteria on the
Competence Checklist). This form has provision to record a Checklist
Code, along with the activity and reporting point identifier. This
Checklist Code will allow more than one checklist to be generated for
each Reporting Point if the extent or modular nature of the activity
requires multiple checklists for comprehensive assessment of the
skills which the activity offers.
Similarly, the SM-2 form allows targets to be identified by staff both
within an activity, or from a knowledge of what the regime has on
offer. The Head of Inmate Activities, in building a library of
Attainment Areas and Attainment Criteria, (the Regime Digest, or
Directory) will be able to provide interested staff, such as Review
boards, with a digest of what activities are available and how they
are broken down by attainment areas and criteria.
In this way, short duration intervention programmes can be included in
the 'Sentence Management Dossier' in the same way as are the more
formal activities. Formal activities (as currently defined within the
Regime Monitoring System) are so regarded because they tend to occupy
large groups of inmates in activities which are basically structured
to have inmates participate for a relatively fixed period (8 weeks to
several years).
Using this form of assessment, the staff wishing to run ad hoc
programmes, occupying either small groups or single inmates in short
modules would be tasked with defining Attainment Areas and Attainment
Criteria as a sine qua non for running the proposed programme,
submitting the proposal to the HIA to be considered as an element of
the regime.
The fact that each SM-1 has an attendance register will permit the
system to capture the extent of all activity throughout the regime,
thereby contributing to a more comprehensive profile of activity
within each establishment and the estate in general. The Head of
Inmate Activities' task would more clearly become one of co-ordinating
Attainment Areas to bring about a balanced and appropriately monitored
regime, and the data would serve as a sound information base from
which staff could build Sentence Plans.
The system is designed to support full recording of inmate behaviour
and, based on co-operation with the demands of the routines and
activities, allow staff to negotiate and contract behaviour targets
based on their level of behaviour and known empirical relations which
hold between classes of behaviour.
See: "Fragments of Behaviour: The Extensional Stance" for a more
comprehensive account: http://www.uni-hamburg.de/~kriminol/TS/tskr.htm
--
David Longley
I've trimmed a lot of unnecessary newsgroups from the Newsgroups line. Keep
this in the right place.
In article <·················@cci.com>, Carl Donath <···@cci.com> wrote:
>Jens Kilian wrote:
>> _Any_ programming language can be implemented by an interpreter or a
>> compiler.
>> It just doesn't make sense to speak about "compiled languages" vs
>> "interpreted
>> languages". I take it that you have never heard about C interpreters?
>
>This does not take into account languages (Lisp, APL) where the program
>may generate functions and execute them. A compiler could only do this
>if the compiled program included a compiler to compile and execute the
>generated-on-the-fly instructions, which is difficult and/or silly.
Perhaps one should read up a little more before opening one's mouth. It's
neither difficult nor silly, and there are many systems out there that do
exactly this (Macintosh Common LISP, CMU Common LISP, SELF).
Granted, if you want to, say, build a defun expression at run time and execute
THAT, you need EVAL, and then you need the compiler. But if you don't, if
you have functions which return closures, then you don't need the compiler to
generate new functions at run-time, since all you're doing is modifying a
closure. Granted, not as powerful as generating a complex new function, but
very few systems actually need to do that at runtime.
I've written many systems in MCL which created new functions at runtime via
closures, WITHOUT the compiler.
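For instance, something like this (a throwaway sketch, not from any real
system -- the names are invented for illustration):

    ;; MAKE-SCORER returns a brand new function object each time it is
    ;; called, by closing over the weights it was given. No EVAL, no
    ;; call to COMPILE -- the compiler only ever saw the one LAMBDA below.
    (defun make-scorer (weights)
      (lambda (features)
        ;; dot product of the captured weights with the features
        (reduce #'+ (mapcar #'* weights features))))

    (defvar *scorer* (make-scorer '(0.5 1.0 2.0)))
    (funcall *scorer* '(1 2 3))   ; => 8.5

Each call to MAKE-SCORER gives you a genuinely new function at runtime,
even though no compiler or interpreter need be present in the delivered image.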
>In a phrase, "self-modifying code".
If it's done within certain constraints, not a problem at all. Like I said
above, EVAL is the killer there, and it's easy to get by without it. I don't
think I've ever written a lisp program that used eval, and I've built a lot of
very large, sophisticated lisp systems.
- Adam
--
Adam Alpern, <······@brightware.com>
Adam Alpern wrote:
> I've trimmed a lot of unnecessary newsgroups from the Newsgroups line. Keep
> this in the right place.
Good idea.
> Granted, if you want to, say, build a defun expression at run time and execute
> THAT, you need EVAL, and then you need the compiler. But if you don't, if
> you have functions which return closures, then you don't need the compiler to
> generate new functions at run-time, since all you're doing is modifying a
> closure. Granted, not as powerful as generating a complex new function, but
> very few systems actually need to do that at runtime.
The whole point is -- you cannot say a language
has such and such a property unless that
property applies to ALL programs which
CAN be written in that language, according
to the language definition. Sure,
anybody can make up Lisp programs
which can be trivially compiled. What does
that prove?
Btw, thanks for responding. I was beginning to suspect all
actual knowledge of Lisp techniques must
be at the vanishing point and the implementors
had moved on to different fields...
Carl Donath (···@cci.com) wrote:
> Jens Kilian wrote:
> > _Any_ programming language can be implemented by an interpreter or a compiler.
> > It just doesn't make sense to speak about "compiled languages" vs "interpreted
> > languages". I take it that you have never heard about C interpreters?
> This does not take into account languages (Lisp, APL) where the program
> may generate functions and execute them. A compiler could only do this
> if the compiled program included a compiler to compile and execute the
> generated-on-the-fly instructions, which is difficult and/or silly.
It is neither difficult nor silly. Several Prolog systems are doing this,
and the SELF language is also compiled on-the-fly.
> In a phrase, "self-modifying code".
As long as it's done in a structured fashion (i.e., generate a whole new
function/predicate/class/whatsit and use it), so what?
Greetings,
Jens.
--
Internet: ···········@bbn.hp.com Phone: +49-7031-14-7698 (TELNET 778-7698)
MausNet: [currently offline] Fax: +49-7031-14-7351
PGP: 06 04 1C 35 7B DC 1F 26 As the air to a bird, or the sea to a fish,
0x555DA8B5 BB A2 F0 66 77 75 E1 08 so is contempt to the contemptible. [Blake]
In article <··········@isoit109.bbn.hp.com>, ·····@bbn.hp.com (Jens Kilian)
wrote:
> > This does not take into account languages (Lisp, APL) where the program
> > may generate functions and execute them. A compiler could only do this
> > if the compiled program included a compiler to compile and execute the
> > generated-on-the-fly instructions, which is difficult and/or silly.
>
> It is neither difficult nor silly. Several Prolog systems are doing this,
> and the SELF language is also compiled on-the-fly.
Why should it be silly? You can include an interpreter or a compiler.
Both are perfectly reasonable for a certain class of applications.
These are applications that are intended to be extended/modified
by users. Typically these are CAD systems, editors, music generation, ...
Everything that makes use of an extension language falls into
this category. Many Lisp systems can also get rid of the compiler
for delivery.
If the software itself wants to generate code that should be compiled
at runtime, why not? This is true for development environments,
interface builders, rule-based systems, parser generators, ...
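As a sketch of the runtime-compilation case (the rule representation here
is invented for illustration, but COMPILE itself is standard Common Lisp):

    ;; Turn a tiny user-supplied "rule" into a compiled function at runtime.
    (defun compile-rule (condition action)
      (compile nil `(lambda (x)
                      (when ,condition ,action))))

    (defvar *rule* (compile-rule '(> x 10) ''big))
    (funcall *rule* 42)   ; => BIG
    (funcall *rule* 3)    ; => NIL

A rule-based system or an interface builder can construct such functions
from user input and still run them at compiled speed.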
> > In a phrase, "self-modifying code".
>
> As long as it's done in a structured fashion (i.e., generate a whole new
> function/predicate/class/whatsit and use it), so what?
Yep.
Rainer Joswig
From: Patrick Juola
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <56havm$r9h@news.ox.ac.uk>
In article <·················@cci.com> Carl Donath <···@cci.com> writes:
>Jens Kilian wrote:
>> _Any_ programming language can be implemented by an interpreter or a compiler.
>> It just doesn't make sense to speak about "compiled languages" vs "interpreted
>> languages". I take it that you have never heard about C interpreters?
>
>This does not take into account languages (Lisp, APL) where the program
>may generate functions and execute them. A compiler could only do this
>if the compiled program included a compiler to compile and execute the
>generated-on-the-fly instructions, which is difficult and/or silly.
Difficult and/or silly perhaps, but also common.
Patrick
In article <·················@cci.com> ···@cci.com "Carl Donath" writes:
> Jens Kilian wrote:
> > _Any_ programming language can be implemented by an interpreter or a compiler.
> > It just doesn't make sense to speak about "compiled languages" vs "interpreted
> > languages". I take it that you have never heard about C interpreters?
>
> This does not take into account languages (Lisp, APL) where the program
> may generate functions and execute them. A compiler could only do this
> if the compiled program included a compiler to compile and execute the
> generated-on-the-fly instructions, which is difficult and/or silly.
This isn't hard to do. It's just unpopular.
> In a phrase, "self-modifying code".
In a phrase, an app that includes some development tools. Err,
correct me if I'm wrong, but I do believe that there are many
apps that _do_ in fact include some form of development tool(s)!
In fact, there are APIs that support this. ActiveX allows a
programmer to very simply add a "script" language to their app,
without writing a compiler/interpreter/whatever. It's all done
inside the ActiveX classes. I don't see why this couldn't work
with native code, as the OS API supports that, too, by allowing
code to write code in memory, and then call it.
BTW, in the case of ActiveX, VBScript and JavaScript are already
available and being used. If anyone could produce Scheme or Common
Lisp classes for ActiveX scripting, then I'd be _very_ happy!
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
Jens Kilian wrote:
> _Any_ programming language can be implemented by an interpreter or a compiler.
> It just doesn't make sense to speak about "compiled languages" vs "interpreted
> languages". I take it that you have never heard about C interpreters?
>
That's a self-consistent set of definitions, but I didn't notice
you jumping in to say "this topic is meaningless because
there is no such thing as an interpreted language"?
If you are trying to use the term "interpreted language",
you must go by its past usage. Tactics like switching the meaning,
terms and subject in the middle of a discussion may be legitimate
in certain fields, but hopefully not in a technical
discussion on programming.
In article <·············@dma.isg.mot.com>
·······@dma.isg.mot.com "Mukesh Prasad" writes:
> Jens Kilian wrote:
>
> > _Any_ programming language can be implemented by an interpreter or a compiler.
> > It just doesn't make sense to speak about "compiled languages" vs "interpreted
> > languages". I take it that you have never heard about C interpreters?
> >
>
> That's a self-consistent set of definitions, but I didn't notice
> you jumping in to say "this topic is meaningless because
> there is no such thing as an interpreted language"?
>
> If you are trying to use the term "interpreted language",
> you must go by its past usage. Tactics like switching the meaning,
> terms and subject in the middle of a discussion may be legitimate
> in certain fields, but hopefully not in a technical
> discussion on programming.
>
It's not legitimate *anywhere* except in sophistry, rhetoric and
poetry... equivocation is anathema to scientific discussion and
reliable communication in any language.
--
David Longley
In article <·············@dma.isg.mot.com>
·······@dma.isg.mot.com "Mukesh Prasad" writes:
> You may not be aware of this (actually, you are obviously
> not) but books on programming languages tend to divide
> languages into two categories, "interpreted" and "compiled".
> I repeat, *languages*, not *implementations*.
Some people may never have used a language where the distinction
is hard to make.
Take Forth as an example. Is Forth compiled or interpreted? It
depends on your definition, but there are Forths that generate
native code. One of my batch Forth compilers generated assembly
source code, while a later version generates threaded code.
Interactive Forths can compile to either native or threaded code.
You could even argue that the Novix compiler generates _microcode_,
but since the code is 16bit, that may be debatable. Can microcode
be 16bit? Perhaps. Perhaps not.
If you run x86 code on an emulator, is that interpreted? Is it
still "native"? Who cares?
> Since its inception, Lisp has been placed by programming
> language theorists in the "interpreted" category.
> The language itself, not any particular implementation.
In which category did PJ Brown put Lisp? Or Basic...
There are C interpreters. So what?
The language theorists may not be the ones who decide these things.
It could be the marketing people. We should also ask _when_ these
categorisations were made. You didn't say, did you? By playing the
same game, I could say that C is an obscure novelty that few people
have heard of, never mind actually use. I could also say that it
was only available for Unix. However, I won't, because it would no
longer be true.
> However, Lisp systems have improved in technology.
> In the early days, Lisp interpreters directly interpreted
> the original source. An obvious improvement was
> to "compact" the source code and to get rid of
> comments, spaces etc prior to interpretation. But
> this does not make the language "compiled".
This is just history. Very old history. See above.
> Another improvement was to replace the original
> source code by more compact and easy to interpret
> "byte code". The function to do this is called
> "compile", hence confusing the typical Lisp user
> already.
Basic used to do this, and perhaps still does. Which Basic, tho?
VB? No. QuickBasic? ST Basic? BBC Basic?
> To confuse matters more, the newer versions of the
> "compile" function are more sophisticated, and can generate
> machine code into which the interpreter transfers
> the flow of control via a machine level jump
> instruction. The confusion of the typical modern
> day Lisp user is complete at this point!
What confusion? All I see here is the bollocks that you're talking.
You're talking history, which most people will ignore.
> However, having a function called "compile" doesn't
> make language a compiled language.
Not necessarily, but producing native code might. Do you mean
"compile to native code"? It's not clear - perhaps you're confusing
the word compile with some special meaning that only you know?
See PJ Brown's book for my choice. What's yours?
> An interpreted language is one which necessitates baggage
> at run-time to interpret it. A compiled language
> is one which doesn't. Lisp -- due to the nature
> of the language definition -- necessitates baggage at
> run-time, even with modern "compile" functions
> which can generate machine code.
Most, if not all, languages necessitate runtime baggage.
Perhaps you hadn't noticed this. C is a perfect example.
Very few programs use it without the standard C library.
Some of us call the OS directly - which can be thought of
as an even larger runtime! Ok, that's beyond the scope of
the language, but since you're so intent on confusing the
issue with misinformation, why not? Some Lisps run without
the support of what most people would call an OS. So do
some Basics...
> I will try once more (but not much more, this thread
> has not attracted knowledgable responses or
> intelligent, unbiased discourse) to explain this -- if the
> Lisp language _itself_ is to be deemed "compiled" (irrespective
> of any implementation of it), then by that definition,
> all languages must be deemed "compiled languages".
Now you're getting it! See PJ Brown's book. The words "compile"
and "interpret" are a distraction that'll only confuse you.
> For any given language, things which have been
> done to Lisp can be done. Thus that language's
> definition does not make the language "interpreted"
> any more than Lisp is.
That's a good argument for not making such distinctions,
and yet you insist on making them, as if they still mean
anything.
> >So the same _binary object code_ can be actual machine code or a byte
> >code, depending on what machine you run it on. So the notion of a
> >_language_ being "interpreted" or "compiled" makes no sense. A
>
> You should read some books on Computer Science. It is
> actually a matter of definition, not "sense". It will
> only make sense if you are familiar with the definitions.
Ah, but _which_ CS books? Whose definitions? Let's have
some modern definitions, from the last 20 years or so.
In fact, we could go back a lot further than that and
still find that distinctions like yours are misplaced.
It's better to look at it all as merely data. In Turing's
day, this may have been better understood, but more
recently, we've been unnecessarily obsessed with hardware.
Finally, attention seems to be shifting back to the notion
that the hardware details are...just details, like any other
information inside a computer. Even marketing people are
becoming aware of it!
> Otherwise, you might as well look at a book of mathematics
> and claim the term "factors" must have something to
> do with "fact"s, because that is how you choose to
> understand it.
Now you're being silly again. Wake up and smell the coffee!
How long have you been asleep?
> "Interpreted" and "compiled", when applied to
> languages, have specific meanings.
No they don't. You're talking about _implementations_,
not languages. There is a difference, y'know. This lack
of understanding doesn't help your credibility.
> >particular _implementation_ on an particular _computer_ down to a
> >particular _level of abstraction_ (e.g., 'down to 68K machine code')
> >can be "interpreted" or "compiled", but not a language.
>
> This and other such complicated gems occurring in
> this thread, are neither compiled nor interpreted, but
> simple and pure BS, arising out of ignorance, bias
> and lack of clear thinking.
Wanna bet? Have you noticed how many emulators for the
instruction sets of major CPUs are available today? Intel
are working on their next generation of CPUs, which will
emulate the x86 family. Is that BS? Perhaps, as most of
us have only Intel's word that this is what they're doing.
However, they're not the only ones pursuing this line.
Digital also have an x86 emulator, and others have produced
emulators before them. It's a rather _old_ idea.
You've mentioned CS books, so I wonder if you've read Vol 1
of Knuth's Art of Computer Programming? Take a look at the
Mix "machine" that he describes. It might just as well be
a real machine for all we care, and that's the point. Mix
allowed him to write code for a specific architecture, which
can be useful for certain areas of programming, like writing
a floating point package - or a compiler.
You might also want to take a look at the Pascal P4 compiler,
and the portable BCPL compiler. There are books documenting
them. Oh, and let's not forget the VP code used by Tao (see
<URL:http://www.tao.co.uk> for details).
The weight of evidence against you must be crushing. ;-)
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
From: Erik Naggum
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <3056908135389456@naggum.no>
* Mukesh Prasad
| Since its inception, Lisp has been placed by programming language
| theorists in the "interpreted" category. The language itself, not any
| particular implementation.
this should make you think very carefully about the qualifications of those
"programming language theorists".
| However, Lisp systems have improved in technology. In the early days,
| Lisp interpreters directly interpreted the original source.
the `read' function, which transforms a character string (the original
source) into lists/trees, has been with the language since the very
earliest days. nothing ever saw the original source code in the Lisp
system apart from this function, called the Lisp reader.
| An obvious improvement was to "compact" the source code and to get rid of
| comments, spaces etc prior to interpretation.
you have no inkling of a clue to what you talk about.
| Another improvement was to replace the original source code by more
| compact and easy to interpret "byte code".
geez. it appears that your _only_ exposure to "interpreted" languages is
BASIC systems. it is quite unbelievable that anyone should want to parade
the kind of ignorance you display across so many newsgroups.
| To confuse matters more, the newer versions of the "compile" function are
| more sophisticated, and can generate machine code into which the
| interpreter transfers the flow of control via a machine level jump
| instruction.
let me guess. you're an old ZX-80, TRS-80, CBM-64, etc, hacker, right?
you know the way the old BASIC interpreters worked, by heart, right? and
you think "interpreter" has to mean the same thing for toy computers in the
early 80's and a language "designed primarily for symbolic data processing
used for symbolic calculations in differential and integral calculus,
electrical circuit theory, mathematical logic, game playing, and other
fields of artificial intelligence" (McCarthy, et al: Lisp 1.5; MIT Press,
1962) in the early 60's.
| "Interpreted" and "compiled", when applied to languages, have specific
| meanings.
this is perhaps the first true statement I have seen you make in several weeks.
however, the specific meanings are not the ones you have in mind.
| This and other such complicated gems occurring in this thread, are
| neither compiled nor interpreted, but simple and pure BS, arising out of
| ignorance, bias and lack of clear thinking.
right. I was about to flame you for being a moron, so thanks for laying
the foundation. you don't know what you're talking about, you don't know
what Lisp is like, you don't know any of Lisp's history, you refuse to
listen when people tell you, and, finally, you don't seem to grasp even the
simplest of ideas so that you can express them legibly. in brief, you're a
moron. I sincerely hope that Motorola made an error in hiring you.
#\Erik
--
Please address private replies to "erik". Mail to "nobody" is discarded.
Erik Naggum <······@naggum.no> writes:
> when I was only an egg, at least I knew it. Mukesh Prasad may want
> to investigate the option of _listening_ to those who know more than
> him, instead of making a fool out of himself.
We'll leave *that* option to people who needn't learn anything or
admit a mistake or even fallibility just after they left their egg
state.
--
David Kastrup Phone: +49-234-700-5570
Email: ···@neuroinformatik.ruhr-uni-bochum.de Fax: +49-234-709-4209
Institut für Neuroinformatik, Universitätsstr. 150, 44780 Bochum, Germany
I pointed out _reasons_ why calling a language compiled or interpreted
made little sense. Prasad responded with several appeals to authority:
books on programming languages tend to divide
languages into two categories, "interpreted" and "compiled".
I repeat, *languages*, not *implementations*.
Since its inception, Lisp has been placed by programming
language theorists in the "interpreted" category.
The language itself, not any particular implementation.
Can you cite any sources to back up these claims? I flatly do not believe
them. Please tell us which "books on programming languages" you are
referring to. Were they published within the last 20 years?
Can you direct us to any statement in the literature by any
programming language theorist that supports this claim?
In the early days, Lisp interpreters directly interpreted
the original source. An obvious improvement was
to "compact" the source code and to get rid of
comments, spaces etc prior to interpretation.
This is complete nonsense. One of the interesting features of _all_
interpreted implementations of Lisp, from the very first, was that
they did not interpret a character string but rather the internal
linked-list ("s-expression") representation. See, e.g., "Programming in
the Interactive Environment" by Erik Sandewall, Computing Surveys,
V. 10, # 1, March 1978, for a discussion of some of the consequences
of this approach.
--------------------------------------------------------------------
Prof. Louis Steinberg ···@cs.rutgers.edu
Department of Computer Science http://www.cs.rutgers.edu/~lou
Rutgers University
In article <·················@atanasoff.rutgers.edu>,
Lou Steinberg <···@cs.rutgers.edu> wrote:
>I pointed out _reasons_ why calling a language compiled or interpreted
>made little sense. Prasad responded with several appeals to authority:
>
> books on programming languages tend to divide
> languages into two categories, "interpreted" and "compiled".
> I repeat, *languages*, not *implementations*.
Dividing languages into interpreted and compiled is like saying that
"trucks run on diesel fuel and cars run on gasoline". It may be a
valid generalization, but there are exceptions. The terms "interpreter"
and "compiler" describe features of an implementation not a language.
I don't know what kind of "authority" it takes to be convincing
about this distinction. I was part of the implementation team
at MIT that wrote a Lisp compiler for the VAX in the mid 1980's.
You can look in the proceedings of AAAI-96 for my paper
describing how I used a Lisp compiler to model multi-agent
reasoning. In fact, I wrote an interpreter for a simulation
language to do that.
> Since its inception, Lisp has been placed by programming
> language theorists in the "interpreted" category.
> The language itself, not any particular implementation.
>
>Can you cite any sources to back up these claims? I flatly do not believe
>them. Please tell us which "books on programming languages" you are
>referring to. Were they published within the last 20 years?
>Can you direct us to any statement in the literature by any
>programming language theorist that supports this claim?
> In the early days, Lisp interpreters directly interpreted
> the original source. An obvious improvement was
> to "compact" the source code and to get rid of
> comments, spaces etc prior to interpretation.
>
>This is complete nonsense. One of the interesting features of _all_
>interpreted implementations of Lisp, from the very first, was that
>they did not interpret a character string but rather the internal
>linked-list ("s-expression") representation. See, e.g., "Programming in
>the Interactive Environment" by Erik Sandewall, Computing Surveys,
>V. 10, # 1, March 1978, for a discussion of some of the consequences
>of this approach.
>--------------------------------------------------------------------
>Prof. Louis Steinberg ···@cs.rutgers.edu
>Department of Computer Science http://www.cs.rutgers.edu/~lou
>Rutgers University
--
Christopher R. Eliot, Senior Postdoctoral Research Associate
Center for Knowledge Communication, Department of Computer Science
University of Massachusetts, Amherst. (413) 545-4248 FAX: 545-1249
·····@cs.umass.edu, http://rastelli.cs.umass.edu/~ckc/people/eliot/
In article <·············@dma.isg.mot.com>
Mukesh Prasad <·······@dma.isg.mot.com> writes:
> You may not be aware of this (actually, you are obviously
> not) but books on programming languages tend to divide
> languages into two categories, "interpreted" and "compiled".
> I repeat, *languages*, not *implementations*.
What books are those? The ones I have read specifically state that it
is the *implementation* which can be divided into the two extremes of
"interpreted" (simulated) and "compiled" (translated). Some go on to
classify languages into these categories based upon the most common
*implementations*.
> Since its inception, Lisp has been placed by programming
> language theorists in the "interpreted" category.
> The language itself, not any particular implementation.
>
> However, Lisp systems have improved in technology.
> In the early days, Lisp interpreters directly interpreted
> the original source. An obvious improvement was
> to "compact" the source code and to get rid of
> comments, spaces etc prior to interpretation.
LISP systems never "directly interpreted the original source" --
rather, they convert the input source to an internal list representation
("S-expressions"). Comments and spaces are removed by the read
procedure.
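For example, in any Common Lisp (an illustrative transcript, not from any
particular implementation):

    (read-from-string "#| a comment |#   (+   1   2)")
    ;; first return value => (+ 1 2)   -- a list, not a character string

What EVAL or the compiler sees is the list structure; the comment and the
extra whitespace never survive the reader.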
> But
> this does not make the language "compiled".
"In the early days", LISP existed in both interpreted and compiled
forms. Read McCarthy's LISP1.5 Programmer's Manual (1965), which
describes the interpreter and the compiler (which generated IBM 7090
assembly language).
> An interpreted language is one which necessitates baggage
> at run-time to interpret it. A compiled language
> is one which doesn't. Lisp -- due to the nature
> of the language definition -- necessitates baggage at
> run-time, even with modern "compile" functions
> which can generate machine code.
What runtime baggage does the language LISP *require*? One might say
"garbage collection", but that can be considered a "helper function",
just like heap allocation via malloc() is for C.
-- Tim Olson
Apple Computer, Inc.
(···@apple.com)
From: Casper H.S. Dik - Network Security Engineer
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <casper.848393364@uk-usenet.uk.sun.com>
···@apple.com (Tim Olson) writes:
>What runtime baggage does the language LISP *require*? One might say
>"garbage collection", but that can be considered a "helper function",
>just like heap allocation via malloc() is for C.
A compiled lisp program typically requires a lisp interpreter; you can
always construct programs and execute them; if need be, compile them first.
So the lisp runtime system requires an interpreter/compiler. That is not
the case for languages in which it is not possible to create and/or
manipulate executable objects.
Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.
From: Erik Naggum
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <3057389535745514@naggum.no>
* Casper H. S. Dik
| So the lisp runtime system requires a interpreter/compiler. That is not
| the case for languages in which it is not possible to create and/or
| manipulate executable objects.
first, you need to distinguish between the "runtime system" and
"libraries". the _runtime_ system does not require an interpreter or
compiler. the runtime system would include error handlers, the garbage
collector, the function that was called by and returns control to the
operating system and is responsible for setting up and tearing down all
sorts of things, etc.
second, if the program has a command language, it has an interpreter for
another language embedded, and implements a micro-EVAL all of its own.
third, if the program reads data files that have any non-trivial format, it
implements the equivalent of READ, including lots of special-purpose code
and lex/yacc tables.
just because the functions are _not_ called READ, EVAL or COMPILE, doesn't
mean they aren't there. a language doesn't have to have executable objects
to run code not its own. any data-driven or table-driven implementation
can be regarded as an interpreter. etc. just because you can see it by
name in one language, and not in another, doesn't mean they differ in these
important regards.
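a trivial sketch of the sort of thing I mean (the command names are of
course invented):

    ;; nothing here is called READ or EVAL, but this table-driven
    ;; command loop is an interpreter all the same.
    (defvar *commands*
      (list (cons "add" (lambda (a b) (+ a b)))
            (cons "mul" (lambda (a b) (* a b)))))

    (defun run-command (name a b)
      (let ((entry (assoc name *commands* :test #'string=)))
        (if entry
            (funcall (cdr entry) a b)
            (error "unknown command: ~a" name))))

    (run-command "add" 3 4)   ; => 7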
it's been said that any sufficiently large programming system contains a
Common Lisp struggling to get out.
#\Erik
--
Please address private replies to "erik". Mail to "nobody" is discarded.
From: Jive Dadson
Subject: Re: C pointers and GC (Re: Lisp is not an interpreted language)
Date:
Message-ID: <329BB0BC.2150@ix.netcom.com>
If you all must insist, against all protests, to continue to post these never-ending
computer language discussions to completely inappropriate newsgroups,
please keep to one subject-line so the thousands of people who are interested
in what the newsgroups are actually intended for may know what to ignore.
Thank you very much,
J.
Tim Olson wrote:
[snip]
> What runtime baggage does the language LISP *require*? One might say
> "garbage collection", but that can be considered a "helper function",
> just like heap allocation via malloc() is for C.
I see garbage collection as not much more than a runtime
library. But eval, intern etc require language-processing
at run-time. This is what I was referring to as "required
baggage". In other words, when the language-processor cannot
just do its work and go away, but may have to hide itself
in some guise or other in the generated executable.
In article <············@dma.isg.mot.com>
·······@dma.isg.mot.com "Mukesh Prasad" writes:
> I see garbage collection as not much more than a runtime
> library. But eval, intern etc require language-processing
> at run-time. This is what I was referring to as "required
> baggage". In other words, when the language-processor cannot
> just do its work and go away, but may have to hide itself
> in some guise or other in the generated executable.
<ahem> The flaw in this argument has been pointed out to you
repeatedly. EVAL is not a feature of every Lisp, just as it
isn't supported by every Basic dialect. Not all Common Lisp
compilers support EVAL at runtime. Of course, we should take
care and define exactly what we mean by runtime. Since your
definitions of certain words can wildly differ from others
here, perhaps "runtime" doesn't mean the same thing to you
as it is to me and others? How can we tell?
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
From: Paul Schlyter
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <56vubi$ads@electra.saaf.se>
In article <············@dma.isg.mot.com>,
Mukesh Prasad <·······@dma.isg.mot.com> wrote:
> Tim Olson wrote:
> [snip]
>> What runtime baggage does the language LISP *require*? One might say
>> "garbage collection", but that can be considered a "helper function",
>> just like heap allocation via malloc() is for C.
>
> I see garbage collection as not much more than a runtime
> library. But eval, intern etc require language-processing
> at run-time. This is what I was referring to as "required
> baggage". In other words, when the language-processor cannot
> just do its work and go away, but may have to hide itself
> in some guise or other in the generated executable.
Garbage collection is definitely more than a runtime library -- it
requires language support as well. Consider implementing garbage
collection in a language like C -- it would be next to impossible,
because of all the dangling pointers that may remain here, there
and everywhere in the program, which must be updated automatically
if garbage collection is to be useful. Such a feat would be
impossible from a runtime library alone, and very hard even if
the compiler generated support code for this.
Thus a language with garbage collection must be much more restrictive
on pointer usage than C. LISP fits this description pretty well, since
it doesn't have pointers that the programmer is allowed to manipulate
in the LISP program.
--
----------------------------------------------------------------
Paul Schlyter, Swedish Amateur Astronomer's Society (SAAF)
Grev Turegatan 40, S-114 38 Stockholm, SWEDEN
e-mail: ······@saaf.se ···@home.ausys.se ····@inorbit.com
Paul Schlyter <······@electra.saaf.se> wrote:
+---------------
| Garbage collection is definitely more than a runtime library -- it
| requires language support as well. Consider implementing garbage
| collection in a language like C -- it would be next to impossible...
+---------------
Gee, then I guess I better stop linking the Boehm/Demers "Conservative
Garbage Collector for C and C++" with my C programs, hadn't I? I had
*thought* it was just a runtime library, but I guess I just didn't realize
it was "impossible". ;-} ;-}
-Rob
References:
http://reality.sgi.com/employees/boehm_mti/gc.html
ftp://parcftp.xerox.com/pub/gc/gc.html
-----
Rob Warnock, 7L-551 ····@sgi.com
Silicon Graphics, Inc. http://reality.sgi.com/rpw3/
2011 N. Shoreline Blvd. Phone: 415-933-1673 FAX: 415-933-0979
Mountain View, CA 94043 PP-ASEL-IA
> Gee, then I guess I better stop linking the Boehm/Demers "Conservative
> Garbage Collector for C and C++" with my C programs, hadn't I? I had
> *thought* it was just a runtime library, but I guess I just didn't realize
> it was "impossible". ;-} ;-}
It is impossible, in general.
You're just not writing general enough programs. Few people are that sadistic.
--
Where walks their Brother wan and lone |For the time being, email
who marched from halls of marbled stone?| to me might be lost or
The Brothers brood their bristling mood;| delayed. Email to the
their anger grows till air will moan. |sender will definitely go
From: Graham Hughes
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <578uiu$e9e@yuggoth.ucsb.edu>
······@netcom.com (·@?*$%) writes:
>> Gee, then I guess I better stop linking the Boehm/Demers "Conservative
>> Garbage Collector for C and C++" with my C programs, hadn't I? I had
>> *thought* it was just a runtime library, but I guess I just didn't realize
>> it was "impossible". ;-} ;-}
>It is impossible, in general.
>You're just not writing general enough programs. Few people are that sadistic.
Bullshit. You've never actually *looked* at the Boehm/Demers GC, have
you? Or at Great Circle? Both are general garbage collectors, and both
will point you to memory leaks you're not otherwise handling. If they
don't handle everything, they handle more than enough.
Geez, *look* at the stuff before you make silly comments.
--
Graham Hughes (·············@resnet.ucsb.edu)
·······················@A-abe.resnet.ucsb.edu".finger.look.examine
alt.homelike-page."http://A-abe.resnet.ucsb.edu/~graham/".search.browse.view
alt.silliness."http://www.astro.su.se/~robert/aanvvv.html".look.go.laugh
> Bullshit. You've never actually *looked* at the Boehm/Demers GC, have
> you? Or at Great Circle? Both are general garbage collectors, and both
> will point you to memory leaks you're not otherwise handling. If they
> don't handle everything, they handle more than enough.
>
> Geez, *look* at the stuff before you make silly comments.
Fantastic! They finally solved that halting problem! This is great news!
(Remembering, of course, that C and C++, unlike Lisp, permit arbitrary
transformations on pointer values which a collector can, apparently, always
transform back into a valid pointer.)
--
Where walks their Brother wan and lone |For the time being, email
who marched from halls of marbled stone?| to me might be lost or
The Brothers brood their bristling mood;| delayed. Email to the
their anger grows till air will moan. |sender will definitely go
······@netcom.com (·@?*$%) wrote:
>> Bullshit. You've never actually *looked* at the Boehm/Demers GC, have
>> you? Or at Great Circle? Both are general garbage collectors, and both
>> will point you to memory leaks you're not otherwise handling. If they
>> don't handle everything, they handle more than enough.
>>
>> Geez, *look* at the stuff before you make silly comments.
>
>Fantastic! They finally solved that halting problem! This is great news!
No, but (I'm just starting on this thread) they do make C++ memory
management a lot easier, usually faster, and trivial to debug with
essentially no programmer overhead.
>(Remembering, of course, that C and C++, unlike Lisp, permit arbitrary
>transformations on pointer values which a collector can, apparently, always
>transform back into a valid pointer.)
Of course, why one would store a pointer in an int, or XOR one pointer
against another, or any of the other bizarre tricks that C programmers
take glory in (instead of, e.g., writing solid code that does their
customers some good) is another issue entirely.
GC works.
rsr
www.wam.umd.edu/~rsrodger b a l a n c e
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"If we let our friend become cold and selfish and exacting without a
remonstrance, we are no true lover, no true friend."
- Harriet Beecher Stowe
I don't know which is sadder, programmers uneducated in math or uneducated
in their native language.
--
Where walks their Brother wan and lone |For the time being, email
who marched from halls of marbled stone?| to me might be lost or
The Brothers brood their bristling mood;| delayed. Email to the
their anger grows till air will moan. |sender will definitely go
Hi!
>Fantastic! They finally solved that halting problem! This is great news!
Beg your pardon? The halting problem has nothing to do with a conservative
garbage collector. That one just looks at each word to see whether it
_could_be_ a valid pointer. So the problem is, it misses some blocks, but
it never frees too many blocks. In an average program, you don't have that
much pointer manipulation. And even if you do, it's nothing critical. As I
said, the worst thing that can happen is that the GC doesn't find _all_
the free blocks.
bye, Georg
From: Warren Sarle
Subject: Re: C pointers and GC (Re: Lisp is not an interpreted language)
Date:
Message-ID: <E1JtGG.7y9@unx.sas.com>
Those of you who are intelligent enough to be able to program in C or
Lisp should also be capable of looking at the "Newsgroups" line and
noting that this thread and several related ones should NOT be
cross-posted to comp.ai,comp.ai.genetic,comp.ai.neural-nets, and
comp.ai.philosophy.
In article <··················@best.com>, Thomas Breuel <···@intentionally.blank> writes:
|> ······@netcom.com (·@?*$%) writes:
|> > (Remembering, of course, that C and C++, unlike Lisp, permit arbitrary
|> > transformations on pointer values which a collector can, apparently, always
|> > transform back into a valid pointer.)
|>
|> Read the ANSI C spec: C does not permit "arbitrary transformations on
|> pointer values". ...
--
Warren S. Sarle SAS Institute Inc. The opinions expressed here
······@unx.sas.com SAS Campus Drive are mine and not necessarily
(919) 677-8000 Cary, NC 27513, USA those of SAS Institute.
*** Do not send me unsolicited commercial or political email! ***
Hi!
PS>Garbage collection is definitely more than a runtime library -- it
PS>requires language support as well. Consider implementing garbage
PS>collection in a language like C -- it would be next to impossible,
Have a look at Boehm's conservative garbage collector. That addresses the
exact problem - it's a garbage collector for normal standard C. Works
great. Of course, if your language implementation has type tags or something
like that, this helps a lot. But you can do without.
bye, Georg
Lou Steinberg wrote:
[snip]
> Can you cite any sources to back up these claims? I flatly do not believe
> them. Please tell us which "books on programming languages" you are
> referring to. Were they published within the last 20 years?
> Can you direct us to any statement in the literature by any
> programming language theorist that supports this claim?
I will have to find and look through old books,
(not being academically involved, I don't keep
them on hand) but here are some books which may have
contributed to forming my opinion:
Pratt
The Dragon Book
Mehdi and Jayajeri
In general, until your very confident challenge,
I was very sure from all my reading that
languages themselves had been categorized as interpreted
vs compiled in the past. (Hence the reason
for this thread's name -- Lisp people never
liked Lisp being called an "interpreted
language".) But I will look.
> This is complete nonsense. One of the interesting features of _all_
> interpreted implementations of Lisp, from the very first, was that
> they did not interpret a character string but rather the internal
> linked-list ("s-expression") representation. See, e.g., "Programming in
> the Interactive Environment" by Erik Sandewall, Computing Surveys,
You are, of course, correct about this. This had slipped my mind.
In any event, some amount of internal processing would
obviously be necessary -- I was primarily trying to
distinguish this from subsequent implementations using
compilations to byte-code.
In article <·············@dma.isg.mot.com>
·······@dma.isg.mot.com "Mukesh Prasad" writes:
> The Dragon Book
There are a great many compiler techniques not covered by this
book. It has _nothing_ to say about compiling in an interactive
system, which could explain your confusion. "Interactive" does
_not_ mean "interpreted".
BTW, _which_ one do you mean? Compilers, Principles, Techniques,
and Tools is the later book.
> In general, until your very confident challenge,
> I was very sure from all my reading that
> languages themselves had been categorized as interpreted
> vs compiled in the past. (Hence the reason
> for this thread's name -- Lisp people never
> liked Lisp being called an "interpreted
> language".) But I will look.
I suspect that you've been misled by a very limited source of
information. I'm only an "amateur" compiler writer, in the sense
that none of my compilers can be described as "industrial strength",
and yet I may easily have vastly more info on compiler techniques
than you. On one shelf alone, I have at least 6 books on language
implementation (mainly compilers, but also "interpreters" of several
kinds). Most of the books on the shelf below it include compilers
and interpreters of some kind.
Someday I'll put a list of all these books into my homepage. If I
had such a page (or more likely, set of pages), I could just recommend
that you browse it...
> > This is complete nonsense. One of the interesting features of _all_
> > interpreted implementations of Lisp, from the very first, was that
> > they did not interpret a character string but rather the internal
> > linked-list ("s-expression") representation. See, e.g., "Programming in
> > the Interactive Environment" by Erik Sandewall, Computing Surveys,
>
> You are, of course, correct about this. This had slipped my mind.
How convenient. Well, I'll give you the benefit of my doubt.
However, such mistakes don't help your credibility when making
such claims as yours. Somebody ignorant of Lisp's history might
be more easily forgiven.
> In any event, some amount of internal processing would
> obviously be necessary -- I was primarily trying to
> distinguish this from subsequent implementations using
> compilations to byte-code.
What kind of processing do you mean? Parser macros?
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
Cyber Surfer wrote:
> I suspect that you've been misled by a very limited source of
> information. I'm only an "amateur" compiler writer, in the sense
> that none of my compilers can be described as "industrial strength",
> and yet I may easily have vastly more info on compiler techniques
> than you. On one shelf alone, I have at least 6 books on language
I see. Book counting for resolving the differences?
There has been a mistaken impression that I was
"appealing to authority". If I may continue to borrow the
phraseology of my opponents, this is utter nonsense
and bull-dust.
To debate whether or not "lisp is not an interpreted
language", you must start from some common
usage of the term "interpreted language". Which
was the only reason for my referring to
early usage of the term.
You may refuse to accept the term "interpreted
language" as valid at all, but in that case
you must say so. What my opponents have been
trying to do instead, is to change the definitions
(as well as limit the discussion to some
particular dialect or the other) in the middle
of the argument.
> > You are, of course, correct about this. This had slipped my mind.
>
> How convenient. Well, I'll give you the benefit of my doubt.
> However, such mistakes don't help your credibility when making
> such claims as yours. Somebody ignorant of Lisp's history might
> be more easily forgiven.
There were no mistakes made on my part, as you would have
noticed if you had carefully read through the threads. I
was acknowledging that a valid point was made (showing
certain knowledge of early Lisp implementations), but this
does not necessarily negate anything I had said. (And
actually, did not -- the historical progression
I described is accurate, and reading into S-expressions
was indeed the early way of implementing "interpret
directly from the source".)
In any event, what I was saying was that you cannot
compile "eval" without embedding a language processor
in the compiled executable.
Since you wrote compilers, please do tell us
how you did it -- which will prove that I am wrong.
In fact, the only responses to this I have seen rely on
sleaze instead of valid logic. (Except pointing out
that Basic also sometimes has "eval" -- though I must
admit I fail to see how that leads to the
conclusion that Lisp is not an interpreted language.)
> What kind of processing do you mean? Parser macros?
"Interpreting the source directly" obviously
means that there is a certain amount of processing to
be done to interpret the source every time. Was
this not obvious at all?
In article <·············@dma.isg.mot.com>
·······@dma.isg.mot.com "Mukesh Prasad" writes:
> To debate whether or not "lisp is not an intepreted
> language", you must start from some common
> usage of the term "interpreted language". Which
> was the only reason for my referring to
> early usage of the term.
This is why I strongly recommend that you read Brown's book.
Have you read it?
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
From: Jeffrey Mark Siskind
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <QOBI.96Nov17180837@sullivan>
Actually, here is a piece of code that uses EVAL that would be *very* hard to
compile. (I've written it in Scheme because I don't remember Common Lisp :-)
(define (fifth x) (car (cdr (cdr (cdr (cdr x))))))
(define (set-second! x y) (set-car! (cdr x) y))
(define self
'(let loop ()
(set-second! (fifth self) (read))
(write x)
(loop)))
(eval self)
The catch is that the code that is executed by eval modifies itself as it is
running. (I.e. evaluating the abstract syntax tree mutates the very same
abstract syntax tree.)
Technically, this is not allowed in current Common Lisp. The possibility of
doing this was overlooked in CLtL1 and then disallowed in CLtL2.
This is what is *really* meant by self-modifying code.
Jeff (home page http://www.emba.uvm.edu/~qobi)
P.S. If modifying quoted constants was not disallowed, you can even do the
above example without a need for eval to reference the global environment for
other than builtin procedures.
(define (second x) (car (cdr x)))
(define (fourth x) (car (cdr (cdr (cdr x)))))
(define (fifth x) (car (cdr (cdr (cdr (cdr x))))))
(define (set-second! x y) (set-car! (cdr x) y))
(define self
'(let loop ()
(set-car! (cdr 'x) (read))
(write x)
(loop)))
(set-second! (second (second (fourth self))) (fifth self))
(eval self)
--
Jeff (home page http://www.emba.uvm.edu/~qobi)
From: Erik Naggum
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <3057487468970283@naggum.no>
* Jeffrey Mark Siskind
| (define self
| '(let loop ()
| (set-second! (fifth self) (read))
| (write x)
| (loop)))
:
| P.S. If modifying quoted constants was not disallowed, you can even do the
| above example without a need for eval to reference the global environment for
| other than builtin procedures.
would this be allowed?
(define self
(list 'let 'loop '()
'(set-second! (fifth self) (read))
'(write x)
'(loop)))
the list is no longer a constant, quoted or otherwise.
#\Erik
--
Please address private replies to "erik". Mail to "nobody" is discarded.
Erik Naggum wrote:
* Jeffrey Mark Siskind
| (define self
| '(let loop ()
| (set-second! (fifth self) (read))
| (write x)
| (loop)))
> would this be allowed?
> (define self
> (list 'let 'loop '()
> '(set-second! (fifth self) (read))
> '(write x)
> '(loop)))
Couldn't you just change the quote character to
a backquote, to make it a list cons'd at run-time
rather than a quoted constant?
Or is this old technology, not allowed any more?
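Something like this, presumably (untested -- it is just the earlier example
with the quote swapped for a backquote):

    (define self
      `(let loop ()
         (set-second! (fifth self) (read))
         (write x)
         (loop)))

Though with no commas inside, an implementation may well treat the
backquoted form as a constant anyway, so it doesn't necessarily buy
anything over the plain quote.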
Bull Horse <···@earthlink.net> wrote:
>Chris wrote:
>>
>> Richard A. O'Keefe <··@goanna.cs.rmit.edu.au> wrote in the article
>> <············@goanna.cs.rmit.edu.au>...
>> > >In article <··········@nntp.seflin.lib.fl.us>,
>> > >Ralph Silverman <········@bcfreenet.seflin.lib.fl.us> wrote:
>> > >> after all,
>> > >> when a program actually has been
>> > >> compiled and linked
>> > >> successfully,
>> > >> it runs from binary ...
>> > >> NOT A SCRIPT!!!!!!!!!!!!!!!!!!!!!!!!!!
>> > >> ^^^^^^^^^^^^
>> >
>>
>> I believe compiling means translating from one language to another, not
>> necessarily into native binary code.
>>
>> Chris
>Uh, I think that compiling means to translate anything into machine code.
>Interpreted means to map a string value to the corresponding machine
>code instruction.
Apologies if I have got hold of the wrong end of the stick here, as I
seem to have caught the tail end of this thread, but I thought the
difference between compiled and interpreted was that a compiled
program was all converted into machine language prior to execution,
whilst an interpreted one was converted as required, typically one
line at a time, and at the time of execution. Whether compiled or
interpreted, the code has to be converted to a level at which the
machine can run it. It is the time of this conversion that makes the
difference. This causes the difference in speed. A compiled program
need make no conversion (to the native code) whilst running, whereas
an interpreted program must be converted to native code as it is
running.
Hope this makes sense, and isn't out of context ;-)
Pete Heywood
In article <··········@lira.i-way.co.uk>,
Pete Heywood <········@i-way.co.uk> wrote:
>Bull Horse <···@earthlink.net> wrote:
>>Chris wrote:
(snip)
>>> I believe compiling means translating from one language to another, not
>>> necessarily into native binary code.
(snip)
>>Uh, I think that compiling means to translate anything into machine code.
>>Interpreted means to map a string value to the corresponding machine
>>code instruction.
>
>Apologies if I have got hold of the wrong end of the stick here, as I
>seem to have caught the tail end of this thread, but I thought the
>difference between compiled and interpreted was that a compiled
>program was all converted into machine language prior to execution,
>whilst an interpreted one was converted as required, typically one
>line at a time, and at the time of execution. Whether compiled or
>interpreted, the code has to be converted to a level at which the
>machine can run it. It is the time of this conversion that make the
>difference. This causes the difference in speed. A compiled program
(snip)
It might help to explicitly point out that you're sorting out
confusion here between compiled languages and compilers in
general. Before being executed, a program written in a compiled
language eventually has to be translated, however indirectly, into
machine language, so that it doesn't need to be interpreted to be
executed. On the other hand, a compiler translates one language into
another, neither necessarily being machine language.
So, programs in compiled languages have to end up being compiled into
machine language, but a general compiler translates between two
languages, so in doing the compiling for a compiled language you may
use several compilers in series to achieve your final
machine-executable file.
Follow-ups set to comp.lang.misc: this thread needs moving.
-- Mark (who should probably get an award for using the substring
'compil' so many times in one sentence)
PASCAL is a 70's language. Developed in the early 70's, I think 73...
From: ········@wat.hookup.net
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <55k58f$9js@nic.wat.hookup.net>
In <·············@earthlink.net>, Bull Horse <···@earthlink.net> writes:
>PASCAL is a 70's language. Developed in the early 70's, I think 73...
Definitely much earlier. It must have been 68 or 69
Hartmann Schaffer
Bull Horse wrote:
> PASCAL is a 70's language. Developed in the early 70's, I think 73...
You're right. The preface of the "PASCAL User Manual and Report" states:
A preliminary version of the programming language Pascal was
drafted in 1968. It followed in its spirit the Algol-60 and
Algol-W line of languages. After an extensive development phase,
a first compiler became operational in 1970, and publication
followed a year later (see References 1 and 8, p.104). The
growing interest in the development of compilers for other
computers called for a consolidation of Pascal, and two years of
experience in the use of the language dictated a few revisions.
This led in 1973 to the publication of a Revised Report and a
definition of a language representation in terms of the ISO
characters set.
Yes, Niklaus used a typewriter (fixed-space) font... ;)
--
······@acm.org, ·····@live.robin.de, ·····@blues.sub.de
Kosta Kostis, Talstr. 25, D-63322 Rödermark, Germany
http://ourworld.compuserve.com/homepages/kosta/
In article <··········@news.acns.nwu.edu>,
······@ils.nwu.edu (Kenneth D. Forbus) writes:
> In article <·············@symbiosNOJUNK.com>,
> Dave Newton <············@symbiosNOJUNK.com> wrote:
>
> At ILS and other research-oriented places I know about, we are having
> serious problems finding great Lisp programmers. My friends in
> research-oriented industry organizations tell me the same. We post ads
> regularly on the net, and the number of responses we get isn't huge. So from
> our perspective it is more of a supply problem than a demand problem, despite
> what the industry as a whole might look like.
I have to disagree. I have my news reader set up to look for
postings in misc.jobs.offered that mention lisp. It finds one every
other week or so. (It found ILS's the other day. My resume should be
in the Email this afternoon.) There's literally thousands a day for
C/C++ "programmers". Other than a bunch of copies of G2's openings
showing up recently, the pickings are pretty slim. I've been watching
for about six months now, if that matters. (18 months ago when I was
looking for a job, there weren't many lisp openings being posted then
either.)
Mike McDonald
·······@engr.sgi.com
What we pay at ILS is competitive with other university programming jobs.
We get plenty of C and C++ programmer applicants, but very few Lisp
applicants. Sadly, we also find it is vastly easier to have Lisp folks become
great C++ programmers than for C++ programmers to learn to be great Lisp
programmers. This is a real problem, given the drastically lower productivity
of C++ for the kinds of work we do compared to Lisp.
My long-term strategy is to change the culture by producing students who can
program in C/C++ when they need to, but also understand Lisp (both Common Lisp
and Scheme). As computers continue to grow cheaper, software complexity
continues to rise, and people remain relatively expensive, the number of
situations in which it makes sense to use some version of Lisp is growing.
But it won't be used if the pool of programmers being produced remains
ignorant of everything but the Industry Language du Jour.
In article <··············@laura.ilog.com>,
Harley Davis <·····@laura.ilog.com> wrote:
>······@ils.nwu.edu (Kenneth D. Forbus) writes:
>
>> In article <·············@symbiosNOJUNK.com>,
>> Dave Newton <············@symbiosNOJUNK.com> wrote:
>>
>> At ILS and other research-oriented places I know about, we are
>> having serious problems finding great Lisp programmers. My friends
>> in research-oriented industry organizations tell me the same. We
>> post ads regularly on the net, and the number of responses we get
>> isn't huge. So from our perspective it is more of a supply problem
>> than a demand problem, despite what the industry as a whole might
>> look like.
>
>How much do you pay your candidates as a research institute? It is
>pretty reasonable to suppose that there are some great Lisp
>programmers out there who choose for financial reasons to exercise
>their talents in other languages.
>
>-- Harley Davis
>
>-------------------------------------------------------------------
>Harley Davis net: ·····@ilog.com
>Ilog, Inc. tel: (415) 944-7130
>1901 Landings Dr. fax: (415) 390-0946
>Mountain View, CA, 94043 url: http://www.ilog.com/
>
In article <··········@news.acns.nwu.edu>,
Kenneth D. Forbus <······@ils.nwu.edu> wrote:
>What we pay at ILS is competitive with other university programming jobs.
>We get plenty of C and C++ programmer applicants, but very few Lisp
>applicants. Sadly, we also find it is vastly easier to have Lisp folks become
>great C++ programmers than for C++ programmers to learn to be great Lisp
>programmers. This is a real problem, given the drastically lower productivity
> of C++ for the kinds of work we do compared to Lisp.
What incentive does a C++ programmer have for learning Lisp? Lots of C++ jobs
offered and Lisp job payment is only competitive. If you really get such a
dramatically higher productivity, shouldn't that also be reflected in the
Lisp programmers' wages? I mean if a Lisp job would pay 50% more than a
C++ job, wouldn't that attract a lot more interest?
just my (currently NZ$) 2 cents
cheers Bernhard
--
-----------------------------------------------------------------------------
Bernhard Pfahringer
···············@cs.waikato.ac.nz http://www.ai.univie.ac.at/~bernhard (still)
In article <··········@saturn.cs.waikato.ac.nz>,
Bernhard Pfahringer <········@saturn.cs.waikato.ac.nz> wrote:
>In article <··········@news.acns.nwu.edu>,
>Kenneth D. Forbus <······@ils.nwu.edu> wrote:
>>What we pay at ILS is competitive with other university programming jobs.
>>We get plenty of C and C++ programmer applicants, but very few Lisp
>>applicants. Sadly, we also find it is vastly easier to have Lisp folks become
>>great C++ programmers than for C++ programmers to learn to be great Lisp
>>programmers. This is a real problem, given the drastically lower productivity
>> of C++ for the kinds of work we do compared to Lisp.
>
>What incentive does a C++ programmer have for learning Lisp?
Pleasure.
--
== Seth Tisue <·······@nwu.edu> http://www.cs.nwu.edu/~tisue/
Bernhard Pfahringer <········@saturn.cs.waikato.ac.nz> wrote in article
<··········@saturn.cs.waikato.ac.nz>...
> What incentive does a C++ programmer have for learning Lisp? Lots of C++ jobs
> offered and Lisp job payment is only competitive. If you really get such a
> dramatically higher productivity, shouldn't that also be reflected in the
> Lisp programmers' wages? I mean if a Lisp job would pay 50% more than a
> C++ job, wouldn't that attract a lot more interest?
Although I'm a raw beginner learning Lisp, as a long-time C/C++ programmer
I'd like to respond to this. But first I do need to point out that my
reasons aren't related to the job market, and so perhaps I am atypical.
Certainly, the reasons you state are valid and probably applicable in the
vast majority of cases. Here is a slightly different perspective, though,
from my point of view.
I've been writing decompilers for the last 5 or 6 years. I write them and market
them myself. I'm self-employed and I work at home. I have complete freedom
in my choice of tools (well, maybe I have some financial bounds <g>). I
started with C some 10 years ago or more, and have been using C++ for
almost as long. When I started in this particular line of work, it didn't
occur to me to NOT use C(++) -- it was simply the language "real
programmers" used (that *is* a humorous remark, BTW), and it was what I was
strongest in.
But C++ development is getting to be a real PITA. Despite the fact that the
machine on my desk is now several times faster and more powerful than what
I was using in 1990, the edit-compile-link cycle is just too slow. You know
it's too slow when you find yourself hesitating to make the change you know
you need to make because of the time the recompile will take.
I think (hope?) that making the move to Lisp will increase my productivity.
A decompiler is complex enough that often there is a lot of
exploratory-type work done before a workable solution is found. I've done
things like work on developing a certain approach for a day or two only to
find that it won't work and have to throw it out. No problem with that, but
C++ does not lend itself to such work easily.
What about the customers? Will Lisp be fast enough? Small enough? That's
unimportant -- a decompiler is not something people run all day every day.
Most of my customers are simply thrilled to be able to recover their source
code rather than type it all in again from scratch. Whether it takes 30 minutes
instead of 1 minute is almost irrelevant. Same for whether it takes up 1MB
or 10MB on their disk. BTW, I just made up these numbers -- I don't think
Lisp will be 10 times slower or bigger than C++ or anything like that,
especially with the Lisp systems that are now available. (I think it's an
interesting observation that I even bring these points up -- I think I've
been programming in C too long, worrying always about having to have the
smallest, fastest possible code at any expense.)
Anyway, to wrap this up, I think I have a good incentive to learn Lisp and
use it instead of C++. And I realize that my situation is not typical, so
I'm not trying to challenge your statement, which, again, I think is
correct. Perhaps it's unfortunate for the Lisp community that it is this
way in the job market. But at least I have the freedom to use what I want
to use.
I think I'm really going to like Lisp.
-- Dave Sieber
·······@terminal-impact.com
http://www.terminal-impact.com
······@ils.nwu.edu (Kenneth D. Forbus) writes:
> In article <·············@symbiosNOJUNK.com>,
> Dave Newton <············@symbiosNOJUNK.com> wrote:
>
> At ILS and other research-oriented places I know about, we are
> having serious problems finding great Lisp programmers. My friends
> in research-oriented industry organizations tell me the same. We
> post ads regularly on the net, and the number of responses we get
> isn't huge. So from our perspective it is more of a supply problem
> than a demand problem, despite what the industry as a whole might
> look like.
How much do you pay your candidates as a research institute? It is
pretty reasonable to suppose that there are some great Lisp
programmers out there who choose for financial reasons to exercise
their talents in other languages.
-- Harley Davis
-------------------------------------------------------------------
Harley Davis net: ·····@ilog.com
Ilog, Inc. tel: (415) 944-7130
1901 Landings Dr. fax: (415) 390-0946
Mountain View, CA, 94043 url: http://www.ilog.com/
So it sounds like the pools are small on both ends, which of course
makes it harder to find a match. Sigh.
In article <··········@fido.asd.sgi.com>,
·······@engr.sgi.com (Mike McDonald) wrote:
> I have to disagree. I have my news reader set up to look for
>postings in misc.jobs.offered that mention lisp. It finds one every
>other week or so. (It found ILS's the other day. My resume should be
>in the Email this afternoon.) There's literally thousands a day for
>C/C++ "programmers". Other than a bunch of copies of G2's openings
>showing up recently, the pickings are pretty slim. I've been watching
>for about six months now, if that matters. (18 months ago when I was
>looking for a job, there weren't many lisp openings being posted then
>either.)
>
> Mike McDonald
> ·······@engr.sgi.com
Liam Healy <··········@nrl.navy.mil> writes:
>
> This is not the first time this assumption about LISP has been made.
> I think many people make the mistaken assumption that LISP is
> interpreted because it's interactive and dynamically linked, and
> confuse the two concepts. They look at the "batch"
> (edit-compile-link-run) model of conventional languages like C or
> Fortran and identify that with compilation. After all, what language
> other than LISP is interactive and compiled? (Forth is the only one I
> can think of.)
Some versions of ML are interactive and compiled.
>
> All the more reason that LISP is an essential component of a
> programming education.
Oh well, I always thought we should start with ML in the first year,
after all we're supposed to be teaching them to program not how to
use the latest industry fad.
>
> --
> Liam Healy
> ··········@nrl.navy.mil
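To make the "interactive and compiled" point above concrete, here is a
minimal Common Lisp sketch (the function name is invented for
illustration); COMPILE and DISASSEMBLE are standard Common Lisp, so you
can watch compiled code appear without ever leaving the listener:

  ;; Define a function, compile it in the running image, call it, and
  ;; inspect the result -- no separate edit-compile-link-run cycle.
  (defun scale (x)
    (declare (type double-float x))
    (* 2.0d0 x))

  (compile 'scale)       ; compiles the definition in place
  (scale 3.5d0)          ; => 7.0d0, usable immediately
  (disassemble 'scale)   ; prints the code the compiler produced

The compiler is simply part of the running image, which is exactly what
tends to get mistaken for interpretation.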
From: Larry Hunter
Subject: Symbols and Mind (was Re: standardization (was Re: Lisp versus C++ for AI. software))
Date:
Message-ID: <rbbudtbxbr.fsf_-_@work.nlm.nih.gov>
I haven't posted to these lists in a long time, but this is too much to
take:
From: George Van Treeck <······@sybase.com>
Symbols, linked lists, consing, etc. are a wasted effort by people who
know little about neuropsych and figure they could use introspection to
deduce brain functions.
George,
There are plenty of us who know a great deal about neuropsychology--and
various other aspects of cognition--who find symbolic programming useful for
building models and testing theories. For those who find it perspicacious,
LISP can be a fine language for this sort of cognitive research.
As just one of many examples, you might want to look at the qualitative
neural models that James Olds, Jeff Krichmar and I will present at the
Neuroscience meeting next month. It's an order of magnitude faster than
GENESIS, the state-of-the-art numerical system, and provides quite detailed
(and accurate) predictions about complex neural systems. There's also a
longer paper on our qualitative neuron submitted to Biological Cybernetics.
In cognitive modelling, as with the rest of computational science, the
programming language you use is unlikely to determine the quality of your
results.
--
Lawrence Hunter, PhD.
National Library of Medicine phone: +1 (301) 496-9303
Bldg. 38A, 9th fl, MS-54 fax: +1 (301) 496-0673
Bethesda. MD 20894 USA email: ······@nlm.nih.gov
WATCH THOSE NEWSGROUPS LINES!!!
This thread is now in:
comp.ai
comp.ai.genetic
comp.ai.neural-nets
comp.lang.lisp
comp.lang.c++
comp.os.msdos.programmer
comp.lang.asm.x86
comp.unix.programmer
comp.ai.philosophy
When following up, exclude as many of these as possible.
It seems Mr. Silverman is up to his tricks again.
--
Andrew Gierth (·······@microlise.co.uk)
"Ceterum censeo Microsoftam delendam esse" - Alain Knaff in nanam
Shannon Lee <········@rain.com> writes:
> Jeff Shrager wrote:
>
> > Oddly, though, the ones that are good also seem to know lisp.
>
> Ever heard of the Sapir-Whorf hypothesis?
No. Do tell.
- Marty
Lisp Resources: <http://www.apl.jhu.edu/~hall/lisp.html>
From: Larry Hunter
Subject: Symbolic computation in NNs and GAs (was Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <rbbudnhyiy.fsf_-_@work.nlm.nih.gov>
George Van Treeck writes:
Please make all continuing responses on this topic to this note. That way
responses won't be cross-posted to comp.ai.genetic and comp.ai.neuralnets
(the only two comp newsgroups I read). These two areas do not use
symbolic processing at all. They are computationally intensive, using
primarily C and C++. In other words, take your smelly dead horse
discussion elsewhere!
Mr. Van Treeck,
Perhaps you missed the note I posted last week in this thread describing a
biologically accurate neural network simulator that uses symbolic rather
than numeric calculation (QRN), and is an order of magnitude faster than and
just as accurate as the currently most widely used system (GENESIS).
Perhaps you also fail to appreciate the contribution of genetic programming
which uses symbols in its "genomes" (see, e.g. Koza's "Genetic Programming")
to evolutionary computation.
Perhaps the name-calling to which you resort is due to the incorrectness of
your argument?
Larry
--
Lawrence Hunter, PhD.
National Library of Medicine phone: +1 (301) 496-9303
Bldg. 38A, 9th fl, MS-54 fax: +1 (301) 496-0673
Bethesda. MD 20894 USA email: ······@nlm.nih.gov
From: James A Hammerton
Subject: Re: Symbolic computation in NNs and GAs (was Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <555g9n$5qu@percy.cs.bham.ac.uk>
George Van Treeck writes:
: Please make all continuing responses on this topic to this note. That way
: responses won't be cross-posted to comp.ai.genetic and comp.ai.neuralnets
: (the only two comp newsgroups I read). These two areas do not use
: symbolic processing at all. They are computationally intensive, using
There is an entire subfield in connectionism (i.e. neural nets) that
is devoted to methods of performing symbolic computation with neural
nets. See my Connectionist NLP page, and the accompanying
bibliography on Connectionist NLP and Knowledge Representation.
James
--
James Hammerton, PhD Student, School of Computer Science,
University of Birmingham | Email: ·············@cs.bham.ac.uk
WWW Home Page: http://www.cs.bham.ac.uk/~jah
Connectionist NLP WWW Page: http://www.cs.bham.ac.uk/~jah/CNLP/cnlp.html
From: Mike McDonald
Subject: Re: Lisp is not an interpreted language
Date:
Message-ID: <55m3kt$1s1@fido.asd.sgi.com>
In article <···························@gaijin>,
"Chris" <······@infonie.fr> writes:
>>Uh I think that Compiling means to translate anything into machine code.
>>Interpreted means to identify a string value with an according machine
>>code instruction.
> ----------
> I don't think so. The compiler usually translate to assembly, then the
> assembler translates to machine code. Sometimes the assembler is built in,
> sometimes it may translate in one pass.
>
> What about the Java compiler ? Is the code in machine language ? Isn't it a
> compiler ?
>
> Chris
Trying to distinguish between "compiled" and "interpreted" seems
like a complete waste of time, to me anyway. After all, everything is
interpreted eventually anyway. That's what a CPU is, after all. Just
another interpreter.
Mike McDonald
·······@engr.sgi.com
In article <··········@fido.asd.sgi.com>
·······@engr.sgi.com "Mike McDonald" writes:
> Trying to distinguish between "compiled" and "interpreted" seems
> like a complete waste of time, to me anyway. After all, everything is
> interpreted eventually anyway. That's what a CPU is, after all. Just
> another interpreter.
Books have been written about this. My favourite is "Writing
Interactive Compilers and Interpreters", P.J. Brown, ISBN 0
471 27609 X, ISBN 0471 100722 pbk. John Wiley & Sons Ltd,
but it isn't unique. For example, there's "Structure and
Interpretation of Computer Programs", Second Edition, by Harold
Abelson and Gerald Jay Sussman with Julie Sussman.
--
<URL:http://www.enrapture.com/cybes/> You can never browse enough
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
From: George Van Treeck
Subject: Re: Symbolic computation in NNs and GAs (was Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <328AA48A.CF6@sybase.com>
Larry Hunter wrote:
>
> George Van Treeck writes:
>
> Please make all continuing responses on this topic to this note. That way
> responses won't be cross-posted to comp.ai.genetic and comp.ai.neuralnets
> (the only two comp newsgroups I read). These two areas do not use
> symbolic processing at all. They are computationally intensive, using
> primarilly C and C++. In other words, take your smelly dead horse
> discussion elsewhere!
>
> Mr. Van Treeck,
>
> Perhaps you missed the note I posted last week in this thread describing a
> biologically accurage neural network simulator that uses symbolic rather
> than numeric calculation (QRN), and is an order of magnitude faster than and
> just as accurate as the currently most widely used system (GENESIS).
Note the discussion of the previous cross-posted notes was C++ vs.
Lisp -- not symbolic vs. numeric programming. You could ALSO implement
the same biological algorithms using symbolic processing (pattern
matching) in C++. It would take longer to write it and debug it in
C++ than in Lisp. If you are an expert C/C++ programmer and
understand code generation, register allocation and memory allocation
methods, the C/C++ version of the same algorithm will run faster.
The distinction is research/prototyping (where Lisp is better)
versus production (where C/C++ is generally better).
In "AI", in contrast to neurophysiology, speed is MUCH more
important than simply verifying a biological hypothesis. Thus,
mathematicians/statisticians would probably take the biologically
accurate model from neurophysiology you developed in Lisp to
analyze it to understand the principles in a more formally specified
manner -- abstract your work into some equations. And a computer
scientist is best at writing those equations to squeeze most NIPS
(neural instructions per second) out of the equations. And if
performance is not good enough with C/C++ and there is sufficient
market demand -- implement it in hardware (e.g., in neural chips from
Synoptics, Intel, NEC, etc.) or custom ASICs, which require the
expertise of electrical engineers.
> Perhaps you also fail to appreciate the contribution of genetic programming
> which uses symbols in its "genomes" (see, e.g. Koza's "Genetic Programming")
> to evolutionary computation.
Well, if you read work from Holland and others that predates Koza's
book by about 20 years, you would be unimpressed with Koza's book.
All he did was apply well known methods to the problem of program
generation. Symbolic, integer, floating point and combinations have
been experimented with for many years -- and that doesn't require Lisp.
Further "symbolic" modelling of processes has been done 20 or more
years in language languages designed specifically for modeling and
simulation like SIMULA (after which C++ is modeled -- have you
read the C++ ARM?), CSPP, etc. All the genetic algorithm examples,
in Goldberg's book are in Pascal. When you look for cookbooks for
neural networks in book stores the vast majority focus on C/C++
recipies.
Further, Koza's "genetic programming" is less tightly biologically
than the "genetic algorithms" used Holland, Goldberg and most others
in the comp.ai.genetic newsgroup. You will see more discussion on
genetic programming ala Koza in comp.ai.alife. You will see more
more discussion of genetic algorithms ala Goldberg in comp.ai.genetic.
> Perhaps the name-calling to which you resort is due to the incorrectness of
> your argument?
You think it's "name-calling." I think it's called beating a dead
horse when it is clear two camps are not going to convince each other
of which is the "best" language. Look in the archives of comp.lang,
etc., and you will find the "my language is better than yours for AI and
..." going back many years. This is evidence of beating a horse that
has died a long time ago.
It's not an attack on you and has nothing to do with correctness of
my arguments. It is a case of time being wasted if there is no
convincing to be done.
I think neuropsychologists like yourself are THE important people
for advancing the art of neural networks -- more so than statisticians,
computer scientists, etc. Although these others have important cross-
disciplinary contributions to make. When you're arguing choice
computer languages, you're not in an area of your expertise... You
might want to defer to software engineers on that topic. And if
you look at comp.lang you will see this discussion has raged for
many years. It's as technically deep as neurophysiology with
more religion than facts. :-)
My opinion:
For what you are doing, Lisp is fine. For "production" genetic
algorithm or neural network code sold for real-world applications,
it would generally be better to develop in a language like C/C++.
-George
From: George Van Treeck
Subject: Re: Lisp versus C++ for AI. software
Date:
Message-ID: <328A6CEA.6EE8@sybase.com>
Mike McDonald wrote:
>
> In article <··········@news.acns.nwu.edu>,
> ······@ils.nwu.edu (Kenneth D. Forbus) writes:
> > In article <·············@symbiosNOJUNK.com>,
> > Dave Newton <············@symbiosNOJUNK.com> wrote:
> >
> > At ILS and other research-oriented places I know about, we are having
> > serious problems finding great Lisp programmers. My friends in
> > research-oriented industry organizations tell me the same. We post ads
> > regularly on the net, and the number of responses we get isn't huge. So from
> > our perspective it is more of a supply problem than a demand problem, despite
> > what the industry as a whole might look like.
>
> I have to disagree. I have my news reader set up to look for
> postings in misc.jobs.offered that mention lisp. It finds one every
> other week or so. (It found ILS's the other day. My resume should be
> in the Email this afternoon.) There's literally thousands a day for
> C/C++ "programmers". Other than a bunch of copies of G2's openings
> showing up recently, the pickings are pretty slim. I've been watching
> for about six months now, if that matters. (18 months ago when I was
> looking for a job, there weren't many lisp openings being posted then
> either.)
Mike is right.
Lisp is a great language for research (prototyping) because you can
focus more on the problem and less on the bits and bytes coding
of how to tell the computer to solve the problem. However, most
of the world has to get "production" code out on the first pass.
And that requires more careful tuning to bits and bytes level.
For example, specifying data types and sizes for a relational
DBMS. More specifically with respect to the two newsgroups I
read (which you're spamming/polluting with this irrelevant
discussion), comp.ai.genetic and comp.ai.neural-nets:
In genetic algorithms you get much higher performance if you can
specify exactly how many bits represent a gene. Each bit you
trim off cuts the search space in half. Lisp does not address this
at all. Lisp's polymorphic arithmetic and run-time type checking
slow this evaluation speed up a lot. I run GAs that run literally
for weeks at a time. If I ran them in Lisp, it would take several times
longer. I spend many hours thinking of little tweeks to shorten
the run-times by a few hours. Those low-level tweeks can't be
done in Lisp.
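For concreteness, bit-exact gene fields can at least be expressed
directly in Common Lisp with byte specifiers (a minimal sketch; the
field widths and names are invented for illustration, and no claim
about relative speed is intended):

  ;; A genome packed into one unsigned integer: gene 0 occupies bits 0-2,
  ;; gene 1 occupies bits 3-7 (widths chosen only for illustration).
  (defconstant +gene-0+ (byte 3 0))
  (defconstant +gene-1+ (byte 5 3))

  (defun gene-0 (genome) (ldb +gene-0+ genome))
  (defun set-gene-1 (genome value) (dpb value +gene-1+ genome))

  (defun mutate-bit (genome i)
    "Flip bit I of GENOME, returning a new genome."
    (logxor genome (ash 1 i)))

  ;; (mutate-bit #b00000101 1) => #b00000111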
In neural networks, Lisp might be used to prototype some new
learning algorithm or connection topology. However, when you
want to actually apply the learning algorithm on a large net,
you need to carefully optimize the code. On fully interconnected
neural networks, e.g., backprop, each neuron increases the compute
time by the cube (order N**3). Thus, when N=10000, the compute
time can be quite long. This is why one must go even more
low-level than C/C++ and implement the networks directly in
neural hardware chips (process in parallel). For example,
real-time, high-resolution vision requires nets implemented
in hardware.
There aren't many jobs in Lisp because the number
of application areas where Lisp is the best tool to use
is fairly small (mostly research) -- compared to the kinds of
applications that can be efficiently implemented in C/C++.
This may change as processing speed improves to the point
where the prototype in Lisp can also serve as the production
code. This has already happened in the MIS world, where
4GLs are a big growth market for database applications. However,
the more natural language-like syntax of MIS 4GLs will make
them preferable to other declarative languages like Lisp.
The trend of using more declarative languages like Lisp for production code
hasn't happened yet in application areas that are
more computationally intensive.
-George
George Van Treeck <······@sybase.com> writes:
>There aren't many jobs in Lisp because the number
>of application areas where Lisp is the best tool to use,
>is fairly small (mostly research) -- compared to the kinds of
>applications that can be efficiently implemented in C/C++.
>This may change as processing speed improves to the point
>where the prototype in Lisp can also serve as the production
>code. This has already happened in the MIS world, where
>4GLs are big growth market for database applications. However,
>the more natural language-like syntax of MIS 4GLs will make
>them preferable to other declarative languages like Lisp.
>Using more declarative languages like Lisp for production code
>trend hasn't happened yet in application areas that are
>more computationally intensive.
>-George
I prefer to turn this argument on its side. Instead of asking what
is the most efficient choice now, can we ask what would be the most
efficient choice 5 years from now?
Why is SQL a successful language? In large part because it has a
very clean semantics that allows an optimizing compiler to automate
the mapping from the declarative language to the procedural code that
worries about the bits and bytes.
Compiler technology has done at least as much for lisp execution
speed as hardware technology.
How can technology provide order of magnitude improvements in
programmer productivity? Well, code reuse is a popular buzzword. But
if my class library depends on the number of bits in a word, it is
harder to port to new hardware. And if the programmer is using a
language that doesn't fully support multiple inheritance, then the
class designer is restricted to designing classes that aggregate
behaviors, instead of isolating the desired behavior and allowing the
programmer to pick and choose the appropriate parts.
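As a concrete (if contrived) sketch of that pick-and-choose style,
here is a minimal CLOS example with full multiple inheritance; the
class and slot names are invented for illustration:

  ;; Each mixin isolates exactly one behavior...
  (defclass persistent-mixin ()
    ((storage-path :initarg :storage-path :accessor storage-path)))

  (defclass observable-mixin ()
    ((observers :initform '() :accessor observers)))

  ;; ...and a concrete class inherits just the behaviors it needs.
  (defclass simulation-model (persistent-mixin observable-mixin)
    ((state :initarg :state :accessor model-state)))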
I think that what the world really needs is better compiler
technology. SQL is evidence of that. The evolution of lisp is
evidence of that. Advanced compiler technology depends on a clean
semantic algebra that abstracts out the bits and bytes so that the
expert that optimizes the bits is a program, not a programmer.
Gerard
From: Howard R. Stearns
Subject: Performance in GA (was: Lisp versus C++ for AI. software)
Date:
Message-ID: <328B393A.1B37ADEA@elwoodcorp.com>
George Van Treeck wrote:
>
> In genetic algorithms you get much higher performance if you can
> specify exactly how many bits represents a gene. Each bit you
> trim off cuts the search space in half. Lisp does not address this
> at all. Lisp's polymorphic arithmetic and run-time type checking
> slow this evalutation speed up a lot. I run GAs that run literally
> for weeks at time. If I ran them in Lisp, it would take several times
> longer. I spend many hours thinking of little tweeks to shorten
> the run-times by a few hours. Those low-level tweeks can't be
> done in Lisp.
I confess I am mostly ignorant of genetic programs, but I hope you will
permit me to ask a naive question anyway:
What is it about the programming that requires polymorphic arithmetic
and run-time type checking when programmed in Lisp, but not when
programmed in other languages?
For example, in Common Lisp, one can declare variables, arguments and
return values to be very precise types: i.e. not just "int" but specific
ranges of integer, or signed or unsigned bytes of a specific number of
bits. Good compilers usually use this information to produce the same
machine instructions one would use if programming "by hand" in C.
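For instance, a minimal sketch of what such declarations look like
(the function itself is invented for illustration; with declarations
like these a good Common Lisp compiler can open-code the loop as plain
fixnum/byte arithmetic):

  (defun sum-bytes (v)
    "Sum a vector of 8-bit unsigned bytes into a 32-bit running total."
    (declare (type (simple-array (unsigned-byte 8) (*)) v)
             (optimize (speed 3) (safety 0)))
    (let ((sum 0))
      (declare (type (unsigned-byte 32) sum))
      (dotimes (i (length v) sum)
        (setf sum (logand #xffffffff (+ sum (aref v i)))))))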
Inlined code, vectors, and dynamic-extent and other declarations should
also be usable to advantage. It is my understanding that Thinking
Machines' supercomputers (programmed in Lisp) use these techniques.
What am I not seeing? What sort of "low-level tweks" are you using?
From: George Van Treeck
Subject: Re: Performance in GA (was: Lisp versus C++ for AI. software)
Date:
Message-ID: <328B92CE.4F55@sybase.com>
Howard R. Stearns wrote:
> I confess I am mostly ignorant of genetic programs, but I hope you will
> permit me to ask a naive question anyway:
>
> What is it about the programming that requires polymorphic arithmetic
> and run-time type checking when programmed in Lisp, but not when
> programmed in other languages.
>
> For example, in Common Lisp, one can declare variables, arguments and
> return values to be very precise types: i.e. not just "int" but specific
> ranges of integer, or signed or unsigned bytes of a specific number of
> bits. Good compilers usually use this information to produce the same
> machine instructions one would use if programming "by hand" in C.
>
> Inlined code, vectors, and dynamic-extent and other declarations should
> also be usable to advantage. It is my understanding that Thinking
> Machines' supercomputers (programmed in Lisp) use these techniques.
>
> What am I not seeing? What sort of "low-level tweks" are you using?
In Lisp, each piece of data has some tag bits associated with it.
This allows Lisp to determine the type at run-time and automatically
perform type conversions on demand, e.g., manipulate a number as a
string. It's part of what makes Lisp a "symbolic" processing
language.
To gain performance, Thinking Machines put some of the tag
handling in hardware. However, the market for Lisp specific
computers is now gone. Didn't they go out of business a long
time ago? "Generic" CPUs don't have tag specific handling for
Lisp, thus there is run-time overhead to handle the tags.
In Lisp, you can specify the level of optimization on a segment
of code, and it can eliminate a lot of the run-time type checking.
But, this optimization tends to vary in amount with Lisp compiler
vendor.
In C, if you want to remove the overhead of converting an int to
a float in mixed arithmetic, you define the int as a float in the
first place. In Lisp, 1.0 is generally stored as an int and
promoted at run-time to a float resulting in slower code. Some
Lisp compilers might be smart enough not to do this. C/C++
guarantees you can do this via explicit statement.
An example of C/C++ tweek is keeping a pointer to an array in a
register rather than using an absolute address or stack address
(the CPU does not have to load the address along with the
instruction because the location is already in a register -- fewer
loads and tighter code for fewer cache misses). You can
increment the register pointer as you move through the array
more quickly as well. This is a standard part of C/C++. FORTRAN
compilers are really good at this because they know the location
of the array is static and automatically keep the location
in registers. In C/C++ you give it a hint by using a pointer to
an array and declaring it "register". It's been a long time since
I used Lisp, but those kinds of optimizations were not used then,
and I suspect whether they exist today depends on the particular
Lisp compiler vendor.
Code generation isn't everything. Particularly as CPUs get faster,
more "prototype" Lisp code will be perfectly suitable as production
code. Over time, higher level languages like Lisp, Prolog, etc.
will certainly push C/C++ into ever smaller niches. But it will
happen very slowly. Many of us will have retired or died by then.
-George
In article <·············@sybase.com>, George Van Treeck
<······@sybase.com> wrote:
> In Lisp, each piece of data has some tag bits associated with it.
> This allows Lisp to determine the type at run-time and automatically
> perform type conversions on demand, e.g., manipulate a number as a
> string. It's part of what makes Lisp a "symbolic" processing
> language.
Type conversion creates new data. You can't manipulate
a number as a string in Lisp. You have to create
a string from the number.
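A two-line illustration in standard Common Lisp (the values are
arbitrary):

  (write-to-string 42)   ; => "42"  -- a freshly allocated string
  (parse-integer "42")   ; => 42    -- a freshly created integer

Neither call changes its argument; there is no in-place
reinterpretation of the same bits.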
> To gain performance, Thinking Machines put some of the tag
> handling in hardware. However, the market for Lisp specific
> computers is now gone. Didn't they go out of business a long
> time ago? "Generic" CPUs don't have tag specific handling for
> Lisp, thus there is run-time overhead to handle the tags.
This is mostly true. A 64 bit processor helps a lot.
Symbolics Ivories were 40 Bits data+tags.
> In Lisp, you can specify the level of optimization on a segment
> of code, and it can eliminate a lot of the run-time type checking.
> But, this optimization tends to vary in amount with Lisp compiler
> vendor.
This also is true and it varies between architectures (as expected).
> first place. In Lisp, 1.0 is generally stored as an int and
> promoted at run-time to a float resulting in slower code.
Depends.
> Some
> Lisp compilers might be smart enough not to do this. C/C++
> guarantees you can do this via explicit statement.
> An example of C/C++ tweek is keeping a pointer to an array in a
> register rather than using an absolute address or stack address
> (the CPU does not have to load the address along with the
> instruction because the location is already in a register -- fewer
> loads and tighter code for fewer cache misses). You can
> increment the register pointer as you move through the array
> more quickly as well. This is a standard part of C/C++. FORTRAN
> compilers are really good at this because they know the location
> of the array is static and automatically keep the location
> in registers. In C/C++ you give it a hint by using a pointer to
> an array and declaring it "register". It's been a long time since
> I used Lisp, but those kinds of optimizations were not used then,
> and I suspect whether they exist today depends on the particular
> Lisp compiler vendor.
You might look in some old Symbolics manuals under the
keyword "Array Registers". The Genera 8.3 documentation (Feb 1990)
has it in Book 7 "Symbolics Common Lisp Language Concepts",
chapter 4.5.10. Basically you have the declarations
SYS:ARRAY-REGISTER and SYS:ARRAY-REGISTER-1D which results
in similar behavior. For the underlying implementation
look at Book 14 "Internals", chapter 28.2 "Ivory Array Registers".
Such stuff has been used extensively on those machines
(they had color paint software, 3d animation packages, ...).
Do other vendors have similar stuff? This is surely
non standard.
> Code generation isn't everything. Particularly as CPUs get faster,
> more "prototype" Lisp code will be perfectly suitable as production
> code. Over time, higher level languages like Lisp, Prolog, etc.
> will certainly push C/C++ into ever smaller niches. But it will
> happen very slowly.
It is surely important to discuss the wishes of *maybe*-users
and to see how we can advance the issue such that we
get portable fast code generation.
Common Lisp always tried to be a Lisp which can be compiled
to reasonably fast code on stock hardware. It is a language
which tries to bridge the gap between low level and
high level coding. There are surely some compromises in the
language and getting fast code may not be that easy
(though it is possible). Some optimizations take
place in high-level code: for example the CLOS
Meta Object Protocol is a facility for this. Optimizations
at that level are often not possible or system dependent
in other languages.
> Many of will have retired or died by then.
Since Lisp has survived for quite some time due
to its flexibility to adopt new ideas, there
is a good chance of it finding future uses.
Rainer Joswig
From: Richard A. O'Keefe
Subject: Re: Performance in GA (was: Lisp versus C++ for AI. software)
Date:
Message-ID: <56gq29$1sb$1@goanna.cs.rmit.edu.au>
George Van Treeck <······@sybase.com> writes:
>In Lisp, each piece of data has some tag bits associated with it.
Dangerous half truth.
Hadn't you noticed that Common Lisp allows you to provide *type
declarations*, which means that a Lisp compiler _can_ represent
data known to be bounded integer or float or whatever without tags,
and some Lisp compilers *DO*.
The Stalin compiler for Scheme even manages to accomplish this without
any declarations, using type inference. It even generates different
representations for "list of float" and "list of anything".
>This allows Lisp to determine the type at run-time and automatically
>perform type conversions on demand, e.g., manipulate a number as a
>string. It's part of what makes Lisp a "symbolic" processing
>language.
Yes, BUT THE COMPILER DOESN'T HAVE TO GENERATE MOST GENERAL CODE
ALL THE TIME. People have put a _lot_ of work into this over the
years.
>To gain performance, Thinking Machines put some of the tag
>handling in hardware. However, the market for Lisp specific
>computers is now gone. Didn't they go out of business a long
>time ago? "Generic" CPUs don't have tag specific handling for
>Lisp, thus there is run-time overhead to handle the tags.
(1) You may be referring to the Connection Machines that were built
out of SPARC processors, which have support for "tagged arithmetic".
(2) You should know that SPARCs are commodity processors, NOT in any
way Lisp specific, and in no imminent danger of extinction.
For example, I am posting from an "Ultra Enterprise 3000",
which is a 64-bit 166 Mhz 4-way superscalar UNIX box, which
still has those "tagged add/subtract" instructions.
(3) With the exception of bounded integer arithmetic and floating
point arithmetic, which many Lisp compilers can handle with
no tagging thanks to declarations, the tag checking can often
be folded in with work that has to be done anyway.
>In Lisp, you can specify the level of optimization on a segment
>of code, and it can eliminate a lot of the run-time type checking.
>But, this optimization tends to vary in amount with Lisp compiler
>vendor.
This is true of ANY language. Try comparing different C compilers
on the same machine. On the machine I'm using, there's nearly a
factor of *three* between the fastest C compiler and the C compiler
that generates the fastest code.
>In C, if you want to remove the overhead of converting an int to
>a float in mixed arithmatic you define the int as a float in the
>first place. In Lisp, 1.0 is generally stored as an int and
>promoted at run-time to a float resulting in slower code.
This is a joke, right? Every Lisp I've ever used stored 1.0 as
1.0, NOT as an int. It _had_ to, because being-a-float is part
of its run-time value!
>Some Lisp compilers might be smart enough not to do this.
Name one Common Lisp system of any sort that _does_ store 1.0 as an int!
>An example of C/C++ tweek is keeping a pointer to an array in a
Where _does_ this "tweek" spelling come from?
I've been seeing it a lot lately. It's "tweak" with an A.
You should be aware that modern optimising C compilers do just as well
with array-munching source code using "a[i]" as with source code
using "*p++". Sometimes better. For the parallelizing C compilers,
sometimes _much_ better.
>It's been a long time since I used Lisp,
It shows.
>but those kinds of optimizations were not used then,
>and I suspect whether they exist today depends on the particular
>Lisp compiler vendor.
But why should Lisp be any different from C? The optimisations
done for *any* language vary from compiler vendor to compiler vendor,
and from release to release from the same vendor.
>Code generation isn't everything. Particularly as CPUs get faster,
>more "prototype" Lisp code will be perfectly suitable as production
>code. Over time, higher level languages like Lisp, Prolog, etc.
>will certainly push C/C++ into ever smaller niches. But it will
>happen very slowly. Many of will have retired or died by then.
Look at the number of people writing Perl, even TCL, rather than C.
Look at the number of people raving about Java who used to rave
about C++, when they were using interpreted byte codes for Java.
As for Prolog, if efficiency had _ever_ been the reason why people
preferred C to it, the availability of Mercury (no run-time tags,
performance comparable to or better than gcc-compiled C) would have
put an end to _that_. Then there are Sisal, NESL, and other "high
level" languages that routinely beat Fortran on the problems that
Fortran is supposed to be good at.
Sigh.
--
Mixed Member Proportional---a *great* way to vote!
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
Dear Mr. Van Treeck,
[**** a benchmark to show Lisp performance appended ****]
it is not my intention to "flame" you and you are probably not a
clueless newbie.
But you insist on posting information about the speed of my preferred
programming language that is either wrong or inaccurate. Your
statements, if accepted by a wide range of people, will make it harder
to use a tool I currently use to be ahead of my competitors.
I appended two programs, one in C and one in Common Lisp.
If you are interested in getting the facts straight in this discussion,
would you please get a copy of CMU Common Lisp or Allegro Common Lisp
(they ship a CD with a Demo), I assume you have a C compiler, run this
benchmark and post the results to the same public forums you posted
your previous information to? See my home page at cracauer.cons.org
for pointers to CMUCL for workstations and PCs.
First, let me comment on a few points; the benchmark is a uuencoded/
tarred/gzipped file at the end of this posting.
George Van Treeck <······@sybase.com> writes:
[...]
>In Lisp, each piece of data has some tag bits associated with it.
>This allows Lisp to determine the type at run-time and automatically
>perform type conversions on demand, e.g., manipulate a number as a
>string. It's part of what makes Lisp a "symbolic" processing
>language.
This is only true as long as you want it and don't add
declarations.
>To gain performance, Thinking Machines put some of the tag
>handling in hardware. However, the market for Lisp specific
>computers is now gone. Didn't they go out of business a long
>time ago?
I don't know much about TMC's machines, but Symbolics and LMI
had/have this, too. Additionally, there were highly optimized Bytecode
machines (Xerox Interlisp), the same kind of stuff now planned for
Java.
>"Generic" CPUs don't have tag specific handling for
>Lisp, thus there is run-time overhead to handle the tags.
The Sparc architecture has. But, it is not of much use even for Lisp
programs because you can add the declarations you need and no further
type-checking is done.
>In Lisp, you can specify the level of optimization on a segment
>of code, and it can eliminate a lot of the run-time type checking.
>But, this optimization tends to vary in amount with Lisp compiler
>vendor.
No surprise. To get good code, you have to get a decent
compiler. There are many, and CMU CL is a free implementation, and you
can eliminate *all* type checks for the code discussed here.
You wouldn't use a bytecode C system, either.
>In C, if you want to remove the overhead of converting an int to
>a float in mixed arithmatic you define the int as a float in the
>first place. In Lisp, 1.0 is generally stored as an int and
>promoted at run-time to a float resulting in slower code. Some
>Lisp compilers might be smart enough not to do this. C/C++
>guarantees you can do this via explicit statement.
I think that is irrelevant. Declare the type and you get what you
want. You declare the type in C, too, so there is no extra effort for
Common Lisp.
>An example of C/C++ tweek is keeping a pointer to an array in a
>register rather than using an absolute address or stack address
>(the CPU does not have to load the address along with the
>instruction because the location is already in a register -- fewer
>loads and tighter code for fewer cache misses).
I have to admit I don't understand what you mean. When iterating over an
array, all good compilers will hold the address in a register. What
additional place is there to look up?
Do you mean the pointer is held in a register across function calls?
If so, does your C or Fortran compiler do that on a SPARC?
>You can
>increment the register pointer as you move through the array
>more quickly as well. This is a standard part of C/C++. FORTRAN
>compilers are really good at this because they know the location
>of the array is static and automatically keep the location
>in registers. In C/C++ you give it a hint by using a pointer to
>an array and declaring it "register". It's been a long time since
>I used Lisp, but those kinds of optimizations were not used then,
>and I suspect whether they exist today depends on the particular
>Lisp compiler vendor.
There are certainly optimizations not made in Lisp compilers. But we
are talking about minor improvements. In complex programs those can be
noise compared to the effect caused by the overall optimization. In
complex programs the optimization near the program flow level can be
more important than micro-optimization near the machine
implementation. Of course, it is quite hard to write benchmarks to
show this.
But, I think Lisp is way behind in many microoptimization issues. For
example, the right instruction scheduling for modern CPUs is quite hard
to find. So hard that only compilers provided by the CPU vendor can do
it best. Those compilers are usually C, C++ and Fortran.
These optimizations cause program speedup in the range of +50/-30%. In
fact my benchmark will be slower than C within such a range (although
my tests on a Mips machine showed Lisp to be 20% faster). When comparing
languages, we usually talk about being 10 or 30 times slower (see
Java).
>Code generation isn't everything. Particularly as CPUs get faster,
>more "prototype" Lisp code will be perfectly suitable as production
>code. Over time, higher level languages like Lisp, Prolog, etc.
>will certainly push C/C++ into ever smaller niches. But it will
>happen very slowly. Many of will have retired or died by then.
True, but to show the current state of affairs, please run these
programs and post your results and/or comments.
Again, I'm interested in a useful discussion about this and I'm happy
to dissect the assembler code generated by our compilers to make
progress.
begin 666 lisp-bench.tar.gz
<uuencoded_portion_removed>
end
--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <········@wavehh.hanse.de> http://cracauer.cons.org
Those interested in the C++ vs Lisp runtime wrangling --especially in the
area of GAs, etc-- should check out the work by Ken Anderson
(·········@bbn.com).
In short, he took the GP code from Koza's book and spent a half a day or
so doing some obvious and straightforward optimizations (e.g.
type-declarations, etc).
Not only did he speed up that particular program greatly, but Ken's
results (here and elsewhere) demonstrate that in almost all cases Lisp can
(easily) be made run at least as fast and efficiently as comparable C++
code.
Nichael Cramer
·······@sover.net -- deep autumn my neighbor what does she do
http://www.sover.net/~nichael/ --Basho
From: George Van Treeck
Subject: Re: Performance in GA (was: Lisp versus C++ for AI. software)
Date:
Message-ID: <3291241A.5FCF@sybase.com>
From treeck Mon Nov 18 13:05 PST 1996
Date: Mon, 18 Nov 1996 13:05:36 -0800
To: ········@wavehh.hanse.de
Subject: Re: Performance in GA (was: Lisp versus C++ for AI. software)
Cc: treeck
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
> I appended two programs, one in C and one in Common Lisp.
I only received one file, lisp-bench.tar.z, appended in your
email. Where is the C version? And gunzip said:
gunzip: lisp-bench.tar.z: not in gzip format.
Note that the topic is performance in "GA". Are these GA benchmarks?
If not, what relevance do the benchmarks have? Different languages perform
better at different tasks. For some people, their favorite language
is their hammer and every problem a nail. What I'm advocating is
using the best language for the particular job (e.g., a saw, drill,
etc.).
If done in a manner where cross-over and mutation are symbolic
rather than bit-wise, then Lisp may be faster than C/C++!
However, symbolic manipulation tends to be very application-specific,
i.e., change the symbols for changes in the application. For
example, suppose you were to do some Koza-like minimal programming
instruction set and then change to a completely different kind
of application like routing oil through pipelines -- completely
different symbols.
If you frequently modify the application code (change definitions of
alleles/symbols) and don't want to rewrite the associated GA code,
you would use bit-wise manipulation, i.e., the GA code knows nothing
about the data (just a string of bits). I pass the GA code an
array of structs (containing bit fields aligned on bit boundaries)
cast as an array of bytes. The advantages of this are that a
flipped bit doesn't create an invalid symbol, because there are
just enough bits allocated to represent all the symbols. A flipped
bit or cross-over always operates on real data rather than
"blanks", thus evolves faster. When I get results, the compiler
unpacks the result. I suspect C/C++ is a little easier to use for
this bit twiddling code and probably runs faster. Is Lisp now
better at bit twiddling as well?
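For what it's worth, a representation-blind single-point crossover
over plain bit strings is also only a few lines of Common Lisp (a
minimal sketch; the names are invented and no claim about relative
speed is intended):

  (defun crossover (a b point)
    "Single-point crossover of two equal-length bit vectors A and B."
    (declare (type simple-bit-vector a b) (type fixnum point))
    (let ((child (copy-seq a)))
      ;; keep A's bits before POINT, take B's bits from POINT onward
      (replace child b :start1 point :start2 point)
      child))

  ;; (crossover #*11111111 #*00000000 4) => #*11110000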
I write my GUI and most of my application code in SuperCard (a
4GL) and write the GA and other compute intensive portions in
C++. In other words, instead of picking a particular language and
trying to contort it into doing everything, I mix and match for
the application at hand.
-George
C>I'm working on Machine Learning and I wish to know your opinion about what
C>is the best language to implement ML...
Either way, try not to use both languages. You may spend more
time on integration than research. I've worked on a project
that had to interface LISP and C++ (LISP:intelligence, c++:simulator).
The end statistic was that we spent more than 50% of labor hours
on the LISP-C++ interface and close to 70% of the code was written to
transform data structures back and forth between the two languages
on multiple platforms.
The project was a success, after we re-implemented the design (yes,
we actually had a design, thanks to the insistence/threat of the
customer) in a single language.
--Norman
From: Scott Musman
Subject: Re: mixed language programming (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <g13ezuyaui.fsf@errhead.airbranch>
>Yes, the fact that Lisp-C++ interfaces take that much code is a
>problem, but trying to avoid it by not interfacing with C/C++ at all
>is throwing out the baby with the bathwater.
You might want to take a look at what ILOG has tried to do with their
Talk product. It specifically attempts to tackle the issue of mixing the
best features of the different languages, ISO Lisp and C/C++ in a
seamless way. It also helps you with the application delivery problem
too.
I haven't actually used their product (I've only browsed some of
the documentation), but a free version of it is available for Linux
that you can down-load and evaluate.
-- Scott
In article <················@best.best.com>,
···@intentionally.blank-see.headers wrote:
> Yes, the fact that Lisp-C++ interfaces take that much code is a
> problem, but trying to avoid it by not interfacing with C/C++ at all
> is throwing out the baby with the bathwater. There are too many
> useful libraries with C APIs out there that you can't reimplement.
> You may take the academic stance that you don't want to interface with
> all that icky stuff, but all the languages that succeed have those
> interfaces. Lisp vendors must address this issue better:
Agreed.
> (1) They should include many more Fortran/C/C++ libraries already
> interfaced to their systems. This includes image I/O, video,
> audio, databases, compression, Web protocols, etc., both vendor
> specific (eg. SGI's libraries, Windows) and vendor neutral (third
> party, reference implementations). There is nothing that will be
> more discouraging to a potential customer than having to spend a
> few weeks building tools that they get automatically and for free
> with other systems before they can start solving their problems.
> Where are the Tools.h++, Linpack.h++, and Socket.h++ wrapper
> libraries of the CommonLisp world?
MCL accesses almost everything on the Mac (the complete toolbox
and standard shared libraries). Compile your stuff with
the usual compilers as a shared library and call it
from Lisp. People are doing this all the time.
> (2) By making FFI a lot easier. This includes automatic parsing
> of C/C++ header files, conservative stack marking, C++ classes
> to manipulate Lisp values, etc. One may not like it (I
> certainly don't), but current APIs are almost all built around C,
> with a little bit of C++ and Fortran.
Every bit of the above paragraph is true (IMHO).
> (3) By standardizing at least a portable subset of FFI facilities.
> There are only two vendors left.
Digitool, Franz, Harlequin, Symbolics. Four, isn't it? Are
there any others (GoldHill, Venue, ...)?
Poplog includes a CL implementation.
> I guess if one of them goes
> out of business, we have standardization in the commercial
> implementations :-( Seriously, to avoid that, addressing this
> issue would be a good idea.
Sure. We need a common approach for FFIs.
> Systems like ILU and CORBA provide a partial solution, but they
> are still distinct from a true, low-overhead FFI.
Well, Digitool is currently developing support for OpenDoc
for MCL. For this purpose IDL/SOM support is needed. This could
be the FFI we are looking for. Better to have one approach
on that particular topic.
Greetings,
Rainer Joswig
From: Georg Bauer
Subject: mixed language programming (Re: Lisp versus C++ for AI. software)
Date:
Message-ID: <199610062231.a41774@ms3.maus.de>
Hi!
RJ>Every bit of the above paragraph is true (IMHO).
Exactly. I personally think a C/C++ header parser is top on the list -
everyday there is a new library, so a vendor can't catch up with (for
example) Microsoft in new APIs. So better get a standard tool for the job.
RJ>Digitool, Franz, Harlequin, Symbolics. Four, isn't it? Are
RJ>there any others (GoldHill, Venue, ...)?
Venue still is up.
RJ>Well, Digitool is currently developing support for OpenDoc
RJ>for MCL. For this purpose IDL/SOM support is needed. This could
RJ>be the FFI we are looking for. Better to have one approach
RJ>on that particular topic.
Actually IDL/SOM is CORBA, so that will be a standard for the future, I
think. The problem is, these aren't low-overhead approaches :-)
I still prefer to connect directly to basic system APIs, but this gets
more problematic with every new release of system software - the basic APIs
are just getting too complex (anyone ever programmed OLE or ACTIVE-X on
native APIs without MFC? ;-) )
bye, Georg
Carlos Cid wrote:
>
> Hi everybody,
>
> This is a request for your opinion about the subject theme. --snip--
Here's my opinion: Go with C++ and don't rule out Java.
> - Existing implemented ML. algorithms.
As stated earlier, more examples in Lisp but they may be "older".
C++ and Java can both utilize processes written in C.
> - An easy maintenance (easy to understand and easy for group working).
C++ is OO, good for maintenance and group working, same with Java.
> - Language standarization.
I should think this would be of less importance for research purposes than it
would be for commercial distribution of a product. Also, current trends indicate
that C++ will become more and more standardized.
> - Multiplatform support.
Yep.
> - Interface with graphic enviroments.
C++ and Java both lend themselves to graphical development.
> - Software Engineering methodologies developed.
Yep.
Java is a clear choice in developing applications for the web. C++ has
a speed advantage running locally, but if your application requires more
speed than Java has to offer, you can always use C as your workhorse doing
the intensive stuff.
--Robby Garner
http://www.fringeware.com/usr/robitron