I think that Lisp compilers get their power because in Lisp you
usually declare the type of the argument implicitly:

(person-name "John")
(char "sure I'm a string" 1)
(car '(sure this is a list))
(= A B) ;; surely A is a number...

That is, instead of declaring the type of the argument, Lisp
uses multiple names for the same operation:

a[1] versus (aref a 1),
a[1] versus (gethash 1 a) ...

This means that the Lisp compiler can be that much more powerful,
since the programmer takes a lot of care to declare the types of the
arguments implicitly.

A = "John Smith"

A.name versus (person-name "John Smith")

So I would say that Lisp is weakly, implicitly typed.

You pay for this feature (a fast compiler) by creating a lot of
names for functions.

To appreciate Lisp you must realize that this is a trade-off:
fast speed and no static typing versus more reserved words.

Example: =, eq, eql, string=, equal, equalp ...

Do you agree?
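For comparison, here is a rough sketch (in Python, not from the original
post) of how languages like Python collapse most of these distinctions
into just two notions, identity and structural equality:

```python
# Identity vs. structural equality in Python, loosely mirroring
# the Lisp predicates named above (the mapping is approximate).
a = [1, 2, 3]
b = [1, 2, 3]
assert a == b        # structural equality, roughly EQUAL
assert a is not b    # distinct objects, so an EQ-style test fails
assert 4 == 4.0      # numeric equality across types, roughly =
assert "ab" == "ab"  # string comparison, roughly STRING=
```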
On 12 Okt., 17:47, ·············@gmail.com wrote:
> I think that Lisp compilers get their power because in Lisp you
> usually declare the type of the argument implicitly [...]
>
> Do you agree?
Common Lisp has both specific functions and "generic" functions
("generic" meant here in a non-CLOS sense). Functions on sequences,
for example, work with strings, vectors and lists. In the I/O system
there are primitive functions like WRITE-CHAR, generic functions like
PRINC, complex functions like FORMAT, and even CLOS generic functions
like PRINT-OBJECT.
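A loose Python parallel to those layers (the class name below is made
up for illustration): a primitive write, a generic print, a formatting
call, and a per-class hook that printing dispatches through:

```python
import io

buf = io.StringIO()
buf.write("x")                 # primitive output, like WRITE-CHAR
print("hello", file=buf)       # generic printing, like PRINC
buf.write("{:>5}".format(42))  # formatted output, like FORMAT

class Thing:                   # hypothetical class
    def __str__(self):         # per-class hook, like PRINT-OBJECT
        return "#<thing>"

print(Thing(), file=buf)
assert buf.getvalue() == "xhello\n   42#<thing>\n"
```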
Common Lisp has a whole bunch of things in the language that allow
compilers to generate fast code, or allow interpreters to be
reasonably fast. Using type-specific functions and having the compiler
figure out how to create optimized code for them is one way.

Type-specific functions are also used because:
* they provide error checking in safe code
* they give the human reader some information about what the code
  is working on
* they provide the building blocks for more generic functions
Also, the Common Lisp programmer often thinks that FILE-DELETE,
WINDOW-DELETE and QUEUE-DELETE are different operations, so they need
different names. These symbols also name first-class objects (they
have function objects). Using a single name DELETE and making it
generic over all kinds of objects is seen as a mistake. There are two
usual solutions:

a) using packages, so that fs:delete, tv:delete and q:delete are
   different names, since they are in different packages
b) using names like file-delete, window-delete and queue-delete

When using CLOS, it is not considered good style to have one generic
function DELETE for every kind of (probably semantically very
different) delete operation.
Common Lisp allows you to make a choice between very specific
(possibly inlined, type-inferenced, ...) code and generic code that
has lots of runtime flexibility. A good compiler might generate
specific code even for generic operations if the types are declared,
the types can be inferred, and the optimization settings tell the
compiler to do so.
·············@gmail.com wrote:
>
> I think that Lisp compilers get their power because in Lisp you
> usually declare the type of the argument implicitly [...]
>
> Do you agree?
No.
First reason: Functions like char and car have to check that the type of
the passed argument is correct. If it's not correct, these functions
have to signal an error. Conceptually, this means more overhead,
although there are a number of quite clever implementation techniques to
remedy the situation.
Second reason: In Lisp, many functions are actually generic.
Arithmetic operations work on different kinds of numbers (fixnum,
integer, rational, float, complex), many functions accepting strings
as arguments also accept string designators, print functions can print
all kinds of arguments, and so on. This doesn't create a lot of
additional overhead, since the type checks have to be performed anyway
(see above), but instead of signalling errors, different branches of
functionality can be executed. This is actually one of the
cornerstones of dynamic languages: functionality can be selected based
on dynamic properties. This is so important that Common Lisp actually
provides a systematic way of defining (and extending) generic
functions.
If you really want to get rid of runtime typecheck overhead, Lisp
implementations do the same as any other language: they infer static
types by doing control-flow analysis, allow programmers to declare
types statically, and rearrange typechecks at runtime such that the
branches typically taken most of the time are tested first and encoded
as fast checks.
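That idea of selecting functionality on dynamic properties is not
unique to Lisp; as a sketch, Python's standard library offers a much
more limited, single-argument form of generic function:

```python
from functools import singledispatch

@singledispatch
def describe(x):        # default branch
    return "something"

@describe.register
def _(x: int):          # branch selected on the dynamic type
    return "an integer"

@describe.register
def _(x: str):
    return "a string"

assert describe(3) == "an integer"
assert describe("hi") == "a string"
assert describe([1, 2]) == "something"
```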
Pascal
--
Lisp50: http://www.lisp50.org
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
On 12 oct, 18:52, Pascal Costanza <····@p-cos.net> wrote:
> ·············@gmail.com wrote:
>
> > I think that Lisp compilers get their power because in Lisp you
> > usually declare the type of the argument implicitly [...]
> > Do you agree?
>
> No.
>
> First reason: Functions like char and car have to check that the type
> of the passed argument is correct. [...]
>
> Second reason: In Lisp, many functions are actually generic. [...]
>
> If you really want to get rid of runtime typecheck overhead, Lisp
> implementations do the same as any other language. [...]
I agree that Lisp has generic functions, both CLOS and non-CLOS, for
example the arithmetic operations. I think all computer languages have
generic functions in some sense. The point of my post is that if we
compare modern languages like Python or Ruby with Lisp, we see that,
in many cases, Lisp requires the type of the argument:

"a" + "b" in Python, Ruby, JavaScript, ...
(concatenate 'string "a" "b"), that is, not (concatenate "a" "b").

The only way I see of "proving" my point is to take two libraries, one
in Python and one in Lisp, and analyze the average generality of the
functions in both. Another proof of this point is that Python is
slower than Lisp, and the reason for this is not simply bad
programming. I would say that in a mature language

---------------------------------------------------------------
the more dynamic the functions, the slower the code.
---------------------------------------------------------------

But a low-level language can construct libraries to use generic
functions (Lisp uses CLOS); that is not the point, though.
On Oct 12, 11:57 pm, ·············@gmail.com wrote:
> [...] The point of my post is that if we compare
> modern languages like Python or Ruby with Lisp, we see that, in many
> cases, Lisp requires the type of the argument:
>
> "a" + "b" in Python, Ruby, JavaScript, ...
Yeah, Common Lisp avoids this because + is fine for a numeric domain,
but on other things it tends to get arbitrary.

What would (+ "12345" 2) be? "123452" or 12347?
What would (+ 2 "12345") be? "212345" or 12347?
What would (+ (list 1 2 3 4 5) (list 1 2 3 4 5)) be?
(2 4 6 8 10) or (1 2 3 4 5 1 2 3 4 5)?
Why are (+ 1 3) and (+ 3 1) both 4, while (+ "1" "3") and (+ "3" "1")
would have different results?
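For what it's worth, Python answers those questions by fiat rather
than leaving + generic over everything; a small sketch of its choices:

```python
# String + is concatenation, so it is not commutative:
assert "1" + "3" == "13"
assert "3" + "1" == "31"

# Mixing strings and numbers is an error rather than a guess:
try:
    "12345" + 2
    raised = False
except TypeError:
    raised = True
assert raised

# List + is concatenation, not element-wise addition:
assert [1, 2, 3, 4, 5] + [1, 2, 3, 4, 5] == [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]

# Numeric + stays commutative:
assert 1 + 3 == 3 + 1 == 4
```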
>
> (concatenate 'string "a" "b") that is not (concatenate "a" "b").
STRING is the result type. CONCATENATE takes any sequence types as
long as the elements can be put in a sequence of the result type.
? (concatenate 'string '(#\f #\o #\o) '#(#\b #\a #\r))
"foobar"
? (concatenate 'list "foo" "bar")
(#\f #\o #\o #\b #\a #\r)
> The only way I see of "proving" my point is to take two libraries,
> one in python and one in Lisp and analyze the average generality of
> the functions in both.
Common Lisp already contains generic functionality for numbers,
sequences, and I/O. What more do you need?
> Another proof of this point is that Python is slower than Lisp, and
> the reason for this is not simply bad programming.

I can't follow your argument here...
>
> I would say that in a mature language
> ---------------------------------------------------------------
> the more dynamic the functions, the slower the code.
> ---------------------------------------------------------------
>
> But a low-level language can construct libraries to use generic
> functions (Lisp uses CLOS); that is not the point, though.
Maybe Common Lisp as a language is better suited to compilation, and
some Common Lisp compilers are simply written such that they can
generate fast code when requested?

You might want to check out some books and papers on compiler
technology for Lisp:
http://library.readscheme.org/page8.html
http://pagesperso-systeme.lip6.fr/Christian.Queinnec/WWW/LiSP.html
http://common-lisp.net/project/cmucl/doc/CMUCL-design.pdf
Design Considerations for CMU Common Lisp by Scott E. Fahlman,
in http://mitpress.mit.edu/catalog/item/default.asp?tid=8078&ttype=2

Since the early 1960s there has been a lot of research and
experimentation on fast runtimes and compilers for Lisp.
On 13 oct, 01:12, "·······@corporate-world.lisp.de" <······@corporate-world.lisp.de> wrote:
> [...]
You say: "Maybe Common Lisp as a language is better suited to
compilation."

I'm wondering why it is better suited to compilation; that is
one of the motivations of my post.
Thanks for the references.
On Oct 13, 1:30 am, ·············@gmail.com wrote:
> [...]
>
> You say: "Maybe Common Lisp as a language is better suited to
> compilation"
>
> I'm wondering why it is better suited to compilation, this is
> one of the motivation of my post.
But you are making a primitive assumption (type-specific functions)
instead of just reading a bit about Lisp compilers.
You can read about Common Lisp in the ANSI Common Lisp standard.
ANSI CL, for example:

* allows code to be inlined; the developer can declare code to be
  inlined
* allows types to be declared
* allows compilation to code without runtime checks for types and
  other constraints
* allows whole files to be compiled and lets the compiler assume that
  functions in a file don't change
* allows whole files to be compiled and lets the compiler assume that
  local functions calling their global function will always call the
  same global function
* allows data to be stack-allocated
* allows the compiler to assume that CL built-in functionality cannot
  be changed; CL:+ for example can't be redefined

This and more means that you can compile Common Lisp to quite
efficient code. See this SBCL example on an x86 machine:
* (defun foo (a b c)
    (declare (fixnum a b c)
             (optimize (speed 3) (safety 0) (debug 0)
                       (space 0) (compilation-speed 0)))
    (the fixnum (+ a (- b c))))
FOO
* (disassemble 'foo)
; 1169357B: 29F7       SUB EDI, ESI        ; no-arg-parsing entry point
;       7D: 01FA       ADD EDX, EDI
;       7F: C1E202     SHL EDX, 2
;       82: 8D65F8     LEA ESP, [EBP-8]
;       85: F8         CLC
;       86: 8B6DFC     MOV EBP, [EBP-4]
;       89: C20400     RET 4
...
It compiles the fixnum arithmetic down to very clean machine code.
Or this example:
* (declaim (inline foo))

* (defun foo (a b c)
    (declare (fixnum a b c)
             (optimize (speed 3) (safety 0) (debug 0)
                       (space 0) (compilation-speed 0)))
    (the fixnum (+ a (- b c))))
FOO

* (defun bar (a b c d)
    (declare (fixnum a b c d)
             (inline foo)
             (optimize (speed 3) (safety 0) (debug 0)
                       (space 0) (compilation-speed 0)))
    (the fixnum (+ a (the fixnum (foo b c d)))))
BAR

* (disassemble 'bar)
; 116ACE96: 29C6       SUB ESI, EAX        ; no-arg-parsing entry point
;       98: 01F7       ADD EDI, ESI
;       9A: C1E702     SHL EDI, 2
;       9D: 01FA       ADD EDX, EDI
;       9F: 8D65F8     LEA ESP, [EBP-8]
;       A2: F8         CLC
;       A3: 8B6DFC     MOV EBP, [EBP-4]
;       A6: C20400     RET 4
You see that the call to FOO has been replaced by inline code; there
is no runtime dispatch for + and - anymore. All you see is direct
fixnum machine code. There are several levels of efficiency: that
example showed what the machine code looks like when dynamic features
are removed, types are declared and no runtime safety is requested.
SPEED, SPACE, COMPILATION-SPEED, DEBUG and SAFETY can take values
between 0 (low) and 3 (high), and the compiler may then use different
compilation strategies depending on those values. The declarations are
defined in ANSI Common Lisp; what the compiler does with them is
implementation-dependent.
So you see that you can portably instruct the Lisp compiler to
generate very efficient code. The compiler may ignore the
declarations, but several compilers take advantage of them. You also
see that there is no intermediate step: the Lisp compiler itself
generates the machine code directly.
>
> Thanks for the references.
On Sun, 12 Oct 2008 16:30:59 -0700, grande.piedra wrote:
> You say: "Maybe Common Lisp as a language is better suited to
> compilation"
>
> I'm wondering why it is better suited to compilation, this is
> one of the motivation of my post.
One of the things that makes Python particularly nasty for compilation (I
know nothing of Ruby, but perhaps it is similar, if its performance is
similiar) is that its object model is based on hash-lookups of method
names, and the dictionary of name/method pairs is mutable at run-time on
an object-by-object basis, rather than being constrained by type or
class. It is therefore not possible [*] to do most of the usual compiler
"optimizations", which start with inlining and specialization of
functions.
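A small Python sketch of the point above (class and method names are made up for illustration): the name-to-method mapping is an ordinary dictionary, rebindable at run time per class and even per object, so a compiler cannot safely inline or specialize the call.

```python
import types

class Point:
    def magnitude(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

p = Point()
p.x, p.y = 3, 4
print(p.magnitude())  # 5.0

# The method table is just a mutable dict on the class:
Point.magnitude = lambda self: abs(self.x) + abs(self.y)
print(p.magnitude())  # 7

# ...and it can even be shadowed on a per-object basis:
p.magnitude = types.MethodType(lambda self: 0.0, p)
print(p.magnitude())  # 0.0
```

Any inlined version of `magnitude` would be wrong after either rebinding, which is why the compiler must keep the hash lookup on every call.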
[*] Contrariwise: Javascript has the same "problem" as Python, and has a
broadly similar, extremely dynamic object model. There are currently
some very strong incentives to make javascript fast, and there is some
very good dynamic recompilation-based optimization work being done, in
(for example) the Google Chrome browser and Firefox 3. I don't know
whether this trace-optimization/optimistic type inferring style of
compiler is actually making Javascript competitive with other compiled
languages like Common Lisp, just that it is now very much faster than it
was a couple of years ago, and that is opening up a lot more
possibilities.
--
Andrew
From: Marek Kubica
Subject: Re: static, dynamic and implicitely typed languages
Date:
Message-ID: <gd35nu$4su$2@hoshi.visyn.net>
On Mon, 13 Oct 2008 00:28:36 +0000, Andrew Reilly wrote:
> One of the things that makes Python particularly nasty for compilation
> (I know nothing of Ruby, but perhaps it is similar, if its performance
> is similiar) is that its object model is based on hash-lookups of method
> names, and the dictionary of name/method pairs is mutable at run-time on
> an object-by-object basis, rather than being constrained by type or
> class.
Well, Python allows defining a __slots__ variable, for details see
<http://www.python.org/doc/2.5.2/ref/slots.html>. Not that many people
are doing this, but 1) this usually doesn't give really big performance
improvements 2) performance on the microbenchmark level is often just not
that important.
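For reference, this is what `__slots__` does (modern Python syntax shown; the linked document describes the 2.5-era version of the same feature):

```python
class Open:
    pass  # attributes live in a per-instance, mutable __dict__

class Slotted:
    __slots__ = ('x', 'y')  # fixed attribute set, no per-instance __dict__

o = Open()
o.anything = 2        # fine: any name can be added at run time

s = Slotted()
s.x = 1
try:
    s.anything = 2    # AttributeError: not a declared slot
except AttributeError:
    print("slots are fixed")

print(hasattr(s, '__dict__'))  # False
```

Fixing the attribute set mainly saves memory; as noted, it does not by itself make method dispatch static.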
regards,
Marek
On Oct 12, 2:57 pm, ·············@gmail.com wrote:
>
> Another prove of this point is that python is slower than Lisp and
> the reason of this is not simply bad programming.
>
There are other possibilities besides "bad programming". For
instance, "bad design" comes to mind. There is lots of good code in
Python's C implementation, but I'll bet you could translate Python to
Scheme or Common Lisp with exactly the same generality/semantics and
it would run faster in a good Scheme or Lisp implementation. If this
is true, it would prove that Python could run faster if they were
willing to make suitable changes to their interpreter.
In fact that would be kind of cool:
def foo(n):
    for i in range(n):
        print "Hello", i
becomes:
(def foo (n)
  (for i in (range n)
    (print "Hello" i)))
with suitable macros for all the Python constructs.
>
> In fact that would be kind of cool:
>
> def foo(n):
>     for i in range(n):
>         print "Hello", i
>
> becomes:
>
> (def foo (n)
>   (for i in (range n)
>     (print "Hello" i)))
>
> with suitable macros for all the Python constructs.
While you can produce an AST from Python code, there doesn't seem to
be a way to read in text as AST and compile it :/
--
Mikael Jansson
http://mikael.jansson.be
On Oct 12, 4:35 pm, Scott <·······@gmail.com> wrote:
> I'll bet you could translate Python to
> Scheme or Common Lisp with exactly the same generality/semantics and
> it would run faster in a good Scheme or Lisp implementation.
http://common-lisp.net/project/clpython/
I don't know if it runs faster (yet).
-- Scott
On 15 oct, 18:00, Scott Burson <········@gmail.com> wrote:
> On Oct 12, 4:35 pm, Scott <·······@gmail.com> wrote:
>
> > I'll bet you could translate Python to
> > Scheme or Common Lisp with exactly the same generality/semantics and
> > it would run faster in a good Scheme or Lisp implementation.
>
> http://common-lisp.net/project/clpython/
>
> I don't know if it runs faster (yet).
>
> -- Scott
Hello Scott, thanks for the reference. I thought that this project was
at an early stage, but now it seems more mature. I will try clpython;
sharing libraries and knowledge can benefit both the Python and Lisp
communities.
On Sun, 12 Oct 2008 14:57:09 -0700, grande.piedra wrote:
> arithmetic operations. I think all the computer languages have in some
> sense generic functions. The point of my post is that if we compare
Wrong. Many don't.
Also, the point of your post escapes me.
> The only way I see of "proving" my point is to take two libraries,
> one in python and one in Lisp and analyze the average generality of the
> functions in both.
Go ahead, it's your time, feel free to waste it.
> Another prove of this point is that python is slower than Lisp and
> the reason of this is not simply bad programming.
Apparently you don't have a clue about Python either. Hint: the standard
python implementation compiles into bytecode, which is then executed on a
virtual machine (yes, I know about CPython, etc). Most (but not all)
Lisp implementations compile into machine code. Obviously the latter
will be faster.
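You can watch CPython's bytecode compilation directly with the standard `dis` module; note the single generic add instruction (`BINARY_ADD` on older CPython, `BINARY_OP` on 3.11+) whose type dispatch happens at run time, inside the VM loop:

```python
import dis

def add(a, b):
    return a + b

# CPython compiles the body to bytecode for its stack-based VM;
# the add is one generic instruction, dispatched on type at run time.
dis.dis(add)
```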
> But a low level language can construct libraries to use generic
> functions (Lisp use CLOS) but this is not the point.
But what _is_ your point? Oh wait, nevermind, this is c.l.l and you
don't need a point or a clue before you post general vague questions
about Lisp here. Point taken :-)
Tamas
On 13 oct, 00:34, Tamas K Papp <······@gmail.com> wrote:
> On Sun, 12 Oct 2008 14:57:09 -0700, grande.piedra wrote:
> > arithmetic operations. I think all the computer languages have in some
> > sense generic functions. The point of my post is that if we compare
>
> Wrong. Many don't.
>
> Also, the point of your post escapes me.
>
> > The only way I see of "proving" my point is to take two libraries,
> > one in python and one in Lisp and analyze the average generality of the
> > functions in both.
>
> Go ahead, it's your time, feel free to waste it.
>
> > Another prove of this point is that python is slower than Lisp and
> > the reason of this is not simply bad programming.
>
> Apparently you don't have a clue about Python either. Hint: the standard
> python implementation compiles into bytecode, which is then executed on a
> virtual machine (yes, I know about CPython, etc). Most (but not all)
> Lisp implementations compile into machine code. Obviously the latter
> will be faster.
>
> > But a low level language can construct libraries to use generic
> > functions (Lisp use CLOS) but this is not the point.
>
> But what _is_ your point? Oh wait, nevermind, this is c.l.l and you
> don't need a point or a clue before you post general vague questions
> about Lisp here. Point taken :-)
>
> Tamas
I try to answer this question: What _is_ your point?
I see in the computer language benchmarks that Lisp (SBCL) is near
the top.
One factor can be that Lisp uses type-specific functions, for example:
eq to compare symbols and not numbers.
I'm trying to measure or weigh whether this factor or feature,
incrementing the number of words in the language to obtain faster
code, is really good or not (it makes the language more difficult to
learn).
One can always increase speed by this trick: add the type of the
arguments to the function name, so you don't have to look in a hash
table or in a class.
read-integer, read-float, read-char, read-line, read-person ...
If you have all of these functions in your language, it is sure that it
is faster than another language with only a read-everything.
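A hedged Python sketch of the distinction being claimed here (the function names are hypothetical): a single generic reader has to discover the type by trial, while type-specific readers skip the search entirely.

```python
# Hypothetical "read-everything": try each parser until one fits.
def read_anything(text):
    for parse in (int, float, complex):
        try:
            return parse(text)
        except ValueError:
            pass
    return text  # fall back to the string itself

# Type-specific readers do no trial-and-error at all.
read_integer = int
read_float = float

print(read_anything("42"))     # 42
print(read_anything("3.5"))    # 3.5
print(read_anything("hello"))  # 'hello'
print(read_integer("42"))      # 42
```

Whether this dispatch cost actually dominates in practice is exactly what the rest of the thread disputes.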
So I try to make an equation considering factors like:
If you try to measure the speed of reading, you should take into
consideration not only static and dynamic typing but also the number
of reading functions, so this requires three factors:
static, dynamic and number of functions.
On Oct 13, 1:10 am, ·············@gmail.com wrote:
> On 13 oct, 00:34, Tamas K Papp <······@gmail.com> wrote:
>
>
>
> > On Sun, 12 Oct 2008 14:57:09 -0700, grande.piedra wrote:
> > > arithmetic operations. I think all the computer languages have in some
> > > sense generic functions. The point of my post is that if we compare
>
> > Wrong. Many don't.
>
> > Also, the point of your post escapes me.
>
> > > The only way I see of "proving" my point is to take two libraries,
> > > one in python and one in Lisp and analyze the average generality of the
> > > functions in both.
>
> > Go ahead, it's your time, feel free to waste it.
>
> > > Another prove of this point is that python is slower than Lisp and
> > > the reason of this is not simply bad programming.
>
> > Apparently you don't have a clue about Python either. Hint: the standard
> > python implementation compiles into bytecode, which is then executed on a
> > virtual machine (yes, I know about CPython, etc). Most (but not all)
> > Lisp implementations compile into machine code. Obviously the latter
> > will be faster.
>
> > > But a low level language can construct libraries to use generic
> > > functions (Lisp use CLOS) but this is not the point.
>
> > But what _is_ your point? Oh wait, nevermind, this is c.l.l and you
> > don't need a point or a clue before you post general vague questions
> > about Lisp here. Point taken :-)
>
> > Tamas
>
> I try to answer this question: What _is_ your point?
>
> I see in the computer language benchmark that Lisp (SBCL) is near
> the
> top.
>
> One factor can be that Lisp use specific type functions, example:
>
> eq to compare symbols and not numbers.
EQ is probably among the most type unspecific functions Lisp has.
EQ takes two arguments of ANY type and returns true if they are the
same object.
>
> I'm trying to measure or weight if this factor or feature:
> incrementing the number of words in the language to obtain faster
> code is really good or not (it make the language more difficult to
> learn).
>
> One always can increase speed by this trick, add to the function the
> type of the arguments, this way you don't have to look in a hash table
> or in a class.
>
> read-integer, read-float, read-char, read-line, read-person ...
>
> if you have all of this functions in your language it is sure that it
> is faster than other language with only a read-everything.
>
> So I try to make an equation considering factors like:
>
> If you try to measure the speed to read you should have in
> consideration not only static and dynamic typing but also the number
> of functions to read, so this requires three factors:
>
> Static, dynamic and number of functions.
Having only a few, but heavily overloaded, operations tends to make
things only worse, especially when an operation is overloaded for
totally unrelated domains like strings and numbers.
You read a + b but you can't see what it does, since it depends on
the types of a and b, which are probably nowhere to be seen in the
code. It does something like addition, but depending entirely on the
runtime context. Personally I prefer to read (APPEND a b) for an
append operation rather than an overloaded symbol +, which usually
means addition and not append.
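The overloading being objected to looks like this in Python, where one symbol performs unrelated operations depending on the run-time types:

```python
print(1 + 2)          # 3:         numeric addition
print("foo" + "bar")  # 'foobar':  string concatenation
print([1] + [2, 3])   # [1, 2, 3]: list concatenation

# Common Lisp spells these (+ 1 2), (concatenate 'string ...) and
# (append ...), so the operation is visible in the source without
# knowing the argument types.
```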
On 13 oct, 01:28, "·······@corporate-world.lisp.de" <······@corporate-world.lisp.de> wrote:
> On Oct 13, 1:10 am, ·············@gmail.com wrote:
>
>
>
> > On 13 oct, 00:34, Tamas K Papp <······@gmail.com> wrote:
>
> > > On Sun, 12 Oct 2008 14:57:09 -0700, grande.piedra wrote:
> > > > arithmetic operations. I think all the computer languages have in some
> > > > sense generic functions. The point of my post is that if we compare
>
> > > Wrong. Many don't.
>
> > > Also, the point of your post escapes me.
>
> > > > The only way I see of "proving" my point is to take two libraries,
> > > > one in python and one in Lisp and analyze the average generality of the
> > > > functions in both.
>
> > > Go ahead, it's your time, feel free to waste it.
>
> > > > Another prove of this point is that python is slower than Lisp and
> > > > the reason of this is not simply bad programming.
>
> > > Apparently you don't have a clue about Python either. Hint: the standard
> > > python implementation compiles into bytecode, which is then executed on a
> > > virtual machine (yes, I know about CPython, etc). Most (but not all)
> > > Lisp implementations compile into machine code. Obviously the latter
> > > will be faster.
>
> > > > But a low level language can construct libraries to use generic
> > > > functions (Lisp use CLOS) but this is not the point.
>
> > > But what _is_ your point? Oh wait, nevermind, this is c.l.l and you
> > > don't need a point or a clue before you post general vague questions
> > > about Lisp here. Point taken :-)
>
> > > Tamas
>
> > I try to answer this question: What _is_ your point?
>
> > I see in the computer language benchmark that Lisp (SBCL) is near
> > the
> > top.
>
> > One factor can be that Lisp use specific type functions, example:
>
> > eq to compare symbols and not numbers.
>
> EQ is probably among the most type unspecific functions Lisp has.
> EQ takes two arguments of ANY type and returns true of they are the
> same object.
>
>
>
>
>
> > I'm trying to measure or weight if this factor or feature:
> > incrementing the number of words in the language to obtain faster
> > code is really good or not (it make the language more difficult to
> > learn).
>
> > One always can increase speed by this trick, add to the function the
> > type of the arguments, this way you don't have to look in a hash table
> > or in a class.
>
> > read-integer, read-float, read-char, read-line, read-person ...
>
> > if you have all of this functions in your language it is sure that it
> > is faster than other language with only a read-everything.
>
> > So I try to make an equation considering factors like:
>
> > If you try to measure the speed to read you should have in
> > consideration not only static and dynamic typing but also the number
> > of functions to read, so this requires three factors:
>
> > Static, dynamic and number of functions.
>
> having only a few, but heavily overloaded operations tends to make
> things only worse.
> Especially when it is overloaded for domains that are totally
> unrelated
> like strings and numbers.
>
> you read a + b but you can't see what it does, since it depends on
> the types of a and b which
> are probably nowhere to see in the code. It does something like
> addition, but totally depending on the runtime
> context. Personally I prefer to read (APPEND a b) for an append
> operation and not read an overloaded
> symbol +, which usually means addition and not append.
You say: EQ is probably among the most type-unspecific functions Lisp
has. EQ takes two arguments of ANY type and returns true if they are
the same object.
Perhaps EQ only takes two pointers and compares them?
>
> You say: EQ is probably among the most type unspecific functions Lisp
> has.
> EQ takes two arguments of ANY type and returns true of they are the
> same object.
>
> Perhaps EQ only take two pointers and compare them?
A pedant like Joswig would call them places, but yes.
--------------
John Thingstad
On Oct 13, 7:26 am, "John Thingstad" <·······@online.no> wrote:
> > You say: EQ is probably among the most type unspecific functions Lisp
> > has.
> > EQ takes two arguments of ANY type and returns true of they are the
> > same object.
>
> > Perhaps EQ only take two pointers and compare them?
>
> A pedantic like Joswig would call it places, but yes.
>
> --------------
> John Thingstad
No thanks. I would not call it 'places'.
'pointers' or 'memory locations' gives much of the basic idea.
'place' is something else in Common Lisp.
On Sun, 12 Oct 2008 16:10:50 -0700, grande.piedra wrote:
> I'm trying to measure or weight if this factor or feature: incrementing
> the number of words in the language to obtain faster
> code is really good or not (it make the language more difficult to
> learn).
>
> One always can increase speed by this trick, add to the function the
> type of the arguments, this way you don't have to look in a hash table
> or in a class.
You are completely missing the point. This "trick" buys you very
little. The speed of CL does not come from this, it comes from two
factors:
1) most CL implementations compile into machine code,
2) the compilers are mature, and able to perform quite a few optimizations
Compared to these two, the "factor" you bring up is insignificant if
anything. BTW, method dispatch in CLOS is quite fast too, so I generally
don't worry about its overhead when I program.
As I read this thread, I notice that several people tried to explain this
to you, with little success.
Tamas
On 13 oct, 01:29, Tamas K Papp <······@gmail.com> wrote:
> On Sun, 12 Oct 2008 16:10:50 -0700, grande.piedra wrote:
> > I'm trying to measure or weight if this factor or feature: incrementing
> > the number of words in the language to obtain faster
> > code is really good or not (it make the language more difficult to
> > learn).
>
> > One always can increase speed by this trick, add to the function the
> > type of the arguments, this way you don't have to look in a hash table
> > or in a class.
>
> You are completely missing the point. This "trick" buys you very
> little. The speed of CL does not come from this, it comes from two
> factors:
>
> 1) most CL implementations compile into machine code,
>
> 2) the compilers are mature, and able to perform quite a few optimizations
>
> Compared to these two, the "factor" you bring up is insignificant if
> anything. BTW, method dispatch in CLOS is quite fast too, so I generally
> don't worry about its overhead when I program.
>
> As I read this thread, I notice that several people tried to explain this
> to you, with little success.
>
> Tamas
Hello Tamas:
You say: the compilers are mature, and able to perform quite a few
optimizations.
I absolutely agree with you, but this is perhaps because the
language uses so many functions.
For example, in Python you don't have symbols, you have only symbol
names, and this slows down the code. In Python you can't use EQ; you
use = and this overloads the language a lot, ...
I agree with you that compiling to native code is faster, but I'm
wondering what features of a language make better compilers possible
for the language and what the price of it is. Is this a vague idea?
On Sun, 12 Oct 2008 16:40:11 -0700, grande.piedra wrote:
> You say:the compilers are mature, and able to perform quite a few
> optimizations.
>
> I absolutely agree with you, but this is perhaps because the
> language use so many functions.
You just can't let go of your original hypothesis, can you?
> For example, in python you don't have symbols, you have only symbol-
> name, and this slow down the code. In python you can't use (eq, you use
> = and this overload a lot the language, ...
>
> I agree with you about compiling to native code is faster, but I'm
> wondering about what features of a language make better compilers for
> the language and what's the prize of it. Is this a vague idea?
Common Lisp was meant to be optimizable. You can declare types, and in
many cases the compiler will infer types for you without explicit
declaration (eg SBCL does a lot of this, even with low optimization
settings). Modern Lisp compilers are very sophisticated and take
advantage of declarations and type inference.
Consider this code:
(defun foo (n)
;; Warning: non-idiomatic, idiotic Lisp code follows.
(let ((k 0))
;; increment k n times
(dotimes (i n)
(incf k))
;; try to access an element of it --- but it is not a vector!
(aref k 12)))
When you compile, SBCL complains:
; in: DEFUN FOO
; (AREF K 12)
;
; note: deleting unreachable code
;
; caught WARNING:
; Asserted type ARRAY conflicts with derived type
; (VALUES UNSIGNED-BYTE &OPTIONAL).
; See also:
; The SBCL Manual, Node "Handling of Types"
This is not specific to the fact that aref expects arrays: the same
thing happens with the list accessor nth. Nor does it depend on the
code being compiled into machine code - type inference would help with
bytecode, too.
Also understand that Lisp hasn't always been as fast as it is now. In
fact, there is a persistent myth out there that Lisp is slow. So it is
quite amusing that you are looking for some secret that makes current CL
compiled code fast.
I still don't get your purpose. You are looking for a fast language. CL
is here. Just use it.
Tamas
On Mon, 13 Oct 2008 00:02:41 +0000, Tamas K Papp wrote:
> (defun foo (n)
> ;; Warning: non-idiomatic, idiotic Lisp code follows. (let ((k 0))
> ;; increment k n times
> (dotimes (i n)
> (incf k))
> ;; try to access an element of it --- but it is not a vector! (aref
> k 12)))
Sorry, linebreak got messed up. Should have been
(defun foo (n)
;; Warning: non-idiomatic, idiotic Lisp code follows.
(let ((k 0))
;; increment k n times
(dotimes (i n)
(incf k))
;; try to access an element of it --- but it is not a vector!
(nth k 12)))
On Mon, 13 Oct 2008 00:04:17 +0000, Tamas K Papp wrote:
> On Mon, 13 Oct 2008 00:02:41 +0000, Tamas K Papp wrote:
>
>
>> (defun foo (n)
>> ;; Warning: non-idiomatic, idiotic Lisp code follows. (let ((k 0))
>> ;; increment k n times
>> (dotimes (i n)
>> (incf k))
>> ;; try to access an element of it --- but it is not a vector! (aref
>> k 12)))
>
> Sorry, linebreak got messed up. Should have been
>
> (defun foo (n)
> ;; Warning: non-idiomatic, idiotic Lisp code follows. (let ((k 0))
> ;; increment k n times
> (dotimes (i n)
> (incf k))
> ;; try to access an element of it --- but it is not a vector! (nth k
> 12)))
Aargh. Pan sucks.
(defun foo (n)
(let ((k 0))
...
On Oct 13, 1:40 am, ·············@gmail.com wrote:
> On 13 oct, 01:29, Tamas K Papp <······@gmail.com> wrote:
>
>
>
>
>
> > On Sun, 12 Oct 2008 16:10:50 -0700, grande.piedra wrote:
> > > I'm trying to measure or weight if this factor or feature: incrementing
> > > the number of words in the language to obtain faster
> > > code is really good or not (it make the language more difficult to
> > > learn).
>
> > > One always can increase speed by this trick, add to the function the
> > > type of the arguments, this way you don't have to look in a hash table
> > > or in a class.
>
> > You are completely missing the point. This "trick" buys you very
> > little. The speed of CL does not come from this, it comes from two
> > factors:
>
> > 1) most CL implementations compile into machine code,
>
> > 2) the compilers are mature, and able to perform quite a few optimizations
>
> > Compared to these two, the "factor" you bring up is insignificant if
> > anything. BTW, method dispatch in CLOS is quite fast too, so I generally
> > don't worry about its overhead when I program.
>
> > As I read this thread, I notice that several people tried to explain this
> > to you, with little success.
>
> > Tamas
>
> Hello Tamas:
>
> You say:the compilers are mature, and able to perform quite a few
> optimizations.
>
> I absolutely agree with you, but this is perhaps because the
> language use so many functions.
No, it's because Lisp has many great implementation writers.
For proof, take a look at Stalin, a highly optimizing compiler for
Scheme, a minimalistic (at least before R6RS) dialect of Lisp.
>
> For example, in python you don't have symbols, you have only symbol-
> name, and this slow down the code. In python you can't use (eq, you
> use = and this overload a lot the language, ...
>
> I agree with you about compiling to native code is faster, but I'm
> wondering about what features of a language make better compilers for
> the language and what's the prize of it. Is this a vague idea?
The simpler the language, the easier it is to compile. It's usually
near impossible to beat C on speed, but you can come very close, and
you should ask yourself whether 20% faster code is worth 10 times more
time spent on development.
From: Alexander Schmolck
Subject: Re: static, dynamic and implicitely typed languages
Date:
Message-ID: <yfsfxn1l3ah.fsf@gmail.com>
·············@gmail.com writes:
> For example, in python you don't have symbols, you have only symbol-
> name, and this slow down the code. In python you can't use (eq, you
> use = and this overload a lot the language, ...
Wrong. You can use ``is`` which is pretty much identical to ``eq`` in common
lisp. Also try ``help(intern)``.
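Concretely (`sys.intern` in Python 3; `intern` was a builtin in Python 2):

```python
import sys

a = sys.intern("some-symbol-name")
b = sys.intern("some-symbol-name")

# Like EQ, ``is`` compares object identity, not contents:
print(a is b)   # True: interned strings share one object

x = "".join(["ab"] * 3)   # built at run time, not interned
y = "".join(["ab"] * 3)
print(x == y)   # True:  same contents
print(x is y)   # False in CPython: two distinct objects
```

So interned strings give you the symbol-like identity comparison the original poster said Python lacks.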
'as)
·············@gmail.com wrote:
> I should say that in a mature language
> ---------------------------------------------------------------
> more dynamic functions is proportional to slower code.
> ----------------------------------------------------------------
No, that's definitely wrong, and has been shown in several places.
(Self, Strongtalk, HotSpot VM, etc.)
Pascal
--
Lisp50: http://www.lisp50.org
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
On 13 oct, 00:42, Pascal Costanza <····@p-cos.net> wrote:
> ·············@gmail.com wrote:
> > I should say that in a mature language
> > ---------------------------------------------------------------
> > more dynamic functions is proportional to slower code.
> > ----------------------------------------------------------------
>
> No, that's definitely wrong, and has been shown in several places.
> (Self, Strongtalk, HotSpot VM, etc.)
>
> Pascal
>
> --
> Lisp50:http://www.lisp50.org
>
> My website:http://p-cos.net
> Common Lisp Document Repository:http://cdr.eurolisp.org
> Closer to MOP & ContextL:http://common-lisp.net/project/closer/
Thanks for the references about Self and Strongtalk, I know several
programming languages but nothing about theses.
How they archieve this speed?
What's the key factor in self to be so fast?
Why Python or Ruby don't get close to this one?
P.D. To Tamas clisp is code-byte and I think is faster than CPyton.
Many people are looking for the next dynamic and fast language
(better paralell, easy to learn, battery included, and ready for the
web) so this thinking is not so much vague ideas for wasting time.
(I'm having a good time learning always).
On Sun, 12 Oct 2008 16:25:42 -0700, grande.piedra wrote:
> How they archieve this speed?
> What's the key factor in self to be so fast? Why Python or Ruby don't
> get close to this one?
Maybe because they are badly designed scripting languages with little
emphasis on speed? Just a thought. Surprise: bash doesn't come close to
Common Lisp either! Who would have thought...
> P.D. To Tamas clisp is code-byte and I think is faster than CPyton.
Too bad for CPython. CLISP has bytecode, but it is still an optimizing
compiler.
> Many people are looking for the next dynamic and fast language
> (better paralell, easy to learn, battery included, and ready for the
> web) so this thinking is not so much vague ideas for wasting time. (I'm
OMG, so many buzzwords in a single sentence.
> having a good time learning always).
You don't appear to be learning anything.
Tamas
From: Marek Kubica
Subject: Re: static, dynamic and implicitely typed languages
Date:
Message-ID: <gd359q$4su$1@hoshi.visyn.net>
On Sun, 12 Oct 2008 23:36:42 +0000, Tamas K Papp wrote:
> Surprise: bash doesn't come close to Common Lisp either! Who would have
> thought...
We definitely need a JIT in bash for runtime-compiling these little
throwaway scripts. There's GNU Lightning, you hear me Google, we need a
student working on it in the next Summer of Code.
regards,
Marek
(who doesn't think Python is a badly designed language, but it has its
warts - so does CL)
·············@gmail.com wrote:
> On 13 oct, 00:42, Pascal Costanza <····@p-cos.net> wrote:
>> ·············@gmail.com wrote:
>>> I should say that in a mature language
>>> ---------------------------------------------------------------
>>> more dynamic functions is proportional to slower code.
>>> ----------------------------------------------------------------
>> No, that's definitely wrong, and has been shown in several places.
>> (Self, Strongtalk, HotSpot VM, etc.)
>>
>> Pascal
>>
>> --
>> Lisp50:http://www.lisp50.org
>>
>> My website:http://p-cos.net
>> Common Lisp Document Repository:http://cdr.eurolisp.org
>> Closer to MOP & ContextL:http://common-lisp.net/project/closer/
>
> Thanks for the references about Self and Strongtalk, I know several
> programming languages but nothing about theses.
>
> How they archieve this speed?
Read the papers.
> What's the key factor in self to be so fast?
Dynamic compilation, polymorphic inline caches, and similar techniques.
Not easy to describe everything in one posting. That's what papers are
there for.
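The flavor of a polymorphic inline cache can be sketched in a few lines of Python; this is only a toy model (in real systems like Self or V8 the cache lives inside the compiled code, not in user-level objects, and all names here are made up):

```python
# Toy monomorphic inline cache: the call site remembers the last
# receiver class and the method it resolved to, skipping the generic
# lookup as long as the type stays stable (the common case).
class CallSiteCache:
    def __init__(self, name):
        self.name = name
        self.cached_class = None
        self.cached_method = None
        self.hits = 0
        self.misses = 0

    def call(self, receiver):
        cls = type(receiver)
        if cls is self.cached_class:      # fast path: cache hit
            self.hits += 1
        else:                             # slow path: full lookup
            self.misses += 1
            self.cached_class = cls
            self.cached_method = getattr(cls, self.name)
        return self.cached_method(receiver)

class Square:
    def __init__(self, s): self.s = s
    def area(self): return self.s * self.s

site = CallSiteCache("area")
for s in range(1, 100):
    site.call(Square(s))
print(site.hits, site.misses)   # 98 1: one miss, then all hits
```

Since most call sites in real programs see only one receiver type, the expensive lookup is paid roughly once per site.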
> Why Python or Ruby don't get close to this one?
Because their designers knew several programming languages but nothing
about these.
Pascal
--
Lisp50: http://www.lisp50.org
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
On Oct 13, 1:19 pm, Pascal Costanza <····@p-cos.net> wrote:
> ·············@gmail.com wrote:
> > On 13 oct, 00:42, Pascal Costanza <····@p-cos.net> wrote:
> >> ·············@gmail.com wrote:
> >>> I should say that in a mature language
> >>> ---------------------------------------------------------------
> >>> more dynamic functions is proportional to slower code.
> >>> ----------------------------------------------------------------
> >> No, that's definitely wrong, and has been shown in several places.
> >> (Self, Strongtalk, HotSpot VM, etc.)
>
> >> Pascal
>
> >> --
> >> Lisp50:http://www.lisp50.org
>
> >> My website:http://p-cos.net
> >> Common Lisp Document Repository:http://cdr.eurolisp.org
> >> Closer to MOP & ContextL:http://common-lisp.net/project/closer/
>
> > Thanks for the references about Self and Strongtalk, I know several
> > programming languages but nothing about theses.
>
> > How they archieve this speed?
>
> Read the papers.
>
> > What's the key factor in self to be so fast?
>
> Dynamic compilation, polymorphic inline caches, and similar techniques.
> Not easy to describe everything in one posting. That's what papers are
> there for.
>
> > Why Python or Ruby don't get close to this one?
>
> Because their designers knew several programming languages but nothing
> about these.
I believe it's because speed doesn't matter very much for the
applications Python or Ruby are aimed at.
If all you want to do is a simple web-based, database-backed app, and
if the database and networking are hogging 90% of the time, then
speeding up the app can't get you more than 10% faster.
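That reasoning is Amdahl's law; a quick check of the numbers:

```python
def overall_speedup(accelerated_fraction, factor):
    """Amdahl's law: whole-program speedup when only
    `accelerated_fraction` of the runtime is sped up by `factor`."""
    return 1.0 / ((1.0 - accelerated_fraction)
                  + accelerated_fraction / factor)

# If database and networking take 90% of the time, even an infinitely
# faster language only touches the remaining 10%:
print(overall_speedup(0.10, 1000))  # ~1.11x at best
print(overall_speedup(0.10, 2))     # ~1.05x for "twice as fast" code
```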
Or even better: if you have a quad core doing nothing 99% of the time,
then you have to ask yourself whether it is worth it.
As for myself, I believe that speed is relatively unimportant; if
something is more convenient to code I would take it even if it's an
order of magnitude slower.
bobi
>
> Pascal
>
> --
> Lisp50:http://www.lisp50.org
>
> My website:http://p-cos.net
> Common Lisp Document Repository:http://cdr.eurolisp.org
> Closer to MOP & ContextL:http://common-lisp.net/project/closer/
From: Matthias Buelow
Subject: Re: static, dynamic and implicitely typed languages
Date:
Message-ID: <6li6j9FcjpeeU1@mid.dfncis.de>
·············@gmail.com wrote:
> How they archieve this speed?
> What's the key factor in self to be so fast?
> Why Python or Ruby don't get close to this one?
Forget the micro-optimization nonsense for a while; the best performance
for your program comes from its programmers and maintainers being able
to understand it and change it without getting tangled up in insane
bureaucracy. Lisp is conducive to that, probably Python too (never used
it, don't like the indenting thing). Some other popular languages... not so.
·············@gmail.com schrieb:
> Thanks for the references about Self and Strongtalk, I know several
> programming languages but nothing about theses.
>
> How they archieve this speed?
> What's the key factor in self to be so fast?
> Why Python or Ruby don't get close to this one?
There is no such thing as a "fast programming language".
You can only look at specific implementations.
And the efficiency of the generated code depends on the quality and
intelligence of the compiler that you used.
In some decades a compiler might be more intelligent than any human
being and use your code, in whatever language it is written, only to
rewrite it in highly optimized machine code, as if the task were
trivial and a human had written perfect or nearly perfect code to
solve it.
This means that in some decades all programs (regardless of the
languages used) can be compiled, and they all will have identical
run-time efficiency.
And by the way, the many names for functions in Lisp (you listed,
for example, eq, eql, equal and equalp, plus the separate operators
for strings and numbers) are there because they do different things,
and because they support the readability of code.
This does not necessarily have to be connected with the speed of the
compiled code. Just always keep that highly intelligent compiler in mind...
André
--
On 15 oct, 14:09, André Thieme <address.good.until.
···········@justmail.de> wrote:
Hello André:
I give more importance to speed. If Python or Ruby get faster than
Lisp, then perhaps the Lisp community is going to shrink.
Imagine a Lisp faster than C and with small exe files; it could be
the language of choice for developing compilers, OSes, ...
That is, in a market full of competing technologies there is no place
for slow languages; just wait and see.
About the future intelligence of compilers: we only have to
extrapolate; it took 50 years to get to today's Lisp compilers, and
300 more to get the rest (it is more difficult to grow when you are
near the top of your capabilities).
Best, big-rock (in Spanish).
·············@gmail.com writes:
> I give more importance to speed. If Python or Ruby get faster than
> Lisp, then perhaps the Lisp community is going to shrink.
>
> Imagine a Lisp faster than C and with small exe files; it could be
> the language of choice for developing compilers, OSes, ...
We can imagine, but can you finance the development of such a CL compiler?
Let's start on the basis of ten man-years, or about $1M.
Would you bet $1M that with a CL compiler generating faster and
smaller code than C compilers, people would become blind to the
parentheses?
(Even if not, I'd still be interested in working on this project, if
you've got the money ;-))
> That is, in a market full of competing technologies there is no place
> for slow languages; just wait and see.
There is no such thing as "slow languages". Only slow implementations.
And there's a place for slow implementations, as long as they have
interesting features. I use clisp (which is deemed slower than sbcl),
because I like some features of clisp better than the corresponding in
sbcl. Of course, sometimes I have to deploy to sbcl, code I debugged
in clisp. And I've got ideas of projects that would need yet another
implementation for deployment, such as ecl, or abcl.
> About the future intelligence of compilers: we only have to
> extrapolate; it took 50 years to get to today's Lisp compilers, and
> 300 more to get the rest (it is more difficult to grow when you are
> near the top of your capabilities).
You won't have to wait so long; intelligent compilers will come along
with general AI at the Singularity, or slightly before.
> Best, big-rock (in spanish).
--
__Pascal Bourguignon__
On 15 oct, 16:32, ····@informatimago.com (Pascal J. Bourguignon)
wrote:
Hello Pascal.
Sorry, I don't have $1M for the project. Perhaps at the next
Singularity summit we can ask the intelligent compiler to generate
for us a financial system that earns the $1M ))
For now, I should say what the real Singularity is: there is only one
kind of intelligence on this planet with the power to create a full
symbolic language, and that is the human mind. You can speculate with
Richard Dawkins about the probability of intelligence in our universe.
A long time ago, C overtook Basic because of its speed.
P.S. To Toy: I use Maxima; I understand the problem of working
with a legacy system, with many authors, versions of Lisp, ... Perhaps
this is one reason why the code is not a good one.
On Wed, 15 Oct 2008 08:51:37 -0700 (PDT), ·············@gmail.com
wrote:
> For now, I should say what the real Singularity is: there is only one
> kind of intelligence on this planet with the power to create a full
> symbolic language, and that is the human mind. You can speculate with
> Richard Dawkins about the probability of intelligence in our universe.
There is no intelligent life on this planet. If life exists
elsewhere, the de facto proof that it is intelligent is that it wants
nothing to do with us.
George
In article <··············@pbourguignon.anevia.com>,
···@informatimago.com (Pascal J. Bourguignon) wrote:
> ·············@gmail.com writes:
> > I give more importance to speed. If python or ruby get faster than
> > Lisp then perhaps lisp community is going to shring.
> >
> > Imagine a Lisp faster than C and with small exe files, it could be
> > the language of choice for development compiler, OS, ...
>
> We can imagine, but can you finance the development of such a CL compiler?
> Let's start on the basis of ten year.man, or about $1M.
>
> Would you bet $1M that with a CL compiler generating faster and
> smaller code than C compilers, people would become blind to the
> parentheses?
>
>
> (Even if not, I'd still be interested in working on this project, if
> you've got the money ;-))
Years ago (end of the 80s and later)
lots of people were thinking about how to increase
the application delivery capability of Lisp. It spawned
a lot of projects. Visible results were new dialects and
implementations. For example EuLisp and Dylan.
The Germans developed a compiler called CLICC from 1991
to 1994.
http://www.informatik.uni-kiel.de/~wg/clicc.html
CLiCC
The Common Lisp to C Compiler
CLiCC is a Common Lisp to C Compiler. It generates C-executables
or modules from Common Lisp application programs. CLiCC is
intended to be used as an addon to existing Common Lisp systems
for generating portable applications. CLiCC supports a strict
and very large subset CL0 (Common Lisp0) of Common Lisp + CLOS.
The compiler is available under GPL from above page.
If somebody wants to develop a Lisp compiler that generates
small exe files, this is a good starting point.
--
http://lispm.dyndns.org/
Rainer Joswig <······@lisp.de> writes:
>> (Even if not, I'd still be interested in working on this project, if
>> you've got the money ;-))
I was thinking more about something including global analysis, tree
shakers, etc.
That said, I didn't evaluate CLiCC for executable size; it might
indeed give good results here too (as would ECL and GCL, I'd guess).
--
__Pascal Bourguignon__
·············@gmail.com wrote:
> Hello André:
>
> I give more importance to speed. If Python or Ruby get faster than
> Lisp, then perhaps the Lisp community is going to shrink.
Currently some Lisps (SBCL) produce code which for basic tasks is
about twice as fast as that of the mainstream Python implementation.
And for less trivial tasks SBCL, for example, can perform 100x better.
The language Ruby can never be faster than Lisp, and the reason is that
one can't measure the speed of a language. It's just a bunch of
definitions which don't move.
For example, the number 19 is also not faster than 86. It's the same
with languages. But implementations, on the other hand, can
interpret/compile code in such a way that it performs better or worse
(speedwise).
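As a concrete illustration of the kind of measurement behind such claims, here is a hedged micro-benchmark sketch in Common Lisp. The FIB function and its declarations are illustrative, not from this thread, and absolute times vary widely by implementation, settings and machine:

```lisp
;; Illustrative only: a naive Fibonacci with type and optimization
;; declarations, the usual shape of shootout-style comparisons.
(declaim (ftype (function (fixnum) fixnum) fib))

(defun fib (n)
  (declare (type fixnum n)
           (optimize (speed 3) (safety 1)))
  (if (< n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))

;; TIME prints run time and consing to *trace-output*.
(time (fib 30))   ; => 832040
```

With the declarations in place a compiler like SBCL can open-code the fixnum arithmetic; removing them typically leaves generic arithmetic in the inner calls and makes the timing noticeably worse.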
> Imagine a Lisp faster than C and with small exe files; it could be
> the language of choice for developing compilers, OSes, ...
In some other posting in this thread I said that I don't see it as an
unthinkable thought that Lisp programs may run faster than C programs
as soon as something complex is done.
Just look at the regular expression engine of Perl, which is written
in C. It was developed by several people, used a mature compiler, and
had several years to mature itself.
Now Edi Weitz, on his own, programmed a regular expression engine for
Lisp within a few months. In many implementations of Lisp his regex
engine outperforms the one written in C.
Edi is a clever guy, but I doubt he would have written this engine the
same way if he had been forced to do it in C. And I doubt he would
have been able to write, in so short a time, a C engine that performs
as well as his regex engine written in Lisp.
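The engine in question is Edi Weitz's CL-PPCRE; a minimal usage sketch, assuming Quicklisp is available to load it:

```lisp
;; Load Edi Weitz's regex engine (assumes Quicklisp is installed).
(ql:quickload :cl-ppcre)

;; CREATE-SCANNER compiles the pattern once; in a compiling Lisp the
;; scanner becomes a closure over specialized matching code, which is
;; part of why it can compete with Perl's C engine.
(let ((scanner (cl-ppcre:create-scanner "a+b")))
  (cl-ppcre:scan scanner "xxaaab"))   ; match runs from position 2 to 6
```

Reusing the compiled scanner across calls, rather than passing the pattern string each time, is where the speed comes from in inner loops.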
In the 90-line programs of the computer language shootout it is
different, of course. Those well-understood problems can be solved in
C in such a way that today's compilers can generate very efficient code.
> About the future intelligence of compilers: we only have to
> extrapolate; it took 50 years to get to today's Lisp compilers, and
> 300 more to get the rest (it is more difficult to grow when you are
> near the top of your capabilities).
Just imagine that Sun and/or Microsoft spent 50% of the money they
put into the development of C# and Java on developing their own
implementation of CL. LispWorks and Franz don't have enough money to
constantly add new libs, improve the compiler, and also do customer
support. They would both benefit from getting a few hundred more
Lispers to write/extend all kinds of libs.
There is simply not enough money around.
So, we got this far with Lisp, although the money flow has stopped
since the early 90s.
André
--
André Thieme wrote:
> In some other posting in this thread I said that [...]
Nope, I was mistaken. I first wanted to post it, but then I didn't.
:-)
André
--
From: Waldek Hebisch
Subject: Re: static, dynamic and implicitely typed languages
Date:
Message-ID: <gdom5m$6o4$1@z-news.pwr.wroc.pl>
André Thieme <······························@justmail.de> wrote:
> ·············@gmail.com wrote:
>
> > Thanks for the references about Self and Strongtalk; I know several
> > programming languages but nothing about these.
> >
> > How do they achieve this speed?
> > What's the key factor in Self being so fast?
> > Why don't Python or Ruby get close to it?
>
> There is no such thing as a “fast programming language”.
> You can only look at specific implementations.
> And the efficiency of the generated code depends on the quality and
> intelligence of the compiler that you used.
But there are languages where the effort to write a compiler
generating fast code is prohibitive. As a little sampler, consider
Ucblogo: the main control structures are implemented as macros, and
redefinition of a macro takes immediate effect. So if you redefine
your loop in one iteration, the next iteration will see the new
definition.
Also, you seem to assume whole-program compilation; however, in the
real world a program has to interact with other parts of the computer
system. Once you have rich interaction you may be forced to do a lot
of useless computation simply to preserve correctness. As a silly
example, consider a language which at any given time allows you to
recall the past execution trace. Even if the actual computation
requests no trace information, you will still be forced to remember a
lot of information so that you can reconstruct the trace (in a
program copying a large collection of random numbers you will be
forced to remember all of the numbers).
Today practical languages try to avoid insane ideas, but
some popular features are pretty expensive with known
methods. With better compilers new features will
arrive...
--
Waldek Hebisch
·······@math.uni.wroc.pl
·············@gmail.com writes:
> I think that Lisp compiler get their power because in Lisp you
> usually declare implicitely the type of the argument:
>
> (person-name "John")
> (char "sure I'm a string" 1)
> (car '(sure this is a list))
> (= A B) ;; sure A is a number....
Some compilers take advantage of this to a great degree, some to a
lesser degree. You can also explicitly declare the types of variables,
as in this function:
(defun foo (a b c)
  (declare (type number a)
           (type fixnum b)
           (type cons c))
  (cons (+ a (* 2 b)) c))
The compiler obviously knows that the function must return a list, but
it *also* knows that it can multiply B by 2 really quickly because a
FIXNUM is a small number, whereas the addition (+ A (* 2 B)) will
likely be slower because A can be any kind of number.
(I don't know if there's any special-casing if only one of the
arguments to a math op is a FIXNUM. Probably not worth it.)
Obviously you would only bloat your code with explicit declarations if
speed is important. (And how do we know speed is important? We
profile. Guessing means we guess wrong.)
> So I should say that Lisp is weakly implicitely typed.
No, you should not.
B was a weakly, implicitly typed language. In B, every variable was a
single machine word (32 bits on a 32-bit computer, 16 bits on a 16-bit
computer, etc.) and the compiler didn't know if you were multiplying a
floating-point number by an address. It had no static typing at all
(so it was dynamic) and it didn't enforce typing rules (so it was
weakly typed).
B was the ancestor of C, which is a semi-strong, statically typed
language. It knows if you are trying to multiply by an address and it
will stop you, but there are casts to get around all type
checks. (That's why it's only semi-strongly typed.) It also only
allows static typing (the type of every variable must be known at
compile time), though you can emulate implicit typing by casting
around the type system to the point of turning it off completely.
Lisp, on the other hand, is a *strongly*, implicitly typed
language. (Note that implicit typing is more commonly called dynamic
typing. They are the same thing.) It does not require static type
declarations but it *does* keep track of *all* data types and it
enforces *all* type rules *all* the time. There is no way to cast
something in Lisp. It is like Java in this respect, except Java is
statically typed. Strong typing is needed for garbage collection to
work efficiently. (For example, garbage collectors get confused if a
pointer is modified, so casting pointers to integers would make
garbage collectors need to be a lot more cautious and a lot less
efficient.)
Haskell has static, strong, *inferred* typing: The compiler requires
that the type of every variable be known at compile time (static
typing), but it is smart enough to infer (figure out) the type for
you. In fact, its type system is complex enough it can catch some
kinds of bugs just by compiling the program. (Not as many as it would
have to for me to use it regularly, but it is pretty cool.) Haskell
also doesn't allow casting. It's like Java's smarter brother who went
to college and majored in mathematics. ;)
(Yes, it does annoy me to see these terms misused. Calling Lisp weakly
typed is especially frustrating because it implies Lisp runtime
systems can't catch type errors.)
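A short REPL sketch of the strong-but-dynamic behavior described above; it relies only on standard condition types, and with default safety settings the check happens at run time:

```lisp
;; Strong typing: the bad operand is detected and signaled as a
;; TYPE-ERROR condition, never silently reinterpreted.
(handler-case (+ 1 "two")
  (type-error (c)
    (format nil "~S is not of type ~S"
            (type-error-datum c)
            (type-error-expected-type c))))
```

In a weakly typed language the bit pattern of the string would simply be used as a number; here the runtime refuses and hands you a condition object you can inspect or recover from.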
Chris Barts <··············@gmail.com> wrote:
+---------------
| You can also explicitly declare the types of variables,
| as in this function:
|
| (defun foo (a b c)
|   (declare (type number a)
|            (type fixnum b)
|            (type cons c))
|   (cons (+ a (* 2 b)) c))
|
| The compiler obviously knows that the function must return a list, but
| it *also* knows that it can multiply B by 2 really quickly because a
| FIXNUM is a small number, whereas the addition (+ A (* 2 B)) will
| likely be slower because A can be any kind of number.
+---------------
To really get the maximum speed, you have to be able to assure the
compiler that the *result* of multiplying B by 2 will also still be
a FIXNUM. If you *are* sure of that, then you can tell a CL compiler
about it this way:
(defun foo (a b c)
  (declare (type number a)
           (type fixnum b)
           (type cons c))
  (cons (+ a (the fixnum (* 2 b))) c))
As you note, this may not be of much benefit, since the compiler
will still have to use fully-generic "+" when adding "A"... :-{
-Rob
-----
Rob Warnock <····@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
In article <································@speakeasy.net>,
····@rpw3.org (Rob Warnock) wrote:
> Chris Barts <··············@gmail.com> wrote:
> +---------------
> | You can also explicitly declare the types of variables,
> | as in this function:
> |
> | (defun foo (a b c)
> |   (declare (type number a)
> |            (type fixnum b)
> |            (type cons c))
> |   (cons (+ a (* 2 b)) c))
> |
> | The compiler obviously knows that the function must return a list, but
> | it *also* knows that it can multiply B by 2 really quickly because a
> | FIXNUM is a small number, whereas the addition (+ A (* 2 B)) will
> | likely be slower because A can be any kind of number.
> +---------------
>
> To really get the maximum speed, you have to be able to assure the
> compiler that the *result* of multiplying B by 2 will also still be
> a FIXNUM. If you *are* sure of that, then you can tell a CL compiler
> about it this way:
>
> (defun foo (a b c)
>   (declare (type number a)
>            (type fixnum b)
>            (type cons c))
>   (cons (+ a (the fixnum (* 2 b))) c))
>
> As you note, this may not be of much benefit, since the compiler
> will still have to use fully-generic "+" when adding "A"... :-{
But see this (simplifying your example):
(defun foo (a b)
  (declare (type number a)
           (type fixnum b))
  (+ a (the fixnum (* 2 b))))

(defun bar (a b c)
  (declare (type fixnum a b c)
           (inline foo))
  (the fixnum (+ a (foo b c))))
Which can be seen as something like this:
(defun foobar (a b c)
  (declare (type fixnum a b c))
  (the fixnum (+ a (+ b (the fixnum (* 2 c))))))
So the type of the arguments gets propagated
into inlined functions. So, FOO may use
a generic +. But BAR may use a FIXNUM +
in the inlined code of FOO.
--
http://lispm.dyndns.org/
Rainer Joswig <······@lisp.de> wrote:
+---------------
| ····@rpw3.org (Rob Warnock) wrote:
| > Chris Barts <··············@gmail.com> wrote:
| > +---------------
| > | You can also explicitly declare the types of variables,
| > | as in this function: ...
| > +---------------
| >
| > To really get the maximum speed, you have to be able to assure the
| > compiler that the *result* of multiplying B by 2 will also still be
| > a FIXNUM. If you *are* sure of that, then you can tell a CL compiler
| > about it this way:
| >
| > (defun foo (a b c)
| >   (declare (type number a)
| >            (type fixnum b)
| >            (type cons c))
| >   (cons (+ a (the fixnum (* 2 b))) c))
| >
| > As you note, this may not be of much benefit, since the compiler
| > will still have to use fully-generic "+" when adding "A"... :-{
|
| But see this (simplifying your example):
+---------------
[Chris Barts's example, actually...]
+---------------
| (defun foo (a b)
|   (declare (type number a)
|            (type fixnum b))
|   (+ a (the fixnum (* 2 b))))
+---------------
That's just what I wrote above, less the CONS with C.
+---------------
| (defun bar (a b c)
|   (declare (type fixnum a b c)
|            (inline foo))
|   (the fixnum (+ a (foo b c))))
+---------------
But A was given in the original example as a NUMBER, *not* a FIXNUM!
You can't just change the rules in the middle like that.
+---------------
| Which can be seen as something like this:
|
| (defun foobar (a b c)
|   (declare (type fixnum a b c))
|   (the fixnum (+ a (+ b (the fixnum (* 2 c))))))
|
| So the type of the arguments gets propagated into inlined functions.
| So, FOO may use a generic +. But BAR may use a FIXNUM +
| in the inlined code of FOO.
+---------------
(*sigh*) Even if you change the rules so that A is a FIXNUM
(not a NUMBER), your re-write *still* isn't correct unless
you *know* that the intermediate sums are all still FIXNUMs!!
As it stands above, you're lying to the compiler. Even if you
*know* that the final result (+ A (+ B (THE FIXNUM (* 2 C))))
is a FIXNUM, you *DON'T* know [or, if you do know, you
haven't yet promised the compiler] that the intermediate sum
(+ B (THE FIXNUM (* 2 C))) is a FIXNUM. Consider what happens
when C is positive, B is MOST-POSITIVE-FIXNUM, and A is a
negative FIXNUM less than (- (* 2 C)). The sum with B will
push the intermediate result into a BIGNUM [*expensive!*],
but the sum with A will bring it back into the FIXNUM range.
But if you *know* that can't happen, you could tell the
compiler about it and make the code faster still:
(defun foobar (a b c)
  (declare (type fixnum a b c))
  (the fixnum (+ a (the fixnum (+ b (the fixnum (* 2 c)))))))
But in these kinds of cases it might be better to just tell
the compiler the actual value ranges of the variables, and
then you don't have to sprinkle THE around all over the place
[if the compiler's type propagation is smart enough], e.g.:
(defun foobar (a b c)
  (declare (type (integer -1000 1000) a b)
           (type (integer -500 500) c))
  (+ a b (* 2 c))) ; *Must* be (INTEGER -3000 3000), so no THE needed.
-Rob
p.s. If you compile the latter with (SAFETY 0) in CMUCL, it will
even eliminate the range checks on the args, giving this code with
all the ops inlined (on x86):
48AEE540: .ENTRY "LAMBDA (A B C)"(a b c) ; (FUNCTION
; ((INTEGER -1000 1000) ..))
58: POP DWORD PTR [EBP-8]
5B: LEA ESP, [EBP-32]
5E: MOV EBX, EDX ; [:NON-LOCAL-ENTRY]
60: LEA EDX, [EBX+EDI] ; No-arg-parsing entry point
; [:NON-LOCAL-ENTRY]
63: LEA EAX, [ESI*2]
6A: ADD EDX, EAX
6C: MOV ECX, [EBP-8] ; [:BLOCK-START]
6F: MOV EAX, [EBP-4]
72: ADD ECX, 2
75: MOV ESP, EBP
77: MOV EBP, EAX
79: JMP ECX
-----
Rob Warnock <····@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
On Mon, 13 Oct 2008 01:39:24 -0600, Chris Barts wrote:
> regularly, but it is pretty cool.) Haskell also doesn't allow casting.
> It's like Java's smarter brother who went to college and majored in
> mathematics. ;)
Good one, I will save this :-)
Tamas
From: Kaz Kylheku
Subject: Re: static, dynamic and implicitely typed languages
Date:
Message-ID: <20081016101238.342@gmail.com>
On 2008-10-12, ·············@gmail.com <·············@gmail.com> wrote:
>
>
> I think that Lisp compiler get their power because in Lisp you
> usually declare implicitely the type of the argument:
>
> (person-name "John")
> (char "sure I'm a string" 1)
These are function calls, not declarations. Function return types
are not statically determined in Lisp.
(But we have a way to declare functions, for the major purpose of
optimization and the minor purpose of getting some additional
checks from compilers).
> (car '(sure this is a list))
The type of (car x) is that of whatever is stored in the corresponding field of
the cons cell X, which could be absolutely anything: number, string,
function, struct ...
> (= A B) ;; sure A is a number....
What kind of number?
Thanks for playing.
> A = "John Smith"
>
> A.name versus (person-name "John Smith")
A.name is just syntactic sugar for (name A).
> So I should say that Lisp is weakly implicitely typed.
No, you shouldn't.
On Thu, 16 Oct 2008 17:23:39 +0000, Kaz Kylheku wrote:
> These are function calls, not declarations. Function return types are
> not statically determined in Lisp.
So is there a way to declare the return type of your function? I know
that SBCL sometimes figures it out, but I don't know how to declare it.
Tamas
Tamas K Papp wrote:
> On Thu, 16 Oct 2008 17:23:39 +0000, Kaz Kylheku wrote:
>
>> These are function calls, not declarations. Function return types are
>> not statically determined in Lisp.
>
> So is there a way to declare the return type of your function? I know
> that SBCL sometimes figures it out, but I don't know how to declare it.
>
> Tamas
Yes there is, take a look at this example:
(declaim (ftype (function (fixnum fixnum) fixnum) test)
         (optimize (speed 3) (safety 0) (debug 0) (space 0))
         #+lispworks (fixnum-safety 0))
(defun test (a b)
  (declare (fixnum a b))
  (the fixnum (+ a b)))

(defun use-test ()
  (let* ((a 10)
         (b 20)
         (c (+ a b)))
    (declare (fixnum a b c))
    (the fixnum (+ (test a b) c))))

(disassemble 'test)
(disassemble 'use-test)
Under LispWorks:
;;; Safety = 3, Speed = 1, Space = 1, Float = 1, Interruptible = 0
;;; Compilation speed = 1, Debug = 2, Fixnum safety = 3
;;; Source level debugging is on
;;; Source file recording is on
;;; Cross referencing is on
; (TOP-LEVEL-FORM 1)
; (TOP-LEVEL-FORM 2)
;;;*** Warning between functions: Unrecognised proclamation
(FIXNUM-SAFETY 0)
; TEST
; USE-TEST
; (TOP-LEVEL-FORM 3) -- That would be TEST
2008BC7A:
0: 8B7C2404 move edi, [esp+4]
4: 03C7 add eax, edi
6: FD std
7: C20400 ret 4
; (TOP-LEVEL-FORM 4) -- That would be USE-TEST
2008FA42:
0: 55 push ebp
1: 89E5 move ebp, esp
3: 6A28 pushb 28
5: B502 moveb ch, 2
7: B850000000 move eax, 50
12: FF1554ED0922 call [2209ED54] ; TEST
18: 83C078 add eax, 78
21: FD std
22: C9 leave
23: C3 ret
24: 90 nop
25: 90 nop
;;; Compilation finished with 1 warning, 0 errors.
Now change
(declaim (ftype (function (fixnum fixnum) fixnum) test)
         (optimize (speed 3) (safety 0) (debug 0) (space 0))
         #+lispworks (fixnum-safety 0))
To
(declaim (ftype (function (fixnum fixnum) T) test)
         (optimize (speed 3) (safety 0) (debug 0) (space 0))
         #+lispworks (fixnum-safety 0))
You get:
;;; Safety = 3, Speed = 1, Space = 1, Float = 1, Interruptible = 0
;;; Compilation speed = 1, Debug = 2, Fixnum safety = 3
;;; Source level debugging is on
;;; Source file recording is on
;;; Cross referencing is on
; (TOP-LEVEL-FORM 1)
; (TOP-LEVEL-FORM 2)
;;;*** Warning between functions: Unrecognised proclamation
(FIXNUM-SAFETY 0)
; TEST
; USE-TEST
; (TOP-LEVEL-FORM 3) -- That would be TEST
200EA412:
0: 8B7C2404 move edi, [esp+4]
4: 03C7 add eax, edi
6: FD std
7: C20400 ret 4
; (TOP-LEVEL-FORM 4) -- That would be USE-TEST
200F0172:
0: 55 push ebp
1: 89E5 move ebp, esp
3: 6A28 pushb 28
5: B502 moveb ch, 2
7: B850000000 move eax, 50
12: FF1554ED0922 call [2209ED54] ; TEST
18: A803 testb al, 3
20: 750C jne L1
22: 89C7 move edi, eax
24: 83C778 add edi, 78
27: 7005 jo L1
29: FD std
30: 89F8 move eax, edi
32: C9 leave
33: C3 ret
L1: 34: FF7500 push [ebp]
37: 83ED04 sub ebp, 4
40: 8B7508 move esi, [ebp+8]
43: 897504 move [ebp+4], esi
46: 894508 move [ebp+8], eax
49: B878000000 move eax, 78
54: C9 leave
55: E954D30100 jmp 2010D502 ; #<Function
SYSTEM::*%+$ANY-STUB 2010D502>
60: 90 nop
61: 90 nop
;;; Compilation finished with 1 warning, 0 errors.
So the function is still getting compiled the same way (TEST), but the
place where it's called has changed.