Did anybody else see this:
http://java.sun.com/developer/technicalArticles/Interviews/livschitz_qa.html
As a guy who has been programming in Java for close to seven years now and
who is now trying to learn CL rabidly, I found it particularly interesting.
I think my conversion to CL is starting to stick, because about all I could
think when I was reading it was "I think Lisp solves all this."
In particular, I found it sort of funny to see somebody from Sun basically
saying that "objects are great, but you need better abstractions in order
to keep battling complexity, and not everything is represented well by an
object." As I read this, I thought, "Wow, that's a big change." Have we
finally hit the post-object era of programming?
Now, not to get down on objects. I love them most of the time and I find
that I have started to think quite naturally in objects after a decade of
OOP in C++ and Java. That said, I think this article actually hits some
nails right on the head. In particular, objects don't work for everything
and sometimes I find the syntax overhead really annoying (basically, being
forced to fit everything into an object syntax means you type a lot of
overhead stuff even when you don't really need an object). Actually, the ability to
mix C with C++ was sort of nice this way, though I dropped C++ for Java
because I found the C++ syntax just a nightmare (which is sort of funny
because CL syntax always has this bad rap going on and I'm actually finding
that I don't mind the parens a bit now).
What I find that I'm really liking about Lisp is, if I want objects, there
they are. While I still haven't dived into CLOS, I'm expecting a
well-designed, powerful object system. BUT, if I don't want objects, I'm
not forced to use them. In fact, I can program in just about any
programming style and Lisp seems to accommodate my wishes quite well. Using
macros, I can adapt it just about any way that I want. I'm going through
PAIP right now and it's interesting to see the different programming styles
there alone.
-- Dave
Dave Roberts wrote:
> Did anybody else see this:
>
> http://java.sun.com/developer/technicalArticles/Interviews/livschitz_qa.html
>
> As a guy who has been programming in Java for close to seven years now and
> who is now trying to learn CL rabidly, I found it particularly interesting.
> I think my conversion to CL is starting to stick, because about all I could
> think when I was reading it was "I think Lisp solves all this."
I had the same thought.
> mix C with C++ was sort of nice this way, though I dropped C++ for Java
> because I found the C++ syntax just a nightmare (which is sort of funny
> because CL syntax always has this bad rap going on ...
Yeah, C++ is coding via punctuation you really have to get right or face
an inscrutable torrent of compiler errors (well, to newbies at least),
and people think Lisp syntax is a negative. Chya!
> and I'm actually finding
> that I don't mind the parens a bit now).
How long did it take?
>
> What I find that I'm really liking about Lisp is, if I want objects, there
> they are. While I still haven't dived into CLOS, I'm expecting a
> well-designed, powerful object system.
I read somewhere it is the only OO variant that satisfies all the
requirements specified by some OO standards group unrelated to any
specific language. I gotta see if I can track that down, but it might
just have been a passing observation in an NG somewhere.
kenny
--
http://tilton-technology.com
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
Your Project Here! http://alu.cliki.net/Industry%20Application
Kenny Tilton wrote:
> Yeah, C++ is coding via punctuation you really have to get right or face
> an inscrutable torrent of compiler errors (well, to newbies at least),
> and people think Lisp syntax is a negative. Chya!
Yea, exactly. I found that C++ was just, well, complex. I was always having
to *think* about the syntax, not the problem. I can handle C pretty well.
In both cases, I still have to think about memory management, which is one
reason I like Java far better, but at least the C syntax was small and
fairly regular. Java also has pretty clean syntax, relatively speaking
(though the new additions they are starting to roll out in JDK 1.5 are
drifting toward more complexity).
>> and I'm actually finding
>> that I don't mind the parens a bit now).
>
> How long did it take?
Less than a week of real use. The one thing I still find difficult about
Lisp paren syntax is with forms that take lists of lists as part of the
form syntax (think DO initializers). I find myself getting lost in a sea of
lists and not quite remembering whether I need to use a list in a specific
place in the syntax ("one list or two, here...?"). This is not a
parenthesis thing; it's a syntax thing for certain forms. Straight function
calls are no problem.
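For example, DO wants a list of (var init step) clauses followed by a
(test result) clause, and it's exactly that nesting I lose track of.
A toy sketch (the variable names are invented):

```lisp
;; DO takes a list of (var init step) clauses -- a list of lists --
;; followed by a (test result) clause, which is yet another list.
(do ((i 0 (1+ i))              ; one initializer clause: (var init step)
     (acc nil (cons i acc)))   ; a second clause -- note the extra paren level
    ((= i 5) (nreverse acc))   ; termination clause: (test result-form)
  ;; body forms would go here (none needed for this example)
  )
;; => (0 1 2 3 4)
```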
>> What I find that I'm really liking about Lisp is, if I want objects,
>> there they are. While I still haven't dived into CLOS, I'm expecting a
>> well-designed, powerful object system.
>
> I read somewhere it is the only OO variant that satisfies all the
> requirements specified by some OO standards group unrelated to any
> specific language. I gotta see if I can track that down, but it might
> just have been a passing observation in an NG somewhere.
Yea, I think I read somewhere that it satisfies all the OMG criteria for
being OO, whatever that really means. Also, I wouldn't hang my hat on
anything like that. Just about everybody can get some "official" body
somewhere to say something like that. The real test is whether it puts a
greater conceptual power in the hands of the programmer such that you are
able to "program on a higher plane," so to speak. Everybody seems to say
that CLOS does just that, in a way that other object models (C++ and Java,
notably) just can't match. Again, I'm not there yet, but I'm looking
forward to it.
This article is right on a number of fronts: the fundamental issue really is
about how programming languages help programmers manage complexity. By
delivering higher levels of abstraction and models for computation, you can
allow the programmer to concentrate on fewer things, thus allowing those
brain cycles to be spent on the problem at hand rather than needless
details. That's precisely why I dropped C++; it simply forced my brain to
think about too many things that were totally unrelated to the problem I
was trying to solve. That's also precisely why I got interested in CL after
reading Paul Graham's claims that Lisp was a language that allowed you to
work at higher levels of abstraction. To me, this is *THE* issue in
computer science, and one that receives too little attention, IMHO.
-- Dave
Dave Roberts wrote:
> Kenny Tilton wrote:
>
>
>>Yeah, C++ is coding via punctuation you really have to get right or face
>> an inscrutable torrent of compiler errors (well, to newbies at least),
>>and people think Lisp syntax is a negative. Chya!
>
>
> Yea, exactly. I found that C++ was just, well, complex. I was always having
> to *think* about the syntax, not the problem.
Ah, "making the compiler happy", which reminds me of something I think
Victoria got wrong:
"I can see two reasonable ways to create complex programs that are less
susceptible to bugs. As in medicine, there is prevention and there is
recovery. Both the objectives and the means involved in prevention and
recovery are so different that they should be considered separately.
The preventive measures attempt to ensure that bugs are not possible in
the first place. A lot of progress has been made in the last twenty
years along these lines. Such programming practices as strong typing
that allows compile-time assignment safety checking, ...."
Cue the static-typing flamewar! Ironically, a couple of C++ former
static-typing bigots have switched sides after discovering test-driven
development gave them software as correct but an order of magnitude
faster (because they were not fighting the compiler for every inch of
ground).
> Less than a week of real use. The one thing I still find difficult about
> Lisp paren syntax is with forms that take lists of lists as part of the
> form syntax (think DO initializers).
Or even COND. Yeah, I came up a paren shy on those for a long time,
then once I had it down I had to rein myself in on CASE. :) Paul Graham
mentioned at ILC2003 that he felt COND had too many parentheses. He did
not disclose his position on whether Mozart used too many notes.
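A toy side-by-side of the trap (the functions are invented, just to show
the nesting):

```lisp
;; Each COND clause is a (test form...) list of its own.
(defun sign-word (n)
  (cond ((minusp n) 'negative)
        ((zerop n)  'zero)
        (t          'positive)))

;; CASE clauses start with a bare key (or key list) -- one paren fewer
;; than a COND clause, which is easy to overshoot after drilling COND.
(defun color-code (c)
  (case c
    (red    0)
    (green  1)
    (otherwise -1)))

;; (sign-word -3)      => NEGATIVE
;; (color-code 'green) => 1
```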
> Yea, I think I read somewhere that it satisfies all the OMG criteria for
> being OO, whatever that really means.
What I took away from it was that here is this pie-in-the-sky spec which
is easy for a standards group to write because they do not actually have
to do it, and the CL community Just Delivered It and said, "You mean
something like this?". With no new syntax, just a few macros. Compiler
optimizations of course got interesting, but optimization always does.
> somewhere to say something like that. The real test is whether it puts a
> greater conceptual power in the hands of the programmer such that you are
> able to "program on a higher plane," so to speak. Everybody seems to say
> that CLOS does just that, in a way that other object models (C++ and Java,
> notably) just can't match. Again, I'm not there yet, but I'm looking
> forward to it.
I guess the thing is that, as with the rest of Lisp, CLOS somehow always
seems to work the way you hope it works. So once you have a half-dozen
CLOS mechanical concepts internalized, you don't even think about CLOS
anymore, you just get on with your work. ie, it is not so much "greater
conceptual power", it is more a closer conceptual match (and absence of
harassment) between my thinking and how CLOS works.
> was trying to solve. That's also precisely why I got interested in CL after
> reading Paul Graham's claims that Lisp was a language that allowed you to
> work at higher levels of abstraction.
Chalk up another one for Paul Graham. (See the CL newbie survey in my sig.)
kenny
--
http://tilton-technology.com
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
Your Project Here! http://alu.cliki.net/Industry%20Application
Kenny Tilton wrote:
> Cue the static-typing flamewar! Ironically, a couple of C++ former
> static-typing bigots have switched sides after discovering test-driven
> development gave them software as correct but an order of magnitude
> faster (because they were not fighting the compiler for every inch of
> ground).
Okay, so a simple question. Without cuing the full flamewar, give me the
highlights here. Here's what I think of this issue so far: I have already
found run-time typing to be tremendously helpful. I don't see that static
typing actually solves a lot of *real* problems. Yes, it's nice to have the
compiler checking up after you, but those aren't the kind of bugs that I
find are really difficult to deal with. That said, how often do you find
that latent bugs are present in your code, only to be discovered by an
unsuspecting user, that static checking could have caught? The one argument
I find that seems to hold water in my brain is that I want to have done
everything I can do as a programmer to ensure that a user never sees
something blow up because of something as stupid as a type error.
>> Less than a week of real use. The one thing I still find difficult about
>> Lisp paren syntax is with forms that take lists of lists as part of the
>> form syntax (think DO initializers).
>
> Or even COND. Yeah, I came up a paren shy on those for a long time,
> then once I had it down I had to rein myself in on CASE. :) Paul Graham
> mentioned at ILC2003 that he felt COND had too many parentheses. He did
> not disclose his position on whether Mozart used too many notes.
When I look at the forms, I find that's probably how I would have designed
them, too, but it just takes a while to get it. That is, it isn't that
they're totally illogical, but just not necessarily intuitive. At least
yet. Ask me again in three months and I'll probably tell you that it's
all no big deal.
> I guess the thing is that, as with the rest of Lisp, CLOS somehow always
> seems to work the way you hope it works. So once you have a half-dozen
> CLOS mechanical concepts internalized, you don't even think about CLOS
> anymore, you just get on with your work. ie, it is not so much "greater
> conceptual power", it is more a closer conceptual match (and absence of
> harassment) between my thinking and how CLOS works.
Interesting. I'll have to dig into it and see.
>> was trying to solve. That's also precisely why I got interested in CL
>> after reading Paul Graham's claims that Lisp was a language that allowed
>> you to work at higher levels of abstraction.
>
> Chalk up another one for Paul Graham. (See the CL newbie survey in my
> sig.)
Already done. I actually came to Paul's site when researching some spam
stuff. Go figure. Also, chalk up one for Pascal Costanza. I took Graham's
claims with a bit of salt until I read Pascal's Highly Opinionated Guide to
Lisp. That convinced me that Graham wasn't a solo nut-case. At least there
were two... ;-)
Then I came here (c.l.l) and found a whole barrel full...
-- Dave
Dave Roberts wrote:
> Kenny Tilton wrote:
>
>
>>Cue the static-typing flamewar! Ironically, a couple of C++ former
>>static-typing bigots have switched sides after discovering test-driven
>>development gave them software as correct but an order of magnitude
>>faster (because they were not fighting the compiler for every inch of
>>ground).
>
>
> Okay, so a simple question. Without cuing the full flamewar, give me the
> highlights here. Here's what I think of this issue so far: I have already
> found run-time typing to be tremendously helpful. I don't see that static
> typing actually solves a lot of *real* problems. Yes, it's nice to have the
> compiler checking up after you, but those aren't the kind of bugs that I
> find are really difficult to deal with. That said, how often do you find
> that latent bugs are present in your code, only to be discovered by an
> unsuspecting user, that static checking could have caught? The one argument
> I find that seems to hold water in my brain is that I want to have done
> everything I can do as a programmer to ensure that a user never sees
> something blow up because of something as stupid as a type error.
This is hard to answer in one respect, because as a lispnik I simply do
not think in terms of type. So even if I had a bug where a char got
passed where an instance of elephant was expected, I would just marvel
that that code never got executed once during testing. I would not even
think about possibly having been saved by C++. This maps nicely onto the
experience of the C++ gurus who have discovered that TDD replaces static
typing (by giving them the same QA), while they are insanely more
productive using a dynamic language (in their case Python).
In another respect, I can answer indirectly: I find my code does not break
much at all once I think it works at all. A demo of a very intense app
never failed until someone else installed it on a new machine, forgot to
create a temp directory the code assumed would be there, and then
reassured the user that they could do the demo without testing the
application even once. Hell, even the user knew that was stupid.
Why the absence of surprises, even though usually I was making mad
revisions the day before a demo? With Lisp one can program at a higher
level, such that a small amount of code handles all the cases, using
meta-information to handle those cases. By making the code smart, one
writes less of it--well, smart code is harder to write, so one ends up
rewriting a small amount of code over and over, but less code gets
shipped in the end. And
like I said, all the cases run through it, so it is hard to have a
surprise. Any code in the app is well worn by the time it runs in anger,
and I do not even do much in the way of testing (and certainly do not do
anything like TDD). Add TDD, and fuggedaboutit. And the funny thing is
that Lisp would make it easier to do automatic testing, precisely
because of the reflective power.
I guess the question "how many production bugs would static typing have
caught?" misses the larger question of how much more productive I am
with a dynamic language. Productive means faster, so I have more time
for testing. Productive also means fewer lines of smarter code, so it is
easier to get full coverage. And more productive also means it is easier
to test, because of the dynamism and reflection, so I test more effectively.
If a bug now gets through which static type-checking would have
caught... so what? The static-typers want to jump up and down and say "I
told you so!", but they have lost sight of the productivity forest for
the tree of compiler type-checking. They think it is a free lunch, but
that is only because they have never worked in a dynamic language.
The boss occasionally asks me what happens to us if MegaCorp decides to
put 20 C++ programmers on a project to produce a competitive product. I
reassure him that I hope they use thirty. I would be nervous if they
used four, but I do not think MegaCorp knows how to do that.
kenny
--
http://tilton-technology.com
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
Your Project Here! http://alu.cliki.net/Industry%20Application
Kenny Tilton wrote:
> This is hard to answer in one respect, because as a lispnik I simply do
> not think in terms of type. So even if I had a bug where a char got
> passed where an instance of elephant was expected, I would just marvel
> that that code never got executed once during testing. I would not even
> think about possibly having been saved by C++. This maps nicely onto the
> experience of the C++ gurus who have discovered that TDD replaces static
> typing (by giving them the same QA), while they are insanely more
> productive using a dynamic language (in their case Python).
Just so we're clear, I don't want this to turn into a C++ thing. C++ has its
own set of issues, quite unrelated to static typing.
> Why the absence of surprises, even though usually I was making mad
> revisions the day before a demo? With Lisp one can program at a higher
> level, such that a small amount of code handles all the cases, using
> meta-information to handle those cases. By making the code smart, one
> writes less of it--well, it is hard to write so one writes a small
> amount of code over and over, but less code in the end gets shipped. And
> like I said, all the cases run through it, so it is hard to have a
> surprise. Any code in the app is well worn by the time it runs in anger,
> and I do not even do much in the way of testing (and certainly do not do
> anything like TDD). Add TDD, and fuggedaboutit. And the funny thing is
> that Lisp would make it easier to do automatic testing, precisely
> because of the reflective power.
I do find that Lisp code tends to be smaller because of dynamic typing. You
can get an algorithm coded and use that algorithm over multiple types
without having to rewrite. In C/Java you'd have separate routines coded up
for each case of type, which just means more stuff to deal with. In C++,
you might use templates, if you could ever figure out the syntax. ;-)
What I also find is that dynamic typing means I can include sentinels in
return values more easily. That is, if a function is supposed to return a
number except when there is an error, I don't have to figure out a special
number that is unlikely to be returned normally (like 0 or -1 or
something), which may end up being valid after all. Instead, I can just
return a symbol like ERR or NIL or something else entirely. This is very
helpful.
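Something like this toy sketch (parse-score is a made-up example):

```lisp
;; Parse a field that should hold a number. Instead of reserving a
;; "magic" number like -1 for failure -- which might someday be valid
;; data -- return a symbol that cannot be confused with a real result.
(defun parse-score (string)
  (let ((n (parse-integer string :junk-allowed t)))
    (if n
        n
        :parse-error)))   ; the sentinel is a symbol, not a magic number

;; (parse-score "42")   => 42
;; (parse-score "oops") => :PARSE-ERROR
```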
> I guess the question "how many production bugs would static typing have
> caught?" misses the larger question of how much more productive I am
> with a dynamic language. Productive means faster, so I have more time
> for testing. Productive also means fewer lines of smarter code, so it is
> easier to get full coverage. And more productive also means it is easier
> to test, because of the dynamism and reflection, so I test more
> effectively.
>
> If a bug now gets through which static type-checking would have
> caught... so what? The static-typers want to jump up and down and say "I
> told you so!", but they have lost sight of the productivity forest for
> the tree of compiler type-checking. They think it is a free lunch, but
> that is only because they have never worked in a dynamic language.
Interesting thought.
> The boss occasionally asks me what happens to us if MegaCorp decides to
> put 20 C++ programmers on a project to produce a competitive product. I
> reassure him that I hope they use thirty. I would be nervous if they
> used four, but I do not think MegaCorp knows how to do that.
I agree fully with this! In my opinion, the ideal team size is < 10. Any
more than that and you have not just diminishing returns (which really
begins around 5 IMHO), but NEGATIVE(!) returns. Either that or you have
exponential growth required to keep ahead of it. That is, > 10 programmers
requires so much coordination that you end up adding all sorts of planning
meetings, project managers, etc., to keep the team on track. With <10 (and
certainly <5), you need little more than a capable email system to let
people communicate. Less definitely is more.
-- Dave
Dave Roberts wrote:
> I agree fully with this! In my opinion, the ideal team size is < 10. Any
> more than that and you have not just diminishing returns (which really
> begins around 5 IMHO), but NEGATIVE(!) returns.
Cue Fred Brooks. My theory is that a $100 million project with a team of
dozens is just a $2m, five-person project with $100 million to spend.
> certainly <5), you need little more than a capable email system to let
> people communicate. Less definitely is more.
Even when working in tall buildings I was a cowboy contractor working
pretty much solo on a project here or a project there, so I did not have
to sit through many meetings. Or was it that I behaved so badly they
stopped inviting me?
The CliniSys system was developed mostly by two people separated by a
30min subway ride. The other guy announced when Eudora registered the
1000th email from me (about 60% of the way thru the project). Only about
once every three months did we have to use the phone, when we could not
figure out what each other was saying. Overall, very productive.
btw, I had a "bug" this morning a compiler would have caught, passing a
number where a point (a structure of two numbers) was demanded. But! The
bug was simply because I had changed an API so that it no longer
tolerated either a number or a point, and I was working my way thru
various Cello demos propagating the new state of things.
With a compiler checking my code, I could not even have tested the
change without first changing all the demos to suit, or I would have had
to leave in place code which accepted a number but threw an error, so
the compiler would see a suitable overloaded function for code I had not
yet refactored.
In Lisp I just leave hopeless code sitting around until I get back to
it, and if it turns out I decide a given refactoring was a mistake,
boom!, out it goes and no time was wasted on propagating a doomed change.
hellasweet.
--
http://tilton-technology.com
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
Your Project Here! http://alu.cliki.net/Industry%20Application
Kenny Tilton wrote:
> btw, I had a "bug" this morning a compiler would have caught, passing a
> number where a point (a structure of two numbers) was demanded. But! The
> bug was simply because I had changed an API so that it no longer
> tolerated either a number or a point, and I was working my way thru
> various Cello demos propagating the new state of things.
So this brings up a good point. One of the ways that I typically work
through API changes in C/Java is just that: I recompile and let the
compiler tell me all the places that are now broken. With Lisp, do I have
to go through a complete re-test cycle?
> With a compiler checking my code, I could not even have tested the
> change without first changing all the demos to suit, or I would have had
> to leave in place code which accepted a number but threw an error, so
> the compiler would see a suitable overloaded function for code I had not
> yet refactored.
>
> In Lisp I just leave hopeless code sitting around until I get back to
> it, and if it turns out I decide a given refactoring was a mistake,
> boom!, out it goes and no time was wasted on propagating a doomed change.
Hmmmm... my reaction to this is actually negative in the sense that I get
nervous about forgetting things that I haven't yet done. If I was using
Java with Eclipse as my IDE, I'd just back out changes using the IDE and it
wouldn't be a big deal either. That isn't a language thing, it's an IDE
thing. Speaking of which, is there a good IDE for Lisp with refactoring
support? Emacs is great, right up to the point where you want to do some
massive code reorganization or function/variable renaming, then I find it
painful (go back to search/replace). I would think this would actually be
pretty easy for a Lisp to do given the simple syntax.
-- Dave
Dave Roberts wrote:
> Kenny Tilton wrote:
>
>
>>btw, I had a "bug" this morning a compiler would have caught, passing a
>>number where a point (a structure of two numbers) was demanded. But! The
>>bug was simply because I had changed an API so that it no longer
>>tolerated either a number or a point, and I was working my way thru
>>various Cello demos propagating the new state of things.
>
>
> So this brings up a good point. One of the ways that I typically work
> through API changes in C/Java is just that: I recompile and let the
> compiler tell me all the places that are now broken.
Yep, and that does have some appeal, but then there is the downside, not
being able to run until all that work is done. The Lisp Way is to work
in small pieces, so I can have an idea, put in, and give it a try just
like that. I might then decide it was a bad idea, or more likely have an
even better idea, or an /additional/ good idea made possible by the
first change. (One good change leads to another?) Now I have broken two
APIs, but I am running/thinking/improving left and right just wreaking
havoc with the code and it is getting better and better all the time.
I'll sweep up when it all stabilizes.
I wager that with a less friendly compiler I would spot opportunities
for small improvements and just put them off to another day that never
comes, and be more prone to hacking small temporary (ha!) workarounds
rather than fix things properly if I hit some minor obstacle while coding.
> With Lisp, do I have
> to go through a complete re-test cycle?
(a) I just wait till it comes up. Of course on a thirty-person team I
would be tarred and feathered for breaking everyone else's code, and
they would not have the advantage of having made the change and would
spend hours scratching their heads. Of course in time they would just
assume Kenny had been at it again and reach for the tar straight away.
(b) Regression test suites are, well, sweet.
(c) Like I said, Lisp programs at best consist of a smaller amount of
smarter code, so it does not take long to encounter the broken bits.
(d) The breakage is never subtle, so it is easy to spot and fix when it
does come up.
>
>
>>With a compiler checking my code, I could not even have tested the
>>change without first changing all the demos to suit, or I would have had
>>to leave in place code which accepted a number but threw an error, so
>>the compiler would see a suitable overloaded function for code I had not
>>yet refactored.
>>
>>In Lisp I just leave hopeless code sitting around until I get back to
>>it, and if it turns out I decide a given refactoring was a mistake,
>>boom!, out it goes and no time was wasted on propagating a doomed change.
>
>
> Hmmmm... my reaction to this is actually negative in the sense that I get
> nervous about forgetting things that I haven't yet done.
That will pass. Lisp is all about knocking down the barriers to
productivity, one of which is anxiety over breaking something that
works. Once it becomes clear that code can be got working again without
much trouble, I am free to continually reshape it as I spot better ways.
In a production environment one does want a regression test to run, but
that should be true no matter what language I am using.
> If I was using
> Java with Eclipse as my IDE, I'd just back out changes using the IDE and it
> wouldn't be a big deal either. That isn't a language thing, it's an IDE
> thing.
It's an IDE thing /because/ the language is unfriendly to incremental
development. Here's an example, came up yesterday. I had to add a
parameter to a function call to handle a new use case. No existing call
needed it. So that became an &optional parameter with a default of zero
(what everyone else wanted). Boom! Done.
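Roughly like this (a made-up sketch of the pattern, not the actual Cello
code; draw-label is invented):

```lisp
;; The function originally took two arguments; a new use case needs a
;; third. Making it &optional with a default means no existing call
;; site has to change.
(defun draw-label (text x &optional (y-offset 0))
  ;; y-offset defaults to 0, which is what every existing caller wanted
  (list :text text :x x :y y-offset))

;; Old call sites keep working unchanged:
;; (draw-label "hi" 10)   => (:TEXT "hi" :X 10 :Y 0)
;; The new use case passes the extra argument:
;; (draw-label "hi" 10 5) => (:TEXT "hi" :X 10 :Y 5)
```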
> Speaking of which, is there a good IDE for Lisp with refactoring
> support? Emacs is great, right up to the point where you want to do some
> massive code reorganization or function/variable renaming, then I find it
> painful (go back to search/replace). I would think this would actually be
> pretty easy for a Lisp to do given the simple syntax.
Sounds like a good open source project for you. :) AllegroCL at least
searches across collections of files and shows me where to look, but
that's about it. MCL has a "who calls" dialog; I used to use that a lot.
kenny
--
http://tilton-technology.com
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
Your Project Here! http://alu.cliki.net/Industry%20Application
Kenny Tilton <·······@nyc.rr.com> writes:
> Dave Roberts wrote:
>
> > Speaking of which, is there a good IDE for Lisp with refactoring
> > support? Emacs is great, right up to the point where you want to
> > do some massive code reorganization or function/variable renaming,
> > then I find it painful (go back to search/replace). I would think
> > this would actually be pretty easy for a Lisp to do given the
> > simple syntax.
>
> Sounds like a good open source project for you. :) AllegroCL at
> least searchs across collections of files and shows me where to
> look, but that's about it. MCL has a "who calls" dialog, I used to
> use that a lot.
I keep meaning to start experimenting with Tags
<http://docs.biostat.wustl.edu/cgi-bin/info2html?(emacs)Tags> but I
never seem to get around to it. I know it has support for
project-wide search/replace
<http://docs.biostat.wustl.edu/cgi-bin/info2html?(emacs)Tags%2520Search>,
though I don't know what its strengths/weaknesses are. It might be a
good place to start looking, though.
Damien Kick wrote:
> I keep meaning to start experimenting with Tags
> <http://docs.biostat.wustl.edu/cgi-bin/info2html?(emacs)Tags> but I
> never seem to get around to it. I know it has support for
> project-wide search/replace
> <http://docs.biostat.wustl.edu/cgi-bin/info2html?(emacs)Tags%2520Search>,
> though I don't know what its strengths/weaknesses are. It might be a
> good place to start looking, though.
Good point. I'm just getting back to emacs after a LONG sabbatical and had
forgotten about etags stuff.
-- Dave
Dave Roberts wrote:
>
....
>
> So this brings up a good point. One of the ways that I typically work
> through API changes in C/Java is just that: I recompile and let the
> compiler tell me all the places that are now broken. With Lisp, do I have
> to go through a complete re-test cycle?
API changes in Java and C++ are usually overload and/or name changes.
TTBOMK, no CL environment helps you with these.
...
>
>
> Hmmmm... my reaction to this is actually negative in the sense that I get
> nervous about forgetting things that I haven't yet done. If I was using
> Java with Eclipse as my IDE, I'd just back out changes using the IDE and it
> wouldn't be a big deal either. That isn't a language thing, it's an IDE
> thing. Speaking of which, is there a good IDE for Lisp with refactoring
> support? Emacs is great, right up to the point where you want to do some
> massive code reorganization or function/variable renaming, then I find it
> painful (go back to search/replace). I would think this would actually be
> pretty easy for a Lisp to do given the simple syntax.
Time for a new project on common-lisp.net? :)
Cheers
--
Marco
>>>>> "Dave" == Dave Roberts <·············@re-move.droberts.com> writes:
Dave> Kenny Tilton wrote:
>> btw, I had a "bug" this morning a compiler would have caught, passing a
>> number where a point (a structure of two numbers) was demanded. But! The
>> bug was simply because I had changed an API so that it no longer
>> tolerated either a number or a point, and I was working my way thru
>> various Cello demos propagating the new state of things.
Dave> So this brings up a good point. One of the ways that I typically work
Dave> through API changes in C/Java is just that: I recompile and let the
Dave> compiler tell me all the places that are now broken. With Lisp, do I have
Dave> to go through a complete re-test cycle?
But with a dynamic language such as lisp, you could just change the
function you want to change and then iterate over all callers fixing
them up as you go.
Most CLs have (the equivalent of) `who-calls'; there is even a portable
version, and at least some of the IDEs (including, I am pretty sure, all
of ILISP, Slime and ELI) have access to it from within the IDE.
Again we see static type checking falling short of what you can do
from inside a dynamic language environment.
------------------------+-----------------------------------------------------
Christian Lynbech | christian ··@ defun #\. dk
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
- ·······@hal.com (Michael A. Petonic)
Dave Roberts wrote:
> What I also find is that dynamic typing means I can include sentinels in
> return values more easily. That is, if a function is supposed to return a
> number except when there is an error, I don't have to figure out a special
> number that is unlikely to be returned normally (like 0 or -1 or
> something), which may end up being valid after all. Instead, I can just
> return a symbol like ERR or NIL or something else entirely. This is very
> helpful.
Don't do that. It's better to use something like ERROR or CERROR. This
gives you more opportunities to gracefully deal with such situations.
(The "C" in "CERROR" means "correctable", and this is intentional. ;)
Pascal
(Of course, you can return anything you like if you know what you are
doing... ;)
--
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
Pascal Costanza wrote:
> Don't do that. It's better to use something like ERROR or CERROR. This
> gives you more opportunities to gracefully deal with such situations.
> (The "C" in "CERROR" means "correctable", and this is intentional. ;)
> (Of course, you can return anything you like if you know what you are
> doing... ;)
Right. It depends on the situation. Sometimes ERROR or CERROR is
appropriate, other times not.
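To make the two styles concrete, here is a minimal sketch; PARSE-AGE is a hypothetical function invented for illustration, not anything from the thread:

```lisp
;; Sentinel style: return NIL on failure instead of a magic number.
(defun parse-age (string)
  (let ((n (parse-integer string :junk-allowed t)))
    (if (and n (<= 0 n 150)) n nil)))

;; Condition style: signal a correctable error instead.  The caller
;; (or an interactive user) can pick the CONTINUE restart and supply
;; a fallback value.
(defun parse-age/strict (string)
  (let ((n (parse-integer string :junk-allowed t)))
    (cond ((and n (<= 0 n 150)) n)
          (t (cerror "Use a default age instead."
                     "~S is not a valid age." string)
             0))))
```

The sentinel version is simpler to call; the condition version gives callers more options when things go wrong.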
From: ··········@YahooGroups.Com
Subject: Re: The Next Move in Programming
Date:
Message-ID: <REM-2004feb25-004@Yahoo.Com>
> Dave Roberts wrote:
> > What I also find is that dynamic typing means I can include sentinels in
> > return values more easily. That is, if a function is supposed to return a
> > number except when there is an error, I don't have to figure out a special
> > number that is unlikely to be returned normally (like 0 or -1 or
> > something), which may end up being valid after all. Instead, I can just
> > return a symbol like ERR or NIL or something else entirely. This is very
> > helpful.
> Don't do that. It's better to use something like ERROR or CERROR. This
> gives you more opportunities to gracefully deal with such situations.
> (The "C" in "CERROR" means "correctable", and this is intentional. ;)
Bad advice in many cases. If you are debugging the code, and you have
full access to interactive debugger etc., CERROR is often a good idea
initially when your code discovers something that isn't supposed to
happen per your original design of the code. But as soon as you pass
the code to customers/users, you do *not* want low-level utilities
producing CERRORs deep down in a context that is meaningless to the
customer/users. You want low-level code passing some sort of error
signal up to high-level code, which can then explain in a meaningful
way what part of the processing went bad, and then gracefully recover
by avoiding the troublesome part of the program until it can be fixed.
This is especially true when writing CGI or other WebServer
applications, where the read-eval-print loop is not connected to any
actual user in an interactive way, so invoking CERROR (or ERROR) is a
total disaster even when testing your own code.
One alternative to the two ideas above is to THROW all the way from the
point of error up to a standard location where there's a CATCH. For
example, if the user issues a command, and some bug is detected during
the execution of that command, the top-level of executing that command
might be a good place to put the CATCH. The user could then be told
something like the following:
An error occurred deep down inside the program that handles your
command "Delete Record": Invalid array index 47, should be in range 0
to 46, occurring inside HASH-CHAIN-3 which is called from HASH-CHAIN
which is called from inside GETHASH ... The error has been logged as
2004022517a, which has been reported to programming staff. Your command
has been terminated. You are back to where you were before you invoked
that command.
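A minimal sketch of the CATCH/THROW scheme described above; the function names and the error log are invented for illustration:

```lisp
;; Low-level code throws to a tag established by the command driver.
(defvar *error-log* '())

(defun low-level-helper (index limit)
  "Hypothetical utility: logs and throws instead of signalling deep down."
  (when (>= index limit)
    (push (list :bad-index index limit) *error-log*)
    (throw 'command-abort :failed))
  index)

(defun run-command (thunk)
  "Top-level command driver: any THROW to COMMAND-ABORT lands here,
so the user ends up back where they were before the command."
  (catch 'command-abort
    (funcall thunk)))
```

A successful thunk's value passes through the CATCH unchanged; a failing one unwinds directly to the driver.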
··········@YahooGroups.Com writes:
> > Don't do that. It's better to use something like ERROR or
> > CERROR. This gives you more opportunities to gracefully deal with
> > such situations. (The "C" in "CERROR" means "correctable", and
> > this is intentional. ;)
>
> Bad advice in many cases.
Not at all.
> If you are debugging the code, and you have full access to
> interactive debugger etc., CERROR is often a good idea initially
> when your code discovers something that isn't supposed to happen per
> your original design of the code. But as soon as you pass the code
> to customers/users, you do *not* want low-level utilities producing
> CERRORs deep down in a context that is meaningless to the
> customer/users. You want low-level code passing some sort of error
> signal up to high-level code, which can then explain in a meaningful
> way what part of the processing went bad, and then gracefully
> recover by avoiding the troublesome part of the program until it can
> be fixed. This is especially true when writing CGI or other
> WebServer applications, where the read-eval-print loop is not
> connected to any actual user in an interactive way, so invoking
> CERROR (or ERROR) is a total disaster even when testing your own
> code.
>
> One alternative to the two ideas above is to THROW all the way from
> the point of error up to a standard location where there's a
> CATCH. For example, if the user issues a command, and some bug is
> detected during the execution of that command, the top-level of
> executing that command might be a good place to put the CATCH. The
> user could then be told something like the following: An error
> occurred deep down inside the program that handles your command
> "Delete Record": Invalid array index 47, should be in range 0 to 46,
> occurring inside HASH-CHAIN-3 which is called from HASH-CHAIN which
> is called from inside GETHASH ... The error has been logged as
> 2004022517a, which has been reported to programming staff. Your
> command has been terminated. You are back to where you were before
> you invoked that command.
And why can't you do that with HANDLER-BIND or HANDLER-CASE?
Regards,
--
Nils Gösche
"Don't ask for whom the <CTRL-G> tolls."
PGP key ID #xEEFBA4AF
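For comparison, a sketch of the HANDLER-CASE alternative Nils asks about; the driver function is invented for illustration:

```lisp
;; Errors signalled anywhere below the command are reported at the
;; command top level, instead of unwinding via an explicit CATCH/THROW.
(defun call-with-error-report (thunk)
  (handler-case (funcall thunk)
    (error (condition)
      (format *error-output*
              "Your command has been terminated: ~A~%" condition)
      :failed)))
```

One advantage over raw THROW: the condition object carries structured information about what went wrong, and HANDLER-BIND would additionally let you inspect the stack or invoke restarts before unwinding.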
>>>>> "Nils" == Nils Gösche <···@cartan.de> writes:
>> One alternative to the two ideas above is to THROW all the way from
>> the point of error up to a standard location where there's a
>> CATCH.
Nils> And why can't you do that with HANDLER-BIND or HANDLER-CASE?
One should also not forget about *DEBUGGER-HOOK* which allows you to
use the full power of error handling during debug/development and then
to turn it all off as you release the code.
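A sketch of the *DEBUGGER-HOOK* idea: leave the hook NIL while developing (errors land in the break loop), and install a quiet hook for release. The function name is invented; the hook relies on the standard ABORT restart being available, as it is at a normal REPL:

```lisp
(defun install-release-hook ()
  "Replace interactive debugging with a terse report plus ABORT."
  (setf *debugger-hook*
        (lambda (condition hook)
          (declare (ignore hook))
          (format *error-output* "Internal error: ~A~%" condition)
          (abort))))
```

Setting *DEBUGGER-HOOK* back to NIL restores full interactive debugging, so the switch is one SETF in either direction.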
Christian Lynbech <·················@ericsson.com> writes:
> One should also not forget about *DEBUGGER-HOOK* which allows you to
> use the full power of error handling during debug/development and
> then to turn it all off as you release the code.
Still, wouldn't it be useful if the programmer, as he's writing the
code, could specify e.g. "this restart is useful for end-users", and
"this restart is only for programmers (that know this code)". In
effect, the system could provide for a range of options based on the
user's expected (declared) level of expertise, going from the basic
abort/retry of user-initiated actions, to restarts that require more
and more knowledge and analysis to be able to understand. Just an
idea.
--
Frode Vatvedt Fjeld
>>>>> "Frode" == Frode Vatvedt Fjeld <······@cs.uit.no> writes:
Frode> Christian Lynbech <·················@ericsson.com> writes:
>> One should also not forget about *DEBUGGER-HOOK* which allows you to
>> use the full power of error handling during debug/development and
>> then to turn it all off as you release the code.
Frode> Still, wouldn't it be useful if the programmer, as he's writing the
Frode> code, could specify e.g. "this restart is useful for end-users", and
Frode> "this restart is only for programmers (that know this code)".
I am no expert but I think this is exactly the kind of thing you could
implement with *debugger-hook*.
If I were to do such a thing, I would create a separate signal
hierarchy for the stuff that would be useful to end users. I would
then either do my own REPL, stuffing in handler-cases at the relevant
parts, or I would write a clever function for *debugger-hook* that
would make decisions based on where the relevant signal was situated
relative to certain signal hierarchies.
I would then use my special "relevant-for-end-user" signals where
appropriate, and when the debugger hook saw such a thing, it would fall
into the break loop.
However, in general I think that most users either will run away
screaming if confronted with a break loop or could be told "if you
want debugger access, just flip this variable" (and thus not be at
your mercy in deciding what signals are relevant :-)
>>>>> "Dave" == Dave Roberts <·············@re-move.droberts.com> writes:
Dave> http://java.sun.com/developer/technicalArticles/Interviews/livschitz_qa.html
YES!!!!
What a magnificent article; living proof that Kenny is right and Lisp
will prevail in the end, whatever it may be called when the rest of
the world gets it.
Just take this snippet:
Livschitz> I envision a programming language that is a notch richer
Livschitz> then OO. It would be based on a small number of primitive
Livschitz> concepts, intuitively obvious to any mature human being,
Livschitz> and tied to well-understood metaphors, such as objects,
Livschitz> conditions, and processes. I hope to preserve many features
Livschitz> of the object-oriented systems that made them so safe and
Livschitz> convenient, such as abstract typing, polymorphism,
Livschitz> encapsulation and so on.
This is the best characterisation of Lisp I have seen in a long time.
Dave> Kenny Tilton wrote:
>> Cue the static-typing flamewar!
Dave> Okay, so a simple question.
Dave> ...
Dave> I don't see that static typing actually solves a lot of *real*
Dave> problems. Yes, it's nice to have the compiler checking up after
Dave> you, but those aren't the kind of bugs that I find are really
Dave> difficult to deal with.
You already know the answer. Yes, static typing increases the
verifiable knowledge of your program; yes, static typing will allow
you to catch and correct certain errors faster, but no, this is not
going to help you a bit in terms of productivity.
"My program is type correct" is such an incredibly weak statement
about the correctness of the program. As you say, what will hold down
the release date is not when you try to add up two strings but when your
integer is one off, or your pointer is null, or you are keeping a
pointer to some storage somebody will free in just a second.
I would claim that the characteristic of languages such as Lisp, where
an integer remains an integer and a list remains a list even in an
incorrect program, is a much stronger property than the type
correctness of a statically typed program.
It is also a complexity handling issue. Even if we were to come up with
a stronger type system that would be able to capture much stronger
statements about your program, would that make you happier? I think
not. The amount of work of keeping such a strong formal proof
consistent across the entire program at all times would dwarf any
protection against bugs it gives.
The better way is to have the ability to add type declarations, as
necessary, and write tests, as necessary, allowing the programmer to
focus on verifying the properties that actually matter.
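The "declarations where they matter" approach can be sketched in a few lines; DOT is a made-up example function, declared tightly only because this is the inner loop one might actually care about:

```lisp
;; Type and optimize declarations added selectively, not everywhere.
(defun dot (xs ys)
  "Dot product of two double-float vectors."
  (declare (type (simple-array double-float (*)) xs ys)
           (optimize (speed 3) (safety 1)))
  (loop for x across xs
        for y across ys
        sum (* x y) of-type double-float))
```

Everywhere else the program stays dynamically typed, and a handful of tests covers the properties the type system could not express anyway.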
But Livschitz already knows this, even though she probably doesn't know
that she knows it. She says stuff like the following:
Livschitz> Jaron's emphasis on "pattern recognition" as a substitute
Livschitz> for the rigid, error-prone, binary "match/no match"
Livschitz> constructs that are dominant in today's programs is
Livschitz> intriguing to me, especially because I've always thought
Livschitz> that the principles of fuzzy logic should be exploited far
Livschitz> more widely in software engineering.
and
Livschitz> Still, all these things combined cannot express the
Livschitz> simplest aggregation of several elements with particular
Livschitz> semantic relationships; therefore, an external graphical
Livschitz> "design pattern" is needed to document which elements are
Livschitz> aggregated and how the collective system works.
Lisp programmers of the world rejoice; you are right and they are
wrong and when the world finally has realised this, you will be
decades ahead of the mob.
From: Rahul Jain
Subject: Re: The Next Move in Programming
Date:
Message-ID: <87y8r41cgt.fsf@nyct.net>
Christian Lynbech <·········@defun.dk> writes:
> Livschitz> Still, all these things combined cannot express the
> Livschitz> simplest aggregation of several elements with particular
> Livschitz> semantic relationships; therefore, an external graphical
> Livschitz> "design pattern" is needed to document which elements are
> Livschitz> aggregated and how the collective system works.
I don't think that you commented on this statement, but it sounds to me
like it calls for some sort of interactive <something> browser, which is so
common in the lisp world. The only problem is that we keep creating new
somethings (functions, classes, packages, relational spaces) so fast that
the tools can't keep up with the language. :)
Another thing she mentioned was the ability to recover gracefully from
error conditions as a way of handling the cases where the interfaces
between components don't quite match. Sounds like the condition system
to me.
--
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
Christian Lynbech wrote:
> Lisp programmers of the world rejoice; you are right and they are
> wrong and when the world finally has realised this, you will be
> decades ahead of the mob.
Can you imagine the rates we will get? And my book "Cello In 21 Days"
will sell millions. Harry Potter, move over!
kenny
--
http://tilton-technology.com
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
Your Project Here! http://alu.cliki.net/Industry%20Application
Dave Roberts wrote:
> Kenny Tilton wrote:
>
>>Cue the static-typing flamewar! Ironically, a couple of C++ former
>>static-typing bigots have switched sides after discovering test-driven
>>development gave them software as correct but an order of magnitude
>>faster (because they were not fighting the compiler for every inch of
>>ground).
>
> Okay, so a simple question. Without cuing the full flamewar, give me the
> highlights here. Here's what I think of this issue so far: I have already
> found run-time typing to be tremendously helpful. I don't see that static
> typing actually solves a lot of *real* problems. Yes, it's nice to have the
> compiler checking up after you, but those aren't the kind of bugs that I
> find are really difficult to deal with. That said, how often do you find
> that latent bugs are present in your code, only to be discovered by an
> unsuspecting user, that static checking could have caught? The one argument
> I find that seems to hold water in my brain is that I want to have done
> everything I can do as a programmer to ensure that a user never sees
> something blow up because of something as stupid as a type error.
What static typers are missing is that dynamic type systems can help to
drive the correct behavior of a program. Here is an example using a
very helpful CLOS feature.
Welcome to Macintosh Common Lisp Version 5.0!
? (defclass person ()
((name :accessor name :initarg :name)))
#<STANDARD-CLASS PERSON>
? (setf v (make-instance 'person :name "Pascal"))
#<PERSON #x23E27F6>
? (name v)
"Pascal"
? (defclass person ()
((name :accessor name :initarg :name)
(age :accessor age :initarg :age)))
#<STANDARD-CLASS PERSON>
? (defmethod slot-unbound
((class t)
(person person)
slot)
(format t "Slot ~A is missing. Read a value.~%" slot)
(setf (age person) (read)))
#<STANDARD-METHOD SLOT-UNBOUND (T PERSON T)>
? (name v)
"Pascal"
? (age v)
Slot AGE is missing. Read a value.
33
33
? (age v)
33
What I have done here is add a field to a class at run
time. Since I can't decide on a default value that is valid for all
persons in a system, I simply provide the means to lazily supply the
information that is needed, when it is needed. (Of course, the UI could
use some polish. ;)
The same can be done with removing slots - just redefine the appropriate
accessors to give a useful warning to the user.
I think this is only the tip of the iceberg of what could be done by
something that could be called "bug-driven programming". ;) For example,
imagine doing this with something based on Kenny's Cells - lazily add
and remove active slots at run time.
A static type system makes it very hard to get even half that far.
Pascal
P.S.: Thanks for calling me a nut-case. ;)
Dave Roberts <·············@re-move.droberts.com> writes:
> Kenny Tilton wrote:
>
> > Cue the static-typing flamewar! Ironically, a couple of C++ former
> > static-typing bigots have switched sides after discovering test-driven
> > development gave them software as correct but an order of magnitude
> > faster (because they were not fighting the compiler for every inch of
> > ground).
>
> Okay, so a simple question. Without cuing the full flamewar, give me the
> highlights here. Here's what I think of this issue so far: I have already
> found run-time typing to be tremendously helpful. I don't see that static
> typing actually solves a lot of *real* problems. Yes, it's nice to have the
> compiler checking up after you, but those aren't the kind of bugs that I
> find are really difficult to deal with. That said, how often do you find
> that latent bugs are present in your code, only to be discovered by an
> unsuspecting user, that static checking could have caught? The one argument
> I find that seems to hold water in my brain is that I want to have done
> everything I can do as a programmer to ensure that a user never sees
> something blow up because of something as stupid as a type error.
"You" and "I" are very few datapoints justifying the strong and
general claims made by Kenny. If one really wants to get insight into
the benefits & problems of static and dynamic typing there is, in my
opinion, no way around studying the performance of programmers and the
statistics of bugs under controlled settings.
The only scientific study I am aware of is
Lutz Prechelt, Walter F. Tichy
A Controlled Experiment to Assess the Benefits of Procedure Argument Type Checking
IEEE Trans. on Software Engineering,
Vol. 24, No. 4, April 1998, pp. 302-312
Abstract: http://csdl.computer.org/comp/trans/ts/1998/04/e0302abs.htm
This paper compares ANSI C vs. K&R-style C. Of course, it is not the
final word on the subject, but it is at least an attempt to add an
unbiased datapoint to the discussion.
Matthias wrote:
> If one really wants to get insight into
> the benefits & problems of static and dynamic typing
[...]
> This paper compares ANSI C vs. K&R-style C.
...and this means, AFAICS: some form of static typing vs. no typing at
all. So this is not really related to dynamic typing.
> Of course, it is not the
> final word on the subject, but it is at least an attempt to add an
> unbiased datapoint to the discussion.
Right. Thanks for the link - at least, it shows how this topic should be
approached.
Pascal
Matthias wrote:
> "You" and "I" are very few datapoints justifying the strong and
> general claims made by Kenny. If one really wants to get insight into
> the benefits & problems of static and dynamic typing there is, in my
> opinion, no way around studying the performance of programmers and the
> statistics of bugs under controlled settings.
>
> The only scientific study I am aware of is
>
> Lutz Prechelt, Walter F. Tichy
> A Controlled Experiment to Assess the Benefits of Procedure Argument Type Checking
> IEEE Trans. on Software Engineering,
> Vol. 24, No. 4, April 1998, pp. 302-312
> Abstract: http://csdl.computer.org/comp/trans/ts/1998/04/e0302abs.htm
>
> This paper compares ANSI C vs. K&R-style C. Of course, it is not the
> final word on the subject, but it is at least an attempt to add an
> unbiased datapoint to the discussion.
Looks like a good study, but the datapoint adds nothing to this discussion.
What you have found is an explanation of the origins of static type
checking. When I programmed the Mac in C, certain pointer errors
reliably took down the whole OS, forcing me to sit through a two minute
(felt like twenty) reboot on each iteration. The nifty study would have
been to make the K&R C samples the control against which to compare two
possible improvements: strong static typing vs. dynamic typing in which
data keeps its type during runtime so you get backtraces saying "I don't
know how to add NIL" instead of a frozen mouse.
kt
Kenny Tilton <·······@nyc.rr.com> wrote in message news:<····················@twister.nyc.rr.com>...
> Looks like a good study, but the datapoint adds nothing to this discussion.
>
> What you have found is an explanation of the origins of static type
> checking. When I programmed the Mac in C, certain pointer errors
> reliably took down the whole OS, forcing me to sit through a two minute
> (felt like twenty) reboot on each iteration. The nifty study would have
> been to make the K&R C samples the control against which to compare two
> possible improvements: strong static typing vs. dynamic typing in which
> data keeps its type during runtime so you get backtraces saying "I don't
> know how to add NIL" instead of a frozen mouse.
It vaguely drives me nuts that software engineering is dragged into
discussions about static typing. It's unnecessary. They simply could
say that functions are objects taken from mathematics, like integers,
and they'd appreciate it if we didn't leave out a defining
characteristic like domains and ranges. And the conversation about
static vs. dynamic typing could proceed from there. The appeal to the
lisper's sense of completeness would be strong, just as limiting
everyone's integers by default to what fits in a machine word is
frowned upon. Maybe something could come of it, unlike before.
I think it would be cool to have a function type which never produces
different outputs given the same input. Parallelization, memoization,
deferred eval, type constraints... lots of stuff could fall out of
that naturally.
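One of those things falling out naturally can be sketched directly: given a function declared pure, memoization is transparently safe. DEFUN-PURE is an invented macro name, and this toy version handles only required-argument lambda lists:

```lisp
;; Define a function whose results are cached by argument list.
;; Only valid if the body really is pure (same inputs, same outputs).
(defmacro defun-pure (name args &body body)
  (let ((cache (gensym "CACHE")))
    `(let ((,cache (make-hash-table :test #'equal)))
       (defun ,name ,args
         (let ((key (list ,@args)))
           (multiple-value-bind (val hit) (gethash key ,cache)
             (if hit
                 val
                 (setf (gethash key ,cache)
                       (progn ,@body)))))))))

;; The naive doubly-recursive Fibonacci becomes linear under the cache.
(defun-pure fib (n)
  (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))
```

A compiler that *knew* the purity declaration could apply the same transformation itself, or parallelize and defer calls, which is the appeal of such a function type.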
It is maddening when people think lisp is inherently incapable of
capturing the benefits of some paradigm. I'm sure when Smalltalk was
invented, people probably claimed lisp was incapable of OOP.
Tayssir John Gabbour wrote:
> I think it would be cool to have a function type which never produces
> different outputs given the same input. Parallelization, memoization,
> deferred eval, type constraints... lots of stuff could fall out of
> that naturally.
>
> It is maddening when people think lisp is inherently incapable of
> capturing the benefits of some paradigm. I'm sure when Smalltalk was
> invented, people probably claimed lisp was incapable of OOP.
The serious proponents of pure paradigms rightfully say that the
advantages of going pure include a decreased need to think about
interactions with features that you are currently not interested in.
Pure functional programming languages (that are free of side effects)
allow you to push certain aspects of a language design further than when
you have to think about interactions with side effects. Likewise, pure
object-oriented programming languages can help you to make the syntax so
simple that the need for something like macros or code generation tools
decreases.
These _are_ very interesting intellectual exercises, and it makes sense
to think about the consequences of going pure with regard to some aspect
of a language, because you get a deeper understanding of what your
options are for designing programs.
However, it is not a very good idea to ignore the fact that the real
world isn't pure and that you need to interact with it at some stage.
Pascal
On Tue, 17 Feb 2004 02:18:13 +0100, Pascal Costanza wrote:
> However, it is not a very good idea to ignore the fact that the real
> world isn't pure and that you need to interact with it at some stage.
What!? You mean that monadic I/O isn't intuitive to a lot of people?
Perish the thought!
faa
In article <···············@goya03.ti.uni-mannheim.de>,
Matthias <··@spam.pls> writes:
> ...
> "You" and "I" are very few datapoints justifying the strong and
> general claims made by Kenny. If one really wants to get insight into
> the benefits & problems of static and dynamic typing there is, in my
> opinion, no way around studying the performance of programmers and the
> statistics of bugs under controlled settings.
>
> The only scientific study I am aware of is
>
> Lutz Prechelt, Walter F. Tichy
> A Controlled Experiment to Assess the Benefits of Procedure Argument Type Checking
> IEEE Trans. on Software Engineering,
> Vol. 24, No. 4, April 1998, pp. 302-312
> Abstract: http://csdl.computer.org/comp/trans/ts/1998/04/e0302abs.htm
>
> This paper compares ANSI C vs. K&R-style C. Of course, it is not the
> final word on the subject, but it is at least an attempt to add an
> unbiased datapoint to the discussion.
maybe unbiased, but completely irrelevant. K&R C had no type checking
for function arguments, which isn't the same as dynamic typing.
hs
--
Patriotism is the last refuge of the scoundrel
Samuel Johnson
Patriotism is your conviction that this country is superior to all
others because you were born in it
George Bernard Shaw
Matthias <··@spam.pls> writes:
> Dave Roberts <·············@re-move.droberts.com> writes:
>
> This paper compares ANSI C vs. K&R-style C. Of course, it is not the
> final word on the subject, but it is at least an attempt to add an
> unbiased datapoint to the discussion.
umm, the dynamic type checking in C is called a core file, and that
is if you are lucky.
marc
"Dave Roberts" <·············@re-move.droberts.com> wrote in message
···························@attbi_s54...
> Okay, so a simple question. Without cuing the full flamewar, give me the
> highlights here. Here's what I think of this issue so far: I have already
> found run-time typing to be tremendously helpful. I don't see that static
> typing actually solves a lot of *real* problems. Yes, it's nice to have the
> compiler checking up after you, but those aren't the kind of bugs that I
> find are really difficult to deal with. That said, how often do you find
> that latent bugs are present in your code, only to be discovered by an
> unsuspecting user, that static checking could have caught? The one argument
> I find that seems to hold water in my brain is that I want to have done
> everything I can do as a programmer to ensure that a user never sees
> something blow up because of something as stupid as a type error.
The biggest debate tends to be over runtime detection vs compile time
detection. There is the big fear that a large system will find some simple
type error in some innocuous place in the middle of a big job or some other
important place. Simply that this "simple" error will fall into production,
where it can be argued that with compile time checking, this would not have
happened.
But if you work in Java, you can experience the same joy by passing a null
to most anything and watch the stack traces fly, or do anything with loading
dynamic classes. Whee! Then, of course, there are the untyped Collections
which happily shatter any illusions of static typing. Generics are supposed
to help there, but all of the arguments for them I have seen amount to
syntactic sugar rather than solving typing problems. I assume you can cast an
Object Collection to a type-specific Collection at whim; I haven't looked
into it myself.
I admit that during mad coding sessions in Java, I'll beat on the
innumerable .java files until Ant comes back with a clean build, and then
test it. But the truth is, I really don't have the choice not to do that,
and just because the compiler quits whining doesn't mean a thing about my
code quality.
Nobody wants to run a large program for hours on end to have it fail over a
simple "typo", but IME, these types of errors aren't endemic to what
aggravates me when I'm writing software. Spending a day and a half on an
inheritance problem, now that's annoying (my fault, but still aggravating).
The biggest argument IMHO for dynamic typing is simply that it makes it
easier to lay out the work on your code canvas and get things working
quickly. You can design and code at a very high level and then come back
later with declarations and what not. With modern CL environments, even if
one of these errors bites you during development, you may well be able to
simply restart from higher up the call tree after you've quickly fixed the
problem.
But, to be fair, I have worked on a large enough system in CL that any real
downsides of it have been painful enough to note, and I'm so ingrained in
the strong typing of other languages that they don't pose an enormous
burden to me. "I always do it like this".
Regards,
Will Hartung
(·····@msoft.com)
In article <·························@twister.nyc.rr.com>,
Kenny Tilton <·······@nyc.rr.com> wrote:
> > Less than a week of real use. The one thing I still find difficult about
> > Lisp paren syntax is with forms that take a lists of lists as part of the
> > form syntax (think DO initializers).
>
> Or even COND. Yeah, I came up a parens shy on those for a long time,
> then once I had it down i had to rein myself in on CASE. :) Paul Graham
> mentioned at ILC2003 that he felt COND had too many parentheses. he did
> not disclose his position on whether Mozart used too many notes.
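[Ed.: for readers who haven't met it, this is the DO initializer shape being complained about — a minimal sketch, with invented variable names:]

```lisp
;; Each initializer is a (variable init-form step-form) list, and the
;; whole set of initializers is itself a list -- so the parentheses
;; pile up before any body code appears.
(do ((i 0 (+ i 1))             ; counter
     (acc '() (cons i acc)))   ; accumulated values
    ((= i 5) (nreverse acc)))  ; end test and result form
;; => (0 1 2 3 4)
```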
Actually I think COND is pretty fine, while DO annoys me no end.
But in answer to your implied question ("which parens, exactly, do you
want me to leave out"), I quite like the way Dylan's "case" macro (which
is essentially the same thing as COND) does it:
case
  player1.money <= 0
    => end-game(player1);
  player2.money <= 0
    => end-game(player2);
  otherwise
    => move(player1);
       move(player2);
end case;
-- Bruce
Bruce Hoult wrote:
> But in answer to your implied question ("which parens, exactly, do you
> want me to leave out"), I quite like the way Dylan's "case" macro (which
> is essentially the same thing as COND) does it:
>
> case
>   player1.money <= 0
>     => end-game(player1);
>   player2.money <= 0
>     => end-game(player2);
>   otherwise
>     => move(player1);
>        move(player2);
> end case;
Do you get paid by the keystroke?!:
(bIf (p (find-if 'minusp players :key 'money))
     (end-game p)
     (dolist (p players)
       (move p)))
BIF left as an exercise.
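[Ed.: Kenny leaves BIF undefined; one plausible reading of the name is a "binding IF" — an IF that binds the test result to a variable visible in the branches. A minimal sketch, purely a guess at the intended exercise:]

```lisp
;; A hypothetical BIF ("binding IF"): evaluate TEST, bind the result
;; to VAR, then evaluate THEN if the result is non-NIL, ELSE otherwise.
;; VAR is in scope in both branches.
(defmacro bif ((var test) then &optional else)
  `(let ((,var ,test))
     (if ,var ,then ,else)))

;; Usage, matching the post above: find a broke player and end the
;; game, otherwise move everyone.
;; (bif (p (find-if 'minusp players :key 'money))
;;      (end-game p)
;;      (dolist (p players) (move p)))
```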
Quibbles aside, nah, spare me the punctuation => and ;. What is close to
getting lost here is that the parentheses are no problem at all, so why
run away from them? It is a non-problem being solved.
Of course this is hard for a Dylanista to swallow since y'all bet the
ranch on c-like syntax and are now sharecropping while the sexpr
holdouts are starting to prosper, but David is a classic case: one good
week of heads-down and the parens are starting to disappear. case closed.
kenny
--
http://tilton-technology.com
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
Your Project Here! http://alu.cliki.net/Industry%20Application
From: Joe Marshall
Subject: Re: The Next Move in Programming
Date:
Message-ID: <wu6oivl6.fsf@comcast.net>
Kenny Tilton <·······@nyc.rr.com> writes:
> Bruce Hoult wrote:
>> But in answer to your implied question ("which parens, exactly, do
>> you want me to leave out"), I quite like the way Dylan's "case"
>> macro (which is essentially the same thing as COND) does it:
>> case
>> player1.money <= 0
>> => end-game(player1);
>> player2.money <= 0
>> => end-game(player2);
>> otherwise
>> => move(player1);
>> move(player2);
>> end case;
>
> Do you get paid by the keystroke?!:
I was wondering about the mismatched `=>' in line 7.
--
~jrm
In article <············@comcast.net>,
Joe Marshall <·············@comcast.net> writes:
> Kenny Tilton <·······@nyc.rr.com> writes:
>
>> Bruce Hoult wrote:
>>> But in answer to your implied question ("which parens, exactly, do
>>> you want me to leave out"), I quite like the way Dylan's "case"
>>> macro (which is essentially the same thing as COND) does it:
>>> case
>>> player1.money <= 0
>>> => end-game(player1);
>>> player2.money <= 0
>>> => end-game(player2);
>>> otherwise
>>> => move(player1);
>>> move(player2);
>>> end case;
>>
>> Do you get paid by the keystroke?!:
>
> I was wondering about the mismatched `=>' in line 7.
mismatched? otherwise seems to be a condition that always returns
true, so you have a pattern
case
[ condition
=> action ] ...
end
hs
--
Patriotism is the last refuge of the scoundrel
Samuel Johnson
Patriotism is your conviction that this country is superior to all
others because you were born in it
George Bernard Shaw
From: Joe Marshall
Subject: Re: The Next Move in Programming
Date:
Message-ID: <wu6nx5n7.fsf@comcast.net>
>>> Bruce Hoult wrote:
>>>> case
>>>> player1.money <= 0
>>>> => end-game(player1);
>>>> player2.money <= 0
>>>> => end-game(player2);
>>>> otherwise
>>>> => move(player1);
>>>> move(player2);
>>>> end case;
> In article <············@comcast.net>,
> Joe Marshall <·············@comcast.net> writes:
>>
>> I was wondering about the mismatched `=>' in line 7.
··@heaven.nirvananet (Hartmann Schaffer) writes:
> mismatched? otherwise seems to be a condition that always return
> true, so you have a pattern
Did nobody get the joke?
>>>> player1.money <= 0 => end-game(player1);
>>>> player2.money <= 0 => end-game(player2);
>>>> otherwise => move(player1);
>>>> move(player2);
--
~jrm
From: Rahul Jain
Subject: Re: The Next Move in Programming
Date:
Message-ID: <87ptcf26xq.fsf@nyct.net>
Joe Marshall <·············@comcast.net> writes:
> Did nobody get the joke?
>
>>>>> player1.money <= 0 => end-game(player1);
>>>>> player2.money <= 0 => end-game(player2);
>>>>> otherwise => move(player1);
>>>>> move(player2);
FWIW, my first impression was the same.
--
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
In article <············@comcast.net>,
Joe Marshall <·············@comcast.net> writes:
> ...
>> mismatched? otherwise seems to be a condition that always return
>> true, so you have a pattern
>
> Did nobody get the joke?
>
>>>>> player1.money <= 0 => end-game(player1);
>>>>> player2.money <= 0 => end-game(player2);
>>>>> otherwise => move(player1);
>>>>> move(player2);
sorry, i didn't
hs
--
Patriotism is the last refuge of the scoundrel
Samuel Johnson
Patriotism is your conviction that this country is superior to all
others because you were born in it
George Bernard Shaw
Hartmann Schaffer wrote:
> In article <············@comcast.net>,
> Joe Marshall <·············@comcast.net> writes:
>
>>...
>>
>>>mismatched? otherwise seems to be a condition that always return
>>>true, so you have a pattern
>>
>>Did nobody get the joke?
>>
>>
>>>>>> player1.money <= 0 => end-game(player1);
>>>>>> player2.money <= 0 => end-game(player2);
>>>>>> otherwise => move(player1);
>>>>>> move(player2);
There is no opening "<=" to match the third closing "=>".
kenny
--
http://tilton-technology.com
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
Your Project Here! http://alu.cliki.net/Industry%20Application
>>> Did nobody get the joke?
There's a difference between not getting a joke and not finding it
funny. ;-D
From: Joe Marshall
Subject: Re: The Next Move in Programming
Date:
Message-ID: <brnyx281.fsf@comcast.net>
Ari Johnson <·····@hotmail.com> writes:
>>>> Did nobody get the joke?
>
> There's a difference between not getting a joke and not finding it
> funny. ;-D
I don't mind the latter. It was the multiple emails detailing how
infix works and how `<=' is commonly used as a relational operator.
--
~jrm
From: Rahul Jain
Subject: Re: The Next Move in Programming
Date:
Message-ID: <87isi65k32.fsf@nyct.net>
Joe Marshall <·············@comcast.net> writes:
> Ari Johnson <·····@hotmail.com> writes:
>
>>>>> Did nobody get the joke?
>>
>> There's a difference between not getting a joke and not finding it
>> funny. ;-D
>
> I don't mind the latter. It was the multiple emails detailing how
> infix works and how `<=' is commonly used as a relational operator.
Maybe they were joking with you. In the meantime, I'm wondering what the
<= => delimiters should do in DefDoc. Maybe delimit math-mode? :)
--
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
In article <···················@twister.nyc.rr.com>,
Kenny Tilton <·······@nyc.rr.com> writes:
>
>
> Hartmann Schaffer wrote:
>> In article <············@comcast.net>,
>> Joe Marshall <·············@comcast.net> writes:
>>
>>>...
>>>
>>>>mismatched? otherwise seems to be a condition that always return
>>>>true, so you have a pattern
>>>
>>>Did nobody get the joke?
>>>
>>>
>>>>>>> player1.money <= 0 => end-game(player1);
>>>>>>> player2.money <= 0 => end-game(player2);
>>>>>>> otherwise => move(player1);
>>>>>>> move(player2);
>
> There is no opening "<=" to match the third closing "=>".
i didn't get it the *first* time
hs
--
Patriotism is the last refuge of the scoundrel
Samuel Johnson
Patriotism is your conviction that this country is superior to all
others because you were born in it
George Bernard Shaw
In article <·························@twister.nyc.rr.com>,
Kenny Tilton <·······@nyc.rr.com> wrote:
> Bruce Hoult wrote:
> > But in answer to your implied question ("which parens, exactly, do you
> > want me to leave out"), I quite like the way Dylan's "case" macro (which
> > is essentially the same thing as COND) does it:
> >
> > case
> >   player1.money <= 0
> >     => end-game(player1);
> >   player2.money <= 0
> >     => end-game(player2);
> >   otherwise
> >     => move(player1);
> >        move(player2);
> > end case;
>
> Do you get paid by the keystroke?!:
Nope. That code is 137 characters (assuming indentation is free, which
it is in any decent editor), or 132 if you leave off the optional "case"
at the end. The equivalent CL is 136:
(cond ((<= (money player1) 0) (end-game player1))
      ((<= (money player2) 0) (end-game player2))
      (otherwise (move player1) (move player2)))
> (bIf (p (find-if 'minusp players :key 'money))
> (end-game p)
> (dolist (p players)
> (move p)))
That is quite different code, which could equally well be written in the
same way in Dylan. Including the macro.
> Quibbles aside, nah, spare me the punctuation => and ;. What is close to
> getting lost here is that the parentheses are no problem at all, so why
> run away from them? It is a non-problem being solved.
They don't worry me either, it's just that the language with the
features I like happens to come without the parens.
> Of course this is hard for a Dylanista to swallow since y'all bet the
> ranch on c-like syntax and are now sharecropping while the sexpr
> holdouts are starting to prosper
If and when CL starts to prosper to the point that C++ and Java
programmers start to think it's their next meal ticket I'll be cheering
as loud as you, believe me. In the meantime I need a nice language that
can make stand-alone binaries and (worse) shared libraries that conform
to an API defined in C++. Gwydion Dylan and Chicken Scheme both do
that, so that's what I'm using.
-- Bruce
Kenny Tilton wrote:
>
>
> Dave Roberts wrote:
>> Did anybody else see this:
>>
>>
http://java.sun.com/developer/technicalArticles/Interviews/livschitz_qa.html
>>
>> As a guy who has been programming in Java for close to seven years now
>> and who is now trying to learn CL rabidly, I found it particularly
>> interesting. I think my conversion to CL is starting to stick, because
>> about all I could think when I was reading it was "I think Lisp solves
>> all this."
>
> I had the same thought.
>
>> mix C with C++ was sort of nice this way, though I dropped C++ for Java
>> because I found the C++ syntax just a nightmare (which is sort of funny
>> because CL syntax always has this bad rap going on ...
>
> Yeah, C++ is coding via punctuation you really have to get right or face
> an inscrutable torrent of compiler errors (well, to newbies at least),
> and people think Lisp syntax is a negative. Chya!
>
> and I'm actually finding
>> that I don't mind the parens a bit now).
>
> How long did it take?
>
>>
>> What I find that I'm really liking about Lisp is, if I want objects,
>> there they are. While I still haven't dived into CLOS, I'm expecting a
>> well-designed, powerful object system.
>
> I read somewhere it is the only OO variant that satisfies all the
> requirements specified by some OO standards group unrelated to any
> specific language. I gotta see if I can track that down, but it might
> just have been a passing observation in an NG somewhere.
>
It was the OMG spec, and it was with a hint of irony since OMG have
historically pushed deeply, annoyingly crippled languages (by their own
metrics!) as the solution for all the world's ills.
> kenny
>
>
>
David Golden wrote:
>>>What I find that I'm really liking about Lisp is, if I want objects,
>>>there they are. While I still haven't dived into CLOS, I'm expecting a
>>>well-designed, powerful object system.
>>
>>I read somewhere it is the only OO variant that satisfies all the
>>requirements specified by some OO standards group unrelated to any
>>specific language. I gotta see if I can track that down, but it might
>>just have been a passing observation in an NG somewhere.
>>
>
> It was the OMG spec, and it was with a hint of irony since OMG have
> historically pushed deeply, annoyingly crippled languages (by their own
> metrics!) as the solution for all the world's ills.
Do you happen to have a reference?
Pascal
--
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
Pascal Costanza wrote:
>> It was the OMG spec, and it was with a hint of irony since OMG have
>> historically pushed deeply, annoyingly crippled languages (by their own
>> metrics!) as the solution for all the world's ills.
>
> Do you happen to have a reference?
>
http://www.franz.com/resources/educational_resources/cooper.book.pdf
In the Cooper book, on page 1, it states
"... and ANSI CL remains the /only/ language that meets all of the
criteria set forth by the Object Management Group (OMG) for
a complete object-oriented language."
Now, *where* these criteria were set forth I'm not so sure about,
as there's no obvious "A Complete Object Oriented language is..." on the
OMG site, at least these days. But IMHO lisp would be one of the languages
where you could do pretty much everything the OMG waffle on about, if you
wanted to. Personally, I don't care much. I'm more of a relational guy
myself.
From: Torkel Holm
Subject: OMG CLOS (was: The Next Move in Programming)
Date:
Message-ID: <87n07ka0ar.fsf_-_@uib.no>
Pascal Costanza <········@web.de> writes:
> David Golden wrote:
>
>>>>What I find that I'm really liking about Lisp is, if I want objects,
>>>>there they are. While I still haven't dived into CLOS, I'm expecting a
>>>>well-designed, powerful object system.
>>>
>>>I read somewhere it is the only OO variant that satisfies all the
>>>requirements specified by some OO standards group unrelated to any
>>>specific language. I gotta see if I can track that down, but it might
>>>just have been a passing observation in an NG somewhere.
>>>
>> It was the OMG spec, and it was with a hint of irony since OMG have
>> historically pushed deeply, annoyingly crippled languages (by their own
>> metrics!) as the solution for all the world's ills.
>
> Do you happen to have a reference?
I think that the origin of this claim is the book "Basic Lisp
Techniques" by David J. Cooper.
<URL: http://www.franz.com/resources/educational_resources/cooper.book.pdf>
Here is a quote from the relevant part of the book:
ANSI CL remains the only language that meets all of the criteria set
forth by the Object Management Group (OMG) for a complete
object-oriented language.
But which OMG spec he is referring to is another question ;)
I am quite curious if anyone can track this further.
--
Torkel Holm, Norway
Torkel Holm wrote:
> But which OMG spec he is referring to is another question ;)
> I am quite curious if anyone can track this further.
>
Well, I can't. The OMG has an absurd number of "specifications",
including a Lisp<->CORBA mapping [1], but I couldn't find a direct statement
of what the OMG considers a "complete object-oriented language".
ISTR Bertrand Meyer (yes, the Eiffel guy) set out a longish list of criteria
for OO languages at some stage [2], and AFAIK he is/was involved in OMG.
So maybe they are the criteria meant - but don't take my word for it, I'm
most certainly not the OMG (I don't even trust OO! Use an RDBMS dammit!).
I guess it could be mildly amusing if he had written a list of criteria and
included a criterion that Eiffel didn't meet and Common Lisp did, though.
[1] http://www.omg.org/technology/documents/formal/lisp_language_mapping.htm
[2] http://archive.eiffel.com/doc/oosc/page.html
See contents:
Part A: The issues
1: Software quality
2: Criteria of object orientation
David Golden wrote:
> ISTR Bertrand Meyer (yes, the Eiffel guy) set out a longish list of criteria
> for OO languages at some stage [2], and AFAIK he is/was involved in OMG.
Meyer's criteria can be summarized as follows: "Must be Eiffel".
More seriously, static typing is one of his criteria, so CL doesn't
satisfy all of them.
--
Gareth McCaughan
.sig under construc
On Sun, 15 Feb 2004, Kenny Tilton wrote:
> I read somewhere it is the only OO variant that satisfies all the
> requirements specified by some OO standards group unrelated to any
> specific language. I gotta see if I can track that down
``the only language that meets all of the Gartner Group's 'must-have'
and 'should-have' criteria for object-oriented languages''
ATA Research Note, 07/24/95
quoted from:
<http://www.franz.com/resources/educational_resources/white_papers/dooverview.lhtml>
From: Cesar Rabak
Subject: Re: The Next Move in Programming
Date:
Message-ID: <403637D1.20409@acm.org>
Tibor Simko wrote:
> On Sun, 15 Feb 2004, Kenny Tilton wrote:
>
>>I read somewhere it is the only OO variant that satisfies all the
>>requirements specified by some OO standards group unrelated to any
>>specific language. I gotta see if I can track that down
>
>
> ``the only language that meets all of the Gartner Groups 'must-have'
> and 'should-have' criteria for object-oriented languages''
>
> ATA Research Note, 07/24/95
>
> quoted from:
> <http://www.franz.com/resources/educational_resources/white_papers/dooverview.lhtml>
But by now this reference has to be put in the past, hasn't it? We're
talking about a claim that is nine years old!
Also, Gartner, no matter how influential, is not technically an "OO
standards group" ;-)
--
Cesar Rabak
On Fri, 20 Feb 2004, Cesar Rabak wrote:
> But by now this reference has to be put in the past, hasn't it? We're
> talking about a claim that is nine years old!
It may sound like a lot of time, but how many OO languages out there
offer, for example, multi-method dispatch or [insert your favourite
CLOS feature here]? Possibly in an ANSI standardized way? Maybe the
past and present assessments would not differ that much, after all?
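[Ed.: for readers unfamiliar with the multi-method dispatch mentioned above, a minimal sketch with invented class and function names:]

```lisp
;; CLOS generic functions dispatch on the classes of *all* required
;; arguments, not just the first one (as single-dispatch OO does).
(defclass rock  () ())
(defclass paper () ())

(defgeneric beats-p (a b)
  (:documentation "True if A beats B."))

;; Each method is selected by the combination of both argument classes.
(defmethod beats-p ((a rock)  (b paper)) nil)
(defmethod beats-p ((a paper) (b rock))  t)

;; (beats-p (make-instance 'paper) (make-instance 'rock)) => T
```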
The temporal aspect of this issue reminds me of a recent post on Bill
Clementson's blog, quoting Grady Booch's recent talk on the future of
programming development environments:
``For those of you looking at the future of development
environments, I encourage you to go back and review some of the
Xerox documentation for InterLisp-D.''
--Grady Booch, EclipseCon 2004, 2-5 February 2004
<http://home.comcast.net/~bc19191/blog/040205.html>
On 2004-02-15 03:39:41, Dave Roberts wrote:
> I think my conversion to CL is starting to stick, because about all I could
> think when I was reading it was "I think Lisp solves all this."
And a fan of another language would have thought something similar about
his language.
Stefan Scholl wrote:
> On 2004-02-15 03:39:41, Dave Roberts wrote:
>
>
>>I think my conversion to CL is starting to stick, because about all I could
>>think when I was reading it was "I think Lisp solves all this."
>
>
> And a fan of another language would have thought something similar about
> his language.
I'm a C lover and a C++/Java hater. I think a language should have a
simple syntax. Lisp syntax is probably much simpler than C's, yet
incredibly powerful. C++ and Java have gotten lost in their own syntax.
javuchi wrote:
> Stefan Scholl wrote:
>> On 2004-02-15 03:39:41, Dave Roberts wrote:
>>
>>
>>>I think my conversion to CL is starting to stick, because about all I
>>>could think when I was reading it was "I think Lisp solves all this."
>>
>>
>> And a fan of an other language would have thought similar about his
>> language.
>
> I'm a C lover and a C++/Java hater. I think a language should have a
> simple syntax. Lisp syntax is probably much simpler than C's, yet
> incredibly powerful. C++ and Java have gotten lost in their own syntax.
Actually, I think of Java as being much closer to C on the syntax complexity
scale. I have no issues with C, like Java a lot, and hate C++.
-- Dave