Just got back from ILC 2003 and I thought I should post some of my notes.
My impressions of the conference are mixed. The lead organizer fell ill
and despite valiant efforts from some of the other organizers, the
feeling of mild chaos was constant. Throughout the conference a complete
schedule (one that included presentation titles) was never posted; some
of the speakers simply did not show up; others did not receive their
schedules until the day of their presentation. Nevertheless the quality of
some of the talks did make up for the confusion.
Christian Queinnec's (http://www-spi.lip6.fr/~queinnec/WWW/Queinnec.html)
tutorial on continuations was outstanding. Queinnec started by
explaining basic CPS (continuation-passing style). He then introduced
call/cc and proceeded to
contrast Scheme continuations with their C and Lisp counterparts,
remarking that Scheme continuations have indefinite extent while in Lisp
and C continuations can only be activated while they are still present
on the stack (i.e., they have dynamic extent). He pointed out that, when
coupled with parallel programming, continuations pose some interesting
challenges, such as the possibility of a critical region being exited more
than once.
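For readers who missed the tutorial, the flavor of basic CPS can be sketched in a few lines of Common Lisp (a toy example of my own, not Queinnec's): every function takes an extra continuation argument and hands its result to it instead of returning.

```lisp
;; Toy CPS sketch: each function receives an explicit continuation K
;; and calls it with the result instead of returning.
(defun add/k (a b k) (funcall k (+ a b)))
(defun mul/k (a b k) (funcall k (* a b)))

(defun hypot/k (x y k)
  (mul/k x x
         (lambda (x2)
           (mul/k y y
                  (lambda (y2)
                    (add/k x2 y2
                           (lambda (s)
                             (funcall k (sqrt s)))))))))

;; (hypot/k 3 4 #'identity) => 5.0
```

Once a program is in this form the continuation is an ordinary first-class value; Scheme's call/cc simply hands you that K directly.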
Jerry Sussman's (http://www.swiss.ai.mit.edu/~gjs/gjs.html) keynote was
awe-inspiring. Having had my brain blown by SICP, I was very excited by
the prospect of attending his presentation and I was not disappointed.
Sussman's passion these days is teaching and that was obvious from his
talk. Sussman contends that the greatest achievement of 20th century
computer science is not the computer or the network. Instead he argues
that the development of languages that can be used to precisely specify
processes or algorithms is of greater significance. In their efforts to
devise notations that can be used to explain to computers how to do things
like square roots or Lagrange equations, computer scientists ended up
creating new kinds of languages for human expression. Such languages
happen to be executable by machines, but Jerry feels this is of
secondary importance compared to their use as tools for precise human
reasoning and communication, particularly in teaching. He has
investigated this insight while teaching classical mechanics at MIT.
The idea of programming languages in general and Scheme in particular as
tools for expressing processes was already present in SICP and seems to
be further explored in Jerry's new book, "Structure and Interpretation
of Classical Mechanics".
Gregor Kiczales's (of MOP fame) (http://www.cs.ubc.ca/~gregor/)
presentation on Aspect Oriented Programming (AOP) was also fascinating.
Kiczales suggested that the Lisp/Scheme community, because of its love
for beautiful code, is particularly well suited to explore what he
considers to be the brand new design territory introduced by AOP. I had
no previous exposure to AOP so I apologize in advance for any
mischaracterizations. Hopefully any blunderings will be irritating
enough for someone to correct them. AOP introduces the notion of join
points: basically locations in a running program where control leaves a
method to enter another method. Using a declarative language to define
sets of join points based on their static or dynamic properties
(including method signatures and call-graph information), AOP allows the
programmer to associate code to be executed before, after, or around any
join points in these sets. This allows the modularization of, well,
aspects of the code that cross-cut modules and that would otherwise be
sprinkled throughout the program. Typical examples of such aspects
include logging, transaction management, locking and display updating.
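For Lispers, the closest familiar analogue is probably CLOS auxiliary methods, which attach before/after/around advice to a single generic function (AspectJ's pointcuts generalize this by quantifying over whole sets of join points at once). A hypothetical sketch, with made-up names:

```lisp
(defclass account ()
  ((balance :initform 0 :accessor balance)))

(defgeneric withdraw (account amount))

;; The core behavior knows nothing about logging or checking.
(defmethod withdraw ((a account) amount)
  (decf (balance a) amount))

;; The logging "aspect" lives outside the core method:
(defmethod withdraw :before ((a account) amount)
  (format t "~&about to withdraw ~A~%" amount))

;; An :around method can veto or wrap the call entirely:
(defmethod withdraw :around ((a account) amount)
  (if (> amount (balance a))
      (error "insufficient funds")
      (call-next-method)))
```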
Kiczales mentioned that one typical initial use of AspectJ (an
instantiation of AOP for Java) is in debugging: users adopt AspectJ
simply as a debugging aid, to add tracing to relevant portions of
development code, while still using the vanilla Java compiler for
production builds. Later, as the usefulness of the tool becomes clear,
production uses start to emerge. For more practical information on AOP,
Kiczales recommends "AspectJ in Action"
(http://www.manning.com/laddad/).
Espen Vestre gave a very interesting talk on the use of Lisp in the
real world. His employer, Netfonds, a Norwegian online broker, uses
Lisp daily in the majority of their systems, including a cross-platform
(Linux, Windows, Mac) desktop GUI application developed in LispWorks and
used by Netfonds clients to monitor stock performance and execute trades.
Antonio Leitão (http://www.evaluator.pt) presented Linj, a fascinating
Lisp-to-Java compiler. Using some clever heuristics, Linj is able to
infer type information from Lisp code and use that information to
generate readable Java code. Additionally, full access to the Java API is
possible. Antonio developed this tool in response to some of his
clients' requirement that solutions be implemented in Java. He has been
using this tool for several years and has been able to achieve CL levels
of productivity while still keeping the PHBs happy.
These are only some of the talks. Hopefully other attendees will
provide more information as well as correct any errors or omissions.
-pp
Pedro Pinto wrote:
>
> Just got back from ILC 2003 and I thought I should post some of my notes.
>
> My impressions on the conference are mixed. The lead organizer fell ill
> and despite valiant efforts from some of the other organizers, the
> feeling of mild chaos was constant. Throughout the conference a complete
> schedule (one that included presentation titles) was never posted, some
> of the speakers simply did not show up, others did not receive their
> schedules until the day of the presentation. Nevertheless the quality of
> some of the talks did make up for the confusion.
I agree about the valiant efforts. I think the missing-speaker
phenomenon was a rare exception, tho I suppose it depends on which talks
one went looking for. Everyone I looked for showed. I do know they
squeezed in an extra thread today (and yesterday) to accommodate an
/overbooking/ of speakers. Talks started and ended on time. The "mild
chaos" was kinda exciting.
As one who was (just a little) closer to what went down, what /really/
happened is that Heow and Rusty and Liana and a bunch of Franz people
(and apologies to the ones I do not know about) pulled off a fucking
miracle. And it was a miracle precisely because the lead organizer had made
the conference happen (especially that great roster of speakers) almost
single-handedly. When he fell ill, lots of things (like the
titles/abstracts of the talks) had no backup.
I cannot compare with other conferences because the Lisp jobbies are the
only ones I attend (tho RoboCup may be in my future), but I think the
relatively few "bumps" were the kinda thing you might see in the best of
conferences.
I want to emphasize that I have no problem with what you reported, I
just wanted to weigh in with a "cup 90% full" assessment. Nice report on
the speakers, btw.
kenny
ps. We need to hear Anton's assessment of McCarthy's remarks. I /think/
Anton may not be rubbing my nose in the lambda calculus any more when
the practical vs ivory-tower debate starts up next. I /thought/ I heard
McCarthy say the lambda calculus is the wrong model for a computer
program. True that? :)
pps. John McCarthy actually used my laptop for ten minutes to surf the
Web. I am going to be unbearable for weeks.
k
--
clinisys, inc
http://www.tilton-technology.com/
---------------------------------------------------------------
"[If anyone really has healing powers,] I would like to call
them about my knees."
-- Tenzin Gyatso, the Fourteenth Dalai Lama
> ps. We need to hear Anton's assessment of McCarthy's remarks. I /think/
> Anton may not be rubbing my nose in the lambda calculus any more when
> the practical vs ivory-tower debate starts up next. I /thought/ I heard
> McCarthy say the lambda calculus is the wrong model for a computer
> program. True that? :)
>
Note: I was NOT at ILC:
Arguably, the pi or perhaps the new "fusion" calculus might ultimately
prove more useful,
given the networked nature of modern computing. A pass/fail for a
modern process calculus is whether it can contain the lambda calculus
anyway, as the pi and fusion calculi do.
http://user.it.uu.se/~victor/tr/fusion.shtml
David Golden wrote:
> > ps. We need to hear Anton's assessment of McCarthy's remarks. I /think/
> > Anton may not be rubbing my nose in the lambda calculus any more when
> > the practical vs ivory-tower debate starts up next. I /thought/ I heard
> > McCarthy say the lambda calculus is the wrong model for a computer
> > program. True that? :)
> >
>
> Note: I was NOT at ILC:
>
> Arguably, the pi or perhaps the new "fusion" calculus might ultimately
> prove more useful, given the networked nature of modern computing.
That may very well be true, particularly for networked applications, but
it's outside the scope of what was being discussed at the conference in this
case. My assessment of the exchange in question is as follows: Stavros
Macrakis (I think) asked John McCarthy a question relating to his awareness
of lambda calculus when inventing Lisp - I don't remember the exact details
of the question. McCarthy responded and explained that although he had a
copy of Church's book, he hadn't read it all - I think he might have used
the word "skimmed". McCarthy then went on to say that if he had read it
more thoroughly, he might have been tempted to use more of it in the design
Lisp, but that this would have been a mistake.
I covered this in more detail in my earlier reply in this thread, but in
short, I think McCarthy's point was that if he had implemented a
straightforward normal-order lambda calculus interpreter, without the kinds
of extensions taken for granted today (types, constants, direct recursion,
etc.), that this would have been a mistake. It certainly wouldn't have been
much like Lisp.
Anton
Kenny Tilton wrote:
> ps. We need to hear Anton's assessment of McCarthy's remarks. I /think/
> Anton may not be rubbing my nose in the lambda calculus any more when
> the practical vs ivory-tower debate starts up next.
Heh - nice try! :)
> I /thought/ I heard McCarthy say the lambda calculus is the wrong model
> for a computer program. True that? :)
McCarthy was explicitly talking about the original lambda calculi defined by
Church, with normal-order reduction and with no extensions, which is more or
less what existed at the time Lisp was invented. He's correct that this
generally makes a terrible basis for a practical programming language.
Other than languages used strictly for theoretical exploration, no modern
functional language implements this model.
At the time Lisp was invented, the well-known variations on lambda calculus,
including those that modern functional languages are based on, didn't
exist[1]. This means that if McCarthy had been more familiar with
lambda calculus and tempted to use it more fully in the design of Lisp, he
would either have gone down the wrong path for a practical programming
language (as he said), or he would have been forced to come up with some
variations on the pure untyped normal-order lambda calculus to address
issues like types, efficiency of reduction/evaluation, and the ability to do
direct recursion.
McCarthy did say that he didn't just borrow lambda syntax from lambda
calculus - that he also borrowed the "meaning of lambda expressions as
giving anonymous functions" (my transcription, may not be perfectly
accurate). Given these borrowed features, including functional abstraction
and application, the result was an implementation of 'lambda' which passes
the "duck test": it walks & quacks like one, and in modern terms, can be
described as a lambda calculus. Herbert Stoyan in [2] described one of the
original implementations of EVAL as "a substituting call-by-name realization
of an applied lambda-calculus". I leave it as an exercise for the reader to
verify whether this is an accurate description, based on the EVAL source
provided in the reference. One point of divergence, of course, is that it
originally lacked "correct" lexical closures, although dynamic variable
scope did a good job of simulating the desired semantics in many scenarios.
Note that I haven't been saying that Lisp is *only* a lambda calculus. I'm
saying that the Lisp core, specifically the definition of LAMBDA,
constitutes a core lambda calculus. Common Lisp's lambdas, with their
lexical scoping, constitute a call-by-value applied lambda calculus. This
is demonstrable with code, like the examples I've previously given and
demonstrated, so it's hardly controversial.
Anton
[1] Actually, Church opened the way for the modern functional language
approach of treating numbers specially - in his 1941 book, he defines what
are essentially "macros", shorthand for numbers and mathematical operations
like addition. These were precursors to the way in which modern functional
languages extend lambda calculus with the inclusion of types such as
numbers, and built-in mathematical operations such as addition. This
approach could have been used to avoid the inefficiencies of e.g. Church
numerals, and take advantage of the machine's capabilities in the way
McCarthy said he wanted to do.
[2] http://www8.informatik.uni-erlangen.de/html/lisp/mcc91.html
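To see concretely why treating numbers specially matters, here is a small sketch (mine, not from the talk) of Church numerals in Common Lisp: representing N as N-fold function composition is elegant, but hopeless as machine arithmetic.

```lisp
;; Church numeral N = a function composing F with itself N times.
(defparameter zero
  (lambda (f) (declare (ignore f)) (lambda (x) x)))

(defun succ (n)
  (lambda (f)
    (lambda (x)
      (funcall f (funcall (funcall n f) x)))))

(defun church->int (n)
  "Recover a machine integer by counting applications of 1+."
  (funcall (funcall n #'1+) 0))

;; (church->int (succ (succ zero))) => 2
```

Even addition on such numerals costs work proportional to the numbers involved, which is exactly the inefficiency native machine integers avoid.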
Anton van Straaten wrote:
> Kenny Tilton wrote:
>
>>ps. We need to hear Anton's assessment of McCarthy's remarks. I /think/
>>Anton may not be rubbing my nose in the lambda calculus any more when
>>the practical vs ivory-tower debate starts up next.
>
>
> Heh - nice try! :)
>
>
>>I /thought/ I heard McCarthy say the lambda calculus is the wrong model
>>for a computer program. True that? :)
>
>
> McCarthy was explicitly talking about the original lambda calculi...
OK, I don't know any of that highbrow stuff, I am just an applications
guy. My point is (and I welcome correction from others) that as McCarthy
was sailing over my feeble mind I could have sworn I heard him refer to
driving considerations like the nature of the hardware.
Then there was that funny moment when you tried to get away with
"thousands of extra copies..." and he said (what?) something like, "well
it's potentially infinite so you should have said something like a
googolplex", to which you graciously assented.
My only point is that Peter's new book now seems to have a very
propitious title: Practical Lisp.
Did I mention that McCarthy used my laptop for ten minutes to surf the
Web? I should confess I also let him down: no SSH so he could get to his
mail. :(
kenny
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
From: Shriram Krishnamurthi
Subject: Re: John "Practical" McCarthy? [was Re: ILC 2003: some impressions]
Date:
Message-ID: <w7d8ynkpuvr.fsf@cs.brown.edu>
Kenny Tilton <·······@nyc.rr.com> writes:
> My only point is that Peter's new book now seems to have a very
> propitious title: Practical Lisp.
Peter who?
Shriram
From: Artie Gold
Subject: Re: John "Practical" McCarthy? [was Re: ILC 2003: some impressions]
Date:
Message-ID: <3F8F65E3.7030800@austin.rr.com>
Shriram Krishnamurthi wrote:
> Kenny Tilton <·······@nyc.rr.com> writes:
>
>
>>My only point is that Peter's new book now seems to have a very
>>propitious title: Practical Lisp.
>
>
> Peter who?
>
Seibel.
http://www.apress.com/book/bookDisplay.html?bID=237
Google is your friend!
[Sorry, couldn't resist. ;-) ;-)]
--ag
--
Artie Gold -- Austin, Texas
Oh, for the good old days of regular old SPAM.
Shriram Krishnamurthi wrote:
> Kenny Tilton <·······@nyc.rr.com> writes:
>
>
>>My only point is that Peter's new book now seems to have a very
>>propitious title: Practical Lisp.
>
>
> Peter who?
Sorry, I was too "in house". Peter has been pestering CLL with Deep
Questions on Lisp (his "new" language) for months; I thought I could get
away with some shorthand.
The book looks dandy. I only got to look at page one of his Loop doc
before dashing out to watch the Ghost of Babe stomp Pedro "Winless
against the Yanks, undefeated against 72 year-old coaches" Martinez, but
on first glance it looks like a breakthru. Page one certainly seems to
have found a nice taxonomy by which to decompose the mudball of LOOP
into manageable bits.
kenny
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
Kenny Tilton wrote:
>
>
> Shriram Krishnamurthi wrote:
>
> > Kenny Tilton <·······@nyc.rr.com> writes:
> >
> >
> >>My only point is that Peter's new book now seems to have a very
> >>propitious title: Practical Lisp.
> >
> >
> > Peter who?
>
> Sorry, I was too "in house".
...
> The book looks dandy. I only got to look at page one of his Loop doc
> before dashing out to watch the Ghost of Babe stomp Pedro "Winless
> against the Yanks, undefeated against 72 year-old coaches" Martinez,
To mitigate this additional extreme in-houseness, Shriram might want to know
that the latter sentence appears to have something to do with the American
version of cricket... :)
Anton
"Anton van Straaten" <·····@appsolutions.com> wrote in message
····················@newsread2.news.atl.earthlink.net...
> Kenny Tilton wrote:
> >
> >
> > Shriram Krishnamurthi wrote:
> >
> > > Kenny Tilton <·······@nyc.rr.com> writes:
> > >
> > >
> > >>My only point is that Peter's new book now seems to have a very
> > >>propitious title: Practical Lisp.
> > >
> > >
> > > Peter who?
> >
> > Sorry, I was too "in house".
> ...
> > The book looks dandy. I only got to look at page one of his Loop doc
> > before dashing out to watch the Ghost of Babe stomp Pedro "Winless
> > against the Yanks, undefeated against 72 year-old coaches" Martinez,
>
> To mitigate this additional extreme in-houseness, Shriram might want to
know
> that the latter sentence appears to have something to do with the American
> version of cricket... :)
Surely rounders? :)
(unless I've gotten hold of completely the wrong stick)
alex
In article <···············@newsread2.news.atl.earthlink.net>,
"Anton van Straaten" <·····@appsolutions.com> wrote:
> > The book looks dandy. I only got to look at page one of his Loop doc
> > before dashing out to watch the Ghost of Babe stomp Pedro "Winless
> > against the Yanks, undefeated against 72 year-old coaches" Martinez,
>
> To mitigate this additional extreme in-houseness, Shriram might want to know
> that the latter sentence appears to have something to do with the American
> version of cricket... :)
Strangely enough, that's how I explain cricket to visiting americans.
There are two bases instead of four, you keep batting until you're out
instead of in round-robin (as in rounders?), and there are two innings
instead of nine. Most of the rest of the rules are the same or similar.
Except of course for the fact that more than half of the time you play
for five days and there is no winner, which seems to be a totally
incomprehensible concept to americans who will go to any lengths to find
a sudden death way of obtaining a winner.
-- Bruce
Bruce Hoult wrote:
> In article <···············@newsread2.news.atl.earthlink.net>,
> "Anton van Straaten" <·····@appsolutions.com> wrote:
>
>
>>>The book looks dandy. I only got to look at page one of his Loop doc
>>>before dashing out to watch the Ghost of Babe stomp Pedro "Winless
>>>against the Yanks, undefeated against 72 year-old coaches" Martinez,
>>
>>To mitigate this additional extreme in-houseness, Shriram might want to know
>>that the latter sentence appears to have something to do with the American
>>version of cricket... :)
>
>
> Strangely enough, that's how I explain cricket to visiting americans.
> There are two bases instead of four, you keep batting until you're out
> instead of in round-robin (as in rounders?), and there are two innings
> instead of nine. Most of the rest of the rules are the same or similar.
>
> Except of course for the fact that more than half of the time you play
> for five days and there is no winner,...
Baseball might better be described as the working class version of the
foo-foo hoity-toity crowd's cricket. Originally even pro baseballers
were looked down on as scruffy. Maybe it started with people who only
had a couple of hours to get in a game after work and before the sun
went down. Without lights, in the latitudes where the game was developed,
fly balls kinda disappear into the murk for a while after about 8:45.
> which seems to be a totally
> incomprehensible concept to americans who will go to any lengths to find
> a sudden death way of obtaining a winner.
Jeez, at least we keep playing the game until we have a winner. If FIFA
ran baseball the teams would switch to a home run derby after 10 innings.
:)
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
Bruce Hoult wrote:
>
> Except of course for the fact that more than half of the time you play
> for five days and there is no winner, which seems to be a totally
> incomprehensible concept to americans who will go to any lengths to find
> a sudden death way of obtaining a winner.
Most of 'em just aren't used to a game that _really_ "isn't over
'til it's over." Probably has something to do with TV and radio
stations with contractual obligations to show this-or-that at
specific times refusing to carry games they can't schedule.
Bear
Kenny Tilton wrote:
> Page one [of Peter Seibel's book] certainly seems to have found a
> nice taxonomy by which to decompose the mudball of LOOP
> into manageable bits.
Was it McCarthy who said something about LOOP not conforming to some basic
property of Lisp syntax or perhaps semantics? Can anyone remember the
details?
Anton
From: Duane Rettig
Subject: Re: John "Practical" McCarthy? [was Re: ILC 2003: some impressions]
Date:
Message-ID: <4fzhrojgu.fsf@beta.franz.com>
"Anton van Straaten" <·····@appsolutions.com> writes:
> Kenny Tilton wrote:
> > Page one [of Peter Seibel's book] certainly seems to have found a
> > nice taxonomy by which to decompose the mudball of LOOP
> > into manageable bits.
>
> Was it McCarthy who said something about LOOP not conforming to some basic
> property of Lisp syntax or perhaps semantics? Can anyone remember the
> details?
I wrote part of it down, but didn't get it all - he wondered if
loop violated principles of [lisp semantics?]
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
Duane Rettig wrote:
> "Anton van Straaten" <·····@appsolutions.com> writes:
> > Was it McCarthy who said something about LOOP not conforming to some
basic
> > property of Lisp syntax or perhaps semantics? Can anyone remember the
> > details?
>
> I wrote part of it down, but didn't get it all - he wondered if
> loop violated principles of [lisp semantics?]
Thanks - I guess we'll have to wait for the video!
Anton
From: Christian Nybø
Subject: Re: John "Practical" McCarthy? [was Re: ILC 2003: some impressions]
Date:
Message-ID: <87ismlynjx.fsf@nybo.no>
"Anton van Straaten" <·····@appsolutions.com> writes:
> Kenny Tilton wrote:
> > Page one [of Peter Seibel's book] certainly seems to have found a
> > nice taxonomy by which to decompose the mudball of LOOP
> > into manageable bits.
>
> Was it McCarthy who said something about LOOP not conforming to some basic
> property of Lisp syntax or perhaps semantics? Can anyone remember the
> details?
I recall it as the interpreter having to look at the whole sexp rather
than just the car of it, but I don't recall whether he meant to look
at the whole of the sexp in order to decide whether it was the simple
or complex loop form. He was comparing loop and #'+, I think.
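For what it's worth, the point is easy to illustrate: with an ordinary operator like +, the car alone determines the evaluation rule, but with LOOP you must scan the whole form to know which variant you have.

```lisp
;; Ordinary application: the car (+) fully determines how the
;; rest of the form is treated.
(+ 1 2 3)                      ; => 6

;; Simple LOOP: the body is just forms, repeated until RETURN.
(loop (print 'hi) (return 'done))

;; Extended LOOP: past the car lies a whole keyword sub-language,
;; recognizable only by looking at the rest of the form.
(loop for i below 3 collect i) ; => (0 1 2)
```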
--
chr
Kenny Tilton wrote:
>
>
> Anton van Straaten wrote:
>
> > Kenny Tilton wrote:
> >
> >>ps. We need to hear Anton's assessment of McCarthy's remarks. I /think/
> >>Anton may not be rubbing my nose in the lambda calculus any more when
> >>the practical vs ivory-tower debate starts up next.
> >
> >
> > Heh - nice try! :)
> >
> >
> >>I /thought/ I heard McCarthy say the lambda calculus is the wrong model
> >>for a computer program. True that? :)
> >
> >
> > McCarthy was explicitly talking about the original lambda calculi...
>
> OK, I don't know any of that highbrow stuff, I am just an applications
> guy. My point is (and I welcome correction from others) but as McCarthy
> was sailing over my feeble mind I could have sworn I heard him refer to
> driving considerations like the nature of the hardware.
Yes, he did, and that's exactly the sort of thing I was referring to in my
previous post. McCarthy's emphasis on pragmatically making use of the
features of the computer is admirable, and that's exactly what modern
functional languages do. I've certainly never argued that anyone should do
otherwise.
Modern functional languages, say Haskell or ML, are considered to be about
as close an implementation-plus-extension of lambda calculus as you can get
and still have a practical language. However, unlike pure lambda calculus, they
don't model numbers as Church or Scott numerals; they don't require you to
use the Y combinator for recursion; and they don't use the powerful and
clean but not-so-practical normal-order reduction model; etc. McCarthy
mentioned all three of these things as reasons that implementing a lambda
calculus interpreter would have been a mistake. And it certainly would have
been a mistake, if he had done such an implementation without optimizing any
of these things. As the first person implementing such a language, he
probably would not have found it obvious up-front which optimizations were
needed, or how to implement them, adding to the "mistake" of going down that
road so early on.
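The direct-recursion point is concrete: in a pure call-by-value lambda calculus you need a fixed-point combinator instead of a recursive definition. A sketch in Common Lisp (the applicative-order Z combinator, since the classic Y diverges under eager evaluation):

```lisp
;; Z combinator: the call-by-value fixed-point combinator.
(defun z (f)
  ((lambda (x) (funcall f (lambda (n) (funcall (funcall x x) n))))
   (lambda (x) (funcall f (lambda (n) (funcall (funcall x x) n))))))

;; Factorial without direct recursion: the body never names itself,
;; it only calls the SELF argument supplied by Z.
(defparameter fact
  (z (lambda (self)
       (lambda (n)
         (if (zerop n)
             1
             (* n (funcall self (1- n))))))))

;; (funcall fact 5) => 120
```

Compared with a plain recursive DEFUN, the contortion speaks for itself, which is McCarthy's point about building in direct recursion.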
But I don't see how any of this really relates to previous discussions we've
had. I've never said that anyone should use pure lambda calculus
interpreters on practical projects, or that Lisp should have been a lambda
calculus interpreter. I have pointed out that many of the functional
features found in Scheme, and functional languages in general, were
important to McCarthy. He states this in many ways in his own writing, and
in his talk yesterday, e.g. in connection with studying computability and
proving the correctness of code.
Based on this, I've said that you can't reasonably make claims about the
nature of Lisp in general that exclude these academic considerations. I
also note that these academic considerations have strong practical
implications which you benefit from whether you realize it or not - for
example, you now enjoy the use of safe lexical closures, which is a feature
that arises from having LAMBDA ultimately be implemented in a way that
matches the lambda calculus. Ignoring or being unaware of theoretical
issues in programming languages, in general, is about as successful as
trying to ignore physical laws of the universe, and the entire funarg
problem in Lisp was an example of this.
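The practical payoff mentioned above is easy to demonstrate: with lexical scope a closure remembers its defining environment, whereas with dynamically scoped variables the same idiom exhibits the upward funarg problem. A minimal sketch:

```lisp
;; Lexical scope: the closure captures X from its defining environment.
(defun make-adder (x)
  (lambda (y) (+ x y)))

(funcall (make-adder 10) 5)    ; => 15

;; Simulating the old dynamic scoping with a special variable: the
;; returned function no longer carries its environment with it.
(defvar *x*)

(defun make-adder* ()
  (lambda (y) (+ *x* y)))

(let ((*x* 10))
  (defparameter *add10* (make-adder*)))

;; (funcall *add10* 5) now signals an error: *X* is unbound once the
;; LET binding has been unwound -- the upward funarg problem.
```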
> Then there was that funny moment when you tried to get away with
> "thousands of extra copies..." and he said (what?) something like,
> "well it's potentially infinite so you should have said something like a
> googolplex", to which you graciously assented.
Just for the record, I didn't "try to get away with" anything. I was
referring to specific examples that I had run comparisons on, related to the
animated LC reduction display I used in my talk, but I didn't phrase my
statement precisely enough.
Anton
From: Bulent Murtezaoglu
Subject: Re: John "Practical" McCarthy? [was Re: ILC 2003: some impressions]
Date:
Message-ID: <87r81cx6ef.fsf@acm.org>
[...]
KT> Did I mention that McCarthy used my laptop for ten minutes to
KT> surf the Web? I should confess I also let him down: no SSH so
KT> he could get to his mail. :(
Still on windows Kenny? Get putty for the next time a celebrity needs
ssh. (it would have taken about 30 sec. to google for it and get it,
another 30 maybe to run it if you don't have crapware virus/firewall stuff.)
cheers,
BM
Bulent Murtezaoglu wrote:
> [...]
>
> KT> Did I mention that McCarthy used my laptop for ten minutes to
> KT> surf the Web? I should confess I also let him down: no SSH so
> KT> he could get to his mail. :(
>
> Still on windows Kenny?
Well, it's got my favorite CL IDE, and that is what matters.
> ...Get putty for the next time a celebrity needs
> ssh. (it would have taken about 30 sec. to google for it and get it,
> another 30 maybe to run it if you don't have crapware virus/firewall stuff.)
Sitting in a hotel conference room? I could not even flame CLL from
there. :)
And I think I had putty on my laptop from a recent encounter with the
Land of X, but could not remember the phenomenally descriptive name.
:)
kenny
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
Kenny Tilton writes:
> Did I mention that McCarthy used my laptop for ten minutes to surf the
> Web? I should confess I also let him down: no SSH so he could get to
Wow, the Master Himself used your laptop. You should put a
commemorative plaque on it :-)
Paolo
--
Paolo Amoroso <·······@mclink.it>
Pedro Pinto <······@cs.cmu.edu> wrote in message news:<···················@twister.southeast.rr.com>...
> Just got back from ILC 2003 and I thought I should post some of my notes.
Are there any estimates of how many people attended the conference?
Thanks for your impressions,
Paolo
Pedro Pinto <······@cs.cmu.edu> wrote:
> Paolo Amoroso wrote:
> > Are there any estimates of how many people attended the conference?
>
> I would guess around 100 or so.
> -pp
I thought it was more like 200.
--
Geoffrey S. Knauth | http://knauth.org/gsk
In article <···················@twister.southeast.rr.com>,
Pedro Pinto <······@cs.cmu.edu> wrote:
> Just got back from ILC 2003 and I thought I should post some of my notes.
>
> My impressions on the conference are mixed. The lead organizer fell ill
> and despite valiant efforts from some of the other organizers, the
> feeling of mild chaos was constant. Throughout the conference a complete
> schedule (one that included presentation titles) was never posted, some
> of the speakers simply did not show up, others did not receive their
> schedules until the day of the presentation. Nevertheless the quality of
> some of the talks did make up for the confusion.
All the more reason to thank the people who helped run the conference!
A BIG THANKS to them!!!!!
I had mixed feelings about some of the talks - **some** of
the FP and some of the Scheme stuff was pretty boring
(FP) or unconvincing (Scheme). Anyway, it is always good
to meet some Lisp people.
Did I already say a BIG THANKS to the people who made
the conference possible? ;-)
>>>>> "Pedro" == Pedro Pinto <······@cs.cmu.edu> writes:
Pedro> Kiczales suggested that the Lisp/Scheme community, because of
Pedro> its love for beautiful code, is particularly well suited to
Pedro> explore what he considers to be the brand new design territory
Pedro> introduced by AOP.
When I read the ACM theme issue on AOP, I got the distinct feeling
that most of what was going on in AOP was variations on CLOS features
such as multiple inheritance and generic functions (ie. the idea that
a method is not something "inside" an object).
Was Kiczales challenged on the point of AOP being something brand new?
------------------------+-----------------------------------------------------
Christian Lynbech | christian ··@ defun #\. dk
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
- ·······@hal.com (Michael A. Petonic)
Christian Lynbech <·················@ericsson.com> writes:
>>>>>> "Pedro" == Pedro Pinto <······@cs.cmu.edu> writes:
>
> Pedro> Kiczales suggested that the Lisp/Scheme community, because of
> Pedro> its love for beautiful code, is particularly well suited to
> Pedro> explore what he considers to be the brand new design territory
> Pedro> introduced by AOP.
>
> When I read the ACM theme issue on AOP, I got the distinct feeling
> that most of what was going on in AOP was variations on CLOS features
> such as multiple inheritance and generic functions (ie. the idea that
> a method is not something "inside" an object).
``What has been is what will be,
and what has been done is what will be done
there is nothing new under the sun.''
-- traditionally attributed to Solomon in Ecclesiastes 1:9
I have been to a few seminars given by people who research
programming languages and are familiar with both CLOS and AOP.
All pretty much agree that AOP is an old idea in new clothes.
Joe Marshall wrote:
> Christian Lynbech <·················@ericsson.com> writes:
>
>
>>>>>>>"Pedro" == Pedro Pinto <······@cs.cmu.edu> writes:
>>
>>Pedro> Kiczales suggested that the Lisp/Scheme community, because of
>>Pedro> its love for beautiful code, is particularly well suited to
>>Pedro> explore what he considers to be the brand new design territory
>>Pedro> introduced by AOP.
>>
>>When I read the ACM theme issue on AOP, I got the distinct feeling
>>that most of what was going on in AOP was variations on CLOS features
>>such as multiple inheritance and generic functions (ie. the idea that
>>a method is not something "inside" an object).
>
>
> ``What has been is what will be,
> and what has been done is what will be done
> there is nothing new under the sun.''
> -- traditionally attributed to Solomon in Ecclesiastes 1:9
>
> I have been to a few seminars given by people that research
> programming languages and are familiar with both CLOS and AOP.
> All pretty much agree that AOP is an old idea in new clothes.
Not quite. For example, you can't define before/after/around methods for
more than one class at a time. Something like this:
(defmethod print :after
    ((object (or 'myclass1 'myclass2 (eql myobject))))
  ...)
...which would mean that this method is defined for any object that is an
instance of myclass1 or myclass2, or that is eql to myobject.
And then, there are even fancier so-called pointcut expressions that,
for example, reason about the control flow of a program. Like this:
(defmethod move-point :after
    ((object (and 'myclass1 (not (cflowbelow move-line)))))
  (fire-change-event ...))
...which means that a change event should be fired after a point has
changed its coordinates, but only when it is not called by move-line (in
which case fire-change-event is supposed to be called after move-line).
Of course, these things are easy to add with some macrology on top of
CLOS, but if one wants such macros, someone has to write them. You could
perhaps characterize the goal of AOP to find a good set of such macros.
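To make the "macrology" point concrete, here is a minimal sketch of one such macro. DEFMETHOD/OR and its (or ...) specializer syntax are invented for illustration; they are not part of CLOS:

```lisp
;; Hypothetical sketch: expand one definition into a separate
;; DEFMETHOD for each class named in the (or ...) specializer.
(defmacro defmethod/or (name qualifier ((var (_ &rest classes)) &rest args)
                        &body body)
  (declare (ignore _))
  `(progn
     ,@(loop for class in classes
             collect `(defmethod ,name ,qualifier
                        ((,var ,class) ,@args)
                        ,@body))))

;; Usage (MYCLASS1/MYCLASS2 are assumed to exist):
;; (defmethod/or notify :after ((object (or myclass1 myclass2)))
;;   (format t "~&changed: ~S" object))
;; ...expands into two ordinary :after methods, one per class.
```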
Pascal
Pascal Costanza <········@web.de> writes:
> Not quite. For example, you can't define before/after/around methods
> for more than one class at a time. Something like this:
>
> (defmethod print :after
> ((object (or 'myclass1 'myclass2 (eql myobject))))
> ...)
>
> ...which would mean that this method is defined for object being an
> instance of myclass1 or myclass2, or it being an object eql to
> myobject.
Er,
(defmethod print-object :around ((object t) stream)
  (multiple-value-prog1 (call-next-method)
    (write-string ", ergo Carthago delendum est." stream)))
You can either choose a class that is common to the two classes
`myclass1' and `myclass2', or define separate methods for each class,
or define the method on T as above and simply test the type yourself.
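A minimal sketch of that last option. FROB, MYCLASS1 and MYCLASS2 are invented placeholder names, not anything from the thread:

```lisp
;; Hypothetical classes standing in for two unrelated classes.
(defclass myclass1 () ())
(defclass myclass2 () ())

;; Primary method defined on T.
(defmethod frob ((object t))
  object)

;; The :after method is also defined on T; the type test inside the
;; body restricts its effect to the classes of interest.
(defmethod frob :after ((object t))
  (when (typep object '(or myclass1 myclass2))
    (format t "~&frobbed: ~S" object)))
```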
I'd point out that you shouldn't *have* methods that are common
to completely unrelated classes; what would MATRIX-MULTIPLY do when
you invoke it on (make-instance 'ice-cream :flavor 'chocolate)?
So this argument doesn't seem too compelling.
> And then, there are even fancier so-called pointcut expressions that,
> for example, reason about the control flow of a program. Like this:
>
> (defmethod move-point :after
> ((object (and 'myclass1 (not (cflowbelow move-line)))))
> (fire-change-event ...))
>
> ...which means that a change event should be fired after a point has
> changed its coordinates, but only when it is not called by move-line
> (in which case fire-change-event is supposed to be called after
> move-line).
This is hardly new, INTERCAL has similar constructs.
I might suggest that if you don't want move-point to behave a certain
way when called from move-line, that the solution is to advise
move-line, not move-point.
> Of course, these things are easy to add with some macrology on top of
> CLOS,
or via the MOP.
> but if one wants such macros, someone has to write them. You could
> perhaps characterize the goal of AOP to find a good set of such
> macros.
I suppose so, if one *does* want them. Are there any examples
that seem even remotely elegant and useful?
·············@comcast.net wrote:
> I'd point out that you shouldn't *have* methods that are common
> to completely unrelated classes; what would MATRIX-MULTIPLY do when
> you invoke it on (make-instance 'ice-cream :flavor 'chocolate)?
>
> So this argument doesn't seem too compelling.
...but that's exactly what they want. They call these things
"cross-cutting concerns": Concerns that cross-cut a given class hierarchy.
>>And then, there are even fancier so-called pointcut expressions that,
>>for example, reason about the control flow of a program. Like this:
>>
>>(defmethod move-point :after
>> ((object (and 'myclass1 (not (cflowbelow move-line)))))
>> (fire-change-event ...))
>>
>>...which means that a change event should be fired after a point has
>>changed its coordinates, but only when it is not called by move-line
>>(in which case fire-change-event is supposed to be called after
>>move-line).
>
>
> This is hardly new, INTERCAL has similar constructs.
:)
> I might suggest that if you don't want move-point to behave a certain
> way when called from move-line, that the solution is to advise
> move-line, not move-point.
But how do you make sure that move-point behaves a certain different way
when it is _not_ called by move-line?
>>Of course, these things are easy to add with some macrology on top of
>>CLOS,
>
> or via the MOP.
Yes, but not for unrelated classes. The MOP allows you to define that a
class is an instance of a metaclass, _one class at a time_. The goal of
AOP is to modify the behavior of more than one class at a time. (This is
related to what Robert Filman calls "obliviousness".)
>>but if one wants such macros, someone has to write them. You could
>>perhaps characterize the goal of AOP to find a good set of such
>>macros.
>
> I suppose that if one *does* want them. Are there any examples
> that seem even remotely elegant and useful?
The paper at http://www.cs.ubc.ca/~ycoady/papers/fse.pdf gives one of
the better examples. (I would be interested in your opinion!)
Pascal
>>>>> "Pascal" == Pascal Costanza <········@web.de> writes:
Pascal> Yes, but not for unrelated classes. The MOP allows you to define that
Pascal> a class is an instance of a metaclass, _one class at a time_. The goal
Pascal> of AOP is to modify the behavior of more than one class at a
Pascal> time. (This is related to what Robert Filman calls "obliviousness".)
Well, if we equate one CLOS class with one Java/AOP class, then there
is a difference, but if we look at crosscutting concerns a bit more
broadly, I still do not see the big difference.
Suppose I had a system, and I wanted to do a generic transaction
facility. I would code this up in a class, add calls to the
appropriate transaction handling methods (using before/around/after)
to all relevant other methods. Then to enable transactions for a
certain class, I would add the transaction class as a mixin to that
certain class, and now all of my added transaction handling would
magically work.
I have tried to put that into code below.
In my thinking, the class hierarchy participates in the specification
of the crosscutting and it may not be quite as much from the outside
as in the AOP systems, but I still do not see the AOP systems as
having that much more power. CLOS does provide separation of concerns
fully as well as any AOP I have seen.
It is all about being able to separate code for more than one dimension
of concerns and Lisp does that very well.
I would also guess that some of the problems AOP is trying to solve
stem from the statically typed nature of languages such as Java. Having
slots that can contain arbitrary kinds of data should obviate
some of the need for fancy AOP coding.
Contrived example
----------------
The main point here is that you can take an existing application
(main-system.lisp), separately develop/lift a transaction mechanism
and combine the two to gain a new application.
In what way would AOP simplify this example, or what limitations does
the method of the example have that AOP overcomes?
main-system.lisp:
(defclass important-class () ((important-slot)))
(defmethod important-function ((x important-class)))
(defun run-application (x)
  (important-function x))
transaction.lisp:
(defclass transaction-mixin () ((transaction-info)))
(defmethod transaction-start ((x transaction-mixin)))
(defmethod transaction-end ((x transaction-mixin)))
extended-main-system.lisp:
(defclass important-class++ (important-class transaction-mixin) ())
(defmethod important-function :around ((x important-class++))
  (transaction-start x)
  (call-next-method)
  (transaction-end x))
REPL:
(run-application (make-instance 'important-class++))
From: james anderson
Subject: MOP/AOP [ Re: ILC 2003: some impressions
Message-ID: <3F8FE586.4D2A13B7@setf.de>
Christian Lynbech wrote:
>
> >>>>> "Pascal" == Pascal Costanza <········@web.de> writes:
>
> Pascal> Yes, but not for unrelated classes. The MOP allows you to define that
> Pascal> a class is an instance of a metaclass, _one class at a time_.
?!
defclass behaves that way. other operators in the mop behave differently.
i'm perplexed that this is proposed as significant.
> Pascal> The goal
> Pascal> of AOP is to modify the behavior of more than one class at a
> Pascal> time. (This is related to what Robert Filman calls "obliviousness".)
>
> Well, if we equate one CLOS class with one Java/AOP class, then there
> is a difference, but if we look at crosscutting concerns a bit more
> broadly, I still do not see the big difference.
if one takes filman's statement from "aspect oriented programming is
quantification and obliviousness"
"AOP can be understood as the desire to make quantified statements about the
behaviour of programs, and to have these quantifications hold over programs
written by oblivious programmers."
to the logical extreme, that one retrospectively require that the
quantification hold, it is not clear that generic functions do not fulfill
these requirements in and of themselves.
>
> Suppose I had a system, and I wanted to do a generic transaction
> facility...
>
> I have tried to put that into code below.
>
> ..
>
> Contrived example
> ----------------
>
> [ which added a mixin class and (in java terms) replaced the factory]
an alternative would be to redefine the operator's method combination to
introduce the intended behaviour. that would require neither additions nor
changes to application code.
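For what it's worth, that alternative might look something like the sketch below: a user-defined method combination that brackets the primary methods with transaction calls. TRANSACTED is an invented name, and TRANSACTION-END is an assumed companion to the contrived example's TRANSACTION-START:

```lisp
;; Hypothetical sketch: a method combination that wraps the primary
;; methods of a generic function in transaction calls, so the
;; application methods themselves stay untouched.
(define-method-combination transacted ()
  ((primary () :required t))
  (:arguments object)
  `(progn
     (transaction-start ,object)
     (multiple-value-prog1
         (call-method ,(first primary) ,(rest primary))
       (transaction-end ,object))))

;; Attaching it is then a one-line change to the generic function:
;; (defgeneric important-function (x)
;;   (:method-combination transacted))
```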
...
james anderson wrote:
>
> Christian Lynbech wrote:
>
>>>>>>>"Pascal" == Pascal Costanza <········@web.de> writes:
>>
>>Pascal> Yes, but not for unrelated classes. The MOP allows you to define that
>>Pascal> a class is an instance of a metaclass, _one class at a time_.
>
>
> ?!
>
> defclass behaves that way. other operators in the mop behave differently.
> i'm perplexed that this is proposed as significant.
One of the original claims of AOP is to avoid a metaobject protocol
because it was deemed too complicated for most programmers.
I don't know whether they still claim this. I think that AOP is clearly
an instance of metaprogramming, and it seems to me that most recent AOP
tools don't try to hide this anymore.
> if one takes filman's statement from "aspect oriented programming is
> quantification and obvliviousness"
>
> "AOP can be understood as the desire to make quantified statements about the
> behaviour of programs, and to have these quantifications hold over programs
> written by oblivious programmers."
>
> to the logical extreme, that one retrospectively require that the
> quantification hold, it is not clear that generic functions do not fulfill
> these requirement in and of themselves.
I don't get this yet. What do you mean by "retrospectively require that
the quantification hold"?
Pascal
--
Pascal Costanza University of Bonn
···············@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
From: james anderson
Subject: Re: MOP/AOP [ Re: ILC 2003: some impressions
Message-ID: <3F9025DA.1031C7AD@setf.de>
Pascal Costanza wrote:
>
> james anderson wrote:
> >
> ...
>
> > if one takes filman's statement from "aspect oriented programming is
> > quantification and obvliviousness"
> >
> > "AOP can be understood as the desire to make quantified statements about the
> > behaviour of programs, and to have these quantifications hold over programs
> > written by oblivious programmers."
> >
> > to the logical extreme, that one retrospectively require that the
> > quantification hold, it is not clear that generic functions do not fulfill
> > these requirement in and of themselves.
>
> I don't get this yet. What do you mean by "retrospectively require that
> the quantification hold"?
if one wishes an incontrovertible argument, that a given mechanism supports
quantifications to which the programmers are oblivious, one need only
demonstrate that it can be applied retrospectively.
the facilities in generic functions which enable metaprogramming through
method combination computations would support such arguments.
is there any source which provides a succinct summary of the "operations" which
aop comprises?
...
james anderson wrote:
> the facilities in generic functions which enable metaprogramming through
> method combination computations would support such arguments.
Hmm, perhaps, not sure about this.
> is there any source which provides a succint summary of the "operations" which
> aop comprises?
I am afraid that this is still a moving target. The closest thing is
probably http://www.cs.ubc.ca/~gregor/masuhara-ECOOP2003.pdf
Another very short overview is
ftp://ftp.ccs.neu.edu/pub/people/wand/papers/icfp-03.ps
Note that I don't necessarily agree with the ideas presented in those
papers.
Pascal
From: james anderson
Subject: Re: MOP/AOP [ Re: ILC 2003: some impressions
Message-ID: <3F90A4E1.C37AA19@setf.de>
Pascal Costanza wrote:
>
> james anderson wrote:
>
> > the facilities in generic functions which enable metaprogramming through
> > method combination computations would support such arguments.
>
> Hmm, perhaps, not sure about this.
>
> > is there any source which provides a succint summary of the "operations" which
> > aop comprises?
>
> I am afraid that this is still a moving target. The closest thing is
> probably http://www.cs.ubc.ca/~gregor/masuhara-ECOOP2003.pdf
of the four mechanisms, point-cuts/advice, traversal, compositor, and open
classes, on a first reading only pc/a and compositor would appear to require
attention. in both cases, my intuition is that while method combinations which
standardized them would not be trivial, they wouldn't be more complex than
others i know of, and would require nothing more than, or even less than, the mop.
one need only realize that, even though the standard combinations appear
rather benign, compute-effective-method is a runtime macro generator which can
produce any control structure, the argument methods are first class objects,
both the specializing classes and the argument instances are available, and
the qualifiers constitute an arbitrary language composed of atoms.
note that i did not follow their references to the implementation which they
purport to model, but the text does argue that the complete mechanisms can be
implemented in terms of these.
...
>
> Another very short overview is
> ftp://ftp.ccs.neu.edu/pub/people/wand/papers/icfp-03.ps
>
> Note that I don't necessarily agree with the ideas presented in those
> papers.
>
> Pascal
james anderson wrote:
>
> Pascal Costanza wrote:
>
>>james anderson wrote:
>>
>>
>>>the facilities in generic functions which enable metaprogramming through
>>>method combination computations would support such arguments.
>>
>>Hmm, perhaps, not sure about this.
>>
>>
>>>is there any source which provides a succint summary of the "operations" which
>>>aop comprises?
>>
>>I am afraid that this is still a moving target. The closest thing is
>>probably http://www.cs.ubc.ca/~gregor/masuhara-ECOOP2003.pdf
>
>
> of the four mechanims, point-cuts/advice, traversal, compositor, and open
> classes, on a first reading only pc/a and compositor would appear to require
> attention. in both cases, my intuition is that while method combinations which
> standardized them would not be trivial, they wouldn't be more complex than
> others i know of, and would require nothing more than even less than the mop.
> one need only realize that, even though the standard combinations appear
> rather benign, compute-effective-method is a runtime macro generator which can
> produce any control structure, the argument methods are first class objects,
> both the specializing classes and the argument instances are available, and
> the qualifiers constitute an arbitrary language composed of atoms.
I don't see any serious fault in your reasoning, so I think you're
indeed right.
The only thing that seems to be missing is that some of the more recent
ideas for pointcut declarations are able to analyze method bodies, and I
don't see yet how you could do that in CLOS. Maybe by shadowing
DEFMETHOD and keeping track of method bodies...
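A minimal sketch of what that shadowing might look like. DEFMETHOD/RECORDED and *METHOD-BODIES* are invented names for illustration:

```lisp
;; Record each method's source form before handing it to the real
;; DEFMETHOD, so a (hypothetical) pointcut matcher could later
;; analyze method bodies.
(defvar *method-bodies* (make-hash-table))

(defmacro defmethod/recorded (name &rest definition)
  ;; DEFINITION is everything after the name: qualifiers, the
  ;; specialized lambda list, and the body.
  `(progn
     (push ',definition (gethash ',name *method-bodies*))
     (defmethod ,name ,@definition)))
```

In a real system one would presumably shadow CL:DEFMETHOD in a package rather than use a differently named macro.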
BTW, there is another interesting fact in this regard: In the historical
paper about AOP by Crista Lopes [1], she explicitly says that a runtime
MOP is relatively straightforward whereas a compile-time MOP becomes
rather unnatural. (My understanding here is that the MOPs provided by
Common Lisp and, say, Smalltalk are runtime MOPs.)
It seems to me that Gregor Kiczales started to work on compile-time
MOPs in the beginning of the 90's in order to solve efficiency issues
that are hard to solve for runtime MOPs. (For example, it's not a good
idea to define around methods on SLOT-VALUE-USING-CLASS.) The constructs
provided by, say, AspectJ are probably indeed somewhat clearer than
similar constructs a compile-time MOP would provide. Just an idea...
Pascal
[1] http://www.isr.uci.edu/tech_reports/UCI-ISR-02-5.pdf
From: james anderson
Subject: Re: MOP/AOP [ Re: ILC 2003: some impressions
Message-ID: <3F915066.285307EE@setf.de>
Pascal Costanza wrote:
>
> james anderson wrote:
>
> >...
> >>>is there any source which provides a succint summary of the "operations" which
> >>>aop comprises?
> >>
> >>I am afraid that this is still a moving target. The closest thing is
> >>probably http://www.cs.ubc.ca/~gregor/masuhara-ECOOP2003.pdf
> >
> >
> > of the four mechanims, point-cuts/advice, traversal, compositor, and open
> > classes, on a first reading only pc/a and compositor would appear to require
> > attention....
>
> I don't see any serious fault in your reasoning, so I think you're
> indeed right.
>
> The only thing that seems to be missing is that some of the more recent
> ideas for pointcut declarations are able to analyze method bodies, and I
> don't see yet how you could do that in CLOS. Maybe by shadowing
> DEFMETHOD and keeping track of method bodies...
a macro hook would likely suffice.
i would think that it would compromise the "obliviousness" criterion to propose
actually rewriting the method body, but there's nothing to preclude even that
if necessary.
it would sort of remind me of the underhandedness of things like the runtime
recompilation in java data objects, but i'd suspect that would have to be at
the heart of a java-based aop package anyway.
>
> BTW, there is another interesting fact in this regard: In the historical
> paper about AOP by Crista Lopes [1], she explicitly says that a runtime
> MOP is relatively straightforward whereas a compile-time MOP becomes
> rather unnatural.
i have to admit, i don't really understand the above distinction in [1]. at
least in the sense that i was not able to translate the uses of the apparent
terms "metaobject" and "base object" to the context of clos in such a way that
they are consistent and meaningful. as an additional matter, beyond the
syntactic convenience of being able to express a constituent of some operation
as ClassName.FieldName, the basis for the asserted "naturalness" escapes me.
> (My understanding here is that MOPs as provided by
> Common Lisp and, say, Smalltalk are runtime MOPS.)
i do not know enough smalltalk to comment on that case, but the significance
of a run-time / compile-time distinction in clos-based systems is unclear, the
reference to slot-value-using-class below notwithstanding: one can, in
theory, recompile at any time. witness inlining optimizations.
>
> It seems to me that Gregor Kiczales has started to work on compile-time
> MOPs in the beginning of the 90's in order to solve efficiency issues
> that are hard to solve for runtime MOPS. (For example, it's not a good
> idea to define around methods on SLOT-VALUE-USING-CLASS.)
cf. the allusion to java data objects above. in a language which does not
support method inflection, one rewrites the binary and observes that this
executes more efficiently. yes, this is true.
> The constructs
> provided by, say, AspectJ are probably indeed somewhat clearer than
> similar constructs a compile-time MOP would provide. Just an idea...
>
> Pascal
>
> [1] http://www.isr.uci.edu/tech_reports/UCI-ISR-02-5.pdf
james anderson wrote:
>
> Pascal Costanza wrote:
>
>>The only thing that seems to be missing is that some of the more recent
>>ideas for pointcut declarations are able to analyze method bodies, and I
>>don't see yet how you could do that in CLOS. Maybe by shadowing
>>DEFMETHOD and keeping track of method bodies...
>
>
> a macro hook would likely suffice.
ok
> i would think that it would compromise the "obliviousness" criteria to propose
> actually rewriting the method body, but there's nothing to preclude even that
> if necessary.
This seems to be acceptable for AOP - as long as the method bodies
aren't "aware" of the fact that they are rewritten.
>>BTW, there is another interesting fact in this regard: In the historical
>>paper about AOP by Crista Lopes [1], she explicitly says that a runtime
>>MOP is relatively straightforward whereas a compile-time MOP becomes
>>rather unnatural.
>
>
> i have to admit, i don't really understand the above distinction in [1]. at
> least in the sense that i was not able to translate the uses of the apparent
> terms "metaobject" and "base object" to the context of clos in such a way that
> they are consistent and meaningful. as an additional matter, beyond the
> syntactic convenience of being able to express a constituent of some operation
> as ClassName.FieldName, the basis for the asserted "naturalness" escapes me.
I don't think they had a precise understanding of the term "naturalness".
>> (My understanding here is that MOPs as provided by
>>Common Lisp and, say, Smalltalk are runtime MOPS.)
>
>
> i do not know enough smalltalk to comment on that case, but the significance
> of a run-time / compile-time distinction in clos-based systems is unclear. the
> reference to slot-value-using-class below notwithstanding, as one can, in
> theory, recompile at any time. witness inlining optimizations.
I can only guess, but I think that recompilation at runtime would result
in a categorization as a runtime MOP. (For example, you have the need
for meaningful reinitialization protocols.)
Some of the strongest evidence supporting my interpretation can be
found at
http://www2.parc.com/csl/groups/sda/projects/mops/existing-mops.html
That list mentions the MOP for EuLisp which is claimed to do "some
things more cleanly than the CLOS MOP". I have read the referenced
paper, and the only real difference seems to be that the EuLisp MOP
allows programming at the metaobject level at load time only. The claims
are that this results in more efficient and safer code. Redefinition at
runtime is supposed to be only needed within development environments.
I think that that is a pretty bad idea. The fact that you can redefine
things at runtime is really one of the strong advantages of the CLOS MOP
IMHO. But it seems to me that the group at Xerox PARC has shared the
opinions of the EuLisp designers.
From that perspective, I think one can better understand how the ideas
for AOP have emerged.
>>It seems to me that Gregor Kiczales has started to work on compile-time
>>MOPs in the beginning of the 90's in order to solve efficiency issues
>>that are hard to solve for runtime MOPS. (For example, it's not a good
>>idea to define around methods on SLOT-VALUE-USING-CLASS.)
>
>
> cf. the allusion to java data objects above. in a language which does not
> support method inflection, one rewrites the binary and observes that this
> executes more efficiently. yes, this is true.
Exactly.
Pascal
From: james anderson
Subject: Re: MOP/AOP [ Re: ILC 2003: some impressions
Message-ID: <3F92A028.652E6C21@setf.de>
Pascal Costanza wrote:
>
> james anderson wrote:
>
> ...
> >> (My understanding here is that MOPs as provided by
> >>Common Lisp and, say, Smalltalk are runtime MOPS.)
> >
> >
> > i do not know enough smalltalk to comment on that case, but the significance
> > of a run-time / compile-time distinction in clos-based systems is unclear. the
> > reference to slot-value-using-class below notwithstanding, as one can, in
> > theory, recompile at any time. witness inlining optimizations.
>
> I can only guess, but I think that recompilation at runtime would result
> in a categorization as a runtime MOP. (For example, you have the need
> for meaningful reinitialization protocols.)
>
> One of the strongest evidence that support my interpretation can be
> found at
> http://www2.parc.com/csl/groups/sda/projects/mops/existing-mops.html
nice list.
>
> That list mentions the MOP for EuLisp which is claimed to do "some
> things more cleanly than the CLOS MOP". I have read the referenced
> paper, and the only real difference seems to be that the EuLisp MOP
> allows programming at the metaobject level at load time only. The claims
> are that this results in more efficient and safer code. Redefinition at
> runtime is supposed to be only needed within development environments.
>
> I think that that is a pretty bad idea. The fact that you can redefine
> things at runtime is really one of the strong advantages of the CLOS MOP
> IMHO. But it seems to me that the group at Xerox PARC has shared the
> opinions of the EuLisp designers.
>
> From that perspective, I think one can better understand how the ideas
> for AOP have emerged.
>
aha. ok. so now there's good retrospection and evil retrospection. and clos
evidently allows too much of the latter. oh well.
...
Christian Lynbech wrote:
>>>>>>"Pascal" == Pascal Costanza <········@web.de> writes:
>
>
> Pascal> Yes, but not for unrelated classes. The MOP allows you to define that
> Pascal> a class is an instance of a metaclass, _one class at a time_. The goal
> Pascal> of AOP is to modify the behavior of more than one class at a
> Pascal> time. (This is related to what Robert Filman calls "obliviousness".)
>
> Well, if we equate one CLOS class with one Java/AOP class, then there
> is a difference, but if we look at crosscutting concerns a bit more
> broadly, I still do not see the big difference.
>
> Suppose I had a system, and I wanted to do a generic transaction
> facility. I would code this up in a class, add calls to the
> appropriate transaction handling methods (using before/around/after)
> to all relevant other methods. Then to enable transactions for a
> certain class, I would add the transaction class as a mixin to that
> certain class and now all of my added transaction would magically
> work.
The response here would be twofold:
+ Yes, that's one way to implement these things. AOP is not about
finding ways to implement these things, but about finding ways to
conveniently express them.
+ In your specific example, you need to be aware of the mixin, and need
to use it. (In the source code you have given, you need to add
extended-main-system.lisp - the goal of AOP is that you don't need such
an explicit combination step.)
Here is an analogy that might help to understand what AOP is about:
It can be shown that all you need for expressing most conceivable
iteration constructs is recursive procedure calls. However, for many
cases it is still much more convenient to add specific iteration
constructs into a language (like DO, DOLIST, DOTIMES or even the LOOP
macro in Common Lisp). Yes, you can achieve all the things AOP
constructs provide with metaprogramming, but that's beside the point -
this is exactly like saying that you can achieve all the things the LOOP
macro provides with recursion. Think of AOP as the LOOP macro of
metaprogramming. ;)
Some more notes:
+ Beware, this is only an analogy. It may or may not capture all there
is to say about AOP.
+ I don't want to defend AOP. The lack of AOP is clearly less pressing
in Common Lisp than it is in Java.
+ The only important addition I see in AOP is the notion of
obliviousness. But some members of the AOP community strongly disagree
in this respect.
Pascal
>>>>> "Pascal" == Pascal Costanza <········@web.de> writes:
Pascal> + Yes, that's one way to implement these things. AOP is not about
Pascal> finding ways to implement these things, but about finding ways to
Pascal> conveniently express them.
Pascal> + In your specific example, you need to be aware of the mixin, and
Pascal> need to use it. (In the source code you have given, you need to add
Pascal> extended-main-system.lisp - the goal of AOP is that you don't need
Pascal> such an explicit combination step.)
I am not really sure I see this. Even in AspectJ, you will at one
point need to express the combination (or AOP is a more devious device
than I was aware of :-)
The point of my example was that "main-system.lisp" and
"transaction.lisp" both were written oblivious of a later
combination. "extended-main-system.lisp" only expresses the
combination.
To me this captures the essence of what AOP tries to do. In this
understanding, "extended-main-system.lisp" is as good a representation
of the combination as anything else; in the words of your analogy it
is still iteration even though it has parentheses instead of curly
braces and semicolons.
If AOP is just about syntax then I rest my case.
Christian Lynbech wrote:
>>>>>>"Pascal" == Pascal Costanza <········@web.de> writes:
>
>
>
> Pascal> + Yes, that's one way to implement these things. AOP is not about
> Pascal> finding ways to implement these things, but about finding ways to
> Pascal> conveniently express them.
>
> Pascal> + In your specific example, you need to be aware of the mixin, and
> Pascal> need to use it. (In the source code you have given, you need to add
> Pascal> extended-main-system.lisp - the goal of AOP is that you don't need
> Pascal> such an explicit combination step.)
>
> I am not really sure I see this. Even in AspectJ, you will at one
> point need to express the combination (or AOP is a more devious device
> than I was aware of :-)
>
> The point of my example was that "main-system.lisp" and
> "transaction.lisp" both were written oblivious of a later
> combination. "extended-main-system.lisp" only expresses the
> combination.
>
> To me this captures the essence of what AOP tries to do. In this
> understanding, "extended-main-system.lisp" is as good a representation
> of the combination as anything else; in the words of your analogy it
> is still iteration even though it has parentheses instead of curly
> braces and semicolons.
You're right. I stand corrected.
Pascal
--
Pascal Costanza University of Bonn
···············@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
Pascal Costanza <········@web.de> writes:
>
> The paper at http://www.cs.ubc.ca/~ycoady/papers/fse.pdf gives one of
> the better examples. (I would be interested in your opinion!)
Well, I downloaded the paper and have given it some thought.
I do not understand what an `aspect' is. The above-cited paper says
that `aspects' are a linguistic mechanism intended to allow
implementation of `crosscutting concerns'. A `crosscutting concern'
is an element of a system that cuts through the primary system
modularity. But this just begs the question of what the `primary
system modularity' is.
The papers I have read on aspect-oriented programming make some very
large assumptions about the `primary system modularity'. The paper
above uses OS prefetching as the motivating example, so let's stick
with that. The paper assumes that the operating system is `layered'
something like this:
virtual memory abstraction
----------------------------
file-system abstraction
----------------------------
physical device
The paper notes that in order for the file-system abstraction to make
informed decisions about driving the physical device, it must have
knowledge about the intended use of the pages that it is fetching and
it must integrate that knowledge with its knowledge about the
characteristics of the physical device. For example, if the result of
the disk read is destined to be placed in a random stack-allocated
buffer, it has to behave differently than if the result is to be
placed at a page-aligned segment of virtual memory.
The paper says:
``Based on our analysis, it appears that the natural modularity
of prefetching modes is that of a single execution path, rather
than of the layers in the system. But these execution paths
crosscut the layers, as shown in Figure 1. This crosscutting
property of the prefetching modes appears to be the reason they
are difficult to modularize using traditional techniques...''
So in this case, they appear to be identifying the `layered approach'
as the `primary system modularity' and `prefetching behavior' as an
`aspect'.
Why are `layers' the `primary system modularity'? Is it simply the
fact that this particular instance of the problem attempted to
abstract the system via this mechanism? If so, isn't this simply an
ad hoc problem --- a strawman? Or are they claiming that this sort of
`layered' modularity is the `primary' way one ought to design? If
that is so, how do we reconcile this with the statement that layers do
not capture the `natural modularity' of prefetching?
Abstraction hides details (we all agree on that). Different
abstractions hide different details. In any design problem, when one
is faced with complexity the usual solution is to use a simplified
model that abstracts away the complexity. The suitability of the
simplified model depends on the desiderata of the system. It appears
that the `layered' model of process-disk interaction abstracts away
the details of `dynamic patterns of usage'.
But if dynamic patterns of usage are important to us, why on earth are
we starting with a model that ignores them?! Furthermore, why are we
continuing to use that model once we've discovered that it isn't a
`natural' abstraction for the process we are interested in?
The second question I have involves use of the word `modularity'.
When a system such as the file system described above is externally
modified by advice, it destroys the concept of `locality' within the
system. The point of `modularity' is confinement. If a module no
longer provides confinement, how is it in fact `modular'?
The mechanism of aspect-oriented programming is questionable. Filman
and Friedman identify `black-box' vs. `clear-box' aspect-oriented
programming. `Black-box' means using *only* the advertised interface
to the abstraction. How does `black-box' aspect oriented programming
differ from `component configuration'? Is there anything new here, or
are we simply giving a fancy name to the standard `shims and wrappers'
we've been using for decades? Filman and Friedman note that black-box
aspect-oriented programming can be `downright trivial' in Lisp.
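As a sketch of why it can be trivial: black-box advice in Lisp can be
as simple as wrapping a function through its advertised interface (the
function being advised is hypothetical; this is the decades-old
"shims and wrappers" technique, not any particular AOP library):

```lisp
;; Replace the global definition of NAME with a wrapper that logs
;; every call before delegating to the original. Only the public
;; interface -- the function name and its arguments -- is touched.
(defun add-logging (name)
  (let ((original (fdefinition name)))
    (setf (fdefinition name)
          (lambda (&rest args)
            (format t "~&calling ~S with args ~S~%" name args)
            (apply original args)))))

;; Usage sketch: (add-logging 'my-function), MY-FUNCTION hypothetical.
```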
`Clear-box' aspect-oriented programming involves externally modifying
the source code of the target module. The notion of `obliviousness'
shows up at this point. The goal is that the programmer of the module
should not have to modify his code to accommodate the possibility of
intrusion by an aspect hook. But since the only guarantee made by a
module is that it implements its interface via some unspecified
mechanism, the only thing an external aspect can really depend upon is
the interface, and we're back to black box again.
On the other hand, once one has made the drastic decision to abandon
the abstraction barrier surrounding the code, why, all of a sudden,
the reluctance to actually *fix* the problem?
It should be obvious that adding undocumented dependencies to modules
on the fly is, in general, an act of desperation.
Aspect-oriented examples seem to demonstrate the following:
1) an impoverished source language where a particular limited
abstraction paradigm is rigidly enforced (objects a la java)
2) a lack of ability to customize the compilation process
3) a lack of introspection facilities to customize the runtime
behavior
When Java first appeared on the scene, many lisp hackers objected to it
because it lacked the above features. They predicted that
Java would have difficulty if a) the problem domain wasn't amenable
to a single-inheritance class model, b) macros were unavailable to
facilitate metaprogramming, and c) if the execution model was missing
the ability to reason about itself.
What do you know!
For a lisp hacker, aspect-oriented programming appears to rehash a
number of techniques that have been known about for a while. No, it
isn't *all* multiple inheritance, but rather a combination of MI,
the MOP, a powerful macro system, judicious use of hooks, and the
willingness to fix what is broken. Lisp hackers haven't collected
all these things under a flashy name, but we *have* seen them before.
Joe Marshall wrote:
> Pascal Costanza <········@web.de> writes:
>
>
>>The paper at http://www.cs.ubc.ca/~ycoady/papers/fse.pdf gives one of
>>the better examples. (I would be interested in your opinion!)
>
>
> Well, I downloaded the paper and have given it some thought.
>
> I do not understand what an `aspect' is. The above-cited paper says
> that `aspects' are a linguistic mechanism intended to allow
> implementation of `crosscutting concerns'. A `crosscutting concern'
> is an element of a system that cuts through the primary system
> modularity. But this just begs the question of what the `primary
> system modularity' is.
[...]
> Why are `layers' the `primary system modularity'? Is it simply the
> fact that this particular instance of the problem attempted to
> abstract the system via this mechanism? If so, isn't this simply an
> ad hoc problem --- a strawman? Or are they claiming that this sort of
> `layered' modularity is the `primary' way one ought to design? If
> that is so, how do we reconcile this with the statement that layers do
> not capture the `natural modularity' of prefetching?
What they are claiming is that one cannot avoid cross-cutting concerns
in large projects. The notion of a "primary" modularization and
therefore secondary modularizations that crosscut the primary ones is a
little bit misleading. For example, in the HyperJ approach, it is
claimed that none of the concerns are "primary", they are just different
concerns that crosscut each other.
In the case of operating systems, the system design is typically
layered, AFAIK. At least, operating systems usually have kernels, and
the rest is built around those kernels. Therefore, it seems to me that
the paper above is indeed a good example for AOP. _However_, I am no
expert in operating systems, so I might be wrong here.
Nevertheless, this example shows how code may have the need to be
modularized along different dimensions, and the question is how to
specify and combine these dimensions in a manageable way.
> Abstraction hides details (we all agree on that). Different
> abstractions hide different details. In any design problem, when one
> is faced with complexity the usual solution is to use a simplified
> model that abstracts away the complexity. The suitability of the
> simplified model depends on the desiderata of the system. It appears
> that the `layered' model of process-disk interaction abstracts away
> the details of `dynamic patterns of usage'.
>
> But if dynamic patterns of usage are important to us, why on earth are
> we starting with a model that ignores them?! Furthermore, why are we
> continuing to use that model once we've discovered that it isn't a
> `natural' abstraction for the process we are interested in?
The conjecture is that as soon as you refactor your code, or start from
scratch with these concerns in mind, it's inevitable that other concerns
will emerge that crosscut your new architecture.
In fact, the wording here is not quite right. The claim is, in more
detail, that you can never avoid cross-cutting concerns, but AOP
approaches help you to avoid code tangling because of cross-cutting
concerns. So no matter how you organize your code, you will have
concerns that crosscut the system architecture and will lead to code
tangling when you don't have the appropriate language constructs to deal
with them effectively.
> The second question I have involves use of the word `modularity'.
> When a system such as the file system described above is externally
> modified by advice, it destroys the concept of `locality' within the
> system. The point of `modularity' is confinement. If a module no
> longer provides confinement, how is it in fact `modular'?
>
> The mechanism of aspect-oriented programming is questionable. Filman
> and Friedman identify `black-box' vs. `clear-box' aspect-oriented
> programming. `Black-box' means using *only* the advertised interface
> to the abstraction. How does `black-box' aspect oriented programming
> differ from `component configuration'? Is there anything new here, or
> are we simply giving a fancy name to the standard `shims and wrappers'
> we've been using for decades? Filman and Friedman note that black-box
> aspect-oriented programming can be `downright trivial' in Lisp.
I think the term "black-box" AOP is just introduced to contrast it with
"clear-box" AOP. No, I don't think that there is anything new here.
> For a lisp hacker, aspect-oriented programming appears to rehash a
> number of techniques that have been known about for a while. No, it
> isn't *all* multiple inheritance, but rather a combination of MI,
> the MOP, a powerful macro system, judicious use of hooks, and the
> willingness to fix what is broken. Lisp hackers haven't collected
> all these things under a flashy name, but we *have* seen them before.
Yes, but the claim is that AOP makes these things simpler. I fear this
is possibly meant in the sense of "simpler for average programmers".
I feel a little uneasy about this discussion. I happen to know some
details about AOP because my research touches similar issues. On the
other hand, I am not completely convinced myself that AOP is indeed a
good idea. The problem that AOP, in some of its current approaches,
allows you to break encapsulation, or fiddle with undocumented internals
of the code, may indeed be a serious problem. For example, I especially
think that AspectJ's approach of using wildcards to match the names of
methods to be modified is a very bad idea.
On the other hand, these things might turn out not to be that bad in
practice. We know about features in Lisp that outsiders consider to be
dangerous but aren't in practice. And then there is research going on to
find out how to better deal with encapsulation issues. I don't know much
about the latter because they tend to try to adapt static type systems,
and I am not too interested in static approaches anymore.
One interesting question would be: has any Lispnik ever experienced
cross-cutting concerns in Lisp code that could not be effectively
disentangled?
Pascal
Pascal Costanza <········@web.de> writes:
> What they are claiming is that one cannot avoid cross-cutting concerns
> in large projects.
I don't think that they have proven this. It may often be the case,
but the examples given argue that the abstraction used is
inappropriate, not that *all* such abstractions are inappropriate.
> In the case of operating systems, the system design is typically
> layered, AFAIK. At least, operating systems usually have kernels, and
> the rest is built around those kernels. Therefore, it seems to me that
> the paper above is indeed a good example for AOP. _However_, I am no
> expert in operating systems, so I might be wrong here.
There are *a lot* of paradigms. E.g., the exokernel model gives
the application much more responsibility for the use of resources.
The point is that the layered approach affords certain advantages, but
high performance isn't necessarily one of them. Taking a layered
approach and then complaining that it doesn't perform seems to me to
indicate that maybe a different approach should have been used.
> The conjecture is that as soon as you refactor your code, or start
> from scratch with these concerns in mind, it's inevitable that other
> concerns will emerge that crosscut your new architecture.
Yes, but that is an awfully strong conjecture.
> In fact, the wording here is not quite right. The claim is, in more
> detail, that you can never avoid cross-cutting concerns, but AOP
> approaches help you to avoid code tangling because of cross-cutting
> concerns. So no matter how you organize your code, you will have
> concerns that crosscut the system architecture and will lead to code
> tangling when you don't have the appropriate language constructs to
> deal with them effectively.
So what's wrong with changing the language?
>> For a lisp hacker, aspect-oriented programming appears to rehash a
>> number of techniques that have been known about for a while. No, it
>> isn't *all* multiple inheritance, but rather a combination of MI,
>> the MOP, a powerful macro system, judicious use of hooks, and the
>> willingness to fix what is broken. Lisp hackers haven't collected
>> all these things under a flashy name, but we *have* seen them before.
>
> Yes, but the claim is that AOP makes these things simpler. I fear this
> is possibly meant in the sense of "simpler for average programmers".
Possibly. A horrible thought.
> I feel a little uneasy about this discussion. I happen to know some
> details about AOP because my research touches similar issues. On the
> other hand, I am not completely convinced myself that AOP is indeed a
> good idea. The problem that AOP, in some of its current approaches,
> allows you to break encapsulation, or fiddle with undocumented
> internals of the code, may indeed be a serious problem. For example, I
> especially think that AspectJ's approach of using wildcards to match
> the names of methods to be modified is a very bad idea.
Certainly many of the examples given simply defy the fundamental
principles of abstraction.
> On the other hand, these things might turn out not to be that bad in
> practice. We know about features in Lisp that outsiders consider to be
> dangerous but aren't in practice. And then there is research going on
> to find out how to better deal with encapsulation issues. I don't know
> much about the latter because they tend to try to adapt static type
> systems, and I am not too interested in static approaches anymore.
>
> One interesting question would be: has any Lispnik ever experienced
> cross-cutting concerns in Lisp code that could not be effectively
> disentangled?
I have *occasionally* run into such things. For instance, ensuring
(statically) that all calls to a particular function used at least
one keyword from a particular set. But these things are not very
common, and the tools are at hand (we used a compiler-macro).
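Joe doesn't show the code, but a sketch of the technique might look
like this (the function DRAW and the keyword set are hypothetical
stand-ins; the real condition being checked is not stated in the post):

```lisp
;; Stand-in definition of the function whose call sites we audit.
(defun draw (&rest args)
  args)

;; A compiler macro runs on each compiled call form, so it can inspect
;; the literal arguments statically. Here it warns when a call to DRAW
;; uses none of a required set of keywords, then returns FORM unchanged
;; so runtime behavior is unaffected.
(define-compiler-macro draw (&whole form &rest args)
  (unless (intersection '(:color :style) args)
    (warn "Call ~S uses none of the required keywords." form))
  form)
```

Note the caveat discussed below: an implementation is permitted to
ignore compiler macros entirely, so this check is best-effort.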
I think it might also be interesting to ask if any Lispnik has
experienced *systemic* cross-cutting concerns in Lisp code that
has clearly illustrated the need for new tools.
·············@comcast.net wrote:
> Pascal Costanza <········@web.de> writes:
>
>
>>What they are claiming is that one cannot avoid cross-cutting concerns
>>in large projects.
>
>
> I don't think that they have proven this.
Yes, this is more like an "axiom". ;)
> It may often be the case,
> but the examples given argue that the abstraction used is
> inappropriate, not that *all* such abstractions are inappropriate.
>
>
>>In the case of operating systems, the system design is typically
>>layered, AFAIK. At least, operating systems usually have kernels, and
>>the rest is built around those kernels. Therefore, it seems to me that
>>the paper above is indeed a good example for AOP. _However_, I am no
>>expert in operating systems, so I might be wrong here.
>
>
> There are *a lot* of paradigms. E.g., the exokernel model gives
> the application much more responsibility for the use of resources.
>
> The point is that the layered approach affords certain advantages, but
> high performance isn't necessarily one of them. Taking a layered
> approach and then complaining that it doesn't perform seems to me to
> indicate that maybe a different approach should have been used.
OK, point taken.
>>The conjecture is that as soon as you refactor your code, or start
>>from scratch with these concerns in mind, it's inevitable that other
>>concerns will emerge that crosscut your new architecture.
>
>
> Yes, but that is an awfully strong conjecture.
Right.
>>In fact, the wording here is not quite right. The claim is, in more
>>detail, that you can never avoid cross-cutting concerns, but AOP
>>approaches help you to avoid code tangling because of cross-cutting
>>concerns. So no matter how you organize your code, you will have
>>concerns that crosscut the system architecture and will lead to code
>>tangling when you don't have the appropriate language constructs to
>>deal with them effectively.
>
>
> So what's wrong with changing the language?
You mean Lisp? ;)
>>One interesting question would be: has any Lispnik ever experienced
>>cross-cutting concerns in Lisp code that could not be effectively
>>disentangled?
>
> I have *occasionally* run into such things. For instance, ensuring
> (statically) that all calls to a particular function used at least
> one keyword from a particular set. But these things are not very
> common, and the tools are at hand (we used a compiler-macro).
In 3.2.2.1.3, the HyperSpec states the following:
"The presence of a compiler macro definition for a function or macro
indicates that it is desirable for the compiler to use the expansion of
the compiler macro instead of the original function form or macro form.
However, no language processor (compiler, evaluator, or other code
walker) is ever required to actually invoke compiler macro functions, or
to make use of the resulting expansion if it does invoke a compiler
macro function."
Is this a problem in practice?
> I think it might also be interesting to ask if any Lispnik has
> experienced *systemic* cross-cutting concerns in Lisp code that
> has clearly illustrated the need for new tools.
Right.
Pascal
Pascal Costanza <········@web.de> wrote:
>> common, and the tools are at hand (we used a compiler-macro).
> In 3.2.2.1.3, the HyperSpec states the following:
> However, no language processor (compiler, evaluator, or other code
> walker) is ever required to actually invoke compiler macro functions, or
I've on occasion wondered why something like :compiler-macros-used
isn't commonly seen in features. ;)
Cheers,
-- Nikodemus
Pascal Costanza <········@web.de> writes:
> ·············@comcast.net wrote:
>
>> I have *occasionally* run into such things. For instance, ensuring
>> (statically) that all calls to a particular function used at least
>> one keyword from a particular set. But these things are not very
>> common, and the tools are at hand (we used a compiler-macro).
>
> In 3.2.2.1.3, the HyperSpec states the following:
>
> "The presence of a compiler macro definition for a function or macro
> indicates that it is desirable for the compiler to use the expansion
> of the compiler macro instead of the original function form or macro
> form. However, no language processor (compiler, evaluator, or other
> code walker) is ever required to actually invoke compiler macro
> functions, or to make use of the resulting expansion if it does invoke
> a compiler macro function."
>
> Is this a problem in practice?
It was a case where we wanted to programmatically find every place
in the source code where a particular function was being used and
ensure that some condition held (I forget what the condition was).
If the compiler macro *didn't* work, a normal macro would have
sufficed.
I actually believe in the analysis leading to AOP. Based on the ACM
Communications articles, I understood it to say the following (in my
own words):
Object Oriented programming has proven to be no silver
bullet. More sophisticated tools are needed to manage and
facilitate reuse in large systems.
So AOP is not wrong, but it is concerned with issues that it would
take a non-lisp programmer to think of as deep problems. In lisp we
have for decades had the tools to solve this, not that some care isn't
needed (Kiczales has a good paper on issues in CLOS library
construction), but the tools are there.
See also Scott McKay's followup to my original posting on AOP in
http://groups.google.se/groups?hl=sv&lr=&ie=UTF-8&selm=KByJ7.1716%24eh7.981704%40typhoon.ne.mediaone.net
I am actually very happy for AOP. When discussing lisp with people, it
is normally pretty easy to convince them that lisp is indeed a very
sophisticated programming language, so they usually fall back to the
position that all of that sophistication isn't needed in practice, and
here we have the AOP guys building a case for why the full power of
CLOS is actually a good thing.
So C proved to the masses that a language can be too low-level, C++
proved that data abstraction (aka OO) is good, Java proved that
automated memory management (aka GC) rocks and now the AOP community
will prove to the world that multiple inheritance, multidispatch and
generic functions are useful abstractions!
PS
I too downloaded the Coady paper, but didn't get beyond the
introduction. However, that contained something very interesting,
although somewhat tangential to the AOP discussion. I quote:
Recently, Engler et al. captured popular sentiment with an
observation that disparate parts of operating system and kernel
code are linked together in a "fragile and intricate mess".
Greenspun's tenth rule strikes again :-)
------------------------+-----------------------------------------------------
Christian Lynbech | christian ··@ defun #\. dk
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
- ·······@hal.com (Michael A. Petonic)
Christian Lynbech wrote:
> See also Scott McKay's followup to my original posting on AOP in
> http://groups.google.se/groups?hl=sv&lr=&ie=UTF-8&selm=KByJ7.1716%24eh7.981704%40typhoon.ne.mediaone.net
The article of yours that Scott responds to is excellent!
In Scott's response, he writes:
> Rather than complain about this, somebody should simply spend one
> week-end reimplementing the ideas of Aspect/J and HyperJ in Lisp,
> carefully noting how long it took to do it, and describing anything that
> needed to be done outside of what is trivially supported by Lisp and
> the CLOS MOP.
That's what I am trying to do in my spare time. One of the results is my
paper on dynamically scoped functions in which I have tried to distill
what I think is the essence of AOP - a one-page extension of Lisp.
I have some more stuff like this in the pipeline.
"So much to do, so little time."
Pascal
Pascal Costanza <········@web.de> writes:
> (defmethod print :after
>     ((object (or 'myclass1 'myclass2 (eql myobject))))
>   ...)
>
> (defmethod move-point :after
>     ((object (and 'myclass1 (not (cflowbelow move-line)))))
>   (fire-change-event ...))
This looks like CLOS with types instead of classes, which I also would
have liked to have quite often. As far as I know, this was not put
into CLOS for efficiency reasons.
Nicolas
Christian Lynbech wrote:
>>>>>>"Pedro" == Pedro Pinto <······@cs.cmu.edu> writes:
>
>
> Pedro> Kiczales suggested that the Lisp/Scheme community, because of
> Pedro> its love for beautiful code, is particularly well suited to
> Pedro> explore what he considers to be the brand new design territory
> Pedro> introduced by AOP.
>
> When I read the ACM theme issue on AOP, I got the distinct feeling
> that most of what was going on in AOP was variations on CLOS features
> such as multiple inheritance and generic functions (ie. the idea that
> a method is not something "inside" an object).
>
> Was Kiczales challenged on the point of AOP being something brand new?
He pre-empted that strike by answering it before it was raised, saying
and demonstrating that it is not MI or GF.
It is not, tho /sometimes/ you could achieve an AOP result with MI. But
other times what they are doing with AOP is so horrifically bad (where
is my Bradshaw 2000 Hyperbolizer?) that a clean hack like MI or GFs
would not suffice. AOP is justified: You really do need to get into the
compiler to go where no application semantics (as opposed to debugging
tool) were ever meant to go.
The motivating example from Hell:
You have this application "service". (That's a new word for library,
right?) That service provides functionality to its callers. Duh. That
service is implemented by code divided up into subroutines or submodules
or whatever. Duh. Now sometimes these subroutines need to know something
about the specific caller of the service. Hello?
This was an exciting talk for me, because it means everything I have
learned about application design is completely wrong. It's a good thing
Bradshaw has me enrolled in bartender's school. I thought black boxes
and encapsulation and hiding details were /good/ things. Silly me.
But Lispniks after all are a pretty classy act. Kiczales was jumping
around the stage, giddy with delight over what he was presenting, and I
imagine many people went through what I did, wanting to protest but
unable to bring themselves to speak up for fear of hurting his feelings.
Still, truth will out, and in the end, without meaning to, the Lisp
community spoke its mind loud and clear. I think it happened on this
very example (service internals wanting to interrogate the service
caller). After he delivered the punchline solution he stopped and looked
at us expectantly. We all just sat there. He waited. We sat. Finally he
almost complained: "When I show this to Java users they stand up and
cheer." We sat.
Mind you, afterwards a group gathered and was speaking admiringly of
what they had just heard. I stopped long enough to spread some flamebait
but then left them alone.
kenny
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
Jan Rychter <···@rychter.com> writes:
> I had the impression that the audience at ILC2003 was very skeptical,
> for two principal reasons:
>
> 1) Kiczales was presenting examples in Java,
Given who the speaker was, I think the fact that he *only* had Java
examples speaks volumes. He knows CLOS better than almost anyone, but
wasn't able to come up with stunning examples in Lisp (even a
hypothetical one), or even anything directly relevant to CLOS.
I've seen AOP before, but I've never felt the need to implement
anything in Lisp because it seemed like we can already do everything
they can do that's not considered abusive. Unlike some people, I
finally *do* think we can take something from AOP ... but it's only
the idea of composing specifiers. I'm not even sure it would be a
good idea, but it's probably worth trying, to see if it is.
--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
Jan Rychter wrote:
>>>>>>"Christian" == Christian Lynbech <·················@ericsson.com> writes:
>>>>>>"Pedro" == Pedro Pinto <······@cs.cmu.edu> writes:
>
> Pedro> Kiczales suggested that the Lisp/Scheme community, because of
> Pedro> its love for beautiful code, is particularly well suited to
> Pedro> explore what he considers to be the brand new design territory
> Pedro> introduced by AOP.
>
> Christian> When I read the ACM theme issue on AOP, I got the distinct
> Christian> feeling that most of what was going on in AOP was variations
> Christian> on CLOS features such as multiple inheritance and generic
> Christian> functions (ie. the idea that a method is not something
> Christian> "inside" an object).
>
> Christian> Was Kiczales challenged on the point of AOP being something
> Christian> brand new?
> [...]
>
> I had the impression that the audience at ILC2003 was very skeptical,
> for two principal reasons:
>
> 1) Kiczales was presenting examples in Java,
> 2) everybody knows that Lisp/CLOS is the ultimate one can achieve in
> programming abstractions.
>
> Yes, I'm being ironic.
>
> At the beginning of his talk, I was still following assumption (2)
> closely and thinking "well, this can all be done with multiple
> inheritance and mixins", which turns out to be wrong, as Kiczales
> promptly pointed out.
Right, because they think subroutines inside the belly of a service
should be able to interrogate callers of the service to decide how to
behave. C'mon kiddies, Poli Sci 101, ok? Can you spell abstraction? Sher
ya can.
But that misfeature apparently makes Javans stand up and cheer. I was
trying to keep down my lunch. This dissonance can be taken as a metric
of how much pain those poor Javan souls are in.
We must assemble immediately a rescue mission to the Land of J, and
re-christen AOP as TUG, The Ultimate Goto.
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
For my interest - can anybody, now, explain (succinctly :-) what AOP is
and/or what problem(s) it is thought to solve? I have not yet stumbled
upon any article/paper that clears this up for me...
thanx
pt
Paul Tarvydas wrote:
> For my interest - can anybody, now, explain (succinctly :-) what AOP is
> and/or what problem(s) it is thought to solve? I have not yet stumbled
> upon any article/paper that clears this up for me...
AOP aims for modularization of so-called cross-cutting concerns. The
idea is that, as soon as programs get very large, concerns emerge that
can't be localized well in the class hierarchy (or the functional
decomposition). Instead, you get fragments of code for a particular
concern at several places of your code, without having a clear
abstraction of these places.
Standard examples are so-called non-functional concerns like logging,
security, locks, and so on. So for example, you want to say things like
"whenever xyz happens, do abc before and hij afterwards".
You can already do that to a certain extent in CLOS. The following method...
(defmethod some-method :before (...)
  (format t "about to execute some-method~%"))
...will print a log message whenever some-method is executed. What you
want to do in AOP is to define such before/after/around methods for
arbitrary "events" in your program.
For example, when you want to implement a persistence layer, you want to
say something like "whenever slots of a persistent object are accessed,
mark them as dirty". Such a "specification" does not mention any
specific classes or any specific methods. The idea of AOP is to have the
same expressive power at the language level, again without mentioning
the specific classes and methods.
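For instance, the dirty-marking rule can already be sketched in CLOS
itself via the metaobject protocol. This is a sketch only:
persistent-class, persistent-object and dirty-p are made-up names, and
the package that exports the MOP functions varies by implementation.

```lisp
;; A sketch, assuming an implementation that exposes the standard MOP
;; (validate-superclass, slot-value-using-class, slot-definition-name).
(defclass persistent-class (standard-class) ())

(defmethod validate-superclass ((class persistent-class)
                                (super standard-class))
  t)

(defclass persistent-object ()
  ((dirty-p :initform nil :accessor dirty-p))
  (:metaclass persistent-class))

;; "Whenever a slot of a persistent object is written, mark the object
;; dirty" -- stated once, without naming any specific class or slot.
(defmethod (setf slot-value-using-class) :before
    (new-value (class persistent-class) object slotd)
  (declare (ignore new-value))
  ;; Guard against recursing on the dirty-p slot itself.
  (unless (eq (slot-definition-name slotd) 'dirty-p)
    (setf (slot-value object 'dirty-p) t)))
```

The point is that the :before method attaches to the *event* "a slot of
any persistent object is written", which is exactly the kind of
quantification AOP wants to provide for arbitrary program events.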
The claim is that such language constructs improve understandability,
maintainability and other -ilities of your code. Whether that's actually
true is not clear to me.
Pascal
Pascal Costanza wrote:
> AOP aims for modularization of so-called cross-cutting concerns. The
Hey, thanks - this finally makes sense!
> The claim is that such language constructs improve understandability,
> maintainability and other -ilities of your code. Whether that's actually
> true is not clear to me.
This sure looks like yet another epicycle, waiting for the system to finally
collapse...
pt
Pedro Pinto <······@cs.cmu.edu> writes:
> Espren Verste gave a very interesting talk on the use of Lisp in the
> real world.
I'm flattered - despite the fact that the norwegian meaning of your
misspelling of my last name is "worst" :-)
--
(espen)
Espen Vestre wrote:
> I'm flattered - despite the fact that the norwegian meaning of your
> misspelling of my last name is "worst" :-)
I am so sorry. I thought I had verified the spelling of each name, but
apparently I was overconfident about yours... Alas.
-pp
Espen Vestre <·····@*do-not-spam-me*.vestre.net> writes:
> Pedro Pinto <······@cs.cmu.edu> writes:
>
>> Espren Verste gave a very interesting talk on the use of Lisp in the
>> real world.
>
> I'm flattered - despite the fact that the norwegian meaning of your
> misspelling of my last name is "worst" :-)
Worse is better :-)
Paolo
--
Paolo Amoroso <·······@mclink.it>
Pedro Pinto <······@cs.cmu.edu> wrote in message news:<···················@twister.southeast.rr.com>...
> Jerry Sussman's (http://www.swiss.ai.mit.edu/~gjs/gjs.html) keynote was
> awe inspiring.
[...]
> Such languages
> happen to be executable by machines, but Jerry feels this is of
> secondary importance compared to their use as tools for precise human
> reasoning and communication, particularly in teaching. He has
> investigated this insight while teaching classical mechanics at MIT.
The video of his colloquium at http://www.aduni.org/colloquia/sussman/
sounds very similar.
Does anyone know the list of speakers that didn't show at the
conference? I'm thinking of getting some of the videos of speakers
from Franz, like they offered from ILC 2002.
Pedro Pinto <······@cs.cmu.edu> wrote:
>
>Just got back from ILC 2003 and I thought I should post some of my notes.
>
>My impressions on the conference are mixed. The lead organizer fell ill
>and despite valiant efforts from some of the other organizers, the
>feeling of mild chaos was constant. Throughout the conference a complete
>schedule (one that included presentation titles) was never posted, some
>of the speakers simply did not show up, others did not receive their
>schedules until the day of the presentation. Nevertheless the quality of
> some of the talks did make up for the confusion.
>
Jan Rychter wrote:
>>>>>>"Pedro" == Pedro Pinto <······@cs.cmu.edu>:
>
> -- Matthias Felleisen's talk, from which I understood how
> continuations can help with web applications. I'm not convinced of
> the practical applications of this technique (think scalability!),
> but it was certainly interesting.
Thanks for the compliment. I need to respond to the
scalability argument, because I believe people misunderstood
a part of the talk and a response to one of the questions from
the audience.
1. There are two kinds of continuations in the world of Web
apps: those that every programmer must use, independently of
what language the script is in, and call/cc-style continuations.
2. The first part of the talk -- based on an ASE 2002 paper --
shows how to create the first kind of continuation *systematically*.
Most programmers create them in a haphazard manner and therefore
you can find all these bugs I mentioned at the beginning of the
talk.
Every servlet/cgi script/web service in the world manages these
things and so there are usually millions on a commercial server.
3. The second part of the talk suggests that your programs could
be in a "natural" style (rather than the twisted cps style of most
programs) if your language included call/cc. Someone asked me how
many continuations of this kind our running, experimental server
manages, and I conjectured "something small, in the 1000s."
I readily acknowledge that this is not commercial
quality, and it won't be until someone tries. I had hoped for a couple
of years that LISPers would do it, but it seems that the C people
take the hint more seriously. Take a look at Apache's Cocoon project.
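To make the contrast concrete, here is a minimal sketch of the idea --
not our server's actual API; send/suspend, *pending* and respond are
illustrative names only:

```scheme
;; A minimal sketch: capture the current continuation, file it under a
;; fresh key embedded in the URL sent to the browser, and resume it
;; when the browser comes back.  (gensym is non-R5RS but ubiquitous.)
(define *pending* '())                  ; assoc list of key -> continuation

(define (send/suspend page-for-key)
  (call-with-current-continuation
   (lambda (k)
     (let ((key (gensym)))
       (set! *pending* (cons (cons key k) *pending*))
       (respond (page-for-key key)))))) ; send the page, abort this request

(define (resume key value)              ; browser followed the key's URL
  (let ((entry (assq key *pending*)))
    (if entry ((cdr entry) value))))

;; With this, a two-step dialogue reads as straight-line code instead
;; of being twisted into explicit CPS across separate handlers:
;; (let* ((name (send/suspend (ask-page "Your name?")))
;;        (qty  (send/suspend (ask-page "How many?"))))
;;   (confirm name qty))
```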
Hope this clarifies the two pieces to my talk; I should have been
clearer in the first place.
-- Matthias
Jan Rychter wrote:
>>>>>>"Matthias" == Matthias Felleisen <········@ccs.neu.edu>:
>
> Matthias> 3. The second part of the talk suggests that your programs
> Matthias> could be in a "natural" style (rather than the twisted cps
> Matthias> style of most programs) if your language included
> Matthias> call/cc. Someone asked me how many continuations of this kind
> Matthias> our running, experimental server manages, and I conjectured
> Matthias> "something small, in the 1000s."
>
> [...]
>
> Matthias> Hope this clarifies the two pieces to my talk; I should have
> Matthias> been clearer in the first place.
>
> I also should have been clearer in stating my doubts -- when I wrote
> "think scalability", I meant "think scalability as in distributed
> processing". When you distribute the load across many servers the
> solution you mention in your point (3) above becomes suddenly much more
> difficult, unless you are really careful to have user sessions that
> "stick" to particular servers.
Yes, this is indeed a tricky issue. I am confident, however, based on the
experiences with Kali Scheme and some parallel/distributed Scheme experiments in
Sendai (Japan), that you can migrate these continuations from server to server.
I have thought of investigating this issue, that is, posing it as a problem for
a student. Paul Graunke (the first PhD on this project) had considered it, but
Shriram and I believed that at the time other concerns were more important.
-- Matthias
On Wed, 05 Nov 2003 17:43:37 -0500, Matthias Felleisen wrote:
> Jan Rychter wrote:
>
>> [...]
>>
>> I also should have been clearer in stating my doubts -- when I wrote
>> "think scalability", I meant "think scalability as in distributed
>> processing". When you distribute the load across many servers the
>> solution you mention in your point (3) above becomes suddenly much more
>> difficult, unless you are really careful to have user sessions that
>> "stick" to particular servers.
>
> Yes, this is indeed a tricky issue. I am confident, however, based on the
> experiences with Kali Scheme and some parallel/distributed Scheme
> experiments in
> Sendai (Japan), that you can migrate these continuations from server to
> server.
> I have thought of investigating this issue, that is, posing it as a
> problem for a student. Paul Graunke (the first PhD on this project) had
> considered it, but Shriram and I believed that at the time other
> concerns were more important.
>
>
I can't speak for the entire space of distributed applications, or even for
a restricted subset like e-commerce, user-interactive websites. I can speak
about one pretty large one.
- For simplicity, speed and scalability, the application was designed to be
functionally homogeneous. An initial incoming user without a current
session can simply be round-robined to the next box. This removes the
load-balancing problem. No box or cluster of boxes is dedicated to any
particular subset of functionality (e.g., account maintenance on these
app servers vs. order maintenance on those). One box is 100%
substitutable for another to service any given request.
- You can safely assume box affinity these days for an e-commerce app,
i.e., once a session is established on box X you can assume all
additional requests by that user for that session will map to box X.
- The exceptions are abnormal situations such as box failure or a
required box shutdown.
For speed and scalability purposes you just can't afford to shuttle
session state between boxes on a per-request basis. I expect this is true
whether session state is shuttled as persisted data on a common DB, as a
straight peer-to-peer transfer of state data through a serialization
mechanism, or via a packaged continuation.
Distributed agents are a whole different thing. Agent X could be
suspended on box A and then re-established on box B by moving the agent.
Again, things boil down to speed and scalability. In my experience, real-
time user interaction on a typical e-commerce site which has a modicum of
statefulness (personalized user state requiring a user <-> session
mapping) just about has to assume box affinity "for the majority" of user
requests.
Ray
Matthias Felleisen wrote:
> Jan Rychter wrote:
>
>
>>>>>>>"Matthias" == Matthias Felleisen <········@ccs.neu.edu>:
>>
>> Matthias> 3. The second part of the talk suggests that your programs
>> Matthias> could be in a "natural" style (rather than the twisted cps
>> Matthias> style of most programs) if your language included
>> Matthias> call/cc. Someone asked me how many continuations of this kind
>> Matthias> our running, experimental server manages, and I conjectured
>> Matthias> "something small, in the 1000s."
>>
>>[...]
>>
>> Matthias> Hope this clarifies the two pieces to my talk; I should have
>> Matthias> been clearer in the first place.
>>
>>I also should have been clearer in stating my doubts -- when I wrote
>>"think scalability", I meant "think scalability as in distributed
>>processing". When you distribute the load across many servers the
>>solution you mention in your point (3) above becomes suddenly much more
>>difficult, unless you are really careful to have user sessions that
>>"stick" to particular servers.
>
>
> Yes, this is indeed a tricky issue. I am confident, however, based on the
> experiences with Kali Scheme and some parallel/distributed Scheme experiments in
> Sendai (Japan), that you can migrate these continuations from server to server.
> I have thought of investigating this issue, that is, posing it as a problem for
> a student. Paul Graunke (the first PhD on this project) had considered it, but
> Shriram and I believed that at the time other concerns were more important.
This is possible and has been successfully tested using SISC and its
serializable continuations as well, assuming certain conditions hold
(such as the same heap image running on each machine, using functional
state, etc.).
Scott