From: Software Scavenger
Subject: Design patterns for Lisp
Date: 
Message-ID: <a6789134.0111250609.4e0349f1@posting.google.com>
In the recent thread about design patterns, RPG's definition of
patterns seemed to be approximately that patterns are components of a
programmer's knowledge, which differentiate between more and less
experienced programmers.  Such components seem to me to include
algorithms and good usage of a programming language, along with other
kinds of knowledge.  In other words, books of algorithms, such as
Knuth's, and books of good Lisp usage, such as PAIP, etc., could
actually be considered patterns books.  Or they could be combined into
a bigger patterns book, and become individual chapters of it.  And
Cliki, ALU, CLOCC, etc., could be considered Lisp patterns websites.

And since "patterns" is such a hot buzzword, we might help make Lisp
more popular by combining such websites into one big "Lisp Patterns"
website, organized by type of pattern, such as algorithms, usage, etc.
 It could even have a section on the GoF patterns with explanations
for most of them of why each is not needed in Lisp.
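
(To illustrate that last point with a minimal Lisp sketch, untested,
function name made up: the GoF Strategy pattern, a whole
interface-plus-classes affair in C++, mostly dissolves into passing a
function value.)

;; GoF "Strategy": vary an algorithm independently of its caller.
;; In Lisp the strategy is just a first-class function.
(defun sort-with-strategy (sequence strategy)
  (sort (copy-seq sequence) strategy))  ; COPY-SEQ since SORT is destructive

(sort-with-strategy '(3 1 2) #'<)  ; => (1 2 3), ascending strategy
(sort-with-strategy '(3 1 2) #'>)  ; => (3 2 1), descending strategy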

Antipatterns might be a very big subset of patterns, and might
therefore be a big section of the website.

Another kind of component of a programmer's knowledge is how to earn a
living from programming.  How to get along with your boss, how to meet
deadlines, how to find a Lisp job, etc.  Those might or might not have
a place in a collection of patterns.  They might be considered an
additional kind of knowledge, in addition to patterns.

From: Richard P. Gabriel
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <B826A880.2B02%rpg@dreamsongs.com>
in article ····························@posting.google.com, Software
Scavenger at ··········@mailandnews.com wrote on 11/25/01 6:09:
> And since "patterns" is such a hot buzzword, we might help make Lisp
> more popular by combining such websites into one big "Lisp Patterns"
> website, organized by type of pattern, such as algorithms, usage, etc.
> It could even have a section on the GoF patterns with explanations
> for most of them of why each is not needed in Lisp.

Alas, patterns are passé. I didn't think Lisp folks would let themselves get
into such a backwater. I recall trying to explain the concepts to some
Scheme guys on one of the Scheme mailing lists about 5 or 6 years ago, but
they also had the abstraction disease a lot of people have and could see
patterns only that way, and therefore as irrelevant to them. I think they
forgot they were people too.

            -rpg-
From: Xah Lee
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <B826DEE2.489D%xah@best.com>
Dear Richard Gabriel,

> ...
> I think they forgot
> they were people too.

So, the whole design pattern thing and now the new XP wishy-washy thingy are
ways to encourage stupid people, so that we can multiply those incompetent
kiddies and weenies in the software industry?

Is society supposed to be moving forward, or backward? The pile of
mathematical knowledge, and consequently all sorts of unforeseen
technologies, advance forthwith in armies, but in the software industry are
we supposed to start a New Age dark age with the patterns and eXtreme
Programming soft and informal and hand-waving and poetic-justice crap?

The mathematics in computer science alone is now sufficiently large that a
single person perhaps cannot apprehend it in a lifetime (look at Knuth's
ongoing work). Now you pattern kooks want to rob programmers of an already
wanting education, steering them to the patterns and XP voodoo?

Why is the computing world filled with these patterns folks and OOP
go-getters and Larry Wall types of artists these days?

If a bridge falls, the engineers who built it will be held responsible. If
software causes a disaster, guess what? These patterns-and-whatnot
wishy-washy monkey coders have licenses and software agreements to shirk
their incompetence.

Is software building really that complex? So complex that errors are
unavoidable? So complex that it is more complex than bridge and
flying-machine engineering?

It is an irony that computers, by their nature, do not make mistakes. Yet
they are littered with the most errors and crashes.

Software today is so fucked up precisely because it is too easy. Every
monkey can don a "software engineer" hat after a few years of keyboard
punching. Every donkey can speak grand patterns and propound and drool the
XP = e"X"treme Programming methodology. How many of these arses can speak a
single dialect of modern mathematics? Can they contribute a single
non-trivial theorem? How many programmers in the software industry even
know what modern mathematics is or consists of? What are the branches of
modern logic? I can pull from the top of my head discrete math subjects as
easily as i can pull the hairs on my head, subjects that even you -- a
mathematician -- are not familiar with.

The problem isn't stupid people. The problem is a stupid attitude in
society: that software is too complex and errors are OK. The very fucking
damn lies by incompetent monkey coders that are Unix and C and Perl et al.

What can we do, folks? First of all, stop lying to ourselves and the public
about how software is complex and unpredictable. Software building is ten
thousand times easier than bridge building or flying-machine building. If
the world's bridges and airplanes do not crash now and then, nor should
software. Software engineers must bear full responsibility for all software
faults. The public should demand that software licenses not contain the
"though we tried hard, we are totally not responsible if the software
fucked up" clause.

This must start by building an awareness in the general public that
irresponsible licenses are not acceptable. When software engineers bear
responsibility, then monkey coders will gradually fall by the wayside, and
patterns and XP and UML and suchlike shit will crawl back into the woodwork
it came from.

Given our already fucked up legacy, we can only start to build a system,
environment, and attitude, where every software is mathematically provably
correct. This is not only achievable, but not difficult when all the
programmers are well educated.

Actually, i think programming patterns and e"X"treme Programming and OOP
mantras and UML are great, but only after every programmer in this world
has mastered Knuth's books, and speaks lambda calculus fluently to begin
with. Then, perhaps, we can start to suck each other's dicks and slobber
patterns with fellow XP chaps.

When we have a world where for each software bug the engineer will be
penalized for one month's salary, then we shall see if more math knowledge
prevails, or the patterns and XP type of thingy survives.

postscript:
Dear Richard, you said that the patterns movement is waning? I'm glad to
hear that. As i have expressed here before, if the world is not filled with
100% morons, then stupid ideas will die off, and the world will move on, as
is the pattern in history. I think that OOP today isn't as hot as back in,
say, 1999. Also, please excuse my language. It is my style. A
confrontational style. If i do not speak up, who can? Who would? Who is
willing?

 Xah
 ···@xahlee.org
 http://xahlee.org/PageTwo_dir/more.html


> From: "Richard P. Gabriel" <···@dreamsongs.com>
> Organization: MindSpring Enterprises
> Newsgroups: comp.lang.lisp
> Date: Sun, 25 Nov 2001 13:54:09 -0800
> Subject: Re: Design patterns for Lisp
> 
> in article ····························@posting.google.com, Software
> Scavenger at ··········@mailandnews.com wrote on 11/25/01 6:09:
>> And since "patterns" is such a hot buzzword, we might help make Lisp
>> more popular by combining such websites into one big "Lisp Patterns"
>> website, organized by type of pattern, such as algorithms, usage, etc.
>> It could even have a section on the GoF patterns with explanations
>> for most of them of why each is not needed in Lisp.
> 
> Alas, patterns are passé. I didn't think Lisp folks would let themselves get
> into such a backwater. I recall trying to explain the concepts to some
> Scheme guys on one of the Scheme mailing lists about 5 or 6 years ago, but
> they also had the abstraction disease a lot of people have and could see
> patterns only that way, and therefore as irrelevant to them. I think they
> forgot they were people too.
> 
> -rpg-
From: Tim Bradshaw
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <fbc0f5d1.0111260330.230430da@posting.google.com>
Xah Lee <···@best.com> wrote in message news:<·················@best.com>...
> 
> Is software building really that complex? So complex that errors are
> unavoidable? So complex that it is more complex than bridge and
> flying-machine engineering?

I wonder how many bridges would stand up, or aircraft would fly, if they
could not rely on the continuity and near-linearity of physical systems to
ensure that small errors do not usually result in explosions.
From: Software Scavenger
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <a6789134.0111260506.19ec4df6@posting.google.com>
Xah Lee <···@best.com> wrote in message news:<·················@best.com>...

> environment, and attitude, where every software is mathematically provably
> correct. This is not only achievable, but not difficult when all the

What does it mean for software to be correct?  Can it accidentally
kill a million people and still be considered mathematically correct?
From: Kent M Pitman
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <sfwy9ktxzmj.fsf@shell01.TheWorld.com>
··········@mailandnews.com (Software Scavenger) writes:

> Xah Lee <···@best.com> wrote in message news:<·················@best.com>...
> 
> > environment, and attitude, where every software is mathematically provably
> > correct. This is not only achievable, but not difficult when all the
> 
> What does it mean for software to be correct?  Can it accidentally
> kill a million people and still be considered mathematically correct?

I think so.  While "provably correct" has its virtues, I think
"provably harmless" is better.  "provably correct" does not imply
"provably harmless".  All kinds of correctly functioning things can
cause harm.  And one can sometimes prove harmlessness without having
to prove correctness.
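
To make the distinction concrete, here is a contrived sketch (the
spec, the numbers, and the function name are all made up):

;; Spec: "return 10 mg of drug per kg of body weight".
(defun dose-mg (weight-kg)
  (* 10 weight-kg))

One can prove DOSE-MG meets that spec for every input.  But if the
safe dose was really 1 mg/kg, the provably correct program still harms
the patient.  Correctness is relative to a spec; harm is not.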

Xah's discussion also presupposes that the only purpose of
software is engineering.  Were art restricted to that which is
provably correct, there would be no Escher.

I don't mind people trying to refine and make more rigorous that part
of computer science which is intended to be engineering, but I regard
computer science as much broader than that and would not like to
see this treatment uniformly applied to all it might be.  And
certainly not all science.  The name is an utter misnomer, IMO.  But
then, it was pointed out to me by someone at some point that
"disciplines" that feel compelled to use the word "science" in their
name are rarely true sciences.  In the case of CS, some of that may be
a failure to achieve some goals people wish it would achieve, but some
of it is just as well.  I don't doubt that psychology will ultimately
yield to some fairly rigorous science, but I think it would be a
mistake, at least for the near term, to reclassify Philosophy as
"Thought Science" and then to berate it for entertaining fictions,
contradictions, random neuron firings, and whatever other oddities
make up the whole of human perception, conception, and so on.
From: Sashank Varma
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <sashank.varma-2611011240440001@129.59.212.53>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

>I don't doubt that psychology will ultimately
>yield to some fairly rigorous science, but I think it would be a
>mistake, at least for the near term, to reclassify Philosophy as
>"Thought Science" and then to berate it for entertaining fictions,
>contradictions, random neuron firings, and whatever other oddities
>make up the whole of human perception, conception, and so on.

My take:

Philosophy is not a science, nor does it pretend to be.  Philosophers
don't run experiments and evaluate their theories relative to empirical
data.  This is not a knock on philosophy.  Mathematics is not a science
by my definition either.

Linguistics is an interesting case (as you know).  Linguists certainly
hatch strong theories.  They also take the linguistic world seriously,
evaluating their theories relative to the (presumed) grammaticality
judgements of real speakers.  And in the hands of Whorf, Chomsky, and
others, linguists have influenced the scientific activity of other
fields, e.g., psychology.  But grammaticality judgements are a peculiar
sort of data that differ fundamentally from the data of other sciences.
It is thus unclear whether linguistics is properly scientific.

Experimental (cognitive, developmental) psychology definitely imitates
the physical sciences.  It has theories and it runs experiments.  But
its theories are often informal.  It is difficult to know whether they
are just immature versions of what their natural science counterparts
have had more time to develop, or whether they are flawed in principle.
And psychological experiments seem more inherently noisy and unreliable
than their analogs in the physical sciences.  Experimental psychology 
talks the talk and tries to walk the walk, but it is unclear whether it
will ever 'get there' -- psychological phenomena may prove to be the
sort of thing for which the scientific method is just ill-suited.

Given that what I've said is probably inflammatory enough, I'll remain
quiet on the scientific standing of other disciplines...
From: Kent M Pitman
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <sfwsnb15gvf.fsf@shell01.TheWorld.com>
·············@vanderbilt.edu (Sashank Varma) writes:

> Philosophy is not a science, nor does it pretend to be.  Philosophers
> don't run experiments and evaluate their theories relative to empirical
> data.  This is not a knock on philosophy.  Mathematics is not a science
> by my definition either.

I didn't mean to suggest that Philosophy intended to be a science.  I
meant to say that Xah's suggestion that, paraphrasing, "to understand
is to make mathematical" does not capture the ability of philosophers
to capture truths, even vague ones, in useful ways.  They make no math
out of it, yet they do add to our understanding.
 
> Experimental (cognitive, developmental) psychology definitely imitates
> the physical sciences.  It has theories and it runs experiments.  But
> its theories are often informal.  It is difficult to know whether they
> are just immature versions of what their natural science counterparts
> have had more time to develop, or whether they are flawed in principle.
> And psychological experiments seem more inherently noisy and unreliable
> than their analogs in the physical sciences.  Experimental psychology 
> talks the talk and tries to walk the walk, but it is unclear whether it
> will ever 'get there' -- psychological phenomena may prove to be the
> sort of thing for which the scientific method is just ill-suited.

Practically speaking, we have every reason to assume that science
comes down to a mere handful of variables and formulas to explain the
physics of a very huge universe.  Scientists are pushing for simpler
and simpler unified field theories.  That part of physics
seems to me to be about explaining either micro-effects or homogeneous
aggregate effects involving relatively uniform physical quantities.  A
sun might be assumed to be "mostly hydrogen", for example, and so more
easily be dealt with in the aggregate.  Solar science does not extend
to predicting individual solar winds at particular geographical points
on a particular sun, for example, nor is a sun ever described as having
a particular behavior because "it had a troubling childhood".  Where
we do see such things, whether in the study of the mind or in
meteorology (earthly or otherwise), we see research done in a very
different way, because the researchers KNOW there are a lot of
variables to be addressed and they know the issue of combination, not
the issue of core nature, is the dominating factor.  If we had to
psychoanalyze fires to understand how they started, I think our
present scientific method would break down.

So to each area of study its best tools.  They should not have to all be
of a kind nor to a single standard.
From: Tim Bradshaw
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <fbc0f5d1.0111270349.2edf8cad@posting.google.com>
Kent M Pitman <······@world.std.com> wrote in message news:<···············@shell01.TheWorld.com>...

> Practically speaking, we have every reason to assume that science
> comes down to a mere handful of variables and formulas to explain the
> physics of a very huge universe.  Scientists are pushing for simpler
> and simpler unified field theories.  That part of physics
> seems to me to be about explaining either micro-effects or homogeneous
> aggregate effects involving relatively uniform physical quantities.  

Well, I think you have to take into account that things like unified
field theories are done by about 8 people.  OK, it's more than 8, but
it's not everyone.  It has high visibility because it's incredibly
glamorous, but there are an awful lot of people doing stuff that is
less glamorous but still very interesting.

A good example of this is superconductivity.  All the *fundamental*
stuff to explain superconductivity was probably more-or-less sorted
out by the late 30s
(basically nonrelativistic QM), but it took an *awful* long time after
that for people to really have the faintest idea what was going on - I
think until the 70s - and there are still, I suspect, huge lacunae in
our knowledge of how things work.

And there are many other examples: lots and *lots* of people in
physics are doing things which are not trying to sort out the
fundamental laws of the universe, and I think there is an increasing
feeling that trying to understand the behaviour of actual macroscopic
physical objects like stars and weather systems is important.  One of
the reasons why this hasn't historically been done is that the best
you can probably do with such systems is to reduce the maths to the
point where actually doing predictions is merely a very
computationally demanding task rather than a completely intractable
one, so until reasonably fast computers became available there was
really no point in trying to attack such problems - who cared, in
1900, that you could reduce some problem from 10^20 multiplications to
10^9, you needed to get it down to 10^4 before you could even think
about it.
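
(Back-of-envelope, in Lisp, assuming one multiplication per second by
hand:

(/ 1e4 3600)              ; => ~2.8 (hours): feasible in 1900
(/ 1e9 (* 3600 24 365))   ; => ~31.7 (years): a whole career
(/ 1e20 (* 3600 24 365))  ; => ~3.2e12 (years): forget it

so getting a problem down to 10^9 only starts to pay once a machine
does the multiplying for you.)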

--tim
From: Sashank Varma
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <sashank.varma-2711011729560001@129.59.212.53>
For more on this, check out:

     Anderson, P. W. (1972). More is different: Broken symmetry and
     the nature of the hierarchical structure of science. Science, 177,
     393-396.

It's a wonderfully short read by one of the physicists who cracked
superconductivity, work for which he won the Nobel prize.  He
examines the fallacy of throwing all of science's eggs in the
reductionist basket.

In article <····························@posting.google.com>,
··········@tfeb.org (Tim Bradshaw) wrote:

>A good example of this is superconductivity.  All the *fundamental*
>stuff to explain superconductivity was probably more-or-less sorted
>out by the late 30s
>(basically nonrelativistic QM), but it took an *awful* long time after
>that for people to really have the faintest idea what was going on - I
>think until the 70s - and there are still, I suspect, huge lacunae in
>our knowledge of how things work.

"In the case of superconductivity, 30 years elapsed between the
time when physicists were in possession of every fundamental law
necessary for explaining it and the time when it was actually
done." (p. 395 of Anderson, 1972)

>And there are many other examples: lots and *lots* of people in
>physics are doing things which are not trying to sort out the
>fundamental laws of the universe, and I think there is an increasing
>feeling that trying to understand the behaviour of actual macroscopic
>physical objects like stars and weather systems is important.  One of
>the reasons why this hasn't historically been done is that the best
>you can probably do with such systems is to reduce the maths to the
>point where actually doing predictions is merely a very
>computationally demanding task rather than a completely intractable
>one, so until reasonably fast computers became available there was
>really no point in trying to attack such problems - who cared, in
>1900, that you could reduce some problem from 10^20 multiplications to
>10^9, you needed to get it down to 10^4 before you could even think
>about it.

     "The main fallacy in this kind of thinking is that the
reductionist hypothesis does not by any means imply a
�constructionist� one: the ability to reduce everything to simple
fundamental laws does not imply the ability to start from those
laws and reconstruct the universe. In fact, the more the elementary
particle physicists tell us about the nature of the fundamental
laws, the less relevance they seem to have to the very real
problems of the rest of science, much less to those of society.
     "The constructionist hypothesis breaks down when confronted
with the twon difficulties of scale and complexity. The behaviors
or large and complex aggregates of elementary particles, it turns
out, is not to be understood in terms of a simple extrapolation
of the the properties of a few particles. Instead, at each level
of complexity enturely new properties appear, and the understanding
of the new behaviors requires research which I think is as
fundamental in its natura as any other. That is, it seems to me
that one may array the sciences roughly linearly in a hierarchy,
according to the idea: The elementary entities of science X obey
the laws of science Y. [�]
     "But this hierarchy does not imply that science X is �just
applied Y.� At each stage entirely new laws, concepts, and
generalizations are necessary, requiring inspiration and creativity
to just as great a degree as in the previous one. Psychology is
not applied biology, nor is biology applied chemistry." (p. 393 of
Anderson, 1972)
From: Xah Lee
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <B828EDF9.49CB%xah@best.com>
Dear Kent Pitman and readers,

Kent Pitman wrote:
> I didn't mean to suggest that Philosophy intended to be a science.  I
> meant to say that Xah's suggestion that, paraphrasing, "to understand
> is to make mathematical" does not capture the ability of philosophers
> to capture truths, even vague ones, in useful ways.  They make no math
> out of it, yet they do add to our understanding.

It is apparent to me that you don't know much philosophy, don't know much
about the evolution of philosophy from the Greeks to the modern era. I
don't claim to be an expert at these, but i don't make clueless remarks
about things i don't know.

When we discuss philosophy in our context, we can focus on two things: its
subjects and its methodology.

In the Greek era, philosophy was the umbrella for all of today's science
subjects, plus mysticism, plus religion. Its methods were observation and
crude reasoning. A good part of the philosophy of that era would fall under
the general headings of today's physics and psychology and religion.

In the era of 300 years ago, all the "natural sciences" such as physics,
astronomy, linguistics, moved out of philosophy. Metaphysics and mysticism
still remained. The approaches were crude science and refined reasoning.

Today, there is little to no philosophical study as we knew the word.
The subjects of classic philosophy either disappeared totally, such as
metaphysics and the existence of God, or moved into a vast number of
disciplines under the general headings of physics, chemistry, linguistics,
social sciences, mathematics. The diversity and branches are so fine that
one accomplished mathematician does not understand the next mathematician.

It is very unfortunate that the online encyclopedia britannica.com is no
longer free. Otherwise, my readers could check out the vast branches of
sciences that were once under the heading of philosophy, not to mention
getting a real treat on the history of philosophy. Though, you can still
purchase a Britannica CD or DVD. Good investment! (too bad they don't make
Mac versions. Fuck!)

Today's intellectualism allows no room for scientifically baseless things,
philosophy or not. Get on track! Get a load of me!

(though the good online Britannica is gone, one can still learn a lot
immediately on the web about philosophy or its history. One can immediately
verify my claims. Here are a few tips: "logical positivism", "philology,
linguistics", "rationalism", "mathematical philosophy", the mind and body
problem, dualism, ... ah, just peruse my English vocabulary page too:
http://xahlee.org/PageTwo_dir/Vocabulary_dir/vocabulary.html
there are enough *isms there that if you nose into each you'll get a
substantial understanding of philosophy and history and English too ...
also keep in mind that online info is often low quality, even incorrect or
misleading.

here's a good one:
Stanford Encyclopedia of Philosophy
http://plato.stanford.edu/contents.html

A History of Western Philosophy by Bertrand Russell.
http://www.amazon.com/exec/obidos/ASIN/0671201581/xahhome-20/
(if you buy the book through that link, i get a commission.)
)

 Xah
 ···@xahlee.org
 http://xahlee.org/PageTwo_dir/more.html


> From: Kent M Pitman <······@world.std.com>
> Organization: My ISP can pay me if they want an ad here.
> Newsgroups: comp.lang.lisp
> Date: Mon, 26 Nov 2001 22:04:04 GMT
> Subject: Re: Design patterns for Lisp
From: Erik Naggum
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3215866480290824@naggum.net>
* Xah Lee <···@best.com>
| It is apparent to me that you don't know much philosophy, don't know
| much about the evolution of philosophy from the Greeks to the modern
| era. I don't claim to be an expert at these, but i don't make clueless
| remarks about things i don't know.

  It is apparent to me that Kent Pitman knows a great deal of philosophy as
  such and indeed the philosophies of several disciplines.  It is apparent
  to me because of the questions he asks.  This would not be apparent to
  someone who thought that philosophy provided answers.

///
-- 
  The past is not more important than the future, despite what your culture
  has taught you.  Your future observations, conclusions, and beliefs are
  more important to you than those in your past ever will be.  The world is
  changing so fast the balance between the past and the future has shifted.
From: Xah Lee
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <B82C6DD6.4EC9%xah@best.com>
Dear intellects,

> psychological phenomena may prove to be the
> sort of thing for which the scientific method is just ill-suited.

The above sentence is -- shall we be dramatic? -- an oxymoron.

There are lots of the dogmatic-dullard type, who like to make everything
"scientific". Therefore, you see these dullish scientists and scholars
who would _define_ _scientific_ by some dogmatic criteria.

Like, they would say, science is such and such that meets such and such
criteria, such as being verifiable. Such and such, and such and such, that
such science cannot and is not supposed to answer certain such and such
questions. Such and such, are the drools of these dullards.

Let me make an insight here:
There are absolutely no truths or facts in this world. Every supposed truth,
fact, or whatnot, is simply an agreement of the majority. For example,
1+1==2, ain't that the truth? It depends on who you ask. There are artists
and poets, who will give you grandiloquent and surefire answers. Then
there are pundits, who will make smart asses of themselves. Even among the
greatest mathematicians and logicians and philosophers, it will come to
grave hair-pulling and eye-ball-gouging debates about foundations, little
details, paradoxes, schools of thought, and not to mention friend or foe
(as we can see in history). And if you bribe an Oxford fella or IBM fella,
then he will prove that one plus one equals three, and retire with your
money. After all, we are human beings of flesh and blood. Ain't truths a
product of _our_ brains, changing depending on which philosopher is talking?

What science truly is or is not, is not for people to define and qualify by
some dogmatic criteria. This goes for all questions in life, regardless of
whether it is an answerable or technical question.

For all practical and theoretical purposes, scientific is: USE YOUR BRAIN.
When you use your brain on something, that's scientific.

What my insight illustrates is technically a philosophical viewpoint: that
truths are human creations. For those of average intellect, the practical
moral is: do not fall for any dogma; simply use your brain, and judge for
yourself.

Question: What is not scientific?
Answer: not using your brain.

Q: For examples?
A: Mysticism, occultism, OOP fads.

Q: But aren't there things not scientifically understood but useful, like
acupuncture?
A: Yes.

Q: So?
A: Use your brain.

Q: So, i just use my brain, and then whatever conclusion i reach will
be my answer?
A: Yes!

Q: That's why even the greatest mathematicians couldn't agree on
mathematical things. Is that right?
A: You got it!

Q: Is Design Patterns good?
A: No.

Q: But i used my brain, and i think it is worth a try. What's going on?
A: Use your brain.

Q: Oh, so i should use my brain and JUDGE FOR MYSELF?
A: Yes.

Q: Do you have any final comment on Design Patterns?
A: When we have a world where for each software bug the engineer will be
penalized for one month's salary, then we shall see if more math knowledge
prevails, or the Design Patterns and XP type of utter crap survives.

Q: Why should we believe you? oh, i guess it's "use your brain"??
A: From now on it's $5 per question.

Q: Should we also doubt the "use your brain" dogma?

 Xah
 ···@xahlee.org
 http://xahlee.org/PageTwo_dir/more.html


> From: ·············@vanderbilt.edu (Sashank Varma)
> Organization: Vanderbilt University usenet news server
> Newsgroups: comp.lang.lisp
> Date: Mon, 26 Nov 2001 12:40:44 -0600
> Subject: Re: Design patterns for Lisp
From: Marco Antoniotti
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <y6c7ks8s68m.fsf@octagon.mrl.nyu.edu>
Xah Lee <···@best.com> writes:

> Dear intellects,
> 
> > psychological phenomena may prove to be the
> > sort of thing for which the scientific method is just ill-suited.
> 
> The above sentence is -- shall we be dramatic? -- an oxymoron.
> 
> There are lots of the dogmatic-dullard type, who like to make everything
> "scientific". Therefore, you see these dullish scientists and scholars
> who would _define_ _scientific_ by some dogmatic criteria.
> 
> Like, they would say, science is such and such that meets such and such
> criteria, such as being verifiable. Such and such, and such and such, that
> such science cannot and is not supposed to answer certain such and such
> questions. Such and such, are the drools of these dullards.
> 
> Let me make an insight here:
> There are absolutely no truths or facts in this world. Every supposed truth,
> fact, or whatnot, is simply an agreement of the majority. For example,
> 1+1==2, ain't that the truth? It depends on who you ask. There are artists
> and poets, who will give you grandiloquent and surefire answers. Then
> there are pundits, who will make smart asses of themselves.

...

As Dave Letterman once asked Rush Limbaugh: "don't you ever think that
maybe you are a bit of a hot-air balloon?" :)

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
719 Broadway 12th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Tim Bradshaw
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <fbc0f5d1.0111270323.12c9bb01@posting.google.com>
Kent M Pitman <······@world.std.com> wrote in message news:<···············@shell01.TheWorld.com>...

> Xah's discussion also presupposes that the only purpose of
> software is engineering.  Were art restricted to that which is
> provably correct, there would be no Escher.

Oh, it's much worse than that.  Quite apart from the idiot claim about
provable correctness being easy, he has completely unrealistic ideas
about the kind of things mechanical & civil engineers do.  No one
building a bridge is `proving it correct' - probably no one building a
bridge knows what a formal proof *is*.  (Here's something that should
strike cold fear into his bones:  not only do almost none of the
engineers building things know what a formal proof is, almost none of
the physicists who build the foundations for engineering do either,
and those who do generally regard them as some stupid thing that
mathematicians do.  And if you go to the maths dept you'll find that
about 90% of mathematicians aren't doing formal proofs either.  It's
only the CS people who get so hung up about this, because they suffer
from *really bad* physics envy.)  Instead they use informal 19th
century maths combined with late 20th century computer modelling
techniques.  And sometimes they get it wrong and the bridge falls down
or has bad characteristics (still: look at the Millennium Bridge in
London).  Planes drop out of the sky too, in case he hasn't noticed.

Engineering is *not* about rigour, it's about getting an acceptably
good thing built in an acceptable time at an acceptable cost.

The difference that he's failed to notice is that the engineering of
physical objects has some good characteristics which computing
systems don't have.  In particular continuity and linearity, which
combine to mean that small errors don't matter.  In computing systems
this is absolutely not true: change a bit and the thing blows up, for
a significant proportion of bits.  Imagine if physics was like that.
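
(A toy illustration in Lisp, with made-up numbers, of how un-physical
this is: a single flipped bit can cross a decision threshold outright,
where a physical system would respond proportionally.

(defun flip-bit (n position)
  "Return N with the bit at POSITION inverted."
  (logxor n (ash 1 position)))

(defun verdict (load)
  (if (< load 1000) :safe :unsafe))

(verdict 512)                ; => :SAFE
(verdict (flip-bit 512 10))  ; => :UNSAFE, since 512 became 1536

No 0.1% perturbation here: one bit, opposite answer.)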

--tim
From: Will Deakin
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3C037A60.1030604@hotmail.com>
Tim wrote:
> In computing systems this is absolutely not true: change a bit
> and the thing blows up, for a significant proportion of bits.
> Imagine if physics was like that.
Which it sometimes is: like atomic weapons, stellae novae, smashing
panes of glass and such.

:)w
From: Tim Bradshaw
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <fbc0f5d1.0111270702.631703b6@posting.google.com>
Will Deakin <···········@hotmail.com> wrote in message news:<················@hotmail.com>...
> Which it sometimes is: like atomic weapons, stellae novae, smashing
> panes of glass and such.

I think the first two of these are not good examples.  There aren't
really any interesting cases where, if you have a star and you make
some *really small* change to it, it will go nova.  You could perhaps
construct such a thing (if you could construct stars...) but it would
be ludicrously weird.  Similarly with atomic weapons - this is
something you've carefully constructed to be unstable; they don't
generally occur in nature, for instance (I think atomic piles do, or
have done!).

The glass example is good though.  Glass has this nice property that if
it's bent and you make even a tiny scratch on the stretched surface it
will suffer catastrophic failure because of crack propagation.  So
generally it's only used in contexts where you don't care about
catastrophic failure, or, when you do, you create special glass which
has its surfaces in compression and thus doesn't suffer from this kind
of catastrophic failure nearly so easily, and when it does fail the
failure is less bad (no shards).

But the point is that we have a really good understanding of what
causes this kind of thing and how to avoid it - people write books on
crack propagation and how to avoid it.  Computing systems are all
crack propagation, all the time.

--tim
From: Will Deakin
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3C03BD56.3040309@hotmail.com>
Tim wrote:

> I think the first two of these are not good examples...

I agree that the star example is stretching...

> You could perhaps construct such a thing (if you could construct
> stars...) but it would be ludicrously weird.

It would be cool tho' :)

> Similarly with atomic weapons - this is something you've carefully
> constructed to be unstable,

I would argue this a bit more -- it is possible to construct a crude
fission device if you have the materials to hand.

> they don't generally occur in nature, for instance...

Is this not more to do with the abundance of the source materials?

> (I think atomic piles do, or have done!).

Yes. There is some evidence that a natural fission reactor did occur:
www.ans.org/pi/np/oklo. Maybe there were natural fission explosions?
Enough! as I could now be accused of arguing the toss.

[...elided cool points about glass...] 
> But the point is that we have a really good understanding of what
> causes this kind of thing and how to avoid it - people write books on
> crack propagation and how to avoid it.

Absolutely. However, even with this knowledge, many people are
*still* blinded by glass each year.

> Computing systems are all crack propagation, all the time.

In three or more dimensions. Super smashing glass. I await the
computing equivalent of Triplex.

;)w
From: Boris Schaefer
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <87wv0atpjk.fsf@qiwi.uncommon-sense.net>
* ··········@tfeb.org (Tim Bradshaw) wrote:
| 
| ([...] almost none of the physicists who build the foundations for
| engineering do [know what a formal proof is].  [...] It's only the
| CS people who get so hung up about this, because they suffer from
| *really bad* physics envy.)

Now, that's weird.

Boris

-- 
·····@uncommon-sense.net - <http://www.uncommon-sense.net/>

What did you bring that book I didn't want to be read to out of about
Down Under up for?
From: Richard P. Gabriel
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <B82B21E2.2C98%rpg@dreamsongs.com>
Toward the end of this thread were some good and accurate remarks about
patterns and pattern languages - especially how you cannot really have a
pattern without a pattern language.  This is the fundamental error that the
GoF - not by any means the first in CS to discover Alexander - started
with, and that led them to the rather damaging book they wrote. Damaging
because it made it easy for people to veer away from patterns - such as the
Lisp and Scheme communities, who would have had a lot to contribute had
they not believed patterns were just a way to get around the abstraction
poverty of C++.

Kent: Sorry my remarks tricked you into engaging Xah. He is endearing in his
way, and it is possible to get him to engage seriously, but rarely on a
stage like this one. He has an interesting talent for writing - I have no
evidence of his mathematical abilities. I rather like the guy, though he is
like a high-speed roller coaster with dynamically and randomly changing
turns and dips.

A few years ago I wrote an essay on tubes - Wired asked me to write it, but
the editor who did so left before they were ready to publish it. It's here:

http://www.dreamsongs.com/Tubes.html

                -rpg-
From: Pierre R. Mai
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <87oflmjxzj.fsf@orion.bln.pmsf.de>
Boris Schaefer <·····@uncommon-sense.net> writes:

> * ··········@tfeb.org (Tim Bradshaw) wrote:
> | 
> | ([...] almost none of the physicists who build the foundations for
> | engineering do [know what a formal proof is].  [...] It's only the
> | CS people who get so hung up about this, because they suffer from
> | *really bad* physics envy.)
> 
> Now, that's weird.

Why?  People who envy something are often not very well informed about
the actual nature of that something.

Regs, Pierre.

-- 
Pierre R. Mai <····@acm.org>                    http://www.pmsf.de/pmai/
 The most likely way for the world to be destroyed, most experts agree,
 is by accident. That's where we come in; we're computer professionals.
 We cause accidents.                           -- Nathaniel Borenstein
From: Tim Bradshaw
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <fbc0f5d1.0111290505.70e92e68@posting.google.com>
"Pierre R. Mai" <····@acm.org> wrote in message news:<··············@orion.bln.pmsf.de>...
> 
> Why?  People who envy something are often not very well informed about
> the actual nature of that something.

Yes, exactly.  I think the term comes from some criticism of
?sociology? which got enormously hung up on hugely complex but mostly
meaningless mathematice because it wanted to be a `proper science' and
saw that this was what physicists did.  Of course what they missed was
both that the hairy maths has to mean something, and that, actually,
phycisists really are often doing something else, like trying to get
some kind of mental picture of what is happening, except it's all in 4
or more dimensions and so you can't really draw it on the whiteboard
too well.  And most physics is extremely mathematically unrigorous,
too, the general attitude is `well, this obviously works, and the
mathematicians will sort out the boring details in 50 years or so' -
the Dirac delta function is a good example.  Physicists are a cavaliar
lot, on the whole.

--tim
From: Will Deakin
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <9u628c$23h$1@newsreaderm1.core.theplanet.net>
Tim wrote:
>Of course what they missed was both that the hairy maths has to mean
>something, and that, actually, physicists really are often doing
>something else,
and the fact that only 10% of the physicist population actually gets this
stuff, while the other 90% seem to spend a lot of time not understanding
the maths and trying to wire stuff up in sub-basements to disprove the 15%
of the theory and maths that they did understand. These are experimental
physicists[1]...

> And most physics is extremely mathematically unrigorous,
>too, the general attitude is `well, this obviously works, and the
>mathematicians will sort out the boring details in 50 years or so' -
>the Dirac delta function is a good example.
...and hey, this is the small part that we get to just accept so that we can
then go on and measure some stuff.

>Physicists are a cavaliar lot, on the whole.
I think they would like to own cavaliars (sic) but are more likely to ride an
aging sit-up-and-beg bike...

;)w

[1] the other 10% then don't understand what the other 90% can do, and then
go on to develop theories that have an increasingly small chance of
actually being measured, even with spending money commensurate with the GDP
of a medium-sized country in south-east Asia.
From: Xah Lee
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <B828F706.4A12%xah@best.com>
Kent Pitman wrote:
> ...
> Xah's discussion also presupposes that the only purpose of
> software is engineering.
> ...

Really?

Listen to Pitman applying his twisted polemics of prevarication and
fabrication.

That no-show class Tim Bradshaw is quick to follow up.

Sorry i started. I don't have time to carefully follow each post. Perhaps
till this weekend.

Goodbye my loves, cum all over your face.

 Xah
 ···@xahlee.org
 http://xahlee.org/PageTwo_dir/more.html


> From: Kent M Pitman <······@world.std.com>
> Organization: My ISP can pay me if they want an ad here.
> Newsgroups: comp.lang.lisp
> Date: Mon, 26 Nov 2001 16:31:32 GMT
> Subject: Re: Design patterns for Lisp
From: lin8080
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3C028137.DEEB6ACA@freenet.de>
Xah Lee wrote:

> Dear Richard Gabriel,

> > ...
> > I think they forgot
> > they were people too.

> So, the whole design pattern thing and now the new XP wishy-washy thingy are
> ways to encourage stupid people, so that we can multiply those incompetent
> kiddies and weenies in the software industry?
...
> Software today is so fucked up precisely because it is too easy. ...

                         Well.
                         The big feature of the internet is information.

> What can we do, folks? 

                         go to the public

>  If i do not speak up, who can? who would? who is willing?

                         hmmm. Who knows what is going on?

stefan
From: Xah Lee
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <7fe97cc4.0112021417.6b620f9c@posting.google.com>
Dear lisp programmers,

i started to give my views on patterns and XP in this thread, then it
got ramified by someone into a hogwash discussion that revolves around
attacking the word "mathematics".

I'd like to make a coherent summary of my views on the Design Patterns
and eXtreme Programming stuff.

In my previous articles

> Message-ID: <·················@best.com> 
> Date: Mon, 26 Nov 2001 01:46:13 GMT
> Subject: Re: Design patterns for Lisp
http://groups.google.com/groups?selm=B826DEE2.489D%25xah%40best.com&output=gplain

and other posts posted here recently, the contents of which are
summarized and expanded as follows:

(1) Patterns and XP are wishy-washy, unscientific, unproven, and
without any mathematical basis.

(2) There is enough applied mathematics in computer science for a
lifetime of study.

(3) Programmers in the computing industry lack sufficient training.
They lack sufficient computer science background, lack mastery of
their tools (languages, protocols, etc.), and their knowledge of their
domain/field is often poor.

(4) It is extremely easy and common for unqualified people to become
software professionals, who will gradually churn out significant
quantities of code. (This is in contrast with other engineering
disciplines. A desktop computer is sufficient to send an interested
layman into the software industry.)

(5) Software engineering is ten thousand times easier than physical
engineering such as bridge, airplane, tunnel, train, ship... building.

(6) Computers do not make mistakes, people do, and sloppiness is a
common attitude in the software industry.

(7) Software licenses are irresponsible. The vast majority of software
license agreements contain clauses disclaiming any guarantee that the
software works.

--

The quality of software depends primarily on two aspects:

* The programmer's mastery of his tools (programming language, protocols,
operating system ...).

* The programmer's expertise/knowledge of his field/domain.

Programmers should master their tools. Know every detail of their
language well. Then, programmers should strive to accumulate knowledge
of their respective domains.

When a programmer masters his tools and domain expertise, he obtains
the expertise any Design Pattern tries to describe. Design Patterns is
snake oil. Icing on the cake. Fluff. Wishful thinking. A slouch's
drug. An idiot's placebo. And that's not all.

Design Patterns is the opium of the computing industry. For an addicted
individual or company, it may have improved some code or helped a team.
But on the whole, it hinders the propagation of quality languages. It
stops advancements in language design. It reduces awareness of real
education needs like mastering languages and domain expertise. It is a
fucking spoon-feeding formula designed to retard the mundane and drug
the smart. It curbs creative ability. It is a plague, a bad-ass fashion
phenomenon; a jargon-riding, dimwit-pleasing epidemic.

For the record, the "Gang of Four" mentioned in this thread who wrote
the _Design Patterns_ book are:

 Erich Gamma     <-- fucking ass one
 Richard Helm    <-- fucking ass two
 Ralph Johnson   <-- fucking ass too
 John Vlissides  <-- fucking ass also

These people will be remembered as criminals in computing history,
along with Larry Wall and those fucking jackasses who distributed
their homework, now known as Unix. [Criminals here are defined as those
who bring damage to society, including hampering progress on a
massive scale.]

--

I have mentioned above that software engineering is significantly
easier than bridge engineering. Let's drill into this a bit.

When we say A is easier than B, we have to define what we mean by
"easier". Before we can say what we mean by "easier", we need to
understand the natures of A and B comparably. We can consider Software
Writing vs Bridge Building. This would be an easy one. We can also
consider Software Engineering vs Bridge Engineering. This will be a
bit difficult. We'll need to further discuss what "engineering"
really encompasses here. Coming up with a good definition can be quite
involved. Instead of drilling toward a sensible definition, I'll simply
mention a few comparisons of things in software engineering and bridge
engineering below. The following items will be somewhat of a hodgepodge.

* Material cost.

* Size of team.

* Cost of raising (educating/training) engineers.

* The nature involved.

In building bridges, there are lots of unknown factors. There's wind,
storm, flood, earthquake, all of which we cannot fully control and can
only predict in a limited way. It involves many science disciplines:
geoscience, physics, aerodynamics. Software building requires far
fewer disciplines, and significantly fewer people.

Building bridges is costly. It can only be done once, and the one time
must be right. It cannot go by trial-and-error. Software building, on
the other hand, can go by trial-and-error all the way. It's essentially
costless.

The essence of computers can be likened to an abacus. You push the
beads and you read out the values. There are no chance events. No storm
or flood to worry about. The information is one hundred percent known
to you, and you control it one hundred percent, one hundred percent of
the time.

Which one is ten thousand times easier do you think? Is it bridge
engineering, or software engineering?

--

In the above, i have touched on the problem of software licenses.
Namely, they outright disclaim the functionality of the software.

I propose that we raise awareness of this situation, so that the
public (consumers) will start to demand more responsible software
(licenses).

... my time's up. Next time when i have some time, remind me to write
on WHY the software industry is the way it is, and why the solution is
first to raise awareness of irresponsible licenses. This should be
the last message in this episode.

Goodbye my love, cum all over your face.

 Xah
 ···@xahlee.org
 http://xahlee.org/PageTwo_dir/more.html


Feast your eyes. Note the ominous all-caps.

-----------------------------------
Excerpt of License agreement from Mac OS 9:

4. Disclaimer of Warranty on Apple Software.  You expressly
acknowledge and agree that use of the Apple Software is at your sole
risk.  The Apple Software is provided "AS IS" and without warranty of
any kind and Apple and Apple's licensor(s) (for the purposes of
provisions 4 and 5, Apple and Apple's licensor(s) shall be
collectively referred to as "Apple") EXPRESSLY DISCLAIM ALL WARRANTIES
AND/OR CONDITIONS, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES AND/OR CONDITIONS OF MERCHANTABILITY OR
SATISFACTORY QUALITY AND FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT OF THIRD PARTY RIGHTS.  APPLE DOES NOT WARRANT THAT
THE FUNCTIONS CONTAINED IN THE APPLE SOFTWARE WILL MEET YOUR
REQUIREMENTS, OR THAT THE OPERATION OF THE APPLE SOFTWARE WILL BE
UNINTERRUPTED OR ERROR-FREE, OR THAT DEFECTS IN THE APPLE SOFTWARE
WILL BE CORRECTED.  FURTHERMORE, APPLE DOES NOT WARRANT OR MAKE ANY
REPRESENTATIONS REGARDING THE USE OR THE RESULTS OF THE USE OF THE
APPLE SOFTWARE OR RELATED DOCUMENTATION IN TERMS OF THEIR CORRECTNESS,
ACCURACY, RELIABILITY, OR OTHERWISE.  NO ORAL OR WRITTEN INFORMATION
OR ADVICE GIVEN BY APPLE OR AN APPLE AUTHORIZED REPRESENTATIVE SHALL
CREATE A WARRANTY OR IN ANY WAY INCREASE THE SCOPE OF THIS WARRANTY. 
SHOULD THE APPLE SOFTWARE PROVE DEFECTIVE, YOU (AND NOT APPLE OR AN
APPLE AUTHORIZED REPRESENTATIVE) ASSUME THE ENTIRE COST OF ALL
NECESSARY SERVICING, REPAIR OR CORRECTION.  SOME JURISDICTIONS DO NOT
ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSION MAY
NOT APPLY TO YOU.  THE TERMS OF THIS DISCLAIMER DO NOT AFFECT OR
PREJUDICE THE STATUTORY RIGHTS OF A CONSUMER ACQUIRING APPLE PRODUCTS
OTHERWISE THAN IN THE COURSE OF A BUSINESS, NEITHER DO THEY LIMIT OR
EXCLUDE ANY LIABILITY FOR DEATH OR PERSONAL INJURY CAUSED BY APPLE'S
NEGLIGENCE.

5. Limitation of Liability.  UNDER NO CIRCUMSTANCES, INCLUDING
NEGLIGENCE, SHALL APPLE BE LIABLE FOR ANY INCIDENTAL, SPECIAL,
INDIRECT OR CONSEQUENTIAL DAMAGES  ARISING OUT OF OR RELATING TO THIS
LICENSE.  SOME JURISDICTIONS DO NOT ALLOW THE LIMITATION OF INCIDENTAL
OR CONSEQUENTIAL DAMAGES SO THIS LIMITATION MAY NOT APPLY TO YOU.  In
no event shall Apple's total liability to you for all damages exceed
the amount of fifty dollars ($50.00).

-----------------------------------
GNU General Public License, excerpt:

NO WARRANTY

11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.


-----------------------------------
END-USER LICENSE AGREEMENT FOR MICROSOFT WINDOWS 98, excerpt:

LIMITED WARRANTY

LIMITED WARRANTY.
Microsoft warrants that (a) the SOFTWARE PRODUCT will perform
substantially in accordance with the accompanying written materials
for a period of ninety (90) days from the date of receipt, and (b) any
Support Services provided by Microsoft shall be substantially as
described in applicable written materials provided to you by
Microsoft, and Microsoft support engineers will make commercially
reasonable efforts to solve any problem. To the extent allowed by
applicable law, implied warranties on the SOFTWARE PRODUCT, if any,
are limited to ninety (90) days. Some states/jurisdictions do not
allow limitations on duration of an implied warranty, so the above
limitation may not apply to you.

CUSTOMER REMEDIES.
 Microsoft's and its suppliers' entire liability and your exclusive
remedy shall be, at Microsoft's option, either (a) return of the price
paid, if any, or (b) repair or replacement of the SOFTWARE PRODUCT
that does not meet Microsoft's Limited Warranty and that is returned
to Microsoft with a copy of your receipt. This Limited Warranty is
void if failure of the SOFTWARE PRODUCT has resulted from accident,
abuse, or misapplication. Any replacement SOFTWARE PRODUCT will be
warranted for the remainder of the original warranty period or thirty
(30) days, whichever is longer. Outside the United States, neither
these remedies nor any product support services offered by Microsoft
are available without proof of purchase from an authorized
international source.

NO OTHER WARRANTIES.
 TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, MICROSOFT AND ITS
SUPPLIERS DISCLAIM ALL OTHER WARRANTIES AND CONDITIONS, EITHER EXPRESS
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OR
CONDITIONS OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE
AND NON-INFRINGEMENT, WITH REGARD TO THE SOFTWARE PRODUCT, AND THE
PROVISION OF OR FAILURE TO PROVIDE SUPPORT SERVICES. THIS LIMITED
WARRANTY GIVES YOU SPECIFIC LEGAL RIGHTS. YOU MAY HAVE OTHERS, WHICH
VARY FROM STATE/JURISDICTION TO STATE/JURISDICTION.

LIMITATION OF LIABILITY.
 TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL
MICROSOFT OR ITS SUPPLIERS BE LIABLE FOR ANY SPECIAL, INCIDENTAL,
INDIRECT, OR CONSEQUENTIAL DAMAGES WHATSOEVER (INCLUDING, WITHOUT
LIMITATION, DAMAGES FOR LOSS OF BUSINESS PROFITS, BUSINESS
INTERRUPTION, LOSS OF BUSINESS INFORMATION, OR ANY OTHER PECUNIARY
LOSS) ARISING OUT OF THE USE OF OR INABILITY TO USE THE SOFTWARE
PRODUCT OR THE FAILURE TO PROVIDE SUPPORT SERVICES, EVEN IF MICROSOFT
HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. IN ANY CASE,
MICROSOFT'S ENTIRE LIABILITY UNDER ANY PROVISION OF THIS EULA SHALL BE
LIMITED TO THE GREATER OF THE AMOUNT ACTUALLY PAID BY YOU FOR THE
SOFTWARE PRODUCT OR U.S.$5.00; PROVIDED, HOWEVER, IF YOU HAVE ENTERED
INTO A MICROSOFT SUPPORT SERVICES AGREEMENT, MICROSOFT'S ENTIRE
LIABILITY REGARDING SUPPORT SERVICES SHALL BE GOVERNED BY THE TERMS OF
THAT AGREEMENT. BECAUSE SOME STATES/JURISDICTIONS DO NOT ALLOW THE
EXCLUSION OR LIMITATION OF LIABILITY, THE ABOVE LIMITATION MAY NOT
APPLY TO YOU.
From: Alain Picard
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <868zckbnvn.fsf@gondolin.local.net>
···@xahlee.org (Xah Lee) writes:

> 
> (1) Patterns and XP are wishy-washy, unscientific, unproven, and
> without any mathematical basis.

Dunno 'bout Patterns; from reading rpg's posts lately, I suspect
I actually know _nothing_ about them.  I only know about the GOF
book, and agree it's less than exciting, if you're a lisper.
[It was a Godsend to me when I was a C++ programmer, though!]

> (4) It is extremely easy and common for non-qualified people to become
> a software professional, who will gradually churn out significant
> quantity of codes. 

Definitely true: that's exactly what happened to me.  Turned 
"software engineer" (cough cough) from scientist overnight, with
no formal training whatsoever.  :-)

> (5) Software engineering is ten thousand times easier than physical
> engineering such as bridge, airplane, tunnel, train, ship... building.

Hum...

> For the record, the "Gang of Four" mentioned in this thread who wrote
> the _Design Patterns_ book are:
> 
>  Erich Gamma     [Expletives snipped]
>  Richard Helm    
>  Ralph Johnson   
>  John Vlissides  
> 
> These people will be remembered as criminals in the computing history,

Well, some of those guys have delivered real software systems, used
by thousands of people.  They're acknowledged experts in their fields.
The word "criminal" hardly springs to mind, so, in the absence of an
actual "crime", I'd say that's pretty harsh language.  :-)

> [criminals here is defined as those
> who bring damages to society, including hampering progress in a
> massive scale.]

An interesting definition, but not too many such people are in jail.
In fact, one of them is head of a largish software company.  :-)


On to the serious stuff though:
> 
> In building bridges, there are lots of unknown factors. There's wind,
> storm, flood, earthquake all of which we cannot fully control, and can
> only predict in a limited way.

> The essence of computers can be likened to an abacus. You push the
> beads and you readout the values. There are no chance events. No storm
> or flood to worry about. The information are one hundred percent known
> to you, and you control it one hundred percent one hundred percent of
> the time.

> Which one is ten thousand times easier do you think? Is it bridge
> engineering, or software engineering?

Bridge engineering.

Bridges do not need to be made to 100% exact tolerance to fulfill
their function; normally, 99.9% will do the trick.  With software,
you need 100% tolerance; any 1 bit error is liable to bring the
world down.  Forgetting to screw in 1 rivet doesn't normally bring
a bridge down.  This is due to the continuous nature of physical
systems, versus the discrete nature of digital systems.


-- 
It would be difficult to construe        Larry Wall, in  article
this as a feature.			 <·····················@netlabs.com>
From: Bradford W. Miller
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <B8310E39.1727%Bradford.W.Miller@motorola.com>
On 12/3/01 4:26 AM, in article [ARTICLE], "Alain Picard"
<·······@optushome.com.au> wrote:

> ···@xahlee.org (Xah Lee) writes:

>> In building bridges, there are lots of unknown factors. There's wind,
>> storm, flood, earthquake all of which we cannot fully control, and can
>> only predict in a limited way.
> 
>> The essence of computers can be likened to an abacus. You push the
>> beads and you readout the values. There are no chance events. No storm
>> or flood to worry about. The information are one hundred percent known
>> to you, and you control it one hundred percent one hundred percent of
>> the time.
> 
>> Which one is ten thousand times easier do you think? Is it bridge
>> engineering, or software engineering?
> 
> Bridge engineering.
> 
> Bridges do not need to be made to 100% exact tolerance to fulfill
> their function; normally, 99.9% will do the trick.  With software,
> you need 100% tolerance; any 1 bit error is liable to bring the
> world down.  Forgetting to screw in 1 rivet doesn't normally bring
> a bridge down.  This is due to the continuous nature of physical
> systems, versus the discrete nature of digital systems.
> 

Bridge engineers allow a safety margin, so they build to 100+safety margin
for expected events. As the Tacoma Narrows Bridge story shows, however, the
unexpected (at that time) can still occur. You improve things.

Software programs rarely overbuild, instead preferring to underbuild until a
problem surfaces ("worse is better", XP, etc.), so I agree with Xah on
this point.  However, computers need not be a passive abacus; they can also
interact with the world as we perceive it. (To be fair, even non-situated
computers still have physical devices to deal with that are unpredictable,
like disk drives, memory that can fail, when a key press will occur, etc.,
but I'm talking now about computer based artifacts that directly sense and
influence the world around them; solving this problem will make all software
better and more reliable).

The key to better software in the future will be structured overbuilding,
with overlapping responsibilities for modules, etc. Certainly, that's the
tack I've taken with agent based systems, and the artifacts have turned out
to work surprisingly well in the face of unanticipated (by me) phenomena.
[It's a side issue that the phenomena could have been 100% anticipated: I
work with natural language and sensors on the real world. The combinations
and meaning projections are far worse than bridge engineering.]

A fundamental step is to build software to basically "deal with any input it
might get", though not necessarily to always do the best that could have
been done (by some omniscient agent). With an agent, the input is a speech
act, which is far more expressive to begin with than simply an integer or a
symbol. (To be clear, I'm not talking about functions that simply take the
wrong type of argument and generate an error, I'm talking about functions
that are told things they don't think they need to know, and figure out how
to incorporate the data into the desired solution, or decide the information
is spurious, or decide that the sender is broken and needs to be
repaired/replaced/ignored, ...). To make this happen one needs semantic
input to a module, so there is  some chance of figuring out what it actually
*means*. But it also means a different approach to writing the component in
the first place, from something that is purely functional (domains and
ranges, etc.) to something that incorporates inputs into a more holistic
understanding of a situation, and can then provide an appropriate output
contribution.

Personally, I think (intelligent) software agents are the way to solve a
number of software engineering problems... but it starts with a different
understanding of what software is (and can be).
From: Wade Humeniuk
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <9uiqae$34n$1@news3.cadvision.com>
> The key to better software in the future will be structured overbuilding,
> with overlapping responsibilities for modules, etc. Certainly, that's the
> tack I've taken with agent based systems, and the artifacts have turned out
> to work surprisingly well in the face of unanticipated (by me) phenomena.
> [It's a side issue that the phenomena could have been 100% anticipated: I
> work with natural language and sensors on the real world. The combinations
> and meaning projections are far worse than bridge engineering.]

But having overbuilt modules results in more code being generated and with
that more possible errors.  This has an analogy in structural engineering.
If you build something too large, the possibility of a fault in the
material increases, and with it the possibility of a fatigue failure.
It's a trade-off between having a safety factor and greater
unpredictability.

Simply put, more code, more errors.

I think what you are finding is that you are thinking more completely about
what inputs a module can get and are thus being more careful about your
coding.  A lot of software is written without being thoughtful of all the
possible inputs and how to handle the extraordinary situations.  Incomplete
thinking translated into incomplete code.

I think more effort should be put into knowing the problem more completely.

Wade
From: Kent M Pitman
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <sfwk7w3x949.fsf@shell01.TheWorld.com>
"Wade Humeniuk" <········@cadvision.com> writes:

> > The key to better software in the future will be structured overbuilding,
> > with overlapping responsibilities for modules, etc. Certainly, that's the
> > tack I've taken with agent based systems, and the artifacts have turned
> out
> > to work surprisingly well in the face of unanticipated (by me) phenomena.
> > [It's a side issue that the phenomena could have been 100% anticipated: I
> > work with natural language and sensors on the real world. The combinations
> > and meaning projections are far worse than bridge engineering.]
> 
> But having overbuilt modules results in more code being generated
> and with that more possible errors. ... I think more effort should
> be put into knowing the problem more completely.

Seems to me this might not be incompatible.  Yes, sometimes one just 
reinforces the same thing.  Two bungee cords are probably better than
one because the fault is not likely to be in the design but the 
execution (pardon the grim pun), so having two executions of a bungee
cord would seem good to reduce the risk due to failure, since having
the two doesn't interfere with operation.  Certainly having a RAID disk
assembly doubles the chance of a fault, but it also reduces the risk
that the faults will align, and so overall the system functions better.
A window washer uses a lifeline as a backup in case his railing or base
fails, and yes, maybe that adds more points of failure, but all in all,
it's still better.

I'm not sure what's meant by overbuilt, but if you assign two different
people to do something in two algorithmically divergent ways, or even if
you give them no guidance but you insist they not communicate (better if
they never even knew one another), then a machine that did both of those
tasks in parallel, even at the possible addition of a constant factor
multiplier in the O() notation [which is a no-op, since it reduces out],
is going to be more reliable.   It means dividing the task so that the
operation is either gratuitously repeatable or side-effect free or any
of several other modalities I can think of that don't have names, unless
you have a separate store, as the RAID example does, but even so, it seems
good.
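
A minimal sketch of that in Lisp: run two divergent executions and
cross-check them.  (SOLVE-A and SOLVE-B are hypothetical names for
two independently written implementations of the same specification.)

    (defun redundant-solve (problem)
      ;; SOLVE-A and SOLVE-B are hypothetical, independently written
      ;; solvers; disagreement means at least one execution is faulty.
      (let ((a (solve-a problem))
            (b (solve-b problem)))
        (if (equal a b)
            a
            (error "Redundant implementations disagree: ~S vs ~S" a b))))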

Also, related to this, but not quite the same, I used to follow
computer chess, ages ago, and apologize for not using a more recent
example, since I'm sure there must be one, but Greenblatt had this
chess program for the pdp10 that was really two processors: one did
smart stuff, the other did "blunder control" of brute force multi-ply
lookahead just in case it could rule out stuff the smart part was
considering.  I'd say the blunder control wasn't really a "solver",
just an extra safeguard that statistically improved the reliability of
the overall system.  I'd say the Lisp Machine's hardware typechecking
did the same thing, though RISC architectures have killed this whole
genetic line: hardware type checking didn't take longer than
non-type-checking because the type-check was started in parallel with
doing the add [for example].  By the time the add was done and had a
purported value, different hardware had checked whether the quantity
being added in parallel was wrong, and if so, it kicked the other
processor and said "Hold on, you're getting ahead of yourself".  I
would call this overbuilt, and I would not a priori think it was
introducing extra risk, though I suppose you could disagree.  No
tradeoff comes for free.  I suppose the industry decided there was a
cost to wide instruction architectures, but not whether there was a
cost, but whether the cost was a cost in "risk", and whether, if so,
that risk was of the same kind.  (I'd bet that sometimes all one can
do is shuffle risk around, like a bubble in a carpet, and so it matters
as much where the risk is as how much it is...)

And when you get all done with it, I think the thing that continues to
separate people from computers is not raw compute power or inability
to simulate architectural components, but the willingness to
accommodate "redundancy" and "chance" as key architectural components
of a working system.  The whole left-brain/right-brain issue that
people raise is almost surely just the right half of the brain doing
spontaneous generation of test cases through random (or uncorrelated)
means while the left half is proceeding in the kind of orderly fashion
that we expect of computers.  Sort of like the Greenblatt program, the
right brain can cut off bad ways of going because it's not making an
attempt to be linear or orderly or complete--it's just doing things in
monte carlo style or perhaps following some set of
statistical/associative connections it's made that are unique to the
individual's experience, but in addition to cutting off avenues, it may
chance on solutions or patterns that inspire or direct, or it may suggest
metaplanning activities that it thinks would structure stuff better,
because it's acting redundantly and not constrained to account for its time
as part of the basic problem solving activity.

It baffles me why so much of parallel processing work is spent on making
parallel processors into parallel versions of sequential processors and so
little is spent thinking about how to leverage these issues of redundancy,
chance, meta programming, auditing, and so on.  Human organizations build
such structures.  But computers largely don't.  At least not the ones I
hear much about.  Maybe I'm just looking in the wrong places.
From: ········@acm.org
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <Ye7P7.6623$V12.1495962@news20.bellglobal.com>
Kent M Pitman <······@world.std.com> writes:
> "Wade Humeniuk" <········@cadvision.com> writes:
> > > The key to better software in the future will be structured
> > > overbuilding, with overlapping responsibilities for modules,
> > > etc. Certainly, that's the tack I've taken with agent based
> > > systems, and the artifacts have turned > > out to work
> > > surprisingly well in the face of unanticipated (by me)
> > > phenomena.  [It's a side issue that the phenomena could have
> > > been 100% anticipated: I work with natural language and sensors
> > > on the real world. The combinations and meaning projections are
> > > far worse than bridge engineering.]

> > But having overbuilt modules results in more code being generated
> > and with that more possible errors. ... I think more effort should
> > be put into knowing the problem more completely.

> Seems to me this might not be incompatible.  Yes, sometimes one just
> reinforces the same thing.  Two bungee cords is probably better than
> one because the fault is not likely to be in the design but the
> execution (pardon the grim pun), so having two executions of a
> bungee cord would seem good to reduce the risk due to failure, since
> having the two doesn't interfere with operation.

[A buddy of mine often said that if they'd done bungee hangings in the
Old West, people would have paid to see it :-).  He was also
responsible for other massive bits of political incorrectness...]

> Certainly having a RAID disk assembly doubles the chance of a fault,
> but it also reduces the risk that the faults will align, and so
> overall the system functions better.  

It _may_ be better.  But an array filled with disks of the same model
from the same batch with the same crucial manufacturing defect may
suffer _horribly_...  And if the RAID controller or the power supply
goes bad, the whole system is toast.  _Hopefully_ the system functions
better.  Only if a good job goes into planning the careful use of
RAID, though...

Note that in yesteryear, transatlantic aircraft once were required to
have 4 engines, due to the significant likelihood of one or more
failing in flight.  (Not good if you're 800 miles from land!)

As the reliability, sophistication, and power of jet turbine engines
has improved, the requirement has changed.

In much the same way, mainframes used to be pretty fragile, but are
now the sorts of computers that are used when you can Never Have
Downtime.  Some of that comes with "parallelism," but some comes from
a touch of "over-engineering" on some single components that are
designed not to be fragile the way PCs are.

> I'm not sure what's meant by overbuilt, but if you assign two
> different people to do something in two algorithmically divergent
> ways, or even if you give them no guidance but you insist they not
> communicate (better if they never even knew one another), then a
> machine that did both of those tasks in parallel, even at the
> possible addition of a constant factor multiplier in the O()
> notation [which is a no-op, since it reduces out], is going to be
> more reliable.  It means dividing the task so that the operation is
> either gratuitously repeatable or side-effect free or any of several
> other modalities I can think of that don't have names, unless you
> have a separate store, as the RAID example does, but even so, it
> seems good.

On a RAID array, you'd probably want to have multiple varieties of
disk drives in use to diminish the risk of systematic manufacturing
defects.

On the other hand, it would be pretty insane to have the left engine
on a jet come from Rolls Royce and have the right engine come from GE,
as construction and maintenance would both become a nightmare :-).

> Also, related to this, but not quite the same, I used to follow
> computer chess, ages ago, and apologize for not using a more recent
> example, since I'm sure there must be one, but Greenblatt had this
> chess program for the pdp10 that was really two processors: one did
> smart stuff, the other did "blunder control" of brute force
> multi-ply lookahead just in case it could rule out stuff the smart
> part was considering.  I'd say the blunder control wasn't really a
> "solver", just an extra safeguard that statistically improved the
> reliability of the overall system.

This is seen a fair bit in algorithm work:

-> Fast sorting often involves QuickSort for big lists, but reverting
   to linear insertion for the little sublists.

-> Root finding using Newton's Method can be unstable if it is used by
   itself; it tends to be better to blend it with other search
   schemes, so that when Newton goes off the rails due to some sort
   of discontinuity or other such thing, there's _something_ out there
   that can notice "This seems to be going badly; let's try a
   different point."

> It baffles me why so much of parallel processing work is spent on
> making parallel processors into parallel versions of sequential
> processors and so little is spent thinking about how to leverage
> these issues of redundancy, chance, meta programming, auditing, and
> so on.  Human organizations build such structures.  But computers
> largely don't.  At least not the ones I hear much about.  Maybe I'm
> just looking in the wrong places.

a) Most academic algorithm work revolves around the algorithms that
   are readily analyzed.  Heuristics don't fit well into that.

b) People don't understand heuristics.

   "Heuristics (from the French heure, "hour") limit the amount of
   time spent executing something.  [When using heuristics] it
   shouldn't take longer than an hour to do something."

c) They've spent the money building a system bus that can cope with
   multiple CPUs.  That was expensive.

   And now you propose doing redundant work, thus flushing the money
   down the toilet?  How gauche!

I _don't_ think you're looking in the wrong places, with a bit of
fingers-crossed about the mainframe world, where they _do_ consider
redundancy and auditing [from that list] to be pretty important.

The folks researching parallel work are mostly trying to just plain
speed things up, and anything put into "redundancy" certainly messes
up progress towards that goal.
-- 
(concatenate 'string "aa454" ·@freenet.carleton.ca")
http://www.ntlug.org/~cbbrowne/emacs.html
lp1 on fire (One of the more obfuscated kernel messages)
From: Kent M Pitman
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <sfwzo4y7bmk.fsf@shell01.TheWorld.com>
········@acm.org writes:

> The folks researching parallel work are mostly trying to just plain
> speed things up, and anything put into "redundancy" certainly messes
> up progress towards that goal.

My only problem with this is that things are ALREADY sped up.  What's the
point of running a zillion times faster than the machines of yesteryear,
yet still not be willing to sacrifice a dime of it to anything other than
doing the same kinds of boring computations that you did before?  I want
speedups not just to make my same old boring life faster, but to buy me
the flexibility to do something I wasn't willing to do at slower speeds.
From: ········@acm.org
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <K6eP7.26665$yE5.1637158@news20.bellglobal.com>
Kent M Pitman <······@world.std.com> writes:
> ········@acm.org writes:

> > The folks researching parallel work are mostly trying to just
> > plain speed things up, and anything put into "redundancy"
> > certainly messes up progress towards that goal.

> My only problem with this is that things are ALREADY sped up.
> What's the point of running a zillion times faster than the machines
> of yesteryear, yet still not be willing to sacrifice a dime of it to
> anything other than doing the same kinds of boring computations that
> you did before?  I want speedups not just to make my same old boring
> life faster, but to buy me the flexibility to do something I wasn't
> willing to do at slower speeds.

I wouldn't disagree at all with that; the thing is, if you're trying
to get an NSF or DOD grant for doing parallel processing research,
your project won't sound terribly "sexy" if you say: "Well, we're not
planning to actually _improve_ performance; we're just going to ****
it away on something."

I'm not saying that's the way things _should_ be...
-- 
(concatenate 'string "chris" ·@cbbrowne.com")
http://www.cbbrowne.com/info/rdbms.html
Share and Enjoy!!
From: Kent M Pitman
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <sfwg06qjni0.fsf@shell01.TheWorld.com>
········@acm.org writes:

> Kent M Pitman <······@world.std.com> writes:
> > ········@acm.org writes:
> 
> > > The folks researching parallel work are mostly trying to just
> > > plain speed things up, and anything put into "redundancy"
> > > certainly messes up progress towards that goal.
> 
> > My only problem with this is that things are ALREADY sped up.
> > What's the point of running a zillion times faster than the machines
> > of yesteryear, yet still not be willing to sacrifice a dime of it to
> > anything other than doing the same kinds of boring computations that
> > you did before?  I want speedups not just to make my same old boring
> > life faster, but to buy me the flexibility to do something I wasn't
> > willing to do at slower speeds.
> 
> I wouldn't disagree at all with that; the thing is, if you're trying
> to get an NSF or DOD grant for doing parallel processing research,
> your project won't sound terribly "sexy" if you say: "Well, we're not
> planning to actually _improve_ performance; we're just going to ****
> it away on something."
> 
> I'm not saying that's the way things _should_ be...

No proposal sounds good if you don't wordsmith it right.  If you say instead
"we're going to investigate a qualitative shift in computation that we expect
to exploit modern processor speeds and parallel hardware to materially 
improve the overall robustness of computational systems ..."
you could probably get a different response.  Or, at least, my point is that
it depends a lot on how you spin it.  I suspect ANY proposal which begins with
"Well, we're not planning to actually...." will be turned down, so you kind
of stacked the odds against yourself there.  Then you drew a conclusion that
I don't think was justified by at least the argument you'd raised. Your
conclusion might still be valid, for all I know, but I doubt it's a simple
matter to test.
From: Erik Naggum
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3216505572815147@naggum.net>
* Kent M Pitman <······@world.std.com>
| My only problem with this is that things are ALREADY sped up.  What's the
| point of running a zillion times faster than the machines of yesteryear,
| yet still not be willing to sacrifice a dime of it to anything other than
| doing the same kinds of boring computations that you did before?  I want
| speedups not just to make my same old boring life faster, but to buy me
| the flexibility to do something I wasn't willing to do at slower speeds.

  Well, you could do what the other "innovative" guys out there do: Create
  another amazingly idiotic virus for some idiotic Microsoft product used
  by millions if not billions of people who would rather die than think
  about what they are doing, and, to misquote Bertrand Russell, in fact they
  do.  That would certainly satisfy "to do something I wasn't willing to do
  at slower speeds", but the speed was probably not the reason.   :)

  Seriously, software secure from Microsoftitis (the leprosy of software)
  would be something that computers could help us attain.  However, it
  would take an act of Congress to finally turn around and notice that if a
  bank had been repeatedly robbed of all its money because it had the level
  of security for which Microsoft's products are famous, had tried to blame
  "crackers", and had taken _no_ precautions for twenty years to prevent
  these incidents from happening, the government would have shut it down
  and incarcerated the (ir)responsible owners and (mis)managers.

  The fact that the U.S. Government does not stop Microsoft from making and
  distributing software that aids and abets electronic terrorists means
  that they are harboring terrorists, according to the standards set by
  Presiding Dimwit George W. Bush (who has reverted to pre-9/11 blabbering
  with the highest pause-to-speak ratio of all present public figures).
  The only solution is to bomb the shit out of Microsoft's headquarters and
  their offices around the world, and to wage war on electronic terrorist
  trainer and leader William H. Gates III.  If the world has finally had
  enough of terrorism, what will it take to make people tire of the crap
  that Microsoft produces and demand the end of their terrorist reign?
  
///
-- 
  The past is not more important than the future, despite what your culture
  has taught you.  Your future observations, conclusions, and beliefs are
  more important to you than those in your past ever will be.  The world is
  changing so fast the balance between the past and the future has shifted.
From: Robert Monfera
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3C0D7B18.90209@fisec.com>
········@acm.org wrote:

>    "Heuristics (from the French heure, "hour") limit the amount of
>    time spent executing something.  [When using heuristics] it
>    shouldn't take longer than an hour to do something."


I think heuristics comes from the ancient Greek "heureka", "I found
it!", shouted, for better results, while running wet and naked across
a public street.

Robert
From: Robert Monfera
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3C0D7D31.7010409@fisec.com>
Oops, I just found the surrounding "" marks.  Who did you quote?  I'd 
like to read more explanations like this.

Robert Monfera wrote:

> ········@acm.org wrote:
> 
>>    "Heuristics (from the French heure, "hour") limit the amount of
>>    time spent executing something.  [When using heuristics] it
>>    shouldn't take longer than an hour to do something."
> 
> 
> 
> I think heuristics comes from the ancient Greek "heureka", "I found
> it!", shouted, for better results, while running wet and naked across
> a public street.
> 
> Robert
> 
From: Andreas Bogk
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <878zci6gzr.fsf@teonanacatl.andreas.org>
Kent M Pitman <······@world.std.com> writes:

> the overall system.  I'd say the Lisp Machine's hardware typechecking
> did the same thing, though RISC architectures have killed this whole
> genetic line: hardware type checking didn't take longer than
> non-type-checking because the type-check was started in parallel with
> doing the add [for example].  By the time the add was done and had a
> purported value, different hardware had checked whether the quantity
> being added in parallel was wrong, and if so, it kicked the other
> processor and said "Hold on, you're getting ahead of yourself".  I

This is not so different from the situation on a modern RISC
architecture.  Speculative execution combined with branch prediction
makes it possible to execute type and bounds checks in parallel with
the operation.  Of course, this relies on the compiler to properly
generate type checks, but it introduces a chance for the compiler to
optimize away the type check and use the hardware for other purposes.

We have some highly unscientific numbers for the bounds checking case
in the Dylan compiler at:

  http://berlin.ccc.de/cgi-bin/cvsweb/gd/examples/sieve-mark/BENCHMARK.txt?rev=1.1&content-type=text/x-cvsweb-markup

This of course slightly misses the point you were making about
designing reliable systems, but at least it shows that performance is
no excuse to risk system reliability by not doing bounds checks.
Every posting to Bugtraq about Yet Another Buffer Overflow makes me
think: "Why are those people still using C to write mission-critical
software?".

Andreas

-- 
"In my eyes it is never a crime to steal knowledge. It is a good
theft. The pirate of knowledge is a good pirate."
                                                       (Michel Serres)
From: Frank A. Adrian
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <zmiP7.980$PM.333701@news.uswest.net>
Kent M Pitman wrote:

> I'd say the Lisp Machine's hardware typechecking
> did the same thing, though RISC architectures have killed this whole
> genetic line: hardware type checking didn't take longer than
> non-type-checking because the type-check was started in parallel with
> doing the add [for example].  By the time the add was done and had a
> purported value, different hardware had checked whether the quantity
> being added in parallel was wrong, and if so, it kicked the other
> processor and said "Hold on, you're getting ahead of yourself".

You know, this could probably be done in today's pipelined processors with 
very little speed disadvantage over (say) C.  Eventually, you get enough 
ALU's in the system that the deciding factor on performance is the number 
of independent operations you can put in a straight-line basic block.  Even 
though (theoretically) a C program could do more "real work" in a basic 
block than a Lisp program, eventually the dependency graphs of both end up 
to be the same length because the type checking for Lisp systems can be 
done in parallel.  Has anyone really looked at parallelizing Lisp for a P4 
or Athlon to see if the typechecks can be packed in parallel more 
efficiently?  For instance, is the code:
        
        long tta = a & 0x7;   /* tag test on a, in parallel with... */
        long c = a + b;       /* ...the speculative add */
        if (tta) goto error;
        long ttc = c & 0x7;   /* tag test on the result */
        /* speculatively compute with c */
        if (ttc) goto error;

for an addition really going to be a disadvantage much longer?  You play 
all sorts of games with C code like this already to lengthen the basic 
block (and it usually doesn't work that well).  With 3 cpu's it seems you 
could pack the Lisp typechecks into the code to the point where the speed 
would be competitive.

faa
From: Pierre R. Mai
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <87snap5kem.fsf@orion.bln.pmsf.de>
"Frank A. Adrian" <·······@qwest.net> writes:

> Kent M Pitman wrote:
> 
> > I'd say the Lisp Machine's hardware typechecking
> > did the same thing, though RISC architectures have killed this whole
> > genetic line: hardware type checking didn't take longer than
> > non-type-checking because the type-check was started in parallel with
> > doing the add [for example].  By the time the add was done and had a
> > purported value, different hardware had checked whether the quantity
> > being added in parallel was wrong, and if so, it kicked the other
> > processor and said "Hold on, you're getting ahead of yourself".
> 
> You know, this could probably be done in today's pipelined processors with 
> very little speed disadvantage over (say) C.  Eventually, you get enough 
> ALU's in the system that the deciding factor on performance is the number 
> of indpendent operations you can put in a straight-line basic block.  Even 
> though (theoretically) a C program could do more "real work" in a basic 
> block than a Lisp program, eventually the dependency graphs of both end up 
> to be the same length because the type checking for Lisp systems can be 
> done in parallel.  Has anyone really looked at parallelizing Lisp for a P4 

Last time I looked, people had realised that the kind of code that
runs today (even with all kind of optimizing compilers, etc.) was only
utilizing about 30% of the parallel processing power of current x86
architectures.  That's one of the reasons that Intel is looking into
instruction-level multi-threading, i.e. letting multiple threads of
execution share the available parallel units.  Given that on modern
systems there still is often only one active process, I think that
spending that power on type-checking is going to be a better
investment.

AND we will not even lose in low-level benchmarks against C, since
those will usually not take advantage of instruction-level
multi-threading either.

Regs, Pierre.

-- 
Pierre R. Mai <····@acm.org>                    http://www.pmsf.de/pmai/
 The most likely way for the world to be destroyed, most experts agree,
 is by accident. That's where we come in; we're computer professionals.
 We cause accidents.                           -- Nathaniel Borenstein
From: Tim Bradshaw
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <fbc0f5d1.0112050504.777489aa@posting.google.com>
Kent M Pitman <······@world.std.com> wrote in message news:<···············@shell01.TheWorld.com>...

I think this is a really interesting article, and my comments on it
are unfortunately kind of fragmented because you covered so much
interesting stuff.

> I'd say the Lisp Machine's hardware typechecking
> did the same thing, though RISC architectures have killed this whole
> genetic line: hardware type checking didn't take longer than
> non-type-checking because the type-check was started in parallel with
> doing the add [for example].  By the time the add was done and had a
> purported value, different hardware had checked whether the quantity
> being added in parallel was wrong, and if so, it kicked the other
> processor and said "Hold on, you're getting ahead of yourself".  I
> would call this overbuilt, and I would not a priori think it was
> introducing extra risk, though I suppose you could disagree.  No
> tradeoff comes for free.  I suppose the industry decided there was a
> cost to wide instruction architectures; the question is not whether
> there was a cost, but whether the cost was a cost in "risk", and
> whether, if so,
> that risk was of the same kind.  (I'd bet that sometimes all one can
> do is shuffle risk around, like a bubble in a carpet, and so it matters
> as much where the risk is as how much it is...)

I think that, looking back on it, one could have a different take on
this.  The LispM had special typechecking hardware, and could do
typechecks in parallel.  What happened (retrospectively) was that
people decided that this wasn't worth it, and that, rather than
provide a special do-one-thing-in-parallel bit of the processor (the
`one-thing' being `type check'), it would be better to provide a
do-anything-in-parallel bit of the processor.  And this is actually
what's happened - modern CPUs have n execution units (I think
typically divided into floating-point and integer units because
there's not enough in common) all of which can operate in parallel,
providing that what they're doing doesn't  have dependencies on other
execution unit results or that they're not waiting for resources like
memory or registers.  Even more amazingly, these things get
dynamically scheduled on-the-fly by the processor in many cases.  So
it should be the case that even though the compiler generates code
which says something like

     check-type x integer
     if-false go hairy-case
     check-type y integer
     if-false go hairy-case
     integer-add x x y
     if-overflow go fixup

or something, that actually a whole lot of this happens in parallel,
just like it did on the LispM.  Except now, if the compiler can
optimize away the typechecks, the system can use these general-purpose
execution units to do anything it can find a use for them.  So,
actually, we still have typechecking hardware, it's just not
special-purpose any more.
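
To make that concrete, here is a sketch (details vary by
implementation) of the two cases in Lisp:

     ;; Checked: the compiler emits something like the check-type /
     ;; hairy-case sequence above, which the CPU's spare execution
     ;; units can then run in parallel with the add itself.
     (defun generic-add (x y)
       (+ x y))

     ;; Unchecked: given full type information the tests disappear,
     ;; and (DISASSEMBLE 'FIXNUM-ADD) on most implementations shows
     ;; little more than a bare add, freeing those units for whatever
     ;; else the scheduler can find.
     (defun fixnum-add (x y)
       (declare (fixnum x y) (optimize (speed 3) (safety 0)))
       (the fixnum (+ x y)))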
     
> It baffles me why so much of parallel processing work is spent on making
> parallel processors into parallel versions of sequential processors and so
> little is spent thinking about how to leverage these issues of redundancy,
> chance, meta programming, auditing, and so on.  Human organizations build
> such structures.  But computers largely don't.  At least not the ones I
> hear much about.  Maybe I'm just looking in the wrong places.

I think there are two answers to this.

One is money.  If you can design a system which will simulate an
atomic bomb with acceptable accuracy and in acceptable time, then
people will throw *really large* amounts of money at you to do that,
because the ability to do that lets governments do politically
extremely desirable things, like signing test-ban treaties, without
unacceptable costs, like not knowing if their nuclear weapons will
kill enough people.  Single processors are not fast enough to do this,
so you need huge parallel machines and good algorithms.  Good
algorithms are a real win because they can save you very expensive
hardware.  So there is a lot of research into this, because there are
people willing to spend billions of pounds on it.

The other is that they don't.  Big commercial multiprocessor machines
aren't designed to be super-fast single-processor machines at all. 
They're designed to do a couple of things instead:

1. They are designed to be extremely hardwarily reliable.  So they
have redundant everything, and they can be taken to pieces and
reassembled while they are running.  You can pull disks, power
supplies, memory, pretty much anything from these boxes and they will
cope.  That's a non-trivial hardware engineering task and also a
non-trivial software engineering task.

2. They are designed to run a specific set of applications which
businesses will pay well for and which (a) need to work *all the time*
and cost enormous money when they are not working, (b) are too big for
single processors, and can be fairly easily split among multiple
processors, but (c) where the multiple threads have significant and
critical interaction.  These are typically big database-based systems,
where (a) is true because it's a bank and really can't afford to have
the ATM system go down, (b) is true because there are a lot of ATMs
and (c) is true because it really, really matters that account
transactions interlock correctly.  (c) matters because it means that
you really need good interconnect bandwidth and latency, `balanced
performance'.

The combination of these things drives you towards the kinds of big
commercial boxes that IBM, Sun and co sell.  Of course these machines
are typically not interesting to the parallel algorithms people,
because their peak FLOPS is rotten, but actually from an engineering
perspective they are very interesting, I think, because they are
solving a very real set of problems, including ones of redundancy and
reliability.

Which brings me back to redundancy and so on.  I think that one of the
things that differs between software and hardware is the lack of
anything like a good model for software.  If you're building a
physical thing you have a lot of powerful techniques that let you
model failure mechanisms - things wear out at known rates and you
understand a lot of stuff about tolerances and so on.  Even better,
you can *measure* wear rates - when you do a service on your jet
engine you can notice that some bearing is worn and that it will need
replacing in another few thousand hours.  Also things like random
failures of components are understood reasonably well - people have
models of how often bolts fail or something.  So there's a whole
culture of this kind of failure analysis which leads to accurate
models of how things will behave.  There are still problems - you may
miss something, and worse there may be systemic problems which you
haven't accounted.  If you have two engines of the same age, and they
both have some issue with fatigue which you haven't taken account of,
then you stand a horribly high chance of *both* engines failing at the
same time, especially when, after the first one fails, the second has
to work harder, thus bringing forward its fatigue failure by far
enough that it fails too.

This kind of analysis works quite well for things like RAID.  It's
often botched I suspect, with hugely expensive RAID systems all run
off the same controller which has gone bad and is spewing junk onto
the disks, or with multiple redundant everything all running off one
noisy PSU, or with correlated disk failures &c.

But I don't think that anything *like* this model exists for software.
Sure, people try and develop models, like errors per KLOC or
something, but these are laughable, because (apart from KLOC not being
a good measure of size really), there's no notion of small or large
error.  In a mechanical device there is this whole world of
continuous, near-linear systems which you can use to build concepts
like `tolerance' and `small error' and so on, as well as understand
how errors combine with each other.  Software is not like that -
everything is infinitely close to everything else in the space of
software, and everything is completely discontinuous
almost-everywhere.  Most single-bit errors will kill you.  So there's
just no model for errors and how they combine, and how to do
redundancy.

I think that *developing* such a model would be a very interesting
thing to do, as it would make software engineering actually possible. 
What a system for which a model with reasonable properties could be
built would look like, I have no idea, but I doubt it would be much
like the languages we know.

--tim
From: Eric Marsden
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <wzipu5temg9.fsf@laas.fr>
>>>>> "tb" == Tim Bradshaw <tfb> writes:

  tb> But I don't think that anything *like* this model exists for software.
  tb> Sure, people try and develop models, like errors per KLOC or
  tb> something, but these are laughable, because (apart from KLOC not being
  tb> a good measure of size really), there's no notion of small or large
  tb> error.  In a mechanical device there is this whole world of
  tb> continuous, near-linear systems which you can use to build concepts
  tb> like `tolerance' and `small error' and so on, as well as understand
  tb> how errors combine with each other.  Software is not like that -
  tb> everything is infinitely close to everything else in the space of
  tb> software, and everything is completely discontinuous
  tb> almost-everywhere.  Most single-bit errors will kill you.  

there is an entire branch of dependable computing dealing with
software reliability engineering, and I don't think it's very fair to
call their work laughable (boring, maybe ;-). For a good overview see
<URL:http://www.cse.cuhk.edu.hk/~lyu/book/reliability/>.

It is true that models of hardware generally have a finer granularity
than software models. However, this is more due to the way software is
architected than to how it is modelled. If you develop your
application as a large number of interacting components, each in its
own error confinement region, and document all the control and data
interactions, you could develop a fine grained model of the software.
True, this is the opposite of the standard Lisp single-image
architecture.


  tb> So there's just no model for errors and how they combine, and
  tb> how to do redundancy.

inside a single image, maybe. But in a system composed of multiple
hardware and software components, there are modelling techniques that
let you investigate the impact of the failure (or planned downtime, or
different levels of degraded service) of one or several components on
the entire system. This allows system designers to evaluate the
effectiveness of different system configurations, balancing the cost
of redundancy with its impact on reliability and availability.
  
-- 
Eric Marsden                          <URL:http://www.laas.fr/~emarsden/>
From: Alain Picard
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <864rn7be2f.fsf@gondolin.local.net>
"Bradford W. Miller" <·················@motorola.com> writes:

> The key to better software in the future will be structured overbuilding,
> with overlapping responsibilities for modules, etc.

This is a really interesting notion.  I'm not sure that I totally
understood from your natural language example exactly what that would
mean: would it be possible to give an example of how I might apply
these principles to my own programs?


-- 
It would be difficult to construe        Larry Wall, in  article
this as a feature.			 <·····················@netlabs.com>
From: Nils Goesche
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <lkg06syz2f.fsf@pc022.bln.elmeg.de>
Alain Picard <·······@optushome.com.au> writes:

> ···@xahlee.org (Xah Lee) writes:
> 
> > In building bridges, there are lots of unknown factors. There's wind,
> > storm, flood, earthquake all of which we cannot fully control, and can
> > only predict in a limited way.
> 
> > The essence of computers can be likened to an abacus. You push the
> > beads and you readout the values. There are no chance events. No storm
> > or flood to worry about. The information are one hundred percent known
> > to you, and you control it one hundred percent one hundred percent of
> > the time.
> 
> > Which one is ten thousand times easier do you think? Is it bridge
> > engineering, or software engineering?
> 
> Bridge engineering.
> 
> Bridges do not need to be made to 100% exact tolerance to fulfill
> their function; normally, 99.9% will do the trick.  With software,
> you need 100% tolerance; any 1 bit error is liable to bring the
> world down.  Forgetting to screw in 1 rivet doesn't normally bring
> a bridge down.  This is due to the continuous nature of physical
> systems, versus the discrete nature of digital systems.

Not so sure about that.  Engineers compute with approximations;
omitting a screw will leave the statics within the precomputed fault
tolerances.  But computing with approximations is not at all
`inexact' or `sloppy' mathematics!  The engineer better be 100% exact
in his predictions of fault tolerances and computations of
eigen-vibrations, or he might easily overlook some wind induced
eigen-vibration which will bring the whole bridge down.  Like here:

http://www.civeng.carleton.ca/Exhibits/Tacoma_Narrows/DSmith/photos.html

and

http://www.civeng.carleton.ca/Exhibits/Tacoma_Narrows/TacomaNarrowsBridge.mpg

Regards,
-- 
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x42B32FC9
From: Kenny Tilton
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3C0C34C6.7DB23CDD@nyc.rr.com>
> 
> ···@xahlee.org (Xah Lee) writes:
> 
> > In building bridges, there are lots of unknown factors. There's wind,
> > storm, flood, earthquake all of which we cannot fully control, and can
> > only predict in a limited way.

Sounds like a User.

> 
> > The information are one hundred percent known
> > to you, and you control it one hundred percent one hundred percent of
> > the time.

This is excellent news. I am going to go get that information, test
against it until the program works and then discard the source code.
Woo-hoo!

> 
> > Which one is ten thousand times easier do you think? Is it bridge
> > engineering, or software engineering?

There's a straightforward comparison. They have so much in common! One
difference is that bridge engineers are not asked by management two
months before the ribbon cutting to add a spindle to the bridge so it
can be pointed into the wind to make take-offs easier, marketing sees an
opportunity...

kenny
clinisys
From: Ed L Cashin
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <m3d726skwp.fsf@terry.uga.edu>
··········@mailandnews.com (Software Scavenger) writes:

> In the recent thread about design patterns, RPG's definition of
> patterns seemed to be approximately that patterns are components of a
> programmer's knowledge, which differentiate between more and less
> experienced programmers.  Such components seem to me to include
> algorithms and good usage of a programming language, along with other
> kinds of knowledge.  In other words, books of algorithms, such as
> Knuth's, and books of good Lisp usage, such as PAIP, etc., could
> actually be considered patterns books.  Or they could be combined into
> a bigger patterns book, and become individual chapters of it.  And
> Cliki, ALU, CLOCC, etc., could be considered Lisp patterns websites.
>
> And since "patterns" is such a hot buzzword, we might help make Lisp
> more popular by combining such websites into one big "Lisp Patterns"
> website, organized by type of pattern, such as algorithms, usage, etc.

Hmm.  I would think the reason people check out lisp is that they
are starting to have a clue about distinguishing between empty
marketing talk and useful technology.  

Such folks are not likely to be impressed by cheap buzzwords like
"enterprise", "bullet proof", or even "patterns".  It might even be a
turn off.

>  It could even have a section on the GoF patterns with explanations
> for most of them of why each is not needed in Lisp.

_That_ sounds very helpful, though: recognize the buzzword and show
how its most popularized form looks from the standpoint of someone who
has experience with lisp.

-- 
--Ed Cashin                     integrit file-verification system:
  ·······@terry.uga.edu         http://integrit.sourceforge.net/

    Note: If you want me to send you email, don't munge your address.
From: Lieven Marchand
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <m3y9kusy5c.fsf@localhost.localdomain>
··········@mailandnews.com (Software Scavenger) writes:

> Another kind of component of a programmer's knowledge is how to earn a
> living from programming.  How to get along with your boss, how to meet
> deadlines, how to find a Lisp job, etc.  Those might or might not have
> a place in a collection of patterns.  They might be considered an
> additional kind of knowledge, in addition to patterns.

There's a whole body of literature about 'organisational patterns'.
Things like ScapeGoat etc.

-- 
Lieven Marchand <···@wyrd.be>
She says, "Honey, you're a Bastard of great proportion."
He says, "Darling, I plead guilty to that sin."
Cowboy Junkies -- A few simple words
From: Richard P. Gabriel
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <B82687BC.2AF3%rpg@dreamsongs.com>
in article ··············@localhost.localdomain, Lieven Marchand at
···@wyrd.be wrote on 11/25/01 6:50:
> There's a whole body of literature about 'organisational patterns'
> about. Things like ScapeGoat etc.

There are patterns and pattern languages about:

* architectures
* organizations for developing software
* open-source communities including governance and standardization
* teaching computing and OO
* completing grad school
* how to be a consultant
* doing writers' workshops
* designing any sort of textual electronic communications
* telephony
* XP
* writing papers for conferences

And dozens more.

Anti-patterns, by the way, are ignored by the patterns community.

            -rpg-
From: Erik Naggum
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3215720949746615@naggum.net>
* Software Scavenger
| In the recent thread about design patterns, RPG's definition of patterns
| seemed to be approximately that patterns are components of a programmer's
| knowledge, which differentiate between more and less experienced
| programmers.  Such components seem to me to include algorithms and good
| usage of a programming language, along with other kinds of knowledge.

  If I have understood this patterns thing correctly, these would be some
  of the patterns of design in Common Lisp:

1 Design algorithms to work with the highest meaningful class in the class
  hierarchy.

  E.g., design them for sequence rather than specifically for strings or
  lists, or vector rather than specifically for string.

2 Accept start and end arguments in sequence functions to avoid unnecessary
  consing.

  Consequently, use them rather than inventing your own indexing and
  termination scheme.

3 When accepting an "end" position, it is exclusive.  (The "start" position
  is likewise inclusive.)  A nil end argument means the natural end of the
  sequence.  Accept an explicit end argument, do not depend on defaulting.

  I.e., when the start and end arguments are equal, the sequence is empty.

4 If your algorithm scans a sequence in one direction one element at a
  time, accept a from-end argument.

  Consequently, one does not need to call it with reversed arguments.

5 Use iteration rather than recursion to scan a sequence one element at a
  time.  (Reserve recursion for situations where you _require_ a stack.)

  I.e., Common Lisp is not a dialect of Scheme.

6 When iterating over something to collect elements into a list, use loop
  with collect, or push onto a list which you nreverse before returning
  with the following template,

        (do (...
             (list '()))
            (... (nreverse list))
          ...
          (push ... list)
          ...)

  or stuff new items onto the end of a list using the following template
  (which is usually what loop uses)

        (do* (...
              (head (cons nil nil))
              (tail head))
            (... (cdr head))
          ...
          (setf tail (setf (cdr tail) (cons ... nil)))
          ...)

7 Design function interfaces so they can accept designators.

  I.e., study and use the designators already designed-into Common Lisp.
  When designing the functional interface for a new class hierarchy that,
  say, accepts an "employee" instance, and you find employees denoted by a
  string or an integer, design and use an "employee designator" that turns
  strings and integers into employee instances wherever only an employee
  instance makes sense.
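
  As a sketch of this last pattern (employee, find-employee-by-name,
  and find-employee-by-id are hypothetical names):

        (defun designated-employee (designator)
          ;; Coerce an employee designator -- an employee instance, a
          ;; name string, or an id integer -- into an employee.
          (etypecase designator
            (employee designator)
            (string (find-employee-by-name designator))
            (integer (find-employee-by-id designator))))

  Every function that wants an employee then begins by calling
  designated-employee on its argument, and callers may pass whichever
  denotation is handiest.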


  Except for item 6, I do not think these are issues of abstraction or
  macrology.  The patterns in 6 may be cast into macro form, but the degree
  of variation may simply be too large to make general macros useful, which
  is kind of what I expect from patterns: If they were general enough, they
  _would_ be abstractions.
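
  To make items 1 through 5 concrete, here is a toy find-like function
  written to those patterns.  (A sketch only; a serious version would
  dispatch on the sequence class rather than paying for elt on lists.)

        (defun find-matching (predicate sequence &key (start 0) end from-end)
          "Return the first element of SEQUENCE between START
        (inclusive) and END (exclusive) satisfying PREDICATE, the last
        such element if FROM-END is true, or nil.  A nil END means the
        natural end of the sequence."
          (let ((end (or end (length sequence))))
            (if from-end
                (loop for i downfrom (1- end) downto start
                      for element = (elt sequence i)
                      when (funcall predicate element) return element)
                (loop for i from start below end
                      for element = (elt sequence i)
                      when (funcall predicate element) return element))))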

  Now, RPG once told me that I sounded like I had not understood patterns,
  so, Dick, if you read this and think it sounds like I have now, I would
  appreciate a more positive hint this time.  :)

\\\
-- 
  The past is not more important than the future, despite what your culture
  has taught you.  Your future observations, conclusions, and beliefs are
  more important to you than those in your past ever will be.  The world is
  changing so fast the balance between the past and the future has shifted.
From: Russell Senior
Subject: buffer-lists (was Re: Design patterns for Lisp)
Date: 
Message-ID: <86oflooayn.fsf_-_@coulee.tdb.com>
>>>>> "Erik" == Erik Naggum <····@naggum.net> writes:

Erik>   If I have understood this patterns thing correctly, these
Erik> would be some of the patterns of design in Common Lisp:

Erik> 1 Design algorithms to work with the highest meaningful class in
Erik> the class hierarchy.

Erik>   E.g., design them for sequence rather than specifically for
Erik> strings or lists, or vector rather than specifically for string.

This seems like as good a place as any to start a tangent ...

I have recently had need for a way of holding sequences longer than
will fit in a particular implementation's array (i.e., the length is
(or might be) greater than the value of variable
array-dimension-limit).  In response, I "invented" a buffer-list, a
list of buffers with the wrinkle that the list is actually a list of
lists, where the sublists are (buffer-length buffer), for example:
'((5 "01234") (5 "56789") ... ).  Since the ordinary sequence
functions I need would not work on these buffer-lists, I wrote
functions:

(defun search-in-buffer-list (target buffer-list start) 
  "Searches for sequence TARGET in the buffers of BUFFER-LIST starting
  at START, where START is a list consisting of (buffer-number
  buffer-offset).  If found, returns the buffer list address
  (buffer-number buffer-offset).  If not found, returns NIL.  This is
  implemented by successively looking in the buffer-list (with the
  function POSITION) for the first object in the TARGET sequence, and
  when found checking if the successive positions in BUFFER-LIST
  match, using the function MATCH-IN-BUFFER-LIST described below."
  ... )
 
(defun match-in-buffer-list (target buffer-list start) 
  "Checks if sequence TARGET matches the contents of BUFFER-LIST
  starting at START, where START is a list consisting of
  (buffer-number buffer-offset).  This is used as a helper function to
  SEARCH-IN-BUFFER-LIST, but is useful on its own as well.  Returns T
  if the sequences match at the specified position, otherwise NIL."
  ... )

(defun buffer-list-offset (buffer-list start adjustment)
  "Normalizes a buffer list address that has been adjusted forward or
  backwards.  Returns the normalized address in the form of the list
  (buffer-number buffer-offset)."  
  ... )

(defun extract-from-buffer-list (buffer-list start end) 
  "Essentially a SUBSEQ for buffer-lists.  Concatenates, as necessary,
  the contents of the buffers of BUFFER-LIST between the buffer-list
  addresses START (inclusive) and END (exclusive).  This function is
  expected to fail[1] if the concatenation exceeds the value of
  variable array-dimension-limit. [1] by which I mean I know it will,
  but I haven't done anything about it."
  ... )
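
For concreteness, here is a minimal sketch (not the real code, whose
body is elided above) of what MATCH-IN-BUFFER-LIST might look like,
assuming sublists of the form (buffer-length buffer):

(defun match-in-buffer-list-sketch (target buffer-list start)
  "Return T if sequence TARGET occurs in BUFFER-LIST at the address
  START = (buffer-number buffer-offset), otherwise NIL."
  (destructuring-bind (buffer-number buffer-offset) start
    (let ((buffers (nthcdr buffer-number buffer-list))
          (offset buffer-offset))
      (map nil (lambda (item)
                 ;; Step to the next buffer when the current one runs out.
                 (loop while (and buffers (>= offset (first (first buffers))))
                       do (setf buffers (rest buffers) offset 0))
                 (unless (and buffers
                              (eql item (elt (second (first buffers)) offset)))
                   (return-from match-in-buffer-list-sketch nil))
                 (incf offset))
           target)
      t)))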

I've also got functions that read and write buffer-lists from/to
streams of element-type (unsigned-byte 8), which is what my immediate
need is for.  The write function does not suffer from the problem of
the function EXTRACT-FROM-BUFFER-LIST, since (obviously) the
destination isn't subject to the array-dimension-limit.

Having developed these functions to a stage where they are useful to
me, it has entered my consciousness that this problem has certainly
presented itself to others before and I am curious how they might have
solved the problem.  I did a little googling around but didn't find
anything relevant.  Do my "buffer-lists" have another more typical
name?  Suggestions/comments?

A couple thoughts I've had while writing this message are:

  a) whether or not the buffer-lengths in the buffer-list are "good".
     I kind of like the cached values so that I don't have to
     recompute them whenever I need them.

  b) whether I should eliminate the (buffer-number buffer-offset) from
     the function call interface and just use a flat offset,
     recomputing the buffer-number/-offsets as necessary.

At present, the remainder of Erik's suggestions for sequence functions
remains unimplemented.


-- 
Russell Senior         ``The two chiefs turned to each other.        
·······@aracnet.com      Bellison uncorked a flood of horrible       
                         profanity, which, translated meant, `This is
                         extremely unusual.' ''                      
From: Pierre R. Mai
Subject: Re: buffer-lists (was Re: Design patterns for Lisp)
Date: 
Message-ID: <87r8qkcnsb.fsf@orion.bln.pmsf.de>
Russell Senior <·······@aracnet.com> writes:

> I have recently had need for a way of holding sequences longer than
> will fit in a particular implementation's array (i.e., the length is
> (or might be) greater than the value of variable
> array-dimension-limit).  In response, I "invented" a buffer-list, a
> list of buffers with the wrinkle that the list is actually a list of
> lists, where the sublists are (buffer-length buffer), for example:
> '((5 "01234") (5 "56789") ... ).  Since the ordinary sequence
> functions I need would not work on these buffer-lists, I wrote
> functions:

> Having developed these functions to a stage where they are useful to
> me, it has entered my consciousness that this problem has certainly
> presented itself to others before and I am curious how they might have
> solved the problem.  I did a little googling around but didn't find
> anything relevant.  Do my "buffer-lists" have another more typical
> name?  Suggestions/comments?
> 
> A couple thoughts I've had while writing this message are:
> 
>   a) whether or not the buffer-lengths in the buffer-list are "good".
>      I kind of like the cached values so that I don't have to
>      recompute them whenever I need them.

But "recomputation" is a very fast memory access for vectors, which is
likely nearly just as fast as a car operation, and will remove one
indirection to get at the vector, so it seems that not keeping this
duplicate value is a useful simplification.

>   b) whether I should eliminate the (buffer-number buffer-offset) from
>      the function call interface and just use a flat offset,
>      recomputing the buffer-number/-offsets as necessary.

Since you need to retraverse your buffer list in any case, recomputation
seems inexpensive, so I'd go for it.  This might be different if you have
lots of buffers and either keep them in an adjustable vector (so that
direct indexing with the buffer-number works) or keep the buffer cons
instead of the buffer-number, again giving you direct access.
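
For example, recomputing an address from a flat offset is a single list
traversal.  A sketch, assuming sublists of the form (buffer-length
buffer):

(defun decode-flat-offset (buffer-list flat-offset)
  "Translate FLAT-OFFSET into a (buffer-number buffer-offset) pair,
  or NIL if the offset is past the end of the buffer-list."
  (loop for (len nil) in buffer-list
        for n from 0
        when (< flat-offset len) return (list n flat-offset)
        do (decf flat-offset len)))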

Regs, Pierre.

-- 
Pierre R. Mai <····@acm.org>                    http://www.pmsf.de/pmai/
 The most likely way for the world to be destroyed, most experts agree,
 is by accident. That's where we come in; we're computer professionals.
 We cause accidents.                           -- Nathaniel Borenstein
From: Jochen Schmidt
Subject: Re: buffer-lists (was Re: Design patterns for Lisp)
Date: 
Message-ID: <9u0cji$k7m$1@rznews2.rrze.uni-erlangen.de>
Pierre R. Mai wrote:
> But "recomputation" is a very fast memory access for vectors, which is
> likely nearly just as fast as a car operation, and will remove one
> indirection to get at the vector, so it seems that not keeping this
> duplicate value is a useful simplification.
> 
>>   b) whether I should eliminate the (buffer-number buffer-offset) from
>>      the function call interface and just use a flat offset,
>>      recomputing the buffer-number/-offsets as necessary.
> 
> Since you need to retraverse your buffer list in any case, recomputation
> seems inexpensive, so I'd go for it.  This might be different if you have
> lots of buffers and either keep them in an adjustable vector (so that
> direct indexing with the buffer-number works) or keep the buffer cons
> instead of the buffer-number, again giving you direct access.

I agree that omitting those offsets may be a good idea.

On the other hand, the data structure could be extended to support
"subsequences" of larger buffers:

'("hallo" *an-1024-char-array* (*buffer* 37 20) ...)

A string or an array is stored directly, but a subsequence of a larger
buffer is stored as a list (<buffer> <start> <end>).  Such a triplet is
ready to have SUBSEQ applied to it directly (or to be turned into a
displaced array?):

(let ((buffer-list (list "hallo" *an-1024-char-array*
                         (list *buffer* 20 37))))   ; etc.
  (apply #'subseq (third buffer-list)))

I'm not sure if this is a particularly good idea or if the same thing
could be done better by using displaced arrays.
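
For comparison, a displaced array shares storage with the underlying
buffer instead of copying the way SUBSEQ does.  A sketch with arbitrary
sizes:

(let ((buffer (make-string 64 :initial-element #\x)))
  (make-array 17 :element-type (array-element-type buffer)
                 :displaced-to buffer
                 :displaced-index-offset 20))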

ciao,
Jochen

--
http://www.dataheaven.de
From: Rob Warnock
Subject: Re: buffer-lists (was Re: Design patterns for Lisp)
Date: 
Message-ID: <9u2lhi$bqe9s$1@fido.engr.sgi.com>
Russell Senior  <·······@aracnet.com> wrote:
+---------------
| I have recently had need for a way of holding sequences longer than
| will fit in a particular implementation's array ... In response, I
| "invented" a buffer-list, a list of buffers with the wrinkle that
| the list is actually a list of lists, where the sublists are
| (buffer-length buffer)...
...
| Having developed these functions to a stage where they are useful to
| me, it has entered my consciousness that this problem has certainly
| presented itself to others before and I am curious how they might have
| solved the problem. I did a little googling around but didn't find
| anything relevant. Do my "buffer-lists" have another more typical name?
+---------------

The term I'm most familiar with for that general concept is "cords",
as described in the Boehm-Demers-Weiser conservative garbage collector
<URL:http://www.hpl.hp.com/personal/Hans_Boehm/gc/>:

	The garbage collector distribution includes a C string
	(cord [1]) package that provides for fast concatenation and
	substring operations on long strings. A simple curses- and
	win32-based editor that represents the entire file as a cord
	is included as a sample application.

[1] <URL:http://www.hpl.hp.com/personal/Hans_Boehm/gc/gc_source/cordh.txt>
is the header file for the cord package, which says (with C comment chars
stripped):

	Cords are immutable character strings.  A number of operations
	on long cords are much more efficient than their strings.h
	counterpart. In particular, concatenation takes constant time
	independent of the length of the arguments. (Cords are represented
	as trees, with internal nodes representing concatenation and
	leaves consisting of either C strings or a functional description of
	the string.)

Note particularly that bit about allowing functional representations
for leaves. In the cords package, "functional" leaves are actually
implemented in C as closures(!!) which are (virtual) accessors for
the strings they represent. Again from "cordh.txt":

	/* Cords may be represented by functions defining the ith character */
	typedef char (* CORD_fn)(size_t i, void * client_data);

In CL you could do that, too. Or depending on your needs, maybe make the
closures be just thunks that when called return the data they represent
as strings or as further cords of strings and/or thunks. I've seen
several server-side dynamic HTML systems (mostly in Scheme, as it happens)
that do something similar when generating web pages -- that is, there's
a generation phase that produces a cord-like tree of strings and/or
closures, and an output phase that walks the tree outputting the strings
and calling the closures [usually thunks], which return strings. (Or more
trees, which...)

So you might want to consider adding these two features of "cords" --
trees [possibly (re)balanced] and closures -- to your "buffer-lists".
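
To make the closure-leaf idea concrete, here is a minimal CL sketch
(illustrative only, not the cord package's actual representation): a
"cord" is a string, a thunk returning a cord, or a list of cords, and
the output phase is a tree walk:

(defun walk-cord (cord stream)
  "Write CORD to STREAM, calling closures and descending into lists."
  (etypecase cord
    (string   (write-string cord stream))
    (function (walk-cord (funcall cord) stream))
    (list     (dolist (c cord) (walk-cord c stream)))))

;; E.g., a generation phase can mix literal strings with thunks:
;;   (walk-cord (list "<p>" (lambda () (format nil "~D" (random 100))) "</p>")
;;              *standard-output*)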


-Rob

-----
Rob Warnock, 30-3-510		<····@sgi.com>
SGI Network Engineering		<http://www.meer.net/~rpw3/>
1600 Amphitheatre Pkwy.		Phone: 650-933-1673
Mountain View, CA  94043	PP-ASEL-IA

[Note: ·········@sgi.com and ········@sgi.com aren't for humans ]  
From: Brian P Templeton
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <87lmgn3e85.fsf@tunes.org>
Erik Naggum <····@naggum.net> writes:

[...]
> 5 Use iteration rather than recursion to scan a sequence one element at a
>   time.  (Reserve recursion for situations where you _require_ a stack.)
> 
>   I.e., Common Lisp is not a dialect of Scheme.
> 
I think that higher-order functions are also useful, and for some
purposes recursion is more easily understood than iteration.

[...]

-- 
BPT <···@tunes.org>	    		/"\ ASCII Ribbon Campaign
backronym for Linux:			\ / No HTML or RTF in mail
	Linux Is Not Unix			 X  No MS-Word in mail
Meme plague ;)   --------->		/ \ Respect Open Standards
From: Tim Bradshaw
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <fbc0f5d1.0111260333.2b844acf@posting.google.com>
··········@mailandnews.com (Software Scavenger) wrote in message news:<····························@posting.google.com>...
> 
> And since "patterns" is such a hot buzzword, 

Patterns are *old news*, like the Web or any of that 90s crap.  Lisp
people need to understand that doing what everyone else did 5 years
ago is *not* the way to succeed.  It might keep you in business but
you won't get rich writing a web server any more, or writing about
patterns.  You need to find out what people will be doing *next*.

--tim
From: Fernando Rodríguez
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <anl40uor7kc3ta5lpob4ciodf59j3j3se0@4ax.com>
On 26 Nov 2001 03:33:26 -0800, ··········@tfeb.org (Tim Bradshaw) wrote:

>··········@mailandnews.com (Software Scavenger) wrote in message news:<····························@posting.google.com>...
>> 
>> And since "patterns" is such a hot buzzword, 
>
>Patterns are *old news*, like the Web or any of that 90s crap.  Lisp
>people need to understand that doing what everyone else did 5 years
>ago is *not* the way to succeed.  It might keep you in business but
>you won't get rich writing a web server any more, or writing about
>patterns.  You need to find out what people will be doing *next*.

Any hints? ;-)




--
Fernando Rodríguez
frr at wanadoo dot es
--
From: Wade Humeniuk
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <9ttpfh$lg$1@news3.cadvision.com>
> >Patterns are *old news*, like the Web or any of that 90s crap.  Lisp
> >people need to understand that doing what everyone else did 5 years
> >ago is *not* the way to succeed.  It might keep you in business but
> >you won't get rich writing a web server any more, or writing about
> >patterns.  You need to find out what people will be doing *next*.
>
> Any hints? ;-)

I think that is part of what the purpose of patterns/pattern languages
is/was: to create what the next thing will be.  The trick is not to find
out what is next but to create what is next, partially by creating a
pattern language strong enough to influence a large enough group of people.
Of course this has to be your definition of success.

Patterns (deeply held beliefs) from the previous post:

There exists old news,
Successful things like the Web are crap,
Lisp people need to succeed,
Part of success is understanding,
There is a wrong way,
There is a right way, (If you could only figure it out)
The old things have been done, there is only room for success in new things.

Wade
From: Ed L Cashin
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <m3snb17ab1.fsf@terry.uga.edu>
"Wade Humeniuk" <········@cadvision.com> writes:

...
> Patterns (deeply held beliefs) From the previous post:
> 
> There exists old news,
> Successful things like the Web are crap,
> Lisp people need to succeed,
> Part of success is understanding,
> There is a wrong way,
> There is a right way, (If you could only figure it out)
> The old things have been done, there is only room for success in new things.

I've always thought that I should learn more about the patterns
concept because it is so influential.  But the more I read here, the
more it seems that it has become an ill-defined or often-misunderstood
concept, bound to confuse more than anything.

Are patterns simply heuristics favored by experts -- sort of "best
practices" in programming?  That seems a lot more interesting than the
vague "deeply held beliefs" definition above.

-- 
--Ed Cashin                     integrit file-verification system:
  ·······@terry.uga.edu         http://integrit.sourceforge.net/

    Note: If you want me to send you email, don't munge your address.
From: Jeffrey Palmer
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3C02D989.7090604@acm.org>
Ed L Cashin wrote:

> 
> Are patterns simply heuristics favored by experts -- sort of "best
> practices" in programming?  That seems a lot more interesting than the
> vague "deeply held beliefs" definition above.
> 

Yes, exactly.  Patterns are nothing more than a standardized, digestible 
documentation approach for "best practices".  By providing a single 
description of an approach or architecture, patterns provide a good way 
to quickly get up to speed on approaches/solutions you might not be 
familiar with, while at the same time creating a shared terminology that 
makes it much easier to communicate with others in your area.

Patterns are probably not the ONLY thing that would make Lisp more 
popular in the rest of the programming world, but they probably wouldn't 
hurt.  Documenting common approaches to solving problems in a uniquely 
Lisp fashion would be a great way of applying patterns in the Lisp 
community.  (You could easily position the entire _On Lisp_ text as a 
collection of patterns, if it were written in a slightly different style).

The major thing keeping people from using languages like Lisp, Scheme, 
and other functional languages is the learning curve.  Patterns help 
make that curve manageable for people that don't have a lifetime of 
experience in the language.

It's a little disappointing that people often hold such a negative view 
of patterns, given that they're just a way of communicating.  I would 
think the more information flow, the better, especially for a group that 
wants desperately to communicate the advantages of its approach to the 
rest of the world.

	- j


PS: Patterns were not invented by computer scientists, but by 
architects.  If you're interested in the original texts, check out 
Christopher Alexander's "The Timeless Way of Building", and "A Pattern 
Language: Towns, Buildings, Construction".

--
Jeffrey Palmer
Curious Networks, Inc.
http://www.curiousnetworks.com
From: Ed L Cashin
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <m3elmkatyx.fsf@terry.uga.edu>
Jeffrey Palmer <··············@acm.org> writes:

...
> PS: Patterns were not invented by computer scientists, but by
> architects.  If you're interested in the original texts, check out
> Christopher Alexander's "The Timeless Way of Building", and "A Pattern
> Language: Towns, Buildings, Construction".

I am interested, thanks much.   You mean that the practice of thinking
of patterns of software development was inspired directly by these
works? 

-- 
--Ed Cashin                     integrit file-verification system:
  ·······@terry.uga.edu         http://integrit.sourceforge.net/

    Note: If you want me to send you email, don't munge your address.
From: Jeffrey Palmer
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3C03EA04.9070704@acm.org>
Ed L Cashin wrote:

> 
> I am interested, thanks much.   You mean that the practice of thinking
> of patterns of software development was inspired directly by these
> works? 
> 


Yes.  Christopher Alexander's group began documenting common 
architectural themes as patterns some time ago.  Several individuals 
(the Gang of Four, mostly) recognized the applicability of this approach 
to computer science and Patterns (as we know them) were born.

The architecture patterns are interesting just to see how core concepts 
in a different field can be effectively communicated.

	- j

--

Jeffrey Palmer

Curious Networks, Inc.
http://www.curiousnetworks.com
From: Frank A. Adrian
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <AWZM7.3532$_06.360252@news.uswest.net>
Jeffrey Palmer wrote:

> The architecture patterns are interesting just to see how core concepts
> in a different field can be effectively communicated.

As well as to show how even well-meaning people in another field can fuck 
them over, with all respect due to the GoF.

Flamingly yours...
faa
From: Gareth McCaughan
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <slrna0859q.1ra3.Gareth.McCaughan@g.local>
Jeffrey Palmer wrote:

> Yes, exactly.  Patterns are nothing more than a standardized, digestible 
> documentation approach for "best practices".  By providing a single 
> description of an approach or architecture, patterns provide a good way 
> to quickly get up to speed on approaches/solutions you might not be 
> familiar with, while at the same time creating a shared terminology that 
> makes it much easier to communicate with others in your area.

This is about all patterns are usually considered to be, but
I think in some sense they "should" be something more. The
original inspiration for the patterns thing came from the
works of Christopher Alexander, and the interesting thing
there is that his patterns aren't meant to be used on their
own. They're supposed to fit together to form a "pattern
language", which is a bit like a generative grammar: it
contains "patterns" at many levels, and you're supposed to
be able to start at the top level and gradually follow the
productions down through lower levels until you have a
completely specified system. So the fundamental task in
pattern-making is *not* making patterns, but making pattern
*languages*. This is harder, in the same sort of way as
building a major software system is harder than writing a
single function. So of course it doesn't get done much.
The "pattern languages" that get published are almost all
"little languages"; they explain a way of making a decent
system that covers some rather small amount of ground.

By way of contrast, Alexander's[1] book "A pattern language"
begins with a couple of patterns that are meant to apply at
the *global* scale and works down to the details of how to
construct an individual wall. There's not much in the
software patterns literature with that kind of breadth
of vision.

That isn't to say that writing isolated patterns isn't a
useful activity. Encapsulating a not-entirely-trivial bit
of "best practice" in a pithy but fairly detailed form
is good, provided it's done well. But the really interesting
objects are -- perhaps I should say "might be" -- pattern languages,
not isolated patterns.


[1] Actually, he had several co-authors too.

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Gareth McCaughan
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <slrna08bcc.1sdq.Gareth.McCaughan@g.local>
I wrote:

> Jeffrey Palmer wrote:
> 
> > Yes, exactly.  Patterns are nothing more than a standardized, digestible 
> > documentation approach for "best practices".  By providing a single 
> > description of an approach or architecture, patterns provide a good way 
> > to quickly get up to speed on approaches/solutions you might not be 
> > familiar with, while at the same time creating a shared terminology that 
> > makes it much easier to communicate with others in your area.
> 
> This is about all patterns are usually considered to be, but
> I think in some sense they "should" be something more.
[etc]

Oh, one other thing that distinguishes a "pattern" from a
"standardized documentation of best practice": a pattern
is supposed to have a short, memorable name. This is more
important than it sounds; the idea is that knowing a bunch
of patterns provides some common language for a community
to talk about things at a higher level. Patterns thus
play a role a bit like that of abstractions like functions
and macros, but for humans rather than computers. That's
the theory, anyway.

Of course you can name interesting phenomena without
having patterns about them. "Ring buffer", "virtual
machine", "metaobject protocol", etc. So no one is
claiming that providing good names for things is some
new capability patterns have that nothing else had
before. Just that one of the things a pattern can do
is to provide a good name for a thing.

Again, this isn't something existing patterns do all
that well. (Some of them do, even when -- as with
"Singleton" and "Visitor" from the GoF book -- they're
good names for undeserving things. I think the names
"Observer"[1] and "Composite"[2], also from that book, are
useful contributions to the terminology of programming.)


[1] The "Observer pattern" is where you have a protocol
    of callback functions to allow one entity to be
    notified when another changes, without that other
    entity needing to know anything about the thing
    that's observing it. This is harder to think of
    in languages without closures.

    The "Composite pattern" is where a container contains
    objects whose class is a superclass of the container's
    class, so that you can do recursive descent uniformly.
    This is harder to think of in statically typed languages.

    Both of these "patterns" are, in some sense, too easy
    in Lisp. Not in the sense that Lisp would be better if
    it made them harder; but in languages where you have
    to fight harder to do interesting things, it's easier
    to notice when you've fought the same fight several
    times. I think the *names* "observer" and "composite"
    are useful even when you're programming in a language
    that doesn't make it a non-trivial exercise to
    implement them.
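
    For instance, with closures the Observer protocol reduces to a few
    lines of CL (a sketch, with invented names):

    (defclass cell ()
      ((value     :accessor cell-value     :initform nil)
       (observers :accessor cell-observers :initform '())))

    (defun watch (cell fn)
      "Register FN to be called with the new value on each change."
      (push fn (cell-observers cell)))

    (defun (setf watched-value) (new-value cell)
      "Set the cell's value and notify every registered observer."
      (setf (cell-value cell) new-value)
      (dolist (fn (cell-observers cell))
        (funcall fn new-value))
      new-value)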

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Jeffrey Palmer
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3C0524BE.7070809@acm.org>
Gareth McCaughan wrote:

> 
> This is about all patterns are usually considered to be, but
> I think in some sense they "should" be something more. The
> original inspiration for the patterns thing came from the
> works of Christopher Alexander, and the interesting thing
> there is that his patterns aren't meant to be used on their
> own. They're supposed to fit together to form a "pattern
> language", which is a bit like a generative grammar: it
> contains "patterns" at many levels, and you're supposed to
> be able to start at the top level and gradually follow the
> productions down through lower levels until you have a
> completely specified system. So the fundamental task in
> pattern-making is *not* making patterns, but making pattern
> *languages*. ...


I agree.  Pattern languages are the glue that holds patterns together. 
I find it difficult to write patterns without automatically placing them 
into a pattern language; the context provided by the pattern language 
allows for a more expressive presentation of the system complexity 
(e.g., pattern languages cleanly support the documentation of alternate 
solutions to a single problem, with a clear distinction based on 
performance, or whatever is important to you).

I approach almost all documentation in this fashion now, regardless of 
language, and I have found it to be very effective.  Then again, I 
probably take a more pragmatic approach to patterns, as opposed to some 
in the industry (which is probably what prompted this thread in the 
first place).

	- j

--
Jeffrey Palmer
Curious Networks, Inc.
http://www.curiousnetworks.com
From: Erik Naggum
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3215782079772927@naggum.net>
* Wade Humeniuk
| I think that is part of what the purpose of patterns/pattern languages
| is/was.  To create what the next thing will be.  The trick is not to find
| out what is next but to create what is next.  Partially by creating a
| pattern language strong enough to influence a large enough group of
| people.  Of course this has to be your definition of success.

  I think patterns people are trying to prevent the next thing from being
  just another variation of something somebody already did many years ago.
  The "if we do it our _own_ way, we can call it innovation" crowd needs
  serious corrective input to their "creative" processes.  Preventing just
  one stupid reinvention of the obvious from being marketed in glittering
  new clothing would be worth every cost.

///
-- 
  The past is not more important than the future, despite what your culture
  has taught you.  Your future observations, conclusions, and beliefs are
  more important to you than those in your past ever will be.  The world is
  changing so fast the balance between the past and the future has shifted.
From: Daniel Barlow
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <878zct1o7g.fsf@noetbook.telent.net>
Fernando Rodríguez <·······@must.die> writes:

> On 26 Nov 2001 03:33:26 -0800, ··········@tfeb.org (Tim Bradshaw) wrote:
> 
> >··········@mailandnews.com (Software Scavenger) wrote in message news:<····························@posting.google.com>...
> >> 
> >> And since "patterns" is such a hot buzzword, 
> >
> >Patterns are *old news*, like the Web or any of that 90s crap.  Lisp
> >people need to understand that doing what everyone else did 5 years
> >ago is *not* the way to succeed.  It might keep you in business but
> >you won't get rich writing a web server any more, or writing about
> >patterns.  You need to find out what people will be doing *next*.
> 
> Any hints? ;-)

Patterns were born from Smalltalk-inspired people, and ended up in the
hands of C++ and Java programmers.  I don't think they persuaded many
people to learn Smalltalk.  (Ditto XP, I think)

Web applications were probably originally pioneered by Perl programmers
writing CGI.  Then the Java guys muscled in with J2EE and all that
middleware stuff; I don't think many of them decided this would be a
good opportunity to learn Perl.

So I think Lisp programmers have to decide whether their goal is to
get rich, or to popularize Lisp.  Even if you correctly identify the
Next Big Thing ahead of time[*] and win (= get rich) by implementing
it in Lisp, Lisp is not going to ride on its coattails to become the
Next Big Thing Implementation Language.  Your competitors will just
reimplement it in whatever language they already know or has enough
marketing $ behind it to persuade J Random Programmer that he needs to
learn it.


-dan

[*] A more reliable strategy would probably be to shape the next big
thing yourself instead of passively trying to spot it coming out
somewhere else

-- 

  http://ww.telent.net/cliki/ - Link farm for free CL-on-Unix resources 
From: Bijan Parsia
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <Pine.A41.4.21L1.0111261839440.50476-100000@login8.isis.unc.edu>
On 26 Nov 2001, Daniel Barlow wrote:
[snip]
> [*] A more reliable strategy would probably be to shape the next big
> thing yourself instead of passively trying to spot it coming out
> somewhere else
[snip]

In the oft-quoted phrase from Alan Kay, "The best way to predict the
future is to invent it."

There is, of course, the lesser known phrase from Bill Gates(*), "The best
way to invent the future is to co-opt/buy/steal it."

Cheers,
Bijan Parsia.

* This attribution is completely spurious, made for the purposes of humor
and point making. The actual phrase is, "Will no one rid me of this
troublesome antitrust lawsuit?" and it was probably Ballmer who said
it. :)
From: Erik Naggum
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3216157980834776@naggum.net>
* Bijan Parsia
| In the oft-quoted phrase from Alan Kay, "The best way to predict the
| future is to invent it."
| 
| There is, of course, the lesser known phrase from Bill Gates(*), "The
| best way to invent the future is to co-opt/buy/steal it."

  I thought Bill Gates' take on it was "the best way to predict the future
  is to reinvent it".

///
-- 
  The past is not more important than the future, despite what your culture
  has taught you.  Your future observations, conclusions, and beliefs are
  more important to you than those in your past ever will be.  The world is
  changing so fast the balance between the past and the future has shifted.
From: Software Scavenger
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <a6789134.0111301933.60d49733@posting.google.com>
Daniel Barlow <···@telent.net> wrote in message news:<··············@noetbook.telent.net>...

> So I think Lisp programmers have to decide whether their goal is to
> get rich, or to popularize Lisp.  Even if you correctly identify the

If Lisp gains a reputation for making people rich, it will quickly
become popular.  Paul Graham is just one person, not enough to give
Lisp such a reputation.  We need hordes of Lisp millionaires.

Why don't we already have hordes of Lisp millionaires?  What is the
invisible obstacle standing in the way of this logically-expectable
result?
From: Kenny Tilton
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3C085561.A44FB99F@nyc.rr.com>
Software Scavenger wrote:
> 
> Why don't we already have hordes of Lisp millionaires?  What is the
> invisible obstacle standing in the way of this logically-expectable
> result?

You can't use Lisp and work for Bill. You have to work for Bill to get
rich, he has all the money now.

kenny
clinisys
From: Kaz Kylheku
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <4v_N7.10710$nm3.496740@news1.rdc1.bc.home.com>
In article <·················@nyc.rr.com>, Kenny Tilton wrote:
>Software Scavenger wrote:
>> 
>> Why don't we already have hordes of Lisp millionaires?  What is the
>> invisible obstacle standing in the way of this logically-expectable
>> result?
>
>You can't use Lisp and work for Bill. You have to work for Bill to get
>rich, he has all the money now.

Not so sure about that; you can target Bill's platform with Lisp.
From: Thomas F. Burdick
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <xcvvgfqoa1l.fsf@famine.OCF.Berkeley.EDU>
Kenny Tilton <·······@nyc.rr.com> writes:

> Software Scavenger wrote:
> > 
> > Why don't we already have hordes of Lisp millionaires?  What is the
> > invisible obstacle standing in the way of this logically-expectable
> > result?
> 
> You can't use Lisp and work for Bill. You have to work for Bill to get
> rich, he has all the money now.

Ah, but you forget that not all the people who got their money from
Bill applied for jobs with him.  For a while in the 90's in Seattle,
one commonly-seen business plan was "make a company that does well
enough to make Bill want to buy it to either make it his or crush it".
If you did this with Lisp, you could get some of Bill's money as well
as with any other language.  If he bought you to keep whatever you
made, he'd probably have his minions rewrite it in C++, but presumably
you already figured out the design, so this would even be a reasonable
thing.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Kenny Tilton
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3C0960BB.832C18A5@nyc.rr.com>
"Thomas F. Burdick" wrote:
> 
> Ah, but you forget that not all the people who got their money from
> Bill applied for jobs with him.  For a while in the 90's in Seattle,
> one commonly-seen business plan was "make a company that does well
> enough to make Bill want to buy it to either make it his or crush it".
> If you did this with Lisp, you could get some of Bill's money as well
> as with any other language.  

Quibble Alert: Hang on, what if Bill goes for the crush option on me?

Can you imagine if Bill decided to do a Visual Lisp? There'd go the
neighborhood.

kenny
clinisys
From: Florian Weimer
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <87ofljylnp.fsf@deneb.enyo.de>
Kenny Tilton <·······@nyc.rr.com> writes:

> You can't use Lisp and work for Bill.

Are you sure?  Surely you can use Haskell and work for Bill.

> You have to work for Bill to get rich, he has all the money now.

If he had all the money, it would be worthless.
From: Erik Naggum
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <3216201529585621@naggum.net>
* Software Scavenger
| If Lisp gains a reputation for making people rich, it will quickly become
| popular.  Paul Graham is just one person, not enough to give Lisp such a
| reputation.  We need hordes of Lisp millionaires.

  Provided at least a majority of them are satisfied with the language that
  made them rich and do not feel the urge to create their own pet languages.

| Why don't we already have hordes of Lisp millionaires?

  Perhaps because they do not want to credit Lisp with it?

///
-- 
  The past is not more important than the future, despite what your culture
  has taught you.  Your future observations, conclusions, and beliefs are
  more important to you than those in your past ever will be.  The world is
  changing so fast the balance between the past and the future has shifted.
From: Marc Battyani
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <47381AFBE76F97DB.E0A868B9A0587778.FEFE6E696A2ABCBC@lp.airnews.net>
"Software Scavenger" <··········@mailandnews.com> wrote

> If Lisp gains a reputation for making people rich, it will quickly
> become popular.  Paul Graham is just one person, not enough to give
> Lisp such a reputation.  We need hordes of Lisp millionaires.
>
> Why don't we already have hordes of Lisp millionaires?  What is the
> invisible obstacle standing in the way of this logically-expectable
> result?

The really interesting metric is the ratio (/ millionaires programmers),
computed for Java, C++, Lisp, Python, etc.

Marc
From: Michael Travers
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <a74f7e2e.0112011911.5dd5dca@posting.google.com>
··········@mailandnews.com (Software Scavenger) wrote in message news:<····························@posting.google.com>...

> Why don't we already have hordes of Lisp millionaires?  What is the
> invisible obstacle standing in the way of this logically-expectable
> result?

I know a couple of people who have started successful companies using
Lisp as a technology, and are (I'm pretty sure) millionaires quite a
few times over.  

In both cases, the main ingredients for success were:
- being pretty damn smart
- finding a complex domain that doesn't have good tools yet
- understanding that domain thoroughly
- building a solution in Lisp 
- selling the solution into the market

Knowing Lisp is the least of it really, but Lisp does enable a smart
person to quickly build a solution to a complex problem, with one or
a few developers and maybe without external funding.  That's a good way 
to get rich.
From: Tim Bradshaw
Subject: Re: Design patterns for Lisp
Date: 
Message-ID: <fbc0f5d1.0111270454.6ed20e0e@posting.google.com>
Fernando Rodríguez <·······@must.die> wrote in message news:<··································@4ax.com>...
> 
> Any hints? ;-)

I have a document that I will probably never finish (it's at least 2
years since I did any work on it).  Its title is `About six months'.
Its thesis is that six months is about the difference between winning
and losing, and coincidentally the advantage Lisp gives you is also
about 6 months.  If you can spot something at the same time everyone
else does, then you can get there *first* with Lisp.  If you spot
something significantly before everyone else does it doesn't matter
very much how you do it, and if you spot it 5 years too late Lisp
won't help you win.

There are a few things to notice about this.

If you get there first you can set standards, if you are lucky. That
means everyone else has to talk to you, and if you get the standards
right that can further help your system because the standards are
designed to be easy for you.

Getting there first means not doing complicated things. This is a
really crucial point. If it needs XML or CORBA it's going to take more
than 6 months - not just in Lisp but in any language because these
systems are just overcomplex.  Things grow complexity as they mature
and die.  If you think you need to implement a new lisp dialect or an
OS you are looking at the wrong things.

Getting there first does not solve everything.  You can do pretty well
but reasonably soon (5 years) some huge monster will come along and
use monopoly power to smash you if they can.  You want to have sold
out before this happens.  Alternatively, like the dot.com boom, the
`next big thing' may turn out not to be so big, in which case you want
to sell before the bubble collapses to get maximum value...

The classic error made by Lisp people is thinking that Lisp is *so good*
that you can have no other ideas, just do what everyone else does,
5 years later, and still win because of secret Lisp magic. This is probably
enough to keep you afloat, because Lisp is pretty good, but you'll
spend your life fighting integration problems and so on, and needing
XML and CORBA and UML and all that other dross.  The difference
between Lisp and anything else is that 5 years too late is only 4
years 6 months too late in Lisp.

So my theory is that Lisp is the difference between needing to see the
next big thing coming 6 months before anyone else, which means you are
lucky or a genius, and merely seeing it at the same time, which means
you just need to watch things like a hawk.

Of course there are other ways of making a living than spotting the
next big thing, and you can use Lisp for those too.  That's what
almost everyone does, after all.

Finally someone mentioned that you have a choice between doing this
and popularising Lisp.  This is not correct.  Firstly, you can get to
set Lisp-friendly standards.  Secondly, once you've done this you never
need to work again.  *Then* you can popularise Lisp.

--tim