From: Raffael Cavallaro
Subject: The psychology of premature optimization
Date: 
Message-ID: <raffael-1512982350510001@raffaele.ne.mediaone.net>
In article <············@sparky.wolfe.net>, "Chas Wade"
<········@wolfenet.com> wisely wrote:


>      My experience has been that programmers are too eager to get faster
>performance, and they both constrain their designs, and program too
>carefully for performance in the early stages of a project.  Build first,
>then work on the performance.  If your tools let you build fast enough, you
>won't be disheartened by discarding even large portions when they prove too
>slow because you can rebuild a second solution just as quickly.

Here's my take on why people optimize prematurely, FWIW. Life is
uncertain. Will your project succeed? Will you be working at the same
company in 6 months' time? Will you reach the market in time? etc., etc.,
etc. The honest answer to all of these is "I don't know."

But one thing is for sure. If I work at it, I can get this bit of code to
run faster. It's manageable, it has immediate, visible results, even if
these results are irrelevant to the bigger questions we're really
concerned with.

So people try to manage their uncertainty by being certain about something
they can deal with directly - the speed of their code.

It really makes much more sense to let the compiler writers worry about
the speed of generated code - after all, that's what they do for a living.
Only once we've gotten something to work should we turn to the task of
speeding up selected, critical portions that the compiler didn't make fast
enough for our purposes.
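
In Lisp terms, that usually means writing everything with the default
compiler settings, and only adding declarations to the one function a
profiler points at.  A minimal sketch (SUM-SAMPLES is a made-up stand-in
for such a hot spot):

  ;; Hypothetical hot spot found by profiling.  The type and OPTIMIZE
  ;; declarations are confined to this one function; everything else
  ;; keeps the default (safe, debuggable) settings.
  (defun sum-samples (samples)
    (declare (type (simple-array single-float (*)) samples)
             (optimize (speed 3)))
    (let ((total 0.0f0))
      (declare (type single-float total))
      (dotimes (i (length samples) total)
        (incf total (aref samples i)))))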

just my $.02 worth.

Raf

-- 
Raffael Cavallaro

From: Tim Bradshaw
Subject: Re: The psychology of premature optimization
Date: 
Message-ID: <ey3pv9ki94p.fsf@todday.aiai.ed.ac.uk>
* Raffael Cavallaro wrote:

> But one thing is for sure. If I work at it, I can get this bit of code to
> run faster. It's manageable, it has immediate, visible results, even if
> these results are irrelevant to the bigger questions we're really
> concerned with.

> So people try to manage their uncertainty by being certain about something
> they can deal with directly - the speed of their code.

In fact, it's a classic displacement activity: `there's all this stuff
I don't really want to think about, because it's hard and/or
emotionally difficult, so I'll just concentrate on this little thing
over here which keeps me nice & busy'.

I have *lots* of LaTeX classes/styles which exist because I didn't
want to think about writing some document...  If I were being nasty
I'd claim that LispMs were probably a displacement activity from AI.

I often think that *most* computing activities are of this kind.

--tim
From: Martin Rodgers
Subject: Re: The psychology of premature optimization
Date: 
Message-ID: <MPG.10e182e364a5a2e6989d6f@news.demon.co.uk>
In article <························@raffaele.ne.mediaone.net>, 
·······@mediaone.net says...

[some excellent points]

> Only once we've gotten something to work should we turn to the task of
> speeding up selected, critical portions that the compiler didn't make fast
> enough for our purposes.
 
As someone once said, "Make it right before you make it fast."

> just my $.02 worth.

Appreciated.
-- 
Remove insect from address to email me | You can never browse enough
     will write code that writes code that writes code for food
From: Tim Rowledge
Subject: Re: The psychology of premature optimization
Date: 
Message-ID: <ant16181106cKL&V@goldskin.interval.com>
In article <··························@news.demon.co.uk>, Martin Rodgers
<··············@wildcard.butterfly.demon.co.uk> wrote:

> As someone once said, "Make it right before you make it fast."
That someone might even have been me -- when I used to teach Smalltalk classes, I used to preach
Make it work
Make it work _right_
Make it work right _fast_

as in
get it to do what you want
then if you need, make it more general, better factored, etc.
then if you need it again, it's time to really bang on it.

tim

-- 
God is real, unless declared integer.
Tim Rowledge:  ········@interval.com (w)  +1 (650) 842-6110 (w)
 ···@sumeru.stanford.edu (h)  <http://sumeru.stanford.edu/tim>
From: David N. Smith
Subject: Re: The psychology of premature optimization
Date: 
Message-ID: <75ba3v$hr0$1@news.btv.ibm.com>
And Knuth often said (and may still say): 

   Premature optimization is the root of all evil.

----------
In article <ant16181106cKL&·@goldskin.interval.com>, Tim Rowledge <········@interval.com> wrote:


>> As someone once said, "Make it right before you make it fast."
>That someone might even have been me -- when I used to teach Smalltalk classes, I used to preach
>Make it work
>Make it work _right_
>Make it work right _fast_
>
>as in
>get it to do what you want
>then if you need, make it more general, better factored, etc.
>then if you need it again, it's time to really bang on it.
From: Philip Lijnzaad
Subject: Re: The psychology of premature optimization
Date: 
Message-ID: <u7u2ytiskq.fsf@ebi.ac.uk>
Along similar lines, but even stronger, is Steve Oualline's first rule on
optimizing: "don't".  This is in his O'Reilly book Practical C++
Programming.  The first rule on that particular activity, BTW, is also a
resounding "don't" :-) 

Merry Lispmas,

                                                                      Philip
-- 
"when flint was at the cutting edge of cutting edge technology" T. Pratchett
-----------------------------------------------------------------------------
Philip Lijnzaad, ········@ebi.ac.uk | European Bioinformatics Institute
+44 (0)1223 49 4639                 | Wellcome Trust Genome Campus, Hinxton
+44 (0)1223 49 4468 (fax)           | Cambridgeshire CB10 1SD,  GREAT BRITAIN
PGP fingerprint: E1 03 BF 80 94 61 B6 FC  50 3D 1F 64 40 75 FB 53
From: Robert Virding
Subject: Re: The psychology of premature optimization
Date: 
Message-ID: <75djq5$3ov$2@news.du.etx.ericsson.se>
In article <··············@ebi.ac.uk>, Philip Lijnzaad <········@ebi.ac.uk> writes:
>
>Along similar lines, but even stronger, is Steve Oualline's first rule on
>optimizing: "don't". This is in his O'Reilly book Practical C++
>Programming. The first rule on that particular activity, BTW, is also a
>resounding "don't" :-) 
>
>Merry Lispmas,

I have a similar reference:

Rules of Optimisation:
Rule 1: Don't do it.
Rule 2 (for experts only): Don't do it yet.
M.A. Jackson

-- 
Robert Virding                          Tel: +46 (0)8 719 95 28
Computer Science Laboratory             Email: ··@erix.ericsson.se
Ericsson Telecom AB
S-126 25 ÄLVSJÖ, SWEDEN
WWW: http://www.ericsson.se/cslab/~rv
"Folk s�ger att jag inte bryr mig om n�gonting, men det skiter jag i".
From: Bruce Samuelson
Subject: optimize early sometimes [was Re: The psychology of premature optimization]
Date: 
Message-ID: <3679A925.F13A9682@acm.org>
Tim Rowledge wrote:
> 
> In article <··························@news.demon.co.uk>, Martin Rodgers
> <··············@wildcard.butterfly.demon.co.uk> wrote:
> 
> > As someone once said, "Make it right before you make it fast."
> That someone might even have been me -- when I used to teach Smalltalk classes, I used to preach
> Make it work
> Make it work _right_
> Make it work right _fast_
> 
> as in
> get it to do what you want
> then if you need, make it more general, better factored, etc.
> then if you need it again, it's time to really bang on it.

This thread has expressed a remarkable consensus on the merits of
deferring optimization. I'd like to give a counterexample where it is
dangerous to wait too long. Assume that

- the problem is dominated by complex algorithms
- the algorithms are difficult to design and implement
- they vary tremendously in their performance

An example is compiling grammars for natural languages and then parsing
sentences. To build a system with even adequate performance, you can't
just implement something and then optimize it. The algorithms you choose
may not have the necessary horsepower. A recommended approach when the
problem is dominated by algorithmic complexity is to:

- study the available algorithms
- select suitable ones (as simple as possible that perform adequately)
- design the objects
- implement them
- do detailed tuning if needed
- find better algorithms if needed (but this will be costly)

Tuning by choosing algorithms often has much more impact than tuning by
tweaking the code. The latter can be deferred, but the former often
cannot. Design patterns that support interchangeable algorithms will
help, but if the bulk of the work is in implementing the algorithms
themselves, you still have an incentive to choose good ones up front.
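
One lightweight way to keep algorithms interchangeable in Lisp is to pass
the algorithm itself as a function, so swapping in a better parser never
touches the call sites.  A minimal sketch (the parser names and their
stand-in bodies are invented for illustration):

  ;; Two stand-in "algorithms" -- real parsers would go here.
  (defun chart-parse (sentence)
    (list :chart-parse sentence))

  (defun glr-parse (sentence)
    (list :glr-parse sentence))

  ;; Callers go through one entry point, so replacing an algorithm that
  ;; lacks horsepower is a one-argument change, not a rewrite.
  (defun parse-sentence (sentence &key (algorithm #'chart-parse))
    (funcall algorithm sentence))

  ;; (parse-sentence "the dog barks")                        ; default
  ;; (parse-sentence "the dog barks" :algorithm #'glr-parse) ; swapped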
From: Duane Rettig
Subject: Re: optimize early sometimes [was Re: The psychology of premature optimization]
Date: 
Message-ID: <4g1ae17sx.fsf@beta.franz.com>
Bruce Samuelson <···············@acm.org> writes:

> Tim Rowledge wrote:
> > 
> > In article <··························@news.demon.co.uk>, Martin Rodgers
> > <··············@wildcard.butterfly.demon.co.uk> wrote:
> > 
> > > As someone once said, "Make it right before you make it fast."
> > That someone might even have been me -- when I used to teach Smalltalk classes, I used to preach
> > Make it work
> > Make it work _right_
> > Make it work right _fast_
> > 
> > as in
> > get it to do what you want
> > then if you need, make it more general, better factored, etc.
> > then if you need it again, it's time to really bang on it.
> 
> This thread has expressed a remarkable consensus on the merits of
> deferring optimization. I'd like to give a counterexample where it is
> dangerous to wait too long.

I submit that this example does not go counter to the consensus.
Read on ...

>   Assume that
> 
> - the problem is dominated by complex algorithms
> - the algorithms are difficult to design and implement
> - they vary tremendously in their performance

I further submit that there is a requirement not stated in the
problem domain above: "the result must be fast enough" for some
specific measure of "fast enough".

This then implies that there is an acceptable level of speed
for the resulting application, below which it can be considered
that the application doesn't work because it doesn't meet the
minimum speed requirements.  To get it working "correctly", you
would have to meet those minimum speed requirements.  Once you
meet this requirement, any further optimization is wrong,
because it goes against our consensus that one must get it right
first, and then get it fast later.
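
A minimal Lisp sketch of treating the speed floor as part of correctness
(the two-second budget and the PARSE-CORPUS workload are hypothetical):

  ;; True iff calling THUNK stays within BUDGET-SECONDS of real time.
  ;; A requirement expressed this way can sit in the test suite right
  ;; next to the answer-correctness tests.
  (defun fast-enough-p (thunk budget-seconds)
    (let ((start (get-internal-real-time)))
      (funcall thunk)
      (<= (/ (- (get-internal-real-time) start)
             internal-time-units-per-second)
          budget-seconds)))

  ;; e.g. (assert (fast-enough-p (lambda () (parse-corpus *sentences*))
  ;;                             2.0))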

Your example has in it some of the key words that demonstrate my
point:

> An example is compiling grammars for natural languages and then parsing
> sentences. To build a system with even adequate performance, you can't
=========================================^^^^^^^^
> just implement something and then optimize it. The algorithms you choose
> may not have the necessary horsepower. A recommended approach when the
> problem is dominated by algorithmic complexity is to:
> 
> - study the available algorithms
> - select suitable ones (as simple as possible that perform adequately)
=============================================================^^^^^^^^^^
> - design the objects
> - implement them
===================================
> - do detailed tuning if needed
> - find better algorithms if needed (but this will be costly)

Note that the flagged lines are establishing a minimum acceptability
requirement, not some vague notion that the app be "as fast as possible"
or "just a little bit faster", both unmeasurable goals.

The last two bullets fall under the category of "optimization", and
should thus be saved for after the application is working correctly
(including having "acceptable" performance).

> Tuning by choosing algorithms often has much more impact than tuning by
> tweaking the code. The latter can be deferred, but the former often
> cannot. Design patterns that support interchangeable algorithms will
> help, but if the bulk of the work is in implementing the algorithms
> themselves, you still have an incentive to choose good ones up front.

Of course, the goodness or badness of the algorithms is determined up
front by whether they meet the requirements.  Those algorithms that are
not fast enough to meet minimum requirements are simply wrong.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Barry Margolin
Subject: Re: optimize early sometimes [was Re: The psychology of premature optimization]
Date: 
Message-ID: <Sawe2.83$pr1.9060@burlma1-snr1.gtei.net>
In article <·············@beta.franz.com>,
Duane Rettig  <·····@franz.com> wrote:
>Bruce Samuelson <···············@acm.org> writes:
>> Tuning by choosing algorithms often has much more impact than tuning by
>> tweaking the code. The latter can be deferred, but the former often
>> cannot. Design patterns that support interchangeable algorithms will
>> help, but if the bulk of the work is in implementing the algorithms
>> themselves, you still have an incentive to choose good ones up front.
>
>Of course, the goodness or badness of the algorithms are determined up
>front by whether they meet the requirements.  Those algorithms that are
>not fast enough to meet minimum requirements are simply wrong.

It should also be noted that the types of early optimizations that are
generally warned against are "micro-optimizations".  E.g. low-level
data representation issues (how many bits to use for certain variables),
choice of sorting algorithms, etc.  These types of decisions can generally
be deferred until after profiling the initial implementation, but many
programmers waste valuable time during the initial coding worrying about
them unnecessarily.
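
In Common Lisp the cheap first step is simply to measure; a minimal
sketch (PROCESS-BATCH and *TEST-RECORDS* are hypothetical names):

  ;; TIME reports elapsed time and consing for one call.  Only the
  ;; parts it shows to be hot deserve micro-tuning; most implementations
  ;; also provide a profiler for a per-function breakdown.
  (time (process-batch *test-records*))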

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Don't bother cc'ing followups to me.
From: Carl Gundel
Subject: Re: The psychology of premature optimization
Date: 
Message-ID: <01be2a91$eed7d7e0$b338e49e@cgundel>
Martin Rodgers <···@wildcard.butterfly.demon.co.uk> wrote in article
<··························@news.demon.co.uk>...
<snip>
> > Only once we've gotten something to work should we turn to the task of
> > speeding up selected, critical portions that the compiler didn't make
> > fast enough for our purposes.
>  
> As someone once said, "Make it right before you make it fast."
> 
> > just my $.02 worth.

Sometimes making it fast is a good part of making it right, especially in
enterprise (and/or distributed) systems that must scale.  So the speed must
be built-in, by design.

-Carl
From: Tim Bradshaw
Subject: Re: The psychology of premature optimization
Date: 
Message-ID: <ey3hfutjvj6.fsf@todday.aiai.ed.ac.uk>
* Carl Gundel wrote:

> Sometimes making it fast is a good part of making it right, especially in
> enterprise (and/or distributed) systems that must scale.  So the speed must
> be built-in, by design.

There are two issues here, which are really being mixed up in this
thread.  

One is the `I can do some bit-twiddly-type things to shave a few
microseconds off something here' thing which is common in C-family
languages.  I think you just obviously don't want to do that until you
really know that this is on the critical path.

The other is `I need to write my algorithms such that they're not
significantly worse in complexity than they should be, so my system
will scale'.  And that kind of thing you probably want to do early.

Except this is all wrong, because the classic Lisp case of this is
using an alist when you `should' be using a hashtable -- a classic
complexity lossage case.  But perhaps that alist only ever has 3
things on it, so really the hashtable stuff will be slower, and
scaling turns out not to be an issue.
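
Concretely, a minimal sketch (the three-entry table is made up):

  ;; Alist: linear search, but trivial, and fast for a handful of keys.
  (defparameter *colors-alist*
    '((:red . "#ff0000") (:green . "#00ff00") (:blue . "#0000ff")))
  (cdr (assoc :green *colors-alist*))   ; => "#00ff00"

  ;; Hash table: constant-time lookup, but with setup and hashing
  ;; overhead that only pays off well past a few entries.
  (defparameter *colors-table* (make-hash-table))
  (setf (gethash :green *colors-table*) "#00ff00")
  (gethash :green *colors-table*)       ; => "#00ff00", T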

--tim
From: Carl Gundel
Subject: The prototype is your product (was: The psychology of premature optimization)
Date: 
Message-ID: <01be2abf$d9cd40a0$b338e49e@cgundel>
Tim Bradshaw <···@aiai.ed.ac.uk> wrote in article
<···············@todday.aiai.ed.ac.uk>...
> * Carl Gundel wrote:
> 
> > Sometimes making it fast is a good part of making it right, especially
> > in enterprise (and/or distributed) systems that must scale.  So the
> > speed must be built-in, by design.
> 
> There are two issues here, which are really being mixed up in this
> thread.  
> 
> One is the `I can do some bit-twiddly-type things to shave a few
> microseconds off something here' thing which is common in C-family
> languages.  I think you just obviously don't want to do that until you
> really know that this is on the critical path.
> 
> The other is `I need to write my algorithms such that they're not
> significantly worse in complexity than they should be, so my system
> will scale'.  And that kind of thing you probably want to do early.
> 
> Except this is all wrong, because the classic Lisp case of this is
> using an alist when you `should' be using a hashtable -- a classic
> complexity lossage case.  But perhaps that alist only ever has 3
> things on it, so really the hashtable stuff will be slower, and
> scaling turns out not to be an issue.

The issue that I meant to address is this mantra: "Just slap something
together (make it work), and then your prototype is the product.  Ship it."
 This was an early theme of Digitalk's, and I've met a lot of Smalltalk
programmers who adopted this attitude to their own harm.  And to think they
paid for that training.  Spaghetti Smalltalk (pasta programming).  :-(

-Carl
From: Joseph Bacanskas
Subject: Re: The prototype is your product (was: The psychology of premature  optimization)
Date: 
Message-ID: <367ACA8F.D2F1437D@mutualtravel.com>
Hi:

I once knew a person who referred to object spaghetti code as:

Ravioli Code!

Carl Gundel wrote:

> Tim Bradshaw <···@aiai.ed.ac.uk> wrote in article
> <···············@todday.aiai.ed.ac.uk>...
> > * Carl Gundel wrote:
> >
> > > Sometimes making it fast is a good part of making it right, especially
> > > in enterprise (and/or distributed) systems that must scale.  So the
> > > speed must be built-in, by design.
> >
> > There are two issues here, which are really being mixed up in this
> > thread.
> >
> > One is the `I can do some bit-twiddly-type things to shave a few
> > microseconds off something here' thing which is common in C-family
> > languages.  I think you just obviously don't want to do that until you
> > really know that this is on the critical path.
> >
> > The other is `I need to write my algorithms such that they're not
> > significantly worse in complexity than they should be, so my system
> > will scale'.  And that kind of thing you probably want to do early.
> >
> > Except this is all wrong, because the classic Lisp case of this is
> > using an alist when you `should' be using a hashtable -- a classic
> > complexity lossage case.  But perhaps that alist only ever has 3
> > things on it, so really the hashtable stuff will be slower, and
> > scaling turns out not to be an issue.
>
> The issue that I meant to address is this mantra: "Just slap something
> together (make it work), and then your prototype is the product.  Ship it."
>  This was an early theme of Digitalk's, and I've met a lot of Smalltalk
> programmers who adopted this attitude to their own harm.  And to think they
> paid for that training.  Spaghetti Smalltalk (pasta programming).  :-(
>
> -Carl

--
Thanks!!!

Joseph Bacanskas, Lead Software Engineer
Mutual Travel
1201 Third Avenue
Suite 1800
Seattle, WA  98101
USA

v:      206-676-4993
f:      206-676-4666
e:      ····@mutualtravel.com
From: Kelly Murray
Subject: Re: The prototype is your product (was: The psychology of premature optimization)
Date: 
Message-ID: <367AF6BC.436F5F1F@IntelliMarket.Com>
Carl Gundel wrote:
> 
> The issue that I meant to address is this mantra: "Just slap something
> together (make it work), and then your prototype is the product.  Ship it."
>  This was an early theme of Digitalk's, and I've met a lot of Smalltalk
> programmers who adopted this attitude to their own harm.  And to think they
> paid for that training.  Spaghetti Smalltalk (pasta programming).  :-(
> 
> -Carl

I used "Rapid Productizing" in my lisp marketing literature 
10 years ago, and I think it is a valuable feature for lisp.
But really this doesn't mean hack it up and release it.  It means
hack it up, get people to try the prototype, then rehack it and try
it again, and repeat until the users are satisfied; then perhaps
reimplement it, and then it's a product.  That is different from
hacking it up and then productizing the hack.  I think one should
consider rewriting the whole thing after a couple of iterations,
since in my opinion "premature productization" can leave you
supporting bad designs and implementations for decades, and even
Lisp doesn't make that a good thing, though it does help ease the
problem, primarily because of run-time typing and garbage collection.

-kelly
From: Carl Gundel
Subject: Re: The prototype is your product (was: The psychology of premature optimization)
Date: 
Message-ID: <01be2b0f$01196d60$0f02000a@stbbs>
Kelly Murray <···@IntelliMarket.Com> wrote in article
<·················@IntelliMarket.Com>...
> Carl Gundel wrote:
> > 
> > The issue that I meant to address is this mantra: "Just slap something
> > together (make it work), and then your prototype is the product.  Ship it."
> >  This was an early theme of Digitalk's, and I've met a lot of Smalltalk
> > programmers who adopted this attitude to their own harm.  And to think they
> > paid for that training.  Spaghetti Smalltalk (pasta programming).  :-(
> > 
> > -Carl
> 
> I used "Rapid Productizing" in my lisp marketing literature 
> 10 years ago, and I think it is a valuable feature for lisp.
> But really this doesn't mean hack it up, and release it,

Many people tried Smalltalk for its rapid iterative development facility
but didn't obtain good mentoring.  Smalltalk's hackability makes it easy to
throw something together quickly, but good software must be designed
(architected).  I don't know how many times I've pointed out dubious code
to other developers only to get the response "What's wrong with it?  It
works!"  Try to go back and maintain that later!  And when developers see
the need to throw away the prototype and build it over (and over), will
they get the go-ahead from their employer?

> it means
> hack it up, get people to try the prototype, and then rehack it
> and try it again, and  then repeat until the users are satisfied, 
> then perhaps reimplement it, then it's a product.  That is different
> than hacking it up and then productizing the hack.  
> I think one should consider
> rewriting the whole thing after a couple iterations, since in
> my opinion "premature productization" can leave you supporting
> bad designs and implementations for decades 
> and even lisp doesn't make that
> a good thing even if it does help make the problem better,
> primarily because of run-time typing and garbage collection.

I absolutely agree with what you are saying, but often when a project is
under deadline it's the prototype that ends up shipping.  I know it isn't
possible to build software right the first time, but I also think we should
aspire to that goal.

--Carl
From: Pierre Mai
Subject: Re: The prototype is your product (was: The psychology of premature optimization)
Date: 
Message-ID: <873e6cb42t.fsf@orion.dent.isdn.cs.tu-berlin.de>
"Carl Gundel" <·····@world.std.com> writes:

> I absolutely agree with what you are saying, but often when a project is
> under deadline it's the prototype that ends up shipping.  I know it isn't
> possible to build software right the first time, but I also think we should
> aspire to that goal.

But one should keep in mind that aspiring to the goal of getting it
right the first time can also quite effectively result in prototype
shipment: since we aspired to get it right the first time, we don't
really (want to) see the shortcomings of our design decisions (which
might only have really bad consequences in the maintenance phase).

This is why I've found it very important to ensure that all stakeholders
agree on the status of a prototype, i.e. that a created prototype will
never be shipped.  This decision must be made in advance, when agreeing on
a timetable for the project, and put down in writing.  Otherwise
project pressures will often weaken the resolve not to ship...

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>               http://home.pages.de/~trillian/
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Ralph Johnson
Subject: Re: The prototype is your product (was: The psychology of premature optimization)
Date: 
Message-ID: <75gfch$cig$1@vixen.cso.uiuc.edu>
"Carl Gundel" <·····@world.std.com> writes:

>I know it isn't
>possible to build software right the first time, but I also think we should
>aspire to that goal.

Engineering is the art of the possible.  If it isn't possible to do
something, don't try.

The problem is not that the first version of our software isn't right,
it is that what we ship isn't right.  We should only deliver software
that passes all our tests, and that is good enough for the customer.
If management forces us to deliver software that isn't right then we
should fire our managers.  Get another job!

A good methodology for developing software in Smalltalk is
"Extreme Programming".  See http://c2.com/cgi/wiki?ExtremeProgramming
This methodology develops software incrementally, and relies on
prototypes a lot.  It is focused on code, and so is sort of like
what people complained about Digitalk.  However, it also requires
constant refactoring, requires tests to be developed for all software
and requires the system to always pass all unit tests, and requires
a particular way of obtaining requirements and managing them.  Check
it out.

-Ralph Johnson
From: Carl Gundel
Subject: Re: The prototype is your product (was: The psychology of premature optimization)
Date: 
Message-ID: <01be2cf3$e0386560$b338e49e@cgundel>
Ralph Johnson <·······@cs.uiuc.edu> wrote in article
<············@vixen.cso.uiuc.edu>...
> "Carl Gundel" <·····@world.std.com> writes:
> 
> >I know it isn't
> >possible to build software right the first time, but I also think we
> >should aspire to that goal.
> 
> Engineering is the art of the possible.  If it isn't possible to do
> something, don't try.
> 
> The problem is not that the first version of our software isn't right,
> it is that what we ship isn't right.  We should only deliver software
> that passes all our tests, and that is good enough for the customer.
> If management forces us to deliver software that isn't right then we
> should fire our managers.  Get another job!

The problem as I see it is that not enough time is provided to build it
right.  The first version of a system is seldom right, and working under a
waterfall process (imposed from above) more or less forces you to ship the
first iteration (plus whatever minor changes you can slip in).  It looks as
though we may be given an opportunity to work on some ideas offline.

> A good methodology for developing software in Smalltalk is
> "Extreme Programming".  See http://c2.com/cgi/wiki?ExtremeProgramming
> This methodology develops software incrementally, and relies on
> prototypes a lot.  It is focused on code, and so is sort of like
> what people complained about Digitalk.  However, it also requires
> constant refactoring, requires tests to be developed for all software
> and requires the system to always pass all unit tests, and requires
> a particular way of obtaining requirements and managing them.  Check
> it out.

I've been paying attention to XP for some time now.  There are some things
about it which I find very attractive, like:
-iterative development
-pair programming (I'm fascinated with "grab the keyboard away from your
coworker")
-continuous refactoring
-unit testing

I find some things hard to accept:
-weak code ownership (anyone can just grab the next card, or story)
-do the simplest thing that could possibly work (conflicts with my notion
of deliberately building things, or planning ahead)
-the reversed 80/20 rule, which implies that the XP model is immutable, and
that it cannot be improved on (do anything differently than this and XP
doesn't work)

XP seems compelling enough to put aside my objections and give it a shot. 
I don't know if my employer will go for it though.

-Carl
From: Ronald E Jeffries
Subject: Re: The prototype is your product (was: The psychology of premature optimization)
Date: 
Message-ID: <0C8BCA0F0B064EEE.48B11E3547C5024A.C13982A8C781016A@library-proxy.airnews.net>
Hi Carl,

Can't resist putting an oar in:

On 21 Dec 1998 10:08:39 -0500, "Carl Gundel" <·······@kronos.com>
wrote:

>I've been paying attention to XP for some time now.  There are some things
>about it which I find very attractive, like:
>-iterative development
>-pair programming (I'm fascinated with "grab the keyboard away from your
>coworker")
>-continuous refactoring
>-unit testing
>
>I find some things hard to accept:

>-weak code ownership (anyone can just grab the next card, or story)

You don't say what your concerns are on ownership. Certainly to allow
collective code ownership, we need to get changes in and released very
quickly, or the conflicts get too burdensome.

What collective code ownership gives is that as you work on some piece
of business value, you can (and do) change whatever needs it. The
changes usually start with adding capability to classes that one
wouldn't usually own. Then, however, refactoring may dictate changes
to other methods.

If I can't push through the facilities I need in all classes, then I
have to stop and wait until the owner does it.  It should be clear that
this will really slow things down.  Alternatives, like extra Facades
and utility methods, lead to scuzzy code. 

I'm not at all sure that CCO could be used on all projects. I am sure,
having done things both ways, that it speeds things up immensely when
it is possible at all.

>-do the simplest thing that could possibly work (conflicts with my notion
>of deliberately building things, or planning ahead)

Yes, this one is tough. I think it comes down to recognizing that
we're not all that good at predicting the future, and anyway, if a
feature isn't needed today, it's not a good use of money to build it
today. "Simplest" does rely on the ability to refactor extensively,
when some new feature is needed, which links it back to code
ownership.

>-the reversed 80/20 rule, which implies that the XP model is immutable, and
>that it cannot be improved on (do anything differently than this and XP
>doesn't work)

Thanks for pointing this out. I believe, and I'm sure Kent does as
well, that many of the XP practices are useful in isolation. Certainly
the ones you mention above are.

Where our concern comes from is that folks often want to do a few of
the practices and then call it XP. We expect that sooner or later an
XP project will fail given all our best efforts. We would rather not
have some yahoo go off, do pair programming and no documentation,
fail, and then say "XP failed".

As for immutable, emphatically not. We make changes as they are
needed. We do it by looking hard at what isn't going just right, and
by figuring out what to do. 19 times out of 20 or more, we just find
that we need to go back to the basics. 

Much of what drives adding process to a project is fear. We would like
for people to try "pure" XP until they get the feel of it before they
start adding process that they THINK they need.

See http://c2.com/cgi/wiki?ImprovingExtremeProgramming for a little
discussion on how and when we modify the XP rules.
>
>XP seems compelling enough to put aside my objections and give it a shot. 
>I don't know if my employer will go for it though.

See http://c2.com/cgi/wiki?StartingWithExtremeProgramming for some
discussion of how one guy is trying to get started, and on
http://c2.com/cgi/wiki?DeveloperOnlyXp, I talk about how XP can be
used in the development organization, basically, I think, without needing
to get management permission. Most of the practices for developers are
like that: you can just do them.

Regards,

Ron

Ron Jeffries
http://www.armaties.com
Disclaimer:  I could be wrong -- but I'm not.
	(Eagles, "Victim of Love")
From: John L. Baker
Subject: Re: The psychology of premature optimization
Date: 
Message-ID: <9F69A81F48CCCADB.DCD22AC15FD5696F.E81FE94A548D8594@library-proxy.airnews.net>
Raffael Cavallaro <·······@mediaone.net> wrote in article
<························@raffaele.ne.mediaone.net>...
> In article <············@sparky.wolfe.net>, "Chas Wade"
> <········@wolfenet.com> wisely wrote:
> 
> 
> >      My experience has been that programmers are too eager to get faster
> >performance, and they both constrain their designs, and program too
> >carefully for performance in the early stages of a project.  Build first,
> >then work on the performance.  If your tools let you build fast enough, you
> >won't be disheartened by discarding even large portions when they prove too
> >slow because you can rebuild a second solution just as quickly.
> 
> Here's my take on why people optimize prematurely, FWIW. Life is
> uncertain. Will your project succeed? Will you be working at the same
> company in 6 months' time? Will you reach the market in time? etc., etc.,
> etc. The honest answer to all of these is "I don't know."
> 
> But one thing is for sure. If I work at it, I can get this bit of code to
> run faster. It's manageable, it has immediate, visible results, even if
> these results are irrelevant to the bigger questions we're really
> concerned with.
> 
> So people try to manage their uncertainty by being certain about something
> they can deal with directly - the speed of their code.
> 

I have seen another phenomenon associated with premature optimization that
can be exploited to advantage. This has less to do with writing
individual programs and is more typically associated with larger-scale
efforts.

A possibly premature optimization is sometimes seized upon as the rationale
for using a specific architecture. For example, if you just discovered some
really neat optimization of, say, linear programming or hashing or even
polymorphic method dispatch, you might be tempted to design an architecture
around the idea of making maximum use of that optimization. On more than
one occasion I have seen projects start this way. Once things get farther
along, a more rational assessment may well be that the original
optimization idea is no longer playing a role in the overall system design.
The original optimization was just a way to crystallize creative thought
around a common objective that was previously seen as impossible.

Somehow it makes it easier to get things going when you think there is a
magic bullet. Of course, this only works when you then use sound design
principles and have the objectivity to see when the original magic bullet
has turned back to ordinary lead. Sticking with a misapplied optimization
or worse, just assuming away problems, can lead to disaster.

-- 

John L. Baker

Advanced Boolean Concepts, Ltd.
http://www.advbool.com
From: Elan
Subject: Re: The psychology of premature optimization
Date: 
Message-ID: <36796A10.3927@mail.bellsouth.net>
I knew that inside of me but I didn't know how to express it.
I was looking for that kind of input from outside.
"Build first, improve later"
Thanks
From: Johan Kullstam
Subject: Re: The psychology of premature optimization
Date: 
Message-ID: <m2ww3npp43.fsf@sophia.axel.nom>
······@volition-inc.com (James Hague) writes:
> And there's always the issue of platform maturity.  I'd love to write
> for a UNIX variant if I could, but I can't.  I have to use Windows
> 95/98.  This operating system is really only three years old, and
> most compiler writers haven't come to grips with it yet.  Perhaps it
> is just that writing a Windows compiler is more complex than writing a
> Linux compiler, so fewer people are willing to do it.  Either way, the
> current batch of Windows Lisps are just beginning to scrape toward
> good code generation IMO.

no, the reason windows compilers are so bad is that the x86 arch is a
total loser.  no matter what the original language is, making the x86
do it well is difficult.  the 8086 came out before 1980 and 80386 came
before 1990 and we *still* have bad compilers - even for popular
languages like C.

-- 
Johan Kullstam [·······@idt.net] Don't Fear the Penguin!