From: Richard Smith
Subject: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <m2odz9b5b6.fsf@82-35-120-252.cable.ubr01.enfi.blueyonder.co.uk>
Hi again

Was trying to explain to my friend why Lisp is so good.

So Lisp, functional and bottom-up programming:

I explained this to my friend

How many ways to arrange a number of objects:
2 objects? a:b or b:a -> 2
3 objects? a:b:c  a:c:b  b:a:c  b:c:a  c:a:b  c:b:a -> 6
4 objects? ...messy
20 objects? ...!
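
A quick toplevel sketch (nothing beyond standard CL; ARRANGEMENTS is
just n!) makes the blow-up concrete:

(defun arrangements (n)
  "Number of ways to arrange N distinct objects: n factorial."
  (if (zerop n)
      1
      (* n (arrangements (1- n)))))

CL-USER> (arrangements 4)
24
CL-USER> (arrangements 20)
2432902008176640000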

So if a Lisp function does one thing and returns one output, every
"nugget" can be tested as it is written, because the permutations are
manageable
- whereas screeds of "standard" code offer vast permutations, making
bugginess and code-management problems (including non-reusability)
inevitable.

And if a functionally-programmed "nugget" (function) is found to be
defective, repairing that one piece restores the whole pile to correct
behaviour.

Does that ring true - keeping the permutations low and manageable?

Something like

"(defun cp ()
   "across-corners profile, measured linear dimensions"
   (curve-scale (cc) +sf+))"

as a typical function - testable (at the toplevel, as it is written)
and comprehensible.


Richard Smith

From: Larry Clapp
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <slrne3llbo.h4i.larry@theclapp.ddts.net>
On 2006-04-10, Richard Smith <······@weldsmith4.co.uk> wrote:
> Was trying to explain to my friend why Lisp is so good.
>
> So Lisp, functional and bottom-up programming:
>
> I explained this to my friend
>
> How many ways to arrange a number of objects:
> 2 objects? a:b or b:a -> 2
> 3 objects? a:b:c  a:c:b  b:a:c  b:c:a  c:a:b  c:b:a -> 6
> 4 objects? ...messy
> 20 objects? ...!
>
> So if a Lisp function does one thing and returns one output, every
> "nugget" can be tested as it is written, because the permutations are
> manageable
> - whereas screeds of "standard" code offer vast permutations, making
> bugginess and code-management problems (including non-reusability)
> inevitable.
>
> And if a functionally-programmed "nugget" (function) is found to be
> defective, repairing that one piece restores the whole pile to
> correct behaviour.
>
> Does that ring true - keeping the permutations low and manageable?
>
> Something like
>
> "(defun cp ()
>    "across-corners profile, measured linear dimensions"
>    (curve-scale (cc) +sf+))"
>
> as a typical function - testable (at the toplevel, as it is written)
> and comprehensible.

This derives from usage (or, in the Kent Pitman-esque sense, the
culture) of the language, not from the definition of the language.
People big on Test Driven Development and refactoring write small
functions too.  As I understand it, TDD originated in the Smalltalk
world.  Maybe they have small functions, too.

That said, I agree with you.  The *culture* of Lisp users generally
frowns on long hairy functions, whereas the *culture* of (say) C++ or
Fortran (so I've heard) doesn't.  But on the other hand, TDD folks
writing in C++ will probably write small functions, too.

As a Lisp hobbyist who does Perl for a living, I've latched on
big-time to the "domain-specific language" wagon, and so many of my
recent Perl scripts have very short functions, many only two lines
(one of which is "my $self = shift"); most of them implement a
(tada) domain-specific operation, which the rest of the script builds
upon.

One thing that Lisp does (more correctly, one thing that most Lisp
development environments do) that most other languages (development
environments) don't is make writing and testing small functions so
*easy*.  TDD in Perl or Java doesn't entirely suck, but it's still a
long way from a nice REPL.

I recall reading, long ago, an essay on the difference between what a
language *allows*, and what it *supports*.  Lots of languages *allow*
behavior that they nevertheless don't make *easy*.  Perl and Java
(etc) *allow* short functions and (sort of) fast edit/compile/test
cycles, but Lisp environments *support* such a cycle much better.

-- L
From: Rob Thorpe
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <1144749320.482836.201540@i40g2000cwc.googlegroups.com>
Larry Clapp wrote:
> On 2006-04-10, Richard Smith <······@weldsmith4.co.uk> wrote:
> > Was trying to explain to my friend why Lisp is so good.
> >
> > So Lisp, functional and bottom-up programming:
> >
> > I explained this to my friend
> >
> > How many ways to arrange a number of objects:
> > 2 objects? a:b or b:a -> 2
> > 3 objects? a:b:c  a:c:b  b:a:c  b:c:a  c:a:b  c:b:a -> 6
> > 4 objects? ...messy
> > 20 objects? ...!
> >
> > So if a Lisp function does one thing and returns one output, every
> > "nugget" can be tested as it is written, because the permutations are
> > manageable
> > - whereas screeds of "standard" code offer vast permutations, making
> > bugginess and code-management problems (including non-reusability)
> > inevitable.
> >
> > And if a functionally-programmed "nugget" (function) is found to be
> > defective, repairing that one piece restores the whole pile to
> > correct behaviour.
> >
> > Does that ring true - keeping the permutations low and manageable?
> >
> > Something like
> >
> > "(defun cp ()
> >    "across-corners profile, measured linear dimensions"
> >    (curve-scale (cc) +sf+))"
> >
> > as a typical function - testable (at the toplevel, as it is written)
> > and comprehensible.
>
> This derives from usage (or, in the Kent Pitman-esque sense, the
> culture) of the language, not from the definition of the language.
> People big on Test Driven Development and refactoring write small
> functions too.  As I understand it, TDD originated in the Smalltalk
> world.  Maybe they have small functions, too.
>
> That said, I agree with you.  The *culture* of Lisp users generally
> frowns on long hairy functions, whereas the *culture* of (say) C++ or
> Fortran (so I've heard) doesn't.  But on the other hand, TDD folks
> writing in C++ will probably write small functions, too.

It has always been a guideline of good programming to write fairly
simple functions.  Often that means writing fairly small functions,
but not always.  A 1000-line conditional (e.g. a cond/switch/case
statement) can be as easy to understand as a 20-line function
containing sophisticated logic.  The complexity of an individual
function seems to me to depend firstly on its length and secondly on
the level of indentation it requires.
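
To illustrate (a sketch; OPCODE-NAME is hypothetical): each clause of
a flat conditional stands alone, so the form stays readable however
many clauses you add.

(defun opcode-name (op)
  ;; Flat dispatch: hundreds of clauses add length, not complexity.
  (case op
    (0 :nop)
    (1 :load)
    (2 :store)
    ;; ... one clause per opcode ...
    (t :unknown)))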

In Fortran, large, long functions are often used, but in good Fortran
code these functions are clearly broken up internally.  The intention
is often to give the compiler the opportunity to optimize loops that
access arrays.  Sometimes they are not designed this way at all:
several functions are retrospectively merged into one long function
to aid performance.

Of course, lots of Fortran code isn't good, and just contains long
functions because it's badly written.

> I recall reading, long ago, an essay on the difference between what a
> language *allows*, and what it *supports*.  Lots of languages *allow*
> behavior that they nevertheless don't make *easy*.  Perl and Java
> (etc) *allow* short functions and (sort of) fast edit/compile/test
> cycles, but Lisp environments *support* such a cycle much better.

Yes.  The ability to return lists or multiple values helps a lot in
writing small functions.  Many of my C programs have long functions
just in order to feed all the information correctly to the next stage
of the process.
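
For example (standard CL; FLOOR already returns two values, where C
would need an out-parameter or a struct):

(multiple-value-bind (quotient remainder) (floor 17 5)
  (list quotient remainder))
;; => (3 2)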
From: Larry Clapp
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <slrne3n6l1.q8q.larry@theclapp.ddts.net>
On 2006-04-11, Rob Thorpe <·············@antenova.com> wrote:
> Larry Clapp wrote:
>> I recall reading, long ago, an essay on the difference between what
>> a language *allows*, and what it *supports*.  Lots of languages
>> *allow* behavior that they nevertheless don't make *easy*.  Perl
>> and Java (etc) *allow* short functions and (sort of) fast
>> edit/compile/test cycles, but Lisp environments *support* such a
>> cycle much better.
>
> Yes.  The ability to return lists or multiple values helps a lot in
> writing small functions.  Many of my C programs have long functions
> just in order to feed all the information correctly to the next
> stage of the process.

Well, multiple values help, too, but I meant the speed with which you
get feedback on a new function, and the ability to have all your data
in the image and play with it there, as opposed to reloading it every
time you run your test suite.  Also the ability to change a function's
signature, and test it, and not have to change everything that calls
it just to get it to compile.  That sort of thing.
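
E.g. (a hypothetical session; AREA and REPORT are made up):

CL-USER> (defun area (w h) (* w h))
AREA
CL-USER> (defun report () (area 2 3))
REPORT
CL-USER> ;; add an optional parameter; REPORT itself needs no edits
CL-USER> (defun area (w h &optional (scale 1)) (* w h scale))
AREA
CL-USER> (report)
6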

-- L
From: Pascal Costanza
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <4a11juFqvlupU1@individual.net>
Larry Clapp wrote:
> On 2006-04-10, Richard Smith <······@weldsmith4.co.uk> wrote:
> 
>>Was trying to explain to my friend why Lisp is so good.
>>
>>So Lisp, functional and bottom-up programming:
>>
>>I explained this to my friend
>>
>>How many ways to arrange a number of objects:
>>2 objects? a:b or b:a -> 2
>>3 objects? a:b:c  a:c:b  b:a:c  b:c:a  c:a:b  c:b:a -> 6
>>4 objects? ...messy
>>20 objects? ...!
>>
>>So if a Lisp function does one thing and returns one output, every
>>"nugget" can be tested as it is written, because the permutations are
>>manageable
>>- whereas screeds of "standard" code offer vast permutations, making
>>bugginess and code-management problems (including non-reusability)
>>inevitable.
>>
>>And if a functionally-programmed "nugget" (function) is found to be
>>defective, repairing that one piece restores the whole pile to
>>correct behaviour.
>>
>>Does that ring true - keeping the permutations low and manageable?
>>
>>Something like
>>
>>"(defun cp ()
>>   "across-corners profile, measured linear dimensions"
>>   (curve-scale (cc) +sf+))"
>>
>>as a typical function - testable (at the toplevel, as it is written)
>>and comprehensible.
> 
> 
> This derives from usage (or, in the Kent Pitman-esque sense, the
> culture) of the language, not from the definition of the language.
> People big on Test Driven Development and refactoring write small
> functions too.  As I understand it, TDD originated in the Smalltalk
> world.  Maybe they have small functions, too.
> 
> That said, I agree with you.  The *culture* of Lisp users generally
> frowns on long hairy functions, whereas the *culture* of (say) C++ or
> Fortran (so I've heard) doesn't.  But on the other hand, TDD folks
> writing in C++ will probably write small functions, too.

Long functions are not necessarily hairy. I have some doubts whether 
keeping all functions small really helps. As with all guiding 
principles, they should be taken as mere guidelines, and not as rules 
that you should strictly adhere to. There is evidence that keeping all 
functions small is actually a bad thing. See 
http://www.leshatton.org/IEEE_Soft_97b.html


Pascal

-- 
3rd European Lisp Workshop
July 3-4 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Rob Thorpe
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <1144761914.852763.146730@i40g2000cwc.googlegroups.com>
Pascal Costanza wrote:
> Larry Clapp wrote:
> > On 2006-04-10, Richard Smith <······@weldsmith4.co.uk> wrote:
> >
> >>Was trying to explain to my friend why Lisp is so good.
> >>
> >>So Lisp, functional and bottom-up programming:
> >>
> >>I explained this to my friend
> >>
> >>How many ways to arrange a number of objects:
> >>2 objects? a:b or b:a -> 2
> >>3 objects? a:b:c  a:c:b  b:a:c  b:c:a  c:a:b  c:b:a -> 6
> >>4 objects? ...messy
> >>20 objects? ...!
> >>
> >>So if a Lisp function does one thing and returns one output, every
> >>"nugget" can be tested as it is written, because the permutations are
> >>manageable
> >>- whereas screeds of "standard" code offer vast permutations, making
> >>bugginess and code-management problems (including non-reusability)
> >>inevitable.
> >>
> >>And if a functionally-programmed "nugget" (function) is found to be
> >>defective, repairing that one piece restores the whole pile to
> >>correct behaviour.
> >>
> >>Does that ring true - keeping the permutations low and manageable?
> >>
> >>Something like
> >>
> >>"(defun cp ()
> >>   "across-corners profile, measured linear dimensions"
> >>   (curve-scale (cc) +sf+))"
> >>
> >>as a typical function - testable (at the toplevel, as it is written)
> >>and comprehensible.
> >
> >
> > This derives from usage (or, in the Kent Pitman-esque sense, the
> > culture) of the language, not from the definition of the language.
> > People big on Test Driven Development and refactoring write small
> > functions too.  As I understand it, TDD originated in the Smalltalk
> > world.  Maybe they have small functions, too.
> >
> > That said, I agree with you.  The *culture* of Lisp users generally
> > frowns on long hairy functions, whereas the *culture* of (say) C++ or
> > Fortran (so I've heard) doesn't.  But on the other hand, TDD folks
> > writing in C++ will probably write small functions, too.
>
> Long functions are not necessarily hairy. I have some doubts whether
> keeping all functions small really helps. As with all guiding
> principles, they should be taken as mere guidelines, and not as rules
> that you should strictly adhere to. There is evidence that keeping all
> functions small is actually a bad thing. See
> http://www.leshatton.org/IEEE_Soft_97b.html

Ah yes, Les Hatton.  I remember reading about this in IEE magazine
sometime in the 90s.  His theory is that functions in the range 100-250
lines are the most reliable, and that both long and short functions
should be avoided.

I've never had the guts to apply this theory.  I've measured my own
output, and I write functions that have a mean length of 43 lines.
When I get around to it I'll measure bug density against length of
function and see if it works for my code.

His theory is that the best length for a function follows from the
number of things a person can think about at once being 7 +/- 2.  My
problem is that this leads to such a long "best" function size.  I
often find myself struggling to understand functions of 50 lines
because I have to remember too many things at once.

There are a couple of reasons, other than Les's, that I can think of
for this happening:
* I think part of the reason small functions often have bugs is that
a function of, e.g., 10 lines in C will often be missing some
critical error test.
* Often a bug happens between functions.  Neither caller nor callee
can be blamed, but together they miss something.  In this case the
programmer puts the fix in the shorter function on the premise that
it leads to clearer code later.
From: Pascal Costanza
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <4a25esFr6kr4U2@individual.net>
Rob Thorpe wrote:
> Pascal Costanza wrote:
> 
>>Long functions are not necessarily hairy. I have some doubts whether
>>keeping all functions small really helps. As with all guiding
>>principles, they should be taken as mere guidelines, and not as rules
>>that you should strictly adhere to. There is evidence that keeping all
>>functions small is actually a bad thing. See
>>http://www.leshatton.org/IEEE_Soft_97b.html
> 
> Ah yes, Les Hatton.  I remember reading about this in IEE magazine
> sometime in the 90s.  His theory is that functions in the range 100-250
> lines are the most reliable, and that both long and short functions
> should be avoided.
> 
> I've never had the guts to apply this theory.  I've measured my own
> output, and I write functions that have a mean length of 43 lines.
> When I get around to it I'll measure bug density against length of
> function and see if it works for my code.
> 
> His theory is that the best length for a function follows from the
> number of things a person can think about at once being 7 +/- 2.  My
> problem is that this leads to such a long "best" function size.  I
> often find myself struggling to understand functions of 50 lines
> because I have to remember too many things at once.
> 
> There are a couple of reasons, other than Les's, that I can think of
> for this happening:
> * I think part of the reason small functions often have bugs is that
> a function of, e.g., 10 lines in C will often be missing some
> critical error test.
> * Often a bug happens between functions.  Neither caller nor callee
> can be blamed, but together they miss something.  In this case the
> programmer puts the fix in the shorter function on the premise that
> it leads to clearer code later.

Of course, it's not a given that a change of programming language 
would leave his findings unchanged. Basically, the study would have 
to be repeated across several languages to be certain that there is 
a common "law" at work.


Pascal

-- 
3rd European Lisp Workshop
July 3-4 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Pascal Bourguignon
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <873bgjn9vr.fsf@thalassa.informatimago.com>
"Rob Thorpe" <·············@antenova.com> writes:

>> http://www.leshatton.org/IEEE_Soft_97b.html
>
> Ah yes, Les Hatton.  I remember reading about this in IEE magazine
> sometime in the 90s.  His theory is that functions in the range 100-250
> lines are the most reliable, and that both long and short functions
> should be avoided.

You need to take into account the language factor.

100-250 LOC of C correspond to 10-25 LOC of Lisp.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

ADVISORY: There is an extremely small but nonzero chance that,
through a process known as "tunneling," this product may
spontaneously disappear from its present location and reappear at
any random place in the universe, including your neighbor's
domicile. The manufacturer will not be responsible for any damages
or inconveniences that may result.
From: Rob Thorpe
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <1144832413.745935.175640@j33g2000cwa.googlegroups.com>
Pascal Bourguignon wrote:
> "Rob Thorpe" <·············@antenova.com> writes:
>
> >> http://www.leshatton.org/IEEE_Soft_97b.html
> >
> > Ah yes, Les Hatton.  I remember reading about this in IEE magazine
> > sometime in the 90s.  His theory is that functions in the range 100-250
> > lines are the most reliable, and that both long and short functions
> > should be avoided.
>
> You need to take into account the language factor.
>
> 100-250 LOC of C correspond to 10-25 LOC of Lisp.

:) I'm talking mainly about C and languages like it, not Lisp!

The C function I'm editing today is only 91 lines long, "too small"
by Les Hatton's theory.  But it contains 9 local variables and
accesses 2 program globals and 2 file-static variables.

I already find it too hard to remember all these things at this
length of function.  And the function isn't even a complex one; it's
full of vertical whitespace and comments.

Hopefully no one in the world writes 100-250-line Lisp functions.
Regardless of what Les says, that's something I'm going to try to
avoid too.
From: Pascal Bourguignon
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <8764lfkm92.fsf@thalassa.informatimago.com>
"Rob Thorpe" <·············@antenova.com> writes:
> Hopefully no one in the world writes 100-250-line Lisp functions.
> Regardless of what Les says, that's something I'm going to try to
> avoid too.

Unfortunately, there are quite a number of Emacs Lisp functions that
are that long or longer.  They are not always easy to understand...

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

"Do not adjust your mind, there is a fault in reality"
 -- on a wall many years ago in Oxford.
From: Larry Clapp
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <slrne3n696.q8q.larry@theclapp.ddts.net>
On 2006-04-11, Pascal Costanza <··@p-cos.net> wrote:
> Larry Clapp wrote:
>> On 2006-04-10, Richard Smith <······@weldsmith4.co.uk> wrote:
[some stuff about why Lisp is great]
>> That said, I agree with you.  The *culture* of Lisp users generally
>> frowns on long hairy functions, whereas the *culture* of (say) C++
>> or Fortran (so I've heard) doesn't.  But on the other hand, TDD
>> folks writing in C++ will probably write small functions, too.
>
> Long functions are not necessarily hairy. I have some doubts whether
> keeping all functions small really helps. As with all guiding
> principles, they should be taken as mere guidelines, and not as
> rules that you should strictly adhere to. There is evidence that
> keeping all functions small is actually a bad thing. See
> http://www.leshatton.org/IEEE_Soft_97b.html

I downloaded it.  Thanks for the link!

-- L
From: Pascal Costanza
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <4a010lFq6ilcU1@individual.net>
Richard Smith wrote:
> Hi again
> 
> Was trying to explain to my friend why Lisp is so good.
> 
> So Lisp, functional and bottom-up programming:
> 
> I explained this to my friend
> 
> How many ways to arrange a number of objects:
> 2 objects? a:b or b:a -> 2
> 3 objects? a:b:c  a:c:b  b:a:c  b:c:a  c:a:b  c:b:a -> 6
> 4 objects? ...messy
> 20 objects? ...!
> 
> So if a Lisp function does one thing and returns one output, every
> "nugget" can be tested as it is written, because the permutations are
> manageable
> - whereas screeds of "standard" code offer vast permutations, making
> bugginess and code-management problems (including non-reusability)
> inevitable.
> 
> And if a functionally-programmed "nugget" (function) is found to be
> defective, repairing that one piece restores the whole pile to correct
> behaviour.
> 
> Does that ring true - keeping the permutations low and manageable?

This is not only true of functional programming. See the literature on 
XP and test-first programming, which typically focuses on 
object-oriented programming and makes similar claims.


Pascal

-- 
3rd European Lisp Workshop
July 3-4 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Alan Crowe
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <86k69w864h.fsf@cawtech.freeserve.co.uk>
Pascal Costanza <··@p-cos.net> writes:
> 
> This is not only true of functional programming. See the literature on
> XP and test-first programming, which typically focuses on
> object-oriented programming and makes similar claims.

I find the arguments for test-first coding persuasive and
intend to try it out. The workflow that I want to test is:

1) design

2) write test cases for function one

3) write test cases for function two

4) write test cases for function three

5, 6 & 7) write the functions to pass the tests
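
In Lisp, step 2 might look like this (a sketch; PYTHAG is
hypothetical and does not yet exist when its test is written):

(defun test-pythag ()
  ;; Written first: pins down the contract before PYTHAG exists.
  (assert (= (pythag 3 4) 5))
  (assert (= (pythag 5 12) 13))
  :pass)

;; Steps 5-7: write PYTHAG until TEST-PYTHAG returns :PASS.
(defun pythag (a b)
  (sqrt (+ (* a a) (* b b))))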

What I've read of XP tells me not to do a Big Upfront Design
but to design as I go, and even to let the design emerge
from the process of refactoring.

This clashes quite sharply with my understanding of what
refactoring is.

Suppose you want to add a new feature to your program. The
theory of refactoring is that it is best to do this in separate
stages. First you refactor, which is purely a matter of
improving the structure of your code. You do not attempt to
add any of the new functionality at this stage. This is
important because you can continue to test against the old
test cases. Once you are happy that you have a sound basis
for further progress, you proceed to stages two and three. Stage
two - augment the test cases. Stage three - write the code.

But what is refactoring? The most incisive definition that I
can come up with is that it is a version of the
once-and-only-once principle. If you have duplicate code,
you factor it out.

(defun this (x y u v)
  (that (sqrt (+ (* x x)(* y y)))
        (sqrt (+ (* u u)(* v v)))))

becomes

(defun this (x y u v)
  (flet ((pythag (a b)
           (sqrt (+ (* a a)(* b b)))))
    (that (pythag x y)
          (pythag u v))))

There is a snag. How do you tell what duplication is
redundant and what duplication is coincidental. For example,
both arguments to THAT were computed using the L2 norm. Why?
Is there something about the application that means that we
only ever use the L2 norm? When we refactor we are
committing ourselves to the position that the duplication
was redundant, not coincidental.

If we have realised that we would get better results by
using the L1 norm for the second argument to THAT, our
preparatory work might go the other way, expanding the call
of THAT from

(that (pythag x y)
      (pythag u v))

to

(that (pythag x y)
      (sqrt (+ (* u u)(* v v))))

in anticipation of

(that (pythag x y)
      (+ (abs u)(abs v)))

So this question "When is duplication coincidental and when
redundant?" does have an answer. One thinks about likely
changes to the code. If the same change gets made in both
places then the duplication was redundant. If the change
gets made in only one place, then the duplication is
coincidental, and we lose by factoring it out.

I don't think the concept of refactoring makes sense
without a plan or design that offers answers to questions
about the kinds of changes that are in prospect. Big Upfront
Design, at least in the sense of taking some decisions about
where we might go with the code and where we will not go
with the code, seems to be mandatory.

My nightmare about XP is that it has two parts, a good part,
test-first coding, and a bad part, the idea that test-first
coding lets you get away without an initial design. My guess
is that these cancel out leaving you no better off than if
you hadn't bothered, but you could have cherry picked, just
the good bit, and been much better off.

To refine my nightmare a little, I dread that test-first
coding is going to be tainted by its close association with XP.

Alan Crowe
Edinburgh
Scotland
From: Peter Seibel
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <m2zmis139e.fsf@gigamonkeys.com>
Alan Crowe <····@cawtech.freeserve.co.uk> writes:

> Pascal Costanza <··@p-cos.net> writes:
>> 
>> This is not only true of functional programming. See the literature
>> on XP and test-first programming, which typically focuses on
>> object-oriented programming and makes similar claims.
>
> I find the arguments for test-first coding persuasive and intend to
> try it out. The workflow that I want to test is:
>
> 1) design
>
> 2) write test cases for function one
>
> 3) write test cases for function two
>
> 4) write test cases for function three
>
> 5, 6 & 7) write the functions to pass the tests
>
> What I've read of XP tells me not to do a Big Upfront Design but to
> design as I go, and even to let the design emerge from the process
> of refactoring.
>
> This clashes quite sharply with my understanding of what refactoring
> is.
>
> Suppose you want to add a new feature to your program. The theory of
> refactoring is that it is best to do this in separate stages. First
> you refactor, which is purely a matter of improving the structure of
> your code. You do not attempt to add any of the new functionality at
> this stage. This is important because you can continue to test
> against the old test cases. Once you are happy that you have a sound
> basis for further progress, you proceed to stages two and three.
> Stage two - augment the test cases. Stage three - write the code.
>
> But what is refactoring? The most incisive definition that I can
> come up with is that it is a version of the once-and-only-once
> principle. If you have duplicate code, you factor it out.
>
> (defun this (x y u v)
>   (that (sqrt (+ (* x x)(* y y)))
>         (sqrt (+ (* u u)(* v v)))))
>
> becomes
>
> (defun this (x y u v)
>   (flet ((pythag (a b)
>            (sqrt (+ (* a a)(* b b)))))
>     (that (pythag x y)
>           (pythag u v))))

While once-and-only-once is a good principle and almost always a good
one to apply, there are other principles that can be applied to guide
refactoring, such as choosing names that make the code more
understandable (THIS? THAT?! ;-)), reducing coupling, and increasing
cohesion.

> There is a snag. How do you tell which duplication is redundant and
> which is coincidental? For example, both arguments to
> THAT were computed using the L2 norm. Why? Is there something about
> the application that means that we only ever use the L2 norm? When
> we refactor we are committing ourselves to the position that the
> duplication was redundant, not coincidental.

Well, you're only committing yourself insofar as you develop some
disability that prevents you from changing it back should the need
arise.

> If we have realised that we would get better results by using the L1
> norm for the second argument to THAT, our preparatory work might go
> the other way, expanding the call of THAT from
>
> (that (pythag x y)
>       (pythag u v))
>
> to
>
> (that (pythag x y)
>       (sqrt (+ (* u u)(* v v))))

One point of view is that this code is incompletely factored--it may
be an intermediate step on the way to the next version but if you left
it this way it's just a sloppy bit of code that contains needless
duplication, coincidental or otherwise. Which isn't to say that you
mightn't run your tests against this intermediate version just as a
sanity check.

> in anticipation of
>
> (that (pythag x y)
>       (+ (abs u)(abs v)))
>
> So this question "When is duplication coincidental and when
> redundant?" does have an answer. One thinks about likely
> changes to the code. If the same change gets made in both
> places then the duplication was redundant. If the change
> gets made in only one place, then the duplication is
> coincidental, and we lose by factoring it out.

The loss seems small to me. You can always change it later if you
want.

> I don't think the concept of refactoring makes sense
> without a plan or design that offers answers to questions
> about the kinds of changes that are in prospect. Big Upfront
> Design, at least in the sense of taking some decisions about
> where we might go with the code and where we will not go
> with the code, seems to be mandatory.

There is a difference between Big Upfront Design and thinking about
where you want to go next. Namely the "Big". When doing TDD
refactoring is typically driven by the need to add a feature. You may
go to add some new functionality and discover that there's no good
place to add it and decide to refactor the code first so it is
functionally equivalent but with a good place to add the new
functionality. Or you shove the new functionality in, in some
completely ugly way, get it working, and then refactor the new code to
both work and be non-ugly. In the first case the refactoring is
driven, as you say, by knowing what changes are in the offing. In the
second case the refactoring is driven more by a sense of
esthetics--the code is a mess and you need to tidy it up so you can
understand it next time you come back to it.

> My nightmare about XP is that it has two parts, a good part,
> test-first coding, and a bad part, the idea that test-first coding
> lets you get away without an initial design. My guess is that these
> cancel out leaving you no better off than if you hadn't bothered,
> but you could have cherry picked, just the good bit, and been much
> better off.

I have to say I don't strictly follow any particular methodology. But
I can say, from experience, that it is interesting to do some
experiments with both parts and see how far you can get without
looking more than one design step ahead of where the working code
already is.

> To refine my nightmare a little, I dread that test-first
> coding is going to be tainted by its close association with XP.

That seems unlikely insofar as one of the most common criticisms of XP
is, "There's nothing new here--good programmers have been doing all
these things for decades without calling them XP."

-Peter

-- 
Peter Seibel           * ·····@gigamonkeys.com
Gigamonkeys Consulting * http://www.gigamonkeys.com/
Practical Common Lisp  * http://www.gigamonkeys.com/book/
From: Pascal Costanza
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <4a2590Fr6kr4U1@individual.net>
Alan Crowe wrote:
> Pascal Costanza <··@p-cos.net> writes:
> 
>>This is not only true of functional programming. See the literature on 
>>XP and test-first programming, which typically focuses on 
>>object-oriented programming and makes similar claims.
> 
> 
> I find the arguments for test-first coding persuasive and
> intend to try it out. The workflow that I want to test is:
> 
> 1) design
> 
> 2) write test cases for function one
> 
> 3) write test cases for function two
> 
> 4) write test cases for function three
> 
> 5, 6 & 7) write the functions to pass the tests
> 
> What I've read of XP tells me not to do a Big Upfront Design
> but to design as I go, and even to let the design emerge
> from the process of refactoring.
> 
> This clashes quite sharply with my understanding of what
> refactoring is.
> 
> Suppose you want to add a new feature to your program. The
> theory of refactoring is that it is best to do this in separate
> stages. First you refactor, which is purely a matter of
> improving the structure of your code. You do not attempt to
> add any of the new functionality at this stage. This is
> important because you can continue to test against the old
> test cases. Once you are happy that you have a sound basis
> for further progress, you proceed to stages two and three. Stage
> two - augment the test cases. Stage three - write the code.

No. My understanding is that you first write the test cases, and then 
try to change the code to make the whole test suite pass again. While 
doing this, you will need to refactor your code, but that's part of 
fulfilling the test cases.


Pascal

-- 
3rd European Lisp Workshop
July 3-4 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Alain Picard
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <873bgjl1at.fsf@memetrics.com>
Alan Crowe <····@cawtech.freeserve.co.uk> writes:

> My nightmare about XP is that it has two parts, a good part,
> test-first coding, and a bad part, the idea that test-first
> coding lets you get away without an initial design. 

Well, you can just wake up from the nightmare and be happy, because:

 * XP doesn't say anything about not doing design, and
 * XP's first rule is "They're just rules", so you change
   whatever doesn't fit.

It's about doing sensible things, basically.
From: Pascal Costanza
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <4a3qfoFrffpmU1@individual.net>
Alain Picard wrote:
> Alan Crowe <····@cawtech.freeserve.co.uk> writes:
> 
>>My nightmare about XP is that it has two parts, a good part,
>>test-first coding, and a bad part, the idea that test-first
>>coding lets you get away without an initial design. 
> 
> Well, you can just wake up from the nightmare and be happy, because:
> 
>  * XP doesn't say anything about not doing design, and
>  * XP's first rule is "They're just rules", so you change
>    whatever doesn't fit.

...although you cannot bend them arbitrarily. See 
http://c2.com/cgi/wiki?AlmostExtremeProgramming

Extreme Programming can be understood as a pattern language, where each 
practice ("pattern") complements the other practices. When you adapt 
these practices, you have to be aware of the overall effects that these 
adaptations might have.


Pascal

-- 
3rd European Lisp Workshop
July 3-4 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Alex Mizrahi
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <443acb69$0$15782$14726298@news.sunsite.dk>
(message (Hello 'Richard)
(you :wrote  :on '(Mon, 10 Apr 2006 19:32:36 GMT))
(

 RS> So Lisp, functional and bottom-up programming:

Lisp is not that functional; Haskell is much more functional.
In fact, all those symbols can create a lot of interference and
"permutations", i.e. you can change the behaviour of programs on the fly:

(defun a () 1)
(defun b () 2)
(defun mess () (rotatef (symbol-function 'a) (symbol-function 'b)))

(defun c ()
  (print (a))
  (mess)
  (print (a))
  (mess))

CL-USER> (c)

1
2
NIL

If you don't see the definition of MESS, you'll be guessing what the
hell is going on for quite a long time.  Quite messy.

)
(With-best-regards '(Alex Mizrahi) :aka 'killer_storm)
"People who lust for the Feel of keys on their fingertips (c) Inity") 
From: Pascal Costanza
Subject: Re: lisp - low permutations explanation of code manageability
Date: 
Message-ID: <4a012cFq6ilcU2@individual.net>
Alex Mizrahi wrote:
> (message (Hello 'Richard)
> (you :wrote  :on '(Mon, 10 Apr 2006 19:32:36 GMT))
> (
> 
>  RS> So Lisp, functional and bottom-up programming:
> 
> Lisp is not that functional; Haskell is much more functional.
> In fact, all those symbols can create a lot of interference and
> "permutations", i.e. you can change the behaviour of programs on the fly:
> 
> (defun a () 1)
> (defun b () 2)
> (defun mess () (rotatef (symbol-function 'a) (symbol-function 'b)))
> 
> (defun c ()
>   (print (a))
>   (mess)
>   (print (a))
>   (mess))
> 
> CL-USER> (c)
> 
> 1
> 2
> NIL
> 
> If you don't see the definition of MESS, you'll be guessing what the
> hell is going on for quite a long time.  Quite messy.

Of course, no one would write such nonsense.

How exactly do you think this helps in explaining what's good about Lisp?


Pascal

-- 
3rd European Lisp Workshop
July 3-4 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/