From: Antonio Esteban
Subject: Better habits for programming
Date: 
Message-ID: <Pine.LNX.4.33L2.0207071332340.1639-100000@localhost.localdomain>
Hi all,

I'm a newbie with Lisp. I have a question for the Lisp gurus/people of
this newsgroup: I have read out there that if I learn to program in Lisp,
I'll achieve better programming habits in other languages like C/C++.
What do you think about this?

Thanks in advance,

	--Antonio

From: Joe Marshall
Subject: Re: Better habits for programming
Date: 
Message-ID: <LHYV8.260392$nZ3.118464@rwcrnsc53>
"Antonio Esteban" <·········@arrakis.es> wrote in message
··············································@localhost.localdomain...
>
> Hi all,
>
> I'm a newbie with Lisp. I have a question for the Lisp gurus/people of
> this newsgroup: I have read out there that if I learn to program in Lisp,
> I'll achieve better programming habits in other languages like C/C++.
> What do you think about this?

I think this is true.
From: Software Scavenger
Subject: Re: Better habits for programming
Date: 
Message-ID: <a6789134.0207071600.4e056b15@posting.google.com>
Antonio Esteban <·········@arrakis.es> wrote in message news:<·········································@localhost.localdomain>...

> this newsgroup: I have read out there that if I learn to program in Lisp,
> I'll achieve better programming habits in other languages like C/C++.

The reason is that the more you learn about programming, the better
your programming habits become.  When using Lisp, you learn a lot
faster, and thereby develop better habits faster.
From: Jochen Schmidt
Subject: Re: Better habits for programming
Date: 
Message-ID: <ag9mne$2nd$1@rznews2.rrze.uni-erlangen.de>
Antonio Esteban wrote:

> 
> Hi all,
> 
> I'm a newbie with Lisp. I have a question for the Lisp gurus/people of
> this newsgroup: I have read out there that if I learn to program in Lisp,
> I'll achieve better programming habits in other languages like C/C++.
> What do you think about this?

I think it is true. It is generally a good idea to learn multiple, genuinely 
different (!) languages. The more languages you know, the more means of 
abstraction you learn. The most difficult part of programming is understanding 
how to solve the problem. A good repertoire of means of abstraction lets you 
find the solution faster. Then you only have to map that solution onto the 
means of a particular language. While this is often (not always) the easier 
part, it can be much less work to use a language which already provides most 
of the facilities you need.

I can recommend learning Lisp - you will certainly learn things about 
programming which can make you a better programmer. 

But be careful: many people never want to go back to another language once 
they have learned Lisp.  ;-)

ciao,
Jochen

--
http://www.dataheaven.de
From: Frank A. Adrian
Subject: Re: Better habits for programming
Date: 
Message-ID: <iq0W8.381$Uc6.77896@news.uswest.net>
Jochen Schmidt wrote:

> But be careful many people never want to go back to another language as
> soon as they have learned Lisp.  ;-)

Yes!  It teaches them that there are better languages out there than C and 
C++ (and Perl :-).  This is the main reason that all programmers should 
learn Lisp.

faa
From: David Golden
Subject: Re: Better habits for programming
Date: 
Message-ID: <gY0W8.682$lC5.11044@news.iol.ie>
> (and Perl :-).  
Although some of the Perl 6 stuff is beginning to get interesting:
Perl regexes are to grammars as methods are to classes, including
inheritance and polymorphism...
http://www.perl.com/pub/a/2002/06/26/synopsis5.html?page=5



-- 
Don't eat yellow snow.
From: Joel Ray Holveck
Subject: Re: Better habits for programming
Date: 
Message-ID: <y7c65zomlnn.fsf@sindri.juniper.net>
>> (and Perl :-).  
> Although some of the Perl 6 stuff is beginning to get interesting:
> perl regexes to grammars as methods are to classes, including
> inheritance and polymorphism...
> http://www.perl.com/pub/a/2002/06/26/synopsis5.html?page=5

When the Perl 6 stuff was being spec'd, it was commonly said that they
just wanted to move to Lisp.

That's how I got one padawan from the Perl world on board with Lisp.

Cheers,
joelh
From: Frank A. Adrian
Subject: Re: Better habits for programming
Date: 
Message-ID: <kDPW8.760$XE2.311029@news.uswest.net>
Joel Ray Holveck wrote:

>>> (and Perl :-).
>> Although some of the Perl 6 stuff is beginning to get interesting:
>> perl regexes to grammars as methods are to classes, including
>> inheritance and polymorphism...
>> http://www.perl.com/pub/a/2002/06/26/synopsis5.html?page=5
> 
> When the Perl 6 stuff was being spec'd, it was commonly said that they
> just want to move to Lisp.

I just can't figure out why, if they want a Lisp, they don't just use Lisp. 
Doesn't anyone in this industry get tired of reinventing these large 
wheels? Small wheels, I could see, having done it myself occasionally :-), 
but ones as big as languages?  And they're such oddly-shaped wheels, at 
that!  I guess Greenspun's 10th rule will hold forever...

faa
From: ozan s yigit
Subject: Re: Better habits for programming
Date: 
Message-ID: <vi4adozob0h.fsf@blue.cs.yorku.ca>
Frank A. Adrian:

> I just can't figure out why, if they want a Lisp, they just don't use Lisp. 

they don't want lisp. look at the design. [turing equivalence does not count]

oz
-- 
the only zen you find on the tops of mountains is the zen you bring up there.
                                                         -- robert m. pirsig
From: Frank A. Adrian
Subject: Re: Better habits for programming
Date: 
Message-ID: <xCMX8.593$gQ.189939@news.uswest.net>
ozan s yigit wrote:

> they don't want lisp. look at the design.

I have.  I didn't think I would survive the vomiting from my churning 
stomach after looking at it, but, I must say, I'm feeling much better three 
days later.  Most other languages only have me at the toilet for an hour or 
two (well, C++ for a whole day :-).  Greenspun rides again.  Hi-yo, 
BadLisp!  Away!!!

faa
From: Paul D. Lathrop
Subject: Re: Better habits for programming
Date: 
Message-ID: <Xns92448E75D50EDpdlathrocharterminet@216.168.3.40>
Antonio Esteban <·········@arrakis.es> wrote in
··············································@localhost.localdomain: 

> 
> Hi all,
> 
> I'm a newbie with Lisp. I have a question for the Lisp gurus/people
> of this newsgroup: I have read out there that if I learn to program
> in Lisp, I'll achieve better programming habits in other languages
> like C/C++. What do you think about this?
> 
> Thanks in advance,
> 
>      --Antonio

I believe this is true. When I started my undergraduate education, I knew 
a couple of languages at the newbie level, but was most fluent in Lisp. My 
professor for my first programming course wanted to know if I had done 
any professional coding. I said no, I hadn't, and he asked where I had 
learned the coding habits I had evidenced. The answer: it makes sense to 
do it that way in Lisp, so I had always done it that way.

In other words, my limited experience with Lisp taught me habits that 
impressed a guy who graduated from Berkeley in the 60s. Seems like solid 
anecdotal evidence to me ;-)

But, as others have said before me, beware the lure of Lisp - you won't 
want to go back.

Paul D. Lathrop
From: Software Scavenger
Subject: Re: Better habits for programming
Date: 
Message-ID: <a6789134.0207071518.754c9998@posting.google.com>
"Paul D. Lathrop" <········@chartermi.net> wrote in message news:<····································@216.168.3.40>...

> In other words, my limited experience with Lisp taught me habits that  
> impressed a guy who graduated from Berkley in the 60s. Seems like solid 

Can you give us some examples of some of those habits?  Thanks.
From: ted sandler
Subject: Re: Better habits for programming
Date: 
Message-ID: <agb2ir$phu$1@bob.news.rcn.net>
> Can you give us some examples of some of those habits?

If I had to guess, probably relying less on variable assignment and,
instead, using return values.

Therein lies the essence of Lisp (though Lisp isn't /just/ functional
programming).
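
For instance (a contrived sketch, both function names made up), the same
computation written first in an assignment-heavy style and then by just
composing return values:

;; assignment-heavy: thread the data through variables you keep mutating
(defun average-1 (numbers)
  (let ((sum 0) (count 0))
    (dolist (n numbers)
      (setf sum (+ sum n))
      (setf count (+ count 1)))
    (/ sum count)))

;; return-value style: each expression's value feeds the next directly
(defun average-2 (numbers)
  (/ (reduce #'+ numbers) (length numbers)))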

-ted



"Software Scavenger" <··········@mailandnews.com> wrote in message
·································@posting.google.com...
> "Paul D. Lathrop" <········@chartermi.net> wrote in message
news:<····································@216.168.3.40>...
>
> > In other words, my limited experience with Lisp taught me habits that
> > impressed a guy who graduated from Berkley in the 60s. Seems like solid
>
> Can you give us some examples of some of those habits?  Thanks.
From: Software Scavenger
Subject: Re: Better habits for programming
Date: 
Message-ID: <a6789134.0207081407.668db6b7@posting.google.com>
"ted sandler" <··········@rcn.com> wrote in message news:<············@bob.news.rcn.net>...

> If I had to guess, probably relying less on variable assignment and instead,
> using return values.
> 
> Therein lies the essence of lisp (though lisp isn't /just/ functional
> programming).

In my opinion the essence of Lisp is in its macros.  They give it the
flexibility to be anything it wants to be.  Lisp is like a jack of all
trades, a language for all programmers, a power tool for every
purpose.  Lisp is not like a hammer, where the programmer sees every
problem as a nail.  Lisp macros give it the power to focus on each
problem as if it were specially designed to solve that one problem.

Instead of the essence of Lisp, what really matters is the essence of
programming.  Programming with Lisp, the essence is to use Lisp as
elegantly as possible to solve the problem at hand.  Without Lisp, the
essence is to find the most elegant way to work around the lack of
Lisp.
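
To give just one small, made-up illustration of that flexibility: a
throwaway WITH-TIMING macro makes "run this and tell me how long it took"
look as if it had been built into the language for exactly that purpose.

(defmacro with-timing ((label) &body body)
  "Run BODY, return its value, and print how long it took under LABEL."
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (prog1 (progn ,@body)
         (format t "~&~A took ~,3F seconds~%" ,label
                 (/ (float (- (get-internal-real-time) ,start))
                    internal-time-units-per-second))))))

;; (with-timing ("sleeping") (sleep 0.5))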
From: Paul D. Lathrop
Subject: Re: Better habits for programming
Date: 
Message-ID: <Xns9244D49E3F447pdlathrocharterminet@216.168.3.40>
··········@mailandnews.com (Software Scavenger) wrote in
·································@posting.google.com: 

> "Paul D. Lathrop" <········@chartermi.net> wrote in message
> news:<····································@216.168.3.40>... 
> 
>> In other words, my limited experience with Lisp taught me habits that
>>  impressed a guy who graduated from Berkley in the 60s. Seems like
>> solid 
> 
> Can you give us some examples of some of those habits?  Thanks.

Alas, that time is long ago and far away, so I don't specifically remember 
*what* he was referring to. Sorry.

Paul D. Lathrop
From: Thaddeus L Olczyk
Subject: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <fs0hiukls8m7pjtl7e46v00g04cu08hk9i@4ax.com>
On Sun, 7 Jul 2002 13:43:11 -0400, Antonio Esteban
<·········@arrakis.es> wrote:

>
>I'm a newbie with Lisp. I have a question for the Lisp gurus/people of
>this newsgroup: I have read out there that if I learn to program in Lisp,
>I'll achieve better programming habits in other languages like C/C++.
>What do you think about this?

Several months ago I asked a question. It was:

remove-duplicates removes duplicate entries from an arbitrary list.
Since this list is arbitrary the algorithm must be quadratic in
nature. For a sorted list there is a simple algorithm for removing
duplicates: go down the list and copy only the first entry of each run.
Can you help me write this without incurring excessive overhead?

( I was worried about what people here would call "excessive consing"
and copying. )
Several people suggested ways where I would replace the elements with
hashes. In itself this might be an improvement, but they implemented
it using the remove-duplicates function, i.e. they used the same damn
algorithm. At the time I didn't have time to examine the code closely,
but it didn't look promising so I put it off for a couple of days.
Then one of the hash implementors posted a message with benchmarks.
The improvement using hashes for a list of 10000 was by a factor of
100.

My response to the post was that if all he got was an improvement of
100 then his algorithm stank.

Three people called me a troll for that.

So that is four people that really don't understand programming. [1]
And several others who posted similar algorithms. All blithely happy
because they could squeeze out a little better performance, rather than
gain a lot.

The point is that they relied on Lisp to make themselves better
programmers.

So too, if you rely on Lisp to make you a better programmer then
it will make you a worse programmer. Instead rely on yourself to make
yourself a better programmer.

[1]
The reason I think an improvement of 100 stinks is simple. Going from
a quadratic to a linear algorithm on a list of 10000 should improve
performance by a factor of 10000, give or take extra overhead. So 100 is
not much. It also indicates that a linear algorithm was probably not used
( as I already suspected ), so that with a list of 100000, the improvement
would still be 100 ( an improvement in overhead ) vs an improvement
of 100000 for a linear algorithm. The same for 1000000 or 1000000000
etc.
From: Eugene Zaikonnikov
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <680a835d.0207071436.270f1dbe@posting.google.com>
Thaddeus L Olczyk <······@interaccess.com> wrote in message news:<··································@4ax.com>...

> 
> My response to the post was that if all he got was an improvement of
> 100 then his algorithm stank.
> 
> Three people called me a troll for that.
> 

And indeed you were a troll. If you think you are able to come up with a
better solution, you are free to show off with your code.

> So that is four people that really don't understand programming.

I don't quite catch what you are doing here, then.

--
  Eugene.
From: Damond Walker
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <63637457.0207071718.7408bb31@posting.google.com>
······@funcall.org (Eugene Zaikonnikov) wrote in message 

[snip]

> And indeed you were a troll. If you think you are able to come up with
> better solution, you are free to show off with your code.
>

Which probably won't happen...
 

Damond
From: Thaddeus L Olczyk
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <acmkiugvf31uo8mudocfok4te934ua20bg@4ax.com>
On 7 Jul 2002 15:36:12 -0700, ······@funcall.org (Eugene Zaikonnikov)
wrote:

>Thaddeus L Olczyk <······@interaccess.com> wrote in message news:<··································@4ax.com>...
>
>> 
>> My response to the post was that if all he got was an improvement of
>> 100 then his algorithm stank.
>> 
>> Three people called me a troll for that.
>> 
>
>And indeed you were a troll. If you think you are able to come up with
>better solution, you are free to show off with your code.
>
And this is the perfect example of what Lisp can teach you that will
make you a poorer programmer. Imagine what would happen if you came
to work one day and your boss told you that some code that you wrote
wasn't performing the way it should and you replied, "You don't like
it, write your own." Good habits that some Lisp programmers teach.

When I asked my question I specified an algorithm ( scan the list; if
the previous element is not the same as the present element then put
the present element in a new list ). The main difficulty was that when
putting the new element in the new list I was worried that I was
making a duplicate list. I fixed it myself using nconc.

As for seeing the code, you can just look it up. Three people
( Barry Margolin, Pierre Mai and David Hanley ) wrote accurate code
based on my algorithm.  ( Their version using loop seemed better so I
used theirs. I also ran benchmarks, and the hash table implementation
ran as poorly as the original remove-duplicates [ no 100x improvement
for me ]. OTOH the three versions of my algorithm and my own
implementation always ran in 1 sec or 0 secs [ the smallest measure ].
Even up to 100000, which is where I gave up. [ Sorting seemed to
take forever beyond that ]. )
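
For concreteness, the sorted-list algorithm described above is roughly the
following ( a sketch written for this post, not the code they posted ).
It splices each kept element onto the tail of the result, which is what
the nconc fix amounted to:

(defun remove-adjacent-duplicates (sorted-list &key (test #'eql))
  ;; Walk the sorted list once; keep an element only when it differs
  ;; from the last element kept.
  (when sorted-list
    (let* ((head (list (first sorted-list)))
           (tail head))
      (dolist (x (rest sorted-list) head)
        (unless (funcall test x (car tail))
          (setf (cdr tail) (list x)
                tail (cdr tail)))))))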
From: Paul F. Dietz
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <3D2A65F5.A45A7224@dls.net>
Thaddeus L Olczyk wrote:

> >And indeed you were a troll. If you think you are able to come up with
> >better solution, you are free to show off with your code.

> And this is the perfect example of what Lisp can teach you that will
> make you a poorer programer. Imagine what would happen if you came
> to work one day and your boss told you that some code that you wrote
> wasn't performing the way it should and you replied, "You don't like
> it write your own."

I must have missed the part where you sent us paychecks.

	Paul
From: Paul D. Lathrop
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <Xns92461CE0E4767pdlathrocharterminet@216.168.3.40>
Thaddeus L Olczyk <······@interaccess.com> wrote in
·······································@4ax.com: 

>>And indeed you were a troll. If you think you are able to come up with
>>better solution, you are free to show off with your code.
>>
> And this is the perfect example of what Lisp can teach you that will
> make you a poorer programer. Imagine what would happen if you came
> to work one day and your boss told you that some code that you wrote
> wasn't performing the way it should and you replied, "You don't like
> it write your own." Good habits that some Lisp programmers teach.

Lisp is responsible for this idea how?

Paul D. Lathrop
From: Eugene Zaikonnikov
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <680a835d.0207091347.38a5abcd@posting.google.com>
Thaddeus L Olczyk <······@interaccess.com> wrote in message news:<··································@4ax.com>...
> On 7 Jul 2002 15:36:12 -0700, ······@funcall.org (Eugene Zaikonnikov)
> wrote:
> 
> >And indeed you were a troll. If you think you are able to come up with
> >better solution, you are free to show off with your code.
> >
> And this is the perfect example of what Lisp can teach you that will
> make you a poorer programer.

Your reasoning is amazing. Did you actually consider the possibility
that you *were* rude and you *were* a troll, before blaming Lisp for
all injustices in the world?

> Imagine what would happen if you came
> to work one day and your boss told you that some code that you wrote
> wasn't performing the way it should and you replied, "You don't like
> it write your own." Good habits that some Lisp programmers teach.
> 

I didn't hear many complaints from my supervisors. Besides, they pay
me money.

Consider this:

* Many problems expose their true nature only when you try to code
them up. Often in real life you find obstacles to using
straightforward algorithms, and you may be unaware of them until you
hit them with your bloody nose. Maybe that's why most people here do
not bother giving verbal advice on coding problems.

* Even if that was not so in this case, you were rude for no reason. You
could have pointed out a better solution *politely* and nobody would have
called you a troll, but no, you behave like a CS freshman who can't code
yet but has already been exposed to complexity theory and mumbles
'omega, omicron' to look smart.

--
  Eugene.
From: Software Scavenger
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <a6789134.0207071508.6b318f49@posting.google.com>
Thaddeus L Olczyk <······@interaccess.com> wrote in message news:<··································@4ax.com>...

> The reason I think an improvement of 100 stinks is simple. Going from
> a quadatic to a linear algorithm on a list of 10000 should improve

Why don't you simply test it with 10 and 100 times the number of
elements, to find out just how nonlinear it really is?  All this
abstract math can confuse you and prevent you from seeing reality. 
For example, are you taking into account that the more linear
algorithm might have a high fixed overhead which might make it less
efficient for short lists?
From: Erann Gat
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <gat-0707021629230001@192.168.1.50>
In article <··································@4ax.com>,
······@interaccess.com wrote:

> remove-duplicates removes duplicate entries from an arbitrary list.
> Since this list is arbitrary the algorithm must be quadratic in
> nature.

Wrong.  Remove-duplicates can be linear even for arbitrary (non-sorted) lists.

> My response to the post was that if all he got was an improvement of
> 100 then his algorithm stank.
> 
> Three people called me a troll for that.

And rightly so.

> So that is four people that really don't understand programming.

Wrong again.

E.
From: Thaddeus L Olczyk
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <em4iiu8e5am6ci0rrlvkmgimaejh6cqhlg@4ax.com>
On Sun, 07 Jul 2002 16:28:38 -0700, ···@jpl.nasa.gov (Erann Gat)
wrote:

>In article <··································@4ax.com>,
>······@interaccess.com wrote:
>
>> remove-duplicates removes duplicate entries from an arbitrary list.
>> Since this list is arbitrary the algorithm must be quadratic in
>> nature.
>
>Wrong.  Remove-duplicates can be linear even for arbitrary (non-sorted) lists.
>
Can you please describe it?
It would be enormously surprising, since ACL, LispWorks, CMUCL, Corman
Lisp, and CLISP all use quadratic algorithms.
From: Erann Gat
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <gat-0707022350010001@192.168.1.50>
In article <··································@4ax.com>,
······@interaccess.com wrote:

> On Sun, 07 Jul 2002 16:28:38 -0700, ···@jpl.nasa.gov (Erann Gat)
> wrote:
> 
> >In article <··································@4ax.com>,
> >······@interaccess.com wrote:
> >
> >> remove-duplicates removes duplicate entries from an arbitrary list.
> >> Since this list is arbitrary the algorithm must be quadratic in
> >> nature.
> >
> >Wrong.  Remove-duplicates can be linear even for arbitrary (non-sorted)
lists.
> >
> Can you please describe it?

(defun remove-duplicates-in-linear-time (l)
  (let ( (h (make-hash-table))
         (result '()) )
    (dolist (i l)
      (unless (gethash i h)
        (push i result)
        (setf (gethash i h) t)))
    (nreverse result)))

> It would be enormously suprising since ACL, LispWorks, cmucl, Corman
> lisp, and clisp all use quadratic algorithms.

Life is chock full of surprises.

E.
From: Duane Rettig
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <47kk648e4.fsf@beta.franz.com>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··································@4ax.com>,
> ······@interaccess.com wrote:
> 
> > On Sun, 07 Jul 2002 16:28:38 -0700, ···@jpl.nasa.gov (Erann Gat)
> > wrote:

> > > Remove-duplicates can be linear even for arbitrary (non-sorted) lists.
> > >
> > Can you please describe it?
> 
> (defun remove-duplicates-in-linear-time (l)
>   (let ( (h (make-hash-table))
>          (result '()) )
>     (dolist (i l)
>       (unless (gethash i h)
>         (push i result)
>         (setf (gethash i h) t)))
>     (nreverse result)))

Linearity in hash-tables is sometimes a bit tricky.  This solution
isn't quite linear on lists with many unique elements, unless you
provide a good estimate, or an obviously conservative one, for the
size of the hash-table.  A good estimate is (length l), but of course
that is an extra traversal through the list.  You could also provide a
large rehash-size, something larger than 2.0 (say, 5.0), to get the
size of the hash-table up quickly enough to avoid too many rehashes
due to hash-table growth.
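
In the function above, that means changing the MAKE-HASH-TABLE call to
something like one of these ( the numbers are only illustrative ):

;; conservative size estimate, at the cost of one extra list traversal
(make-hash-table :size (length l))
;; or keep the default initial size but grow by 5x on each rehash
(make-hash-table :rehash-size 5.0)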

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Paul F. Dietz
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <3D2A26DE.8727E24A@dls.net>
Duane Rettig wrote:

> Linearity in hash-tables is sometimes a bit tricky.  This solution
> isn't quite linear on lists with many unique elements, unless you
> provide an estimate or an obviously conservative size estimate for
> the hash-table.

Even if you have to expand the hash table multiple times, the algorithm
will still be O(n) if you expand the hash table by at least a constant
factor each time (the constant > 1).
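
A quick back-of-the-envelope check of that (function name is just for this
example): the entries copied by repeated doubling sum to less than 2n, so
the rehashing work stays O(n) overall.

;; Total entries copied if the table doubles from size 1 up past N:
;; 1 + 2 + 4 + ... < 2N.
(defun total-copies-when-doubling (n)
  (loop for size = 1 then (* 2 size)
        while (< size n)
        sum size))
;; (total-copies-when-doubling 10000) => 16383, i.e. less than 2 * 10000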

	Paul
From: Kaz Kylheku
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <agfmi1$2rb$2@luna.vcn.bc.ca>
In article <·············@beta.franz.com>, Duane Rettig wrote:
> ···@jpl.nasa.gov (Erann Gat) writes:
>> (defun remove-duplicates-in-linear-time (l)
>>   (let ( (h (make-hash-table))
>>          (result '()) )
>>     (dolist (i l)
>>       (unless (gethash i h)
>>         (push i result)
>>         (setf (gethash i h) t)))
>>     (nreverse result)))
> 
> Linearity in hash-tables is sometimes a bit tricky.  This solution
> isn't quite linear on lists with many unique elements, unless you
> provide an estimate or an obviously conservative size estimate for
> the hash-table.  A good estimate is (length l), but of course that
> is an extra traversal through the list.  You could also provide a
> large rehash-size, something larger than 2.0 (say, 5.0) to get the
> size of the hash-table up quickly enough to avoid too many rehashes
> due to the hash-table growth.

_Introduction to Algorithms_, by Cormen, Leiserson and Rivest, has an
exposition of this in the chapter on amortized analysis. 

  ``Using amortized analysis, we shall show that the amortized cost of
    insertion and deletion is only O(1), even though the actual cost of
    an operation is large when it triggers an expansion or contraction.''

and

  ``Intuitively, each item pays for 3 elementary insertions: inserting itself
    in the current table, moving itself when the table is expanded, and moving
    another item that has already been moved once when the table is expanded.''

The above observation holds for a resize factor of 2. With other factors,
all that changes is the constant; with a slower geometric growth, an
item pays for more moves, but it's still an amortized constant.

If there is a nonlinear component to it, it's only due to the requirement for
ever wider addresses and hash values.  As the data set grows huge, you have to
use wider addresses, you have to take into account the complexity of the bit
operations of actually indexing into the table, etc. We normally don't worry
about the effects of this because these costs are fixed in the given hardware.

I should mention that a good hash table algorithm avoids ``rehashes'', by which
I mean a recomputation of the hashing function and reindexing into
the table.  In one algorithm, which always doubles the table, items either stay
in the same chain or move to a chain in the newly created upper partition of
the table, which is at a fixed displacement from their current chain. Thus
resizing the table simply means iterating over the chains and, based on the
value of a single bit, sorting the items into two chains.

Suppose the current size of the table is 8, and that the least significant
three bits of the hashes serve as indices into the table. If the table doubles
to 16 chains, then you simply expose the fourth bit of each hash value. Any
element of table[n] which has a 1 in that new bit position moves to the upper
half of the table, to table[n+8]; the others stay in the same chain.
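
The decision for a single entry can be written down directly ( a sketch of
just the indexing, with made-up names, not a full table implementation ):

;; Chain index of an entry after the table doubles from OLD-SIZE
;; (a power of two): look only at the newly exposed hash bit.
(defun chain-index-after-doubling (hash old-size)
  (let ((old-index (mod hash old-size)))
    (if (logbitp (integer-length (1- old-size)) hash)
        (+ old-index old-size)    ; moves to the upper half of the table
        old-index)))              ; stays in the same chain
;; (chain-index-after-doubling #b1101 8) => 13
;; (chain-index-after-doubling #b0101 8) => 5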
From: ozan s yigit
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <vi4r8i9pqwp.fsf@blue.cs.yorku.ca>
Kaz Kylheku:

>    ... In one algorithm, which always doubles the table, items either stay
> in the same chain or move to a chain in the newly created upper partition of
> the table, which is at a fixed displacement from their current chain. Thus
> resizing the table simply means iterating over the chains, and based on the
> value of a single bit, sorting the items into two chains.  

because of the cost, that is the kind of algorithm usually used for external
hashing. for in-core hashing, it is not clear if the power-of-two limitation
is worth the trouble for general-purpose use.

oz
--- 
a nought, an ought, a knot, a not easily perceived distinction. -- t. duff
From: Jochen Schmidt
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <agb862$79r$1@rznews2.rrze.uni-erlangen.de>
Thaddeus L Olczyk wrote:

> On Sun, 07 Jul 2002 16:28:38 -0700, ···@jpl.nasa.gov (Erann Gat)
> wrote:
> 
>>In article <··································@4ax.com>,
>>······@interaccess.com wrote:
>>
>>> remove-duplicates removes duplicate entries from an arbitrary list.
>>> Since this list is arbitrary the algorithm must be quadratic in
>>> nature.
>>
>>Wrong.  Remove-duplicates can be linear even for arbitrary (non-sorted)
>>lists.
>>
> Can you please describe it?
> It would be enormously suprising since ACL, LispWorks, cmucl, Corman
> lisp, and clisp all use quadratic algorithms.

remove-duplicates can be easily made linear if the test-function is one of 
the hash-table test functions.
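
A minimal sketch of what I mean ( my names; TEST must be one of the four
standardized hash-table tests ):

(defun remove-duplicates/hashed (list &key (test #'eql))
  (let ((seen (make-hash-table :test test))
        (result '()))
    (dolist (x list (nreverse result))
      (unless (gethash x seen)
        (setf (gethash x seen) t)
        (push x result)))))

;; (remove-duplicates/hashed '("a" "b" "a") :test #'equal) => ("a" "b")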

ciao,
Jochen
 
--
http://www.dataheaven.de
From: Tim Bradshaw
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <ey3eleepjat.fsf@cley.com>
* Thaddeus L Olczyk wrote:
> It would be enormously suprising since ACL, LispWorks, cmucl, Corman
> lisp, and clisp all use quadratic algorithms.

To people who don't measure things, it probably is, yes.

I came up with a linear algorithm which is fairly obviously worse than
Erann's (I was too dumb to work out I could just collect the elements,
so I invented an elaborate trick to avoid sorting them from the
hashtable) but basically uses the same trick.  I also wrote some
simple benchmark functions (for all of this, see below).  I tested
three functions - the system one, mine, and a slightly bummed version
of Erann's (collects the lists forwards not backwards).

I benchmarked for lists from 10000 to 100000 elts.

The results were, for the system I tested on:

1. The system function is still winning for lists of length 100000.

2. My version is, amazingly, not much slower than Erann's.  Partly
   this seems to be because the lisp I used can stack-allocate the
   array I use to avoid sorting at the end, and this causes what looks
   really bad to not be so bad.  (The flip side of this is that for
   very long lists with a factor of 1 (so almost no duplicates), I run
   out of stack to allocate the array.)

3. Everything depends *heavily* on the data.  In particular, both
   Erann's and my function do really badly if there are not many
   duplicates (small factors in my code) and really well if there are.
   This is because if there are almost no duplicates they need to
   build really big hashtables, and, worse, they do this by starting
   with small hashtables and repeatedly growing them.  Finally if
   there are almost no duplicates they probe twice for each element
   (once to check, once to store).

   And my data is pretty simplistic.

But, for this implementation and this data, the breakeven point is
something over 100,000 elts.

So here's today's trite better habits of programming that I learned
from Lisp:

A. Measure, and measure for *real data*.  With a profiler if you have
   one (and if you don't, get one), with test harnesses if not.  You
   can spend a lot of time producing a linear algorithm instead of a
   quadratic one when the measured coefficients are such that the
   linear algorithm loses heavily to the quadratic one for your data.

B. You can not guess the coefficients without measuring.  In general,
   unless you measure you *do not know* where the time is going.

C. Don't waste time on `optimized' algorithms until you have measured
   the non-optimized ones and established that they are what is
   slowing your program down.

--tim

--cut--
;;;; Remove duplicates benchmarks
;;;

;;; You should be in CL-USER and get collecting from www.tfeb.org to
;;; run this, or just use push/nreverse like the original
;;;
(in-package :weld-user)

(defun rd/hash (list)
  ;; tfb algorithm
  (let ((ht (make-hash-table)))
    (declare (dynamic-extent ht))
    (loop with hashc = 0
          for e in list
          unless (gethash e ht)
          do (setf (gethash e ht) hashc
                   hashc (1+ hashc))
          finally
          (return (let ((a (make-array hashc)))
                    (declare (dynamic-extent a))
                    (maphash #'(lambda (k v)
                                 (setf (aref a v) k))
                             ht)
                    (coerce a 'list))))))

(defun remove-duplicates-in-linear-time (l)
  ;; Erann Gat's, changed to use COLLECTING/COLLECT (I wonder if this
  ;; makes any difference...)
  (collecting
    (let ((h (make-hash-table)))
      (declare (dynamic-extent h))
      (dolist (i l)
        (unless (gethash i h)
          (collect i)
          (setf (gethash i h) t))))))


(defun make-random-list (n &optional (factor 10))
  ;; FACTOR controls how many duplicates there will be (bigger it is, the more)
  ;; -- (make-random-list 100 50) makes a list of random 0s and 1s.
  (loop with m = (round n factor)
        repeat n
        collect (random m)))

(defun bench-rd (lengths &key (factor 10) (count 10))
  ;; system function
  (loop for i in lengths
        for l = (make-random-list i factor)
        do (format t "~&~D~%" i)
        collect
        (loop repeat count
              for start = (get-internal-real-time)
              for finish = (progn (remove-duplicates l)
                             (get-internal-real-time))
              sum (- finish start) into total
              finally (return (/ (float total 1.0d0) 
                                 count internal-time-units-per-second)))))

(defun bench-rd/hash (lengths &key (factor 10) (count 10))
  ;; mine
  (loop for i in lengths
        for l = (make-random-list i factor)
        do (format t "~&~D~%" i)
        collect
        (loop repeat count
              for start = (get-internal-real-time)
              for finish = (progn (rd/hash l)
                             (get-internal-real-time))
              sum (- finish start) into total
              finally (return (/ (float total 1.0d0) 
                                 count internal-time-units-per-second)))))

(defun bench-rd-lt (lengths &key (factor 10) (count 10))
  ;; Erann's
  (loop for i in lengths
        for l = (make-random-list i factor)
        do (format t "~&~D~%" i)
        collect
        (loop repeat count
              for start = (get-internal-real-time)
              for finish = (progn (remove-duplicates-in-linear-time l)
                             (get-internal-real-time))
              sum (- finish start) into total
              finally (return (/ (float total 1.0d0) 
                                 count internal-time-units-per-second)))))
From: Joe Marshall
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <hEeW8.325941$6m5.325527@rwcrnsc51.ops.asp.att.net>
"Tim Bradshaw" <···@cley.com> wrote in message ····················@cley.com...
>
> 3. Everything depends *heavily* on the data.  In particular, both
>    Erann's and my function do really badly if there are not many
>    duplicates (small factors in my code) and really well if there are.
>    This is because if there are almost no duplicates they need to
>    build really big hashtables, and, worse, they do this by starting
>    with small hashtables and repeatedly growing them.

And the obvious algorithm is only quadratic if there aren't any
duplicates.  If duplicates are plentiful, then the usual `search
the tail' algorithm takes time proportional to the number of
elements times the expected distance between duplicates.
From: Marcin Tustin
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <yztb4rfam24o.fsf@werewolf.i-did-not-set--mail-host-address--so-shoot-me>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··································@4ax.com>,
> ······@interaccess.com wrote:
> 
> > remove-duplicates removes duplicate entries from an arbitrary list.
> > Since this list is arbitrary the algorithm must be quadratic in
> > nature.
> 
> Wrong.  Remove-duplicates can be linear even for arbitrary (non-sorted) lists.

You mean doing something like this (pseudo-Lisp, mind)?

;; This should probably use labels.
;; END-CONS refers to the final cons in OUTPUT-LIST.
(defun remove-duplicates-inner (input-list hashtable output-list end-cons)
    (cond
        ((null input-list) output-list)
        ((gethash (car input-list) hashtable)
          (remove-duplicates-inner (cdr input-list) hashtable output-list end-cons))
        (t (let ((new-cons (cons (car input-list) nil)))
             (setf (cdr end-cons) new-cons)
             (setf (gethash (car input-list) hashtable) t)
             (remove-duplicates-inner (cdr input-list) hashtable output-list new-cons)))))

;; Named MY-REMOVE-DUPLICATES so it doesn't clash with CL:REMOVE-DUPLICATES.
(defun my-remove-duplicates (input-list)
    (if (null input-list)
        nil
        (let ((foo (cons (car input-list) nil))
              (table (make-hash-table)))
            (setf (gethash (car input-list) table) t)
            (remove-duplicates-inner (cdr input-list) table foo foo))))

> 
> > My response to the post was that if all he got was an improvement of
> > 100 then his algorithm stank.
> > 
> > Three people called me a troll for that.
> 
> And rightly so.
> 
> > So that is four people that really don't understand programming.
> 
> Wrong again.
> 
> E.



-- 
Straight outta Cheltenham
From: Jochen Schmidt
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <agaiho$q8q$1@rznews2.rrze.uni-erlangen.de>
Thaddeus L Olczyk wrote:
> My response to the post was that if all he got was an improvement of
> 100 then his algorithm stank.
> 
> Three people called me a troll for that.

Well - and they were right, because both your tone and your reasoning 
stank. You cannot determine the complexity class of an algorithm from 
this one number alone. The algorithm might have had linear complexity 
but with a big constant factor in it. 

> So that is four people that really don't understand programming. [1]

No, that is one guy not understanding the basics of complexity theory and 
how to behave when someone offers his help. 

ciao,
Jochen

--
http://www.dataheaven.de
From: David Combs
Subject: Re: Not from Lisp ( Re: Better habits for programming )
Date: 
Message-ID: <ahmbog$jma$1@reader3.panix.com>
In article <··································@4ax.com>,
Thaddeus L Olczyk  <······@interaccess.com> wrote:
>On Sun, 7 Jul 2002 13:43:11 -0400, Antonio Esteban
><·········@arrakis.es> wrote:
>
>>
>>I'm a newbie with Lisp. I have a question for the Lisp gurus/people of
>>this newsgroup: I have read out there that if I learn to program in Lisp,
>>I'll achieve better programming habits in other languages like C/C++.
>>What do you think about this?
>
>Several monthas ago I asked a question. It was:
>
>remove-duplicates removes duplicate entries from an arbitrary list.
>Since this list is arbitrary the algorithm must be quadratic in
>nature. For a sorted list there is a simple algorithm for removing 
>duplicates. Go down the list and copy only the first entry. Can you
>help me in writing this without incurring excessive overhead.
>
>( I was worried about what people here would call "excessive consing"
>and copying. )
>Several people I sugested ways where I replace the elements with
>hashes. In it self this might be an improvement, but they implemented
>it using the remove-duplicates function ie they used the same damn
>algorithm. 

[So what?  Just how would *you* implement an efficient hash-table
that is still efficient for 10,000 items, *without* (got that,
*WITHOUT*) either using remove-duplicates or coding your
own and using that -- in either case "the same damn algorithm"?
SINCE YOU SEEM TO KNOW HOW, PLEASE LET US ALL KNOW!]



> At the time I didn't have time to examine closely the code,
>but it didn't look promising so I put it off for a couple of days.
>Then one of the hash implementors posted a message with benchmarks.
>The improvement using hashes for a list of 10000  was by a factor of
>100.
>
>My response to the post was that if all he got was an improvement of
>100 then his algorithm stank.


>[1]
>The reason I think an improvement of 100 stinks is simple. Going from
>a quadatic to a linear algorithm on a list of 10000 should improve
>performance 10000 give or take extra overhead. So 100 is not much.
>It also indicates that a linear algorithm was probably not used ( as I
>already suspected ), so that with a list of 100000, the improvement
>would still be 100 ( an improvement in overhead ) vs an improvement
>of 100000 for a linear algorithm. The same for 1000000 or 1000000000
>etc.

It takes real chutzpah to describe a 100-fold speedup
as something that "stinks"!


Before you can honestly and with any believability claim that, 
you must first prove/show that you have
something *better* than "the same damn algorithm" (that quadratic 
search that you ridicule).


I say that you can't do it.  The typical implementation
of a hash-table (since way back in that CACM article
of June or July '68, I think it was) is via an
*array* of "buckets" (that's where any claim to
linearity has to come from, that indexing to a 
just-then-computed position), with usually
a *linked list* of "collisions" rooted at that
index.

It's the searching along those collision lists that
keeps the whole thing from being truly linear -- not unless you
want to build some fancy n log n data-structure hanging
off each filled slot in the array.


>Three people called me a troll for that.
>
>So that is four people that really don't understand programming. [1]
>And several others who posted similar algorithms. All bligthly happy
>because they could squeeze a little better performance, rather than
>gain a lot.

Have you SHOWN us (a) your vastly-superior scheme and
(b) proof that at 10,000 items it's *vastly* faster
than that stinking mere 100-fold speedup?

>
>The point is that they relied on Lisp to make them selves better
>programmers.

Just what does that have to do with you having to *demonstrate*
some clear superiority?

>So too, if you rely on Lisp to make you a better programmer then
>it will make you a worse programmer. 

Until the above statement, all we saw was the appearance
of a fool.

Now we're required to add that original "troll"
you were complaining about.


>Instead rely on yourself to make
>yourself a better programmer.


David Combs
From: Laurent PERON
Subject: Re: Better habits for programming
Date: 
Message-ID: <3D2873B1.7010100@free.fr>
I believe it is true.

Also, I believe, but have no proof to justify it, that if you learn 
Common Lisp, you will experience great difficulty going back "down" to C++.
BTW, this is what I experienced myself.

Laurent.
From: Joe Marshall
Subject: Re: Better habits for programming
Date: 
Message-ID: <mt%V8.313752$6m5.302379@rwcrnsc51.ops.asp.att.net>
"Laurent PERON" <·············@free.fr> wrote in message ·····················@free.fr...
> I believe it is true.
>
> Also, I believe, but have no proof to justify myself, that if you learn
> Common Lisp,
> you will experience great difficulties to go back "down" to C++.
> BTW, this is what I experienced myself.

It is certainly far more painful to program in C++, but I didn't find
very much difficulty understanding the C++ model.

Simply imagine that a rather dull, inexperienced, and lazy person
were tasked with implementing an object-oriented language.  The first
idea for solving anything is fixated upon as the only possible solution.
When running into a difficulty in implementation, just stop and use
what you have so far.  Punt on anything that is *really* hard.  Ignore
the literature (reading takes too much time away from hacking).
Demonstrate your hacking proficiency by using the more obscure features
of YACC.

Much of C++ becomes transparently obvious if you take this viewpoint.

Unfortunately, there is still a large part of C++ that will remain
obscure no matter what.
From: Kaz Kylheku
Subject: Re: Better habits for programming
Date: 
Message-ID: <agfmi0$2rb$1@luna.vcn.bc.ca>
In article <················@free.fr>, Laurent PERON wrote:
> I believe it is true.
> 
> Also, I believe, but have no proof to justify myself, that if you learn 
> Common Lisp,
> you will experience great difficulties to go back "down" to C++.
> BTW, this is what I experienced myself.

If you learn Lisp, you will *not* suddenly discover a better way to 
use a language like C++, to do things that you could not before.
What will happen instead is that you will be more clearly able to 
understand and express *why* you can't do certain things easily
in that language. Understanding why doesn't translate to a solution.

I generally understand why a chicken can't fly; that does not mean I know how
to make it fly. I can at best offer the idle conjecture that it could fly if
only it would transmute into a pigeon. :)
From: Eduardo Muñoz
Subject: Re: Better habits for programming
Date: 
Message-ID: <ueleco7g3.fsf@jet.es>
Kaz Kylheku <···@ashi.footprints.net> writes:

> I generally understand why a chicken can't fly; that does not mean I know how
> to make it fly. I can at best offer the idle conjecture that it could fly if
> only it would transmute into a pigeon. :)

But when you realize that it will never fly you
can throw it out the window and get a real pigeon  ;)


-- 

Eduardo Muñoz
From: Michael Sullivan
Subject: Re: Better habits for programming
Date: 
Message-ID: <1ff3pf3.5xu0976bs0jjN%michael@bcect.com>
Kaz Kylheku <···@ashi.footprints.net> wrote:

> In article <················@free.fr>, Laurent PERON wrote:
> > I believe it is true.
> > 
> > Also, I believe, but have no proof to justify myself, that if you learn
> > Common Lisp,
> > you will experience great difficulties to go back "down" to C++.
> > BTW, this is what I experienced myself.
> 
> If you learn Lisp  you will *not* suddenly discover a better way to 
> use a language like C++, to do things that you could not before.
> What will happen instead, is that you will be more clearly able to 
> understand and express *why* you can't do certain things easily
> in that language. Understanding why doesn't translate to a solution.

Hmm.  I am by no means a lisp expert (or even particularly fluent), but
a big difference between lisp and C (et al.) seems to be the style of
programming that it supports gracefully, i.e. a functional and
highly modular style.  To a certain extent, it's cumbersome in other
languages to achieve the kinds of modularity that are the meat of lisp.  

But some idioms actually translate fairly easily.  Studying a bit of
lisp (and being almost forced into a functional style) is starting to
bring into focus why certain ways of programming work better in other
languages.  I'm really wishing I'd learned it earlier (left college
before I got to the course that used Abelson/Sussman).  

I can definitely see what PG was talking about when he said other
languages are copying methods and style from lisp.  I've been doing a
lot of applescript writing lately, and while it clearly has
disadvantages (not nearly robust enough to make really complicated
nested objects and you're often at the mercy of bass-ackwards object
models of applications you need to drive), there's a style there that
was quite elegant compared to c/java/perl.  I actually think that
working in raw AS was driving me toward lisp.  I'm seeing in great lisp
programs, stuff that AS's style made me *want* to do, but I ran into
scale roadblocks, since it wasn't written with large-scale development
in mind.

OTOH, the idea of programming the program is something I've wanted to do
since the moment I first sat down at a computer (heck, I tried to accomplish
it when the only language I knew was TRS-80 BASIC, by peeking and poking
into my program memory). Lisp is the first language I've looked at
that's made to work at that kind of depth very elegantly and easily.
When I first looked at it, I got lost in all the infernal parentheses
and did not see its essential hacker nature.  Now, I'm starting to
worry that once I get comfortable, working in anything else is going to
put a bad taste in my mouth.  OTOH, the bad taste has always been there;
it's just that I never knew chocolate existed.

> I generally understand why a chicken can't fly; that does not mean I know how
> to make it fly. I can at best offer the idle conjecture that it could fly if
> only it would transmute into a pigeon. :)

I think the analogy here is that having worked with the pigeon, one can
maybe get the chicken to flap its wings and jump a little higher, if
not actually fly.

It's not quite so satisfying, but better than nothing when all you have
to work with is chickens.


Michael

-- 
Michael Sullivan
Business Card Express of CT             Thermographers to the Trade
Cheshire, CT                                      ·······@bcect.com
From: Joe Marshall
Subject: Re: Better habits for programming
Date: 
Message-ID: <YeXW8.304392$R61.260695@rwcrnsc52.ops.asp.att.net>
"Kaz Kylheku" <···@ashi.footprints.net> wrote in message ·················@luna.vcn.bc.ca...
>
> I generally understand why a chicken can't fly; that does not mean I know how
> to make it fly. I can at best offer the idle conjecture that it could fly if
> only it would transmute into a pigeon. :)

With sufficient initial velocity, it'll fly.
From: Christopher Browne
Subject: Re: Better habits for programming
Date: 
Message-ID: <aghmdl$ljl7a$1@ID-125932.news.dfncis.de>
Centuries ago, Nostradamus foresaw when "Joe Marshall" <·············@attbi.com> would write:
> "Kaz Kylheku" <···@ashi.footprints.net> wrote in message ·················@luna.vcn.bc.ca...
>>
>> I generally understand why a chicken can't fly; that does not mean I know how
>> to make it fly. I can at best offer the idle conjecture that it could fly if
>> only it would transmute into a pigeon. :)
>
> With sufficient initial velocity, it'll fly.

"With sufficient thrust, pigs fly just fine.  However, this is not
necessarily a good idea. It is hard to be sure where they are going to
land, and it could be dangerous sitting under them as they fly
overhead." -- RFC 1925
-- 
(reverse (concatenate 'string ········@" "enworbbc"))
http://cbbrowne.com/info/lisp.html
"I take it all back. Microsoft Exchange _is_ RFC compliant.
RFC 1925, point three." -- Author unknown
From: Joe Marshall
Subject: Re: Better habits for programming
Date: 
Message-ID: <Ph_W8.305178$R61.262754@rwcrnsc52.ops.asp.att.net>
"Christopher Browne" <········@acm.org> wrote in message ···················@ID-125932.news.dfncis.de...
> Centuries ago, Nostradamus foresaw when "Joe Marshall" <·············@attbi.com> would write:
> > "Kaz Kylheku" <···@ashi.footprints.net> wrote in message ·················@luna.vcn.bc.ca...
> >>
> >> I generally understand why a chicken can't fly; that does not mean I know how
> >> to make it fly. I can at best offer the idle conjecture that it could fly if
> >> only it would transmute into a pigeon. :)
> >
> > With sufficient initial velocity, it'll fly.
>
> "With sufficient thrust, pigs fly just fine.  However, this is not
> necessarily a good idea. It is hard to be sure where they are going to
> land, and it could be dangerous sitting under them as they fly
> overhead." -- RFC 1925

Thanks, I couldn't quite remember the exact quote.
From: Donald Fisk
Subject: Re: Better habits for programming
Date: 
Message-ID: <3D28991A.FDE1AE7D@enterprise.net>
Antonio Esteban wrote:
> 
> Hi all,
> 
> I'm a newbie with Lisp. I have a question for the Lisp gurus/people of
> this newsgroup: I have read out there that if I learn to program in Lisp,
> I'll achieve better programming habits in other languages like C/C++.
> What do you think about this?

Eric Raymond thinks so, and so do I.   However, there are two
potential problems.

(1) Some other C/C++ programmers, who are used to
programming in the style of

  do action1 which changes state;
  do action2 which changes state;
  do action3 which changes state;

and so on ad nauseam

might find your style of programming strange and difficult to
follow.   You'd be surprised how many programmers have problems
with nested function calls.   (See
http://www.prescod.net/python/IsPythonLisp.html and the thread
in comp.lang.lisp discussing it).   The problem is with those
programmers, rather than a Lisp-influenced C style, I hasten to
add.
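
A tiny, self-contained contrast ( standard functions only ):

;; nested calls, each one consuming the previous return value
(string-upcase (remove #\Space (reverse "l i s p")))   ; => "PSIL"

;; the same result in the action1/action2/action3 style above
(let ((s "l i s p"))
  (setf s (reverse s))
  (setf s (remove #\Space s))
  (setf s (string-upcase s))
  s)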

(2) Once you've learned Lisp, you'll find other languages painful to
work in.   But in the case of C, console yourself with the fact that
it fills a niche not filled by Lisp (except on the LispMs) -- it's
a low level language.

>         --Antonio

-- 
Dalinian: Lisp. Java. Which one sounds sexier?
RevAaron: Definitely Lisp. Lisp conjures up images of hippy coders, drugs,
sex, and rock & roll. Late nights at Berkeley, coding in Lisp fueled by LSD.
Java evokes a vision of a stereotypical nerd, with no life or social skills.
From: Gabe Garza
Subject: Re: Better habits for programming
Date: 
Message-ID: <sn2vjnau.fsf@anubis.kynopolis.org>
Donald Fisk <················@enterprise.net> writes:

> (2) Once you've learned Lisp, you'll find other languages painful to
> work in.   But in the case of C, console yourself with the fact that
> it fills a niche not filled by Lisp (except on the LispMs) -- it's
> a low level language.

I'm more and more starting to view Lisp as a good language even for
"low-level" stuff where I previously used C.  For example, recently I
prototyped an SNMP agent and had to write a system for
encoding/decoding types using a subset of the ASN.1 encodings.  These
are about as low-level as you can get--they specify how to encode
various datatypes down to the bit level in an octet array.  Lisp
actually has more bit-bangy functions than C--LDB and DPB are pretty
nice.  I find this easier to read:

(defconstant +class-offset+ 6)
(defconstant +class-width+ 2)

...
  (ecase (ldb (byte +class-width+ +class-offset+) (aref octets offset))
    (0 :universal)
    (1 :application)
    (2 :context-specific)
    (3 :private))       
...

Than this:

#define UNIVERSAL 0
#define APPLICATION 1
#define CONTEXT_SPECIFIC 2
#define PRIVATE 3
#define CLASS_OFFSET 6
#define CLASS_MASK 0x03
...
(octets[offset] >> CLASS_OFFSET) | CLASS_MASK
...

And they have exactly the same LOC.  CMUCL, at least, can optimize the
above code pretty well.  The shortcomings I have run into using Lisp
for low-level stuff have been in playing well with other (Lisp)
threads/the development environment, and in certain data operations.  For
example, I haven't found a portable way to get at the raw bits of a
floating point number (yes, I know this varies across
architectures--but it seems like it could make sense to define
functions like FLOAT-MANTISSA, FLOAT-RADIX, FLOAT-POWER, FLOAT-SIGN,
that would work on many systems...).

Also, using LDB (or other operators) to store the individual bytes of
a word into an octet-array is admittedly way clunkier than just doing a
bcopy (possibly after a byte swap).
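
I.e. something along these lines ( a sketch, big-endian, names mine ):

;; Store a 32-bit word into four consecutive octets, most significant first.
(defun store-u32-be (word octets offset)
  (dotimes (i 4 octets)
    (setf (aref octets (+ offset i))
          (ldb (byte 8 (* 8 (- 3 i))) word))))
;; (store-u32-be #x12345678 (make-array 4 :element-type '(unsigned-byte 8)) 0)
;; => #(18 52 86 120), i.e. #x12 #x34 #x56 #x78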

I can definitely see where C could be preferred for low-level stuff,
but personally I don't write Lisp off entirely...

Gabe Garza
From: Lieven Marchand
Subject: Re: Better habits for programming
Date: 
Message-ID: <87adp3p4eg.fsf@wyrd.be>
Gabe Garza <·······@ix.netcom.com> writes:


> For example, I haven't found a portable way to get at the raw bits
> of a floating point number (yes, I know this varies across
> architectures--but it seems like it could make sense to define
> functions like FLOAT-MANTISSA, FLOAT-RADIX, FLOAT-POWER, FLOAT-SIGN,
> that would make sense on many systems...).

Take a look at DECODE-FLOAT and INTEGER-DECODE-FLOAT.
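
For instance ( result shown for an implementation with IEEE double-floats ):

(integer-decode-float 1.5d0)
;; => 6755399441055744, -52, 1
;; i.e. 1.5 = 6755399441055744 * 2^-52, with a positive sign.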

-- 
Bored, now.
Lieven Marchand <···@wyrd.be>
From: Roger Corman
Subject: Re: Better habits for programming
Date: 
Message-ID: <3d28e4eb.628004933@news.sf.sbcglobal.net>
On Sun, 07 Jul 2002 20:27:18 GMT, Gabe Garza <·······@ix.netcom.com> wrote:

>above code pretty well.  The shortcomings I have run into using Lisp
>for low-level stuff have been playing well with other (Lisp)
>threads/the development environment and certain data operations.  For
>example, I haven't found a portable way to get at the raw bits of a
>floating point number (yes, I know this varies across
>architectures--but it seems like it could make sense to define
>functions like FLOAT-MANTISSA, FLOAT-RADIX, FLOAT-POWER, FLOAT-SIGN,
>that would make sense on many systems...).

I assume you are referring to some way that INTEGER-DECODE-FLOAT 
and DECODE-FLOAT are not sufficient. It looks to me like the IEEE bits could be
reconstructed from the various standard functions available (assuming the lisp
in question uses IEEE floats). 

If you say this is a limitation of Lisp, I would say neither C nor C++ includes 
a portable way to extract the bits of a floating point number. Java
does include some nice library functions for this.

BTW, I fully agree with your point that Lisp is useful for low-level things. I
think it is great for bit-manipulation, and have written several assemblers and
code generators in lisp, among other things. These are about as low level as you
can get, and I found the elegance, lines of code, simplicity, ease of
development all favored lisp over similar projects I have done in other
languages (C/C++/Java). I like all those languages, and am quite fluent with
them. The macro abstractions of lisp, however, are perfect for
implementing low-level, nitty-gritty tasks using a higher level of
abstraction--without sacrificing run-time performance.

Roger
From: David Combs
Subject: Re: Better habits for programming
Date: 
Message-ID: <ahmcbn$jma$2@reader3.panix.com>
In article <··················@news.sf.sbcglobal.net>,
Roger Corman <·····@corman.net> wrote:

><SNIP>

>BTW, I fully agree with your point that Lisp is useful for low-level things. I
>think it is great for bit-manipulation, and have written several assemblers and
>code generators in lisp, among other things. These are about as low level as you
>can get, and I found the elegance, lines of code, simplicity, ease of
>development all favored lisp over similar projects I have done in other
>languages (C/C++/Java). I like all those languages, and am quite fluent with
>them. The macro abstractions of lisp, however, are perfect for
>implementingturning low-level, nitty-gritty tasks using a higher-level of
>abstraction--without sacraficing run-time performance.

Hey -- how about a quick tutorial on some of
that, including some of your code (if legal
to show).

Thanks!

David
From: Gabe Garza
Subject: Re: Better habits for programming
Date: 
Message-ID: <k7o6kiha.fsf@anubis.kynopolis.org>
Gabe Garza <·······@ix.netcom.com> writes:
> example, I haven't found a portable way to get at the raw bits of a
> floating point number (yes, I know this varies across
> architectures--but it seems like it could make sense to define
> functions like FLOAT-MANTISSA, FLOAT-RADIX, FLOAT-POWER, FLOAT-SIGN,
> that would make sense on many systems...).

Paul Foley pointed out in an email--and Roger Corman pointed out in
another followup--that such functionality already exists:

FLOAT-RADIX, FLOAT-SIGN, and INTEGER-DECODE-FLOAT are all defined in
the spec and can be used to "get at the raw bits" portably (where the
"raw bits" may or may not be the actual raw bits, but provide the
information that you probably wanted to get from the raw bits...).

Gabe Garza
From: apinkus
Subject: Re: Better habits for programming
Date: 
Message-ID: <3D315C95.B8C98951@xs4all.nl>
> 
> (defconstant +class-offset+ 6)
> (defconstant +class-width+ 2)
> 
> ...
>   (ecase (ldb (byte +class-width+ +class-offset+) (aref octets offset))
>     (0 :universal)
>     (1 :application)
>     (2 :context-specific)
>     (3 :private))
> ...
> 
> Then this:
> 
> #define UNIVERSAL 0
> #define APPLICATION 1
> #define CONTEXT_SPECIFIC 2
> #define PRIVATE 3
> #define CLASS_OFFSET 6
> #define CLASS_MASK 0x03
> ...
> (octets[offset] >> CLASS_OFFSET) | CLASS_MASK
> ...
> 

Well, in C/C++ you could also use a macro for this?

	#define BITS(byte,offset,width) (((byte)>>(offset)) & ((1L<<(width))-1))
	...
	enum SomeTypes
	{
	  UNIVERSAL=0,
	  APPLICATION,
	  CONTEXT_SPECIFIC,
	  PRIVATE
	};
	
	#define CLASS_OFFSET 6
	#define CLASS_WIDTH 2
	...
	BITS(octets[offset], CLASS_OFFSET, CLASS_WIDTH)

Incidentally, I noticed a 'bug' in your code, the '|' should be a '&'?

I'm not saying that Lisp is inferior, just that C/C++ is not inferior
in this case, per se, as far as readability is concerned. Then again
that is a matter of taste ;-)

Ayal
From: Hannah Schroeter
Subject: Re: Better habits for programming
Date: 
Message-ID: <agrjt9$o1i$1@c3po.schlund.de>
Hello!

In article <·················@xs4all.nl>, apinkus  <·······@xs4all.nl> wrote:
>[...]

>Well, in c/c++ you could also use a macro for this?

>	#define BITS(byte,offset,width) (((byte)>>offset) & ((1L<<width)-1))
>	...
>	typedef enum SomeTypes
>	{
>	  UNIVERSAL=0,
>	  APPLICATION,
>	  CONTEXT_SPECIFIC,
>	  PRIVATE
>	};

>	#define CLASS_OFFSET 6
>	#define CLASS_WIDTH 2
>	...
>	BITS(octets[offset], CLASS_OFFSET, CLASS_WIDTH)

Of course, you could just as well use an inline function.
So it's not really exciting macro usage, especially if you consider
what you can do with Lisp macros.

>[..]

Kind regards,

Hannah.
From: Paul
Subject: Re: Better habits for programming
Date: 
Message-ID: <3D2A04D7.8070003@hotmail.com>
Antonio Esteban wrote:
> Hi all,
> 
> I'm a newbie with Lisp. I have a question for the Lisp gurus/people of
> this newsgroup: I have read out there that if I learn to program in Lisp,
> I'll achieve better programming habits in other languages like C/C++.
> What do you think about this?
> 
> Thanks in advance,
> 
> 	--Antonio
> 

Like most previous replies I think this is true, and I agree with the 
given arguments.

I'm not sure, however, how much it will improve your habits in other 
languages like C or C++. A lot of people gave a warning that you may not 
want to go back to another language, and part of the reason is that a 
lot of solutions in Lisp are just not possible in C or C++ or any other 
language. When I need to do some programming in a language other than 
Lisp, and I am struggling with something, I often think how easily it could 
have been solved in Lisp. Different languages require different 
solutions and habits, and they cannot always be transferred between 
languages.

Another way to improve your programming habits may be to study more 
theoretical aspects. Lately I have spent some time studying semantics 
and designed a small programming language. This also improved my 
programming habits. (By the way: I implemented the language in Lisp, of 
course.)
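
To give a flavour of the exercise ( a made-up three-form toy language,
much smaller than the one mentioned above ):

(defun toy-eval (form env)
  (cond ((numberp form) form)
        ((symbolp form) (cdr (assoc form env)))
        ((eq (first form) 'add)
         (+ (toy-eval (second form) env) (toy-eval (third form) env)))
        ((eq (first form) 'let1)          ; (let1 var value body)
         (toy-eval (fourth form)
                   (acons (second form) (toy-eval (third form) env) env)))
        (t (error "Unknown form: ~S" form))))

;; (toy-eval '(let1 x 2 (add x 40)) '()) => 42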

Paul
From: David Combs
Subject: Re: Better habits for programming
Date: 
Message-ID: <ahmcli$jma$3@reader3.panix.com>
In article <················@hotmail.com>, Paul  <········@hotmail.com> wrote:

><SNIP>

>Another way to improve your programming habits may be to study more 
>theoretical aspects. Lately I have spent some time studying semantics 
>and designed a small programmng language. This also improved my 
>programming habits. (by the way: I implemented the language in Lisp of 
>course)


You too -- how about a tutorial on how you
implemented this language?

THANKS!

David Combs