From: ken yip
Subject: floating point precision
Date: 
Message-ID: <1992Aug13.213615.28011@cs.yale.edu>
How can one specify the floating point precision of a numerical 
routine?  I was writing an LU-decomposition for solving linear equations.  
To see the effect of precision on accuracy, I want to run the code in 
short-float, double-float, and long-float.  Is there a simple way to 
do this short of explicitly declaring every single variable in the
program and all the functions it calls?  I am using Lucid.

From: Barry Margolin
Subject: Re: floating point precision
Date: 
Message-ID: <16fnshINN7db@early-bird.think.com>
In article <······················@cs.yale.edu> ·······@CS.YALE.EDU (ken yip) writes:
>How can one specify the floating point precision of a numerical 
>routine?

All the mathematical functions in CL are generic, and will operate on any
floating point types, returning results of the same type (if multiple
arguments of different types are provided, it will coerce them all to the
largest one).  So, if you want to see the result of performing the
computation in double-float, provide double floats as the initial input.
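
For example, in an implementation with distinct single- and double-float
types you would see something like:

    (* 1.5f0 2.0f0)   ;=> 3.0f0, a SINGLE-FLOAT
    (* 1.5f0 2.0d0)   ;=> 3.0d0, coerced up to DOUBLE-FLOAT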
-- 
Barry Margolin
System Manager, Thinking Machines Corp.

······@think.com          {uunet,harvard}!think!barmar
From: ken yip
Subject: Re: floating point precision
Date: 
Message-ID: <1992Aug14.175857.7314@cs.yale.edu>
In article <············@early-bird.think.com>, you write:
|> In article <······················@cs.yale.edu> ·······@CS.YALE.EDU (ken yip) writes:
|> >How can one specify the floating point precision of a numerical 
|> >routine?
|> 
|> All the mathematical functions in CL are generic, and will operate on any
|> floating point types, returning results of the same type (if multiple
|> arguments of different types are provided, it will coerce them all to the
|> largest one).  So, if you want to see the result of performing the
|> computation in double-float, provide double floats as the initial input.
|> -- 
|> Barry Margolin

That's what I hope is the case.  But apparently in the Lucid implementation,
all floats are automatically coerced into double-float (51-bit precision).
E.g., (* 1.2 3.4) will still return a float with 51-bit precision.
Correct me if I am wrong on this.  So my questions are:
(1) Can one emulate 13-bit or 24-bit precision in Lucid?
(2) Which Lisp implementation actually does what Barry said?
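
For what it's worth, a portable way to check what I'm actually getting
would be

    (float-digits (* 1.2 3.4))

which returns the significand width (in bits, for binary floats) of
whatever float type the multiplication produced.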

From: Barry Margolin
Subject: Re: floating point precision
Date: 
Message-ID: <16igqkINNo4h@early-bird.think.com>
In article <·····················@cs.yale.edu> ·······@CS.YALE.EDU (ken yip) writes:
>That's what I hope is the case.  But apparently in the Lucid implementation,
>all floats are automatically coerced into double-float (51-bit precision).
>E.g., (* 1.2 3.4) will still return a float with 51-bit precision.
>Correct me if I am wrong on this.  So my questions are:
>(1) Can one emulate 13-bit or 24-bit precision in Lucid?
>(2) Which Lisp implementation actually does what Barry said?

This depends on the architecture.  LCL/SPARC only has one float type, and
it's double precision (with one special exception: arrays of single
floats are supported, but their elements are converted to double floats
when they're extracted).  I think LCL/VAX only has single precision
floats.  Common Lisp permits an implementation to provide any subset of
the four float types, and Lucid generally only implements one of them.

In an implementation that doesn't have all four float types, some of the
type names are synonyms.  In Lucid, they are all synonyms for FLOAT.
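
A portable way to see what an implementation actually provides is to
ask about the four kinds of float literals:

    (mapcar #'type-of      '(1.0s0 1.0f0 1.0d0 1.0l0))
    (mapcar #'float-digits '(1.0s0 1.0f0 1.0d0 1.0l0))

With all four types implemented you get four distinct answers; in Lucid
you should see the same type name and the same significand width for
all of them.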

Symbolics implements all four floating point types.

I don't think there's an easy way to get an implementation that only has
one floating point type to emulate the others.  Complain to Lucid.
-- 
Barry Margolin
System Manager, Thinking Machines Corp.

······@think.com          {uunet,harvard}!think!barmar
From: Kevin Layer
Subject: Re: floating point precision
Date: 
Message-ID: <LAYER.92Aug17094840@ice.Franz.COM>
For what it's worth, Allegro CL supports two types of floats
(corresponding to the IEEE single and double floats).
--
-----
Kevin Layer, Franz Inc.         1995 University Avenue, Suite 275
·····@Franz.COM (internet)      Berkeley, CA  94704  USA
Phone: (510) 548-3600           FAX: (510) 548-8253
From: Bruce R. Miller
Subject: Re: floating point precision
Date: 
Message-ID: <2922805766@ARTEMIS.cam.nist.gov>
In article <············@early-bird.think.com>, Barry Margolin writes:
> In article <······················@cs.yale.edu> ·······@CS.YALE.EDU (ken yip) writes:
> >How can one specify the floating point precision of a numerical 
> >routine?
> 
> All the mathematical functions in CL are generic, and will operate on any
> floating point types, returning results of the same type (if multiple
> arguments of different types are provided, it will coerce them all to the
> largest one).  So, if you want to see the result of performing the
> computation in double-float, provide double floats as the initial input.

That's completely true --- as far as it goes.  

But you do pay a performance price for generic arithmetic, especially
in inner loops.  This is complicated by the numerical constants that
appear in any non-trivial numerical code.

Even in a system with efficient dispatch which ignores type
declarations, the cost of repeatedly converting rational constants to
floats is significant (I don't have figures handy, though).  If you code
them as single-precision constants, you lose accuracy when you supply
double-float inputs.  On the other hand, double-float constants infect
the results: you'll get double-float answers, which are expensive but
most likely no more accurate than your single-float results!

Of course, if run time is not a concern, then code it generically,
using rational constants.
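
To make this concrete, here are three ways of writing the same toy
function (a made-up example, not code from my program), differing only
in how the constant is coded:

    ;; Rational constant: stays generic and adapts to the input's
    ;; precision, but pays a rational->float conversion on each call.
    (defun scale-generic (x) (* 1/3 x))

    ;; Single-float constant: cheap, but only single-precision accuracy
    ;; even when X is a double-float.
    (defun scale-single (x) (* 0.33333334f0 x))

    ;; Double-float constant: infects the result, so even single-float
    ;; inputs come back as (expensive) double-float answers.
    (defun scale-double (x) (* 0.3333333333333333d0 x))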

I recently asked for advice here on a similar problem and got some good
help from Barry, Chris McConnel, Bob Kern and others (thanks!).

The most elegant solution proposed involved code walkers.  Unfortunately,
doing it right requires knowledge of _all_ the functions that will be
encountered (i.e., how their output types depend on their input types).
It could probably be piggybacked easily onto an inference scheme and
database like Python's, but I didn't feel like developing the database
myself.

My solution was to develop a macro which extends the abstraction of
generic arithmetic.  It checks the types of some of the args and
branches to the duplicate of the body which has been `optimized' for
that type.  There's a second macro, used within the outer one, which
coerces a number or named constant to the appropriate type.
The code that you write using this is a bit on the verbose side :< but
it does work.
[My version does no declarations, though.]
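
Something along these lines, to give the flavor (a minimal sketch; the
names WITH-FLOAT-DISPATCH and IN-PRECISION are made up for this post,
and it handles only two float types):

    (defmacro with-float-dispatch ((var) &body body)
      ;; Duplicate BODY once per float type and branch at run time on
      ;; the type of VAR.  Within each copy, (IN-PRECISION c) coerces
      ;; the constant C to the selected type; a decent compiler will
      ;; fold the COERCE of a literal constant at compile time.
      `(etypecase ,var
         (single-float
          (macrolet ((in-precision (c) `(coerce ,c 'single-float)))
            ,@body))
         (double-float
          (macrolet ((in-precision (c) `(coerce ,c 'double-float)))
            ,@body))))

    ;; Usage: the constant takes on whatever precision X has.
    (defun third-of (x)
      (with-float-dispatch (x)
        (* (in-precision 1/3) x)))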

bruce
······@cam.nist.gov
From: Jeff Dalton
Subject: Re: floating point precision
Date: 
Message-ID: <7215@skye.ed.ac.uk>
In article <······················@cs.yale.edu> ·······@CS.YALE.EDU (ken yip) writes:
>
>How can one specify the floating point precision of a numerical 
>routine?  I was writing an LU-decomposition for solving linear equations.  
>To see the effect of precision on accuracy, I want to run the code in 
>short-float, double-float, and long-float.  Is there a simple way to 
>do this short of explicitly declaring every single variable in the
>program and all the functions it calls?  I am using Lucid.

So you win!  (I think.)  In Lucid, you can use a type-reduce
declaration to say all numbers are, say, single-floats.
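
If I remember the syntax correctly (check the Lucid documentation to be
sure), it looks something like this:

    ;; Lucid-specific, not portable CL: treat every NUMBER in this
    ;; function as a SINGLE-FLOAT.
    (defun scale (x y)
      (declare (type-reduce number single-float))
      (* x y))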