From: Joerg-Cyril Hoehle
Subject: getting the right optimize declarations for CMU
Date: 
Message-ID: <3hdo2d$bqa@eurybia.rz.uni-konstanz.de>
Hi,

we're wondering how to set (declare (optimize #)) and
(optimize-interface #) in CMU CL, both for software we're working on and
for software we get from others and would never care to debug.

We're using CLM and GINA and would like to compile with the optimization
level set very high and the debug level set very low. However, as our
students write applications using GINA, it's sometimes useful to get a
backtrace and see which functions are called with which arguments (I
don't find #<unavailable arg> informative at all). Furthermore, I fear
that if I set debug below 1.0, the compiler will not remember function
type information and will in effect produce worse code than with a debug
level of 1.0. On the other hand, such type information is held in
memory (or in the huge core file) even if it's never looked at.

How safe is it to set a lower debug level for subsystems like CLM,
GINA or others? Can this lead to core dumps (instead of dropping into
the debugger)?

Do people have any suggestions about how to set the various optimize
qualities for use-only systems like CLM and for the system currently
being developed? Does it really make a large difference to the size of
the core and to execution speed?

Thanks for any help,
 	Joerg Hoehle.
······@inf-wiss.uni-konstanz.de
From: Rob MacLachlan
Subject: Re: getting the right optimize declarations for CMU
Date: 
Message-ID: <3i2ldp$ep@cantaloupe.srv.cs.cmu.edu>
In article <··········@eurybia.rz.uni-konstanz.de> ······@inf-wiss.uni-konstanz.de (Joerg-Cyril Hoehle) writes:
>, it's sometimes interesting to get a
>   backtrace and see what functions are called with what arguments (I
>   don't find #<unavailable arg> informative at all). 

This is caused by speed = 3 with debug /= 3.  It is one of the few effects of
speed = 3 as opposed to merely speed > safety.
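
For instance, a policy along these lines (the exact values are only an
illustration) keeps arguments visible in backtraces, since speed stays
below 3:

(declaim (optimize (speed 2) (safety 1) (debug 2)))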

>Furthermore, I fear
>   that if I set debug below 1.0, the compiler will not remember function
>   type information and in effect produce worse code than with a debug
>   level of 1.0.

Probably not a big effect.  Note also that this only pertains to inferred type
information, not to explicit ftype declarations.
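
For example, an explicit declaration like the following (the function
name here is made up purely for illustration) is remembered regardless
of the debug level:

(declaim (ftype (function (fixnum fixnum) fixnum) add-counters))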

>   How safe is it to set a lower debug level for subsystems like CLM,
>   GINA or others? Can this lead to core dumps (instead of throwing into
>   the debugger)?

Completely "safe".

>   Do people have any suggestion about how to set the various optimize
>   qualities for use-only systems like CLM and for the system currently
>   being developed? Does it indeed change the size of the core and
>   execution speed largely?

Reducing debug below 2 has a big effect on size.  Setting safety to 0 has a big
effect on space and speed.  Definitely check out the optimize-interface and
context-declaration features, which allow you to selectively preserve safety
and debug info on exported interfaces.  This gives good debuggability with much
less speed/space penalty.  The example of context declarations in the CMU CL
manual is the one actually used in the CMU CL build.  I would guess that
CLM/Gina would also benefit from such context-sensitive declarations.
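
As a rough sketch (the values are only an illustration, and this assumes
optimize-interface can be proclaimed with declaim just like optimize), a
use-only library could be given fast, unsafe guts but safe, debuggable
entry points:

(declaim (optimize (speed 3) (safety 1) (debug 1))
         (ext:optimize-interface (safety 2) (debug 2)))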

In article <······················@bitburg.bbn.com>,
Ken Anderson <········@bitburg.bbn.com> wrote:

>I do not recommend global use of high optimization settings, such as
>
> (optimize (speed 3) (safety 0) (debug 0))
>
>even in "use-only libraries", because if the library is reasonably
>optimized to begin with, it is likely not to help much (2-3%, say).

In CMU CL, speed = 3 and debug = 0 are usually unnecessarily extreme, but
safety = 0 is often useful, especially inside functions.  CMU CL is over 25meg
when compiled with the default policy, more like 17meg with our tweaked policy
(which is still quite safe as far as users are concerned.)
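
For the "inside functions" case, the usual idiom is a local declaration
(the function below is a made-up illustration); at safety 0 the type
declarations are trusted rather than checked, so the inner loop compiles
to unchecked array accesses:

(defun sum-floats (v)
  (declare (type (simple-array single-float (*)) v)
           (optimize (speed 3) (safety 0)))
  (let ((sum 0f0))
    (declare (single-float sum))
    (dotimes (i (length v) sum)
      (incf sum (aref v i)))))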

>Also,
>you probably don't want the top level library interface to have this
>optimization setting because argument number checking would not be done
>which would cause strange bugs for your students.

This is one of the advantages of optimize-interface and context declarations.
Optimize-interface allows a higher level of checking for function interfaces
(as opposed to function guts.)  Context-declarations provide a simple "expert
system" for deciding what compilation policy to use based on whether a
definition is exported, whether it is a function or a macro, if the name has
some affix, etc.

Here's how CLX is compiled:
(with-compilation-unit
    ("target:compile-clx.log"
     :optimize
     '(optimize (debug #-small 2 #+small .5) 
		(speed 2) (inhibit-warnings 2)
		(safety #-small 1 #+small 0))
     :optimize-interface
     '(optimize-interface (debug .5))
     :context-declarations
     '(((:and :external :global)
	(declare (optimize-interface (safety 2) (debug 1))))
       ((:and :external :macro)
	(declare (optimize (safety 2))))
       (:macro (declare (optimize (speed 0))))))
  (compile-file ...

The :small feature controls whether to compile fully safe and debuggable, or to
compile with "adequate" safety and debugging (as distributed.)  The
significance of debug .5 is that it suppresses arg documentation while still
allowing arg display in a backtrace.  speed 0 indicates byte compilation (of
macros.)  Context declarations are used to compile external functions with arg
checking and arg types/documentation.  External macros are compiled with safe
guts as well.

  Rob