From: ·······@yahoo.com
Subject: Smooth Zooming like in Google Earth / NASA World Wind
Date: 
Message-ID: <1142131143.190890.193520@p10g2000cwp.googlegroups.com>
Given, as your "data base", a full-resolution map or satellite image of
some part of the earth, is there an API (in Lisp, or failing that, in
any language) to do the kind of smooth zooming that you see in Google
Earth and NASA World Wind? Ideally, the API would be usable in Unix and
not be Windows-only.

I assume that in a brute-force (non-smooth) method, zooming would be
implemented as a sequence of gradual changes in the current-view. For
each current-view in the sequence, you would generate a
screen-resolution (e.g., 800 x 600) version of the current-view by
cropping the full-resolution (e.g., 100,000 x 100,000) base image,
resampling it, and displaying it. But this method would probably not
result in a smooth zoom.
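For concreteness, the brute-force method described above might look like the following (a minimal Python sketch, not from the original post: the image is a nested list of pixel values and the resampling is plain nearest-neighbour, which is exactly the naive approach that won't look smooth):

```python
# Brute-force zoom step: crop a window from the full-resolution image and
# resample it to screen resolution with nearest-neighbour sampling.
def resample_view(image, x0, y0, w, h, out_w, out_h):
    """Crop the w x h window at (x0, y0) and scale it to out_w x out_h."""
    return [
        [image[y0 + (j * h) // out_h][x0 + (i * w) // out_w]
         for i in range(out_w)]
        for j in range(out_h)
    ]

# Tiny 4 x 4 "image" of pixel values; zoom into the top-left 2 x 2 region,
# blown up to a 4 x 4 "screen" (each source pixel becomes a 2 x 2 block).
img = [[10 * y + x for x in range(4)] for y in range(4)]
frame = resample_view(img, 0, 0, 2, 2, 4, 4)
```

A smooth zoom would call this once per animation frame with a slightly different window each time; the aliasing from the single-sample resampling is what the rest of the thread tries to avoid.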

From: Frank Buss
Subject: Re: Smooth Zooming like in Google Earth / NASA World Wind
Date: 
Message-ID: <s62f74ieotl2.ncivdzf3stze.dlg@40tude.net>
·······@yahoo.com wrote:

> I assume that in a brute-force (non-smooth) method, zooming would be
> implemented as a sequence of gradual changes in the current-view. For
> each current-view in the sequence, you would generate a
> screen-resolution (e.g., 800 x 600) version of the current-view by
> cropping the full-resolution (e.g., 100,000 x 100,000) base image,
> resampling it, and displaying it. But this method would probably not
> result in a smooth zoom.

Another idea: use something like mip-mapping. If you have a 128,000 x
128,000 pixel image and you want to display it in a 1,000 x 1,000
viewing area, create 5461 1,000 x 1,000 versions:

1 image that resamples all pixels of the original image
4 images, each resampling a 64,000 x 64,000 region of the original image
16 images, each resampling a 32,000 x 32,000 region of the original image
64 images, each resampling a 16,000 x 16,000 region of the original image
256 images, each resampling an 8,000 x 8,000 region of the original image
1024 images, each resampling a 4,000 x 4,000 region of the original image
4096 images, each resampling a 2,000 x 2,000 region of the original image

This needs a hard disk that can store images with altogether
114,688,000,000 pixels, which is only 7 times the size of the original
image. For viewing it at zoom level 1 in a 1,000 x 1,000 view area, use
the first image. When zooming, you'll need only 4 images for every zoom
level and position (if it doesn't matter that a pixel is scaled up to
4 x 4 pixels, then you'll need only one 1,000 x 1,000 image for every
zoom level and position, I think; on the other hand, if you have lots of
memory and you want single-pixel precision for a 1,000 x 1,000 viewing
area, one 4,000 x 4,000 image for each zoom level is sufficient). I
assume Google uses a similar concept.
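As a quick sanity check on the tile counts in the list above (a Python sketch; the function and its parameters are illustrative, not from the post): the region covered per 1,000 x 1,000 tile halves at each level, from the full 128,000 pixels per side down to 2,000.

```python
# Enumerate the pyramid levels described above: at each level the source
# region per tile halves, so the number of tiles quadruples.
def pyramid_levels(full_side=128_000, finest_region=2_000):
    levels = []
    region = full_side
    while region >= finest_region:
        tiles_per_side = full_side // region
        levels.append((region, tiles_per_side ** 2))
        region //= 2
    return levels

tile_counts = [n for _, n in pyramid_levels()]
total_tiles = sum(tile_counts)
```

This reproduces the per-level counts 1, 4, 16, 64, 256, 1024, 4096 and their total of 5461 tiles.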

You can use http://www.ferzkopp.net/Software/SDL_gfx-2.0/ for fast,
platform-independent zooming, and SDL itself for image loading.
http://www.lispbuilder.org has CFFI bindings for SDL and SDL_gfx; I've
tested them on Windows and Debian Linux, and there is a step-by-step
tutorial on how to use it on Windows, at least for the SDL part. See the
mailing-list archive from March (
http://www.lispniks.com/pipermail/application-builder/2006-March/thread.html
) for the SDL_gfx part, because the Lispbuilder project is a work in
progress.

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Kaz Kylheku
Subject: Re: Smooth Zooming like in Google Earth / NASA World Wind
Date: 
Message-ID: <1142271780.521761.81420@i39g2000cwa.googlegroups.com>
Frank Buss wrote:
> ·······@yahoo.com wrote:
>
> > I assume that in a brute-force (non-smooth) method, zooming would be
> > implemented as a sequence of gradual changes in the current-view. For
> > each current-view in the sequence, you would generate a
> > screen-resolution (e.g., 800 x 600) version of the current-view by
> > cropping the full-resolution (e.g., 100,000 x 100,000) base image,
> > resampling it, and displaying it. But this method would probably not
> > result in a smooth zoom.
>
> Another idea: use something like mip-mapping. If you have a 128,000 x
> 128,000 pixel image and you want to display it in a 1,000 x 1,000
> viewing area, create 5461 1,000 x 1,000 versions:
>
> 1 image that resamples all pixels of the original image
> 4 images, each resampling a 64,000 x 64,000 region of the original image
> 16 images, each resampling a 32,000 x 32,000 region of the original image
> 64 images, each resampling a 16,000 x 16,000 region of the original image
> 256 images, each resampling an 8,000 x 8,000 region of the original image
> 1024 images, each resampling a 4,000 x 4,000 region of the original image
> 4096 images, each resampling a 2,000 x 2,000 region of the original image
>
> This needs a hard disk that can store images with altogether
> 114,688,000,000 pixels, which is only 7 times the size of the original image.

Why would you store low-resolution images that are blown up to the same
size as the original? They are needed when the user zooms out, and so
will have to be shrunk back down anyway. You never show the user "fat
pixels", so why store them?

Each successive lower-resolution image needs only 25% of the space of
the previous one. So the total space needed is approximated by the sum
of the geometric series 1 + 1/4 + 1/16 + 1/64 + ..., which converges to
1 1/3.

Let's whip out that formula for geometric series:

(/ (- 1 0.25))  -> 1.3333334


In other words, you need only on the order of 33% more space for a
series of lower-resolution versions of the image, when each successive
one is half the linear size.

If you can live with doubling the space requirements, you can store
finer increments, whereby the pixel count goes up by a factor of two
from one level to the next, and each linear dimension by the square root
of two.

If you're willing to spend 8 times the space of the base image, you can
get linear increments of about 0.9354 between levels, corresponding to a
7/8 ratio in the area.
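The tradeoff above has a simple closed form: if each pyramid level keeps a fraction r of the previous level's pixel count, the geometric series gives a total of 1/(1 - r) times the base image. A Python check of the three cases mentioned (the function name is just illustrative):

```python
import math

# Total storage of a pyramid where each level keeps `area_ratio` of the
# previous level's pixel count: geometric series 1 + r + r^2 + ... = 1/(1 - r).
def storage_multiplier(area_ratio):
    return 1 / (1 - area_ratio)

classic = storage_multiplier(0.25)   # half linear size per level: ~1.33x space
doubled = storage_multiplier(0.5)    # pixel count halves per level: 2x space
fine = storage_multiplier(7 / 8)     # 7/8 area ratio per level: 8x space
step = math.sqrt(7 / 8)              # linear step per level: ~0.9354
```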

> For viewing it at zoom level 1 in a 1,000 x 1,000 view area, use the first
> image. When zooming, you'll need only 4 images for every zoom level and

I.e. in this scenario, you start completely zoomed in, using a small
portion of the highest-resolution image. As the user zooms out, you show
slightly more and more of that image. This is where your MIP map starts
to kick in: because you have these pre-shrunk images, you can use a
completely naive algorithm for doing the shrinking. You simply map
single-pixel samples of increasing portions of the image to the
viewport, ignoring issues of aliasing, until you reach a certain
crossover threshold. Then you start fetching data from the next smaller
resolution.
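One way to implement that crossover (a hedged Python sketch, not from the post; it assumes the common power-of-two layout where level k holds the image at 1/2^k linear scale):

```python
import math

def mip_level(zoom, num_levels):
    """Pick the pyramid level for a zoom factor (1.0 = full resolution, 1:1).

    Level k stores the image at 1/2**k linear scale; we choose the smallest
    stored scale that is still >= zoom, so stored pixels are only ever
    shrunk slightly, never blown up into "fat pixels".
    """
    if zoom >= 1.0:
        return 0
    k = math.floor(math.log2(1 / zoom))
    return min(k, num_levels - 1)
```

For example, at zoom 0.6 the full-resolution level is still shrunk naively, while at 0.5 the viewer crosses over to the pre-shrunk half-size level.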

> position (if it doesn't matter that a pixel is scaled up to 4 x 4 pixels,
> then you'll need only one 1,000 x 1,000 image for every zoom level and
> position, I think; on the other hand, if you have lots of memory and
> you want single-pixel precision for a 1,000 x 1,000 viewing area, one
> 4,000 x 4,000 image for each zoom level is sufficient). I assume Google
> uses a similar concept.

I suspect that Google actually has independent satellite pictures for
the different resolutions, at least in some cases.

Say you are looking at a fairly large chunk of a country. You may get a
cohesive picture of the whole thing; it looks like one shot from one
satellite. Then when you zoom in, certain portions, notably densely
populated urban areas, have significantly more detail. Those detailed
images differ from the surrounding ones: there are differences in color
and lighting. Obviously, they are different shots.

So there is a kind of resolution patchwork: a combination of scaling,
but with the flexibility to override the data with separate photographs.
From: Jens Axel Søgaard
Subject: Re: Smooth Zooming like in Google Earth / NASA World Wind
Date: 
Message-ID: <4413f760$0$38618$edfadb0f@dread12.news.tele.dk>
·······@yahoo.com wrote:
> Given, as your "data base", a full-resolution map or satellite image of
> some part of the earth, is there an API (in Lisp, or failing that, in
> any language) to do the kind of smooth zooming that you see in Google
> Earth and NASA World Wind? Ideally, the API would be usable in Unix and
> not be Windows-only.
> 
> I assume that in a brute-force (non-smooth) method, zooming would be
> implemented as a sequence of gradual changes in the current-view. For
> each current-view in the sequence, you would generate a
> screen-resolution (e.g., 800 x 600) version of the current-view by
> cropping the full-resolution (e.g., 100,000 x 100,000) base image,
> resampling it, and displaying it. But this method would probably not
> result in a smooth zoom.

Declan Butler has written a very nice essay on how Google Earth works:

<http://www.nature.com/news/2006/060213/full/060213-7.html>

-- 
Jens Axel Søgaard