From: Mark Tarver
Subject: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192443828.684280.269020@e9g2000prf.googlegroups.com>
This gathers together some threads that arose from my comp.lang.lisp
post 'On the Strange Weakness of Graphical User Programming Languages'
- threads that link up to other essays I've written.

Really I think the problem is that programming has outstripped CS
theory for a long time.  We have these theories of computability
dating back to the 1930s developed by people who didn't even have
access to a computer. Elsewhere I noted in 'Lisp for the C21' (L21)

www.lambdassociates.org/lC21.htm

that there is this vaguer but equally important idea of computational
adequacy.  When I wrote that piece I didn't see how important it was
to be able to define that concept in a clear and elegant way to get
clear and elegant solutions to modern systems programming problems.
I do now.

The truth is that our formal 1930s theory has long been left behind by
the pace of development of commercial software.  It's odd, really, that
this has not been a focus at university level for longer or that
university CS has not made a bigger issue out of it. I think that part
of the problem is that universities have gone into decline at the same
time that this explosion in innovation has taken place.

http://www.lambdassociates.org/blog/decline.htm

Professors in CS were too busy rushing to think this through and come
up with an organised response.  The whole point of universities was to
create an environment where people were free to think unconstrained by
the horizons of deadlines and paperwork.  By commercialising
the universities, putting people under the same pressure to produce
that you would find in a software house, we collapsed their horizons
and introduced the same kind of short-term thinking that characterises
much of the software industry.

The problem now is that we have a generation of computer tools based
on 'let's stick things together and see how it flies' that are at least
good enough for most people to want to stick with and patch as
needed.  They are powerful for doing certain things, and powerful
enough that the Right Thing will have a hard time getting
established.

What I think is that this situation will continue for the current
generation of computer tools until later in this century.  At that
point our ambitions will exceed the power of our tools to deliver and
we'll probably have to develop those missing theories and take them
seriously.  I think in particular that the rather sinister PAL demoed
in the thread

http://www.radar.cs.cmu.edu/image.jsp?eid=75

will require planning technology and concise formal models of what I
dimly term computational adequacy. What we have got now will not
deliver PAL.

Of far less importance, but important to me, is my own response to this
and L21.  I think for me, speaking purely personally and not for
anybody else, this is the end of the road in CS.  I stepped out of the
rat race and lived unemployed in order to reacquire the peace of mind
necessary to solve the problems implicit in Qi.  In a decent
university system it would not have been necessary to do this, but
what we have in the UK is far from that.  Qi is, for me, the Right
Thing.  But doing the Right Thing beyond Qi requires a scale of
resource that I simply do not have.  It was however a great privilege
to have been on this ride for so long.

Mark

From: ···············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192455690.492727.26770@v23g2000prn.googlegroups.com>
On 15 Oct, 11:23, Mark Tarver <··········@ukonline.co.uk> wrote:

> The truth is that our formal 1930s theory has long been left behind by
> the pace of development of commercial software.  It's odd, really, that
> this has not been a focus at university level for longer or that
> university CS has not made a bigger issue out of it. I think that part
> of the problem is that universities have gone into decline at the same
> time that this explosion in innovation has taken place.
>
> http://www.lambdassociates.org/blog/decline.htm

One thing I have noticed, especially with cross-posted messages
between c.l.l and c.l.f, is that they tend to provoke "real world vs.
academic" arguments which rather supports your view that nothing will
change.

For example, I find it hard to listen to academics at conferences
telling me how I should be writing software when they've never written
a real program for end users before.

Similarly I find it infuriating listening to industry programmers
telling me how amazing these 'new' anonymous functions are in C#.

Maybe large corporations are not the best placed to be bringing
improved techniques to the attention of the working programmer. But I
don't have a suggestion for an alternative, especially given the bleak
picture you paint of academia in your essays.

--
Phil
http://phil.nullable.eu/
From: C Y
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192462864.131998.115080@i13g2000prf.googlegroups.com>
On Oct 15, 9:41 am, ···············@gmail.com wrote:

> Maybe large corporations are not the best placed to be bringing
> improved techniques to the attention of the working programmer. But I
> don't have a suggestion for an alternative, especially given the bleak
> picture you paint of academia in your essays.

Indeed - I too have long been troubled by the increasing trend of
commercialization in higher education, though I have experience only
with the student side of the picture.

In simplest terms, a proper university research system can exist only
when there exist funding sources that are not constrained by practical
results achieved.  The constraint must be quality of work performed,
which is a very different metric.  You don't want poor researchers,
but you DO want researchers free to pursue directions that have no
apparent hope of return on investment.  Only in that fashion can
really NEW directions be found.  Knowledge must first be valued in and
of itself, as an end.  Oftentimes it becomes a means as well, but at
the university research level that should not be assumed or even
desired.  That's what the commercial world is for.

The trend of universities patenting inventions as a source of income
is an example of where things are going off course - that's not the
point of a university.  Patents are a monopoly, a short term control
of the use of knowledge, which runs counter to every pure academic
instinct I ever encountered.  When you want to LEARN, a patent is
nothing but a roadblock.  Indeed, I always thought the freely released
knowledge from universities was a healthy counter to the commercial
behavior (understandable in a commercial setting) of securing
advantage via patents.  The erosion of this system is deeply
troubling.

Maybe this is a deeper reflection of where society is headed - with
more and more people on the planet and constant competitive pressure
in a global economy, resource constraints become more severe with
time.  In that environment all non-essentials must be trimmed to
survive, with "non-essential" being roughly defined as anything that
does not immediately and profitably support the short term goal.
There are no long term goals - focus is narrowed to survival.  Perhaps
this is why the focus of universities is shifting - larger pressures
(i.e. funding sources) are forcing them to assume a form that is
immediately useful or face starvation, as a consequence of the
pressure they themselves are feeling.
From: Mark Tarver
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192483195.934599.13170@i38g2000prf.googlegroups.com>
On 15 Oct, 16:41, C Y <···········@yahoo.com> wrote:
> On Oct 15, 9:41 am, ···············@gmail.com wrote:
>
> > Maybe large corporations are not the best placed to be bringing
> > improved techniques to the attention of the working programmer. But I
> > don't have a suggestion for an alternative, especially given the bleak
> > picture you paint of academia in your essays.
>
> Indeed - I too have long been troubled by the increasing trend of
> commercialization in higher education, though I have experience only
> with the student side of the picture.
>
> In simplest terms, a proper university research system can exist only
> when there exist funding sources that are not constrained by practical
> results achieved.  The constraint must be quality of work performed,
> which is a very different metric.  You don't want poor researchers,
> but you DO want researchers free to pursue directions that have no
> apparent hope of return on investment.  Only in that fashion can
> really NEW directions be found.  Knowledge must first be valued in and
> of itself, as an end.  Oftentimes it becomes a means as well, but at
> the university research level that should not be assumed or even
> desired.  That's what the commercial world is for.
>
> The trend of universities patenting inventions as a source of income
> is an example of where things are going off course - that's not the
> point of a university.  Patents are a monopoly, a short term control
> of the use of knowledge, which runs counter to every pure academic
> instinct I ever encountered.  When you want to LEARN, a patent is
> nothing but a roadblock.  Indeed, I always thought the freely released
> knowledge from universities was a healthy counter to the commercial
> behavior (understandable in a commercial setting) of securing
> advantage via patents.  The erosion of this system is deeply
> troubling.
>
> Maybe this is a deeper reflection of where society is headed - with
> more and more people on the planet and constant competitive pressure
> in a global economy, resource constraints become more severe with
> time.  In that environment all non-essentials must be trimmed to
> survive, with "non-essential" being roughly defined as anything that
> does not immediately and profitably support the short term goal.
> There are no long term goals - focus is narrowed to survival.  Perhaps
> this is why the focus of universities is shifting - larger pressures
> (i.e. funding sources) are forcing them to assume a form that is
> immediately useful or face starvation, as a consequence of the
> pressure they themselves are feeling.

Not really comp stuff but ....

I think that actually part of the problem came from a certain style of
management that may have originated in the US.   This style advocates
quantitative measuring of performance, targets and performance
feedback.

It all sounds very rational, but it has been a complete balls-up in
every field of public activity that it has been applied to in the UK.
A total shambles.

The analogy I would use is that of placing software probes within a
program to measure functionality and performance.  This is sound
practice; however, programmers don't generally release programs
containing 1000 probes as an end-user product, simply because such a
program loses umpteen cycles monitoring its own performance.  With
enough probes the program will barely work at all.
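
To make the analogy concrete, here is a minimal Common Lisp sketch
(the names are mine, purely for illustration):

(defvar *probe-log* '()
  "Every probe conses a record onto this list.")

(defmacro probed (label form)
  "Evaluate FORM, but first record LABEL and a timestamp.
Each probe steals cycles and memory from the real work."
  `(progn
     (push (cons ,label (get-internal-real-time)) *probe-log*)
     ,form))

;; Instrument every expression in an inner loop and the monitoring
;; dominates - most of the time goes into consing probe records
;; rather than doing the real work, e.g.
;; (loop for i below 1000000 sum (probed :add (+ i 1)))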

Performance monitoring of this obsessive kind cuts into productivity
by forcing people to monitor themselves and each other obsessively
instead of doing the job.  The result is that the monitoring becomes
a virus, degrading and destroying what it is supposed to enhance.  In
academia, this has led to obsessive form filling.  The same goes for
our police, who have to fill out 30 pages of forms per arrest, and
for our NHS which, despite being pumped to the gills with money, is
losing patients to medieval squalor caused by lack of basic hygiene
while doctors fill out forms.

The other aspect is the obsession with measuring, or trying to
measure, things that cannot really be measured.  Measuring academic
excellence by volume of papers warps the behaviour of the very system
you are trying to measure - a nice version of the observer effect,
whereby trying to measure something changes the properties of what
you are trying to observe.

This school of management is really suited only to assembly-line
manufacturing and business (perhaps).  It should not have been
brought into universities.  It is the combination of these methods
with the drive to widen university access on the cheap that has done
the damage.

Mark
From: George Neuner
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <7vsah398s6j1mhnm1qv74fhtjj3lskd7u6@4ax.com>
On Mon, 15 Oct 2007 08:41:04 -0700, C Y <···········@yahoo.com> wrote:

>The trend of universities patenting inventions as a source of income
>is an example of where things are going off course - that's not the
>point of a university.  Patents are a monopoly, a short term control
>of the use of knowledge, which runs counter to every pure academic
>instinct I ever encountered.  When you want to LEARN, a patent is
>nothing but a roadblock.  

Actually the whole point of patents is to teach others to make or to
use the invention[*].  Control over commercialization for a time is
the reward for not keeping the invention a secret.

Not defending the use of patents ... as a computer scientist and a
professional software developer, I personally have many issues with
the patent system as it stands.  I'm just correcting what appears to
be a misconception about them.


[*] At least that was the intent.  My father is a patent lawyer and I
worked through college as a paralegal writing and evaluating them.
The system has buckled under the crush of filings in the last decade -
the current examination wait is measured in years and examiners can
afford to spend only hours on each patent (not nearly enough time to
properly evaluate it).  Plus, many lawyers are practicing IP law
without proper backgrounds.  Patent lawyers are theoretically required
to have science or engineering backgrounds (my dad is MSCE + 10 years
in industry before becoming a lawyer), but that requirement made
patent law a narrow specialty with few practitioners.  20 years ago
there were fewer than 2000 patent lawyers in the US; today there are
over 30,000.  Over-burdened law firms have pressed liberal arts majors
into practice and the result has been a glut of junk patents and
patents so poorly written (in the technical sense) that one can't
easily learn anything from them.


>Indeed, I always thought the freely released
>knowledge from universities was a healthy counter to the commercial
>behavior (understandable in a commercial setting) of securing
>advantage via patents.  The erosion of this system is deeply
>troubling.

Agreed.

George
--
for email reply remove "/" from address
From: Chris F Clark
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <sdd3aw9rex7.fsf@shell01.TheWorld.com>
George Neuner <·········@/comcast.net> writes:

> Actually the whole point of patents is to teach others to make or to
> use the invention[*].  Control over commercialization for a time is
> the reward for not keeping the invention a secret.

Of course, in the fast-paced world of today, the patent lifetime can
be close to the lifetime of a product, so a patent is a virtual
monopoly.  I'm not certain all of the "junk" patents are necessarily
the result of expanding the patent attorney class to include liberal
arts majors.  

At my day job we invent stuff and file disclosure forms that hopefully
get turned into patents.  However, the patent lawyers rewrite these
into appropriate "patent speak".  One of the goals is to use the
appropriate jargon, but a secondary goal is to obfuscate the patent so
that as little about the "real" invention is revealed as possible,
while still protecting the essential ideas as broadly as possible.
Even when rereading patents of devices we have invented, it feels like
we are English peasants trying to decipher high-church Latin.  The
words might as well be magical incantations that we simply repeat
by rote for all the knowledge they impart.

Gloria patri, pax vobiscum....
From: George Neuner
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <h60dh39if9514otk9g58dnegkp7h9mu16d@4ax.com>
On Wed, 17 Oct 2007 10:31:16 -0400, Chris F Clark
<···@shell01.TheWorld.com> wrote:

>George Neuner <·········@/comcast.net> writes:
>
>> Actually the whole point of patents is to teach others to make or to
>> use the invention[*].  Control over commercialization for a time is
>> the reward for not keeping the invention a secret.
>
>Of course, in the fast-paced world of today, the patent lifetime can
>be close to the lifetime of a product, so a patent is a virtual
>monopoly.  I'm not certain all of the "junk" patents are necessarily
>the result of expanding the patent attorney class to include liberal
>arts majors.  

You are correct.  Some of the blame for frivolous "junk" patents falls
on the PTO for procedurally allowing patents in contradiction of law
(many software and "business process" patents fall into this category)
and on examiners who haven't the time to locate and/or sufficiently
evaluate prior art.

However, patent attorneys have a duty to the law as well as to their
clients - they are supposed to recognize junk and not submit it in the
first place.  The problem with patent attorneys not having technical
backgrounds is that they have to just accept a client's claim of
originality because they have no basis to evaluate it themselves.  

Even with a technical background it can be difficult to evaluate
claims in different areas.  Ideally an electro-mechanical patent would
be handled by a lawyer who was an ME or EE and who could consult a
counterpart of the other discipline when necessary.  That doesn't
happen in real life.  My father is a CE but he handles patents in all
areas: mechanical, electrical, chemical, biological, software,
business, etc.  He is always learning new things - which is great, but
it is impossible to be aware of state of the art in all things.


>At my day job we invent stuff and file disclosure forms that hopefully
>get turned into patents.  However, the patent lawyers rewrite these
>into appropriate "patent speak".  One of the goals is to use the
>appropriate jargon, but a secondary goal is to obfuscate the patent so
>that as little about the "real" invention is revealed as possible,
>while still protecting the essential ideas as broadly as possible.

Which, technically, is illegal.  By law, a patent *must* reveal
sufficient detail to allow replication of the invention and full
understanding of its intended use.

I'd be willing to bet that 80% of patents fall short of that ideal.
But things won't improve (and in fact will get worse) until the PTO
expands the corps of examiners by at least an order of magnitude and
gives them enough time to properly evaluate the applications.


>Even when rereading patents of devices we have invented, it feels like
>we are English peasants trying to decipher high-church Latin.  The
>words might as well be magical incantations that we simply repeat
>by rote for all the knowledge they impart.
>
>Gloria patri, pax vobiscum....

Quoque vobis.

George
--
for email reply remove "/" from address
From: Ulf Wiger
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <471712F4.9090507@e-r-i-c-s-s-o-n.com>
George Neuner wrote:
> On Wed, 17 Oct 2007 10:31:16 -0400, Chris F Clark
> <···@shell01.TheWorld.com> wrote:
> 
>> At my day job we invent stuff and file disclosure forms that hopefully
>> get turned into patents.  However, the patent lawyers rewrite these
>> into appropriate "patent speak".  One of the goals is to use the
>> appropriate jargon, but a secondary goal is to obfuscate the patent so
>> that as little about the "real" invention is revealed as possible,
>> while still protecting the essential ideas as broadly as possible.
> 
> Which, technically, is illegal.  By law, a patent *must* reveal
> sufficient detail to allow replication of the invention and full
> understanding of its intended use.
> 
> I'd be willing to bet that 80% of patents fall short of that ideal.
> But things won't improve (and in fact will get worse) until the PTO
> expands the corps of examiners by at least an order of magnitude and
> gives them enough time to properly evaluate the applications.

Of course, most of the patents filed today are by large corporations,
and to some extent, it's evolved into a balance of power, where power
is measured in number of patents. With a huge patent portfolio, you
can be reasonably sure that you will have some kind of hold on any
competitor who might threaten to sue. It is not necessarily counter-
productive, from this perspective, if many such patents are vague
enough that the issue would have to be settled in court. Most
companies would think twice before challenging the lawyers of
Microsoft, IBM, Ericsson et al in court.

I've thought much about the idea of "reverse patent search", since
this becomes an obvious prerequisite for releasing source code
(assuming your company is a likely target for IPR attacks). Since
software patents can protect both ideas and specific expressions
of an idea, figuring out whether a non-trivial piece of software
infringes on a patent becomes about as difficult as figuring out
whether any existing novel has either used (and patented) the same
plot ingredients, character descriptions, or dialog as you intend
to use.

The only recourse today is basically to try to file your
own patents, but this would be ineffective, even if it didn't
take years before you knew the results. The process could be
compared to conducting pin-prick testing of your own code, by
trying to imagine what bugs might lurk in there - except that
with testing, the feedback loop usually takes seconds or
minutes to complete. Even so, it is considered a primitive and
ineffective way to find bugs (esp. perhaps among readers of
comp.lang.functional).

Not only would the patent office need to process patent applications
much more quickly than today (and with much more domain knowledge);
it should also be able to assist in answering the question: does this
program infringe on any existing (or pending) patents?

If the patent examiners consider this task impossible, it is even
more so for the poor programmer, who is not versed in patent speak
and cannot be expected to monitor the patent application process in
all parts of the world where her code might be used (my employer does
business in 150 countries).  The only reasonable conclusion, then, is
that the patent process (at least as regards software) _by its
nature_ stifles the sharing of ideas rather than encouraging it.

<disclaimer>
These thoughts are my own, and do not represent the
opinion of my employer. If I were personally the potential target
of IPR attacks and risked losing millions of dollars, I might have
a different outlook than I do today. As it is, I can afford to be
a bit idealistic.  (:
</disclaimer>

BR,
Ulf W
From: Vesa Karvonen
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ff760m$i08$1@oravannahka.helsinki.fi>
In comp.lang.functional George Neuner <·········@/comcast.net> wrote:
[...]
> Actually the whole point of patents is to teach others to make or to
> use the invention[*]. [...]
> [*] At least that was the intent. [...]

I'm not really interested in debating patents (I consider it a waste
of my time), but, the thing is, as a programmer, professional and
otherwise, I've *never* searched a patent database (and have no plans
to do so) or read even a single patent with the intention to learn how
to make or use inventions.  Further, I've never met or heard of
anyone (programmer or otherwise) who has done that or, more
interestingly, who does it regularly.  (Yes, such people may exist
and it might be interesting to hear about them.)  What I do (besides
figuring things out by myself), and what I've seen a few programmers
(a small fraction) do, is read articles and books to learn about new
inventions.

-Vesa Karvonen
From: Daniel C. Wang
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <4717220C.50104@gmail.com>
Vesa Karvonen wrote:
{stuff deleted}
> I've *never* searched a patent database (and have no plans
> to do so) or read even a single patent with the intention to learn how
> to make or use inventions. 

Most corporations specifically advise developers *not* to read patent
databases.  If the company is found to be in violation of a patent,
and there is evidence that the developers knew about the patent and
still infringed, that can result in treble damages if the company
loses the suit.
From: Chris F Clark
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <sddtzoopkwv.fsf@shell01.TheWorld.com>
"Daniel C. Wang" <·········@gmail.com> writes:

> Most corporations specifically advise developers *not* to read patent
> databases. 

I've been instructed both to look and not look at patents over my
career (never instructed to do a search, but instructed to look at
specific patents).

In the case where I was instructed not to look at a patent, I was
working on some software that was a competitor to a specific piece of
software that had parts of it patented.  To make sure we were not
infringing, we used a cleanroom approach.  We had someone who read the
patent and whose job was specifically to tell me *not* to do certain
things.  My job was to do the required tasks without doing any of the
forbidden items.

In the reverse case, we were building some hardware and I was
instructed to look at a software patent to see if it had any useful
ideas that we could build hardware assists for.  In that case, the
ideas were interesting but suggested nothing (to me) that I could say
we should build such and such a device to make it work better.

------------------------------------------------------------------------

Bringing this back to the University level, the company I work for
gives out grants to researchers, partly to stimulate work in
areas we find interesting.  However, mostly what we want are two
things: research that finds novel uses for our products and students
who are trained in the areas that we want and whom we can hire as interns.
An example makes this clear: last winter I championed three grants to
interesting researchers.  Of those three grants, one
resulted in us finding a very well-qualified intern, one resulted in a
device that the professor patented (and wanted to license to us), and
one has produced no interesting fruits yet.  The one which resulted in
us getting an intern will definitely get another grant (and when the
intern graduates, she will get a job offer).  The one which resulted
in patented IP will be hard for me to champion again.  The one which
produced no fruit - well, we are still waiting to decide.

The point of the above is that I think the universities which are
trying to build up IP portfolios are not doing themselves the service
they think they are.  Sure, a few researchers do make significant
strides and create true wealth.  However, most researchers don't,
and by trying to sell mediocre results to industry they are
alienating the source of their grants.  Producing good work that is
unencumbered is far more interesting to their client community.

Reinforcing what someone else said: while I have never searched a
patent database, I often do literature searches.  And those
literature searches have led me to contact the various researchers
and collaborate with them.
From: Mark Tarver
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192605784.800275.241460@i38g2000prf.googlegroups.com>
On 15 Oct, 14:41, ···············@gmail.com wrote:
> On 15 Oct, 11:23, Mark Tarver <··········@ukonline.co.uk> wrote:
>
> > The truth is that our formal 1930s theory has long been left behind by
> > the pace of development of commercial software.  It's odd, really, that
> > this has not been a focus at university level for longer or that
> > university CS has not made a bigger issue out of it. I think that part
> > of the problem is that universities have gone into decline at the same
> > time that this explosion in innovation has taken place.
>
> >http://www.lambdassociates.org/blog/decline.htm
>
> One thing I have noticed, especially with cross-posted messages
> between c.l.l and c.l.f, is that they tend to provoke "real world vs.
> academic" arguments which rather supports your view that nothing will
> change.
>
> For example, I find it hard to listen to academics at conferences
> telling me how I should be writing software when they've never written
> a real program for end users before.
>
> Similarly I find it infuriating listening to industry programmers
> telling me how amazing these 'new' anonymous functions are in C#.
>
> Maybe large corporations are not the best placed to be bringing
> improved techniques to the attention of the working programmer. But I
> don't have a suggestion for an alternative, especially given the bleak
> picture you paint of academia in your essays.
>
> --
> Philhttp://phil.nullable.eu/

There is no current substitute for the universities, and I have no
idea how to get them back on track after nearly 15 years of going
down the wrong one.

Open Source and Distance Learning seem to me the best bet, but they
suffer at the moment from a lack of effective funding.  Many of the
grant-making bodies are still tied into the universities and the
money is not accessible to freelance developers.

This really is wrong, because Open Source is producing some of the
most innovative stuff (BitTorrent etc.) whilst the unis are engaged
in this futile brownie-point exercise.  So maybe instead of
continuing to prop up the universities we should be thinking of how
to finance our best minds, which are by no means confined to the
universities now.

Mark
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101518191637709-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-15 06:23:48 -0400, Mark Tarver <··········@ukonline.co.uk> said:

> Really I think the problem is that programming has outstripped CS
> theory for a long time.

Are you familiar with the work of Peter Wegner and his co-authors along 
these lines?

<http://www.cse.uconn.edu/~dqg/papers/cie05.pdf>

<http://www.cse.uconn.edu/~dqg/papers/turing04.pdf>
(especially section 3)
From: David B. Benson
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192487507.164383.195160@z24g2000prh.googlegroups.com>
> ...
> Are you familiar with the work of Peter Wegner and ...

Also see

Dina Goldin et al. (eds.)
Interactive Computation: The New Paradigm
Springer, 2006

which begins with an opening address by Robin Milner.
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101600560564440-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-15 18:31:47 -0400, "David B. Benson" <·······@eecs.wsu.edu> said:

> Also see
> 
> Dina Goldin et al. (eds.)
> Interactive Computation: The New Paradigm
> Springer, 2006

<http://www.amazon.com/Interactive-Computation-Paradigm-Dina-Goldin/dp/354034666X/sr=1-1/qid=1161185078/ref=sr_1_1/103-3775837-7261440?ie=UTF8&s=books>

for those with deep pockets
From: Matthias Blume
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <m2wstnkdd9.fsf@my.address.elsewhere>
Raffael Cavallaro
<················@pas-d'espam-s'il-vous-plait-mac.com> writes:

> On 2007-10-15 06:23:48 -0400, Mark Tarver <··········@ukonline.co.uk> said:
>
>> Really I think the problem is that programming has outstripped CS
>> theory for a long time.
>
> Are you familiar with the work of Peter Wegner and his co-authors
> along these lines?
>
> <http://www.cse.uconn.edu/~dqg/papers/cie05.pdf>
>
> <http://www.cse.uconn.edu/~dqg/papers/turing04.pdf>
> (especially section 3)

When reading the above, I think it is useful to keep in mind the
commentary that others in the scientific community have to offer on
this stuff.  In particular, see Lance Fortnow's opinion and the ensuing
discussion here:

  http://weblog.fortnow.com/2006/07/principles-of-problem-solving-tcs.html
From: Mark Tarver
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192514650.048239.80700@t8g2000prg.googlegroups.com>
On 15 Oct, 23:19, Raffael Cavallaro <················@pas-d'espam-s'il-
vous-plait-mac.com> wrote:
> On 2007-10-15 06:23:48 -0400, Mark Tarver <··········@ukonline.co.uk> said:
>
> > Really I think the problem is that programming has outstripped CS
> > theory for a long time.
>
> Are you familiar with the work of Peter Wegner and his co-authors along
> these lines?
>
> <http://www.cse.uconn.edu/~dqg/papers/cie05.pdf>
>
> <http://www.cse.uconn.edu/~dqg/papers/turing04.pdf>
> (especially section 3)

Thank you - very interesting.  Yes, the authors are right that the
Turing machine model is not sufficient for modelling computational
processes.  Some of their examples are a bit dodgy, and I think that
the main point may have been lost (or buried somewhere).  As usual,
the authors spend too long coming to the point.

The main point is this: the conceptual difference between a Turing
machine and a computer lies not in the machines themselves *but in
the person sitting in front of the computer*.  This creates a
situation where
unpredictable and computation-influencing events can come from an
operator sitting outside the current computation and change the flow
of computation. Such deus ex machina behaviour does not belong to the
closed world of the TM.

Mark
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101712362950073-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-16 02:04:10 -0400, Mark Tarver <··········@ukonline.co.uk> said:

> The main point is this: the conceptual difference between a Turing
> machine and a computer lies not in the machines themselves *but in
> the person sitting in front of the computer*.  This creates a
> situation where
> unpredictable and computation-influencing events can come from an
> operator sitting outside the current computation and change the flow
> of computation.

Or, more broadly, interaction with any open ended environment (e.g., 
autonomous robots navigating a real world landscape), hence the title 
of their edited volume --  _Interactive Computation: The New Paradigm_
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192718681.269227.82970@e34g2000pro.googlegroups.com>
On Oct 17, 11:36 am, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-16 02:04:10 -0400, Mark Tarver <··········@ukonline.co.uk> said:
>
> > The main point is this: the conceptual difference between a Turing
> > machine and a computer lies not in the machines themselves *but in
> > the person sitting in front of the computer*.  This creates a
> > situation where
> > unpredictable and computation-influencing events can come from an
> > operator sitting outside the current computation and change the flow
> > of computation.
>
> Or, more broadly, interaction with any open ended environment (e.g.,
> autonomous robots navigating a real world landscape), hence the title
> of their edited volume --  _Interactive Computation: The New Paradigm_

If there is anything wrong with academic publishing, then the
Wegner/Goldin papers, as well as subsequent papers involving other
authors such as Eberbach, are perfect examples of why.  For a
thorough debunking of their claim of having found a computational
model that is more powerful than the Turing Machine, see e.g. the
following rebuttal by Cockshott and Michaelson to a more recent paper
by Wegner and Eberbach:

@article{1229962,
 author = {Paul Cockshott and Greg Michaelson},
 title = {Are There New Models of Computation? Reply to Wegner and Eberbach},
 journal = {Comput. J.},
 volume = {50},
 number = {2},
 year = {2007},
 issn = {0010-4620},
 pages = {232--247},
 doi = {http://dx.doi.org/10.1093/comjnl/bxl062},
 publisher = {Oxford University Press},
 address = {Oxford, UK},
}

You can find it on the Oxford Journals web site:

  http://comjnl.oxfordjournals.org/cgi/reprint/50/2/232.pdf

From their summary:

"In Section 2.5, we enunciated the criteria that we think must be met
for the Church-Turing thesis to be displaced. In general, we require a
demonstration that all terms in C-T systems should have equivalent
terms in the new system but there should be terms in the new system
which do not have equivalents in C-T systems. In particular, the new
system should be able to solve decision problems that are semi-
decidable or undecidable in C-T systems. Finally, we require that a
new system be physically realisable. We think that, under these
criteria, Wegner and Eberbach's claims that interaction machines, the
(pi)-calculus and the $-calculus are super-Turing, are not
substantiated.

...

Wegner and Eberbach make bold claims in their paper. But extraordinary
claims require extraordinary evidence. The work of Turing has served
as a foundation for computability theory for 70 years. To displace it
would have required them to bring forward very strong evidence. We
have discussed the criteria by which such claims could be assessed,
and we have discussed the systems that they have exhibited as
potentially surpassing the TM model of computation. We consider that
in all three cases, interaction machines, the (pi)-calculus and the $-
calculus, we have shown their claims to be invalid."

I couldn't have said it any better (and definitely not more politely).

Kind regards,
Matthias
From: Pascal Costanza
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <5nq3t6Fjin0eU1@mid.individual.net>
··············@gmail.com wrote:

> Wegner and Eberbach make bold claims in their paper. But extraordinary
> claims require extraordinary evidence. The work of Turing has served
> as a foundation for computability theory for 70 years. To displace it
> would have required them to bring forward very strong evidence. We
> have discussed the criteria by which such claims could be assessed,
> and we have discussed the systems that they have exhibited as
> potentially surpassing the TM model of computation. We consider that
> in all three cases, interaction machines, the (pi)-calculus and the $-
> calculus, we have shown their claims to be invalid."

 From http://en.wikipedia.org/wiki/Paul_Feyerabend

'One of the criteria for evaluating scientific theories that Feyerabend 
attacks is the consistency criterion. He points out that to insist that 
new theories be consistent with old theories gives an unreasonable 
advantage to the older theory. He makes the logical point that being 
compatible with a defunct older theory does not increase the validity or 
truth of a new theory over an alternative covering the same content. 
That is, if one had to choose between two theories of equal explanatory 
power, to choose the one that is compatible with an older, falsified 
theory is to make an aesthetic, rather than a rational choice. The 
familiarity of such a theory might also make it more appealing to 
scientists, since they will not have to disregard as many cherished 
prejudices. Hence, that theory can be said to have "an unfair advantage".'


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192755951.056450.167300@z24g2000prh.googlegroups.com>
On Oct 18, 5:09 pm, Pascal Costanza <····@p-cos.net> wrote:
> ··············@gmail.com wrote:
> > Wegner and Eberbach make bold claims in their paper. But extraordinary
> > claims require extraordinary evidence. The work of Turing has served
> > as a foundation for computability theory for 70 years. To displace it
> > would have required them to bring forward very strong evidence. We
> > have discussed the criteria by which such claims could be assessed,
> > and we have discussed the systems that they have exhibited as
> > potentially surpassing the TM model of computation. We consider that
> > in all three cases, interaction machines, the (pi)-calculus and the $-
> > calculus, we have shown their claims to be invalid."
>
> [quoting Feyerabend]
> "...That is, if one had to choose between two theories of equal explanatory
> power, to choose the one that is compatible with an older, falsified
> theory is to make an aesthetic, rather than a rational choice...."

Red herring.

We are not dealing with two competing theories here.  We don't have an
"older, falsified theory".

The TM is a mathematical construct.  It has certain mathematical
properties that are indisputable (and hopefully undisputed). The C-T
hypothesis says that no physically realizable notion of computation
can be more powerful than that defined by the TM.  Notice that this is
a HYPOTHESIS, not a "theory" or some such.  Nobody claims to have
proved it.  We have merely observed that every attempt at defining
another model of physically realizable computation has produced
something of equal or lesser power.  (There are plenty of more
powerful but non-realizable models of computation.  Just take your
favorite undecidable predicate and throw it in as a primitive.)

Now, Wegner, Goldin, and Eberbach claim they have actually discovered
a more powerful concept of physically realizable computation.  This is
a pretty bold claim, but for all we truly know it could be true.
However, to show that it is true, they have to prove it.  Doing so
should be fairly straightforward:  Just demonstrate a decision
procedure (under the new model) for a predicate that is undecidable
(or merely semi-decidable) for TMs.   Then also argue convincingly
that your model actually is--at least in principle--physically
realizable.  The problem is -- and that's what Cockshott and
Michaelson explain in quite some detail -- that they haven't done so.
In fact, not only have they not done so, there is also very good
reason to believe that they won't be able to do so for the calculi
they have brought forward so far.

Anyway, your quote by Paul Feyerabend has absolutely nothing to do
with this discussion.
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101823533311272-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-18 21:05:51 -0400, ··············@gmail.com said:

> Anyway, your quote by Paul Feyerabend has absolutely nothing to do
> with this discussion.

Goldin and Wegner obviously think it's very relevant since they devote 
the first quarter of their chapter of _Interactive Computation_ to the 
philosophy of science and epistemology by way of explaining the "strong 
resistance," (to use their words) their views have encountered.

To be completely clear, Goldin and Wegner consider the Strong 
Church-Turing thesis to be an "older, falsified theory," superseded by 
Persistent Turing Machines.
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192768660.853319.20830@y27g2000pre.googlegroups.com>
On Oct 18, 10:53 pm, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-18 21:05:51 -0400, ··············@gmail.com said:
>
> > Anyway, your quote by Paul Feyerabend has absolutely nothing to do
> > with this discussion.
>
> Goldin and Wegner obviously think it's very relevant since they devote
> the first quarter of their chapter of _Interactive Computation_ to the
> philosophy of science and epistemology by way of explaining the "strong
> resistance," (to use their words) their views have encountered.

In other words, they trot out the same Red Herring...

> To be completely clear, Goldin and Wegner consider the Strong
> Church-Turing thesis to be an "older, falsified theory," superseded by
> Persistent Turing Machines.

As I said: The C-T thesis is not a theory at all.   It is a conjecture
which could be wrong, but which so far has not been falsified.  If
Goldin and Wegner claim it has been falsified, then they will have to
prove that.  But so far they have not done so.
From: Pascal Costanza
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <5nqv6cFjm567U1@mid.individual.net>
··············@gmail.com wrote:
> On Oct 18, 10:53 pm, Raffael Cavallaro <················@pas-d'espam-
> s'il-vous-plait-mac.com> wrote:
>> On 2007-10-18 21:05:51 -0400, ··············@gmail.com said:
>>
>>> Anyway, your quote by Paul Feyerabend has absolutely nothing to do
>>> with this discussion.
>> Goldin and Wegner obviously think it's very relevant since they devote
>> the first quarter of their chapter of _Interactive Computation_ to the
>> philosophy of science and epistemology by way of explaining the "strong
>> resistance," (to use their words) their views have encountered.
> 
> In other words, they trot out the same Red Herring...
> 
>> To be completely clear, Goldin and Wegner consider the Strong
>> Church-Turing thesis to be an "older, falsified theory," superseded by
>> Persistent Turing Machines.
> 
> As I said: The C-T thesis is not a theory at all.   It is a conjecture
> which could be wrong, but which so far has not been falsified.  If
> Goldin and Wegner claim it has been falsified, then they will have to
> prove that.  But so far they have not done so.

The C-T thesis makes a strong assumption about what computers can and 
cannot do for us. But computers can obviously do much more for us that 
execute algorithms. For example, a plain Turing machine could never 
compute the contents of Wikipedia by using an algorithm (unless it has 
the contents of Wikipedia as an input in the first place, but that's 
lame). Hence, the C-T thesis is inappropriate as a basis for computer 
science.

The problem is not that the C-T thesis may be wrong, the problem is that 
the C-T thesis steers us in the wrong direction.

See also http://www.dreamsongs.com/Feyerabend/Feyerabend.html - 
especially Sussman's statement at the bottom of that page.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Lauri Alanko
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ff9hk5$m49$1@oravannahka.helsinki.fi>
In article <··············@mid.individual.net>,
Pascal Costanza  <··@p-cos.net> wrote:
> The C-T thesis makes a strong assumption about what computers can and 
> cannot do for us. But computers can obviously do much more for us than
> execute algorithms. For example, a plain Turing machine could never 
> compute the contents of Wikipedia by using an algorithm (unless it has 
> the contents of Wikipedia as an input in the first place, but that's 
> lame).

A bit less than that: http://prize.hutter1.net/

Or alternatively you could give the contents of the observable
universe as an input, which is essentially what humans get. Why do you
claim that a plain Turing machine couldn't generate a Wikipedia from
that?


Lauri
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101903553077923-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 02:08:05 -0400, Lauri Alanko <··@iki.fi> said:

> Or alternatively you could give the contents of the observable
> universe as an input, which is essentially what humans get. Why do you
> claim that a plain Turing machine couldn't generate a Wikipedia from
> that?

Because Wikipedia, by its nature, is interactive. What you see is the 
result of a constant back and forth among agents - scholars, 
advertising spammers, political hacks etc. To claim that this could be
precisely foreseen in toto by some algorithm with sufficient data is a
claim of complete determinism.
From: Lauri Alanko
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffaqpu$sed$1@oravannahka.helsinki.fi>
In article <····································@pasdespamsilvousplaitmaccom>,
Raffael Cavallaro  <················@pas-d'espam-s'il-vous-plait-mac.com> wrote:
> Because Wikipedia, by its nature, is interactive. What you see is the 
> result of a constant back and forth among agents - scholars, 
> advertising spammers, political hacks etc. To claim that this could be
> precisely foreseen in toto by some algorithm with sufficient data is a
> claim of complete determinism.

That is indeed what the Church-Turing thesis (or at least one variant
of it) entails: all physically realizable computation (which surely
includes all cognitive actions by physical humans) can also be
computed with a (deterministic) Turing machine. This is a
controversial claim, for sure, but although it hasn't been proven, it
also hasn't been falsified, though Pascal's statement made it sound
like it was obviously false.

Now, if all the interacting agents _are_ computable, then it is in
principle possible to create a single Turing machine that simulates
the interactions of all the agents. And even if humans (or some other
external entities that a computer communicates with) aren't
computable, it merely means that the problem "what does the machine
get as input after producing certain output" is undecidable. We can
still model the situation with a Turing machine that has an oracle for
that problem. Oracles are hardly a new breakthrough; they have been
part of computability theory for ages.
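
As a toy illustration of the oracle idea (a Common Lisp sketch with
invented names, not a formal definition): the oracle is just an
opaque function the machine consults whenever it needs external
input, and everything around it remains ordinary computation.

(defun run-with-oracle (step state oracle n)
  "Iterate the computable transition STEP for N rounds.  Whenever
the machine needs external input it asks ORACLE, an opaque function
standing in for the undecidable 'what does the environment say
next?' problem."
  (dotimes (i n state)
    (setf state (funcall step state (funcall oracle state)))))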


Lauri
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101915290538165-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 13:50:54 -0400, Lauri Alanko <··@iki.fi> said:

> That is indeed what the Church-Turing thesis (or at least one variant
> of it) entails: all physically realizable computation (which surely
> includes all cognitive actions by physical humans) can also be
> computed with a (deterministic) Turing machine. This is a
> controversial claim, for sure, but although it hasn't been proven, it
> also hasn't been falsified, though Pascal's statement made it sound
> like it was obviously false.

That variant is the *Strong* C-T thesis.

C-T = any algorithm for a *mathematical function* can be done by a TM 
(or the lambda calculus)

SC-T = *any computation* that any real computer can do can be done by a TM.

The reason that Goldin and Wegner deny the latter is that the SC-T must 
hold that everything in the universe (including, as you say "all 
cognitive actions by physical humans") is reducible to a mathematical 
function. This is Determinism.

Those who deny the SC-T believe that there are things a computer can
compute that are not reducible to a mathematical function.  That is,
there are inherently unpredictable aspects of the universe, such that
certain computations that a machine with access to later
*interactive* inputs in addition to initial inputs (a PTM) can do, a
machine that only has access to initial inputs (a TM) cannot.  Goldin
and Wegner are saying that there are computations that a PTM can do
interactively that a TM cannot do if constrained to start at the *same
time*.

The issue here is twofold:

1. Ignoring time in saying that the real world is characterizable by a 
mathematical function. Indeed anything can be trivially characterized 
by a mathematical function if we wait to 'compute' it till after it has 
already occurred. The whole point is to specify an algorithm to do the 
computation *beforehand*, either with no further inputs (TM) or with a 
dynamic stream of inputs and memory (PTM).

2. The Deterministic belief that, given the right initial conditions 
and algorithm, the entire course of the universe from the Big Bang on 
can be computed.

#1 is just a matter of people misunderstanding each other and talking 
at cross purposes. No one denies that a TM could specify the course 
taken by an interactive machine *after the fact*. The PTM advocates 
simply state that this is not the same computation, since an inherent 
part of a computation is *when* it is performed.

#2 is just the long standing argument between Determinists/Rationalists 
on the one hand, and Indeterminists/Empiricists on the other. Clearly, 
Goldin and Wegner are Empiricists and Indeterminists.
From: namekuseijin
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1193188254.342262.162540@k35g2000prh.googlegroups.com>
On Oct 19, 5:55 am, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> Because Wikipedia, by its nature, is interactive. What you see is the
> result of a constant back and forth among agents - scholars,
> advertising spammers, political hacks etc. To claim that this could be
> precisely foreseen in toto by some algorithm with sufficient data is a
> claim of complete determinism.

hey, if 1000 monkeys at typewriters can rewrite the works of
Shakespeare, why can't a dumb random algorithm fed with huge amounts
of data output something similar to Wikipedia? :)

BTW, ever heard of Functional Reactive Programming?  Or what about
just retro-feeding a Turing Machine with its own output from time to
time?
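
Retro-feeding is just iterated application, by the way - a quick
sketch (Common Lisp, names mine):

(defun retro-feed (machine input times)
  "Run MACHINE on INPUT, then feed its own output back in as the
next input, TIMES times.  The result is an ordinary computable
composition - nothing super-Turing about it."
  (if (zerop times)
      input
      (retro-feed machine (funcall machine input) (1- times))))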
From: Joachim Durchholz
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffmokf$j0l$1@online.de>
namekuseijin schrieb:
> hey, if 1000 monkeys at typewriters can rewrite the works of
> Shakespeare, why can't a dumb random algorithm fed with huge amounts
> of data output something similar to Wikipedia? :)

It can.
It just takes as long as the monkeys.
(Oh, and since a random typist hits a given N-bit text with
probability about 2^-N, the expected wait is about 2^N attempts - so
expressing the day on which the successful typing run starts as a
binary number should require roughly as many bits as representing the
works directly.)

Regards,
Jo
From: Joachim Durchholz
Subject: Re: on the relations of traditional CS theory to modern programming   practice
Date: 
Message-ID: <ff9uet$tr1$1@online.de>
Pascal Costanza schrieb:
> For example, a plain Turing machine could never 
> compute the contents of Wikipedia by using an algorithm (unless it has 
> the contents of Wikipedia as an input in the first place, but that's 
> lame).

That's not lame, that's the point exactly.
The algorithm behind Wikipedia is the transformation of wiki markup to 
HTML. Plus outputting edit, login and logout dialogs on request.

The contents of Wikipedia were produced by humans, not by a computer
algorithm, so you can't take those contents as an example of something
a computer can do but a TM-equivalent algorithm cannot.

 > Hence, the C-T thesis is inappropriate as a basis for computer
> science.

It is just one of the many bases for CS.
Recursion theory is another.
Boolean logic yet another (and other logics are used as well).
Lambda calculus.
Then various theories about ergonomics for programmers (or 
maintainability if you wish): information hiding, modularity, design by 
contract, etc.
Finally, theories about ergonomics for end users (conventions about the 
placement of Abort buttons etc.)

> The problem is not that the C-T thesis may be wrong, the problem is that 
> the C-T thesis steers us in the wrong direction.

For the problems at hand, this is simply a non-issue.

Imagine a situation where the C-T thesis is proven wrong, by providing a 
stronger model of computation.
Will that change the discussion about maintenance? About the relative 
qualities of Perl, PHP, Python, and Ruby? Or Lisp vs. Haskell vs. C++?

I agree that proving the C-T hypothesis wrong would be very interesting, 
but I don't think it would change much. It *might* produce different 
computers in a decade or two - *if* the stronger algorithms (a) can 
address new classes of problems that are practically relevant *and* (b)
the new computers needed to run these algorithms can be produced as 
cheaply as current-day computers.

In my eyes, quantum computing is nearer to changing the way we're 
writing programs. It doesn't go beyond the C-T thesis, but it radically 
alters efficiency classes of algorithms.

Regards,
Jo
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101912410837709-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 05:47:16 -0400, Joachim Durchholz <··@durchholz.org> said:

> That's not lame, that's the point exactly.
> The algorithm behind Wikipedia is the transformation of wiki markup to 
> HTML. Plus outputting edit, login and logout dialogs on request.

I think you're missing the point here.

The Strong Church-Turing thesis (SCT) posits that, beginning at time T0 
and given enough input (including the state of Wikipedia at T0), a TM 
could compute the state of Wikipedia at any arbitrary future time Tn 
*without further inputs*. Computing the state of Wikipedia at some 
arbitrary Tn given the state at Tn itself as input is trivial (we call 
it 'copying') - no one doubts that a TM could do this, and it is not 
what we're discussing here. That's what Pascal characterized as "lame."

According to Goldin and Wegner (and followers) there exists a class of 
computations which cannot be done by a TM no matter what *initial* input 
at time T0 it is given. That is, additional input(s) at Tn, Tn+m etc. 
are needed to perform the computation, as is *memory* of previous 
inputs, outputs and internal computations. They term this model, with 
dynamic streams and persistence, a Persistent Turing Machine (PTM).
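
To make the shape of that model concrete, here is a rough Lisp sketch
(my own illustrative rendering, not Goldin and Wegner's formalism):

    ;; TRANSITION maps (memory, input) to (values output new-memory).
    ;; The loop is deliberately endless: a PTM is an ongoing
    ;; interaction, not a function call that returns.
    (defun run-ptm (transition memory next-input)
      (loop
        (multiple-value-bind (output new-memory)
            (funcall transition memory (funcall next-input))
          (print output)
          (setf memory new-memory))))

    ;; E.g. a machine whose every output depends on its whole history:
    ;; (run-ptm (lambda (mem in)
    ;;            (let ((h (cons in mem))) (values h h)))
    ;;          '()
    ;;          #'read)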

Pascal and I are saying that, starting at T0, computing the state of 
Wikipedia at an arbitrary future Tn is one such computation: it can be 
done by a PTM (or some other equivalent interactive formalism), but not by an 
ordinary TM. It is interactive by nature - it requires further inputs 
at future times between T0 and Tn (excluding the trivial "lame" case of 
having as input the state at Tn itself as discussed above).

Claiming, as the SCT does, that no further inputs are required amounts 
to a claim that the future inputs by human contributors are 
*completely* predictable given the right algorithm and the right 
initial state at T0. This amounts to a claim of complete determinism - 
that the right algorithm and initial state could predict anything in 
the future universe in complete and accurate detail, including, among 
other things, every action of every human Wikipedia contributor.

This determinism is why Goldin and Wegner argue that SCT is incorrect. 
They think that the universe is not deterministic in practice, that 
empirical observation of (i.e., interaction with) the larger 
environment is required for certain kinds of computations. They point 
to nonlinear systems as evidence of the failure of Determinism - 
prediction is only possible in such systems if *precise* initial 
conditions are known, and this is a practical impossibility. Others 
have argued that QM demonstrates that the Universe is *inherently* 
non-deterministic.

Thus they characterize SCT supporters as Rationalists and themselves as 
Empiricists, and see this whole argument as just another chapter in the 
long-standing debate between these two philosophical schools.
From: Paul Wallich
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffaoqo$cvc$1@reader1.panix.com>
Raffael Cavallaro wrote:
> On 2007-10-19 05:47:16 -0400, Joachim Durchholz <··@durchholz.org> said:
> 
>> That's not lame, that's the point exactly.
>> The algorithm behind Wikipedia is the transformation of wiki markup to 
>> HTML. Plus outputting edit, login and logout dialogs on request.
> 
> I think you're missing the point here.
> 
> The Strong Church-Turing thesis (SCT) posits that, beginning at time T0 
> and given enough input (including the state of Wikipedia at T0), a TM 
> could compute the state of Wikipedia at any arbitrary future time Tn 
> *without further inputs*.

No, it really doesn't. That would be true if the state of Wikipedia at 
some future point were defined as a computable function of its state at 
T0. But no one is making that claim. It's like claiming that the exact 
time of decay of the next radium atom in a sample can be computed from 
everything known at T0.

paul
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101915450943042-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 13:17:11 -0400, Paul Wallich <··@panix.com> said:

> That would be true if the state of Wikipedia at some future point were 
> defined as a computable function of its state at T0. But no one is 
> making that claim. It's like claiming that the exact time of decay of 
> the next radium atom in a sample can be computed from everything known 
> at T0.

Determinists make precisely this claim. For example, hidden variable theory.
From: Paul Wallich
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffbjqk$e8$1@reader1.panix.com>
Raffael Cavallaro wrote:
> On 2007-10-19 13:17:11 -0400, Paul Wallich <··@panix.com> said:
> 
>> That would be true if the state of Wikipedia at some future point were 
>> defined as a computable function of its state at T0. But no one is 
>> making that claim. It's like claiming that the exact time of decay of 
>> the next radium atom in a sample can be computed from everything known 
>> at T0.
> 
> Determinists make precisely this claim. For example, hidden variable 
> theory.

First, they're wrong. And second, they don't make that claim at all 
unless they claim that the hidden variables can in fact be known. And 
the ones I've read (mostly Bohm) tend to avoid that claim like the plague.

It sounds as if this argument is more a social one -- i.e. about what 
matters -- than about the underlying facts. Even the most diehard 
determinists would likely be willing to agree that you could use less 
computing power to compute a system's evolution if you had an oracle 
about some set of future states.

paul
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101921161278840-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 20:57:56 -0400, Paul Wallich <··@panix.com> said:

> It sounds as if this argument is more a social one -- i.e. about what matters

Agreed entirely. The Empiricists think that dynamic inputs at runtime 
are more important and Rationalists think that computing functions on 
static inputs is more important. Which is precisely why advocates of 
dynamic languages like me are on the Empiricist side, and advocates of 
static compilation, like Matthias, are on the Rationalist side.
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192845126.982072.126870@e9g2000prf.googlegroups.com>
On Oct 19, 8:16 pm, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-19 20:57:56 -0400, Paul Wallich <····@panix.com> said:
>
> > It sounds as if this argument is more a social one -- i.e. about what matters
>
> Agreed entirely. The Empiricists think that dynamic inputs at runtime
> are more important and Rationalists think that computing functions on
> static inputs is more important. Which is precisely why advocates of
> dynamic languages like me are on the Empiricist side, and advocates of
> static compilation, like Matthias, are on the Rationalist side.

Raffael, could you please refrain from putting people like myself
into boxes like "Rationalist"?  I am certainly not a rationalist in
the sense you describe.  Moreover, bringing in dynamic vs. static is
yet another red herring.  The claim that PTMs have more computational
power than TMs is plainly false in a very strong mathematical sense.
This fact has nothing to do with social issues.

So, before this gets further out of hand, I'll leave this discussion.
Please, use a good introductory book or course on computability theory
for further reference.  Thank you.

Matthias
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101922281142612-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 21:52:06 -0400, ··············@gmail.com said:

> The claim that PTMs have more computational
> power than TMs is plainly false in a very strong mathematical sense.
> This fact has nothing to do with social issues.

And yet computer scientists such as Wegner disagree with you. Obviously 
not as "plainly false" as you think it is.

> 
> So, before this gets further out of hand, I'll leave this discussion.
> Please, use a good introductory book or course on computability theory
> for further reference.  Thank you.

This is merely condescending and, I honestly think, beneath you.
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192851027.970429.316510@v23g2000prn.googlegroups.com>
On Oct 19, 9:28 pm, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-19 21:52:06 -0400, ··············@gmail.com said:
>
> > The claim that PTMs have more computational
> > power than TMs is plainly false in a very strong mathematical sense.
> > This fact has nothing to do with social issues.
>
> And yet computer scientists such as Wegner disagree with you. Obviously
> not as "plainly false" as you think it is.

Yes, it is as plainly false as it gets.  I'm frankly shocked that
Wegner disagrees with me (and many others) and that he wastes his time
debunking what he perceives as "myths."  We are not in the business of
doing proof by authority here, and I won't be intimidated out of my
view by the mere existence of someone who disagrees with me.  He has
failed to back his claim with solid proofs, and I'm not the only one
pointing this out.

This is basic stuff, and an introductory course on computability
should cover it.  The one I took certainly did.  Hence my comment that
you found "condescending."

Goldin and Wegner (and whoever brought out that Planck quote and later
Feyerabend) are the ones who are condescending, because their thinly
veiled argument basically is:  "Well, we are geniuses who are ahead --
if not of our time then certainly of YOU.  If you disagree with us
then you are an old-timer, or a 'rationalist', or some other kind of
fool who perhaps likes static compilation, so you don't 'get' it.
We'll wait until you're dead, because trying to convince you is
hopeless."  Frankly, I found THAT rather offensive.

And, to avoid having to reply to two messages every time, let me
answer your other charge here:

> Have you tried to understand what I'm saying? Since there is no
> built-in notion of time, it is a poor abstraction for computation over
> time with dynamic inputs.

Yes, I have tried to understand what you are saying.  I concluded that
you are wrong.  Just because time is not built in does not mean that
you cannot encode it.  In fact, I have explained one particular way of
encoding it.  But let me try one more time:

The TM tape is an abstraction, and its initial contents do not have
to be mapped to the input of a real computer at one particular given
moment of physical time.  The only thing a TM relies upon is this:
Once the head has visited a particular square, it will find the value
in that square unchanged when it visits it the next time.  However,
there is nothing to stop us from thinking of the first visit to any
given square as producing the initial contents "on demand".  Thus,
even though the TM abstraction assumes an initial tape that does not
change during the run of the machine (other than those changes made by
the machine itself), it can still account for time and interaction --
except that the concept of time has to be explicitly coded into the
machine itself, since it does not come with the model.

To put it another way:  On a real computer, any input that you see in
the future can be thought of as having sat there from the beginning
and merely not having been inspected by the program at an earlier
point in time.  The existence or non-existence of a particular input
is completely irrelevant to the program until it tries to actually
inspect it.  Therefore, as far as the computing power of the machine
is concerned, nothing changes if you assume all input to exist from
the start.  And that's why an ordinary TM is a perfectly fine fit when
modeling the computational power of such a system, be it interactive
or not.
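
A rough sketch of that reading in Lisp (names and representation are
mine, purely illustrative): a square's value is fixed at the first
visit, and the machine cannot tell whether it was preloaded or
produced at that moment.

    ;; A tape whose square I is filled by (funcall INPUT-FN i) on
    ;; first visit and is stable afterwards -- "initial contents on
    ;; demand".
    (defun make-lazy-tape (input-fn)
      (let ((cells (make-hash-table)))
        (lambda (op i &optional value)
          (ecase op
            (:read (multiple-value-bind (v found) (gethash i cells)
                     (if found
                         v
                         (setf (gethash i cells) (funcall input-fn i)))))
            (:write (setf (gethash i cells) value))))))

    ;; Whether INPUT-FN consults a fixed vector or queries the
    ;; environment at the moment of first visit, the machine sees
    ;; exactly the same tape semantics.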
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007102001571133169-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 23:30:27 -0400, ··············@gmail.com said:

> Yes, it is as plainly false as it gets.

I think you misunderstand the word 'plainly.' For something to be 
'plainly false' it has to be generally agreed upon that it is false, 
and there is no such general agreement.

>  On a real computer, any input that you see in
> the future can be thought of as having sat there from the beginning
> and merely not having been inspected by the program at an earlier
> point in time.  The existence or non-existence of a particular input
> is completely irrelevant to the program until it tries to actually
> inspect it.

Not if you are required to load this future input *before* the program 
starts running. IOW, your 'encoding' of time allows for time travel, so 
I'm not buying it.
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192861910.971431.286160@e9g2000prf.googlegroups.com>
On Oct 20, 12:57 am, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-19 23:30:27 -0400, ··············@gmail.com said:
>
> > Yes, it is as plainly false as it gets.
>
> I think you misunderstand the word 'plainly.' For something to be
> 'plainly false' it has to be generally agreed upon that it is false,
> and there is no such general agreement.

I don't think that this is the official definition of "plainly", but
so be it.  The claim is still false, and provably so.  That's all I
meant to say.

> >  On a real computer, any input that you see in
> > the future can be thought of as having sat there from the beginning
> > and merely not having been inspected by the program at an earlier
> > point in time.  The existence or non-existence of a particular input
> > is completely irrelevant to the program until it tries to actually
> > inspect it.
>
> Not if you are required to load this future input *before* the program
> starts running.

I tried to explain that:  The TM is not a real computer, and the tape
is not a real tape that needs to be "loaded" in a physical sense.  All
that matters is that the TM's head finds the input in a given square
*when it gets there*.  If we label certain squares with time readings,
and if we disallow any machine that visits these labeled squares out
of sequential order (*), then the initial visits to these squares can
be seen as a model of time.

> IOW, your 'encoding' of time allows for time travel, so I'm not buying it.

The encoding does not allow for time travel.  See the (*) remark
above.  It does not matter that there may exist TMs that violate the
restriction since any PTM can be translated into a TM using the above
encoding in such a way that (*) is satisfied.

If you don't like the (*) restriction, then you can use this
alternative model of time:  If the current time is T0 and you visit a
square labeled T1, then the new current time is going to be
MAX(T0,T1).  This way it is guaranteed that time never runs backwards.
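
In code, this alternative model is just a monotone clock (a trivial
sketch, names mine):

    (defvar *now* 0)

    ;; Visiting a square labeled T1 at current time T0 moves the clock
    ;; to MAX(T0, T1), so time never runs backwards even if the head
    ;; wanders back over squares with old labels.
    (defun visit-labeled-square (time-label)
      (setf *now* (max *now* time-label)))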
From: Mark Tarver
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192872433.935740.260040@q5g2000prf.googlegroups.com>
On 20 Oct, 07:31, ··············@gmail.com wrote:
> On Oct 20, 12:57 am, Raffael Cavallaro <················@pas-d'espam-
>
> s'il-vous-plait-mac.com> wrote:
> > On 2007-10-19 23:30:27 -0400, ··············@gmail.com said:
>
> > > Yes, it is as plainly false as it gets.
>
> > I think you misunderstand the word 'plainly.' For something to be
> > 'plainly false' it has to be generally agreed upon that it is false,
> > and there is no such general agreement.
>
> I don't think that this is the official definition of "plainly", but
> so be it.  The claim is still false, and provably so.  That's all I
> meant to say.
>
> > >  On a real computer, any input that you see in
> > > the future can be thought of as having sat there from the beginning
> > > and merely not having been inspected by the program at an earlier
> > > point in time.  The existence or non-existence of a particular input
> > > is completely irrelevant to the program until it tries to actually
> > > inspect it.
>
> > Not if you are required to load this future input *before* the program
> > starts running.
>
> I tried to explain that:  The TM is not a real computer, and the tape
> is not a real tape that needs to be "loaded" in a physical sense.  All
> that matters is that the TM's head finds the input in a given square
> *when it gets there*.  If we label certain squares with time readings,
> and if we disallow any machine that visits these labeled squares out
> of sequential order (*), then the initial visits to these squares can
> be seen as a model of time.
>
> > IOW, your 'encoding' of time allows for time travel, so I'm not buying it.
>
> The encoding does not allow for time travel.  See the (*) remark
> above.  It does not matter that there may exist TMs that violate the
> restriction since any PTM can be translated into a TM using the above
> encoding in such a way that (*) is satisfied.
>
> If you don't like the (*) restriction, then you can use this
> alternative model of time:  If the current time is T0 and you visit a
> square labeled T1, then the new current time is going to be
> MAX(T0,T1).  This way it is guaranteed that time never runs backwards.

What you have here on this thread, IMO, are two classical theories of
time meeting in opposition.

The TM is designed to represent a timeless domain of computable
functions.  Matthias's solution to the challenge of dynamic
computation is to represent time as another dimension, in a manner
analogous to Minkowski's space-time, in which events are frozen, so to
speak, in an ice-block - or, as here, recorded on a tape.  Now if you
represent things this way then the TM works fine.  McTaggart called
this B-series time.

The other great classical model is time as becoming, which is how we
experience time as human observers.  Events in the future are not
there in any meaningful sense until they transpire and become actual
by happening.  This is Raffael's view.  McTaggart called this A-series
time.

The question is which is more appropriate for modelling the computer?
Fascinating that this philosophical duality should emerge here.

Mark
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007102012114529560-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-20 05:27:13 -0400, Mark Tarver <··········@ukonline.co.uk> said:

> The question is which is more appropriate for modelling the computer?
> Fascinating that this philosophical duality should emerge here.

I think my point here has been that the sequential view of time is more 
appropriate for modelling *interactive* computation.
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192931949.772919.44590@i38g2000prf.googlegroups.com>
On Oct 20, 11:11 am, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-20 05:27:13 -0400, Mark Tarver <··········@ukonline.co.uk> said:
>
> > The question is which is more appropriate for modelling the computer?
> > Fascinating that this philosophical duality should emerge here.
>
> I think my point here has been that the sequential view of time is more
> appropriate for modelling *interactive* computation.

Obviously, a sequential (or at least somehow ordered) view of time is
*necessary* for modelling interactive computations.  Fortunately, such
a view of time can be part of the encoding of an algorithm as a TM.
From: Joachim Durchholz
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffcmqi$9c$1@online.de>
Mark Tarver wrote:
> Matthias's solution to the challenge of dynamic computation is to
> represent time as another dimension, in a manner analogous to
> Minkowski's space-time, in which events are frozen, so to speak, in an 
> ice-block - or, as here, recorded on a tape. Now if you represent 
> things this way then the TM works fine. McTaggart called this 
> B-series time.
>
> The other great classical model is time as becoming, which is how we
> experience time as human observers.  Events in the future are not
> there in any meaningful sense until they transpire and become actual
> by happening.  This is Raffael's view.  McTaggart called this A-series
> time.
> 
> The question is which is more appropriate for modelling the computer?

I think you're misunderstanding Matthias' model.
He isn't using B-series time; he's constructing PTM-equivalent TMs so 
that they don't access the cells before they're filled by the external 
entities, and so the outcome of the TM doesn't depend on whether you have 
A-series or B-series time. (A beautiful construction, IMHO.)

Regards,
Jo
From: Chris F Clark
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <sddfy05q285.fsf@shell01.TheWorld.com>
Although I hate to participate in this thread, I have a simple
question.  Is it possible to prove that a machine which visits the
tape only in sequential order is as powerful as a machine that can
visit the tape non-sequentially, perhaps by appealing to multi-tape
TMs in the argument?

The reason I ask has to do with the computational power argument.  A
PTM with a separate "infinite" tape that it can process between
receiving inputs on the sequential coded tape is effectively a
computational model of an oracle.  It can compute the closure of any
infinite series in an "instant" and thus deal with true "real
numbers", whereas a sequential machine can only compute a finite
series at any instant.  My knowledge of math may not be that firm in
those areas, but I seem to recall that the ability to interchange
arbitrary limits suggests something strictly more powerful (in the
same way there are more reals than integers).

This question has real implications if one is contemplating a
Wolfram-style universe of communicating cellular automata.  If each
element is restricted to a simple finite calculation, then the result
is simply a TM.  If each element can compute an infinite series and
the resulting real number, then isn't the result non-computable?

Asked another way, are there series of computable numbers, such that
the entire series is non-computable?  If so, if the tape given to a TM
contains as input a non-computable series, is not the machine which
reads and processes that tape capable of producing results that a TM
with only a tape containing a computable input could not produce?

Again, the reason I ask has to do with the perceived nature of the
universe.  From what I've read there seem to be some aspects of the
universe that appear to require infinite series to be calculated
precisely and instantaneously, i.e. if one assumed that a rational
approximation was used, the universe wouldn't behave quite right;
there would be little artifacts appearing as a result of the
approximation being used (and being slightly above or below the
correct value).

Or perhaps these artifacts do appear and are the source of
non-deterministic randomness (the experimental error) in the universe
as we perceive it.  Perhaps the universe is some machine computing an
infinite series using a function that does not converge strictly from
one side, and thus sometimes makes corrections to lower its currently
computed value and other times makes corrections to raise it - and,
more importantly, might adjust the individual digits (cells) in a
seemingly unpredictable fashion.  Unpredictable, because our finite
approximation to the real value does not give us knowledge of the
complete series (as it is not finite), but only of the next step (and
the computation of the next step increments "time").  How does one ask
whether such artifacts exist?

My apologies for wandering too far afield.  Hopefully, the more
concrete questions near the beginning are coherent, and the
interjected ramblings don't deter them from being answered.  I guess I
need to read Chaitin's book.  I'm just not sure it would clear the
soup in my head.
From: Neelakantan Krishnaswami
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <slrnfhkfo5.8n3.neelk@gs3106.sp.cs.cmu.edu>
In article <<···············@shell01.TheWorld.com>>,
Chris F Clark <···@shell01.TheWorld.com> wrote:
> Although I hate to participate in this thread, I have a simple
> question.  Is it possible to prove that a machine which visits the
> tape only in sequential order is as powerful as a machine that can
> visit the tape non-sequentially, perhaps by appealing to multi-tape
> TMs in the argument?

RAM machines are exactly as powerful as Turing machines in terms of
the set of functions they can compute. This is because you can
simulate random access on a tape cell N places away by stepping
forward N steps. Intuitively, this is like simulating an array with a
linked list. However, note that RAM machines can compute some
functions more quickly than TMs can, basically because arrays have
O(1) access to their memory and linked lists are O(n).
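
The analogy in a few lines of Lisp (illustrative only): AREF is the
RAM machine's O(1) access, NTH is the TM walking its tape cell by
cell.

    (defparameter *arr* (make-array 1000 :initial-element 0))
    (defparameter *lst* (make-list 1000 :initial-element 0))

    (aref *arr* 999)   ; one step
    (nth 999 *lst*)    ; 999 CDRs -- same answer, more time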

> Asked another way, are there series of computable numbers, such that
> the entire series is non-computable?

I don't properly understand what you mean by "entire series". If you
mean "the limit of the series is non-computable", the answer is
certainly yes.

For example, consider the following series of functions. Each function
takes in a program as an argument, and returns true or false, such
that the N-th function in the series will correctly report whether its
argument halts in N steps or less.

Clearly, for any fixed K, you can write a program to test this -- you
can write a program that runs its argument program for K steps, and
reports true if it halts in that time and false if not. However, the
limit of this series is not computable, because the limit function is
the solution to the Halting problem.

So you can have chains of computable functions, whose limit is non-
computable. This fact is of considerable importance in denotational
semantics.
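
Here is a sketch of that chain in Lisp (a toy encoding of my own: a
"program" is a step function from state to state that returns :HALT
when done):

    ;; The N-th function in the chain: computable for every fixed N,
    ;; but its limit as N grows is the halting problem.
    (defun halts-within-p (step state n)
      (dotimes (i (1+ n) nil)
        (when (eq state :halt) (return t))
        (setf state (funcall step state))))

    ;; (defun countdown (k) (if (<= k 0) :halt (1- k)))
    ;; (halts-within-p #'countdown 3 10)  ; => T
    ;; (halts-within-p #'countdown 3 2)   ; => NIL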


-- 
Neel R. Krishnaswami
·····@cs.cmu.edu
From: Rob Warnock
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <reSdndm0kZndO4fanZ2dnUVZ_smnnZ2d@speakeasy.net>
Chris F Clark  <···@shell01.TheWorld.com> wrote:
+---------------
| Although I hate to participate in this thread, I have a simple
| question.  Is it possible to prove that a machine which visits the
| tape only in sequential order is as powerful as a machine that can
| visit the tape non-sequentially, perhaps by appealing to multi-tape
| TMs in the argument?
+---------------

I don't think so. In fact, I think I can present a rather trivial
counterexample:

    Let M be a multi-tape TM with N internal states and R rules
    for which each of T tapes can only be visited in sequential
    order [e.g., the forward direction]. Then it is not possible
    for M to compute [that is, write onto (one or more of) the
    tape(s) being used as the "output device"] a result which is
    greater than N*R*T (and the actual limit might be much, much
    smaller -- I'm just picking a "safe" value). For example, if
    each of the T tapes starts out with a number of 1's representing
    unary integers followed by infinite 0's, then it is not possible
    for M to compute the product of those numbers if that result
    would be greater than N*R*T [or whatever the actual limit is].

Whereas if as few as *one* of the tapes is writable and reversible
[can both read & write and step both forwards & backwards],
then a TM with a fixed finite number of states & rules can
compute the product of the numbers on the tapes [or the square
of the number, say, if there's only one tape] no matter *how*
big that product might be!!


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Marc Gluch
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <471ab43e$0$28819$4c368faf@roadrunner.com>
Rob Warnock wrote:
> Chris F Clark  <···@shell01.TheWorld.com> wrote:
> +---------------
> | Although I hate to participate in this thread, I have a simple
> | question.  Is it possible to prove that a machine which visits the
> | tape only in sequential order is as powerful as a machine that can
> | visit the tape non-sequentially, perhaps by appealing to multi-tape
> | TMs in the argument?
> +---------------
> 
> I don't think so. In fact, I think I can present a rather trivial
> counterexample:
> 
>     Let M be a multi-tape TM with N internal states and R rules
>     for which each of T tapes can only be visited in sequential
>     order [e.g., the forward direction]. Then it is not possible
>     for M to compute [that is, write onto (one or more of) the
>     tape(s) being used as the "output device"] a result which is
>     greater than N*R*T (and the actual limit might be much, much
>     smaller -- I'm just picking a "safe" value). For example, if
>     each of the T tapes starts out with a number of 1's representing
>     unary integers followed by infinite 0's, then it is not possible
>     for M to compute the product of those numbers if that result
>     would be greater than N*R*T [or whatever the actual limit is].
> 
> Whereas if as few as *one* of the tapes is writable and reversible
> [can both read & write and step both forwards & backwards],
> then a TM with a fixed finite number of states & rules can
> compute the product of the numbers on the tapes [or the square
> of the number, say, if there's only one tape] no matter *how*
> big that product might be!!
> 
> 
> -Rob
> 
> -----
> Rob Warnock			<····@rpw3.org>
> 627 26th Avenue			<URL:http://rpw3.org/>
> San Mateo, CA 94403		(650)572-2607
> 

I'm not sure that Chris used the phrase "a machine which visits the tape 
only in sequential order" to mean a machine that can move its head in 
one direction only, i.e. to visit/read each cell on the tape only once. 
Such a machine would have no purpose for the *write* operation. Thus, I 
suspect he wanted to compare the capabilities of machines with sequential 
(left/right) vs. "random" access to their memories (and these two kinds 
are Turing-equivalent).
From: Chris F Clark
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <sdd3aw4ppdr.fsf@shell01.TheWorld.com>
Marc Gluch <··········@mindtap.com> writes:

> Rob Warnock wrote:
>> Chris F Clark  <···@shell01.TheWorld.com> wrote:
>> +---------------
>> | Although I hate to participate in this thread, I have a simple
>> | question.  Is it possible to prove that a machine which visits the
>> | tape only in sequential order is as powerful as a machine that can
>> | visit the tape non-sequentially, perhaps by appealing to multi-tape
>> | TMs in the argument?
>> +---------------
>> I don't think so. In fact, I think I can present a rather trivial
>> counterexample:
...
> I'm not sure that Chris used the phrase "a machine which visits the
> tape only in sequential order" to mean a machine that can move its
> head in one direction only, i.e. to visit/read each cell on the tape
> only once. Such a machine would have no purpose for the *write*
> operation. Thus, I suspect he wanted to compare the capabilities of
> machines with sequential (left/right) vs. "random" access to their
> memories (and these two kinds are Turing-equivalent).

Actually, I was trying to understand the argument about using the
farthest-reached position on the Turing tape as an encoding of time.
I think I expressed it quite poorly, as it came across to several
people as asking about RAM machines.

The point I was trying to understand is whether a TM where you could
only reach so far onto a tape at a specific point of time (after a
specific number of steps) was less powerful than a general TM.
Clearly, if the machine can step left (or right) on each transition,
it can reach at most n steps left (or n steps right) after n
transitions, so the bound of 2*n tape squares for n steps is clearly
no bound at all.  In constrast, a fixed constant bound say k squares
after n steps (where k is independent of n, i.e. as n increases, k
does not), renders the TM a FA, which is clearly restricted.  Are
there lesser bounds that are also unrestrictive?  I.e. if we make sure
the TM has not moved left more than ceil(n+1/2) squares to the left or
the right, can it still produce all computable series on its tape?

However, the more I think about it, the more I'm certain that doesn't
answer the question.  For the problem to be interesting, one has to
let the TM run until it halts, no matter how many steps it takes,
between inspection points.  If you are trying to discuss a TM that
interacts with the real universe (pun unintended, but applicable), you
need to allow the other agent to use real numbers (i.e. any convergent
series, and not just the computable ones) and to use those
calculations to determine what symbol to enter on the tape.  (Barb
Knox caught this point precisely by pointing out that the
diagonalization argument thus holds.)

To me, this would seem to be the one argument for studying PTMs, which
is knowing how TMs react to agents that confront them with
uncomputable series.  That is at least one aspect of computers in the
real world.  The real world is not known to be representable by a
computable series (and as I mentioned, there are some results in
physics that seem to hold only if it is not).  If one posits such a
world, then one might wish to know how one's computers interact with
it.  That is: given a TM and a tape containing an uncomputable
series, what does the machine do?  Are there interesting things a
machine can do?

I don't know if this is the argument Wegner et al. make, but if I
were trying to argue their point, it is the one I would make (and I
guess I just did).
From: Don Geddis
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <871wboicke.fsf@geddis.org>
Chris F Clark <···@shell01.TheWorld.com> wrote on Sun, 21 Oct 2007:
> The real world is not known to be representable by a computable series (and
> as I mentioned, there are some results in physics that seem to hold only if
> it is not).

What specific physics results are you referring to?

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
The graduate with a Science degree asks, "Why does it work?"  
The graduate with an Engineering degree asks, "How does it work?" 
The graduate with an Accounting degree asks, "How much will it cost?" 
The graduate with a Liberal Arts degree asks, "Do you want fries with that?"
From: Chris F Clark
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <sddprz8vvru.fsf@shell01.TheWorld.com>
Don Geddis <···@geddis.org> writes:

> Chris F Clark <···@shell01.TheWorld.com> wrote on Sun, 21 Oct 2007:
>> The real world is not known to be representable by a computable series (and
>> as I mentioned, there are some results in physics that seem to hold only if
>> it is not).
>
> What specific physics results are you referring to?

I believe it is the mass of an electron.  It comes out to some
specific value which only makes sense if some series converges
exactly.  And, experimental measures consistently show that it has
this exact value out to a surprising number of digits, well beyond the
experimental error normally allowed for such measurements (1 part in
10**9th, if I recall correctly).  As I understand it, it shows that
some power-law holds across a large range of dimensions (essentially
for all scales).  (Apparently, there are physicists who study this
value because if the power law didn't hold at Planck (sp?) distances,
then there would be artifacts of this at larger scales, and the value
would vary slightly from this known value. And, if I follow the
argument correctly, if the value did vary from this known value, it
would allow some specific results from QM to be attributed to other
causes.  From what I gather, some of the fundamental constants can be
derived from very specific integrals that have precise values (and the
mass of the electron is one that can be both calculated and
measured precisely).)

Now, the physics (and the related math) needed to understand this
phenomenon are beyond me, so perhaps I have it mis-interpreted, but
this fact was astonishing and thus it has stuck.  Of course, this
being usenet, I will be summarily disabused of my flawed memory.
Hopefully, someone will put enough of the pieces together that I will
have some dignity left.  (For example, if the series converges merely
to a computable real.)
From: Joachim Durchholz
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffhc68$e3a$1@online.de>
Chris F Clark wrote:
> I believe it is the mass of an electron.  It comes out to some
> specific value which only makes sense if some series converges
> exactly.  And, experimental measures consistently show that it has
> this exact value out to a surprising number of digits, well beyond the
> experimental error normally allowed for such measurements (1 part in
> 10**9th, if I recall correctly).  As I understand it, it shows that
> some power-law holds across a large range of dimensions (essentially
> for all scales).  (Apparently, there are physicists who study this
> value because if the power law didn't hold at Planck (sp?) distances,
> then there would be artifacts of this at larger scales, and the value
> would vary slightly from this known value. And, if I follow the
> argument correctly, if the value did vary from this known value, it
> would allow some specific results from QM to be attributed to other
> causes.  From what I gather, some of the fundamental constants can be
> derived from very specific integrals that have precise values (and the
> mass of the electron is one that can be both calculated and
> measured precisely).)

Interesting.

> Now, the physics (and the related math) needed to understand this
> phenomenon are beyond me, so perhaps I have it mis-interpreted, but
> this fact was astonishing and thus it has stuck.

One possible misinterpretation is that this is "just math".
E.g. you'll find pi represented in all kinds of physical measurements, 
with a "surprising" precision, and hints that it's actually exact down 
to Planck levels.

Of course, that's just speculation. But I know that this kind of 
misinterpretation is *very* easy.
For example, the Great Pyramid and its kin have the number pi all over 
their dimensions, and with an uncanny precision. Which, of course, has 
led to all kinds of speculation about the mathematical abilities of 
ancient Egyptians, and I was baffled myself. A few weeks later, I 
stumbled over a factoid: they tended to measure horizontal distances by 
running a wheel along them, and to represent vertical distances as 
multiples of wheel height - no theory, just geometric effect.
This story just popped up in the back of my head. Again, just 
speculation, so I'm holding my judgement, not my breath :-)

> Of course, this being usenet, I
> will be summarily disabused of my flawed memory.  Hopefully, someone
> will put enough of the pieces together that I will have some dignity
> left.  (For example, if the series converges merely to a computable
> real.)

One criticism might be this: Given the quantum nature of space and time 
itself, it's doubtful whether measuring anything to an exactness beyond 
Planck dimensions makes sense at all.
E.g. pi is almost ubiquitous in physics, but measuring a length to a 
precision of more than ca. 35 digits will not work because space itself 
becomes granular at that scale (Planck length is ~10^-35m).
(The counterargument would be that while the measurements themselves 
cannot be irrational, two measurements can be connected via a 
transcendental function so you can still derive an irrational number to 
arbitrary precision. The counter-counterargument is that you still can't 
drive the measurements themselves to arbitrary precision. I'm 
withholding judgement again.)

Regards,
Jo
From: Don Geddis
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <87zlybf3qk.fsf@geddis.org>
Chris F Clark <···@shell01.TheWorld.com> wrote on Sun, 21 Oct 2007:
> Don Geddis <···@geddis.org> writes:
>> Chris F Clark <···@shell01.TheWorld.com> wrote on Sun, 21 Oct 2007:
>>> The real world is not known to be representable by a computable series
>>> (and as I mentioned, there are some results in physics that seem to hold
>>> only if it is not).
>>
>> What specific physics results are you referring to?
>
> I believe it is the mass of an electron.  It comes out to some specific
> value which only makes sense if some series converges exactly.

I appreciate that you're relating a half-recalled anecdote from the past,
and you aren't a physicist.

But really, this can't be the story.  An electron's mass, in some sense,
"just is".  There isn't any "number" that it corresponds to.  Think about it:
any number would have to be in some kind of units, like "gram" or "pound".
But how are those units defined anyway?  Just some random, human-centric
invention.  Which is more fundamental in the universe?  The mass of the
electron, or a gram or pound?  Obviously, the mass of the electron.

Really, the mass of the electron should be "1".  Just like the speed of light
is also "1".  Those are two of the most fundamental things in the universe.

So it's meaningless to say that the mass of an electron has an interesting
numeric value in some arbitrary unit.  That's a statement about the arbitrary
unit, not about the nature of the physical universe.

There might well be something like what you're thinking of (although I don't
know it myself).  Perhaps the ratio of the mass of an electron to the mass of
a proton, or neutron, or something.  Or maybe one of the unitless constants in
physics, like the fine structure constant.

But the one you've guessed, can't be it.  The mass of an electron is 1
(in units of electron-mass, of course).

> And, experimental measures consistently show that it has this exact value
> out to a surprising number of digits, well beyond the experimental error
> normally allowed for such measurements (1 part in 10**9th, if I recall
> correctly).  As I understand it, it shows that some power-law holds across
> a large range of dimensions (essentially for all scales).  (Apparently,
> there are physicists who study this value because if the power law didn't
> hold at Planck (sp?) distances, then there would be artifacts of this at
> larger scales, and the value would vary slightly from this known
> value. And, if I follow the argument correctly, if the value did vary from
> this known value, it would allow some specific results from QM to be
> attributed to other causes.

I understand most of the individual words you write here, and it sounds
intriguing, but I have to admit that I can't follow the argument.

> From what I gather, some of the fundamental constants can be derived from
> very specific integrals that have precise values (and the mass of the
> electron is one that can be both calculated and measured precisely).)

As far as I'm aware, there is no more-fundamental theory of physics that
allows one to calculate the mass of an electron.  It simply is.  It's
measured experimentally, not calculated.

> Now, the physics (and the related math) needed to understand this phenomenon
> are beyond me, so perhaps I have it mis-interpreted, but this fact was
> astonishing and thus it has stuck.  Of course, this being usenet, I will be
> summarily disabused of my flawed memory.  Hopefully, someone will put
> enough of the pieces together that I will have some dignity left.  (For
> example, if the series converges merely to a computable real.)

Physics is a big thing, and not fully understood, so there may be something
out there like you suggest.  But the words you write aren't ringing any bells
for me, and I know the specific details can't be right, so it's hard to tell
how to learn more.  Perhaps someone else will recall a reference on this
topic.

Thanks for the recollection, though.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Any technology distinguishable from magic is insufficiently advanced.
From: Thant Tessman
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffivdu$cfq$1@news.xmission.com>
Don Geddis wrote:

[...]

> As far as I'm aware, there is no more-fundamental theory of physics that
> allows one to calculate the mass of an electron.  It simply is.  It's
> measured experimentally, not calculated. [...]

I think the point is that you can calculate the mass of an electron from 
other (measured) quantities. Sometimes those other quantities don't have 
straightforward relationships to what it is you are calculating. 
Heck, energy equals mass times the speed of light squared, for example.

But if all we're talking about is natural phenomena that are described 
by math that requires instantaneous 'calculation' of infinite series, 
physics is *full* of this kind of thing. Heck, the calculus of 
infinitesimals was invented to describe Newtonian motion. You need 
look no further than the three-body problem to find an idealized natural 
phenomenon that requires infinite calculation to describe exactly.

But maybe I misunderstand the points being made.

-thant
From: Paul Wallich
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffjhel$at$1@reader1.panix.com>
Thant Tessman wrote:
> Don Geddis wrote:
> 
> [...]
> 
>> As far as I'm aware, there is no more-fundamental theory of physics that
>> allows one to calculate the mass of an electron.  It simply is.  It's
>> measured experimentally, not calculated. [...]
> 
> I think the point is that you can calculate the mass of an electron from 
> other (measured) quantities. Sometimes those other quantities don't have 
> straightforward relationships to what it is you are calculating. Heck, 
> energy equals mass times the speed of light squared, for example.
> 
> But if all we're talking about is natural phenomena that are described 
> by math that requires instantaneous 'calculation' of infinite series, 
> physics is *full* of this kind of thing. Heck, the calculus of 
> infinitesimals was invented to describe Newtonian motion. You need 
> look no further than the three-body problem to find an idealized natural 
> phenomenon that requires infinite calculation to describe exactly.
> 
> But maybe I misunderstand the points being made.

Reading this through, it strikes me that the original electron anecdote 
may have been not about its mass but its charge, which (undergraduate 
version) diverges under naive quantum-mechanical calculation. That 
divergence never appears in the real universe because the electron 
evokes a divergent quantity of virtual screening charges from the 
vacuum. The miracle comes in the fact that there's nothing in the 
formalism (ahem! informalism?) used for the calculations that requires 
the infinities to cancel each other out so neatly and leave the paltry 
little _e_ as it is. (There are some similar but not identical arguments 
about the balance between matter and antimatter.)

paul
From: mikel evins
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <472187ce$0$26420$88260bb3@free.teranews.com>
Don Geddis wrote:

> As far as I'm aware, there is no more-fundamental theory of physics that
> allows one to calculate the mass of an electron.  It simply is.  It's
> measured experimentally, not calculated.

Well, there are nits to be picked with that characterization, but, 
leaving those aside, one of the ways you formulate a theory to be 
testable is to derive from it some quantity that you can measure--such 
as the mass of the electron. You definitely do calculate it in that 
context, so that you can compare what you've calculated with what 
experimentalists can measure, and thereby provide a means of testing 
your theory.

Similarly, one of my professors spent a huge effort in class one 
semester deriving the gas laws from quantum mechanics, so that he could 
compare two different theoretical assumptions about elementary 
particles. He calculated the gas laws from each of the two assumptions, 
leading to different results. One of the results closely matched the 
observed gas laws, and the other didn't. He could then claim that the 
observed gas laws provided evidence in favor of the one assumption, and 
against the other.

(As an aside, the assumptions in question had to do with whether it is 
possible in principle to distinguish between two elementary particles of 
the same kind --two electrons, for example--or whether when we observe 
two electrons we are really just making two different observations of 
the same object. Naturally, since this was quantum mechanics, the 
crazier assumption is the one that won out. The somewhat puckish way he 
put it was "voilà! There is only one electron!")


-- 
Posted via a free Usenet account from http://www.teranews.com
From: Don Geddis
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <87lk9pwuf6.fsf@geddis.org>
mikel evins <······@mac.com> wrote on Fri, 26 Oct 2007:
> Don Geddis wrote:
>> As far as I'm aware, there is no more-fundamental theory of physics that
>> allows one to calculate the mass of an electron.  It simply is.  It's
>> measured experimentally, not calculated.
>
> Well, there are nits to be picked with that characterization, but, leaving
> those aside, one of the ways you formulate a theory to be testable is to
> derive from it some quantity that you can measure--such as the mass of the
> electron. You definitely do calculate it in that context, so that you can
> compare what you've calculated with what experimentalists can measure, and
> thereby provide a means of testing your theory.

I'm still not sure that there's any theory which calculates the mass of an
electron.  As far as I know, it doesn't have ("isn't known to have") any
substructure.

Even if we were talking about protons, I'd still say that "calculate" is kind
of a silly term.  Yes, you could say that a proton is made up of two up quarks
and one down, and that up quarks have so-and-so a mass, and down quarks this
other mass, and therefore "calculate" the mass of a proton to be 2x+y.

But really, quark masses are just as arbitrary as proton masses, and no
easier (in fact, far more difficult) to measure.

When somebody states that they can calculate the exact value of something, I
interpret that to mean that the value is so precise we can use it to calibrate
the accuracy of our measurements.  If we measure something different than the
calculation, then the measuring device is broken.

This is not the case for the electron mass.  There is no theory that will give
you a calculated value more accurate than whatever it is you happen to be
able to measure at this point in time.

This is different, for example, than the change in mass due to special
relativity.  If you give me the rest mass of some object, and its velocity
relative to me, I can _calculate_ -- exactly -- what its apparent current
mass ought to be.  And if I measure something different, that probably means
my measuring device is broken.

Nobody can calculate the mass of an electron in the same sense.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
I can picture in my mind a world without war, a world without hate.  And I can
picture us attacking that world, because they'd never expect it.
	-- Deep Thoughts, by Jack Handey
From: mikel evins
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <47234de7$0$26424$88260bb3@free.teranews.com>
Don Geddis wrote:
> mikel evins <······@mac.com> wrote on Fri, 26 Oct 2007:
>> Don Geddis wrote:
>>> As far as I'm aware, there is no more-fundamental theory of physics that
>>> allows one to calculate the mass of an electron.  It simply is.  It's
>>> measured experimentally, not calculated.
>> Well, there are nits to be picked with that characterization, but, leaving
>> those aside, one of the ways you formulate a theory to be testable is to
>> derive from it some quantity that you can measure--such as the mass of the
>> electron. You definitely do calculate it in that context, so that you can
>> compare what you've calculated with what experimentalists can measure, and
>> thereby provide a means of testing your theory.
> 
> I'm still not sure that there's any theory which calculates the mass of an
> electron.  As far as I know, it doesn't have ("isn't known to have") any
> substructure.

String theories, gauge theories and twistor theories do.


-- 
Posted via a free Usenet account from http://www.teranews.com
From: Rob Warnock
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <No-dnTI8Vq9v3rnanZ2dnUVZ_rSrnZ2d@speakeasy.net>
mikel evins  <······@mac.com> wrote:
+---------------
| Don Geddis wrote:
| > mikel evins <······@mac.com> wrote on Fri, 26 Oct 2007:
| >> Don Geddis wrote:
| >>> As far as I'm aware, there is no more-fundamental theory of physics that
| >>> allows one to calculate the mass of an electron.  It simply is.  It's
| >>> measured experimentally, not calculated.
| >> Well, there are nits to be picked with that characterization, but, leaving
| >> those aside, one of the ways you formulate a theory to be testable is to
| >> derive from it some quantity that you can measure--such as the mass of the
| >> electron. You definitely do calculate it in that context, so that you can
| >> compare what you've calculated with what experimentalists can measure, and
| >> thereby provide a means of testing your theory.
| > 
| > I'm still not sure that there's any theory which calculates the mass of an
| > electron.  As far as I know, it doesn't have ("isn't known to have") any
| > substructure.
| 
| String theories, gauge theories and twistor theories do.
+---------------

[And drifting further off-topic...]

For a very interesting [to me, at least] overview of some problems
inherent in the above, read:

    http://www.houghtonmifflinbooks.com/catalog/titledetail.cfm?titleNumber=689539
    "The Trouble With Physics: The Rise of String Theory,
    the Fall of a Science, and What Comes Next"
    Lee Smolin (Houghton Mifflin, 2006)
    ...
    With its exotic new particles and parallel universes, string theory
    has captured the public's imagination and seduced many physicists.

    But as Smolin reveals, there's a deep flaw in the theory: no
    part of it has been tested, and no one knows how to test it.
    In fact, the theory appears to come in an infinite number of
    versions, meaning that no experiment will ever be able to prove
    it false.  ...

Also see <http://www.thetroublewithphysics.com/> for links to some of
the criticism of Smolin's book and his responses.


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Paul Wallich
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffg5r0$s79$1@reader1.panix.com>
Chris F Clark wrote:
[...]
>  If you are trying to discuss a TM that
> interacts with the real universe (pun unintended, but applicable), you
> need to allow the other agent to use real numbers (i.e. any convergent
> series, and not just the computable ones) and to take those
> calculations to determine what symbol to enter on the tape.  (Barb
> Knox caught this point precisely by pointing out that the
> diagonalization argument thus holds.)

This is not exactly true. It's true if you want the TM to interact with 
mathematical idealizations of the "real universe", but may not be true of 
the universe itself.

paul
From: Joachim Durchholz
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <fffoek$ian$1@online.de>
Chris F Clark schrieb:
> That is, given a TM and a tape containing an uncomputable
> series, what does the machine do?  Are there interesting things a
> machine can do?

No.
The TM cannot process infinitely many digits, so it cannot distinguish a 
computable from an uncomputable series.

One of the things where a PTM could be considered "more powerful" is 
that it can run for an indefinite time. However, such a machine isn't an 
algorithm - the entire theory of algorithms doesn't apply to the machine 
as such.
(Not that this is considered a serious problem: all the questions that 
we have about the behaviour of such a machine are of the form "given 
state X and input Y, what's the outcome Z?", and *that* is an algorithm.)

Regards,
Jo
From: Neelakantan Krishnaswami
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <slrnfhnca9.gm.neelk@gs3106.sp.cs.cmu.edu>
In article <<···············@shell01.TheWorld.com>>,
Chris F Clark <···@shell01.TheWorld.com> wrote:
>
> The point I was trying to understand is whether a TM where you could
> only reach so far onto a tape at a specific point of time (after a
> specific number of steps) was less powerful than a general TM.
>
> Clearly, if the machine can step left (or right) on each transition,
> it can reach at most n steps left (or n steps right) after n
> transitions, so the bound of 2*n tape squares for n steps is clearly
> no bound at all.
>
> In contrast, a fixed constant bound, say k squares after n steps
> (where k is independent of n, i.e. as n increases, k does not),
> renders the TM a FA, which is clearly restricted.  Are there lesser
> bounds that are also unrestrictive?  I.e. if we make sure the TM has
> not moved left more than ceil(n+1/2) squares to the left or the
> right, can it still produce all computable series on its tape?

First, note that you can't easily bound the available memory by the
amount of time, because if you need more memory you can busy wait
until your needed memory usage is below the limit. (Eg, if you can
look at log(N) cells, then you just need to wait 2^N time steps to
get access to linear space.)

However, bounding the memory you use as a function of the size of the
input *is* very useful. If you bound the amount of space to be linear
in the size of the input, you get the linear-bounded automata, which
correspond exactly to the context-sensitive grammars. (IIRC, they're
type 1 grammars in the Chomsky hierarchy, one step below the
unrestricted grammars.)

If you want to use this kind of idea to analyze interaction, you have
some more work to do. That's because an input stream can't necessarily
be bounded in size up front. So instead, you want to treat the TM as a
stream processor, which takes inputs and produces outputs, and check
that you satisfy various liveness properties (eg, you produce an
output within poly-time/space/whatever of each input). Intuitively,
you're moving from pure finite automata to transducers (like Mealy
machines).
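
For concreteness, here is a minimal sketch of that transducer view
(the names and representation are mine, purely for illustration;
Haskell, since that's the language that appears elsewhere in this
thread):

    -- A Mealy-style transducer: consume one input, emit one output,
    -- and continue as a successor machine.
    newtype Mealy i o = Mealy { step :: i -> (o, Mealy i o) }

    -- Drive a transducer over a (possibly infinite) input stream.
    run :: Mealy i o -> [i] -> [o]
    run m (x:xs) = let (y, m') = step m x in y : run m' xs
    run _ []     = []

    -- Example: a running total, producing one output per input,
    -- which makes the liveness property easy to state.
    total :: Int -> Mealy Int Int
    total acc = Mealy (\x -> let acc' = acc + x in (acc', total acc'))

Here run (total 0) [1,2,3] is [1,3,6], with each output available as
soon as the corresponding input arrives.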

Personally, I don't find TMs a terribly convenient formalism for
thinking about this stuff in, though.

-- 
Neel R. Krishnaswami
·····@cs.cmu.edu
From: Barb Knox
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <see-B5E782.14212621102007@lust.ihug.co.nz>
In article <···············@shell01.TheWorld.com>,
 Chris F Clark <···@shell01.TheWorld.com> wrote:
[SNIP]

> Asked another way, are there series of computable numbers, such that
> the entire series is non-computable?  If so, if the tape given to a TM
> contains as input a non-computable series, is not the machine which
> reads and processes that tape capable of producing results that a TM
> with only a tape containing a computable input could not produce?

One conceptually simple example is that no sequence of all the 
computable real numbers is itself computable.  That is, the computable 
reals are not recursively enumerable.  If they were then you could 
diagonalise to compute another real that is provably nowhere in the 
sequence.
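
A sketch of that diagonalisation (hypothetical by construction -- the
whole point is that no such enumeration is computable -- and ignoring
the usual 0.999... = 1.0 representation caveat):

    -- Represent a real in [0,1) by its decimal digit function.
    type R = Int -> Int              -- n-th digit, in 0..9

    -- Given a (supposed) enumeration of all the computable reals,
    -- build a real that differs from the k-th one at digit k.
    diag :: (Int -> R) -> R
    diag enum n = (enum n n + 1) `mod` 10

If enum were computable then diag enum would be too, yet it appears
nowhere in the enumeration -- contradiction.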


-- 
---------------------------
|  BBB                b    \     Barbara at LivingHistory stop co stop uk
|  B  B   aa     rrr  b     |
|  BBB   a  a   r     bbb   |    Quidquid latine dictum sit,
|  B  B  a  a   r     b  b  |    altum viditur.
|  BBB    aa a  r     bbb   |   
-----------------------------
From: Mark Tarver
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192905729.657644.100430@z24g2000prh.googlegroups.com>
On 20 Oct, 11:55, Joachim Durchholz <····@durchholz.org> wrote:
> Mark Tarver schrieb:
>
> > Matthias's solution to the challenge of dynamic computation is to
> > represent time as another dimension, in a manner analogous to
> > Minkowski's space-time in which events are frozen so to speak in an
> > ice-block - or, as here, recorded on a tape. Now if you represent
> > things this way then the TM works fine. McTaggart called this B-
> > series time.
>
> > The other great classical model is time as becoming, which is how we
> > experience time as human observers.  Events in the future are not
> > there in any meaningful sense until they transpire and become actual
> > by happening.  This is Raffael's view.  McTaggart called this A-series
> > time.
>
> > The question is which is more appropriate for modelling the computer?
>
> I think you're misunderstanding Matthias' model.
> He isn't using B-series time, he's constructing PTM-equivalent TMs so
> that they don't access the cells before they're filled by the external
> entities, so the outcome of the TM doesn't depend on whether you have
> A-series time or B-series time. (A beautiful construction IMHO.)
>
> Regards,
> Jo

OK, I'm skimming a lot of material in this thread because I'm busy, so
this may or may not be helpful.

There are several ways in which you can criticise traditional CS
theory as not supportive of programming practice.  One is the actual
distance of these traditional models from the environments in which
programmers actually work.  You don't have to be fixated on the idea
that there are machines more powerful than the TM to accept this
position.  If we had a convincing model of our actual machines that
was not essentially linked to proprietary hardware or software then we
could sharpen this vague idea of computational adequacy.  A good
virtual machine would provide a virtual machine instruction set for
implementing and porting many of our current languages.  The WAM, for
instance, liberated Prolog from the specifics of the DEC-10 and
provided an industry standard.

This has nothing to do with the power of TMs of course.  That is a
separate story.

Regarding the work of Wegner et al., I've no axes to grind either
way.  But a Turing machine, as traditionally conceived, starts off
with a configuration of symbols S and a program P and its final
configuration (if any) is perfectly predictable.  If however you allow
symbols to appear on the tape from some outside source during the
computation then the final configuration is not predictable.  With
suitable restrictions, you can perhaps say that the final
configuration will fall within some interval but the system as a whole
is radically non-deterministic.  This is exactly what happens when you
hook up a computer to some chaotic peripheral like a
human being.  I take it that is part of what Raffael is saying.

In other words computers then come to display the sorts of behaviour
characteristic of physical processes and you end up like most
forecasters saying what may happen (light rain in Tunbridge Wells) and
what may not (hurricanes will not happen) rather than what must
happen.

The stronger thesis that therefore there are machines more powerful
than TMs (what does 'more powerful' mean here?) does not follow. The
TM may be quite adequate at processing these events.  The point is not
to trash TMs, but rather to study them in a different context from the
classical one.  1930s computer theory was not into embedded systems.

The only way that I can see to try to forestall this is to adopt a B-
series model of time whereby the events are already laid out in a time
line and hence the symbol configuration is already fixed.  But this
really doesn't help us much because our experience of time is A-
series.

Mark
From: Joachim Durchholz
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <fffgdp$7l7$1@online.de>
Mark Tarver schrieb:
> But a Turing machine, as traditionally conceived, starts off
> with a configuration of symbols S and a program P and its final
> configuration (if any) is perfectly predictable.

For an external observer: yes.
But predictability isn't a relevant part of the definition of computability.

> If however you allow symbols to appear on the tape from some outside
> source during the computation [...] the system as a whole is
> radically non-deterministic.  This is exactly what happens when you 
> hook up a computer to some chaotic peripheral like a human being.  I
> take it that is part of what Raffael is saying.
> 
> In other words computers then come to display the sorts of behaviour 
> characteristic of physical processes and you end up like most 
> forecasters saying what may happen (light rain in Tunbridge Wells)
> and what may not (hurricanes will not happen) rather than what must 
> happen.

You're mixing up two entirely different forms of nondeterminism.

The first one is "things happen outside of the domain of your knowledge, 
and without knowledge, we can't predict". It's a case of insufficient data.

The second is the weather effect: something that can happen entirely 
within the computer. Weather simulations are as chaotic and 
unpredictable as the real weather.
"Unpredictable" being "no easier way to determine or approximate the 
outcome than to simply start the program and look at the results".
Which, in turn, is actually the norm in computer programming: if there 
were a simpler way to determine the result of a program, we'd either 
rewrite the program to use that simpler way, or we'd do the processing 
in our heads ;-)
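
A concrete in-computer instance (my own toy example): the logistic map
is completely deterministic, yet orbits from nearly identical seeds
diverge so quickly that running the program is, in practice, the only
way to learn the outcome:

    -- Logistic map at r = 4: textbook deterministic chaos.
    logistic :: Double -> Double
    logistic x = 4 * x * (1 - x)

    -- The infinite orbit of a starting point.
    orbit :: Double -> [Double]
    orbit = iterate logistic

Compare take 50 (orbit 0.3) with take 50 (orbit 0.3000001): they agree
for a while, then bear no resemblance to one another.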

> The stronger thesis that therefore there are machines more powerful
> than TMs (what does 'more powerful' mean here?) does not follow. The
> TM may be quite adequate at processing these events.  The point is not
> to trash TMs, but rather to study them in a different context from the
> classical one.  1930s computer theory was not into embedded systems.

If that's the case, I fail to see the questions that the PTM is answering.
(It may well be an answer for which no question exists.)

> The only way that I can see to try to forestall this is to adopt a B-
> series model of time whereby the events are already laid out in a time
> line and hence the symbol configuration is already fixed.

Let me reiterate:

TMs are about "what can be computed".
The 1930s claim is that TMs are already as powerful as one can get in 
this respect.

Matthias' construction is providing A-series input to the TM. He 
allows the external world to modify those parts of the tape that the TM 
hasn't visited yet. (In fact, for the TM itself and its capabilities, 
it's irrelevant whether time is considered A-series or B-series.)
The PTM has the additional capability that the external world can modify 
those parts of the tape that it already has visited.

Now Matthias' claim is that such a PTM can always be rewritten to a TM 
where the external world is limited to rewriting the unaccessed parts of 
the tape.
If this were a formal debate, I'd ask for a formal proof, but given the 
kind of things that have been proven about what kinds of modifications 
of a TM can be mapped back to the standard model, I think that claim is 
entirely reasonable. (Two tapes, multiple tapes, even an infinity of 
tapes: it all doesn't matter.)

Regards,
Jo
From: Tim X
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <87ir50vrj8.fsf@lion.rapttech.com.au>
Paul Wallich <··@panix.com> writes:

> Raffael Cavallaro wrote:
>> On 2007-10-19 13:17:11 -0400, Paul Wallich <··@panix.com> said:
>>
>>> That would be true if the state of Wikipedia at some future point were
>>> defined as a computable function of its state at TO. But no one is
>>> making that claim. It's like claiming that the exact time of decay of
>>> the next radium atom in a sample can be computed from everything known
>>> at T0.
>>
>> Determinists make precisely this claim. For example, hidden variable
>> theory.
>
> First, they're wrong. And second, they don't make that claim at all unless
> they claim that the hidden variables can in fact be known. And the ones
> I've read (mostly Bohm) tend to avoid that claim like the plague.
>
> It sounds as if this argument is more a social one -- i.e. about what
> matters -- than about the underlying facts. Even the most diehard
> determinists would likely be willing to agree that you could use less
> computing power to compute a system's evolution if you had an oracle about
> some set of future states.
>
> paul

I think you may be right. Given the thread title "on the relations of
traditional CS theory to modern programming practice" and Mark's original
point (as I read it anyway) that CS theory has been outstripped by
programming practice, I think we have now reached a different place where
we are actually debating/comparing different theoretical
'camps'. Personally, the fact we are debating these different perspectives
would seem to indicate CS theory is alive, well and active, and not 'stuck'
and outstripped by modern programming practice.

I am a bit confused as to what people define as 'traditional' CS theory in
this discussion. If it means a really restrictive definition that
only includes the original CT thesis and basic TMs, then possibly you could
argue it has been outstripped by modern programming practice. However,
since many of the extensions and additions to models of
computability are not much younger than the original work of Church and
Turing, they could also be considered traditional.

Putting aside arbitrary decisions of what is traditional and what is
'modern', I think you may be right in that what is really being debated is
the sophistication and reliability of existing tools to deal with the
increasing complexity of the systems we are trying to build. This seems to
be more about design and engineering than about the underlying theory
concerning what is and isn't computable. When designing Wikipedia, I'd be
surprised if anyone applied any theory of computability in the process,
traditional or otherwise.

If anything, the possible 'failure' amongst developers/programmers is in
not having knowledge of disciplines other than CS and 'traditional' CS
theory. It seems few programmers understand principles of design,
typography, ergonomics, human psychology, team dynamics, complexity and
project management/maintenance etc. These are the things that are likely to
have more bearing on the types of systems we create than underlying
theories of computability.

Tim

-- 
tcross (at) rapttech dot com dot au
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192825382.721226.23200@k35g2000prh.googlegroups.com>
On Oct 19, 11:41 am, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-19 05:47:16 -0400, Joachim Durchholz <····@durchholz.org> said:
>
> > That's not lame, that's the point exactly.
> > The algorithm behind Wikipedia is the transformation of wiki markup to
> > HTML. Plus outputting edit, login and logout dialogs on request.
>
> I think you're missing the point here.
>
> The Strong Church-Turing thesis (SCT) posits that, begining at time T0
> and given enough input (including the state of Wikipedia at T0), a TM
> could compute the state of Wikipedia at any arbitrary future time Tn
> *without further inputs*.  Computing the state of Wikipedia at some
> arbitrary Tn given the state at Tn itself as input is trivial (we call
> it 'copying') - no one doubts that a TM could do this, and it is not
> what we're discussing here. That's what Pascal characterized as "lame."

However, that is precisely what the actual Wikipedia does: at any
given time Tn it copies the input that it has received up to Tn.
There is really no non-trivial computation going on here at all, not
to mention anything that a TM couldn't do.

> Pascal and I are saying that starting at T0, the state of Wikipedia at
> an arbitrary future Tn is one such computation that can be done by a
> PTM (or some other equivalent interactive formalism), but not an
> ordinary TM.

No, it cannot.  Either machine needs to see the future input before it
can compute the future output.

> It is interactive by nature - it requires further inputs
> at future times between T0 and Tn (excluding the trivial "lame" case of
> having as input the state at Tn itself as discussed above).

But excluding that "trivial" "lame" case excludes the PTM as well...

> Claiming, as the SCT does, that no further inputs are required amounts
> to a claim that the future inputs by human contributors are
> *completely* predictable given the right algorithm and the right
> initial state at T0.

You are putting up a strawman here.  Nobody claims that the future
state of Wikipedia can be predicted by the T0 input alone.  No TM and
no PTM can do that.  Either machine needs to see the other input as
well before it can give the answer.

You have to remember that the TM is an abstraction.  There is no
notion of time, so it does not make sense to speak of "when" the input
is actually given.  All that matters is that the input is on a given
input position by the time the read head advances to that position.
You can simulate the behavior of a PTM using a TM by making the
specification more strict:  Divide the input into "sections" and
require the output to be partitioned into corresponding "sections" with
the additional requirement that by the time (i.e., the step count) at
which output section i is produced the read head must not yet have
ventured into input section j for any j > i.  It is as simple as that,
and it actually shows that, if anything, the PTM can never compute more
than a regular TM, since a PTM can be simulated by a TM on which an
additional restriction has been imposed.
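
A toy rendering of that sectioning (mine, and much cruder than the
construction above: one input section per symbol, one output section
per input section):

    -- Each output section is computed from the input sections seen
    -- so far, and from nothing else -- the restriction by construction.
    sections :: ([i] -> o) -> [i] -> [o]
    sections f = go []
      where
        go seen (x:xs) = let seen' = seen ++ [x]
                         in f seen' : go seen' xs
        go _    []     = []

Here sections sum [1,2,3] is [1,3,6]; output section i never depends
on input section j for any j > i.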
From: Lauri Alanko
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffb66s$8q7$1@oravannahka.helsinki.fi>
In article <·······················@k35g2000prh.googlegroups.com>,
 <··············@gmail.com> wrote:
> You can simulate the behavior of a PTM using a TM by making the
> specification more strict:  Divide the input into "sections" and
> require the output to be partitioned into corresponding "sections" with
> the additional requirement that by the time (i.e., the step count) at
> which output section i is produced the read head must not yet have
> ventured into input section j for any j > i.

Note that this simulation implies that at any given time, the machine
has only a finite amount of tape available, and hence can only do a
finite amount of work before the next input arrives. So you're not
only simulating interactivity, you're simulating real-timeness! :)

How about just using a multi-tape TM with one tape as a "workspace"
and one tape as "input"? Output could be defined in a number of ways.
Maybe the neatest way (for synchronous communication, anyway) would be
to require that each symbol that has been read from the input tape
must be overwritten exactly once before reading the next symbol. Then
we can envision a circular tape that passes through both the TM and
the mysterious external world, each reading what the other has written
and overwriting it with a response...

Incidentally, since this is comp.lang.functional, we should maybe
rather be discussing how on earth pure mathematical functions could
model interactive computation. The above solution pretty much
corresponds to Miranda's stream-based I/O, one form of which can still
be found in modern Haskell:

interact :: (String -> String) -> IO ()

Of course nowadays we have more convenient ways of defining
interacting processes as lambda-computable (and hence
Turing-computable) functions...
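
For instance, this complete program (elementary, but real) answers
each line as soon as it is typed, because the input String is consumed
lazily:

    import Data.Char (toUpper)

    -- Echo every input line back in upper case, interactively.
    main :: IO ()
    main = interact (unlines . map (map toUpper) . lines)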


Lauri
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101920235822503-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 17:05:32 -0400, Lauri Alanko <··@iki.fi> said:

> How about just using a multi-tape TM with one tape as a "workspace"
> and one tape as "input"? Output could be defined in a number of ways.

This is precisely the specification of a PTM if output is a third tape.
From: Lauri Alanko
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffca08$88k$1@oravannahka.helsinki.fi>
In article <····································@pasdespamsilvousplaitmaccom>,
Raffael Cavallaro  <················@pas-d'espam-s'il-vous-plait-mac.com> wrote:
> On 2007-10-19 17:05:32 -0400, Lauri Alanko <··@iki.fi> said:
> > How about just using a multi-tape TM with one tape as a "workspace"
> > and one tape as "input"? Output could be defined in a number of ways.
> 
> This is precisely the specification of a PTM if output is a third tape.

All right, then, a PTM is a multi-tape TM. As everyone knows, a
multi-tape TM can be simulated with a single-tape TM. Hence PTMs have
no more computational power than any other TMs. Case solved! Nice to
see we could reach agreement!


Lauri
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007102012052897157-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-20 03:16:24 -0400, Lauri Alanko <··@iki.fi> said:

> As everyone knows, a
> multi-tape TM can be simulated with a single-tape TM.

If one of those tapes represents *dynamic inputs* then the TM can only 
simulate the PTM *after* the PTM is done. Having to treat interaction 
as a batch computation is why the TM is less powerful - it can only 
do what the PTM does post hoc. This is the whole discussion about time 
in the thread with Matthias.
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192931845.954601.128940@v29g2000prd.googlegroups.com>
On Oct 20, 11:05 am, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-20 03:16:24 -0400, Lauri Alanko <····@iki.fi> said:
>
> > As everyone knows, a
> > multi-tape TM can be simulated with a single-tape TM.
>
> If one of those tapes represents *dynamic inputs* then the TM can only
> simulate the PTM *after* the PTM is done. Having to treat interaction
> as a batch computation is why the TM is less powerful - it can only
> do what the PTM does post hoc. This is the whole discussion about time
> in the thread with Matthias.

Again, you don't have to "treat interaction as a batch computation"
because the TM abstraction does not inherently correspond to batch
computation.
From: Joachim Durchholz
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffarob$3jm$1@online.de>
Raffael Cavallaro schrieb:
> On 2007-10-19 05:47:16 -0400, Joachim Durchholz <··@durchholz.org> said:
> 
>> That's not lame, that's the point exactly.
>> The algorithm behind Wikipedia is the transformation of wiki markup to 
>> HTML. Plus outputting edit, login and logout dialogs on request.
> 
> I think you're missing the point here.
> 
> The Strong Church-Turing thesis (SCT) posits that, begining at time T0 
> and given enough input (including the state of Wikipedia at T0), a TM 
> could compute the state of Wikipedia at any arbitrary future time Tn 
> *without further inputs*.

Assuming the Wikipedia's future content is defined by an algorithm.
Which it isn't (at least not a known one).
So the question whether Wikipedia's content can be predicted using a 
computer program is unrelated to whether the CT hypothesis (*not* 
"thesis"...) holds or not: the situation does not fulfil its 
preconditions.

 > Computing the state of Wikipedia at some
> arbitrary Tn given the state at Tn itself as input is trivial (we call 
> it 'copying') - no one doubts that a TM could do this, and it is not 
> what we're discussing here. That's what Pascal characterized as "lame."

Sure, that algorithm is rather trivial (at least from a CS standpoint).

But it's the only algorithm involved in Wikipedia. There is no algorithm 
that will predict Wikipedia's content at T(n+1). (No known algorithm, at 
the least. We don't know whether there's an algorithm that describes 
physical reality or human thought.)

> According to Goldin and Wegner (and followers) there exist a class of 
> computations which cannot be done by a TM no matter what *inital* input 
> at time T0 it is given.

I have understood that.
My point is that Wikipedia's future is not determined by an algorithm, 
so the CT hypothesis does not apply.

 > That is, additional input(s) at Tn, Tn+m etc.
> are needed to perform the computation, as is *memory* of previous 
> inputs, outputs and internal computations. They term this model, with 
> dynamic streams and persistence, a Persistent Turing Machine (PTM).

If stated that way, PTMs are "more powerful than TMs" only by inclusion 
of external effects (physical reality, human thought) that aren't 
well-understood.

Doesn't sound like that's going to produce much in terms of valuable 
insights, IMNSHO.

> Claiming, as the SCT does, that no further inputs are required amounts 
> to a claim that the future inputs by human contributors are *completely* 
> predictable given the right algorithm and the right initial state at T0. 
> This amounts to a claim of complete determinism - that the right 
> algorithm and initial state could predict anything in the future 
> universe in complete and accurate detail, including, among other things, 
> every action of every human Wikipedia contributor.

Yes, but the future evolution of the content of Wikipedia isn't a 
subject of the CT thesis. There's no algorithm.

So the assumption falls down, and with it the rest.


... OK, I looked up the original claim, as quoted in 
http://lambda-the-ultimate.org/node/1038 . It essentially says that "the 
CT thesis is generally misrepresented to assume that everything that a 
computer does is algorithmic, and that's wrong when external interaction 
comes into play".
Well, big deal. That's not shaking the foundations of computer science, 
it's picking nits with the way an aspect of it is taught. (It was 
*always* understood, at least for me, that all the decidability issues 
hold only for the computations *between* the inputs.)

> This determinism is why Goldin and Wegner argue that SCT is incorrect. 

I think SCT is either a strawman or just an aberration in the way CS is 
taught in the US.

> They think that the universe is not practially deterministic, that 
> empirical observation of (i.e., interaction with) the larger environment 
> is required for certain kinds of computations. They point to nonlinear 
> systems as evidence of the failure of Determinism - prediction is only 
> possible in such systems if *precise* initial conditions are known, and 
> this is a practical impossibility.

Yes.

 > Others have argued that QM
> demonstrates that the Universe is *inherently* non-deterministic.

Agreed. (Assuming quantum interaction is really nondeterministic. I 
never learnt enough about QM to really form my own judgement on the 
issue.)

> Thus they characterize SCT supporters as Rationalists and themselves as 
> Empiricists, and see this whole argument as just another chapter in the 
> long standing debate between these two philosophical schools.

Well, I think they're overrating their contribution.
CS isn't about reality, it's about what can and cannot be done using 
mechanical devices. For that, CT is enough.
SCT (at least as represented by them) goes over the top.

Regards,
Jo
From: Joshua Taylor
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192801714.732202.179820@v29g2000prd.googlegroups.com>
On Oct 19, 1:55 am, Pascal Costanza <····@p-cos.net> wrote:
> For example, a plain Turing machine could never
> compute the contents of Wikipedia by using an algorithm (unless it has
> the contents of Wikipedia as an input in the first place, but that's
> lame). Hence, the C-T thesis is inappropriate as a basis for computer
> science.

The point being made is good; no one will ever construct such a
machine. But, while we're discussing the C-T thesis, let's be very
sure that the kind of computation it references is out in the open.

If we were to cut off Wikipedia at any particular point in time (or to
assume that Wikipedia will eventually end), then the states of
Wikipedia form a finite sequence of constant values (alternatively,
the whole sequence is a single constant value). In either case, that
means that the Wikipedia that we see today (and properly terminated
sometime in the future) is Turing computable.

I strongly doubt we'll ever have the machine that could produce
Wikipedia. (And I doubt even more so that such a Turing machine could be
all that much smaller than Wikipedia itself; yes, text compression can do
a lot, but we probably won't have the 'compression' that can read two
articles and enhance a third with insight from the first two.)

This disconnect serves to illustrate one issue that some people have
with C-T, namely that we've got no idea of meaningful ways to compute
many provably (effectively) computable functions.

So, while we haven't found any good examples of functions which are
effectively computable that a Turing Machine (or equivalent model)
can't compute, many of the functions which are computable by a
Turing Machine aren't what we might like to think of as computable.

The answer to P == NP, for instance, is computed by exactly one of two
(very small) Turing Machines, one of which always prints "YES", the
other which always prints "NO". We'd probably prefer one which
generated a sensible proof though.

I'm not sure if we mean it in the same way, but this was a very good
summary:

> The problem is not that the C-T thesis may be wrong,
> the problem is that the C-T thesis steers us in the
> wrong direction.

//J
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101915330084492-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 09:48:34 -0400, Joshua Taylor <···········@gmail.com> said:

> If we were to cut off Wikipedia at any particular point in time (or to
> assume that Wikipedia will eventually end), then the states of
> Wikipedia form a finite sequence of constant values (alternatively,
> the whole sequence is a single constant value). In either case, that
> means that the Wikipedia that we see today (and properly terminated
> sometime in the future) is Turing computable.

Only if we 'compute' it *after the fact*. The whole point of the 
interactionists is that there are lots of useful computations that can 
be done in real time, before the fact, if we allow additional dynamic 
input streams - such as, for example, constructing and using Wikipedia 
*now*, not having to wait until some hypothetical end point.
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192813185.206081.96740@v23g2000prn.googlegroups.com>
On Oct 19, 12:55 am, Pascal Costanza <····@p-cos.net> wrote:
> ··············@gmail.com wrote:
> > On Oct 18, 10:53 pm, Raffael Cavallaro <················@pas-d'espam-
> > s'il-vous-plait-mac.com> wrote:
> >> On 2007-10-18 21:05:51 -0400, ··············@gmail.com said:
>
> >>> Anyway, your quote by Paul Feyerabend has absolutely nothing to do
> >>> with this discussion.
> >> Goldin and Wegner obviously think it's very relevant since they devote
> >> the first quarter of their chapter of _Interactive Computation_ to the
> >> philosophy of science and epistemology by way of explaining the "strong
> >> resistance," (to use their words) their views have encountered.
>
> > In other words, they trot out the same Red Herring...
>
> >> To be completely clear, Goldin and Wegner consider the Strong
> >> Church-Turing thesis to be an "older, falsified theory," superseded by
> >> Persistent Turing Machines.
>
> > As I said: The C-T thesis is not a theory at all.   It is a conjecture
> > which could be wrong, but which so far has not been falsified.  If
> > Goldin and Wegner claim it has been falsified, then they will have to
> > prove that.  But so far they have not done so.
>
> The C-T thesis makes a strong assumption about what computers can and
> cannot do for us. But computers can obviously do much more for us that
> execute algorithms.

Of course.  A TM is not meant to be a model of a real computer, it is
a model of computation.   Let me give you an example.  (Since someone
in Lance Fortnow's blog brought up hairdryers, let me use that same
picture...):

Suppose there is a science of how electricity is transformed into
heat.  Suppose the leading abstract model is that of a current flowing
through a resistor.   Perhaps there is a conjecture that says that no
other means of electricity->heat transformation is more effective.
Now along comes someone who invented a hairblower.  He says:  Look, I
found a much more powerful model of electricity->heat transformation.
It is clearly superior, because in addition to producing heat from
electricity it also dries hair.  It even has attachments for styling
hair into curls and whatnot.  Since the previously leading model does
not handle these, the new model is more expressive and ought to
replace the old, falsified one.

Of course, when you open the hairdryer and look for how it actually
transforms electricity into heat, you'll find an electric current
flowing through a resistor...

> For example, a plain Turing machine could never
> compute the contents of Wikipedia by using an algorithm (unless it has
> the contents of Wikipedia as an input in the first place, but that's
> lame).

Why is that lame?  No real computer computes the contents of Wikipedia
either, but merely relies on having it as part of its input.  (Besides, as
someone else has already explained, if you take any snapshot of
Wikipedia, you don't have to have it in your input but can make it
part of your program.)

Since the interactive aspect is just the same as with a persistent TM,
the same explanation as the one I just gave in another reply applies.

> Hence, the C-T thesis is inappropriate as a basis for computer
> science.

This is seriously confused.  The C-T thesis is not a basis for
computer science at all.  The TM is a model of computation which we
often draw upon in CS.  There are other models, including more
powerful ones either in terms of what they can compute or in terms of
how efficiently they compute.  So far nobody has come up with a model
that can compute more than a TM and also be physically realizable.  It
is important to keep in mind that the TM is an abstraction.

> See also http://www.dreamsongs.com/Feyerabend/Feyerabend.html -
> especially Sussman's statement at the bottom of that page.

I agree with many points here but fail to see what it has to do with
C-T.

Matthias
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <200710191652127987-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 12:59:45 -0400, ··············@gmail.com said:

> So far nobody has come up with a model
> that can compute more than a TM and also be physically realizable.

I think that the PTM model does both - if you don't ignore time.

>  It
> is important to keep in mind that the TM is an abstraction.

Yes, but it's not a particularly useful abstraction for interactive 
computation since it abstracts away time (i.e., it can get its "inputs" 
after the fact), and hence, it abstracts away interaction with 
non-deterministic inputs, the whole domain we're trying to characterize.
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192827836.196822.235200@q5g2000prf.googlegroups.com>
On Oct 19, 3:52 pm, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-19 12:59:45 -0400, ··············@gmail.com said:
>
> > So far nobody has come up with a model
> > that can compute more than a TM and also be physically realizable.
>
> I think that the PTM model does both - if you don't ignore time.
>
> >  It
> > is important to keep in mind that the TM is an abstraction.
>
> Yes, but it's not a particularly useful abstraction for interactive
> computation since it abstracts away time (i.e., it can get its "inputs"
> after the fact), and hence, it abstracts away interaction with
> non-deterministic inputs, the whole domain we're trying to characterize.

Not so.  As I have tried to explain, one can code the notion of time
into the problem itself.  It does not have to be built into the
machine model.
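
One way to picture that (a sketch of mine, not a formal construction):
tag each input with the step at which it becomes available, so the
machine model stays timeless while the problem itself carries the
clock.

    type Step    = Integer
    type Timed a = (Step, a)   -- an input tagged with its arrival step

    -- A perfectly ordinary, timeless function over timestamped
    -- events: the latest value available at or before step t.
    latestAt :: Step -> [Timed a] -> Maybe a
    latestAt t evs = case [v | (s, v) <- evs, s <= t] of
                       [] -> Nothing
                       vs -> Just (last vs)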
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101920502331729-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 17:03:56 -0400, ··············@gmail.com said:

> As I have tried to explain, one can code the notion of time
> into the problem itself.

You cannot code the notion of time into the problem itself without 
abstracting away time in any meaningful sense. Partitioning inputs into 
'sections' is not equivalent to dynamic inputs because all of those 
'sections' representing future inputs have to be already on the tape 
when the TM starts, whether the transition function has the head read 
them yet or not. This amounts to prescience (or time travel) on the 
part of the TM's programmer.

Simply put, once a TM starts, nothing but the TM itself can write to 
its tape. This makes it incapable of receiving dynamic inputs.

Note that, as someone else in the thread has suggested, an 
oracle-machine could do what a PTM does. Such an oracle machine would 
function by taking as input all possible permutations of dynamic 
inputs, work tape states, and dynamic outputs of the corresponding PTM. 
At each step the oracle simply directs the machine to that portion of 
the tape that corresponds to the actual tape states of the 
corresponding PTM.

Since such an oracle-machine is just a trivial itemization of every 
possible state of the corresponding PTM, the PTM is a much more natural 
and expressive (not to mention more parsimonious) formalization of the 
corresponding interactive computation.

Again, I don't see a PTM as some sort of revelation, just a better 
formalism for interactive computation. Again, I see the resistance to 
it as a reaction to the fact that its advocates are making explicit 
the importance of treating dynamic inputs as central to interactive 
computation, rather than trying to characterize interactive computation 
as computation of functions from static input.
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192844834.307099.242730@z24g2000prh.googlegroups.com>
On Oct 19, 7:50 pm, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-19 17:03:56 -0400, ··············@gmail.com said:
>
> > As I have tried to explain, one can code the notion of time
> > into the problem itself.
>
> You cannot code the notion of time into the problem itself without
> abstracting away time in any meaningful sense. Partitioning inputs into
> 'sections' is not equivalent to dynamic inputs because all of those
> 'sections' representing future inputs have to be already on the tape
> when the TM starts, whether the transition function has the head read
> them yet or not. This amounts to prescience (or time travel) on the
> part of the TM's programmer.

Sigh.  Have you tried to understand what I was saying?  There is no
"from the beginning" since there is no built-in notion of time.  The
tape is an abstraction.

> Simply put, once a TM starts, nothing but the TM itself can write to
> its tape. This makes it incapable of receiving dynamic inputs.

Again, there is no built-in notion of time.  The tape is not a
physical paper tape.

> Again, I don't see a PTM as some sort of revelation, just a better
> formalism for interactive computation.

That may very well be.  But the claim was added computational power
(compared with an ordinary TM).  That is not so, and that is all I am
arguing.  C-T is still safe for now.

> Again, I see the resistance

There is no resistance (from my side) to PTMs, just to the claim that
they have more computational power than ordinary TMs.  Yes, they may
be more natural for expressing certain things.  But no, these things
are not impossible to encode with ordinary TMs.
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101922382544303-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 21:47:14 -0400, ··············@gmail.com said:

> Have you tried to understand what I was saying?  There is no
> "from the beginning" since there is no built-in notion of time.  The
> tape is an abstraction.

Have you tried to understand what I'm saying? Since there is no 
built-in notion of time, it is a poor abstraction for computation over 
time with dynamic inputs.
From: Marc Gluch
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <471a440f$0$32530$4c368faf@roadrunner.com>
Raffael Cavallaro wrote:
> On 2007-10-19 21:47:14 -0400, ··············@gmail.com said:
> 
>> Have you tried to understand what I was saying?  There is no
>> "from the beginning" since there is no built-in notion of time.  The
>> tape is an abstraction.
> 
> Have you tried to understand what I'm saying? Since there is no built-in 
> notion of time, it is a poor abstraction for computation over time with 
> dynamic inputs.
> 
PTMs share with TMs the inability to express temporal or non-discrete 
characteristics of computation -- according to Dina Goldin herself
(http://citeseer.ist.psu.edu/cache/papers/cs/17885/http:zSzzSzwww.cs.umb.eduzSz~dqgzSzpaperszSzptm2.pdf/goldin99behavior.pdf)
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007102016172793099-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-20 14:08:15 -0400, Marc Gluch <··········@mindtap.com> said:

> Raffael Cavallaro wrote:
>> Have you tried to understand what I'm saying? Since there is no 
>> built-in notion of time, it is a poor abstraction for computation over 
>> time with dynamic inputs.
>> 
> PTMs share with TMs the inability to express temporal or non-discrete 
> characteristics of computation -- according to Dina Goldin herself
> (http://citeseer.ist.psu.edu/cache/papers/cs/17885/http:zSzzSzwww.cs.umb.eduzSz~dqgzSzpaperszSzptm2.pdf/goldin99behavior.pdf)

From the paper you link above:

"Persistent Turing Machines (PTMs) are multitape machines
with a persistent work tape preserved between successive
interactions; they are a minimal extension of Turing machines (TMs)
that express interactive behavior [GW].
They model services over time provided by
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

embedded, object-oriented,  or reactive systems
that cannot be expressed by computable functions [MP, We2, WG]."


I said "computation over time with dynamic inputs." Not seeing any 
claim of non-discrete in either.
From: Marc Gluch
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <471ac907$0$26406$4c368faf@roadrunner.com>
Raffael Cavallaro wrote:
> On 2007-10-20 14:08:15 -0400, Marc Gluch <··········@mindtap.com> said:
> 
>> Raffael Cavallaro wrote:
>>> Have you tried to understand what I'm saying? Since there is no 
>>> built-in notion of time, it is a poor abstraction for computation 
>>> over time with dynamic inputs.
>>>
>> PTMs share with TMs the inability to express temporal or non-discrete 
>> characteristics of computation -- according to Dina Goldin herself
>> (http://citeseer.ist.psu.edu/cache/papers/cs/17885/http:zSzzSzwww.cs.umb.eduzSz~dqgzSzpaperszSzptm2.pdf/goldin99behavior.pdf) 
>>
> 
> From the paper you link above:
> 
> "Persistent Turing Machines (PTMs) are multitape machines
> with a persistent work tape preserved between successive
> interactions; they are a minimal extension of Turing machines (TMs)
> that express interactive behavior [GW].
> They model services over time provided by
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> embedded, object-oriented,  or reactive systems
> that cannot be expressed by computable functions [MP, We2, WG]."
> 
> 
> I said "computation over time with dynamic inputs." Not seeing any claim 
> of non-discrete in either.
> 
1. My point was not regarding discreteness, but rather PTMs' (in)ability 
to express *temporal* characteristics of computation, refuting your 
claim that somehow PTMs are better (than TMs) models of interactive 
computation across time
(that claim being implicit in your exchange with Matthias:
On 2007-10-19 16:29:22 -0400, ··············@gmail.com said:
"Again, you are missing the whole point of the TM abstraction.  A TM
does not have a notion of time."
to which you responded:
"Which is precisely why it is a poor model of interactive computation 
across time.")

2. Yes, Goldin wrote both
a) they (PTMs) model services over time
and
b) PTMs share with TMs the inability to express temporal ... 
characteristics of computation.

Yet, somehow I don't think your point was that these statements are 
inconsistent.

3. Don Geddis observed:
'Would there be _any_ problem if you simply made the minimal change in 
the definition of a Turing Machine, no longer required the input tape to 
be fully specified before the TM began executing (although still 
required that any given input square be specified before the TM's head 
attempts to read it)?
This seems a meaningless restriction.  Simply dispense with it, and no 
proof or theorem about regular Turing Machines changes.  Meanwhile, you 
can now "suddenly" model interactive computation.'

If you look at http://plato.stanford.edu/entries/turing-machine/#Equiv,
you'll see that there are many models of computation equivalent to the 
"plain Jane" single tape deterministic TMs -- models that do not require 
the temporal ordering that you claim to be necessary for TMs.

Structurally PTMs *are* TMs. It's not clear to me if Goldin stipulates 
that the "states" (content of the "work" tape) of PTMs are infinite -- 
in the same paper as referenced above she writes: "PTM states can be 
represented by strings, so the set of PTM states is enumerably infinite. 
In contrast, TMs have a finite set of states".
I don't see the implication: state=string => infinite number of states,
as I presume each string is of a finite length (as a result of a finite 
previous computation).

If however she indeed empowers PTMs with infinite states, she renders 
PTMs useless as models of finite (terminating) computations, since in 
each step (interaction) a PTM behaves as a regular TM.
Starting with a finite (perhaps empty) initial state, it would take it 
(PTM) forever to transition to the next (infinite) state.
From: Lauri Alanko
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffcff2$da0$1@oravannahka.helsinki.fi>
In article <····································@pasdespamsilvousplaitmaccom>,
Raffael Cavallaro  <················@pas-d'espam-s'il-vous-plait-mac.com> wrote:
> You cannot code the notion of time into the problem itself without 
> abstracting away time in any meaningful sense. Partitioning inputs into 
> 'sections' is not equivalent to dynamic inputs because all of those 
> 'sections' representing future inputs have to be already on the tape 
> when the TM starts, whether the transition function has the head read 
> them yet or not. This amounts to prescience (or time travel) on the 
> part of the TM's programmer.
> 
> Simply put, once a TM starts, nothing but the TM itself can write to 
> its tape. This makes it incapable of receiving dynamic inputs.

This is _you_ imposing a notion of time on a concept that doesn't
inherently have it.

Again, I suggest considering the more functional example of the same
issue. In Haskell, any interactive command-line application that takes
textual input and produces textual output can be specified with a
function of type String -> String. It takes a list of characters as
input, and produces a list of characters as output.

Now, your objection amounts to saying "That isn't enough for
interactive applications! You have to have the entire input available
before you can produce any output!" And that simply isn't true in
Haskell. It's a strictness condition that _you're_ imposing.
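
Concretely (example mine): the function below is a pure String ->
String value, yet under interact it answers each line before the rest
of the input even exists.  Laziness, not clairvoyance:

    -- A pure line-by-line dialogue.
    dialogue :: String -> String
    dialogue = unlines . map ("you said: " ++) . lines

    -- The output is defined before all of the input is:
    --   take 15 (dialogue ("hello\n" ++ undefined)) == "you said: hello"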


Lauri
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007102013314570933-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-20 04:49:38 -0400, Lauri Alanko <··@iki.fi> said:

> This is _you_ imposing a notion of time on a concept that doesn't
> inherently have it.

It inherently has a notion of temporal *sequence*, before and after.

Input must be available *before* the machine begins execution or there 
will be nothing to operate on.

The machine depends for its operation on the consistent temporal 
ordering of operations - it matters at each step whether it writes 
first then moves, or moves first then writes. If it had no notion of 
temporal sequence whatsoever it would not make any difference if, at 
each step, it either did or did not reverse the order.

The machine must visit the initial state *before* any other.

In short, you can't invoke temporal ordering in the operations of the 
machine, and then claim that the machine is immune to notions of 
temporal ordering. The idea that inputs originating *after* the machine 
starts can nonetheless be on the tape at the same time that the machine 
starts violates the temporal ordering which *is* necessary for a TM.

Any model that treats all the inputs to a PTM as a single set of inputs 
to a TM therefore succeeds only in modeling the PTM computation *after 
the fact*.

If you can only model an interactive computation after the fact, it 
more or less defeats the whole purpose of having a formalism for 
*interactive* computing.
From: Don Geddis
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <87y7dxjr5i.fsf@geddis.org>
Raffael Cavallaro <················@pas-d'espam-s'il-vous-plait-mac.com> wrote on Sat, 20 Oct 2007:
> The idea that inputs originating *after* the machine starts can
> nonetheless be on the tape at the same time that the machine starts
> violates the temporal ordering which *is* necessary for a TM.

Why is it necessary?  Simply because it was originally defined that way?

Would there be _any_ problem if you simply made the minimal change in the
definition of a Turing Machine, no longer required the input tape to be fully
specified before the TM began executing (although still required that any given
input square be specified before the TM's head attempts to read it)?

This seems a meaningless restriction.  Simply dispense with it, and no proof
or theorem about regular Turing Machines changes.  Meanwhile, you can now
"suddenly" model interactive computation.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
The only difference between me and a madman is that I am not mad.
	-- Salvador Dali
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192932553.649101.181230@v29g2000prd.googlegroups.com>
On Oct 20, 6:35 pm, Don Geddis <····@geddis.org> wrote:
> Raffael Cavallaro <················@pas-d'espam-s'il-vous-plait-mac.com> wrote on Sat, 20 Oct 2007:
>
> > The idea that inputs originating *after* the machine starts can
> > nonetheless be on the tape at the same time that the machine starts
> > violates the temporal ordering which *is* necessary for a TM.
>
> Why is it necessary?  Simply because it was originally defined that way?

No need to redefine anything.  It wasn't "originally defined that
way".  One must be careful not to confuse the TM step count with
physical time.  The TM is not a physical machine, it is a mathematical
construct.  When modelling a physical computation using a TM it is up
to us how to map physics (including time) to TM concepts.

Matthias
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007102022340080278-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-20 19:35:37 -0400, Don Geddis <···@geddis.org> said:

> Would there be _any_ problem if you simply made the minimal change in the
> definition of a Turing Machine, no longer required the input tape to be fully
> specified before the TM began executing (although still required that any given
> input square be specified before the TM's head attempts to read it)?

This modification is what Goldin and Wegner call dynamic inputs. Their 
PTM is, in their own words, just such a "minimal extension of Turing 
machines."
This is all anyone has been claiming.

Now maybe you understand why they were so taken aback by the reaction.
From: Don Geddis
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <87bqatjf5h.fsf@geddis.org>
Raffael Cavallaro <················@pas-d'espam-s'il-vous-plait-mac.com> wrote on Sat, 20 Oct 2007:
> On 2007-10-20 19:35:37 -0400, Don Geddis <···@geddis.org> said:
>> Would there be _any_ problem if you simply made the minimal change in the
>> definition of a Turing Machine, no longer required the input tape to be
>> fully specified before the TM began executing (although still required
>> that any given input square be specified before the TM's head attempts to
>> read it)?
>
> This modification is what Goldin and Wegner call dynamic inputs. Their PTM
> is, in their own words, just such a "minimal extension of Turing machines."
> This is all anyone has been claiming.  Now maybe you understand why they
> were so taken aback by the reaction.

There are plenty of different formulations of Turing-equivalent computational
models.  Unrestricted grammars, recursive functions, etc.  Defining a modified
Turing Machine is not controversial at all.

The controversy comes from the claim that these new PTMs are of greater
computational power than ordinary Turing Machines.  That's just wrong.

The Turing Machine is defined in the same way, and performs the same
computation.  It doesn't (and can't!) know whether its input was prepared
"before" it began running, or is dynamically provided "in the middle" of
execution.

Allowing inputs to be dynamic is a mere matter of convenience in the
modelling.  It has no effect on the computational power of the model.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
If the Times Square ball doesn't fall at midnight, does that mean the guy who
drops the ball dropped the ball?  And if he drops it correctly, has he then
NOT dropped the ball?  -- Foxtrot, by Bill Amend
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192932349.536809.262430@z24g2000prh.googlegroups.com>
On Oct 20, 12:31 pm, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-20 04:49:38 -0400, Lauri Alanko <····@iki.fi> said:
>
> > This is _you_ imposing a notion of time on a concept that doesn't
> > inherently have it.
>
> It inherently has a notion of temporal *sequence*, before and after.

Sigh.  You are still confusing the step count of a TM with physical
time.  As I have explained, if you insist on taking a physical view of
the TM tape, then this still does not mean all input has to exist at
step 0.  What's on a particular square before the read head first
observes it is irrelevant.  Input does not have to physically exist
until the head tries to read it.

> Input must be available *before* the machine begins execution or there
> will be nothing to operate on.

See above.

> The machine depends for its operation on the consistent temporal
> ordering of operations - it matters at each step whether it writes
> first then moves or moves first then writes. If it had no notion of
> temporal sequence whatsoever it would not make any difference if at
> each step, it either did or did not reverse the order.

Again, the machine only observes one square at a time.  There are no
requirements on temporal sequencing of inputs the machine has not yet
observed.

> The machine must visit the initial state *before* any other.

But it only "sees" one square at that time.  The other squares are
still unconstrained.

> In short, you can't invoke temporal ordering in the operations of the
> machine, and then claim that the machine is immune to notions of
> temporal ordering.

Indeed, there is *some* notion of temporal ordering -- namely that
after the machine has observed a square's contents once, those
contents must remain consistent wrt. future visits to the same
square.  But there are no other temporal ordering constraints.
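
A minimal sketch of this lazy-tape reading, in Python (the class and
its names are my own illustration, not from any of the papers cited):
a square's contents are fixed the first time the head reads them, and
cached so that every later visit sees the same symbol.

    class LazyTape:
        """Tape whose squares materialize on first read, then stay fixed."""

        def __init__(self, input_source):
            # input_source: any callable from position -> symbol; it
            # stands in for the environment, whether that is a string
            # prepared at step 0 or a sensor read a moment ago.
            self.input_source = input_source
            self.cells = {}        # position -> symbol, filled lazily

        def read(self, pos):
            if pos not in self.cells:                     # first observation:
                self.cells[pos] = self.input_source(pos)  # fix it now
            return self.cells[pos]                        # consistent thereafter

        def write(self, pos, symbol):
            self.cells[pos] = symbol

The machine proper cannot tell which kind of input_source it was
given; that is the sense in which the input need not exist until read.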

> The idea that one can have inputs from some point in
> the sequence which originate *after* the machine starts at the same
> time that the machine starts violates the temporal ordering which *is*
> necessary for a TM.

No.
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007102022445151816-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-20 22:05:49 -0400, ··············@gmail.com said:

> Indeed, there is *some* notion of temporal ordering -- namely that
> after the machine has observed a square's contents once, those
> contents must remain consistent wrt. future visits to the same
> square.  But there are no other temporal ordering constraints.

The fact that there is *any* notion of temporal ordering means that in 
order to be logically consistent the TM must not violate temporal 
ordering full stop. You cannot invoke temporal ordering for the 
operation of the machine and then violate it elswhere (here, in the 
inputs).

I think we've been around this loop often enough that there's nothing 
left to say here. Have the last word if you want but I won't be 
responding any more.

For those who want to read Goldin and Wegner's arguments first hand 
here's a partial recap of the sources cited elsewhere in the thread. 
I'm sure others can point you to critiques as well:

Dina Goldin et al. (eds.)
Interactive Computation: The New Paradigm
Springer, 2006

<http://citeseer.ist.psu.edu/cache/papers/cs/17885/http:zSzzSzwww.cs.umb.eduzSz~dqgzSzpaperszSzptm2.pdf/goldin99behavior.pdf>

<http://www.cse.uconn.edu/~dqg/papers/turing04.pdf> 
From: Don Geddis
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <874pgljd9w.fsf@geddis.org>
Raffael Cavallaro <················@pas-d'espam-s'il-vous-plait-mac.com> wrote on Sat, 20 Oct 2007:
> For those who want to read Goldin and Wegner's arguments first hand here's a
> partial recap of the sources cited elsewhere in the thread.
> http://citeseer.ist.psu.edu/cache/papers/cs/17885/http:zSzzSzwww.cs.umb.eduzSz~dqgzSzpaperszSzptm2.pdf/goldin99behavior.pdf

Most of the introduction is unobjectionable.  There are lots of different
formal models of computation, each more or less useful for different
purposes.  There's really no problem with inventing yet another, to explore a
new purpose.

But then, on page 4, we get this:

        Example (FSAs and PDAs): for every Finite State Automaton (FSA),
        there exists an equivalent Push-Down Automaton (PDA), one which
        accepts the same language. However, the reverse is not true; as a
        result, PDAs are more expressive than FSAs [HU].

OK.

        In this paper, we show that PTMs have a richer set of behaviors than
        TMs, and are therefore more expressive than TMs in precisely the same
        sense that PDAs are more expressive than FSAs.

I call bull----.

(I'm actually unable to even follow their logic by which they claim to have
shown this result.  They show a lot of things -- but I can't figure out how
they justify this one-sentence summary.  It just seems to be false.)

> <http://www.cse.uconn.edu/~dqg/papers/turing04.pdf>

They are not clever enough about applying the existing Turing Machine
formulation to the problems they want to solve.  Just because they can't
see how to use TMs to solve their problems doesn't mean that TMs are not
capable of solving their problems.

Their "driving home from work" is a perfect example.  You can run a standard
Turing Machine, and simply provide the inputs from the dynamic data.  The TM
itself need be no different.

Or, you could redefine the problem that the TM is solving, to be a dynamic
control problem: given this history, and this current input, what should the
control outputs be in the next millisecond?  And then to drive the car, just
run that same, single, ordinary Turing Machine over and over again with the
latest inputs.
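
A minimal sketch of that second architecture, in Python (the step
function and its toy policy are my own illustration, not anything from
their paper): one terminating computation from a finite history plus
the latest input to a control output, re-run by a trivial outer loop.

    def controller_step(history, current_input):
        # The part an ordinary TM models: a terminating computation
        # from (finite history, latest input) to a control output.
        # Toy policy: steer by the sign of the latest sensor reading.
        return 'left' if current_input < 0 else 'right'

    def drive(sensor_readings):
        # The harness, not the computation being modelled: it merely
        # re-runs the same machine with the latest inputs.
        history = []
        for reading in sensor_readings:
            output = controller_step(history, reading)
            history.append((reading, output))
            yield output

    print(list(drive([-2, 3, -1])))    # ['left', 'right', 'left']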

The fact that they could only think of a single architecture for the Turing
Machine, that of precomputing the entire driving route before ever beginning
to drive, is a statement about their lack of imagination, not a statement
about the limitations of Turing Machines.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Patriotism is your conviction that this country is superior to all others
because you were born in it.  -- George Bernard Shaw
From: Marc Gluch
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <471ad1d2$0$25722$4c368faf@roadrunner.com>
Raffael Cavallaro wrote:
> On 2007-10-20 22:05:49 -0400, ··············@gmail.com said:
> 
>> Indeed, there is *some* notion of temporal ordering -- namely that
>> after the machine has observed a square's contents once, those
>> contents must remain consistent wrt. future visits to the same
>> square.  But there are no other temporal ordering constraints.
> 
> The fact that there is *any* notion of temporal ordering means that in 
> order to be logically consistent the TM must not violate temporal 
> ordering full stop. You cannot invoke temporal ordering for the 
> operation of the machine and then violate it elswhere (here, in the 
> inputs).
> 
> I think we've been around this loop often enough that there's nothing 
> left to say here. Have the last word if you want but I won't be 
> responding any more.
> 
> For those who want to read Goldin and Wegner's arguments first hand 
> here's a partial recap of the sources cited elsewhere in the thread. I'm 
> sure others can point you to critiques as well:
> 
> Dina Goldin et al. (eds.)
> Interactive Computation: The New Paradigm
> Springer, 2006
> 
> <http://citeseer.ist.psu.edu/cache/papers/cs/17885/http:zSzzSzwww.cs.umb.eduzSz~dqgzSzpaperszSzptm2.pdf/goldin99behavior.pdf> 
> 
> 
> <http://www.cse.uconn.edu/~dqg/papers/turing04.pdf>
> 
contrast:
1. ... their (TMs') resources (time and memory) are finite;
(from your link www.cse.uconn.edu/~dqg/papers/turing04.pdf)
and
2. There are two important things to notice about the definition. The 
first is that the machine's tape is infinite in length, corresponding to 
an assumption that the memory of the machine is infinite. The second is 
similar in nature, but not explicit in the definition of the machine, 
namely that a function will be Turing-computable if there exists a set 
of instructions that will result in the machine computing the function 
regardless of the amount of time it takes. One can think of this as 
assuming the availability of infinite time to complete the computation.
(from http://plato.stanford.edu/entries/turing-machine/#Equiv)

Goldin's description of TMs is simply a strawman.
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192947220.344250.90370@e34g2000pro.googlegroups.com>
On Oct 20, 9:44 pm, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-20 22:05:49 -0400, ··············@gmail.com said:
>
> > Indeed, there is *some* notion of temporal ordering -- namely that
> > after the machine has observed a square's contents once, those
> > contents must remain consistent wrt. future visits to the same
> > square.  But there are no other temporal ordering constraints.
>
> The fact that there is *any* notion of temporal ordering means that in
> order to be logically consistent the TM must not violate temporal
> ordering full stop.

False.  You continue to confuse the step counter of the TM with
physical time.  I guess I should not have said "notion of temporal
ordering" but left it at the more abstract "some constraints".

> I think we've been around this loop often enough that there's nothing
> left to say here.

Yes.
From: Tim X
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <877iljg91e.fsf@lion.rapttech.com.au>
··············@gmail.com writes:

> On Oct 18, 5:09 pm, Pascal Costanza <····@p-cos.net> wrote:
>> ··············@gmail.com wrote:
>> > Wegner and Eberbach make bold claims in their paper. But extraordinary
>> > claims require extraordinary evidence. The work of Turing has served
>> > as a foundation for computability theory for 70 years. To displace it
>> > would have required them to bring forward very strong evidence. We
>> > have discussed the criteria by which such claims could be assessed,
>> > and we have discussed the systems that they have exhibited as
>> > potentially surpassing the TM model of computation. We consider that
>> > in all three cases, interaction machines, the (pi)-calculus and the $-
>> > calculus, we have shown their claims to be invalid."
>>
>> [quoting Feyerabend]
>> "...That is, if one had to choose between two theories of equal explanatory
>> power, to choose the one that is compatible with an older, falsified
>> theory is to make an aesthetic, rather than a rational choice...."
>
> Red herring.
>
> We are not dealing with two competing theories here.  We don't have an
> "older, falsified theory".
>
> The TM is a mathematical construct.  It has certain mathematical
> properties that are indisputable (and hopefully undisputed). The C-T
> hypothesis says that no physically realizable notion of computation
> can be more powerful than that defined by the TM.  Notice that this is
> a HYPOTHESIS, not a "theory" or some such.  Nobody claims to have
> proved it.  We merely have observed that every attempt of defining
> another model of physically realizable computations has produced
> something of equal or lesser power.  (There are plenty of more
> powerful but non-realizable models of computation.  Just take your
> favorite undecidable predicate and throw it in as a primitive.)
>
> Now, Wegner, Goldin, and Eberbach claim they have actually discovered
> a more powerful concept of physically realizable computation.  This is
> a pretty bold claim, but for all we truly know it could be true.
> However, to show that it is true, they have to prove it.  Doing so
> should be fairly straightforward:  Just demonstrate a decision
> procedure (under the new model) for a predicate that is undecidable
> (or merely semi-decidable) for TMs.   Then also argue convincingly
> that your model actually is--at least in principle--physically
> realizable.  The problem is -- and that's what Cockshott and
> Michaelson explain in quite some detail -- that they haven't done so.
> In fact, not only have they not done so, there is also very good
> reason to believe that they won't be able to do so for the calculi
> they have brought forward so far.
>

Well said. I agree totally. You have expressed far more succinctly what I
tried to state. We don't actually gain anything from a new formalism that
appears to have better expressive power if the model it defines cannot be
realised. This is the strength of the Church-Turing thesis and the Turing
Machine. Possibly the issue isn't that the existing theory has failed, but
rather that it is unwieldy or lacking in elegance when applied to the more
complex systems that are now possible. While a new theory based on a more
powerful expressive model that makes it easier to represent these more
complex interactions would be welcomed, it has to prove it has at least the
same expressive power and that the model it defines can be realised. 

Tim


-- 
tcross (at) rapttech dot com dot au
From: Joachim Durchholz
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ff9tpp$t18$1@online.de>
Tim X schrieb:
> Possibly the issue isn't that the existing theory has failed, but
> rather that it is unwieldy or lacking in elegance when applied to the more
> complex systems that are now possible.

If you want to express that the Turing Machine is unwieldy: its virtue 
is not in elegance but in primitivity. It's the most restricted model of 
computation that people have come up with, and its quality isn't that it 
is useful for programming, but that all other formalisms devised for 
expressing algorithms have been shown to be equivalent to it.

If you want ergonomics, try the lambda calculus. Preferably in a version 
that supports information hiding.

 > While a new theory based on a more
> powerful expressive model that makes it easier to represent these more
> complex interactions would be welcomed, it has to prove it has at least the
> same expressive power and that the model it defines can be realised. 

"Expressive power" is a part of ergonomy.
That's not the kind of question that Turing Machines were devised to answer.

Oh, and since it's ergonomics, you can't expect a theory that's as nice as 
that of computability.

Regards,
Jo
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <200710181941568930-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-18 18:09:42 -0400, Pascal Costanza <··@p-cos.net> said:

> The familiarity of such a theory might also make it more appealing to 
> scientists, since they will not have to disregard as many cherished 
> prejudices. Hence, that theory can be said to have "an unfair 
> advantage".'

"A new scientific truth does not triumph by convincing its opponents 
and making them see the light, but rather because its opponents 
eventually die, and a new generation grows up that is familiar with 
it." -Max Planck
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192756343.211207.286090@k35g2000prh.googlegroups.com>
On Oct 18, 6:41 pm, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-18 18:09:42 -0400, Pascal Costanza <····@p-cos.net> said:
>
> > The familiarity of such a theory might also make it more appealing to
> > scientists, since they will not have to disregard as many cherished
> > prejudices. Hence, that theory can be said to have "an unfair
> > advantage".'
>
> "A new scientific truth does not triumph by convincing its opponents
> and making them see the light, but rather because its opponents
> eventually die, and a new generation grows up that is familiar with
> it." -Max Planck

"A new unscientific fallacy does not go away by convincing its
proponents and making them see the light, but rather because its
proponents eventually die, and a new generation grows up that has a
good laugh at them." -- Kcnalp Xam

(I wish this were actually true, but unfortunately it isn't.  Most
fallacies tend to come back again and again, sometimes almost
unchanged, more often in disguise.)
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101823582327544-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-18 21:12:23 -0400, ··············@gmail.com said:

> On Oct 18, 6:41 pm, Raffael Cavallaro <················@pas-d'espam-
> s'il-vous-plait-mac.com> wrote:
>> 
>> "A new scientific truth does not triumph by convincing its opponents
>> and making them see the light, but rather because its opponents
>> eventually die, and a new generation grows up that is familiar with
>> it." -Max Planck
> 
> "A new unscientific fallacy does not go away by convincing its
> proponents and making them see the light, but rather because its
> proponents eventually die, and a new generation grows up that has a
> good laugh at them." -- Kcnalp Xam

This is the most elaborate version of "I know you are, but what am I?" 
I've ever come across.

You can keep your fictional quote, I'll go with Planck, thanks.
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192769002.402380.292290@z24g2000prh.googlegroups.com>
On Oct 18, 10:58 pm, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-18 21:12:23 -0400, ··············@gmail.com said:
>
> > On Oct 18, 6:41 pm, Raffael Cavallaro <················@pas-d'espam-
> > s'il-vous-plait-mac.com> wrote:
>
> >> "A new scientific truth does not triumph by convincing its opponents
> >> and making them see the light, but rather because its opponents
> >> eventually die, and a new generation grows up that is familiar with
> >> it." -Max Planck
>
> > "A new unscientific fallacy does not go away by convincing its
> > proponents and making them see the light, but rather because its
> > proponents eventually die, and a new generation grows up that has a
> > good laugh at them." -- Kcnalp Xam
>
> This is the most elaborate version of "I know you are, but what am I?"
> I've ever come across.
>
> You can keep your fictional quote, I'll go with Planck, thanks.

I'm sorry to hear you didn't appreciate the joke.  So I'll say it non-
jokingly:

You are using Planck's quote incorrectly.  Just because opponents of a
new claim don't go away does not make that claim a truth.  In fact,
when opponents don't go away it is more likely that the reason is not
that these opponents don't want to see the light but that they
actually have seen the light already.

Kind regards,
Matthias
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101903002550878-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 00:43:22 -0400, ··············@gmail.com said:

> Just because opponents of a
> new claim don't go away does not make that claim a truth.

Clearly. The point of Planck's quote is that if such a claim is true, 
we're not likely to know for a generation.
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192811966.215395.101890@v29g2000prd.googlegroups.com>
On Oct 19, 2:00 am, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-19 00:43:22 -0400, ··············@gmail.com said:
>
> > Just because opponents of a
> > new claim don't go away does not make that claim a truth.
>
> Clearly. The point of Planck's quote is that if such a claim is true,
> we're not likely to know for a generation.

Actually, no.  Planck was clearly exaggerating somewhat, and -- most
importantly -- what he says applies to science but not to math.  If
Persistent TMs (or whatever else) falsify the C-T thesis, then this
fact must have a solid mathematical proof.  Once such a proof has been
demonstrated, there won't be any more opposition.  The problem here is
that such a proof has not been demonstrated so far.  Worse, there can
be no such proof (if we believe that our reasoning is internally
consistent).  See below.

A proof would have to demonstrate a decision problem D, show that no
ordinary TM can solve D, and then give a PTM that does solve D (along
with a proof for this, of course).  The problem is that if such a PTM
exists (i.e., one that solves D after a finite number of steps -- which
means after having iterated a finite number of times and having seen a
finite amount of total input over all iterations), then there
trivially exists an ordinary TM that -- given the same total amount of
(finite) input on its single tape -- gives the same answer.  Now,
formalizing this -- which I am not going to do here (I'll leave it as
an exercise to the reader) -- constitutes a proof of PTMs not being
more powerful than ordinary TMs.
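
Not a formalization, but the shape of the argument fits in a few lines
of Python (all names here are illustrative): an "interactive" run that
sees its inputs one at a time and a "batch" run handed the same finite
total input up front must agree, whatever the step function is.

    from functools import reduce

    def interactive_run(step, state, inputs):
        # Inputs arrive one at a time, as a PTM would see them.
        for x in inputs:
            state = step(state, x)
        return state

    def batch_run(step, state, inputs):
        # The same finite total input, given up front on one "tape".
        return reduce(step, inputs, state)

    step = lambda s, x: s + x
    assert interactive_run(step, 0, [1, 2, 3]) == batch_run(step, 0, [1, 2, 3])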
From: Paul Wallich
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffaog2$flc$1@reader1.panix.com>
··············@gmail.com wrote:
> On Oct 19, 2:00 am, Raffael Cavallaro <················@pas-d'espam-
> s'il-vous-plait-mac.com> wrote:
>> On 2007-10-19 00:43:22 -0400, ··············@gmail.com said:
>>
>>> Just because opponents of a
>>> new claim don't go away does not make that claim a truth.
>> Clearly. The point of Planck's quote is that if such a claim is true,
>> we're not likely to know for a generation.
> 
> Actually, no.  Planck was clearly exaggerating somewhat, and -- most
> importantly -- what he says applies to science but not to math.  If
> Persistent TMs (or whatever else) falsify the C-T thesis, then this
> fact must have a solid mathematical proof.  Once such a proof has been
> demonstrated, there won't be any more opposition.  The problem here is
> that such a proof has not been demonstrated so far.  Worse, there can
> be no such proof (if we believe that our reasoning is internally
> consistent).  See below.

Unfortunately, proofs themselves are social as well as mathematical 
things. You need reputable people to take the time to go through a proof 
and agree with it (or find holes that can be plugged). You need the 
terminology and symbols specific to a particular proof to be unpacked 
for the community at large, and you need the techniques used in the 
proof to be accepted.

One of the clearest examples of this kind of process was the time it 
took for the proof of the four-color theorem to be fully accepted. That 
was essentially as long as it took for the last tenured professors who 
didn't believe that computerized enumeration of cases not doable in 
reasonable time by humans constituted proof to die or retire.

paul
From: Thant Tessman
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffaq6j$om6$1@news.xmission.com>
Paul Wallich wrote:

[...]

> Unfortunately, proofs themselves are social as well as mathematical 
> things. [...]

> One of the clearest examples of this kind of process was the time it 
> took for the proof of the four-color theorem to be fully accepted. That 
> was essentially as long as it took for the last tenured professors who 
> didn't believe that computerized enumeration of cases not doable in 
> reasonable time by humans constituted proof to die or retire.

Do you honestly believe the fact that there were tenured professors 
around who didn't believe computerized enumeration constituted proof 
should have had a positive influence on other people's ability to 
disprove the four-color theorem by counter example?

-thant
From: Paul Wallich
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffbk6u$qhv$1@reader1.panix.com>
Thant Tessman wrote:
> Paul Wallich wrote:
> 
> [...]
> 
>> Unfortunately, proofs themselves are social as well as mathematical 
>> things. [...]
> 
>> One of the clearest examples of this kind of process was the time it 
>> took for the proof of the four-color theorem to be fully accepted. 
>> That was essentially as long as it took for the last tenured 
>> professors who didn't believe that computerized enumeration of cases 
>> not doable in reasonable time by humans constituted proof to die or 
>> retire.
> 
> Do you honestly believe the fact that there were tenured professors 
> around who didn't believe computerized enumeration constituted proof 
> should have had a positive influence on other people's ability to 
> disprove the four-color theorem by counter example?

Huh? At that point the mathematicians were long beyond believing that a 
five-color flat map would be found. But there's a big difference between 
something being true and something having a proof. Four-color was on the 
edge, because given enough time and expendable graduate students you 
could in fact check the enumeration during a single career. And the code 
was relatively simple. More complex enumerations since then have gone 
into the range that you couldn't check even if you had an entire planet's 
worth of spare graduate students and millennia of time.

Those ones make it a little clearer that you're trusting the people who 
do the work to have done it right and reported it ditto.

paul
From: Thant Tessman
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffdhlb$af1$1@news.xmission.com>
Paul Wallich wrote:

> [...] But there's a big difference between 
> something being true and something having a proof. [...]

Of course, but that's not what you said. You said: "Unfortunately, 
proofs themselves are social as well as mathematical things."

How much stock you decide to put into someone else's claim that they got 
a proof right is a social thing of course, but the proof itself (or lack 
thereof) isn't.

"Reality is that which, when you stop believing in it, doesn't go away." 
-- Philip K. Dick

-thant
From: Neelakantan Krishnaswami
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <slrnfhkjv5.8n3.neelk@gs3106.sp.cs.cmu.edu>
In article <<············@news.xmission.com>>,
Thant Tessman <···@standarddeviance.com> wrote:
> Paul Wallich wrote:
> 
>> [...] But there's a big difference between 
>> something being true and something having a proof. [...]
> 
> Of course, but that's not what you said. You said: "Unfortunately, 
> proofs themselves are social as well as mathematical things."
> 
> How much stock you decide to put into someone else's claim that they got 
> a proof right is a social thing of course, but the proof itself (or lack 
> thereof) isn't.

No, he's absolutely correct. Every rule of inference in logic is a
social fact -- it's something that trained people agree is a
truth-conserving argument. This goes all the way down to the simplest
of rules, like the three rules for conjunction:


   A true    B true    (A and B) true    (A and B) true
   ----------------    --------------    --------------
    (A and B) true         A true           B true


And in fact, mathematicians are not in full agreement about what
constitutes a legitimate proof -- for example, constructivists reject
the principle of the excluded middle, and relevant logicians reject
the principle of weakening (ie, that A entails (B implies A)).

So whether an argument is a proof or not is fundamentally a /social/
process, because whether the rules of inference I am using are
justified is judged relative to the community of mathematicians I am
working in.

However, at the same time, it's also the case that properly machine
checked proofs are more credible than solely human-checked proofs. The
reason is that it's easy to write a small proof-checking program,
which many people can easily verify is correct. Then, you can feed it
a proof of arbitrary complexity -- whose correctness very few people
can potentially understand -- and get a result from the checker.

Your confidence in this proof is proportional to your confidence in
the checker, and not in the proof itself. (This, incidentally,
invalidates the central claim of De Millo, Lipton and Perlis's famous
paper "Social Processes and Proofs of Theorems and Programs".)

The original proof of the four-color theorem is an improperly machine
checked proof, because there was no easily-checkable proof that the
program that did the enumeration of cases was correct. Gonthier's
proof, OTOH, does satisfy this criterion, because he gave a machine-
checkable correctness proof for his program that enumerates cases.


-- 
Neel R. Krishnaswami
·····@cs.cmu.edu
From: Tayssir John Gabbour
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192908776.591335.173090@i13g2000prf.googlegroups.com>
On Oct 20, 8:33 pm, Thant Tessman <····@standarddeviance.com> wrote:
> How much stock you decide to put into someone else's claim that they got
> a proof right is a social thing of course, but the proof itself (or lack
> thereof) isn't.
>
> "Reality is that which, when you stop believing in it, doesn't go away."
> -- Philip K. Dick

My understanding is that a proof is an argument which convinces
mathematically-expert peers. What sort of argument is accepted (and by
whom) is a function of social relations.

The best discussion I know of the social nature of mathematical
arguments is in Davis and Hersh's books, like _The Mathematical
Experience_. That book mentions the ebb and flow of what constitutes
an acceptable proof, given social relations among mathematicians. (And
even external political constraints, like the danger people faced when
challenging Euclidean geometry.)

http://books.google.com/books?id=lMdz84dWNnAC&dq=the+mathematical+experience+davis+hersh&pg=PP1&ots=Bp9XVhP6vZ&sig=U5npC31M8trLx7FuWARLkfBTf18

(Though keep in mind I'm not a mathematician.)


Tayssir
From: Thant Tessman
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffdn3e$85l$1@news.xmission.com>
Tayssir John Gabbour wrote:

[...]

> My understanding is that a proof is an argument which convinces
> mathematically-expert peers. What sort of argument is accepted (and by
> whom) is a function of social relations.

If this is all that a proof really was, then proofs would never actually 
be *about* anything.


> The best discussion I know of the social nature of mathematical
> arguments is in Davis and Hersh's books, like _The Mathematical
> Experience_. [...]

This is actually sitting on my bookshelf, but I haven't read it yet.

-thant
From: Tayssir John Gabbour
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192919010.613972.303180@v29g2000prd.googlegroups.com>
On Oct 20, 10:06 pm, Thant Tessman <····@standarddeviance.com> wrote:
> Tayssir John Gabbour wrote:
> > My understanding is that a proof is an argument which convinces
> > mathematically-expert peers. What sort of argument is accepted (and by
> > whom) is a function of social relations.
>
> If this is all that a proof really was, then proofs would never actually
> be *about* anything.

The arguments we're advancing are far less rigorous than the ones
mathematicians employ -- yet I think your arguments are intelligible.
(Maybe the jury's still out on mine...)

I'm reminded of Hersh's response to Martin Gardner's (the mathematical
recreations guy) negative review of his book:


	Gardner, like all Platonists (realists) insists that math
	is real.  But it's not physical or mental, so what is it?
	Sometimes he thinks it's a real part of the physical world,
	which would eliminate most of set theory and higher-dimensional
	geometry among many other kinds of non-physical math.  Then
	he thinks it's real because it's a formal logical system,
	which would say math was first invented in the late 19th
	century--not true!  He is oblivious to the inconsistency of
	these two stories.  And like all Platonists he ignores
	the radical difficulty of explaining how a realm of
	pure abstraction interacts with the flesh and blood
	realm of real mathematical practise.

    -- http://cs.nyu.edu/pipermail/fom/1997-November/000128.html


Tayssir
From: Thant Tessman
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <fficpe$njb$1@news.xmission.com>
Tayssir John Gabbour wrote:
> On Oct 20, 10:06 pm, Thant Tessman <····@standarddeviance.com> wrote:
>> Tayssir John Gabbour wrote:
>>> My understanding is that a proof is an argument which convinces
>>> mathematically-expert peers. What sort of argument is accepted (and by
>>> whom) is a function of social relations.
>> If this is all that a proof really was, then proofs would never actually
>> be *about* anything.
> 
> The arguments we're advancing are far less rigorous than the ones
> mathematicians employ -- yet I think your arguments are intelligible.
> (Maybe the jury's still out on mine...)
> 
> I'm reminded of Hersh's response to Martin Gardner's (the mathematical
> recreations guy) negative review of his book:
> 
> 
> 	Gardner, like all Platonists (realists) insists that math
> 	is real.  But it's not physical or mental, so what is it?
> 	Sometimes he thinks it's a real part of the physical world,
> 	which would eliminate most of set theory and higher-dimensional
> 	geometry among many other kinds of non-physical math.  [...]

What about the math behind modern public-key encryption? My 
understanding is that this kind of math isn't used by physicists to 
describe natural phenomenon. (But I wouldn't know for sure.) Does this 
somehow make the use of this math for public-key encryption somehow less 
real? ...somehow less a part of the way the universe actually works?

For some unknown and unknowable reason, the universe manifests 
structure. Our brains have evolved to (among many other things) perceive 
that structure through our intellect. We may 'create' math, as Hersh 
seems to be insisting, but the math we create is what it is only because 
it is a reflection of the structure that genuinely exists outside of our 
minds.

Also, I'm reluctant to refer to myself as a Platonist. Platonism as I 
understand it holds that our knowledge of the universe is inherently 
imperfect. I would prefer to say that our knowledge of the universe is 
inherently incomplete. In my book this is a big difference.

-thant


> 
> 
> Tayssir
> 
From: Rob Warnock
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <IoydncW0y_byy4DanZ2dnUVZ_qOknZ2d@speakeasy.net>
Thant Tessman <···@standarddeviance.com> wrote:
+---------------
| For some unknown and unknowable reason, the universe manifests 
| structure. Our brains have evolved to (among many other things) perceive 
| that structure through our intellect. We may 'create' math, as Hersh 
| seems to be insisting, but the math we create is what it is only because 
| it is a reflection of the structure that genuinely exists outside of our 
| minds.
+---------------

Others might turn this on its head, and prefer to say:

    For some unknown and unknowable reason, our brains have evolved to
    (among many other things) perceive that there "is" an "external"
    universe which appears to manifest structure. Our brains have also
    created the notion of math, which (unsurprisingly) has structure
    which sometimes (but not always) resembles the structure we perceive
    to be in the previously-hypothecated "external universe".

Following Nagarjuna, the Madhyamakan view would be that the truth
of the matter is not the first of these, nor the second, nor both
the first and the second, nor neither the first nor the second.

But this is rapidly drifting off-topic. [Unless, of course, it
happens to lead to a practical demonstration of strong AI, and
then somebody will try to patent it, but that's another story...]


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Thant Tessman
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffl17b$2h8$1@news.xmission.com>
Rob Warnock wrote:
> Thant Tessman <···@standarddeviance.com> wrote:
> +---------------
> | For some unknown and unknowable reason, the universe manifests 
> | structure. Our brains have evolved to (among many other things) perceive 
> | that structure through our intellect. We may 'create' math, as Hersh 
> | seems to be insisting, but the math we create is what it is only because 
> | it is a reflection of the structure that genuinely exists outside of our 
> | minds.
> +---------------
> 
> Others might turn this on its head, and prefer to say:
> 
>     For some unknown and unknowable reason, our brains have evolved to
>     (among many other things) perceive that there "is" an "external"
>     universe which appears to manifest structure. Our brains have also
>     created the notion of math, which (unsurprisingly) has structure
>     which sometimes (but not always) resembles the structure we perceive
>     to be in the previously-hypothecated "external universe".
> 
> Following Nagarjuna, the Madhyamakan view would be that the truth
> of the matter is not the first of these, nor the second, nor both
> the first and the second, nor neither the first nor the second. [...]


Right. Do not try to bend the spoon. That's impossible. Instead...only 
try to realize the truth...

This stuff is fun, and I've spent more time than I care to admit 
exploring the nature of existence from the inside out as it were, but 
one day I just managed to figure out that the pursuit of Buddhist 
selflessness is just as egotistical an act as anything else, if not more 
so. At what point does eastern nihilism become nothing more than 
philosophical masturbation?

Worse (and the reason I still consider all this on topic), what all 
this is really about is arguing about what the definition of 'is' is. To 
pick an extreme example, Marx did not bother to refute his critics. He 
denied he even needed to, because, you see, logic itself was a product 
of class consciousness. All this matters because hundreds of millions of 
people were killed in the name of communism.

I guess my point is that it's good to pay attention to the point when a 
discussion turns to questioning the existence of a reality. More often 
than not, it's nothing more than a desperate rhetorical device.

-thant
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007102313532416807-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-23 10:40:54 -0400, Thant Tessman <···@standarddeviance.com> said:

> but one day I just managed to figure out that the pursuit of Buddhist 
> selflessness is just as egotistical an act as anything else, if not 
> more so.

"I gained nothing at all from supreme enlightenment, and for that very 
reason it is called supreme enlightenment"
- Siddhartha Gautama (the Buddha)
From: Rob Warnock
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <BO-dnRt7y_HTkoLanZ2dnUVZ_oHinZ2d@speakeasy.net>
Thant Tessman <···@standarddeviance.com> wrote:
+---------------
| ...one day I just managed to figure out that the pursuit of Buddhist 
| selflessness is just as egotistical an act as anything else, if not
| more so.
+---------------

Uh... Selflessness is not something that one pursues [unless one is
very confused]. Rather, the term "selflessness" is simply a description
of how things *are*, no matter how we wish it were otherwise. [At least,
that's what I've been taught...]

+---------------
| At what point does eastern nihilism become nothing more than 
| philosophical masturbation?
+---------------

Well, don't forget that Madhyamaka rejects *both* nihilism & eternalism.


-Rob

p.s. I'm trying to think of a way to bring this back on-topic,
but I'm drawing a blank, so I'll just quit while I'm behind...

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Paul Rubin
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <7xir4y76x0.fsf@ruckus.brouhaha.com>
Thant Tessman <···@standarddeviance.com> writes:
> Also, I'm reluctant to refer to myself as a Platonist. Platonism as I
> understand it holds that our knowledge of the universe is inherently
> imperfect. I would prefer to say that our knowledge of the universe is
> inherently incomplete. In my book this is a big difference.

I think mathematical platonism refers to the belief that mathematical
objects actually exist independently of human notions of them.  That
there really is such a thing as triangles; they're not just
abstractions.  That the continuum hypothesis is objectively either
true or false even though we can't deduce its truth or falsehood from
the usual (ZFC) axioms of set theory.  Godel himself believed that we
would eventually discover new axioms that would let us settle the CH
question.

http://en.wikipedia.org/wiki/Philosophy_of_mathematics#Platonism
From: Paul Wallich
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffg6cn$2ru$1@reader1.panix.com>
Thant Tessman wrote:
> Paul Wallich wrote:
> 
>> [...] But there's a big difference between something being true and 
>> something having a proof. [...]
> 
> Of course, but that's not what you said. You said: "Unfortunately, 
> proofs themselves are social as well as mathematical things."

In a completely different context. Both, in practice, are true.

> How much stock you decide to put into someone else's claim that they got 
> a proof right is a social thing of course, but the proof itself (or lack 
> thereof) isn't.

Assuming that you're omniscient, infallible and have infinite time. One 
of the big issues in proofs, for example, is how much of the argument 
may be elided. Things accepted as proofs by generations of 
mathematicians may need expansion by factors of 5-10 if you require (as 
for example for mechanical verification) that all the implicits be made 
implicit. Other long-accepted proofs turn out to have errors and gaps in 
them, but are still accepted by the community as long as they can be 
corrected or filled in.

paul
From: Thant Tessman
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffgfqp$plv$1@news.xmission.com>
Paul Wallich wrote:
> Thant Tessman wrote:

[...]

>> How much stock you decide to put into someone else's claim that they 
>> got a proof right is a social thing of course, but the proof itself 
>> (or lack thereof) isn't.
> 
> Assuming that you're omniscient, infallible and have infinite time. One 
> of the big issues in proofs, for example, is how much of the argument 
> may be elided. Things accepted as proofs by generations of 
> mathematicians may need expansion by factors of 5-10 if you require (as 
> for example for mechanical verification) that all the implicits be made 
> explicit. Other long-accepted proofs turn out to have errors and gaps in 
> them, but are still accepted by the community as long as they can be 
> corrected or filled in.

It is a fact that indeed we can never be sure every proof is correct. 
But this does not rule out the possibility that some proofs might be 
correct. In fact, if we don't allow for this possibility, why bother 
trying to construct them in the first place?

My point is that a proof's 'existence' or lack thereof is independent of 
the inherently social search for them.

-thant
From: Aatu Koskensilta
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <XvhXi.247271$Yn.106382@reader1.news.saunalahti.fi>
On 2007-10-21, in comp.lang.functional, Thant Tessman wrote:
> My point is that a proof's 'existence' or lack thereof is independent of 
> the inherently social search for them.

The existence of a /formal/ proof in some theory is a mathematical question
and as such is as independent of any social questions as any mathematical
question of similar combinatorial form. Similarly it makes perfect sense to
speak of the possibility that some proof, in the ordinary sense of the word,
referring to the actual texts appearing in journals and text books intended
for human consumption, is incorrect without anyone ever noticing it, in the
sense of there being some logically invalid step escaping everyone's
attention. (The cynic will note that most substantial proofs ever published
are probably incorrect in this sense, even though the theorems proved are
correct). These questions we naturally regard as objective, even though we
admit that in practice we might lack the necessary time, incentive,
expertise, and so on, to subject every proof to sufficient scrutiny. 

But what about the existence of a proof of some result -- again in the
ordinary sense of an argument that establishes the conclusion from basic
principles by means of logically correct reasoning? Here the question
becomes murky. Does a proof relying on the existence of an inaccessible
prove its conclusion? What about the existence of a measurable, or an
infinity of Woodin cardinals? Replacement? Impredicative comprehension for
sets of naturals? Induction for all arithmetically definable properties?
For each of these we can find reasonable and sane mathematicians who accept
the principle in question as utterly compelling, and those who find it
extremely dubious, the former eagerly accepting a proof using the principle
the latter sternly rejecting the proof as establishing the truth of its
conclusion. 

If we regard proof as a piece of argumentation that establishes, to the
satisfaction of mathematicians, the truth of its conclusion from justified
principles, the question of existence of a proof of this or that depends on
what we take "justified" to mean. The most natural answer being: a principle
is justified if mathematicians find it evident on the basis of their basic
conceptual grasp of the concepts involved, or, a bit more pragmatically, it
is accepted as such by those who have bothered to think about these issues.
And this is an inherently social criterion. Unless some explanation is given
of what justification amounts to that does not refer to what mathematicians actually
find or would find justified, talk about existence or non-existence of
proofs, again in the ordinary sense, in some absolute sense remains obscure
and somewhat pointless.

-- 
Aatu Koskensilta (················@xortec.fi)

"Wovon man nicht sprechen kann, daruber muss man schweigen"
 - Ludwig Wittgenstein, Tractatus Logico-Philosophicus
From: Neelakantan Krishnaswami
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <slrnfhht3u.6lq.neelk@gs3106.sp.cs.cmu.edu>
In article <<············@reader1.panix.com>>,
Paul Wallich <··@panix.com> wrote:
> 
> One of the clearest examples of this kind of process was the time it
> took for the proof of the four-color theorem to be fully
> accepted. That was essentially as long as it took for the last
> tenured professors who didn't believe that computerized enumeration
> of cases not doable in reasonable time by humans constituted proof
> to die or retire.

Actually, I didn't believe the original proof of the four color
theorem, and in fact I still don't believe it. That's because there
were no formal correctness proofs of the programs used to enumerate
the cases. (The programs were actually extremely low level hand-coded
assembly, so verification is essentially hopeless.)

I do, however, believe Gonthier's mechanized proof of it in Coq, since
he did prove the correctness of his enumeration program.



-- 
Neel R. Krishnaswami
·····@cs.cmu.edu
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101914345564440-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 12:39:26 -0400, ··············@gmail.com said:

> A proof would have to demonstrate a decision problem D, show that no
> ordinary TM can solve D, and then give a PTM that does solve D (along
> with a proof for this, of course).  The problem is that if such a PTM
> exists (i.e., one that solves D after a finite number of steps -- which
> means after having iterated a finite number of times and having seen a
> finite amount of total input over all iterations), then there
> trivially exists an ordinary TM that -- given the same total amount of
> (finite) input on its single tape -- gives the same answer.  Now,
> formalizing this -- which I am not going to do here (I'll leave it as
> an exercise to the reader) -- constitutes a proof of PTMs not being
> more powerful than ordinary TMs.

Decision problems miss the whole point since they exclude time. Decision 
problems deal only with a single question on infinite inputs, without 
regard for the time at which those inputs are available.

The class of computations that a PTM can do and a TM can't are those 
that must be computed now, interactively, not after the fact. This is 
what Pascal means when he says that the C-T thesis "steers us in the 
wrong direction."

If the TM and PTM both start at the same time, T0, and the PTM 
receives inputs after it starts that are not predictable from any 
possible input available at time T0, then there is no way when the TM 
starts at time T0, short of time travel, to supply the TM with this 
"same total amount of (finite) input on its single tape".

If you say that you will simply start the TM at some future time Tn 
when such inputs *are* available, I will say that you are not doing the 
same computation, because the whole point of this computation is to do 
it before Tn.

For example, Driving Home From Work Starting At 3:00pm Today And 
Arriving By 3:15 is a computation that must be solved at 3:00 - there's 
no point to a "solution" that's specified at 3:15. To provide a 
solution after the fact using, for example, a video of the route from 
3:00 to 3:15 today is not doing the same computation. A PTM can Drive 
Home From Work because it can get additional inputs as it drives. A TM 
cannot because it cannot have those additional inputs until after the 
PTM has already arrived.

If you believe that any possible input after T0 is predictable at time 
T0, then you are a Determinist. Goldin and Wegner are not. This is what 
they believe is the whole crux of the argument.
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192825762.180286.136280@q3g2000prf.googlegroups.com>
On Oct 19, 1:34 pm, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-19 12:39:26 -0400, ··············@gmail.com said:
>
> > A proof would have to demonstrate a decision problem D, show that no
> > ordinary TM can solve D, and then give a PTM that does solve D (along
> > with a proof for this, of course).  The problem is that if such a PTM
> > exists (i.e., one that solves D after a finite number of steps -- which
> > means after having iterated a finite number of times and having seen a
> > finite amount of total input over all iterations), then there
> > trivially exists an ordinary TM that -- given the same total amount of
> > (finite) input on its single tape -- gives the same answer.  Now,
> > formalizing this -- which I am not going to do here (I'll leave it as
> > an exercise to the reader) -- constitutes a proof of PTMs not being
> > more powerful than ordinary TMs.
>
> Decision problems miss the whole point since they exclude time. Decision
> problems deal only with a single question on infinite inputs, without
> regard for the time at which those inputs are available.
>
> The class of computations that a PTM can do and a TM can't are those
> that must be computed now, interactively, not after the fact. This is
> what Pascal means when he says that the C-T thesis "steers us in the
> wrong direction."
>
> If the TM and PTM both start at the same time, T0, and the PTM
> receives inputs after it starts that are not predictable from any
> possible input available at time T0, then there is no way when the TM
> starts at time T0, short of time travel, to supply the TM with this
> "same total amount of (finite) input on its single tape".
>
> If you say that you will simply start the TM at some future time Tn
> when such inputs *are* available, I will say that you are not doing the
> same computation, because the whole point of this computation is to do
> it before Tn.
>
> For example, Driving Home From Work Starting At 3:00pm Today And
> Arriving By 3:15 is a computation that must be solved at 3:00 - there's
> no point to a "solution" that's specified at 3:15. To provide a
> solution after the fact using, for example, a video of the route from
> 3:00 to 3:15 today is not doing the same computation. A PTM can Drive
> Home From Work because it can get additional inputs as it drives. A TM
> cannot because it cannot have those additional inputs until after the
> PTM has already arrived.
>
> If you believe that any possible input after T0 is predictable at time
> T0, then you are a Determinist. Goldin and Wegner are not. This is what
> they believe is the whole crux of the argument.

Again, you are missing the whole point of the TM abstraction.  A TM
does not have a notion of time.  There is no such thing as T0 or
3:00pm etc.  The tape itself is also an abstraction.  The machine does
not "need" its input to physically exist until it advances its head to
that position.  Everything else follows from there.  See my other
reply to you for an explanation of how a TM can faithfully simulate a
PTM and therefore model an interactive computation.

Matthias
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007101920251882327-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-19 16:29:22 -0400, ··············@gmail.com said:

> Again, you are missing the whole point of the TM abstraction.  A TM
> does not have a notion of time.

Which is precisely why it is a poor model of interactive computation 
across time.

The PTM formalism is more expressive of ongoing interactive 
computation. That's all.

I don't consider this earth shattering, but I do find the resistance to 
it very telling. I think it irks the functional crowd because it makes 
explicit the importance of dynamic inputs at run-time rather than 
treating all computation as functions from static input to output.
From: Joachim Durchholz
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffcd9t$kof$1@online.de>
Raffael Cavallaro schrieb:
> On 2007-10-19 16:29:22 -0400, ··············@gmail.com said:
> 
>> Again, you are missing the whole point of the TM abstraction.  A TM
>> does not have a notion of time.
> 
> Which is precisely why it is a poor model of interactive computation 
> across time.

That's not what a TM is constructed for.

> The PTM formalism is more expressive of ongoing interactive computation. 
> That's all.

"More expressive" in what sense?
If it's about being closer to what programmers need: there are even 
better formalisms.
If it's about being more powerful in a computational sense: the PTM is 
strictly equivalent to the TM; it isn't more powerful. 
The additional inputs are just recoded as part of the input on the tape. 
(Yes, the results of doing so will be gruesome and unreadable to humans. 
However, the TM isn't about having something to express algorithms 
easily, it's about easily reasoning about its behaviour, and having a 
mapping - however complicated and gruesome - from other formalisms to it.)
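
A toy picture of the recoding (the delimiter scheme here is my own
choice, purely for illustration): the whole interaction transcript is
flattened into one tape string and recovered mechanically on the other
side.

    def encode_transcript(events):
        # Flatten (step, input) events into one tape string.  A real
        # TM encoding would be binary and far more gruesome.
        return '#'.join('%d:%s' % (step, inp) for step, inp in events)

    def decode_transcript(tape):
        # Recover the events from the flat tape string.
        return [(int(s), i) for s, i in
                (event.split(':') for event in tape.split('#'))]

    events = [(0, 'red-light'), (7, 'green-light'), (19, 'pedestrian')]
    assert decode_transcript(encode_transcript(events)) == events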

> I don't consider this earth shattering, but I do find the resistance to 
> it very telling. I think it irks the functional crowd because it makes 
> explicit the importance of dynamic inputs at run-time rather than 
> treating all computation as functions from static input to output.

You're mistaken.
Dynamic inputs have long been accounted for. Actually there are several 
approaches available.
The resistance is more of the "so what?" kind: the claim that PTMs are 
more powerful than TMs is simply bogus.
It's also of the "you're working off the wrong assumptions" kind: 
AFAICT, the PTM is "more powerful" only if you use a different model of 
computational power than the one that the CT thesis is based on. (A 
rather cheap shot in my eyes.)

Regards,
Jo
From: Raffael Cavallaro
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <2007102012005517709-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-10-20 04:12:54 -0400, Joachim Durchholz <··@durchholz.org> said:

> The resistance is more of the "so what?" kind: the claim that PTMs are 
> more powerful than TMs is simply bogus.

PTMs are a simple extension to TMs that allow for dynamic inputs during 
computation. Very straightforward. The resistance is more of the "I 
don't want to acknowledge that an interactive formalism (PTM) is more 
appropriate for interactive computing than a batch formalism (TM)," 
kind.
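
Roughly, in Haskell terms (my own informal rendering, not Goldin and
Wegner's formal definition): a PTM is a stream transducer whose worktape
persists across interactions.

  type Memory = [String]  -- persistent worktape contents

  -- One macrostep: consume one dynamic input, update the persistent
  -- memory, emit one output.
  macrostep :: Memory -> String -> (Memory, String)
  macrostep mem inp =
    (inp : mem, "input " ++ show (length mem + 1) ++ ": " ++ inp)

  -- The machine maps an unbounded input stream to an output stream,
  -- threading the memory through; inputs arrive during computation.
  ptm :: Memory -> [String] -> [String]
  ptm _ []       = []
  ptm mem (i:is) = let (mem', o) = macrostep mem i in o : ptm mem' is

  main :: IO ()
  main = mapM_ putStrLn (ptm [] ["ping", "pong"])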
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192931794.992469.145460@q5g2000prf.googlegroups.com>
On Oct 20, 11:00 am, Raffael Cavallaro <················@pas-d'espam-
s'il-vous-plait-mac.com> wrote:
> On 2007-10-20 04:12:54 -0400, Joachim Durchholz <····@durchholz.org> said:
>
> > The resistance is more of the "so what?" kind: the claim that PTMs are
> > more powerful than TMs is simply bogus.
>
> PTMs are a simple extension to TMs that allow for dynamic inputs during
> computation. Very straightforward. The resistance is more of the "I
> don't want to acknowledge that an interactive formalism (PTM) is more
> appropriate for interactive computing than a batch formalism (TM),"
> kind.

No, you are misrepresenting the so-called "resistance".  My point is
that TMs do not have to correspond to a "batch formalism".  With a bit
of encoding they can also be viewed as corresponding to interactive
computing. (Having to encode things is nothing new in the TM world:
almost anything has to be encoded somehow when you show that it
corresponds to a TM.)
From: Tim X
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <87ejfovo9r.fsf@lion.rapttech.com.au>
Raffael Cavallaro <················@pas-d'espam-s'il-vous-plait-mac.com>
writes:

> On 2007-10-20 04:12:54 -0400, Joachim Durchholz <··@durchholz.org> said:
>
>> The resistance is more of the "so what?" kind: the claim that PTMs are
>> more powerful than TMs is simply bogus.
>
> PTMs are a simple extension to TMs that allow for dynamic inputs during
> computation. Very straightforward. The resistance is more of the "I don't
> want to acknowledge that an interactive formalism (PTM) is more appropriate
> for interactive computing than a batch formalism (TM)," kind.
>

That's certainly not the criticism I've picked up in this thread. All that
I've seen is that people have questioned whether a PTM is able to show that
something is computable which TMs cannot compute. If they can, that really
would mean something. If they can't, and are essentially only equivalent
(possibly with some modelling 'sugar' that makes it easier to represent
some problem classes), then the reaction is very much "so what?". The
'resistance' seems to be due to the authors' claim that their PTM is more
powerful than a TM, and all that a number of people have asked for is the
proof to back up this claim. It's not what I would call resistance to a new
idea, but rather scientific caution asking for the proof. In fact, it is
far more disturbing that instead of a proof we see criticism for being
resistant to new ideas or approaches. Surely, if it is such a great new
idea/approach, then the proof has already been done (and likely refined and
clarified). It may be possible to argue about the proof (for example,
Matthias' criticism of their failure to use a TM in an imaginative way),
but this is the way proofs are accepted or rejected (referencing the
discussion of the social reality of proofs). 

Tim

-- 
tcross (at) rapttech dot com dot au
From: David Formosa (aka ? the Platypus)
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <slrnfhpcp3.8ad.dformosa@localhost.localdomain>
["Followup-To:" header set to comp.lang.functional.]
On Fri, 19 Oct 2007 20:25:18 -0400, Raffael Cavallaro
<················@pas-d'espam-s'il-vous-plait-mac.com> wrote:
> On 2007-10-19 16:29:22 -0400, ··············@gmail.com said:
>
>> Again, you are missing the whole point of the TM abstraction.  A TM
>> does not have a notion of time.
>
> Which is precisely why it is a poor model of interactive computation 
> across time.
>
> The PTM formalism is more expressive of ongoing interactive 
> computation. That's all.

I think the problem is one of vocabulary.  When I (and, I expect, others
in these newsgroups) hear "more expressive", we think "able to
express functions/algorithms that the TM is unable to".  Proving that
the PTM formalism is more expressive in this sense would be quite
straightforward: one could simply take the halting problem (or
something equivalent to it) and prove that PTMs can solve it.

However, if you are using "more expressive" to mean that PTMs are a
better way of modelling certain computing problems, then you're unlikely
to get any vigorous argument.

> I don't consider this earth shattering, but I do find the resistance to 
> it very telling.

I don't.  It's a big claim not backed by the type of evidence preferred
by the field.  I see circle-squarers, free-energy proponents and other
such fringe elements get the same class of resistance.  Extraordinary
claims require extraordinary evidence.  The big counterintuitive
results of people such as Cantor, Godel and Turing were backed by
strong rigorous proofs.  I have seen no such proof backing the
assertions given WRT PTMs.

> I think it irks the functional crowd because it makes 
> explicit the importance of dynamic inputs at run-time rather than 
> treating all computation as functions from static input to output.

The functional crowd realize the importance of dynamic inputs at
run-time.  Do you think anyone would learn about monads if IO wasn't
important? :D
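
E.g. the bog-standard example (nothing PTM-specific about it): dynamic
input at run time, handled functionally through the IO monad.

  main :: IO ()
  main = do
    putStrLn "Name?"
    name <- getLine          -- input that exists only at run time
    putStrLn ("Hello, " ++ name)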
From: George Neuner
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <dqqhh3lavtq07tmdsv3g7ck07cfshnkn11@4ax.com>
On Thu, 18 Oct 2007 21:43:22 -0700, ··············@gmail.com wrote:

>On Oct 18, 10:58 pm, Raffael Cavallaro <················@pas-d'espam-
>s'il-vous-plait-mac.com> wrote:
>> On 2007-10-18 21:12:23 -0400, ··············@gmail.com said:
>>
>> > On Oct 18, 6:41 pm, Raffael Cavallaro <················@pas-d'espam-
>> > s'il-vous-plait-mac.com> wrote:
>>
>> >> "A new scientific truth does not triumph by convincing its opponents
>> >> and making them see the light, but rather because its opponents
>> >> eventually die, and a new generation grows up that is familiar with
>> >> it." -Max Planck
>>
>> > "A new unscientific fallacy does not go away by convincing its
>> > proponets and making them see the light, but rather beceause its
>> > proponents eventually die, and a new generation grows up that has a
>> > good laugh at them." -- Kcnalp Xam
>>
>> This is the most elaborate version of "I know you are, but what am I?"
>> I've ever come across.
>>
>> You can keep your fictional quote, I'll go with Planck, thanks.
>
>I'm sorry to hear you didn't appreciate the joke.  So I'll say it non-
>jokingly:
>
>You are using Planck's quote incorrectly.  Just because opponents of a
>new claim don't go away does not make that claim a truth.  In fact,
>when opponents don't go away it is more likely that the reason is not
>that these opponents don't want to see the light but that they
>actually have seen the light already.

Many false or badly supported theories have developed a kind of
pseudo-religious orthodoxy that keeps them going in the face of
contradictory evidence.  

George
--
for email reply remove "/" from address
From: Joachim Durchholz
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ff9tf5$slb$1@online.de>
Raffael Cavallaro schrieb:
> "A new scientific truth does not triumph by convincing its opponents and 
> making them see the light, but rather because its opponents eventually 
> die, and a new generation grows up that is familiar with it." -Max Planck

This is - at best - a description of the life cycle of a fraction of all 
theories.

In sciences where falsification is easy (mathematics, physics, CS), old 
theories die very fast. The proponents of falsified theories either drop 
them really quickly, or they become irrelevant.
In sciences where falsification is difficult (palaeoanthropology, social 
sciences, ergonomy of programming), Planck may even have been too 
optimistic.

But then that shouldn't come as a surprise: scientific progress is far, 
far easier and quicker if you can test the hypotheses with ease.

I think Planck's quote came more out of personal frustration than out of 
scientific research on the way the sciences progress.

Regards,
Jo
From: Pascal Costanza
Subject: Re: on the relations of traditional CS theory to modern programming   practice
Date: 
Message-ID: <5nrkliFjl6a4U2@mid.individual.net>
Joachim Durchholz wrote:
> Raffael Cavallaro schrieb:
>> "A new scientific truth does not triumph by convincing its opponents 
>> and making them see the light, but rather because its opponents 
>> eventually die, and a new generation grows up that is familiar with 
>> it." -Max Planck
> 
> This is - at best - a description of the life cycle of a fraction of all 
> theories.
> 
> In sciences where falsification is easy (mathematics, physics, CS), old 
> theories die very fast. The proponents of falsified theories either drop 
> them really quickly, or they become irrelevant.
> In sciences where falsification is difficult (palaeoanthropology, social 
> sciences, ergonomy of programming), Planck may even have been too 
> optimistic.

In CS, falsification is not easy.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: ··············@gmail.com
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192812322.857490.110090@y27g2000pre.googlegroups.com>
On Oct 18, 8:05 pm, ··············@gmail.com wrote:
> Joachim Durchholz wrote:
> > Raffael Cavallaro schrieb:
> >> "A new scientific truth does not triumph by convincing its opponents
> >> and making them see the light, but rather because its opponents
> >> eventually die, and a new generation grows up that is familiar with
> >> it." -Max Planck
>
> > This is - at best - a description of the life cycle of a fraction of all
> > theories.
>
> > In sciences where falsification is easy (mathematics, physics, CS), old
> > theories die very fast. The proponents of falsified theories either drop
> > them really quickly, or they become irrelevant.
> > In sciences where falsification is difficult (palaeoanthropology, social
> > sciences, ergonomy of programming), Planck may even have been too
> > optimistic.
>
> In CS, falsification is not easy.

You have to be careful.  At least in the theoretical part of CS,
falsification is just as easy as it is in pure math (because
theoretical CS is essentially a part of math). If C-T is actually
falsifiable, then the procedure for indisputably doing so is quite
clear.  If you actually do it, then there won't be any dispute (at
least not coming from reasonable people).  Of course, coming up with
an actual falsification can be very hard, but that's not the point of
Planck's quote, which refers to the situation where you actually have
a new truth while others are unwilling to accept it.  In math this
rarely if ever happens.
From: Mark H.
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <1192560646.711447.250200@z24g2000prh.googlegroups.com>
On Oct 15, 3:23 am, Mark Tarver <··········@ukonline.co.uk> wrote:
> Really I think the problem is that programming has outstripped CS
> theory for a long time.  

Well... that depends on what you call "CS theory."  I don't think it's
fair to lump quantum cryptographers and complexity-theory analysts into
the same group as people who design programming languages and tools for
programmers.  I've seen talks here at UC Berkeley, for example, about
"CS theory" tools that have found previously undiscovered security holes
in the Linux kernel.

I get the impression that thinking at a higher level about programming
language ideas and tools generally gets you further in any kind of
programming work or research.  The leading-edge people in my field, at
least, are the ones who reason about code transformations, semantics of
operations, and mapping from vague "design patterns" to _code_ patterns
and faster, more general algorithms.

mfh
From: Tim X
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <87ir57vd1a.fsf@lion.rapttech.com.au>
Mark Tarver <··········@ukonline.co.uk> writes:

> [... original post snipped ...]

Hi Mark,

I agree with your assessment of the issues in universities re: research,
funding and performance/productivity assessment, and I am concerned about
the impact this has, particularly with respect to innovation and research
into areas which don't have an obvious commercial application. I also think
the concept of computational adequacy is potentially useful. However, I'm
not as convinced that this is a sign we have outstripped CS theory. If
anything, our failure has been in not focusing enough on developing these
theories or resolving the limitations that have been identified (again,
highlighting problems in the university and research sectors). Most of the
advances in the last 30 years have not been based on improved theoretical
understanding, but rather on increased computational power. 

As an example, consider planning. A significant problem in this domain is
the 'frame problem', which was identified and described in the late 60s.
Since that time, advances in this area have been largely due to improved
processing speed and cheaper storage/memory. This improvement in
computational power has enabled some 'solutions' that were identified in
the 60s and 70s to be applied. Initially, these were impractical because of
the time/power/storage required. They are now more practical, but they are
really quite crude workarounds rather than efficient solutions.
Essentially, we have not progressed much of the theory past where it was at
the start of the 70s and have been relying on improvements in technology to
make previously inefficient workarounds feasible. However, we are now
beginning to run up against those same limitations again as the
technological improvements begin to slow down. 

In the area of programming, much of the advance has really been about
hardware allowing higher-level programming languages with larger libraries
and a reduction in the need to be as concerned about efficiency. We are
able to handle greater complexity as a result, but essentially we are doing
the same stuff we were doing 30 years ago - it's just more complex and
there is more of it. The advances that have occurred have mainly focused on
dealing with the complexities and managing larger code bases (e.g. OOP,
AOP, TDD). 

The other factor which I think is often overlooked, particularly when
you're talking about systems such as PAL, is that fundamentally it requires
some workable theory of intelligence. I think one of the main reasons for
the AI winter was a combination of Sci-Fi hype and a lack of any clear
theory regarding what intelligence really is. Without the underlying
theory, we have nothing from which to develop the computational theories of
reasoning, belief and knowledge representation that are necessary to get
the sort of intelligent systems depicted in that little movie. 

A prerequisite to developing these theories is a research framework in
which researchers are free to pursue topics independent of
practical/commercial interests, in an environment that actually provides
the time and resources necessary and isn't constrained by purely
commercially focused outcomes, constant justification/reporting or endless
funding proposals. Unfortunately, current attitudes actually force
researchers into putting together things like that little film, which makes
rather grand promises regarding the benefits in 'hot areas' such as
fighting the evils of terrorism, in order to get funding. And this only
works in the short term: if real commercial advances are not achieved
within a relatively short period, funding is withdrawn and we have yet more
partially completed research and, no doubt, only partially completed
theories. 

We have not outstripped traditional CS theory, but we have failed to
develop and extend it, or to resolve the core limitations identified by
those traditional theories. In the last 30 years we have focused on
application and relied on technical improvement to work around theoretical
constraints. What we need is a renewed effort to extend and develop our
theory further. 

Tim


-- 
tcross (at) rapttech dot com dot au
From: Matthias Buelow
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <5nvdb7Fjt0b4U1@mid.dfncis.de>
Mark Tarver wrote:

> The truth is that our formal 1930s theory has long been left behind by
> the pace of development of commercial software. 

Sadly...

People with a strong theoretical background (McCarthy, Ken Thompson,
Donald Knuth) have created much better software than the
buzzword-infested junk we often see today.

I think if you've broken your brain in on Turing machines or other
minimal abstractions, so to say, you have a different approach to
problem solving than just gluing together ill-matched, badly thought out
buzzwordy libraries, which is what most programming is about today.
From: George Neuner
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <731nh3dna4donc24g0ni8hh4luft0c7lrs@4ax.com>
On Sun, 21 Oct 2007 00:21:26 +0200, Matthias Buelow <···@incubus.de>
wrote:

>Mark Tarver wrote:
>
>> The truth is that our formal 1930s theory has long been left behind by
>> the pace of development of commercial software. 
>
>Sadly...
>
>People with a strong theoretical background (McCarthy, Ken Thompson,
>Donald Knuth) have created much better software than the
>buzzword-infested junk we often see today.

Yeah, but the strong theoretical background was only part of it ...
those guys were also excellent system engineers, if not by training
then by natural gift.

I have met more than a few people with strong theoretical backgrounds
whom I wouldn't trust to engineer a diaper.

I've said this before and I haven't changed my mind - writing good
software is an engineering discipline.  Mathematics and theoretical
models aid engineering, but they don't replace it.


>I think if you've broken your brain in on Turing machines or other
>minimal abstractions, so to say, you have a different approach to
>problem solving than just gluing together ill-matched, badly thought out
>buzzwordy libraries, which is what most programming is about today.

The hacker god of elegance has been replaced with the golden calf of
agile programming.

George
--
for email reply remove "/" from address
From: Joachim Durchholz
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <ffggj9$jdl$1@online.de>
George Neuner schrieb:
> The hacker god of elegance has been replaced with the golden calf of
> agile programming.

And rightfully so *when it comes to moving targets* (such as 
customer-driven or market-driven specification).
The days of elegant engineering aren't over, but elegant engineering is 
now a small part of programming. (Actually this has been the case since 
Cobol was available.)

Regards,
Jo
From: Rob Warnock
Subject: Re: on the relations of traditional CS theory to modern programming practice
Date: 
Message-ID: <q9udnRAvPdI8p4HanZ2dnUVZ_uKpnZ2d@speakeasy.net>
Matthias Buelow  <···@incubus.de> wrote:
+---------------
| People with a strong theoretical background (McCarthy, Ken Thompson,
| Donald Knuth) have created much better software than the
| buzzword-infested junk we often see today.
| 
| I think if you've broken your brain in on Turing machines or other
| minimal abstractions, so to say, you have a different approach to
| problem solving than just gluing together ill-matched, badly thought out
| buzzwordy libraries, which is what most programming is about today.
+---------------

As much as I respect a strong theoretical background, the most
significant part of your statement might not be TMs per se, but
the bit about "or other minimal abstractions". I have generally
found that people whose backgrounds included significant
assembly-language programming in restricted environments [whether
small memory, small instruction sets, and/or real-time
constraints (e.g., line-rate packet switching with PDP-8's!)]
tended also to write much better code when using high-level
languages, large libraries, and the other accoutrements of
today's nearly-unlimited resources. That is, a certain tendency
towards minimalism, together with experience in how to get real
work done in a minimalist environment, is a useful trait even
(or even especially?!?) when the environment is not especially
limited.


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607