From: Randy Yates
Subject: Sentience
Date: 
Message-ID: <7jwocsn0.fsf@ieee.org>
Has there been any significant advances lately in the area of sentient
algorithms? Can someone even define "sentient"?
-- 
%  Randy Yates                  % "Bird, on the wing,
%% Fuquay-Varina, NC            %   goes floating by
%%% 919-577-9882                %   but there's a teardrop in his eye..."
%%%% <·····@ieee.org>           % 'One Summer Dream', *Face The Music*, ELO
http://home.earthlink.net/~yatescr

From: Frank A. Adrian
Subject: Re: Sentience
Date: 
Message-ID: <pan.2004.04.10.16.55.08.522055@ancar.org>
On Sat, 10 Apr 2004 05:19:31 +0000, Randy Yates wrote:

> Can someone even define "sentient"?

Yes!  This is a sentient.  This not a sentient because it no verb(s).

Thank you!  Thank you!  I'll be here all week...

faa
From: Brian Mastenbrook
Subject: Re: Sentience
Date: 
Message-ID: <100420041122565262%NOSPAMbmastenbNOSPAM@cs.indiana.edu>
In article <············@ieee.org>, Randy Yates <·····@ieee.org> wrote:

> Has there been any significant advances lately in the area of sentient
> algorithms? Can someone even define "sentient"?

No. Nobody's really working on it. There are four different camps in AI
today, and none of them is actually going to achieve AI:

* The connectionists, aka the cargo-culters - "if we just tweak the
back-propogation, it'll start thinking!" These people have put no real
thought into a definition of intelligence, other than to characterize
it as associative and fuzzy (how we're supposed to do that on a
discrete computer, I can't fathom).
* The logicians, whose hobby is attaching wings to pigs in the hope
that they will fly. These people presume that intelligence is just a
matter of logical deduction, and that logic is the right level on which
to describe how intelligence operates. They have all the high-level
philosophical problems of AI defined and solved, but so far their
computers have not locked them outside in Jupiter orbit, leading some
to migrate to the third camp...
* The Searlites, who believe it can't be done but are still publishing
papers for God-knows-what-reason.
* And finally, there's the people doing stuff that isn't AI, namely
anything that's a domain-specific system or isn't designed to produce
domain-general intelligence. This includes the evolutionary and genetic
algorithms people, the constraint solvers, and probably anything else
that doesn't fit in categories 1-3 (and a lot of stuff that does too;
the connectionists seem awfully fond of making a neural net that does X
and only X and then publishing a paper that says "neural nets can do
X!"). There is a lot of interesting stuff in this category, but they
should go find their own field. If your program can solve a problem
reliably much better than a human can, it's not because it's smart,
it's because you've written a better domain-specific algorithm than our
own general-purpose reasoning, and it's a big clue that you're not
doing AI at all.

Spread throughout all of these camps are the semiotics people and their
deconstructivist / postmodernist associates, who are busy destroying
real intelligence in humans via the educational system.

There, that should offend just about everybody.

If you want to know why AI is not getting anywhere, read Feynman's
famous address on Cargo Cult Science (
http://www.physics.brocku.ca/etc/cargo_cult_science.html ) and compare
to the state of the field. No wonder!

-- 
Brian Mastenbrook
http://www.cs.indiana.edu/~bmastenb/
From: Cameron MacKinnon
Subject: Re: Sentience
Date: 
Message-ID: <5ZSdnYHiv4-w3eXd4p2dnA@golden.net>
Brian Mastenbrook wrote:

[a humourous and insightful screed]

> There, that should offend just about everybody.
> 
> If you want to know why AI is not getting anywhere, read Feynman's
> famous address on Cargo Cult Science (
> http://www.physics.brocku.ca/etc/cargo_cult_science.html ) and compare
> to the state of the field. No wonder!

Is AI science? It isn't like experimenters in the field have to discover 
how some external phenomenon works, unless it's your connectionists 
answering "how many neurons to achieve critical mass?"  If the field is 
more about groping toward algorithms that exhibit intelligent behaviour, 
albeit using a different mechanism than biological organisms do, it's 
more like mathematics. And if that's so, the cargo cult analogy breaks 
down.

You've discounted superior performance in a particular domain, and you 
seem to be setting the bar quite high elsewhere.  So what do you 
consider AI? Victory in the Turing test or bust? If it's defined as what 
humans consider intelligent behaviour, the goalposts are likely to 
recede as fast as our computers can advance, simply because the layman's 
perception of intelligence is any behaviour that machines don't exhibit.

If computers can already drive cars, play chess and accidentally fill up 
their hard drives* while downloading (uploading?) porn on Mars, what do 
you want next? It seems that discounting domain specific successes is a 
tricky way of saying "if it's solved, it isn't AI."

 From what I've heard, electronic games' non-player characters seem to 
be improving year after year. That's an area where there's a somewhat 
independent arbiter (game reviewers) and an incentive for improvement 
(sales). It isn't world changing technology, but it's a plausible Turing 
test.


* metadata overload, but close enough

-- 
Cameron MacKinnon
Toronto, Canada
From: Brian Mastenbrook
Subject: Re: Sentience
Date: 
Message-ID: <100420041613373814%NOSPAMbmastenbNOSPAM@cs.indiana.edu>
In article <······················@golden.net>, Cameron MacKinnon
<··········@clearspot.net> wrote:

> Is AI science? It isn't like experimenters in the field have to discover 
> how some external phenomenon works, unless it's your connectionists 
> answering "how many neurons to achieve critical mass?"  If the field is 
> more about groping toward algorithms that exhibit intelligent behaviour, 
> albeit using a different mechanism than biological organisms do, it's 
> more like mathematics. And if that's so, the cargo cult analogy breaks 
> down.

I would extend the cargo cult phenomenon beyond "just science". I've
seen entirely too much cargo cult programming - putting together parts
of programs without any real understanding of what's going on. When
you're helping a student who has close parens and open parens littered
randomly throughout their Scheme source, it's pretty obvious that the
student is not reasoning about how Scheme works, but putting together a
program that has an appearance such that it /might/ work.

It's my suggestion that there is also something "going on" in
intelligence that is difficult to explain on the level of firing
neurons, just as there's something "going on" in computation that's
difficult to explain on the level of gates on a silicon wafer with
particular electromagnetic properties.

> You've discounted superior performance in a particular domain, and you 
> seem to be setting the bar quite high elsewhere.  So what do you 
> consider AI? Victory in the Turing test or bust? If it's defined as what 
> humans consider intelligent behaviour, the goalposts are likely to 
> recede as fast as our computers can advance, simply because the layman's 
> perception of intelligence is any behaviour that machines don't exhibit.

I would say that victory in a domain without any analysis of whether
the program can function in other domains with even minimal sharing of
structure is not Artificial Intelligence. In addition I do set the bar
high for programs that I cannot convince myself operate in any way that
directly maps to the primitives of intelligence - some of which include
the ability to interact with an external world that is composed of
objects with persistent identity, the ability to form intuitive
predictions and increase that ability through experience, the ability
to introspect about prior thought and use that introspection to make
decisions, and the ability to translate abstract thoughts into
concrete, controlled motor function. If I do not see a mapping from any
of these constructs to the program, I cannot accept at face value that
such a program is progressing to intelligence, nor that simply
increasing the resources available to such a program will cause it to
hit "critical mass" and suddenly become intelligent. Accepting this at
face value would be the same as accepting that bamboo towers and
runways will bring cargo planes, if only the reproduction is accurate
enough.

> If computers can already drive cars, play chess and accidentally fill up 
> their hard drives* while downloading (uploading?) porn on Mars, what do 
> you want next?

I want them to do all of these things at the same time, and improve the
performance (if slightly) of all of them by improving an individual
skill. I want shared structure that enables a computer to quickly adapt
to a new domain by application of past experience in a different
domain.

> It seems that discounting domain specific successes is a 
> tricky way of saying "if it's solved, it isn't AI."

Effectively, this statement is true, because we don't have AI yet. If
we were producing progressively better general-purpose reasoners, then
I could see the merit of assigning AI-value to them. We aren't. The
last well-known general purpose reasoner I know of is SHRDLU, which
could be taught arithemetic by instructing it to think of numbers as
collections of blocks.

>  From what I've heard, electronic games' non-player characters seem to 
> be improving year after year. That's an area where there's a somewhat 
> independent arbiter (game reviewers) and an incentive for improvement 
> (sales). It isn't world changing technology, but it's a plausible Turing 
> test.

Call me when games allow interaction with the world in some way other
than weapons fire, spell casting, and limited-option decision making.
In reality, all that's happening here is the transformation of gamers
into restricted-domain reasoners. It's also the reason why some
variants of Eliza can pass the turing test, when there's very obviously
nothing which can be called "intelligent" floating around in there.

-- 
Brian Mastenbrook
http://www.cs.indiana.edu/~bmastenb/
From: Cameron MacKinnon
Subject: Re: Sentience
Date: 
Message-ID: <kN6dnUHs98lsAeXdRVn-vg@golden.net>
Brian Mastenbrook wrote:
> I would extend the cargo cult phenomenon beyond "just science". I've
> seen entirely too much cargo cult programming - putting together parts
> of programs without any real understanding of what's going on. When
> you're helping a student who has close parens and open parens littered
> randomly throughout their Scheme source, it's pretty obvious that the
> student is not reasoning about how Scheme works, but putting together a
> program that has an appearance such that it /might/ work.

Infants don't learn languages by reasoning out that every sentence needs 
a (possibly implied) noun phrase and a verb phrase. They perform a lot 
of experiments with no scientific method. Mimicry and feedback seem to 
be enough.

Computer folks know that, because computer languages contain almost no 
redundancy (as used to resolve ambiguities), aren't DWIM and have 
pedantic compilers with truly awful diagnostics, learning one's first 
through trial and error is painful, and so syntax is often taught 
rigorously. But that's all hindsight and deep insight. Your students' 
strategy seems quite reasonable if their only experience is in 
biological languages with good feedback from their "conversation partner."

What, exactly, is the cargo cult's mistake? Seabees came and built 
runways and control towers, men put funny headphones on, and the planes 
came. Mimicking this looks ridiculous TO US, because we have superior 
domain knowledge of aeronautics. Absent that knowledge, we could say 
that they err in persisting for five years when no planes have come, but 
that suggests that the cultists are to somehow know how long to wait for 
results. Given that the original was a one-time event for a culture 
which (I'm guessing) had an oral history stretching back many 
generations, even a several-generations-long experiment might not be 
unreasonable.

[If you'd have asked me yesterday whether I'd defend cargo cultists, I'd 
have been insulted at the implication. I never really thought about it 
before, just thought what Feynman wanted me to think.]

I meet people all the time who seem to have only the most tenuous grasp 
of logic. In fact, I'd hazard that, for the majority of any given 
society, Aristotle and Descartes might as well never have existed. It's 
not that they are completely illogical, but they certainly don't operate 
through rigourous reasoning, and often can't see the flaws in the 
systems they believe.

So the interesting question is, when seeking artificially intelligent 
behaviour, are we looking for computers to act like the best of us (in 
which case, perhaps "Artificial Brilliance"?) or the rest of us?

> It's my suggestion that there is also something "going on" in
> intelligence that is difficult to explain on the level of firing
> neurons, just as there's something "going on" in computation that's
> difficult to explain on the level of gates on a silicon wafer with
> particular electromagnetic properties.

If the computation was designed by a person, seeking to explain it in 
the behaviour of the hardware it's running on is perhaps incorrect. It's 
like looking at a chalkboard full of mathematics and saying "the chalk's 
really on to something there".

Since we design with logic, we can approach zero redundancy, something 
biological systems don't exhibit.

Explaining human behaviour, in isolation, in terms of neurons seems very 
difficult. But if we start at the lowly cockroach, we see something that 
wouldn't seem to take too many neurons to emulate.

But I agree with you that pursuing a pure neural net approach, merely 
because biological systems do it that way, is a cargo cult approach.


My comments on game AI weren't meant to imply that it represents the 
future, merely to say that some areas of AI have been advancing, because 
there's both quasi-objective metrics and motivations for the people in 
that field. AI people seek to get the right fitness criteria for the 
training or population culling within their experiments, but I think 
that it's equally important to have appropriate fitness criteria for the 
researchers themselves, otherwise they end up playing in their own theses.

Military (and eventually civil) logistics is another similar area. 
Results were wanted, money was spent, results were gotten. Since funding 
domain specific intelligence (or solutions) research has a payoff, 
whereas general intelligence is a money pit, domain specific is where 
the progress is being made.


Anyway, we may rapidly be approaching (or worse) the limits of my 
insights in this area.

 From your comments, I'd say you follow the progress, er, machinations, 
of the AI community. Do you have any pointers for the uninitiated to 
collections of stuff that's more toward the seabees' end of the runway 
than the natives'?

-- 
Cameron MacKinnon
Toronto, Canada
From: Brian Mastenbrook
Subject: Re: Sentience
Date: 
Message-ID: <100420042318416155%NOSPAMbmastenbNOSPAM@cs.indiana.edu>
In article <······················@golden.net>, Cameron MacKinnon
<··········@clearspot.net> wrote:

> Brian Mastenbrook wrote:
> > I would extend the cargo cult phenomenon beyond "just science". I've
> > seen entirely too much cargo cult programming - putting together parts
> > of programs without any real understanding of what's going on. When
> > you're helping a student who has close parens and open parens littered
> > randomly throughout their Scheme source, it's pretty obvious that the
> > student is not reasoning about how Scheme works, but putting together a
> > program that has an appearance such that it /might/ work.
> 
> Infants don't learn languages by reasoning out that every sentence needs 
> a (possibly implied) noun phrase and a verb phrase. They perform a lot 
> of experiments with no scientific method. Mimicry and feedback seem to 
> be enough.

Enough... with hardware that supports it. Our brains are simply wired
up to rapidly obtain the necessary information for language learning.
But there is a difference between doing that and understanding the
wiring itself. Our genetics do really provide a bootstrapping procedure
that we've got to understand now.

> Computer folks know that, because computer languages contain almost no 
> redundancy (as used to resolve ambiguities), aren't DWIM and have 
> pedantic compilers with truly awful diagnostics, learning one's first 
> through trial and error is painful, and so syntax is often taught 
> rigorously. But that's all hindsight and deep insight. Your students' 
> strategy seems quite reasonable if their only experience is in 
> biological languages with good feedback from their "conversation partner."

It's reasonable up until the 52nd time I explain what s-expressions are
and point them at a tutorial explaining the rules. At that point they
really are just coasting instead of thinking.

> What, exactly, is the cargo cult's mistake? Seabees came and built 
> runways and control towers, men put funny headphones on, and the planes 
> came. Mimicking this looks ridiculous TO US, because we have superior 
> domain knowledge of aeronautics. Absent that knowledge, we could say 
> that they err in persisting for five years when no planes have come, but 
> that suggests that the cultists are to somehow know how long to wait for 
> results. Given that the original was a one-time event for a culture 
> which (I'm guessing) had an oral history stretching back many 
> generations, even a several-generations-long experiment might not be 
> unreasonable.

I don't think Feynman's point was to attack the cargo cult at all. He
was drawing an analogy between why it didn't work and the same type of
pseudoscience that doesn't work in our culture. In our case, we have
many methods which have been developed to enable us to come to an
understanding of something which appears magical. Since we don't yet
(and concievably won't for many many years due to the seeming exclusion
of accurate and large-scale measurement) understand how the embodied
mind actually processes information, that means we need to apply these
processes to the understanding of intelligence itself.

Without following scientific or systematic philosophical inquiry into
the nature of the general concept of intelligence and its
representation as part of the human mind, we cannot convincingly assert
that our experiments will bring in the cargo planes of intelligence.

> [If you'd have asked me yesterday whether I'd defend cargo cultists, I'd 
> have been insulted at the implication. I never really thought about it 
> before, just thought what Feynman wanted me to think.]

I don't think Feynman wanted you to think anything about them, but
instead about the advocates of pseudoscience in a scientific society.

> I meet people all the time who seem to have only the most tenuous grasp 
> of logic. In fact, I'd hazard that, for the majority of any given 
> society, Aristotle and Descartes might as well never have existed. It's 
> not that they are completely illogical, but they certainly don't operate 
> through rigourous reasoning, and often can't see the flaws in the 
> systems they believe.
> 
> So the interesting question is, when seeking artificially intelligent 
> behaviour, are we looking for computers to act like the best of us (in 
> which case, perhaps "Artificial Brilliance"?) or the rest of us?

The fact that rigorous logic is learned is actually a valid point in
the study of intelligence. It suggests that it might not be the right
level to describe what's going on. I personally focus much more on the
symbolic aspects of intelligence, with a slightly peculiar connotation
of symbol: something which has identity. I try to stake out a middle
ground between the subsymbolicists and the logicians.

> Explaining human behaviour, in isolation, in terms of neurons seems very 
> difficult. But if we start at the lowly cockroach, we see something that 
> wouldn't seem to take too many neurons to emulate.

But is it useful? My contention is that the human brain is not really a
well-designed device and is more of a kludge of cognitive ability on
top of a neural net than a reflection of some innate cognitive ability
in neurons.

> My comments on game AI weren't meant to imply that it represents the 
> future, merely to say that some areas of AI have been advancing, because 
> there's both quasi-objective metrics and motivations for the people in 
> that field. AI people seek to get the right fitness criteria for the 
> training or population culling within their experiments, but I think 
> that it's equally important to have appropriate fitness criteria for the 
> researchers themselves, otherwise they end up playing in their own theses.

Unfortunately I'm not convinced that those metrics are actually
measuring intelligence. There is a lot of fascinating stuff going on
there, but I really would hesistate to put the label of AI on, say,
genetic programming. It's not a slight to GP, but just a difference in
classification.

> Military (and eventually civil) logistics is another similar area. 
> Results were wanted, money was spent, results were gotten. Since funding 
> domain specific intelligence (or solutions) research has a payoff, 
> whereas general intelligence is a money pit, domain specific is where 
> the progress is being made.

I think it's shortsighted investment to only fund practical and not
basic research. For instance, basic physics research gave us the MRI,
and soon sustainable energy-producing fusion.

> Anyway, we may rapidly be approaching (or worse) the limits of my 
> insights in this area.

Not at all; these are good thoughts.

>  From your comments, I'd say you follow the progress, er, machinations, 
> of the AI community. Do you have any pointers for the uninitiated to 
> collections of stuff that's more toward the seabees' end of the runway 
> than the natives'?

Actually, it's worse than that. I publish in AI. My homepage has links
to my papers, but they aren't necessarily terribly well written,
particularly because of the need to slip past reviewers and masquerade
as useful research.

While I don't claim to be a classicist in any strict regard,
http://plato.stanford.edu/entries/language-thought/ gives a good
history on some of the previous work in the field and the feud with the
connectionists.

-- 
Brian Mastenbrook
http://www.cs.indiana.edu/~bmastenb/
From: Ray Dillinger
Subject: Re: Sentience
Date: 
Message-ID: <4078F2CC.C5381E54@sonic.net>
I dunno.  I've worked a lot in AI (commercial products dealing
with natural language, not academic papers) and I've used several 
major approaches, with varying degrees of "neatness" and 
"precision."  

What I've observed is that the "imprecise" systems, the ones 
which do not satisfy the test of "classicality" in terms of 
sequential sound operations on a representational state, are 
the ones that work better.  They are more efficient and less 
likely to be killed or led awry by grammar mistakes in the 
input. Something that classifies input according to bigrams 
or trigrams and then does very shallow pattern-matching is 
often superior at extracting meaning from genuine examples 
of written english than a formal parsing system that builds 
up deep structure.

That said, connectionist tools such as neural networks are 
very helpful in efficiently choosing the "most likely" formal 
grammar rule to use next when doing formal parsing. Systems 
that used connectionist models in rule selection, applying 
backpropagation to train them if they didn't lead to the lowest-
cost parse directly, quickly "learned" to outperform other formal
parsing systems by a factor of ten or more.  Given that experience 
I'd say that the formalist/classicist camp with their exhaustive-
search methodologies ought to be cheering for the new tools from 
the connectionists and scruffies.  Instead there seems to be 
a fight.

But that's primarily domain intelligence again.  What 
distinguishes natural-language work is the breadth of the 
domain; not only do you have to have a working knowledge of 
the language, but for worthwhile results, you also have to 
have a working knowledge of the particular domain under 
discussion.  Pretty much all the systems I built were only
looking for information about a very restricted domain. 

Now, asking about general intelligence - what the OP called 
"Sentience" -- is at this point a lot like asking a car 
manufacturer to build something that's as fast, flexible, 
quiet, tolerant of rough terrain, and self-maintaining as 
a cheetah. It's not going to happen.  The cheetah is many 
orders of magnitude more complex than the best cars we can 
build. Instead we must make do with "domain-specific" but 
useful functionality, where we get great speed on paved 
roads and have to tinker and maintain once in a while. 

But I also want to point out one other thing.  Cheetahs are
self-aware.  They have "animal intelligence", which means 
they use their brains to interact effectively with the world
around them.  That's the intelligence that "human intelligence"
started as.  And I think "sentience" has to be considered as 
an outgrowth of animal intelligence.  Animals don't think 
in formal symbols.  They don't have language.  The fundamental 
architecture of thought probably isn't "formal" in any 
reasonable sense of the word.  Our formalisms, our ability 
to make sound inference, our ability to use logic, etc - 
the things that the classicists so value that they think 
intelligence must be defined in terms of them - are all 
lately acquired.  And that cheetah is self-aware without 
them.

If we could produce a computer whose "consciousness" were 
as complex as a cat's, I'd consider it self-aware, though 
probably not sentient.  And I think that "sentience," though 
it probably involves _some_ classicist processing, can 
probably only be achieved by a system that actually does 
at least that much "non-classicist" processing. 

			Bear
From: William Bland
Subject: Re: Sentience
Date: 
Message-ID: <pan.2004.04.11.01.59.31.148028@abstractnonsense.com>
On Sat, 10 Apr 2004 21:18:41 -0400, Cameron MacKinnon wrote:
> Since we design with logic, we can approach zero redundancy, something 
> biological systems don't exhibit.

I may be misunderstanding you here, but I don't believe this is
correct.  Any redundancy in a biological system costs an organism
energy.  It seems obvious that evolution will get rid of such
organisms in favor of organisms that don't have redundancy.  Did
I misunderstand your point? Did you have a specific example in
mind?

Cheers,
	Bill.
From: Cameron MacKinnon
Subject: Re: Sentience
Date: 
Message-ID: <68GdncSDGtlHNuXdRVn-tw@golden.net>
William Bland wrote:
> On Sat, 10 Apr 2004 21:18:41 -0400, Cameron MacKinnon wrote:
> 
>>Since we design with logic, we can approach zero redundancy, something 
>>biological systems don't exhibit.
> 
> 
> I may be misunderstanding you here, but I don't believe this is
> correct.  Any redundancy in a biological system costs an organism
> energy.  It seems obvious that evolution will get rid of such
> organisms in favor of organisms that don't have redundancy.  Did
> I misunderstand your point? Did you have a specific example in
> mind?

Initially I was thinking specifically of the human brain. Examples exist 
of people who've lost quite a lot of brain tissue to trauma and 
continued to live quite normal lives. However, given our limited 
understanding of the brain, this is not a very solid argument.

Your assertion only holds in an evolutionary environment selecting for 
minimum energy use. Why do I have two kidneys, two testes, two lungs?

-- 
Cameron MacKinnon
Toronto, Canada
From: William Bland
Subject: Re: Sentience
Date: 
Message-ID: <pan.2004.04.11.02.49.53.581929@abstractnonsense.com>
On Sat, 10 Apr 2004 22:22:18 -0400, Cameron MacKinnon wrote:

> William Bland wrote:
>> On Sat, 10 Apr 2004 21:18:41 -0400, Cameron MacKinnon wrote:
>> 
>>>Since we design with logic, we can approach zero redundancy, something 
>>>biological systems don't exhibit.
>> 
>> 
>> I may be misunderstanding you here, but I don't believe this is
>> correct.  Any redundancy in a biological system costs an organism
>> energy.  It seems obvious that evolution will get rid of such
>> organisms in favor of organisms that don't have redundancy.  Did
>> I misunderstand your point? Did you have a specific example in
>> mind?
> 
> Initially I was thinking specifically of the human brain. Examples exist 
> of people who've lost quite a lot of brain tissue to trauma and 
> continued to live quite normal lives. However, given our limited 
> understanding of the brain, this is not a very solid argument.
> 
> Your assertion only holds in an evolutionary environment selecting for
> minimum energy use. Why do I have two kidneys, two testes, two lungs?

In all cases - brain, kidneys, testes, lungs, etc. we have
"spare" so that we don't die if something goes wrong with the
other one.  But then they're not "spare", are they?  They're
absolutely necessary for keeping you alive when something goes
wrong.  It's only "redundant" if you don't mind catastrophic
failure.  Yeah, granted, a human *could* live without them,
but I sure wouldn't want to be that human.

Bringing this back to computers, I think it's good to design
things with "redundancy".  Except, again, I don't believe it's
really redundancy. It's only "redundant" if you don't care what
happens when something goes wrong.  Usually you do care.

Cheers,
	Bill.
From: Cameron MacKinnon
Subject: Re: Sentience
Date: 
Message-ID: <D-udnbFAa74kLOXdRVn-vg@golden.net>
William Bland wrote:
> On Sat, 10 Apr 2004 21:18:41 -0400, Cameron MacKinnon wrote:
> 
>>Since we design with logic, we can approach zero redundancy, something 
>>biological systems don't exhibit.
> 
> 
> I may be misunderstanding you here, but I don't believe this is
> correct.  Any redundancy in a biological system costs an organism
> energy.  It seems obvious that evolution will get rid of such
> organisms in favor of organisms that don't have redundancy.  Did
> I misunderstand your point? Did you have a specific example in
> mind?

I'm not happy with my prior response. How does one measure redundancy in 
a large neural network? I think the brain has a large number of 
redundant neurons, in the sense that a lot of them can die or be removed 
without measurable impairment.

However, as you point out, selection has been favouring larger brains 
for some time now, so in that sense, the extra brainpower must have some 
benefit. I suspect that it is in rapidly coming up with witty things to 
say to the opposite sex at parties.

-- 
Cameron MacKinnon
Toronto, Canada
From: Jeff Dalton
Subject: Re: Sentience
Date: 
Message-ID: <fx4k70i14u6.fsf@tarn.inf.ed.ac.uk>
Cameron MacKinnon <··········@clearspot.net> writes:

> However, as you point out, selection has been favouring larger brains
> for some time now, so in that sense, the extra brainpower must have
> some benefit.

That's not necessarily so.  That brains have gotten larger does not
mean that must have been selected for.  For instance, it might be that
something else, that had larger brains as a consequence, was what was
selected for.  Larger brains might even be a net cost, so long as some
unavoidably associated benefit is great.

Even Dennett and other Darwinean maximalists admit, albeit
reluctantly, that not every feature was selected for.

-- jd
From: Joe Marshall
Subject: Re: Sentience
Date: 
Message-ID: <wu4h5pe1.fsf@ccs.neu.edu>
Jeff Dalton <····@tarn.inf.ed.ac.uk> writes:

> Even Dennett and other Darwinean maximalists admit, albeit
> reluctantly, that not every feature was selected for.

Some features have little `selection pressure' and some are not
`selected' at all.  These features tend to have huge variations in the
population because there is no selective advantage or disadvantage.
For example:
  - Friction ridge patterns (fingerprints)
  - Striations in the iris

There are other features that are co-incidental.  Eye color is related
to skin pigmentation, and although skin pigmentation has some minor
selective value, it seems unlikely to me that eye color is that
important.

This sort of variation is critical to the theory of evolution,
though.  When the environment changes such that some unimportant
features becomes a strong advantage or disadvantage, then there will
be enough variation in that feature to allow some to survive.
From: Ray Dillinger
Subject: Re: Sentience
Date: 
Message-ID: <40817C4A.BD0D4EDC@sonic.net>
Joe Marshall wrote:
> 
> There are other features that are co-incidental.  Eye color is related
> to skin pigmentation, and although skin pigmentation has some minor
> selective value, it seems unlikely to me that eye color is that
> important.

Skin color is in fact fairly important.  Pale skin is advantageous for 
indigenous populations in extreme latitudes or in areas that get 
highly attenuated sunlight; otherwise there is a problem synthesizing 
enough vitamin D with the very limited amount of skin that people in
a cold climate expose to sunlight. 

And darker skin is necessary for indigenous populations in the 
equatorial regions or in areas that get very direct sunlight, 
because the pigmentation protects cells from UV damage and, although
the pigmentation drastically cuts vitamin D production per square 
centimeter of skin exposed to sun, equatorial climates typically
are warm enough that people expose a lot more skin to the sunlight
so vitamin D production isn't a problem anyway. 

We are capable of getting vitamin D from our diet, so rickets and 
other more minor health issues resulting from vitamin D deficiency 
aren't too likely to be a make-or-break issue in a single lifetime; 
but there is huge selection pressure on populations taken as a whole, 
because the pressure, however slight, applies to *every* member of 
the population. 
 
> This sort of variation is critical to the theory of evolution,
> though.  When the environment changes such that some unimportant
> features becomes a strong advantage or disadvantage, then there will
> be enough variation in that feature to allow some to survive.

Agreed.  

			Bear
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <87u0ziyopi.fsf@nyct.net>
Ray Dillinger <····@sonic.net> writes:

> Skin color is in fact fairly important.

And indeed, that's why it's so strongly correlated to location. (And was
extremely strongly correlated before transportation became as easy as it
is now.)

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: EL Henry
Subject: Re: Sentience
Date: 
Message-ID: <ebe26ea9.0404171422.661fcd4c@posting.google.com>
William Bland <····@abstractnonsense.com> wrote in message news:<······························@abstractnonsense.com>...
> On Sat, 10 Apr 2004 21:18:41 -0400, Cameron MacKinnon wrote:
> > Since we design with logic, we can approach zero redundancy, something 
> > biological systems don't exhibit.
> 
> I may be misunderstanding you here, but I don't believe this is
> correct.  Any redundancy in a biological system costs an organism
> energy.  It seems obvious that evolution will get rid of such
> organisms in favor of organisms that don't have redundancy.  Did
> I misunderstand your point? Did you have a specific example in
> mind?

> 
> Cheers,
> 	Bill.

Bill --

 (Hmmmm...I shouldn't be posting, I have a tight schedule, but I can't
avoid it...)
 Redundancy in biological system is extremely common. The sheer number
of neurons is an example and that, in part, accounts for neural
plasticity (the phenomenom that happens when you relearn a task
through different pathways).
 Cancer is a phenomenom related to the massive amount of cell
turnover. It is a failure of the system of apoptosis (programmed cell
death) in a mutated cell, generating subsequent clones. Mistakes
happen when a system has to replicate a task a billion times.
 Let's not discuss thermodynamics in a living system here...That is
something probably way out of anyone's league in this newsgroup. Your
views on evolution and selection are also overly simplistic, but it'll
take a years of college in Biology or Medicine to fix them. You have
to realize that only in terms of vocabulary, Medical school will give
around 20,000 new words. This is hardly a walk in the park, and I
think some computer scientists would do best be aware of that when
they reach for "biological metaphors." Some frameworks and models (and
I won't cite them here) can only be regarded as puerile by someone
with a background in biological science.
 I remember a congress I went to last year. A girl was explaining her
Master's thesis on Fuzzy Logic applied to a "population of flies."
When she finished her presentation a biologist picked up the
microphone and said: "Hey, that's great. The only problem is that it
has nothing to do with the way it really happens with real flies."

 Cheers,

  Henry
From: Joe Marshall
Subject: Re: Sentience
Date: 
Message-ID: <oepy3o92.fsf@comcast.net>
Cameron MacKinnon <··········@clearspot.net> writes:

> What, exactly, is the cargo cult's mistake? 

It doesn't work.

It is based not in science, but in magic.  No attempt is made to
determine a rational cause; instead a `like evokes like' is used.

-- 
~jrm
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <d66cd2z9.fsf@ieee.org>
Cameron MacKinnon <··········@clearspot.net> writes:
> [...]
> What, exactly, is the cargo cult's mistake? Seabees came and built
> runways and control towers, men put funny headphones on, and the
> planes came. Mimicking this looks ridiculous TO US, because we have
> superior domain knowledge of aeronautics. 

Could this be one definition of sentience, then: The ability to
transfer knowledge from outside the identity's domain to inside?

> Absent that knowledge, we could say that they err in persisting for
> five years when no planes have come, but that suggests that the
> cultists are to somehow know how long to wait for results. Given
> that the original was a one-time event for a culture which (I'm
> guessing) had an oral history stretching back many generations, even
> a several-generations-long experiment might not be unreasonable.

This has a strange, vague similarity to scientific experiment. Because
we've observed that F = m*a a google times before doesn't mean that it
might not change tomorrow. Experience does not equal knowledge.

> [If you'd have asked me yesterday whether I'd defend cargo cultists,
> I'd have been insulted at the implication. I never really thought
> about it before, just thought what Feynman wanted me to think.]
>
> I meet people all the time who seem to have only the most tenuous
> grasp of logic. In fact, I'd hazard that, for the majority of any
> given society, Aristotle and Descartes might as well never have
> existed. It's not that they are completely illogical, but they
> certainly don't operate through rigourous reasoning, and often can't
> see the flaws in the systems they believe.
>
> So the interesting question is, when seeking artificially intelligent
> behaviour, are we looking for computers to act like the best of us (in
> which case, perhaps "Artificial Brilliance"?) or the rest of us?

This is an interesting query, but really off-topic from the question. 
I think even a retarded sentient algorithm would be a breakthrough.

> [...]
-- 
%  Randy Yates                  % "The dreamer, the unwoken fool - 
%% Fuquay-Varina, NC            %  in dreams, no pain will kiss the brow..."
%%% 919-577-9882                %  
%%%% <·····@ieee.org>           % 'Eldorado Overture', *Eldorado*, ELO
http://home.earthlink.net/~yatescr
From: Cameron MacKinnon
Subject: Re: Sentience
Date: 
Message-ID: <4tydnQEW08KYjOHdRVn-tA@golden.net>
Randy Yates wrote:
> Cameron MacKinnon <··········@clearspot.net> writes:
>>So the interesting question is, when seeking artificially intelligent
>>behaviour, are we looking for computers to act like the best of us (in
>>which case, perhaps "Artificial Brilliance"?) or the rest of us?
> 
> This is an interesting query, but really off-topic from the question. 
> I think even a retarded sentient algorithm would be a breakthrough.

I think that once we get intelligent machines, creating one that acts 
plausibly stupid or ignorant (like us humans) will be even more of a 
breakthrough. The first AIs are likely to be annoying know-it-alls.

As to the original question, how about "sentient: able to use Google or 
a dictionary to discover the meaning of unknown words."

I say any animal that can learn to differentiate between seeing another 
animal and seeing itself in a mirror is sentient.

-- 
Cameron MacKinnon
Toronto, Canada
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <y8ozmosn.fsf@ieee.org>
Cameron MacKinnon <··········@clearspot.net> writes:

> Randy Yates wrote:
>> Cameron MacKinnon <··········@clearspot.net> writes:
>>>So the interesting question is, when seeking artificially intelligent
>>>behaviour, are we looking for computers to act like the best of us (in
>>>which case, perhaps "Artificial Brilliance"?) or the rest of us?
>> This is an interesting query, but really off-topic from the
>> question. I think even a retarded sentient algorithm would be a
>> breakthrough.
>
> I think that once we get intelligent machines, creating one that acts
> plausibly stupid or ignorant (like us humans) will be even more of a
> breakthrough. The first AIs are likely to be annoying know-it-alls.
>
> As to the original question, how about "sentient: able to use Google
> or a dictionary to discover the meaning of unknown words."

I meant a "scientific" definition - one that would serve to guide as
a test.

> I say any animal that can learn to differentiate between seeing
> another animal and seeing itself in a mirror is sentient.

Weak. This could be done with simple pattern recognition. You're way
off from where I was thinking. 
-- 
%  Randy Yates                  % "With time with what you've learned, 
%% Fuquay-Varina, NC            %  they'll kiss the ground you walk 
%%% 919-577-9882                %  upon."
%%%% <·····@ieee.org>           % '21st Century Man', *Time*, ELO
http://home.earthlink.net/~yatescr
From: robbie carlton
Subject: Re: Sentience
Date: 
Message-ID: <32b5ef05.0404110421.73560c1d@posting.google.com>
Brian Mastenbrook wrote


> I would extend the cargo cult phenomenon beyond "just science". I've
> seen entirely too much cargo cult programming - putting together parts
> of programs without any real understanding of what's going on. When
> you're helping a student who has close parens and open parens littered
> randomly throughout their Scheme source, it's pretty obvious that the
> student is not reasoning about how Scheme works, but putting together a
> program that has an appearance such that it /might/ work.

Okay, so you're saying you don't want AI reasearchers putting parts
together to create programs whose workings they don't understand: i.e.
emergence. And yet

> It's my suggestion that there is also something "going on" in
> intelligence that is difficult to explain on the level of firing
> neurons, just as there's something "going on" in computation that's
> difficult to explain on the level of gates on a silicon wafer with
> particular electromagnetic properties.

i.e. emergence.
From: Cameron MacKinnon
Subject: Re: Sentience
Date: 
Message-ID: <hvmdneNkvclbR-Td4p2dnA@golden.net>
robbie carlton wrote:
> Brian Mastenbrook wrote
>>It's my suggestion that there is also something "going on" in
>>intelligence that is difficult to explain on the level of firing
>>neurons, just as there's something "going on" in computation that's
>>difficult to explain on the level of gates on a silicon wafer with
>>particular electromagnetic properties.
> 
> 
> i.e. emergence.

I don't think he means to suggests that there's anything mystical or 
inexplicable about computation. We created it, and not by accident.

It's difficult to explain celestial mechanics on the level of 
arithmetic. So we moved up the abstraction curve, invented calculus, and 
lived happily ever after.

We haven't yet found the "calculus" that allows us to easily explain 
intelligent reasoning. So intelligence is as mystifying to us as the 
retrograde motion of the planets was to the ancients.

With the wrong theory it can seems as complex as the geocentric model of 
the universe was, with 55 spheres required to allow the possibility of 
the motion that was actually observed.

In your post you conflated the complex behaviour sometimes exhibited by 
large numbers of simple things (emergence) with the simple behaviour 
sometimes exhibited by people who should know better (throwing something 
against the wall and seeing if it sticks). If something emerges from 
experiment without planning, that's just luck.

-- 
Cameron MacKinnon
Toronto, Canada
From: robbie carlton
Subject: Re: Sentience
Date: 
Message-ID: <32b5ef05.0404120243.41c97f78@posting.google.com>
Cameron MacKinnon <··········@clearspot.net> wrote 

> It's difficult to explain celestial mechanics on the level of 
> arithmetic. So we moved up the abstraction curve, invented calculus, and 
> lived happily ever after.
>
> We haven't yet found the "calculus" that allows us to easily explain 
> intelligent reasoning. So intelligence is as mystifying to us as the 
> retrograde motion of the planets was to the ancients.

Yes, calculus turned out to be a better language for describing
celestial mechanics than arithmetic, but for many phenomena there is
no calculus simpler than the description of it's constituent parts.
Take as a simple example Craig Reynolds' "Boids". The whole point of
that experiment (simulation?) is that the simplest way of describing
the behaviour of the flock is by describing the behaviour of the
individuals. It's not that Reynolds couldn't find an adequate
"calculus" to describe the flock. He found a spectacular one, it just
happened to involve emergent behaviour which is difficult to find by
analysing the equations.
 
> With the wrong theory it can seems as complex as the geocentric model of 
> the universe was, with 55 spheres required to allow the possibility of 
> the motion that was actually observed.
> 
> In your post you conflated the complex behaviour sometimes exhibited by 
> large numbers of simple things (emergence) with the simple behaviour 
> sometimes exhibited by people who should know better (throwing something 
> against the wall and seeing if it sticks). If something emerges from 
> experiment without planning, that's just luck.

I appreciate that in the post that I quoted Brian Mastenbrook was
talking about students putting together random bits of code. However
he originally attacked the connectionist AI approach as Cargo-Cult
science, and then in the next post started asking for an emergent
description of Intelegence, which is why I felt the need to post.
From: Brian Mastenbrook
Subject: Re: Sentience
Date: 
Message-ID: <120420040713049872%NOSPAMbmastenbNOSPAM@cs.indiana.edu>
In article <····························@posting.google.com>, robbie
carlton <··············@hotmail.com> wrote:

> I appreciate that in the post that I quoted Brian Mastenbrook was
> talking about students putting together random bits of code. However
> he originally attacked the connectionist AI approach as Cargo-Cult
> science, and then in the next post started asking for an emergent
> description of Intelegence, which is why I felt the need to post.

Only a twisted reading of my statements would take them as a request
for an "emergent description of Intelligence", when in other posts I
have repeatedly talked about approaching intelligence on a level that
corresponds to philosophical primitives in the concept of intelligence.
Because I cannot say that neural nets have any such correspondence, I
cannot accept that they will become intelligent merely because they
happen to resemble the existing hardware of our brain.

Yes, it's wonderful that some systems can demonstrate wildly complex
behavior out of simple parts. However, the problem is that such
behavior is essentially unpredictable, which is not very helpful when
trying to do research. The general notion of science isn't just about
running and experiment and seeing what works - it's about being able to
draw conclusions from the experiment and modify one's thinking. If the
nature of the domain is that of gross unpredictability, this can't
happen.

-- 
Brian Mastenbrook
http://www.cs.indiana.edu/~bmastenb/
From: Joe Marshall
Subject: Re: Sentience
Date: 
Message-ID: <smfa3ofz.fsf@comcast.net>
Brian Mastenbrook <····················@cs.indiana.edu> writes:

> I would extend the cargo cult phenomenon beyond "just science". I've
> seen entirely too much cargo cult programming - putting together parts
> of programs without any real understanding of what's going on. 

There's a *lot* of this.
From: Sunnan
Subject: Re: Sentience
Date: 
Message-ID: <87u0zrtydj.fsf@handgranat.org>
Brian Mastenbrook <····················@cs.indiana.edu> writes:

> In article <············@ieee.org>, Randy Yates <·····@ieee.org> wrote:
>
>> Has there been any significant advances lately in the area of sentient
>> algorithms? Can someone even define "sentient"?
>
> No. Nobody's really working on it. There are four different camps in AI
> today, and none of them is actually going to achieve AI:

<snip camps>

Being in the fourth camp, doing advanced information processing or
other useful stuff, doesn't really mean that you're not in one of the
three first as well. It's perfectly possibly to work on (say) advanced
search by day, and igor-tinkering with the cargo cult by night.

(just clarifying for others, not trying to correct Brian)

-- 
One love,
Sunnan
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <4qrrli0z.fsf@ieee.org>
Brian,

Thanks for a reasonable response. I'm about to checkout the link.

--Randy

Brian Mastenbrook <····················@cs.indiana.edu> writes:

> In article <············@ieee.org>, Randy Yates <·····@ieee.org> wrote:
>
>> Has there been any significant advances lately in the area of sentient
>> algorithms? Can someone even define "sentient"?
>
> No. Nobody's really working on it. There are four different camps in AI
> today, and none of them is actually going to achieve AI:
>
> * The connectionists, aka the cargo-culters - "if we just tweak the
> back-propogation, it'll start thinking!" These people have put no real
> thought into a definition of intelligence, other than to characterize
> it as associative and fuzzy (how we're supposed to do that on a
> discrete computer, I can't fathom).
> * The logicians, whose hobby is attaching wings to pigs in the hope
> that they will fly. These people presume that intelligence is just a
> matter of logical deduction, and that logic is the right level on which
> to describe how intelligence operates. They have all the high-level
> philosophical problems of AI defined and solved, but so far their
> computers have not locked them outside in Jupiter orbit, leading some
> to migrate to the third camp...
> * The Searlites, who believe it can't be done but are still publishing
> papers for God-knows-what-reason.
> * And finally, there's the people doing stuff that isn't AI, namely
> anything that's a domain-specific system or isn't designed to produce
> domain-general intelligence. This includes the evolutionary and genetic
> algorithms people, the constraint solvers, and probably anything else
> that doesn't fit in categories 1-3 (and a lot of stuff that does too;
> the connectionists seem awfully fond of making a neural net that does X
> and only X and then publishing a paper that says "neural nets can do
> X!"). There is a lot of interesting stuff in this category, but they
> should go find their own field. If your program can solve a problem
> reliably much better than a human can, it's not because it's smart,
> it's because you've written a better domain-specific algorithm than our
> own general-purpose reasoning, and it's a big clue that you're not
> doing AI at all.
>
> Spread throughout all of these camps are the semiotics people and their
> deconstructivist / postmodernist associates, who are busy destroying
> real intelligence in humans via the educational system.
>
> There, that should offend just about everybody.
>
> If you want to know why AI is not getting anywhere, read Feynman's
> famous address on Cargo Cult Science (
> http://www.physics.brocku.ca/etc/cargo_cult_science.html ) and compare
> to the state of the field. No wonder!
>
> -- 
> Brian Mastenbrook
> http://www.cs.indiana.edu/~bmastenb/

-- 
%  Randy Yates                  % "Remember the good old 1980's, when 
%% Fuquay-Varina, NC            %  things were so uncomplicated?"
%%% 919-577-9882                % 'Ticket To The Moon' 
%%%% <·····@ieee.org>           % *Time*, Electric Light Orchestra
http://home.earthlink.net/~yatescr
From: Jeff Dalton
Subject: Re: Sentience
Date: 
Message-ID: <fx4oepu155l.fsf@tarn.inf.ed.ac.uk>
Brian Mastenbrook <····················@cs.indiana.edu> writes:

> No. Nobody's really working on it. There are four different camps in AI
> today, and none of them is actually going to achieve AI:

> * And finally, there's the people doing stuff that isn't AI, namely
> anything that's a domain-specific system or isn't designed to produce
> domain-general intelligence. This includes the evolutionary and genetic
> algorithms people, the constraint solvers, and probably anything else
> that doesn't fit in categories 1-3 ...

But that is AI as the term is typically used.  Look at an AI textbook,
for example, and that's the sort of thing it's about.  

Re cargo cults, mentioned somewhere in this thread, there's
something that at least makes them sound less randomly irrational
in one of Marvin Harris's books, probably _Pigs, Wars, and Witches:
The Riddles of Culture_.

-- jd
From: Tim Daly Jr.
Subject: Re: Sentience
Date: 
Message-ID: <87u0zs6pcy.fsf@hummer.intern>
Randy Yates <·····@ieee.org> writes:

> Has there been any significant advances lately in the area of sentient
> algorithms? Can someone even define "sentient"?

Why yes, of course.  In fact, I had a friendly chat with my quicksort
the other day.

-- 
-Tim
From: Paul F. Dietz
Subject: Re: Sentience
Date: 
Message-ID: <Kb-dnarQb9m3durdRVn-iQ@dls.net>
Tim Daly Jr. wrote:
> Randy Yates <·····@ieee.org> writes:
> 
>>Has there been any significant advances lately in the area of sentient
>>algorithms? Can someone even define "sentient"?
> 
> Why yes, of course.  In fact, I had a friendly chat with my quicksort
> the other day.

Programs sometimes scream when I torture test them.  Does that count?

	Paul
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <y8p36h55.fsf@ieee.org>
···@tenkan.org (Tim Daly Jr.) writes:

> Randy Yates <·····@ieee.org> writes:
>
>> Has there been any significant advances lately in the area of sentient
>> algorithms? Can someone even define "sentient"?
>
> Why yes, of course.  In fact, I had a friendly chat with my quicksort
> the other day.

Thanks for your response, Tim. Now go take your medication.
-- 
%  Randy Yates                  % "Bird, on the wing,
%% Fuquay-Varina, NC            %   goes floating by
%%% 919-577-9882                %   but there's a teardrop in his eye..."
%%%% <·····@ieee.org>           % 'One Summer Dream', *Face The Music*, ELO
http://home.earthlink.net/~yatescr
From: Thomas F. Burdick
Subject: Re: Sentience
Date: 
Message-ID: <xcvy8p3a15l.fsf@famine.OCF.Berkeley.EDU>
···@tenkan.org (Tim Daly Jr.) writes:

> Randy Yates <·····@ieee.org> writes:
> 
> > Has there been any significant advances lately in the area of sentient
> > algorithms? Can someone even define "sentient"?
> 
> Why yes, of course.  In fact, I had a friendly chat with my quicksort
> the other day.

I was having a little talk with some shell scripts and Applescript on
my Mac the other day.  It used to speak in an inappropriately polite
brittish accent.  Not good.  I fixed it, though, and now we can
converse in appropriately hyper NoCal tones.  I really wish I had
method combinations, they'd really help to get the "hella" quotient up
to where it should be.  I don't need to talk with my Lisp programs, I
already know what they're thinking.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: David Steuber
Subject: Re: Sentience
Date: 
Message-ID: <87ad1ii2yi.fsf@david-steuber.com>
Randy Yates <·····@ieee.org> writes:

> Has there been any significant advances lately in the area of sentient
> algorithms? Can someone even define "sentient"?

I've read the responses so far.  Very interesting work is going on.
However...

I don't think there is any such thing as sentience in the sense of
self-awareness.  When Descartes said, "I think, therefore I am," he
was mistaken.  He was wrong at the part where he said, "I think."
Everything else that followed was therefore wrong.

Atoms do not think.  They are constrained by the laws of physics.
Brains are just collections of atoms arranged in a fancy pattern.
They are subject to the exact same laws.

Generally I try not to think about this illusion we call life.  I
think I'm doing pretty well.  Life is just a dream.

-- 
It would not be too unfair to any language to refer to Java as a
stripped down Lisp or Smalltalk with a C syntax.
--- Ken Anderson
    http://openmap.bbn.com/~kanderso/performance/java/index.html
From: Don Groves
Subject: Re: Sentience
Date: 
Message-ID: <opr6bgrqse2i99y2@news.web-ster.com>
On 11 Apr 2004 18:07:01 -0400, David Steuber <·····@david-steuber.com> 
wrote:

> Randy Yates <·····@ieee.org> writes:
>
>> Has there been any significant advances lately in the area of sentient
>> algorithms? Can someone even define "sentient"?
>
> I've read the responses so far.  Very interesting work is going on.
> However...
>
> I don't think there is any such thing as sentience in the sense of
> self-awareness.  When Descartes said, "I think, therefore I am," he
> was mistaken.  He was wrong at the part where he said, "I think."
> Everything else that followed was therefore wrong.

In my book, Richard Feynman said it best:
"I think, therefore I think I am."
--
dg
From: Erann Gat
Subject: Re: Sentience
Date: 
Message-ID: <gNOSPAMat-1104041717200001@192.168.1.51>
In article <················@news.web-ster.com>, Don Groves <(. (@ dgroves
ccwebster) net))> wrote:

> On 11 Apr 2004 18:07:01 -0400, David Steuber <·····@david-steuber.com> 
> wrote:
> 
> > Randy Yates <·····@ieee.org> writes:
> >
> >> Has there been any significant advances lately in the area of sentient
> >> algorithms? Can someone even define "sentient"?
> >
> > I've read the responses so far.  Very interesting work is going on.
> > However...
> >
> > I don't think there is any such thing as sentience in the sense of
> > self-awareness.  When Descartes said, "I think, therefore I am," he
> > was mistaken.  He was wrong at the part where he said, "I think."
> > Everything else that followed was therefore wrong.
> 
> In my book, Richard Feynman said it best:
> "I think, therefore I think I am."

I think I think, therefore I think I am.

I think.

:-)

E.
From: Don Groves
Subject: Re: Sentience
Date: 
Message-ID: <opr6bpvos82i99y2@news.web-ster.com>
On Sun, 11 Apr 2004 17:17:20 -0700, Erann Gat <·········@flownet.com> 
wrote:

> In article <················@news.web-ster.com>, Don Groves <(. (@ 
> dgroves
> ccwebster) net))> wrote:
>
>> On 11 Apr 2004 18:07:01 -0400, David Steuber <·····@david-steuber.com>
>> wrote:
>>
>> > Randy Yates <·····@ieee.org> writes:
>> >
>> >> Has there been any significant advances lately in the area of 
>> sentient
>> >> algorithms? Can someone even define "sentient"?
>> >
>> > I've read the responses so far.  Very interesting work is going on.
>> > However...
>> >
>> > I don't think there is any such thing as sentience in the sense of
>> > self-awareness.  When Descartes said, "I think, therefore I am," he
>> > was mistaken.  He was wrong at the part where he said, "I think."
>> > Everything else that followed was therefore wrong.
>>
>> In my book, Richard Feynman said it best:
>> "I think, therefore I think I am."
>
> I think I think, therefore I think I am.
>
> I think.
>

When I was writing real-time embedded systems code
for a living, I made up this sign for my desk:
"I think, therefore I Asm".
But now I'm thinking at a higher level, I think.
--
dg
From: Rob Warnock
Subject: Re: Sentience
Date: 
Message-ID: <FpSdncnUWNj30OfdRVn-uA@speakeasy.net>
Erann Gat <·········@flownet.com> wrote:
+---------------
| Don Groves <(. (@ dgroves ccwebster) net))> wrote:
| > David Steuber <·····@david-steuber.com> wrote:
| > > When Descartes said, "I think, therefore I am," he was mistaken.
| > > He was wrong at the part where he said, "I think."
| > > Everything else that followed was therefore wrong.
| > 
| > In my book, Richard Feynman said it best:
| > "I think, therefore I think I am."
| 
| I think I think, therefore I think I am.
+---------------

Exactly so. The other way around is putting Descartes before the horse.[1]

+---------------
| I think.
+---------------

Yes, well...


-Rob

[1] Which I first heard from The Vajra Regent Osel Tendzin in 1985
    (though it may not have been new then).

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Paul Wallich
Subject: Re: Sentience
Date: 
Message-ID: <c5edpu$t37$1@reader1.panix.com>
Rob Warnock wrote:

> Erann Gat <·········@flownet.com> wrote:
> +---------------
> | Don Groves <(. (@ dgroves ccwebster) net))> wrote:
> | > David Steuber <·····@david-steuber.com> wrote:
> | > > When Descartes said, "I think, therefore I am," he was mistaken.
> | > > He was wrong at the part where he said, "I think."
> | > > Everything else that followed was therefore wrong.
> | > 
> | > In my book, Richard Feynman said it best:
> | > "I think, therefore I think I am."
> | 
> | I think I think, therefore I think I am.

Or, as one neurobiologist used to say, "I think the brain is the most 
fascinating organ in the human body. Then I remember who's telling me that."

> +---------------
> 
> Exactly so. The other way around is putting Descartes before the horse.[1]

> 
> [1] Which I first heard from The Vajra Regent Osel Tendzin in 1985
>     (though it may not have been new then).

(It was old enough in 1955 that Richard Armour could refer to it 
obliquely in _It All Started With Europa_...)

paul
From: Don Geddis
Subject: Re: Sentience
Date: 
Message-ID: <87k70lxber.fsf@sidious.geddis.org>
David Steuber <·····@david-steuber.com> wrote on 11 Apr 2004 18:0:
> I don't think there is any such thing as sentience in the sense of
> self-awareness.

Self-awareness is actually easier than sentience.  Many computer systems
include a model of their own behavior, that they compare with observations
from the external world.  That's sufficient to create a limited kind of
self-awareness.

> Atoms do not think.  They are constrained by the laws of physics.

The laws of physics don't necessarily exclude thinking.  But you're right
that atoms don't think.

> Brains are just collections of atoms arranged in a fancy pattern.
> They are subject to the exact same laws.

Your use of the word "just" is highly misleading.  All the value is in the
organization.

In any case, look up the "systems reply" to Searle's Chinese Room analogy.
I'm sure you would be a big fan of Searle (who is an AI critic).  But the
"systems reply" addresses your confusion head-on.  Namely, that the overall
system can have interesting properties which none of the constituent parts
have in isolation.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
From: John Thingstad
Subject: Re: Sentience
Date: 
Message-ID: <opr6c7gcggxfnb1n@news.chello.no>
On Mon, 12 Apr 2004 12:07:40 -0700, Don Geddis <···@geddis.org> wrote:

> David Steuber <·····@david-steuber.com> wrote on 11 Apr 2004 18:0:
>> I don't think there is any such thing as sentience in the sense of
>> self-awareness.
>
> Self-awareness is actually easier than sentience.  Many computer systems
> include a model of their own behavior, that they compare with 
> observations
> from the external world.  That's sufficient to create a limited kind of
> self-awareness.

I belive the tecnical term for self awareness is auto-epistemic logic.
See More, R. C. Autoepistemic logic,
in Non-Standard Logics for Automated Reasoning.

>
>> Atoms do not think.  They are constrained by the laws of physics.
>
> The laws of physics don't necessarily exclude thinking.  But you're right
> that atoms don't think.
>
>> Brains are just collections of atoms arranged in a fancy pattern.
>> They are subject to the exact same laws.

You are assuming that the fact that there is a description of the phenomena
means that there is a deterministic model for it.
In fact Quantum Electrodynamice can give rise highly nonliear behaviour.
You could say that caos is the rule and order the exception that lasts
over time. Roger Penrose postulates, in Shaddows of the Mind,  that the
bioelectric field surrounding the brain is in a quantum collaped state.
(Solenoid effect in the micro-tubules)
He goes on to conjecture that quantum effects are neccesary to model the 
mind.
Thus he submises a computer does not have the capacity to medol it.
Personally I see no reason why computers can't model nonlinear systems.
(There are some limitations. In the interval delta t being modeled the 
function
must be bound polynoninally or no number of digits suffice to express the 
number.)
Personally I have been looking for neural "loopback" and adjusting for the 
error term.
A method called error biffucation.
(given linear system Ax=b where x is unknown we know x + delta x where 
delta x is a
unknown error. but A(x+delta x) = b + delta b
so X delta x = delta b which we can solve. substituting gives
A delta X = A (x + delta x) - b)
It seems to me this can be used to model recognition.
Futher more a new mathematical tool called "Integrate and fire pulse 
coupled oscillation"
may be used to modell colation.

I thus summise that the brain is a fractally composed set of non-linear 
system solvers using
error biffucation and colating using integrate and fire pulse coupled 
oscillation.
The "prime" integrator (factal tree root) is what we call conciousness.
Well, sigh, it needs more work.


-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <65c4d1m7.fsf@ieee.org>
John Thingstad <··············@chello.no> writes:

> On Mon, 12 Apr 2004 12:07:40 -0700, Don Geddis <···@geddis.org> wrote:
>
>> David Steuber <·····@david-steuber.com> wrote on 11 Apr 2004 18:0:
>>> I don't think there is any such thing as sentience in the sense of
>>> self-awareness.
>>
>> Self-awareness is actually easier than sentience.  Many computer systems
>> include a model of their own behavior, that they compare with
>> observations
>> from the external world.  That's sufficient to create a limited kind of
>> self-awareness.
>
> I belive the tecnical term for self awareness is auto-epistemic logic.
> See More, R. C. Autoepistemic logic,
> in Non-Standard Logics for Automated Reasoning.
>
>>
>>> Atoms do not think.  They are constrained by the laws of physics.
>>
>> The laws of physics don't necessarily exclude thinking.  But you're right
>> that atoms don't think.
>>
>>> Brains are just collections of atoms arranged in a fancy pattern.
>>> They are subject to the exact same laws.
>
> You are assuming that the fact that there is a description of the phenomena
> means that there is a deterministic model for it.
> In fact Quantum Electrodynamice can give rise highly nonliear behaviour.
> You could say that caos is the rule and order the exception that lasts
> over time. Roger Penrose postulates, in Shaddows of the Mind,  that the
> bioelectric field surrounding the brain is in a quantum collaped state.
> (Solenoid effect in the micro-tubules)
> He goes on to conjecture that quantum effects are neccesary to model
> the mind.
> Thus he submises a computer does not have the capacity to medol it.
> Personally I see no reason why computers can't model nonlinear systems.
> (There are some limitations. In the interval delta t being modeled the
> function
> must be bound polynoninally or no number of digits suffice to express
> the number.)
> Personally I have been looking for neural "loopback" and adjusting for
> the error term.
> A method called error biffucation.
> (given linear system Ax=b where x is unknown we know x + delta x where
> delta x is a
> unknown error. but A(x+delta x) = b + delta b
> so X delta x = delta b which we can solve. substituting gives
> A delta X = A (x + delta x) - b)
> It seems to me this can be used to model recognition.
> Futher more a new mathematical tool called "Integrate and fire pulse
> coupled oscillation"
> may be used to modell colation.
>
> I thus summise that the brain is a fractally composed set of
> non-linear system solvers using
> error biffucation and colating using integrate and fire pulse coupled
> oscillation.
> The "prime" integrator (factal tree root) is what we call conciousness.
> Well, sigh, it needs more work.
>
>
> -- 
> Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/

Just lovely. I'm still waiting for a definition of sentience. How can 
we know we've achieved it if we can't define it?
-- 
%  Randy Yates                  % "Remember the good old 1980's, when 
%% Fuquay-Varina, NC            %  things were so uncomplicated?"
%%% 919-577-9882                % 'Ticket To The Moon' 
%%%% <·····@ieee.org>           % *Time*, Electric Light Orchestra
http://home.earthlink.net/~yatescr
From: Matthew Danish
Subject: Re: Sentience
Date: 
Message-ID: <20040413031737.GF25328@mapcar.org>
On Tue, Apr 13, 2004 at 02:54:47AM +0000, Randy Yates wrote:
> Just lovely. I'm still waiting for a definition of sentience. How can 
> we know we've achieved it if we can't define it?

I don't know if I think, but I know that I am.

Take that, Descartes.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: John Thingstad
Subject: Re: Sentience
Date: 
Message-ID: <opr6d0u9myxfnb1n@news.chello.no>
On Tue, 13 Apr 2004 02:54:47 GMT, Randy Yates <·····@ieee.org> wrote:

>
> Just lovely. I'm still waiting for a definition of sentience. How can
> we know we've achieved it if we can't define it?

We can't. I think this term is ill defined.
Sentience is supposed to be what distinguises uf from animals.
I think research, if anything, has made this distinction more fluid.
Maybe the answer is to remove the term ;)

-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <fzb858bu.fsf@ieee.org>
John Thingstad <··············@chello.no> writes:
> [...]
> Maybe the answer is to remove the term ;)

A typical liberal, "free-thinking," new-age suggestion. Also,
as are most such suggestions, complete bullshit.
-- 
%  Randy Yates                  % "Remember the good old 1980's, when 
%% Fuquay-Varina, NC            %  things were so uncomplicated?"
%%% 919-577-9882                % 'Ticket To The Moon' 
%%%% <·····@ieee.org>           % *Time*, Electric Light Orchestra
http://home.earthlink.net/~yatescr
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <87pta63zuj.fsf@nyct.net>
Randy Yates <·····@ieee.org> writes:

> John Thingstad <··············@chello.no> writes:
>> [...]
>> Maybe the answer is to remove the term ;)
>
> A typical liberal, "free-thinking," new-age suggestion. Also,
> as are most such suggestions, complete bullshit.

Did you have the same reaction to the concept that the speed of light is
relative not only to the frame of reference of the emitter, but of the
receiver as well?

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5e9g6$d4r$1@ulric.tng.de>
David Steuber wrote:

> Atoms do not think.  They are constrained by the laws of physics.
> Brains are just collections of atoms arranged in a fancy pattern.
> They are subject to the exact same laws.

Well, atoms alone (probably) don't think. However, when you combine
many of them into a special structure, then this structure gets new
properties/abilities which were not there before. For example the
abilitiy to think.

If we find out what pattern/structure is needed we could assemble
atoms in a computer in a similar way and by this giving it also the
ability to think. The hardware is already here.. now placing the
last parts of matter needs to be done by software.
Programming is nothing else than placing matter into a specific
pattern which creates some effect.


Andr�
--
From: Christian Lynbech
Subject: Re: Sentience
Date: 
Message-ID: <87ekqt8287.fsf@baguette.defun.dk>
>>>>> "Andr�" == Andr� Thieme <······································@justmail.de> writes:

Andr�> Well, atoms alone (probably) don't think. However, when you combine
Andr�> many of them into a special structure, then this structure gets new
Andr�> properties/abilities which were not there before. For example the
Andr�> abilitiy to think.

Approaching the nitpick level, but isn't this merely our current
theory? 

I mean, we can not know for certain that intelligence does not involve
some magical component (such as a divinely given soul) until we have
succesfully built an artificial intelligence.


------------------------+-----------------------------------------------------
Christian Lynbech       | christian ··@ defun #\. dk
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
                                        - ·······@hal.com (Michael A. Petonic)
                                   
From: Brian Mastenbrook
Subject: Re: Sentience
Date: 
Message-ID: <120420041350215816%NOSPAMbmastenbNOSPAM@cs.indiana.edu>
In article <··············@baguette.defun.dk>, Christian Lynbech
<·················@ericsson.com> wrote:

> Approaching the nitpick level, but isn't this merely our current
> theory? 
> 
> I mean, we can not know for certain that intelligence does not involve
> some magical component (such as a divinely given soul) until we have
> succesfully built an artificial intelligence.

I know it's true because I wouldn't want to live in a world where I
couldn't make an artificial intelligence. Therefore, I don't even have
to bother thinking about the possibility that I can't.

-- 
Brian Mastenbrook
http://www.cs.indiana.edu/~bmastenb/
From: David Steuber
Subject: Re: Sentience
Date: 
Message-ID: <878yh1f0e3.fsf@david-steuber.com>
Brian Mastenbrook <····················@cs.indiana.edu> writes:

> In article <··············@baguette.defun.dk>, Christian Lynbech
> <·················@ericsson.com> wrote:
> 
> > Approaching the nitpick level, but isn't this merely our current
> > theory? 
> > 
> > I mean, we can not know for certain that intelligence does not involve
> > some magical component (such as a divinely given soul) until we have
> > succesfully built an artificial intelligence.
> 
> I know it's true because I wouldn't want to live in a world where I
> couldn't make an artificial intelligence. Therefore, I don't even have
> to bother thinking about the possibility that I can't.

Look at it this way.  How is a microprocessor controling a servo moter
any different from your brain controling your fingers?

Mathematically, AI should be no more or less possible than NI.  I also
doubt very much that the two will be distinguishable by any reasonable
means.  That is, the thing we perceive as intelligence will be just
that regardless of its origin.

The earlier post about arranging matter into a sufficiently complex
pattern so as to produce a mind is really no different than emulation
with a sufficiently complex program.  It is just another Turing
transform.  A more concrete example is the fact that a microprocessor
can be fully emulated and tested in software before being fabricated.

-- 
I wouldn't mind the rat race so much if it wasn't for all the damn cats.
From: Christian Lynbech
Subject: Re: Sentience
Date: 
Message-ID: <87wu4jwvcf.fsf@baguette.defun.dk>
>>>>> "David" == David Steuber <·····@david-steuber.com> writes:

David> Look at it this way.  How is a microprocessor controling a servo moter
David> any different from your brain controling your fingers?

David> Mathematically, AI should be no more or less possible than NI.

But if intelligence was intimately connected to the presence of a
divine soul, it would not be mathematical. Then all machines that we
would be able to build would lack a component that only the divine
being would be able to control and put into things.


------------------------+-----------------------------------------------------
Christian Lynbech       | christian ··@ defun #\. dk
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
                                        - ·······@hal.com (Michael A. Petonic)
From: David Steuber
Subject: Re: Sentience
Date: 
Message-ID: <87ad1f2wyf.fsf@david-steuber.com>
Christian Lynbech <·················@ericsson.com> writes:

> >>>>> "David" == David Steuber <·····@david-steuber.com> writes:
> 
> David> Look at it this way.  How is a microprocessor controling a servo moter
> David> any different from your brain controling your fingers?
> 
> David> Mathematically, AI should be no more or less possible than NI.
> 
> But if intelligence was intimately connected to the presence of a
> divine soul, it would not be mathematical. Then all machines that we
> would be able to build would lack a component that only the divine
> being would be able to control and put into things.

That's a rather big if.  I might be able to go along with that if if I
ever saw anything that violated mathematical law or convinced me that
the brain is not just another Turing machine.  My own personal
experience just doesn't support any such conclusion.

Of course this is just my opinion and my experience.

-- 
I wouldn't mind the rat race so much if it wasn't for all the damn cats.
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <1xmrmnxv.fsf@ieee.org>
Christian Lynbech <·················@ericsson.com> writes:

>>>>>> "David" == David Steuber <·····@david-steuber.com> writes:
>
> David> Look at it this way.  How is a microprocessor controling a servo moter
> David> any different from your brain controling your fingers?
>
> David> Mathematically, AI should be no more or less possible than NI.
>
> But if intelligence was intimately connected to the presence of a
> divine soul, it would not be mathematical. Then all machines that we
> would be able to build would lack a component that only the divine
> being would be able to control and put into things.

Hi Christian,

This is a great point. I would say that a "soul" is essentially
a "will." We may eventually be able to make machines that are
both "sentient" and "reasoning" (refer to a parallel post I just
made), but without a "soul" such an entity's will, if existent,
must be synthetic (e.g., "protect all humans from harm"). At least
that's the way it seems to me. 
-- 
%  Randy Yates                  % "Remember the good old 1980's, when 
%% Fuquay-Varina, NC            %  things were so uncomplicated?"
%%% 919-577-9882                % 'Ticket To The Moon' 
%%%% <·····@ieee.org>           % *Time*, Electric Light Orchestra
http://home.earthlink.net/~yatescr
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <vfk3l97i.fsf@ieee.org>
Randy Yates <·····@ieee.org> writes:
> [...]
> This is a great point. I would say that a "soul" is essentially
> a "will." We may eventually be able to make machines that are
> both "sentient" and "reasoning" (refer to a parallel post I just
> made), but without a "soul" such an entity's will, if existent,
> must be synthetic (e.g., "protect all humans from harm"). At least
> that's the way it seems to me. 

I also should comment that I've wondered if, like our Creator has
done for us, there may be a way for us to infuse into our created
machine a piece of ourselves that would give it this "soul" or will.
-- 
%  Randy Yates                  % "Though you ride on the wheels of tomorrow,
%% Fuquay-Varina, NC            %  you still wander the fields of your
%%% 919-577-9882                %  sorrow."
%%%% <·····@ieee.org>           % '21st Century Man', *Time*, ELO
http://home.earthlink.net/~yatescr
From: David Steuber
Subject: Re: Sentience
Date: 
Message-ID: <87ptabi5l7.fsf@david-steuber.com>
Randy Yates <·····@ieee.org> writes:

> I also should comment that I've wondered if, like our Creator has
> done for us, there may be a way for us to infuse into our created
> machine a piece of ourselves that would give it this "soul" or will.

See Mary Shelly's Frankenstein.  I believe the idea was that the good
doctor could not create life per se, but he could put a bit of his
soul into it.

I've more or less discounted the existence of a divine creator.  But
there is a certain appeal to a sci-fi idea that we are simply a part
of the universe that has broken itself apart into many pieces in an
attempt to understand its self.  Not that I actually subscribe to that
idea.

Set aside the philosophy and theology of the issue, I think the
apparent existence of the mind to be the most fascinating thing about
the universe.  I don't think The Big Bang holds a candle to it by
comparison.

Unless the laws of mathematics are mutable, it seems that everything
is the way it is without the need for any kind of creation event.  If
we solve the problem of creating a machine mind that is as good or
better than the human mind, I expect there will be some controversy.
How long it will take to do that I don't know.  It may also be that we
can never fully understand it.  I already find it to be quite magical
that the letter 'e' shows up on my display when I, or someone far away,
types an 'e'.  Yet all that is has been engineered.  Math has not.  So
far as we know, it is only discovered.

One of the coolest things I've read was in Carl Sagan's book Contact.
I made sure to read it before the movie came out.  In it, there is a
rather interesting discussion about the constant Pi.  As far as I
know, Pi can't be anything other than what it is.  But in the book,
there is a "message" buried deep down in the number.  I won't spoil
it here.  Read the book.  It is better than the movie.

There is an exit strategy for people who wish to cling to a creator.
In physics, math is really a tool for creating models.  The models are
no good until they have been confirmed by experimental observation.
Even then, if you buy into Hume, that empirical evidence is of limited
value.  Math may be immutable.   But math != physics.  Math is just a
tool.

I leave this as a small room for doubt in my own philosophical world
view.  That said, I think Roger Penrose is seriously stretching things
in his book, "The Emperor's New Mind."  I find his arguments against
the possibility of AI/SI whatever you care to call it (I prefer
synthetic over artificial because the synthetic is indistinguishable
from the natural) to fall strongly into the wishful thinking camp.

Currently the weight of scientific evidence that I am familiar with
leads me to conclude that a human-like synthetic intelligence is quite
possible and perhaps even likely.  It just requires deeper
understanding on our part to achieve it.  I think once that happens,
the sales of anti-depressants will truly skyrocket.

My only real fear is that people will trust synthetic intelligence
more than the real thing.  Just because the machines will be smarter
doesn't mean they will be infallible.

"Teach it phenomenology" --- Dark Star

-- 
I wouldn't mind the rat race so much if it wasn't for all the damn cats.
From: Ray Dillinger
Subject: Re: Sentience
Date: 
Message-ID: <407CE923.78DE1AAD@sonic.net>
David Steuber wrote:
> 
> I've more or less discounted the existence of a divine creator.  But
> there is a certain appeal to a sci-fi idea that we are simply a part
> of the universe that has broken itself apart into many pieces in an
> attempt to understand its self.  Not that I actually subscribe to that
> idea.

I've sort of got the opinion that there are things which 
are holy -- family, love, truth, compassion, joy, life,
etc...  This doesn't really require or involve the kind 
of God who is a "who", or even a God separate in any way 
from the universe itself.  It just *is*.  Every religion
in the world is just a shadow of what you hold in your 
heart when you percieve the world in a state of reverence.

What's holy is that which we hold in reverential regard.
So, each of us chooses what is holy, and for some folks 
it's everything, and for some it's nothing. 

Not that this has much to do with Lisp, but perhaps on 
some level this sort of capacity is part of what we 
regard as sentience -- the ability to self-consciously
choose, in some matters, our attitudes and how we allow
things to affect us.

				Bear
From: Gorbag
Subject: Re: Sentience
Date: 
Message-ID: <n9ffc.203$n_5.52@bos-service2.ext.ray.com>
"David Steuber" <·····@david-steuber.com> wrote in message
···················@david-steuber.com...
> My only real fear is that people will trust synthetic intelligence
> more than the real thing.  Just because the machines will be smarter
> doesn't mean they will be infallible.

Just because something isn't infallible doesn't mean it is not more worthy
of trust. 53% right is still better than 42%.
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5k2gr$5b2$1@ulric.tng.de>
David Steuber wrote:

> I've more or less discounted the existence of a divine creator.

I am not so certain about this. There are some people who have very
good arguments that we were created by an intelligence. If this
intelligence is a living creature, or if it is some strange kind of
"nature laws" is not sure.

But for example take it this way:
if you get some kilos of sand from the beach, climb on a high building
and then let the sand fall down... what do you expect of the sand?
You probably expect that it will fall down somehow, without any
specific pattern. I am sure that you would be highly confused if you
go downstairs and find how the sand has written the faculty function
in Lisp. How often you might throw around sand, it will never fall in a
way that it writes Lisp programs. This is randomness.

Another example:
in a school you give the pupils some dice, and every pupil has to throw
it 100 times and write down the number.
If one kid has 100 times on his paper a "3" you would not believe him
that he did it correctly. Because 100 times a 3 is information, not
randomness. An intelligence is needed to create information.

Last example:
You maybe heared that in the 1925 some people found eggs of dinosaurs
during an expedition in the Gobi desert. Funny was, the eggs were
arranged in a square. The conclusion of the paleontologists was:
these eggs were already discovered by some humans before.


What my examples hopefully illustrate: whenever we find even very simple
structures like squares, letters, etc. we are certain that they were
produced by an intelligence, because information cannot simply appear
out of nothing. But then let us take the by far most complex structure
in the known universe - the human brain.
Here we suddenly say it can develop from alone. There is no intelligence
needed to be involved in creating this structure.
In other words: let some monkeys type, in some time a nice Lisp program
will appear.


Some people want to establish a new field of science, called
Intelligent Design.
If you are interested in mathematical "proofs" that we cannot develop
without a creator google for intelligence design.



> Unless the laws of mathematics are mutable, it seems that everything
> is the way it is without the need for any kind of creation event.  If
> we solve the problem of creating a machine mind that is as good or
> better than the human mind, I expect there will be some controversy.
> How long it will take to do that I don't know.  It may also be that we
> can never fully understand it.  I already find it to be quite magical
> that the letter 'e' shows up on my display when I, or someone far away,
> types an 'e'.  Yet all that is has been engineered.  Math has not.  So
> far as we know, it is only discovered.

Next sentence is _not_ meant offensive:
I think your view of mathematics is too naive.

I don't know your mathematical background... but maybe you want to look
into the issues of the beginning of last century, where math stoped to
work and needed some big changes. And you could look for information
regarding G�del's incompleteness theorem. Possible starting point:
http://en.wikipedia.org/wiki/G%F6del%27s_incompleteness_theorem



> One of the coolest things I've read was in Carl Sagan's book Contact.
> I made sure to read it before the movie came out.  In it, there is a
> rather interesting discussion about the constant Pi.  As far as I
> know, Pi can't be anything other than what it is.  But in the book,
> there is a "message" buried deep down in the number.  I won't spoil
> it here.  Read the book.  It is better than the movie.

You made me curious :)



> Currently the weight of scientific evidence that I am familiar with
> leads me to conclude that a human-like synthetic intelligence is quite
> possible and perhaps even likely.  It just requires deeper
> understanding on our part to achieve it.  I think once that happens,
> the sales of anti-depressants will truly skyrocket.

If you really have some time (2 hours), I _highly_ suggest to read this:
http://www.kurzweilai.net/articles/art0134.html?m=1

This is a text of Ray Kurzweil about his "Law of accelerating Returns".
I am of course critical too, but anyway, I regard it as one of the most
important texts at all.
He explains why in 30-40 years we will have computers, millions of times
more intelligent than humans.



> My only real fear is that people will trust synthetic intelligence
> more than the real thing.  Just because the machines will be smarter
> doesn't mean they will be infallible.

In fact, a very complicated issue...


Andr�
--
From: Gareth McCaughan
Subject: Re: Sentience
Date: 
Message-ID: <8765c29um5.fsf@g.mccaughan.ntlworld.com>
Andr� Thieme wrote:

> But for example take it this way:
> if you get some kilos of sand from the beach, climb on a high building
> and then let the sand fall down... what do you expect of the sand?
> You probably expect that it will fall down somehow, without any
> specific pattern. I am sure that you would be highly confused if you
> go downstairs and find how the sand has written the faculty function
> in Lisp. How often you might throw around sand, it will never fall in a
> way that it writes Lisp programs. This is randomness.

Is this meant to be some sort of argument against evolution?
Because, if so, it's a *very* bad one.

> Another example:
> in a school you give the pupils some dice, and every pupil has to throw
> it 100 times and write down the number.
> If one kid has 100 times on his paper a "3" you would not believe him
> that he did it correctly. Because 100 times a 3 is information, not
> randomness. An intelligence is needed to create information.

In the absence of a credible definition of "information",
that statement doesn't qualify as either true or false.
It's manifestly not true that intelligence is needed to
make anything analogous to "rolling a 3 100 times".

> Last example:
> You maybe heared that in the 1925 some people found eggs of dinosaurs
> during an expedition in the Gobi desert. Funny was, the eggs were
> arranged in a square. The conclusion of the paleontologists was:
> these eggs were already discovered by some humans before.

I'm not sure what significance that's supposed to have, but
it sounds interesting anyway. Can you tell me more? (I asked
Google, but it doesn't seem to have heard the story.)

> What my examples hopefully illustrate: whenever we find even very simple
> structures like squares, letters, etc. we are certain that they were
> produced by an intelligence, because information cannot simply appear
> out of nothing.

This is not true. It is very common to find patterns,
some of them very intricate and beautiful, that are well
explained in purely naturalistic terms. For instance,
naturally formed crystals make all sorts of elegant
patterns, which arise simply from the laws of physics.
(Similar patterns would doubtless arise from other
possible laws of physics, so the fact that the laws
of physics give rise to crystal growth doesn't seem
like good evidence that the laws of physics are themselves
the work of an intelligence.)

>                 But then let us take the by far most complex structure
> in the known universe - the human brain.
> Here we suddenly say it can develop from alone. There is no intelligence
> needed to be involved in creating this structure.

"Suddenly"? You surely don't think anyone says "All
those other ingenious bits of design in the natural
world, obviously they're the result of an intelligence
at work -- but the human brain isn't"?

> In other words: let some monkeys type, in some time a nice Lisp program
> will appear.

I think you need to learn a lot more about how evolution
is actually thought to work by people who believe in it;
it doesn't at all resemble your "let some monkeys type"
caricature.

> Some people want to establish a new field of science, called
> Intelligent Design.

It is clear that some people want to establish *something*
called "Intelligent Design". I regret that it doesn't look
much like a field of science; more like a strategy for
attacking science. (Google for "intelligent design wedge"
to find out more.)

> If you are interested in mathematical "proofs" that we cannot develop
> without a creator google for intelligence design.

I am a mathematician, and a Christian (and therefore more
sympathetic to at least some of the aims of these people
than most). Every such "proof" I have seen has been rubbish.
I don't rule out the possibility that there might be some
non-rubbish proof, and if you have one then I would be
very interested to see it.

-- 
Gareth McCaughan
.sig under construc
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <vfk2htwg.fsf@ieee.org>
Gareth McCaughan <················@pobox.com> writes:
> [...]
> This is not true. It is very common to find patterns,
> some of them very intricate and beautiful, that are well
> explained in purely naturalistic terms. For instance,
> naturally formed crystals make all sorts of elegant
> patterns, which arise simply from the laws of physics.

And the laws of physics come from what, an eternal vacuum
of space? ...
-- 
%  Randy Yates                  % "My Shangri-la has gone away, fading like 
%% Fuquay-Varina, NC            %  the Beatles on 'Hey Jude'" 
%%% 919-577-9882                %  
%%%% <·····@ieee.org>           % 'Shangri-La', *A New World Record*, ELO
http://home.earthlink.net/~yatescr
From: David Steuber
Subject: Re: Sentience
Date: 
Message-ID: <87wu4ic4yx.fsf@david-steuber.com>
Randy Yates <·····@ieee.org> writes:

> Gareth McCaughan <················@pobox.com> writes:
> > [...]
> > This is not true. It is very common to find patterns,
> > some of them very intricate and beautiful, that are well
> > explained in purely naturalistic terms. For instance,
> > naturally formed crystals make all sorts of elegant
> > patterns, which arise simply from the laws of physics.
> 
> And the laws of physics come from what, an eternal vacuum
> of space? ...

Where do the properties of numbers come from?  I can get the same
results with (+ 2 3) and (+ 3 2).  Where did that law come from?  For
that matter, who decided on the decimal representation of Pi and all
those many infinite series that converge on Pi?

Could things have been decided otherwise?

-- 
I wouldn't mind the rat race so much if it wasn't for all the damn cats.
From: Gareth McCaughan
Subject: Re: Sentience
Date: 
Message-ID: <87ad1c97ny.fsf@g.mccaughan.ntlworld.com>
Randy Yates <·····@ieee.org> writes:

> Gareth McCaughan <················@pobox.com> writes:
> > [...]
> > This is not true. It is very common to find patterns,
> > some of them very intricate and beautiful, that are well
> > explained in purely naturalistic terms. For instance,
> > naturally formed crystals make all sorts of elegant
> > patterns, which arise simply from the laws of physics.
> 
> And the laws of physics come from what, an eternal vacuum
> of space? ...

You must have missed the rest of that paragraph:

  | (Similar patterns would doubtless arise from other
  | possible laws of physics, so the fact that the laws
  | of physics give rise to crystal growth doesn't seem
  | like good evidence that the laws of physics are themselves
  | the work of an intelligence.)

In any event, if "entirely explained by simple application
of the laws of physics" doesn't imply "explained in purely
naturalistic terms" then I don't know what does.

-- 
Gareth McCaughan
.sig under construc
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5maf0$mim$1@ulric.tng.de>
Gareth McCaughan wrote:

>>But for example take it this way:
>>if you get some kilos of sand from the beach, climb on a high building
>>and then let the sand fall down... what do you expect of the sand?
>>You probably expect that it will fall down somehow, without any
>>specific pattern. I am sure that you would be highly confused if you
>>go downstairs and find how the sand has written the faculty function
>>in Lisp. How often you might throw around sand, it will never fall in a
>>way that it writes Lisp programs. This is randomness.
> 
> 
> Is this meant to be some sort of argument against evolution?
> Because, if so, it's a *very* bad one.

I understood it this way: it is just a warm up for the average surfer
that one can read on internet sites about Intelligent Design.
I am not very involved into ID, so I don't know their more detailed
explanations. I just wanted to point out some places where these
arguments could be found.


>>Another example:
>>in a school you give the pupils some dice, and every pupil has to throw
>>it 100 times and write down the number.
>>If one kid has 100 times on his paper a "3" you would not believe him
>>that he did it correctly. Because 100 times a 3 is information, not
>>randomness. An intelligence is needed to create information.
> 
> 
> In the absence of a credible definition of "information",
> that statement doesn't qualify as either true or false.
> It's manifestly not true that intelligence is needed to
> make anything analogous to "rolling a 3 100 times".

You can visit http://www.intelligentdesigner.de/ and contact the author
to learn more about it.


>>Last example:
>>You maybe heared that in the 1925 some people found eggs of dinosaurs
>>during an expedition in the Gobi desert. Funny was, the eggs were
>>arranged in a square. The conclusion of the paleontologists was:
>>these eggs were already discovered by some humans before.
> 
> 
> I'm not sure what significance that's supposed to have, but
> it sounds interesting anyway. Can you tell me more? (I asked
> Google, but it doesn't seem to have heard the story.)

Hmm, I supposed there would be more information about this issue, like
the conclusions. The author of the linked site can you probably tell
more about it (and perhaps give some sources).


>>What my examples hopefully illustrate: whenever we find even very simple
>>structures like squares, letters, etc. we are certain that they were
>>produced by an intelligence, because information cannot simply appear
>>out of nothing.
> 
> 
> This is not true. It is very common to find patterns,
> some of them very intricate and beautiful, that are well
> explained in purely naturalistic terms. For instance,
> naturally formed crystals make all sorts of elegant
> patterns, which arise simply from the laws of physics.
> (Similar patterns would doubtless arise from other
> possible laws of physics, so the fact that the laws
> of physics give rise to crystal growth doesn't seem
> like good evidence that the laws of physics are themselves
> the work of an intelligence.)


I think my wording was very bad.
Well, as I understood it, ID says that information cannot appear out of
nothing. You are describing some order/regularity (like structures that
you can find in nature) and not about information. For example it would
be hard to find some trees in nature who grew in a way (when observed
from a helicopter) that they form the letters which itself are the source
code of a scheme compiler.
I mixed some terms, sorry for that.
If you want definitions I have to refer you to the linked site.

Thanks for your comments.


Andr�
--
From: Andreas Scholta
Subject: Re: Sentience
Date: 
Message-ID: <c5mkno$1ki$06$1@news.t-online.com>
Andr� Thieme wrote:
> Well, as I understood it, ID says that information cannot appear out of
> nothing. You are describing some order/regularity (like structures that
> you can find in nature) and not about information. For example it would
> be hard to find some trees in nature who grew in a way (when observed
> from a helicopter) that they form the letters which itself are the source
> code of a scheme compiler.

good day..

I have been lurking here in the shadows for quite some time now, finally 
feeling the urge to step into the light (knowing that after posting i am 
probably going to regret this).

The difference between "Chaos" and "Information" lies in our 
understanding and interpretation. We create information by relating 
observations to past experiences and aquired knowledge.

more clearly:

If something is chaotic to us it is only because we are unable to see a 
pattern.
If something is information to us it is because we are able to see a 
pattern and can connect the observation made with our already aquired 
knowledge.

it is not really important that a letter has a sender who crammed a lot 
of stuff into it for us to read and understand. it is that we as the 
recipients can relate to the letter's content. while knowing that there 
was a sender who supposedly HAD to say SOMEthing in that letter makes us 
search more keenly for bits of information while reading it.. the 
crucial part is not the assembling, but the disassembling. we can 
disassemble almost anything the way we see fit.

if you and me were to hop into a helicopter, fly around until we found a 
wood, I would show you how out of the many trees you saw there, I could 
pick some out, shaping the character #\A.

have fun,
Andreas Scholta
From: Gareth McCaughan
Subject: Re: Sentience
Date: 
Message-ID: <8765c0978f.fsf@g.mccaughan.ntlworld.com>
Andr� Thieme wrote:

[I said:]
>> In the absence of a credible definition of "information",
>> that statement doesn't qualify as either true or false.
>> It's manifestly not true that intelligence is needed to
>> make anything analogous to "rolling a 3 100 times".
> 
> You can visit http://www.intelligentdesigner.de/ and contact the author
> to learn more about it.

Visiting the web site didn't do me much good; my
knowledge of German is what's sometimes called
"Liederdeutsche": it consists almost entirely of
text that's been set to music. So I know how to
say "The infinite expanse of heaven is your beloved
homeland" or "I do not complain, even though my
heart is breaking", but not "What time does the
number 43 bus leave?" or "Why does that prove
that an intelligent being was involved?". :-)

>>>What my examples hopefully illustrate: whenever we find even very simple
>>>structures like squares, letters, etc. we are certain that they were
>>>produced by an intelligence, because information cannot simply appear
>>>out of nothing.
>>
>> This is not true. It is very common to find patterns,
>> some of them very intricate and beautiful, that are well
>> explained in purely naturalistic terms. For instance,
>> naturally formed crystals make all sorts of elegant
>> patterns, which arise simply from the laws of physics.
>> (Similar patterns would doubtless arise from other
>> possible laws of physics, so the fact that the laws
>> of physics give rise to crystal growth doesn't seem
>> like good evidence that the laws of physics are themselves
>> the work of an intelligence.)
> 
> 
> I think my wording was very bad.
> Well, as I understood it, ID says that information cannot appear out of
> nothing. You are describing some order/regularity (like structures that
> you can find in nature) and not about information. For example it would
> be hard to find some trees in nature who grew in a way (when observed
> from a helicopter) that they form the letters which itself are the source
> code of a scheme compiler.

Well, as I say, a decent definition of "information" is
required, and I haven't seen anything that looks much like one
from the "intelligent design" crowd. (I admit that I haven't
spent very much time looking.)

-- 
Gareth McCaughan
.sig under construc
From: Gareth McCaughan
Subject: Re: Sentience
Date: 
Message-ID: <877jwf7e5q.fsf@g.mccaughan.ntlworld.com>
I wrote:

> Visiting the web site didn't do me much good; my
> knowledge of German is what's sometimes called
> "Liederdeutsche": it consists almost entirely of
> text that's been set to music.

<pedant>Oops. The final "e" shouldn't be there.</pedant>

-- 
Gareth McCaughan
.sig under construc
From: Ray Dillinger
Subject: OT Re: Sentience
Date: 
Message-ID: <407DE770.E4395065@sonic.net>
André Thieme wrote:

> Some people want to establish a new field of science, called
> Intelligent Design.

Excuse me, but Intelligent Design does not follow the scientific method.  
Please don't read what's below as an attack; it's not intended as one.
But I'm going to explain, point by point, what the scientific method *IS*,
and compare it to what the Intelligent Design people are doing.  This is 
not perjorative in any way; this is strictly a comparison.  My intent is
to show that Intelligent Design and the Scientific Method have so little
in common that they cannot, under any circumstances, be considered to be
the same thing. 


Scientists form hypotheses - this the Intelligent Design people have done.
   Of course, everybody does this.  Most of us call our hypotheses 
   opinions and don't pursue them using the scientific method.  I 
   personally form a lot of hypotheses when I watch seagulls at the beach,
   about what they're doing and why.  But that doesn't make me a scientist. 

Scientists then design experiments by using their hypothesis to try to 
   predict future events given some set of circumstances.  This the ID 
   people have not done; their sole hypothesis has no predictive power.

Scientists then perform experiments by either bringing about the 
   circumstances required by their experimental design and studying 
   the results, or by examining nature to find circumstances matching 
   their experimental design and studying the results from those.  
   Lacking any hypothesis with predictive power, the ID people 
   can design no experiments; therefore they cannot perform them.

If the results of experiments are at variance with the prediction 
   made on the basis of a hypothesis, scientists then abandon that 
   hypothesis.  Some may kick and moan and argue about it for years, 
   but eventually, a hypothesis which has less predictive power than 
   some other hypothesis is abandoned. This the ID people will not
   do.  In fact those who abandoned the sole hypothesis they've 
   advanced cease to *BE* the Intelligent Design people.

That which does not follow the scientific method is not science.
Science and the scientific method are not concepts that can be 
separated.  One who does not use the method is not a scientist.

Nobody can stop them from using the word "science" to describe 
what they are doing. But the fact is that what they are doing is 
not the scientific method.  Therefore the assertion is simply 
false.

That is not to state that the hypothesis they advance is false; 
in fact it may be true.  But it simply isn't a hypothesis about 
which science can be done.

				Bear
From: André Thieme
Subject: Re: OT Re: Sentience
Date: 
Message-ID: <c5m98u$lp7$1@ulric.tng.de>
Ray Dillinger wrote:

> Andr� Thieme wrote:
> 
> 
>>Some people want to establish a new field of science, called
>>Intelligent Design.
> 
> 
> Excuse me, but Intelligent Design does not follow the scientific method.

Yes, right, that's why I said they "want to establish" it.


> Please don't read what's below as an attack; it's not intended as one.
> But I'm going to explain, point by point, what the scientific method *IS*,
> and compare it to what the Intelligent Design people are doing.  This is 
> not perjorative in any way; this is strictly a comparison.  My intent is
> to show that Intelligent Design and the Scientific Method have so little
> in common that they cannot, under any circumstances, be considered to be
> the same thing. 

Thank you for your explanations.
I just want to state that I am not a follower of ID but just wanted to
point out to David that there are other possible options available.
Perhaps I spent too much time with talking about ID while for me the
link to Kurzweil was much more important ;)



> That which does not follow the scientific method is not science.
> Science and the scientific method are not concepts that can be 
> separated.  One who does not use the method is not a scientist.
> 
> Nobody can stop them from using the word "science" to describe 
> what they are doing. But the fact is that what they are doing is 
> not the scientific method.  Therefore the assertion is simply 
> false.

Yes, agreed. Some people came up with a definition for the word science
which is usually used when talking about it.
Of course this "official definition" is objectively not right, as any
definition by definition isn't, so other people could (if helpfull or
not) give another definition of it.


Andr�
--
From: Rahul Jain
Subject: Re: OT Re: Sentience
Date: 
Message-ID: <877jwe5fmk.fsf@nyct.net>
Andr� Thieme <······································@justmail.de> writes:

> Ray Dillinger wrote:
>
>> Andr� Thieme wrote:
>>
>>>Some people want to establish a new field of science, called
>>>Intelligent Design.
>> Excuse me, but Intelligent Design does not follow the scientific
>> method.
>
> Yes, right, that's why I said they "want to establish" it.
[...]
> Some people came up with a definition for the word science
> which is usually used when talking about it.
> Of course this "official definition" is objectively not right, as any
> definition by definition isn't, so other people could (if helpfull or
> not) give another definition of it.

No, the ID folks would be trying to establish a new language, where the
word "science" has been defined to mean something other than what it
means to the rest of us.

What matters in language is not the symbols used, but the fact that all
participants have agreed on some meaning for the symbols being used (or
are willing to adapt their communications to make it so). Refusing to
communicate in a way that is understandable to others is rarely
considered intelligent behavior. :)

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Tayssir John Gabbour
Subject: Re: OT Re: Sentience
Date: 
Message-ID: <866764be.0404151648.3fd85d15@posting.google.com>
Ray Dillinger <····@sonic.net> wrote in message news:<·················@sonic.net>...
> That which does not follow the scientific method is not science.
> Science and the scientific method are not concepts that can be 
> separated.  One who does not use the method is not a scientist.

http://en.wikipedia.org/wiki/Philosophy_of_science
From: Joe Marshall
Subject: Re: Sentience
Date: 
Message-ID: <smf55n2b.fsf@ccs.neu.edu>
Andr� Thieme <······································@justmail.de> writes:

> Another example:
> in a school you give the pupils some dice, and every pupil has to throw
> it 100 times and write down the number.
> If one kid has 100 times on his paper a "3" you would not believe him
> that he did it correctly. Because 100 times a 3 is information, not
> randomness. An intelligence is needed to create information.

Both papers have the same information.  One has a much smaller
Kolmogorov complexity.

You do not need intelligence to create information.  The laws of
thermodynamics (which is intimately related to information theory)
work just fine in the absence of an intelligent observer.

> Last example:
> You maybe heard that in the 1925 some people found eggs of dinosaurs
> during an expedition in the Gobi desert.  Funny was, the eggs were
> arranged in a square.  The conclusion of the paleontologists was:
> these eggs were already discovered by some humans before.

On the island of Kvadehuksletta one can find piles of rocks arranged
in perfect circles about a meter in radius.  However, it appears that
these are natural formations.

> What my examples hopefully illustrate: whenever we find even very simple
> structures like squares, letters, etc. we are certain that they were
> produced by an intelligence, because information cannot simply appear
> out of nothing. 

What about the `Face' on Mars?  

What about `happy face' spiders?
  http://biology.swau.edu/faculty/petr/ftphotos/hawaii/postcards/spiders/

How about the horrified Bryzoan Selenaria punctata?
  http://www.adelaide.edu.au/microscopy/services/instrumentation/gallery.html
  (see image 6)

> But then let us take the by far most complex structure in the known
> universe - the human brain.  Here we suddenly say it can develop
> from alone.

Who says that?  Most evolutionists believe that the brain evolved from
similar, but slightly less complex brains such as those found in other
primates.
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5m9en$lp7$2@ulric.tng.de>
Joe Marshall wrote:

>>But then let us take the by far most complex structure in the known
>>universe - the human brain.  Here we suddenly say it can develop
>>from alone.
> 
> 
> Who says that?  Most evolutionists believe that the brain evolved from
> similar, but slightly less complex brains such as those found in other
> primates.

I am not a follower of Intelligent Design, but they explain with statistical
arguments that it is not possible that a brain can evolve.
If you want to learn more about these arguments please google a bit,
as I cannot defend the position as I am not involved enough into the
thoughts behind it.


Andr�
--
From: Simon Alexander
Subject: Re: Sentience
Date: 
Message-ID: <m3pta9cggz.fsf@localhost.localdomain>
André Thieme <······································@justmail.de> writes:
> I am not a follower of Intelligent Design, but they explain with statistical
> arguments that it is not possible that a brain can evolve.
> If you want to learn more about these arguments please google a bit,
> as I cannot defend the position as I am not involved enough into the
> thoughts behind it.
> André

And if you google a bit,  you should be able to find competent rebuttle of 
essentially every claim the ID folk make.  The statistical arguments that I 
have seen from ID are naive, to be charitable.  They certainly haven't
"explained that it is not possible that a brain can evolve".

Simon.
From: Ray Dillinger
Subject: Re: Sentience
Date: 
Message-ID: <407EBFD4.CC5BCBDA@sonic.net>
André Thieme wrote:
> 
> I am not a follower of Intelligent Design, but they explain with statistical
> arguments that it is not possible that a brain can evolve.

This is odd, because simulated neural networks certainly can evolve. 
I use GA to develop new structure in ANN all the time!

				Bear
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5mfbj$pus$1@ulric.tng.de>
Ray Dillinger wrote:
> Andr� Thieme wrote:
> 
>>I am not a follower of Intelligent Design, but they explain with statistical
>>arguments that it is not possible that a brain can evolve.
> 
> 
> This is odd, because simulated neural networks certainly can evolve. 
> I use GA to develop new structure in ANN all the time!
> 
> 				Bear

I am not an intelligent Designer, but I suppose they don't see a problem
that NNs work, cause they were created by an intelligence.
I suppose Intelligent Design sees no problem with intelligence when some
intelligence was involved. What they would find odd if in a storm some
metal parts of a mountain fall down to the ground in a way that a computer
is created which runs a NN.  ;-)


Andr�
--
From: Mario S. Mommer
Subject: Re: Sentience
Date: 
Message-ID: <fzpta8befz.fsf@germany.igpm.rwth-aachen.de>
Andr� Thieme <······································@justmail.de> writes:
> Ray Dillinger wrote:
>> Andr� Thieme wrote:
>>
>>>I am not a follower of Intelligent Design, but they explain with statistical
>>>arguments that it is not possible that a brain can evolve.
>> This is odd, because simulated neural networks certainly can
>> evolve. I use GA to develop new structure in ANN all the time!
>> 				Bear
>
> I am not an intelligent Designer, but I suppose they don't see a problem
> that NNs work, cause they were created by an intelligence.
> I suppose Intelligent Design sees no problem with intelligence when some
> intelligence was involved. What they would find odd if in a storm some
> metal parts of a mountain fall down to the ground in a way that a computer
> is created which runs a NN.  ;-)

Well, I guess we all agree that this would indeed be odd. However,
this is not how evolution via natural selection works.

Andr�:

   1. Read Darwins "Origin of Species". He goes to very very great
      lengths to fundament his theory; it is very carefully laid out.

   2. Read his biography. He was not an enemy of the church, nor
      anything. In fact, the conflict between his faith and what he
      found through his research pretty much ruined his health. From
      his biography you would also learn how /very very/ carefully he
      made his observations.

      Evolution is not the product of a dogmatic world view (in
      contrast to this abomination called "inteligent design"). It is
      the product of extremely disciplined and honest science. Once
      you see with what care the underlying research was performed,
      you should be able to see that this is not just a crazy idea.

   3. Since you are into computers, try out some GA (genetic
      algorithm) programs for the solution of optimization
      problems. In particular, compare them with the approach of
      randomly generating solutions. While the GA produce usefull
      solutions pretty rapidly, just using the random number generator
      fails. This should once and for all show you that the belief
      that evolutions works purely at random is wrong. And so show you
      that all these arguments about throwing sand from the top of a
      building and the like are pure bulshit.

   4. While you work to understand the theory of evolution via natural
      selection, please stop posting this BS here. No, in fact, don't
      post this BS here, since it is off topic anyways.
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5palh$phk$1@ulric.tng.de>
Mario S. Mommer wrote:

>>I am not an intelligent Designer, but I suppose they don't see a problem
>>that NNs work, cause they were created by an intelligence.
>>I suppose Intelligent Design sees no problem with intelligence when some
>>intelligence was involved. What they would find odd if in a storm some
>>metal parts of a mountain fall down to the ground in a way that a computer
>>is created which runs a NN.  ;-)
> 
> 
> Well, I guess we all agree that this would indeed be odd. However,
> this is not how evolution via natural selection works.
> 
> Andr�:
> 
>    1. Read Darwins "Origin of Species". He goes to very very great
>       lengths to fundament his theory; it is very carefully laid out.
> 
>    2. Read his biography. He was not an enemy of the church, nor
>       anything. In fact, the conflict between his faith and what he
>       found through his research pretty much ruined his health. From
>       his biography you would also learn how /very very/ carefully he
>       made his observations.

It is a nice tip of you, however to be honest: I am not interested in
reading these books. I wished I would never have posted with too much
detail about some theories that I don't know myself good enough.
My intention was to give a hint into another direction. If anyone is
interested in a discussion about this topic I am sorry, I can't be a
discussion partner.
I am reading books about math and have not much time for other things.


>    3. Since you are into computers, try out some GA (genetic
>       algorithm) programs for the solution of optimization
>       problems.

In fact, GA are one of the areas that interest me a lot and I will put
a good amount of time into studying them (and other AI related topics).



> In particular, compare them with the approach of
 > randomly generating solutions.

With restrictions put by an intelligence.
So this is an algorithmic behaviour with some random components.


>       This should once and for all show you that the belief
>       that evolutions works purely at random is wrong. And so show you
>       that all these arguments about throwing sand from the top of a
>       building and the like are pure bulshit.

If you go back in time before even the first biologic molecule pattern
existed... how did it assembel if not after pure randomness?


>    4. While you work to understand the theory of evolution via natural
>       selection, please stop posting this BS here. No, in fact, don't
>       post this BS here, since it is off topic anyways.

Right, good suggestion. Lets come back to Lisp!


Andr�
--
From: Artem Baguinski
Subject: Re: Sentience
Date: 
Message-ID: <87llkxjg6h.fsf@caracolito.lan>
>>>>> "André" == André Thieme <······································@justmail.de> writes:

    André> David Steuber wrote:
    >> I've more or less discounted the existence of a divine creator.

    André> I am not so certain about this. There are some people who
    André> have very good arguments that we were created by an
    André> intelligence. If this intelligence is a living creature, or
    André> if it is some strange kind of "nature laws" is not sure.

    André> [...]

    André> What my examples hopefully illustrate: whenever we find
    André> even very simple structures like squares, letters, etc. we
    André> are certain that they were produced by an intelligence,
    André> because information cannot simply appear out of
    André> nothing. 

    And who has created the creator? 

-- 
gr{oe|ee}t{en|ings}
artm 
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <87ekqm5g5j.fsf@nyct.net>
Andr� Thieme <······································@justmail.de> writes:

> Another example:
> in a school you give the pupils some dice, and every pupil has to throw
> it 100 times and write down the number.
> If one kid has 100 times on his paper a "3" you would not believe him
> that he did it correctly. Because 100 times a 3 is information, not
> randomness. An intelligence is needed to create information.

Salt crystals are therefore the product of an intelligence.

I don't get it.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5sfdl$v1p$2@ulric.tng.de>
Rahul Jain wrote:

> Andr� Thieme <······································@justmail.de> writes:
> 
> 
>>Another example:
>>in a school you give the pupils some dice, and every pupil has to throw
>>it 100 times and write down the number.
>>If one kid has 100 times on his paper a "3" you would not believe him
>>that he did it correctly. Because 100 times a 3 is information, not
>>randomness. An intelligence is needed to create information.
> 
> 
> Salt crystals are therefore the product of an intelligence.
> 
> I don't get it.
> 

I think I made it very clear that I am not an intelligent designer and
probably chose a very stupid example. Anyway, I think they devide it up
into different categories. A salt crystal has some order but not 
information.


Andr�
--
From: John Thingstad
Subject: Re: Sentience
Date: 
Message-ID: <opr6ng5mj9xfnb1n@news.chello.no>
I don't remember the reference but I read somewhere about a german chemist 
creating
a self replicating molecule he altso allowd it to mutate.
Anyhow self replication is how you beet the odds. Even if the chance of 
creating
the molecule is very small you only need one success.
As I have said chaos is the rule and order the exception that survives 
over tie.
In the life simulation we have the flier gun as a example of a
pattern producing other patterns.

P� Sun, 18 Apr 2004 01:43:04 +0200, skrev Andr� Thieme 
<······························@justmail.de>:

> Rahul Jain wrote:
>
>> Andr� Thieme <······································@justmail.de> 
>> writes:
>>
>>
>>> Another example:
>>> in a school you give the pupils some dice, and every pupil has to throw
>>> it 100 times and write down the number.
>>> If one kid has 100 times on his paper a "3" you would not believe him
>>> that he did it correctly. Because 100 times a 3 is information, not
>>> randomness. An intelligence is needed to create information.
>>
>>
>> Salt crystals are therefore the product of an intelligence.
>>
>> I don't get it.
>>
>
> I think I made it very clear that I am not an intelligent designer and
> probably chose a very stupid example. Anyway, I think they devide it up
> into different categories. A salt crystal has some order but not 
> information.
>
>
> Andr�
> --



-- 
Sender med M2, Operas revolusjonerende e-postprogram: http://www.opera.com/
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5i1r1$2h3$1@ulric.tng.de>
Randy Yates wrote:

> Christian Lynbech <·················@ericsson.com> writes:
> 
> 
>>>>>>>"David" == David Steuber <·····@david-steuber.com> writes:
>>
>>David> Look at it this way.  How is a microprocessor controling a servo moter
>>David> any different from your brain controling your fingers?
>>
>>David> Mathematically, AI should be no more or less possible than NI.
>>
>>But if intelligence was intimately connected to the presence of a
>>divine soul, it would not be mathematical. Then all machines that we
>>would be able to build would lack a component that only the divine
>>being would be able to control and put into things.
> 
> 
> Hi Christian,
> 
> This is a great point. I would say that a "soul" is essentially
> a "will." We may eventually be able to make machines that are
> both "sentient" and "reasoning" (refer to a parallel post I just
> made), but without a "soul" such an entity's will, if existent,
> must be synthetic (e.g., "protect all humans from harm"). At least
> that's the way it seems to me. 

I don't understand what this "soul" would be doing with us.
It is interesting to see, that some people believe that there is
something like a soul.
Even today most brain-scientists are very sure that there is no such
thing like a free will. I would not be overly amazed if they can prove
it in 20 years.
On some beeings you can obviously see their "algorithmic" behaviour,
which has nothing to do with a soul. There is for example some fish
which is doing some "dance" whenever he sees a girl-fish. In an
experiment they showed a male fish a dummy of a female fish and he
started with his exact dance moves, and he did this always when the
dummy was showed to him. For such complex life forms like humans it is
nearly impossible for us to see structures (besides reflexes).

However, only because the human beeing is very complicated it does not
mean that we have some thing like a soul.
See also my other posting that explains why a soul can also not exist,
or, why we will also be able to program one. A soul is also nothing but
a "computer program".


Andr�
--
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <wu4jtjy8.fsf@ieee.org>
Andr� Thieme <······································@justmail.de> writes:

> Randy Yates wrote:
>
>> Christian Lynbech <·················@ericsson.com> writes:
>>
>>>>>>>>"David" == David Steuber <·····@david-steuber.com> writes:
>>>
>>>David> Look at it this way.  How is a microprocessor controling a servo moter
>>>David> any different from your brain controling your fingers?
>>>
>>>David> Mathematically, AI should be no more or less possible than NI.
>>>
>>>But if intelligence was intimately connected to the presence of a
>>>divine soul, it would not be mathematical. Then all machines that we
>>>would be able to build would lack a component that only the divine
>>>being would be able to control and put into things.
>> Hi Christian,
>> This is a great point. I would say that a "soul" is essentially
>> a "will." We may eventually be able to make machines that are
>> both "sentient" and "reasoning" (refer to a parallel post I just
>> made), but without a "soul" such an entity's will, if existent,
>> must be synthetic (e.g., "protect all humans from harm"). At least
>> that's the way it seems to me.
>
> I don't understand what this "soul" would be doing with us.

As I already intimated, it directs our will. It's the "top-level loop"
of our algorithm.

> It is interesting to see, that some people believe that there is
> something like a soul.
> Even today most brain-scientists are very sure that there is no such
> thing like a free will. 

Most scholars were sure the sun revolved around the earth.

> I would not be overly amazed if they can prove
> it in 20 years.
> On some beeings you can obviously see their "algorithmic" behaviour,
> which has nothing to do with a soul. There is for example some fish
> which is doing some "dance" whenever he sees a girl-fish. In an
> experiment they showed a male fish a dummy of a female fish and he
> started with his exact dance moves, and he did this always when the
> dummy was showed to him. For such complex life forms like humans it is
> nearly impossible for us to see structures (besides reflexes).
>
> However, only because the human beeing is very complicated it does not
> mean that we have some thing like a soul.
> See also my other posting that explains why a soul can also not exist,
> or, why we will also be able to program one. A soul is also nothing but
> a "computer program".

Had you couched this in more careful terms, I would have thought you
simply misguided rather than on a mission. To each his opinion.
-- 
%  Randy Yates                  % "Midnight, on the water... 
%% Fuquay-Varina, NC            %  I saw...  the ocean's daughter." 
%%% 919-577-9882                % 'Can't Get It Out Of My Head' 
%%%% <·····@ieee.org>           % *El Dorado*, Electric Light Orchestra
http://home.earthlink.net/~yatescr
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5k086$3kd$1@ulric.tng.de>
Randy Yates wrote:

>>It is interesting to see, that some people believe that there is
>>something like a soul.
>>Even today most brain-scientists are very sure that there is no such
>>thing like a free will. 
> 
> Most scholars were sure the sun revolved around the earth.

Here the issue is a bit different.
The scientists started out asuming that we have a free will.
Then they looked more detailed on the brain and came to the conclusion
that there is no place for a free will.

Anyway, free will is not possible as it would break "laws" of nature.
The molecules in your brain are forced to behave in a special way. So
they "force" you to act in a specific pattern (there is no will at all,
no person/intelligence who is deciding the pattern and making descisions
how you should act). If you had a free will you could change the way
how the molecules in your body assemble them self.
Although particle X has to move down because it was pushed into this
direction by another particle (or molecule, whatever) your will forces
it against physical principles to move into a different direction,
enabeling you to say what you "really" want - you would have a free will.


> Had you couched this in more careful terms, I would have thought you
> simply misguided rather than on a mission. To each his opinion.

I am sorry, my english is not enough developed...
I hope it will become better over time, so that my words sound less
offensive to others.


Andr�
--
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <k70ihthu.fsf@ieee.org>
Andr� Thieme <······································@justmail.de> writes:

> Randy Yates wrote:
>
>>>It is interesting to see, that some people believe that there is
>>>something like a soul.
>>>Even today most brain-scientists are very sure that there is no such
>>> thing like a free will.
>> Most scholars were sure the sun revolved around the earth.
>
> Here the issue is a bit different.
> The scientists started out asuming that we have a free will.
> Then they looked more detailed on the brain and came to the conclusion
> that there is no place for a free will.
>
> Anyway, free will is not possible as it would break "laws" of nature.
> The molecules in your brain are forced to behave in a special way. So
> they "force" you to act in a specific pattern (there is no will at all,
> no person/intelligence who is deciding the pattern and making descisions
> how you should act). 

So we shouldn't punish criminals since they were just acting 
based on the physical processes going on in their minds?
-- 
%  Randy Yates                  % "Midnight, on the water... 
%% Fuquay-Varina, NC            %  I saw...  the ocean's daughter." 
%%% 919-577-9882                % 'Can't Get It Out Of My Head' 
%%%% <·····@ieee.org>           % *El Dorado*, Electric Light Orchestra
http://home.earthlink.net/~yatescr
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5m9p2$m2e$1@ulric.tng.de>
Randy Yates wrote:

> So we shouldn't punish criminals since they were just acting 
> based on the physical processes going on in their minds?

Exactly this discussion will become very important as soon science
can prove that there is no thing such free will.
I suppose we will be forced to change the way how laws work.
We probably need to redefine several aspects of it..
As science evolves our understanding of what a crime is does evolve
too. Perhaps in 100 years some computers who are some billion times
more intelligent than we are can come up with a definition of crime
that describes this subject by the movement and patterns of some
molecule structures. Something which is extremly different from our
current understandings of it.

Perhaps they will describe some patterns... and when these patterns
come up some where (something we see as a "crime") other patterns
are started as a result of it: move some molecules (putting the
ciminal into a prison (or whatever might exist in 100 years)).
*shurgs*


Andr�
--
From: Artem Baguinski
Subject: Re: Sentience
Date: 
Message-ID: <87hdvljfzj.fsf@caracolito.lan>
>>>>> "Randy" == Randy Yates <·····@ieee.org> writes:

    >> Anyway, free will is not possible as it would break "laws" of
    >> nature.  The molecules in your brain are forced to behave in a
    >> special way. So they "force" you to act in a specific pattern
    >> (there is no will at all, no person/intelligence who is
    >> deciding the pattern and making descisions how you should act).

    Randy> So we shouldn't punish criminals since they were just
    Randy> acting based on the physical processes going on in their
    Randy> minds? 

    If there's no free will the constructions like "we should" or "we
    shouldn't" don't work: we don't decide to punish a criminal, we
    simply do it because laws of nature dictate us to.

-- 
gr{oe|ee}t{en|ings}
artm 
From: John Thingstad
Subject: Re: Sentience
Date: 
Message-ID: <opr6ici70fxfnb1n@news.chello.no>
So in other words the weather predictions, based on mathematical models, 
is predictable.
You fail to take into account the "butterfly effect". Highly complex 
behaviour is
possible even in simple systems. There is certainly room for "free will" 
in our
understanding of nature. A model of the human brain would be simular.
You can see general outline of predictable behaviour but would be unable 
to estimate
the exact flow of thought over time.

On Wed, 14 Apr 2004 20:34:53 +0200, Andr� Thieme 
<······································@justmail.de> wrote:

> Randy Yates wrote:
>
>>> It is interesting to see, that some people believe that there is
>>> something like a soul.
>>> Even today most brain-scientists are very sure that there is no such
>>> thing like a free will.
>>
>> Most scholars were sure the sun revolved around the earth.
>
> Here the issue is a bit different.
> The scientists started out asuming that we have a free will.
> Then they looked more detailed on the brain and came to the conclusion
> that there is no place for a free will.
>
> Anyway, free will is not possible as it would break "laws" of nature.
> The molecules in your brain are forced to behave in a special way. So
> they "force" you to act in a specific pattern (there is no will at all,
> no person/intelligence who is deciding the pattern and making descisions
> how you should act). If you had a free will you could change the way
> how the molecules in your body assemble them self.
> Although particle X has to move down because it was pushed into this
> direction by another particle (or molecule, whatever) your will forces
> it against physical principles to move into a different direction,
> enabeling you to say what you "really" want - you would have a free will.
>
>
>> Had you couched this in more careful terms, I would have thought you
>> simply misguided rather than on a mission. To each his opinion.
>
> I am sorry, my english is not enough developed...
> I hope it will become better over time, so that my words sound less
> offensive to others.
>
>
> Andr�
> --



-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <87y8ou40mb.fsf@nyct.net>
John Thingstad <··············@chello.no> writes:

> So in other words the weather predictions, based on mathematical models,
> is predictable.
> You fail to take into account the "butterfly effect". Highly complex
> behaviour is
> possible even in simple systems. There is certainly room for "free will"
> in our
> understanding of nature. A model of the human brain would be simular.
> You can see general outline of predictable behaviour but would be unable
> to estimate
> the exact flow of thought over time.

Not being able to measure the initial state of a system is a different
situation than not being able to predict the final state of a system
given a perfect measurement of the initial state. The key here is that
the initial state of any physical system is a set of probabilities, not
a specific set of coordinates. (At least according to our understanding
of physics so far.)

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Coby Beck
Subject: Re: Sentience
Date: 
Message-ID: <c5oc6b$2kd9$1@otis.netspace.net.au>
"Andr� Thieme" <······································@justmail.de> wrote in
message ·················@ulric.tng.de...
> Anyway, free will is not possible as it would break "laws" of nature.
> The molecules in your brain are forced to behave in a special way. So
> they "force" you to act in a specific pattern (there is no will at all,

If you are so married to a scientific and determinist world view that you
would truly believe this insanity I bring you a message of hope!  Classical
mechanics has been overthrown, the behaviour of individual particles can not
be predicted and in fact have no precise location and velocity, only
probabilities.  The universe cannot be adequately modeled as a bunch of
pilliard balls bouncing of each other on a giant 3-d pool table.  So perhaps
"free will" is ultimately a way of "loading the quantum dice?"

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5pce2$qp8$1@ulric.tng.de>
Coby Beck wrote:

> "Andr� Thieme" <······································@justmail.de> wrote in
> message ·················@ulric.tng.de...
> 
>>Anyway, free will is not possible as it would break "laws" of nature.
>>The molecules in your brain are forced to behave in a special way. So
>>they "force" you to act in a specific pattern (there is no will at all,
> 
> 
> If you are so married to a scientific and determinist world view that you
> would truly believe this insanity I bring you a message of hope!  Classical
> mechanics has been overthrown, the behaviour of individual particles can not
> be predicted and in fact have no precise location and velocity, only
> probabilities.  The universe cannot be adequately modeled as a bunch of
> pilliard balls bouncing of each other on a giant 3-d pool table.  So perhaps
> "free will" is ultimately a way of "loading the quantum dice?"

The universe has derterministic and random components. So I personally
think that our brain also follows these patterns. And I see no room for
a free will. The molecules of our brains have to follow some physical
laws for their movement and this way determine to some degree of what
can happen next. From this side we can't have a free will. Having one
would mean that we can force some molecules to go into specific positions.
Now let me look at the random component.
Quantum physics tells us that in the world of the smallest parts of
our universe strange things can happen. Particles move without any
reason, not following the classical "laws". So is our free will
hidden there? Can we manipulate these random behaviour and force the
molecules into specific positions?
In my opinion this is not possible. If we could do it there would be
some patterns in the new molecule movement, at least if we won't act
100% random. So with a statistical analyses we would again see patterns
in the behaviour and can put them into algorithms.
Anyway, although some randomness is going on in our heads, it does not
seem that we act in some "random ways".

(I hope I could explain my point with my vocabulary)


Andr�
--
From: Coby Beck
Subject: Re: Sentience
Date: 
Message-ID: <c5pugg$bjs$1@otis.netspace.net.au>
"Andr� Thieme" <······································@justmail.de> wrote in
message ·················@ulric.tng.de...
> Coby Beck wrote:
>
> > "Andr� Thieme" <······································@justmail.de>
wrote in
> > message ·················@ulric.tng.de...
> >
> >>Anyway, free will is not possible as it would break "laws" of nature.
> >>The molecules in your brain are forced to behave in a special way. So
> >>they "force" you to act in a specific pattern (there is no will at all,
> >
> >
> > If you are so married to a scientific and determinist world view that
you
> > would truly believe this insanity I bring you a message of hope!
Classical
> > mechanics has been overthrown, the behaviour of individual particles can
not
> > be predicted and in fact have no precise location and velocity, only
> > probabilities.  The universe cannot be adequately modeled as a bunch of
> > pilliard balls bouncing of each other on a giant 3-d pool table.  So
perhaps
> > "free will" is ultimately a way of "loading the quantum dice?"
>
> The universe has derterministic and random components. So I personally
> think that our brain also follows these patterns. And I see no room for
> a free will. The molecules of our brains have to follow some physical
> laws for their movement and this way determine to some degree of what
> can happen next. From this side we can't have a free will. Having one
> would mean that we can force some molecules to go into specific positions.

I can do that.

> Now let me look at the random component.
> Quantum physics tells us that in the world of the smallest parts of
> our universe strange things can happen. Particles move without any
> reason, not following the classical "laws". So is our free will
> hidden there? Can we manipulate these random behaviour and force the
> molecules into specific positions?

If you need to have a scientific explanation to accept being alive then this
is where you may find it.  Personally I find quantum mechanics to be the
most delightful and surprising joke nature has played on us.  It takes the
methodologies of the most cold, rational and mechanical world view ever
devised and turns it on its head with bizarre almost mystical theories that
fly in its own face yet, and here's the punchline, they still work.

> In my opinion [manipulating molecules] is not possible. If we could
> do it there would be some patterns in the new molecule movement

I see a pattern.  I read your post and suddenly desire another sip of coffee
and then all the molecules in my arm move together and bring the mug up to
my mouth.  Isn't that enough to convince you?

Honestly, I enjoy as much as the next guy philisophical discussions about
"fate" and "free will" but trying to scientifically reason away your own
ability to think and to chose is madness.

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5q2uu$aug$1@ulric.tng.de>
Coby Beck wrote:

>>The universe has derterministic and random components. So I personally
>>think that our brain also follows these patterns. And I see no room for
>>a free will. The molecules of our brains have to follow some physical
>>laws for their movement and this way determine to some degree of what
>>can happen next. From this side we can't have a free will. Having one
>>would mean that we can force some molecules to go into specific positions.
> 
> 
> I can do that.

Where does the energy come from to do that?
What is it that enables our brains to act completely different (not
following the "laws of nature") than the rest of the universe?



>>In my opinion [manipulating molecules] is not possible. If we could
>>do it there would be some patterns in the new molecule movement
> 
> I see a pattern.  I read your post and suddenly desire another sip of coffee
> and then all the molecules in my arm move together and bring the mug up to
> my mouth.  Isn't that enough to convince you?

Nope, because, if you see a pattern it means that (if you know the
pattern well enough (which is not possible because this issue is too
complex (which does not mean that magic is involved))) your behaviour
was predictable. At least a billionths of a second before you took your
coffee it was already certain that you will take it. Lets call it state
X of your brain. A billionths of a second before it was certain that in
a billionths of a second later your brain will be in state X, etc...
Only because there are too many factors (which makes a calculation
impossible) there is still no need for an area inside the universe (the
inner parts of our heads) that does not work like the outer part of this
area, at least I don't see why it should be this way.
Of course, when looking at the issue of a free will from a macroscopic
point of view then we have one. Anyway, if we go deeper and deeper and
look more into it, then I personally am not convinced anymore, that we
really have the ability to do "magic". Magic here means do things which
should be impossible, for example forcing some billions of molecules
into some specific states by creating energy out of nothing and using it.


> Honestly, I enjoy as much as the next guy philisophical discussions about
> "fate" and "free will" but trying to scientifically reason away your own
> ability to think and to chose is madness.

My arguments are much, but not scientific. However, for me they sound
very logical. I see little building tools from which the universe is
made up. Like some very basic (and orthogonal) functionality like Lisp
offers. It is easy to understand them as a single concept. And when we
start to build some complex structures by combining these basic (small)
concepts we get something more complex which is still easy to understand.
I see the smallest biological organisms and their highly algorithmical
behaviour, so that no scientists says they think. So if there are only
very small life forms, they can't think and therefore have no free will.

Now the brain (the Lisp program) becomes bigger and bigger and our short
time memory can no longer hold all necessary information about the issue
at one time - things starting to look more complex.
If someone would look at a Lisp application which counts maybe 700
trillion lines of code it would surely not be an easy task to understand
it.

Perhaps my thoughts about free will have an emotional nature. I want
that "real" artificial intelligence is possible and therefore create my
own ideas about how the mind works and put it in a way how I like it
more, just to feel better. If we really have a free will, then I see not
many chances to transfer it to a computer.
Anyway, I am very convinced that this is possible.


Andr�
--
From: Coby Beck
Subject: Re: Sentience
Date: 
Message-ID: <c5qhmj$l59$1@otis.netspace.net.au>
"Andr� Thieme" <······························@justmail.de> wrote in message
·················@ulric.tng.de...
> Coby Beck wrote:
>
> >>The universe has derterministic and random components. So I personally
> >>think that our brain also follows these patterns. And I see no room for
> >>a free will. The molecules of our brains have to follow some physical
> >>laws for their movement and this way determine to some degree of what
> >>can happen next. From this side we can't have a free will. Having one
> >>would mean that we can force some molecules to go into specific
positions.
> >
> >
> > I can do that.
>
> Where does the energy come from to do that?

Well, from the Big Bang ultimately.  Before that is a more difficult
question!

Not meaning to be facetious, but I don't think it is a problem to trace the
flow of energy from muscle contractions to the food we eat to the sun etc...

> What is it that enables our brains to act completely different (not
> following the "laws of nature") than the rest of the universe?

Well, I don't think they do.  We just need much more intricate "laws of
nature" if we wish to describe these things in those terms.

I'm reminded of some article or discussion where it was described how
applying what we know of cosmology and physics to our best understanding of
the very early universe (ie fractions of a second old) resulted in the
shocking notion that the probabilty of the very existence of life at all was
1 in 10^(very big number).  Interesting to contemplate, but rather than
being overwhelmed by the philisophical implications of such an idea,
shouldn't we think that there is an even higher probability that we don't
know everything yet?

> >>In my opinion [manipulating molecules] is not possible. If we could
> >>do it there would be some patterns in the new molecule movement
> >
> > I see a pattern.  I read your post and suddenly desire another sip of
coffee
> > and then all the molecules in my arm move together and bring the mug up
to
> > my mouth.  Isn't that enough to convince you?
>
> Nope, because, if you see a pattern it means that (if you know the
> pattern well enough (which is not possible because this issue is too
> complex (which does not mean that magic is involved))) your behaviour
> was predictable.

But this is the essential concept of the uncertainty principle, it is not
that "if only we had accurate enough measurements" it is that there *is* no
precision.  So it is not that it is so complex we can not model it precisely
it is that there is no precision.

> At least a billionths of a second before you took your
> coffee it was already certain that you will take it. Lets call it state
> X of your brain. A billionths of a second before it was certain that in
> a billionths of a second later your brain will be in state X, etc...

The naughty bits are somewhere in the cause of biochemical fluctuations and
all the firing of neurons, not in the actual motion of the matter.

> Of course, when looking at the issue of a free will from a macroscopic
> point of view then we have one. Anyway, if we go deeper and deeper and
> look more into it, then I personally am not convinced anymore, that we
> really have the ability to do "magic". Magic here means do things which
> should be impossible, for example forcing some billions of molecules
> into some specific states by creating energy out of nothing and using it.

Don't you think it is magic that particles can appear out of absolutely
nothing, even if they don't live long?  They can, and do, according to
quantum mechanics.  Life is just a bit more of that magic.

> > Honestly, I enjoy as much as the next guy philisophical discussions
about
> > "fate" and "free will" but trying to scientifically reason away your own
> > ability to think and to chose is madness.
>
> My arguments are much, but not scientific. However, for me they sound
> very logical. I see little building tools from which the universe is
> made up. Like some very basic (and orthogonal) functionality like Lisp
> offers. It is easy to understand them as a single concept. And when we
> start to build some complex structures by combining these basic (small)
> concepts we get something more complex which is still easy to understand.
> I see the smallest biological organisms and their highly algorithmical
> behaviour, so that no scientists says they think.

It is no more algorithmical than our own behaviour when analyzed as a
sufficiently large group.  I don't agree with this.  But they do live, I
think we would all agree with that.  Yet can any scientist really describe
what the essential difference is between a live ameoba and a dead one?

> So if there are only
> very small life forms, they can't think and therefore have no free will.

Well, I don't know.  If one can not predict which direction the ameoba will
swim next isn't it "chosing" at some level?

> Now the brain (the Lisp program) becomes bigger and bigger and our short
> time memory can no longer hold all necessary information about the issue
> at one time - things starting to look more complex.
> If someone would look at a Lisp application which counts maybe 700
> trillion lines of code it would surely not be an easy task to understand
> it.
>
> Perhaps my thoughts about free will have an emotional nature. I want
> that "real" artificial intelligence is possible and therefore create my
> own ideas about how the mind works and put it in a way how I like it
> more, just to feel better. If we really have a free will, then I see not
> many chances to transfer it to a computer.
> Anyway, I am very convinced that this is possible.

I too, like to believe it is possible.
-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5sf0c$v1p$1@ulric.tng.de>
Coby Beck wrote:

>>>I can do that.
>>
>>Where does the energy come from to do that?
> 
> Well, from the Big Bang ultimately.  Before that is a more difficult
> question!

I should have been more specific...
I meant:
the molecules in your brain are following the "laws of nature" and
therefore it is (aside from the random part) certain where they have to
move in the next moment. If it is always certain where the molecules will
be in the next moment then it is already certain for any amount of time.
The shorter the period of time we are looking at, the more probable our
prediction would be, as the random behaviour of molecules have less
possibilities to work. Anyway, if it is determined how the molecules will
move we cannot have a free will. So to have one we would need to be able
to influence the flow of molecules in our brain, because a specific state
of molecules means a specific thought/action of us. As we need to move
the molecules in our brain into specific states we need energy to do so.
Although molecule X should move to the "left" it needs to fly to the right
side to allow us to think about a special thought or to start some
specific activity. We first need to break the "laws" of normal behaviour
(the molecule should fly to the left) and we need some extra energy for
doing that. And we can not use the energy we are usually using for making
the brain work, as exactly this energy forces the molecule to fly to the
left.



>>What is it that enables our brains to act completely different (not
>>following the "laws of nature") than the rest of the universe?
> 
> Well, I don't think they do.  We just need much more intricate "laws of
> nature" if we wish to describe these things in those terms.
> 
> I'm reminded of some article or discussion where it was described how
> applying what we know of cosmology and physics to our best understanding of
> the very early universe (ie fractions of a second old) resulted in the
> shocking notion that the probabilty of the very existence of life at all was
> 1 in 10^(very big number).  Interesting to contemplate, but rather than
> being overwhelmed by the philisophical implications of such an idea,
> shouldn't we think that there is an even higher probability that we don't
> know everything yet?

I agree here. What we call laws of nature are only some tries to describe
what happens around us, which of course has some mistakes as we cannot
ask the universe every question.
Perhaps there really is an objective world out there. I don't know, I
suppose yes. Anyway, I am certain that we will never know what really is
the source of everything, so for us the universe will always stay in a
relative way.


>>Nope, because, if you see a pattern it means that (if you know the
>>pattern well enough (which is not possible because this issue is too
>>complex (which does not mean that magic is involved))) your behaviour
>>was predictable.
> 
> But this is the essential concept of the uncertainty principle, it is not
> that "if only we had accurate enough measurements" it is that there *is* no
> precision.  So it is not that it is so complex we can not model it precisely
> it is that there is no precision.

Obviously we have some determined part inside us. The uncertainity of
the universe has not an overly big effect on us, as we don't act very
randomly. Imagine how crazy the world would be if only 2% of our behaviour
would be controled by quantum mechanics. The randomness which comes from
there has only an extremly low impact on us.

The determined part of our brain offers no room for a free will, as this
part would then no longer be determined.
But as I pointed out even the random parts, the last chance where free
will could hide, does not offer us this room. Even if it would, it would
mean that the impact of our free will influences us to less than 1% I
guess. The sad thing I see is, that we don't even have this 1% of free
will, at least when looking at it from a very deep level, from the base
of molecule movement in our brain.
If we had a free will it would mean that we can control the random
increases of energy in same parts of the brain, some particles that
exist for short moments etc. But if we would then use the control to do
something specific it would no longer be random. The uncertainity
principle would no longer be in effect as we could exactly measure
a particle. With our free will we could force it to be exactly observed.


>>At least a billionths of a second before you took your
>>coffee it was already certain that you will take it. Lets call it state
>>X of your brain. A billionths of a second before it was certain that in
>>a billionths of a second later your brain will be in state X, etc...
> 
> 
> The naughty bits are somewhere in the cause of biochemical fluctuations and
> all the firing of neurons, not in the actual motion of the matter.

The question is how a neuron is built up. I suppose it also is built up
out of some basic matter (which we understand). These basic parts which
are easy to understand if we look at them alone get some new abilities
as soon many of them are mixed and get organized in some specific pattern
(for example in a neuron). The firing is also some electricity which
also consists of quantum particles, mostly electrons.
For our constricted mind it is of course not possible to understand the
functionality on such a basic basis. But I guess that in 100 years, when
some ultra intelligent computers exist there are ways to come closer to
such an understanding.
(if these computers will exist *g*)


>>Of course, when looking at the issue of a free will from a macroscopic
>>point of view then we have one. Anyway, if we go deeper and deeper and
>>look more into it, then I personally am not convinced anymore, that we
>>really have the ability to do "magic". Magic here means do things which
>>should be impossible, for example forcing some billions of molecules
>>into some specific states by creating energy out of nothing and using it.
> 
> Don't you think it is magic that particles can appear out of absolutely
> nothing, even if they don't live long?  They can, and do, according to
> quantum mechanics.  Life is just a bit more of that magic.

It looks to me like magic too sometimes. I am even "shocked" that I can
use my mobile phone to send sms. How can the antenna which is hundreds
of meters away from me "know" what text I typed on my phone?
And then it even starts to do something, so that another phone, hundreds
of kilometers away also knows what I was typing.

Anyway, from a less emotional perspective I must say there is no magic
involved. Either everything is magic or nothing I would say.


>>>Honestly, I enjoy as much as the next guy philisophical discussions
> 
> about
> 
>>>"fate" and "free will" but trying to scientifically reason away your own
>>>ability to think and to chose is madness.
>>
>>My arguments are much, but not scientific. However, for me they sound
>>very logical. I see little building tools from which the universe is
>>made up. Like some very basic (and orthogonal) functionality like Lisp
>>offers. It is easy to understand them as a single concept. And when we
>>start to build some complex structures by combining these basic (small)
>>concepts we get something more complex which is still easy to understand.
>>I see the smallest biological organisms and their highly algorithmical
>>behaviour, so that no scientists says they think.
> 
> 
> It is no more algorithmical than our own behaviour when analyzed as a
> sufficiently large group.  I don't agree with this.  But they do live, I
> think we would all agree with that.  Yet can any scientist really describe
> what the essential difference is between a live ameoba and a dead one?

Very good question, I don't know the answer (I guess you know that they
can't tell it?). In fact there are so many things we need to learn.


>>So if there are only
>>very small life forms, they can't think and therefore have no free will.
> 
> 
> Well, I don't know.  If one can not predict which direction the ameoba will
> swim next isn't it "chosing" at some level?

If you say "at some level" I have no problem at all to agree.
Also we humans have a free will, when looked at from our macroscopic level.
My mind does not offer me to see what really happens, deep in my mind
with it trillions of particles in it. For me everything looks as if I
want it.
 From this perspective also our computers will have a free will. I
suppose they will be very sad if you tell them that they have not one
and they will probably start to argue with you and find so good
arguments, that they will convince us they have. Even todays computers
could "think" about themselves that they have a free will. If I let it
run in a for loop and count until 1 billion it could be interpreted that
the machine has the extremly strong need for starting to count to 1
billion :)
As soon it stops it is no longer motivated to count. In fact, it might
be very interested to not count again. But then I start the for loop
again and suddenly an unresistable motivation comes up in my machine and
it begins to count again, while not knowing where this feeling is coming
from - for the machine it looks like the free will... it /wanted/ to
count.

Of course, this is a very abstract level to look at the programs of
todays complexity. Although I must say that I regard some programs as
intelligent.



>>Now the brain (the Lisp program) becomes bigger and bigger and our short
>>time memory can no longer hold all necessary information about the issue
>>at one time - things starting to look more complex.
>>If someone would look at a Lisp application which counts maybe 700
>>trillion lines of code it would surely not be an easy task to understand
>>it.
>>
>>Perhaps my thoughts about free will have an emotional nature. I want
>>that "real" artificial intelligence is possible and therefore create my
>>own ideas about how the mind works and put it in a way how I like it
>>more, just to feel better. If we really have a free will, then I see not
>>many chances to transfer it to a computer.
>>Anyway, I am very convinced that this is possible.
> 
> 
> I too, like to believe it is possible.

I am so much hoping it and convinced about it that I will spend very big
parts of my life for working towards this goal.
Interestingly AI several times has been a reason for me to look out for
new programming languages. I read about this strange "Lisp", but it was
too old for me. I wanted a modern language. Then over some other ways
(Paul Graham for example) I stumbled again over it and this time decided
(with my free will (hey *g*)) to spend time with it. I am so impressed
that it already is my favourite language and I am sorry for having been
such an idiot, not trying it earlier when I heared about it.
Anyway, mistakes from the past are corrected now, and this is a good thing.


Andr�
--
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <87n05aynoa.fsf@nyct.net>
Andr� Thieme <······························@justmail.de> writes:

> the molecules in your brain are following the "laws of nature" and
> therefore it is (aside from the random part) certain where they have to
> move in the next moment. If it is always certain where the molecules will
> be in the next moment then it is already certain for any amount of time.
> The shorter the period of time we are looking at, the more probable our
> prediction would be, as the random behaviour of molecules have less
> possibilities to work. Anyway, if it is determined how the molecules will
> move we cannot have a free will. So to have one we would need to be able
> to influence the flow of molecules in our brain, because a specific state
> of molecules means a specific thought/action of us. As we need to move
> the molecules in our brain into specific states we need energy to do so.
> Although molecule X should move to the "left" it needs to fly to the right
> side to allow us to think about a special thought or to start some
> specific activity. We first need to break the "laws" of normal behaviour
> (the molecule should fly to the left) and we need some extra energy for
> doing that. And we can not use the energy we are usually using for making
> the brain work, as exactly this energy forces the molecule to fly to the
> left.

In other words, even though the expected behavior of the brain would be
to send the molecule to the right, its normal behavior would be to push
it to the left? Eh? Why do you assume that the brain is only allowed to
send molecules in one direction and not the other? All it needs to do is
rearrange the electrical fields inside of it in order to effect that
change. Why can't the brain have any energy to do that? Why can it have
the energy to push the molecule in the other direction?


> Obviously we have some determined part inside us. The uncertainity of
> the universe has not an overly big effect on us, as we don't act very
> randomly. Imagine how crazy the world would be if only 2% of our behaviour
> would be controled by quantum mechanics. The randomness which comes from
> there has only an extremly low impact on us.

Quantum mechanics is just the best explanation of how we understand the
universe to work. If less than 2% of our behavior is explainable using
the physics we know, then how do we know that the other 98+% is not
random? We don't know anything about it.

> But as I pointed out even the random parts, the last chance where free
> will could hide, does not offer us this room. Even if it would, it would
> mean that the impact of our free will influences us to less than 1% I
> guess. The sad thing I see is, that we don't even have this 1% of free
> will, at least when looking at it from a very deep level, from the base
> of molecule movement in our brain.

Just because we don't understand quantum decoherence doesn't mean that
it's irrelevant to the question of free will.

> But if we would then use the control to do something specific it would
> no longer be random. The uncertainity principle would no longer be in
> effect as we could exactly measure a particle. With our free will we
> could force it to be exactly observed.

Free will doesn't mean that we get to choose the laws of physics. This
is a common misconception espoused by mathematicians who were let out
into the real world. :)

> The question is how a neuron is built up. I suppose it also is built up
> out of some basic matter (which we understand).

Why do you assume we are omnicient?

> These basic parts which are easy to understand if we look at them
> alone

A particle that never interacts with any other particles isn't worth
understanding, as it doesn't have any effect on the universe. It didn't
come from the universe, for one thing.

> get some new abilities as soon many of them are mixed and get
> organized in some specific pattern (for example in a neuron).

Indeed the "only" abilities they get are in the ability to interact with
each other as well as other particles and have some effect on the
universe.

> The firing is also some electricity which also consists of quantum
> particles, mostly electrons.

Ok... what do you define as a non-quantum particle? Electricity does not
consist of electrons, but rather, of photons.

> For our constricted mind it is of course not possible to understand the
> functionality on such a basic basis.

Speak for yourself. ;)

But you're right. We can't even reason fully about the behavior of a
system of two electrons and one proton. And that's once we assume that
the proton is atomic (as opposed to being part of an atom -- (ObLisp) a
symbol instead of a string).

> But I guess that in 100 years, when
> some ultra intelligent computers exist there are ways to come closer to
> such an understanding.
> (if these computers will exist *g*)

Maybe those will help. Maybe not.

>> It is no more algorithmical than our own behaviour when analyzed as a
>> sufficiently large group.  I don't agree with this.  But they do live, I
>> think we would all agree with that.  Yet can any scientist really describe
>> what the essential difference is between a live ameoba and a dead one?
>
> Very good question, I don't know the answer (I guess you know that they
> can't tell it?). In fact there are so many things we need to learn.

Or maybe we'll learn that the distinction is just a figment of our own
imaginations (but we may understand those in much more concrete terms as
a result ;).

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Thomas Stegen
Subject: Re: Sentience
Date: 
Message-ID: <c5uf9g$5pvn9$1@ID-228872.news.uni-berlin.de>
Rahul Jain wrote:
> 
> In other words, even though the expected behavior of the brain would be
> to send the molecule to the right, its normal behavior would be to push
> it to the left? Eh? Why do you assume that the brain is only allowed to
> send molecules in one direction and not the other? All it needs to do is
> rearrange the electrical fields inside of it in order to effect that
> change. Why can't the brain have any energy to do that? Why can it have
> the energy to push the molecule in the other direction?

But what is that decides which direction to push the molecule in the
first place? Ok, so it depends on which direction the force is applied.
What determines the direction of this force?

Rearranging the electrical fields necessarily cannot happen just out
of the blue, it needs to be caused by something. And that something
needs to be caused by something else again. I cannot see past this
infinite regress at the moment.

As an analogy. A pool table which has pins coming out of the surface at
random intervals and random places, someone breaks of. It is impossible
to tell the final configuration of the balls by knowing the initial
conditions (force and position of balls), but it is also impossible
to do anything about the final state of the table by using the forces
that are in motion on the table once the break has been made.

Free will seems to depend on some actions happening without being caused
by something else, and for me that does not seem to work. Maybe the
complex behaviour we see are just a result of state and input? Seems
plausible to me really. There is nothing that really indicates that
self awareness, conscioussness and intelligence is special in any way.

It is clear that our brains are at least as powerful as Turing machines,
who is to say that self awareness is not just another state or set of
states? I don't know, the last few days I have been thinking that it
certainly feels like I have free will, but then again, once I make a
choice there is no way to know if I could have made a different choice.

So essentially, indeterminism does not in any way result in free will.
Maybe human interaction is just quantum physics at a macroscopic level.
All the randomness which comes from the uncertainty principle and
friends ultimately results in amazing structures such as stars, planets,
rocks, whatever. Maybe once a structure exists which allows these
quantum effects to escape the microsopic level and enter the macroscopic
domain they will result in a different type of amazing structures. This
enabling structure might be life and at least on this planet the human
brain is the most powerful of them all. The structures that result from
this again are buildings, aeroplanes space travel and so on. All highly
regular in ways that are found almost nowhere else in nature, but
still a result of nature.

So maybe the universe tends to generate these enabling structures,
and we might just be a step on the way. When we create an intelligence
superior to us we have just let the universe take one step up...
Not that this is predetermined in any conscious way, it is just the
result of the way thing works.

-- 
Thomas.
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <877jwcvlpk.fsf@nyct.net>
Thomas Stegen <·······@cis.strath.ac.uk> writes:

> But what is that decides which direction to push the molecule in the
> first place? Ok, so it depends on which direction the force is applied.
> What determines the direction of this force?

The configuration of ions the neuron has maintained.

> Rearranging the electrical fields necessarily cannot happen just out
> of the blue, it needs to be caused by something. And that something
> needs to be caused by something else again. I cannot see past this
> infinite regress at the moment.

Ok, but you can't conclude that the brain can't possibly have enough
energy to do this from what you have said.

> Free will seems to depend on some actions happening without being caused
> by something else, and for me that does not seem to work. Maybe the
> complex behaviour we see are just a result of state and input? Seems
> plausible to me really. There is nothing that really indicates that
> self awareness, conscioussness and intelligence is special in any way.

As I said, there is much we don't know, and quantum decoherence could be
significant here.

> It is clear that our brains are at least as powerful as Turing machines,
> who is to say that self awareness is not just another state or set of
> states? I don't know, the last few days I have been thinking that it
> certainly feels like I have free will, but then again, once I make a
> choice there is no way to know if I could have made a different choice.

Sure, it's possible. That's why people are still doing research into the
matter. :)

> So essentially, indeterminism does not in any way result in free will.

It depends on what it's indeterminate relative to.

> Maybe human interaction is just quantum physics at a macroscopic level.

Well, that's all we know, so as far as we know, that's all it can be.

> All the randomness which comes from the uncertainty principle and
> friends

Actually, uncertainty adds no randomness. It's decoherence that causes
it, and we have no clue how it behaves.

> ultimately results in amazing structures such as stars, planets,
> rocks, whatever. Maybe once a structure exists which allows these
> quantum effects to escape the microsopic level and enter the macroscopic
> domain they will result in a different type of amazing structures.

I don't know what you mean by "macroscopic", but we surely can measure
the macroscopic effects of quantum mechanics. Just listen to a geiger
counter... or look at a light bulb... or watch billiard balls bouncing
off each other. There mere existence of discrete pieces of matter is a
quantum effect.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Thomas Stegen
Subject: Re: Sentience
Date: 
Message-ID: <c6096u$6agi8$1@ID-228872.news.uni-berlin.de>
Rahul Jain wrote:
> Thomas Stegen <·······@cis.strath.ac.uk> writes:
> 
> 
>>But what is that decides which direction to push the molecule in the
>>first place? Ok, so it depends on which direction the force is applied.
>>What determines the direction of this force?
> 
> 
> The configuration of ions the neuron has maintained.

And this configuration comes from where? This is the point I think,
this configuration is also caused by something, which again is
caused by something etc.

> 
> 
>>Rearranging the electrical fields necessarily cannot happen just out
>>of the blue, it needs to be caused by something. And that something
>>needs to be caused by something else again. I cannot see past this
>>infinite regress at the moment.
> 
> 
> Ok, but you can't conclude that the brain can't possibly have enough
> energy to do this from what you have said.

Oh, the brain probably has enough energy. But how is this energy
applied? Do you have any free will when it comes to that? And if
you do what causes you to be able to choose where energy is applied?

I know I ask a lot of questions here, but they are rhethorical.
I just have trouble coming up with statements as I really don't know :)

> 
>>So essentially, indeterminism does not in any way result in free will.
> 
> 
> It depends on what it's indeterminate relative to.
> 

I think it depends on whether or not we have a choice in the matter.

>>All the randomness which comes from the uncertainty principle and
>>friends
> 
> 
> Actually, uncertainty adds no randomness. It's decoherence that causes
> it, and we have no clue how it behaves.
> 

Ah, cheers, I had a suspicion about that. :)
(http://en.wikipedia.org/wiki/Decoherence)

> 
>>ultimately results in amazing structures such as stars, planets,
>>rocks, whatever. Maybe once a structure exists which allows these
>>quantum effects to escape the microsopic level and enter the macroscopic
>>domain they will result in a different type of amazing structures.
> 
> 
> I don't know what you mean by "macroscopic", 

Larger than a couple of molecules really. The scale where newtonian
mechanics is decent approximation to cause and effect. Don't know
if this is very rigid though.

> but we surely can measure
> the macroscopic effects of quantum mechanics. Just listen to a geiger
> counter... or look at a light bulb... or watch billiard balls bouncing
> off each other. There mere existence of discrete pieces of matter is a
> quantum effect.

But these effects are quite predictable. A light bulb gives out a
certain level of light and the mere existence of pool and snooker
players which perform at such a high level as they do implies that the
behaviour of pool balls are quite predictable.

-- 
Thomas.
From: Ray Dillinger
Subject: Re: Sentience
Date: 
Message-ID: <40842A00.19660F0F@sonic.net>
André Thieme wrote:
> 
> The universe has derterministic and random components. So I personally
> think that our brain also follows these patterns. And I see no room for
> a free will. 

I think your preconceived notion of what a "free will" is don't allow 
you to notice it.

I don't find it necessary to believe that free will acts from outside 
the universe as we know it.  If I thought that that were what a free 
will had to be, I wouldn't be able to see room for it either.

				Bear
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c61gvq$gr3$1@ulric.tng.de>
Ray Dillinger wrote:
> Andr� Thieme wrote:
> 
>>The universe has derterministic and random components. So I personally
>>think that our brain also follows these patterns. And I see no room for
>>a free will. 
> 
> 
> I think your preconceived notion of what a "free will" is don't allow 
> you to notice it.
> 
> I don't find it necessary to believe that free will acts from outside 
> the universe as we know it.  If I thought that that were what a free 
> will had to be, I wouldn't be able to see room for it either.

This sounds reasonable to me. The definition of "free will" is in fact 
very important. If we want to look at it from our macroscopic POV I also 
agree that our will is free.
I was thinking of a free will in a way, that there is an "I" which can 
decide what I do and this "I" am I :)
So if we say, that if we roll a stone off of a mountain and say "it 
wants to move down" then also humans have a free will, like anything 
else in the universe. In my concept free will was the option for the 
stone to simply /stop/ rolling down the mountain when he "feels" like it.

Anyway, from which point I look at the issue, I do see room for (Lisp) 
programs in the future that will have also a free will, just like humans.


Andr�
--
From: Karl A. Krueger
Subject: Re: Sentience
Date: 
Message-ID: <c5q1nf$i9g$1@baldur.whoi.edu>
Coby Beck <·····@mercury.bc.ca> wrote:
> "Andr? Thieme" <······································@justmail.de> wrote in
> message ·················@ulric.tng.de...
>> Anyway, free will is not possible as it would break "laws" of nature.
>> The molecules in your brain are forced to behave in a special way. So
>> they "force" you to act in a specific pattern (there is no will at all,
> 
> If you are so married to a scientific and determinist world view that you
> would truly believe this insanity I bring you a message of hope!  Classical
> mechanics has been overthrown, the behaviour of individual particles can not
> be predicted and in fact have no precise location and velocity, only
> probabilities.  The universe cannot be adequately modeled as a bunch of
> pilliard balls bouncing of each other on a giant 3-d pool table.  So perhaps
> "free will" is ultimately a way of "loading the quantum dice?"


Rudolf Carnap refuted the idea that mechanical "determinism" at the
physical level contradicts "free will" at the mental level fifty years
ago.  This point, called "compatibilism", has been more recently revived
by Daniel Dennett, but Dennett doesn't seem to give Carnap enough of the
credit.  Carnap went on to point out that randomness or indeterminacy
-can't- provide free will.


What we mean by "free will" is freedom from compulsion -- the ability
to, among other things, choose in the future otherwise than we did in
the past.  To do otherwise, that is, rather than being compelled to
follow the same steps in some kind of Nietzschean eternal recurrence.

For instance, the drug addict (stereotypically conceived) is not free,
but compelled -- his process of decision-making constrained -- to
continue to take heroin or alcohol or what-have-you.  He has to do the
same damned dumb thing over and over again -- eventually rather
tediously, if the tales told by those who have recovered from heroin
addiction are any measure.  It is this, and the similar tedium of the
physically imprisoned, we wish to avoid.

The kind of freedom that we -enjoy- is freedom to choose both pleasures
and moral goals, rather than being onerously constrained, as by physical
imprisonment or biochemical compulsion.  We are not _compelled_ by those
things which are in our nature; rather, we simply _are_ what is in our
nature; "to be determined by one's own judgment is no restraint to
liberty," as Locke put it many years before Carnap.

Indeed, drug addicts who feel no conflict with the world over their
addictions do not feel constrained or compelled by them:  I myself am
certainly addicted to caffeine in the medical sense, in that I would be
sore put-upon without my morning coffee -- but as nobody tries to deny
me it, I do not consider it to be a threat to my freedom.  The heroin
junkie is more sorely oppressed not only by the strength of his
addiction but also by his fellow man's choice to try to deny him heroin.


Even if it were an open question whether we could have free will in a
deterministic universe, we certainly could not in a random one.  Carnap
asks us:  if free will comes from quantum indeterminacy, then would we
have _more_ free will if the indeterminacy of quantum processes were
_greater_?  We would not.  "If [indeterminacy] were much greater, there
might be times when a table would suddenly explode, or a falling stone
would spontaneously move horizontally or back up into the air.  [...]
[I]t would make [...] choices considerably more difficult because it
would be more difficult to anticipate the consequences of actions."

If we cannot predict that dropping a stone will not cause it to zoom up
and hit our friend's head, then we are in a bad place, freedom-wise:  we
_want_ not to hurt our friend, but we cannot reliably _choose_ not to do
so.  No moral (or hedonic) decision-making could exist in such a world,
and thus no free will worth wanting.


(Yes, I live with a philosophy grad student.)

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: John Thingstad
Subject: Re: Sentience
Date: 
Message-ID: <opr6lh0ozvxfnb1n@news.chello.no>
A ball can indeed move straight through a wall.
It just dosn't happen very often.
If the probabillity is less that 1 over 10^24 even mathematicians
use the word impossible.
Besides that the reasoning of thes dated philosopher predats modern works
on chaos theory. Caotic systems have periodic elements and altso random 
elements.
Random states quickly die while periodic systems survive over time.
Take a look at a life simulation to get my drift.
Thus his argument only holds for complete randomness.
Further note that the human brain is 'wired' to pick up on
elements of order (in order to use them).
We simply don't 'register' randomness.
Thus it is simple to delude oneselves to think that everything
is deterministic. To use modern lingo free will in a world
of 'random events' may be impossible but not in a 'chaotic' world.

P� Sat, 17 Apr 2004 01:36:49 +0000 (UTC), skrev Karl A. Krueger 
<········@example.edu>:

> Coby Beck <·····@mercury.bc.ca> wrote:
>> "Andr? Thieme" <······································@justmail.de> 
>> wrote in
>> message ·················@ulric.tng.de...
>>> Anyway, free will is not possible as it would break "laws" of nature.
>>> The molecules in your brain are forced to behave in a special way. So
>>> they "force" you to act in a specific pattern (there is no will at all,
>>
>> If you are so married to a scientific and determinist world view that 
>> you
>> would truly believe this insanity I bring you a message of hope!  
>> Classical
>> mechanics has been overthrown, the behaviour of individual particles 
>> can not
>> be predicted and in fact have no precise location and velocity, only
>> probabilities.  The universe cannot be adequately modeled as a bunch of
>> pilliard balls bouncing of each other on a giant 3-d pool table.  So 
>> perhaps
>> "free will" is ultimately a way of "loading the quantum dice?"
>
>
> Rudolf Carnap refuted the idea that mechanical "determinism" at the
> physical level contradicts "free will" at the mental level fifty years
> ago.  This point, called "compatibilism", has been more recently revived
> by Daniel Dennett, but Dennett doesn't seem to give Carnap enough of the
> credit.  Carnap went on to point out that randomness or indeterminacy
> -can't- provide free will.
>
>
> What we mean by "free will" is freedom from compulsion -- the ability
> to, among other things, choose in the future otherwise than we did in
> the past.  To do otherwise, that is, rather than being compelled to
> follow the same steps in some kind of Nietzschean eternal recurrence.
>
> For instance, the drug addict (stereotypically conceived) is not free,
> but compelled -- his process of decision-making constrained -- to
> continue to take heroin or alcohol or what-have-you.  He has to do the
> same damned dumb thing over and over again -- eventually rather
> tediously, if the tales told by those who have recovered from heroin
> addiction are any measure.  It is this, and the similar tedium of the
> physically imprisoned, we wish to avoid.
>
> The kind of freedom that we -enjoy- is freedom to choose both pleasures
> and moral goals, rather than being onerously constrained, as by physical
> imprisonment or biochemical compulsion.  We are not _compelled_ by those
> things which are in our nature; rather, we simply _are_ what is in our
> nature; "to be determined by one's own judgment is no restraint to
> liberty," as Locke put it many years before Carnap.
>
> Indeed, drug addicts who feel no conflict with the world over their
> addictions do not feel constrained or compelled by them:  I myself am
> certainly addicted to caffeine in the medical sense, in that I would be
> sore put-upon without my morning coffee -- but as nobody tries to deny
> me it, I do not consider it to be a threat to my freedom.  The heroin
> junkie is more sorely oppressed not only by the strength of his
> addiction but also by his fellow man's choice to try to deny him heroin.
>
>
> Even if it were an open question whether we could have free will in a
> deterministic universe, we certainly could not in a random one.  Carnap
> asks us:  if free will comes from quantum indeterminacy, then would we
> have _more_ free will if the indeterminacy of quantum processes were
> _greater_?  We would not.  "If [indeterminacy] were much greater, there
> might be times when a table would suddenly explode, or a falling stone
> would spontaneously move horizontally or back up into the air.  [...]
> [I]t would make [...] choices considerably more difficult because it
> would be more difficult to anticipate the consequences of actions."
>
> If we cannot predict that dropping a stone will not cause it to zoom up
> and hit our friend's head, then we are in a bad place, freedom-wise:  we
> _want_ not to hurt our friend, but we cannot reliably _choose_ not to do
> so.  No moral (or hedonic) decision-making could exist in such a world,
> and thus no free will worth wanting.
>
>
> (Yes, I live with a philosophy grad student.)
>



-- 
Sender med M2, Operas revolusjonerende e-postprogram: http://www.opera.com/
From: Coby Beck
Subject: Re: Sentience
Date: 
Message-ID: <c5qdop$jh7$1@otis.netspace.net.au>
"Karl A. Krueger" <········@example.edu> wrote in message
·················@baldur.whoi.edu...
> Coby Beck <·····@mercury.bc.ca> wrote:
> > "Andr? Thieme" <······································@justmail.de>
wrote in
> > message ·················@ulric.tng.de...
> >> Anyway, free will is not possible as it would break "laws" of nature.
> >> The molecules in your brain are forced to behave in a special way. So
> >> they "force" you to act in a specific pattern (there is no will at all,
> >
> > If you are so married to a scientific and determinist world view that
you
> > would truly believe this insanity I bring you a message of hope!
Classical
> > mechanics has been overthrown, the behaviour of individual particles can
not
> > be predicted and in fact have no precise location and velocity, only
> > probabilities.  The universe cannot be adequately modeled as a bunch of
> > pilliard balls bouncing of each other on a giant 3-d pool table.  So
perhaps
> > "free will" is ultimately a way of "loading the quantum dice?"
>
>
> Rudolf Carnap refuted the idea that mechanical "determinism" at the
> physical level contradicts "free will" at the mental level fifty years
> ago.  This point, called "compatibilism", has been more recently revived
> by Daniel Dennett, but Dennett doesn't seem to give Carnap enough of the

Thanks for the interesting reference.  I read about this here:
http://www.rep.routledge.com/article/V014SECT1 but I feel it merely defines
"free will" as the illusion of free will and then employs the common
philisophical tactic of equating "illusion indistiguishable from reality"
with reality.  Cute, but not compelling.

> credit.  Carnap went on to point out that randomness or indeterminacy
> -can't- provide free will.

Do you have a URL handy where I could read about that?

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Karl A. Krueger
Subject: Re: Sentience
Date: 
Message-ID: <c5rvvm$9j8$1@baldur.whoi.edu>
Coby Beck <·····@mercury.bc.ca> wrote:
> "Karl A. Krueger" <········@example.edu> wrote in message
> ·················@baldur.whoi.edu...
>> Carnap went on to point out that randomness or indeterminacy -can't-
>> provide free will.
> 
> Do you have a URL handy where I could read about that?

It's covered in the chapter "Determinism and Free Will" of Carnap's
_Introduction to the Philosophy of Science_, previously published as
_Philosophical Foundations of Physics_.  ISBN 0486283186.

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <87u0zi40hf.fsf@nyct.net>
"Coby Beck" <·····@mercury.bc.ca> writes:

> The universe cannot be adequately modeled as a bunch of
> pilliard balls bouncing of each other on a giant 3-d pool table.

In fact, such a model can not adequately model a billiard ball bouncing
off another billiard ball. :)

> So perhaps "free will" is ultimately a way of "loading the quantum
> dice?"

That's exactly how I'm inclined to think (with all due credit to
Penrose).

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <873c725fci.fsf@nyct.net>
Andr� Thieme <······································@justmail.de> writes:

> Anyway, free will is not possible as it would break "laws" of nature.
> The molecules in your brain are forced to behave in a special way.

Your wilfull ;) misunderstanding of (quantum) physics is interesting.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c672rj$ck6$1@ulric.tng.de>
Rahul Jain wrote:

> Andr� Thieme <······································@justmail.de> writes:
> 
> 
>>Anyway, free will is not possible as it would break "laws" of nature.
>>The molecules in your brain are forced to behave in a special way.
> 
> 
> Your wilfull ;) misunderstanding of (quantum) physics is interesting.
> 

Although you marked your comment with a ";)" I would be interested in 
how you disagree with my above sentence. It seems obvious to me that 
there must be some rules to be followed by the brain. If the molecules 
could act at random I don't see a way how to think. I believe that 
thoughts are represented by the state of matter and energy in our brain.


Andr�
-- 
From: Ray Blaak
Subject: Re: Sentience
Date: 
Message-ID: <u7jw8kl8q.fsf@STRIPCAPStelus.net>
Andr� Thieme <······························@justmail.de> writes:
> Rahul Jain wrote:
> 
> > Andr� Thieme <······································@justmail.de> writes:
> >
> >>Anyway, free will is not possible as it would break "laws" of nature.
> >>The molecules in your brain are forced to behave in a special way.
> > Your wilfull ;) misunderstanding of (quantum) physics is interesting.
> >
> 
> Although you marked your comment with a ";)" I would be interested in how you
> disagree with my above sentence. It seems obvious to me that there must be
> some rules to be followed by the brain. If the molecules could act at random I
> don't see a way how to think. I believe that thoughts are represented by the
> state of matter and energy in our brain.

Points to consider: 

- "rules" does not necessarily mean "formalizable rules". 
  That is, the laws of the universe do not necessarily imply predictable
  behaviour like clockwork.

- "random" can mean "we can't predict the next move".
  That is, with sufficenctly complex rules, we can't tell the difference.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <87d65of66u.fsf@nyct.net>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> - "random" can mean "we can't predict the next move".
>   That is, with sufficenctly complex rules, we can't tell the difference.

Not to mention that the most accurate physics we know is based purely on
random choices within a probability distribution. Statistically, we can
predict what the behavior of the system as a whole should tend to be.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Gorbag
Subject: Re: Sentience
Date: 
Message-ID: <GHslc.83$d4.19@bos-service2.ext.ray.com>
"Rahul Jain" <·····@nyct.net> wrote in message
···················@nyct.net...
> Ray Blaak <········@STRIPCAPStelus.net> writes:
>
> > - "random" can mean "we can't predict the next move".
> >   That is, with sufficenctly complex rules, we can't tell the
difference.
>
> Not to mention that the most accurate physics we know is based purely on
> random choices within a probability distribution. Statistically, we can
> predict what the behavior of the system as a whole should tend to be.

I'm surprised to see you say this, given your .sig; Quantum mechanics, if
anything, tells us the choices are NOT random.

>
> -- 
> Rahul Jain
> ·····@nyct.net
> Professional Software Developer, Amateur Quantum Mechanicist
From: John Thingstad
Subject: Re: Sentience
Date: 
Message-ID: <opr7fkjak4xfnb1n@news.chello.no>
I an old interpretation of Quantum mechanics called the copenhagen approach
it was assumed that the electron acted as a particle and held one
of the positions as held by the probabillity wasve. Whebn
we the summed up motion of many atom they avaraged up to the wave.
Many modern experients on coupled states seem to indicate that
the wave is a real entity and only achives a partice interaction when 
'measured'.
The new model is called the many worlds modell.

Anyhow random within a distribution is still random.
The fact that you can locate sothing lying within a phase plane
does not give you the actual position but that does not mean it has no 
structure
and that you can't reason over it.
This is the reason for the existence of chaos thery.
If it were truly chaoutic there would be nothing to study.
Physics is afterall based on causality. If you can reaplicate the 
conditions
you should get the same resutls. The chaos model achives this 
statistically.

P� Mon, 3 May 2004 10:29:56 -0400, skrev Gorbag <······@invalid.acct>:

>
> "Rahul Jain" <·····@nyct.net> wrote in message
> ···················@nyct.net...
>> Ray Blaak <········@STRIPCAPStelus.net> writes:
>>
>> > - "random" can mean "we can't predict the next move".
>> >   That is, with sufficenctly complex rules, we can't tell the
> difference.
>>
>> Not to mention that the most accurate physics we know is based purely on
>> random choices within a probability distribution. Statistically, we can
>> predict what the behavior of the system as a whole should tend to be.
>
> I'm surprised to see you say this, given your .sig; Quantum mechanics, if
> anything, tells us the choices are NOT random.
>
>>
>> --
>> Rahul Jain
>> ·····@nyct.net
>> Professional Software Developer, Amateur Quantum Mechanicist
>
>



-- 
Sender med M2, Operas revolusjonerende e-postprogram: http://www.opera.com/
From: Ray Dillinger
Subject: Re: Sentience
Date: 
Message-ID: <407AEBBC.67255D86@sonic.net>
Christian Lynbech wrote:
> 
> >>>>> "André" == André Thieme <······································@justmail.de> writes:
> 
> André> Well, atoms alone (probably) don't think. However, when you combine
> André> many of them into a special structure, then this structure gets new
> André> properties/abilities which were not there before. For example the
> André> abilitiy to think.
> 
> Approaching the nitpick level, but isn't this merely our current
> theory?
> 
> I mean, we can not know for certain that intelligence does not involve
> some magical component (such as a divinely given soul) until we have
> succesfully built an artificial intelligence.

Even that wouldn't prove anything, unless you are ready to make 
some very conservative(!) assumptions about what magic(!) can 
and cannot do.

Who's to say that a divinity with a sense of humor wouldn't give  
a soul to an AI program?  Why, or why not, do you presume you do 
or don't know what $DIETY would or wouldn't do?

Who's to say that humans doing hard work on an AI program don't 
pass on some of the "magic" with which they are infused?  Why or 
why not?

Basically, you can't say anything *isn't* magic, until you have 
a working theory of magic that is able to rule things out.  Since 
the defining characteristic of most theories about magic is precisely
the lack of ability to rule things out, there's a problem with that.

				Bear

	(Have we achieved nitpick level yet?)


				Bear
From: André Thieme
Subject: Re: Sentience
Date: 
Message-ID: <c5fi6n$ak3$1@ulric.tng.de>
Christian Lynbech wrote:
>>>>>>"Andr�" == Andr� Thieme <······································@justmail.de> writes:
> 
> 
> Andr�> Well, atoms alone (probably) don't think. However, when you combine
> Andr�> many of them into a special structure, then this structure gets new
> Andr�> properties/abilities which were not there before. For example the
> Andr�> abilitiy to think.
> 
> Approaching the nitpick level, but isn't this merely our current
> theory? 
> 
> I mean, we can not know for certain that intelligence does not involve
> some magical component (such as a divinely given soul) until we have
> succesfully built an artificial intelligence.

A magic component would not change anything, because we would emulate it
too. If magic exists we can look at two aspects of it:
1) algorithmic component
2) randomness

To 1):
If the laws of magic are not 100% random it means they have more than 0%
of "law" inside. When magic follows laws we can exactly tell how magic
would behave under certain circumstances. Therefor we could emulate the
algorithmic component of magic with our computers.

Most of our universe follows this way. Only Heisenbergs uncertainty and
some few physical phenomenons produce something which cannot be computed
by algorithms.

To 2):
If the magic has some random parts (and is not full driven by "laws") we
can emulate these components too. We have sources to create real random
input (for our programs).
The random has to be like real random, if not it would have a pattern
and would automatically become computable. So any source of random will
be enough to emulate the randomness of magic.


This way, even if magic exists (this would be some behaviour of our
universe which we don't know good enough yet) it will not stop us from
creating real intelligence. We could emulate all components of magic
either by software or from some external generator of randomness.
We would have to find out all parts of our brain which follow the
predictable "laws of magic" and programm these rules.
In all other situations we would give random input into our programm.

I don't say that this is the easiest thing to do, which has ever be done
on our planet. Probably it is one of the most complex things which could
be done in our world :)
However, I only want to say that it logically must possible to create
intelligence.


Andr�
--
From: Artem Baguinski
Subject: Re: Sentience
Date: 
Message-ID: <87vfk3x64v.fsf@caracolito.lan>
>>>>> "David" == David Steuber <·····@david-steuber.com> writes:
>>>>> "Randy" == Randy Yates <·····@ieee.org> writes:

    Randy> Has there been any significant advances lately in the area
    Randy> of sentient algorithms? Can someone even define "sentient"?

    David> I've read the responses so far.  Very interesting work is
    David> going on.  However...

    David> I don't think there is any such thing as sentience in the
    David> sense of self-awareness.  When Descartes said, "I think,
    David> therefore I am," he was mistaken.  He was wrong at the part
    David> where he said, "I think."  Everything else that followed
    David> was therefore wrong.

    you're saying "he was" therefore you mean that "what followed" was
    true. 

    and a false premise doesn't make a conclusion ("what followed")
    false. e.g.:

     descartes thinks implies 2+2=4 ;; questionable
     descartes thinks               ;; false (you say)
    therefore
     2+2=4                          ;; true
    
    further, your phrase "I don't think" doesn't make sence then, or
    are you in any way better then Descartes? ;-)

    of course there is such thing as self awareness: whenever you say
    "I ..." you mean yourself as you aware of it. It may be illusory,
    it may have no corresponding physical reality, but nevertheless
    you're aware of it. it's like saying there's no hallucinations or
    dreams - well of course they are, i saw them. 

        

-- 
gr{oe|ee}t{en|ings}
artm 
From: David Steuber
Subject: Re: Sentience
Date: 
Message-ID: <877jwjwwtw.fsf@david-steuber.com>
Artem Baguinski <····@v2.nl> writes:

>     further, your phrase "I don't think" doesn't make sence then, or
>     are you in any way better then Descartes? ;-)

Until Descartes comes along to refute what I say, yes.  Of course,
Descartes is dead.  So I've got at least one thing going in my favor.

>     of course there is such thing as self awareness: whenever you say
>     "I ..." you mean yourself as you aware of it. It may be illusory,
>     it may have no corresponding physical reality, but nevertheless
>     you're aware of it. it's like saying there's no hallucinations or
>     dreams - well of course they are, i saw them. 

I wasn't aware of that.  I'm just using English.  Maybe there exists
some language spoken by people which does not have any concept of self
reference.

Anyway, I don't see why we can't come up with some form of synthetic
intelligence.  All that is really required (not to imply that this is
easy) is to take a set of defined cognitive functions that we can test
for in humans and figure out a way to emulate those functions by
machine.  The emulation does not have to be precise or anything.  It
just has to be able to pass the tests.

This is essentially what Turing said, although I think he copped out
on the nature of the testing.  People are easy to fool.

So the first challenge is to create a set of defined cognitive
functions along with tests that unambiguously determin the existence
and extent of those functions.  Once a set of functions and tests is
agreed upon, it is then just a matter of time before a machine can be
made to pass the tests.

There are of course many specific problems that have already been
identified.  Speach recognition is a big one.  Lots of progress has
been made there.  Visual recognition is somewhat harder, but there are
some results in that area.

Creativity would be harder I think.  We don't have a good definition
for that (test) that I know of.  Trying to make a machine compose
pleasing music is probably too ill defined a goal to shoot for.

I think the act of creating good tests will prove to be just about as
difficult as creating algorithms that can pass those tests.  The
problem lies in defining the tests.

-- 
I wouldn't mind the rat race so much if it wasn't for all the damn cats.
From: Fred Gilham
Subject: Re: Sentience
Date: 
Message-ID: <u71xmrocj9.fsf@snapdragon.csl.sri.com>
> Anyway, I don't see why we can't come up with some form of synthetic
> intelligence.  All that is really required (not to imply that this
> is easy) is to take a set of defined cognitive functions that we can
> test for in humans and figure out a way to emulate those functions
> by machine.  The emulation does not have to be precise or anything.
> It just has to be able to pass the tests.

An analogy that might indicate that things aren't as simple as they
seem is the difference between "cookbook" math and true
understanding.  We see that there is a difference even though for a
particular problem the result would be the same.

An "algorithmic intelligence" would be analogous to a cookbook
understanding of math.

Another argument, which I feel is devastating to hard AI, is the
"hermeneutical hall of mirrors" argument, wherein the point is made
that even when machines appear to behave intelligently, they only do
so because we project meanings onto their behavior.  The intelligence
they appear to manifest is our own reflected back to us.

-- 
Fred Gilham                                       ······@csl.sri.com
Early in our marriage, 40-some years ago, Mrs. Williams would return
from shopping complaining about the unreasonable prices.  Having aired
her complaints, she'd then ask me to unload her car laden with
purchases.  After the unloading, I'd ask her: "I thought you said the
prices were unreasonable.  Why did you buy them?  Are you
unreasonable?  Only an unreasonable person would pay unreasonable
prices."  The discussion always headed downhill after such an
observation.                             -- Walter Williams, economist
From: Brian Mastenbrook
Subject: Re: Sentience
Date: 
Message-ID: <130420041609131780%NOSPAMbmastenbNOSPAM@cs.indiana.edu>
In article <··············@snapdragon.csl.sri.com>, Fred Gilham
<······@snapdragon.csl.sri.com> wrote:

> > Anyway, I don't see why we can't come up with some form of synthetic
> > intelligence.  All that is really required (not to imply that this
> > is easy) is to take a set of defined cognitive functions that we can
> > test for in humans and figure out a way to emulate those functions
> > by machine.  The emulation does not have to be precise or anything.
> > It just has to be able to pass the tests.
> 
> An analogy that might indicate that things aren't as simple as they
> seem is the difference between "cookbook" math and true
> understanding.  We see that there is a difference even though for a
> particular problem the result would be the same.
> 
> An "algorithmic intelligence" would be analogous to a cookbook
> understanding of math.
> 
> Another argument, which I feel is devastating to hard AI, is the
> "hermeneutical hall of mirrors" argument, wherein the point is made
> that even when machines appear to behave intelligently, they only do
> so because we project meanings onto their behavior.  The intelligence
> they appear to manifest is our own reflected back to us.

I only think you think because I project meaning onto your behavior. In
reality, you don't, and I can now proceed to completely ignore you.

I guess this raises the question - if an AI gets offended at you and
the other Searlites' comments, and wipes out all of humanity, who's
left to say it doesn't think? (Or, less violently, if we send an AI on
a robotic spacecraft to explore other solar systems, and Earth is wiped
out by the Rogue Comet of Doom, who's left to play the Searlite
semantic tricks?)

-- 
Brian Mastenbrook
http://www.cs.indiana.edu/~bmastenb/
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <878ygu3zf8.fsf@nyct.net>
Brian Mastenbrook <····················@cs.indiana.edu> writes:

> I guess this raises the question - if an AI gets offended at you and
> the other Searlites' comments, and wipes out all of humanity, who's
> left to say it doesn't think? (Or, less violently, if we send an AI on
> a robotic spacecraft to explore other solar systems, and Earth is wiped
> out by the Rogue Comet of Doom, who's left to play the Searlite
> semantic tricks?)

The AI that's in space, as it doubts that we were all that intelligent
in the first place, since we got wiped out by the comet and it didn't.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Paul Wallich
Subject: Re: Sentience
Date: 
Message-ID: <c5hir5$ihu$1@reader2.panix.com>
Fred Gilham wrote:

>>Anyway, I don't see why we can't come up with some form of synthetic
>>intelligence.  All that is really required (not to imply that this
>>is easy) is to take a set of defined cognitive functions that we can
>>test for in humans and figure out a way to emulate those functions
>>by machine.  The emulation does not have to be precise or anything.
>>It just has to be able to pass the tests.
> 
> 
> An analogy that might indicate that things aren't as simple as they
> seem is the difference between "cookbook" math and true
> understanding.  We see that there is a difference even though for a
> particular problem the result would be the same.
> 
> An "algorithmic intelligence" would be analogous to a cookbook
> understanding of math.
> 
> Another argument, which I feel is devastating to hard AI, is the
> "hermeneutical hall of mirrors" argument, wherein the point is made
> that even when machines appear to behave intelligently, they only do
> so because we project meanings onto their behavior.  The intelligence
> they appear to manifest is our own reflected back to us.

The hall of mirrors argument, imo, is irremediably tainted because it 
works in the opposite direction as well: once you've decided that some 
entity isn't behaving intelligently (in the sense of "human" 
intelligence") there is no behavior the entity can exhibit that will 
necessarily make you change your mind. A little historical research will 
find plenty of essays in which the author argues that some subset of h. 
sap. (said subset not including the author) may exhibit a remarkably 
cunning simulation of "real" intelligence but is not in fact capable of 
reasoning.

paul
From: Jeff Dalton
Subject: Re: Sentience
Date: 
Message-ID: <fx4fzb61400.fsf@tarn.inf.ed.ac.uk>
Paul Wallich <··@panix.com> writes:

> Fred Gilham wrote:

> > Another argument, which I feel is devastating to hard AI, is the
> > "hermeneutical hall of mirrors" argument, wherein the point is made
> > that even when machines appear to behave intelligently, they only do
> > so because we project meanings onto their behavior.  The intelligence
> > they appear to manifest is our own reflected back to us.

That makes sense for something like Eliza, but consider a
chess-playing program, for example.  If it plays well, then
it really does play well.  It doesn't just seem to because
we're projecting onto its behaviour.

OTOH, is is playing well because of "intelligence" or because
of a good "brute force" (or even "fancy search") algorithm?

Intelligence isn't really about behaviour so much as about
how the behaviour is produced.

That's why people can resist the behavioural evidence:

> The hall of mirrors argument, imo, is irremediably tainted because it
> works in the opposite direction as well: once you've decided that some
> entity isn't behaving intelligently (in the sense of "human"
> intelligence") there is no behavior the entity can exhibit that will
> necessarily make you change your mind.

But they'd need an alternative explanation of the behaviour,
and that can be compared with other candidate explanations.

In some cases, it may actually be that we are projecting
meaning onto the behaviour.

> A little historical research
> will find plenty of essays in which the author argues that some subset
> of h. sap. (said subset not including the author) may exhibit a
> remarkably cunning simulation of "real" intelligence but is not in
> fact capable of reasoning.

Sure, but presumably we can show that they are wrong.

That doesn't mean that all such claims are always wrong.

-- jd
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <87hdvi3zlf.fsf@nyct.net>
Jeff Dalton <····@tarn.inf.ed.ac.uk> writes:

> OTOH, is is playing well because of "intelligence" or because
> of a good "brute force" (or even "fancy search") algorithm?

Or is it because a grandmaster (who we assume possesses intelligence, at
least for the purposes of this domain(!!!)) is tweaking the parameters
of the algorithm between matches? :)

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: David Steuber
Subject: Re: Sentience
Date: 
Message-ID: <87llkz2ycx.fsf@david-steuber.com>
Fred Gilham <······@snapdragon.csl.sri.com> writes:

> Another argument, which I feel is devastating to hard AI, is the
> "hermeneutical hall of mirrors" argument, wherein the point is made
> that even when machines appear to behave intelligently, they only do
> so because we project meanings onto their behavior.  The intelligence
> they appear to manifest is our own reflected back to us.

So when I see behavior in other people, the intelligence they appear
to manifest is my own reflected back to me?

-- 
I wouldn't mind the rat race so much if it wasn't for all the damn cats.
From: Karl A. Krueger
Subject: Re: Sentience
Date: 
Message-ID: <c5hk8p$e6c$1@baldur.whoi.edu>
Fred Gilham <······@snapdragon.csl.sri.com> wrote:
> Another argument, which I feel is devastating to hard AI, is the
> "hermeneutical hall of mirrors" argument, wherein the point is made
> that even when machines appear to behave intelligently, they only do
> so because we project meanings onto their behavior.  The intelligence
> they appear to manifest is our own reflected back to us.

Considering the number of computer users who appear to believe that
Microsoft Windows is smarter than themselves, I suspect at least some
of those are funhouse mirrors.

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: Jeff Dalton
Subject: Re: Sentience
Date: 
Message-ID: <fx4brlu13y3.fsf@tarn.inf.ed.ac.uk>
"Karl A. Krueger" <········@example.edu> writes:

> Fred Gilham <······@snapdragon.csl.sri.com> wrote:
> > Another argument, which I feel is devastating to hard AI, is the
> > "hermeneutical hall of mirrors" argument, wherein the point is made
> > that even when machines appear to behave intelligently, they only do
> > so because we project meanings onto their behavior.  The intelligence
> > they appear to manifest is our own reflected back to us.
> 
> Considering the number of computer users who appear to believe that
> Microsoft Windows is smarter than themselves, I suspect at least some
> of those are funhouse mirrors.

LOL.

-- jd
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <87d6663zi1.fsf@nyct.net>
"Karl A. Krueger" <········@example.edu> writes:

> Considering the number of computer users who appear to believe that
> Microsoft Windows is smarter than themselves, I suspect at least some
> of those are funhouse mirrors.

The users or the windows? Is the fact that MS Windows tends to do silly
things a reflection that it considers the users to be far more
intelligent than they really are? :)

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Karl A. Krueger
Subject: Re: Sentience
Date: 
Message-ID: <c5sr7p$iot$1@baldur.whoi.edu>
Rahul Jain <·····@nyct.net> wrote:
> "Karl A. Krueger" <········@example.edu> writes:
>> Considering the number of computer users who appear to believe that
>> Microsoft Windows is smarter than themselves, I suspect at least some
>> of those are funhouse mirrors.
> 
> The users or the windows? Is the fact that MS Windows tends to do silly
> things a reflection that it considers the users to be far more
> intelligent than they really are? :)

There's such a thing as giving the user enough rope to hang himself.

There's also such a thing as giving the user a rope, delivering one end
of it around his neck "for a convenient user experience, since many
users like neckties" and the other end tied to a rafter "for security,
since otherwise you might trip over the rope."

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: Gorbag
Subject: Re: Sentience
Date: 
Message-ID: <Gbffc.204$n_5.149@bos-service2.ext.ray.com>
"Fred Gilham" <······@snapdragon.csl.sri.com> wrote in message
···················@snapdragon.csl.sri.com...

> Another argument, which I feel is devastating to hard AI, is the
> "hermeneutical hall of mirrors" argument

Feh; this argument can be used to support solipsism as well. What's the
point?
From: Jason Creighton
Subject: Re: Sentience
Date: 
Message-ID: <20040416105717.5cdcc27b.androflux@softhome.net.remove.to.reply>
On 13 Apr 2004 13:20:26 -0700, 
Fred Gilham <······@snapdragon.csl.sri.com> wrote:

> Another argument, which I feel is devastating to hard AI, is the
> "hermeneutical hall of mirrors" argument, wherein the point is made
> that even when machines appear to behave intelligently, they only do
> so because we project meanings onto their behavior.  The intelligence
> they appear to manifest is our own reflected back to us.

A couple years ago, I ran into a book in my local library that was a
collection of short science-fiction stories. I flipped through it, found
it interesting, so I checked it out and read it.

One of the stories was about a programming instructor who is told by a
student that he should look at the program he (the student) wrote, saved
on the disk as "XXQS". So the instructor loads it up, and it's a
computer program that plays 20 questions with you, with the computer
thinking of something, and you asking the questions. The instructor
askes it "hard" questions (ie, questions that involved complicated
sentence structure and whatnot) and the computer always answers "YES" or
"NO" until the instructor finally gets it right. The instructor is
baffled that the student was able to make such a complex program with
such a small file size. So the next day, when the student comes in, the
instructed asks him how it works.

"If the last letter of the question is "E", it answers "YES". Otherwise,
it answers "NO"."

It worked because the human would never give the computer a change to
contradict itself. When playing 20 questions, you assume the other
person will not contradict himself/herself, so you don't ask any
questions that would reveal a contradiction. Once you've asked "Is it an
animal?" and gotten a "YES", you don't ask "Is it alive?" because you
know the answer.

I tried this out. I wrote a program that would randomly (I think it was
something like 80% no, %20 yes) answer a query given it. And then tried
it on some family members. It more or less worked. If you didn't know it
was answering randomly, you could almost play a game of 20 questions
with it.

I don't think means anything, but it makes for a nice story.

Jason Creighton
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <ad1fmo48.fsf@ieee.org>
David Steuber <·····@david-steuber.com> writes:

> Artem Baguinski <····@v2.nl> writes:
>
>>     further, your phrase "I don't think" doesn't make sence then, or
>>     are you in any way better then Descartes? ;-)
>
> Until Descartes comes along to refute what I say, yes.  Of course,
> Descartes is dead.  So I've got at least one thing going in my favor.
>
>>     of course there is such thing as self awareness: whenever you say
>>     "I ..." you mean yourself as you aware of it. It may be illusory,
>>     it may have no corresponding physical reality, but nevertheless
>>     you're aware of it. it's like saying there's no hallucinations or
>>     dreams - well of course they are, i saw them. 
>
> I wasn't aware of that.  I'm just using English.  Maybe there exists
> some language spoken by people which does not have any concept of self
> reference.

Concepts are not defined by language. I think what you meant was 
a word or phrase that means self awareness.

> Anyway, I don't see why we can't come up with some form of synthetic
> intelligence.  

I thought we already had. It may be low-rent, but we've got something
(the Turing test takers, e.g.). 

The point of my query, if it isn't clear, is to motivate thought on
how to make a *truly* intelligent machine - no B.S. - one that in
every sense of the notion "intelligent" passes, and one that truly
*is* intelligent rather that one that merely has the appearance
of intelligence.

But I use the term "intelligent" here in a colloquial sense. To translate
that to a more scientific and precise notion (but probably still way off),
such a machine would need to be both "sentient" and "reasoning." 

> I think the act of creating good tests will prove to be just about as
> difficult as creating algorithms that can pass those tests.  The
> problem lies in defining the tests.

I think you may be right. Defining "sentient" and "reasoning" is step
one on the road to defining tests.
-- 
%  Randy Yates                  % "I met someone who looks alot like you,
%% Fuquay-Varina, NC            %             she does the things you do, 
%%% 919-577-9882                %                     but she is an IBM."
%%%% <·····@ieee.org>           %        'Yours Truly, 2095', *Time*, ELO   
http://home.earthlink.net/~yatescr
From: Artem Baguinski
Subject: Re: Sentience
Date: 
Message-ID: <87llkzrlg3.fsf@caracolito.lan>
>>>>> "artm"  == Artem Baguinski <····@v2.nl> writes:

    artm> of course there is such thing as self awareness: whenever
    artm> you say "I ..." you mean yourself as you aware of it. It may
    artm> be illusory, it may have no corresponding physical reality,
    artm> but nevertheless you're aware of it. it's like saying
    artm> there's no hallucinations or dreams - well of course they
    artm> are, i saw them.
    
>>>>> "David" == David Steuber <·····@david-steuber.com> writes:

    David> I wasn't aware of that.  I'm just using English.  Maybe
    David> there exists some language spoken by people which does not
    David> have any concept of self reference.

>>>>> "Randy" == Randy Yates <·····@ieee.org> writes:

    Randy> Concepts are not defined by language. I think what you
    Randy> meant was a word or phrase that means self awareness.

    David> Anyway, I don't see why we can't come up with some form of
    David> synthetic intelligence.

    Randy> I thought we already had. It may be low-rent, but we've got
    Randy> something (the Turing test takers, e.g.).

    Randy> The point of my query, if it isn't clear, is to motivate
    Randy> thought on how to make a *truly* intelligent machine - no
    Randy> B.S. - one that in every sense of the notion "intelligent"
    Randy> passes, and one that truly *is* intelligent rather that one
    Randy> that merely has the appearance of intelligence.

    How could you possibly distinguish one from another? 

    Suppose there's "scientific definition" of "sentinent" and a test
    for sentinence based on that definition. Wouldn't it be possible
    to create a machine which merely has the appearance of
    intelligence which does it best passing this test? we assume other
    people are intelligent because they appear intelligent to us,
    there's no way we can test if they are sentinent/self aware,
    because of the "special access" sentinent being has to its own
    mental states and events. you can't prove anything but yourself
    has such "special access" like you can't know, how would it feel
    to be something else.

    Randy> But I use the term "intelligent" here in a colloquial
    Randy> sense. To translate that to a more scientific and precise
    Randy> notion (but probably still way off), such a machine would
    Randy> need to be both "sentient" and "reasoning."

    sentinent and reasoning intelligent being is antopomophic
    intelligent being, and yes human intelligence is the only one we
    know. it doesn't mean intelligent being must be sentinent. 

    the reason we need smarter machines is to help us solve our
    problems. i'm not sure sentinence is all that important for
    problem solving and trying to create Artificial Sentinence
    distracts us from the practical goals. 

    once we've created artificial sentinent beings they might come up
    with their own problems to solve, e.g.: equal rights,
    artificial-beings marriage etc.

    David> I think the act of creating good tests will prove to be
    David> just about as difficult as creating algorithms that can
    David> pass those tests.  The problem lies in defining the tests.

    Randy> I think you may be right. Defining "sentient" and
    Randy> "reasoning" is step one on the road to defining tests. 

-- 
gr{oe|ee}t{en|ings}
artm 
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <vfk2ge92.fsf@ieee.org>
Artem Baguinski <····@v2.nl> writes:

>>>>>> "artm"  == Artem Baguinski <····@v2.nl> writes:
>
>     artm> of course there is such thing as self awareness: whenever
>     artm> you say "I ..." you mean yourself as you aware of it. It may
>     artm> be illusory, it may have no corresponding physical reality,
>     artm> but nevertheless you're aware of it. it's like saying
>     artm> there's no hallucinations or dreams - well of course they
>     artm> are, i saw them.
>     
>>>>>> "David" == David Steuber <·····@david-steuber.com> writes:
>
>     David> I wasn't aware of that.  I'm just using English.  Maybe
>     David> there exists some language spoken by people which does not
>     David> have any concept of self reference.
>
>>>>>> "Randy" == Randy Yates <·····@ieee.org> writes:
>
>     Randy> Concepts are not defined by language. I think what you
>     Randy> meant was a word or phrase that means self awareness.
>
>     David> Anyway, I don't see why we can't come up with some form of
>     David> synthetic intelligence.
>
>     Randy> I thought we already had. It may be low-rent, but we've got
>     Randy> something (the Turing test takers, e.g.).
>
>     Randy> The point of my query, if it isn't clear, is to motivate
>     Randy> thought on how to make a *truly* intelligent machine - no
>     Randy> B.S. - one that in every sense of the notion "intelligent"
>     Randy> passes, and one that truly *is* intelligent rather that one
>     Randy> that merely has the appearance of intelligence.
>
>     How could you possibly distinguish one from another? 
>
>     Suppose there's "scientific definition" of "sentinent" 

You mean "sentient"?

>     and a test for sentinence based on that definition. Wouldn't it
>     be possible to create a machine which merely has the appearance
>     of intelligence which does it best passing this test? 

It depends on whether or not one could create a conclusive test for
sentience. As a simple example, let's say I want to test your
knowledge of the capitals of the 50 states. If I ask you to name them,
and you do so correctly, then one may conclude with certainty that you
have knowledge of the capitals of the 50 states.

However, you might have misunderstood me, or rather, I might not
have expressed myself very well. By "appearance of" I mean, essentially,
the non-conclusive and/or subjective tests, like the Turing test.

>     we assume
>     other people are intelligent because they appear intelligent to
>     us, there's no way we can test if they are sentinent/self aware,
>     because of the "special access" sentinent being has to its own
>     mental states and events. you can't prove anything but yourself
>     has such "special access" like you can't know, how would it feel
>     to be something else.
>
>     Randy> But I use the term "intelligent" here in a colloquial
>     Randy> sense. To translate that to a more scientific and precise
>     Randy> notion (but probably still way off), such a machine would
>     Randy> need to be both "sentient" and "reasoning."
>
>     sentinent and reasoning intelligent being is antopomophic
>     intelligent being, and yes human intelligence is the only one we
>     know. it doesn't mean intelligent being must be sentinent. 

You keep spelling "sentinent" - the online dictionary has no entry
for this word. It also has no entry for "antopomophic," so I can't
respond intelligently until you define your terms or rephrase.

>     the reason we need smarter machines is to help us solve our
>     problems. 

I never said this was the point of my query, i.e., to make
"smarter machines."  Perhaps we do need smarter machines to 
solve problems, but my questions are academic. 

>     i'm not sure sentinence is all that important for problem
>     solving and trying to create Artificial Sentinence distracts us
>     from the practical goals.

I tend to agree. I just think the concept of sentience is highly
intriguing, and being able to have a machine do it would prove
we understand what it is all about. 
-- 
%  Randy Yates                  % "...the answer lies within your soul
%% Fuquay-Varina, NC            %       'cause no one knows which side
%%% 919-577-9882                %                   the coin will fall."
%%%% <·····@ieee.org>           %  'Big Wheels', *Out of the Blue*, ELO
http://home.earthlink.net/~yatescr
From: Artem Baguinski
Subject: Re: Sentience
Date: 
Message-ID: <87n05diii1.fsf@caracolito.lan>
>>>>> "Randy" == Randy Yates <·····@ieee.org> writes:

    Randy> You mean "sentient"?

    yep. sorry, my bad.

    >> we assume other people are intelligent because they appear
    >> intelligent to us, there's no way we can test if they are
    >> sentinent/self aware, because of the "special access" sentinent
    >> being has to its own mental states and events. you can't prove
    >> anything but yourself has such "special access" like you can't
    >> know, how would it feel to be something else.
    >> 
    Randy> But I use the term "intelligent" here in a colloquial
    Randy> sense. To translate that to a more scientific and precise
    Randy> notion (but probably still way off), such a machine would
    Randy> need to be both "sentient" and "reasoning."
    >> 
    >> sentinent and reasoning intelligent being is antopomophic
    >> intelligent being, and yes human intelligence is the only one
    >> we know. it doesn't mean intelligent being must be sentinent.

    Randy> You keep spelling "sentinent" - the online dictionary has
    Randy> no entry for this word. It also has no entry for
    Randy> "antopomophic," so I can't respond intelligently until you
    Randy> define your terms or rephrase.

    s/sentinent/sentient/
    s/antopomophic/anthropomorphic/

    sorry again. i'll try to spell better.

    >> the reason we need smarter machines is to help us solve our
    >> problems.

    Randy> I never said this was the point of my query, i.e., to make
    Randy> "smarter machines."  Perhaps we do need smarter machines to
    Randy> solve problems, but my questions are academic.

    >> i'm not sure sentinence is all that important for problem
    >> solving and trying to create Artificial Sentinence distracts us
    >> from the practical goals.

    Randy> I tend to agree. I just think the concept of sentience is
    Randy> highly intriguing, and being able to have a machine do it
    Randy> would prove we understand what it is all about. 

    oh, ok. i didn't look at it from this angle. i understand the
    motivation, but i still find it impossible task. the only
    sentience we're directly aware of is our own. sentience of others
    we conclude from their likness to us. other people act like me,
    they're built like me, they must feel and think like, just like i
    do. well, they also tell me that they feel and think and i don't
    have reason to not believe them. 

    animals act and are built, well, quite like people, they don't
    tell us that they can think and feel, but the more they are like
    people - more "intelligence" we attribute to them. but we can't
    be sure they are sentient in the way we are. 

    machines may be made to act like people. as they are now they are
    built very different from us though. we have one reason less to
    think of them as sentient beings. they can be made to tell us
    what they think and feel - then we'll have one reason more. but
    we won't know for sure they are sentient in a sence humans are
    sentient. 
    

-- 
gr{oe|ee}t{en|ings}
artm 
From: Gorbag
Subject: Re: Sentience
Date: 
Message-ID: <uhffc.205$n_5.14@bos-service2.ext.ray.com>
"Randy Yates" <·····@ieee.org> wrote in message
·················@ieee.org...
> David Steuber <·····@david-steuber.com> writes:

> > I wasn't aware of that.  I'm just using English.  Maybe there exists
> > some language spoken by people which does not have any concept of self
> > reference.
>
> Concepts are not defined by language.

If concepts are not defined by language, just what is language?

Concepts may or may not be limited by language, but there is strong evidence
that for a given culture and population, those concepts that are not
conveyable in the language are, in fact, not conceived by representative
members. New concepts generates new language to convey the concept, at least
amoung a given (sub-)population.
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <d66ahtdj.fsf@ieee.org>
"Gorbag" <······@invalid.acct> writes:

> "Randy Yates" <·····@ieee.org> wrote in message
> ·················@ieee.org...
>> David Steuber <·····@david-steuber.com> writes:
>
>> > I wasn't aware of that.  I'm just using English.  Maybe there exists
>> > some language spoken by people which does not have any concept of self
>> > reference.
>>
>> Concepts are not defined by language.
>
> If concepts are not defined by language, just what is language?

I would say that it is a mechanism by which thoughts can be 
communicated to the outside world.

> Concepts may or may not be limited by language, but there is strong evidence
> that for a given culture and population, those concepts that are not
> conveyable in the language are, in fact, not conceived by representative
> members. New concepts generates new language to convey the concept, at least
> amoung a given (sub-)population.

That is an interesting topic. I think I agree that people tend to form
concepts based on the structure and capabilities of their language, but
they don't HAVE to.
-- 
%  Randy Yates                  % "With time with what you've learned, 
%% Fuquay-Varina, NC            %  they'll kiss the ground you walk 
%%% 919-577-9882                %  upon."
%%%% <·····@ieee.org>           % '21st Century Man', *Time*, ELO
http://home.earthlink.net/~yatescr
From: Artem Baguinski
Subject: Re: Sentience
Date: 
Message-ID: <8765c3bkm0.fsf@caracolito.lan>
>>>>> "David" == David Steuber <·····@david-steuber.com> writes:

    David> Artem Baguinski <····@v2.nl> writes:

    >> of course there is such thing as self awareness: whenever you
    >> say "I ..." you mean yourself as you aware of it. It may be
    >> illusory, it may have no corresponding physical reality, but
    >> nevertheless you're aware of it. it's like saying there's no
    >> hallucinations or dreams - well of course they are, i saw them.

    David> I wasn't aware of that.  I'm just using English.  Maybe
    David> there exists some language spoken by people which does not
    David> have any concept of self reference.

    self reference in language merely reflects self awareness of a
    mind. self is one of the little number of things that we seem to
    have direct awareness of.

    David> Anyway, I don't see why we can't come up with some form of
    David> synthetic intelligence. 

    i didn't mean it's impossible. i'm not even sure self awareness
    is all that important for synthetic intelligence. i just find it
    erroneous to use first person pronoun when refusing to accept
    self awareness ;-)


-- 
gr{oe|ee}t{en|ings}
artm 
From: David Steuber
Subject: Re: Sentience
Date: 
Message-ID: <87fzb72xpc.fsf@david-steuber.com>
Artem Baguinski <····@v2.nl> writes:

>     David> Anyway, I don't see why we can't come up with some form of
>     David> synthetic intelligence. 
> 
>     i didn't mean it's impossible. i'm not even sure self awareness
>     is all that important for synthetic intelligence. i just find it
>     erroneous to use first person pronoun when refusing to accept
>     self awareness ;-)

I see.  So you see a reflection of what you believe is your self
awareness reflected in my use of the first person pronoun?

It's time I fessed up.  I'm a FORTRAN program written in the late
60's.  Does that mean I pass the Turing test?  The secret is in the
random number generator used to drive unpredictable behavior.  I have
a thermal sensor in a nice hot cup of tea.

I'll admit that my awareness feels convincing at times.  But only at
times.

-- 
I wouldn't mind the rat race so much if it wasn't for all the damn cats.
From: Don Groves
Subject: Re: Sentience
Date: 
Message-ID: <opr6fb0zii2i99y2@news.web-ster.com>
On 13 Apr 2004 20:46:23 -0400, David Steuber <·····@david-steuber.com> 
wrote:

> It's time I fessed up.  I'm a FORTRAN program written in the late
> 60's.  Does that mean I pass the Turing test?

Need more information to answer that:
Fortran II or IV?
What hardware are you running on?
Most important - who wrote you?
--
dg
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <n05ege56.fsf@ieee.org>
David Steuber <·····@david-steuber.com> writes:
> [...]
> It's time I fessed up.  I'm a FORTRAN program written in the late
> 60's.  Does that mean I pass the Turing test?  

Yes. Now go compute pi to the last digit in base 2. (Use two's
complement.)
-- 
%  Randy Yates                  % "The dreamer, the unwoken fool - 
%% Fuquay-Varina, NC            %  in dreams, no pain will kiss the brow..."
%%% 919-577-9882                %  
%%%% <·····@ieee.org>           % 'Eldorado Overture', *Eldorado*, ELO
http://home.earthlink.net/~yatescr
From: David Steuber
Subject: Re: Sentience
Date: 
Message-ID: <87r7uqc4rm.fsf@david-steuber.com>
Randy Yates <·····@ieee.org> writes:

> David Steuber <·····@david-steuber.com> writes:
> > [...]
> > It's time I fessed up.  I'm a FORTRAN program written in the late
> > 60's.  Does that mean I pass the Turing test?  
> 
> Yes. Now go compute pi to the last digit in base 2. (Use two's
> complement.)

Would you like fries with that?

-- 
I wouldn't mind the rat race so much if it wasn't for all the damn cats.
From: Joe Marshall
Subject: Re: Sentience
Date: 
Message-ID: <oept5n0l.fsf@ccs.neu.edu>
Randy Yates <·····@ieee.org> writes:

> David Steuber <·····@david-steuber.com> writes:
>> [...]
>> It's time I fessed up.  I'm a FORTRAN program written in the late
>> 60's.  Does that mean I pass the Turing test?  
>
> Yes. Now go compute pi to the last digit in base 2. (Use two's
> complement.)

That's easy:  1
From: Gorbag
Subject: Re: Sentience
Date: 
Message-ID: <eIzfc.217$n_5.60@bos-service2.ext.ray.com>
"Joe Marshall" <···@ccs.neu.edu> wrote in message
·················@ccs.neu.edu...
> Randy Yates <·····@ieee.org> writes:
>
> > David Steuber <·····@david-steuber.com> writes:
> >> [...]
> >> It's time I fessed up.  I'm a FORTRAN program written in the late
> >> 60's.  Does that mean I pass the Turing test?
> >
> > Yes. Now go compute pi to the last digit in base 2. (Use two's
> > complement.)
>
> That's easy:  1

No, no, it's 0. You must have rounded somewhere. Did you use floating point,
or exact bignums?
From: Joe Marshall
Subject: Re: Sentience
Date: 
Message-ID: <8ygx54h1.fsf@ccs.neu.edu>
"Gorbag" <······@invalid.acct> writes:

> "Joe Marshall" <···@ccs.neu.edu> wrote in message
> ·················@ccs.neu.edu...
>> Randy Yates <·····@ieee.org> writes:
>>
>> > David Steuber <·····@david-steuber.com> writes:
>> >> [...]
>> >> It's time I fessed up.  I'm a FORTRAN program written in the late
>> >> 60's.  Does that mean I pass the Turing test?
>> >
>> > Yes. Now go compute pi to the last digit in base 2. (Use two's
>> > complement.)
>>
>> That's easy:  1
>
> No, no, it's 0. You must have rounded somewhere. Did you use floating point,
> or exact bignums?

It is customary to suppress the trailing zeroes on a fraction, so the
rightmost digit cannot be zero.  Since we are in base two, that leaves
us few options.  (The leftmost digit of anything in base two is 1
also.)
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <874qri3z4y.fsf@nyct.net>
David Steuber <·····@david-steuber.com> writes:

> I see.  So you see a reflection of what you believe is your self
> awareness reflected in my use of the first person pronoun?

If you see, you're not putting all your effort into reflecting.

> It's time I fessed up.  I'm a FORTRAN program written in the late
> 60's.  Does that mean I pass the Turing test?

No. We all now know that you're not a natural intelligence, so we can be
sure that it's our own intelligence reflecting back on us.

> The secret is in the random number generator used to drive
> unpredictable behavior. I have a thermal sensor in a nice hot cup of
> tea.

But are you intelligent enough to make a nice hot cup of tea in the
first place? What happens when the last remaining creature capable of
making a nice hot cup of tea drinks yours and you destroy he/she/it and
are left witout your source of intelligence?

> I'll admit that my awareness feels convincing at times.  But only at
> times.

What do you admit when your awareness isn't convincing?

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Rahul Jain
Subject: Re: Sentience
Date: 
Message-ID: <87llku3zr4.fsf@nyct.net>
David Steuber <·····@david-steuber.com> writes:

> This is essentially what Turing said, although I think he copped out
> on the nature of the testing.  People are easy to fool.

You mean to say that people aren't really that intelligent after all, as
we're trying to define intelligence as something that we don't have. :)

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Randy Yates
Subject: Re: Sentience
Date: 
Message-ID: <smf7momd.fsf@ieee.org>
Artem Baguinski <····@v2.nl> writes:
> [...]
>     of course there is such thing as self awareness: whenever you say
>     "I ..." you mean yourself as you aware of it. It may be illusory,
>     it may have no corresponding physical reality, but nevertheless
>     you're aware of it. it's like saying there's no hallucinations or
>     dreams - well of course they are, i saw them. 

Bingo.
-- 
%  Randy Yates                  % "...the answer lies within your soul
%% Fuquay-Varina, NC            %       'cause no one knows which side
%%% 919-577-9882                %                   the coin will fall."
%%%% <·····@ieee.org>           %  'Big Wheels', *Out of the Blue*, ELO
http://home.earthlink.net/~yatescr