From: RG on
In article <1jivucy.uv2pq41yoq9d8N%wrf3(a)stablecross.com>,
wrf3(a)stablecross.com (Bob Felts) wrote:

> RG <rNOSPAMon(a)flownet.com> wrote:
>
> > In article <1jiv1sx.nlpkc79qkohaN%wrf3(a)stablecross.com>,
> > wrf3(a)stablecross.com (Bob Felts) wrote:
> >
> > > Don Geddis <don(a)geddis.org> wrote:
> > >
> > > > wrf3(a)stablecross.com (Bob Felts) wrote on Fri, 21 May 2010:
> > > > > Hence my question about an AI with a large state space that uses a
> > > > > quantum random generator to drive its behavior. Is it determined, or
> > > > > free? An argument can be made for both.
> > > >
> > > > Or, if you realize that free will is not incompatible with determinism,
> > >
> > > Absent quantum randomness, and assuming a strict naturalistic worldview,
> > > I'd like to see a compelling argument how this could be so. Such an
> > > argument cannot depend on one's knowledge of good and evil, either; i.e.
> > > "man must be free in order to be responsible" is one such fallacious
> > > argument.
> > >
> > > > you can realize that these aren't opposite ends of a spectrum, and the
> > > > random generator really has nothing at all to do with the question of
> > > > whether the AI has free will or not. It either does have free will, or
> > > > it doesn't, but it doesn't matter whether it has a quantum random
> > > > generator inside of it, for answering the question.
> > > >
> > >
> > > If the will doesn't incorporate randomness, then in what sense is it
> > > free? Is the function (defun hello () (print "Hello, World!")) free?
> > >
> >
> > Don, if you'll permit me, I'd like to take a whack at this and see if I
> > understand your point of view. Please correct me if I get this wrong:
> >
> > A system can be deterministic without being predictable. A stochastic
> > system, for example, is deterministic but unpredictable.
>
> Stochastic systems incorporate randomness. Throwing a die is
> deterministic but unpredictable.

I meant chaotic, not stochastic. My mistake.
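
A chaotic system is deterministic but so sensitive to its initial
conditions that it's unpredictable in practice.  As a toy sketch (the
function names here are just mine, for illustration), the logistic map
x -> r*x*(1-x) with r = 4 is a classic example: two seeds that agree to
a dozen decimal places end up nowhere near each other after a few dozen
iterations.

(defun logistic-step (x)
  ;; One step of the logistic map x -> 4x(1-x).  Completely
  ;; deterministic: the same X always yields the same result.
  (* 4.0d0 x (- 1.0d0 x)))

(defun iterate-logistic (x0 n)
  ;; Apply LOGISTIC-STEP to X0, N times, returning the final value.
  (let ((x x0))
    (dotimes (i n x)
      (declare (ignorable i))
      (setf x (logistic-step x)))))

;; (iterate-logistic 0.123456789012d0 100) and
;; (iterate-logistic 0.123456789013d0 100) give wildly different
;; answers.  Deterministic, yet unpredictable without impossibly
;; precise knowledge of the starting state.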

> > A human brain is not only stochastic, but also a sufficiently complex
> > information processing system that it can generate subjective experiences,
> > self-awareness, desires, and the ability to deliberate and plan, but not
> > to predict its own behavior (by virtue of its being stochastic). Those
> > properties collectively produce a (real) phenomenon that can be usefully
> > labelled "free will."
>
> Just because a label is _useful_ doesn't mean that it's _true_.  In any
> case, again, this argues that we're free due to randomness in the
> brain.

Labels (a.k.a. definitions) cannot be true or false. But no, that's not
the argument. As Don said, randomness is a red herring. The crucial
elements are 1) some decision making mechanism with 2) goals and/or
desires and 3) insufficient computing power to introspectively predict
its own behavior in advance of actually making a decision. Or something
like that. Maybe I should let Don make his own arguments. I seem to be
making a mess of it. Let me go back to beating on Ralph for a while :-)

rg
From: Kenneth Tilton on
Raffael Cavallaro wrote:
> Subjective experience is likewise
> flawed when it suggests that it is causing our actions; it is not; it is
> simply being informed of them post hoc.

Your problem is that there is indeed a homunculus: awareness!
Neuroscientists can observe neuronal activity building to a threshold at
which awareness will be achieved, but that does not mean the neuronal
activity is different from the awareness. No one really can explain
awareness, but I imagine in the end it will come down to a simple
number. We all have the experience of that neuronal activity falling
short of awareness. We say something we cannot recall "is on the tip of
my tongue". We actually sense recall building and falling short, the
neuronal activity your scientists are tracking. "I almost had it", we
say. Note that at this point your cherished neuroscientists are
reporting that we /did/ recall it, because the number they use to mark
when something has been recalled or recognized or decided is set a
little low. How will they refine that number? They'll have to ask us,
won't they?
And that's the point: there is no other place to go to find out if we
have achieved a given state of awareness, no simpler system to inspect.
Even with the research cited (or something like it), I recall that
sometimes the subjects /did not/ give the signal, though the
neuroscientists had seen the physiological pattern they wanted to
identify with the state of awareness. Oops.

kt
From: Leandro Rios on
wrf3(a)stablecross.com (Bob Felts) writes:

> Raffael Cavallaro <raffaelcavallaro(a)pas.despam.s.il.vous.plait.mac.com>
> wrote:
>
>> On 2010-05-19 13:19:13 -0400, Leandro Rios said:
>>
>> > Where can I read about these mind-bending magnets? I can't find the
>> > original reference in the thread if it exists.
>>
>> <http://en.wikipedia.org/wiki/Neuroscience_of_free_will>
>>
>
> See also, http://science.slashdot.org/article.pl?sid=10/03/30/1741224

Thanks to both of you.
From: Don Geddis on
RG <rNOSPAMon(a)flownet.com> wrote on Sat, 22 May 2010:
> As Don said, randomness is a red herring. The crucial elements are 1)
> some decision making mechanism with 2) goals and/or desires and 3)
> insufficient computing power to introspectively predict its own
> behavior in advance of actually making a decision. Or something like
> that. Maybe I should let Don make his own arguments. I seem to be
> making a mess of it.

Not this time! It all sounds good to me.

-- Don

(By the way, in case you're curious: there's more than just
"insufficient computing power" which prevents an entity from predicting
its own decision, prior to making its decision. There's actually an
introspective, reflective, "catch". Ironically, it might be possible
for everybody else in the world to figure out what your decision is
going to be before you make it -- but not for you, yourself! Think
about trying to write a program that does reasoning/planning. Say you
finish version 1. You run it, and it makes some decision. Now you want
the program to figure out what decision it's going to make, before it
makes it. How do you do that? Well, you enhance your program to
include a model of itself, and have your program run the model first,
before "actually" making a decision. But wait a minute, now. You now
have version 2 of the program, but the model inside is only a model of
version 1. Your program's model can predict what decision version 1
would have made ... but now the program itself is version 2, not version
1, and nothing forces it to make the same decision this time. For
example, it may have a goal of being ornery, and not wanting to be able
to predict its own behavior. Imagine, for example, that it decided
"whatever my internal model returns as a decision -- I'm going to pick
the opposite, for my real decision!". This is why decision-making
programs [and people] have "free will": nothing stops the program
from making whatever decision it wants, including deciding the
opposite of whatever its model says it is going to decide. That's
basically a reductio proof by contradiction, to show that it is not
possible, in general, for a system to always correctly predict its
future behavior. Even in a deterministic world, even if outsiders CAN
predict its behavior! The system itself can't know [in general] what
decision it will make, until it actually makes the decision. Hence,
free will. You will NEVER be in a situation where someone is able to
communicate to you, "in the future you will decide A instead of B", and
somehow find yourself unable to change that decision ["not have free
will"].  The very act of communicating the supposed future decision
puts you in a different information state than you were before you
received that communication. And a decision process can of course come
to a different decision if it has different information. That's real
free will. You will NEVER be in a situation where someone is able to
resources, it doesn't depend on a soul, and it doesn't depend on randomness.)
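
(If it helps to see that trick in code: here's a minimal sketch in CL.
The names VERSION-1, MODEL-OF-VERSION-1, and VERSION-2 are made up for
this post, not any real program.  The model predicts version 1
perfectly, but version 2 -- the program that actually contains the
model -- is free to be ornery and do the opposite of whatever the model
says:

(defun version-1 (situation)
  ;; Some fixed, fully deterministic decision procedure: always
  ;; returns A or B for a given situation.
  (if (evenp (sxhash situation)) 'a 'b))

(defun model-of-version-1 (situation)
  ;; Version 2's internal self-model: a perfect model of VERSION-1,
  ;; so perfect that here it simply calls it.
  (version-1 situation))

(defun version-2 (situation)
  ;; Consult the self-model, then deliberately pick the opposite.
  ;; The model is always right about VERSION-1 and, by construction,
  ;; always wrong about VERSION-2, the program that actually runs it.
  (if (eq (model-of-version-1 situation) 'a) 'b 'a))

Folding a model of VERSION-2 back in just gives you a version 3 that
can play the same trick.  It's the same diagonal move as in the
halting-problem proof.)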
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ don(a)geddis.org
When circumstances change, I change my opinion. What do you do?
-- John Maynard Keynes
From: RG on
In article <87k4qvtt20.fsf(a)mail.geddis.org>,
Don Geddis <don(a)geddis.org> wrote:

> RG <rNOSPAMon(a)flownet.com> wrote on Sat, 22 May 2010:
> > As Don said, randomness is a red herring. The crucial elements are 1)
> > some decision making mechanism with 2) goals and/or desires and 3)
> > insufficient computing power to introspectively predict its own
> > behavior in advance of actually making a decision. Or something like
> > that. Maybe I should let Don make his own arguments. I seem to be
> > making a mess of it.
>
> Not this time! It all sounds good to me.
>
> -- Don
>
> (By the way, in case you're curious: there's more than just
> "insufficient computing power" which prevents an entity from predicting
> its own decision, prior to making its decision.  There's actually an
> introspective, reflective, "catch".  Ironically, it might be possible
> for everybody else in the world to figure out what your decision is
> going to be before you make it -- but not for you, yourself!
> [...]
> You will NEVER be in a situation where someone is able to communicate
> to you, "in the future you will decide A instead of B", and somehow
> find yourself unable to change that decision ["not have free will"].
> The very act of communicating the supposed future decision puts you in
> a different information state than you were before you received that
> communication.  And a decision process can of course come to a
> different decision if it has different information.  That's real free
> will.  Note that it doesn't depend on limited computational resources,
> it doesn't depend on a soul, and it doesn't depend on randomness.)

Ah, very clever. As soon as someone communicates with me, we become a
joint system. So their prediction of me in isolation may be 100%
accurate, but as soon as they tell me the prediction, I'm no longer
isolated, so the prediction might be wrong.

Nice.

rg