From: Pascal J. Bourguignon on
Raffael Cavallaro <raffaelcavallaro(a)pas.despam.s.il.vous.plait.mac.com> writes:
> But this is precisely what the average person (and the legal system)
> does *not* mean by free will. [...]
> So "free will" can only be rescued by redefining it into semantic
> absurdity. [...]

Let me answer by making an analogy that matters a lot to me.

Once you have created an intelligent robot, you have two possibilities
(I take the point of view of the creator of the robot):

- you can leave the robot to its own devices (i.e. you just let it run
its program, with its physical constraints, in its environment),

- or you can keep meddling with its brain, e.g. debugging it at run
time, or forcing it to do things it wouldn't have done (avatar-like).

As the creator of the robot, I define the first case as being the
"free will" situation, and the second case as being the "bound will"
situation.


If the robot does something bad (in my opinion, as creator of the
robot; I get to choose the fitness function!), then in the first case,
with its free will, I may judge it bad, destroy it, and garbage collect
its bits to build a better robot.

In the second case, I cannot infer anything about the goodness or
badness of the robot, since I or some other entity took over and
messed with its brain, and that meddling might have been the cause of
the robot's evil action.


So if you want to get good results from your genetic algorithms and
from "natural" selection, you had better give "free will" to your
creatures!
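
To make that concrete, here is a minimal sketch in Python (a toy model
only, not anybody's actual code; the names fitness, run and meddle are
invented for illustration). Selection tells you something about a
robot's genome only when the robot ran its own program; a meddled-with
run measures the meddler instead:

    import random

    def fitness(behavior):
        # Toy fitness function: the creator's judgement of the output.
        return sum(behavior)

    def run(genome, meddle=None):
        # "Free will": behavior is a function of the genome and its
        # (noisy) environment only.
        behavior = [g * random.choice([1, 1, 1, -1]) for g in genome]
        if meddle is not None:
            # "Bound will": an outside entity overrides the behavior,
            # so the result no longer reflects the genome.
            behavior = meddle(behavior)
        return behavior

    population = [[random.random() for _ in range(8)] for _ in range(20)]

    # Select on un-meddled runs; the losers get garbage-collected.
    scored = sorted(population, key=lambda g: fitness(run(g)), reverse=True)
    survivors = scored[:10]

    # Had we meddled, the score would say nothing about the genome:
    #   fitness(run(g, meddle=lambda b: [0.0] * len(b)))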


Now, if you raise the question of the judgement of robots by robots, I
find it a little ridiculous, but let's admit that groups of robots may
want to protect themselves from the bad robots. First, notice that
their notion of badness is not necessarily the same as the one defined
by the creator of the robots. But anyway. If the robot has free will,
then it was its nature to commit bad deeds (in the given
circumstances). The robots may choose to change the circumstances of
the bad robot (put it in prison), or ask the creator to garbage
collect it (capital execution ;-)). Without free will, the robot is
not responsible for its bad deeds, and it would be more moral to try
to safeguard it. But the others nonetheless have the right to protect
themselves from the evildoings of the superior being who removed the
free will of the bad robot. So I think that in practice they are
justified in applying exactly the same punishments (if God didn't give
us free will, we would still be justified in imprisoning or executing
criminals).

Now the new question is: what about another person removing the free
will of the evildoer? Clearly that person is to be punished, not the
evildoer. I think this is already reflected in jurisprudence and law,
even though such a case is harder to prove, and occurs less frequently
anyway.

--
__Pascal Bourguignon__
http://www.informatimago.com
From: RG on
In article <lzmxvxi0t8.fsf(a)informatimago.com>,
pjb(a)informatimago.com (Pascal J. Bourguignon) wrote:

> Raffael Cavallaro <raffaelcavallaro(a)pas.despam.s.il.vous.plait.mac.com>
> writes:
> > But this is precisely what the average person (and the legal system)
> > does *not* mean by free will. [...]
> > So "free will" can only be rescued by redefining it into semantic
> > absurdity. [...]
> [...]

You may find this interesting and relevant:

http://www.mit.edu/people/dpolicar/writing/prose/text/godTaoist.html

rg
From: Raffael Cavallaro on
On 2010-05-18 12:33:08 -0400, RG said:

> Not at all. Torture is just one extreme of a continuum.

You can't seriously consider the subjective state of a torture victim
and that of one of the subjects of the magnet experiment to be
comparable.

Again, the whole point of the magnet experiment is that the subjects
felt that their choices were just as free with and without the magnetic
field. No victim of torture feels that his choice is just as free with
and without being tortured.


The whole reason these experiments are interesting is that they show
that our *subjective* evaluation of what is causing our choices is
unreliable in the extreme.


warmest regards,

Ralph


--
Raffael Cavallaro

From: Raffael Cavallaro on
On 2010-05-18 12:45:47 -0400, RG said:

> Here, I'll make it more nuanced for you: a split brain patient is
> essentially two independent brains occupying one body. That is a
> situation that is so far removed from the circumstances under which
> brains evolved to operate in that it is not at all clear that any
> extrapolation to a normal brain can be drawn at all, let alone from a
> single data point.

And yet neuroscientists do just that. The point of these experiments is
not that split-brain patients have normal cognition, but to delineate
how our verbal, conscious selves make sense of our experience. What we
*say* or subjectively perceive is going on in our brains is not a
reliable indicator of what is actually taking place.

It would be one thing if they were merely mystified as to their
choices, but they frequently concoct some story to justify them, and
believe these post-hoc accommodative stories to be the real reason for
their choice. I'm saying that our subjective perception of free will is
just such a post-hoc accommodative argument; it seems true to us
subjectively, but it's really just a false perception concocted to make
sense of our experience.

BTW, this is not a single data point; there are many such trials with
multiple patients, and the results are consistent: the subject doesn't
know why s/he is doing what s/he's doing, though it's obvious to any
outside observer.


warmest regards,

Ralph


--
Raffael Cavallaro

From: Raffael Cavallaro on
On 2010-05-18 12:30:51 -0400, Vend said:

> On 18 Mag, 01:14, Raffael Cavallaro
> <raffaelcavall...(a)pas.despam.s.il.vous.plait.mac.com> wrote:
>
>> You have the same confusion as Tamas and Nicolas. To believe that one
>> has choice unconstrained by the laws of physics is to believe that,
>> given two or more *physically possible* choices, one can choose
>> either/any, and that this choice, is not constrained by the laws of
>> physics.
>
> This belief is a tautology.
>
> If there are multiple physically possible choices (nondeterminism is
> true), then by definition of "physically possible" the laws of physics
> don't constrain the choice.

Yes, dualism is logically consistent; it just isn't supported by the
experimental evidence, that's all.

>
> If there are never multiple physically possible choices (determinism
> is true), then the belief is still correct,

The belief? What belief? The belief that "one has choice unconstrained
by the laws of physics" is simply false if determinism is true. If
determinism is true, then one's feeling of free choice is an illusion
because all of one's choices, along with every other event that ever
occurred or will occur, were decided already by these same laws of
physics when the universe came into being. That's what determinism
means.

> since implication is true
> if the premise is false.
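
As a toy illustration of what determinism means here (just a sketch,
with a seeded PRNG standing in for physical law, and life_of_choices
an invented name): fix the "laws" (the algorithm) and the "initial
state of the universe" (the seed), and every "choice" is already
settled:

    import random

    def life_of_choices(seed, n=5):
        # The "laws of physics" are the PRNG algorithm; the "initial
        # conditions" are the seed. Nothing else enters the computation.
        rng = random.Random(seed)
        return [rng.choice(["tea", "coffee"]) for _ in range(n)]

    # Same laws, same initial conditions: the outcome was fixed the
    # moment the seed was; re-running changes nothing.
    assert life_of_choices(42) == life_of_choices(42)
    print(life_of_choices(42))  # identical list on every run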

warmest regards,

Ralph



--
Raffael Cavallaro