From: RG on
In article <1jj1s62.j2zo3w1e78n64N%wrf3(a)stablecross.com>,
wrf3(a)stablecross.com (Bob Felts) wrote:

> RG <rNOSPAMon(a)flownet.com> wrote:
>
> > In article <1jj1oi5.1i2fnjnrmuaeN%wrf3(a)stablecross.com>,
> > wrf3(a)stablecross.com (Bob Felts) wrote:
> >
> > > RG <rNOSPAMon(a)flownet.com> wrote:
> > >
> > > > In article <1jj1ld7.n4r6g11yujxd6N%wrf3(a)stablecross.com>,
> > > > wrf3(a)stablecross.com (Bob Felts) wrote:
> > > >
> > > > > > > "Utility" is just as fuzzy as "justice" (and for the same
> > > > > > > reason). Did Graham hoist himself with his own petard?
> > > > > >
> > > > > > No. He is not proposing that utility be an *object* of
> > > > > > philosophical study, but a quality metric for the *results* of
> > > > > > philosophical study.
> > > > >
> > > > > I understand that. The point I was making is that "utility", like
> > > > > "justice", is a fuzzy quality metric. People who are utilitarians
> > > > > typically don't want to face that (and I don't know if you are a
> > > > > utilitarian or not), but it's provably so. His proposal is just as
> > > > > flawed as the philosophy he dismisses.
> > > >
> > > > But utility has an objective measure: whether someone is willing to
> > > > pay for it.
> > > >
> > >
> > > Justice likewise has an objective measure -- the scales balance.
> >
> > Huh? How does that work exactly? Where can I buy myself a set of these
> > scales? Does Amazon carry them? Do they come with an instruction
> > manual?
> >
>
> You steal $20 from me, you owe me $20 plus compensation for my time in
> recovering my losses (based on what my employer pays me). You take my
> life, yours is forfeit.

I like it. I think I'll start a career as a thief. As long as I don't
get caught too often I can turn a pretty healthy profit. Stealing from
unemployed and retired people would be particularly attractive since
their time is worth nothing on your theory.

> > > As for willingness to pay, that's still fuzzy. It's only an appearance
> > > of objectivity. People pay for things they don't value and sometimes
> > > they don't pay for things they do value. Too, you have to deal with the
> > > N-payer problem. Does utility strictly depend on who has the most
> > > money?
> >
> > Don't confuse being objective with being error-free. They are not the
> > same thing.
>
> I'm not. But if something is supposed to be useful, one has to know how
> to use it.

I think most people know how to use money.

> Does utility strictly depend on who has the most money?

Don't confuse willingness to pay with ability to pay. But yes, utility
is always relative to one's situation. A loaf of bread is worth more to
a starving man than to one who has just finished a meal.

rg
From: RG on
In article <1jj1vkk.1hhtyeaxu2jy8N%wrf3(a)stablecross.com>,
wrf3(a)stablecross.com (Bob Felts) wrote:

> Actually, your definition of free will, as being an illusion of
> subjective perception, is a direct consequence of the "no god"
> hypothesis.

No it isn't. There are lots of other ways to define free will in the
absence of gods.


> > > Freedom is typically defined as a relationship between two or more
> > > objects; relationships which have objective definitions.
> >
> > I thought freedom was just another word for nothing left to lose ;-)
> >
>
> I think the famous philosopher Scott Adams had one of his creations say
> the same thing.

Actually, it was Kris Kristofferson and Fred Foster.

> In any case, as I've said, you can't apply the objective definition of
> freedom to your case, since there's nothing to be free from.

Sure there is: determinism and predestination. Oppressive governments.
Economic adversity.

> > > > > Ron, I think, defines free will in terms of information asymmetry
> > > > > between two agents.
> > > >
> > > > I don't *define* it that way. My definition of free will is that it
> > > > is the subjective perception (or illusion if you choose a quantum
> > > > point of view) that we have free will. Information asymmetry is a
> > > > consequence of this definition because only I have access to my own
> > > > subjective experience.
> > >
> > > But if our mental machinery is driven by (quantum) randomness, wouldn't
> > > that make it non-deterministic, and therefore free?
> >
> > It would. But it (almost certainly) isn't.
>
> Citation?

Sorry, you'll have to start doing your own homework.

> > > Oh, wait. You don't subscribe to quantum randomness, do you?
> >
> > I don't know what you mean by that. Quantum randomness is a "real"
> > phenomenon (with "real" in scare quotes because it is only "real" relative
> > to a classical universe, which isn't "really real") but it's (almost
> > certainly) not a factor in mental processes.
> >
>
> Is the universe deterministic at the quantum level?

That depends on what you mean by "at the quantum level." The
propagation of the wave function is deterministic, yes.
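The deterministic propagation referred to here is the unitary evolution given
by the Schrödinger equation; a minimal statement of it, in standard notation
(nothing specific to this discussion):

```latex
% Unitary, fully deterministic time evolution of the wave function:
% given \Psi at t_0, the equation fixes \Psi at every later time.
i\hbar \frac{\partial}{\partial t} \Psi(x, t) = \hat{H} \, \Psi(x, t)
```

Randomness only enters when one asks about measurement outcomes, which is the
distinction being drawn in this exchange.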

> Theologians deal with axiomatic systems just as much as logicians do.

That may be, but their axioms, a.k.a. holy texts, are not constrained by
objective reality. By following theologians' methods I can "show"
anything.

> The prisoner's
> dilemma experiment makes the unwarranted assumption that maximization of
> self-interest is good. It isn't.

Feel free to come up with your own model and advance the state of the
art of understanding in this area.

> More importantly, the PD only deals with two people. It breaks
> down when extending it to the N-body problem.

Not necessarily. Axelrod ran simulations involving large "populations"
of programs that evolved under Darwinian rules. Every individual
interaction was pair-wise, but the proposition that this is a reasonable
model of interpersonal interaction is defensible. Like I said, Axelrod
is surely not the last word on this. Feel free to offer your own
contributions.
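Axelrod's actual tournament code isn't reproduced here, but the pairwise
mechanics can be sketched. The payoffs below are the standard 3/5/1/0 values;
the strategy and function names are illustrative, not Axelrod's:

```python
# Illustrative sketch of one pairwise iterated prisoner's dilemma,
# with the standard payoff matrix (C = cooperate, D = defect).

PAYOFF = {  # (my move, their move) -> my score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Defect unconditionally."""
    return "D"

def play(strat_a, strat_b, rounds):
    """Return total scores (a, b) over `rounds` games."""
    hist_a, hist_b = [], []   # each strategy sees the *other* side's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_defect, 10))  # → (9, 14)
print(play(tit_for_tat, tit_for_tat, 10))    # → (30, 30)
```

Against a defector, tit-for-tat loses only the opening round; against itself
it cooperates throughout, which is the seed of the "moral-looking" behavior
the tournaments exhibited.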

> Since morality exists only in minds with creative power

But that's not true. If you think it is then you have completely missed
the point of Axelrod's work.

> we can
> logically deduce that it is sufficient, even if not always necessary,
> that differences of moral opinion can be settled by terminating the mind
> that holds a contrary position.

Of course. It is tautological that any conflict can be settled by
destroying one or more of the entities that are in conflict. But that's
not a particularly interesting or useful observation.

> In this case, might does make right.

Only if your quality metric is the absence of conflict. If that's your
quality metric then eliminating all life would be the greatest good you
could do, since that would necessarily eliminate all conflict.
Personally, I take this as an indication that there's something wrong
with your quality metric rather than as a guide to moral behavior.

rg
From: Bob Felts on
RG <rNOSPAMon(a)flownet.com> wrote:

> In article <1jj1vkk.1hhtyeaxu2jy8N%wrf3(a)stablecross.com>,
> wrf3(a)stablecross.com (Bob Felts) wrote:
>
> > Actually, your definition of free will, as being an illusion of
> > subjective perception, is a direct consequence of the "no god"
> > hypothesis.
>
> No it isn't. There are lots of other ways to define free will in the
> absence of gods.

Sure. You and Don have slightly different formulations. But the
general form is the way it is because of your a priori assumptions. For
example, you couldn't possibly define it the way Pascal did.

[...]

>
> > In any case, as I've said, you can't apply the objective definition of
> > freedom to your case, since there's nothing to be free from.
>
> Sure there is: determinism and predestination.

In your universe, absent other intelligent agents, what would determine
a will?

> Oppressive governments.
> Economic adversity.

Yes, but that typically isn't what is meant when discussing whether or
not the will is free. You certainly didn't use any of this in your
definition.

[...]

> >
> > Is the universe deterministic at the quantum level?
>
> That depends on what you mean by "at the quantum level." The
> propagation of the wave function is deterministic, yes.
>

And the spin of a photon when the wave function collapses?

> > Theologians deal with axiomatic systems just as much as logicians do.
>
> That may be, but their axioms a.k.a. holy texts are not constrained by
> objective reality

That's simply not true. You are defining "objective reality" to exclude
a very objective phenomena.

> By following theologians' methods I can "show" anything.
>

And I can prove that 1=0.
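The standard example of such a "proof" (the hidden flaw is a division by
zero) runs:

```latex
% Let a = b. Then:
a^2 = ab
a^2 - b^2 = ab - b^2
(a + b)(a - b) = b(a - b)
a + b = b              % invalid step: divides by (a - b) = 0
2b = b \;\Rightarrow\; 2 = 1 \;\Rightarrow\; 1 = 0
```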


> > The prisoner's dilemma experiment makes the unwarranted assumption that
> > maximization of self-interest is good. It isn't.
>
> Feel free to come up with your own model and advance the state of the
> art of understanding in this area.
>

I have.

[...]

>
> > Since morality exists only in minds with creative power
>
> But that's not true. If you think it is then you have completely missed
> the point of Axelrod's work.
>

It is absolutely true. Axelrod started with an arbitrary "ought"
(selfishness is good), and lo and behold, discovered that people act in
a way to maximize their selfishness.

Why not set up another experiment: two people are imprisoned. One of
them will be released in 6 months. There is exactly enough food to keep
only one of them alive for 6 months (and the guards will adjust the food
level as necessary to ensure this). What should the prisoners do?

> > we can logically deduce that it is sufficient, even if not always
> > necessary, that differences of moral opinion can be settled by
> > terminating the mind that holds a contrary position.
>
> Of course. It is tautological that any conflict can be settled by
> destroying one or more of the entities that are in conflict. But that's
> not a particularly interesting or useful observation.

Of course it's useful. We wiped out the Neanderthals. Should we have?
>
> > In this case, might does make right.
>
> Only if your quality metric is the absence of conflict.

In the presence of moral conflict, how do you show which side is right?
The act of choosing between two ethical systems is itself an ethical
question; that is, "is moral choice A better than moral choice B?"
requires a moral choice. How do you break the infinite regress?

> If that's your quality metric then eliminating all life would be the
> greatest good you could do, since that would necessarily eliminate all
> conflict.

No, just eliminating those who disagree with me.

> Personally, I take this as an indication that there's something wrong
> with your quality metric rather than as a guide to moral behavior.

On what basis, other than your personal preference?
From: RG on
In article <1jj28cd.1n06j4qoooghsN%wrf3(a)stablecross.com>,
wrf3(a)stablecross.com (Bob Felts) wrote:

> > >
> > > Is the universe deterministic at the quantum level?
> >
> > That depends on what you mean by "at the quantum level." The
> > propagation of the wave function is deterministic, yes.
> >
>
> And the spin of a photon when the wave function collapses?

It doesn't collapse. But if you suspend disbelief and accept a
classical universe as real, then yes, it's random. (BTW, you probably
meant polarization, not spin. Photons are spin-one; polarization is
what a measurement actually picks out.)

> > > Theologians deal with axiomatic systems just as much as logicians do.
> >
> > That may be, but their axioms a.k.a. holy texts are not constrained by
> > objective reality
>
> That's simply not true. You are defining "objective reality" to exclude
> a very objective phenomena.

"Phenomena" is plural. You mean "an objective phenomenon." And I call
shenanigans for claiming this without saying which objective phenomenon
you're referring to.

> > > Since morality exists only in minds with creative power
> >
> > But that's not true. If you think it is then you have completely missed
> > the point of Axelrod's work.
> >
>
> It is absolutely true. Axelrod started with an arbitrary "ought"
> (selfishness is good), and lo and behold, discovered that people act in
> a way to maximize their selfishness.

No. You are as wrong about this as Ralph is about QM. What Axelrod
discovered is that behavior that appears structurally similar to what we
call moral behavior can arise from processes that obey the laws of
Darwinian evolution. There is no "ought" about it.

> Why not set up another experiment: two people are imprisoned. One of
> them will be released in 6 months. There is exactly enough food to keep
> only one of them alive for 6 months (and the guards will adjust the food
> level as necessary to ensure this).

If they both eat half the available food that will be difficult to do.
Much better to just give them each a pistol and say that one minute from
now, if they are both still alive they will both be executed, but if one
of them is dead the survivor will be freed.

> What should the prisoners do?

This is a one-shot prisoner's dilemma with a somewhat different payoff
matrix than usual, since the payoff for cooperating is zero (death)
regardless of what the other player does. Figuring out the "correct"
action in this case is left as an exercise.
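The exercise can be sketched with an illustrative payoff table (the numbers
are my assumption: 0 for any outcome involving death, 1 for walking free;
they are not from the post):

```python
# Sketch of the modified one-shot game described above: cooperating
# pays 0 (death) no matter what the other player does, so defection
# weakly dominates. Payoffs are illustrative.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 0, ("C", "D"): 0,  # cooperating means death either way
    ("D", "C"): 1, ("D", "D"): 0,  # defecting pays only against a cooperator
}

def dominant_moves():
    """Return moves that are never worse than the alternative,
    whatever the other player does."""
    moves = ("C", "D")
    best = []
    for mine in moves:
        if all(PAYOFF[(mine, theirs)] >= PAYOFF[(other, theirs)]
               for theirs in moves
               for other in moves):
            best.append(mine)
    return best

print(dominant_moves())  # → ['D']
```

Defection weakly dominates: it never does worse than cooperation and
sometimes does better, which is what makes the one-shot game so bleak.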

Moral behavior only arises in an iterated PD where the number of rounds
is not known in advance. Did you bother to read the post I referred you
to?

> > > we can logically deduce that it is sufficient, even if not always
> > > necessary, that differences of moral opinion can be settled by
> > > terminating the mind that holds a contrary position.
> >
> > Of course. It is tautological that any conflict can be settled by
> > destroying one or more of the entities that are in conflict. But that's
> > not a particularly interesting or useful observation.
>
> Of course it's useful. We wiped out the Neanderthals. Should we have?

According to what quality metric? "Should" can only ever be decided
relative to some quality metric.

> > > In this case, might does make right.
> >
> > Only if your quality metric is the absence of conflict.
>
> In the presence of moral conflict, how do you show which side is right?

How do you know either side is "right"?

> The act of chossing between two ethical systems is itself an ethical
> question; that is "ia moral choice A better than moral choice B?"
> requires a moral choice. How do you break the infinite regress?

By reading Axelrod more carefully, and in particular paying attention to
the properties of evolutionarily stable strategies.
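The stability property being invoked can be checked in a toy form. Assuming
the standard 3/5/1/0 payoffs, a resident tit-for-tat population resists a
rare always-defect invader once games run long enough (the function names are
mine, not Axelrod's):

```python
# Toy evolutionary-stability check, standard PD payoffs 3/5/1/0 assumed.
# In a population of tit-for-tat (TFT) players, a rare always-defect
# (ALLD) invader almost always meets TFT, so compare per-game payoffs
# earned against a TFT partner.

def tft_vs_tft(rounds):
    """TFT against TFT: mutual cooperation, 3 points every round."""
    return 3 * rounds

def alld_vs_tft(rounds):
    """ALLD against TFT: 5 for the sucker round, then 1 per round."""
    return 5 + 1 * (rounds - 1)

def tft_is_stable(rounds):
    """True when residents outscore the invader, i.e. TFT resists invasion."""
    return tft_vs_tft(rounds) > alld_vs_tft(rounds)

print([(r, tft_is_stable(r)) for r in (1, 2, 3, 5)])
# → [(1, False), (2, False), (3, True), (5, True)]
```

With these payoffs the condition 3R > 5 + (R - 1) reduces to R > 2, which is
why the number of rounds being unknown (and effectively large) matters to the
argument.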

> > If that's your quality metric then eliminating all life would be the
> > greatest good you could do, since that would necessarily eliminate all
> > conflict.
>
> No, just eliminating those who disagree with me.

I'm pretty sure that would be a temporary solution at best. If you
leave any trace of life you run the risk of intelligent life evolving
all over again and you're right back where you started from. If you
really want to eliminate conflict long term you pretty much have no
choice but to sterilize the planet.

> > Personally, I take this as an indication that there's something wrong
> > with your quality metric rather than as a guide to moral behavior.
>
> On what basis, other than your personal preference?

On the basis of Axelrod's work and the properties of evolutionarily
stable strategies.

rg
From: RG on
In article <1jj27d4.qhnq627r7xuoN%wrf3(a)stablecross.com>,
wrf3(a)stablecross.com (Bob Felts) wrote:

> > The context here is that we're talking about PG's proposal on how to
> > measure the quality of philosophical work, not how to distinguish right
> > from wrong.
>
> Fundamentally there's no difference.

OK, we'll just have to agree to disagree about that.

rg