From: RG on 27 May 2010 00:11

In article <1jj437b.zf95bx1aa6s0iN%wrf3(a)stablecross.com>,
 wrf3(a)stablecross.com (Bob Felts) wrote:

> Don Geddis <don(a)geddis.org> wrote:
> > wrf3(a)stablecross.com (Bob Felts) wrote on Wed, 26 May 2010:
> > > > > Is the universe deterministic at the quantum level?
> > > >
> > > > The propagation of the wave function is deterministic, yes.
> > >
> > > And the spin of a [electron] when the wave function collapses?
> > >
> > > Thanks. I'm familiar with the many worlds theory.
> >
> > Then I wonder why you seemed so confused about whether quantum
> > mechanics is deterministic (it is), and what happens when the wave
> > function "collapses" (it doesn't).
>
> Because, to the best of my knowledge, no experiment has been performed
> that confirms MWI over Copenhagen.

There is no such experiment. But the argument against Copenhagen can be
found here:

http://en.wikipedia.org/wiki/Quantum_decoherence

"Decoherence does not generate actual wave function collapse. It only
provides an explanation for the appearance of wavefunction collapse."

rg
From: RG on 27 May 2010 00:22

In article <1jj447w.1f1hh65zu6fy8N%wrf3(a)stablecross.com>,
 wrf3(a)stablecross.com (Bob Felts) wrote:

> RG <rNOSPAMon(a)flownet.com> wrote:
> > In article <1jj3vlb.34i945vdg9tuN%wrf3(a)stablecross.com>,
> >  wrf3(a)stablecross.com (Bob Felts) wrote:
> > > RG <rNOSPAMon(a)flownet.com> wrote:
> > > > In article <1jj3f74.150oc7skfxkhsN%wrf3(a)stablecross.com>,
> > > >  wrf3(a)stablecross.com (Bob Felts) wrote:
> > > > > And yet you went to the trouble to write a post on "Morality
> > > > > without God" which used the PD as a basis for moral behavior.
> > > >
> > > > My essay does not "use PD as a basis for moral behavior." It uses
> > > > PD as the basis of a scientific model of how moral intuition can
> > > > arise by Darwinian evolution. Until you understand the significant
> > > > difference between my actual thesis and your straw-man recasting of
> > > > it you may as well go argue with yourself.
> > > >
> > > > BTW, this is the THIRD TIME I have had to point out to you that you
> > > > are raising a straw man. It's really getting tiresome.
> > >
> > > I assure you, I'm not trying to manufacture non-existent issues.
> >
> > Then you need to learn to read more carefully.
> >
> > Now I am going to be cantankerous.
>
> If it were only that simple. Interpretation is hard, even for careful
> readers. Good grief, the US Supreme Court was split, 5-4, on whether or
> not the 2nd Amendment confers the right to bear arms on individuals or
> militias.
>
> So I'm not going to engage in a pissing contest by saying that either
> the justices should have been more careful readers, or the framers of
> the Constitution more careful writers.
>
> I want to engage in the clash of ideas, hopefully well articulated,
> hopefully properly understood, hopefully well supported. I don't want
> to engage in underhanded slights.

Sorry, I'm dealing with a very stressful situation in my personal life
right now.
My patience is wearing very thin.

> > > But you did write, "The third feature that makes evolved intuition
> > > attractive as a basis for morality...". If that's not what you were
> > > trying to convey, then I'll accept that.
> >
> > A *possible* basis for morality. Look at the title of the post.
>
> An *attractive* basis for morality - your words from the body of the
> post.

"Possible" and "attractive" are not mutually exclusive. And neither of
those words means the same as "actual."

> Should one receive more weight than the other? Doesn't the body of a
> work expand on, and perhaps take new direction, from the title?
>
> See? Interpretation can be hard.

It can be. But in this case you were simply overreaching.

> I'll take you at your subsequent word that the purpose of that post was
> to provide a possible explanation for how our moral intuitions came to
> be; and that it shouldn't be taken as a naturalistic explanation for
> what our moral intuitions ought to be.

That's right. But I'm not done yet.

rg
From: Don Geddis on 27 May 2010 01:22

wrf3(a)stablecross.com (Bob Felts) wrote on Wed, 26 May 2010:
> Don Geddis <don(a)geddis.org> wrote:
> > > On 2010-05-26 11:20:46 -0400, Bob Felts said:
> > > > I understand that. However, what makes the choice is, IMO,
> > > > irrelevant, whether it is an "immortal non-physical soul", or a
> > > > meat machine containing a random number generator.
> >
> > Bob, you need to keep in mind that there are more alternatives than
> > just either a soul, or a random number generator.
>
> Well, let's see.

I appreciate your extensive research, tracking back through this
much-too-long thread, to find and extract the relevant parts of what I
wrote. Given that reasonable effort, I conclude that I must have been
communicating poorly, rather than you deliberately ignoring my points.
My apologies.

To be clear, then: my point is that, when considering entities that may
be capable of free will, one possible choice is a soul (Ralph's);
another possible choice is a random number generator (yours); and a
THIRD possible choice is an ordinary deterministic algorithm that
happens to implement a decision procedure which operates on beliefs and
goals to produce action plans.

My criticism above was that you basically wrote, "it doesn't matter
whether free will comes from a soul or from a random number", while
completely leaving out my preferred alternative for free will (a
deterministic decision algorithm).

> knowledge isn't predestination.

Can you explain this more fully? You've said it a few times, and I feel
like I'm missing your point. Obviously, these are two separate concepts.
But in the context of a discussion on free will, we're talking about
whether one entity can know the decision that another entity will make
in the future, perhaps even before that second entity knows itself what
its own decision will be. What does "knowledge isn't predestination"
mean in that context?
> Correct me if I'm wrong, but you're using "determinism" in the sense of
> "able to be foreknown"

I don't think so. I simply mean: given the same inputs, the process will
always produce the same outputs. The result is "determined" by the
inputs. I'm not sure whether "able to be foreknown" means the same
thing.

> I'm using it in the sense of "predestined".

And I don't understand the distinction you're making either.

Let's say I write an algorithm for calculating the digits of pi. I ask
it what the trillionth digit is. I happen not to know what the answer
is, but there is SOME answer, and only one, and the (deterministic)
program will eventually produce it, whatever it is. And no matter how
many times I run it over and over again with the same question, it will
always produce the same answer. That's what deterministic means.

In principle, you could predict the result ahead of time, at the worst
by making a model of the entity you want to predict, and then emulating
the operation of the model, and finally reading off the output. In
practice, many things (like the Nth digit of pi) apparently have no
shortcut, so there's really no faster way to find out what the answer
will be, other than just having the original algorithm do its
computation and tell you the answer. So in practice, you find that the
results are not predictable. But they're still deterministic. They'll
still give the same answers for the same inputs, every time.

> Suppose we want to create an AI that mimics people. I know, from
> self-reflection, that we're going to need a module that simulates the
> imagination. I don't know what gives a person creative power, but I
> suspect that some type of random number generator could be used

You know so little about how imagination or creativity works. It's a
little premature for you to conclude that a random number generator is
an important part of it.

> especially since the imagination isn't limited by reality.
Imagination and reality are two different things, with partial overlap.
Sometimes I get the impression that you think imagination is a superset
of reality. You should realize that there are surely things in reality
that haven't occurred to any human's imagination. All we really know
about the Venn diagram is that they overlap. There are things about
reality that we can imagine. And there are surely things that are only
in imagination, or only in reality, but not in the other.

> I know, from self-reflection, that our moral sense is based on some
> kind of "distance" measurement between "is" and "ought", and that
> "ought" resides in the realm of imagination.

I think you trust your introspection far too much. You don't actually
know much about how you got the "moral sense" that you have.

There is the real world, and there are possible worlds that you can
imagine, yes. And morality ("ought") can be thought of as one kind of
evaluation of possible worlds. But that doesn't mean that "ought" is "in
the imagination". It's a function that is applied to the imagination.
You don't really know where that function comes from, though.

> So I'm going to need heuristics that "measure" goals against
> imagination space, i.e. converts "ought" to "is".

But goal seeking isn't the same as morality. Any AI planner does this
(without random numbers!), considering possible actions, imagining the
worlds that would result upon executing the actions, and eventually
coming up with an action plan that maximizes its utility based on its
goals. That doesn't necessarily have anything to do with morality, or
with random numbers.

> Out of the 23 messages you've posted on the subject, that encompasses
> most of what I think the issues are.

Perhaps one difference between us is: I think free will can be discussed
in the absence of a discussion about morality.
You can leave out souls and morals, and just ask the more narrow
question of whether it is possible to "freely" make choices (that
maximize your goals?), whatever that might mean.

> The one remaining one is:
> | What Bob is looking for is, are we able to make whatever choices we
> | wish to make? The answer is yes.
> Even if I grant this (and if I have a problem with it, I'm not sure I
> know what it is, except an as-yet-unexamined sense of unease), that's
> typically not what is meant by free will. We're running into
> definitional issues. Some would contend that the proper way to ask this
> question is "are we able to make whatever/some choices that we ought to
> make?" But that opens up a can of worms.

There are actions that you are physically capable of performing. That
seems to be a superset of the actions that you "should" perform. So when
I say that free will is about whether you are able to decide to take
whatever action you want (that is feasible), that seems to encompass
your concern about at least having the ability to perform the actions
you "ought" to take. Doesn't it?

> So, do I not understand what you're saying, or is disagreement being
> taken as lack of understanding? If I don't understand you, it certainly
> isn't because I don't want to, nor am I being contrary simply for the
> sake of being contrary.

I appreciate, again, your research through this thread. The sole
disagreement I was complaining about (at the beginning of this email) is
your lack of acknowledgment that there is a compelling version of free
will that requires neither a soul nor random numbers. I thought you had
deliberately ignored that case, but now my guess is that I haven't yet
been able to communicate it to you.

-- Don
_______________________________________________________________________________
Don Geddis                    http://don.geddis.org/          don(a)geddis.org
If trees could scream, would we be so cavalier about cutting them down?
We might, if they screamed all the time, for no good reason.
-- Deep Thoughts, by Jack Handey
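Don's digits-of-pi thought experiment is easy to make concrete. The sketch below is my own illustration, not anything posted in the thread; it computes pi via Machin's formula using plain integer arithmetic. Run it as often as you like with the same input and it yields the same output every time, even though there is no way to learn that output short of doing the computation: unpredictable in practice, yet fully deterministic.

```python
def arctan_inv(x, scale):
    # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..., evaluated in
    # fixed-point integer arithmetic scaled by `scale`.
    power = scale // x
    total = power
    k = 1
    while power:
        power //= x * x
        term = power // (2 * k + 1)
        total += term if k % 2 == 0 else -term
        k += 1
    return total

def pi_digits(n):
    """Return pi to n decimal places as a string, via Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    scale = 10 ** (n + 10)   # ten guard digits absorb truncation error
    pi = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    s = str(pi)
    return s[0] + "." + s[1:n + 1]

# Same question, same answer, on every run: that is all "deterministic"
# means. Knowing the answer in advance is a separate matter entirely.
print(pi_digits(30))                   # 3.141592653589793238462643383279
assert pi_digits(30) == pi_digits(30)
```

Nothing in the code consults a clock, a seed, or the outside world, so its output is fixed by its input; yet before the loop finishes, no one (including the author) knows what the requested digit is.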
From: His kennyness on 27 May 2010 02:15

On 05/26/2010 10:13 AM, Raffael Cavallaro wrote:
> On 2010-05-25 09:41:55 -0400, Bob Felts said:
>
>> Just curious, but why? Truth isn't decided by numbers, is it?
>
> The truth of what most people *actually do believe* is, in fact,
> determined by the numbers of people who actually do believe that thing
> (no surprise there!).
>
> I've never once said I actually agree with these people who believe in
> a soul that gives them free will or that such a belief is true.
>
> Articulating a deterministic, compatibilist position is articulating a
> kind of "free will" that such people (who are billions in number) would
> definitely not consider to be real free will. I understand completely
> that *you* consider such a deterministic thing to be free will - I'm
> just saying that what *you* believe about free will is more or less
> irrelevant *to them*, since they define free will as
> *non-deterministic* moral choice, exercised by an immortal,
> non-physical soul. The compatibilist "free will" strips out one of the
> essential features of *their* notion of free will - i.e., the word
> "free" in free will means "non-deterministic."
>
> Again, I think our positions are irreconcilable. You think the concept
> of free will can be reformulated to be compatible with determinism; I
> think it is an inherent part of the *definition* of free will that it
> is non-deterministic.

So by your definition, what we take to be a free-will choice must have
no cause, meaning it must follow from some form of roll of the dice,
which is not at all what anyone takes free will to be.

Nice work, Einstein.

kt
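The "ordinary deterministic algorithm" that Don offers upthread as a third alternative (neither soul nor dice) can be sketched in a few lines. This is my own toy illustration; every name in it is invented. The agent deliberates by imagining the world each available action would produce, scoring each imagined world against its goals, and committing to the best plan. There is no randomness anywhere: same beliefs and goals, same choice, every time.

```python
def choose(state, actions, imagine, utility):
    """A deterministic decision procedure: for each available action,
    imagine the resulting world, evaluate it against the agent's goals,
    and commit to the highest-scoring plan. Ties go to the action
    considered first; nothing here rolls dice."""
    best_action, best_score = None, float("-inf")
    for action in actions:
        imagined_world = imagine(state, action)   # predicted successor
        score = utility(imagined_world)           # how well it serves the goal
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# A toy agent on a number line whose goal is to stand at position 5.
moves = {"left": -1, "stay": 0, "right": 1}
imagine = lambda pos, act: pos + moves[act]   # belief: how actions change the world
utility = lambda pos: -abs(5 - pos)           # goal: closer to 5 is better

print(choose(0, ["left", "stay", "right"], imagine, utility))  # right
```

The choice is fully caused (by the state, the world model, and the goals), yet it is still a genuine selection among alternatives, which is exactly the case a "no cause, therefore dice" framing leaves out.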
From: Nick Keighley on 27 May 2010 04:08
On 26 May, 15:48, w...(a)stablecross.com (Bob Felts) wrote:
> RG <rNOSPA...(a)flownet.com> wrote:
> > In article <1jj2env.bhkk2z1l5wzb4N%w...(a)stablecross.com>,
> >  w...(a)stablecross.com (Bob Felts) wrote:
> > > > > > Theologians deal with axiomatic systems just as much as
> > > > > > logicians do.
> > > > >
> > > > > That may be, but their axioms a.k.a. holy texts are not
> > > > > constrained by objective reality
> > > >
> > > > That's simply not true. You are defining "objective reality" to
> > > > exclude a very objective phenomenon. [what phenomenon?]
> > >
> > > Intelligence.

<snip>

> See my response to Don on this. It's always "an invisible pink
> unicorn"; never "an invisible _intelligent_ pink unicorn".

It's trivial to add that property. I don't think anyone ever realised it
was important. I've got calculus-performing invisible elves at the
bottom of my garden.

<snip>

> > > > > > Since morality exists only in minds with creative power
> > > > >
> > > > > But that's not true. If you think it is then you have completely
> > > > > missed the point of Axelrod's work.
> > > >
> > > > It is absolutely true. Axelrod started with an arbitrary "ought"
> > > > (selfishness is good), and lo and behold, discovered that people
> > > > act in a way to maximize their selfishness.
> > >
> > > No. You are as wrong about this as Ralph is about QM. What Axelrod
> > > discovered is that behavior that appears structurally similar to
> > > what we call moral behavior can arise from processes that obey the
> > > laws of Darwinian evolution. There is no "ought" about it.
> >
> > Of course there is. There is an "ought" to _everything_.

Nonsense, if you mean "ought" in the moral sense. Most of the universe
ticks along without it. I think a lot of people believe this, but
they're wrong: "New Orleans shouldn't have been hit by a hurricane."

> > There is nothing that _is_ about which we cannot say "this
> > ought/ought not be".
>
> Oh? Ought the earth rotate on its axis?
> Ought the earth to rotate at all? Who sez?

Dunno. If you are talking "it's reasonable to expect it to", then yes:
given the laws of physics and the earth's history, it ought to be
rotating. Uranus, which is rotating in a very odd fashion ("it's not
rotating how it ought to be"), requires some auxiliary hypothesis. I
think most astronomers expect this to be some history we don't yet know
about (I'm assuming there isn't yet a good explanation).

> Ought the stars shine in the sky?
>
> One day they won't. Why now?

We just happen to live at a point in time when they do. It'd be pretty
chilly and generally high-entropyish if they didn't. Weak Anthropic.

<snip>

> > Darwinian evolution isn't "better" than anything, it's just how we
> > got here.
>
> And yet you went to the trouble to write a post on "Morality without
> God" which used the PD as a basis for moral behavior.

You say that like it's a bad thing.

> Morality is concerned with what we _ought_ to do.

We ought to be nice to people so they'll be nice to us.

> As you wrote, you can say, "my morality comes from a moral intuition
> wired into my brain by evolution according to Axelrod's model".

Not wired, really. Enlightened self-interest goes a remarkably long way.
As does the Golden Rule.

> That is, "what I think is good comes from the wiring in my brain as
> produced by evolutionary mechanisms". Part of that is motherhood and
> apple pie, and it has commonality with what I said: that "morality
> exists in minds with creative power". But you left out the "creative
> power" part, which means you've missed out on the essence of what's
> going on, since that's where "ought" comes from.

That's a different "ought" from the earth spinning on its axis. Are
chimps "creative powers"? Crows? I'm trying to come up with an example
of moral behaviour without intelligence. No luck so far! If IPD is so,
then even simple things ought to be subject to it...
> But explaining why something is doesn't explain why it ought to be
> that way.

That's because I (and probably other posters) don't use "ought" the way
you do. To me it's EITHER just and right OR it's historically
contingent. And the two meanings are quite distinct. People who
conglomerate them also tend to say things like "but you can't do that!"
Just after I've done it.

> You try to do that by giving three reasons: it changes over time (why
> is this good, again? It matches what is? Is-ought fallacy);

I'm only up to two.

> it embraces religion

Not in my world.

> (but in an extremely odd way by defining God as the product of man,
> instead of vice versa);

<shrug>

> and transcendence of short term needs (except that it doesn't work for
> the Kobayashi Maru variant I gave above). What you really should have
> said is that "this theory gives me warm fuzzies because it allows a
> paradigm that I favor to explain more of what I see."
>
> The bottom line is that you're trying to justify an inherently selfish
> way of living. If something is done for someone else, it's only for
> what you get out of it. Selfishness is immoral. Didn't your mother
> teach you that?

Yes, but only as a strategy for maximising my self-interest! (Yes, I
know my mother didn't see it that way.) The trouble is, atheists aren't
in general particularly selfish. And some religious people are. I've
helped strangers when there is no obvious short-term payback (but it
does have the payback that I can use the examples now!)

> > > Are you a slave to your genetics?

Well, I doubt I'll run a three-minute mile.

> > Part of me is. Part of me isn't. This is one of the interesting
> > things about being human -- we're hosts to two different kinds of
> > replicators: genes and memes.

For those of us that believe in memes...

[...]

> Would you be so kind as to post a link in comp.lang.lisp when you get
> it done? I'm very interested in knowing which part of you is a slave.
> Maybe the will? Maybe your ego?
I'm not sure these things are even real.

> Are you a slave to your sense of self?
> "I am the master of my fate; I am the captain of my soul"?

"I am the captain of this ship, and I have my wife's permission to say
so."

<snip>

> How do you figure out what the right thing to do is?

Upbringing? Sense of fairness?

> Can science answer the question? (Hint: it can't, any more than
> science can say which value one roll of a die will turn up. It can say
> it has to be 1-6, and it can say that over time an unbiased die will
> show values at certain probabilities. But it can't say "for this roll
> it ought to be 3").

Well, it might if it knew enough about initial conditions. Perfect die,
in a vacuum, struck with a known impulse, from a known position.

> We can argue over the mechanism in our brains that gives rise to
> creative power; whether it is some form of randomness (hence the above
> example);

Doubtful. People aren't very good at "random".

> or whether it comes from some metaphysical "soul" stuff

I don't see how the metaphysical can have physical effects.

> (which I can only model on randomness).

Odd.

> I try to stay worldview-neutral when discussing these things; but it
> has to be taken into account.
>
> > > > Moral behavior only arises in an iterated PD where the number of
> > > > rounds is not known in advance. Did you bother to read the post I
> > > > referred you to?
> > >
> > > Yes, I read it. Were I to channel Dirac, I'd say "it's not right.
> > > It's not even wrong." Moral behavior arises out of everything we
> > > do, because we can (and do) compare every is to multiple oughts.
> > >
> > > Furthermore, are you saying that if you were alone, that any action
> > > you took would be moral? You don't even judge yourself?

Would I be alone for ever?

> > Being alone is a red herring.
>
> On the contrary. It's an important boundary condition.
>
> > I can have an impact on other people (and they in turn on me) even if
> > we are not in close proximity.
> > Witness what is happening in the Gulf of Mexico right now. But yes,
> > anything that you do that doesn't have a negative impact on someone
> > else is moral.
>
> So you don't consider the impact on _yourself_ to be a moral issue?
> Fascinating.

Give examples.

<snip>

> Theodicy is a particularly interesting subject, since it typically
> boils down to two incorrect notions. The first is that there is a
> standard of good and evil to which both God and man must adhere. God
> is not good because He doesn't measure up to that standard (whatever
> the heck that standard is supposed to be).

Do No Unnecessary Harm. Protect The Weak. If your god can't manage
these, then he is, in my book, not good.

> The second is that "God does things I don't like, therefore He isn't
> good." That's the really interesting one.

Covered by the first, I think.

<snip>

> > > That's what I'm asking you. I've made the claim that science cannot
> > > answer that question, since it deals with *is*, not *ought*. Don
> > > Geddis disagreed. Got a scientific answer?
> >
> > Yes, but it's complicated, and I can't do it justice here. But I'll
> > be writing about it on my blog in the coming weeks.
>
> As I said previously, let me know when you get it posted. I suspect
> that you'll take an arbitrary "ought" (e.g. "we (really I) ought to
> survive") and will derive subsequent oughts. We rarely argue that two
> plus two ought to equal four, since it's a product of combining a
> certain set of axioms with a certain set of logical operations. We
> rarely argue that the sky ought to be blue (or the earth should
> rotate), since that's the way it is; but that's due to a lack of
> imagination on our parts.

No. I absolutely disagree.

> Picking an arbitrary "ought" and creating a construct on top of that is
> like building on sand. You simply can't cast an arbitrary ought into
> concrete, any more than you can say "on this roll, this 1000-sided die
> ought to show 456".
> Unless the creative power of our minds is, in fact, deterministic. But
> I don't suspect you'll be showing that.

I think you're still misunderstanding what he said. He's talking about
how moral behaviour can arise, not whether it "ought" to arise. You need
a god to explain it? Occam.

On balance, I'd rather live in a society that follows basic morality,
because it is a more comfortable society for me to be in. I don't want
no-go zones in my cities or gated communities for the rich. I'd rather
live in a stable, democratic and tolerant society (despite its flaws)
than Somalia.
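The Axelrod result the thread keeps circling can be reproduced in miniature. The sketch below is my own toy version, not Axelrod's tournament code, using the standard prisoner's-dilemma payoffs (mutual cooperation 3, mutual defection 1, defecting against a cooperator 5, being exploited 0). Tit-for-tat has no notion of "ought" at all, yet sustained cooperation leaves it far better off than defection manages; that is the sense in which behavior structurally similar to moral behavior can arise from nothing but repeated self-interested play.

```python
# Standard PD payoffs: (row player, column player) for each pair of moves.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds):
    """Play an iterated PD between two strategies, returning total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat, 10))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect, 10))  # (9, 14): exploited once, then punished
```

Note the caveat RG raises upthread: cooperation is only stable when the number of rounds is not known in advance (with a known final round, defection unravels by backward induction). The fixed `rounds` argument here is purely for demonstration.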