From: rabid_fan on 15 Mar 2010 09:36

On Mon, 15 Mar 2010 05:35:44 -0700, Huang wrote:

>> No. In other words the human organism is not equipped to comprehend
>> quantum phenomena.
>
> - not true.

Explain why it is not true. Or are you some mighty potentate just issuing a terse declaration that is to be accepted by all?

If it were not true, then human beings would have envisioned quantum phenomena thousands of years ago, just like Democritus envisioned atoms.

Paint us a vivid picture of an electron or a quark so that we may behold with ultimate clarity the totality of its true nature within our minds.

Come on. We are all waiting.
From: rabid_fan on 15 Mar 2010 09:54

On Mon, 15 Mar 2010 08:15:49 -0400, J. Clarke wrote:

> If that is true eventually we will find something we can't deal with.

We've already found something. Look in a mirror.

When the ancient wise men said, "Philosopher, know thyself," they didn't mean you.
From: tonyb on 15 Mar 2010 13:28

On 15 Mar, 13:14, rabid_fan <r...(a)righthere.net> wrote:
> On Mon, 15 Mar 2010 02:58:57 -0700, tonyb wrote:
>
> > okay, just to be clear, I'm not talking about probability in
> > relation to wave equations. I'm talking about the probability that
> > a descriptive mathematical statement about the behaviour of a
> > phenomenon is correct.
>
> > Having put all theory into the same category - namely description -
> > we then create models about the 'appearance of things' in terms of
> > mathematics, based in observation but also, and very importantly,
> > guided by the consistency of our theory with other more supported
> > theories. (e.g. a theory that breaks some relativistic principle is
> > possible, but less likely; as an aside, it's this prior probability
> > assigned to a theory in terms of pre-existing theories that made us
> > reluctant to modify Newton's ideas in the first place)
>
> > So then we use experiment to test the theory - but we'll always get
> > systematic and random noise over our data - there is never a
> > perfect fit; often we must also invoke secondary theories to remove
> > unwanted artefacts from our data. So, in fact, when we demonstrate
> > the data is consistent with a theory, we should (I would argue)
> > assign a probability to that theory and also take into account the
> > current probability of any invoked secondary theories.
>
> That's not how science works.

Sorry, which part(s) are incorrect?

> If there are anomalous data, such as outliers or other unexpected
> values, this data could be noise or it could be valid. More
> experimentation is necessary.

This assumes we have infinite resources to carry out experiments. However, they are finite and (sadly) getting smaller by the day. At some point we stop repeating experiments and choose which new ones to carry out.
My question is about being able to say when it is okay to stop, and which avenues are best to pursue, perhaps based in probability.

> When beta decay was first observed, a distribution of electron
> energy was measured that was inconsistent with the well established
> principle of the conservation of energy. What were the hypotheses?
> 1) Conservation of energy may not be valid at the sub-atomic level.
> 2) Another emitted particle may carry some of the energy.
>
> Any probability assigned to either hypothesis can only be subjective.
> There is no way to rigorously define a probability in the same way
> we can describe a "goodness of fit."

This is a good example. (I'm probably missing something, but here is where my question lies.) Why is it not possible to use goodness of fit to assign a probability to a hypothesis? If we state our model we can create a probability density function which, given our data, provides us with a measure of likelihood. Using the rules of probability, can't we (in principle) then combine the likelihoods of multiple experiments, e.g. all those that support conservation of energy? In this case, such a prior probability would strongly indicate dedicating the majority of our resources to the second possibility - the undetected particle - because it is supported by the most likely hypothesis. Couldn't we formalize this process?

> The only true way out of the conundrum is more experimental work.
>
> Subjective probabilities can be useful in an engineering sense, e.g.
> the likelihood that a nuclear power plant will explode, but in pure
> scientific work they can only be a rough guide to further directions
> of investigation.

I don't see how these probabilities are necessarily subjective. Given a statistical model, aren't these probabilities actually objective, because they are based only on our measured data and model? The probability of any supporting hypothesis can also be factored in, and combined into a belief network.
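(For concreteness, the likelihood-combining scheme I have in mind can be sketched in a few lines. This is a toy Bayesian comparison of the two beta-decay hypotheses above; every number in it is made up purely for illustration, not a real measurement.)

```python
import math

# Toy Bayesian comparison of two exhaustive hypotheses:
#   H1: conservation of energy fails at the sub-atomic level.
#   H2: an undetected particle carries away the missing energy.

def combined_log_likelihood(log_likelihoods):
    """Independent experiments combine by multiplying likelihoods,
    i.e. by summing log-likelihoods."""
    return sum(log_likelihoods)

def posterior(prior_h1, loglik_h1, prior_h2, loglik_h2):
    """Bayes' rule for two mutually exclusive, exhaustive hypotheses."""
    w1 = prior_h1 * math.exp(loglik_h1)
    w2 = prior_h2 * math.exp(loglik_h2)
    return w1 / (w1 + w2), w2 / (w1 + w2)

# Hypothetical per-experiment log-likelihoods of the observed electron
# spectrum under each hypothesis (invented values):
ll_h1 = combined_log_likelihood([-4.0, -3.5, -5.0])
ll_h2 = combined_log_likelihood([-2.0, -1.5, -2.5])

# The prior heavily favours H2, because conservation of energy is
# supported by essentially every other experiment ever performed:
p1, p2 = posterior(0.01, ll_h1, 0.99, ll_h2)
print(f"P(H1|data) = {p1:.6f}, P(H2|data) = {p2:.6f}")
```

The hard part, of course, is exactly what rabid_fan objects to: where the priors and the list of candidate hypotheses come from in the first place.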
Or are there other factors I've overlooked; things that we can't model or take into account?

> > The second part of this problem, for me, is that pure mathematics
> > lives in a realm of pure reasoning, uncorrupted by doubt (although
> > often abstracted from reality). In order to apply it to physics, we
> > have to assign meaning to all our variables: c becomes the velocity
> > of light, etc. In order to do this, we must take into account *how*
> > we are measuring that variable. If we use different methods, we
> > might actually be measuring different phenomena. Without any
> > understanding of the probability of our hypothesis or phenomena,
> > this could be an assumption which is *probably* unreasonable.
>
> > So my question is, how do we as scientists reason within an
> > uncertain (essentially probabilistic) framework in a rigorous
> > manner; but primarily, how do we take this issue into account on a
> > daily basis, as we go about our business? (I hope I've nailed my
> > question this time.)
>
> We deal with it by not reasoning. We experiment.
>
> The art of pure reason, and perhaps subjective probabilities, died
> with Plato.

Hear, hear! I strongly agree; I'm no rationalist. And there's the rub. Given a finite set of resources to investigate a massive hypothesis space, how would I attempt to decide which experiments to do? When should I stop re-checking results?

I'd like some kind of probability measure for a new hypothesis, based on its supporting hypotheses, model and (eventually) data. But if you are saying that is not possible (and if that is the case, I'd really like to know why), what do you think is a good rule-of-thumb for making these decisions on a day-to-day basis?

PS: out of curiosity, I'd be interested in knowing what research you are doing.
From: rabid_fan on 15 Mar 2010 15:48

On Mon, 15 Mar 2010 10:28:40 -0700, tonyb wrote:

> This assumes we have infinite resources to carry out experiments.
> However, they are finite and (sadly) getting smaller by the day.

Then science will sadly die. There is no substitute for experimental work.

> I don't see how these probabilities are necessarily subjective.

The whole idea of a probability depends on an "outcome space," which represents in a highly rigorous manner every possible result (or outcome) regarding a system of interest. If there is no outcome space there can be, strictly speaking, no measure of probability.

You are somehow proposing that there is some outcome space of hypotheses, but such an outcome space is clearly beyond construction. (see below)

The "automated hypothesis selection" approach that you hope to formalize is already practiced intuitively. Some may even call it common sense.

> Given a finite set of resources to investigate a massive hypothesis
> space, how would I attempt to decide which experiments to do?

Check with the social authorities to determine the acceptable interpretation method. If I find a dinosaur bone, I can claim that it represents the remnant of a very ancient reptilian life form. But this claim may not be permitted in Kansas, Texas, Saudi Arabia, or other states that are opposed to the evolutionary hypothesis. In southeast Asia, the dragon hypothesis is in vogue. Scientists are paid by the state, and whoever pays the piper names the tune.

> I'd like some kind of probability measure for a new hypothesis, based
> on its supporting hypotheses, model and (eventually) data. But if you
> are saying that is not possible (and if that is the case, I'd really
> like to know why), what do you think is a good rule-of-thumb for
> making these decisions on a day-to-day basis?

A hypothesis does not precede the data. A hypothesis is not a priori. A hypothesis is immanent within data or observation.
You are proposing that we construct an outcome space using nothing. Well, then, anything can happen.

> PS: out of curiosity, I'd be interested in knowing what research you
> are doing.

I am the CEO of the Institute For Everything.
From: rabid_fan on 15 Mar 2010 17:16
On Mon, 15 Mar 2010 10:28:40 -0700, tonyb wrote:

> Couldn't we formalize this process?
>
> Given a finite set of resources to investigate a massive hypothesis
> space, how would I attempt to decide which experiments to do?

Easy! We can put a hundred monkeys in front of a hundred keyboards and, give or take 10^130 years, amid the massive gibberish will arise a massive hypothesis space.

Now put yourself back in the early 20th century. A certain hypothesis is found in your hypothesis space. It is the idea that matter -- yes, hard and palpable matter -- possesses wave-like characteristics. What probability do you assign to this hypothesis? Well, after you stop laughing, you rate it about P = 10^-104, toss it aside, and then keep dipping into your "space" for something else.

Amazing! Your formalized approach has just saved the planet from yet another scientific revolution.

How would you rate this hypothesis: insect regulatory genes are also found in the human organism? Using the best information available, you would also be forced to give it an extremely low likelihood. Voila! Another scientific revolution is nipped in the bud.

Your formalized scheme gets high points for neatness, but it fails miserably everywhere else. Science is creativity in the true sense, i.e. creation ex nihilo, and such things are beyond all accountability.