From: unruh on
On 2010-02-10, Spinner <nospam(a)nospam.net> wrote:
> Mok-Kong Shen <mok-kong.shen(a)t-online.de> wrote:
>
>>bmearns wrote:
>>
>>> Well my background in quantum mechanics is even more limited than my
>>> background in information theory, but as far as I know, quantum
>>> processes are generally accepted to be strictly random; based, I'm
>>> sure, on complex mathematics that are far above my head.
>>
>>As I alluded to, the present argument is "theoretical", or one
>>might say, pedantic, and has no practical significance. Invariably,
>>one accepts in practical life, depending on individuals, something
>>as "ideal", even though that may in fact not be so. For instance, I
>>would have considered myself a rich person, if I possessed one million
>>dollars, but to a multi-billionaire that might be a tiny little sum.
>>For randomness, I would accept the outcomes of dice that I buy from
>>a game shop to be entirely satisfactory, while others would claim that
>>there is inevitably bias in manufacture and therefore they are no good. In
>>other words, there is no boundary to "perfectness", just like "infinity"
>>has no boundary, and hence both are never "exactly" achievable but can
>>be better approached further and further without limit, if one works
>>hard, spends enough resources (money, time, etc.) and as the science
>>and technology advance with time. On the other hand, one needs concepts
>>that are "ideal" that represent the ultimate (even though consciously
>>not exactly attainable) goals that one "should" achieve, for it is only
>>with the existence of them that one obtains "directions" in which one
>>can sensibly proceed in life (instead of doing some "random walk").
>>
>>M. K. Shen
> Spontaneous nuclear decay is proven to be a quantum effect, and such
> effects are proven to be random.

It is not "proven" in either case in the sense of a mathematical proof.
It is observed, and the observations are consistent with that randomness.
And the theory-- which has a very wide range of applicability-- says it
is random.

>
> This is known because a WHOLE lot of tech depends on the math related
> to nuclear physics. To prove it's not random is to prove general
> relativity wrong. Many holes have been blown in the desert and the

Hardly. General relativity and quantum mechanics have nothing to do
with each other; so much so that reconciling them is one of the real
puzzles of modern physics.

And those holes in the desert really have nothing to do with whether
or not the decay is random. Even if it were not random, the holes
would still have appeared.

> pacific ocean proving that nuclear physics has a sound theoretical
> basis.
>
> Hence, decay is provably random and spontaneous MATHEMATICALLY and
> practically. The ability to detect decay products is proven. A
> number based on detection of a random process is provably random.

No. It is not proven. The theory says it is true, many experiments
have been shown to be consistent with the theory, and no other
consistent theory has been advanced, but that does not "prove" it in
the sense of a mathematical proof.

>
> If you are patient, build a cosmic ray coincidence detector, a random
> event.

???

>
> You generally never give up on your pedantic side-trips throwing in a
> lot of verbal noise when the alternative would be to admit you
> actually got something wrong. Arguing that we can't 'prove' we exist
> to a metaphysical certainty is just noise and smoke, turning a logical
> discussion to philosophical and vice versa to suit your purposes.
>
> It is stupid but I have to admit it's entertaining.
>
> --
> 2+2!=5 even for extremely large values of 2
From: bmearns on
I hope you don't mind if I condense this into a single response:

On Feb 9, 5:36 pm, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
[snip]
> Even if the entire message is 2*128 bits, what is added in can't be
> more than 0.5 bit per bit (of the bits being processed), can it? That's
> my point, no more no less. See also my response to your other post.
>
> M. K. Shen

Yes, that exact point is correct. Mixing 128 bits of data into an
existing data set, using some deterministic method, cannot possibly
add more than 128 Shannons of entropy to the data. That means that if
the message has n bits, it will increase the average entropy per bit
by no more than 128/n Shannons. However, that's still not relevant
to security.
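To make that bound concrete, here is a minimal sketch (Python; the function name is mine, not from the thread):

```python
def max_entropy_gain_per_bit(added_shannons, message_bits):
    """Deterministically mixing data carrying at most `added_shannons`
    Shannons of entropy into a message of `message_bits` bits can raise
    the average entropy per bit by at most added_shannons / n."""
    return added_shannons / message_bits

# 128 Shannons spread over a 2*128 = 256-bit message: at most 0.5/bit.
print(max_entropy_gain_per_bit(128, 256))   # 0.5
print(max_entropy_gain_per_bit(128, 4096))  # 0.03125
```

The longer the message, the more the fixed 128 Shannons are diluted.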


On Feb 9, 5:42 pm, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> unruh wrote:
[snip]
> > Calculate the frequency distribution of the letters. Do -Sum P_i ln(P_i)
> > where P_i is the normalised frequency of the ith letter. This is an upper
> > bound. You can also look for correlations to refine the estimate. Do
> > this for a bunch of sample texts. Not hard.
>
> I don't have that impression. I vaguely remember (hence I didn't mention
> it in my previous post) having seen Shannon's paper where he obtained
> an estimate. If I remember right (I am not sure), experiments using
> test subjects were done.
>
> M. K. Shen

Yes, there are ways to do it using test subjects as well. The
information contained in a message is directly related to the number
of possible meanings it could have. I believe the experiment you're
referring to involved something like this: present a person with a
sample of text; their ability to correctly predict the next character
in the text is inversely related to the entropy density of the language. For
instance, a sample text in English might be "This is my fathe". The
next character will almost certainly be an 'r'. You know this because
there is a great deal of redundancy in the English language: you know
that there are few (if any) other English words that start with
"fathe", you know that the missing word is probably a noun, you know
that this would be a common sentence when introducing a masculine
parent, etc. In other words, actually putting the 'r' there conveys
almost no additional information that you couldn't already get from
the rest of the sentence and your knowledge of the English language.
Therefore, the entropy contained in that particular 'r' is almost
zero. This is actually a more rigorous method for determining the
entropy of a natural language compared to counting frequencies of
characters, or even of bigrams, trigrams, etc., because there is
redundancy built into not just the characters themselves, but the
actual structure/syntax of the language and in its relationship to
the real world (i.e., its semantics). I think this may have been what
unruh was getting at when he suggests that looking for correlations
will refine the estimate. Still, counting letter frequencies will
certainly give an estimate (an upper bound, I think) of the entropy
in the language.
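For what it's worth, the single-letter frequency estimate is easy to compute; here is a rough sketch in Python (the sample string is made up, and as noted this only gives an upper bound on the true entropy rate):

```python
from collections import Counter
from math import log2

def entropy_per_char(text):
    """Estimate entropy per character from single-letter frequencies:
    H = -sum(p_i * log2(p_i)). Ignoring bigram and higher-order
    correlations makes this an upper bound on the language's true
    entropy rate (roughly 1 Shannon/char for English by Shannon's
    guessing experiments, versus about 4 from letter frequencies)."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(entropy_per_char("this is my father"))  # well below log2(27)
```

Refining it with bigram or trigram counts only lowers the estimate, in line with the "look for correlations" suggestion above.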


On Feb 10, 3:54 am, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> bmearns wrote:
> > You're assuming causation where none is required. AES also generates
> > binary ciphertext, that doesn't mean that this is the reason it
> > doesn't have perfect security. Likewise, just because it doesn't have
> > maximum entropy doesn't mean this is necessarily the reason it isn't
> > perfectly secure.
>
> I don't think that your second sentence ever touches what I argued.
> As to the issue of perfect security, I suppose that what is in any
> textbook about OTP contradicts your third sentence.
>
> M. K. Shen

Your argument was:
> My problem is this: AES-encrypted plaintext bits (let's
> forget that they come originally from symbols) don't have perfect
> security. Can't one view this fact from the viewpoint that this is
> because this sequence of bits doesn't have full entropy? If not, why not?

Strictly in terms of logic, your assumption is fallacious. (Note, that
doesn't necessarily mean your primary conclusion is incorrect). Your
argument goes as follows:
Let A be the clause that a cipher does not provide perfect security.
Let B be the clause that a cipher output does not have full entropy.
Your argument above asserts (correctly) that for AES, A is true and
B is also true. Your assumption from this is that B implies A, or in
other words: if a cipher output doesn't have full entropy it can't
have perfect security. Cryptography aside (just for a moment), that's
not a valid conclusion from the assertions. Just because two clauses
are both true, it doesn't mean that one causes the other. This is what
I meant about your argument assuming causation when it's not required.

But enough rhetoric, back to cryptography. Your primary point (as I
understand it) is that if a ciphertext doesn't have full entropy, then
the cipher cannot provide perfect security. Consider a simple counter-
example: a plaintext with 100 bits and an average entropy rate of 0.1
Shannons per bit, and a cipher algorithm as follows: a 100 bit OTP,
with 1 Shannon per bit, is XORed with the plaintext. For each resulting
bit, if the bit is a 1, the cipher outputs the bit sequence '01', and
if the bit is a 0, the cipher outputs the bit sequence '10'. So now
you put in 10 Shannons of entropy with the plaintext, and 100 Shannons
with the OTP: the maximum entropy of the ciphertext is 110 Shannons
(in reality, it would only be 100 Shannons, but that doesn't matter).
So now the ciphertext has, at most, 110 Shannons in 200 bits. It's a
stupid cipher, but it does provide perfect security even though the
output doesn't have full entropy. Let me also point out that any
cipher can (theoretically) be altered to provide full entropy in the
ciphertext by compressing the output.
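A sketch of that toy cipher in Python (hypothetical code, just to make the counter-example concrete; the `secrets` module supplies the pad bits):

```python
import secrets

def toy_encrypt(plaintext_bits):
    """XOR the plaintext with a fresh one-time pad, then expand each
    result bit: 1 -> '01', 0 -> '10'. The output has at most n Shannons
    in 2n bits (so not full entropy), yet the OTP step alone already
    provides perfect secrecy."""
    pad = [secrets.randbits(1) for _ in plaintext_bits]
    xored = [p ^ k for p, k in zip(plaintext_bits, pad)]
    return "".join("01" if b else "10" for b in xored), pad

def toy_decrypt(ciphertext, pad):
    # Undo the expansion ('01' -> 1, '10' -> 0), then strip the pad.
    xored = [1 if ciphertext[i:i + 2] == "01" else 0
             for i in range(0, len(ciphertext), 2)]
    return [x ^ k for x, k in zip(xored, pad)]

message = [1, 0, 1, 1, 0]
ciphertext, pad = toy_encrypt(message)
assert len(ciphertext) == 2 * len(message)
assert toy_decrypt(ciphertext, pad) == message
```

The expansion step is deliberately wasteful: it halves the entropy density without leaking anything, since it is applied after the pad.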

To reiterate unruh's point, perfect security comes from the fact that
there are as many unique decryptions for a given ciphertext as there
are unique plaintext messages that could have generated it. It is
perfect because it is no easier to decrypt the ciphertext than it is
to just guess the plaintext. I'll concede that this necessarily means
the total entropy in the ciphertext must be at least equal to the
number of bits in the plaintext. But that still says nothing about the
entropy /density/ of the ciphertext, which I believe is what is
implied by full-entropy, and which is what your earlier posts were
about (e.g., when you pointed out that the entropy per character of an
AES ciphertext is only marginally greater than the entropy per
character of the input plaintext).


On Feb 10, 3:49 am, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> bmearns wrote:
> > Well my background in quantum mechanics is even more limited than my
> > background in information theory, but as far as I know, quantum
> > processes are generally accepted to be strictly random; based, I'm
> > sure, on complex mathematics that are far above my head.
>
> As I alluded to, the present argument is "theoretical", or one
> might say, pedantic, and has no practical significance. Invariably,
> one accepts in practical life, depending on individuals, something
> as "ideal", even though that may in fact not be so. For instance, I
> would have considered myself a rich person, if I possessed one million
> dollars, but to a multi-billionaire that might be a tiny little sum.
> For randomness, I would accept the outcomes of dice that I buy from
> a game shop to be entirely satisfactory, while others would claim that
> there is inevitably bias in manufacture and therefore they are no good. In
> other words, there is no boundary to "perfectness", just like "infinity"
> has no boundary, and hence both are never "exactly" achievable but can
> be better approached further and further without limit, if one works
> hard, spends enough resources (money, time, etc.) and as the science
> and technology advance with time. On the other hand, one needs concepts
> that are "ideal" that represent the ultimate (even though consciously
> not exactly attainable) goals that one "should" achieve, for it is only
> with the existence of them that one obtains "directions" in which one
> can sensibly proceed in life (instead of doing some "random walk").
>
> M. K. Shen

Sorry, I think I lost track of the debate =). I'm not really sure what
you're getting at with regards to perfect randomness. This line of the
conversation began with whether or not you can prove that an apparent
OTP is truly an OTP, meaning it has exactly 1 Shannon of entropy per
bit. As far as I understand it, quantum effects are strictly random,
with no amount of approximation required. What that means is that no
matter how much information you have about the universe, you cannot
predict with 100% certainty the outcome of any particular quantum
process. Again, that's my understanding of quantum mechanics, and I
invite anybody to step in and correct it if I misunderstand. My
conclusion from this is that it is strictly possible to have a perfect
OTP.

For practical purposes, quantum effects are at least the best
approximation to random we have, even if I'm wrong and they are not
strictly random. As for having an ideal to work towards (or at least
to judge against), I think my definition of randomness from the
previous paragraph should do it: A random process is one for which you
cannot with 100% certainty predict the outcome of the process,
regardless of how much other information you have.

I suppose you can think of a random process as a black box. It could,
for any arbitrarily long period of time, act like a predictable,
deterministic function. However, without knowing what's actually
inside you can't know for sure that it will always act like that, or
when it will stop acting like that, or what it will start acting like
instead. From this idea, I'll offer an alternate definition of a
random process: A random process is one for which it is impossible to
know exactly how the process produces its output. With this definition
you can see that Hawking radiation emitted by a black hole should be a
strictly random process, since no information can be passed out of the
black hole and it is therefore impossible to know what "function"
inside the black hole is producing the radiation.



Regards,
-Brian
From: bmearns on
On Feb 10, 9:11 am, Spinner <nos...(a)nospam.net> wrote:
> Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> >bmearns wrote:
>
> >> Well my background in quantum mechanics is even more limited than my
> >> background in information theory, but as far as I know, quantum
> >> processes are generally accepted to be strictly random; based, I'm
> >> sure, on complex mathematics that are far above my head.
>
> >As I alluded to, the present argument is "theoretical", or one
> >might say, pedantic, and has no practical significance. Invariably,
> >one accepts in practical life, depending on individuals, something
> >as "ideal", even though that may in fact not be so. For instance, I
> >would have considered myself a rich person, if I possessed one million
> >dollars, but to a multi-billionaire that might be a tiny little sum.
> >For randomness, I would accept the outcomes of dice that I buy from
> >a game shop to be entirely satisfactory, while others would claim that
> >there is inevitably bias in manufacture and therefore they are no good. In
> >other words, there is no boundary to "perfectness", just like "infinity"
> >has no boundary, and hence both are never "exactly" achievable but can
> >be better approached further and further without limit, if one works
> >hard, spends enough resources (money, time, etc.) and as the science
> >and technology advance with time. On the other hand, one needs concepts
> >that are "ideal" that represent the ultimate (even though consciously
> >not exactly attainable) goals that one "should" achieve, for it is only
> >with the existence of them that one obtains "directions" in which one
> >can sensibly proceed in life (instead of doing some "random walk").
>
> >M. K. Shen
>
> Spontaneous nuclear decay is proven to be a quantum effect, and such
> effects are proven to be random.
>
> This is known because a WHOLE lot of tech depends on the math related
> to nuclear physics. To prove it's not random is to prove general
> relativity wrong.  Many holes have been blown in the desert and the
> pacific ocean proving that nuclear physics has a sound theoretical
> basis.
>
> Hence, decay is provably random and spontaneous MATHEMATICALLY and
> practically.  The ability to detect decay products is proven.  A
> number based on detection of a random process is provably random.
>
> If you are patient, build a cosmic ray coincidence detector, a random
> event.
>
> You generally never give up on your pedantic side-trips throwing in a
> lot of verbal noise when the alternative would be to admit you
> actually got something wrong.  Arguing that we can't 'prove' we exist
> to a metaphysical certainty is just noise and smoke, turning a logical
> discussion to philosophical and vice versa to suit your purposes.
>
> It is stupid but I have to admit it's entertaining.
>
> --
> 2+2!=5 even for extremely large values of 2

Come on now, Spinner. I agree with your conclusion, but you're really
not doing anything to help our case. Your arguments are all fallacies
of necessity: just because a whole lot of tech works, and just because
that tech is based on an assumption, doesn't mean that assumption has
to be true. It could, very conceivably, be the case that another more
subtle and as-yet undiscovered phenomenon is responsible for all the
tech working correctly. Newtonian gravity was "proven" to be correct
for a few hundred years because it explained the things people saw
around them: until they started looking closer.

I think what M.K. is looking for is a rigorous mathematical proof of why
quantum processes are strictly random.


On Feb 10, 9:47 am, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
[snip]
> Are you aware that all technical apparatus, no matter how finely they
> are manufactured, have imperfections? That would exclude any "perfect"
> randomness from being obtained practically, even if a perfectly random
> source that can be tapped could exist.

That's not a tautology, nor is it in any way obvious (at least not to me),
so I think the burden of proof is on you: why would physical
imperfection of an apparatus prevent it from generating perfect
randomness from a perfectly random source? For instance, suppose the
random source is radioactive decay, and the apparatus detects the
decays and uses the times between them as random values. If we
assume for now that
this is a perfectly random process, then how is the data generated by
the apparatus not random? Sure, there are limits in the radiation
detectors, so it won't know exactly the amount of energy emitted by
the decay, but that doesn't matter. There are limits to the timer that
measures the time between decays: say it has a precision of 1 second
(I have no idea of the appropriate scale here). So we won't be able to
detect exactly when within that second the decay occurred, but it is
still perfectly random whether the decay occurs in this second or the
next (or any other). So we may not be able to capture all of the
entropy of the event, but the entropy we capture can still be strictly
random.
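To illustrate, here is a simulation sketch in Python, standing in for real decay hardware (inter-arrival times of a Poisson process are i.i.d. exponential, so for any two consecutive intervals each is longer than the other with probability exactly 1/2):

```python
import random

def decay_bits(n_bits, rate=1.0, seed=None):
    """Simulated decay-timing RNG: draw pairs of exponential
    inter-arrival times and emit 1 if the first is longer, 0 otherwise.
    By symmetry the comparison is unbiased, and it stays unbiased even
    if each time were only measured to coarse precision, mirroring the
    point above about imperfect timers (ties are simply discarded)."""
    rng = random.Random(seed)
    bits = []
    while len(bits) < n_bits:
        t1 = rng.expovariate(rate)
        t2 = rng.expovariate(rate)
        if t1 != t2:            # discard (measure-zero) ties
            bits.append(1 if t1 > t2 else 0)
    return bits

print(sum(decay_bits(10_000, seed=42)) / 10_000)  # near 0.5
```

A real apparatus would replace `rng.expovariate` with timestamps from a detector; the debiasing comparison is unchanged.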

-Brian
From: Mok-Kong Shen on
bmearns wrote:
[snip]

> Strictly in terms of logic, your assumption is fallacious. (Note, that
> doesn't necessarily mean your primary conclusion is incorrect). Your
> argument goes as follows:
> Let A be the clause that a cipher does not provide perfect security.
> Let B be the clause that a cipher output does not have full entropy.
> Your argument above asserts (correctly) that for AES, A is true and
> B is also true. Your assumption from this is that B implies A, or in
> other words: if a cipher output doesn't have full entropy it can't
> have perfect security. Cryptography aside (just for a moment), that's
> not a valid conclusion from the assertions. Just because two clauses
> are both true, it doesn't mean that one causes the other. This is what
> I meant about your argument assuming causation when it's not required.

I didn't employ causation in the sense you claimed. The (theoretical)
OTP's perfect security is grounded in the fact that each bit of the
OTP has full entropy. See explanations of the term "perfect security"
in textbooks.

> But enough rhetoric, back to cryptography. Your primary point (as I
> understand it) is that if a ciphertext doesn't have full entropy, then
> the cipher cannot provide perfect security. Consider a simple counter-
> example: a plaintext with 100 bits and an average entropy rate of 0.1
> Shannons per bit, and a cipher algorithm as follows: a 100 bit OTP,
> with 1 Shannon per bit, is XORed with the plaintext. For each resulting
> bit, if the bit is a 1, the cipher outputs the bit sequence '01', and
> if the bit is a 0, the cipher outputs the bit sequence '10'. So now
> you put in 10 Shannons of entropy with the plaintext, and 100 Shannons
> with the OTP: the maximum entropy of the ciphertext is 110 Shannons
> (in reality, it would only be 100 Shannons, but that doesn't matter).
> So now the ciphertext has, at most, 110 Shannons in 200 bits. It's a
> stupid cipher, but it does provide perfect security even though the
> output doesn't have full entropy. Let me also point out that any
> cipher can (theoretically) be altered to provide full entropy in the
> ciphertext by compressing the output.

No, one can't have more than 1 bit of entropy per bit. If you mix two
streams, assuming both with full entropy, then you still get in the
best case only one bit of entropy per bit. An analogy is a bottle:
you can't make it more than "full".

> To reiterate unruh's point, perfect security comes from the fact that
> there are as many unique decryptions for a given ciphertext as there
> are unique plaintext messages that could have generated it. It is
> perfect because it is no easier to decrypt the ciphertext than it is
> to just guess the plaintext. I'll concede that this necessarily means
> the total entropy in the ciphertext must be at least equal to the
> number of bits in the plaintext. But that still says nothing about the
> entropy /density/ of the ciphertext, which I believe is what is
> implied by full-entropy, and which is what your earlier posts were
> about (e.g., when you pointed out that the entropy per character of an
> AES ciphertext is only marginally greater than the entropy per
> character of the input plaintext).

>
> Sorry, I think I lost track of the debate =). I'm not really sure what
> you're getting at with regards to perfect randomness. This line of the
> conversation began with whether or not you can prove that an apparent
> OTP is truly an OTP, meaning it has exactly 1 Shannon of entropy per
> bit. As far as I understand it, quantum effects are strictly random,
> with no amount of approximation required. What that means is that no
> matter how much information you have about the universe, you cannot
> predict with 100% certainty the outcome of any particular quantum
> process. Again, that's my understanding of quantum mechanics, and I
> invite anybody to step in and correct it if I misunderstand. My
> conclusion from this is that it is strictly possible to have a perfect
> OTP.

I don't question that you "understand" quantum effects are strictly
random. I question, however, which academic paper has "rigorously" proved
that they are strictly random.

Thanks,

M. K. Shen



From: bmearns on
On Feb 10, 6:39 pm, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> bmearns wrote:
>
> [snip]
>
> > Strictly in terms of logic, your assumption is fallacious. (Note, that
> > doesn't necessarily mean your primary conclusion is incorrect). Your
> > argument goes as follows:
> > Let A be the clause that a cipher does not provide perfect security.
> > Let B be the clause that a cipher output does not have full entropy.
> > You're argument above asserts (correctly) that for AES, A is true and
> > B is also true. Your assumption from this is that B implies A, or in
> > other words: if a cipher output doesn't have full entropy it can't
> > have perfect security. Cryptography aside (just for a moment), that's
> > not a valid conclusion from the assertions. Just because two clauses
> > are both true, it doesn't mean that one causes the other. This is what
> > I meant about your argument assuming causation when it's not required.
>
> I didn't employ causation in the sense you claimed.

The language you used implied causation, specifically the use of the
word "because" when you said:
"Can't one view this fact from the viewpoint that this is because
this sequence of bits doesn't have full entropy?"
If that's not what you meant, then it was just a misunderstanding.


> The (theoretical)
> OTP's perfect security is grounded in the fact that each bit of the OTP has
> full entropy.

You're making a bare assertion. The very thing we're debating is
whether or not an OTP's perfect security is grounded in the
full-entropy of its output. Simply stating it as a fact does nothing
for your argument.

But before going further, we need to have a definition for full-
entropy. I've been unable to find one and have been working on the
assumption that full-entropy means exactly 1 Shannon of entropy per
bit of data. Is this what you mean, or something different? If we have
different definitions of full entropy, it could be that we're actually
agreeing without realizing it.


> See explanations about the term "perfect security" in
> textbooks.

Here's a definition of "perfect security" from NYU's "Introduction to
Cryptography" course [cs.nyu.edu/courses/fall02/V22.0480-001/lect/
lecture2.ps, section 1, Definition 1]:
"Let M...be a random message and C...be the ciphertext of M... For any
m...and c..., an encryption system is called perfectly secure in the
Shannon sense if from the perspective of the attacker, [P(M=m|C=c) =
P(M=m)]. This means that Eve’s probability of guessing M remains
unchanged after seeing any particular outcome C = c."

So to paraphrase, an attacker has no less chance of simply guessing
the plaintext than they do successfully decrypting the ciphertext,
which is exactly what unruh and I said. Is this because the
ciphertext has 1 Shannon per bit? No, it's because the ciphertext has
1 Shannon per bit-of-plaintext, which is also what I stated (and
illustrated) in my last post.
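That definition can also be checked mechanically for a small OTP by enumeration; a sketch in Python, assuming a uniform key and a deliberately non-uniform message prior (exact arithmetic via `fractions` so the equality P(M=m|C=c) = P(M=m) holds exactly):

```python
from fractions import Fraction
from itertools import product

messages = [(0, 0), (0, 1), (1, 0), (1, 1)]
# Deliberately non-uniform prior over 2-bit messages.
prior = dict(zip(messages, [Fraction(1, 2), Fraction(1, 4),
                            Fraction(1, 8), Fraction(1, 8)]))
p_key = Fraction(1, 4)  # uniform over all four 2-bit pads

# Joint distribution P(M=m, C=c) with C = M XOR K.
joint = {}
for m, k in product(messages, messages):
    c = tuple(a ^ b for a, b in zip(m, k))
    joint[(m, c)] = joint.get((m, c), Fraction(0)) + prior[m] * p_key

# Shannon's condition: the posterior equals the prior for every (m, c).
for c in messages:
    p_c = sum(joint[(m, c)] for m in messages)
    for m in messages:
        assert joint[(m, c)] / p_c == prior[m]
print("perfect secrecy holds for the 2-bit OTP")
```

Note the check passes regardless of how skewed the message prior is, since each ciphertext is reachable from every message by exactly one pad.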


> > But enough rhetoric, back to cryptography. Your primary point (as I
> > understand it) is that if a ciphertext doesn't have full entropy, then
> > the cipher cannot provide perfect security. Consider a simple counter-
> > example: a plaintext with 100 bits and an average entropy rate of 0.1
> > Shannons per bit, and a cipher algorithm as follows: a 100 bit OTP,
> > with 1 Shannon per bit, is XORed with the plaintext. For each resulting
> > bit, if the bit is a 1, the cipher outputs the bit sequence '01', and
> > if the bit is a 0, the cipher outputs the bit sequence '10'. So now
> > you put in 10 Shannons of entropy with the plaintext, and 100 Shannons
> > with the OTP: the maximum entropy of the ciphertext is 110 Shannons
> > (in reality, it would only be 100 Shannons, but that doesn't matter).
> > So now the ciphertext has, at most, 110 Shannons in 200 bits. It's a
> > stupid cipher, but it does provide perfect security even though the
> > output doesn't have full entropy. Let me also point out that any
> > cipher can (theoretically) be altered to provide full entropy in the
> > ciphertext by compressing the output.
>
> No, one can't have more than 1 bit of entropy per bit. If you mix two
> streams, assuming both with full entropy, then you still get in the
> best case only one bit of entropy per bit. An analogy is a bottle:
> you can't make it more than "full".

I understand that. I assume you're referring to me saying that there
are a maximum of 110 Shannons in the ciphertext. In the case of a
straight XOR combination, you're right, there would be a maximum of
100 Shannons, which is what I put in parenthesis. Strictly speaking,
with no knowledge of what operation is being performed to mix the key
and the plaintext together (i.e., in the general case), the maximum
entropy would be the sum of their individual entropies, which is 110
Shannons. For the sake of the argument I was using this general case,
instead of limiting my argument to a specific case where some entropy
is lost.

But you're red-herring'ing me, anyway: I think my argument clearly
illustrates that perfect security is not contingent on the ciphertext
having full-entropy. Do you see any flaws in my argument, or do you
disagree with the conclusion?


> > To reiterate unruh's point, perfect security comes from the fact that
> > there are as many unique decryptions for a given ciphertext as there
> > are unique plaintext messages that could have generated it. It is
> > perfect because it is no easier to decrypt the ciphertext than it is
> > to just guess the plaintext. I'll concede that this necessarily means
> > the total entropy in the ciphertext must be at least equal to the
> > number of bits in the plaintext. But that still says nothing about the
> > entropy /density/ of the ciphertext, which I believe is what is
> > implied by full-entropy, and which is what your earlier posts were
> > about (e.g., when you pointed out that the entropy per character of an
> > AES ciphertext is only marginally greater than the entropy per
> > character of the input plaintext).
>
> > Sorry, I think I lost track of the debate =). I'm not really sure what
> > you're getting at with regards to perfect randomness. This line of the
> > conversation began with whether or not you can prove that an apparent
> > OTP is truly an OTP, meaning it has exactly 1 Shannon of entropy per
> > bit. As far as I understand it, quantum effects are strictly random,
> > with no amount of approximation required. What that means is that no
> > matter how much information you have about the universe, you cannot
> > predict with 100% certainty the outcome of any particular quantum
> > process. Again, that's my understanding of quantum mechanics, and I
> > invite anybody to step in and correct it if I misunderstand. My
> > conclusion from this is that it is strictly possible to have a perfect
> > OTP.
>
> I don't question that you "understand" quantum effects are strictly
> random. I question however which academic paper has "rigorously" proved
> that they are strictly random.

I know what your question is; I've already responded that I don't know
the answer. The answer is probably available from someone at
sci.physics. However, I believe that with or without the aid of
quantum mechanics, the last bit about black-holes illustrates that a
strictly random process is indeed possible. If you disagree, please
explain why.

-Brian