From: bmearns on
On Jan 31, 6:18 pm, jmorton123 <jmorton...(a)rock.com> wrote:
[snip]
> It's all about the "random" numbers.  If no one can reproduce the
> "random" numbers then no one can decrypt the encrypted message,
> regardless of where those "random" numbers came from or how they were
> produced.
[snip]

While I can't claim to be any kind of expert in this field, I don't
believe you're correct in this conclusion. Preventing someone from
exactly recreating your pseudorandom sequence is of course very
important, but that's not the only vector of attack. You need to be
careful to ensure that there are no patterns or biases in your
sequence, either. You might have a stream of bits in which the
distance between 1's is perfectly random, but if it's 90% 0's, that's
not a very secure sequence. That's an extreme example, of course, but
more subtle patterns and biases could be devastating to someone
relying on this for security.
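To make that extreme example concrete, here is a rough sketch (mine,
with made-up numbers) of how much a 90%-zeros keystream gives away
under a simple XOR combiner:

import random

random.seed(1)  # fixed seed so the illustration is repeatable

# Hypothetical plaintext bits and a badly biased keystream: 1 only 10%
# of the time, 0 the other 90%.
plaintext = [random.getrandbits(1) for _ in range(100_000)]
keystream = [1 if random.random() < 0.10 else 0 for _ in range(100_000)]

ciphertext = [p ^ k for p, k in zip(plaintext, keystream)]

# Under XOR, every 0 in the keystream copies the plaintext bit straight
# into the ciphertext.
matches = sum(c == p for c, p in zip(ciphertext, plaintext))
print(f"ciphertext bits equal to plaintext: {matches / len(plaintext):.1%}")
# -> about 90%, so an attacker can guess each plaintext bit from the
#    ciphertext alone and be right nine times out of ten.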

You sound like you know quite a bit about this field, so perhaps
you're already well aware of this and have accounted for it in your
software, but I do want to emphasize that simply being irreproducible
is not sufficient to consider a random sequence generator secure for
cryptographic purposes.

-Brian
From: Richard Outerbridge on
In article
<f1471543-c3e8-40b5-856f-9b933b6b7afe(a)j31g2000yqa.googlegroups.com>,
bmearns <mearns.b(a)gmail.com> wrote:

> On Jan 31, 6:18 pm, jmorton123 <jmorton...(a)rock.com> wrote:
> [snip]
> > It's all about the "random" numbers.  If no one can reproduce the
> > "random" numbers then no one can decrypt the encrypted message,
> > regardless of where those "random" numbers came from or how they were
> > produced.
> [snip]
>
> While I can't claim to be any kind of expert in this field, I don't
> believe you're correct in this conclusion. Preventing someone from
> exactly recreating your pseudorandom sequence is of course very
> important, but that's not the only vector of attack. You need to be
> careful to ensure that there are no patterns or biases in your
> sequence, either. You might have a stream of bits in which the
> distance between 1's is perfectly random, but if it's 90% 0's, that's
> not a very secure sequence. That's an extreme example, of course, but
> more subtle patterns and biases could be devastating to someone
> relying on this for security.

For cryptographic purposes, does it not suffice that the entropy of the
keystream be greater than the redundancy of the plaintext? It need not
be perfectly random - just sufficiently random.
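As a rough back-of-envelope on the quantities in that condition (my
numbers, using Shannon's classic estimates, not anything from the
thread):

import math

# Shannon's estimates: English runs at roughly 1.0-1.5 bits of entropy
# per character, against log2(26) ~ 4.7 bits if letters were uniform
# and independent. Redundancy is the gap, as a fraction of the maximum.
h_english = 1.3                      # assumed entropy per character
h_max = math.log2(26)                # ~4.70 bits per character
redundancy = 1 - h_english / h_max   # ~0.72

print(f"redundancy of English: about {redundancy:.2f}")
# The question is whether a keystream whose entropy rate exceeds this
# redundancy is already "sufficiently random" for practical use.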
From: Greg Rose on
In article <outer-8C04BE.12064208022010(a)news.ssl.Ngroups.NET>,
Richard Outerbridge <outer(a)interlog.com> wrote:
>In article
><f1471543-c3e8-40b5-856f-9b933b6b7afe(a)j31g2000yqa.googlegroups.com>,
> bmearns <mearns.b(a)gmail.com> wrote:
>
>> On Jan 31, 6:18 pm, jmorton123 <jmorton...(a)rock.com> wrote:
>> [snip]
>> > It's all about the "random" numbers.  If no one can reproduce the
>> > "random" numbers then no one can decrypt the encrypted message,
>> > regardless of where those "random" numbers came from or how they were
>> > produced.
>> [snip]
>>
>> While I can't claim to be any kind of expert in this field, I don't
>> believe you're correct in this conclusion. Preventing someone from
>> exactly recreating your pseudorandom sequence is of course very
>> important, but that's not the only vector of attack. You need to be
>> careful to ensure that there are no patterns or biases in your
>> sequence, either. You might have a stream of bits in which the
>> distance between 1's is perfectly random, but if it's 90% 0's, that's
>> not a very secure sequence. That's an extreme example, of course, but
>> more subtle patterns and biases could be devastating to someone
>> relying on this for security.
>
>For cryptographic purposes, does it not suffice that the entropy of the
>keystream be greater than the redundancy of the plaintext? It need not
>be perfectly random - just sufficiently random.

To use my favourite quote: "In theory there's no
difference between theory and practice, but in
practice, there is." -- Yogi Berra.

Theoretically you're right, but it's very
dangerous. In a real cryptosystem, you have to
assume multiple messages, and while the individual
messages might be highly compressed or have high
entropy, you are much less certain about the
entropy rate of multiple messages. Perhaps one of
the recipients modified or quoted the content and
sent it out again. Correlation between the two
messages would show this (see the Venona
intercepts) and then the total entropy of the two
messages might well fall below what is necessary
for security.
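
A minimal sketch of that kind of correlation (my toy example, not
Greg's): when two related messages are combined with the same
keystream, the key cancels out and the relationship between the
plaintexts shows through.

import os

msg1 = b"Please wire the funds to the Zurich account today."
msg2 = b"Please wire the funds to the Geneva account today."  # quoted/edited copy

key = os.urandom(len(msg1))   # perfectly random, but reused for both

ct1 = bytes(m ^ k for m, k in zip(msg1, key))
ct2 = bytes(m ^ k for m, k in zip(msg2, key))

# XORing the ciphertexts cancels the key: ct1 ^ ct2 == msg1 ^ msg2.
diff = bytes(a ^ b for a, b in zip(ct1, ct2))
print(sum(b == 0 for b in diff), "of", len(diff), "byte positions identical")
# Every zero byte marks a position where the two plaintexts agree, so
# the attacker learns exactly where and roughly how they differ,
# without ever touching the key.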

Greg.

--
Greg Rose
232B EC8F 44C6 C853 D68F E107 E6BF CD2F 1081 A37C
From: Mok-Kong Shen on
Richard Outerbridge wrote:

> For cryptographic purposes, does it not suffice that the entropy of the
> keystream be greater than the redundancy of the plaintext? It need not
> be perfectly random - just sufficiently random.

I believe you have brought out a very valuable and interesting point.
Let me paraphrase what you wrote: if a given plaintext stream has an
entropy of 0.1 per bit (presumably any natural text has more than
that, certainly once it has been encrypted with even a weak classical
cipher), then it suffices to combine it with a stream having an
entropy of 0.9 per bit, provided a good combiner is used. Of course,
in practice one wouldn't have a "perfect" combiner, but one could
employ a factor of safety, say a stream with an entropy of 0.95 per
bit, to counteract that technical imperfection. (Note, BTW, that a
theoretical OTP with an entropy of 1.0 per bit is practically not
obtainable, or at least not "knowable" to have been achieved in
practice.) The essence of the point, I suppose, is that it can be a
valuable research enquiry to find efficient and good entropy combiners
such that, by inputting a plaintext and a sufficiently random key
stream, one could achieve a resulting entropy of 1 - epsilon per bit,
and that would be entirely satisfactory for practical applications.
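
To put some toy numbers on that (a sketch of my own, treating both
streams as independent biased bits, which is of course a gross
simplification of real plaintext), a plain XOR combiner behaves like
this:

import math

def h(p):
    """Binary entropy, in bits, of a bit that is 1 with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def inv_h(target):
    """Find p in [0, 0.5] with h(p) approximately equal to target."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if h(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p = inv_h(0.1)   # "plaintext" source with ~0.1 bit of entropy per bit
q = inv_h(0.9)   # keystream with ~0.9 bit of entropy per bit

# For independent bits, Pr[x ^ k = 1] = p(1-q) + q(1-p) (piling-up lemma).
r = p * (1 - q) + q * (1 - p)
print(f"entropy per bit of the XOR output: {h(r):.3f}")
# -> about 0.905: better than either input, but still short of 1.0,
#    which is exactly why the choice of combiner (and the epsilon one
#    can reach) is worth investigating.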

M. K. Shen
From: Mok-Kong Shen on
Mok-Kong Shen wrote:
[snip]

> ....................... The essence of the
> point, I suppose, is that it can be a valuable research enquiry to find
> efficient and good entropy combiners such that, by inputting a plaintext
> and a sufficiently random key stream, one could achieve a resulting
> entropy of 1 - epsilon per bit, .................

Just an observation: if one uses a good block cipher like AES to
encrypt, it is common that a single key is used to process a fairly
long plaintext stream, yet the key has at most 128 bits of entropy.
Isn't it a miracle that the resulting ciphertext stream (a result
of combination) has very high entropy? Or is it rather the case that
the ciphertext stream doesn't possess much higher entropy per bit
"after all" than the plaintext stream (the increase in entropy per
bit being at most 128 divided by the, commonly rather large, total
number of bits processed), so that the security one "believes" in is
actually an illusion (cf. optical illusions)?
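
The arithmetic behind that parenthesis, spelled out (my numbers for
the message size and the plaintext entropy rate):

key_entropy_bits = 128            # at most, for an AES-128 key
plaintext_bits = 8 * 1_000_000    # a 1 MB message
plaintext_entropy_rate = 0.2      # assumed entropy per plaintext bit

total = plaintext_entropy_rate * plaintext_bits + key_entropy_bits
print(f"upper bound on ciphertext entropy per bit: {total / plaintext_bits:.6f}")
# -> 0.200016: the key raises the information-theoretic entropy rate by
#    only 128 / 8,000,000 bits per bit, which is the gap the question
#    above is pointing at.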

M. K. Shen