From: Joseph Ashwood on
"Michael B Allen" <ioplex(a)gmail.com> wrote in message
news:eac7587c-8d79-4fd9-8880-ec95d92ab2f3(a)g31g2000vbr.googlegroups.com...
> The rationale is that if / when an algorithm is
> broken, the enclosed encrypted layer would look random and thus not
> give the attacker any feedback as to their success. They would have to
> successfully crack all layers simultaneously. Is this reasoning valid?

There are several additional questions about the security; I'll skip over
those to avoid leaving you feeling like you just got thrown into the deep
end of the shark tank.

I assume you feel you have made a rational decision about why to do this.
There has actually been quite a bit of useful research on cascaded
encryption. To summarize it, and to make several assumptions for you:

Use AES either 3 or 7 times, in a STRONG mode. The basic loop looks like
this (shown here in Python with PyCryptodome):

from Crypto.Cipher import AES  # PyCryptodome

def encrypt_block(block, keyset):
    # apply raw AES-256 once per key; keyset holds 7 independent
    # 256-bit keys and block is a single 16-byte block
    temp = block
    for key in keyset:
        temp = AES.new(key, AES.MODE_ECB).encrypt(temp)
    return temp
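A quick usage sketch (os.urandom here is just a stand-in for the
dice-generated keys described below):

import os
keyset = [os.urandom(32) for _ in range(7)]      # 7 independent 256-bit keys
ct = encrypt_block(b"exactly16bytes!!", keyset)  # one 16-byte block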


This is absolutely overkill; to the best knowledge of the cryptanalytic
community, breaking it will take more compute capability than the entire
universe can ever create.

For the mode, you'll want to use CCM mode; it has a good, strong proof of
security showing it is at least as strong as the underlying cipher. It is
simple to implement and hard to mess up. To simplify the formatting, prefix
the text with a header containing its length; just use a 64-bit integer.
Apply the OMAC across the formatted text and append the OMAC value to the
end of the text. Then apply CTR mode to the formatted-and-appended data; you
can start with 0x000...000 as the IV, and since you're only using each key
once you don't need to worry about the IV too much.
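A minimal Python sketch of that layout (my illustration: the name seal, the
two-key split, and the use of PyCryptodome's CMAC, which implements OMAC1,
are assumptions, not Joe's exact spec):

import struct
from Crypto.Cipher import AES
from Crypto.Hash import CMAC
from Crypto.Util import Counter

def seal(data, mac_key, enc_key):
    # 64-bit length header, as described above
    formatted = struct.pack(">Q", len(data)) + data
    # OMAC1 is standardized as CMAC; tag the formatted text, then append
    tag = CMAC.new(mac_key, msg=formatted, ciphermod=AES).digest()
    # CTR starting from an all-zero counter block; acceptable only
    # because each key is used exactly once
    ctr = AES.new(enc_key, AES.MODE_CTR,
                  counter=Counter.new(128, initial_value=0))
    return ctr.encrypt(formatted + tag)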

When generating the keys, the most important thing is to MAKE ABSOLUTELY
SURE YOU GENERATE THE KEYS RANDOMLY: roll a pair of dice to generate 5 bits
at a time (you'll need ~3600 bits, so it will take a while), then destroy
the dice; burning is best.
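For the dice step: a roll of two dice has 36 outcomes, of which 32 can
encode 5 uniform bits; the other 4 get rerolled. A hypothetical helper
(not from the post):

def dice_to_bits(die1, die2):
    # map one roll of two dice (1..6 each) to 5 bits; reject 4 of
    # the 36 outcomes so the remaining 32 are perfectly uniform
    v = (die1 - 1) * 6 + (die2 - 1)   # 0..35
    if v >= 32:
        return None                   # reroll this pair
    return format(v, "05b")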

This is far from the most efficient system; it's slow and it's clumsy, but
if you implement it strictly it is largely invulnerable.
Joe

From: Tom St Denis on
On Jan 13, 3:44 pm, Peter Fairbrother <zenadsl6...(a)zen.co.uk> wrote:
> unruh wrote:
> > On 2010-01-13, Tom St Denis <t...(a)iahu.ca> wrote:
> >> On Jan 12, 2:06 pm, Michael B Allen <iop...(a)gmail.com> wrote:
> >>> My first thought is to simply encrypt the data multiple times using
> >>> different algorithms and key sizes (e.g. AES128 -> RC4 -> AES256)
> >>> using different segments of a randomly generated 32 character
> >>> alphanumeric password. The rationale is that if / when an algorithm is
> >>> broken, the enclosed encrypted layer would look random and thus not
> >>> give the attacker any feedback as to their success. They would have to
> >>> successfully crack all layers simultaneously. Is this reasoning valid?
> >> Don't.
>
> >> Also why has nobody mentioned that encrypting even two times is
> >> vulnerable to MITM attacks?
>
> > Possibly because the memory requirements are so huge. (2^128 memory is a
> > bit beyond most people, planets,...)
>
> And possibly because he's satisfied with the security of one encryption,
> which even two-times encryption doesn't diminish.
>
> -- Peter Fairbrother (Hi Tom!)

Hi :-)

Well, more so [to answer unruh's comment]: from a practical standpoint, if
double-AES provides "512 bits" of security and that's "more secure"
than single AES, then the MITM attack is legitimate. If the attack is
not legitimate [because O(2^256) is insane], then double-AES is in fact
NOT more secure than single AES.
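A toy demonstration of the meet-in-the-middle point (my illustration, not
from the thread): shrink each AES key to 16 unknown bits and double
encryption falls in roughly 2^17 block operations instead of the naive
2^32. The zero-padding scheme and all names are invented for the demo:

import os
from Crypto.Cipher import AES

def toy_aes(key2, block, decrypt=False):
    # toy cipher: AES-128 keyed by 2 secret bytes padded with zeros,
    # so each layer has only 2^16 possible keys
    c = AES.new(key2 + b"\x00" * 14, AES.MODE_ECB)
    return c.decrypt(block) if decrypt else c.encrypt(block)

k1, k2 = os.urandom(2), os.urandom(2)   # unknown to the attacker
P = b"sixteen byte msg"                 # known-plaintext block
C = toy_aes(k2, toy_aes(k1, P))         # double encryption

# build a table of every forward half-encryption, then meet in the middle
table = {toy_aes(bytes([a, b]), P): bytes([a, b])
         for a in range(256) for b in range(256)}
outer = next(bytes([a, b]) for a in range(256) for b in range(256)
             if toy_aes(bytes([a, b]), C, decrypt=True) in table)
print("k1 =", table[toy_aes(outer, C, decrypt=True)], "k2 =", outer)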

People need to separate what feels good from what actually is good. And
there are in fact good reasons not to homebrew a multi-cipher solution
yourself. So all in all it's better to stick to known usage models
and/or protocols.

Tom
From: Mok-Kong Shen on
Peter Fairbrother wrote:

> I don't know about screwing up the implementation, which could do
> horrible things - but yes, there is established theory that multiple
> encryption with statistically independent keys is at least as secure as
> the first encryption, see:
>
> U. M. Maurer and J. L. Massey, Cascade ciphers: The importance of being
> first, Journal of Cryptology, vol. 6, no. 1, pp. 55–61, 1993.
>
> www.isiweb.ee.ethz.ch/archive/massey_pub/pdf/BI434.pdf
>
> so if one pass is secure enough you're okay.
>
> (I have a tiny doubt about their result, which I've been working on for
> some years now - but in practice it won't mean much even if it pans out)

Wouldn't a 'definition' of 'statistically independent keys' (not very
simple to verify absolutely in practice, I surmise) by itself
('logically') imply that the 2nd cipher couldn't weaken the work of the
1st cipher? (For otherwise one could say that the keys aren't
'independent' after all.)

When your work bears fruit, please let us know as soon as possible, for
that would be very interesting and valuable.

M. K. Shen
From: unruh on
On 2010-01-14, Mok-Kong Shen <mok-kong.shen(a)t-online.de> wrote:
> Peter Fairbrother wrote:
>
>> I don't know about screwing up the implementation, which could do
>> horrible things - but yes, there is established theory that multiple
>> encryption with statistically independent keys is at least as secure as
>> the first encryption, see:
>>
>> U. M. Maurer and J. L. Massey, Cascade ciphers: The importance of being
>> first, Journal of Cryptology, vol. 6, no. 1, pp. 55–61, 1993.
>>
>> www.isiweb.ee.ethz.ch/archive/massey_pub/pdf/BI434.pdf
>>
>> so if one pass is secure enough you're okay.
>>
>> (I have a tiny doubt about their result, which I've been working on for
>> some years now - but in practice it won't mean much even if it pans out)
>
> Wouldn't a 'definition' of 'statistically independent keys' (not very
> simple to verify absolutely in practice, I surmise) by itself
> ('logically') imply that the 2nd cipher couldn't weaken the work of the
> 1st cipher? (For otherwise one could say that the keys aren't
> 'independent' after all.)

One could imagine that the cryptosystems were a focusing system, such
that encrypting the file twice with different keys was equivalent to
encrypting it once with a much reduced keysize: that is,
AES(k2, AES(k1, M)) = AES(k3, M), but with there being only, say, 100
possible k3, no matter how independent k1 and k2 are. This would
apparently make the double cypher weaker. Of course it would not really,
since an attacker could always encrypt again and thus produce the double
cypher. But someone who did not know this would think that the single
encryption was strong.
(Ie, the proof is trivial: if multiple encryption made a cypher weaker,
then the attacker could multiply-encrypt the output of the first cypher,
and the actual strength of the first cypher would be at most the strength
of the multiple cypher. Ie, by definition multiple encryption cannot
weaken it, since the strength of a cypher already includes attacks that
apply further encryptions.)

So, let S(C) be the strength of cryptosystem C, say the number of
operations necessary to break C. Then S(C2(C1)) >= S(C1), since if
S(C2(C1)) < S(C1) one could break C1 by encrypting its output with C2 and
then breaking C2(C1), which would give S(C1) <= S(C2(C1)) < S(C1)
(ie a contradiction).
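In symbols (my restatement of that reduction, writing C2 o C1 for the
cascade):

\[
S(C_2 \circ C_1) \;\ge\; S(C_1),
\qquad\text{since any break of } C_2 \circ C_1 \text{ applied to }
C_2(C_1(m)) \text{ also breaks } C_1.
\]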


>
> When your work bears fruit, please let us know as soon as possible, for
> that would be very interesting and valuable.
>
> M. K. Shen
From: Sebastian Garth on
AES is a good algorithm, and so are RSA and Diffie-Hellman; their primary
strength, again, is key length. The latter two work with large integers
and primes, and if the key length is long enough they are more than
secure. Using an 8192-bit prime, for example, would be ample for the next
few thousand years (or at least until quantum computing becomes viable)!
The algorithms to avoid are the cypher-type schemes; these are worthless,
and a dime a dozen...