From: Tom St Denis on
On Jan 17, 4:21 pm, unruh <un...(a)wormhole.physics.ubc.ca> wrote:
> That may be, but it is NOT this MITM attack. The claim was that MITM
> made double AES weak, at least as weak as single AES, because of the
> MITM attack. I was simply pointing out that this is false, because the
> MITM requires 2^256 space, and 2^256 time to search or sort the
> database.
> It is possible that a break for single AES would also break
> double AES, but it would probably also break triple AES as well by the
> same argument.

I don't get your complaint. You're arguing as if 2^256 space is
impossible but 2^256 time is possible (one could argue that anything
requiring 2^n time ALSO requires 2^n space in a physical sense, since
you need energy to perform the algorithm). And you're right, I do
assume that future undefined attacks have undefined characteristics
[e.g. I don't assume they break only n rounds but not n+1, thereby
basing an argument on n+1].

My point is that double ANYTHING is generally a bad idea, on the
premise that if the single construction is insecure then so is the
double, since presumably you have the time/space to mount the attack.
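
For what it's worth, the time/space tradeoff being argued about is easy
to see at toy scale. Here is a minimal meet-in-the-middle sketch against
double encryption with a made-up 8-bit cipher (the cipher and all
parameters are invented for illustration; against real double AES the
table would need 2^128 entries, which is the whole point of the dispute):

```python
# Meet-in-the-middle against double encryption with a toy 8-bit cipher:
# roughly 2*2^8 cipher operations plus a 2^8-entry table, versus 2^16
# cipher operations for naive brute force over both keys.

def enc(k, x):                       # invertible toy cipher: XOR, then add
    return ((x ^ k) + k) % 256

def dec(k, y):
    return ((y - k) % 256) ^ k

def double_enc(k1, k2, x):
    return enc(k2, enc(k1, x))

def mitm(pairs):
    """Recover all (k1, k2) consistent with the known (pt, ct) pairs."""
    pt0, ct0 = pairs[0]
    # forward table: middle value -> candidate inner keys (the 2^8 space cost)
    table = {}
    for k1 in range(256):
        table.setdefault(enc(k1, pt0), []).append(k1)
    found = []
    for k2 in range(256):            # meet in the middle (the 2^8 time cost)
        for k1 in table.get(dec(k2, ct0), []):
            # filter false positives against the remaining pairs
            if all(double_enc(k1, k2, p) == c for p, c in pairs[1:]):
                found.append((k1, k2))
    return found

k1, k2 = 42, 171                     # secret keys, to be recovered
pairs = [(p, double_enc(k1, k2, p)) for p in (3, 200, 77)]
keys = mitm(pairs)
```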

Tom
From: Tom St Denis on
On Jan 17, 4:14 pm, Paul Rubin <no.em...(a)nospam.invalid> wrote:
> Tom St Denis <t...(a)iahu.ca> writes:
>
> > Actually, there is a variant of RSA where N = p^2q and it's provably
> > reducible to factoring.  It's weird in that encryption uses N as the
> > exponent....
>
> I thought Rabin's method (e=2) was also equivalent to factoring.

There is no unique solution for Rabin's method, though: you have to
encode headers and then figure out which of the 4 decryption candidates
is right. You also can't sign with Rabin [directly].
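
The four-candidate ambiguity is easy to demonstrate. A toy sketch
(primes and message invented for illustration): decrypting c = m^2 mod N
by CRT yields four square roots, and without redundancy in the message
you can't tell which one was the plaintext.

```python
# Rabin decryption: c = m^2 mod N has four square roots mod N = p*q.
p, q = 499, 547                         # toy Blum primes, both ≡ 3 (mod 4)
n = p * q

def roots(c):
    # square roots modulo each prime, using r ≡ 3 (mod 4): sqrt = c^((r+1)/4)
    rp = pow(c % p, (p + 1) // 4, p)
    rq = pow(c % q, (q + 1) // 4, q)
    qi, pi = pow(q, -1, p), pow(p, -1, q)
    out = set()
    for sp in (rp, p - rp):
        for sq in (rq, q - rq):
            out.add((sp * q * qi + sq * p * pi) % n)   # CRT combine
    return out

m = 12345                               # toy plaintext, coprime to n
cands = roots(m * m % n)                # decryption sees all four candidates
```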

The Katja method with N=p^2q otherwise works like RSA, e.g. M^e^d == M
and M^d^e == M.
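
A quick sanity check of that round-trip relation over N = p^2q (toy
numbers, and this only checks the textbook M^e^d == M property, not the
actual Katja encryption, which as noted above uses N itself as the
exponent): for M coprime to N, choosing d = e^-1 mod lcm(p(p-1), q-1)
gives the usual RSA-style round trip.

```python
import math

# Toy parameters, invented for illustration; p and q are small primes.
p, q, e = 499, 547, 65537
n = p * p * q                          # N = p^2 * q
lam = math.lcm(p * (p - 1), q - 1)     # exponent of the unit group mod p^2*q
d = pow(e, -1, lam)                    # decryption exponent

m = 123456789 % n                      # a message coprime to N
round_trip1 = pow(pow(m, e, n), d, n)  # M^e^d
round_trip2 = pow(pow(m, d, n), e, n)  # M^d^e
```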

Tom
From: Joseph Ashwood on
"Simon Johnson" <simon.johnson(a)gmail.com> wrote in message
news:6efb7e29-9634-4e1c-8f7f-ee61a7cafa37(a)v25g2000yqk.googlegroups.com...
>
>> An advance that history has proven to be quite likely. Improved
>> factoring attacks seem to come in about 5 year cycles, and while the
>> graph has been fairly predictable, the deviation from predictability
>> is growing. With symmetric attacks (diff, linear, etc) the rate is
>> about one every 10 years, but far less predictability. We just
>> recently saw the attack for the decade on AES, the related key
>> attacks, but it's just about time for the factoring attack. The
>> expectable factoring attack should advance the 768 that we just saw
>> to about 820; any deviation from this could change things
>> substantially in the predictions of security.
>
> I'm not sure if I'm getting my point across clearly so I'll state it
> again.

Your point was clear, it is wrong, but it is clear.

> With AES we have no idea whether the proof of security that we've got
> for the current collection of attacks will resist an unknown attack.

Actually, it can be said rather exactly that the proofs of security will not
apply to a new attack.

> With BBS we know there is only ever one attack: integer factorisation.
> That's it; anything that breaks BBS must also factor integers.

No, that's not what the proof says. There are also entropy attacks, usage
attacks, bit-flipping attacks, etc. Of these, only entropy attacks would
provide factorization, but not general purpose factoring.

> Moreover, integer factorisation punishes the *memory* of the attacker.

You forgot several qualifying statements there. Current fast methods of
integer factorization, on current hardware, with current levels of
knowledge, punish the memory of the attacker. The problems are many:
the methods of computation are being rewritten almost completely,
computation hardware is undergoing major changes, and memory is
improving vastly in excess of Moore's law. The memory is the biggest
change; search the USPTO for my patent application, building it can
easily deliver many PB of storage operating at hundreds of GB/sec. So
factoring punishes an area of the computer that is improving faster
than any portion of a computer has ever improved before.

> Once you're up in the 2048-bit
> modulus range this becomes many petabytes.

> You've gone from a collection of disks that can exist in a single
> machine to a significant fraction of a datacenter to run the attack.

Current flash memory density is about 3GB per cubic millimeter. Adding
the overhead for the patent, 1PB is just 655,000 cubic millimeters.
Assuming 1.75 inches thick (1U), that is less than 15,000 square
millimeters of footprint: at 23 inches wide, each PB is only about 1/2
inch deep. At maximum current density, a 1U rack can store almost 50PB.
And yes, these really are real densities: Micron has published that
they can slim a flash chip down to 0.05mm thick, the standard die is
1cm by 1cm, and they have 16GB chips available, do the math.
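
Doing the math from those figures (a sketch that takes the quoted
Micron numbers at face value; note that the 655,000 mm^3 per PB figure
above corresponds to a lower effective density of roughly 1.5GB per
cubic millimeter once overhead is included):

```python
# Density arithmetic from the Micron figures quoted above (as claimed in
# the post, not independently verified): a 16GB die at 10mm x 10mm x 0.05mm.
chip_mm3 = 10 * 10 * 0.05                # 5 cubic millimeters per die
density_gb_mm3 = 16 / chip_mm3           # 3.2 GB per cubic millimeter, raw
mm3_per_pb = 1_000_000 / density_gb_mm3  # ~312,500 mm^3 per raw petabyte

u_height_mm = 1.75 * 25.4                # 1U is 44.45 mm tall
footprint_mm2 = mm3_per_pb / u_height_mm       # ~7,030 mm^2 of footprint per PB
depth_per_pb_mm = footprint_mm2 / (23 * 25.4)  # at 23 inches of rack width
depth_per_pb_in = depth_per_pb_mm / 25.4       # just under 1/2 inch per PB
```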

> Moreover, once you're out of a single physical machine the
> interconnects become much slower.

This has been incorrect since the initial introduction of PCI-Express,
when interconnect bandwidth exceeded RAM bandwidth. Before you even try
to go there: Ethernet bonding has been around even longer; it's just a
matter of using the available technologies.

> Factoring does not scale in the same
> way most cryptographic attacks do.

It scales better for the attacker.

> You're making the classic mistake that there's no way to solve AES
> faster than brute-force.

You really should read what I said before replying: "With symmetric
attacks (diff, linear, etc) the rate is about one every 10 years." In
fact you even quoted this section. You are making baseless assertions
that are simply wrong.

> Again, at the risk of repeating myself ad-nauseum: We don't know if
> there is a brutal unpublished or undiscovered attack on AES. However,
> we do know that any attack that breaks BBS must also factor integers.
> There are a potentially *infinite* number of ways to break AES. There
> is exactly one way to break BBS.

You can repeat it all you want, it will still be wrong.

> Just that fact alone should convince even the most hardened cynic that
> BBS is on much more solid ground than AES.

Only if the cynic doesn't understand the actual statement, or more likely,
refuses to recognise that it is simply wrong.

> Given the fact that Euclid,
> Euler, Gauss, Riemann and a whole other bunch of pre-eminent
> mathematicans were unable to find an efficient factoring algorithm
> should give us the warm fuzzies about BBS.

Given the fact that factoring is nearly polynomial. Given that the
algorithms keep getting better. Given that hardware is quickly outpacing the
normal limitations to factoring. Given that BBS is therefore nearly broken,
no one should have warm fuzzies about BBS.

> I simply don't understand how you can possibly claim that AES is more
> secure than BBS.

Probably the ability to handle logic, or maybe the ability to understand
that BBS is very nearly broken.

>> We must be looking at very different BBS algorithms. If the initial X
>> is published along with N, then anyone can decrypt. If only N is
>> published then decryption requires guessing X, even if the
>> factorization is known. So it cannot be substituted for a public key
>> algorithm. There is also the matter of the missing MAC. So to edit
>> the list you provided to replace all the portions possible with BBS:
>
> Yes, it can. It is trivial to recover the key stream of BBS with very
> little known plain-text and the factors.

Now you have provided additional constraints, additional disclosure of
information beyond the BBS proof, thereby losing the protection of the proof
that you so proudly push.

> If you want to make it really straightforward you can simply clock the
> cipher one extra time after encryption and output the internal state.
> With the factors you can easily recover the key-stream by computing
> the nth square root of the output internal state.
>
> The missing MAC can be solved by means of a Carter-Wegman hash using
> the BBS's generated key stream.

Which of course eliminates its being BBS, directly violating your claim
that it can all be done with BBS.
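
For reference, the rollback itself is mechanical once the factors are
known. A toy sketch (all parameters invented for illustration): with p
and q in hand, each BBS state has a unique square root that is itself a
quadratic residue, and that root is the previous state.

```python
# Toy Blum Blum Shub, plus the backward step that knowing the factors enables.
p, q = 499, 547                    # toy Blum primes, both ≡ 3 (mod 4)
n = p * q

def forward(x, steps):
    bits = []
    for _ in range(steps):
        x = x * x % n
        bits.append(x & 1)         # output the LSB each clock
    return x, bits

def backward(x):
    # With p and q known, find the unique square root of x mod n that is
    # itself a quadratic residue mod both primes: the previous state.
    rp = pow(x % p, (p + 1) // 4, p)
    rq = pow(x % q, (q + 1) // 4, q)
    qi, pi = pow(q, -1, p), pow(p, -1, q)
    for sp in (rp, p - rp):
        for sq in (rq, q - rq):
            cand = (sp * q * qi + sq * p * pi) % n     # CRT combine
            if (pow(cand % p, (p - 1) // 2, p) == 1
                    and pow(cand % q, (q - 1) // 2, q) == 1):
                return cand

seed = pow(123456, 2, n)           # seed with a quadratic residue
state, bits = forward(seed, 8)     # the "one extra clock" leaks `state`

# An attacker holding p, q and the final state rolls the generator back:
x, recovered = state, []
for _ in range(8):
    recovered.append(x & 1)
    x = backward(x)
recovered.reverse()
```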

So to summarize:
BBS is nearly broken for usable key sizes
Your claim that it can be used for every portion is directly false.
Joe

From: Kristian Gjøsteen on
Tom St Denis <tom(a)iahu.ca> wrote:
>The Katja method of N=p^2q works like RSA otherwise e.g. M^e^d == M,
>and M^d^e == M

Keep in mind that it might be easy to factor integers of the form p^2q,
but not integers of the form pq.

--
Kristian Gjøsteen
From: Mok-Kong Shen on
Peter Fairbrother wrote:

> Let's say a perfectly strong cipher cannot be broken by any method
> easier than brute force.
>
> Any attack which needs less than brute force weakens the cipher.
>
> I assume that the first cipher is perfectly strong.
>
> If the second cipher does some of the work of a brute-force attack, then
> the combo can be broken with less effort (on the part of the attacker)
> than a brute-force attack.

In a previous post (13.01) I wrote: "if one cascades two algorithms
A and B and A and B are of different nature and the keys employed for
the two are random and independent, then the combined strength is
evidently greater than either of the components (assuming, of course,
that neither component has strength 0)". Your second cipher is not
'independent' of the first, I would argue, being specially adapted to
it.

On the other hand, it seems at least intuitively strange that being
the 'first' should matter. I mean, one need not consider what is
normally the 'cleartext' as such (as being immediately understandable),
and can think of the whole process merely as a transformation of a text
X into another text Y (without considering the intelligibility of
either). Now, in a combination there are two processes A and B: why
should AB be better than BA, if both A and B perform some 'general'
transformation and are 'independent' of each other? (If they are not
'independent', that could conceivably be a different story.) Note also
that here the mapping from X to Y is bijective and its reverse is also
a 'general' transformation. That is, the scrambling in the direction
from X to Y should be as well done as the scrambling in the direction
from Y to X (for A and B singly, and for AB and BA). Now, when
transforming from Y to X using BA, B becomes the first, and would be
the better one according to the claim that being first is important,
which is a contradiction in my humble view.
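
To make the reversal concrete: for bijective A and B, the inverse of AB
is B^-1 followed by A^-1, so going from Y back to X with the cascade AB
really does apply the components in the opposite order. A toy sketch
with random byte permutations (everything here invented for
illustration):

```python
import random

# Two independent 'general' transformations A and B, modelled as random
# permutations of the byte values, cascaded as AB: A first, then B.
rng = random.Random(1)

def random_bijection():
    perm = list(range(256))
    rng.shuffle(perm)
    inv = [0] * 256
    for x, y in enumerate(perm):   # build the inverse permutation
        inv[y] = x
    return perm, inv

A, A_inv = random_bijection()
B, B_inv = random_bijection()

def AB(x):                         # the X -> Y direction: A, then B
    return B[A[x]]

def AB_inverse(y):                 # the Y -> X direction: B^-1, then A^-1
    return A_inv[B_inv[y]]

ok = all(AB_inverse(AB(x)) == x for x in range(256))
```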

M. K. Shen