From: unruh on
On 2010-01-20, Joseph Ashwood <ashwood(a)msn.com> wrote:
> "unruh" <unruh(a)wormhole.physics.ubc.ca> wrote in message
> news:slrnhl9g5v.ok8.unruh(a)wormhole.physics.ubc.ca...
>> On 2010-01-18, Joseph Ashwood <ashwood(a)msn.com> wrote:
>>> "Simon Johnson" <simon.johnson(a)gmail.com> wrote in message
>>> news:6efb7e29-9634-4e1c-8f7f-ee61a7cafa37(a)v25g2000yqk.googlegroups.com...
>>>> Given the fact that Euclid,
>>>> Euler, Gauss, Riemann and a whole other bunch of pre-eminent
>>>> mathematicans were unable to find an efficient factoring algorithm
>>>> should give us the warm fuzzies about BBS.
>>>
>>> Given the fact that factoring is nearly polynomial. Given that the
>>
>> How in the world is sub-exponential nearly polynomial?
>
> A reasonable argument is:
>> exp(ln(N)^(1/3)) is not a polynomial in any definition I know of.
>> While it is slower than exponential in the length( length=ln(N)) it is
>> still far from polynomial. Now it happens that for the lengths actually
>> used, it does not increase terribly fast, but that is NOT "nearly
>> polynomial"
>
> Based on that it is not polynomial, but it is nearly polynomial. The
> category of sub-exponential/super-polynomial is relatively narrow, and quite
> close to polynomial. The rate of advance on the asymptotic is very slow, and
> most of the recent advances run out of steam fairly quickly, but the
> asymptotics dominate rather quickly at the lengths already in use. As
> progress inevitably happens on the factoring problem, one of two things will
> happen; either the lower asymptotic will rise (currently linear) or the
> upper bound will fall (currently too complex to write here). Either of these
> is helpful in really knowing the complexity. Right now though, factoring is
> nearly polynomial.

Right now factoring is a problem with a very finite length. Thus polynomial has
nothing to do with it. For factoring a number of length less than 200000
digits, it is O(1), or of order any other function you care to choose.
If you look at the asymptote, then a superpolynomial function grows faster
than any polynomial, no matter how large. That is not polynomial. Its
logarithm is polynomial in the length, but the function itself is not.
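To make the growth rates concrete, here is a small Python sketch (my own
illustration, not from the thread). It evaluates the standard GNFS cost
estimate L_N[1/3, c] = exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)) with the usual
constant c ~ 1.923 and compares it with a fixed polynomial in the bit length;
the constant and the polynomial degree are assumptions chosen only to show the
shape of each curve.

import math

def gnfs_work(bits, c=1.923):
    """Rough GNFS cost estimate L_N[1/3, c] for an N of the given bit length."""
    ln_n = bits * math.log(2)                      # ln N for an N of 'bits' bits
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def poly_work(bits, degree=6):
    """A polynomial in the input length, for comparison."""
    return bits ** degree

for bits in (512, 768, 1024, 2048, 4096, 8192):
    print(f"{bits:5d} bits:  GNFS ~ 2^{math.log2(gnfs_work(bits)):6.1f}   "
          f"bits^6 ~ 2^{math.log2(poly_work(bits)):5.1f}")

The exponent on the GNFS side keeps pulling away from the polynomial one as
the length grows, which is the sense in which it is superpolynomial even
though it stays far below 2^bits.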


>
>>> algorithms keep getting better. Given that hardware is quickly outpacing
>> BBS with modulus of length 5 was broken long long ago. BBS with length
>> 10^5 is far from broken. Which are you talking about.
>
> I'm not convinced 10kbit factoring is "far from broken." Certainly it would

10^5 bits is 100 kbit, not 10 kbit.

> be substantially more difficult than the 768-bit factoring recently
> completed, but it is not that far from being considered. The 768-bit was
> done publicly, the computer farms of major corporations are notably larger
> than the 768 farm, not yet the million or so times more powerful that would
> be needed, but also not far enough for my taste.
>
> Beyond this the usability factor is important. BBS with a 1024 bit modulus
> is very slow, few kbyte/sec (each squaring only gives 1 bit), this is
> already too slow for most purposes. Increasing the modulus to 10kbit will
> cause this to become unusable in most circumstances.
>
>>> So to summarize:
>>> BBS is nearly broken for usable key sizes
>>
>> Key sizes which are "useable" scales with the speed and size of
>> computers, and much more favourably than does the breaking. So what are
>> you talking about?
>
> I don't find even 1024-bit BBS's few kbyte/sec to be usable in most
> circumstances, and that is borderline right now, within reach of major

It is a lot more usable than breaking it is.
The growth in the work to encrypt IS polynomial; the growth in the work to
break is not.
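As an illustration of the polynomial side, here is a toy Blum-Blum-Shub
generator in Python (my own sketch; the primes are tiny demonstration values,
nowhere near a secure modulus). Each output bit costs one modular squaring, so
the legitimate user's work grows only polynomially in the length of the
modulus, while factoring that modulus does not.

# Toy Blum-Blum-Shub generator: one modular squaring per output bit.
# The primes below are tiny demonstration values, NOT a secure modulus.
p, q = 499, 547                      # both congruent to 3 mod 4, as BBS requires
n = p * q

def bbs_bits(seed, count, n=n):
    x = (seed * seed) % n            # start from a quadratic residue mod n
    out = []
    for _ in range(count):
        x = (x * x) % n              # one squaring mod n per output bit
        out.append(x & 1)            # emit the least significant bit
    return out

print(bbs_bits(seed=12345, count=32))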


> corporations within a few years. There is also the problem that after
> compensating for the growth of file sizes, the overall complexity of
> en/decryption is growing possibly more quickly than the complexity of the
> attack. File sizes, and so the en/decryption complexity, have increased
> dramatically, this drags on the ability to grow the modulus size. I don't
> have the exact metrics in front of me so I don't know whether this is enough
> to completely negate the asymptotic change, but it will change the ability
> to use BBS and should be considered strongly.

Any of these exponentiation-based encryptions will be slow compared with
symmetric-key crypto, which is why no one uses them for bulk data. Of course
the discussion here is about the relative strength of various encryptions, so
if you use two (one to encrypt the key to the other) you are at the mercy of
the weaker of the two, not the stronger. So I do agree that for any practical
use of crypto, one would not want to use RSA or BBS, or....
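To illustrate the weaker-link point about such hybrid constructions, here is a
deliberately toy Python sketch (my own illustration): a public-key step,
textbook RSA with tiny insecure parameters, transports a random session key,
and a hash-based keystream, also purely illustrative, encrypts the bulk data.
An attacker who can break either step, the key transport or the bulk cipher,
reads the message, so the combination is only as strong as its weaker half.

import hashlib, os

# --- toy public-key step: textbook RSA with tiny, insecure parameters ---
p, q, e = 61, 53, 17
n, d = p * q, pow(e, -1, (p - 1) * (q - 1))        # modular inverse (Python 3.8+)

# --- toy symmetric step: SHA-256 in counter mode as a keystream ---
def keystream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    for i, block in enumerate(data[j:j + 32] for j in range(0, len(data), 32)):
        pad = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(block, pad))
    return bytes(out)

# hybrid: RSA wraps a random session key, the keystream encrypts the bulk data
session_key = os.urandom(16)
wrapped_key = [pow(b, e, n) for b in session_key]  # byte-at-a-time, toy only
ciphertext = keystream_xor(session_key, b"the quick brown fox")

# receiver: unwrap the session key, then decrypt the bulk data
recovered_key = bytes(pow(c, d, n) for c in wrapped_key)
assert keystream_xor(recovered_key, ciphertext) == b"the quick brown fox"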

> Joe
>
From: Paul Rubin on
Kristian Gjøsteen <kristiag+news(a)math.ntnu.no> writes:
> Consider the following two statements:
>
> We have no idea if there are unknown, efficient methods that
> solve the "AES problem" (ie how to distinguish the AES permutation
> from a random permutation).
>
> We have no idea if there are unknown, efficient methods that
> solve the factoring problem.
>
> Now spot the difference.

The difference is this. Let AES-n denote a "generalized AES" with an
n-bit key for arbitrary n. There is a famous counting argument going all the
way back to Shannon, which says that almost every boolean function on n inputs
requires on the order of 2^n/n gates to implement (i.e. takes exponential time
to compute). For some fixed n, consider the function F taking a few
plaintext/ciphertext pairs (p_i,c_i)... and returning "Yes" iff there is
an AES-n key K that encrypts each p_i to the corresponding c_i. F
basically performs known-plaintext cryptanalysis of AES. If we are to
have some hope of implementing F more efficiently than brute-force
search, we had better think it is somehow better than a random function.

So what do we know about F? Mainly that it is in NP, since we can
concisely prove a "yes" instance by showing the key K. We don't know,
for example, if F is in co-NP: for a "no" instance there is no obvious
way to show that no key exists. Does F being in NP make us think that F
is tractable? That's the P-vs-NP question. Most theorists think
that P != NP, but it's a very hard problem. There is a lot of
meta-theory (relativization, natural proofs, etc.) showing that various
approaches to it have no chance of working.
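A tiny sketch of the NP point, using a toy 16-bit keyed permutation of my own
in place of AES-n: checking a claimed key (the certificate) against the
plaintext/ciphertext pairs is cheap, while the only generic way to settle the
question without a certificate is to exhaust the key space.

KEY_BITS = 16

def toy_encrypt(key: int, pt: int) -> int:
    """Stand-in keyed permutation on 16-bit blocks (toy, not AES)."""
    x = pt & 0xFFFF
    for r in range(4):
        x = (x + key + r) & 0xFFFF             # key addition    (bijective)
        x = (x * 0x9E37) & 0xFFFF              # odd multiplier  (bijective mod 2^16)
        x = ((x << 5) | (x >> 11)) & 0xFFFF    # 16-bit rotation (bijective)
    return x

def verify(key: int, pairs) -> bool:
    """The NP certificate check: confirming a claimed key is cheap."""
    return all(toy_encrypt(key, p) == c for p, c in pairs)

def decide_F(pairs) -> bool:
    """Deciding F without a certificate: nothing generic beats trying every key."""
    return any(verify(k, pairs) for k in range(2 ** KEY_BITS))

secret = 0xBEEF
pairs = [(p, toy_encrypt(secret, p)) for p in (0x0001, 0x1234, 0xABCD)]
print(verify(secret, pairs))   # polynomial-time certificate check: True
print(decide_F(pairs))         # exhaustive search over 2^16 keys: True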

For factoring, in terms of concrete algorithms we're not really better
off, but we do know that factoring is in co-NP as well as NP. Does that
help? We don't know for sure, but there is a conceptual picture that it
might, i.e. factoring sits inside a region of NP that AES may sit on the
outside of. Russell Impagliazzo in the paper I linked earlier
distinguished these situations as "Minicrypt" vs. "Cryptomania".

In complexity terms, public-key cryptography seems to rest on somewhat
more delicate presumptions than symmetric cryptography.
From: Kristian Gjøsteen on
Paul Rubin <no.email(a)nospam.invalid> wrote:
>In complexity terms, public-key cryptography seems to rest on somewhat
>more delicate presumptions than symmetric cryptography.

I know nothing about complexity theory, but I do not find this argument
entirely appealing.

--
Kristian Gjøsteen
From: Peter Fairbrother on
Paul Rubin wrote:
> Peter Fairbrother <zenadsl6186(a)zen.co.uk> writes:
>> To rephrase that somewhat: is there a method of breaking AES which on
>> average needs less resources than brute force?
>>
>> And the answer is yes - but we (probably) don't know what it is.
>
> Why do you think the answer is yes?


There's an existence proof of this - actually there are several, which
apply to all possible block ciphers where the block size equals the key
size - but I don't know which ones are in the literature.

Here's an outline of one, which applies to work rather than resources.
Warning, I have drink taken, and may get details wrong, or be slightly
incoherent.

Consider a square array of ciphertexts arranged with the key at the top
and the plaintext down the sides.

By the pigeonhole principle, as AES's block size is equal to its key size,
looking along a row of the array (~=chosen plaintext) there must be on
average one instance of each possible ciphertext per plaintext.

It's possible for some cipher that there is only one such instance per
key, but this is not true of AES, and it would weaken the cipher in
other ways.

Let's assume (because it's difficult to show) that there is some
distribution of these (eg for an ideal cipher model cipher it's a
Gaussian) and a proportion of plaintexts give the same ciphertext under
say four different keys.

Do a whole lot of work, and find these. Note that that doesn't
technically count as work to break an instance of the cipher, as you
only have to do it once, for all instances.

Now do a chosen plaintext attack with these plaintexts. You are going to
eliminate four possible keys per known plaintext.

That's less work than brute force.



-- Peter Fairbrother
From: unruh on
On 2010-01-20, Peter Fairbrother <zenadsl6186(a)zen.co.uk> wrote:
> Paul Rubin wrote:
>> Peter Fairbrother <zenadsl6186(a)zen.co.uk> writes:
>>> To rephrase that somewhat: is there a method of breaking AES which on
>>> average needs less resources than brute force?
>>>
>>> And the answer is yes - but we (probably) don't know what it is.
>>
>> Why do you think the answer is yes?
>
>
> There's an existence proof of this - actually there are several, which
> apply to all possible block ciphers where the block size equals the key
> size - but I don't know which ones are in the literature.
>
> Here's an outline of one, which applies to work rather than resources.
> Warning, I have drink taken, and may get details wrong, or be slightly
> incoherent.
>
> Consider a square array of ciphertexts arranged with the key at the top
> and the plaintext down the sides.
>
> By pigeonhole principle, as AES's block size is equal to it's key size,
> looking along a row of the array (~=chosen plaintext) there must be on
> average one instance of each possible ciphertext per plaintext.
>
> It's possible for some cipher that there is only one such instance per
> key, but this is not true of AES, and it would weaken the cipher in
> other ways.
>
> Let's assume (because it's difficult to show) that there is some
> distribution of these (eg for an ideal cipher model cipher it's a

No, it is not a Gaussian. It may be a binomial.

> Gaussian) and a proportion of plaintexts give the same ciphertext under
> say four different keys.
>
> Do a whole lot of work, and find these. Note, that that doesn't
> technically count as work to break an instance of the cipher, as you
> only have to do it once for every instance.

Sure it does count. It is just that you can distribute that work amongst the
various times you want to break something. So if you are trying to use this a
thousand times, you can divide the work by 1000. But since the work is so huge
(roughly 2^(2L), where L is the length of the block -- i.e. 2^256 for a
128-bit cipher), even dividing it by 1000 makes no difference. It is huge.
Neglecting it entirely is just wrong.


>
> Now do a chosen plaintext attack with these plaintexts. You are going to
> eliminate four possible keys per known plaintext.

??

>
> That's less work than brute force.

No, it is far, far more. In brute force you effectively only have to build one
row of that table, not all 2^128 rows. If you neglect enough work, of course,
everything can be decrypted in one step.
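The trade-off in this exchange can be played out exhaustively on a toy cipher
whose block size equals its key size (a sketch of my own, not AES, with L = 8
bits so the full table fits in memory). Building the complete key-by-plaintext
table takes 2^(2L) encryptions, the precomputation unruh is pointing at, and,
for a random-looking toy cipher like this one, it turns up plaintexts whose
ciphertext repeats under several keys, the multiplicity Peter's argument
relies on.

from collections import Counter, defaultdict
import hashlib

L = 8  # toy block size == key size, in bits

def toy_encrypt(key: int, pt: int) -> int:
    """Toy keyed permutation on L-bit blocks (not AES; subkeys derived from the key)."""
    subkeys = hashlib.sha256(bytes([key])).digest()[:4]
    x = pt & 0xFF
    for sk in subkeys:
        x = (x + sk) & 0xFF                   # subkey addition (bijective)
        x = (x * 167) & 0xFF                  # odd multiplier  (bijective mod 2^8)
        x = ((x << 3) | (x >> 5)) & 0xFF      # 8-bit rotation  (bijective)
    return x

# Precomputation: the full 2^L x 2^L table, i.e. 2^(2L) encryptions in all.
# For each plaintext, record the ciphertext reached under the most keys.
multi = {}
for pt in range(2 ** L):
    buckets = defaultdict(list)
    for key in range(2 ** L):
        buckets[toy_encrypt(key, pt)].append(key)
    ct, keys = max(buckets.items(), key=lambda kv: len(kv[1]))
    multi[pt] = (ct, keys)

hist = Counter(len(keys) for _, keys in multi.values())
print("max number of keys sharing one ciphertext, per plaintext:",
      dict(sorted(hist.items())))

# Online phase: one chosen-plaintext query addresses several keys at once.
pt, (ct, keys) = max(multi.items(), key=lambda kv: len(kv[1][1]))
secret_key = 0xA7
if toy_encrypt(secret_key, pt) == ct:
    print(f"the key is one of {len(keys)} candidates:", keys)
else:
    print(f"one query eliminated {len(keys)} keys at once")

Whether the 2^(2L) table-building is counted as attack work or as amortised
precomputation is exactly the disagreement above.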