From: Peter Fairbrother on 20 Jan 2010 16:00

unruh wrote:
> On 2010-01-20, Peter Fairbrother <zenadsl6186(a)zen.co.uk> wrote:
>> Paul Rubin wrote:
>>> Peter Fairbrother <zenadsl6186(a)zen.co.uk> writes:
>>>> To rephrase that somewhat: is there a method of breaking AES which on
>>>> average needs less resources than brute force?
>>>>
>>>> And the answer is yes - but we (probably) don't know what it is.
>>> Why do you think the answer is yes?
>>
>> There's an existence proof of this - actually there are several, which
>> apply to all possible block ciphers where the block size equals the key
>> size - but I don't know which ones are in the literature.
>>
>> Here's an outline of one, which applies to work rather than resources.
>> Warning, I have drink taken, and may get details wrong, or be slightly
>> incoherent.
>>
>> Consider a square array of ciphertexts arranged with the key at the top
>> and the plaintext down the sides.
>>
>> By the pigeonhole principle, as AES's block size is equal to its key
>> size, looking along a row of the array (~= chosen plaintext) there must
>> be on average one instance of each possible ciphertext per plaintext.
>>
>> It's possible for some cipher that there is only one such instance per
>> key, but this is not true of AES, and it would weaken the cipher in
>> other ways.
>>
>> Let's assume (because it's difficult to show) that there is some
>> distribution of these (eg for an ideal cipher model cipher it's a
>
> No it is not a gaussian. It may be a binomial.
>
>> Gaussian) and a proportion of plaintexts give the same ciphertext under
>> say four different keys.
>>
>> Do a whole lot of work, and find these. Note that that doesn't
>> technically count as work to break an instance of the cipher, as you
>> only have to do it once for every instance.
>
> Sure it does count. It is just that you can distribute that amongst the
> various times you want to break something.

It's an existence proof. It's assumed that the number of times you want
to break something approaches infinity. It isn't a practical attack.

> So if you are trying to use this a thousand times,
> you can divide the work by 1000. But since the work is so huge (roughly
> 2^2L where L is the length of the block -- ie 2^256 for a 128 bit
> cypher), even dividing that by 1000 makes no difference. It is huge.
> Neglecting it entirely is just wrong.
>
>> Now do a chosen plaintext attack with these plaintexts. You are going to
>> eliminate four possible keys per known plaintext.

Should be "per chosen plaintext", beer!

> ??

Each chosen plaintext eliminates four possible keys.

>> That's less work than brute force.
>
> No, it is far far more. In brute force you only have to build one row of
> that table effectively, not all 2^128 rows. If you neglect enough work
> of course everything can be decrypted in one step.

???

-- Peter Fairbrother
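For concreteness, here is a minimal Python sketch of the idea in the post
above, run against a toy 8-bit "ideal cipher" instead of AES. The cipher
construction and the sizes are stand-ins chosen purely for illustration;
only the four-key threshold comes from the post.

import random

BITS = 8                  # toy block/key size, standing in for AES's 128
N = 1 << BITS

def make_cipher(key):
    # Toy "ideal cipher": each key picks an independent random permutation
    # of the plaintext space. Purely illustrative -- nothing like AES.
    perm = list(range(N))
    random.Random(key).shuffle(perm)
    return perm

# One-off precomputation of the full key x plaintext table of ciphertexts
# (the ~2^2L work objected to above; trivial at toy size).
table = [make_cipher(k) for k in range(N)]

# Find (plaintext, ciphertext) cells hit by at least 4 different keys.
cells = {}
for k in range(N):
    for p in range(N):
        cells.setdefault((p, table[k][p]), []).append(k)
multi = {pc: ks for pc, ks in cells.items() if len(ks) >= 4}
print(len(multi), "cells are shared by >= 4 keys")

# Chosen-plaintext phase against one unknown key: every query either rules
# out all >= 4 keys of a cell at once, or narrows the candidates to them.
secret = random.randrange(N)
candidates = set(range(N))
for (p, c), ks in multi.items():
    if table[secret][p] == c:      # oracle answer for chosen plaintext p
        candidates &= set(ks)
    else:
        candidates -= set(ks)
    if len(candidates) <= 1:
        break
print("secret key:", secret, " remaining candidates:", candidates)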
From: Peter Fairbrother on 20 Jan 2010 17:00

unruh wrote:
> On 2010-01-20, Peter Fairbrother <zenadsl6186(a)zen.co.uk> wrote:
>> Consider a square array of ciphertexts arranged with the key at the top
>> and the plaintext down the sides.
>>
>> By the pigeonhole principle, as AES's block size is equal to its key
>> size, looking along a row of the array (~= chosen plaintext) there must
>> be on average one instance of each possible ciphertext per plaintext.
>>
>> It's possible for some cipher that there is only one such instance per
>> key, but this is not true of AES, and it would weaken the cipher in
>> other ways.
>>
>> Let's assume (because it's difficult to show) that there is some
>> distribution of these (eg for an ideal cipher model cipher it's a
>
> No it is not a gaussian. It may be a binomial.

Poisson, of course. Sorry.

-- Peter Fairbrother
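To put numbers on the Poisson correction: modelling each of the 2^128 keys
as hitting a fixed (plaintext, ciphertext) cell independently with
probability 2^-128 gives a count that is approximately Poisson with mean 1.
A small sketch of the resulting figures follows; the four-key threshold is
carried over from the earlier post, and the independence model is the usual
ideal-cipher heuristic, not anything proved about AES.

from math import exp, factorial

def poisson_pmf(k, lam=1.0):
    # P(X = k) for X ~ Poisson(lam)
    return exp(-lam) * lam**k / factorial(k)

# Probability that a fixed (plaintext, ciphertext) cell is hit by >= 4 keys
p_ge_4 = 1.0 - sum(poisson_pmf(k) for k in range(4))
print("P(cell shared by >= 4 keys) ~", p_ge_4)     # ~ 0.019

# Expected number of such ciphertext values along one plaintext row of the
# 2^128 x 2^128 array
print("expected per row ~", p_ge_4 * 2.0**128)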
From: Peter Fairbrother on 20 Jan 2010 17:22

Paul Rubin wrote:
> Kristian Gjøsteen <kristiag+news(a)math.ntnu.no> writes:
>> Consider the following two statements:
>>
>> We have no idea if there are unknown, efficient methods that
>> solve the "AES problem" (ie how to distinguish the AES permutation
>> from a random permutation).
>>
>> We have no idea if there are unknown, efficient methods that
>> solve the factoring problem.
>>
>> Now spot the difference.
>
> The difference is this. Let AES-n denote a "generalized AES" with an
> n-bit key for arbitrary n. There is a famous theorem going all the way
> back to Shannon, that says that a random boolean function on n inputs
> takes O(exp(n)) gates to implement (i.e. takes exponential time to
> compute).

I don't know about the history, or the definitions, but as a statement
about every randomly chosen boolean function that's simply not true - some
of them will be implementable in fewer gates.

It's also untrue as an average, for the same reason.

-- Peter Fairbrother
From: Joseph Ashwood on 21 Jan 2010 16:03

"unruh" <unruh(a)wormhole.physics.ubc.ca> wrote in message
news:slrnhlct2k.lu.unruh(a)wormhole.physics.ubc.ca...
> Right now factoring is a problem with a very finite length. Thus
> polynomial has nothing to do with it. For factoring a number of length
> less than 200000 digits, it is O(1), or order any function you care to
> choose.
> If you look at the asymptote, then superpolynomial grows faster than
> any polynomial no matter how big. That is not polynomial. The log is
> polynomial.

You keep trying to say that I claimed factoring is polynomial. I have
repeatedly stated that it is not polynomial, it is nearly polynomial. This
seems to be the core of your lack of understanding: the word "nearly"
carries significant meaning in that statement.

>>>> algorithms keep getting better. Given that hardware is quickly
>>>> outpacing
>>> BBS with modulus of length 5 was broken long long ago. BBS with length
>>> 10^5 is far from broken. Which are you talking about?
>>
>> I'm not convinced 10kbit factoring is "far from broken." Certainly it
>> would
>
> 10^5 = 100K, not 10K

Yeah, I made a calculation error there. It doesn't make much difference:
100kbit factoring is not sufficiently more complex than 10kbit to change
my statements.

>>> Key sizes which are "useable" scale with the speed and size of
>>> computers, and much more favourably than does the breaking. So what are
>>> you talking about?
>>
>> I don't find even 1024-bit BBS's few kbyte/sec to be usable in most
>> circumstances, and that is borderline right now, within reach of major
>
> It is a lot more useable than is breaking it.
> The growth in work to encrypt IS polynomial. The growth in work to break
> is not.

Actually that is itself debatable. The time taken for encryption is
length(data) * performance(BBS); the time taken for breaking is
performance(factoring). This is a critical difference: if
dt(length(data)) > dt(performance(factoring) / performance(BBS)), then BBS
is actually losing secure usability.

Unfortunately for BBS, dt(length(data)) may or may not be polynomial, and
it may be system specific. If we look to the performance of internet
connections for guidance, those are increasing exponentially, so the
dt(length(data)) there is likely super-polynomial, so BBS is losing there.
For files, the capacity of hard drives is expanding exponentially, so BBS
is losing. Even in hard disk formats, the early 1990s standard was 512
bytes/sector, 4096 bytes/sector is the new standard, and the SSD standard
is 64 Kbytes/sector; the data points aren't enough to conclude exponential
growth (there is still a polynomial that fits), but the indication is that
it too is super-polynomial or exponential. External storage as well:
720kB/1.44MB/650MB/5GB/25GB is clearly an exponential pattern. I'm
actually hard pressed to find a system where the data expansion has not
been exponential. This is not good for BBS usability vs BBS security.

>> corporations within a few years. There is also the problem that after
>> compensating for the growth of file sizes, the overall complexity of
>> en/decryption is growing possibly more quickly than the complexity of
>> the attack. File sizes, and so the en/decryption complexity, have
>> increased dramatically, and this drags on the ability to grow the
>> modulus size. I don't have the exact metrics in front of me so I don't
>> know whether this is enough to completely negate the asymptotic change,
>> but it will change the ability to use BBS and should be considered
>> strongly.
>
> Any of these exponentiation encryptions will be slow compared with
> symmetric key crypto, which is why no one uses them. Of course the
> discussion here is about the relative strength of various encryptions,
> so if you use two (one to encrypt the key to the other) you are at the
> mercy of the weakest of the two, not the strongest.

Which of course makes no difference to whether or not "encrypting twice
[is] much more secure", but it also requires violating the conditions of
the BBS proof, completely invalidating the notion that the proof means
only factoring can break BBS. Specifically, the index method necessary to
create the public key system requires disclosing the entire final state of
the generator, while the BBS proof requires that no more than
log(log(modulus)) bits are output; these are notably different.

> So, I do agree that
> for any practical use of crypto, one would not want to use RSA or BBS,
> or....

Actually it's getting to be a couple of years since I recommended RSA or
integer DH for anything serious; I believe we've passed the trade-off
point. I still recommend overkill sizes for ECC because I don't have as
much trust in it, but there have been few problems.

Joe
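For reference, a minimal sketch of the plain Blum Blum Shub generator being
argued about, emitting only about log2(log2(N)) low-order bits per squaring
in line with the output restriction mentioned above. The parameter sizes
and seed are toy values chosen so the sketch runs instantly; this is the
secret-keyed stream generator only, not the public-key construction under
discussion.

import math

def is_prime(n):
    # Trial division; fine for the toy sizes used here.
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

def next_blum_prime(n):
    # Smallest prime >= n that is congruent to 3 mod 4.
    while not (n % 4 == 3 and is_prime(n)):
        n += 1
    return n

# Toy Blum integer -- orders of magnitude too small for real security.
p = next_blum_prime(10_000)
q = next_blum_prime(20_000)
N = p * q

def bbs_bits(seed, nbits):
    # x_{i+1} = x_i^2 mod N, emitting the low-order j bits of each state,
    # with j ~ log2(log2(N)) per the output restriction discussed above.
    j = max(1, int(math.log2(math.log2(N))))
    x = pow(seed, 2, N)          # start from a quadratic residue
    out = []
    while len(out) < nbits:
        x = pow(x, 2, N)
        out.extend((x >> b) & 1 for b in range(j))
    return out[:nbits]

print("p, q, N:", p, q, N)
print("first 32 output bits:", bbs_bits(123456789, 32))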
From: Paul Rubin on 21 Jan 2010 21:38
Peter Fairbrother <zenadsl6186(a)zen.co.uk> writes:
>> There is a famous theorem going all the way
>> back to Shannon, that says that a random boolean function on n inputs
>> takes O(exp(n)) gates to implement (i.e. takes exponential time to
>> compute).
>
> I don't know about the history, or the definitions, but as a statement
> about every randomly chosen boolean function that's simply not true -
> some of them will be implementable in fewer gates.
>
> It's also untrue as an average, for the same reason.

Stated more precisely (if I have it right): for any c < 1, as n increases,
the probability that a randomly selected boolean function on n inputs can
be implemented in fewer than 2**(cn) gates approaches zero. I shouldn't
have said O(exp(n)); I should have said "exponential in n".

I spent a little time looking up the theorem, which I'd known secondhand
but had never looked at carefully. The proof is a simple counting argument
(Shannon 1949 [1]; the version below is basically copied from Kulikov [2]).

An n-input boolean function can be viewed as a truth table with 2**n
entries of 1 bit each, so there are 2**(2**n) such functions. Now, given
some number t, suppose you have t 2-input gates, each of which computes an
arbitrary 2-input function (so there are 16 types of gate). Each gate
input comes from either 1) one of the t gate outputs; 2) a constant (1 or
0); or 3) one of the n input variables. So there are t+n+2 possible input
sources and (t+n+2)**2 assignments of inputs to a gate. Since there are 16
choices of gate, the total number of possible t-gate circuits is at most
F(t,n) = (16*(t+n+2)**2)**t. If t = 2**n/(10n) then F(t,n) is
approximately 2**(2**n/5), which is vanishingly small compared with
2**(2**n) when n is large enough. Obviously the average case is also
exponential.

But it shows what a horrible state complexity theory is in, since we don't
have any *specific* functions (those whose inverses are in NP) whose
complexity is proven to be worse than linear.

References

1. C. E. Shannon, "The Synthesis of Two-Terminal Switching Circuits,"
Bell System Technical Journal, Vol. 28 (Jan. 1949), pp. 59-98.

2. http://cs.ioc.ee/~tarmo/tday-meintack/kulikov-slides.pdf (found by
google search), see the first few slides.
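A quick numerical check of the counting bound above, just plugging small n
into the same F(t,n) = (16*(t+n+2)**2)**t and comparing against the
2**(2**n) possible functions; nothing here goes beyond what the post
already states.

from math import log2

def log2_num_circuits(t, n):
    # log2 of F(t,n) = (16*(t+n+2)**2)**t, the upper bound above on the
    # number of distinct circuits buildable from t arbitrary 2-input gates.
    return t * log2(16 * (t + n + 2)**2)

for n in (10, 15, 20, 25, 30):
    t = (2**n) // (10 * n)            # t = 2^n / (10n), as in the post
    lg_circuits = log2_num_circuits(t, n)
    lg_functions = 2**n               # log2 of the 2**(2**n) functions
    print(f"n={n:2d}: log2(#circuits) ~ {lg_circuits:,.0f}, "
          f"log2(#functions) = {lg_functions:,}, "
          f"log2(fraction reachable) ~ {lg_circuits - lg_functions:,.0f}")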