From: Simon Johnson on 16 Jan 2010 16:54

> An advance that history has proven to be quite likely. Improved factoring attacks seem to come in about five-year cycles, and while the graph has been fairly predictable, the deviation from predictability is growing. With symmetric attacks (differential, linear, etc.) the rate is about one every ten years, but with far less predictability. We just recently saw the attack of the decade on AES, the related-key attacks, but it's just about time for the factoring attack. The expected factoring attack should advance the 768 that we just saw to about 820; any deviation from this could change things substantially in the predictions of security.

I'm not sure I'm getting my point across clearly, so I'll state it again. With AES we have no idea whether the security we have against the current collection of attacks will resist an unknown attack. With BBS we know there is only ever one attack: integer factorisation. That's it; anything that breaks BBS must also factor integers.

Moreover, integer factorisation punishes the *memory* of the attacker, and the importance of this isn't recognised nearly enough. The matrix-reduction step in GNFS is a giant performance problem, and it gets much, much worse the bigger N is. The 768-bit factorisation effort required a 5-6 TB matrix reduction. Once you're up in the 2048-bit modulus range this becomes many petabytes. You've gone from a collection of disks that can exist in a single machine to a significant fraction of a datacenter to run the attack. Moreover, once you're out of a single physical machine the interconnects become much slower. Factoring does not scale the way most cryptographic attacks do.

> It does not. BBS's proof of security is proof that it is nearly broken for usable key sizes, and the key size necessary for security equivalent to 128-bit AES is currently arguable at up to about 30,000 kilobits, largely unusable.
Factoring a 2048-bit modulus is not possible with current technology, and it won't be for the foreseeable future. You're making the classic mistake of assuming there's no way to solve AES faster than brute force. You have no proof that this is true. I can define a cipher with a megabyte key; that does not make it more secure than AES or BBS. Key length does not equal security.

Again, at the risk of repeating myself ad nauseam: we don't know whether there is a brutal unpublished or undiscovered attack on AES. However, we do know that any attack that breaks BBS must also factor integers. There are a potentially *infinite* number of ways to break AES. There is exactly one way to break BBS. That fact alone should convince even the most hardened cynic that BBS is on much more solid ground than AES.

The fact that Euclid, Euler, Gauss, Riemann and a whole other bunch of pre-eminent mathematicians were unable to find an efficient factoring algorithm should give us the warm fuzzies about BBS. This is a problem that has been studied since the dawn of human history, and no-one, not a single soul, has found an efficient solution to it.

AES is an efficient, cleverly constructed block cipher. However, the mathematics it deploys is very new and the proofs of security are comparatively weak. I simply don't understand how you can possibly claim that AES is more secure than BBS.

> We must be looking at very different BBS algorithms. If the initial X is published along with N, then anyone can decrypt. If only N is published then decryption requires guessing X, even if the factorization is known. So it cannot be substituted for a public key algorithm. There is also the matter of the missing MAC. So to edit the list you provided to replace all the portions possible with BBS:

Yes, it can. It is trivial to recover the key stream of BBS with very little known plaintext and the factors.
If you want to make it really straightforward you can simply clock the cipher one extra time after encryption and output the internal state. With the factors you can easily recover the key stream by computing the nth square root of the output internal state. The missing MAC can be solved by means of a Carter-Wegman hash keyed from BBS's generated key stream.

Cheers,

Simon
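To make the "stepping backwards with the factors" claim concrete, here is a minimal sketch of BBS and of recovering a previous state via square roots mod p and q. The primes, seed, and helper names are illustrative assumptions, not anything from the thread, and the sizes are toy values with no security whatsoever.

```python
# Toy Blum Blum Shub sketch. ASSUMPTION: demo-sized primes chosen for
# illustration only; a real modulus is thousands of bits.
p, q = 499, 547          # both congruent to 3 (mod 4), so n is a Blum integer
n = p * q

def bbs_bits(seed, k):
    """Emit k keystream bits; each new state is the square of the last."""
    x = pow(seed, 2, n)          # square once so the state is a quadratic residue
    bits = []
    for _ in range(k):
        x = pow(x, 2, n)
        bits.append(x & 1)       # output the least-significant bit
    return bits

def prev_state(y):
    """Step BBS backwards using the factors: return the unique square root
    of y mod n that is itself a quadratic residue (y must be a QR)."""
    def qr_root(c, r):
        a = pow(c, (r + 1) // 4, r)                 # sqrt mod prime r = 3 (mod 4)
        return a if pow(a, (r - 1) // 2, r) == 1 else r - a
    ap, aq = qr_root(y % p, p), qr_root(y % q, q)
    # combine the two prime-field roots with the Chinese Remainder Theorem
    return (ap * q * pow(q, -1, p) + aq * p * pow(p, -1, q)) % n
```

With the factors, `prev_state` walks the state sequence backwards, which is exactly the keystream-recovery step described above; without them, inverting the squaring is believed to be as hard as factoring n.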
From: unruh on 16 Jan 2010 19:41

On 2010-01-16, Peter Fairbrother <zenadsl6186(a)zen.co.uk> wrote:
> Phoenix wrote:
>> Well, is encrypting twice much more secure?
>>
>> Yes or no?

...

> If you aren't doing one of the layering things, or belt-and-braces, or universal re-encryption, then don't double/multiple encrypt - it's usually not the best way to go.

The OP wants to put out there the source code for the program he is selling, encrypted. He is giving some lawyer the keys so that if he dies, the lawyer can give his customers the keys and they can get the source code. He wants the encrypted source code to be public. He is worried that before he dies someone will break AES, his source code will be released, and he will stop raking in the money from selling the binary code. Thus, he wants to triple encrypt to make sure that his source code is safe while he is alive. So it falls under the belt-and-braces category.

Of course his trousers are made of tissue paper, so a couple of million to the lawyer will open up his source code like toilet paper in the rain, but he is worried about someone spending a million to break AES instead to get at his source code.
From: unruh on 16 Jan 2010 19:47

On 2010-01-16, Peter Fairbrother <zenadsl6186(a)zen.co.uk> wrote:
> unruh wrote:
>> On 2010-01-16, Peter Fairbrother <zenadsl6186(a)zen.co.uk> wrote:
>>> Peter Fairbrother wrote:
>>>> Mok-Kong Shen wrote:
>>>>> Peter Fairbrother wrote:
>>>>>
>>>>>> I don't know about screwing up the implementation, which could do horrible things - but yes, there is established theory that multiple encryption with statistically independent keys is at least as secure as the first encryption, see:
>>>>>>
>>>>>> U. Maurer and J. L. Massey, Cascade ciphers: The importance of being first, Journal of Cryptology, vol. 6, no. 1, pp. 55-61, 1993.
>>>>>>
>>>>>> www.isiweb.ee.ethz.ch/archive/massey_pub/pdf/BI434.pdf
>>>>>>
>>>>>> so if one pass is secure enough you're okay.
>>>>>>
>>>>>> (I have a tiny doubt about their result, which I've been working on for some years now - but in practice it won't mean much even if it pans out)
>>>>> Wouldn't a 'definition' of 'statistically independent keys' (not very simple to verify absolutely in practice, I surmise) by itself ('logically') imply that the 2nd cipher couldn't weaken the work of the 1st cipher? (For in the other case one could say then that the keys aren't 'independent' after all.)
>>> No, and here's a concrete, if unlikely, example to prove otherwise.
>>>
>>> Iirc, it directly disproves Maurer and Massey's theory, although I haven't read it for a while.
>>>
>>> Imagine the first cipher is a secure block cipher with block size equal to key size. Take a chosen plaintext, and construct an ordered list of keys against ciphertexts for the first cipher.
>>>
>>> Now imagine the second cipher has only one possible key (or two, if you don't like the idea of a cipher with only one key, but they are both very similar permutations) which is a permutation the reverse of the list above.
>>>
>>> You now have a very simple chosen plaintext attack - get the encryptor to input the chosen plaintext, and out comes - the key, with very high probability!
>>>
>>> It doesn't matter if the first cipher is perfectly secure, IND-CPA, IND-CCA2, the whole schmear.
>>>
>>> The second cipher has done the brute-force work of a chosen plaintext attack for you.
>>
>> No, since the second cypher can only be the inverse for a single plaintext.
>
> The second cipher is not an inverse for the known plaintext. It inputs the corresponding ciphertext from the first cipher and outputs the key for the first cipher - not the plaintext.

The function is the mapping from key to encrypted text. The inverse is the function from encrypted text to key.

>> But this is exactly exhaustive search with known plaintext.
>
> Yes, with the proviso above. But the encryptor does the work for you!

Sorry, but if we are allowed to hypothesise such an encryptor, why not just hypothesise a key finder, to which you hand the encrypted text and it produces the key. Under that hypothesis, all crypto systems are insecure. I.e., you have to first make it plausible that such an encryptor as you hypothesise exists, and estimate the work that it has to do to encrypt. If the work it does is greater than exhaustive search, then it is no advance at all.

> There is no requirement that the second cipher be easy or efficient ... just a cipher, which technically it is.

Sure there is such a requirement. If the work needed to run it is far larger than the work required to break the original cypher by exhaustive search, then it is not a break in anyone's definition. It does not weaken the original cypher.

> -- Peter Fairbrother

>> Note that if such a second cypher exists, then this is exactly the chosen attack method on the cypher that tests the strength of the cypher. I.e., this cypher is very weak. That second cypher therefore does not decrease the strength of the first, because the first is already very weak, and this is precisely the weakness.
From: Peter Fairbrother on 17 Jan 2010 05:11

unruh wrote:
> On 2010-01-16, Peter Fairbrother <zenadsl6186(a)zen.co.uk> wrote:
>> unruh wrote:
>>> On 2010-01-16, Peter Fairbrother <zenadsl6186(a)zen.co.uk> wrote:
>>>> Peter Fairbrother wrote:
>>>>> Mok-Kong Shen wrote:
>>>>>> Peter Fairbrother wrote:
>>>>>>
>>>>>>> I don't know about screwing up the implementation, which could do horrible things - but yes, there is established theory that multiple encryption with statistically independent keys is at least as secure as the first encryption, see:
>>>>>>>
>>>>>>> U. Maurer and J. L. Massey, Cascade ciphers: The importance of being first, Journal of Cryptology, vol. 6, no. 1, pp. 55-61, 1993.
>>>>>>>
>>>>>>> www.isiweb.ee.ethz.ch/archive/massey_pub/pdf/BI434.pdf
>>>>>>>
>>>>>>> so if one pass is secure enough you're okay.
>>>>>>>
>>>>>>> (I have a tiny doubt about their result, which I've been working on for some years now - but in practice it won't mean much even if it pans out)
>>>>>> Wouldn't a 'definition' of 'statistically independent keys' (not very simple to verify absolutely in practice, I surmise) by itself ('logically') imply that the 2nd cipher couldn't weaken the work of the 1st cipher? (For in the other case one could say then that the keys aren't 'independent' after all.)
>>>> No, and here's a concrete, if unlikely, example to prove otherwise.
>>>>
>>>> Iirc, it directly disproves Maurer and Massey's theory, although I haven't read it for a while.
>>>>
>>>> Imagine the first cipher is a secure block cipher with block size equal to key size. Take a chosen plaintext, and construct an ordered list of keys against ciphertexts for the first cipher.
>>>>
>>>> Now imagine the second cipher has only one possible key (or two, if you don't like the idea of a cipher with only one key, but they are both very similar permutations) which is a permutation the reverse of the list above.
>>>>
>>>> You now have a very simple chosen plaintext attack - get the encryptor to input the chosen plaintext, and out comes - the key, with very high probability!
>>>>
>>>> It doesn't matter if the first cipher is perfectly secure, IND-CPA, IND-CCA2, the whole schmear.
>>>>
>>>> The second cipher has done the brute-force work of a chosen plaintext attack for you.
>>> No, since the second cypher can only be the inverse for a single plaintext.
>> The second cipher is not an inverse for the known plaintext. It inputs the corresponding ciphertext from the first cipher and outputs the key for the first cipher - not the plaintext.
>
> The function is the mapping from key to encrypted text. The inverse is the function from encrypted text to key.

Okay. I thought you had implied the first function was encryption, which is a mapping from plaintext to ciphertext.

>>> But this is exactly exhaustive search with known plaintext.
>> Yes, with the proviso above. But the encryptor does the work for you!
>
> Sorry, but if we are allowed to hypothesise such an encryptor, why not just hypothesise a key finder, to which you hand the encrypted text and it produces the key.

Yes - it's also known as exhaustive search.

> Under that hypothesis, all crypto systems are insecure. I.e., you have to first make it plausible that such an encryptor as you hypothesise exists, and estimate the work that it has to do to encrypt. If the work it does is greater than exhaustive search, then it is no advance at all.
>
>> There is no requirement that the second cipher be easy or efficient ... just a cipher, which technically it is.
>
> Sure there is such a requirement.
*Not in the definitions of cascaded ciphers there isn't.* The second cipher may be equivalent to a brute-force attack, and take as much work to perform - but it's still a cipher.

> If the work needed to run it is far larger than the work required to break the original cypher by exhaustive search, then it is not a break in anyone's definition. It does not weaken the original cypher.

All I'm showing is that in this theoretical and impractical but *follows-the-definitions-of-cascaded-ciphers* case the cascaded ciphers are weaker than the first cipher alone - which M+M said, incorrectly, was impossible.

If you don't like the extreme example (although it is theoretically sound), suppose instead that the second cipher merely does some of the work of breaking the first cipher - I think I've shown that that is at least possible. The combo is then weaker than the first cipher alone.

-- Peter Fairbrother

>> Note that if such a second cypher exists, then this is exactly the chosen attack method on the cypher that tests the strength of the cypher. I.e., this cypher is very weak. That second cypher therefore does not decrease the strength of the first, because the first is already very weak, and this is precisely the weakness.
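The extreme example in this exchange can be made concrete with a deliberately silly toy. As an assumption for illustration, the first cipher here is plain XOR (the argument only needs block size equal to key size), and the single-key second cipher is the reverse of the key-to-ciphertext list for one chosen plaintext, so the cascade hands the attacker the first cipher's key.

```python
# Toy illustration of the cascade counterexample (NOT a real cipher).
# First cipher: E1(k, x) = x XOR k, with 8-bit blocks and 8-bit keys.
P0 = 0x5A  # the attacker's chosen plaintext

def e1(key, block):
    """First cipher: a keyed permutation with block size == key size."""
    return block ^ key

# Second cipher: it has one fixed key, and its permutation is the inverse
# of the map key -> E1(key, P0), built here as an explicit lookup table.
TABLE = {e1(k, P0): k for k in range(256)}

def e2(block):
    """Second cipher: maps E1(k, P0) back to k, for every k."""
    return TABLE[block]

def cascade(key, block):
    """The cascade: first cipher, then the fixed-key second cipher."""
    return e2(e1(key, block))

# Chosen-plaintext attack: encrypting P0 under the cascade outputs the key.
```

Encrypting the chosen plaintext under the cascade outputs the first cipher's key directly, which is the sense in which the second "cipher" has done the brute-force work; for a realistic block size the table would have 2^128 entries, which is why the example is theoretical rather than practical.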
From: Peter Fairbrother on 17 Jan 2010 05:30
unruh wrote:
> On 2010-01-16, Peter Fairbrother <zenadsl6186(a)zen.co.uk> wrote:
>> Phoenix wrote:
>>> Well, is encrypting twice much more secure?
>>>
>>> Yes or no?
>
> ...
>>
>> If you aren't doing one of the layering things, or belt-and-braces, or universal re-encryption, then don't double/multiple encrypt - it's usually not the best way to go.
>
> The OP wants to put out there the source code for the program he is selling, and encrypting it. He is giving some lawyer the keys so that if he dies, the lawyer can give his customers the keys and they can get the source code. He wants the encrypted source code to be public. He is worried that before he dies someone will break AES and his source code will be released and he stops raking in the money from selling the binary code. Thus, he wants to triple encrypt to make sure that his source code is safe while he is alive.

Are you (he) talking about triple AES? Has anyone analysed that? If, e.g., AES is a group then that's no more secure than single AES (I very much doubt AES is actually a group; it's just an example of what can go wrong with triple encryption using the same cipher).

> So it falls under the belts and braces category.

If he's using different ciphers. There is some logic to that.

What I'd do is use two 256-bit-block, 256-bit-keyed ciphers chosen to be as dissimilar as possible, with independently chosen keys. Rijndael-256/256 (not AES-256) might be one, and Twofish-256/256 another.

The reason for using two ciphers is to protect against a possible break in only one of them. The reason for using 256-bit keys is defense against quantum computers. The reasons for using 256-bit blocks are - too complex to explain in a short post.

The reason for not just using one cipher is - history. If you want a good chance of being a long way out in advance of history then you need to take extreme measures.
But it's only a good chance; we simply don't know what advances will happen over the next 50 years. Practical crypto is an art, not a science - there's always someone wanting to break your system, and they will think of potential attacks you haven't thought of. Lots of them, and maybe one will work. And there ain't many ciphers in use 50 years ago which ain't broke now.

> Of course his trousers are made of tissue paper, so a couple of million to the lawyer will open up his source code like toilet paper in the rain, but he is worried about someone spending a million to break AES instead to get at his source code.

ITYM a billion to break AES :) But then some people do have the money and desire to try; NSA springs to mind. But you're right.

Key sharing? Hmm, maybe someone breaks key sharing in the next 50 years - quantum computers could do that, mostly it's public-key type stuff. Though iirc there are some known-work techniques which QC won't break - but it's not my field.

-- Peter Fairbrother
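The worry that triple encryption with a single cipher buys nothing if that cipher forms a group can be shown with a toy cipher that really is a group. This additive cipher is an assumption for illustration only; as the post notes, AES is almost certainly not a group.

```python
# Toy cipher that forms a group under composition (illustration only;
# AES is almost certainly NOT a group).
# E(k, x) = (x + k) mod 256: composing encryptions under several keys is
# just one encryption under a combined key, so stacking adds nothing.

def enc(key, block):
    """Single encryption with the toy additive cipher."""
    return (block + key) % 256

def triple_enc(k1, k2, k3, block):
    """Triple encryption with three independently chosen keys."""
    return enc(k3, enc(k2, enc(k1, block)))

def equivalent_key(k1, k2, k3):
    """Because the cipher is a group, one key does the work of three."""
    return (k1 + k2 + k3) % 256
```

An attacker facing the triple construction only has to search the 256 equivalent keys, not 256^3, which is exactly why a group property would be fatal for triple AES.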