From: Jonathan on 25 Mar 2010 11:30

Hi,

A probably simple question. Glancing through block encryption modes that also support authentication, they seem (to me) to keep the authentication distinct from how the next block is handled. I'm sure someone can point me to a useful (online or dead-tree) guide on the whys and wherefores of this, and/or tell me whether my observation is indeed correct.

I would have thought that authentication data would be included in any transform of the next block of data, so that data that has been tampered with is no longer decryptable, although decrypting would then be purely sequential and you'd lose any resilience to transmission error the mode would otherwise have.

This may be naive of me, and/or may mean I've not quite understood what I've read on the subject of block encryption modes, so please forgive me if this is grotesquely noob. Assuming my understanding is correct insofar as the authentication output from the block mode is kept separate from whatever else the mode generates, doesn't this reveal something about the internal state of the block encryption mode?
From: Mok-Kong Shen on 25 Mar 2010 15:17

Jonathan wrote:
[snip]
> I would have thought that authentication data would be included in any
> transform of the next block of data, so that data that has been
> tampered with is no longer decryptable, although decrypting would then
> be purely sequential and you'd lose any resilience to transmission
> error the mode would otherwise have.
[snip]

You might be interested in a proposal of mine in the thread "Using a kind of running accumulation of ciphertext as chaining values" of 06.03.2010.

M. K. Shen
From: Greg Rose on 25 Mar 2010 17:14

In article <hogcsr$gsq$03$1(a)news.t-online.com>, Mok-Kong Shen <mok-kong.shen(a)t-online.de> wrote:
>Jonathan wrote:
>[snip]
>> I would have thought that authentication data would be included in any
>> transform of the next block of data, so that data that has been
>> tampered with is no longer decryptable, although decrypting would then
>> be purely sequential and you'd lose any resilience to transmission
>> error the mode would otherwise have.
>[snip]
>
>You might be interested in a proposal of mine in the thread "Using a
>kind of running accumulation of ciphertext as chaining values" of
>06.03.2010.

Then again, you might not.

Greg.
From: Maaartin on 25 Mar 2010 17:32

> A probably simple question. A glance through block encryption modes
> that also support authentication seem (to me) to keep the
> authentication distinct from how the next block is handled. I'm sure
> someone can point me to a useful (online or dead-tree) guide on the
> whys and wherefores of this, and/or whether my observation is indeed
> correct.
>
> I would have thought that authentication data would be included in any
> transform of the next block of data, so that data that has been
> tampered with is no longer decryptable, although decrypting would then
> be purely sequential and you'd lose any resilience to transmission
> error the mode would otherwise have.

No expert is going to answer, so let me try. Everything is decryptable, but you may get gibberish plaintext. You can't use CBC mode for both encryption and authentication, as only the next block gets corrupted in case of a forgery. You could use inverse CBC or PCBC (http://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Propagating_cipher-block_chaining_.28PCBC.29), but it doesn't work either. AFAIK no mode using N = blockCount encryptions for both privacy and authentication works. Somewhere I saw an algorithm using N + log(N)/log(2) encryptions and a proof that you need at least that many. I'm not sure under which conditions the proof holds, as there are GCM (http://en.wikipedia.org/wiki/Galois/Counter_Mode) and Poly1305-AES (http://en.wikipedia.org/wiki/Poly1305-AES), which both need only N+1 encryptions.

> This may be naive of me, and/or involve me not quite understanding
> what I've read up on the subject of block encryption modes, so please
> forgive me if this is grotesquely noob. Assuming my understanding is
> correct insofar as the authentication output from the block mode is
> kept separate from whatever else the mode generates, doesn't this
> reveal something about the internal state of the block encryption
> mode?

No, since there's always something which prevents it.
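The CBC claim above (a corrupted ciphertext block garbles only that plaintext block and the next one, which is why CBC decryption can't double as an integrity check) can be sketched with a toy demonstration. Everything here is hypothetical and illustrative: `toy_permutation` is a hashlib-derived stand-in for a real block cipher, not a secure primitive, but it is enough to exercise CBC's chaining:

```python
# Toy sketch (NOT a real cipher): CBC limits error propagation, so
# flipping one ciphertext bit corrupts only two plaintext blocks --
# far too local to serve as an authentication check on the message.
import hashlib

BLOCK = 16

def toy_permutation(key, block):
    # Stand-in for a block cipher: XOR with a key-derived pad.
    # A real cipher is a keyed permutation; this keeps only the
    # property CBC needs for the demonstration.
    pad = hashlib.sha256(key).digest()[:BLOCK]
    return bytes(a ^ b for a, b in zip(block, pad))

def cbc_encrypt(key, iv, plaintext):
    # C_i = E(P_i XOR C_{i-1}), with C_0 = IV.
    prev, out = iv, []
    for i in range(0, len(plaintext), BLOCK):
        block = plaintext[i:i + BLOCK]
        c = toy_permutation(key, bytes(x ^ y for x, y in zip(block, prev)))
        out.append(c)
        prev = c
    return b"".join(out)

def cbc_decrypt(key, iv, ciphertext):
    # P_i = D(C_i) XOR C_{i-1}: each plaintext block depends on only
    # TWO ciphertext blocks, hence the limited propagation.
    prev, out = iv, []
    for i in range(0, len(ciphertext), BLOCK):
        c = ciphertext[i:i + BLOCK]
        out.append(bytes(x ^ y for x, y in zip(toy_permutation(key, c), prev)))
        prev = c
    return b"".join(out)

# Flip one bit in ciphertext block 1 of a 4-block message: blocks 1
# and 2 of the recovered plaintext are corrupted, blocks 0 and 3 are
# untouched.
key, iv = b"toy key", bytes(BLOCK)
pt = bytes(range(64))
ct = bytearray(cbc_encrypt(key, iv, pt))
ct[16] ^= 0x01
garbled = cbc_decrypt(key, iv, bytes(ct))
```

A receiver relying on "does the plaintext decrypt sensibly?" would still see two intact blocks out of four, which is why a separate authentication tag (or a deliberately propagating construction) is needed.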
For example, in Poly1305-AES you add an encrypted nonce to a keyed non-cryptographic hash of the message; this is necessary both for unforgeability and for preventing the information leak you spoke about.
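The Wegman-Carter structure described above can be sketched as follows. This is a deliberately simplified toy in the spirit of Poly1305-AES, not the real algorithm: `prf` is a hashlib-based stand-in for AES_k(nonce), the clamping of r is omitted, and all names are hypothetical. It only illustrates how the encrypted nonce masks the polynomial hash so the hash key r never shows up in any tag:

```python
# Toy Wegman-Carter MAC sketch (NOT real Poly1305-AES): a keyed
# polynomial hash of the message, masked by a per-nonce pseudorandom
# pad so observed tags reveal nothing about the hash key r.
import hashlib

P = (1 << 130) - 5  # the prime Poly1305 itself uses

def poly_hash(r, msg):
    # Evaluate the message as a polynomial in r modulo P, one
    # 16-byte chunk at a time (a 0x01 byte delimits each chunk).
    acc = 0
    for i in range(0, len(msg), 16):
        chunk = int.from_bytes(msg[i:i + 16] + b"\x01", "little")
        acc = (acc + chunk) * r % P
    return acc

def prf(key, nonce):
    # Stand-in for AES_k(nonce); any good PRF fills this role.
    return int.from_bytes(hashlib.sha256(key + nonce).digest()[:16], "little")

def tag(r, key, nonce, msg):
    # Without the pad, two observed (msg, tag) pairs would leak
    # information about r and allow forgeries; with it, each tag is
    # masked by a fresh pseudorandom value (nonce reuse breaks this).
    return (poly_hash(r, msg) + prf(key, nonce)) % (1 << 128)

t = tag(12345, b"toy key", b"nonce-0", b"hello world")
```

Note the division of labour: the polynomial hash alone is fast but forgeable, and the PRF output alone authenticates nothing; only the sum gives an unforgeable tag, at a cost of one extra block-cipher call per message, matching the N+1 count mentioned above.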
From: Jonathan on 25 Mar 2010 17:43

On Mar 25, 2:14 pm, g...(a)nope.ucsd.edu (Greg Rose) wrote:
> In article <hogcsr$gsq$0...(a)news.t-online.com>,
> Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> >Jonathan wrote:
> >[snip]
> >> I would have thought that authentication data would be included in any
> >> transform of the next block of data, so that data that has been
> >> tampered with is no longer decryptable, although decrypting would then
> >> be purely sequential and you'd lose any resilience to transmission
> >> error the mode would otherwise have.
> >[snip]
> >
> >You might be interested in a proposal of mine in the thread "Using a
> >kind of running accumulation of ciphertext as chaining values" of
> >06.03.2010.
>
> Then again, you might not.
>
> Greg.

No offense to Mok-Kong, but this is more a question on the theory and underlying principles, which is not quite the same as experiments (however good they may be). The problem with any experiment is that cryptanalysis won't tell you how strong something is; it can only identify weak spots heuristically (i.e. you've no guarantee of finding weaknesses in finite time, even if the method/algorithm is riddled with them). What I need to know is something about the whys and wherefores, the objectives and purposes of given lines of research and development.

For example, let's say you used a cryptographic block cipher mode that was ultra-sensitive to corruption. You could use a mix of Reed-Solomon codes (to fix bit errors) and Turbo Codes (to fix block errors) to ensure that the ciphertext is not corrupt. You could then both detect modification and undo it. But again, this is generally not what is done in practice. Sure, there may well be some error correction and some authentication, but these appear to be considered extensions and extras rather than intrinsic to the very nature of the encryption process. There's going to be a reason for this.
I'm no genius, but it's obvious even to me that the best crypto experts in the world aren't doing things the way they are purely for their own amusement. It might be that the specific characteristics considered desirable by the majority of crypto users dictate specific approaches, but that other approaches would be perfectly good if other characteristics prevailed. Or it might be that overloading authentication data onto the key and/or data would actually weaken the encryption (it may be a potentially knowable part of the effective encryption key, or an insufficiently-random element even if not knowable). And so on.

This is not something I've seen really discussed in texts on the subject; it's not covered in any of the usual online repositories, and it's not mentioned in any of the papers discussing competing encryption modes on NIST's website for their battle-of-the-modes contest. Therefore I depend utterly on a crypto expert to give me a reason. Examples of encryption modes I can find, there are enough of them. What I can't find is WHY those methods are used and not others. I want the why, and I'm beginning to wonder if the experts themselves actually have one.