From: Christian Baer on 16 Mar 2010 13:03

On Sat, 13 Mar 2010 21:12:44 +0000 (UTC) Kristian Gjøsteen wrote:

> One more note on XTS vs CBC: XTS has some protection against active
> attacks, while CBC has no protection. When your threat model is that
> you worry about theft, active attacks aren't interesting. Then again,
> the cost of using XTS over CBC might be so small that it is worth it.

I don't quite understand the last sentence. What costs more and what
might not be worth it (worth what)?

Regards,
Chris
From: Christian Baer on 16 Mar 2010 13:01

On 13 Mar 2010 16:26:18 GMT Thomas Pornin wrote:

> I am not, unless you use the term "watermarking" with a meaning which I
> am not aware of. As far as I know, "watermarking" designates some
> techniques to embed a hidden mark into some data, such that the mark
> resists alterations to the data, such as lossy compression, while not
> impacting the qualitative aspects of the marked data. I fail to see how
> this relates to ECB.

What I mean is this:
http://en.wikipedia.org/wiki/Watermark_attack

Basically, some piece of data is on the drive several times. These pieces
can be found if they all look the same. This way it may be possible to
figure out the key.

> It is the other way round. The 64-bit block size was the main reason why
> a new standard was needed. As for speed, the call for candidates to the
> AES competition stated that candidates should not be slower than 3DES,
> so the poor software performance of 3DES was a minor concern; for most
> applications, especially on big software platforms (read: a PC),
> performance is not a big issue and 3DES was fast enough. That most AES
> candidates turned out to be much faster than 3DES was considered to be a
> bonus.

That was some bonus they got, if I consider the performance difference
between AES and 3DES.

> Weak keys (and pairs of related keys) are not a concern as long as you use
> 3DES for encryption and not as, for instance, a building block for a hash
> function. Note that the same applies to Rijndael, which was not evaluated
> for use as compression function in a hash function (the Whirlpool hash
> function uses a Rijndael-derivative as compression function, but with a
> much altered key schedule precisely for that reason).

Is SHA512 a derivative of something? I have been using that and Whirlpool
quite extensively for some time now.

> 3DES uses a nominally 192-bit key. 24 of those bits are not used, so the
> key is really a 168-bit key. Also, there is a rather generic attack
> which can theoretically break it in 2^112 steps, assuming that you have
> a few billions of gigabytes of very fast RAM, something which seems
> technologically doable, but not cheaply. With a regulation-oriented
> mind, you can then declare 3DES to offer "112 bits of security" and
> that's exactly what NIST does. 112 bits are not bad at all. AES was
> required to use a 128-bit key because 128 is more "round" in binary, and
> cryptographers are no less superstitious than anybody, so they love
> round numbers. This relates more to marketing than science, but half of
> making a secure cryptosystem is to create trust, and trust requires
> proper marketing.

I really hope that more effort went into the cipher than into its
marketing. :-)

> It does not mean that CBC is weak; only that it is easier to get XTS
> "right".

As security goes, a properly set up CBC is as hard (or impossible) to
break as a similar system with XTS? That makes me wonder why XTS isn't
used more widely. Avoiding bugs in security software is one of the prime
goals, so using modes that are less susceptible to bugs seems only
logical.

> This is a failure of marketing. When together, cryptographers do not
> speak English but crypto-English, a scientific idiom in which some words
> have a subtly different meaning. The point that cryptographers wanted to
> make was that the 5 finalists, and actually most of the 10 other
> systems, looked as robust as could be wished for. But they also wanted
> to nitpick a bit, because that's what experts are paid for. The stuff
> you read was some of that nitpicking, expressed in crypto-English.

Well, it wasn't really crypto-English. I could pretty much understand it.
But you're right, it included quite a bit of nitpicking. It was quite
clear that Schneier wanted to push the cipher he was involved with. But
I'm also careful when evaluating facts like that: even if Schneier wanted
Twofish to win the contest and become AES (for obvious reasons), and he
even goes nitpicking a bit, that doesn't mean he can't have a point with
his criticism.

> In the end, NIST chose Rijndael mostly because it could be implemented
> efficiently (and compactly) on all platforms, with very decent
> performance, and some less measurable bonus, including the mathematical
> structure (Rijndael is not a big mess of bits in which backdoors could
> lurk, we kind of know why it is secure) and the political advantage of
> choosing a candidate from Belgium (NIST knows that the all-american
> history of the birth of DES, with NSA as a godmother, proved to be a
> potent fuel for all kind of paranoia, and paranoia means bad marketing).

Ok, Twofish was mainly designed by Americans. Same goes for MARS. So
those two pretty much drop out under these circumstances. Serpent would
have been an alternative, but it's much slower.

[Camellia cipher]
> I have not looked at it closely. But it was investigated during the
> NESSIE project, and apparently it remained unscathed. Anyway, I do not
> trust myself to decide whether an algorithm looks good or not. I rather
> trust the collective mind of hundreds of cryptographers, and this is why
> I trust AES: many smart people studied it, and none of them found an
> actual weakness (although some found ways to express security
> quantitatively, which is good because it means that the problem could be
> studied).

Actually, I meant whether you considered it to be messy or not. :-)

> I refer to the subset of the i5/i7 which have the 32nm foundry. As
> usual, names under which CPUs are sold do not capture the wide
> variety of models. For details see:
> http://en.wikipedia.org/wiki/AES-NI

Well, as it seems, even AMD has plans to use this instruction set.

>> Currently, this doesn't change much for me because I don't have any
>> Intel CPUs in use.

> Unfortunately, computers sometimes die and must then be changed. Using
> AES for the sake of AES-NI is a way to ease _future_ usage, when you
> will buy a new computer. It may make the data transfer easier (simply
> plug the disks in the new machine). Depending on your situation and why
> you want to encrypt data, this may or may not be an important feature.

Yeah, I know computers die. Had that happen to me several times -
although they don't die as quickly as the manufacturers would like. To my
mind, this instruction set came "a bit late in the year". At home I have
an AMD Opteron 185 CPU. That isn't really new and doesn't have a special
instruction set for AES. The internal benchmark of Truecrypt under
Windows says it can encrypt and decrypt data with more than 200MB/s with
AES. Twofish is much slower at about 125MB/s and Serpent is just under
100MB/s. Any of these ciphers can be used without noticing much (if
anything) when moving stuff around on the hard drives. Newer CPUs will be
even faster - even without a special instruction set.

Note: The reason AES is so much faster is a special implementation in
Truecrypt. Before that, AES wasn't much snappier than Twofish. After the
optimized implementation, AES simply took off.

On this machine (Sun U60) you can notice the work that goes into disk
encryption. Still, it doesn't feel crippled. GnuPG does a better job at
that. :-)

>> I'd say it would be much better to keep the key in the RAM of the
>> machine but in a protected area so it doesn't get moved about.

> My point is that a plain computer _does not_ have such a protected area.
> To some extent, the contents of the CPU registers could be deemed to be
> a protected place, but they are flushed to RAM often enough that this
> does not hold.

I'd have to read up on this, but I remember something about FreeBSD and
other Unix-like operating systems being able to protect certain areas of
the RAM for security reasons. GnuPG uses this feature. The main reason
for this is to prevent the keys being swapped. But I guess this also
means that the keys cannot be moved around in the RAM and can thus be
wiped again when no longer needed.

> There are people who sell "protected areas" for computers. These are called
> HSM, as "Hardware Security Module". For pointers, see:
> http://en.wikipedia.org/wiki/Hardware_security_module
> HSM are expensive, but that is somewhat intrinsic to what a HSM is trying
> to achieve. Physical resistance _and_ good performance, that's not cheap.

These things are way out of my league (in terms of price). Although I
have the feeling that the prices are being kept high on purpose. This may
be high-quality hardware, but there is nothing magical about it.

> Your running system must know the key as long as it accesses the data,
> so any kind of "wiping out" is not an option in the attack model where
> the attacker can physically approach the running system.

That depends. The person approaching the machine could possibly trip an
alarm, which causes the system to power down - after first wiping the
key. The same thing could happen if someone tries to open the case.

>> The machine won't be in a bunker or the like but it will be locked
>> away. It could also help if finding the machine is a bit of a
>> challenge. Meaning, a machine in a cage where everyone can see it
>> might make it clear where the crown jewels are. :-)

> Making the machine inconspicuous is a bit like code obfuscation: it adds
> some security, but it is very difficult to know how much security it
> adds. When I design a system, I prefer to think about how rich the
> attacker is, rather than about how smart he is, because wealth is easily
> measured (e.g. in dollars), whereas smartness is not. A big cage is
> visible, but there is ample data which helps estimate the kind of
> resource that the attacker will need to break into it (mankind has been
> building cages for quite a long time). It is much harder to quantify how
> much "hidden" a computer is.

I partly agree with you there. There is just one hole in your argument:
every cage must be opened at some time, and opening it has to be
relatively easy to do. A smart thief will probably pick the lock instead
of using brute force to get inside the cage. So somehow we are back to
smart after all. The part that I agree with is the assessment: it is much
easier to assess the security of a cage with a good lock than how well
hidden the stuff you are trying to protect is. So you are not wrong in my
eyes, I just wanted to point this out. A smart adversary might find a
weakness in the cage and use it, in which case the strength of the bars
doesn't really come into the equation.
My "problem" with a cage or bunker is rather more mundane: It may be impractical or impossible to install a cage or a bunker for the machine. Sometimes protection has to be more subtle. Regards, Chris
From: Kristian Gjøsteen on 16 Mar 2010 13:27

Christian Baer <christian.baer(a)uni-dortmund.de> wrote:
>On Sat, 13 Mar 2010 21:12:44 +0000 (UTC) Kristian Gjøsteen wrote:
>> One more note on XTS vs CBC: XTS has some protection against active
>> attacks, while CBC has no protection. When your threat model is that
>> you worry about theft, active attacks aren't interesting. Then again,
>> the cost of using XTS over CBC might be so small that it is worth it.
>
>I don't quite understand the last sentence. What costs more and what might
>not be worth it (worth what)?

I think XTS requires a bit more computation than CBC (how much more, I
don't know). This is the cost of using XTS compared to CBC. At the same
time, XTS has somewhat better security properties. This is the benefit of
using XTS compared to CBC.

If the cost of using XTS compared to CBC is negligible for your purposes,
the (possibly negligible) benefits might outweigh the costs, and it might
make sense to use XTS over CBC.

--
Kristian Gjøsteen
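As a rough illustration of what XTS buys you in the disk setting, here is
a sketch using the pyca/cryptography package (the package and all
parameters here are assumptions, not anything from this thread). XTS takes
the sector number as a tweak, so identical sectors encrypt differently
without any per-sector IV being stored:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    xts_key = os.urandom(64)   # AES-256-XTS uses a double-length (512-bit) key

    def encrypt_sector(data, sector_number):
        # The tweak is just the sector number, encoded on 16 bytes.
        tweak = sector_number.to_bytes(16, "little")
        enc = Cipher(algorithms.AES(xts_key), modes.XTS(tweak)).encryptor()
        return enc.update(data) + enc.finalize()

    sector = b"\x00" * 512
    # Same plaintext in two different sectors gives unrelated ciphertext:
    print(encrypt_sector(sector, 0) == encrypt_sector(sector, 1))   # False

The extra cost Kristian mentions is essentially the tweak processing (one
additional block encryption per sector plus some cheap finite-field
arithmetic per block), which is small next to encrypting the sector
itself.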
From: Christian Baer on 16 Mar 2010 13:12

On Tue, 16 Mar 2010 09:23:15 -0700 (PDT) Maaartin wrote:

>> The data would be accessed by us (people in our firm) only and for
>> starters only locally (meaning from within the offices, not via VPN or the
>> like). Information would be passed on by phone.

> This is surely not very secure. In any country where there's a
> technical possibility to wiretap a phone, it can be done - either
> because of a court order or because of the responsible person being
> too curious.

That will always be a problem. But that one is out of our control because
we cannot distribute secure phones all over Germany.

>> > What happens if s/he is bribed, threatened, given too many drinks, etc.?

> If he's really given too many drinks, he wouldn't be able to remember
> his name, and the passphrase would be safer than ever.

Even better: the operator could pass out. :-)

> Sure, and there's a nice algorithm for this: secret sharing. I know
> only the one by Shamir, which is quite easy to implement and proven to
> do its job perfectly. But I don't know if there's any disk encryption
> software integrating a key-sharing algorithm.

You can always use more than one key file and have each operator hold
one. Basically, if they want to conspire against you, there is no real
technical way to stop them from doing that.

>> > That sounds way too casual. How are you going to erase old keys from
>> > the USB stick when you are done with them?

> Doesn't this belong to the wide class of problems solvable by a sledge
> hammer?

It doesn't even have to be solved that way. Just define a new key for the
encrypted data and do what you like with the smart card or USB drive.

Regards,
Chris
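For what it's worth, the Shamir scheme Maaartin mentions really is short
enough to sketch. A bare-bones Python version with toy parameters,
illustrative only (a real deployment would use a vetted implementation and
a cryptographic RNG):

    import random  # for real use, the secrets module would be appropriate

    P = 2**127 - 1   # a prime field; the secret must be smaller than this

    def make_shares(secret, k, n):
        # Random polynomial of degree k-1 with the secret as constant term.
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = make_shares(123456789, k=3, n=5)
    print(recover(shares[:3]))   # any 3 of the 5 shares suffice: 123456789

With k = 3 and n = 5, any three operators can reconstruct the key, but two
conspirators learn nothing at all, which is stronger than simply handing
each operator one of several key files that are all required.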
From: Thomas Pornin on 16 Mar 2010 17:43
According to Christian Baer <christian.baer(a)uni-dortmund.de>:
> Is SHA512 a derivative of something? I have been using that and Whirlpool
> quite extensively for some time now.

SHA-512 is from the long family originating in MD4, then MD5, SHA-0,
SHA-1, and the SHA-2 family (SHA-224, SHA-256, SHA-384 and SHA-512).
SHA-512 can be viewed as an "inflated" SHA-256 (32-bit words become 64-bit
words).

These functions use the Merkle-Damgard construction. The message is split
into successive blocks of a given size. Then each block M_i is used as key
in a block cipher. There is an initial value IV, and if the current state
before block i is x_i (with x_0 = IV) then x_(i+1) = x_i + E(x_i, M_i)
(the state is encrypted with M_i as key, and the result is added to the
state).

In the MD/SHA family, functions have been designed to be hash functions.
Retroactively, the encryption function used in SHA-1 (respectively
SHA-256) has been named SHACAL-1 (resp. SHACAL-2), in case someone would
like to use it for encryption. The NESSIE project dedicated some analysis
time to those block ciphers, and SHACAL-2 appears to be quite secure (but
not very fast).

Whirlpool was built the other way round. There first was Rijndael, which
used blocks of 128, 192 or 256 bits (Rijndael with 128-bit blocks is what
became the AES). Then they built an inflated Rijndael, with 512-bit blocks
and a brand new key schedule. That block cipher is called W. W is then
used in a construction similar to Merkle-Damgard and this is the Whirlpool
function. Then again, NESSIE had a look at Whirlpool and found it to be
ok. But few people use it because it is quite slow.

SHA-256 and SHA-512 have no known weakness, save the bad reputation of
their ancestors (MD4 and MD5 have been thoroughly broken, and SHA-1 quite
weakened). They are also deemed to be a bit slow, but nowhere near as slow
as Whirlpool. Also, they were designed by some US agency without much
external input, and their robustness seems to come from the designers
having been a bit heavy-handed with the number of rounds. In brief, we do
not really know what to think of these functions. For these reasons, NIST
has initiated the SHA-3 competition, along the same lines as AES. They
want an "openly secure" algorithm (an algorithm which is robust and such
that we know why it is robust) and the implicit rule of engagement is that
it should be no slower than SHA-256 or SHA-512.

A few figures, on two systems: one is my PC (Intel Core2, 2.4 GHz, 64-bit
mode), the other is a cheap Linksys router (Mips core, 200 MHz, 32-bit).
All implementations are from sphlib, which means that I wrote them, with
"similar efforts at optimization". Speeds are given in megabytes per
second; the benchmark is built so that the code and data are already in L1
cache, so these are "ideal speeds". Which yields:

              Intel Core2     Mips
  MD4             682         13.9
  MD5             411         10.0
  SHA-1           266          6.0
  SHA-256         144          2.8
  SHA-512         187          1.4
  Whirlpool        58          0.15
  SHABAL          316          6.1

(SHABAL is one of the SHA-3 candidates.)

From which we can see that Whirlpool is much slower than the other hash
functions, especially on "small" architectures. This is due to the fact
that an implementation of Whirlpool needs to use precomputed tables which
do not fit in the L1 cache of the Mips system (only 8 kB of L1 cache on
that system). Also, Whirlpool benefits from the presence of 64-bit
registers, and suffers from their absence on the Mips. Similarly, SHA-512
is faster than SHA-256 on 64-bit systems, but slower on 32-bit systems.
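The chaining Pornin describes above is compact enough to show in code. A
toy sketch with AES-128 standing in for the block cipher E and XOR as the
"+" (real designs pad the message and append its length, which is glossed
over here; the pyca/cryptography package is an assumption):

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def E(key, block):
        # One AES-128 block encryption: the state is the input, M_i the key.
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    def toy_hash(message):
        # Naive padding to a multiple of 16 bytes (no length encoding).
        message += b"\x80" + b"\x00" * (-(len(message) + 1) % 16)
        x = b"\x00" * 16                       # the initial value IV
        for i in range(0, len(message), 16):
            m = message[i:i+16]                # message block M_i, used as key
            x = bytes(a ^ b for a, b in zip(x, E(m, x)))   # x = x + E(x, M_i)
        return x

    print(toy_hash(b"hello").hex())

Feeding the state back in through the XOR is what makes each compression
step hard to invert, even though the block cipher itself is invertible.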
Please note that while the implicit rules for SHA-3 are "no slower than
SHA-256/512", chances are that the new SHA-3 will actually be
substantially faster, offering performance closer to what SHA-1 yields.
This is the AES story repeating itself.

My usual speech at that point is that small systems like my Linksys router
are actually much more important, industrially speaking, than the PC. Such
a router is in the perfect position to run cryptography all day long (it
is a router, hence should handle a VPN) and is hooked up to fast networks
(100 Mbit/s). It is also quite starved for power: as the figures above
show, it already does not have enough power to hash data at its full
bandwidth. Yet these systems are very common (the cheaper a system, the
more common it gets). Comparatively, PCs do not have performance issues.

> As security goes, a properly set up CBC is as hard (or impossible) to
> break as a similar system with XTS? That makes me wonder why XTS isn't
> used more widely.

There is quite a lot of hard work which hides under that inconspicuous
"properly". CBC is much more used than XTS because XTS is much more
recent. It was standardized (by IEEE) at the end of 2007. CBC was invented
in 1976. Thirty years are quite enough to establish "best practices" which
tend to replicate themselves among generations of programmers.

> On this machine (Sun U60) you can notice the work that goes into disk
> encryption. Still, it doesn't feel crippled.

As I said above, big desktop or server computers do not really have
performance issues with cryptography. Well, cryptography _can_ be a
bottleneck in some situations, but those situations are very specific. The
true battle for performance is rather fought on smaller hardware, such as
mobile phones, where a faster algorithm means that the CPU can run at a
lower clock rate, thus saving battery.

> I'd have to read up on this, but I remember something about FreeBSD and
> other Unix-like operating systems being able to protect certain areas of
> the RAM for security reasons.

They can "lock" parts of the RAM, which means that the OS will refrain
from copying these parts to the disk, somewhere in the swap area. Here, I
am talking about preventing the contents of CPU registers from ever being
written to RAM, something which is actually impossible due to how CPUs
handle multitasking (basically by being able to dump all registers in RAM
and reload them from there).

> These things are way out of my league (in terms of price). Although I
> have the feeling that the prices are being kept high on purpose.

They are kept high by market pressure, which is much worse than any
purpose. They are expensive because they are not sold by millions, and
they are not sold by millions because they are expensive, and also because
they are not deemed "generally useful". Security is, at best, invisible to
the end user (that is, _good_ security is invisible, while bad security is
crippling). Users tend to refuse to pay for features that they do not see.

> There is just one hole in your argument: every cage must be opened at
> some time, and opening it has to be relatively easy to do.

In practice, people who use cages also use multi-control. Namely, there
are several locks, and several people, each guarding one key, must act
together. Also, various systems keep track of what is going on (e.g.
security cameras), allowing a lot of traceability (this mostly discourages
corruption of key holders).
Many good Hollywood movies will show you that there is no 100% foolproof
system (except if you manage to keep Bruce Willis in the cage), but the
point is that companies have been using guards and locks for centuries,
and ample experience has been accumulated on that subject. Which means
that businesses can get insurance at decent prices.

> My "problem" with a cage or bunker is rather more mundane: it may be
> impractical or impossible to install a cage or a bunker for the
> machine. Sometimes protection has to be more subtle.

You can always replace any level of protection by a fair amount of prayer,
but I cannot guarantee that whatever deities you call up will look
favourably upon you.

--Thomas Pornin
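The RAM-locking feature discussed in this exchange (and used by GnuPG) is
the POSIX mlock() call. A minimal sketch through Python's ctypes, assuming
a POSIX system; note that, as Pornin points out above, this only keeps the
pages out of swap, it does not stop register contents from being flushed
to RAM:

    import ctypes, ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    libc.mlock.argtypes = (ctypes.c_void_p, ctypes.c_size_t)
    libc.munlock.argtypes = (ctypes.c_void_p, ctypes.c_size_t)

    buf = ctypes.create_string_buffer(32)        # buffer that will hold a key
    addr, size = ctypes.addressof(buf), ctypes.sizeof(buf)

    # Pin the pages holding the key so the kernel never swaps them out.
    if libc.mlock(addr, size) != 0:
        raise OSError(ctypes.get_errno(), "mlock failed (check RLIMIT_MEMLOCK)")

    # ... load and use the key ...

    ctypes.memset(addr, 0, size)                 # wipe before releasing
    libc.munlock(addr, size)

Wiping before munlock() gives the "can be wiped again when no longer
needed" behaviour Christian describes, at least for the copy in this
buffer.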