From: Tom St Denis on
On Feb 10, 1:59 pm, "Scott Fluhrer" <sfluh...(a)ix.netcom.com> wrote:
> > I'd personally write off the 16-byte block size since the calling
> > overhead is non-trivial at that point.
>
> Why?  Doing hashes of small blocks isn't that uncommon...

Because if you wanted to optimize MD5 for 16-byte inputs [why?] you'd
re-write the compress function so that most of the message words
M[0..15] are constants [e.g. 12 of the 16 for a 16-byte input, since
the padding and length words are fixed]. You could write 15 versions:
14 covering inputs of 1..14 32-bit words, and a 15th for 15+ words.
MD5 hashing of anything less than 56 bytes takes only a single
compression, so you can avoid all the normal overhead by computing the
entire hash in one call, as in the sketch below.
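
Concretely, here's a minimal sketch of that single-call idea for the
16-byte case, in C. The compression function md5_compress() is an
assumed name, not a real API; I'm taking it to run the 64 MD5 steps on
one 16-word block and fold the result back into the state, as typical
implementations do. Note that 12 of the 16 words are compile-time
constants that a hand-specialized compress could fold away:

    #include <stdint.h>

    /* assumed: one MD5 compression, result folded into state */
    void md5_compress(uint32_t state[4], const uint32_t block[16]);

    void md5_16(const unsigned char in[16], unsigned char out[16])
    {
        /* standard MD5 initial chaining values */
        uint32_t state[4] = { 0x67452301UL, 0xefcdab89UL,
                              0x98badcfeUL, 0x10325476UL };
        uint32_t M[16] = {
            0, 0, 0, 0,                /* M[0..3]: the 16 message bytes      */
            0x00000080UL,              /* M[4]: 0x80 pad byte, little-endian */
            0, 0, 0, 0, 0, 0, 0, 0, 0, /* M[5..13]: zero padding             */
            128, 0                     /* M[14..15]: length = 128 bits       */
        };
        int i;

        for (i = 0; i < 4; i++)   /* load message words little-endian */
            M[i] = (uint32_t)in[4*i]          | ((uint32_t)in[4*i+1] <<  8) |
                  ((uint32_t)in[4*i+2] << 16) | ((uint32_t)in[4*i+3] << 24);

        md5_compress(state, M);   /* the one and only compression */

        for (i = 0; i < 4; i++) { /* store the digest little-endian */
            out[4*i]   = (unsigned char)( state[i]        & 255);
            out[4*i+1] = (unsigned char)((state[i] >>  8) & 255);
            out[4*i+2] = (unsigned char)((state[i] >> 16) & 255);
            out[4*i+3] = (unsigned char)((state[i] >> 24) & 255);
        }
    }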

If all you have is a generic routine [which is normally the right
choice], the calling overhead of the typical API [e.g. init/process/
done] will obliterate any performance gained inside the compression
function.
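
For contrast, this is the path a caller normally takes for the same 16
bytes. Function names here follow a typical init/process/done style
(LibTomCrypt-like, but illustrative): three calls, a state struct,
internal buffering and length bookkeeping, all to feed a single
compression:

    hash_state md;
    unsigned char digest[16];

    md5_init(&md);              /* copy in the initial chaining values  */
    md5_process(&md, msg, 16);  /* buffers 16 bytes; no compression yet */
    md5_done(&md, digest);      /* pad, run the one compression, store  */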

So in short, if performance on sub-56-byte messages matters, you
wouldn't use a typical hash implementation, as it'd be sub-optimal.

Tom