From: Scott Contini on 25 Oct 2009 19:36

On Oct 26, 10:11 am, yawnmoth <terra1...(a)yahoo.com> wrote:
> On Oct 25, 3:16 pm, Paul Rubin <http://phr...(a)NOSPAM.invalid> wrote:
>
> > yawnmoth <terra1...(a)yahoo.com> writes:
> > > But, as I said, if newton-raphson division and the montgomery
> > > reduction have the same time complexity, I don't see the advantage.
>
> > You would have to do the newton-raphson process at every step of the
> > exponentiation. You only have to do the montgomery process once.
>
> No... you have to do the montgomery process every time you do a trial
> division. To quote from <http://en.wikipedia.org/wiki/
> Montgomery_reduction#Modular_exponentiation>:
>
> "To fix our ideas, suppose that a particular modular exponentiation
> requires 800 multiplications. In that case 802 Montgomery steps will
> be needed: one to Montgomeryize the number being exponentiated, 800 to
> do the exponentiation, and one to de-Montgomeryize the result."
>
> 802 != 1.

The pre-transformation (to get it into Montgomery form) and post-
transformation are done only once; the other operations are done
each iteration.

I might be wrong on this, but as far as my decrepit memory tells me,
the Montgomery reduction is not a big speedup. It is a small speedup
(something like 10%) when implemented efficiently. If I am wrong on
this, then I'm sure people will let you know. Have a look at the
paper I posted a link to in the other reply, however.

Scott
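A small Python sketch may make the bookkeeping concrete. The helper
names redc and montgomery_exp are illustrative (not from any particular
library), R is taken as a power of two just above N, and pow(n, -1, r)
needs Python 3.8 or later. The point is only that the conversion into
and out of Montgomery form happens once per exponentiation, while a
single redc call replaces the trial division inside every squaring and
multiplication:

def redc(t, n, n_prime, r_bits):
    # One Montgomery reduction step: returns t * R**-1 mod n for
    # R = 2**r_bits, assuming n is odd and 0 <= t < n*R.
    r_mask = (1 << r_bits) - 1
    m = ((t & r_mask) * n_prime) & r_mask   # m = t * (-n**-1) mod R
    u = (t + m * n) >> r_bits               # exact division by R
    return u - n if u >= n else u

def montgomery_exp(base, exp, n):
    # Left-to-right square-and-multiply entirely in Montgomery form.
    r_bits = n.bit_length()
    r = 1 << r_bits
    n_prime = (-pow(n, -1, r)) % r          # -n**-1 mod R (Python 3.8+)
    x = (base % n) * r % n                  # Montgomeryize the base (once)
    acc = r % n                             # 1 in Montgomery form
    for bit in bin(exp)[2:]:
        acc = redc(acc * acc, n, n_prime, r_bits)    # one redc per squaring
        if bit == '1':
            acc = redc(acc * x, n, n_prime, r_bits)  # one redc per multiply
    return redc(acc, n, n_prime, r_bits)    # de-Montgomeryize (once)

For instance, montgomery_exp(2, 3, 13) returns 8, matching pow(2, 3, 13);
the two conversions are the "+2" in Wikipedia's 800-versus-802 count.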
From: yawnmoth on 25 Oct 2009 19:39

On Oct 25, 5:30 pm, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> yawnmoth wrote:
>
> > On Oct 25, 12:28 pm, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> >> yawnmoth wrote:
> >>> According to <http://en.wikipedia.org/wiki/
> >>> Computational_complexity_of_mathematical_operations>, division, if
> >>> implemented with Newton's method, has a complexity equal to that of
> >>> whatever multiplication algorithm is in use. The article on Newton's
> >>> method elaborates: "Newton-Raphson division ... can be used to
> >>> quickly find the reciprocal of a number using only multiplication
> >>> and subtraction."
> >>> My question is... what's the point of using Montgomery reduction in
> >>> modular exponentiation if Newton-Raphson division can implement
> >>> division efficiently, as well? Montgomery reduction, similarly, uses
> >>> just multiplication and addition / subtraction. It does some bit
> >>> shifting (or string shifting if you're using base-10 strings, I
> >>> guess), as well, but that's pretty inconsequential.
> >> I could gravely err, but isn't it that the "normal" Newton method
> >> can't work on integer numbers in a residue system (i.e. mod N)?
>
> > If it returned 2.5 for 5/2, you could get the remainder by doing
> > 5 - 2*floor(5/2) = 1.
>
> But you couldn't compute approximate quotient like that in a residue
> system, for it doesn't make sense in general, if I don't err. For
> instance, let's compute 36/7 mod 31. You would compute floor(36/7)=5.
> But 36/7 mod 31 is neither 5 nor 6, nor a number near these, but 14.

Modular exponentiation doesn't really do division except to calculate
the residue. As such, for that particular application, 36/7 mod 31
seems like a contrived scenario.
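A hedged sketch of what that Newton-Raphson reduction step could look
like, using only multiplications, subtractions and shifts in the main
loop. The name newton_divmod and the choice of fixed-point precision k
are mine for illustration, and the final correction loops paper over
the usual off-by-one issues of a truncated reciprocal:

def newton_divmod(a, b):
    # Returns (floor(a/b), a mod b) via a Newton-Raphson reciprocal of b.
    assert b > 0 and a >= 0
    k = 2 * max(a.bit_length(), b.bit_length())  # fixed-point precision
    x = 1 << (k - b.bit_length())                # underestimate of 2**k / b
    while True:
        # Newton step for 1/b in fixed point: x <- x * (2 - b*x / 2**k)
        x_next = 2 * x - ((b * x * x) >> k)
        if x_next <= x:                          # converged to within ~1
            break
        x = x_next
    q = (a * x) >> k                             # quotient estimate
    while q * b > a:                             # final correction, at most
        q -= 1                                   #  a step or two in practice
    while (q + 1) * b <= a:
        q += 1
    return q, a - q * b

newton_divmod(36, 7) gives (5, 1). Inside a modular exponentiation only
the second component, the residue, is ever used, which is the point
above; and it is exactly this reduction that Montgomery's method
performs without any quotient estimate at all.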
From: Francois Grieu on 26 Oct 2009 02:52

Scott Contini wrote:
> On Oct 26, 10:11 am, yawnmoth <terra1...(a)yahoo.com> wrote:
>> On Oct 25, 3:16 pm, Paul Rubin <http://phr...(a)NOSPAM.invalid> wrote:
>>
>>> yawnmoth <terra1...(a)yahoo.com> writes:
>>>> But, as I said, if newton-raphson division and the montgomery
>>>> reduction have the same time complexity, I don't see the advantage.
>>> You would have to do the newton-raphson process at every step of the
>>> exponentiation. You only have to do the montgomery process once.
>> No... you have to do the montgomery process every time you do a trial
>> division. To quote from <http://en.wikipedia.org/wiki/
>> Montgomery_reduction#Modular_exponentiation>:
>>
>> "To fix our ideas, suppose that a particular modular exponentiation
>> requires 800 multiplications. In that case 802 Montgomery steps will
>> be needed: one to Montgomeryize the number being exponentiated, 800 to
>> do the exponentiation, and one to de-Montgomeryize the result."
>>
>> 802 != 1.
>
> The pre-transformation (to get it into Montgomery form) and post-
> transformation are done only once; the other operations are done
> each iteration.
>
> I might be wrong on this, but as far as my decrepit memory tells me,
> the Montgomery reduction is not a big speedup. It is a small speedup
> (something like 10%) when implemented efficiently. If I am wrong on
> this, then I'm sure people will let you know. Have a look at the
> paper I posted a link to in the other reply, however.

An undeniable benefit of Montgomery reduction is that it replaces the
difficult "quotient estimation" step of traditional division by a simple
step, and it never needs a messy "correction step". True, the odds of
mis-estimating the quotient can be made so low that it has hardly any
measurable impact on average speed, yet the need for this step
complicates hardware and software, and sometimes timing attacks can
conceivably be mounted that deduce information from the occurrence of a
correction step.

Most importantly from a speed and hardware perspective, Montgomery
reduction makes it easy to implement each step of multiplication by a
"digit" (which increases the intermediate result by one digit) together
with the reduction step (which decreases the intermediate result by one
digit), with the intermediate result scanned once per digit and per
modular multiplication, and remaining the same size as N.

By contrast, when using "traditional" modular multiplication to compute
W = U*V mod N in the context of modular exponentiation, the textbook
method is to compute U*V, then reduce mod N; the intermediate result
grows to twice the size of N, and the number of read/write operations on
the individual digits of the intermediate result is bigger than with
Montgomery reduction using the same base. Also, loop overhead is
typically bigger than with Montgomery reduction.

It is possible to massage "traditional" modular multiplication to the
point that the intermediate result remains the same size as N, and the
number and dominant pattern of accesses to individual digits is the same
as in Montgomery reduction. The rare case of mis-estimating the quotient
can even be turned into a rare speedup. Any speed benefit of Montgomery
modular multiplication (using the same base) then becomes marginal, to
the point that the initial and final adjustments outweigh this benefit
when doing modular exponentiation with a small exponent (say 64 bits or
less). However, I know of no good public description of how to achieve
this.
Most importantly, when manipulating secret data, guarding against timing
attacks on the individual modular multiplications requires extra caution
with that approach, whereas it comes nearly for free with Montgomery
reduction.

Francois Grieu
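A digit-serial Python sketch of the interleaving described above. The
word size W, the helper name mont_mul and the parameter n0_inv are
illustrative choices, not a reference implementation; the accumulator t
grows by roughly one digit in the multiply step and loses one digit in
the reduction step, so it stays about the size of n and each digit of b
is scanned exactly once per modular multiplication:

W = 32                          # digit size in bits; digits are base B = 2**W
B = 1 << W

def mont_mul(a, b, n, n0_inv, s):
    # Digit-serial Montgomery multiplication: returns a*b*B**(-s) mod n.
    # Assumes n odd, 0 <= a, b < n, s = number of base-B digits of n,
    # and n0_inv = -n**-1 mod B.
    t = 0
    for i in range(s):
        b_i = (b >> (W * i)) & (B - 1)   # next digit of b, least significant first
        t = t + a * b_i                  # multiply step: t grows by about one digit
        m = ((t & (B - 1)) * n0_inv) & (B - 1)
        t = (t + m * n) >> W             # reduction step: exact division by B
    return t - n if t >= n else t        # single conditional subtraction

A usage check, with an arbitrary odd modulus chosen only as an example
(pow(n, -1, B) again needs Python 3.8+):

n = (1 << 255) - 19
s = (n.bit_length() + W - 1) // W        # number of W-bit digits of n
n0_inv = (-pow(n, -1, B)) % B
R = pow(B, s, n)                         # Montgomery radix B**s mod n
a, b = 123456789, 987654321
prod = mont_mul(a * R % n, b * R % n, n, n0_inv, s)    # a*b in Montgomery form
assert mont_mul(prod, 1, n, n0_inv, s) == (a * b) % n  # convert back and check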
From: Mok-Kong Shen on 26 Oct 2009 04:08
yawnmoth wrote:
> Mok-Kong Shen wrote:
>> yawnmoth wrote:
>>> Mok-Kong Shen wrote:
>>>> I could gravely err, but isn't it that the "normal" Newton method
>>>> can't work on integer numbers in a residue system (i.e. mod N)?
>
>>> If it returned 2.5 for 5/2, you could get the remainder by doing
>>> 5 - 2*floor(5/2) = 1.
>
>> But you couldn't compute approximate quotient like that in a residue
>> system, for it doesn't make sense in general, if I don't err. For
>> instance, let's compute 36/7 mod 31. You would compute floor(36/7)=5.
>> But 36/7 mod 31 is neither 5 nor 6, nor a number near these, but 14.
>
> Modular exponentiation doesn't really do division except to calculate
> the residue. As such, for that particular application, 36/7 mod 31
> seems like a contrived scenario.

I don't know how exactly you'll implement the Newton method for your
purpose so as to compare with Montgomery's procedure (which I haven't
closely studied). But I want to say that you have to examine and ensure
that nothing involving an analogue of normal division in the sense above
(directly or indirectly) ever occurs in your scheme.

M. K. Shen
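For concreteness, "division" inside a residue system means multiplying
by a modular inverse, which is a different operation from the integer
division that Newton-Raphson approximates. A two-line Python check of
the numbers quoted above (pow with a -1 exponent needs Python 3.8+):

inv7 = pow(7, -1, 31)        # 7**-1 mod 31 = 9, since 7*9 = 63 = 2*31 + 1
print((36 * inv7) % 31)      # 14 -- the residue-system quotient 36/7 mod 31
print(divmod(36, 7))         # (5, 1) -- ordinary integer quotient and remainder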