From: Joshua Cranmer on
On 06/30/2010 10:09 PM, JSH wrote:
> On Jun 30, 7:01 pm, MichaelW<ms...(a)tpg.com.au> wrote:
>> My criticism is that the algorithm does not on average produce a
>> solution in a reasonable time.
>
> So? You keep puzzling me here.
>
> Why do you think that is significant at this stage?
>
> What do you believe it proves or may prove?
>
> Please, be detailed in your response. I find your postings to be
> curiously odd.

As I share MichaelW's sentiment, let me explain my rationale:

1. The math does not indicate underlying structure that can
substantially reduce the pool of possible trial numbers. Substantially
here means at least a non-constant factor.

2. The math does not appear amenable to finding underlying structure.
I'm not a number theorist, but it doesn't seem to me like there's a
particularly useful link between the sum of a set of numbers and the
product of one.

3. Pursuant to the above, the algorithm then boils down to "try all
possible numbers in a different permutation." Nothing suggests that this
permutation puts the more likely answers towards the front and the less
likely ones towards the end.

4. An intermediate step is factorization. For each number you try, you
need to factor a number. My basic test suggested that the difficulty of
these factorizations is not uniformly low but varies as it would for
numbers picked at random on the number line. Therefore, factorization
would still be rather slow, which only increases the runtime.

5. Your main defense is that it's only at a "basic stage." Yet, in order
to improve the runtime considerably, you have to fix two main issues
(whose speeds seem inversely correlated with each other: improving bulk
factorization makes it harder to seriously reduce the count of numbers
considered, and vice versa). The other possible variant is incorporating
a feedback loop, but basic thought outlines suggest that the cost of
that loop is still significant.

6. The barrier to break is high. Recall that interesting numbers are at
least 1024 bits in length; an algorithm that is O(sqrt(N)) for such a
number would require on the order of 10^154 steps. Your algorithm is
Omega(N), so right now it requires on the order of 10^308 steps in the
worst case for such numbers. 10^10 steps is feasible on a standard
computer; a long-term distributed effort might be able to "handle" on
the order of 10^20. For the amount of self-assuredness you sputter, that
is a very long ways away. Specifically, a factor of, say,
1797693134862315907729305190789024733617976978942306572734300811577326758055009631327084773224075360211201138798713933576587897688144166224928474306394741243777678934248654852763022196012460941194530829520850057688381506823424628814739131105408272371633505106845862982399472459384797163048.
I'm not exactly holding my breath.
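
For what it's worth, here is a minimal sketch in Python of the
arithmetic behind point 6 above; the variable names and the layout are
my own, as is treating ~10^20 as the feasible step count:

import math

N = 2**1024                     # a 1024-bit modulus
print(len(str(N)))              # 309 digits, so N ~ 10^308
print(len(str(math.isqrt(N))))  # 155 digits, so sqrt(N) ~ 10^154
# Gap between Omega(N) steps and ~10^20 feasible steps; this should
# match the factor quoted above.
print(N // 10**20)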

--
Beware of bugs in the above code; I have only proved it correct, not
tried it. -- Donald E. Knuth
From: JSH on
On Jun 30, 7:44 pm, MichaelW <ms...(a)tpg.com.au> wrote:
> On Jul 1, 12:09 pm, JSH <jst...(a)gmail.com> wrote:
>
> > On Jun 30, 7:01 pm, MichaelW <ms...(a)tpg.com.au> wrote:
>
> > > On Jul 1, 9:54 am, JSH <jst...(a)gmail.com> wrote:
>
> > > > In a prior post I noted a way to solve for m, when k^m = q mod N,
> > > > through integer factorization, which is then an approach to solving
> > > > discrete logarithms.  In this post I'll explain when the equations
> > > > MUST work, where a simple analysis can be done trivially using
> > > > methods familiar to those who've solved simultaneous equations in
> > > > regular algebra.
>
> > > > Here are the relevant equations, without the complete detail of the
> > > > prior post, which should be read for reference:
>
> > > > Everything follows from use of a simple system of equations:
>
> > > > f_1 = a_1*k mod N thru f_m = a_m*k mod N
>
> > > > Two important constraining equations:
>
> > > > a_1*...*a_m = q mod N
>
> > > > and
>
> > > > a_1+...+a_m = m mod N
>
> > > > Resultant equations:
>
> > > > f_1*...*f_m = q^2 mod N
>
> > > > and
>
> > > > f_1+...+f_m = mk mod N
>
> > > > (These are arbitrary constraints that I used.  There may be others
> > > > that are of practical use.)
>
> > > > Now assume that for some unknown number m-c of the f's, the
> > > > corresponding a's are simply the modular inverse of k; then those
> > > > f's simply equal 1, which allows me to solve for m with:
>
> > > > (k-1)*m = (f_1+...+f_c - c) mod N
>
> > > > If k-1 is coprime to N, you can simply use the modular inverse to get
> > > > m.  Otherwise you'd have to divide off common factors from both sides
> > > > and then use the modular inverse with what remains.
>
> > > > All of which was given in my prior post, but notice I can now go back
> > > > to the constraining equations for the a's with the information that
> > > > some of the a's have been set to the modular inverse of k:
>
> > > > a_1*...*a_c*k^{-(m-c)} = q mod N, so a_1*...*a_c = q*k^{m-c} mod N
>
> > > > and
>
> > > > a_1+...+a_c + (m-c)k^{-1} = m mod N, so a_1+...+a_c = -(m-c)k^{-1} + m
> > > > mod N
>
> > > > which means there are two simultaneous congruence equations with c
> > > > unknowns:
>
> > > > a_1*...*a_c = q*k^{m-c} mod N
>
> > > > and
>
> > > > a_1+...+a_c = -(m-c)k^{-1} + m mod N
>
> > > > Using the first to substitute out a_1 into the second and simplifying
> > > > slightly gives:
>
> > > > q*k^{m-c} + a_2^2*a_3*...*a_c + a_2*a_3^2*a_4*...*a_c + ... +
> > > > a_2*...*a_{c-1}*a_c^2 = a_2*...*a_c*[-(m-c)k^{-1} + m] mod N
>
> > > > Notice then that with any c-2 variables set arbitrarily, the existence
> > > > of the remaining one is set if a quadratic residue exists.
>
> > > > So for instance if c=4, then you can have a_3 and a_4 completely free
> > > > variables, as long as quadratic residues exist to allow for a_2, which
> > > > would indicate a 50% probability in that case if N is prime.
>
> > > > However, you can also further constrain one more of the a's to remove
> > > > squares, for instance let a_3 = a_2^{-1} mod N, and you have:
>
> > > > q*k^{m-c} + a_2*a_4*...*a_c + a_3*a_4*...*a_c + ... + a_4*...*a_c^2 =
> > > > a_4*...*a_c*[-(m-c)k^{-1} + m] mod N
>
> > > > which allows a solution for a_2 to always exist.  With c=4, that
> > > > would leave a_4 completely free.
>
> > > > Assuming human nature will be to look for smaller values to factor
> > > > when actually using the algorithm, one can assume that
>
> > > > f_1*...*f_m = q^2 mod N
>
> > > > will use smaller values of q^2 mod N, based on human preference, so
> > > > if a_4 is completely free and non-unit, in many cases that would mean
> > > > f_4 will be 2.
>
> > > > If a_3 and a_4 are free then one would expect f_3 and f_4 to be 2.
>
> > > > With c = 4 or greater, the variability that cannot just be set
> > > > arbitrarily to a convenient value is in a_1 and a_2, which could make
> > > > f_1 and f_2 just about any residue mod N.  Worst case they are both
> > > > near N, so you'd have a size of approximately 4N^2.
>
> > > > So algorithms based on this method should exit with a product
> > > > congruent to q^2 mod N that is less than 4N^2.
>
> > > > Here's the example given in my prior post, which should make more
> > > > sense given the information above:
>
> > > > Solve for m, where:
>
> > > > 2^m = 13 mod 23
>
> > > > so k = 2, and f_1*...*f_m = q^2 mod N = 13^2 mod 23 = 8 mod 23.
>
> > > > And I found a solution with f_1*...*f_m = 54, and
>
> > > > f_1 = 2, f_2 = 3, f_3 = 3, f_4 = 3
>
> > > > so c=4, and
>
> > > > m = (k-1)^{-1}*(f_1 +...+ f_c - c) mod N = 2+3+3+3 - 4 = 7, since
> > > > (k-1)^{-1} = 1 here
>
> > > > And 2^7 = 128 = 13 mod 23.
>
> > > > Notice that 2 is a factor, as are other small primes.  The method
> > > > will try to fit small primes if you are using small values, which is
> > > > a matter of human preference.  The algebra tries to give you what you
> > > > want based on the size of the numbers you are factoring.
>
> > > > The value of c is dynamically set by the factorization of q^2 mod N.
> > > > But the results above indicate that the algebra must give you a
> > > > factorization within that range which will work.
>
> > > > And that is what's found with a first-blush basic analysis.  It's not
> > > > clear at this time what further information might result from more
> > > > basic research.  My aim at this point is to answer criticism against
> > > > this approach.
>
> > > > Routinely posters reply requesting I demonstrate by breaking current
> > > > encryption.  Well, if I could do that I wouldn't need to bother
> > > > posting on newsgroups now would I?
>
> > > > It's basic research.  Early stages.
>
> > > > James Harris
>
> > > Solve one of the following with your algorithm.
>
> > > 2^m = 16 mod 23
> > > 2^m = 2 mod 107
> > > 2^m = 4 mod 107
> > > 2^m = 10 mod 107
> > > 2^m = 14 mod 107
> > > 2^m = 15 mod 107
> > > 2^m = 16 mod 107
>
> > > My criticism is that the algorithm does not on average produce a
> > > solution in a reasonable time.
>
> > So?  You keep puzzling me here.
>
> > Why do you think that is significant at this stage?
>
> > What do you believe it proves or may prove?
>
> > Please, be detailed in your response.  I find your postings to be
> > curiously odd.
>
> > James Harris
>
> Your algorithm is too inefficient to be of any use. No amount of
> research will ever make it more efficient than the brute-force method.
> There is no next stage, there is no advanced research, there is no
> breakthrough. You have no evidence to suggest otherwise, but my many
> examples provide evidence that I am correct.

Many examples? I've run my own "many examples". I've found it didn't
work that well, which is why I released it.

> Detailed enough?

It shows your opinion. My own view is that at first blush what you
say is quite reasonable, and could actually be correct!

But what then?


James Harris
From: JSH on
On Jun 30, 8:00 pm, Joshua Cranmer <Pidgeo...(a)verizon.invalid> wrote:
> On 06/30/2010 10:09 PM, JSH wrote:
>
> > On Jun 30, 7:01 pm, MichaelW<ms...(a)tpg.com.au>  wrote:
> >> My criticism is that the algorithm does not on average produce a
> >> solution in a reasonable time.
>
> > So?  You keep puzzling me here.
>
> > Why do you think that is significant at this stage?
>
> > What do you believe it proves or may prove?
>
> > Please, be detailed in your response.  I find your postings to be
> > curiously odd.
>
> As I share MichaelW's sentiment, let me explain my rationale:
>
> 1. The math does not indicate underlying structure that can
> substantially reduce the pool of possible trial numbers. Substantially
> here means at least a non-constant factor.

Sounds reasonable.

> 2. The math does not appear amenable to finding underlying structure.
> I'm not a number theorist, but it doesn't seem to me like there's a
> particularly useful link between the sum of a set of numbers and the
> product of one.

Seems intuitive.

> 3. Pursuant to the above, the algorithm then boils down to "try all
> possible numbers in a different permutation." Nothing suggests that this
> permutation puts the more likely answers towards the front and the less
> likely ones towards the end.
>
> 4. An intermediate step is factorization. For each number you try, you
> need to factor a number. My basic test suggested that the difficulty of
> these factorizations is not uniformly low but varies as it would for
> numbers picked at random on the number line. Therefore, factorization
> would still be rather slow, which only increases the runtime.
>
> 5. Your main defense is that it's only at a "basic stage." Yet, in order
> to improve the runtime considerably, you have to fix two main issues
> (whose speeds seem inversely correlated with each other: improving bulk
> factorization makes it harder to seriously reduce the count of numbers
> considered, and vice versa). The other possible variant is incorporating
> a feedback loop, but basic thought outlines suggest that the cost of
> that loop is still significant.

It IS only the basic stage.

I think part of the problem is that some of you don't know what "basic
research" is.

How many modern ideas do you think looked good to anyone right out of
the box?

> 6. The barrier to break is high. Recall that interesting numbers are at
> least 1024 bits in length; an algorithm that is O(sqrt(N)) for such a
> number would require on the order of 10^154 steps. Your algorithm is
> Omega(N), so right now it requires on the order of 10^308 steps in the
> worst case for such numbers. 10^10 steps is feasible on a standard
> computer; a long-term distributed effort might be able to "handle" on
> the order of 10^20. For the amount of self-assuredness you sputter, that
> is a very long ways away. Specifically, a factor of, say,
> 1797693134862315907729305190789024733617976978942306572734300811577326758055009631327084773224075360211201138798713933576587897688144166224928474306394741243777678934248654852763022196012460941194530829520850057688381506823424628814739131105408272371633505106845862982399472459384797163048.
> I'm not exactly holding my breath.

If early results with this approach had looked great there is no way
I'd have released it on Usenet.

So noting that your own examples do not show it working well does not
interest me.

Also, intuitive feelings about how badly you suppose it should work are
boring.

There is only one thing that matters with this approach at this point
from the basic research side: it directly attacks the use of a large m
by eliminating a large portion of that m.

So it can reduce an m of arbitrary size to finding 4 factors.
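
To make that concrete, here's a rough sketch in Python of that
reduction on the toy example from my earlier post (the helper name
try_solve, the c = 4 choice, and the small search bound are just for
illustration, and it assumes k-1 is coprime to N):

from itertools import combinations_with_replacement

def try_solve(k, q, N, c=4, bound=20):
    # Search small tuples (f_1, ..., f_c) with f_1*...*f_c = q^2 mod N,
    # then test m = (k-1)^{-1} * (f_1 + ... + f_c - c) mod N.
    target = (q * q) % N
    kinv = pow(k - 1, -1, N)  # assumes gcd(k-1, N) == 1
    for fs in combinations_with_replacement(range(2, bound + 1), c):
        prod = 1
        for f in fs:
            prod = (prod * f) % N
        if prod != target:
            continue
        m = (kinv * (sum(fs) - c)) % N
        if pow(k, m, N) == q:  # verify the candidate exponent
            return fs, m
    return None

print(try_solve(2, 13, 23))  # ((2, 3, 3, 3), 7), and 2^7 = 13 mod 23

Of course the search over the f's is still brute force here; the open
question is doing that part well.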

There is NOTHING else that even comes close to that concept out there
at all.

If it can be figured out how to make it work well, it simply eliminates
the advantage of discrete logs.

Blows it away.

Now then, your tries have not shown that it can work. My own tries
have not given me much of a notion that it works well or I wouldn't
have put it on Usenet. Your intuition says it can't work well.

So what would you do next if you were me?


James Harris
From: MichaelW on
On Jul 1, 1:20 pm, JSH <jst...(a)gmail.com> wrote:
>
> So what would you do next if you were me?
>
> James Harris

Spend 5 years getting a decent education in mathematics.

The basic disagreement between you and everyone else is over the value
of your research. You think it deserves serious consideration; we think
it is of very poor quality.

For reference here is a paper doing what you are attempting to
achieve.

http://www.dtc.umn.edu/~odlyzko/doc/arch/discrete.logs.pdf

I do not expect you to understand the maths; from experience, even the
abstract is beyond you. Do note, however, the quality of the paper
compared to your own work. If you can see that, then you will
understand.

Regards, Michael W.
From: JSH on
On Jun 30, 10:06 pm, MichaelW <ms...(a)tpg.com.au> wrote:
> On Jul 1, 1:20 pm, JSH <jst...(a)gmail.com> wrote:
>
> > So what would you do next if you were me?
>
> > James Harris
>
> Spend 5 years getting a decent education in mathematics.

But what about THIS result? What would YOU do with it if it were
yours?

> The basic disagreement between you and everyone else is the value of
> your research. You think it deserves serious consideration, we think
> it is of very poor quality.

It is my math. Do a search in Google on: mymath

So who are you anyway? Who is this "we"?

What makes you at all relevant to anyone, especially me?

Why should I give a damn what you think?

> For reference here is a paper doing what you are attempting to
> achieve.
>
> http://www.dtc.umn.edu/~odlyzko/doc/arch/discrete.logs.pdf


Oh, Odlyzko. I'm still mad at him for telling me, years ago, that my
prime counting function wasn't interesting.

When is the last time you had a chat with him?

> I do not expect you to understand the maths; from experience even the
> abstract is beyond you. Do note however the quality of the paper
> compared to your own work. If you can see it then you will understand.

Now that you've finished strutting and talking down to me, back to the
CURRENT IDEA.

What do you think should be done with it?

Should it be abandoned completely and best forgotten by all?


James Harris