From: adacrypt on
On Mar 26, 12:49 pm, Tom St Denis <t...(a)iahu.ca> wrote:
> On Mar 26, 7:56 am, Maaartin <grajc...(a)seznam.cz> wrote:
>
> >  OOO    M    M   GGGG
> > O   O   MM  MM  G    G
> > O   O   M MM M  G
> > O   O   M    M  G   GG
> >  OOO    M    M   GGGG
>
> > You may need a fixed size font to decrypt the message. :D
>
> See, it isn't that there is a small intersection of "reality" and
> "what Adacrypt knows", it's that it defines a null space.
>
> What I love about him so much is that he so confidently announces how
> nothing he knows is relevant to any conversation at hand.  He's like
> the little 5 yr old kid tugging at an adult's shirt to try and add to a
> conversation between adults.  You know the kid has nothing useful to
> say but you still like hearing them spout off their insanity just the
> same.
>
> Tom

I deliberately go down such roads looking under every stone for
inspiration and I can't stop now - I don't want to stop, that is what
research is all about - my contribution to mathematics i.e. factoring
of physical vectors is no joke - I mean the mathematics speak for
themselves - some of the well established theorems of the past are a
joke when you compare them - like Lami's theorem but worse still is
Fermat's Last. - cheers - Adacrypt - after the revolution!
From: Tom St Denis on
On Mar 26, 10:38 am, adacrypt <austin.oby...(a)hotmail.com> wrote:
> I deliberately go down such roads looking under every stone for
> inspiration and I can't stop now - I don't want to stop,  that is what
> research is all about - my contribution to mathematics  i.e. factoring
> of physical vectors is no joke - I mean the mathematics speak for
> themselves - some of the well established theorems of the past are a
> joke when you compare them - like Lami's theorem but worse still is
> Fermat's Last. - cheers - Adacrypt - after the revolution!

How do you factor a vector?

Yes Austin, that's very good, why not go play with the other children
now.

Tom
From: WTShaw on
On Mar 26, 1:18 pm, Bruce Stephens <bruce+use...(a)cenderis.demon.co.uk>
wrote:
> adacrypt <austin.oby...(a)hotmail.com> writes:
>
> There's a new twist this year in that you're simultaneously arguing that
> computers are decimal.  The two claims seem inconsistent: why should a
> decimal computer have difficulty performing arithmetic on numbers larger
> than 2147483647?

Decimal still means base 10, and "digital" can alternatively mean
having 10 digits.
>
> So you've concurrently got two belief systems each wrong in several ways
> and which are mutually inconsistent.  Is this some kind of attempt to
> imitate Robert E. McElwaine?

Sounds like a religious argument against denominations, there being of
course none like that here?
From: Gordon Burditt on
>I have done some considerable work in the past factoring large numbers
>by conventional methods and for some time I wrongly believed that the
>crunch came when the computer could not store the very large parent
>number being tested by the string of candidates ie. the largest +ve
>integer that can be stored in 32 bit arithmetic is 2147483647.

Many so-called "32-bit" computers have instructions that do 32-bit *
32-bit yielding 64-bit multiplies and 64-bit / 32-bit yielding
32-bit divides. Also, many floating-point units are able to do math
on integers with a 64-bit mantissa.

Modern computers have RAM of 1GB or more, and nobody's seriously
suggesting use of RSA with more than 8-billion-bit keys, which
would require swapping/paging and *REALLY* slow things down.

Ever notice how modern use of RSA uses maybe 2048-bit or 4096-bit
keys, and 512-bit keys are considered rather weak? Those are rather
larger than any 32-bit registers.

Needing to do multi-precision arithmetic slows down both multiplication
and factoring. Which do you think slows down factoring going from
32 to 64 bit more? The fact that you have to do multi-precision
math (which might now require 4 multiplications instead of one),
or that the number of numbers for trial division is multiplied by
2**16? (Trial division by 2 thru sqrt(N)).

>I next
>assumed that because of this the computation would have to be done
>externally by hand - hence the time taken ?

Well, if you assumed that it would have to be done counting on
fingers and toes, or using a stone abacus, it would, but programmers
are smarter than that. It's possible to do multi-precision arithmetic
using multiple abaci or the hands of multiple people also.

>Current research is into
>methods that will get around this? How right or wrong is this
>hypothesis ?

Multi-precision and arbitrary-precision math packages such as the
GNU mp bignum library have been around for quite a while. I had
an implementation of C about 30 years ago that did 32-bit multiplies
and divides just fine (slow but not outrageously slow) on a 2MHz
8-bit machine (Intel 8080) with no hardware multiply instruction
(not even an 8-bit one).

I'd say your hypothesis is about 99.99% wrong, just like the theory
that good running shoes will greatly speed up how quickly you can get
from New York to Peking (they might shave off a microsecond if you
run between your bed and your car, but do nothing for flight time).

>Something to consider in my view is graphical arithmetic using GPS to
>approximate very close with some error analysis ?

I don't think *approximate* math is going to work well for factoring.
Either it divides or it doesn't.