From: Pubkeybreaker on 5 Apr 2010 12:49

Check out NFS(a)Home; help them push towards larger numbers.

http://escatter11.fullerton.edu/nfs/
From: harry on 5 Apr 2010 15:33

"Pubkeybreaker" <pubkeybreaker(a)aol.com> wrote in message
news:3adfeaf6-843b-4379-b31c-54ace9c44fc1(a)z9g2000vbm.googlegroups.com...
> Check out NFS(a)Home; help them push towards larger numbers.
>
> http://escatter11.fullerton.edu/nfs/

JSH already has this done.
From: Noob on 6 Apr 2010 03:52

harry wrote:
> Pubkeybreaker wrote:
>
>> Check out NFS(a)Home; help them push towards larger numbers.
>>
>> http://escatter11.fullerton.edu/nfs/
>
> JSH already has this done

It's true. JSH has, indeed, already factored 15 and 35.
From: Mok-Kong Shen on 6 Apr 2010 05:58

Pubkeybreaker wrote:
> Check out NFS(a)Home; help them push towards larger numbers.
>
> http://escatter11.fullerton.edu/nfs/

I wouldn't deny being a selfish person, but I would like to take part
in some collective internet scientific-computing projects if the
following conditions could be satisfied (I don't know much about how
such projects actually operate, so part of this may be irrelevant):

1. One can dynamically set an upper limit on the CPU load of the
process.

2. One can download the task to be done on one's own initiative. During
the actual processing there is no need for an internet connection to
the project's server. One need not keep one's computer on 24 hours a
day, i.e. the processing can be interrupted and resumed at any time.
One uploads the result oneself when the task is finished.

M. K. Shen
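The cycle described in condition 2 amounts to: fetch a work unit once, compute offline with periodic checkpoints so the job can be stopped and resumed at will, then submit the result as a separate step. A minimal, hypothetical sketch of that cycle follows; the URLs, file names and do_work() are placeholders, not any real project's API.

# Minimal sketch of the work cycle described above: fetch a task once,
# process it offline with periodic checkpoints so it can be interrupted
# and resumed at any time, then upload the result in a separate step.
# The URLs, file names and do_work() are placeholders, not a real API.
import json
import os
import urllib.request

TASK_FILE = "task.json"
CHECKPOINT_FILE = "checkpoint.json"

def fetch_task(url="http://example.org/get_task"):
    # One-off download; no connection is needed again until upload.
    with urllib.request.urlopen(url) as r, open(TASK_FILE, "wb") as f:
        f.write(r.read())

def do_work(task, i):
    # Placeholder for the real per-step computation (e.g. sieving a range).
    return (task["start"] + i) % 7

def process_task():
    with open(TASK_FILE) as f:
        task = json.load(f)
    state = {"next_step": 0, "partial": 0}
    if os.path.exists(CHECKPOINT_FILE):        # resume after an interruption
        with open(CHECKPOINT_FILE) as f:
            state = json.load(f)
    for i in range(state["next_step"], task["n_steps"]):
        state["partial"] += do_work(task, i)
        state["next_step"] = i + 1
        with open(CHECKPOINT_FILE, "w") as f:  # safe to stop after any step
            json.dump(state, f)
    return state["partial"]

def upload_result(result, url="http://example.org/submit"):
    # Separate, user-initiated step once the computation is done.
    data = json.dumps({"result": result}).encode()
    urllib.request.urlopen(urllib.request.Request(url, data=data))

Because the checkpoint file holds all of the job's state, the machine can be switched off at any point and the loop simply picks up where it left off on the next run.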
From: Chip Eastham on 6 Apr 2010 09:30

On Apr 6, 5:58 am, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> Pubkeybreaker wrote:
> > Check out NFS(a)Home; help them push towards larger numbers.
> >
> > http://escatter11.fullerton.edu/nfs/
>
> I wouldn't deny being a selfish person, but I would like to take part
> in some collective internet scientific-computing projects if the
> following conditions could be satisfied (I don't know much about how
> such projects actually operate, so part of this may be irrelevant):
>
> 1. One can dynamically set an upper limit on the CPU load of the
> process.
>
> 2. One can download the task to be done on one's own initiative. During
> the actual processing there is no need for an internet connection to
> the project's server. One need not keep one's computer on 24 hours a
> day, i.e. the processing can be interrupted and resumed at any time.
> One uploads the result oneself when the task is finished.
>
> M. K. Shen

Hi, Mok-Kong:

What operating system are you using?

The FAQ at the BOINC website (a utility for sharing workload, used by
the NFS(a)HOME project and others) suggests that if CPU limits are
needed, they must be set by an external utility (external to the
application).

I imagine that projects will differ in how much work is "farmed out"
to an individual computer at one time. The process is no doubt
resilient, so if a workstation crashes or is otherwise taken offline
without completing its assigned task, the computation as a whole is
not disrupted.

As computers differ in speed, I don't think the timing of delegated
tasks is precisely metered in advance. However, from what I've seen so
far, the NFS(a)HOME project assigns tasks that take (on my Linux-based
dual-processor computer) about 1-2 hours to finish. There's a small
"manager" application that lets you see what the "client" application
is doing (at a high level of detail).

regards, chip
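On Unix-like systems, one common shape for such an external CPU-limiting utility is a duty-cycling throttler that alternately stops and resumes the worker process, the same technique used by tools such as cpulimit. A rough, hypothetical sketch (the target PID and the 50% default are only examples; this is not part of BOINC itself):

# Hypothetical external CPU throttler (Unix only): keep a process at
# roughly `limit` of one CPU by alternately resuming and stopping it,
# the duty-cycle technique used by tools such as cpulimit.
import os
import signal
import sys
import time

def throttle(pid, limit=0.5, period=0.1):
    run = period * limit          # seconds the target may run per period
    stop = period - run           # seconds it is held stopped per period
    try:
        while True:
            os.kill(pid, signal.SIGCONT)
            time.sleep(run)
            os.kill(pid, signal.SIGSTOP)
            time.sleep(stop)
    except ProcessLookupError:
        pass                      # target exited; nothing left to throttle
    except KeyboardInterrupt:
        try:
            os.kill(pid, signal.SIGCONT)   # don't leave the target frozen
        except ProcessLookupError:
            pass

if __name__ == "__main__":
    # usage: python throttle.py <pid> [limit between 0 and 1]
    throttle(int(sys.argv[1]), float(sys.argv[2]) if len(sys.argv) > 2 else 0.5)

Because the throttling is driven entirely from outside via signals, it works on any client process without modifying it, which is presumably why the BOINC FAQ points to external utilities for this.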