I want to factorize 2^1277-1
Hello,
I want to keep working on it until a factor of 2^1277-1 is found [url]https://www.mersenne.org/report_exponent/?exp_lo=1277&full=1[/url] I'm getting it by "manual assignment", but I'm not sure if this is correct; how can I be sure that my computer will solve it someday? Thanks. 
[QUOTE=tanaydin;485958]Hello,
I want to keep working on it until a factor of 2^1277-1 is found [url]https://www.mersenne.org/report_exponent/?exp_lo=1277&full=1[/url] I'm getting it by "manual assignment", but I'm not sure if this is correct; how can I be sure that my computer will solve it someday? Thanks.[/QUOTE] First, WELCOME TO GIMPS!!! To your questions: Will your computer solve it someday? Maybe; someone will. By you? As good a chance as anyone else working on it. In your lifetime? Maybe, maybe not. (Serious) ECM assignments are about the only option now that TF and P-1 have been done VERY VERY high. Unless you completely understand ECM, you are best to accept the Manual Assignment as is. The very last term in the assignment is the number of ECM curves to run. This is the only term you can confidently change. Increase it if you want it to run longer before you have to get another assignment; decrease it if it takes longer to complete than you would like. But understand that even running all 360,000 curves at the current range has a pretty small chance of finding a factor (I think about 7%). Anyway, please don't let me discourage you; someone will find it someday. Wayne 
Assuming we are in a case similar to M1061 (which is the most probable case, considering that there are many more large prime candidates than small ones), you will need some more computers and some more lifetimes to get a factor of M1277 by ECM or P-1. TF is totally out of the question; you would need a few lifetimes just to reach the point where ECM is now...
This is not counting the many "experiments" that other people run on this number, which are usually not reported to GIMPS. If the factor were some "low-hanging fruit", it would have been found by now. My bet is that the smaller factor has over 100-120 digits, most probably over 155 digits (half of 385 is 192). I would be very happy if somebody proves me wrong by finding a factor, hehe... 
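A quick sanity check of the sizes being discussed (the 385 and 192 figures above):

```python
# M1277 = 2^1277 - 1 has 385 decimal digits, so its smaller prime factor
# can have at most ~192 digits: a factor above the square root would be
# the larger cofactor, not the smaller one.
m1277 = 2**1277 - 1
digits = len(str(m1277))
print(digits)        # 385
print(digits // 2)   # 192
```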
[QUOTE=tanaydin;485958].... how can I be sure that my computer will solve it someday?
Thanks.[/QUOTE] You could run SNFS on it. If you get an Amazon AWS account, you could likely solve it within the year for $10,000 or so. Ok, maybe $20k. There's a stub of an explanation at [url]http://www.mersennewiki.org/index.php/SNFS[/url], and googling "special number field sieve" will give you more information. SNFS is *not* a "you might find a factor" algorithm; when the process completes, the factors will be known. However, we're talking about quite a substantial effort: hundreds of cores for over a year. A cluster would be helpful to solve the matrix; a single machine with 64GB of memory might not be sufficient, and would likely take over a year on its own. There's a reason we haven't tried this job as a forum team; we can marshal 200+ cores for the task, but nobody has the cluster and the interest to solve the matrix. 
[QUOTE=VBCurtis;485978]You could run SNFS on it. If you get an Amazon AWS account, you could likely solve it within the year for $10,000 or so. Ok, maybe $20k.
There's a stub of an explanation at [url]http://www.mersennewiki.org/index.php/SNFS[/url], and googling "special number field sieve" will give you more information. SNFS is *not* a "you might find a factor" algorithm; when the process completes, the factors will be known. However, we're talking about quite a substantial effort: hundreds of cores for over a year. A cluster would be helpful to solve the matrix; a single machine with 64GB of memory might not be sufficient, and would likely take over a year on its own. There's a reason we haven't tried this job as a forum team; we can marshal 200+ cores for the task, but nobody has the cluster and the interest to solve the matrix.[/QUOTE] An r4.4xlarge instance has 122 GiB of memory and 8 cores; an r4.2xlarge has 61 GiB and 4 cores. Would that single instance, in lieu of a cluster, be able to solve the matrix? (And does the number of cores make any difference?) The spot price is about 15 cents an hour for the r4.4xlarge, or about 7.5 cents an hour for the r4.2xlarge (us-east-2, Ohio). Assuming those prices hold steady (no guarantee, but they have for the past 3 months), that would be $1,315 per year or $660 per year, respectively. If the matrix solving waits for the preliminary SNFS to complete, then it would begin over a year from now and last over a year, and hourly spot prices would likely be even lower in the future, or there might be r5 instances by then. Obviously spot instances are interruptible. However, instances with less than 100 GB of memory can be set to hibernate on interruption, rather than stop or terminate. That might be an issue if the larger r4.4xlarge were used, since it's too big to hibernate. I'm not volunteering to actually do this, just trying to see if I understand correctly. 
You say that the matrix solving is the stumbling block, but it seems that running 200+ cores for over a year is by far the bigger expense, whereas a thousand bucks or two seems doable for a determined individual with money to burn, assuming those hundreds of cores were marshalled for free. 
We can "corrupt" a 10-core machine with 64 GB for the LA, and are willing to double the cores and double or quadruple the memory for this very particular purpose. There is a lot of fame coming with it, hehe... but we are afraid that these resources are not enough for the LA, and also that some software update will be needed to handle such a large matrix...

[QUOTE=LaurV;485986]We can "corrupt" a 10-core machine with 64 GB for the LA, and are willing to double the cores and double or quadruple the memory for this very particular purpose.[/QUOTE]If the memory isn't ECC-backed then undertaking a one-year-or-so job is very risky IMO.

[QUOTE=retina;485987]If the memory isn't ECC-backed then undertaking a one-year-or-so job is very risky IMO.[/QUOTE]
I believe the linear algebra code performs frequent consistency checks, which have a good probability of detecting errors. 
Details on M1061.
[URL]https://physics.fullerton.edu/gchilders/M1061.pdf[/URL] [quote]The linear algebra was performed using a 24×24 MPI grid (576 total cores), and was split between the Trestles cluster at the San Diego Supercomputing Center and the Lonestar cluster at the Texas Advanced Computing Center[/quote] M1061 was SNFS 319.7; M1277 is SNFS 384.4. 
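Those SNFS difficulty figures are just the decimal lengths of the numbers sieved. For 2^1061-1, the 319.7 figure suggests the polynomial worked on a number near 2^1062 (1062 is divisible by 6, suiting a sextic; this reading of the setup is my assumption, not stated in the thread):

```python
# SNFS difficulty ~ decimal length of the number sieved.
import math

log10_2 = math.log10(2)
print(round(1062 * log10_2, 1))  # 319.7  (M1061's quoted difficulty)
print(round(1277 * log10_2, 1))  # 384.4  (M1277 itself)
```

The ~65-digit jump in difficulty is why M1277 is in a different league from M1061.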
Page 15 of [url]https://eprint.iacr.org/2014/653.pdf[/url] summarizes matrix sizes and CPU time for a list of Mersenne factorizations completed by the CADO group.
Sampling: 2^1123-1 was a 124M matrix, and took 55 core-years. 2^1199-1 (the largest solved yet) was a 270M matrix, and took 190 core-years. Those are 76 bits apart; M1277 is 78 bits larger, so let's scale: 270/124 = 2.18, so M1277 might be a 2.18 * 270M = ~590M matrix. 190/55 = 3.45, so M1277's matrix might take 3.45 * 190 = ~660 core-years. I don't think any single machine is going to handle that, even if memory weren't a constraint. MPI splits the matrix in a way that also splits the memory requirement, so even if this matrix requires 200GB, a cluster of 16 machines could solve it with normal amounts of memory. A similar scaling of Greg's M1061 note of 3 CPU-centuries of sieve time is sobering. Edit to add: Table 2, page 11 of the above-linked paper lists sieve data. For 2^1199-1: 37-bit large primes, mfbr = 109, mfba = 74, 13G raw relations. Not sure whether M1277 would best use 38- or 39-bit LP; let's say 39 and aim for 40G relations. 
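The scaling argument above, spelled out as a back-of-envelope calculation (crude linear extrapolation over one 76-bit step, not a prediction):

```python
# Growth from 2^1123-1 to 2^1199-1 (76 bits), applied once more for the
# 78-bit step up to M1277.  Matrix rows and linear-algebra core-years
# are taken from the CADO paper cited above.
size_ratio = 270 / 124   # matrix-size growth over 76 bits (~2.18x)
time_ratio = 190 / 55    # core-year growth over 76 bits (~3.45x)

print(f"matrix: ~{size_ratio * 270:.0f}M rows")
print(f"linear algebra: ~{time_ratio * 190:.0f} core-years")
```

Sub-exponential complexity means the true growth is a bit worse than linear in bits, so these are best read as lower bounds on the scale.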
[QUOTE=tanaydin;485958]Hello,
I want to keep working on it until a factor of 2^1277-1 is found [url]https://www.mersenne.org/report_exponent/?exp_lo=1277&full=1[/url] I'm getting it by "manual assignment", but I'm not sure if this is correct; how can I be sure that my computer will solve it someday? Thanks.[/QUOTE] So, if I may, a brief summary (and I have no issue with being corrected): Any more TF is completely out of the question, as is any more P-1. Completing the current ECM effort up to 65 digits (which is almost half done) has a few percent chance of finding a factor. ECM beyond that (i.e. to 70, 75, or 80 digits), which is NOT currently set up in Prime95 but has been attempted a bit by some, would take [B]significantly longer[/B], still with only a small chance of finding a factor. If the smallest factor is actually much larger than 65 digits, then the only option is SNFS ... which I know nothing about, but the comments above suggest the world does NOT yet have a computer big enough or fast enough to factor this number in our lifetimes. HOWEVER, in the meantime some members here are still actively running ECM on M1277 because there is still a better than 0.0% chance of finding a factor. Feel free to join them. 