Until now no one has come up with a substantially faster algorithm. That doesn't necessarily mean there is none; on the other hand, it also hasn't been proven that it is impossible to do much faster.
One optimization you might want to take into account: it isn't necessary to search up to n/2; you can already stop when sqrt(n) is reached.
... and be sure to choose a different storage location for your numbers if you really want to return a list of all candidates found, as already mentioned in "chris"'s comment.
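A minimal sketch of what that could look like, trial division stopping at sqrt(n) and collecting results in a separate list (the function name and structure here are just an illustration, not your original code):

```python
def prime_factors(n):
    """Return the list of prime factors of n via trial division."""
    factors = []
    d = 2
    # Once d exceeds sqrt(n), any remaining n > 1 must itself be prime,
    # so testing divisors only up to d * d <= n is sufficient.
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(360))  # → [2, 2, 2, 3, 3, 5]
```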
EDIT:
As it was pointed out to me that there is a rather large variety of algorithms available which, in terms of time complexity, might run a bit faster than the one you asked about, it seems worth adding a few more words to the short comment given above.
Besides the most obvious possibility of saving some computation time by running the loop in steps of 2 (after first dividing out all factors of 2), there are still some other tricks available; I didn't mention them in the answer above because they are not substantially faster.
The main reason that led to this decision is that, while for example cutting the number of iterations down by a factor of 2 seems like a big win, compared to the expected growth in runtime with growing numbers, a gain by any constant factor becomes so insignificant that complexity theory doesn't even make a distinction here: both algorithms are said to have (almost) the same time complexity.
Even if it were possible to gain a constant factor of hundreds of billions over the runtime of the original algorithm, there would still be no difference made between the two of them at all.
The bigger the numbers get, the less influence any constant, be it as big as you can imagine, has on the runtime, if the runtime also grows rapidly with the magnitude of the number you are addressing.
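For completeness, the step-of-2 trick mentioned above might look something like this (a sketch only, not a drop-in replacement for your code):

```python
def prime_factors_odd_step(n):
    """Trial division: divide out 2 first, then test only odd divisors."""
    factors = []
    # Divide out all factors of 2 first...
    while n % 2 == 0:
        factors.append(2)
        n //= 2
    # ...so only odd divisor candidates remain, and we can step by 2.
    d = 3
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 2
    if n > 1:
        factors.append(n)
    return factors
```

This halves the number of candidate divisors tested, which is exactly the kind of constant-factor gain that the next paragraphs argue doesn't change the time complexity.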
One very special barrier in terms of time complexity, often regarded as the border between the practically feasible and the merely impossible, is that of so-called polynomial runtime. This means nothing else than that, even though the runtime might grow drastically with growing n, it is still possible to describe this growth with a constant exponent k, such that the runtime is something around n^k. An algorithm without polynomial runtime, on the other hand, can't be bounded by any such exponent, however big you make it.
To give an example of why this difference really matters, let's take a look at two imaginary algorithms: the first one with polynomial runtime, say n^10, and another one, say with runtime n!. For small numbers it doesn't seem too bad: at n = 10, algorithm one takes 10^10 = 10000000000 time units, while with only 3628800 units our second algorithm even seems to run a lot faster.
The problem with our algorithm two is that, compared to algorithm one, its runtime grows dramatically faster. At n = 100 you get something like 100000000000000000000 for algorithm one, while it is already something like
for algorithm one while it is already something like 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
for algorithm two.
Pushing the frontier even further, at n = 1000 we end up with algorithm one at 1000000000000000000000000000000,
while our second algorithm will take something like 4023872600770937735437024339230039857193748642107146325437999104299385123986290205920442084869694048004799886101971960586316668729948085589013238296699445909974245040870737599188236277271887325197795059509952761208749754624970436014182780946464962910563938874378864873371191810458257836478499770124766328898359557354325131853239584630755574091142624174743493475534286465766116677973966688202912073791438537195882498081268678383745597317461360853795345242215865932019280908782973084313928444032812315586110369768013573042161687476096758713483120254785893207671691324484262361314125087802080002616831510273418279777047846358681701643650241536913982812648102130927612448963599287051149649754199093422215668325720808213331861168115536158365469840467089756029009505376164758477284218896796462449451607653534081989013854424879849599533191017233555566021394503997362807501378376153071277619268490343526252000158885351473316117021039681759215109077880193931781141945452572238655414610628921879602238389714760885062768629671466746975629112340824392081601537808898939645182632436716167621791689097799119037540312746222899880051954444142820121873617459926429565817466283029555702990243241531816172104658320367869061172601587835207515162842255402651704833042261439742869330616908979684825901254583271682264580665267699586526822728070757813918581788896522081643483448259932660433676601769996128318607883861502794659551311565520360939881806121385586003014356945272242063446317974605946825731037900840244324384656572450144028218852524709351906209290231364932734975655139587205596542287497740114133469627154228458623773875382304838656889764619273838149001407673104466402598994902222217659043399018860185665264850617997023561938970178600408118897299183110211712298459016419210688843871218556461249607987229085192968193723886426148396573822911231250241866493531439701374285319266498753372189406942814341185201580141233448280150513996942901534830776445690990731524332
78288269864602789864321139083506217095002597389863554277196742822248757586765752344220207573630569498825087968928162753848863396909959826280956121450994871701244516461260379029309120889086942028510640182154399457156805941872748998094254742173582401063677404595741785160829230135358081840096996372524230560855903700624271243416909004153690105933983835777939410970027753472000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
.
If you don't believe it, just calculate it yourself. The bc manual even contains an example of how to implement the factorial function. But don't get dizzy while counting digits...
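If bc isn't at hand, the same comparison can be reproduced in a few lines of Python (just a sanity check of the figures quoted above; the digit counts are computed, not typed in):

```python
import math

# Algorithm one: polynomial runtime n^10
print(10 ** 10)            # → 10000000000
print(100 ** 10)           # → 100000000000000000000

# Algorithm two: factorial runtime n!
print(math.factorial(10))  # → 3628800

# Rather than printing the monsters above, just count their digits:
print(len(str(math.factorial(100))))   # → 158
print(len(str(math.factorial(1000))))  # → 2568
```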
It might be interesting to know how many trailing zeroes we would have to append to 10 to get the factor by which to multiply the age of our universe to arrive at such a big span of time, even measured in units of Planck time. Unfortunately I don't know.
What's interesting about all that is the fact that, to this day, there isn't any known algorithm that can perform factorization in polynomial time.
As it is not only an interesting field of research in itself, the practical impossibility of factorizing large integers also plays an important role in the RSA public-key encryption algorithm, which is in widespread use today. So quite naturally there has already been a lot of research in this area, discovering algorithms that (without breaking the barrier already mentioned) run slightly faster than the algorithm you proposed.
As "Jim Balter" already correctly noted in his comment, you might want to take a look at the referenced article (see: http://en.wikipedia.org/wiki/Integer_factorization#General-purpose) for the methods others have already come up with.
This article, also mentioned by "Jim", might be another interesting resource (see: What is the fastest factorization algorithm?).
Another interesting link to take a look at might be the list of RSA Factoring Challenge winners of past years, to get an idea of where the border between the feasible and the almost impossible may lie today (http://en.wikipedia.org/wiki/RSA_Factoring_Challenge).