Will optimizing code become unnecessary?
Asked Answered
C

26

7

If Moore's Law holds true, and CPUs/GPUs become increasingly fast, will software (and, by association, you software developers) still push the boundaries to the extent that you need to optimize your code? Or will a naive factorial solution be good enough (etc.)?

Concoff answered 12/6, 2009 at 23:59 Comment(1)
This seems like a community wiki question since it has no finite answer.Shizue
U
18

Poor code can always overcome CPU speed.

For an excellent example, go to this Coding Horror column and scroll down to the section describing the book Programming Pearls. Reproduced there is a graph ("TRS-80 vs. Alpha", source: typepad.com) showing how, for a certain algorithm, a TRS-80 with a 4.77 MHz 8-bit processor can beat a 32-bit Alpha chip.

The current trend in speedups is to add more cores, 'cause making individual cores go faster is hard. So aggregate speed goes up, but linear tasks don't always benefit.

The saying "there is no problem that brute force and ignorance cannot overcome" is not always true.

Unbent answered 13/6, 2009 at 0:11 Comment(2)
++ I tried to find that saying, and could not. There is a song of that name. I would say "there is no advantage that brute force and ignorance cannot overcome".Photojournalism
Here's the picture: codinghorror.typepad.com/.a/…Premeditate
T
31

2x the processing power doesn't do much to ameliorate the awfulness of your lousy n^2 search.
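To make that concrete, here's a small Python sketch (the data and sizes are invented) comparing a per-query linear scan, which makes the whole job O(n^2), with a sort-plus-binary-search approach, which is O(n log n). No constant-factor speedup from hardware changes which curve you're on:

```python
import bisect

# Naive O(n^2): for each query, scan the whole list.
def count_members_naive(queries, data):
    return sum(1 for q in queries if q in data)  # 'in' on a list is O(n)

# O(n log n): sort once, then binary-search each query.
def count_members_sorted(queries, data):
    s = sorted(data)
    hits = 0
    for q in queries:
        i = bisect.bisect_left(s, q)
        hits += i < len(s) and s[i] == q
    return hits

data = list(range(0, 10_000, 2))   # even numbers 0..9998
queries = list(range(100))         # 0..99: exactly 50 are even
assert count_members_naive(queries, data) == count_members_sorted(queries, data) == 50
```

Doubling the CPU speed lets the naive version handle only about 41% more data in the same time; the better algorithm changes the shape of the curve entirely.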

Tamikotamil answered 13/6, 2009 at 0:4 Comment(1)
but it lets you search 40% more data!Romie
A
13

The faster computers get, the more we expect them to do.

Ambrose answered 13/6, 2009 at 0:0 Comment(6)
Would you elaborate? I'm still learning to program (wrote my first Stack class today), and I can't think of many things specifically that programmers want to do today but can't due to hardware limitations.Concoff
There is a largest prime number? Cool ey ;)Lightly
We also expect them to do things with more fidelity, and faster. Eg. Games - the faster computers get, the higher quality the graphics will get.Culpable
...and the sooner the robot revolution will start!Carpogonium
This answer doesn't say much.Lawerencelawes
@Lawerencelawes Maybe this video says it better: youtube.com/watch?v=Pdk2cJpSXLgAmbrose
P
13

Whether it's faster code for more polygons in a videogame, or faster algorithms for trading in financial markets, if there's a competitive advantage to being faster, optimization will still be important. You don't have to outrace the lion that's chasing you and your buddy--you just have to outrace your buddy.

Pau answered 13/6, 2009 at 0:6 Comment(1)
I agree. For the thousands of applications that come out, those that are optimized will stand out against the competition. It'd be funny if it becomes a forgotten black art again. "You did what?" I think this is the correct answer.Lawerencelawes
B
12

Until all programmers write optimal code the first time around, there will always be a place for optimization. Meanwhile, the real question is this: what should we optimize for first?

Benefield answered 13/6, 2009 at 0:2 Comment(1)
Amen to that - was going to write something similar +1. Optimization can also mean "wtf was that guy thinking when he wrote that muck."Factfinding
N
10

Moore's Law speaks about how many transistors we can pack on a chip -- it says nothing about those transistors being able to switch at increasingly fast speeds. Indeed, in the last few years clock speeds have more or less stagnated -- we just keep getting more and more "cores" (complete CPUs, essentially) per chip. Taking advantage of that requires parallelization of the code, so if you're writing "naively", the magical optimizer of the future will be busy finding the hidden parallelism in your code so it can farm it out to multiple cores (more realistically, for the foreseeable future, you'll have to be helping out your compiler a lot ;-).
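As a minimal sketch of what "helping the compiler" (or in this case, the runtime) looks like: the parallelism below doesn't happen automatically -- the programmer has to restructure the work into independent tasks. The work function is invented for illustration; only the standard library is used:

```python
from concurrent.futures import ProcessPoolExecutor

def cpu_bound_work(n):
    # Stand-in for a real computation: sum of squares below n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [10_000] * 8

    # Serial version: one core does everything, one task after another.
    serial = [cpu_bound_work(n) for n in inputs]

    # Parallel version: the pool farms independent tasks out to multiple cores.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(cpu_bound_work, inputs))

    assert serial == parallel  # same answers; the wall-clock time differs
```

Note that nothing here was "discovered" by an optimizer: the split into independent tasks is a design decision made by the programmer.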

Noonday answered 13/6, 2009 at 0:6 Comment(2)
Good explanation. I never considered the difference between having a 32-core 3.0 GHz CPU and a 96 GHz CPU, but the difference is clear now.Concoff
The magical optimizer of the future can only do so much; see Amdahl's Law en.wikipedia.org/wiki/Amdahl%27s_law .Surgeonfish
J
10

Wirth's law:

Software is getting slower more rapidly than hardware becomes faster.

P.S. On a more serious note: as the computing model moves to parallel processing, code optimization becomes more important. If you optimize your code 2x and it runs 5 min instead of 10 min on a single box, it may not be that impressive. The next computer, with 2x the speed, will compensate for this. But imagine if you run your program on 1000 CPUs. Then any optimization saves A LOT of machine time. And electricity. Optimize and save the Earth! :)

Jugal answered 13/6, 2009 at 0:12 Comment(0)
K
6

Computational tasks seem to be divided into roughly two broad groups.

  1. Problems with bounded computational needs.
  2. Problems with unbounded computational needs.

Most problems fit in that first category. For example, real-time 3d rasterisation. For a good long time, this problem was out of reach of typical consumer electronics. No convincing 3d games or other programs existed that could produce real-time worlds on an Apple ][. Eventually, though, technology caught up, and now this problem is achievable. A similar problem is simulation of protein folding. Until quite recently, it was impossible to predict the resulting protein molecule from a known peptide sequence, but modern hardware makes this possible in a few hours or minutes of processing.

There are a few problems, though, that by their nature can absorb all of the computational resources available. Most of these are dynamic physical simulations. Obviously it's possible to perform a computational model of, say, the weather. We've been doing this almost as long as we've had computers. However, such a complex system benefits from increased accuracy. Simulation at ever finer space and time resolutions improves the predictions bit by bit. But no matter how much accuracy any given simulation has, there's always room for more, with benefit to follow.

Both types of problems have a very major use for optimization of all sorts. The second type is fairly obvious. If the program doing the simulation is improved a bit, then it runs a bit faster, giving results a bit sooner or with a bit more accuracy.

The first one is a bit more subtle, though. For a certain period, no amount of optimization is worthwhile, since no computer exists that is fast enough for it. After a while, optimization is somewhat pointless, since hardware that runs it is many times faster than needed. But there is a narrow window during which an optimal solution will run acceptably on current hardware while a suboptimal solution won't. During this period, carefully considered optimization can be the difference between a first-to-market winning product and an also-ran.

Kantar answered 13/6, 2009 at 0:34 Comment(0)
L
3

There is more to optimization than speed. Moore's law doesn't apply to computer memory. Also optimization is often the process of compiling your code to take advantage of CPU-specific instructions. These are just a few of the optimizations I can think of that will not be solved by faster CPUs.

Lora answered 13/6, 2009 at 0:0 Comment(1)
Moore's Law does apply to memory, it just has a much slower growth rate. I forget the numbers, but I think it's something like memory speeds double every 7 years or so.Surgeonfish
L
3

Optimization will always be necessary, because the mitigating factor to Moore's Law is bloatware.

Lumbar answered 13/6, 2009 at 0:6 Comment(0)
A
2

Other answers seem to be concentrating on the speed side of the issue. That's fine. The real issue I can see is that if you optimise your code, it'll take less energy to run it. Your datacenter runs cooler, your laptop lasts longer, your phone goes for more than a day on a charge. There's a real selection pressure at this end of the market.

Alithia answered 13/6, 2009 at 0:13 Comment(0)
D
1

Optimisation will continue to be needed in many situations, particularly:

  • Real-time systems, where CPU time is at a premium

  • Embedded systems, where memory is at a premium

  • Servers, where many processes are demanding attention simultaneously

  • Games, where 3-D ray tracing, audio, AI, and networking can make for a very demanding program

Dichromic answered 13/6, 2009 at 0:10 Comment(0)
D
1

The world changes, and we need to change with it. When I first began, being a good programmer was all about knowing all of the little micro-optimizations you could do to squeeze another 0.2% out of a routine by manipulating pointers in C, and other things like that. Now, I spend much more of my time working on making algorithms more understandable, since in the long run, that's more valuable. But there are always things to optimize, and always bottlenecks. More resources means people expect more from their systems, so being sloppy isn't a valid option for a professional.

Optimization strategies change as you add more speed/memory/resources to work with, though.

Some optimization has nothing to do with speed. For example, when optimizing multithreaded algorithms, you may be optimizing for a reduction in the total number of shared locks. Adding more processing power in the form of speed (or worse, processors) may not have any effect if your current processing power is spent waiting on locks. Adding processors can even make your overall performance drop if you're doing things incorrectly. Optimization in this case means trying to reduce the number of locks and keep them as fine-grained as possible, instead of trying to reduce the number of instructions.
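A minimal sketch of the idea, using Python's standard threading module (the counter workload is invented): the first version funnels every update through one lock, while the second shards the keys so threads touching different shards never block each other.

```python
import threading

# Coarse-grained: every update contends for one global lock.
class CoarseCounter:
    def __init__(self):
        self.lock = threading.Lock()
        self.counts = {}

    def add(self, key):
        with self.lock:
            self.counts[key] = self.counts.get(key, 0) + 1

# Fine-grained: keys are hashed into shards, each with its own lock,
# so unrelated updates proceed in parallel instead of queueing.
class ShardedCounter:
    def __init__(self, shards=16):
        self.locks = [threading.Lock() for _ in range(shards)]
        self.counts = [dict() for _ in range(shards)]

    def _shard(self, key):
        return hash(key) % len(self.locks)

    def add(self, key):
        i = self._shard(key)
        with self.locks[i]:
            self.counts[i][key] = self.counts[i].get(key, 0) + 1

    def get(self, key):
        i = self._shard(key)
        with self.locks[i]:
            return self.counts[i].get(key, 0)
```

Both produce identical counts; the difference only shows up under contention, which is exactly why a profiler, not instruction counting, is the right tool for spotting it.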

Digestible answered 13/6, 2009 at 0:31 Comment(0)
I
1

As long as some people write slow code that uses excessive resources, others will have to optimize their code to provide those resources faster and get the speed back.

I find it amazing how creative some developers can get writing suboptimal code. At my previous job, one guy wrote a function to compute the time between two dates by repeatedly incrementing one date and comparing, for example.
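For illustration, here's roughly what that anti-pattern looks like next to the direct approach (a sketch; the original was presumably not in Python):

```python
from datetime import date, timedelta

# The anti-pattern: step one day at a time until the dates match.
# Cost grows linearly with the distance between the dates.
def days_between_slow(start, end):
    days = 0
    while start < end:
        start += timedelta(days=1)
        days += 1
    return days

# The direct way: date subtraction is a single O(1) operation.
def days_between(start, end):
    return (end - start).days

a, b = date(2009, 6, 13), date(2011, 12, 13)
assert days_between_slow(a, b) == days_between(a, b)
```

For dates a few years apart the loop is merely wasteful; hand it a pair of timestamps centuries apart and it becomes a visible stall.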

Incorrigible answered 13/6, 2009 at 0:33 Comment(1)
Hey! I just realized I have been on your site. Lots of cool stuff. You never know who you'll see on these forums. (I saw Peter Shor the other day.) Anyway, really liked the site.Thereupon
B
1

Computer speed can't always overcome human error. The question might be phrased, "Will CPUs become sufficiently fast that compilers can take the time to catch (and fix) implementation problems?" Obviously, code optimization will be needed (for the foreseeable future) to fix Shlemiel-the-painter-type problems.

Software development is still a matter of telling the computer exactly what to do. What "increasingly fast" CPUs will give us is the ability to design increasingly abstract and natural programming languages, eventually to the point where computers take our intentions and implement all the low-level details... someday.

Badalona answered 13/6, 2009 at 0:54 Comment(0)
P
1

A computer is like a teenager's room.

It will never be big enough to hold all the junk.

Photojournalism answered 19/6, 2009 at 18:7 Comment(0)
T
1

I think the result of all this is that computing power is getting cheaper, so the programmer can spend less time accomplishing a given task. For example, higher-level languages like Java or Python are almost always slower than lower-level languages like assembly. But they are so much easier for the programmer that new things become possible. I think the end destination will be computers that can communicate directly with humans and compile human speech into byte-code. Then programmers will cease to exist. (And computers might take over the world.)

Thereupon answered 13/12, 2011 at 2:29 Comment(0)
V
0

Right or wrong, it's happening already in my opinion, and it's not always necessarily a bad thing. Better hardware does present opportunity for the developer to focus more energy on solving the problem at hand than worrying about the extra 10% of memory utilization.

Optimization is inarguably valuable, but only when it's needed. I think the additional hardware power is simply decreasing the instances where it is truly needed. However, whoever is writing the software to launch the space shuttle to the moon had better have his code optimized :)

Verret answered 13/6, 2009 at 0:6 Comment(1)
I disagree with that last point. NASA uses very old computer technology on spacecrafts, because the hardware has been tested for many, many years. Reliability and redundancy are far more important for space hardware than speed.Surgeonfish
D
0

Given that computers are about a thousand times faster than they were a few decades ago, but don't generally appear much faster, I'd say that we have a LONG way to go before we stop worrying about optimization. The problem is that as computers become more powerful, we have the computers do more and more work for us so that we can work at higher levels of abstraction. Optimization at each level of abstraction remains important.

Yes, computers do many things a lot faster: You can draw a Mandelbrot in minutes that used to require days of computer time. A GIF loads near-instantaneously, rather than taking visible seconds to be drawn on the screen. Many things are faster. But browsing, for example, is not that much faster. Word processing is not that much faster. As computers get more powerful, we just expect more, and we make computers do more.

Optimization will be important for the foreseeable future. However, micro-optimizations are far less important than they used to be. The most important optimization these days may be the choice of algorithm: do you choose O(n log n) or O(n^2)?
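As a small illustration of that choice (the task and data are invented), here is the same question -- "does this list contain a duplicate?" -- answered two ways:

```python
# O(n^2): compare every pair of elements.
def has_duplicates_quadratic(items):
    return any(items[i] == items[j]
               for i in range(len(items))
               for j in range(i + 1, len(items)))

# O(n log n): sort once, then only neighbours need comparing.
def has_duplicates_sorted(items):
    s = sorted(items)
    return any(a == b for a, b in zip(s, s[1:]))

data = [3, 1, 4, 1, 5]
assert has_duplicates_quadratic(data) and has_duplicates_sorted(data)
assert not has_duplicates_sorted([3, 1, 4])
```

On five elements the two are indistinguishable; on a few million, the quadratic version is the difference between milliseconds and hours, and no hardware upgrade closes that gap.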

Disposable answered 13/6, 2009 at 0:6 Comment(0)
M
0

The cost of optimization is very low, so I doubt it will become necessary to drop it. The real problem is finding tasks to utilize all the computing power that's out there -- so rather than drop optimization, we will be optimizing our ability to do things in parallel.

Marquesan answered 13/6, 2009 at 0:6 Comment(0)
R
0

Eventually we won't be able to get faster; we will be limited by space, which is why you see newer processors under 3 GHz and going multi-core. So yes, optimization is still a necessity.

Rubato answered 13/6, 2009 at 0:7 Comment(0)
S
0

Optimizing code will always be required to some degree, and not just to increase execution speed and lower memory usage. Finding the optimal, energy-efficient method of processing information will be a major requirement in data centres, for example. Profiling skills are going to become a lot more important!

Snuffle answered 13/6, 2009 at 0:31 Comment(0)
K
0

Yes, we are at the point where optimization matters and will be there in foreseeable future. Because:

  • RAM speeds increase at a lower pace than CPU speeds. Thus there is a still-widening performance gap between CPU and RAM, and, if your program accesses RAM a lot, you have to optimize access patterns to exploit the cache efficiently. Otherwise the super-fast CPU will be idle 90% of the time, just waiting for the data to arrive.
  • Number of cores increases and increases. Does your code benefit from each added core or does it run on a single core? Here optimization means parallelization, and, depending on the task at hand, it may be hard.
  • CPU speeds will never ever catch up with exponential algorithms and other brute force kinds of things. As nicely illustrated by this answer.
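The last point can be sketched in a few lines of Python (the standard textbook example, not anything from the answer linked above): naive recursive Fibonacci does exponential work by recomputing the same subproblems, and memoization collapses it to linear.

```python
from functools import lru_cache

# Exponential: the naive recursion re-solves the same subproblems
# roughly 1.6^n times. No clock-speed bump catches up with that.
def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# Linear: memoization caches each subproblem, so every value of n
# is computed exactly once.
@lru_cache(maxsize=None)
def fib_memo(n):
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

assert fib_naive(20) == fib_memo(20) == 6765
# fib_memo(500) returns instantly; fib_naive(500) would outlive the hardware.
```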
Kissner answered 13/6, 2009 at 7:6 Comment(0)
B
0

Let's hope network speeds keep up so we can shovel enough data over the wire to keep up with the CPUs...

As mentioned, there will always be bottlenecks.

Brockbrocken answered 13/6, 2009 at 7:49 Comment(0)
P
0

Even if your CPU had as many transistors as there are subatomic particles in the universe, and its clock ran at the frequency of hard cosmic rays, you could still beat it.

If you want to stay ahead of the latest CPU clock, just add another level of nesting in your loops, or add another level of nesting in your subroutine calls.

Or if you want to be really professional, add another layer of abstraction.

It's not hard :-)

Photojournalism answered 13/6, 2009 at 13:51 Comment(0)
A
0

Even though CPUs get faster and faster, you can always optimize

  • network throughput,
  • disk seeks,
  • disk usage,
  • memory usage,
  • database transactions,
  • number of system calls,
  • scheduling and locking granularity,
  • garbage collection.

(These are real-world examples I've seen during the last half year.)

Different parts of complex computer systems are considered expensive at different points of computing history. You have to measure the bottlenecks and judge where to put the effort.

Altimeter answered 13/6, 2009 at 14:6 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.