When is optimization premature? [closed]

10

I see this term used a lot but I feel like most people use it out of laziness or ignorance. For instance, I was reading this article:

http://blogs.msdn.com/b/ricom/archive/2006/09/07/745085.aspx

where he talks about the decisions he makes when implementing the types necessary for his app.

If it were me discussing these considerations for code we still need to write, other programmers would think either:

  1. I am thinking way too far ahead when nothing exists yet, and thus optimizing prematurely.
  2. I am over-thinking insignificant details when no slowdowns or performance problems have been experienced.

or both,

and would suggest just implementing it and not worrying about performance until it actually becomes a problem.

Which approach is preferable?

How do you tell the difference between premature optimization and informed decision-making for a performance-critical application, before any implementation is done?

Disquietude answered 28/1, 2011 at 20:16 Comment(3)
Different in every situation, but a properly designed architecture from the start will allow optimizations to be more easily implemented in the future, when you can determine that they are necessary.Publicspirited
Check out this answer.Corridor
A couple of other articles that you might find interesting: The Fallacy of Premature Optimization and The 'premature optimization is evil' myth.Novercal
13

Optimization is premature if:

  1. Your application isn't doing anything time-critical. (Which means, if you're writing a program that adds up 500 numbers in a file, the word "optimization" shouldn't even pop into your brain, since all it'll do is waste your time.)

  2. You're doing something time-critical in something other than assembly, and still worrying whether i++; i++; is faster than i += 2... if it were really that critical, you'd be working in assembly and not wasting time worrying about this. (Even then, this particular example most likely won't matter.)

  3. You have a hunch that one thing might be a bit faster than the other, but you need to look it up. For example, if you're wondering whether Stopwatch is faster than Environment.TickCount, that's premature optimization: if the difference were bigger, you'd probably be more sure and wouldn't need to look it up.

If you have a guess that something might be slow but you're not too sure, just put a //NOTE: Performance? comment, and if you later run into bottlenecks, check such places in your code. I personally don't worry about optimizations that aren't too obvious; I just use a profiler later, if I need to.
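
If you do later want to settle such a hunch, a quick measurement beats guessing. Here is a minimal C# sketch (the iteration count is arbitrary, and Environment.TickCount merely stands in for whatever call you suspect; this is an illustration, not a rigorous benchmark):

    using System;
    using System.Diagnostics;

    class QuickTiming
    {
        static void Main()
        {
            const int iterations = 1000000;

            // Time the suspect call in bulk; a single call is lost in timer noise.
            Stopwatch sw = Stopwatch.StartNew();
            long sink = 0; // keep a result alive so the loop isn't optimized away
            for (int i = 0; i < iterations; i++)
            {
                sink += Environment.TickCount; // the call under suspicion
            }
            sw.Stop();

            Console.WriteLine("{0} calls took {1} ms (sink={2})",
                              iterations, sw.ElapsedMilliseconds, sink);
        }
    }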

Another technique:

I just run my program, randomly break into it with the debugger, and see where it stopped -- wherever it stops is likely a bottleneck, and the more often it stops there, the worse the bottleneck. It works almost like magic. :)

Limn answered 28/1, 2011 at 20:23 Comment(9)
Thanks man, that's a good technique.Disquietude
With the level of modern compilers, you need to be extremely proficient in assembly (not to mention a master of the architecture you're aiming for, in terms of understanding CPU, Bus, RAM etc.) to beat them.Luff
+1 @Eldad: Yup, definitely. :)Limn
++ Your last paragraph is the method I rely on. Since it gets a lot of doubt, I've tried to explain it with a lot of statistical arguments.Corridor
+1 Massive plus 1 on your last paragraph, that is one of my favorite debugging techniques ;-)Cohort
@Chris: With you, and I like that you call it a debugging technique. These "bottlenecks" are essentially like bugs. (Sorry, I put quotes around "bottlenecks" because I hate the word. Typical performance problems are not narrow places the CPU has to squeeze through. They are more like cancer, innocent-looking lines of code that just happen to spawn massive hard-to-see call trees.)Corridor
@Eldad - at the low level this is true but at the higher level you can almost certainly beat the compiler - e.g. no compiler is going to help you if sorting is a significant cost in your application and you chose an inappropriate sorting algorithm. But I'm sure you meant low level - just wanted to point this distinction out.Fishnet
@George Hawkins - of course. I was referring specifically to using Assembly for optimization. Without being an expert in Assembly, the OS and the hardware you're writing for - you will have a hard time beating the compiler.Luff
@George: I think there's another reason why hand-written assembly code can beat compiled code. It's not because of any weakness in compilers. It's because assembly is such a pain to write that you write no more than you need.Corridor
4

Premature optimization is making an optimization for performance at the cost of some other positive attribute of your code (e.g. readability) before you know that it is necessary to make this tradeoff.

Usually premature optimizations are made during the development process without using any profiling tools to find bottlenecks in the code. In many cases the optimization will make the code harder to maintain, and sometimes it also increases the development time and therefore the cost of the software. Worse... some premature optimizations turn out not to make the code any faster at all, and in some cases can even make it slower than it was before.
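
As a hypothetical illustration of that tradeoff, the two C# methods below compute the same value, but the "clever" one trades clarity for a trick that a modern compiler would apply on its own anyway:

    // Clear: states the intent directly; the compiler will pick the fastest
    // instruction sequence on its own.
    static int TimesNine(int x)
    {
        return x * 9;
    }

    // "Optimized": a shift-and-add trick ((x * 8) + x) that usually buys
    // nothing on modern compilers, while the next maintainer has to stop
    // and decode it.
    static int TimesNineObscure(int x)
    {
        return (x << 3) + x;
    }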

Karolinekaroly answered 28/1, 2011 at 20:18 Comment(4)
Well, sometimes you should "optimize" even if you don't necessarily need it: e.g., I'd say you should never use ArrayList for int instead of List<int>, even if it doesn't make much difference in your particular program. (But in case you're wondering, I'm not the one who gave the -1.)Limn
@Mehrdad: That's more of a maintainability issue rather than an optimization.Ethiopic
@R. Bemrose: It's both -- avoiding the boxing/unboxing is certainly an optimization, and to me it's a more important reason than readability/maintainability.Limn
I'd put type safety on the same level as performance -- I avoid ArrayList<Integer> in Java a lot (I use other classes that use int[] instead), even though it's type-safe.Limn
4

This proverb does not (I believe) refer to optimizations that are built into a good design as it is created. It refers to tasks specifically targeted at performance, which otherwise would not be undertaken.

This kind of optimization does not "become" premature, according to the common wisdom — it is guilty until proven innocent.

Anarchism answered 28/1, 2011 at 20:25 Comment(0)
4

Optimisation is the process of making existing code run more efficiently (faster, and/or with less resource usage).

All optimisation is premature if the programmer has not proven that it is necessary. (For example, by running the code to determine if it achieves the correct results in an acceptable timeframe. This could be as simple as running it to "see" if it runs fast enough, or running it under a profiler to analyze it more carefully.)

There are several stages to programming something well:

1) Design the solution and pick a good, efficient algorithm.

2) Implement the solution in a maintainable, well coded manner.

3) Test the solution and see if it meets your requirements on speed, RAM usage, etc. (e.g. "When the user clicks Save, does it take less than 1 second?" If it takes 0.3s, you really don't need to spend a week optimising it to get that time down to 0.2s. See the sketch after these steps.)

4) IF it does not meet the requirements, consider why. In most cases this means going back to step (1) to find a better algorithm now that you understand the problem better. (Writing a quick prototype is often a good way of exploring this cheaply.)

5) IF it still does not meet the requirements, start considering optimisations that may help speed up the runtime (for example, look-up tables, caching, etc). To drive this process, profiling is usually an important tool for locating the bottlenecks and inefficiencies in the code, so you can make the greatest gain for the time you spend on it.
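
To make step 3 concrete, here is a hedged sketch of checking such a requirement directly (SaveDocument and the 1-second figure are hypothetical stand-ins):

    using System;
    using System.Diagnostics;

    class SaveTimingCheck
    {
        static void Main()
        {
            Stopwatch sw = Stopwatch.StartNew();
            SaveDocument(); // the operation the requirement covers
            sw.Stop();

            Console.WriteLine("Save took {0} ms (requirement: < 1000 ms)",
                              sw.ElapsedMilliseconds);
        }

        static void SaveDocument()
        {
            // Stand-in for the real save logic.
            System.Threading.Thread.Sleep(300);
        }
    }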

I should point out that an experienced programmer working on a reasonably familiar problem may be able to jump through the first steps mentally and then just apply a pattern, rather than physically going through this process every time; but this is simply a shortcut gained through experience.

Thus, there are many "optimisations" that experienced programmers will build into their code automatically. These are not "premature optimisations" so much as "common-sense efficiency patterns". These patterns are quick and easy to implement, but vastly improve the efficiency of the code, and you don't need to do any special timing tests to work out whether or not they will be of benefit:

  • Not putting unnecessary code into loops. (Similar to the optimisation of removing unnecessary code from existing loops, but it doesn't involve writing the code twice!)
  • Storing intermediate results in variables rather than re-calculating things over and over.
  • Using look-up tables to provide precomputed values rather than calculating them on the fly. (See the sketch after this list.)
  • Using appropriately-sized data structures (e.g. storing a percentage in a byte (8 bits) rather than a long (64 bits) uses one-eighth the RAM)
  • Drawing a complex window background using a pre-drawn image rather than drawing lots of individual components
  • Applying compression to packets of data you intend to send over a low-speed connection to minimise the bandwidth usage.
  • Drawing images for your web page in a style that allows you to use a format that will get high quality and good compression.
  • And of course, although it's not technically an "optimisation", choosing the right algorithm in the first place!
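
For illustration, here is how two of those patterns (a look-up table, and keeping loop-invariant work out of the loop) might look in C#; all the names and sizes are hypothetical:

    using System;

    class EfficiencyPatterns
    {
        // Look-up table: sine values precomputed once, instead of calling
        // Math.Sin on every use.
        static readonly double[] SineTable = BuildSineTable(360);

        static double[] BuildSineTable(int steps)
        {
            double[] table = new double[steps];
            for (int i = 0; i < steps; i++)
                table[i] = Math.Sin(2 * Math.PI * i / steps);
            return table;
        }

        // Loop-invariant work hoisted out of the loop: the scale factor is
        // computed once, not on every iteration.
        static double[] ScaleToPercent(int[] rawScores, int maxScore)
        {
            double scale = 100.0 / maxScore;

            double[] result = new double[rawScores.Length];
            for (int i = 0; i < rawScores.Length; i++)
                result[i] = rawScores[i] * scale;
            return result;
        }
    }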

For example, I just replaced an old piece of code in our project. My new code is not "optimised" in any way, but (unlike the original implementation) it was written with efficiency in mind. The result: Mine runs 25 times faster - simply by not being wasteful. Could I optimise it to make it faster? Yes, I could easily get another 2x speedup. Will I optimise my code to make it faster? No - a 5x speed improvement would have been sufficient, and I have already achieved 25x. Further work at this point would just be a waste of precious programming time. (But I can revisit the code in future if the requirements change)

Finally, one last point: The area you are working in dictates the bar you must meet. If you are writing a graphics engine for a game or code for a real-time embedded controller, you may well find yourself doing a lot of optimisation. If you are writing a desktop application like a notepad, you may never need to optimise anything as long as you aren't overly wasteful.

Oneman answered 29/1, 2011 at 9:54 Comment(2)
Thanks, btw I fixed a few typos, hope you don't mind.Disquietude
@Joan Venge: No worries - I'm always missing out characters on this flimsy laptop keyboard :-)Oneman
3

When starting out, just delivering a product is more important than optimizing.

Over time you are going to profile various applications and will learn coding skills that will naturally lead to optimized code. Basically at some point you'll be able to spot potential trouble spots and build things accordingly.

However don't sweat it until you've found an actual problem.

Fascia answered 28/1, 2011 at 20:24 Comment(0)
1

When you have less than 10 years of coding experience.

Verbenaceous answered 28/1, 2011 at 20:20 Comment(0)
1

Having (lots of) experience can be a trap. I know many very experienced programmers (C/C++, assembly) who tend to worry too much because they are used to worrying about clock ticks and superfluous bits.

There are areas such as embedded or real-time systems where these things do count, but in regular OLTP/LOB apps most of your effort should be directed towards maintainability, readability and changeability.

Circumlocution answered 28/1, 2011 at 20:23 Comment(0)
1

Optimization is tricky. Consider the following examples:

  1. Deciding on implementing two servers, each doing its own job, instead of implementing a single server that will do both jobs.
  2. Deciding to go with one DBMS and not another, for performance reasons.
  3. Deciding to use a specific, non-portable API when there is a standard (e.g., using Hibernate-specific functionality when you basically need the standard JPA), for performance reasons.
  4. Coding something in assembly for performance reasons.
  5. Unrolling loops for performance reasons (see the sketch after this list).
  6. Writing a very fast but obscure piece of code.
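
As a sketch of item 5 (the names and the unroll factor are invented for illustration), note how the unrolled version is longer and easier to get wrong:

    // Straightforward: clear, and usually fast enough.
    static long Sum(int[] data)
    {
        long sum = 0;
        for (int i = 0; i < data.Length; i++)
            sum += data[i];
        return sum;
    }

    // Unrolled by four: only worth the extra complexity if a profiler
    // shows this exact loop is a real bottleneck.
    static long SumUnrolled(int[] data)
    {
        long sum = 0;
        int i = 0;
        int limit = data.Length - (data.Length % 4);
        for (; i < limit; i += 4)
        {
            sum += data[i];
            sum += data[i + 1];
            sum += data[i + 2];
            sum += data[i + 3];
        }
        for (; i < data.Length; i++) // remaining elements
            sum += data[i];
        return sum;
    }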

My bottom line here is simple. Optimization is a broad term. When people talk about premature optimization, they don't mean you need to just do the first thing that comes to mind without considering the complete picture. They are saying you should:

  1. Concentrate on the 80/20 rule - don't consider ALL the possible cases, but the most probable ones.
  2. Don't over-design stuff without any good reason.
  3. Don't write code that is not clear, simple and easily maintainable if there is no real, immediate performance problem with it.

It really all boils down to your experience. If you are an expert in image processing and someone asks you to do something you have done ten times before, you will probably apply all your known optimizations right from the beginning, and that is fine. Premature optimization is when you try to optimize something without knowing that it needs optimization to begin with. The reason is simple: it's risky, it wastes your time, and the result will be less maintainable. So unless you're experienced and have been down that road before, don't optimize if you don't know there's a problem.

Luff answered 28/1, 2011 at 20:34 Comment(0)
1

Note that optimization is not free (as in beer):

  • it takes more time to write
  • it takes more time to read
  • it takes more time to test
  • it takes more time to debug
  • ...

So before optimizing anything, you should be sure it's worth it.

That Point3D type you linked to seems like the cornerstone of something, and the case for optimization was probably obvious.

Just like the creators of the .NET library didn't need any measurements before they started optimizing System.String. They would have had to measure during the process, though.

But most code does not play a significant role in the performance of the end product, and that means any effort spent optimizing it is wasted.

Besides all that, most 'premature optimizations' are untested/unmeasured hacks.

Tilghman answered 28/1, 2011 at 20:36 Comment(0)
1

Optimizations are premature if you spend too much time designing them during the earlier phases of implementation. During the early stages, you have better things to worry about: getting core code implemented, unit tests written, systems talking to each other, the UI, and whatever else. Optimizing comes with a price, and you might well be wasting time optimizing something that doesn't need it, all the while creating code that is harder to maintain.

Optimization only makes sense when you have concrete performance requirements for your project, and performance only matters after the initial development, once you have enough of the system implemented to actually measure whatever it is you need to measure. Never optimize without measuring.

As you gain more experience, you can make your early designs and implementations with a small eye towards future optimization; that is, design in a way that makes it easier to measure performance and optimize later on, should that prove necessary. But even then, you should spend little time on optimization in the early phases of development.

Cohort answered 29/1, 2011 at 0:14 Comment(0)
