Cost of using params in C#
Does anyone have advice on using the params keyword in C# for method argument passing? I'm contemplating making overloads for the first 6 arguments and then a 7th using the params feature. My reasoning is to avoid the extra array allocation that the params feature requires. This is for some high-performance utility methods. Any advice? Is it a waste of code to create all the overloads?
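For concreteness, the pattern being considered looks something like this (Sum and its signatures are hypothetical, purely for illustration):

```csharp
using System;

static class Util
{
    // Fixed-arity overloads: calls with up to three arguments
    // bind here and allocate nothing.
    public static int Sum(int a) => a;
    public static int Sum(int a, int b) => a + b;
    public static int Sum(int a, int b, int c) => a + b + c;

    // Fallback: calls with more arguments go through params,
    // which allocates a fresh int[] on every call.
    public static int Sum(params int[] values)
    {
        int total = 0;
        foreach (int v in values) total += v;
        return total;
    }
}
```

Util.Sum(1, 2) binds to the two-argument overload; Util.Sum(1, 2, 3, 4) is applicable only in expanded form, so the compiler emits new int[] { 1, 2, 3, 4 } at the call site.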

Foxhound answered 16/10, 2010 at 20:10 Comment(8)
Sounds like premature optimization to me...Meunier
What @Petar says. This sounds awfully like unnecessary micro-optimization. I don't know C#, but this sounds like something that might even be optimized out by the compiler anyway.Richia
Perhaps you could use the framework as an example and limit overloads to up to 3 parameters, then use params for more. e.g., System.String.Format().Hosmer
I'm very unsure how the compiler could optimize away the array allocation, since the general mechanism is that the arguments are provided as an array that may be iterated any way my code sees fit, skipping arguments at certain positions, etc.Foxhound
Has anyone made any measurements as to what percentage there is to gain from making overloads rather than just sticking to params?Foxhound
Why don't you write a simple benchmark and compare the timings?Carom
Hmm, I guess arrays (params) could actually improve performance in some cases. You only send a reference with the method call, not N values of some size.Phebe
@lasseespholt: You might only be sending a reference in that situation, but you need to construct the array and copy those N*values-of-some-size into it before sending its reference.Bhayani

Honestly, I'm a little bothered by everyone shouting "premature optimization!" Here's why.

  1. What you say makes perfect sense, particularly as you have already indicated you are working on a high-performance library.
  2. Even BCL classes follow this pattern. Consider all the overloads of string.Format or Console.WriteLine.
  3. This is very easy to get right. The whole premise behind the movement against premature optimization is that when you do something tricky for the purposes of optimizing performance, you're liable to break something by accident and make your code less maintainable. I don't see how that's a danger here; it should be very straightforward what you're doing, to yourself as well as any future developer who may deal with your code.

Also, even if you profiled the results of both approaches and saw only a very small difference in speed, there's still the issue of memory allocation. Creating a new array for every method call entails allocating more memory that will need to be garbage collected later. And in some scenarios where "nearly" real-time behavior is desired (such as algorithmic trading, the field I'm in), minimizing garbage collections is just as important as maximizing execution speed.

So, even if it earns me some downvotes: I say go for it.

(And to those who claim "the compiler surely already does something like this"--I wouldn't be so sure. Firstly, if that were the case, I fail to see why BCL classes would follow this pattern, as I've already mentioned. But more importantly, there is a very big semantic difference between a method that accepts multiple arguments and one that accepts an array. Just because one can be used as a substitute for the other doesn't mean the compiler would, or should, attempt such a substitution).
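To make the semantic point concrete: when a params method is invoked in expanded form, the compiler rewrites the call into an explicit array creation. The two calls below are equivalent (a sketch of what gets generated, shown as source):

```csharp
using System;

class Demo
{
    public static int CountArgs(params object[] args) => args.Length;

    static void Main()
    {
        // What you write:
        int a = CountArgs("x", "y", "z");

        // What the compiler effectively generates: a fresh array on
        // every call, which is exactly the allocation the overload
        // pattern is designed to avoid.
        int b = CountArgs(new object[] { "x", "y", "z" });

        Console.WriteLine(a); // 3
        Console.WriteLine(b); // 3
    }
}
```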

Diseur answered 16/10, 2010 at 20:27 Comment(2)
I agree that optimising early isn't necessarily optimising prematurely, and it sounds like the OP probably has good reasons for doing this, but without knowing more about what's going on inside these methods it's difficult to know for sure: acm.org/ubiquity/views/v7i24_fallacy.html bluebytesoftware.com/blog/2010/09/06/…Bhayani
I couldn't have said it any better.Hosmer

Yes, that's the strategy that the .NET framework uses. String.Concat() is a good example: it has overloads for up to 4 strings, plus a fallback that takes a params string[]. That's pretty important here; Concat needs to be fast, and it is there to help users fall into the pit of success when they use the + operator instead of a StringBuilder.

The code duplication you'll get is the price. You'd have to profile to see whether the speedup is worth the maintenance headache.

Fwiw: there are plenty of micro-optimizations like this in the .NET framework. Somewhat necessary because the designers could not really predict how their classes were going to be used. String.Concat() is just as likely to be used in a tight inner loop that is critical to program perf as, say, a config reader that only runs once at startup. As the end-user of your own code, you typically have the luxury of not having to worry about that. The reverse is also true, the .NET framework code is remarkably free of micro-optimizations when it is unlikely that their benefit would be measurable. Like providing overloads when the core code is slow anyway.
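For reference, overload resolution on string.Concat plays out like this in practice (on the classic framework's overload set; newer runtimes add further span-based overloads):

```csharp
using System;

class Program
{
    static void Main()
    {
        // Four arguments bind to Concat(string, string, string, string):
        // no params array is allocated.
        string s1 = string.Concat("a", "b", "c", "d");

        // Five arguments only match Concat(params string[]), so the
        // compiler allocates a string[5] at the call site.
        string s2 = string.Concat("a", "b", "c", "d", "e");

        Console.WriteLine(s1); // abcd
        Console.WriteLine(s2); // abcde
    }
}
```

This is also what the + operator lowers to: "a" + "b" + "c" + "d" compiles to the four-argument Concat overload, not three separate concatenations.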

Rescue answered 16/10, 2010 at 20:27 Comment(1)
Haha, "fall in the pit of success" -- I like that.Diseur

You can always pass a Tuple as a parameter, or, if the types of the parameters are always the same, an IList<T>.

As other answers and comments have said, you should only optimize after:

  1. Ensuring correct behavior.
  2. Determining the need to optimize.
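A sketch of the IList<T> alternative; note (as the comment below this answer points out) that it only avoids per-call allocation if the caller reuses the same list across calls:

```csharp
using System;
using System.Collections.Generic;

static class Util
{
    // Hypothetical helper that takes a caller-owned list instead of params.
    public static int Sum(IList<int> values)
    {
        int total = 0;
        for (int i = 0; i < values.Count; i++) total += values[i];
        return total;
    }
}

class Program
{
    static void Main()
    {
        // Reusing one buffer across many calls amortizes the allocation
        // that a params array would pay on every single call.
        var buffer = new List<int>();
        for (int call = 0; call < 3; call++)
        {
            buffer.Clear();
            buffer.Add(call);
            buffer.Add(call + 1);
            Console.WriteLine(Util.Sum(buffer)); // prints 1, 3, 5
        }
    }
}
```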
Composed answered 16/10, 2010 at 20:12 Comment(1)
The tuple will also incur extra memory allocation like the array. Why would it be better?Foxhound

My point is: if your method is capable of taking an unlimited number of parameters, then the logic inside it works in array style, so having overloads for a limited number of parameters wouldn't help. That is, unless you can implement the limited-parameter versions in a whole different way that is much faster.

For example, if you're handing the parameters on to Console.WriteLine, there's a hidden array creation in there too, so either way you end up with an array.

And, sorry to bother Dan Tao, but I also feel this is premature optimization, because you need to know what difference having overloads with a limited number of parameters would actually make. If your application is that performance-critical, you'd need to implement both ways, run a test, and compare execution times.
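To illustrate the "whole different way" caveat: a fixed-arity overload can be written as straight-line code, while the params version is inherently a loop over an array (MaxUtil is a hypothetical example):

```csharp
using System;

static class MaxUtil
{
    // Fixed arity: straight-line comparisons, no array, no loop.
    public static int Max(int a, int b, int c) =>
        Math.Max(a, Math.Max(b, c));

    // Unlimited arity: the logic is necessarily array-style.
    public static int Max(params int[] values)
    {
        if (values.Length == 0)
            throw new ArgumentException("at least one value required", nameof(values));
        int best = values[0];
        for (int i = 1; i < values.Length; i++)
            if (values[i] > best) best = values[i];
        return best;
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(MaxUtil.Max(3, 9, 5));     // 9, via the 3-arg overload
        Console.WriteLine(MaxUtil.Max(3, 9, 5, 12)); // 12, via params
    }
}
```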

Brittenybrittingham answered 16/10, 2010 at 20:46 Comment(1)
Yes, that's a good point. The context in which I asked the question was, in fact, that I am able to express a meaningful function when one or more arguments are provided.Foxhound

Don't even think about performance at this stage. Create whatever overloads will make your code easier to write and easier to understand at 4am two years from now. Sometimes that means params, sometimes that means avoiding it.

After you've got something that works, figure out whether these methods are a performance problem. It's not hard to make the parameters more complicated, but if you add unnecessary complexity now, you'll never make them less so later.

Substrate answered 16/10, 2010 at 20:14 Comment(0)

You can write a small benchmark to measure the performance, so you have some concrete numbers to base the decision on.

In general, object allocation in .NET is slightly faster than in C/C++, and deletion is much, much faster for small objects -- until you have tens of thousands of them being created per second. Here's an old article regarding memory allocation performance.
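A minimal Stopwatch sketch comparing the two call shapes might look like this (a rough sketch only; for trustworthy numbers, use a proper harness such as BenchmarkDotNet):

```csharp
using System;
using System.Diagnostics;

class Bench
{
    public static int Sum(int a, int b, int c) => a + b + c; // no allocation

    public static int Sum(params int[] v)                    // one int[] per call
    {
        int total = 0;
        foreach (int x in v) total += x;
        return total;
    }

    static void Main()
    {
        const int N = 10_000_000;
        long sink = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) sink += Sum(1, 2, 3);           // binds to the 3-arg overload
        Console.WriteLine($"overloads: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < N; i++) sink += Sum(new[] { 1, 2, 3 }); // same cost as a params call
        Console.WriteLine($"params:    {sw.ElapsedMilliseconds} ms");

        Console.WriteLine(sink); // 120000000
    }
}
```

The sink variable is printed so the JIT cannot eliminate the loops as dead code.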

Midwinter answered 16/10, 2010 at 20:46 Comment(0)
