Most Efficient Way to Scale an Array in Java?

(Apologies if this has been asked before - I can't believe it hasn't, but I couldn't find one. Perhaps my search-fu is weak.)

For years I've "known" that Java has no native function to scale an array (i.e. multiply each element by a constant). So I've been doing this:

for (int i=0; i<array.length; i++) {
  array[i] = array[i] * scaleFactor;
}

Is this actually the most efficient way (in this application, for example, it's an array of around 10000 doubles)? Or is there a better way?

Narrative answered 17/10, 2011 at 9:59 Comment(1)
if you're interested in micro-optimization then what you wrote can typically be sped up by using "loop unrolling": en.wikipedia.org/wiki/Loop_unwinding However, the JVM may already be doing that for you if that part of the code is called repeatedly. The HotSpot VM even has a -XX:LoopUnrollLimit= option to "control" that behavior. – Scripture
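As an illustration of the unrolling that comment describes (my own sketch, not from the original thread; the JIT often does this automatically, so benchmark before adopting it), a manual 4-way unroll might look like:

```java
import java.util.Arrays;

public class UnrolledScale {
    // Scales the array in place, processing four elements per iteration
    // to reduce loop-condition overhead.
    static void scale(double[] a, double k) {
        int i = 0;
        int limit = a.length - (a.length % 4); // largest multiple of 4 <= length
        for (; i < limit; i += 4) {
            a[i]     *= k;
            a[i + 1] *= k;
            a[i + 2] *= k;
            a[i + 3] *= k;
        }
        for (; i < a.length; i++) { // remaining tail elements
            a[i] *= k;
        }
    }

    public static void main(String[] args) {
        double[] a = {1, 2, 3, 4, 5};
        scale(a, 2.0);
        System.out.println(Arrays.toString(a)); // [2.0, 4.0, 6.0, 8.0, 10.0]
    }
}
```

On HotSpot the plain loop is often just as fast, so treat this purely as a demonstration of the technique.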

Looks absolutely fine to me. I can't think of a more efficient way. Obviously try to put that code in one place rather than having the actual code all over the place, but other than that, no obvious problems.

Keitloa answered 17/10, 2011 at 10:1 Comment(0)

The only other suggestion I can offer is to scale lazily, whereby you only pay the cost of the multiplication on accessing each element; e.g.

public class MyArray {
  private final double[] arr;
  private double scale = 1.0;

  public MyArray(double[] arr) {
    this.arr = arr;
  }

  public double getScale() {
    return scale;
  }

  public void setScale(double scale) {
    this.scale = scale;
  }

  public double elementAt(int i) {
    return arr[i] * scale;
  }
}

Obviously this is only better in certain situations:

  • When your array is huge AND
  • You are only accessing a few elements AND
  • You are typically accessing these elements once.

In other situations it's a micro-optimisation with no real benefit on modern CPUs.

Yellows answered 17/10, 2011 at 10:5 Comment(3)
In this case, it might be better to internally use an ArrayList<T> and condense it into an array when requested. – Showy
@0xCAFEBABE: I would say that depends entirely on whether the OP expects the array to vary in size. If not, better to save on memory and use primitives. – Yellows
My application will be reading the entire array at once and it's of a fixed size, so my guess would be that "lazy" scaling and using an ArrayList would both be less efficient. – Narrative

The "better way" is to write array[i] *= scaleFactor; instead of array[i] = array[i] * scaleFactor;. :-)

Really, that's just syntactic sugar though - the compiled output (and hence performance) should be exactly the same. As Jon says, you're not going to be able to get any better performance, but personally I'll take a reduction in typing any day.

Lowdown answered 17/10, 2011 at 10:10 Comment(1)
Even though I like the *= way of writing it, this answer is completely outside the scope of what was asked and would probably be better suited as a comment. – Alternant

The only thing I can add to Adamski's and Jon Skeet's answers is that if it happens to be an array of ints/longs and you're scaling by a power of 2, then you might get a slight improvement by using bit-shift operators. YMMV though, since it will depend on the compiler (and possibly even the VM).
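A sketch of that idea (my own illustration, not from the original answer; it assumes the scale factor is a known power of two, and left-shifting can overflow just like multiplication):

```java
import java.util.Arrays;

public class ShiftScale {
    // Multiplies each element by 2^power using a left shift.
    static void scaleByPowerOfTwo(int[] a, int power) {
        for (int i = 0; i < a.length; i++) {
            a[i] <<= power; // equivalent to a[i] *= (1 << power), barring overflow
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        scaleByPowerOfTwo(a, 3); // scale by 2^3 = 8
        System.out.println(Arrays.toString(a)); // [8, 16, 24]
    }
}
```

Note that modern JIT compilers typically perform this strength reduction themselves when the multiplier is a compile-time-visible power of two.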

Subjunctive answered 17/10, 2011 at 10:29 Comment(3)
"it will depend on the VM": depend on the compiler rather than the VM, surely? – Sterling
Um, yeah. Corrected. It could potentially depend on hardware too (a bit shift could take as long as a multiply), so I sort of stand by my point. :) – Subjunctive
@Raedwald: it will depend on the JIT compiler, which is generally considered part of the VM. Optimization is not considered the job of a Java-to-bytecode compiler. – Laurettalaurette

In Java 8:

import java.util.Arrays;
import java.util.stream.DoubleStream;

double coef = 3.0;
double[] x1 = {1, 2, 3};
double[] x2 = DoubleStream.of(x1).map(d -> d * coef).toArray();

System.out.println(Arrays.toString(x2));

output: [3.0, 6.0, 9.0]

Hokkaido answered 10/10, 2017 at 17:25 Comment(0)

You could work with threads to reduce the runtime: keep this same code but let each thread run a part of the for loop. The resulting program does the same total work, so it is no more efficient than yours; the work is just spread across cores and may therefore finish faster.
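On later JDKs you don't have to manage the threads by hand: since Java 8, Arrays.parallelSetAll splits the index range across the common fork-join pool. A sketch (my own, postdating this answer):

```java
import java.util.Arrays;

public class ParallelScale {
    // Scales the array in place; parallelSetAll partitions the indices
    // across the common fork-join pool. Each element is read and written
    // only by the task handling its own index, so there is no data race.
    static void scale(double[] a, double k) {
        Arrays.parallelSetAll(a, i -> a[i] * k);
    }

    public static void main(String[] args) {
        double[] a = {1, 2, 3, 4};
        scale(a, 10.0);
        System.out.println(Arrays.toString(a)); // [10.0, 20.0, 30.0, 40.0]
    }
}
```

As the comment below points out, for a memory-bound loop like this the parallel version may well not be faster in practice.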

Plier answered 17/10, 2011 at 10:14 Comment(1)
The code is (I'm pretty sure) memory bound, not CPU bound. So, more threads/CPU power won't make any difference. I guess it'll just run slower. However, if you write a program to prove me wrong, I'll upvote! – Serf

Looks optimal to me.

Don't fall for false optimisations like caching the array length in a final local variable outside the loop. That trick helps with Collections, by avoiding repeated calls to .size(), and with Strings, by avoiding calls to .length(), but on an array .length is already a public final field.

Also, looping backwards towards zero might be an assembly-language optimisation, but in a high-level language like Java the VM will take care of any obvious tweaks.

Frogfish answered 17/10, 2011 at 14:10 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.