JavaScript big-O property access performance [closed]
What are the performance characteristics of JavaScript property access (on current implementations)?

  • Is it safe to assume array access is O(1)?
  • If I use an object as a hash table (with string keys), can I safely assume O(1) or O(log n) access time?
  • Are there any common browsers or environments that are significantly faster/slower than the others and that I should keep an eye out for?
  • Do the JavaScript standards have anything to say?

And most importantly:

  • Where can I find good references for these kinds of asymptotic JavaScript performance questions?
That answered 10/9, 2011 at 19:42 Comment(3)
What has the research you've done on your own told you about any of these questions? – Titration
I don't want to do a lot of research on my own on something that a) I'll probably get wrong anyway and b) has very likely been done already by someone more versed in the topic than me. – That
Update: Is there anything that guarantees constant time for accessing a property of an object in JavaScript? – Dariusdarjeeling
At the specification level, every JavaScript object, arrays included, is a collection of string-keyed properties, so there is no functional difference between array indexing and object property access.

For example, check out this test:

var arr = [];
arr[5] = 5;
arr['5'] = 'not 5';
console.log(arr.length, arr);
// output: 6 [empty × 5, "not 5"]
// (indexes 0-4 are holes; older consoles displayed them as undefined)

Numbers are stringified when used as property names, array indexes included.
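To make that stringification concrete, here is a minimal sketch with a plain object (the variable name `obj` is mine, not from the answer):

```javascript
// A numeric property name is converted to its string form before lookup,
// so obj[1] and obj['1'] refer to the same property.
const obj = {};
obj[1] = 'one';
console.log(obj['1']);         // 'one' — same property
console.log(Object.keys(obj)); // ['1'] — the key is stored as a string
```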

See Crockford's website about JavaScript for more information. The especially important part is under the heading "Arrays":

Arrays in JavaScript are also hashtable objects.

Performance isn't really an issue unless you have a ton of objects to keep track of (like 500,000+), in which case you're probably doing something wrong.

There are optimizations you can do, but they don't really make sense unless you're doing something unnatural with JavaScript (like compression algorithms... I worked on an LZMA implementation in JS... bad idea).

Note:

If you have a sparse set (say you define only 10 indexes out of 10,000), you should probably use a regular object. The array will still report a length of 10,000, so a numeric for loop visits every index, whereas Object.keys(obj) on a plain object reports only the 10 keys you actually set. This is a minor optimization that actually makes sense.

Magician answered 10/9, 2011 at 20:25 Comment(2)
I do know how objects and arrays behave, but I really want to know what happens with big numbers. You probably wouldn't need to go to 500,000 anyway: if something turns out to be O(n^2), just a few thousand elements should be enough to start worrying. – That
Read the link. Arrays are hash tables, which means lookups are probably O(log n) (there is a lot of optimization there). My coworker ran some tests and found that it wasn't significant until you got above 50,000 or so. – Magician