What's the purpose of allowing duplicate property names?
I'm reading the MDN JavaScript reference; according to it, the following code no longer returns false:

function haveES6DuplicatePropertySemantics() {
  "use strict";
  try {
    ({ prop: 1, prop: 2 });

    // No error thrown, duplicate property names allowed in strict mode
    return true;
  } catch (e) {
    // Error thrown, duplicates prohibited in strict mode
    return false;
  }
}

In ECMAScript 5 strict mode code, duplicate property names were considered a SyntaxError. With the introduction of computed property names making duplication possible at runtime, ECMAScript 6 has removed this restriction.

My question is: what are the practical benefits of allowing duplicate property names in initializers? I can see how this might sometimes occur when object properties are assigned dynamically, but since order of precedence apparently determines which of the properties is actually set on the newly created object, this seems more than anything like indeterminate behaviour that's best avoided.
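For reference, the precedence in question is simply left-to-right: the last definition of a key wins. A minimal sketch of the behaviour:

```javascript
"use strict";

// In ES6, duplicate keys in an object literal are no longer a SyntaxError,
// even in strict mode; the later definition simply overwrites the earlier one.
const obj = { prop: 1, prop: 2 };

console.log(obj.prop); // 2 — the last definition wins
```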

Gileadite answered 3/6, 2015 at 10:12 Comment(6)
Throws an error for me on Chromium v40: Uncaught SyntaxError: Duplicate data property in object literal not allowed in strict mode, and it is not caught by try..catch. – Schade
It returns true for me on Chromium v42 / Firefox 37; it might require the "Experimental JavaScript features" flag to get the equivalent behaviour on v40? – Gileadite
Are you wanting to execute the script in ES6 (experimental) or in ES5? Do Chromium v42 and Firefox 37 now run in ES6 mode as standard? – Schade
Yes, I believe the latter is the case. kangax.github.io/compat-table/es6 – Gileadite
OK. I would have assumed that in ES6 the restriction on duplication would only be lifted when the property names were computed, i.e. in [..]. Seems like a backward step to me, or they got it wrong. It will be interesting to see what answers present themselves. – Schade
The argumentation seems plain wrong. At runtime I am free to do whatever the heck I want with my objects, including changing my properties. The point of that restriction was (I thought) to avoid typos. Fortunately, ESLint can still deal with this for us... – Eberto

what are the practical benefits of allowing duplicate property-names in the initializers

There are no practical benefits as such. Now that ECMAScript 6 has computed property keys, the actual value of a key may be determined only at runtime. As it stands, keys can be added to objects at runtime, overwriting any existing key and value; ES6 extends the same behavior to object initializers and removes the compile-time duplicate-key check.

Quoting Allen Wirfs-Brock from the discussion on the es-discuss mailing list:

The plan has been that runtime validation would be performed for any object literals containing computed property keys and the current spec. draft contains pseudo code for doing the checks. However a bug report (https://bugs.ecmascript.org/show_bug.cgi?id=1863 ) points out an issue with the current spec. For example, the current spec. throws an error on:

({get a() {},
  get ["a"]() {}
});

but not on:

({get ["a"]() {},
  get a() {}
});

Basically, it isn't sufficient to only check for an already defined property key when processing property definitions that contain a computed key. If any computed keys exist, the checking has to be done even for the definitions that have literal property names. And it isn't sufficient to just consider the property keys and the data/accessor property distinction; the validation also has to take into account the syntactic form of the definition and whether or not strict mode applies.

It turns out that even in pseudo code, this is a fairly complicated set of runtime validation rules to apply. I'm having a hard time convincing myself that the runtime computational and meta data costs of this dynamic validation is justified. It costs too much and the actual benefit is pretty small.

For that reason, I propose that we drop this runtime validation of object literals (and class definition). We would still have the static validation and early errors for property definitions that don't have computed keys. But anything that makes it past those checks (including all property definitions with computed names) are just processed sequentially with no duplicate name checking.

So the original proposal was to retain the compile-time check for non-computed keys, but as per this comment even that check was dropped later. In Revision 26:

Eliminated duplicate property name restrictions on object literals and class definitions

Adriell answered 3/6, 2015 at 10:44 Comment(8)
So you think it was too difficult to code a compile-time check to maintain the "strict" rules defined in ES5, and it was easier just to make "strict" less "strict" to comply with ES6 computed property keys? Or is it actually written in the current ES6 spec that it must be dropped? I can't find an answer in the spec. – Schade
@Schade It is mentioned in Annex E: "In ECMAScript 2015, it is no longer an early error to have duplicate property names in Object Initializers." – Adriell
OK, thank you for pointing me to the reference. To me it seems strange to remove such an early error; I don't understand the reasoning behind it, other than possible implementation difficulties. – Schade
Adding keys one at a time has explicit precedence, but initializing with two identical keys like {prop: 1, prop: 2} has a precedence I find kind of surprising. Aren't they sure to be identical at run-time anyway? I'm also not sure why the reference you provided doesn't mention strict mode. – Gileadite
@Gileadite Keys in object literals are processed left-to-right. And the reference doesn't mention anything about strict mode because the check is gone entirely, irrespective of the mode. – Adriell
@Xotic750: To be specific, it isn't that thefourtheye thinks it is easier to make the rules less strict; rather, it is Allen Wirfs-Brock, a Mozilla engineer, who thinks this. If the people who implement Firefox consider it a better solution, I'm inclined to believe them because, you know, they actually have experience writing a web browser/JavaScript engine, whereas we here at SO are mostly good at theorizing. – Trainman
While I see how my question could be interpreted that way, it was not my meaning. I wanted to know if he had any idea, or anything specific, as to why it was so; my conjecture was that it was too difficult. I was not suggesting that it was that way because he considered and ruled it so. And it was great that he knew where to find the information and provided it for us. :) – Schade
I was just about to say that duplicate normal keys could still have raised an error, until you updated your answer to link bug #1863; I think that rather covers my question, although I definitely agree with @RobertRossmann's comment that a linter should complain here. I'm not sure why the bug report uses ({["a"]: 0, a: 0}) as an example, since that's easy to check for, but obviously something like var b = "a"; ({[b]: 0, a: 0}) is possible and not statically determinable. – Gileadite
