Regarding `Foo()` being identical to `Foo(undefined)`

In the thread "Why is BigInt() broken?" I noticed this comment by @ljharb:

We’ve tried pretty hard to avoid that legacy pattern, and make absence and undefined always be treated the same.

I can't help but wonder whether this introduces a potential issue.

If Foo() is identical to Foo(undefined), then that seems to imply that Foo(undefined, undefined) is also the same as Foo(), as would any number of undefined arguments. Even if this seeming implication is false, the original statement seems to contradict the behavior of arguments in general.

function Foo(...args) {
   console.log(`argument count: ${args.length}`);
}
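
Calling it with different numbers of explicit undefined arguments makes the distinction observable:

Foo();                      // argument count: 0
Foo(undefined);             // argument count: 1
Foo(undefined, undefined);  // argument count: 2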

In the end, such a convention seems to be just as awkward as property presence detection done like this:

let a = { bar: undefined };
if (a.bar === undefined) {
   // ...
}

Please do not argue about the poor nature of such a test. Any such argument is moot since, sadly, code with similar constructions exists in the wild.

In both cases, the detectable presence of a value is being ignored. I'm not arguing that the legacy pattern should be continued. I would simply like to understand the desire to change this well-known, and occasionally useful, pattern.
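
Both "detections" are straightforward to perform; the idioms above simply choose not to look:

let a = { bar: undefined };
let b = {};

console.log('bar' in a);           // true  - property present, its value just happens to be undefined
console.log('bar' in b);           // false - property absent
console.log(a.bar === undefined);  // true
console.log(b.bar === undefined);  // true  - the value test cannot tell the two apart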

Code exists in the wild for everything that's possible, but "code exists" isn't a justification to encourage it as a pattern. Many things are detectable that are often ignored in builtins - methods coerce to number/string instead of throwing on non-numbers/non-strings, as of ES6 most things that expect an Object no longer throw on non-nullish primitives, etc. The strong idiom in the language itself is to be liberal in what is accepted - in other words, to intentionally ignore detectable things when it makes sense, is intuitive and/or wouldn't cause incorrect or surprising behavior.

Note that the above is an argument for the general case: that it is both acceptable and good for the language to make design choices of this kind.

Separately, a function that checks arguments.length, or that behaves differently depending on whether a parameter is missing or an explicit undefined is passed, is highly likely to be a very confusing one - and I'd guess wouldn't be considered very idiomatic.
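
Default parameter values already bake this in: an explicitly passed undefined triggers the default exactly as if the argument had been omitted.

function greet(name = 'world') {
  return `hello, ${name}`;
}

greet();           // 'hello, world' - argument absent
greet(undefined);  // 'hello, world' - explicit undefined behaves the same
greet(null);       // 'hello, null'  - null is a real value, so no default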

Similarly, writing a function that checks whether an object lacks a property versus has it set to undefined is highly unfriendly to inheritance patterns - subclassing in particular.
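
One way to see that friction (a minimal sketch; the names are made up for illustration): a property declared only on a prototype, e.g. via an accessor, is "absent" to an own-property check even though it reads as undefined just like a plain data property would.

class Options {
  get verbose() { return undefined; }  // accessor lives on the prototype
}

const fromClass = new Options();
const fromLiteral = { verbose: undefined };

console.log(fromClass.verbose === fromLiteral.verbose);  // true - identical observable value
console.log(fromLiteral.hasOwnProperty('verbose'));      // true
console.log(fromClass.hasOwnProperty('verbose'));        // false - a presence check treats it differently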

While I personally would like BigInt() to return 0n and BigInt(undefined) to throw, in the general case I think it is a very bad idea to differentiate between "absent" and "undefined", and my comment on that thread was pointing out the unfortunate situation we find ourselves in, where neither option is free of downsides.
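
For context, Number already exhibits that legacy pattern: absence and an explicit undefined produce different results.

console.log(Number());           // 0   - no argument at all
console.log(Number(undefined));  // NaN - explicit undefined
console.log(String());           // ""
console.log(String(undefined));  // "undefined"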


Of all the things you stated, there's only one that I cannot agree with:

Separately, a function that checks arguments.length, or that behaves differently depending on whether a parameter is missing or an explicit undefined is passed, is highly likely to be a very confusing one - and I'd guess wouldn't be considered very idiomatic.

I would agree if something like Foo(...args) either didn't exist or didn't have the potential for massive side effects depending on the value of args.length. Depending entirely on what the function actually does with its arguments, the entire program could end up in a very different state just due to the presence of an explicit undefined argument.
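
As a concrete (entirely hypothetical) illustration, here is a jQuery-style getter/setter helper whose behavior pivots on the argument count:

function prop(obj, key, ...rest) {
  if (rest.length === 0) {
    return obj[key];   // nothing supplied after the key: act as a getter
  }
  obj[key] = rest[0];  // something was supplied, even an explicit undefined: act as a setter
}

const config = { timeout: 5000 };
prop(config, 'timeout');             // 5000 - reads the current value
prop(config, 'timeout', undefined);  // silently overwrites timeout with undefined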

I guess what I'm looking for is this: why is the special case of a single undefined argument being treated as equivalent to no arguments considered preferable, when under normal circumstances (like function Foo(arg1) {}) the argument must either be ignored or explicitly checked for undefined to implement that special-case processing?

Similarly, writing a function that checks whether an object lacks a property versus has it set to undefined is highly unfriendly to inheritance patterns - subclassing in particular.

Congratulations, you've just reiterated one of the arguments against private fields. They do exactly this, but let's not dig into that dead subject.

The strong idiom in the language itself is to be liberal in what is accepted - in other words, to intentionally ignore detectable things when it makes sense, is intuitive and/or wouldn't cause incorrect or surprising behavior.

This is precisely the point: when it makes sense. BigInt has to do something roughly similar to parseInt on its argument to get a workable number. parseInt(undefined) returns NaN. I guess a similar parallel can be drawn to the zero-argument case. However, in that case, BigInt has the excuse of being a default constructor to explain why it initializes the value. Even given that, it doesn't excuse why, instead of being "liberal" and treating undefined like "undefined" or "foo" and throwing, it goes out of its way and performs an extra check to excuse the otherwise illegal value.

I agree that it makes sense for BigInt to have them differ, but in the general case, I don't think it does make sense. The challenge here is: do we make BigInt more useful yet more inconsistent? Or less useful yet perfectly consistent?

For the general case, we're probably going to have to "agree to disagree". Even given the dynamic nature of ES, it doesn't make sense to say that [void 0] is semantically equivalent to [] and should be processed by all objects in the same fashion. I would parallel that to the difference between var a = {} and var b = new Proxy(a, {}). Value-wise, both [void 0][0] and [][0] return the same thing. However, there is a distinct identity difference between them.
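
The difference is trivially observable:

const a = [void 0];  // one element whose value is undefined
const b = [];        // no elements at all

console.log(a[0], b[0]);          // undefined undefined - indistinguishable by value
console.log(a.length, b.length);  // 1 0
console.log(0 in a, 0 in b);      // true false - the presence of the slot is detectable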

As for the challenge, which one is which? From where I sit, the distinction between the two cases is important enough that it should not be ignored, since honoring it opens the possibility of useful features. As such, I see it as a case of more useful and still perfectly consistent. So from that standpoint, I don't see the challenge.