Why is BigInt broken?

When I call Number() I get 0
When I call String() I get ""
When I call Array() I get []
When I call Object() I get {}
When I call Boolean() I get false
When I call BigInt() I get Uncaught TypeError: Cannot convert undefined to a BigInt

BigInt() SHOULD return 0n


The closest discussion I could find about the BigInt constructor was this issue:

The very first draft spec text already had the behaviour that BigInt() would be treated as BigInt(undefined), and it said that turning undefined into a BigInt should throw.

Note: BigInt also differs in that it cannot be called as a constructor with new; it is more like Symbol.

new Number(0) // Number(0)
new BigInt(0) // Uncaught TypeError: BigInt is not a constructor

@aclaymore do you have thoughts on how I could potentially gather more insight into why these decisions were made? The way I've always thought about JS is that anything representing a primitive type should follow the same interface as the original types in JS. BigInt is a constructor for a new primitive type, which in my opinion should follow the interface of the rest of the JS types.

TC39's conventions are changing over time. For example, you can do new Number(0) but not new Symbol() or new BigInt(1) since, around ES6 times, it was decided that new for primitive types was a footgun, and BigInt followed Symbol's pattern. Similarly, there's been a general tendency to treat missing arguments the same as undefined by default, which BigInt followed here.

I wouldn't be opposed to creating a special case for BigInt() returning 0n. This could be pursued as a normative (needs-consensus) pull request against the specification. It would help if you had use cases in mind to motivate the change, as consistency as a goal often cuts in multiple directions (as it does here).


@littledan thanks for the response!

The point of this discussion is being able to rely on the language for predictable defaults for a type.

I often use things like the Boolean constructor or the Number constructor to set "empty" values of a type, so I was quite surprised not to see the same interface from BigInt when I came across this caveat.

I'm very much in favor of not allowing the new keyword for a primitive type. I'd just like to be able to fall back on the language to tell me what the "empty" value should be for a given type.

A good use case for something like this would be a schema builder where you would pass constructors as the type of data a field should hold, and the tool would then initialize default values for you.

If you'd like to see some demo code, I could write something crude to illustrate it. Just let me know.
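
As a quick illustration in the meantime, here is a rough sketch of the kind of schema builder I mean (all names here are hypothetical):

// Hypothetical schema builder: field types are given as constructors, and
// defaults are produced by calling each constructor with no arguments.
const userSchema = {
  name: String,     // '' by default
  age: Number,      // 0 by default
  active: Boolean,  // false by default
  balance: BigInt,  // BigInt() throws today
};

function buildDefaults(schema) {
  const result = {};
  for (const [field, Type] of Object.entries(schema)) {
    result[field] = Type(); // fine for String/Number/Boolean, throws for BigInt
  }
  return result;
}

// buildDefaults(userSchema); // Uncaught TypeError: Cannot convert undefined to a BigInt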

With the idea that BigInt() === 0n while BigInt(undefined) still throws: there is precedent for behaviour that differs depending on whether an argument is explicitly passed as undefined or not passed at all:

[].reduce(v => v, undefined) === undefined;
[].reduce(v => v); // TypeError: Reduce of empty array with no initial value
Number() === 0;
Number(undefined); // NaN (which never compares equal, not even to itself)
String() === '';
String(undefined) === 'undefined';

@aclaymore this is a great point, so the implementation should check that arguments.length === 0

If we want BigInt(undefined) to continue throwing, I'm fine with that. Either an error or NaN would make sense to me.
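
To make the distinction concrete, here is a userland sketch (a hypothetical wrapper, not a spec change) that uses an argument-count check so a missing argument yields 0n while an explicit undefined still throws:

// Hypothetical wrapper: no arguments yields 0n; an explicit undefined is
// still forwarded to BigInt() and therefore still throws.
function bigIntDefault(...args) {
  if (args.length === 0) return 0n;
  return BigInt(args[0]);
}

bigIntDefault();    // 0n
bigIntDefault(42);  // 42n
// bigIntDefault(undefined); // TypeError: Cannot convert undefined to a BigInt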


We’ve tried pretty hard to avoid that legacy pattern, and make absence and undefined always be treated the same.


Another aspect to this is that if the Records & Tuples proposal moves forward with its current behaviour, then BigInt won't be the only odd one out: Record() will also throw, giving us this:

// primitives:
BigInt(); // throws
Boolean(); // false
Number(); // 0
Record(); // throws
String(); // ''
Symbol(); // Symbol()
Tuple(); // #[]

// non-primitive:
Date(); // returns a string
Error(); // returns a new Error() object
Object(); // {}
Map(); // requires new
Set(); // requires new
Uint16Array(); // requires new
// ...
// most other 'constructors' seem to require new

EDIT:

Tuples do still currently inherit the [].reduce behavior of treating undefined and absence of the initial value differently.

#[].reduce(v => v); // throws
#[].reduce(v => v, undefined); // undefined

Why is there a move to avoid this pattern? What’s the modern way to figure out what the empty value of a type is?

I don't believe that "provide an empty value for a type" is a thing that anybody's been concerned with doing. There's no such value for Symbols or (if objects are included) for Map, Set, WeakMap, WeakSet, WeakRef, Promise, FinalizationRegistry, etc.

None of those are primitive types...

Symbols are.

@ljharb Yep, and when you pass nothing into it, you get something back that is not an error. Though I don't think you could represent an "empty" value for a symbol, because it's not really a value in the first place; it's more like a reference.

I'm new to the TC39 working group. In my initial question to you I was asking why there is a move away from this pattern; I'm not coming at this discussion aggressively. I'm only interested in learning whether this was a mistake or oversight, whether there's a possibility of fixing it, and what the reasoning is behind the move away from it.

I'm not here to argue with you about macro differences.

So if you can contribute to the understanding of why these decisions were made, please share your thoughts. But keep it positive.

Apologies if the things I've said so far haven't come across as positive; "positive" was certainly my intent. My response was to the part of your question that presumes that "the empty value of a type" is a thing that every type has (it's not) or a thing that there's consensus should exist (unfortunately, there isn't).

I believe the motivation is that it's simpler, and mirrors the idioms found in user code, to define named arguments and check if they're undefined or not. Userland code does not often check arguments.length - but the decisions predate my involvement, so I'm not certain about the motivations.
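
For what it's worth, default parameters encode exactly that idiom: a missing argument and an explicit undefined are treated alike. A small sketch (the helper name is hypothetical):

// Default parameters treat a missing argument and an explicit undefined the same.
function toBigIntOrZero(value = 0n) {
  return value;
}

toBigIntOrZero();          // 0n
toBigIntOrZero(undefined); // 0n (same result, no arguments.length check needed)
toBigIntOrZero(42n);       // 42n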

Ah, the joys of JavaScript growing more and more internally inconsistent over time. When (oh when) are people going to wake up and realize that consistency and cohesiveness are of the utmost importance in language design? Making BigInt() throw is neither consistent nor cohesive, causing some cases where coders have to spend more time debugging or have to write many extra lines of code just to fit a square peg into a round hole. Previously, with JavaScript, one could evaluate the truthiness of any primitive by doing x != null && x.constructor().valueOf() !== x. Now, one has to do x != null && (typeof x === "bigint" ? x !== 0n : x.constructor().valueOf() !== x). Ah, the joys of internal inconsistency indeed.
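
Spelled out as a helper (the function name is hypothetical), the check described above looks like this:

// x is considered truthy if it differs from its constructor's no-argument result;
// the bigint branch has to be special-cased because BigInt() throws.
function isTruthyPrimitive(x) {
  return x != null &&
    (typeof x === "bigint" ? x !== 0n : x.constructor().valueOf() !== x);
}

isTruthyPrimitive(0);   // false
isTruthyPrimitive(42);  // true
isTruthyPrimitive("");  // false
isTruthyPrimitive(0n);  // false
isTruthyPrimitive(1n);  // true (only because of the special-cased branch)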

To be fair, you can't rely on .constructor or .valueOf existing, so the way to evaluate the truthiness of any primitive has always been, and will always remain, !!x, which still works fine with BigInt.
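
For example, !!x handles bigints with no special-casing:

!!0n;       // false
!!1n;       // true
!!'';       // false
!!Symbol(); // true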


While consistency is very important, I'd say JavaScript is evolving over time towards a language that works well for everyone. I'm sure we've all worked on a codebase before where you kept repeating a pattern just to be consistent while, actually, the whole pattern could be improved. Just like we're trying to move away from things like new String('foo') being a thing (because it can introduce nasty bugs later), we can slowly move away from things like Number() returning 0.

If you didn't specify what the number should be, why is it 0? I guess you could say it is "because it's falsy and all other types do a similar thing", but I'm not convinced that that is a good reason. String() returns an empty string, Symbol() returns a (truthy!) symbol, but nowhere did I specify that that's the behavior I wanted. If I want an empty string, why would I use String() over ''? If I want zero as a bigint, why would I write BigInt() rather than BigInt(0) or just 0n? I think it's good that the language is moving towards making people choose the right approach by design rather than keeping in flaws that provide alternate, (subjectively) inferior options that can make their life harder in the future. These functions, in my opinion, should all throw if you didn't provide an explicit argument, because it's not clear what they should do in the first place.

The BigInt() / BigInt(undefined) TypeError also caused problems for WebAssembly when passing default values for i64:


@vrugtehagel

These functions, in my opinion, should all throw if you didn't provide an explicit argument, because it's not clear what they should do in the first place.

While you said a lot here, it only shows that you might not understand the point of a default constructor. A default constructor is supposed to give you back a fully initialized value in an acceptably "neutral" state. That means for String() it should give back an empty string. How is that neutral? What it gives back must be a string, but it cannot have any content, as none was specified. The same goes for Number(). It must give back a number. The only existing number that can reasonably be thought of as "neutral" is 0, since it is its own negative. Likewise Symbol() must return a Symbol. The problem here is that there isn't even a concept of a "neutral" value when it comes to Symbols. So in this case, the constructor must return a fully ready-to-use value. This is the reason BigInt() should return 0n.

The problem with BigInt(undefined) IMO is that it should be parallel to something like this:

let a = BigInt();  // Create and initialize a BigInt to a neutral value
a.setValue(undefined);  // Imagining BigInt had a setValue function

The second line should either throw or assign NaN to a as there is no BigInt value equivalent to void 0. Likewise, the constructor call BigInt(undefined) should throw as it is being told to initialize itself to a value for which it has no equivalent.

As for why someone would use String() over '' or BigInt() over 0n, it could be because they want to use pre-boxed values. Called with new, the constructors return an Object rather than a primitive (except Symbol, which is why I don't balk at it not having a new Symbol() equivalent). This has certain advantages, such as the ability (though weird and possibly considered an anti-pattern) to attach functions and data.
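
As a small illustration of that boxing difference (assuming new is used to get the wrapper object):

// A boxed String (created with new) is an object and can carry extra data.
const boxed = new String('hello');
boxed.source = 'user-input';
boxed.valueOf(); // 'hello'
boxed.source;    // 'user-input'

// A primitive string cannot:
const prim = 'hello';
// prim.source = 'user-input'; // ignored in sloppy mode, TypeError in strict mode
typeof boxed; // 'object'
typeof prim;  // 'string'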

I find it rather disturbing that there is growing pressure on the language to defend against developer mistakes that can "introduce nasty bugs", in lieu of growing pressure on developers to understand how to use the language. Making the language take up the role of developer nanny only forces limitations on the language that encroach on its usefulness. Making the developer take up responsibility for learning how to use a language well leads to more innovative achievements and a more flexible, more capable language. All in all, I don't understand arguments like the one you gave.