Why is BigInt broken?

Number()  // 0
String()  // ""
Array()   // []
Object()  // {}
Boolean() // false
BigInt()  // Uncaught TypeError: Cannot convert undefined to a BigInt

BigInt() SHOULD return 0n

The closest discussion I could find about the BigInt constructor was this issue:

The very first draft spec text already had the behaviour that BigInt() would be treated as BigInt(undefined), and says that turning undefined into a BigInt should throw.

Note: BigInt also differs in that it cannot be called with new; in that respect it is more like Symbol.

new Number(0) // a Number wrapper object
new BigInt(0) // Uncaught TypeError: BigInt is not a constructor

@aclaymore do you have thoughts on how I could gather more insight into why these decisions were made? The way I've always thought about JS is that anything representing a primitive type should follow the same interface as the original types in JS. BigInt is a constructor for a new primitive type, which in my opinion should follow the interface of the rest of the JS types.

TC39's conventions have changed over time. For example, you can do new Number(0) but not new Symbol() or new BigInt(1): around the time of ES6, it was decided that new for primitive types was a footgun, and BigInt followed Symbol's pattern. Similarly, there has been a general tendency to treat missing arguments the same as undefined by default, which BigInt followed here.
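A quick illustration of that convention shift (the exact display of the wrapper object varies by engine):

new Number(0); // a Number wrapper object
new Symbol();  // TypeError: Symbol is not a constructor
new BigInt(1); // TypeError: BigInt is not a constructor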

I wouldn't be opposed to creating a special case for BigInt() returning 0. This could be pursued as a normative (needs-consensus) pull request against the specification. It would help if you had use cases in mind to motivate the change, as consistency as a goal often cuts in multiple directions (as it does here).

@littledan thanks for the response!

The point of the discussion is being able to rely on the language to provide predictable defaults for a type.

I often use things like the Boolean constructor or the Number constructor to set "empty" values of a type, so I was quite surprised not to see the same interface from BigInt when I came across this caveat.

I'm very much in favor of not requiring the new keyword for a primitive type. I'd just like to be able to fall back on the language to tell me what the "empty" value should be for a given type.

A good use case for something like this would be a schema builder where you would pass constructors as the type of data a field should hold, and the tool would then initialize default values for you.

If you'd like to see some demo code, I could write something crude to illustrate it. Just let me know.
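A minimal sketch of that schema-builder idea, assuming a hypothetical buildDefaults helper (not an existing library):

function buildDefaults(schema) {
  const result = {};
  for (const [field, Type] of Object.entries(schema)) {
    // Call each constructor with no arguments to get the type's "empty" value.
    result[field] = Type();
  }
  return result;
}

buildDefaults({ count: Number, name: String, active: Boolean });
// { count: 0, name: '', active: false }

buildDefaults({ balance: BigInt });
// TypeError: Cannot convert undefined to a BigInt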

With the idea that BigInt() === 0n while BigInt(undefined) throws: there is precedent for different behaviour depending on whether an argument is passed as undefined or not passed at all:

[].reduce(v => v, undefined) === undefined;
[].reduce(v => v); // TypeError: Reduce of empty array with no initial value
Number() === 0;
Number(undefined); // NaN (note that NaN === NaN is false)
String() === '';
String(undefined) === 'undefined';

@aclaymore this is a great point, so the implementation should check that arguments.length === 0.

If we want BigInt(undefined) to continue throwing, I'm fine with that.
Either an error or NaN would make sense to me.
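To make that concrete, a hypothetical userland wrapper with those semantics (BigIntWithDefault is made up for illustration) could look like:

function BigIntWithDefault(...args) {
  // Zero arguments: return the "empty" value, mirroring Number() and String().
  if (args.length === 0) return 0n;
  // Otherwise defer to BigInt, so an explicit undefined still throws.
  return BigInt(...args);
}

BigIntWithDefault();          // 0n
BigIntWithDefault(42);        // 42n
BigIntWithDefault(undefined); // TypeError: Cannot convert undefined to a BigInt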

We’ve tried pretty hard to avoid that legacy pattern, and make absence and undefined always be treated the same.

Another aspect to this is that if the Records & Tuples proposal moves forward with its current behaviour, then BigInt won't be the only odd one out: Record() will also throw, giving us this:

// primitives:
BigInt(); // throws
Boolean(); // false
Number(); // 0
Record(); // throws
String(); // ''
Symbol(); // Symbol()
Tuple(); // #[]

// non-primitive:
Date(); // returns a string
Error(); // returns a new Error() object
Object(); // {}
Map(); // requires new
Set(); // requires new
Uint16Array() // requires new
// ...
// most other 'constructors' seem to require new

EDIT:

Tuples do still currently inherit the [].reduce behaviour of treating an explicitly passed undefined initial value differently from an absent one.

#[].reduce(v => v); // throws
#[].reduce(v => v, undefined); // undefined

Why is there a move to avoid this pattern? What’s the modern way to figure out what the empty value of a type is?

I don't believe that "provide an empty type value" is a thing that anybody's been concerned with doing. There's no such value for Symbols, or (if objects are included) for Map or Set, or WeakMap or WeakSet, or WeakRef, or Promise, or FinalizationRegistry, etc.

None of those are primitive types...

Symbols are.

@ljharb Yep, and when you pass nothing into it, you get something back that is not an error. Although I don't think you could represent an "empty" value for a symbol, because it's not really a value in the first place; it's more like a reference.

I'm new to the TC39 working group. In my initial question to you I was asking why there is a move away from this pattern; I'm not coming at this discussion aggressively. I'm only interested in learning whether this was a mistake or an oversight, whether there's a possibility of fixing it, or what the reason is behind the move away from it.

I'm not here to argue with you about macro differences.

So if you can contribute to the understanding of why these decisions were made, please share your thoughts. But keep it positive.

Apologies if the things I've said so far haven't come across as positive; "positive" was certainly my intent. My response was to the part of your question that presumes that "the empty value of a type" is a thing that every type has (it's not) or a thing that there's consensus should exist (unfortunately, there isn't).

I believe the motivation is that it's simpler, and mirrors the idioms found in user code, to define named arguments and check whether they're undefined or not. Userland code does not often check arguments.length, but the decisions predate my involvement, so I'm not certain about the motivations.
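For example, a minimal sketch contrasting the two patterns (greet and legacyGreet are made-up names):

// Modern idiom: absence and an explicit undefined are treated the same.
function greet(name) {
  if (name === undefined) name = 'world';
  return 'hello ' + name;
}
greet();          // 'hello world'
greet(undefined); // 'hello world'

// Legacy pattern: distinguish them via arguments.length.
function legacyGreet(name) {
  if (arguments.length === 0) name = 'world';
  return 'hello ' + name;
}
legacyGreet();          // 'hello world'
legacyGreet(undefined); // 'hello undefined'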

Ah, the joys of JavaScript growing more and more internally inconsistent over time. When (oh when) are people going to wake up and realize that consistency and cohesiveness are of the utmost importance in language design? Making BigInt() throw is neither consistent nor cohesive, causing some cases where coders have to spend more time debugging or have to write many extra lines of code just to fit a square peg in a round hole. Previously, with JavaScript, one could evaluate the truthiness of any primitive by doing x != null && x.constructor().valueOf() !== x. Now, one has to do x != null && (typeof x === "bigint" ? x !== 0n : x.constructor().valueOf() !== x). Ah, the joys of internal inconsistency indeed.

To be fair, you can't rely on .constructor or .valueOf existing, so the way to evaluate the truthiness of any primitive is always and will always remain !!x, which still works fine with BigInt.
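A quick illustration of that:

!!0n;       // false
!!1n;       // true
!!'';       // false
!!Symbol(); // true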
