Why can't `#private` fields return undefined before initialization?

Many code bases have constructors that call methods, with subclasses overriding those methods and relying on class properties. When converting such code bases to use #private fields for implementation details, it simply doesn't work, due to the temporal dead zone of private fields.

Why can't they just return undefined during a super() call, and why can't they just be accessed or written in a subclass's method if super() calls a method that the subclass overrides or extends?

Private fields don't have a temporal dead zone; they simply don't exist on the instance until the base class constructor finishes executing. They can't be installed before that point because the base constructor is the thing that creates the object in the first place.

As a general rule, you should avoid calling derived methods in base class constructors. Derived methods can (as in this case) rely on state which the derived class constructor sets up, and that hasn't happened when the base class constructor is running. This is true in every OO language; it's a good rule to learn.
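A minimal sketch of that pitfall (class names invented), using a public field so the access quietly yields undefined instead of throwing:

```javascript
let seen;

class Base {
  constructor() {
    // Runs the most-derived override before Derived's field
    // initializers have executed.
    this.describe();
  }
  describe() {}
}

class Derived extends Base {
  label = "ready"; // installed only after super() returns
  describe() {
    seen = this.label;
  }
}

const d = new Derived();
// seen is undefined: the override ran before `label` existed.
```

With a #private field in place of `label`, the same call would throw a TypeError instead.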

As to your particular suggestion, falling back to undefined for private fields which haven't been set up wouldn't help in most cases, because you'd usually actually need to know what the value of the field should be. If you're just using the fallback to undefined to be defensive against running before the derived class constructor has executed, you can use if (#x in this) { ... } instead of if (this.#x === void 0) { ... }.
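For instance, a defensive override guarded with the ergonomic brand check (a hypothetical example; the class names are invented):

```javascript
class Base {
  constructor() {
    this.init?.();
  }
}

class Counter extends Base {
  #count = 0;
  init() {
    // Guard with the brand check instead of reading the
    // field and risking a TypeError during super().
    if (#count in this) this.#count += 1;
  }
  get count() {
    return #count in this ? this.#count : undefined;
  }
}

const c = new Counter(); // no throw: init() skipped the increment
c.init();                // field exists now, so the increment runs
```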


It's effectively like a temporal dead zone. It currently throws an error, just like a TDZ does. The error message doesn't make a practical difference for most use cases.

In the vast majority of cases, this is the same object throughout a class hierarchy. Straying away from that is not even practical for class inheritance, except with Proxy (which we all know breaks with private fields).

I'm implying that private fields could have been designed differently (I'm not asking why they are the way they currently are), with actual WeakMap semantics:

class Base {
  constructor() {
    this.someMethod?.()
  }
}

class Foo extends Base {
  #foo
  someMethod() {
    this.#foo = 123
  }
}

could hypothetically have been designed to desugar to something more like

class Base {
  constructor() {
    this.someMethod?.()
  }
}

const foo = new WeakMap()

class Foo extends Base {
  someMethod() {
    foo.set(this, 123) // no error, works for the vast majority of use cases.
  }
}

And for the average case (the majority case) this would be perfectly usable, even in super() calls that call subclass methods.
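A runnable version of that hypothetical desugaring (a `getFoo` accessor is added here just for illustration):

```javascript
class Base {
  constructor() {
    this.someMethod?.();
  }
}

const foo = new WeakMap();

class Foo extends Base {
  someMethod() {
    // WeakMap semantics: works even while super() is still
    // running, because any object can be used as a key.
    foo.set(this, 123);
  }
  getFoo() {
    return foo.get(this);
  }
}

const f = new Foo(); // no error; the "field" was set during super()
```

Note that nothing stops `someMethod` from being called with an arbitrary non-instance as `this`, which would silently install an entry for that object too.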

I know we're going in circles now. But the current private fields are just bad.

That's not a hard rule. Better to design language features that work better for most cases, including those that currently exist.

People do this all the time. For example.

The point of having private fields is to allow the class to enforce invariants about its instances. Silently exposing partially-initialized fields to methods in the class is worse than having an error. If you actively want to deal with the object in a partially initialized state, you can do that explicitly, as mentioned above.

could hypothetically have been designed to desugar to something more like

That would mean that invoking the method on a non-instance would install the private field on a non-instance, instead of giving an error. That is not a good outcome.

Better to design language features that work better for most cases, including those that currently exist.

It's my belief that the current design works better for most cases, which is why we went with it.

Anyway, I've said my piece. You are welcome to disagree, but I doubt there's much more I can say here, so I'm going to step away now.


Though, if we actually look at that use case, they're calling that initialize function after all the fields have been initialized, which means the derived method is still working with a fully-initialized object. Well, almost: for some reason they set the idAttr property after they call initialize(), presumably because it didn't matter in this scenario. It would have been better to set it before calling initialize(), in case a derived initialize() needed access to that property.

The spirit of @bakkot's rule is that derived methods shouldn't be called while the instance isn't fully initialized. So, calling a derived method at the end of a constructor should be just fine.

I can certainly understand the pain of migrating code that wasn't built to support private fields, but outside of that use case, I'm not sure there's really much of a difference between either behavior (you either do your is-this-field-defined checks by doing an undefined check, or a #field in obj check, and that's about it). And, perhaps it goes against the loose-natured spirit of JavaScript to not default-initialize it to undefined :man_shrugging:


This is a troublesome problem. On one side, it isn't safe to call members of a derived class until the derived class has been fully initialized. On the other side, it isn't safe to call an initialization function outside of the constructor chain.

It isn't something I'd ever put into the language, however I've been thinking of an initializer function to handle these order-of-operations problems in the constructor. It would allow you to supply a pre-init, init, and post-init function so that everything could be handled in the constructor chain properly. Don't know if it's worth it though.
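A userland approximation of that idea can be sketched with a static factory; the hook names (`preInit`, `init`, `postInit`) and the `create` method are invented for illustration. The trick is deferring the hooks until every constructor in the chain has finished:

```javascript
class Initializable {
  static create(...args) {
    const inst = new this(...args);
    // All constructors (and field installations) are done
    // by this point, so the hooks run on a complete object.
    inst.preInit?.();
    inst.init?.();
    inst.postInit?.();
    return inst;
  }
}

class Widget extends Initializable {
  #size = 10;
  init() {
    // Safe: #size is installed before any hook runs.
    this.#size *= 2;
  }
  get size() { return this.#size; }
}

const w = Widget.create();
```

The cost is that `new Widget()` directly would skip the hooks, which is exactly the kind of out-of-band initialization mentioned above.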

It would have been better if all code bases before class fields were designed to be compatible with class fields. But that's just not the case. Porting such code bases to modern code can be difficult because of this (I know from trying at work).

This is not something that relates to types, so TypeScript will not, for example, enforce this. It would be like enforcing | undefined checks on all class fields just because a superclass can call a subclass method before the subclass constructor runs, which TypeScript also doesn't enforce.

The engine could easily just return undefined when a private field doesn't exist, rather than having TDZ-like behavior that throws. That's simply a design choice that was not made, and in my opinion not the ideal one, especially for making old code forward compatible.

Ultimately, either case is valid, so long as the code is clear and documented. Like "NOTE! This method is called by a superclass during construction, so..." or "NOTE! Don't forget to call initialize() after construction.", etc. Ultimately, good docs are the source of truth (although some people will inevitably make assumptions without reading).


I opened a new GitHub issue in proposal-decorators about the possibility of accessor being able to work around the issue:

That doesn't solve the problem, but it provides an alternative that requires refactoring. One main issue I have with the current private fields is that variables in old code cannot simply be renamed with # to gain privacy, because their behavior changes in an undesirable way. Heck, even porting old code to public class fields was a total mess with accessors in the mix, but that's another topic.

Is the issue that adding:

#f in this ? this.#f : undefined

to read sites adds visual noise? Could a custom decorator on the accessor help make this pattern more lightweight, code-wise?
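For reference, the wrapper such a hypothetical decorator would need to generate can be written by hand today as a guarded getter. Here it is sketched with try/catch instead of the `in` check, to show an alternative form (class names invented):

```javascript
let duringSuper;

class Base {
  constructor() {
    this.report?.();
  }
}

class Thing extends Base {
  #value = "ready";
  // What a hypothetical @orUndefined accessor decorator could
  // expand to: a read that falls back instead of throwing.
  get value() {
    try {
      return this.#value;
    } catch {
      return undefined; // field not installed yet
    }
  }
  report() {
    duringSuper = this.value;
  }
}

const t = new Thing();
```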

Just had a thought that I know will likely be shot down. What if this:

class Base {
   #foo = "Whatever";
   constructor() {
      this.call_me();
   }
   call_me() {
      console.log(this.#foo);
   }
}

class Child extends Base {
   #bar = 42;
   call_me() {
      console.log(this.#bar);
      super.call_me()
   }
}

...desugared to something more like this:

Object.defineProperty(Symbol, "initFields", {value: Symbol("initFields")});

const Base = (() => {
   const foo = new WeakMap();
   return class Base {
      [Symbol.initFields](inst) {
         foo.set(inst, "Whatever");
      }
      constructor() {
         this[Symbol.initFields](this);
         this.call_me();
      }
      call_me() {
         console.log(foo.get(this));
      }
   }
})();

const Child = (() => {
   const bar = new WeakMap();
   return class Child extends Base {
      [Symbol.initFields](inst) {
         super[Symbol.initFields](inst);
         bar.set(inst, 42);
      }
      call_me() {
         console.log(bar.get(this));
         super.call_me()
      }
   }
})();

I think something like this gets around all the issues in question. Much like in compiled languages, this allows all of the fields to have a chance to be initialized to whatever default value before being manipulated by even the first line of the base class constructor. The only problem with doing this is that it's basically double construction. Developers would have to separate default value initialization from constructed value initialization. So cases of:

class Ex {
   field = doSomething();
   ...
}

... would have to be decomposed into something like:

const Ex = (() => {
   const field = new WeakMap();
   return class Ex {
      [Symbol.initFields](inst) {
         field.set(inst, undefined);
      }
      constructor() {
         this[Symbol.initFields](this);
         field.set(this, doSomething());
      }
      ...
   }
})();

... as running doSomething() before the instance is fully constructed still poses potential initialization risks.

IMO, any code using this kind of construct needs to be redesigned to account for the initialization limitations of the language if it intends to incorporate fields of any kind. The TL;DR of it is that since TC39 didn't put fields on the prototype, those fields are not at all reliably available until after the constructor of the defining class has been run. There's no ergonomic way around this issue that I can conceive.
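For example, one such redesign keeps derived-method calls at the tail of the most-derived constructor, where every field in the chain is already installed (names invented):

```javascript
class Base {
  // No derived calls here; Base only sets up its own state.
  #id;
  constructor(id) {
    this.#id = id;
  }
  get id() { return this.#id; }
}

class Derived extends Base {
  #doubled;
  constructor(id) {
    super(id);
    // Safe point: all fields in the chain are installed,
    // so calling our own methods is fine now.
    this.#doubled = this.computeDoubled();
  }
  computeDoubled() { return this.id * 2; }
  get doubled() { return this.#doubled; }
}

const d = new Derived(21);
```

The limitation, of course, is that this only works for the most-derived class; a base class still can't safely call overridable methods from its own constructor.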