Zero-overhead Async/Await

I would like to propose "zero-overhead async/await" syntax. "Zero-overhead" here means that async code is driven by a single callback, instead of the current Promise-based machinery, which requires two callbacks plus a promise object.

The syntax would look very similar to the existing async/await syntax but would operate on top of callbacks instead of promises.

Await a callback-powered async function:

await cb fs.readFile('a.txt', cb);

Await a callback-powered async function, where the last "callback" argument is implicit:

await _ fs.readFile('a.txt');

Create a callback-powered async function; note the async keyword in place of the callback argument:

function readFile(filename, async) {
  // ...
  return await _ napi.binding.read(filename);
}

The above readFile function can be called using regular callback-based JavaScript:

readFile('a.txt', (error, value) => {
  if (error) {
    // ...
  } else {
    // ...
  }
});

Or using the zero-overhead callback-based async/await (when in async context):

try {
  let value = await _ readFile('a.txt');
  // ...
} catch (error) {
  // ...
}

More on the proposal: Zero-overhead Async/Await - DEV Community

Hi @streamich, welcome!

It sounds like the primary motivator here is performance. If so, an experiment that shows this would likely be helpful in determining whether it's worth pursuing. While callbacks on their own may be faster, there would still be overhead with this syntax.

That said, I'm not sure how much appetite there is for introducing a third way to do continuations, so there is a chance that even if an experiment showed this was faster, it still might not advance.

Also, are there other languages that have similar syntax? That would also be interesting to analyse.

While callbacks on their own may be faster, there would still be overhead with this syntax.

What would be the overhead of the syntax?

This would still have to schedule the continuation onto a new tick, because there is no guarantee that the callback is called on a fresh stack. readFile could be:

function readFile(path, cb) {
  cb();
  console.log("after cb");
}

This would still have to schedule the continuation into a new tick, [..]

The "zero-overhead" abstraction means there would be no scheduling and no microtask queue involved. If cb is called on the same stack, then the corresponding await resumes on the same stack.

Comparison to promises and some behavior description:

  • Promises allow fanning out the result to multiple subscribers; the zero-overhead async/await pushes the result to only a single callback.
    • No overhead of multiple subscribers.
  • A promise's result is stored in an object; here the result is not stored anywhere, it is immediately pushed to the callback.
    • No overhead of extra object construction.
  • A promise always resolves on a fresh stack; the zero-overhead async/await can resume the continuation on a later stack, but it could also be the same stack.
    • No overhead of scheduling; no queue required.
    • No penalty of waiting until the next stack frame if continuation is possible in the current stack frame.
  • Promises enforce that subscription callbacks are executed at most once. The zero-overhead async/await provides no such guarantee.
    • No overhead of tracking whether the callback was already called.
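The "same stack" point can be illustrated with plain callbacks today: a promise reaction is always deferred to a microtask, while a direct callback may run synchronously. (readSync below is an illustrative helper, not a real API.)

```javascript
const order = [];

// A callback-style function that invokes its callback on the current stack.
function readSync(cb) {
  cb(null, 'data');
}

order.push('before');
readSync(() => order.push('callback'));                    // runs synchronously
Promise.resolve('data').then(() => order.push('promise')); // deferred to a microtask
order.push('after');

queueMicrotask(() => console.log(order.join(',')));
// → before,callback,after,promise
```

The direct callback fires before `after` is pushed, while the promise reaction only runs once the current stack has unwound — which is exactly the scheduling step the proposal aims to make optional.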

The issue there is that it may lead to unexpected execution semantics:

function f(cb) {
  try {
    cb();
  } catch {}
}

await cb f(cb);
throw new Error();

the error is actually thrown into f and swallowed by its try/catch.
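Written with today's plain callbacks, the assumed desugaring makes this concrete: code placed after the resumed continuation throws on f's stack and is caught there.

```javascript
let caughtBy = null;

function f(cb) {
  try {
    cb(); // the awaited continuation resumes here, on f's stack
  } catch (e) {
    caughtBy = 'f'; // the caller's error ends up in f's catch block
  }
}

// Desugared form of: `await cb f(cb); throw new Error('boom');`
f(() => {
  throw new Error('boom');
});

console.log(caughtBy); // → f
```

With a promise-based await, the same throw would instead propagate to whatever awaits the async function, never into f.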

While promises can have overhead compared to direct callbacks, that overhead is mostly there to make the execution order easier to reason about.


I guess that would be something to live with, as it exists with current callback code as well.

The zero-overhead syntax is not about solving callback semantics. It is about removing callback hell (indentation), while still keeping zero overhead.
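To illustrate the indentation problem (step1 through step3 are hypothetical helpers), each dependent step nests one level deeper today, while the proposed syntax would flatten the chain:

```javascript
// Hypothetical callback-style steps.
function step1(cb) { cb(null, 1); }
function step2(x, cb) { cb(null, x + 1); }
function step3(x, cb) { cb(null, x * 2); }

let result;
step1((err, a) => {
  step2(a, (err, b) => {
    step3(b, (err, c) => {
      result = c; // nesting grows with every dependent step
    });
  });
});
console.log(result); // → 4

// Under the proposal the same chain would flatten to (not valid JS today):
//   let a = await _ step1();
//   let b = await _ step2(a);
//   let c = await _ step3(b);
```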

With existing callback code it is more visually evident which portion of the code is the callback, making it easier to understand where a throw might be caught.

What do you propose for this?

function func(cb) {
    doSomethingAsync(() => {
        console.log("1")
        cb()
        console.log("2")
        cb()
        console.log("3")
    })
}

await cb func(cb)
console.log("next")

And what about this?

function func(cb) {
    console.log("1")
    cb()
    console.log("2")
    cb()
    console.log("3")
}

await cb func(cb)
console.log("next")

The main concern around maybe-sync code is race conditions resulting in subtle state bugs. It's extremely tricky to catch those errors, and in a couple of those bugs in sometimes-sync code, I had to resort to informal process networks and actor messaging graphs just to find where the state bug even was. If everything's async, it's far simpler to work with.

Also, regarding performance, runtimes that allow concurrent use of the same runtime context (like WebKit but not V8) allow you to allocate objects and resolve promises off the main thread. This avoids some of the performance impact when working with event loops, since you no longer need to queue for a lock just to queue up the promise resolution's reaction job.

What do you propose for this?

@claudiameadows it would do the same as what a function with callback would do. The output would be, in both cases:

1
next
2
next
3

The idea is that the syntax transforms the code to something like this:

func(() => {
  console.log("next")
})
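Made runnable against the fully synchronous func (collecting output in an array for clarity), the transform produces the claimed ordering:

```javascript
const log = [];

function func(cb) {
  log.push('1');
  cb();       // resumes the continuation on the same stack
  log.push('2');
  cb();       // resumes it again — no at-most-once guarantee
  log.push('3');
}

// Desugared form of: `await cb func(cb); console.log("next")`
func(() => {
  log.push('next');
});

console.log(log.join(' '));
// → 1 next 2 next 3
```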

The idea is to remove the callback hell while introducing zero overhead. If extra safety guarantees or convenience are needed, the developer can always choose to use a Promise instead.

Why is there concern about the overhead of promises in the first place? Do you have an example where promise overhead was a significant bottleneck and you had to switch to plain callbacks as a result?

From what's been discussed, it sounds like the overhead can fall into one of three categories:

  • The CPU overhead of actually using a promise
  • The memory overhead of allocating the promise object
  • The fact that if you use a promise for a synchronous task, you're forced to wait until the next tick before the promise resolves.

All of these sound insignificant if promises are being used for their intended purpose - scheduling work that needs to be done asynchronously.

  • The time the engine spends running internal promise-related code is nothing compared to, say, waiting for a file to be read in or a network request to come back, so the CPU usage is typically not a concern.
  • You shouldn't have so many active async tasks at a time that the memory taken by all of your promise objects becomes a concern.
  • You generally shouldn't be using promises for synchronous work, so the fact that they force a minimum wait of one tick shouldn't be a concern.
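As a starting point for the experiment suggested earlier, here is a rough micro-benchmark sketch in Node (function names are illustrative; results are engine- and machine-dependent and not evidence on their own):

```javascript
const N = 1e5;

// A trivial operation exposed both as a callback API and as an async function.
function viaCallback(x, cb) { cb(null, x + 1); }
async function viaPromise(x) { return x + 1; }

function benchCallbacks() {
  const t0 = process.hrtime.bigint();
  let acc = 0;
  for (let i = 0; i < N; i++) viaCallback(i, (err, v) => { acc += v; });
  return Number(process.hrtime.bigint() - t0) / 1e6; // ms
}

async function benchPromises() {
  const t0 = process.hrtime.bigint();
  let acc = 0;
  for (let i = 0; i < N; i++) acc += await viaPromise(i);
  return Number(process.hrtime.bigint() - t0) / 1e6; // ms
}

benchPromises().then((pMs) => {
  console.log(`callbacks: ${benchCallbacks().toFixed(1)} ms, promises: ${pMs.toFixed(1)} ms`);
});
```

Note that the promise loop also pays for the microtask hop on every iteration, so this measures the combined cost the proposal targets, not allocation alone.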


@theScottyJam In some high performance servers, the speed of promises can show up on profiles sometimes, but I do generally agree with you in that promise resolution speed is almost never the real problem.

On a related note, a while back I ran some benchmark tests using raw TCP connections to a local server in Node. I found that the runtime itself limits you to around 2 Mbps single-threaded and 10 Mbps multi-threaded, with the bottleneck being the server. For comparison, an optimized Rust+Tokio equivalent achieved 50 Gbps of throughput. Promise overhead should be within the margin of error on Node.