Synchronous exceptions thrown from complex expressions create abandoned promises. Solutions?

Example:

async function foo() {
  try {
    await Promise.all([bar(), baz()]);
  } catch(err) {}
}

may create an unhandled promise rejection -- which is considered a fatal error, or ought to be -- if:

  • bar() returns a promise
  • baz() synchronously throws
  • promise returned by bar() subsequently rejects
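The three conditions above can be reproduced in a few lines. This is a sketch: the bodies of bar() and baz() are stand-ins, and Node's unhandledRejection event is used only to observe the abandoned promise.

```javascript
// bar() returns a promise that rejects on a later tick; baz() throws
// synchronously while the argument array for Promise.all() is still being
// built, so Promise.all() is never even called and bar()'s eventual
// rejection is left with no handler.
function bar() {
  return new Promise((_, reject) =>
    setTimeout(() => reject(new Error('async failure')), 0));
}

function baz() {
  throw new Error('sync failure');
}

let caught = null;
let abandoned = null;
process.on('unhandledRejection', err => { abandoned = err; });

async function foo() {
  try {
    await Promise.all([bar(), baz()]); // baz() throws before Promise.all runs
  } catch (err) {
    caught = err; // only baz()'s synchronous error lands here
  }
}

foo();
```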

Note that this is a more general problem than just a shortcoming of the Promise.all() pattern: any expression that mixes subexpressions evaluating to promises with subexpressions that may throw synchronously is affected. There appears to be no clean general solution in ES6, i.e. one not requiring a significant rewrite of the expression itself. A rule that only async functions may be invoked from an expression evaluating to a promise is too strong: it precludes literally any synchronous computation of, e.g., parameters passed into such async functions, since Javascript lacks any form of nothrow declaration.

Within the confines of the present-day language, my solution is to use a lazily evaluated argument:

async function foo() {
  try {
    await Promise.all([
      promise( () => bar() ),
      promise( () => baz() )
    ]);
  } catch(err) {}
}

where bar() and baz() do, of course, stand for subexpressions of arbitrary complexity, uncertain value type (specifically, promise vs. immediate value) and propensity to synchronously throw. The explicit evaluator promise() can be implemented as

async function promise(lazyValue) {
  return lazyValue();
}

or

function promise(lazyValue) {
  return new Promise( resolve => resolve(lazyValue()) );
}

or even

function promise(lazyValue) {
  try {
    return Promise.resolve(lazyValue());
  } catch(err) {
    return Promise.reject(err);
  }
}
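For illustration, here is the third variant together with a quick check that a synchronously throwing thunk comes back as a rejected promise rather than throwing out of the surrounding expression (a sketch; the thunks are placeholders):

```javascript
function promise(lazyValue) {
  try {
    return Promise.resolve(lazyValue());
  } catch (err) {
    return Promise.reject(err);
  }
}

// A throwing thunk no longer breaks the surrounding expression:
const ok = promise(() => 42);                             // resolves with 42
const bad = promise(() => { throw new Error('boom'); });  // rejects instead of throwing
```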

It is a verbose pattern, which could be somewhat improved by a simple language addition: a unary "operator" that evaluates its argument and returns a promise resolved with the resulting value on normal termination, or a promise rejected with the thrown error on abrupt termination. So the key pattern becomes
Promise.all([ promise foo(), promise bar() ])
which removes the superfluous arrow function syntax and can be interpreted as an opposite of the await "operator" (scare quotes intentional in both cases). This is still not a completely clean solution but it has the advantage of simple and obvious implementation along with that of explicitness.

Hi @MaxMotovilov,

I think something like this should work:

async function foo() {
  const promises = [];
  try {
    promises.push(bar());
    promises.push(baz());
  } catch(err) {
    handleSyncError(err);
  }
  const results = await Promise.allSettled(promises)
  return results.flatMap(({status, reason, value}) => {
    if (status === 'rejected') {
      handleAsyncError(reason);
      return [];
    }
    return [value];
  }); 
}

If the tc39/proposal-async-do-expressions proposal (async `do` expressions for JavaScript) goes through, then you could do this:

async function foo() {
  const promises = [
    async do {bar( ...argumentsToBar() )},
    async do {baz( ...argumentsToBaz() )},
  ];
  return await Promise.allSettled(promises);
}

Hi,

I think that your premise (emphasis added):

is incorrect: it is totally expected that only the first exception thrown is caught and the others are ignored, regardless of whether they are thrown synchronously or asynchronously.

Your scenario is not fundamentally different from:

  • bar() returns a promise;
  • baz() returns a promise that asynchronously throws (i.e. rejects);
  • the promise returned by bar() subsequently rejects as well.

In that case, the semantics of Promise.all() is to reject with the rejection of baz() and leave the promise returned by bar() silently unhandled.

If you must handle all potential exceptions, you should consider Promise.allSettled().
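To illustrate, Promise.allSettled() waits for every promise and reports each outcome, so no rejection goes unobserved (a sketch with placeholder promises):

```javascript
// Every outcome is reported; nothing is silently dropped after the first
// rejection, unlike Promise.all().
const settled = Promise.allSettled([
  Promise.resolve('ok'),
  Promise.reject(new Error('first failure')),
  Promise.reject(new Error('second failure')),
]);
```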

This sure does - at the price of essentially rewriting the expression in an assembly language :-D

Thank you! I didn't find this one in my search - it uses a different syntax from my "unary operator" to the same effect. It isn't any less verbose than the approach with lazy evaluation of subexpressions though, but perhaps could be optimized better by the JIT.

Claude, I think you didn't understand... my scenario is fundamentally different: a synchronous exception from baz() prevents Promise.all(), or, for that matter, any other expression that suffered a throw from its subexpression, from being evaluated completely. It has nothing to do with semantics of Promise.all() vs. Promise.allSettled().

Consider this [contrived] example:

async function consumePromise(promise, unusedValue) {
  await promise;
}

function goodActor() {
  return new Promise( (_, reject) => setImmediate(reject) );
}

function badActor() {
  throw Error();
}

async function victim() {
  try {
    await consumePromise(goodActor(), badActor());
  } catch(err) {}
}

The rejection of the promise returned by goodActor() will not be handled, and there is absolutely nothing victim() can do about it short of rewriting the expression.


Unhandled rejections are certainly not fatal errors, nor should they be. Promise.reject(); is not intended to crash any program.

In your example, you may benefit from https://github.com/tc39/proposal-promise-try - but you can also achieve the same with new Promise.

Isn't turning unhandled promise rejections into fatal errors that crash the program exactly what Node's been trying to do? I assume that's why @MaxMotovilov is bringing up this issue. For browsers, an unhandled promise rejection like that isn't as big of a deal, but in node, it is.


There's significant disagreement with that statement:

$ node
Welcome to Node.js v14.16.0.
Type ".help" for more information.
> Promise.reject()
Promise { <rejected> undefined }
> (node:9669) UnhandledPromiseRejectionWarning: undefined
(Use `node --trace-warnings ...` to show where the warning was created)
(node:9669) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:9669) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Yes I can -- my initial post actually describes the approach. The point is that this solution introduces a spurious function context, and a lot of undue verbosity, to an otherwise very fundamental and widely useful pattern: evaluating multiple asynchronous and synchronous subexpressions as part of the same expression generally and using the primary library facility for concurrent asynchronous execution (that being Promise.all()) more specifically.


Yes, and node 15 crashes by default on unhandled rejections, despite many debates that it violates the design of the language - in which they are not fatal errors.

I certainly share your dislike for node's choice here @ljharb.

As for the issue at hand, I will note that if a function is intended to be async, then it's pretty bad behavior for it to throw synchronous errors - all of its errors should be returned via Promise.reject(). If the function is declared using "async", then it's not even possible for it to throw a sync error.
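For instance (a sketch): a throw anywhere in the body of an async function surfaces only as a rejected promise, never as a synchronous exception at the call site:

```javascript
async function alwaysRejects() {
  throw new Error('wrapped'); // becomes a rejection, not a sync throw
}

const p = alwaysRejects(); // does not throw here
```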

What kind of functions are being called here?

  • If they're functions owned by the code maintainers, they should be fixed, so they never throw a sync error.
  • If they're functions owned by third-party libraries, then a bug-report can be filed with them, and while that's being fixed, it might be ok to use a little extra verbose syntax to handle both sync and async errors. The verbosity reminds us that this really shouldn't be happening.

Let me give you my actual use case rather than the simplified and contrived ones I've shown so far:

async function compareApiImplementations(parameters) {
  return compareResults(...await Promise.all([
    newImplementation(parameters),
    oldImplementation(adaptParametersToOldImplementation(parameters))
  ]));
}

Here, newImplementation() and oldImplementation() are async functions of a value argument (property bag to be encoded into the URL of a REST service), compareResults() and adaptParametersToOldImplementation() are regular sync functions. Due to GIGO issues (implementation differences, essentially) adaptParametersToOldImplementation() may throw unexpectedly.

I do disagree with the position that unhandled rejections are harmless, btw. They are no more harmless than uncaught exceptions from a detached async context (like a timer tick handler). It is just that Promise.all() makes a design choice to ignore all rejections but the first one propagated; we can easily write an alternative API that would both propagate the result of the first rejected promise and report all subsequent rejections -- or successful resolutions! -- through a different channel, e.g. an event stream. Promises abandoned by an incomplete evaluation of the expression are fundamentally different as the programmer has no recourse in the matter.
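Such an alternative API could look like the following. This is a hypothetical helper (allReporting and onLateRejection are made-up names, and a callback stands in for the event stream): it rejects with the first rejection just like Promise.all(), but routes subsequent rejections to the callback instead of dropping them.

```javascript
function allReporting(promises, onLateRejection) {
  let firstSeen = false;
  return Promise.all(promises.map(p =>
    Promise.resolve(p).catch(err => {
      if (firstSeen) {
        onLateRejection(err); // report instead of silently dropping
        return;               // this branch is now "handled"
      }
      firstSeen = true;
      throw err;              // first rejection propagates as usual
    })
  ));
}
```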

PS. Sorry for belated editing, I almost missed my own point :-)


You don't mean that "async and sync functions should never be called from the same expression", do you? That would be a very restrictive practice.

No, I mostly meant that it would be bad for a function to sometimes throw sync errors and other times return a promise. This would be a no-no, for example:

function doSomething(param) {
  if (param == null) throw new Error('Bad Param!')
  return new Promise(...)
}

Your use case is interesting though. I guess it could be resolved by moving all sync logic before your calls to the async functions:

async function compareApiImplementations(parameters) {
  const adaptedParams = adaptParametersToOldImplementation(parameters);
  return compareResults(...await Promise.all([
    newImplementation(parameters),
    oldImplementation(adaptedParams)
  ]));
}

This will ensure you get sync errors over with before you start sending off your async tasks.

This kind of fragile ordering of statements does leave a bit of a bad taste in my mouth though. Without some comments explaining why it was ordered this way, it would be tempting for future programmers to try to inline adaptParametersToOldImplementation() with the other stuff again, causing unexpected issues.


However, this issue isn't unique to async logic.

There's plenty of times when you need to be careful how you order statements in your code, to make sure you don't leave your program in a bad state if an error occurs. For example:

function updateFile(fileName, permissions, getContentFn) {
  createFileSync(fileName)
  writeToFileSync(fileName, getContentFn())
  setFilePermissionsSync(fileName, permissions)
}

In this example, if getContentFn() throws, then a blank file would have been created with the wrong file permissions. If it was instead written as this:

function updateFile(fileName, permissions, getContentFn) {
  const content = getContentFn()
  createFileSync(fileName)
  setFilePermissionsSync(fileName, permissions)
  writeToFileSync(fileName, content)
}

then, if getContentFn() throws, the system wouldn't be left in an invalid state.

That's putting it very mildly... my sentiment -- upon recognizing the issue for what it is -- ran all the way to "Javascript is irretrievably broken" ;-)

As a matter of taste, I tend to object to breaking up expressions into imperative sequences -- in most languages that reduces optimization opportunities plus it generally brings the code one step closer to the assembly level... escaping which was the whole idea of high level programming languages, no?

I do maintain that this particular problem is highly pernicious: Javascript generally lacks any declarative mechanism to positively ensure that a particular piece of synchronous code does not throw. Hence, if you want to write provably (i.e. a-priori) correct code, you're reduced to treating every synchronous function as potentially throwing and therefore performing the refactoring you outlined: evaluate all synchronous subexpressions first, bind them to local names, and only then evaluate the asynchronous subexpressions. This flies in the face of the entire concept of await-expressions. In fact, I'd argue we're no better off than we were with .then-chains to begin with.

PS. I still prefer my original solution with explicit lazy evaluation of subexpressions to everything suggested thus far. At least it does not require breaking up the whole expression, even if it adds some spurious constructs to it :-(

I do think the issue you're describing is still a fundamental issue with languages that rely on throwing errors. Simply having a convenient way to turn sync errors into async errors won't always fix this issue, it just does in your case.

Let's say, for example, that your newImplementation function does some sort of modification on a server, and this modification is incomplete unless your oldImplementation function also runs; i.e. your program will be left in a bad state if only newImplementation fires and oldImplementation() doesn't. The only way to ensure that they get fired together (or neither is fired) is to rearrange the code. If you simply wrap oldImplementation(adaptParametersToOldImplementation(parameters)) in a promise, then newImplementation() will still get called, adaptParametersToOldImplementation() will still throw, and oldImplementation() will still not get called. The only difference is that Promise.all() will be able to catch and silently ignore the async error, instead of node crashing because the async error wasn't explicitly ignored. I presume in browsers such syntax wouldn't add much value, as those two scenarios really are the same in a browser.

I agree that splitting up code into imperative sequences doesn't feel as nice, I certainly don't like how the re-ordered code looks. But, when there's a definite order in which things are supposed to execute, then imperative code can make sense.

...and maybe this is an issue with node's decision on how to handle Promise rejection - which I know you don't agree with, and that's fine :). As node has made their decision, we now have to live with it, and maybe provide some language tooling to help, as that's just the way things are.

But, here's how node's promise rejection behavior has hit me before, when I tried to do a code snippet such as this:

async function fetchUserData(params) {
  try {
    const promisedValue1 = getValue1() // Don't want to await yet, as I want to send off other requests at the same time.
    const value2 = await getValue2(params)
    const [value1, value3] = await Promise.all([
      promisedValue1,
      getValue3(value2)
    ])
    return { value1, value2, value3 }
  } catch (err) {
    if (err.type === NOT_LOGGED_IN) return null
    throw err
  }
}

If getValue2() threw a NOT_LOGGED_IN error, we would end up in the catch block, promisedValue1 would later reject with a NOT_LOGGED_IN error, and node would crash.

This code snippet poses a similar issue to your code snippet. There's logic that happens between when a promise was made, and where it gets handled, and that logic might throw an error. Looking at this, I don't think wrapping anything in a Promise would have fixed this code snippet either.

What we're really trying to achieve in both cases is to make the uncaught promise rejection get ignored. You're doing this indirectly with Promise.all(), and I had to rework this function logic in order to make this happen.
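One other pattern worth noting (a sketch with a hypothetical helper name, not what either of us did above): attach a separate no-op catch branch to the early promise as soon as it's created. Node then considers the eventual rejection handled, while the original promise remains available for a later await or Promise.all().

```javascript
// Hypothetical helper: marks a promise as "observed" so an eventual rejection
// won't trigger Node's unhandled-rejection handling, while returning the
// original promise for normal awaiting.
function observe(promise) {
  promise.catch(() => {}); // separate no-op branch handles the rejection
  return promise;
}
```

The trade-off is that a genuinely forgotten rejection on such a promise is silenced too, so it only makes sense where some later code path is known to consume the promise.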

You can override node's unfortunate decision with the --unhandled-rejections flag, which can also be specified in the NODE_OPTIONS environment variable.