Timeout for an async loop: if the loop does not finish before the timeout, it will break anyway.

Iā€™m not sure about this being easy, JS engines have a lot of subtle cases to handle. The calls that would now have to create a different type of promise could be cross-realm, or be within functions already created and then called by the host (e.g DOM event listeners).

To track this state, I think a prerequisite for the language would be GitHub - tc39/proposal-async-context: Async Context for JavaScript

Yeah not that easy.

However, there are enough cues available in the syntax itself to decide what needs to be done.
If an await[timeInMs] is on the stack, then every async/promise-based function call on that stack needs to consult the hasTimedOut variable.

I have explained this with an example before; here it is again with more explanation.
A function declaration like this:

async function doLongRunningTask() {
    // ...do some things
    return someValue; // someValue could itself be a promise, whatever the user deems fit
}

is currently resolved to a promise, something like this:

function doLongRunningTask() {
    return new Promise(res => { // an async function will ultimately resolve to a promise
        // ...do some things
        res(someValue);
    });
}

An await[timeInMs] would prompt JS engines to instead resolve this function like this:

function doLongRunningTask() {
    return new Promise((res, rej) => { // this could be an inherited Promise with these properties
        let timeout = 3000;
        let hasTimedOut = false;

        setTimeout(() => {
            hasTimedOut = true;
        }, timeout);

        class TimeOutAwarePromise extends Promise {
            then(cb) {
                return super.then((v) => {
                    if (hasTimedOut) {
                        throw "TimeoutError";
                    }
                    return cb(v);
                });
            }

            catch(cb) {
                return super.catch((v) => {
                    if (hasTimedOut) {
                        throw "TimeoutError";
                    }
                    return cb(v);
                });
            }
        }

        // every promise below this point would now be an instance of TimeOutAwarePromise
        res(someValue);
    });
}

Every promise created on such a stack would consult hasTimedOut to know whether the timeout has indeed happened.

It would require a big change, but it is almost a no-brainer, as there are no special cases: every promise on such a stack considers the last hasTimedOut on that stack. (Disclaimer: I am not a JS engine expert, though.)

This applies if and only if await[timeInMs] is used; for a normal await, everything will behave as it always has.

That would let caller code intercept and change the behavior of other promises that are not part of the chain of promises that it is await'ing.

class MyCustomPromise extends Promise {
  constructor(delay) {
    super(resolve => setTimeout(resolve, delay));
  }
}

let cachePromise = null;
const someCache = new Cache();

async function doWork() {
   if (!cachePromise) {
     // this promise is an internal timer, usually this can't be cancelled by any external source
     cachePromise = new MyCustomPromise(10_000).then(() => {
       cachePromise = null;
       someCache.routineClean();
     });
   }

  const result = await doWorkSome(someCache);
  return result; 
}

...

await[2000] doWork();  // what should this do? Intercept the custom promise class and reject it?

Every user who uses await[timeInMs] does so with the expectation that doWork must finish within timeInMs, which is exactly what it enforces.

Now, in a world without such syntax (the current system), such a user has designed a system with a wrong expectation of the API (as he has no idea about the API's internal 10_000 ms time bound). The await[timeInMS] syntax is instead his savior: by throwing an error, it gives such a user the opportunity to learn that his expectations are not correct, and that his incorrect assumption about the code, that MyCustomPromise can complete in less than 10_000 ms, is completely wrong.

This can save him from an otherwise completely untraceable source of error that would occur in code designed with the wrong expectation, since the people who coded with these assumptions are long gone by now.
If instead such a syntax existed, everyone would know exactly what the expected intention at that line was.

Errors are not the enemy; they tell us what has gone beyond our expectations.
When this error is thrown, he can either tune timeInMs to be greater than 10000, or simply modify the code to use a normal await. Such a system now has the opportunity to be properly tuned to meet its assumptions.

It is an incorrect assumption that deliberate coding like await[3000] is messing up the code.
A user can deliberately mess up the same code like this too:

let tempHolder = doWork;
doWork = function () {
    throw "Deliberate error"; // this is not some error introduced by JS syntax;
    // this is deliberate messing up, and the user is expected to know the repercussions
    return tempHolder(); // unreachable, kept to mirror the original call
};

When something is deliberate, the repercussions are not unexpected; they are intended outcomes.


Believe it or not, some people write code that's (almost) completely impossible to mess up its internals. For example, they never call myArray.map() directly, instead, they pick off map from Array.prototype, then use map.call() whenever they need to map over array values. That way, no one can monkeypatch prototypes and cause the library to behave in unexpected ways. The library itself may also choose to freeze their exports so that you can't monkey patch them either.

There are people out there who really do care about being untouchable. I find this practice to be a bit overkill for some situations, but it's understandable for others; e.g. Node follows this practice, because it would be weird if its built-in functions changed behavior when you monkeypatched some globals.
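
The pattern described above can be sketched like this (`doubleAll` and the frozen `api` object are illustrative names, not from any particular library):

```javascript
// Sketch of the defensive "pilfer the built-ins" pattern described above.
// Capture the originals once at module load, before anyone can patch them.
const map = Array.prototype.map;
const freeze = Object.freeze;

function doubleAll(values) {
  // Uses the captured map via .call(), so a later monkeypatch of
  // Array.prototype.map cannot change this function's behavior.
  return map.call(values, v => v * 2);
}

// Freezing the export object stops consumers from monkeypatching it too.
const api = freeze({ doubleAll });
```

Even if someone later reassigns `Array.prototype.map`, `api.doubleAll` keeps using the original, and the frozen export cannot be patched over.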

As for your concept in general, I still don't think it'll work great, so I've prepared a number of counterexamples. For one, you're relying on .then() being called often. What if it's only a single async step that's taking forever? For example, how would you protect against this?

async function doLongRunningTask() {
    const connection = await getConnectionFromPool();
    await new Promise((resolve) => setTimeout(resolve, Infinity)) // This will never end
    return new Promise((resolve, reject) => {
        ...
    });
}

Also, note that your current await[...] solution has no effect on functions such as the following, where the first thing they do is construct a new promise, and everything within the promise is done via callbacks.

function myAsyncFn() {
  return new Promise(resolve => {
    callback1(resource1 => {
      const mappedResources = resource1.map(entry => {
        return process(entry)
      })
      callback2(mappedResources, resource2 => {
        resolve(resource2)
      })
    })
  })
}

In this scenario, your special error-throwing promise won't even get made until after all of this long-running callback code has finished executing. You're unable to stop it in the middle.

Here's another example, borrowing off @aclaymore's thoughts:

let promisedResultsCache = new Map()

async function getUser(userId) {
  if (promisedResultsCache.has(userId)) {
    return promisedResultsCache.get(userId)
  }
  const promise = getUser_(userId)
  promisedResultsCache.set(userId, promise)
  return promise
}

async function getUser_(userId) { ... }

// ... elsewhere ...

await getUser(1) // This promise ends up going in the cache also.

await[100] getUser(1)
// This will timeout after 100ms.
// This will also cause anyone else who was trying to get a user with id 1 to timeout as well.

Now you're causing other people's promises to throw timeout errors.
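
For contrast, a userland timeout built on Promise.race rejects only the caller's own await and leaves the shared cached promise untouched (`withTimeout` is a hypothetical helper, not part of the proposal):

```javascript
// Hypothetical helper: race the promise against a timer. The shared
// promise itself is never rejected; only this caller's view of it is.
function withTimeout(promise, ms) {
  let timer;
  const timeoutPromise = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('TimeoutError')), ms);
  });
  return Promise.race([promise, timeoutPromise])
    .finally(() => clearTimeout(timer)); // don't leave the timer running
}
```

Anyone else holding the cached promise still sees it settle normally, because Promise.race only observes it rather than mutating its state.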

Another place where this could be a problem is, for example, if you're using an async function that "rate limits" itself, by making it so only a certain number of active promises can be created by that function at a time. Here's a simplified implementation that only allows one active promise at a time.

let currentlyRunningTask = Promise.resolve()
let tasksInQueue = 0

export async function fetchResource(...args) {
  return new Promise((resolve, reject) => {
    tasksInQueue++
    currentlyRunningTask = currentlyRunningTask.then(async () => {
      tasksInQueue--
      try {
        resolve(await fetchResources_(...args))
      } catch (err) {
        reject(err)
      }
    })
  })
}

export const getTasksInQueue = () => tasksInQueue

Note that in this system, if you did await[...] on one of the promises returned by fetchResource(), you're going to cause all promises forever after to be rejected with a timeout error as well. You'll also make the tasksInQueue counter go out of sync - it'll just keep incrementing without ever decrementing. Why? Because the currentlyRunningTask promise was never supposed to reject. As it's currently coded, it can't reject. All errors get caught. Once you inject a rejection in there (which is currently impossible), then all of the promises that get tacked on will inherit that rejectedness, due to the way promises work. There may have been other ways to code this to get around this issue, but that's the thing, you have to code it with this await[...] syntax in mind, at which point, you might as well just receive a cancel token and code your logic with a cancel token in mind instead.

Here's another issue. What if the async task creates another run-away async task, and doesn't await it? For example:

async function getInfo(id) {
  await getResource(id)
  sendAnalyticInformation(id)
}

The sendAnalyticInformation() is not awaited. It's fired and forgotten. It could hold up system resources forever. It might not even use promises, it could be a callback-based API.

There's one more issue. What if the timeout error gets caught? E.g., what if you're performing a query that keeps retrying, with an exponential back-off whenever an error occurs? This sort of system would silently catch and ignore the timeout error, and just keep retrying.
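
A minimal sketch of such a retry loop (the names here are illustrative): any timeout error injected into the chain lands in the catch and is swallowed like any other failure, so the work quietly continues.

```javascript
// Illustrative retry-with-back-off loop. An externally injected
// "TimeoutError" is caught here like any other error and retried,
// so the original caller never learns the work is still in flight.
async function withRetries(doAttempt, { retries = 3, baseDelayMs = 10 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await doAttempt();
    } catch (err) {
      lastError = err; // timeout errors are silently absorbed here
      const delay = baseDelayMs * 2 ** attempt; // exponential back-off
      await new Promise(res => setTimeout(res, delay));
    }
  }
  throw lastError;
}
```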

  1. The existing await syntax (the one without a timeout) is not going anywhere; it is still there and will behave as it always has. So wherever an expectation of an infinite-time await is required, use a normal await, as it is the most appropriate/simplest thing to do.

  2. All the examples suggested in your last reply assumed an incorrect timeout value, and could simply have been solved by using a proper timeout value, or by using a normal await, since a timeout simply does not apply to them.

  3. Errors are thrown in code when something does not meet expectations. Errors are not the enemy/bad; they are indicators that the current state does not meet the code's assumptions/expectations, and that processing beyond this point would be logically insane. Thrown errors, when caught, give a chance to correct what went wrong.

  4. Developers make many assumptions/expectations while coding, which are never documented, and which are the primary cause of hard-to-debug bugs. One such assumption is that an async process will return, or will return in an appropriate time to meet overall system demands. Consider the example below; the developer simply assumed that this function returns:

    async function processWebTrafficAndSendResponse(requestObject){
        const someHeavyValue = getSomeHeavyValue();
        const valueFromDB = await getValueFromDB(someHeavyValue);
        // the code below assumes that getValueFromDB will return
        return new Response({
            //some blah blah
            body: valueFromDB
        });
    }
    //the user waiting for the response is at the mercy of getValueFromDB.
    //Meanwhile a thousand other users have requested the same thing, and all their data is bloating the RAM with no resolution.
    

    What if it never returns, or does not return in an appropriate time (say 10 minutes; will the user of a website stay that long)?
    Bloat in RAM is as bad as unclosed connections, and in many cases worse.
    Connections, like those to a database, can be timed out and released from the DB end, but such unused bloat in RAM has no countermeasure as of now.

    An incorrect assumption in the above code can cause synchronization problems, memory problems, connection problems, back pressure, and overall system failure, with no trace of what went wrong.
    Why hard to trace? Because this function is the major contributor to the problems, yet all the other functions suffer because of it, and pinpointing which one is the culprit is hard.

    The new additional syntax allows the user to make a well-documented assumption about system performance and expectations.

    try{
        let TIMEOUT = 3000;
        const someExpectedValue = await[TIMEOUT] getFromSomePromiseFromSomeAPI();
    }catch(e){
        //If the code comes here, the user knows the timeout needs to be tuned if desired.
        //Gives a chance to correct mutations caused by the timeout error.
        //If the API does not provide measures to counteract it,
        //switch libs / modify the lib / revise your expectations (while writing code, not necessarily at runtime).
        //Revise expectations by either tuning the timeout value or simply using await without a timeout.
    }
    
  5. Code which uses an API with wrong expectations will leave many states unexpectedly mutated anyway.

    function someAPIFunction(someState){
        //this is mutated, even though the rest of this code flow may not complete
        mutateState(someState);

        if(someExpectationNotMet){
            throw "IncorrectExpectation1";
        }
        doSomeThingWithState(someState);
    }
    
    //now let's call this API function
    try{
        let state = someStateBefore();
        someAPIFunction(state);
    }catch(e){
        //if this code throws an error,
        //you get a chance to correct the mutation
    }
    

    Some states can be easily rolled back; some can't be rolled back at all.
    A developer is vulnerable to such unknown mutations even without the new syntax.
    The root cause of such problems is incorrect usage of the API, or the API not providing enough mechanisms to roll back in the worst case.

    Now such a developer (who does not have enough info about the internals of the API, which is the most probable case in general) has the following options after receiving errors:

    1. Change his expectations of the API and make changes to his code instead (which he is free to do with the new syntax: either tune in a new timeout value or simply use await without a timeout).
    2. Contribute a new patch to the API himself, to meet the new requirements.
    3. Switch to, or write, another library which meets his expectations of the API.

    But the user will only be able to arrive at these solutions if the API throws errors about its expectations.
    Otherwise, someday he will have many mutated states and no idea what caused them, as he is still unaware of the internals of the API.
    The error acted as a defence that made him aware of the expectations of the API.

The point of half of those examples is the fact that they're resilient to this await[timeout] operator.

  • In the case of the API auto-catching and retrying a request, it'll silently auto-catch timeout errors. Thus you, the person who used await[timeout], will never be informed of the fact that the API is still humming along in the background, retrying requests over and over, even after it timed out.
  • In the case of an API that has an await new Promise(resolve => setTimeout(resolve, Infinity)) in the middle, or any other long-running task, a timeout error will never get thrown in the middle, because you're only throwing errors when a .then() occurs. You're not throwing errors in between. The same story happens with an API that's implemented entirely with callbacks, where only the last step converts to a promise.

In other words, await[timeout] as an operator doesn't do a great job at doing its job of force-stopping the execution of async tasks that have started up. There are many types of async functions that exist today, that would circumvent this operator.

So how do you know what would be a proper timeout value to use? Are you saying that one should never use await[timeout] unless it's been explicitly documented in the API what timeout you should put in the brackets? If API authors have to explicitly support/document how to use this syntax, we might as well make them explicitly support cancel tokens instead.

Or, are you suggesting that you give it your best guess, and learn from trial and error what a good timeout value would be? If that's the case, many of the examples I shared show that other code will break in very unexpected ways if you use await[timeout]. For example, you might use an API that implements the rate-limiting logic I gave above, and maybe that API normally returns results in a good amount of time. So you push this code to production. Everything is good. Except for the fact that the occasional user will complain that the website occasionally falls apart everywhere. Why? Because at some point they sent out too many requests at once, it got rate limited, something hit the timeout, and everything from then on out was a failure. That does not sound like a fun error to debug.

I'm not saying errors are inherently bad, I agree, they're a good thing, they let you know when something goes wrong. But it's bad to allow errors to start being thrown in the middle of code that used to work 100% of the time, and the API authors were depending on it functioning correctly. There's a right place and a wrong place to throw an error. Throwing an error in the middle of working code is a wrong place.

Perhaps a warning would be more appropriate? If you think this timeout should never get reached, and it's a bug if it does, then instead of trying to do crazy modifications to make errors get thrown in odd places, in the hope that it might clean up some resources without putting too much stuff in an invalid state, what if you just put a warning on the console that said "hey, this thing took longer than expected and timed out"? Now users will continue to be able to use the application as expected, but you'll also have a way to gather potential issues in the code. You could even have a way for these warnings to automatically be reported to some server, so you can collect them and learn where things are going wrong. This is less likely to break the user experience. And, technically, these warnings should never show up if appropriate timeouts have been picked, like you mentioned; if they do show up, then yes, there's a possibility that some extra RAM will be hogged by these runaway async tasks (though in most cases, it wouldn't be too much RAM). But you would have also collected this information, and would have what you need to correct it.
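
A userland approximation of this warn-instead-of-throw idea might look like the following (`warnIfSlow` and the reporting hook are assumptions for illustration, not an existing API):

```javascript
// Hypothetical helper: report when a promise overruns its budget,
// but never alter the promise's outcome for the caller.
function warnIfSlow(promise, ms, label, report = console.warn) {
  const timer = setTimeout(() => {
    report(`${label} exceeded ${ms}ms and may be a runaway task`);
  }, ms);
  // .finally keeps the value/rejection intact while clearing the timer.
  return promise.finally(() => clearTimeout(timer));
}
```

The `report` callback could just as well post to a collection endpoint instead of logging, which matches the "report warnings to some server" idea above.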

Edit: I'll just add a couple more misc thoughts.

I want to expound on why I think it's bad to throw a timeout error in the middle of a library author's code. Like you said, an error is for when something goes wrong. If you're attempting to guess the appropriate timeout value for a particular function via trial and error, and you guess wrong, then it's your fault that the error occurred, not the library's. The library was working just fine. Therefore, if there's going to be any location where a timeout error appears when you do await[timeout], it'll be at the location where the error happened, i.e. at the await[timeout] spot, not in the middle of the library's working code.

Next, one of the big reasons you're wanting this feature is so you can force memory to be cleaned up. As already shown, there are many existing functions that would get around this forcefulness, just by virtue of the fact that they may be storing stuff in global/module-level space. What's worse, it's possible an await[timeout] syntax could actually cause a memory leak to happen. I previously demonstrated a piece of code that can be used to rate-limit the number of active promises. It wouldn't be too much trouble to extend that code to also have a queue of tasks to run next. If await[timeout] is used on this modified logic, it's possible that, because it prevents cleanup logic from running, this task queue will continually fill up without ever being emptied. In other words, await[timeout] could actually cause memory leaks when applied to code that would otherwise work just fine.

Perhaps you feel people shouldn't code a rate-limiter like that, and should find alternative ways to code it up that's more robust against this feature request. But, the thing is, this kind of code would already exist out there, and it depends on the fact that the language doesn't randomly throw errors in the middle of working code. And it's ok to depend on that.

Unluckily, adding a timer to the language has been rejected. See Promise.delay (sleep) and Promise.timeout - APIs - WICG

Ok, so there is no scope for keeping it as syntactic sugar either?

Let it be just syntactic sugar for an implementation like this:

//let someExpectedValue = await[3000] getSomeValueFromSomeAsyncCall(); //this would resolve as the lines below
let someExpectedValue = await new Promise((res, rej) => {
    setTimeout(rej, 3000, "TimeoutError"); //either it gets rejected by the timeout first
    getSomeValueFromSomeAsyncCall().then(v => res(v)).catch(e => rej(e)); //or it gets resolved/rejected before the timeout
});

Here, of course, getSomeValueFromSomeAsyncCall will still hold up RAM (and continue in the background, causing mutations).

However, it is still better than nothing.
Pros:

  1. Does not break any previous code, as await without a timeout will still be valid syntax.
  2. Gives a visual expectation for a line.
  3. Allows easy pinpointing of bottlenecks in the code flow.
  4. Overall easy to use.

Cons:

  1. Async sources are still the boss of certain resources in RAM; developers can certainly avoid such libs, or modify them, to get better performance.

With this new syntax, at least they will be able to know what is bloating their RAM.

In such a case, the error should originate from the line where await[timeInMS] is called. The getSomeValueFromSomeAsyncCall was expected to run indefinitely anyway; it is the developer's motive to tame it to some timeout.

I don't see any blatant issues with that idea, so it could work.

Except for the fact that the JavaScript committee doesn't want to introduce any sort of timers into the spec. setTimeout is not part of JavaScript; it's just an extra function that most platforms provide. I'm not completely sure of the reasons behind this decision (I did ask about it over here once, if you care to peek at that thread).

Perhaps if a proposal comes forth that brings a strong enough motivation to include timers in the syntax, then maybe committee members would rethink their stance on not having timers in native JavaScript, but I don't know if that'll ever happen (I don't know how much committee members care about this current stance).

Another option would be to make this syntax not use timeouts directly; rather, it could accept some sort of cancellation logic (via a cancel token). Then, browser APIs could provide a shorthand method to create timeout cancel tokens. For example:

// timeout() comes from a browser API.
// It accepts a ms parameter and returns a cancel token.
await[timeout(1000)] someAsyncTask()

// You get additional benefits, like making multiple async tasks all
// be required to execute within the same time limit.
// This wouldn't otherwise be possible with the current await[] syntax.
async function doManyTasks(ms) {
  const token = timeout(ms)
  await[token] task1()
  await[token] task2()
  await[token] task3()
}

This would just be syntax shorthand for a Promise.race().
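
As a rough userland model of that desugaring: `timeout()` here stands in for the hypothetical browser API from the example, and `awaitWithToken(task, token)` stands in for `await[token] task`.

```javascript
// Hypothetical cancel-token factory: the token's promise rejects once
// the deadline passes; clear() disarms the timer after all work is done.
function timeout(ms) {
  let timer;
  const promise = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('TimeoutError')), ms);
  });
  return { promise, clear: () => clearTimeout(timer) };
}

// Stand-in for `await[token] task`: whichever settles first wins, so one
// shared token imposes a single deadline across several awaited tasks.
function awaitWithToken(task, token) {
  return Promise.race([task, token.promise]);
}
```

Because the same token can be raced against several tasks, this captures the "multiple async tasks on one deadline" benefit described above.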


That looks oddly familiar and could easily be generalized... GitHub - tc39/proposal-cancellation: Proposal for a Cancellation API for ECMAScript

But in all seriousness @anuragvohraec, not only would you be more likely to get something like a cancel token through, it'd also get you better mileage in general (because cancellation is more generic than just timers).

await[timeout(1000)] someAsyncTask()

Yes, this looks perfect to me.
It gives a clear intent of what's happening and provides a more general approach.
