Exposed Promises?

I've found myself relying on this more and more, especially for async APIs using websockets or p2p, but I kind of just always want it.
Everyone seems to know it can be done, but I never see it mentioned in tutorials or best practice guides.

I'm curious: Is this something Promises should encourage by default?
Is there a reason Promises can't be cancelled natively?

(Specifically: I want to force resolve/reject early, prevent the promise from ever resolving at all, or replace its .then() list.)

function Pledge( executor ) {
    let shortcircuit, abort;
    const pledge = 
        new Promise( ( resolve, reject ) => {
            shortcircuit = resolve;
            abort = reject;
            // Route the executor's callbacks through the pledge so they can
            // be swapped later by redirect() / neuter(). Caveat: `pledge` is
            // not assigned yet, so the executor must not call them synchronously.
            executor( 
                ( ...args ) => pledge.shortcircuit( ...args ), 
                ( ...args ) => pledge.abort( ...args ) 
            );
        } );

    pledge.shortcircuit = shortcircuit; // force-resolve from outside
    pledge.abort = abort;               // force-reject from outside
    pledge.redirect = target => {
        // Forward any settlement to another pledge instead.
        pledge.shortcircuit = target.shortcircuit;
        pledge.abort = target.abort;
    };
    pledge.neuter = () => {
        // Prevent this pledge from ever settling.
        pledge.shortcircuit = () => {};
        pledge.abort = () => {};
    };
    
    return pledge;
}
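For readers who want to run it, here's a condensed, self-contained version of the Pledge above with a typical use: the task would resolve on its own eventually, but outside code settles it first. The timings and values are illustrative only.

```javascript
function Pledge(executor) {
  let shortcircuit, abort;
  const pledge = new Promise((resolve, reject) => {
    shortcircuit = resolve;
    abort = reject;
    executor((...a) => pledge.shortcircuit(...a),
             (...a) => pledge.abort(...a));
  });
  pledge.shortcircuit = shortcircuit; // force-resolve from outside
  pledge.abort = abort;               // force-reject from outside
  pledge.neuter = () => {
    pledge.shortcircuit = () => {};
    pledge.abort = () => {};
  };
  return pledge;
}

// A task that would resolve on its own in 50 ms...
const pending = Pledge(resolve => setTimeout(() => resolve('slow'), 50));

// ...but outside code settles it first. The timer's later resolve is a no-op.
pending.shortcircuit('early');
pending.then(value => console.log(value)); // logs "early"
```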

There's actually a proposal that's looking into providing a robust way to cancel async tasks (see here). The README outlines some reasons why they're not just extending the Promise API to do it; instead, they use the concept of a cancellation token to let API designers construct a smart cancelling system.

That's similar to the cancel flags I used before this system. Putting flag checks (or "if canceled" code) into every single call you write is bad practice.

Promises already stipulate a complete set of contingencies: reject, resolve, drop.
That proposal is just doing

oncancel( cancelled = true )
...
if( cancelled ) ... //absolutely everywhere... if you EVER forget, code will run that shouldn't, and probably with side effects
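For context from today's perspective: the cooperative-cancellation pattern that eventually shipped in browsers and Node is AbortController / AbortSignal, and it has exactly the shape described, in that the task itself has to opt in by watching the signal. A minimal sketch (`delay` and the timings are illustrative):

```javascript
// The task must cooperate: it listens to the signal itself.
function delay(ms, signal) {
  return new Promise((resolve, reject) => {
    const id = setTimeout(() => resolve('done'), ms);
    signal.addEventListener('abort', () => {
      clearTimeout(id);
      reject(new Error('aborted'));
    });
  });
}

const controller = new AbortController();
const request = delay(5000, controller.signal);
request.catch(err => console.log(err.message)); // logs "aborted"
controller.abort();
```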

I can't see why requiring a function to resolve from inside its executor is good practice. It's fundamentally unenforceable, and very inconvenient. What am I missing?

A promise represents a placeholder for the future value of work that has already begun. Only the thing that creates the promise (the task) can interact with cancellation by aborting, although promise consumers can also interact with cancellation by registering disinterest in the result.

Rejecting the promise is in no way the same as aborting the work nor unregistering fulfillment/rejection handlers, although it may be a reasonable effect of aborting the work.


I think we're on different pages.

I would say: "A Promise represents an optional code branch in which branch behavior is deferred to another agent."

Then saying, "Branch behavior can only be deferred to Agent X" is meaningless, isn't it? By definition, if control can be deferred, any agent(s) you defer to can defer to others.

Imagine forbidding use of && or || in if() expressions, then saying: "An if() branch must be controlled by only one expression."

It doesn't make sense.

I think all @ljharb was saying was that my reference to the cancellation proposal was irrelevant here, because you're not actually trying to "cancel" an action; all you want to do is ignore it.

If you actually "cancel" an action, that can only happen in the code that's performing the action, as only that code knows what needs to be done to properly cancel it. Sometimes cancellation isn't possible (as with a REST request), but sometimes it is (as with a setTimeout).

Your implementation still lets the action continue running, but you override how the promise behaves to force it to resolve to a different value than it would have done otherwise. Nothing got canceled.

That design means that by default, only the creator of a promise can ever resolve or reject it, which is a highly desirable guarantee.

The place to check the state of the work is inside the executor function.

That's a limitation of the current Promise API; it could be fixed. Generators also represent future values, and they don't suffer this limitation: they can respond to (or ignore) exceptions from the outside, without spaghetti-coding if (cancellationRequested) return everywhere.
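To make that concrete, a consumer can stop a generator from the outside with .return() (or inject an error with .throw()), and the generator's try/finally still gets to clean up, with no flag checks inside the task:

```javascript
function* task() {
  try {
    yield 'step 1';
    yield 'step 2';
  } finally {
    console.log('cleanup ran'); // runs even when the consumer cancels
  }
}

const it = task();
it.next();            // { value: 'step 1', done: false }
it.return('stopped'); // triggers finally, returns { value: 'stopped', done: true }
```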

If by "highly desirable" you mean "incompatible with network requests that may never respond".

If I'm not supposed to be using Promises to react to network requests, then what? Events? Those are impossible to control: any code anywhere can react to an event, because their keys are strings (not unique, not controllable).
Maintaining Event interfaces is a nightmare. You have no idea where in your code something is catching them.

for( const key of getImportantEvents() ) addEventListener( key, handleEvent );

I need multi-way communication between a Promise creator and the systems that depend on the Promise.

I can have that easily, by exposing the Promise as above.

Edit:
More important: Events are for recurring things. Promises are disposable / one-off.

revocable promises??
we already have revocable proxies.
maybe we need something like

const revocable = Promise.revocable( ( res, rej ) => doSmt )

revocable.then( a => doSmt )

// returns true on success, else false??
setTimeout( () => revocable.revoke(), 3000 )

I don't know exactly how it'll work. Hey just a suggestion😉
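One way this suggestion could be sketched in userland today (the name `revocable`, the revoke() return values, and the timings are all hypothetical):

```javascript
function revocable(executor) {
  let revoke;
  const gate = new Promise((_, reject) => {
    let used = false;
    revoke = () => {
      if (used) return false; // already revoked
      used = true;
      reject(new Error('revoked'));
      return true; // "true on success", as suggested above
    };
  });
  // Whichever settles first wins: the real work, or the revocation gate.
  const promise = Promise.race([new Promise(executor), gate]);
  promise.revoke = revoke;
  return promise;
}

const r = revocable(res => setTimeout(res, 100));
r.catch(err => console.log(err.message)); // logs "revoked"
r.revoke();
```

Note that this only ignores the underlying work rather than stopping it, which is exactly the distinction raised earlier in this thread.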

Other examples:

await renderer.rendering; //<-- This is a "refreshing" promise

//These are equivalent:
renderer.rendering.abort();
renderer.abortRendering(); //calls rendering.abort() internally, needs more code

Doesn't this happen all the time? Two-way communication between an async task originator and dependent systems.

The difference between this and a message bus is that we are guaranteeing 1 origin and we have a lifecycle. Once the originator and dependents agree that this cycle is dead, it is impossible for any code anywhere to be triggered by it.

If this kind of system is not a good fit for Promises, then I wonder what it would be called...

Maybe the problem has to do with a scenario like this:

const promisedAPI = new Promise((resolve, reject) => {
  // If you want to know what causes this promise to resolve or reject
  // all you have to do is look in this function definition.
  // That's the guarantee currently
})

export async function doSomething() {
  const api = await promisedAPI
  // ...
}

export async function doSomethingElse() {
  const api = await promisedAPI
  // ...
}

export function stop() {
  // Who would have guessed
  // there's now another way for that promise to be rejected.
  // We've lost our guarantee, and it's harder to track what causes the promise to reject.
  promisedAPI.abort()
}

You can imagine that this may make code harder to debug: every time you want to know how a particular promise rejects, you have to look everywhere it gets used, not just at how it was created.

Maybe, instead of adding exposed functions to all promises, we could create a new "ControlledPromise" type that gives control to outside users (maybe this is what you were effectively suggesting to begin with?). Its constructor accepts a promise, and it returns a new promise that exposes these functions.

For example:

// The ExposedPromise() constructor makes it clear that in this particular case
// you need to look at everywhere this promise gets used to know how
// it resolves/rejects
const promisedAPI = new ExposedPromise(new Promise((resolve, reject) => {
  // ...
}))

// ...

export function stop() {
  promisedAPI.abort()
}

I'm not a huge fan of this idea, because people may just start always using a ControlledPromise instead of a Promise for convenience whenever they create a new promise, and we'd lose the guarantee anyway.
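For concreteness, here is a minimal userland sketch of such a wrapper. The name ExposedPromise mirrors the example above, and the method names are hypothetical; none of this is a real platform API:

```javascript
// Wraps an inner promise and exposes settlement to outside callers.
function ExposedPromise(inner) {
  let resolve, reject;
  const outer = new Promise((res, rej) => {
    resolve = res;
    reject = rej;
  });
  inner.then(resolve, reject); // the creator can still settle it first
  outer.resolve = resolve;
  outer.abort = reject;
  return outer;
}

const exposed = ExposedPromise(new Promise(() => {})); // inner never settles
exposed.catch(err => console.log(err.message)); // logs "stopped"
exposed.abort(new Error('stopped'));
```

Incidentally, modern JavaScript now has Promise.withResolvers(), which hands you the resolve/reject functions of a fresh promise in much the same spirit.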

Interdependent tasks exist. I only write interdependent code if absolutely necessary.
But, when it is necessary, I need all the help I can get to make it work bug-free.

We can't avoid interdependent systems. But, maybe we can provide an API that helps control specific kinds of interdependency.

These Pledges have been extremely helpful in keeping my code easy-to-read and easy-to-follow (relatively speaking), because I know how they behave.

They don't have to be Promises, they just naturally evolved from Promises so I could use async / await.

Branch selection is only under the control of the creator of the Promise, unless the creator delegates it out.

Your pledge.abort doesn't actually abort the task. It only rejects the promise: downstream consumers get an early exception, but the executor has no idea that happened, and any tasks it started keep running.

You're supposed to use a promise with a cancel token.

@0X-JonMichaelGalindo - could you maybe supply a more complete example of something you're trying to achieve, where this pledge interface helps to clean it up? That might help us better picture the problem we're solving.

Yes. Which is necessary sometimes, but not exactly encouraged by Promises.

To be honest, every instance I have in production code right now just does something like

this.currentPledge = new Pledge( ()=>{} )

//...some task somewhere
X.currentPledge.then( /* ... */ )

It's a lot like an Event listener. But, we have to re-attach every time (unique / controllable), and instead of bubbling, we have exactly 1 communication event broadcast from any 1 of the "listeners", after which the Pledge's code cycle is definitively terminated.

It's more linear than having everyone post messages to some parent whenever they like.

1 set of listeners. 1 broadcast.

I've done a terrible job of explaining this! Your patience is amazing.

I think I'm using promises like events. They have fewer headaches because we retain 1 top-level control (by requiring re-registration) and we limit parallel communication to exactly 1 finalizing broadcast (guaranteed).

api.somePromise = new ExposedPromise();
websocket.send( 'some request' );

//...
websocket.onmessage = message => {
    //...
    if( isSomething( message ) ) api.somePromise.resolve( /* If anyone cares, here's the answer. */ )
    if( isOther( message ) ) api.somePromise.reject( /* If anyone cares, abort this. */ )
    //...
}
//...
websocket.onclose = () => {
    for( const pledge of api.allPledges )
        pledge.reject( /* If anyone cares, abort. */ )
}
//...
api.logout = () => {
    //...
    api.somePromise.reject( /*...*/ )
}

//...
//code that must only execute if nothing cancels somePromise
api.somePromise.then( /*...*/ );

//...on other threads, independent code paths racing to provide a solution or reason for cancellation:
if( /*...*/ ) {
    api.somePromise.resolve( /* My answer might be ignored. Use if desired. */ )
}

//...
if( /*...*/ ) {
    api.somePromise.reject( /* If this is still alive, it needs to abort. */ )
}

See how we get fail-safe parallel flow control?

These are the cases where working around that guarantee is very valuable:

Disinterest and data can originate from parallel code paths.
But only 1 time, after a scheduled start. So we still have a controlled, sequential flow despite the need for interdependent code.

You can almost use Events instead, like this:

API.currentEventKey = randomString();
 
addEventListener( API.currentEventKey, () => { /*...*/ } )

//...racing parallel code
dispatch( API.currentEventKey, new Error() )

//...
dispatch( API.currentEventKey, "The Solution" )

But there's no way to abort or "register disinterest".

clearAllEventListeners( currentEventKey ) //<--Not natively. DIY

But if you build your DIY flow model based on Events, you can't use async / await.
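One way to bridge that gap is to wrap the one-shot event in a promise, so the consumer can still await it. Here EventTarget (available in browsers and modern Node) stands in for whatever event emitter you're using:

```javascript
// Resolve a promise from a single event firing; { once: true }
// removes the listener automatically after the first dispatch.
function once(target, key) {
  return new Promise(resolve =>
    target.addEventListener(key, resolve, { once: true }));
}

const bus = new EventTarget();
once(bus, 'solution').then(event => console.log(event.type)); // logs "solution"
bus.dispatchEvent(new Event('solution'));
```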