syncAndAsyncIterator

That's not actually why flatMap exists; otherwise there would be filterMap and so on as well. The reason flatMap exists is that it is a single primitive operation for people coming from a functional programming background. It has nothing to do with performance.
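
For illustration, flatMap as a single primitive versus the two-step equivalent:

const words = ['it was', 'the best', 'of times'];

// One pass with the primitive:
words.flatMap(s => s.split(' '));
// => ['it', 'was', 'the', 'best', 'of', 'times']

// The equivalent two-step version:
words.map(s => s.split(' ')).flat();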

My point is that the 8x slowdown isn't going to just fade into the background, as other minor costs that can look major in microbenchmarks do. Instead it will tend to be incurred repeatedly and continuously as the complexity of the required work grows.

It might, but performance is tricky; it's hard to trust this assertion without concrete, real-world examples where that happens. I suspect it is false: your proposal is for a new kind of loop, and in my experience, people using loops tend to put all of their logic into the body of the loop rather than walking over their data repeatedly with multiple loops. That is, they have a single loop, as in:

let output = [];
for await (let item of data) {
  let x = transform(item);
  if (test(x)) {
    output.push(x);
  }
}

rather than:

let transformed = [];
for await (let item of data) {
  transformed.push(transform(item));
}
let output = [];
for (let x of transformed) {
  if (test(x)) {
    output.push(x);
  }
}

And if they only have a single loop, they only pay the cost once.

The individual writing the Excel parser offered you a real-world example. Their real-world use case slowed down by 40% when they attempted to use async iterators.

Their "real world use case" is incrementing a counter in the hot loop. That's still pretty close to a microbenchmark, I think. And that's compared to using callbacks directly, not a comparison to sync iterators.

The new syntax is already here.

for await? is not currently in the language, no. That's the new syntax I'm referring to.

Yes, but the async iterable syntax is already here, and it can be used to create and consume streams of characters, as in the sketch below. I think it's silly to suppose that a lot of people are going to be rolling hugely complex combinations of operations into a single for loop. Why would you want to encourage a programming style where for loops are a costly resource to be used only sparingly?
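
Here's roughly what that looks like today, with no new syntax (a minimal sketch; stream is assumed to be an async iterable of string chunks, e.g. a Node stream with an encoding set):

async function* chars(stream) {
  for await (const chunk of stream) {
    yield* chunk; // delegates to the string's iterator: one character at a time
  }
}

// Consume the character stream:
// for await (const ch of chars(stream)) { ... }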

But anyway, here's the thing. Right now you're not wrong. Current usage probably does not warrant this optimization.

That said, the vast majority of professional JavaScript programmers use functional methods to transform data, whether that be Array.prototype methods, lodash (still the most popular package on npm), or something else. Among these popular, heavily used utilities, you'll note that none offers a high-level API over async iterables. I've spent portions of the last two years building that library. I've kept it relatively quiet so far because I didn't start out as an expert in this space; I've had to iterate and learn as I go, and I didn't want to expose people to my mistakes and lots of breaking changes.

I'm about to release my work, though, and start encouraging people to use it. I have quit my job, and building these tools is how I am choosing to spend my time. I aim to displace lodash as the top package (eventually), and I think what I've built will do it. iter-tools blows existing libraries out of the water. All current offerings are missing something major, be it a comprehensive library of methods, API documentation, type definitions, test coverage, parity between sync and async operations, compatibility with tree shaking, a well-thought-out transpilation strategy, or some combination of the above.
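
To make "a high-level API over async iterables" concrete, here is the general shape of such utilities (a standalone sketch, not iter-tools' actual exports):

// Each stage is an async generator wrapping another async iterable.
async function* asyncMap(fn, iterable) {
  for await (const item of iterable) yield fn(item);
}

async function* asyncFilter(fn, iterable) {
  for await (const item of iterable) {
    if (fn(item)) yield item;
  }
}

// Stages compose, and each layer adds its own async iteration overhead:
// for await (const x of asyncFilter(test, asyncMap(transform, data))) { ... }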

You are free to wait and see if this comes to pass, but I think it will be an easy sell. I will be making it as easy as it ought to be to use async iterators to work with stream-type data, and if the main complaint new adopters have is that it's slower than it ought to be, I'm going to be sending them here. My hope is that a significant number of the people I convince will work for companies which have TC39 delegates. At that point we'll end up right back here. But if we could agree on where this is going, maybe we could get a head start?

Why would you want to encourage a programming style where for loops are a costly resource to be used only sparingly?

I don't think they're a costly resource. I think they're astonishingly cheap. They're not literally free, but for the vast majority of programmers, in real code rather than in microbenchmarks, their cost is not going to matter. I'm just observing that, empirically, when using loops people tend to put multiple operations within a single loop in preference to having a ton of different loops, and hence whatever minor costs those loops do entail are not likely to compound.

At that point we'll end up right back here.

Like I said, I think if you demonstrate that this is a significant and hard-to-avoid source of slowdown in a reasonably large class of real application, and it proves to be the case that engines can't optimize these patterns well enough without new syntax in the language, then that will be the appropriate time to try to push this proposal. Given how much engines have already been able to optimize async iterators, I don't think it is currently clear that either of those will prove to be the case.

I'll have to think some more about how to demonstrate the slowdown in a realistic sort of program.

In the meantime, there was some more discussion on the difficulty of optimizing this. What do you think about the bouncing pattern when two loops are executing concurrently? It would seem to break the conditions necessary for an engine-level sync-to-async optimization, and I suspect it would not be an uncommon circumstance. I guess I should file a ticket for such an optimization with the V8 folks and see what they say.
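
For reference, the bouncing pattern I mean is something like this (a minimal sketch): two for await loops draining different sources concurrently, alternating turns on the microtask queue:

async function* source(label) {
  for (let i = 0; i < 3; i++) yield label + i;
}

async function drain(iterable) {
  for await (const item of iterable) {
    console.log(item); // each iteration awaits, yielding to the microtask queue
  }
}

// Both loops run concurrently; their iterations interleave rather than
// one loop draining its source to completion first:
Promise.all([drain(source('a')), drain(source('b'))]);
// logs (roughly): a0, b0, a1, b1, a2, b2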

Sorry for bumping this and then deleting my post. I guess it stays bumped. Weird. I decided to make a new thread in an attempt to refocus the discussion.