Ah yeah, the generator is slow. Making a manual iterator object with a next() method for [Symbol.iterator] brings the for-of loop from 10 times slower down to 3 times slower, although that's still quite a bit slower than I hoped for:
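Here's roughly the shape of the change (a minimal sketch over a hypothetical node chain, not my actual benchmark code):

```js
// Hypothetical node chain standing in for the real data structure.
const head = { value: 1, next: { value: 2, next: { value: 3, next: null } } };

// Generator version: concise, but every step goes through generator machinery.
const viaGenerator = {
  *[Symbol.iterator]() {
    for (let node = head; node !== null; node = node.next) yield node.value;
  },
};

// Manual iterator object: a plain object with a next() method.
const viaManualIterator = {
  [Symbol.iterator]() {
    let node = head;
    return {
      next() {
        if (node === null) return { value: undefined, done: true };
        const value = node.value;
        node = node.next;
        return { value, done: false };
      },
    };
  },
};

for (const v of viaGenerator) console.log(v);      // slowest in my tests
for (const v of viaManualIterator) console.log(v); // noticeably faster
```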
Welp, looks like the best I can do so far is about ~2.8 times slower than a plain do-while loop, by using a plain iterator object for Symbol.iterator, or a forEach method with a plain while loop (looks like the for-of overhead is roughly the same as the callback overhead for forEach):
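The forEach variant looks something like this (again just a sketch with hypothetical names; the callback is the only per-item cost on top of the plain while loop):

```js
// forEach built on a plain while loop over a hypothetical node chain.
function forEachValue(head, callback) {
  let node = head;
  while (node !== null) {
    callback(node.value);
    node = node.next;
  }
}

const head = { value: 1, next: { value: 2, next: { value: 3, next: null } } };
forEachValue(head, v => console.log(v)); // roughly on par with the iterator object
```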
Maybe, but it seems unlikely. A traditional C-style for loop is effectively writing out verbatim the bare minimum that the machine needs to do. An entries iterator already creates two extra objects per iteration (the IteratorResult and the [index, value] pair) that the compiler would need to optimise out. Maybe the higher-tier JITs could.
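To make that concrete, this is the comparison I have in mind (illustrative only, not a benchmark):

```js
const arr = [10, 20, 30];
const doSomething = (i, v) => console.log(i, v);

// C-style loop: nothing per iteration beyond an index increment and a bounds check.
for (let i = 0; i < arr.length; i++) {
  doSomething(i, arr[i]);
}

// entries() iterator: each step produces a fresh IteratorResult object *and* a
// fresh [index, value] array, which the JIT would have to prove non-escaping
// in order to elide.
for (const [i, value] of arr.entries()) {
  doSomething(i, value);
}
```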
What might be interesting is whether there are web-compatible changes to the specification that would make iterators easier to optimize "out of the box" in the baseline interpreter, where performance wins can be most beneficial for things like application startup. Or perhaps opt-in, ahead-of-time JS-to-JS optimizers that can be configured to make certain assumptions, such as arrays never having holes and array iterators never being replaced.
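As a purely hypothetical illustration of what such an opt-in optimizer could do under those assumptions (this is not an existing tool, just a sketch of the rewrite):

```js
const items = [1, 2, 3];
const process = x => console.log(x);

// What the author writes:
for (const item of items) process(item);

// What the optimizer could emit, if it is allowed to assume `items` is a
// hole-free Array and Array.prototype[Symbol.iterator] has not been replaced:
for (let i = 0, len = items.length; i < len; i++) {
  const item = items[i];
  process(item);
}
```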
That said, I still use iterators all the time in my code. It's only in the very hot paths, e.g. in a core library or framework, that I switch to a C-style loop. In application code that only runs on the occasional button press, I haven't found iterators to be the bottleneck.
@aclaymore Yeah, I totally agree; I use for-of liberally and have no issues, even in render loops.
In my case, I'm working on a new project: porting Blender's mesh editing algorithms to JavaScript. I foresee a situation in the future where an artist will have millions of faces (they already do in Blender), and each face has a linked list of the edges around it. I'm imagining that, with that many lists and items, every bit of speed might help when it comes to mesh operations.
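Roughly the shape I have in mind, using a manual iterator like above (names are hypothetical and simplified, not Blender's actual data layout):

```js
// Each face points into a circular linked list of loops, one per edge around it.
class Face {
  constructor(firstLoop) { this.firstLoop = firstLoop; }

  // Manual iterator object so hot mesh operations avoid generator overhead.
  [Symbol.iterator]() {
    const start = this.firstLoop;
    let current = start;
    let started = false;
    return {
      next() {
        if (current === null || (started && current === start)) {
          return { value: undefined, done: true };
        }
        started = true;
        const value = current.edge;
        current = current.next;
        return { value, done: false };
      },
    };
  }
}

// A tiny triangular face: three loops in a ring, each referencing an edge id.
const l1 = { edge: 'e1', next: null };
const l2 = { edge: 'e2', next: null };
const l3 = { edge: 'e3', next: l1 };
l1.next = l2;
l2.next = l3;

for (const edge of new Face(l1)) console.log(edge); // e1, e2, e3
```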
But! I still have to get it all implemented, and then hopefully one day someone will make such a complex model with it. I bet there will be plenty of other, bigger bottlenecks to handle before this becomes relevant, if ever.