I don't understand why the template can't be made immutable (I don't mean frozen). Regardless, it should not be tampered with, even if it is made of mutable, non-frozen JS objects. The id field seems redundant.
You raise a good point re. components: what I suggested wouldn't cut it for client code passing dynamic values in attribute or children position to components. You need to preserve some of the tree structure in the container that holds the dynamic values.
I need to give this more thought, but I still think it can be achieved with hoisted ESX templates that are expected not to be mutated even though they technically can be (let's call that soft-immutable). I'll have a look at udomsay and the transformer, and come back if I have a working solution.
if the id can go, I'm OK with it ... but it won't work with the transpiler/Babel transformer, or not so easily, as it needs to be an object with mutable fields/properties, even if always the same one ... anyway, something I can work with ... regarding the rest, this is the ESXTokenRecord (named that because I've mentioned MutationObserver records before; it covers them all as a single struct):
const empty = Object.freeze([]);

class ESXToken {
  static ATTRIBUTE = Symbol('ESXToken.ATTRIBUTE');
  static COMPONENT = Symbol('ESXToken.COMPONENT');
  static ELEMENT = Symbol('ESXToken.ELEMENT');
  static FRAGMENT = Symbol('ESXToken.FRAGMENT');
  static INTERPOLATION = Symbol('ESXToken.INTERPOLATION');
  static STATIC = Symbol('ESXToken.STATIC');

  static Attribute = (dynamic, name, value) => new ESXToken(
    ESXToken.ATTRIBUTE,
    null, empty, empty,
    dynamic, name, value
  );

  static Component = (id, value, attributes = empty, children = empty) => new ESXToken(
    ESXToken.COMPONENT,
    id, attributes, children,
    false, value.name, value
  );

  static Element = (id, name, attributes = empty, children = empty) => new ESXToken(
    ESXToken.ELEMENT,
    id, attributes, children,
    false, name, void 0
  );

  static Fragment = (id, children = empty) => new ESXToken(
    ESXToken.FRAGMENT,
    id, empty, children,
    false, '#fragment', void 0
  );

  static Interpolation = value => new ESXToken(
    ESXToken.INTERPOLATION,
    null, empty, empty,
    true, '#interpolation', value
  );

  static Static = value => new ESXToken(
    ESXToken.STATIC,
    null, empty, empty,
    false, '#static', value
  );

  /** @private */
  constructor(
    type,
    id, attributes, children,
    dynamic, name, value
  ) {
    this.type = type;
    this.id = id;
    this.attributes = attributes;
    this.children = children;
    this.dynamic = dynamic;
    this.name = name;
    this.value = value;
  }

  get properties() {
    switch (this.type) {
      case ESXToken.ELEMENT:
      case ESXToken.COMPONENT: {
        const properties = {};
        const {length} = this.attributes;
        if (length) {
          for (let i = 0; i < length; i++) {
            const entry = this.attributes[i];
            if (entry.type === ESXToken.ATTRIBUTE)
              properties[entry.name] = entry.value;
            else
              Object.assign(properties, entry.value);
          }
          return properties;
        }
        // no attributes: fall through to the default null
      }
      default:
        return null;
    }
  }
}
type is any ESXToken.XXX type
id is either null, or a symbol or unique identifier usable as a WeakMap key. id can go if the instance itself is used as the id
attributes can be of type ESXToken.ATTRIBUTE or ESXToken.INTERPOLATION
children can be of type ESXToken.COMPONENT, ESXToken.ELEMENT, ESXToken.FRAGMENT, ESXToken.INTERPOLATION, or ESXToken.STATIC
dynamic is a boolean ... it's true by default for interpolations and optionally true for attributes. All other cases are false because static tokens, components, elements, and fragments are never dynamic per se
name, when present, is always a string (even an empty one)
value is either undefined (null?) or the type's value: the component reference, the attribute value, or the interpolation/static one
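As a quick illustration of how the factories above compose into a token tree, here is a minimal sketch. It re-declares a trimmed-down ESXToken so it stands alone, and the way a concrete template like <div class="box">{text}</div> would be tokenized is my assumption for illustration, not the transformer's actual output.

```javascript
// Trimmed re-declaration of the ESXToken struct above, for a self-contained sketch.
const empty = Object.freeze([]);
class ESXToken {
  static ATTRIBUTE = Symbol('ESXToken.ATTRIBUTE');
  static ELEMENT = Symbol('ESXToken.ELEMENT');
  static INTERPOLATION = Symbol('ESXToken.INTERPOLATION');
  static Attribute = (dynamic, name, value) =>
    new ESXToken(ESXToken.ATTRIBUTE, null, empty, empty, dynamic, name, value);
  static Element = (id, name, attributes = empty, children = empty) =>
    new ESXToken(ESXToken.ELEMENT, id, attributes, children, false, name, void 0);
  static Interpolation = value =>
    new ESXToken(ESXToken.INTERPOLATION, null, empty, empty, true, '#interpolation', value);
  constructor(type, id, attributes, children, dynamic, name, value) {
    Object.assign(this, {type, id, attributes, children, dynamic, name, value});
  }
}

// A plausible tokenization of <div class="box">{text}</div>:
const text = 'hello';
const div = ESXToken.Element(
  Symbol('template-id'), 'div',
  [ESXToken.Attribute(false, 'class', 'box')],
  [ESXToken.Interpolation(text)]
);
// div.name is 'div'; div.attributes[0].value is 'box';
// div.children[0].dynamic is true and carries the interpolated value.
```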
This record covers everything around the suggested syntax already, and it unifies ESX as a new primitive.
This is, perhaps, a topic that would be healthy to explore a bit more. Perhaps I haven't done so yet because I love JSX myself (I'm a React lover). But I'm realizing that this could be a really important issue. So far, the main argument for ESX seems to be "JSX has been successful, so let's make it a native feature". I know there's been talk about making sure ESX isn't tied to the DOM, so let's focus on leaving HTML out of the picture and see what use-cases it fulfills. Say I'm a Node user who's building, say, a CLI tool, or perhaps a web server (one that's not serving HTML pages). What benefit does ESX bring to a user like me?
Well, I can think of one thing - perhaps I'm supporting an older API and need to spit out XML, and I could use an XML-building library that integrates with ESX. Which would be neat.
What else?
I know it's been thrown around that ESX is good for any tree-like data structure, but I don't really understand that point. JSON is good for tree-like data, and JavaScript already has syntax for that. XML, on the other hand, is a fairly clumsy data format to work with, which is perhaps why we've been moving away from it as an industry.
When I'm talking about early errors, I'm still talking about runtime errors, either at parsing time, or at first execution. I understand that parsing time errors are a little better, but I don't see how much more valuable that would be, especially if there could be design time syntax validation through tooling.
Which is still a subset of the JS and even of the Web ecosystem. My doubts are about imposing the parsing costs on the whole JS ecosystem. The disambiguation lookaheads needed for JSX are significant.
I find both similarly readable, and the template tag version more explicit about scoping of tag elements.
Regarding parsing for spread attributes, I don't understand the argument. Both the template tag and ESX-in-JS approaches would need to parse the spread depending on context. Is there a parsing difference I fail to see?
This is a facetious comparison; tooling with support for esx as a template tag could highlight it to the same level as ESX in JS.
If you want this to become a proposal, let's try to follow the proposal process. The first steps are about exploring the problem space. A shim / transform really starts making sense once reaching later stages. While a playground is extremely valuable, it implicitly constrains the space and there is plenty of exploration that can be done conceptually without it at first.
I would like this question answered. I have a hard time personally identifying cases where ESX/JSX would solve problems I have as a JS developer of non-DOM-based programs.
I think these two sentences contradict each other. Or better: if you love JSX, and the industry indicates you're not alone, I don't understand the claim that the industry has moved away from XML, when JSX is effectively XML (+ interpolations) embedded in JS, with scope resolution for components, so it's better than just being data.
The only demo I wrote so far uses ESX to stream text as HTML. The list of frameworks and libraries that use JSX on the server to share component logic is huge; we're talking dozens if not hundreds.
Moreover, if you use React Native you'll write JSX, and that will result in native component UIs.
Do you want to use Espruino, or any Internet of Things device that uses JS as its programming language?
This is a bit of a stretch ... it's the other way around: JSON is built on top of JS, and JS has a core parser for it. It's not that JSON is a special syntax that JS understands; JSON is just, literally, JS.
However, if JSON is universal, so is XML. But ESX is not really XML: it's template literals on steroids, with scope resolution and the ability to represent both what JSON represents and much more, as long as its resulting structure is well defined and usable by any JS engine, on any kind of platform, and for any need where XML-like definitions with components and scope resolution are desirable over JSON.
The premise of TypeScript is that IDEs provide instant feedback on errors; the assumption is that it compiles to correctly interpretable JS. As IDEs already support JSX as is, I don't think parsing time should be a huge concern, but of course the interpreter must be sure the ESX is written correctly. Its definition, though, is so simple that I doubt this would be a real concern; or better, it won't be any harder than throwing while parsing malformed JS ahead of time.
While other engines still use "use" directives to specialize file or function parsing goals, if this is the only blocker we can explore a way to disambiguate ESX from the rest of the code, so that when no ESX exists there's no parsing cost at all, but when ESX is explicit that part uses the different parser.
JSX is fairly simple to parse as described by its standard, so hopefully it won't add much extra cost in implementation effort either, although the disambiguation concern is real and I hope we can tackle that as the first step in the process.
I've already explored all these concepts for months now, and I've worked in production for 5+ years with every possible template-literal-tag-based workaround to mimic JSX in every possible way. My conclusion is that nothing is superior to JSX at doing what JSX does. Template literals' only advantage is the uniqueness of their outer template; everything else is a potentially disaster-prone approach to mimicking tree structures through chunks of strings, without any way to correlate scoped components to those strings except by using a much more bloated and slower parser done 100% in userland, and it's still ugly, despite the "de gustibus non est disputandum" part:
... but also ...
Pretty much, no. The template tag just accepts anything as the tag element, so it's error prone compared to ESX or JSX, which expect a valid Component reference, either global or in scope, and that reference should be a callback (or a class), not just any reference that leads to errors.
This is part of the improved DX JSX already offers, and template literal based solutions are nowhere even close to competing with JSX adoption. But I won't repeat my thoughts further; they are summarized as such: esx as a template literal tag is a dead end, and I won't contribute to its proposal or specification unless things are done in a completely different way than they are done today with template literals. It would be a less desirable and much harder to write and implement specification, and confusing too, as it would add new meanings to strings and templates ... hard pass for me.
More on this topic, which I insist should be the first one to address as it's technically the only blocker: I wonder if simply imposing parentheses around ESX wouldn't be enough to grant it.
const a = 1;
const b = (<a />);
This throws already, and so would {<a />} or even [<a />], but neither of those is backward compatible; or better, both might lead to confusion, except maybe the {<a />} case. However, (<a />) would already be compatible with everything out there based on JSX, so it won't break anything, and it will make the community happy.
I forgot to answer this one ... there is no context in template literal tags, only chunks of strings. You don't know if ... was intended as spread attributes on an element, or just as part of the text content of the element it is in, because there's no context there.
On top of that, JSX disambiguates the spread within the interpolation; with template literals there's no disambiguation at all, as shown in the example. Look closer: the spread in JSX/ESX is not outside the parentheses, it's intended as an explicit operation within the interpolation. With templates my eyes can't instantly tell the difference, and the parser has a long way to go to understand and provide context for each value of the interpolation. It's doable, but it means creating an XML-ish parser within an already parsed program, and that's slow and bloated if done in userland.
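A minimal sketch of the missing-context problem: a template literal tag receives only string chunks and interpolated values, with no syntactic hint about the position a value occupied (the esx tag here is a stand-in, not a real implementation).

```javascript
// A toy tag that just reports what a template literal tag actually receives.
const esx = (strings, ...values) => ({ chunks: [...strings], values });

const props = { id: 'x' };
// Meant as spread attributes:
const asAttrs = esx`<div ${props}>hi</div>`;
// Meant as a child:
const asChild = esx`<div>${props}</div>`;

// Both calls hand the tag one value and two string chunks; only by
// re-parsing the chunks can the tag tell attribute position from child
// position, whereas JSX encodes that position in its grammar.
```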
The industry has, in large part, moved away from XML. Not entirely. Indeed, it's not possible to entirely move away from XML-style markup, since languages like HTML use it, and HTML isn't going away. XML will never be "dead", but it is much less popular than it used to be.
If HTML were being built again from scratch today, it's very possible that it would have been built on top of JSON instead of XML, and then there would have been no need for JSX to power React.
Ok, fair, HTML isn't the only UI target that's based on XML, so JSX/ESX helps with other UI-building scenarios outside of HTML.
What does this code snippet do? It's dynamically configuring how an IOT device is set up, and which pin it should output power to when it needs to flip a light on/off? So, without ESX, you would have to do something like this?
And if I understand how ESX transformation works, the solution will look like this:
(I'm using ESX as currently defined in the proposal repo - I haven't been closely following the conversations about the exact details of how the data should be structured, so if I'm using something a bit out of date, sorry).
function getAllFileNames(data) {
  // Returns null if it fails to find a "name" attribute.
  // Otherwise, returns the value of the "name" attribute.
  const findFileNameFromAttrs = attrs => {
    for (const attr of attrs) {
      if (attr.type === ESXToken.ATTRIBUTE && attr.name === 'name') {
        return attr.value;
      } else if (attr.type === ESXToken.INTERPOLATION) {
        // Keep scanning the remaining attributes if the interpolated
        // ones don't carry a "name".
        const found = findFileNameFromAttrs(attr.value);
        if (found !== null) return found;
      }
    }
    return null;
  };

  // Per the struct above, an element's tag name lives in .name (.value is undefined).
  if (data.type === ESXToken.ELEMENT && data.name === 'file') {
    const maybeFileName = findFileNameFromAttrs(data.attributes);
    if (maybeFileName !== null) {
      return [maybeFileName];
    }
  }

  if (data.type === ESXToken.ELEMENT || data.type === ESXToken.FRAGMENT) {
    return data.children.flatMap(getAllFileNames);
  } else if (data.type === ESXToken.INTERPOLATION) {
    return Array.isArray(data.value)
      ? data.value.flatMap(getAllFileNames)
      : getAllFileNames(data.value);
  } else {
    throw new Error();
  }
}
Feel free to let me know if I got anything wrong there, or if it could be cleaned up.
This being said, I understand that ESX is intended to be used by libraries, and it's ok for libraries to pay some complexity cost in order to improve the UX of their library. And, I'm sure some re-usable utility functions could be extracted out of the above example, in order to make this sort of thing easier to handle.
UPDATE:
Another way to shape the file data without using XML or JSON would be as follows:
I don't wish to argue that XML is completely irrelevant except for the few places that are currently using it. It's certainly possible that there are still places where it does better than JSON at specific jobs. And I agree that the file-structure XML does look a little more elegant than the JSON version (not by a ton though).
My personal issues with XML are:
There's multiple ways to represent a concept, and no real standard way to choose between them. For example, I can structure data like "<person name=me age=8 />", or I can structure data like "<person><name>me</name><age>8</age></person>". The first format can be a little more concise, but the second format is nice when you need to add more complicated data, like an address. Sometimes you end up with a mix between the two approaches. (Yeah, I guess this is a relatively minor point, but it does bug me)
A "list" is nothing more than a bunch of child elements inside a parent element. A "mapping" is the same thing. This can sometimes add difficulty when reading and writing XML, because it can be hard to know if the contents of a certain XML tag are supposed to be list-like or mapping-like.
It is difficult to work with XML data programmatically, precisely because of issue number 2. An XML element's children are often given to the programmer as a list to preserve order, in case the order is needed (i.e., this is list-like data), but often what you really want is a mapping. Things like XPath have since been invented to help deal with the complexities of navigating XML data, but IMO that's really just a band-aid for the fact that XML is difficult to work with in the first place.
JSON has some of these issues to a degree as well, but, IMO, they're much more mild compared to the XML issues.
Now, when it comes to a library/consumer relationship, the advantage to something like ESX is that all of the complexities of handling the XML data can be done within the library. All the end-user has to worry about is building the XML, and with a tool like ESX, that's much easier to deal with. This kind of relationship takes out most of my personal issues with XML.
I'd like to keep the conversation less speculative and more current data oriented, which is: JSX had an enormous success and it's not going anywhere, it's actually being implemented by more and more cross platform libraries and frameworks.
Sure, but that's less semantic in terms of intent, or DX.
With ESX the solution is provided out of the box, according to the consumer's data. There's no variant of possible JSON representations of the same concept; it's one, and any library can digest that one representation across platforms.
JSON, indeed, has many ways to represent data, ESX (JSX) only one ... I find this a plus, an improvement, over non standardized JSON trees.
It's irrelevant for this discussion so apologies I won't review that code.
My previous point ... many ways; in ESX you have just one way anyone can understand everywhere.
ESX would like to solve this issue ... there's only one way to represent a generic tree.
Both work with ESX libraries that understand ESX, but of course moving attributes to children requires counterparts able to understand that too. It's still way more semantic and stricter than any possible JSON representation of a tree, as there's no standard around that (AFAIK).
Not really: in ESX everything is static. A dynamic list can be passed as an interpolated Array value, which is pretty explicit.
The industry is using JSX already so once again, I'd like to keep the conversation around the current state/reality, not personal feelings, thanks!
Great for XPath, I've used it a lot over the last 5 years; unfortunately it has nothing strictly to do with this proposal.
ESX enables such mapping for reactivity's sake and any other use case ... it's exactly like template literal tag mapping, except it has built-in scope resolution and a terser syntax, which is desired out there.
JSON is data; it can't even represent callbacks, so I am not really sure why you keep mentioning JSON ... maybe you mean ECMAScript 3rd edition JS style instead, but your wording is all about JSON.
JSON can't represent ESX out of the box, because JSON can't represent symbols, callbacks, other special values, and so on ... I'd like to have this clarified: when you say JSON, what do you really mean?
The goal of this proposal in a nutshell, I'm super pleased you saw that too.
Awesome! I hope we can then agree that while XML sucks, ESX is something slightly different and slightly better: not just data/trees, but a semantic way to define intents, be these for UI purposes, program purposes, and so on.
That being said, after moaning a lot about ESX as a template literal tag, I am playing around with an esx tag that acts exactly like ESX via the transformer, but based on template literals and some inevitable extra overhead. If I reach a satisfying point I might reconsider my position and stop pushing for this proposal, but regardless of how my extra effort goes, I already see it as inferior DX compared to ESX, putting a lot of failure-prone boilerplate on developers' shoulders.
I wish JSX lovers would've helped me out here instead of claiming JSON is the future or that it has it all, as that's not the reality I live in these days. But I also take no as an answer to a proposal; I just hope that won't take forever to become official, otherwise I'll keep trying, but at this point I also feel I'm repeating myself a bit.
This lengthy thread surely doesn't help either, but opening a new one doesn't look like a good idea either.
Thanks to all participants for the ideas, blockers (you had some good reasons), and so on. I am close to moving this proposal forward on my own and trying again in a couple of years, if this is ever proven to be worth it.
I've read here "follow the proposal process", but inventing complex solutions like these can't be just hypothetical. I've used current, worldwide shared data to argue my reasons, and provided attempts and solutions that demonstrate this works. I hoped the udomsay library scoring better than any strict JSX runtime at js-framework-benchmark would have convinced more people, but apparently that wasn't enough.
Happy to iterate or answer more, if needed, hoping somebody will actually see this as a great opportunity instead of an issue.
If none of these claims was clear: with JSON you can have {name: "a", value: "b"} or {"a": "b"}, while with ESX you can only have either <a>b</a> or <a value="b" /> ... there's no playing around with the way the tree, or the name parts, are represented: it's a tree!
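To make that contrast concrete, here is a minimal sketch; the shape names and the read helper are mine, for illustration only.

```javascript
// The two ad-hoc JSON shapes mentioned above for the same fact:
const shapeA = { name: 'a', value: 'b' };
const shapeB = { a: 'b' };

// A generic crawler must probe which schema it is looking at: does the
// node carry explicit {name, value} fields, or is the key itself the name?
const read = node =>
  'name' in node && 'value' in node
    ? [node.name, node.value]
    : Object.entries(node)[0];

// Both shapes encode the same <a value="b" /> fact, but the crawler has to
// guess the schema; a fixed token struct removes that guesswork.
```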
(I had completely re-written my thoughts, then tried to post them, but somehow the older version of what I was trying to say got posted instead - I think this form's draft feature got in the way - sorry about that)
Let me start fresh. I think I ran off in 100 different directions in that previous post. And, while some of those directions would be interesting to explore more (and, perhaps, I'll come back to some of those discussion points later), I probably should be more focused. I'll also try and stop comparing with JSON - we'll see if I last with that though. When I talked about JSON previously, I meant either true JSON, or JSON-serializable data. But, like you said, the template abilities that ESX has makes it offer more than what JSON can offer, which made some of the comparisons confusing.
The heart of the issue at hand right now is, "Putting aside the use-case of having to build XML for places that literally require XML (e.g. HTML, android UIs, legacy endpoints, etc), what does ESX have to offer"? (This question also now ropes in android UI building tools as those too are XML based - that was a good point to bring up, thanks).
You gave a couple of examples where you felt ESX was more expressive than what JS currently has.
One was with Arduino-configuration code.
configureBoard( // (I assume this data needed to be passed into some sort of function to be a complete example)
  <Board name="pluck">
    <LightSwitch pin={env.PIN2} />
  </Board>
);

// vs

configureBoard({
  name: 'pluck',
  lightSwitch: {
    pin: env.PIN2
  }
});
You stated the non-ESX solution was "less semantic in terms of intent, or DX". Could you elaborate on why you feel this way? Both examples look about the same to me.
And, I admit, the XML version is a bit cleaner and more concise. So, this is a win for ESX.
I want to take a moment to discuss what ESX is offering.
1. A concise syntax for describing tree-like data, for certain kinds of trees (not all trees work well with it; e.g. play around with binary trees in ESX vs JS today, and you'll find that object literals do a better job at representing those).
2. The preservation of where data is interpolated in those trees (e.g. <p>{<br />}</p> isn't the same as <p><br /></p>).
3. XML-looking syntax, which is nice if you're specifically trying to build XML documents.
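The binary-tree caveat in the first point can be illustrated with a small sketch; the tree shape and traversal are my own example, not anything from the proposal.

```javascript
// A binary search tree reads naturally as nested object literals; an
// XML-ish encoding would need artificial <left>/<right> wrapper elements
// just to distinguish the two children.
const tree = {
  value: 5,
  left:  { value: 3, left: null, right: null },
  right: { value: 8, left: null, right: null },
};

// In-order traversal over the literal form is direct:
const inOrder = node =>
  node === null ? [] : [...inOrder(node.left), node.value, ...inOrder(node.right)];
// inOrder(tree) yields the values in sorted order: [3, 5, 8]
```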
For the use-case of describing a file structure in JavaScript, ESX did win slightly over JS because it had point number 2. Point number 3 wasn't involved in this use-case, since we're specifically trying to avoid that point right now. And point number 1 isn't involved there either. In fact, the only use case I'm currently aware of for the behavior described in point 2 is DOM-diffing (or any-UI-diffing). I'm curious if there are any other examples where this comes in handy. This is, after all, the reason it's so much more difficult for a library to do trivial things, like finding all of the file names in ESX data vs normal JS data as previously shown (most of the complexity of that logic had to do with dealing with how data could get interpolated).
My point is: ESX is a well-defined struct anyone can crawl. What you find in each attribute or child can vary, but the struct always remains the same.
With JSON, there is no contract for how data should be crawled. A smart, terser JSON can have {a: 1} fields; another can have {name: "a", value: 1}. The latter is less desirable; the former requires O(N) Object.keys(node) operations (or for/in loops that break ASAP). There is no contract for how the data can be represented, while JSX, or in this case ESX proposed as a standard, will offer that.
In ESX you have only one way to represent children, element names, or attributes; with JSON everyone can do pretty much what they want, which is why JSON Schema exists, and why JSON Schema is extra overhead: there's no standard way to define a schema. With ESX, or JSX, the schema-structure is at least well defined out of the box.
My example uses a component; your example uses just data. That's the main point/difference, something you keep working around with your examples.
So why are we still comparing apples with bananas? ;-)
hard pass ... as it's a non-compelling use case to discuss for JSX or ESX. Use the right tool for the job, binary trees as XML is not using the right tool for the job.
This is a feature, as that <br /> could be something else after an update or behind a ternary operator.
The whole Web industry is using JSX these days, so I won't comment further on the fact that most developers love JSX, including yourself, if I remember correctly. None of the JSX lovers are likely outputting XML, so I think this specific topic is out of scope.
btw, I am moving forward with ESX as a template literal tag, as it's clear I never managed to convince anyone here that ESX as syntax is a glorious possibility for the future of the language.
I am happy to discuss technical issues with ESX, but I've already said everything I needed to say, and everything I think is great about ESX, so I might not be interested in repeating myself; this thread is already very long, and my latest replies have all been discussed or explained in one way or another already.
Happy to discuss technical limitations though; the rest feels like a matter of opinion, and if you have a strong opinion against it, it surely won't be me writing binary trees in XML that fixes that. I accept and respect your opinion, but please let's move forward, thanks.
Now try to imagine fragments and components in that mix, with components not even being possible in JSON, because it can't hold references without discarding functions or breaking on recursion ...
then we have ESX:
// no schema needed
// in the IDE: <div>text<span data-thing="123"></span></div>
// in the output
{
  type: 3,
  name: "div",
  attributes: [],
  children: [
    {type: 6, value: "text"},
    {type: 3, name: "span", attributes: [{name: "data-thing", value: "123"}], children: []}
  ]
}
The current serializer, as JSON is what we're apparently after, would reduce that to this, and revive it to the previous one:
So this is about the incompatibilities I see when JSON is mentioned for something JSON can't even handle, whereas the serialization my latest module provides can revive even components in another environment or realm.
To summarize: ESX grants an always known and understandable hierarchy out of an extremely simple-to-define intent via its template, without needing to think about what can be serialized or not, or how best to represent any tree-based structure. It gives everyone a common DSL ground to play with, as opposed to providing just data thrown into any of the schema variants JSON could have; here I'd say there's no competition between these two formats for this purpose.
Actually, I just hit a wall. And yes, that was a smart move to me; or better, it worked.
Following this discussion, I refactored the Babel transformer and started rewriting udomsay for compatibility with both the transformer and the template literal based solution.
I guess we've all been too clever in deciding that a template must, or should, already be a unique reference, but now I'd like to present the mighty Array case:
Here are my current findings after investigating why I would end up with all items having num equal to 3:
if the template for <span>{num}</span> is unique, so is its tree of tokens, where the interpolation value would always point at the latest num in the loop. As that map happens before the outer div can do anything to understand its content, we're doomed
if the template is unique, even if its properties are updated, when the mapping reaches that interpolation starting from the outer, unique template, it will always find the latest value set on that interpolation, even if the interpolation is not always the same (Babel transformer vs template literal solution)
if the mapping happens instead over an always-different tree of tokens, it needs the outer id to do the mapping once; but as the tree of tokens will always be different, the found value will always be the expected one. This was the first implementation of ESX, and that's why writing a library based on it, and a transformer, made sense: so I could prove the logic was working as intended
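The failure mode described in these findings can be sketched without any ESX machinery; the token shapes here are simplified stand-ins for the real token trees.

```javascript
// When one shared, mutable token stands in for every <span>{num}</span>,
// each .map() iteration overwrites the same interpolation slot:
const sharedToken = { type: 'interpolation', value: null };
const broken = [1, 2, 3].map(num => {
  sharedToken.value = num; // update the one shared template
  return sharedToken;      // every item is the same object
});
// Every consumer now sees only the last value set in the loop:
const brokenValues = broken.map(t => t.value); // [3, 3, 3]

// A fresh token tree per invoke keeps each value as received:
const fresh = [1, 2, 3].map(num => ({ type: 'interpolation', value: num }));
const freshValues = fresh.map(t => t.value);   // [1, 2, 3]
```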
Now, after so many hints on how to make it better, without having any playground to demonstrate those hints were actually improving the situation, I find myself with two broken proposals published as npm modules.
As a result, I am now back at the whiteboard trying to figure out how to solve this nested Array issue; but in short, the initial proposal was already working and delivering real-world use cases and results.
If anyone has any idea how to solve this, I am listening, but to me it seems like a separate id to retrieve the mapping for that specific tree of tokens is the only way forward. It's not the most GC- or heap-friendly way forward, but likely the only one that works, if the ergonomics of the template literal solution, or the transformer, are to be on par with JSX.
And then again ... I feel like we over-engineered a pattern that, via JSX, has been working for nearly 10 years, so maybe it's just OK to implement ESX with that somewhat disturbing, but working, outer unique identifier, creating a new token tree per invoke, as opposed to being smart and updating the previous/latest known template.
I'm honestly not sure I follow what the problem is since my JSX/ESX syntax knowledge is limited. Using template literals as that's the only thing my brain comprehends right now, couldn't the following work:
the inner esx`<span>${num}</span>` would have a unique frozen array, per template literal semantics. The esx tag could memoize its output based on the compound of the frozen array and the values of the slots. That way 3 unique results would exist for this specific tagged template.
for the outer esx`<div>${[1, 2, 3].map(innerMapper)}</div>` to work as expected, you'd need to consider arrays in slots as the spread of the array's values in the compound key. That doesn't seem too egregious since it looks like arrays are treated as a special case value already by the ESX syntax.
I probably missed something, but maybe this can trigger some ideas.
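The template-identity behaviour this suggestion relies on is observable today; here is a minimal sketch (the esx tag is a toy that just records calls, not a real ESX implementation).

```javascript
// The same tagged template literal passes the same frozen strings array on
// every evaluation, so that array can serve as an identity-based cache key.
const calls = new WeakMap();
const esx = (strings, ...values) => {
  if (!calls.has(strings)) calls.set(strings, []);
  calls.get(strings).push(values);
  return strings; // return the identity itself, for demonstration
};

const render = num => esx`<span>${num}</span>`;
const first = render(1);
const second = render(2);
// first === second: one literal, one frozen strings array, two recorded
// calls with different slot values, which is what a compound cache key
// (strings identity + values) would have to distinguish.
```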
Thanks, but unfortunately it doesn't. A template is a template; it has no notion of where it is, meaning it makes no sense to store values just because, as that defeats the purpose of the unique template.
I also would like to see concrete implementations that solve this simple case; otherwise we keep talking theory, and that has backfired already.
The key is to map tokens within a well-known, already-traversed tree. We can do the same with cloneNode(true) and childNodes crawling, except we don't necessarily want a cloneNode(true) per template by default each time, or the same template will trash previous nodes on every single update.
The matter is complicated, and nobody has solved it the ESX way to date, so we should try to understand where we are now and what we can do to improve that. Or at least, this was my initial goal; now I've just spent 2 months on something broken, and I'm close to giving up and keeping the babel-transform-hinted-jsx as the reference implementation, as that already works and scores well in every single benchmark compared to other non-Function-based solutions, or non-JSX-based ones (Solid-JS).
We can cache that well-known struct/template and map the children tree to deal with: [0]. That's enough to reach the temporary placeholder that is the interpolation.
Now we have a well known template for the span and a mapping of children to deal with, still [0] would be enough.
The mapping suggests how to reach the children, it will be an interpolation, and its value should be the received num.
if the span is unique, its interpolation value will always point at the latest num in the loop, because the Array logic can clone a new span at each index, but the mapping will always point at the well-known token structure. If that known token structure gets updated, all consumers of that list of spans will contain the same num
if the span is not unique, and its token tree structure is created fresh each time, mapping to that child and its value will carry exactly the value received at the time that runtime token tree was created
This means the former will map to 3 each time, while the latter will hold the tree in memory until needed, but that tree will point at value 1, 2, or 3, at child 0, for each map operation that happens now, or ever.
Updating a static tree of tokens can also have side effects: if the token tree gets updated at span "0", but span "2" had a signal or some event dispatched, span "2" can't reach the current value of its interpolation.
I am not sure this is well explained, but at least I've tried ... there's no way, with template / token tree uniqueness, to address lazy / on-demand updates through the very same struct, as that struct will mislead every expectation.
edit, in short: having a unique token tree mapped per "template" doesn't work for functions that always return the same template, or Array operations (extremely common) that always return the same item. I could map every newly cloned node per item to the current token value, but then the mapping would be both temporary and slower compared to just having a new tree each time, with an id as the unique reference for the mapping.
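The index-path mapping described above (the [0] reaching the interpolation) can be sketched as follows; the token shapes and the resolve helper are simplified stand-ins for illustration.

```javascript
// A "mapping" can be a path of child indexes resolved against whichever
// token tree is current, so fresh trees each keep their own values while
// the path itself stays cached and shared.
const resolve = (node, path) => path.reduce((n, i) => n.children[i], node);

// One fresh <span>{num}</span>-like token tree per Array item:
const makeSpan = num => ({
  name: 'span',
  children: [{ type: 'interpolation', value: num }],
});

const spans = [1, 2, 3].map(makeSpan);
// The same cached path [0] resolves to a different value in each tree:
const values = spans.map(span => resolve(span, [0]).value); // [1, 2, 3]
```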