Little-endian entrenchment and Float16Array

Proposal: https://github.com/tc39/proposal-float16array (a proposal to add float16 TypedArrays to JavaScript)

This isn't isolated to the Float16Array proposal, but since one of the proposal's stated motivations is that WebGPU supports float16, it exacerbates the following problem.

So at present effectively all web runtimes run on little-endian hosts, and WebGPU specifies that buffer data must be little-endian. WebGL's endianness was never specified as far as I can tell, but a lot of libraries built around it seem to depend on little-endianness.
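
For context, here's a minimal sketch of how host byte order leaks through typed arrays today (this is a common detection idiom, not part of any spec):

```js
// Typed arrays expose the host's byte order: write a 16-bit value,
// then look at its first byte through a Uint8Array view.
const probe = new ArrayBuffer(2);
new Uint16Array(probe)[0] = 0x0102;
const isLittleEndian = new Uint8Array(probe)[0] === 0x02;
console.log(isLittleEndian ? "little-endian host" : "big-endian host");
```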

I don't know that there is anything actionable here at this point, but it does seem to me to be a concern that Float16Array will entrench little-endian even further, given its large use cases with WebGPU.

Like, should something be done here? Perhaps, at the least, typed arrays should take an option to specify endianness (in the same way that DataViews do), so WebGPU code using them can stay portable to any big-endian systems, e.g. something like:

new Float??Array(size: number, endianness?: "host" | "little-endian" | "big-endian");
// And so on for other overloads
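
For comparison, DataView already lets you spell out endianness per access; a minimal sketch of that route (note that getFloat16/setFloat16 are themselves additions from this proposal, so they're shown commented out; the getFloat32/setFloat32 pattern is what exists today):

```js
// DataView takes an explicit littleEndian flag on every access, so this
// is portable regardless of host byte order.
const buf = new ArrayBuffer(8);
const view = new DataView(buf);

view.setFloat32(0, 3.25, /* littleEndian */ true);
console.log(view.getFloat32(0, true)); // 3.25 on any host

// With this proposal, the same pattern extends to float16:
// view.setFloat16(4, 1.5, true);
// console.log(view.getFloat16(4, true)); // 1.5
```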

Endianness is an orthogonal concern that cuts across everything, and should really be considered separately.

If anything, DataView having a different default (big) from typed arrays (host, not necessarily little!) is a bigger issue.
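
A small illustration of that default mismatch (a sketch; the littleEndian argument on DataView accessors simply defaults to false, i.e. big-endian):

```js
const ab = new ArrayBuffer(4);
new Float32Array(ab)[0] = 1.0; // written in host byte order

const view = new DataView(ab);
// DataView reads big-endian unless told otherwise, so on a
// little-endian host these two calls disagree:
console.log(view.getFloat32(0));       // ~4.6e-41 on little-endian hosts (bytes reversed)
console.log(view.getFloat32(0, true)); // 1 on little-endian hosts
```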