As the name suggests, an Array that guarantees the invariant of length > 0 and has no empty slots (never sparse). This ensures length always matches the actual count of elements, reduces "foot-guns", and matches the name of the class itself! (the latter point is just an aesthetic reason)
Most of what you're suggesting seems to fall under the bucket of "let's fix the bad decisions related to our current array class", but then you also have what looks to me like an unrelated idea: "let's make it so the array can't be empty". So echoing what ljharb asked - what is the core problem you'd like to see solved? Is this mostly about having an array type that can't be empty, and the other items are just being thrown in as "we could also fix these at the same time"? Could you expound on why it would be valuable to have an array type that can't be emptied?
As per the links I shared, there are cases where a piece of code requires a collection of 1-or-more values. Even ES has defined methods with that requirement (reduce, when called without an initial value). Perhaps some DOM or Web API requires one? (I tried searching, but couldn't find an example)
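As a concrete illustration of the reduce requirement mentioned above, here's a minimal demonstration (the `sum` helper is just for the example):

```js
// Array.prototype.reduce, called without an initial value,
// throws a TypeError when the array is empty.
const sum = xs => xs.reduce((acc, x) => acc + x);

console.log(sum([1, 2, 3])); // 6

try {
  sum([]);
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```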
I'm aware a.length > 0 is enough, but the point is to not have to check every time, and instead "rest assured" that the list won't be empty at any point in the program. For further reference about the general concept, see:
However, I can't come up with a "compelling enough" argument in favor of this specific class. I'm afraid this is one of those "greater than the sum of its parts" situations:
Doing something on the ES side won't be useful enough to justify the complexity cost.
Doing something on the TS side alone won't yield satisfying results.
If there was some sort of "collaboration" between ES & TS (or if ES defined a static type-system), there could be enough reasons to justify the addition.
Yes! (but the idea could be extended to other collections, such as Set) I've been thinking more about this, and wondered: "What's the point of forbidding sparsity if the array can still contain undefined?" So I think it'll be more useful if inserting undefined throws. We could also forbid null, but that may be too extreme.
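A rough sketch of how "inserting undefined throws" could behave (the `nonEmptyArray` factory and its semantics are hypothetical, not from any proposal; a real implementation would also need to guard `length` truncation and `delete`):

```js
// Hypothetical factory: wraps an array in a Proxy whose set trap
// rejects writing `undefined` to numeric indices.
function nonEmptyArray(...items) {
  if (items.length === 0)
    throw new TypeError('must not be empty');
  if (items.includes(undefined))
    throw new TypeError('undefined not allowed');
  return new Proxy(items, {
    set(target, prop, value) {
      // Only guard numeric indices; writes to `length` etc. pass through.
      if (value === undefined && /^\d+$/.test(String(prop)))
        throw new TypeError('undefined not allowed');
      return Reflect.set(target, prop, value);
    },
  });
}

const a = nonEmptyArray(1, 2, 3);
a[1] = 5;          // fine
// a[0] = undefined; // would throw TypeError
```

Note this sketch is deliberately partial: `a.length = 0` and `delete a[0]` would still break the invariant, which hints at why a proper implementation (or engine support) would be needed.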
If this happens, it'd make more sense for the N.E.A. API to be similar to that of Map, which would make it easier to polyfill (no need for Proxy). But if length became size, it would no longer be "array-like" :( . That may need to happen anyway, as an N.E.A. could have no hard-coded theoretical limit, which would require size to be a BigInt
That's what I meant! I assumed the array can't contain undefined, so the only way to return it is if there's 1 element.
Yep, I agree. An alternative would be to add toPopped and toShifted, which would shallow-clone the array, potentially returning a classic array.
So, focusing just on the non-empty-array use-case from the perspective of a library author, here are a few thoughts:
From the API's perspective: if it wants to force you to provide a non-empty array, how would it do so under the current proposal? It would have to assert that the array is an instance of the non-empty-array class before trying to use it. If it doesn't assert that, the end user may be able to pass in a normal array; if the APIs are similar enough, the normal array would also work just fine, unless it was empty, in which case things would blow up in the user's face exactly as they do today.
The problem is that asserting that an array is an instance of this special non-empty-array class is just as verbose as asserting that the array is non-empty. If we really wanted to solve this problem statement, a better solution might be to add a ".assertNonEmpty()" function to all arrays, which makes it slightly easier to assert this constraint.
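A minimal sketch of such a helper, written as a standalone function rather than a prototype addition (the name `assertNonEmpty` is just the one suggested above; nothing like it exists in the language):

```js
// Throws unless the argument is a non-empty Array;
// returns it unchanged so calls can be chained.
function assertNonEmpty(arr) {
  if (!Array.isArray(arr) || arr.length === 0)
    throw new TypeError('expected a non-empty array');
  return arr;
}

console.log(assertNonEmpty([1, 2, 3])); // [ 1, 2, 3 ]
// assertNonEmpty([]); // would throw TypeError
```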
All of this is viewing the problem through the lens of a library author (or, more generally, an API producer/consumer relationship, which may also exist within a single project). There are other lenses through which to approach the problem as well. It would be nice to have a concrete example to discuss: the links you shared showed someone implementing an array like the one you're proposing, but it's difficult to see why such an array was needed.
Approaching this from another lens: say we have, in a single project, an array that's shared and many places can mutate it, and we want to make sure it stays non-empty.
This runs into another issue - one very common way of changing an array is to create an updated copy, then swap out the original with the copy. E.g. reassigning an array to a filtered copy.
While it's possible to make a non-empty-array class whose .filter() method returns another instance of non-empty-array, that doesn't solve the whole problem. What's going to stop someone from (either accidentally or on purpose) re-assigning your non-empty-array instance to a normal array that is empty? You could prevent re-assignment (e.g. if the array is on a frozen object), but that would prevent a lot of useful functionality, such as filtering, from being used.
I agree, that would be better! But then we need to freeze the array, to ensure the invariant holds for that particular instance:
```js
/**
@throws if `a` is not a non-empty `Array`
*/
const f = a => {
  'use strict';
  if (!(Array.isArray(a) && a.length))
    throw new TypeError;
  // This freezes `a` for the caller, too!
  // If the dev forgets this,
  // the JSDoc won't document it.
  const nea = Object.freeze(a);
  a = undefined;
  // ...
};
```
```js
/**
@throws if `a` is not an NEA
*/
const g = a => {
  if (!NonEmptyArray.is(a))
    throw new TypeError;
  // preserve reference
  const nea = a;
  a = undefined;
  // ...
};
```
And what about multi-dimensional arrays? There's no deepFreeze.
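That said, a recursive deepFreeze can be sketched in a few lines (this is just an illustration; freezing the parent before recursing means already-frozen objects are skipped, so cyclic structures terminate):

```js
// No built-in deepFreeze exists; a minimal recursive sketch.
function deepFreeze(obj) {
  Object.freeze(obj);
  for (const value of Object.values(obj)) {
    if (typeof value === 'object' && value !== null && !Object.isFrozen(value))
      deepFreeze(value);
  }
  return obj;
}

const matrix = [[1, 2], [3]];
deepFreeze(matrix);
console.log(Object.isFrozen(matrix[0])); // true
```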
AFAIK, it's impossible to replace an object with another while preserving the reference, because each reference is associated with a unique instance (I suspect realms break this assumption). So, to "replace" an object, you have to replace the reference stored in a variable:
```js
let a = new NonEmptyArray(0, 1, 2);
const b = a;
// possible, even after `freeze`
a = [];
a === b; // false
```
I see no problem with that. If `a` had been declared const, this would've been impossible.
I'm confused: Object.freeze and filter are both shallow:
```js
'use strict';
const a = [0, 1, 2];
Object.freeze(a);
a.filter(n => n % 2); // [1]
```
What I meant is: say you want to guarantee an invariant that the array found at myUser.groups is always non-empty.
Well, just use a non-empty array instance and freeze the myUser object.
Now say you want to write some code that updates the myUser.groups array so it doesn't contain any admin-level groups. I.e. you want to use the filter function.
Well, there isn't an in-place filter method; instead, filtering requires re-assignment, which means you can't freeze the myUser object. But if you stop freezing myUser to allow re-assignment, then you can't enforce this invariant anymore.
More simply - a lot of the array methods are written under the assumption that if you want to modify an existing array, you can just re-assign it. But if re-assignment is allowed, then invariants can't be enforced. There's a conflict between the design of existing array methods and the idea of trying to enforce an invariant on a particular array instance.
myUser.groups is an object property, not an array instance - in this case, it may hold an array instance, upon which the desired invariant can absolutely be enforced.
Right. An individual array instance can have an invariant. But is that actually useful?
In the myUser.groups example, your choice is either to:
Make it so the single source of truth for getting information on that user's groups requires you to do a property access, such as myUser.groups. This makes any invariants placed on a groups array incomplete, because the property can always be (accidentally) reassigned so it no longer holds that invariant (i.e. someone reassigns it to a normal array instance). By "incomplete", I mean that anyone trying to get the truth from this single source of truth must do that property access, and that property access is not guarded by the invariant.
Make it so the array reference itself is the single source of truth. This allows you to freeze the myUser object, since there's no need to re-assign, but it prevents you from using many built-in array methods to update the groups (i.e. no myUser.groups = myUser.groups.filter(...)).
Pick your poison.
There is, however, a way to make option 1 work: we could make myUser.groups into a getter/setter pair that checks the invariant every time it gets updated. But if we're going to do that, we can go all the way and enforce the entire invariant without the help of a NonEmptyArray class, by doing something like this:
```js
// Initialize with a frozen, non-empty array so the invariant
// holds from the start ('default' is just a placeholder value).
let groups = Object.freeze(['default']);
export const myUser = {
  get groups() {
    return groups;
  },
  set groups(newGroups) {
    if (newGroups.length === 0 || !Object.isFrozen(newGroups)) {
      throw new Error('Must assign with a frozen, non-empty array');
    }
    groups = newGroups;
  },
};
```
Since every mutating array method now also has a non-mutating counterpart, you don't really lose any capabilities by forcing the arrays themselves to be frozen.
I should mention - I'm bringing this all up mostly as a way to poke around at (what I believe to be) the problem statement. I recognize that there is value in partially applying an invariant against a source of truth (as defined above): a partial invariant might not be able to give you concrete guarantees, but it can still help find some bugs, which is still useful, so it is very possible that a NonEmptyArray class would be the best solution to the problem. But it has its limits, and there might be other ways to approach the problem and solve it differently (such as applying the invariant on a getter/setter).