Sorry, let me see if I can be more clear about the specific nature of my concern.
I see functions as being a fusion of two different kinds of things. One thing: they are algorithms. The algorithm executed by a function is immutably tied to its identity without us having to do anything.
The second thing they are is objects. A function can store keys and values, and most APIs that are expecting an object with properties foo and bar would also accept a function-object with properties foo and bar.
Here's the specific case that concerns me:
let { assign, freeze, isFrozen } = Object;

let doubleValue = (wrapper) => {
  // The fix is to check isFrozen when typeof wrapper is 'function' too
  if (typeof wrapper === 'object' && !isFrozen(wrapper)) {
    throw new Error();
  }
  let value1 = wrapper.value;
  try { wrapper.value++; } catch (e) {}
  let value2 = wrapper.value;
  return value1 + value2;
};

let d1 = doubleValue(freeze({ value: 2 }));
let d2 = doubleValue(assign(function () {}, { value: 2 }));
console.log(d1, d2); // 4 5
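The fix named in the comment — treating functions the same as objects in the guard — would look something like this (a sketch; the error message is hypothetical):

```javascript
let { assign, freeze, isFrozen } = Object;

// Fixed guard: functions are objects too, so they get the same check.
let doubleValueFixed = (wrapper) => {
  let t = typeof wrapper;
  if ((t === 'object' || t === 'function') && !isFrozen(wrapper)) {
    throw new Error('wrapper must be frozen');
  }
  let value1 = wrapper.value;
  try { wrapper.value++; } catch (e) {}
  let value2 = wrapper.value;
  return value1 + value2;
};

console.log(doubleValueFixed(freeze({ value: 2 }))); // 4
// doubleValueFixed(assign(function () {}, { value: 2 })); // now throws
```

Note that Object.isFrozen returns true for primitives, so even the typeof filter is only needed to decide what to freeze, not what to reject.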
This is what I mean: nothing about this demo in which I break the contract of frozen data involves a function which is peeking inside of the descriptors of objects.
Sorry, but that doesn’t resolve my confusion at all. Your proposed isDeepFrozen, Endo’s isPassable, and the existing Object.isFrozen all return true for primitive values, so filtering on typeof “object” is not just an unnecessary complication but a bug.
However, consider an object obj like { a(){}, get b(){} }. Your deepFreeze implementation, because it ignores accessor functions, makes Object.isFrozen return different results for Object.getOwnPropertyDescriptor(deepFreeze(obj), "a").value and Object.getOwnPropertyDescriptor(deepFreeze(obj), "b").get — despite both being functions defined as part of the same “deeply-frozen” object. In fact, it isn’t even possible to write a predicate that robustly differentiates a function originally defined as a method from one originally defined as an accessor, which makes a deepFreeze that treats them differently even more surprising.
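To make the asymmetry concrete, here is a hypothetical deepFreeze that recurses through data-property values but skips accessors — a sketch of the behavior being described, not the actual implementation under discussion (and with no cycle handling):

```javascript
// Hypothetical deepFreeze: walks data properties, never visits accessors.
let deepFreeze = (obj) => {
  Object.freeze(obj);
  for (let key of Reflect.ownKeys(obj)) {
    let desc = Object.getOwnPropertyDescriptor(obj, key);
    let v = desc.value;
    if ('value' in desc && v !== null &&
        (typeof v === 'object' || typeof v === 'function')) {
      deepFreeze(v); // methods get frozen...
    }
    // ...but desc.get / desc.set are never visited
  }
  return obj;
};

let obj = deepFreeze({ a() {}, get b() {} });
let aDesc = Object.getOwnPropertyDescriptor(obj, 'a');
let bDesc = Object.getOwnPropertyDescriptor(obj, 'b');
console.log(Object.isFrozen(aDesc.value)); // true  (method was frozen)
console.log(Object.isFrozen(bDesc.get));   // false (getter was skipped)
```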
Well, this is what's supposed to happen: we're supposed to talk until we're at least not confused by each other's ideas anymore.
I'm on the same page as you that filtering out primitives (before freezing) is likely an unnecessary complication. Essentially, code that I had introduced as a perf optimization had created a bug, and I'm not even sure the optimization was meaningful. At the very least it shouldn't introduce a bug!
I'm still not in agreement with what you're saying about property descriptors. You're checking to see if descriptor.get is frozen. To what end? What question is being answered? What pitfall is being prevented?
I'm also still unsure what reflection has to do with this. You can't cross into an object's descriptors by accident (as you can with prototype vs own properties, say). You have to do it on purpose with a call like Object.getOwnPropertyDescriptor.
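To illustrate the distinction I mean: ordinary property access runs a getter without ever exposing the function itself, and reaching the getter requires an explicit reflective call (a minimal sketch):

```javascript
let holder = { get x() { return 1; } };

// Ordinary access runs the getter; you never see the function itself:
console.log(holder.x); // 1

// Reaching the getter function requires a deliberate reflective call:
let desc = Object.getOwnPropertyDescriptor(holder, 'x');
console.log(typeof desc.get); // "function"
```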
I surely hope that I am open to persuasion, but I am not persuaded yet.
If it’s not frozen, it’s an information channel: someone can stick info on it and read it later. Immutability isn’t just “the parts I care about won’t change”; it’s also “no part, including the ones I don’t know or care about, will change”.
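A minimal sketch of that channel, using an unfrozen function shared between two parties:

```javascript
// An unfrozen function shared between two parties is a communication
// channel, even if neither party ever calls it.
let shared = function () {}; // imagine this is handed to both parties

// Party A smuggles data out:
shared.secret = 'MacGuffin';

// Party B, holding the same reference, reads it back:
console.log(shared.secret); // "MacGuffin"

// Freezing closes the channel:
Object.freeze(shared);
try { shared.leak = 'more data'; } catch (e) {}
console.log(shared.leak); // undefined
```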
OK, again that makes sense but leaves me wanting more. I'm reading up on HardenedJS and its usages. Seems like they're all smart contracts right now. BABLR would want to use it for zero-trust third-party plugins to an ecosystem of parsers, transpilers, linters, formatters and the like.
The closest thing HardenedJS mentions is the prevention of eavesdropping attacks as demonstrated in its security challenge: Web Challenge | Hardened JavaScript
These attacks make use of side channels, and so I now understand that HardenedJS goes to lengths to eliminate side channels.
The thing that still isn't fully resolved in my head is that I still think we're talking about a different kind of thing than a side channel attack here -- it'd be more like a back channel. If I were to set up a challenge to test for security against back channels I'd create two containers both with attacker code in them: one would have a MacGuffin, the other would have network access. The challenge now is to exfiltrate the MacGuffin with both containers working to break encapsulation. The defender wants to set the containers up in such a way that no information can move between them.
Is this really possible for the defender to win, and can someone confirm that this kind of security is a goal?
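One concrete way the attacker pair could win, if the environment weren't hardened: shared mutable intrinsics act as a rendezvous point. This sketch is hypothetical — the "containers" are just closures that share nothing except the (mutable) Array.prototype:

```javascript
// Container A: holds the MacGuffin, has no network access,
// but can write to a shared primordial.
let containerWithSecret = (macguffin) => {
  Array.prototype.__stash = macguffin;
};

// Container B: has network access, holds no MacGuffin,
// but can read the same primordial and exfiltrate it.
let containerWithNetwork = () => {
  return [].__stash; // looked up via Array.prototype
};

containerWithSecret('MacGuffin');
console.log(containerWithNetwork()); // "MacGuffin"

// Hardened JS's lockdown freezes the primordials, so the write in
// containerWithSecret would fail and this channel would be closed.
```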
https://hardenedjs.org/ explicitly mentions “safe plugin systems”, along with “supply chain attack resistance” and “integrity in the face of adversarial code in the same process”.
A Taxonomy of Security Issues categorizes both “side channel” (unintentional, e.g. vulnerable timing measurement) and “covert channel” (intentional) as “non-overt”, and protection against both is in scope.