iMath and uMath (Integer Math)

The iMath and uMath objects would provide methods for 32-bit signed and unsigned integer math, respectively.

JS already has Math.imul for multiplication, but nothing for other operations.

iMath.add(2147483647, 1) // -2147483648
uMath.add(4294967295, 1) // 0

iMath.sub(0, 1) // -1
uMath.sub(0, 1) // 4294967295

iMath.mul(2**15, 2**16) // -2147483648 (same as Math.imul)
uMath.mul(2**15, 2**16) // 2147483648

This idea can be expanded to other C-like operations.
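
For instance, truncating division could work the same way. A rough sketch (hypothetical idiv/udiv helpers, using the | 0 and >>> 0 truncation JS already has):

const idiv = (a, b) => ((a | 0) / (b | 0)) | 0        // truncates toward zero, like C
const udiv = (a, b) => ((a >>> 0) / (b >>> 0)) >>> 0

idiv(-7, 2)          // -3
udiv(4294967295, 2)  // 2147483647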

Here's what the polyfill would look like for safe integers:

const uMath = {
	add: (a, b) => (((a + b) % 0x100000000) + 0x100000000) % 0x100000000,
	sub: (a, b) => (((a - b) % 0x100000000) + 0x100000000) % 0x100000000,
	mul: (a, b) => (((a * b) % 0x100000000) + 0x100000000) % 0x100000000
};
const iMath = {
	add: (a, b) => uMath.add(a, b) | 0, // | 0 reinterprets the unsigned result as signed 32-bit
	sub: (a, b) => uMath.sub(a, b) | 0,
	mul: Math.imul // can reuse it I guess ¯\_(ツ)_/¯
};

Numbers are always stored as floats, so to make this work, you're basically casting a float to an int, doing the operation, then casting back.
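
JS already exposes those casts through its bitwise operators (the spec's ToInt32/ToUint32 conversions), for example:

const toInt32  = x => x | 0     // ToInt32: wraps into the signed 32-bit range
const toUint32 = x => x >>> 0   // ToUint32: wraps into [0, 2**32)

toInt32(2147483648)  // -2147483648
toUint32(-1)         // 4294967295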

From the outside, I believe the only visible effect this would have is that we'd see integer overflow behavior in these operations, and maybe some auto-truncating (not shown in the polyfills). Why is integer overflow a desirable effect to have? Why would someone actually want an integer overflow to happen? What makes that "safe"?

Note that the reason imul exists is that, given two 32-bit integers, the answer might be up to a 64-bit integer, while JS's Number type can only support integer precision up to 53 bits. Doing correct 32-bit multiplication without imul, then, requires you to reimplement multiplication from the ground up, which is tricky and slow, especially compared to just doing a 64-bit mul in hardware.
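
To make that concrete (a quick illustration with a product that needs more than 53 bits):

console.log(Math.imul(0xFFFFFFFF, 0xFFFFFFFF)) // 1 (exact low 32 bits, as a signed value)
console.log((0xFFFFFFFF * 0xFFFFFFFF) | 0)     // 0 (the double product has already lost its low bits)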

Also, imul emulates the C-like 32-bit op: it wraps, returning just the low 32 bits of the product interpreted as a signed 32-bit integer. If you want unsigned semantics or a smaller int size, you can derive that from its result pretty easily.
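
For example (a small sketch):

const umul32 = (a, b) => Math.imul(a, b) >>> 0 // reinterpret the signed low 32 bits as unsigned

console.log(umul32(2 ** 15, 2 ** 16)) // 2147483648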

Addition of two 32-bit numbers, on the other hand, can only yield a 33-bit number, which JS can represent just fine, so handling the overflow on your own is easy.
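
For instance (a rough sketch, assuming the inputs are already 32-bit values so the exact sum fits in a double):

const iadd32 = (a, b) => (a + b) | 0   // signed 32-bit wrapping add
const uadd32 = (a, b) => (a + b) >>> 0 // unsigned 32-bit wrapping add

console.log(iadd32(2147483647, 1)) // -2147483648
console.log(uadd32(4294967295, 1)) // 0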

So multiplication is the only one of these operations that genuinely needs engine support, and imul already covers it; the rest of what you're asking for is very easy to implement in userland. You'll need to make a much stronger argument for this being something that needs library support, based on some combination of it being a common operation in practice, and/or difficult to do correctly/performantly in userland.

Recently, I used Mulberry32 random number generation in JS. Originally written for C, it relies on this overflow behaviour. To reproduce its exact C behaviour I can use imul for the multiplications, but I also had to use a modulo for when a sum exceeded 2**32. I realize this use case is rather niche, but I thought I'd present my idea anyway.
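
Roughly, the JS port looks like this (a sketch of the commonly circulated version; here the addition is wrapped with >>> 0, which plays the same role as the modulo I mentioned):

function mulberry32(seed) {
	let a = seed >>> 0;
	return function () {
		a = (a + 0x6D2B79F5) >>> 0;                   // 32-bit wrapping add: this sum is what can exceed 2**32
		let t = a;
		t = Math.imul(t ^ (t >>> 15), t | 1);         // 32-bit wrapping multiplies via imul
		t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
		return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // scale to [0, 1)
	};
}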

Also, for clarity, I was saying that my example methods are meant to be used with safe integers (less than 2**53).

JS already has arbitrary-precision integer arithmetic with BigInt. If you need fixed-width integer arithmetic, that's also relatively easy to build on top of it; nightly V8 has even learned to optimize these operations when the fixed width is <= 64:

const signed   = (x, b) => BigInt.asIntN(b, BigInt(x))
const unsigned = (x, b) => BigInt.asUintN(b, BigInt(x))

const i32 = x => signed(x, 32)
const i64 = x => signed(x, 64)
const u32 = x => unsigned(x, 32)
const u64 = x => unsigned(x, 64)

const U32 = {
  add(a, b) { return u32(u32(a) + u32(b)) },
  sub(a, b) { return u32(u32(a) - u32(b)) },
  mul(a, b) { return u32(u32(a) * u32(b)) },
}

const I32 = {
  add(a, b) { return i32(i32(a) + i32(b)) },
  sub(a, b) { return i32(i32(a) - i32(b)) },
  mul(a, b) { return i32(i32(a) * i32(b)) },
}

console.log(I32.add(2147483647, 1)) // -2147483648n
console.log(U32.add(4294967295, 1)) // 0n

console.log(I32.mul(2 ** 15, 2 ** 16)) // -2147483648n
console.log(U32.mul(2 ** 15, 2 ** 16)) // 2147483648n
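
And since the helpers take the width as a parameter, the same pattern scales up to 64 bits, e.g. using the u64 helper above:

const U64 = {
  add(a, b) { return u64(u64(a) + u64(b)) },
}

console.log(U64.add(2n ** 64n - 1n, 1)) // 0n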