Monotonic time

Monotonic timers already exist in the language via Atomics.wait and Atomics.waitAsync, and implementing those efficiently requires tracking monotonic time and even reading it.

  • Embedded MCUs almost always expose a monotonic reference clock. ARM provides one in-core (SysTick), and RISC-V requires one in every implementation that exposes status registers (read: nearly every implementation). ARM's SysTick is only 24-bit, so to cover more than about 16.8 seconds on a 1 MHz clock (or about 8.5 minutes on a 32768 Hz clock), you have to track how many times the counter has wrapped and increment a shared overflow counter. That counter can be combined with a SysTick read and knowledge of the reload value to produce a monotonic timestamp. RISC-V's timer is instead 64-bit, so a simple loop of "read high, read low, read high, retry if the new high differs from the old high" gets you the value on bare metal. (Both techniques are sketched after this list.)
  • Every major operating system, from FreeRTOS to Windows and Linux, offers a way to read a monotonic clock. To avoid spamming the OS with futex (or equivalent) syscalls when implementing timers, you need to measure the monotonic time going in. Performance measurement might be the most visible use case, but timer implementation accounts for far and away the most actual calls.
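
Here is a minimal sketch of both bare-metal techniques in JavaScript. The register accessors (readOverflowCount, readSysTickVal, readMtimeHi, readMtimeLo) and SYSTICK_RELOAD are hypothetical stand-ins for the real memory-mapped reads and the interrupt-maintained wrap counter; an engine would do this in native code.

```js
// Sketch only: the accessors and SYSTICK_RELOAD below are hypothetical.

// ARM SysTick: extend the 24-bit down-counter with a wrap count that the
// SysTick interrupt handler increments on every reload.
function armMonotonicTicks() {
  for (;;) {
    const wrapsBefore = readOverflowCount();
    const current = readSysTickVal();          // counts down from SYSTICK_RELOAD to 0
    const wrapsAfter = readOverflowCount();
    if (wrapsBefore !== wrapsAfter) continue;  // wrapped between reads; retry
    const elapsedInPeriod = SYSTICK_RELOAD - current;
    return BigInt(wrapsBefore) * BigInt(SYSTICK_RELOAD + 1) + BigInt(elapsedInPeriod);
  }
}

// RISC-V mtime: a 64-bit counter read as two 32-bit halves on RV32.
function riscvMonotonicTicks() {
  for (;;) {
    const hi = readMtimeHi();
    const lo = readMtimeLo();
    if (readMtimeHi() === hi) {                // no carry between the two reads
      return (BigInt(hi) << 32n) | BigInt(lo);
    }
  }
}
```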

In light of this, I propose a new method: Atomics.monotonicNow(). This returns a 64-bit nanosecond-precision bigint time relative to an implementation-defined start offset and with implementation-defined resolution. Time deltas can be determined via BigInt.asIntN(64, next - prev), so no extra helper is needed there.
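
A quick sketch of the proposed API in use (Atomics.monotonicNow() does not exist yet, and doSomeWork is just a placeholder):

```js
// Both readings are bigint nanoseconds from an implementation-defined origin.
const prev = Atomics.monotonicNow();
doSomeWork();
const next = Atomics.monotonicNow();

// Wrap-safe delta: BigInt.asIntN(64, ...) keeps the subtraction within the
// same 64-bit space the clock is defined over.
const elapsedNs = BigInt.asIntN(64, next - prev);
```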

The clock must keep ticking during system sleep. Appropriate syscalls that yield this behavior (a normalization sketch follows the list):

  • Linux: clock_gettime(CLOCK_BOOTTIME, &ts)
  • macOS: mach_continuous_time()
  • Other BSDs: clock_gettime(CLOCK_MONOTONIC, &ts)
  • Windows: QueryInterruptTimePrecise(&hundred_nanos)
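
As a sketch, here is roughly how an engine might normalize each platform's reading into the bigint nanoseconds this proposal returns. The raw fields below are hypothetical representations of each syscall's output, not real bindings.

```js
// Hypothetical per-platform normalization to bigint nanoseconds.
function toNanos(platform, raw) {
  switch (platform) {
    case "linux":   // clock_gettime(CLOCK_BOOTTIME) -> { sec, nsec }
    case "bsd":     // clock_gettime(CLOCK_MONOTONIC) -> { sec, nsec }
      return BigInt(raw.sec) * 1_000_000_000n + BigInt(raw.nsec);
    case "macos":   // mach_continuous_time() ticks, scaled by mach_timebase_info numer/denom
      return BigInt(raw.ticks) * BigInt(raw.timebaseNumer) / BigInt(raw.timebaseDenom);
    case "windows": // QueryInterruptTimePrecise() -> units of 100 ns
      return BigInt(raw.hundredNanos) * 100n;
  }
}
```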

Why nanosecond? That matches the maximum precision anyone implements. Note that this is just precision, not resolution.

Why 64-bit? 2^64 nanoseconds provides enough room for ~584 years, and that's good enough for the foreseeable future. Also, it's the smallest common width afforded by operating systems.
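
The arithmetic, for reference (using 365-day years):

```js
// 2^64 nanoseconds expressed in whole 365-day years.
const years = (2n ** 64n) / (1_000_000_000n * 60n * 60n * 24n * 365n);
console.log(years); // 584n
```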

Existing runtime precedent is also plentiful (a quick comparison follows the list):

  • performance.now() is implemented by many platforms and returns a floating-point millisecond value with up to microsecond precision.
  • Node has process.hrtime(), returning [secs, nsecs], and process.hrtime.bigint(), returning exactly the shape I'm proposing for Atomics above (a bigint nanosecond timestamp).
  • XS has Time.ticks and Time.microseconds. You can synthesize a 64-bit time from those using the overflow-tracking trick I detailed above for ARM's SysTick timer.
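
For comparison, here is roughly what the Node and web precedents return (the XS case is omitted since the synthesis is the same as the SysTick sketch above):

```js
// Browsers/Node/Deno: float milliseconds since the time origin, sub-ms precision.
const ms = performance.now();

// Node: bigint nanoseconds from an arbitrary origin -- the same shape as the
// proposed Atomics.monotonicNow().
const ns = process.hrtime.bigint();

// Node (older form): a [seconds, nanoseconds] tuple.
const [secs, nsecs] = process.hrtime();
```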

Nit: it's "only" ~584 years.

Thanks for the catch! Fixed.

What benefit does this provide over performance.now()?

That is a valid alternative: bringing performance.now() itself into TC39.

I just want to see this standardized outside WHATWG so highly constrained runtimes (notably embedded ones) can implement it. Also, it just feels like a feature hole to have timed waits in the spec but not the standard way to minimize them.

Any runtime is free to implement whatever additional features they'd like; not being in TC39 does not prevent embedded runtimes from implementing this. Standardizing something in TC39 is a fair bit of work, especially something which is already in WHATWG, so you'd probably need a stronger reason to get someone to actually pursue this.
