My (very limited) understanding from reading 2 Conformance and 6.1.6.1 The Number Type is that all conforming implementations must represent numbers in the double-precision 64-bit binary format of IEEE 754-2019, and that this implies supporting subnormal/denormalized values. (I don't have access to the IEEE spec, so I can't confirm this myself, but my understanding has always been that implementations conforming to IEEE 754 are required to implement subnormal values, and that modes like "flush-to-zero" are non-standard. I could be wrong.)
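For what it's worth, here's a minimal probe of gradual underflow that I'd run in an engine; it assumes the usual IEEE 754 round-to-nearest semantics and only demonstrates what that particular engine does, not what the spec requires:

```js
// Probe for gradual underflow: halving the smallest positive normal double
// should yield a subnormal, whereas a flush-to-zero mode would yield 0.
const minNormal = 2 ** -1022;      // smallest positive normal double
const half = minNormal / 2;        // 2 ** -1023, a subnormal under IEEE 754

console.log(half > 0);             // true with subnormal support, false if flushed
console.log(half === 2 ** -1023);  // true: the division is exact, no rounding
```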
However, when reading through the language spec, the following excerpt from 21.1.2.9 `Number.MIN_VALUE` caught my eye:
> The value of `Number.MIN_VALUE` is the smallest positive value of the Number type, which is approximately 5 × 10⁻³²⁴.
>
> In the IEEE 754-2019 double precision binary representation, the smallest possible value is a denormalized number. **If an implementation does not support denormalized values, the value of `Number.MIN_VALUE` must be the smallest non-zero positive value that can actually be represented by the implementation.**

(emphasis mine)
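On an engine that does support denormalized values (as IEEE 754 itself requires, if my reading is right), the quoted approximation corresponds to one exact value: `Number.MIN_VALUE` is the smallest subnormal, 2^-1074. A quick sanity check, using a DataView round-trip of my own devising to inspect the bit pattern:

```js
// With subnormal support, Number.MIN_VALUE is 2 ** -1074, the denormalized
// double whose bit pattern has only the lowest significand bit set.
const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, Number.MIN_VALUE);

console.log(Number.MIN_VALUE === 2 ** -1074); // true
console.log(view.getBigUint64(0) === 1n);     // true: bit pattern 0x0000000000000001
console.log(Number.MIN_VALUE / 2 === 0);      // true: rounds to zero, nothing smaller
```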
Is there a reason why this is mentioned at all? In isolation, this sentence appears to suggest that subnormal values are optional, but other parts of the spec appear to claim otherwise (which would also imply that `Number.MIN_VALUE` has an exactly defined value).
Is support for subnormal numbers required or not?
(Context: The reason I'm asking is that I'm trying to understand which numeric operations are deterministic and exactly reproducible under the standard, regardless of the exact implementation or underlying machine. If supporting subnormals is optional, that would make achieving reproducible results a lot more difficult.)
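To make the reproducibility concern concrete: the kind of check I have in mind compares exact bit patterns rather than printed decimals. (`float64Bits` is a hypothetical helper of my own, not anything from the spec or a library.)

```js
// Hypothetical helper: return the exact 64-bit pattern of a double as hex,
// so results can be compared bit-for-bit across engines and machines.
function float64Bits(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  return view.getBigUint64(0).toString(16).padStart(16, "0");
}

console.log(float64Bits(0.1 + 0.2));             // "3fd3333333333334" everywhere, per IEEE 754
console.log(float64Bits(Number.MIN_VALUE * 3));  // "0000000000000003" only with subnormal support
```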