There are two main applications for arithmetic on integers that large:
- Cryptography commonly sees 4096-bit primes and sometimes larger, though cryptographic code usually works with byte buffers directly.
- Statistical analysis involves many operations where numbers grow exponentially or even faster (factorially, as in n choose r = n! / (r!(n-r)!)), and the result of even a seemingly innocent intermediate operation (like 10000! ≈ 10^35659.454... ≈ 2^118458.134...) can quickly blow past that (a small sketch of this follows the list). These sometimes show up in gambling contexts, but most often in applied scientific computing (such as urban planning and financial analysis) where accuracy can't be sacrificed, usually for legal reasons.
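
To make that growth concrete, here's a minimal sketch in TypeScript assuming native `bigint` support; the `factorial` and `choose` helpers are my own illustration, not part of any library API:

```ts
// Illustrative sketch: exact binomial coefficients via native bigints.
// Intermediate factorials blow past 64-bit integers (and IEEE doubles)
// long before the inputs look scary, which is where bigints earn their keep.
function factorial(n: bigint): bigint {
  let acc = 1n;
  for (let i = 2n; i <= n; i++) acc *= i;
  return acc;
}

function choose(n: bigint, r: bigint): bigint {
  // n! / (r! * (n-r)!) -- the quotient is an exact integer, so no rounding occurs
  return factorial(n) / (factorial(r) * factorial(n - r));
}

// 10000! has 35,660 decimal digits (~118,459 bits), far beyond Number.MAX_VALUE.
console.log(factorial(10000n).toString().length); // 35660
console.log(choose(10000n, 5000n).toString().length); // roughly 3,000 digits, still exact
```

The point is just that every intermediate value here overflows any fixed-width type, so the whole computation has to stay in arbitrary precision to keep the final result exact.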
From what I've read elsewhere, gambling and financial companies were among the first to move to bigints.
Also, there's precedent for having such a "to byte array" method on arbitrary-precision integers. To name a few: