I was excited to learn that current versions of gmp actually have a direct
implementation of *fib(n)* aka *F[n]*. It is based on a pair of
recurrence relations that aren't the same as the matrix method:

- Calculate *F[2k-1] = F[k]^2 + F[k-1]^2*
- Calculate *F[2k+1] = 4×F[k]^2 - F[k-1]^2 + 2×(-1)^k*
- Calculate *F[2k] = F[2k+1] - F[2k-1]*, replacing the unwanted one of *F[2k+1]* and *F[2k-1]*
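The three identities above can be sketched in a few lines of Python, using its built-in big integers in place of gmp's limbs (this is my own illustration, not GMP's actual code):

```python
def fib_pair(n):
    """Return (F[n-1], F[n]) via the doubling identities."""
    if n == 0:
        return (1, 0)           # F[-1] = 1, F[0] = 0
    k, odd = divmod(n, 2)
    f_km1, f_k = fib_pair(k)    # (F[k-1], F[k])
    a = f_k * f_k
    b = f_km1 * f_km1
    sign = -2 if k % 2 else 2   # 2*(-1)^k
    f_2km1 = a + b              # F[2k-1] = F[k]^2 + F[k-1]^2
    f_2kp1 = 4 * a - b + sign   # F[2k+1] = 4*F[k]^2 - F[k-1]^2 + 2*(-1)^k
    f_2k = f_2kp1 - f_2km1      # F[2k]   = F[2k+1] - F[2k-1]
    if odd:
        return (f_2k, f_2kp1)   # n = 2k+1
    return (f_2km1, f_2k)       # n = 2k

def fib(n):
    return fib_pair(n)[1]

print(fib(10))   # 55
```

Each doubling step spends its time on two big squarings, which is why the asymptotics end up dominated by multiplication speed rather than by the number of steps.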

Directly calling this from C(++) code is faster and uses less memory than my Python programs, yay! However, even on a 64-bit host, gmp numbers peter out when they are more than about `CHAR_BIT * sizeof(int) * (1<<31)` bits long (at least I think that's the pattern), so it's not possible to calculate *fib(256 billion)*:

```
$ ./fibg 256000000000
gmp: overflow in mpz type
Command terminated by signal 6
```
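A back-of-the-envelope check makes the overflow plausible: since *F[n] ≈ φ^n/√5*, *F[n]* is about *n·log2(φ) ≈ 0.694n* bits long, which can be compared against the limit formula above (assuming `CHAR_BIT` is 8 and `sizeof(int)` is 4, as on a typical 64-bit Linux box):

```python
import math

# Guessed size ceiling: CHAR_BIT * sizeof(int) * (1<<31) bits,
# with CHAR_BIT = 8 and sizeof(int) = 4 assumed.
LIMIT_BITS = 8 * 4 * (1 << 31)            # 2**36, about 6.9e10 bits

PHI = (1 + math.sqrt(5)) / 2

def fib_bits(n):
    """Approximate bit length of F[n] for large n: n * log2(phi)."""
    return n * math.log2(PHI)

print(fib_bits(256_000_000_000) > LIMIT_BITS)  # 256 billion: over the limit
print(fib_bits(16_000_000_000) > LIMIT_BITS)   # 16 billion: under the limit
```

By this estimate *fib(256 billion)* needs roughly 1.8×10^11 bits, a couple of times over the ceiling, while *fib(16 billion)* stays comfortably under it.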

Too bad, I was all set to pay for a couple of hours on a large-memory virtual machine; I had estimated that with 256GB RAM + plenty of swap on an SSD, it would be possible to calculate at least *fib(16 billion)* in only a couple of hours, as even an HDD-based swap file let me get 50% performance (%CPU) when the virtual memory requirement was about 4x the available RAM. GMP must have good access patterns when working with numbers that don't all fit in RAM at once.

Is there an "easy" hack to increase the gmp number-size limit? I'll spend a bit of time looking, but it also feels like it would be easy to start getting wrong answers after ill-advised tinkering without understanding gmp more deeply.

Entry first conceived on 18 April 2021, 14:45 UTC, last modified on 18 April 2021, 15:43 UTC

Website Copyright © 2004-2021 Jeff Epler