I've noticed something strange in a program I wrote.
It mixes analytical and numerical computation, so I use a value of Digits higher than the default to improve and stabilize the results against floating-point errors.
What happens is that for Digits from 10 to 15 inclusive, the time the PC needs to run the program grows reasonably, so that for Digits = 15 it takes about 10 minutes.
But if I set Digits = 16, the running time blows up and becomes more than two days!
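In case it helps, here is a minimal, hypothetical timing loop that shows the kind of test I'm doing — this is not my actual program (I'm assuming Maple here, since that's what I use, and the summation is just a stand-in workload):

```maple
# Illustrative only: time the same stand-in workload at several Digits settings.
for d in [14, 15, 16] do
    Digits := d:                                   # set the working precision
    t0 := time():                                  # CPU time before the computation
    s := add(evalf(sin(k)/sqrt(k)), k = 1 .. 10^5): # arbitrary numeric workload
    printf("Digits = %d: done in %.2f s\n", d, time() - t0):
end do:
```

With my real program, the timings for d = 14 and 15 are comparable, while d = 16 is orders of magnitude slower.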
So I was wondering: is there any special reason for this abrupt degradation in efficiency at Digits = 16? Is it something tied to the architecture (I'm running on a 64-bit Fedora 12 PC with 32 GB of RAM), or something similar?
Thanks for any help.