"using operators instead of expressions seems to use a weaker tolerance".
Yeah, that looks natural. All of those functions were created via code generation, so they are optimized: fewer operations performed means higher precision in the result. But the problem is not in the tolerance, i.e. it is not a mismatch between what is checked and what it is checked against. There could be another issue, one I ran into with Newton's method in C on the double type. Namely, because of round-off error the Newton iteration can diverge, i.e. get stuck on a pair of points (or even a triple, and in general, I suspect, a cycle of arbitrary length) x1, x2 such that
x2 = newton_step(x1) and x1 = newton_step(x2).
To get out of such a situation, after a "critical number of guesses" I record delta = |x2 - x1|, and after another "critical number of guesses" I compute a new delta; if the new delta is not at least twice smaller than the old one, I replace the guess with a randomly perturbed one:
x2 += unit_rand()*(x1 - x2). The sequence of guesses stays inside the bounds and is more likely to converge to a single point.
I just wonder whether the Maple developers did something like that, because this behavior depends on neither the tolerance criterion nor the precision of the calculations. It depends on precision only in the sense that with higher precision such a situation is less likely to occur, but it is still possible.