acer

33193 Reputation

29 Badges

20 years, 214 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Thanks, Axel.

There are quite a few different floating-point computational modes in Maple now. It would be interesting to test more than just Maple's "usual" interpreter.

There is the usual interpreter, the evalhf interpreter, the runtime provided by Compiler:-Compile, and the usual interpreter running inside a proc with `option hfloat`. Then there are modes, such as whether UseHardwareFloats=true/false, whether one uses usual floats (SFloats) or HFloats. And some of those situations can be mixed together. What else have I missed?

I'm deliberately not mentioning Maple's rounding modes. There are paranoia tests for rounding specifically, and obviously these would give different results for non-default Maple rounding mode. I'd probably only look at the default Maple rounding mode (at least, at first).
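Maple's rounding controls are its own, but as a rough analogy (in Python, not Maple) of why results under a non-default rounding mode differ, the stdlib decimal module exposes per-context rounding modes, and the same division comes out with a different last digit under each:

```python
from decimal import Decimal, localcontext, ROUND_HALF_EVEN, ROUND_DOWN

# The same division, performed at 3 significant digits under two
# different rounding modes, yields different final digits.
def divide(rounding):
    with localcontext() as ctx:
        ctx.prec = 3
        ctx.rounding = rounding
        return str(Decimal(2) / Decimal(3))

print(divide(ROUND_HALF_EVEN))  # 0.667 (round to nearest, ties to even)
print(divide(ROUND_DOWN))       # 0.666 (truncate toward zero)
```

This is only an analogy for the point above: any test suite probing rounding specifically will necessarily report different results once the mode is switched away from the default.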

acer

The recursive version calls itself. Your original version (and Joe's amended version) do not. That's the difference: one of them recurses on itself, and the others do not.

acer

I will submit an SCR against evalhf returning Float(infinity) when evaluating abs(z) for your example of z=1e200+I*1e200, Axel.

I notice that Compiler:-Compile seems to do OK with that.

> restart:

> z:=10^200+I*10^200:
> evalhf(abs(z));
Float(infinity)

> p:=proc(x::complex(numeric)) abs(x); end proc:
> cp:=Compiler:-Compile(p):
> trace(evalf):
> cp(z); # is not using evalf
                                         201
                  0.141421356237309504 10

It would be better if evalhf and Compiler:-Compile shared their more robust (complex arithmetic) runtime features.
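The overflow shown above is the classic naive complex-modulus problem. A sketch of the same phenomenon in Python (not Maple): squaring the components overflows double precision even though the true modulus is representable, while a hypot-style scaled computation survives.

```python
import math

x = y = 1e200

# Naive modulus: x*x is 1e400, beyond the double-precision maximum
# (~1.8e308), so the intermediate becomes inf and so does the result.
naive = math.sqrt(x*x + y*y)
print(naive)               # inf

# hypot rescales internally, avoiding the intermediate overflow.
robust = math.hypot(x, y)
print(robust)              # ~1.414e200, finite

# Python's complex abs takes the robust route as well.
print(abs(complex(x, y)))  # ~1.414e200, finite
```

The true value is sqrt(2)*1e200, which fits comfortably in a double; only the unscaled intermediate x*x does not.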

acer

It is recursive because the procedure bisection calls itself.

It calls itself differently, according to whether f(c) has the same sign as f(a). (I just used your logic there. You could improve it.)

When it calls itself, it replaces either argument a or b by c, according to your logic as mentioned.

That whole process, of bisection calling itself with new arguments, over and over, happens until the tolerance eps is met.

You can first issue trace(bisection) before calling it with example inputs, to see more printed detail of what it does when it runs. See the ?trace help-page.
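To make that control flow concrete, here is a minimal recursive bisection in Python (an illustration of the scheme described above, not the poster's Maple code; the names f, a, b, c and eps just mirror the discussion):

```python
def bisection(f, a, b, eps):
    """Recursive bisection; assumes f(a) and f(b) have opposite signs."""
    c = (a + b) / 2.0
    if b - a < eps:                      # tolerance eps met: stop recursing
        return c
    if f(c) * f(a) > 0:                  # f(c) has the same sign as f(a)
        return bisection(f, c, b, eps)   # replace argument a by c
    else:
        return bisection(f, a, c, eps)   # replace argument b by c

root = bisection(lambda x: x*x - 2, 0.0, 2.0, 1e-12)
print(root)   # converges to sqrt(2) ~ 1.41421356...
```

Each call halves the bracketing interval and hands the new endpoints back to itself, which is exactly the "calls itself with new arguments, over and over" behavior that tracing would display.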

acer

yankyank (with its extra simplify) isn't actually needed here; yank will do. Indeed, if you hit it with a final combine then you get pretty much what I posted above. I.e., I did what yank does.

acer

I am often surprised that floating-point accuracy (as opposed to working precision!) does not come up here on MaplePrimes as a topic of intense discussion.

Routines such as evalf/Int and Optimization's exports have numerical tolerance parameters. And atomic arithmetic operations, single calls to trig functions, and some special functions compute results accurate to within so many ulps. But other than that, for compound expressions, all bets are off: the expression might be badly conditioned for floating-point evaluation.

Now, Maple has Digits as a control over working precision, but it has no more general specifier of requested accuracy. Compare with Mathematica, which claims to have both. So, in Maple one must either do analysis or raise Digits to some problem-specific mystery value. The evalr command (and its less well-known friend, shake) is not really strong enough to help much. An argument that I have sometimes heard against making any progress in this area is that it's an open field of study, and partial, fuzzily-bounded coverage is to be avoided. If we all accepted that sort of thinking all the time, Maple might have normal but no radnormal, evala/Normal, or simplify/trig, let alone simplify proper.
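A small illustration of accuracy being distinct from working precision (sketched in Python rather than Maple): every operation below is correctly rounded in double precision, yet the compound expression (1 - cos(x))/x^2 loses all its accuracy near x = 0, while an algebraically identical rewriting keeps it.

```python
import math

x = 1e-8

# cos(1e-8) rounds to exactly 1.0 in double precision, so the numerator
# cancels to 0 even though each individual operation is accurate to
# within an ulp.  The true value of the expression is ~0.5.
bad = (1.0 - math.cos(x)) / x**2
print(bad)    # 0.0

# Equivalent form with no cancellation, using 1 - cos(x) = 2*sin(x/2)^2:
good = 2.0 * math.sin(x/2)**2 / x**2
print(good)   # ~0.5
```

No amount of ulp-accurate atomic operations rescues the first form; one must either rewrite the expression (analysis) or raise the working precision well beyond the cancellation, which is exactly the Digits-as-mystery-value situation described above.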

acer

This is shorter than what I had before, but still multiplies and divides by exp(t). Of course, it also does ln-of-exp as another "arbitrary" operation-with-its-inverse.

> ee:=ln(cosh(t)):

> combine(simplify(expand(
>   ln(exp(t)*expand(exp(-t)*(exp(convert(ee,exp)))))
>                         ))) assuming real;
                          ln(1/2 + 1/2 exp(-2 t)) + t

> # or, using 1/exp(t), showing nonreliance at least upon the exp "form".
> combine(simplify(expand(
>   ln(exp(t)*expand(1/exp(t)*(exp(convert(ee,exp)))))
>                         ))) assuming real;
                          ln(1/2 + 1/2 exp(-2 t)) + t
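A quick numeric check of that identity (done in Python, outside Maple), which also shows why the rewritten form is the numerically robust one for large t: cosh(t) overflows a hardware double long before t + ln(1/2 + 1/2*exp(-2*t)) does.

```python
import math

t = 3.0
lhs = math.log(math.cosh(t))
rhs = t + math.log(0.5 + 0.5 * math.exp(-2*t))
print(lhs, rhs)   # the two forms agree to machine precision

# For large t, math.cosh(1000.0) raises OverflowError, but the
# rewritten form stays finite, since
#   ln(1/2 + 1/2*exp(-2t)) = log1p(exp(-2t)) - ln(2).
t = 1000.0
stable = t + math.log1p(math.exp(-2*t)) - math.log(2.0)
print(stable)     # 1000 - ln(2)
```

So beyond the symbolic manipulation question, the target form ln(1/2 + 1/2 exp(-2 t)) + t is the one you would actually want for floating-point evaluation at large t.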

acer

You've started with lots of the interesting bits already done, such as gracefully pulling out that exp(t) factor while keeping exp(-2*t) instead of 1/exp(2*t).

So, one can do it by first multiplying by exp(-t), then multiplying shortly after by exp(t). But I consider that a kludge since it's not fully automatic -- it sorta depends on fortuitously choosing that particular factor of exp(t) to introduce. And still there can be the problem of multiplying through by the exp(-t) and getting exp(-2*t) rather than 1/exp(2*t).

Below the intermediate conversion back to trigh is there merely to allow the exp(-t) factor to survive multiplication by exp(t). Quite ugly, all told.

> ee:=ln(cosh(t)):

> exp(t)*convert(exp(-t)*(exp(convert(ee,exp))),trigh);
                                     2
                      exp(t) (cosh(t)  - sinh(t) cosh(t))
 
> combine(simplify(convert(expand(ln(%)),exp))) assuming real;
                          t + ln(1/2 + 1/2 exp(-2 t))

acer
