acer

32490 Reputation

29 Badges

20 years, 8 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@Denis ...so post a full working example which could be used to reproduce the problem.

If you uploaded the example (in a worksheet, say, using the green up-arrow) then we'd have a better chance at helping you.

Numerical rootfinding for multi-dimensional nonlinear functions can be tricky. The width of the variables' ranges (the domain), the choice (if any) of initial point, and the working precision and accuracy can all bear heavily on such problems.
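As a toy illustration of initial-point sensitivity (plain Python, not Maple, and a deliberately simple one-dimensional stand-in for the multivariate case): Newton's method converges to different roots of the same function depending only on the starting point.

```python
# Newton's method on f(x) = x^2 - 4, which has roots at +2 and -2.
# The iteration is standard; which root it finds depends on the start.
def newton(f, df, x0, tol=1e-12, maxit=50):
    x = x0
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence")

f = lambda x: x*x - 4
df = lambda x: 2*x

print(newton(f, df, 1.0))   # converges to the root at 2
print(newton(f, df, -1.0))  # converges to the root at -2
```

With wider domains and more dimensions, this kind of basin-of-attraction sensitivity only gets worse.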

Yes, the order of terms in a SUM (or PROD) Maple expression (DAG) can be session dependent. That can cause a measurable numeric difference, which can in principle cascade and amplify until it is significant enough to make a method succeed or fail. But hopefully, if that is the case, the example could be solved more reliably by making some adjustment.

It's pretty hard to say more specifically, if you don't show the full example. (And even if you do, of course, it still might be hard.)

restart:
expression:=a+b+c:
1e20*eval(expression,[a=1e20,b=1e-20,c=-1e20]);
                               0.
restart:
expression:=c+a+b:
1e20*eval(expression,[a=1e20,b=1e-20,c=-1e20]);
                               1.

There are several ways to dispel the issue in the above example. But, sight unseen, we cannot know whether your case is even remotely similar.
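The same order dependence exists in any IEEE-754 double-precision arithmetic, not just Maple; a minimal Python analogue of the session above:

```python
# Catastrophic cancellation depends on summation order (IEEE-754 doubles).
a, b, c = 1e20, 1e-20, -1e20

left_to_right = (a + b) + c   # b is absorbed by a, then a cancels with c: 0.0
reordered     = (c + a) + b   # c and a cancel first, so b survives: 1e-20

print(left_to_right)  # 0.0
print(reordered)      # 1e-20
```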

acer

@Markiyan Hirnyk  Your original Answer mentioned only `evalf` and `solve`, but did not specify the order in which you intended them to be applied. You wrote that `evalf` would be useful, but not at which juncture. Two natural interpretations are evalf([solve(p)]) and [solve(evalf(p))].

Your followup comment, in the midst of this thread, does specify which usage you intended. But until then the Answer was ambiguous.

One of those two ways, [solve(evalf(p))], does in fact produce significantly less accurate results than fsolve does, at Digits=12 in Maple 15. The other, evalf([solve(p)]), produces essentially the same result as fsolve. That resolves the argument over the ambiguous claims about whether the fsolve or solve results are more accurate here.

The accuracy issue should not be of central importance here anyway, since both results can agree when the commands are used properly. What matters is that `solve` is the wrong tool for this task. That is clearly demonstrated, and it should be the focus.

Here is `solve` calling `fsolve` to do the actual work, repeated degree=5 times,

restart:

trace(fsolve):

[evalf(solve(z^5-5*z^4-5*z-5, z))]: # 5-fold repetition

{--> enter fsolve, args = t^5-5*t^4-5*t-5, t, complex
 enter fsolve, args = t^5-5*t^4-5*t-5, t, complex
 enter fsolve, args = t^5-5*t^4-5*t-5, t, complex
 enter fsolve, args = t^5-5*t^4-5*t-5, t, complex
 enter fsolve, args = t^5-5*t^4-5*t-5, t, complex

@Markiyan Hirnyk The statement is true for Maple 14, 15, and 16. If you are basing your counterclaim on results from Maple 13 or an even older major release, then I would say that the moral obligation is on you to state that up front.

(For example, Maple 13 is three major releases back, and it would be bizarre to read the unqualified statement as relating to some much older release rather than to all the later releases.)

We should not forget that the rest of the claim is also true: using the solve command to get float approximations of all complex roots of a higher-degree univariate polynomial (one which does not factor explicitly) is in general much slower than using fsolve. It is often worse by a factor of n, the degree of the polynomial, since solve simply calls fsolve n times to do the same work. This is demonstrated in the link given. This part also holds in Maple 13, 12, etc.

The accuracy comparison below was done in Maple 14.01 (Windows, 64-bit), but the results are similar in 15.01 and 16.00. For these particular computations at Digits=12, the results from fsolve have a greater number of accurate decimal digits and a smaller absolute forward error upon resubstitution into the original polynomial.

 

restart:
Digits:=12:

fsol:=[fsolve(z^5-5*z^4-5*z-5, z, complex)]:

forget(evalf):
ssol:=[solve(z^5-5*z^4-5*z-5.0, z)]:

forget(evalf):
Digits:=100:
highsol:=[fsolve(z^5-5*z^4-5*z-5, z, complex)]:

max(map(abs,sort(highsol)-sort(ssol))):
evalf[4](%);
                                         -12
                                 2.729 10   

max(map(abs,sort(highsol)-sort(fsol))):
evalf[4](%);

                                         -16
                                 8.203 10   

[seq(eval(z^5-5*z^4-5*z-5, z=x), x in fsol)]:
evalf[4](sort(%,(a,b)->abs(a)>abs(b)));

 [        -13           -15           -15             -15           -15    
 [5.476 10   , -2.189 10    - 1.555 10    I, -2.189 10    + 1.555 10    I, 

           -16           -16            -16           -16  ]
   8.853 10    + 2.778 10    I, 8.853 10    - 2.778 10    I]

[seq(eval(z^5-5*z^4-5*z-5, z=x), x in ssol)]:
evalf[4](sort(%,(a,b)->abs(a)>abs(b)));

 [         -10           -11           -11             -11           -11    
 [-7.234 10   , -4.471 10    + 5.918 10    I, -4.471 10    - 5.918 10    I, 

            -12           -12             -12           -12  ]
   -1.190 10    - 5.466 10    I, -1.190 10    + 5.466 10    I]
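The forward-error measurement itself is not Maple-specific. As a hedged, plain-Python sketch of the same idea: locate the single real root of p(z) = z^5 - 5z^4 - 5z - 5 by bisection, then check how small |p(root)| is for the computed root.

```python
# Forward error of a computed root of p(z) = z^5 - 5z^4 - 5z - 5.
def p(z):
    return z**5 - 5*z**4 - 5*z - 5

# p(5) = -30 < 0 and p(6) = 1261 > 0, so a real root lies in (5, 6).
lo, hi = 5.0, 6.0
for _ in range(100):        # 100 bisections shrink the bracket below 1 ulp
    mid = (lo + hi) / 2
    if p(mid) <= 0:
        lo = mid
    else:
        hi = mid

root = (lo + hi) / 2
print(root)                 # real root, roughly 5.05
print(abs(p(root)))         # forward error: tiny, near the rounding level
```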

If you are going to make multiple posts like this (as you have just done), then could you please include in each of them (in both the title and the body) some keywords that inform readers of the topics covered?

Otherwise, could you please roll them together into a single post?

What you've done so far is submit a bunch of separate posts linking to other sites, plus zip files, each with no indication of its subject or content. That's a lot like spam.

acer

I didn't mention `numboccur` because it brings the extra need to match the nature of the zeros. If the Matrix has exact 0 entries then numboccur needs to be passed exact 0 as its second argument, and similarly for floating-point 0.0.

So using `numboccur` can bring the extra burden of knowing what kind of zeros are to be tested, and what kind of data is in the Matrix. Of course, if you know the type of the data for sure, or if you wish to distinguish between 0 and 0.0, then it's not an issue.

> M:=Matrix([[0.0, 0.0],[3.0,0.5]]);
                                    [0.     0. ]
                               M := [          ]
                                    [3.0    0.5]

> numboccur(M, 0);                  
                                       0

> numboccur(M, 0.0);                
                                       2

> 4-add(`if`(x<>0,1,0),x=M);        
                                       2

> 4-rtable_num_elems(M,'NonZero');  
                                       2

> M:=Matrix([[0, 0],[3,1/2]]);      
                                     [0     0 ]
                                M := [        ]
                                     [3    1/2]

> numboccur(M, 0);                
                                       2

> numboccur(M, 0.0);              
                                       0

> 4-add(`if`(x<>0,1,0),x=M);      
                                       2

> 4-rtable_num_elems(M,'NonZero');
                                       2

And then the mixed case gets more awkward still.

> M:=Matrix([[0, 0.0],[3,5]]);
                                     [0    0.]
                                M := [       ]
                                     [3    5 ]

> numboccur(M, 0);            
                                       1

> numboccur(M, 0.0);
                                       1

> 4-rtable_num_elems(M,'NonZero'); 
                                       2

> 4-add(`if`(x<>0,1,0),x=M);       
                                       2
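For contrast (a Python aside, not Maple): Python's `==` coerces int and float, so a naive count treats 0 and 0.0 alike, and distinguishing the two kinds of zero requires an explicit type test.

```python
# Counting zero entries in a mixed int/float matrix (plain Python).
M = [[0, 0.0], [3, 5]]
flat = [x for row in M for x in row]

# == coerces, so both the exact 0 and the float 0.0 count as zero.
zeros_any = sum(1 for x in flat if x == 0)

# Separating exact integer 0 from float 0.0 needs an isinstance check.
zeros_exact = sum(1 for x in flat if x == 0 and isinstance(x, int))
zeros_float = sum(1 for x in flat if x == 0 and isinstance(x, float))

print(zeros_any, zeros_exact, zeros_float)  # 2 1 1
```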

acer


Another ungraceful way to do this particular example posed in the Question, at present, is to substitute a single name for Pi*a, then simplify, and then backsubstitute:

restart:
expr := -2*Pi*sin(Pi*a)/(-1+cos(2*Pi*a)):

subs(g=a*Pi,simplify(algsubs(a*Pi=g,expr)));

                              Pi    
                           ---------
                           sin(Pi a)

In principle this workaround should not be necessary, as the Pythagorean identity upon which this particular simplification can hinge should hold regardless of such a scaling of the argument.
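For reference, the reduction rests on the double-angle form of the Pythagorean identity (standard trigonometry, not Maple output):

```latex
\cos(2\pi a) = 1 - 2\sin^2(\pi a)
\;\Longrightarrow\;
\frac{-2\pi\sin(\pi a)}{-1+\cos(2\pi a)}
= \frac{-2\pi\sin(\pi a)}{-2\sin^2(\pi a)}
= \frac{\pi}{\sin(\pi a)}.
```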

acer

This Question leads to the following case, which gets overlooked by the simplify command.

restart:

simplify( -1+cos(a)^2 ); # ok

                                   2
                            -sin(a) 

simplify( -1+cos(a*b)^2 ); # ?? missed ??

                                      2
                         -1 + cos(a b) 

simplify( -1+sin(a*b)^2 ); # ok

                                    2
                           -cos(a b) 

If that middle (problem) case above were fixed, so that the Pythagorean identity were recognized and used, then I could envision the submitter's example succeeding more directly. With that bug fixed, just `simplify`, or `simplify` after `expand`, might get the desired result.

I will submit a bug report on that middle case above.

It worked fine for me on 64-bit Maple 15.00 and 15.01, each running on 64-bit Linux (Ubuntu 10.04).

Please let me ask one question, though: are you using 2D Math input? Is it possible that you have an extra space between the int and the bracketed (...) piece? I'd just like to eliminate that possibility first, in case it is an instance of inadvertent implicit multiplication.

acer

I don't see how this particular type-check that you've assembled in the Criterium [sic] procedure is useful for the described task. How does it distinguish between floats which are purely real and those which are not?

acer
