pagan

5147 Reputation

23 Badges

17 years, 123 days

 

 

"A map that tried to pin down a sheep trail was just credible,

 but it was an optimistic map that tried to fix a path made by the wind,

 or a path made across the grass by the shadow of flying birds."

                                                                 - _A Walk through H_, Peter Greenaway

 

MaplePrimes Activity


These are replies submitted by pagan

Have you seen the annotated context menu actions? By that I mean what happens when you right-click on an input and then select from the popup menu of action choices. What you end up with is the input-->output with a brief description above the arrow. You can chain several of those in one line. (Whether the annotations appear above the arrows, or not, is a preference that can be set in Maple's Standard graphical interface.)

And those menus can be customized with new types of action (with descriptions). Hence, I'm pretty sure that Maple can be customized to do something closer to what you've described. The ``() and the expand() could be hidden automatically in this way (i.e., "inside the arrow action"). A new action could also be implemented which substitutes the values of the variables into a formula, while inserting the hidden ``() calls so that it doesn't fully evaluate/expand right away.
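
For instance, here is a minimal sketch of the ``()/expand idiom mentioned above (the formula F and the numeric values are just made-up stand-ins):

F := a*b + c:
shown := eval(F, [a = ``(2), b = ``(3), c = ``(4)]):  # the ``() wrappers display the substituted values without combining them
expand(shown);                                        # expand strips the ``() wrappers, giving 10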

So you'd end up with a bunch of lines showing the "assignments" (equations showing values for the variables). And then on a single, new line you could have a chain of arrow actions. First would come the formula without values, then the simple substitutions, then the combination of units, then (if you had exact input values) the floating-point equivalent. And above each arrow could be nice terse words annotating each action. And of course, you'd just have to hide all the (reusable!) setup code.

Does that sound closer?

Basically, yes, it is due to roundoff during the forward computation (plugging the x result back into the original equation, which involves tan). fsolve has tried to find, correct to 10 (i.e., Digits) decimal digits, an x which makes tan(x)-x very close to zero.

> restart:

> Digits:=100:

> sol:=fsolve(tan(x)=x,x=Pi/2..3*Pi/2); # lots of digits will be correct

sol := 4.493409457909064175307880927280322082215583872290040802895823961926\
    950314597104098729057809455879692

> tan(sol)-sol: evalf[10](%);
                                         -97
                                  0.10 10

> Digits:=10:

> sol_10:=evalf[10](sol); # safe to say that sol_10 has 10 correct digits
                             sol_10 := 4.493409458
 
> tan(sol_10);
                                  4.493409460

I hope that the above illustrates that sol_10 does indeed have 10 correct (rounded) digits. But at Digits=10 (the default environment), and even allowing for a few guard digits (`tan` bumps Digits internally, to try to gain accuracy), the result of tan(sol_10) is still off from sol_10 by 2 ulps.

A quite different question is the following: what x (to some unknown number DD of decimal digits) would make tan(x)-x (computed, presumably, with at least DD digits of working precision) be at most 1.0*10^(-10) = 1.0e-10?

That second, different question attacks the problem from a forward-error point of view, i.e., the error after plugging the computed result back into the equation. That's not what fsolve does. What fsolve does is try to find the x accurate to Digits places.

What you have seen is that an x correct to Digits places will not necessarily produce a forward error of at most 1.0*10^(-Digits).
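
As a rough sketch of how one might tackle that second question for this particular equation: near the root, the size of tan(x)-x is governed by the slope of tan(x)-x there (the numbers below are only illustrative).

Digits := 15:
r := fsolve(tan(x) = x, x = Pi/2 .. 3*Pi/2):
slope := r^2;      # d/dx (tan(x)-x) = tan(x)^2, and tan(r) = r, so about 20.2 at this root
1.0e-10 / slope;   # |x - r| would need to be below roughly 5e-12, i.e. about 12 correct digits of x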

Sorry, I didn't have Maple access for previous replies.

I get all true returns for this check.

  verify(tan(this_soln),this_soln,float(100));

That allows an error of 100 ulps (at the given, default, precision).

What was the error that you saw with verify,float?

NB. It may be that you have to supply options, to let `verify` know how far apart the pair is allowed to be. That's important for float comparison. If they were identical, you'd be done; so they must merely be close. And what counts as close in one circumstance is not always close enough in another. Hence there are options to verify,float to control the number of digits compared, absolute vs. relative comparison, the allowed number of ulps of error, and the choice of complex model.

...or you could just check that abs(tan(this_soln)-this_soln)<10^(foo) for some foo of your liking.
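
As a concrete sketch of both kinds of check (the list sols below is made up; it just recomputes two roots of tan(x)=x at default precision, and foo = -7 is an arbitrary choice):

sols := [fsolve(tan(x) = x, x = Pi/2 .. 3*Pi/2),
         fsolve(tan(x) = x, x = 3*Pi/2 .. 5*Pi/2)]:
map(s -> verify(tan(s), s, float(100)), sols);       # within 100 ulps at current Digits?
map(s -> evalb(abs(tan(s) - s) < 10^(-7)), sols);    # explicit absolute forward-error check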

In Maple 11, you can insert a two-column Matrix into a PLOT data structure and have that be interpreted as a point plot.

Rd := Vector([389.33, 453.33, 511.33, 557.33]):
Ru := Vector([390.3463, 460.6926, 531.0390, 601.3853]):

PLOT(POINTS(<<Rd|Ru>>));

In Maple 13, the plot command understands that <<Rd|Ru>> can be interpreted as point plot data. And so in Maple 13 the command plot(<<Rd|Ru>>,style=point) produces a point plot. But Maple 11 didn't recognize it.
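
For example, reusing the same Rd and Ru from above:

PLOT(POINTS(<<Rd|Ru>>));           # works in Maple 11
plot(<<Rd|Ru>>, style = point);    # works in Maple 13, but is not recognized by Maple 11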

Whether GlobalSolve can find the truly best optimum is an entirely different question from whether it incorrectly discards a better solution found during a separate search phase.

For what it's worth, I got the Maple 13.01 GOT (Global Optimization Toolbox) to find 0.0056284221648 as the optimal objective value, using the global multistart solver. But I had to restrict the LAMBDAW parameter's range to 0.0 .. 1.0 (instead of an upper bound of 10.0). And I also passed these options,

   evaluationlimit=100000, noimprovementlimit=3

As they say, global optimization is a dark art.
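
A rough sketch of the kind of GlobalSolve call meant above, with a made-up two-variable objective standing in for the original problem (the Global Optimization Toolbox must be installed for this to run):

with(GlobalOptimization):
f := (x, y) -> (x^2 + y - 11)^2 + (x + y^2 - 7)^2:   # toy objective, not the original problem's
GlobalSolve(f(x, y), x = 0.0 .. 1.0, y = -5.0 .. 5.0,
            method = multistart,
            evaluationlimit = 100000, noimprovementlimit = 3);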

Whether the global search finds the very best global optimum in all cases depends on the strength of the global search, sure. Maybe the global search would sometimes find it (you can use multistart or branchandbound, and global optimization is a dark art).

My point is that if you follow the idea of always taking the best of an initial local search and a subsequent global search, then at least you work around any bug in which a better result found during the local search gets discarded. To me, that makes sense in the context of your given problem, regardless of your more general follow-up question.
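
A minimal sketch of that take-the-best idea, again with a stand-in objective (both Minimize and GlobalSolve return a list whose first entry is the objective value found):

f := (x, y) -> (x^2 + y - 11)^2 + (x + y^2 - 7)^2:   # stand-in objective
locres  := Optimization:-Minimize(f(x, y), x = 0.0 .. 1.0, y = -5.0 .. 5.0):
globres := GlobalOptimization:-GlobalSolve(f(x, y), x = 0.0 .. 1.0, y = -5.0 .. 5.0):
best := `if`(locres[1] <= globres[1], locres, globres);  # keep whichever search found the smaller objective value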

> x:=1.6*10^(-16) + 2.7*I;
                                            -15
                        x := 0.1600000000 10    + 2.7 I

> (simplify@fnormal)(x);
                                 2.700000000 I

> G:=Vector([x],datatype=complex[8]);
           G := [                       -15                        ]
                [0.160000000000000012 10    + 2.70000000000000018 I]

> map(simplify@fnormal, G);
                                [2.700000000 I]

Note that, for very large antihermitian Matrices, the eigenvalues can be computed more efficiently by utilizing the better algorithm for hermitian Matrices. Also, the hermitian eigenvalues routine produces purely real datatype=float[8] results.

> with(LinearAlgebra):
> G:=RandomTools:-Generate(distribution(Normal(0,1)),makeproc=true):
> M:=Matrix(3,G+G*I,shape=antihermitian):
> EM:=Eigenvalues(M);
                [                       -17                         ]
                [0.542101086242752217 10    + 2.35647944272523757 I ]
                [                                                   ]
          EM := [                        -16                        ]
                [-0.245668274414614939 10    - 1.79036919417147256 I]
                [                                                   ]
                [                       -16                         ]
                [0.424576297887774995 10    - 0.566110248553767126 I]

> map(simplify@fnormal, EM);
                               [ 2.356479443 I ]
                               [               ]
                               [-1.790369194 I ]
                               [               ]
                               [-0.5661102486 I]

> Eigenvalues(Matrix(I*M,shape=hermitian));
                            [-2.35647944272524024]
                            [                    ]
                            [0.566110248553767237]
                            [                    ]
                            [1.79036919417147256 ]

> map(t->-t*I,%);
                               [ 2.356479443 I ]
                               [               ]
                               [-0.5661102486 I]
                               [               ]
                               [-1.790369194 I ]

I only used map for that last step, instead of multiplying by -I directly, in order not to get a complex[8] datatype result with zero real parts printed. If you were concerned with speed, for a very large problem, you could do an in-place MatrixScalarMultiply.

You don't necessarily have to use digits=25. But it might be interesting to find out whether the computation becomes stable when epsilon is held "large" while digits is raised from 10 towards 15, and then upwards from there.

Other approaches might involve trying to find a better representation of the integral for that problematic range.

You asked how epsilon works. It is estimated according to the nature of the particular quadrature rule. For example, if you have a quadrature rule of order n (exactly correct results for polynomials up to degree n), then the error at stepsize h can be estimated using the result calculated at stepsize h along with the result calculated at stepsize h/2. This is because you know that the error term is roughly equal to a constant K times h^(n+1), where K depends on the (n+1)th derivative of the integrand (under some assumptions that derivatives beyond the nth are bounded). The refinement of the grid allows comparison of two results, which in turn allows estimation of the error. Sometimes you can nicely show how it works by plugging the (n+1)th order truncated Taylor series into the quadrature formula.
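
As a rough illustration with the composite trapezoid rule (exact for polynomials of degree 1, so its error behaves like K*h^2): comparing the result at stepsize h with the result at stepsize h/2 gives an estimate of the remaining error. The integrand below is made up just for the demonstration.

trap := proc(f, a, b, n)
  local h, i;
  h := (b - a)/n;
  h*(f(a)/2 + add(f(a + i*h), i = 1 .. n-1) + f(b)/2);
end proc:
f := x -> exp(-x^2):                  # made-up integrand
T1 := trap(f, 0.0, 1.0, 50):          # stepsize h
T2 := trap(f, 0.0, 1.0, 100):         # stepsize h/2
esterr := (T2 - T1)/(2^2 - 1);        # Richardson-style estimate of the error remaining in T2
evalf(Int(f(x), x = 0 .. 1)) - T2;    # actual error, for comparison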
