acer


MaplePrimes Activity


These are replies submitted by acer

While a random 10x10 Matrix will most likely have a full set of linearly independent eigenvectors, it could of course be that the OP used RandomMatrix only for illustration. For the OP's typical problems, the chance of the input being a defective Matrix may be higher. (We don't yet know.)

Such decomposition approaches to computing the floating-point exponential generally fail for defective Matrices. See Method 14 of section 6 of the well-known paper by Moler and Van Loan. I doubt that this method gets "better" by applying it to the mixed float-symbolic problem, which is what we have here for an unassigned symbol `t`. (For a float Matrix, one might be able to compute the condition number quickly, as an initial test.)

Naturally, I fully expect that Robert is aware of all this. I only mention it for the possible benefit of others.
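
As a small illustration (not from the thread above), a Jordan block shows the defective case:

```maple
with(LinearAlgebra):
# A 2x2 Jordan block: the eigenvalue 2 has algebraic multiplicity 2
# but only one linearly independent eigenvector.
J := Matrix([[2, 1], [0, 2]]):
vals, vecs := Eigenvectors(J);
# The second column of `vecs` is all zeros, so `vecs` is singular and
# J cannot be diagonalized -- eigendecomposition-based approaches to
# the Matrix exponential break down here.
```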

acer

"MMA", sometimes given as "Mma", is a popular abbreviation for Mathematica,

acer

"MMA", sometimes given as "Mma", is a popular abbreviation for Mathematica,

acer

Are N.T.'s papers on these topics purely for numeric (and not symbolic) computation? Maple doesn't seem too slow for that. It seems to be the symbolic computation and interpolation (even though here using very quickly computed float eigenvalues) that gets bogged down.

acer


Note that you are computing a symbolic result, not a pure floating-point result. Actually, it's a mixed float-symbolic problem, which often is a special kind of animal. And the result will be very lengthy. Note, too, that pure floating-point Matrix exponentiation should be pretty fast.

So perhaps you should ask yourself: what do you expect to do with such a result? Often, such lengthy symbolic results don't give much insight on their own. One often ends up numerically instantiating (possibly many) values for the symbol (here, the `t`) in some other calculation or plot.

By my rough estimation, one can do about 1500 such 10x10 complex[8] pure floating-point Matrix exponentials in the same time that one can do the given symbolic example.

So rather than try for the general univariate, symbolic result (which might just end up being evaluated at many individual t values) perhaps it would be more efficient to alter one's methodology and compute the many individual purely floating-point Matrix exponentials directly.
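
A rough sketch of that alternative methodology (the Matrix `A` and the list of t values below are placeholders, not from the original post):

```maple
with(LinearAlgebra):
# Placeholder 10x10 Matrix with complex[8] hardware datatype.
A := RandomMatrix(10, 10, generator = -1.0 .. 1.0, datatype = complex[8]):
# Compute exp(A*t) numerically for each t of interest, rather than
# forming the lengthy symbolic MatrixExponential(A, t) once and then
# instantiating it repeatedly.
results := [seq(MatrixExponential(A, tval), tval = [0.1, 0.5, 1.0])]:
```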

acer

Try passing `plot` the operator F1 instead of the expression returned by F1(t). I.e.,

plot(F1, -2..2);

I also saw quick success (even using the expression form) after simplifying the integrand (with `simplify`) following a conversion of the .2 to the exact rational 2/10, or after supplying a lower tolerance option in the evalf@Int call. Each of those variants was faster than your original call by roughly the same amount (about a factor of 5).

So, that can make plotting F1 faster, and fill those gaps. But it's still not very fast to plot F1 over a range of x, because computing F1 at each value of x involves a separate infinite oscillatory numerical integral. There isn't a good general scheme for such integrals in Maple yet.
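
For illustration only (the OP's integrand wasn't reproduced above, so `ig` below is a hypothetical stand-in, written as an operator in the plotting variable x and the integration variable u):

```maple
# Hypothetical placeholder for the integrand; note the exact 2/10
# rather than the float .2, as suggested above.
ig := (x, u) -> cos(x*u)*exp(-(2/10)*u):

# Operator form for plot, with a looser tolerance on each numeric
# integration via the epsilon option to evalf@Int.
F2 := x -> evalf(Int(ig(x, u), u = 0 .. infinity, epsilon = 1.0e-6)):

plot(F2, -2 .. 2);
```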

acer

So, first you are doing this?

a:=readdata(`testdata.txt`,string,16):

If you've done that, then you could extract the contents of each list in `a` by replacing your lengthy command,

seq(op(i,op(1,a)),i=1..nops(op(1,a)));

with simply this,

op(a[1]);

As an alternative to using StringTools to remove the W characters, perhaps try something like this,

a:=seq(map(t->`if`(t="W",NULL,parse(t)),T),
       T in readdata(`testdata.txt`,string,16)):

That gives you a sequence of lists. Remember: you can produce a sequence from a list by simply applying op to it. Ie, at that point,

op(a[5]); # sequence, from 5th line/list in `a`

But I can't see why you need to convert the lists to sequences right away. A list is often just as easy to manipulate. I understand that you intend to somehow plot the data.
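
For instance (assuming each line of the file holds one data series), the lists can be plotted directly, without ever forming sequences:

```maple
# Plot the numbers from the first line against their positions.
plots:-listplot(a[1]);

# Or overlay the data from every line in a single plot.
plots:-display(seq(plots:-listplot(L), L in [a]));
```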


acer


This is a much more interesting example. After looking at the plot, I was tempted to brush it off as another case where the pathological spike was missed under the default evaluationlimit and nodelimit options. But, as you showed, cranking up those option values doesn't help. And I didn't have success with increasing Digits at the same time (did Axel?!).

But a tiny shift in the range had success. So I am guessing that the internal routines of the univariate b&b solver have gone wrong when estimating lower bounds for a cell, when using 0.0 or less as the left-endpoint. Maybe something like that. (Or perhaps the interpolating polynomial is not right. I looked with infolevel[Optimization] at 6, but didn't debug it.)

> restart:

> Optimization[NLPSolve](x^2 +1- exp(-1/(110*(x-1))^2), x= -1.39 .. 1.4, method=branchandbound, maximize);

              [2.00611111585267299, [x = 1.00326066115002321]]

> Optimization[NLPSolve](x^2 +1- exp(-1/(110*(x-1))^2), x= -1.4 .. 1.41,   method=branchandbound, maximize);

              [2.00611111585267166, [x = 1.00326066227561483]]

> Optimization[NLPSolve](x^2 +1- exp(-1/(110*(x-1))^2), x= -1.4 .. 1.4,   method=branchandbound, maximize);

              [1.96001434792277918, [x = -1.39999999999999990]]

acer


It's not just a comparison between `subs` and `algsubs` here, since the posted examples supplied them with different inputs. In the `subs` case, the results came following isolation via `solve`.

acer


The defaults for evaluationlimit and nodelimit could be set higher for the univariate branch-and-bound solver. Very often the objective function evaluates quickly, so higher default limits would not cause an overlong delay.

One can remove the "almost" in the statement in the reply above, and its principal clause will still be correct. Any numerical method can be beaten by a bad enough function.

I would not call the OP's example an outright bug, as there is no set of default values for such options that satisfies everyone all the time. But the relevant default values for the nodelimit and evaluationlimit options might be raised (doubled?), to catch more problems by default without adding any great delay.

> restart: kernelopts(printbytes=false):
> st:=time():
> Optimization[NLPSolve](x^2/10 + sin(1000*x), x= -5 .. 5,
>    method=branchandbound, maximize, evaluationlimit=12000, nodelimit=2000);
               [3.49984521055628761, [x = -4.99984570315593846]]
 
> time()-st;
                                     5.583

Obviously the option values used for the above example are not suitable as defaults. They would make the computation take too long and work too hard in general. But one can see that the options can be adjusted so as to get a better (than default) result even for this objective.

acer

