acer

MaplePrimes Activity


These are replies submitted by acer

I don't see any reason to believe that this is related to kernelopts(datalimit), the parent shell's limits, or similar.

It might in fact be coded right into the (external) shared library called by RootFinding:-Isolate, and be a separate management mechanism.

The OP might raise limits in the parent shell by using (say) ulimit, and then in Maple itself with kernelopts(), and see whether that gets rid of the problem altogether.
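
For concreteness, a minimal sketch of that two-step check is below. The particular ulimit flags and any datalimit value are illustrative only; see ?kernelopts for the accepted values and units.

# In the parent shell (bash syntax), before launching Maple, something like
#    ulimit -d unlimited
#    ulimit -v unlimited
# lifts the OS-level limits for that session.

# Then, inside Maple itself, check whether the kernel imposes its own restriction:
kernelopts(datalimit);   # current data-space limit in kilobytes (infinity means no limit)
# If that reports a finite value, a larger one can be supplied, e.g.
#    kernelopts(datalimit = 8*1024^2):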

acer

This looks like an internal error message that is not supposed to surface. You might consider submitting a report about it.

The author of the code would probably like to see an example that produces the error reliably (if you have one).

I could not find such an error statement in the RootFinding:-Isolate command at the Maple library level. But it does exist in the compiled shared library libfgbuni.so. And the ?Isolate help-page does not mention it.

acer

The Statistics:-NonlinearFit help-page says that it uses a least-squares approach. It seemed straightforward to adapt Joe's (2nd) approach, which evaluates the DE solution at a point t, into one that computes a sum of squares over the data points in T and Z.

B := proc(a,b,c)
global T,Z;
local G,i;
    # model values: zcompute returns the numeric DE solution at time T[i]
    # for the parameter values a, b, c
    for i from 1 to 11 do
        G[i] := zcompute(T[i],a,b,c);
    end do;
    # objective: sum of squared residuals against the data Z
    add( (G[i]-Z[i])^2, i=1..11 );
end proc:

Once one has this objective function, it can be used with the GlobalSolve routine.

As mentioned before, GlobalSolve has a lot of options that can be adjusted to vary how it searches.

infolevel[GlobalOptimization] := 10:   # show the solver's progress messages
Digits := trunc(evalhf(Digits)):       # work at hardware-float precision

GlobalOptimization:-GlobalSolve(B, 0..0.2, 0..0.2, 0..0.5,
   optimalitytolerance=1.0e-6, method=singlestart,
   noimprovementlimit=400, timelimit=10000,
   evaluationlimit=500000);

Quite quickly it found the triple 4.973755e-02, 1.612033e-03, 1.462690e-01 for p1, p2, and f. I forget for which values of the solver options it found something good fastest. That triple gives an ever so slightly smaller (better) residual (total error, in least-squares) when fed into the objective function B than does the result from NonlinearFit or from that other 3rd party package.
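
As a quick check (assuming the procedure B and the data T and Z are still assigned in the session), that triple can be fed back into the objective and its value compared against the residual from the NonlinearFit result:

B( 4.973755e-02, 1.612033e-03, 1.462690e-01 );  # total squared error for the triple above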

Depending on the specific problem, you might wish to set various options and have GlobalSolve search for as long as you have patience.

You asked about the 'parameters' option for dsolve/numeric. The dsolve(...,numeric) routine doesn't itself produce the numeric solution to the DE. It sets up a solver and returns a procedure (or a list of such) which, when subsequently run, returns the numeric values. But what if one has a generic DE of some form, and knows in advance that it will have to be solved numerically for various different values of some parameters (coefficients, etc) within it? One doesn't want to invoke all of that heavy, expensive set-up machinery each time a parameter is adjusted, just to produce solving procedures which look the same apart from the changed parameter values. Your optimization problem seems to be a perfect example of why such efficient parametric numeric DE solving is needed. How does it work? I don't know -- maybe it just uses Maple's lexical scoping functionality, and supplying the 'parameters' option informs dsolve/numeric that this will be OK at runtime. The option is briefly mentioned, but not fully described, on the ?dsolve[numeric,IVP] help-page.
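
As a rough sketch of how the option gets used (the DE, the parameter names p1 and p2, and the numeric values below are hypothetical, not the actual system from this thread):

sys := { diff(z(t),t) = -p1*z(t) + p2, z(0) = 1 }:
sol := dsolve(sys, numeric, parameters = [p1, p2]):

sol(parameters = [0.05, 0.002]):  # set the parameter values; no expensive re-setup
sol(2.0);                         # numeric solution at t = 2 for those values

sol(parameters = [0.10, 0.001]):  # change the parameters and solve again, cheaply
sol(2.0);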

acer

Knowing a1 and a2 below makes it easier to then show that a linear combination of them is equal both to W and to 1.

> a1 := hypergeom([-1/2, 11/8], [19/8], -1/6):
> a2 := hypergeom([-1/2, 3/8], [11/8], -1/6):

> W := 81/539*42^(1/2) * (5/27*a1+22/27*a2);
     81    1/2 /
W := --- 42    |5/27 hypergeom([-1/2, 11/8], [19/8], -1/6)
     539       \
 
       22                                     \
     + -- hypergeom([-1/2, 3/8], [11/8], -1/6)|
       27                                     /
 

> value(simplify(combine(convert(convert(W,MeijerG),Sum))));
            81    1/2                       169    81
            --- 42    hypergeom([-1/2, 3/8, ---], [--, 19/8], -1/6)
            539                             88     88
 

> Z := simplify(combine(convert(W,Int)));
                                1
                               /              1/2
                         1/2  |   (36 + 6 _t1)    (5 _t1 + 6)
            Z := 1/784 42     |   --------------------------- d_t1
                              |                5/8
                             /              _t1
                               0
 

> op(1,Z)*op(2,Z) *Int(IntegrationTools:-GetIntegrand(op(3,Z)),
>                       IntegrationTools:-GetVariable(op(3,Z)));
                             /             1/2
                       1/2  |  (36 + 6 _t1)    (5 _t1 + 6)
               1/784 42     |  --------------------------- d_t1
                            |               5/8
                           /             _t1
 
> Q := value(%);
                         1/2              3/8             1/2
                       42    (_t1 + 6) _t1    (36 + 6 _t1)
                  Q := --------------------------------------
                                        294
 

> (a,b) := op(IntegrationTools:-GetRange(op(3,Z)));
                                 a, b := 0, 1
 
> 1/(eval(Q,_t1=b)-eval(Q,_t1=a));
                                       1

acer

Ignore what I wrote about an elliptic routine being used. I was fooled by the userinfo messages, and should have given it more thought. The form of the result could be taken as a hint. Following IntegrationTools:-Definite:-Main, one can see that the solution is obtained from `int/definite/meijerg`.
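
One way to check that for oneself (a hedged sketch; the internal names may differ across versions) is to watch whether that routine actually gets invoked:

kernelopts(opaquemodules = false):  # allow inspection of module locals
trace(`int/definite/meijerg`):      # print entry/exit whenever this routine is called
# ...then re-run the original definite integral and watch the trace output.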

acer

It sounds to me as if you just want to be able to specify the axis labels (and possibly also the plot legend).

> plot( Re(subs(deltaE=0.355,M=sqrt(5.4e5),Eig1(t))),
>       t=1..10^(1/3),labels=[R^3,'Eig1'(R^3)],
>       legend=typeset('Eig1'(R^3)));

Is that right?

acer

Yes, delta7's first response made that point, about using the antiderivative.

And, yes, Maple is capable of generating the nicer answer. As was mentioned, that happened in some earlier versions of Maple.

The thing is, there are lots of ways to accomplish some definite integration problems. There is FTOC, which we see allows Maple to get a nice form. There is change of variables, which also works nicely here. Pattern matching might also get this one smoothly. (Search for MeijerG and integration and you can find lots of hits, many even on this site.)

Setting infolevel[int] to 3, and running the original definite integration call, one can see that Maple 12 uses an elliptic integral methodology.
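
As an illustration of both points, here is a hedged sketch which watches the dispatch and then does the FTOC route by hand, using the integrand from the transcript earlier on this page as a stand-in:

infolevel[int] := 3:                       # print userinfo about the methods `int` tries

f := (36 + 6*x)^(1/2)*(5*x + 6)/x^(5/8):   # stand-in integrand
int(f, x = 0 .. 1);                        # the usual definite call

# FTOC "by hand": compute an antiderivative, then take one-sided limits at the
# endpoints (limits rather than eval, in case of endpoint trouble)
F := int(f, x):
limit(F, x = 1, left) - limit(F, x = 0, right);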

It can be very difficult to tell in advance which methods will work best for a given problem -- which will both succeed and get the nicest answer, fastest. The front dispatcher of `int` has to do this. And it must be an art form, to have it please most of the people most of the time. I could envision that tweaking it to "fix" one particular example could likely "break" many other examples.

Maybe this state of affairs could be improved by crude use of Threads. Run several methods in parallel, and take whichever solution is nicest, to return to the user.

acer

The `zip` routine can be used to process two lists elementwise.

For example (this can also be done with `seq`, as you had originally):

> L := [1,19,4,7,21,16]:
> zip((a,b)->`if`(a<b,a,b), L[1..-2],L[2..-1]);
                               [1, 4, 4, 7, 16]

If I were doing these sorts of things with large numbers of floats, and speed were of utmost concern, then I would consider using Arrays of datatype=float[8] and map[evalhf] and/or evalhf'able procedures. Since Arrays are fixed size, instead of replacing elements with NULL (to shorten the result) I might use some specific negative number, very large in magnitude, which could subsequently be ignored. I note that map[evalhf] can understand max() and min() but not `if`.
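
A minimal sketch of that float[8] flavour, with made-up data, for the adjacent-pairwise-minimum task above (it assumes that hardware-datatype Arrays are operated on in place under evalhf; see ?evalhf,var):

A := Array([1., 19., 4., 7., 21., 16.], datatype = float[8]):
R := Array(1 .. 5, datatype = float[8]):   # will hold the 5 pairwise minima

pairmin := proc(a, r, n)
    local i;
    for i to n - 1 do
        r[i] := min(a[i], a[i+1]);  # min is evalhf'able, unlike `if`
    end do;
    n - 1;                          # return something numeric, to keep evalhf happy
end proc:

evalhf(pairmin(A, R, 6)):  # runs in hardware floats, writing directly into R
R;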

acer

Wasn't the problem that table references couldn't be used as (formal) parameter names for a procedure, rather than as input arguments, which is what you now seem to be discussing? I don't see how the above addresses that.

Also, it doesn't matter whether one types the keystrokes x[1] or x_1, since both create the table reference x[1] in Maple 11 & 12.

This reminds me that I also don't like using atomic identifiers such as `#msub(mi("x"),mn("1"))` because context-menus don't typeset them properly. Sure, it gets typeset nicely as x-subscript-1 in 2D Math output, but every now and then its true nature pops up.
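
A small illustration of that (the expression here is arbitrary): in 1-D form the underlying name is plainly visible.

expr := `#msub(mi("x"),mn("1"))`^2 + 1:
lprint(expr);   # prints the raw `#msub(...)` name rather than a typeset x[1]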

For example, create an expression containing the atomic identifier (or subliteral from the layout palette) and then look at the Solve->Solve for Variable context-menu item. That MathML object that then shows up as the variable (name) will likely confuse many users.

acer
