
These are answers submitted by acer

I got an answer in two different ways, using the Optimization package. But I did have to make a few corrections (similar to what was done here for the DirectSearch approach).

One thing that I did not do (but which the code shouts for, I think) is to improve the objective so that it evaluates more quickly. Even a single evaluation of the objective at a given numeric point can be very slow, and optimization methods can request a great many objective evaluations. For example, there are some numeric integrals, embedded deep in the mess (sorry) of the objective procedure, which are identically repeated several times. It's likely taking at least five times longer than it should, and that's not even considering whether there is some purely exact way to get a single expression for any of its current fsolve and integral calls. I also didn't test whether Digits:=30 was really needed, or best, for the fsolve inside procedure SX.
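
One cheap way to attack that repetition, as a rough sketch (the helper name and integrand below are made up, since I haven't refactored the actual objective): wrap each repeated numeric integral in a procedure with option remember, so that identical calls get looked up instead of recomputed.

# Hypothetical helper: cache repeated numeric quadrature results.
cached_igral := proc(f, a::numeric, b::numeric)
  option remember;  # identical (f,a,b) calls return the stored result
  evalf( Int( f, a..b ) );
end proc: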

Because each objective evaluation is so slow, and given that you know the objective is smooth (and possibly even monotonic in each variable), I would expect that a local optimization approach (even if performed by some global-capable optimization package) could be faster than a full-strength global optimization method.

But first, a note. That error message from Optimization, about no iteration steps being taken because the first-order conditions are already met, is often due to an internal bug related to a problem with computing numeric derivatives. It can be side-stepped by using the unconstrained Nelder-Mead nonlinearsimplex method (slow convergence rate here, and many objective evaluations), with which I got a result. Or it can be side-stepped by supplying the objective in so-called "Matrix form", along with a hand-built routine to compute its numerical derivatives.

# I used this redefinition of SX, because sometimes the fsolve call fails.
SX := proc(egx, egy)::set;
  local i, sx, SEqn;
  global Eg1, Eg2;
  SEqn := {seq(evalf(subs(subs(Eg1 = egx, Eg2 = egy, EV), Eqn(i))), i = 1 .. n)};
  Digits := 30;   # raised working precision, just for this fsolve
  sx := fsolve(SEqn, X);
  if type(sx, specfunc(anything, fsolve)) then
    # fsolve returned unevaluated; fall back to a dummy solution
    return {seq(x[i] = 1, i = 1 .. n)};
  else
    return sx;
  end if;
end proc:

# And I used this redefinition of PEffSQ
PEffSQ := proc(Egx, Egy)
  local res;
  if Egy <= Egx then
    res := 100*evalf(subs(subs(Eg1 = Egx, Eg2 = Egy, EV), SX(Egx, Egy), PCellMax/PInc));
  end if;
  # If res was never assigned, or is not a real constant, return 0 instead
  if not type(res, realcons) then
    return 0;
  else
    return res;
  end if;
end proc:

I noted that the Egy<=Egx condition inside PEffSQ could be turned into a constraint, but I didn't go that route, even though the above changes make the objective nondifferentiable at that boundary.

At that point, I could do the following workaround,

> objf := proc(V::Vector)
> PEffSQ(V[1],V[2]);
> end proc:

> objfgradient := proc(X::Vector,G::Vector)
> G[1] := fdiff( PEffSQ, [1], [X[1],X[2]] );
> G[2] := fdiff( PEffSQ, [2], [X[1],X[2]] );
> NULL;
> end proc:

> Optimization:-NLPSolve( 2, objf, 'objectivegradient'=objfgradient,
> 'initialpoint'=Vector([2.3, 1.4]), 'method'='sqp', 'maximize' );

  [                      [ 2.24196645821080 ]]
  [48.3648680038370032,  [                  ]]
  [                      [ 1.41837902813424 ]]

I hope that whatever problem is in NLPSolve could be fixed, so that it could automatically use the above (or some other) way to compute numerical derivatives and succeed more often for "Operator" form arguments.

acer

The evalf/Int approach should use sophisticated techniques. It also allows us to add an accuracy tolerance, as an extra epsilon=value option inside that inert Int() call `igral`. In contrast, the Student:-NumericalAnalysis methods are more for teaching principles: stock (almost "toy") methods without optional accuracy controls.

You could simply raise the working precision and recompute, and then see which of the two earlier results was a better approximation to the more accurate high-Digits result. The simplest way to do that is to make an assignment such as Digits:=20 or so.

When you call evalf(Int(...)), as in your first approach with igral, Maple automatically adjusts the target accuracy (epsilon) based on the working-precision environment variable Digits. So when you raise Digits and recompute, the result should become more accurate. Here, that's just simpler than adding the epsilon=value option into the Int() call.

And there is another way to get the same effect of raising the working precision: change the evalf to evalf[d] in your first approach. That makes Maple use d digits of working precision (in place of Digits) for the given computational argument to evalf.

Also, you can set infolevel[`evalf/int`]:=2 (note that was all lowercase) and see more information about what evalf/Int is doing and what accuracy tolerance it is using, etc.
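
For instance, here is a minimal sketch of those pieces together (with a made-up integrand, just for illustration):

> infolevel[`evalf/int`] := 2:               # report what evalf/Int is doing
> evalf( Int( exp(-x^2), x = 0..1 ) );       # target accuracy follows Digits
> evalf[20]( Int( exp(-x^2), x = 0..1 ) );   # same, at 20 digits working precision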

acer

The call to MatrixInverse must somehow bring along a connection to the name LinearAlgebra, because otherwise how would Maple know to put that in the error string?

The Original Poster's pair of commands obviously work OK when done alone in a fresh session, so the challenge becomes to find a sequence of commands which could be done beforehand, so that the given pair of lines would produce the error.

First let's consider a sequence of commands that works as I'd expect,

> restart:
> with(Student):

> with(LinearAlgebra):
> MatrixInverse( Matrix([[4]]) );
MatrixInverse([4])

That makes sense. When I loaded the Student package, it rebound the name LinearAlgebra since that is one of the Student package's exports. So the subsequent command with(LinearAlgebra) was actually loading Student:-LinearAlgebra. And there is no export `MatrixInverse` in the Student:-LinearAlgebra subpackage, so the name MatrixInverse didn't match any known procedure, and what gets returned is an unevaluated function call. So far, so good.

Now another example, whose final two lines are closer to but not exactly the same as what was originally reported,

> restart:
> with(Student):

> with(:-LinearAlgebra):
> MatrixInverse(Matrix([[4]]));
Error, MatrixInverse is not a command in the LinearAlgebra package

Now I am surprised. I expected with(:-LinearAlgebra) to load, by force, the top-level LinearAlgebra package and not any subpackage export from the Student package. Maybe it did. But, if so, then why did the MatrixInverse call fail? It's as if Maple figured out that the MatrixInverse call should correctly try to invoke :-LinearAlgebra:-MatrixInverse, but the "LinearAlgebra" bit of some lookup then found the "wrong" name on account of the rebinding to Student:-LinearAlgebra. I am a little surprised, and maybe it's a bug.

Now another example, whose last two lines exactly match the original example,

> restart:
> with(LinearAlgebra):
> with(Student):

> with(LinearAlgebra):
> MatrixInverse(Matrix([[4]]));
Error, MatrixInverse is not a command in the LinearAlgebra package

I suspect that this too might be unintentional and have the same cause as the middle example, and hence be a bug. It does seem undesirable behaviour, and if these are not bugs then how can they be clearly documented (especially now that the GUI allows "easy" package loading)?
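
As an aside, calling the command by its fully qualified top-level name should avoid the binding ambiguity altogether, whatever has been loaded or rebound:

> :-LinearAlgebra:-MatrixInverse( Matrix([[4]]) );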

acer

By using eval, one can substitute the prior solutions involving x and y (as a set of equations) without having to actually assign to x and y. This is nice because it allows one to create new systems involving x and y as unassigned variable names, even after solving the initial system for x and y.

> restart:

> eqxy1:=x+2*y=4;
                          x + 2 y = 4

> eqxy2:=x^2+y^2=16;
                           2    2     
                          x  + y  = 16

> xysol:=solve({eqxy1,eqxy2},Explicit);
                               /    -12      16\ 
              {x = 4, y = 0}, { x = ---, y = -- }
                               \     5       5 / 

> xysol[1];
                         {x = 4, y = 0}

> xysol[2];
                       /    -12      16\ 
                      { x = ---, y = -- }
                       \     5       5 / 

> eqrsxy1:=s*x+y^2*r=11;
                               2       
                        s x + y  r = 11

> eqrsxy2:=r^2+s^2=x*y;
                          2    2      
                         r  + s  = x y

> eval({eqrsxy1,eqrsxy2},xysol[1]);
                    /           2    2    \ 
                   { 4 s = 11, r  + s  = 0 }
                    \                     / 

> eval({eqrsxy1,eqrsxy2},xysol[2]);
             /  12     256          2    2   -192\ 
            { - -- s + --- r = 11, r  + s  = ---- }
             \  5      25                     25 / 

> solve(eval({eqrsxy1,eqrsxy2},xysol[1]),Explicit);
           /    11        11\    /      11        11\ 
          { r = -- I, s = -- }, { r = - -- I, s = -- }
           \    4         4 /    \      4         4 / 

> solve(eval({eqrsxy1,eqrsxy2},xysol[2]),Explicit);
           /    4400     3             (1/2)  
          { r = ---- + ----- I 15164737     , 
           \    4321   17284                  

                  4125     16             (1/2)\    /
            s = - ----- + ----- I 15164737      }, { 
                  17284   21605                /    \

                4400     3             (1/2)  
            r = ---- - ----- I 15164737     , 
                4321   17284                  

                  4125     16             (1/2)\ 
            s = - ----- - ----- I 15164737      }
                  17284   21605                / 

acer

The fact that Maple appears to be doing the exact integral is a bit of an illusion, because all those individual erf() calls in the expanded "result" would likely have to be approximated numerically at some point.

You can consider doing numeric integration directly, instead of using value() to get an apparent exact result. Consider,

> restart:

> expr:=(a-b*delta)*(c-delta)*
>   1/(sigma*sqrt(2*Pi))*exp(-(delta-mu)^2/2/sigma^2):

> myvals:=[a=20,b=2,c=1,sigma=99,mu=1/10]:

> igral:=Int(expr,delta=0..a/b):

> evalf(subs(myvals,igral)); # better
                         -0.9386112295

> evalf(eval(value(igral),myvals)); # worse
                         -0.9386105310

> evalf[20](eval(value(igral),myvals)); # check
                    -0.93861122946668528680

The "better" answer above did pure numeric integration on the original expression. The second, "worse" answer resulted from applying value() and getting an intermediary answer with lots of erf() calls, and only subsequently applying evalf() to get a float answer.

Of course, one can raise Digits (or call evalf[d](...) to get the same effect) and attain greater accuracy. And that's what the final "check" did. But it's also worthwhile noticing which of the two initial attempts was more accurate (better) for this example.

There are some cases where the ability to transform a single integral into a (set of) calls to some named special function is a good thing (because direct numeric integration might otherwise be hard or notoriously inaccurate compared with a specialized internal routine designed to evaluate the special function numerically). But as the above example illustrates, that's not always the case.

acer

The help-page ?ExpectedValue could be clearer about this usage, especially because ExpectedValue(X,u) does not appear to work properly for X a RandomVariable instead of a discrete Sample.

restart:
with(Statistics):

Q:=RandomVariable(Distribution(PDF=(t->f(t)))):

ExpectedValue(Q);

ExpectedValue(U(Q));

Q:=RandomVariable(Normal(mu,sigma)):

ExpectedValue(U(Q));

And one could replace U above, as desired. There's no need to create a new, weighted random variable.

For example, this is just a wrong result (and looks like a bug to me, since it's more in line with the documented calling sequences than is the last call above),

> restart:
> with(Statistics):
> Q:=RandomVariable(Normal(mu,sigma)):
> ExpectedValue(Q,t->1-exp(-phi*t));
mu

acer

There are two problems in the initial 2D Math input of e1.

The factor that displays like (a-bδ) actually contains the single name `b&delta;`, i.e. it is (a-`b&delta;`) rather than (a-b*delta). I suggest retyping it in full (don't just try to insert a space or *).

Also, there is neither a space nor an explicit multiplication symbol between the first two bracketed factors. That makes the input a function application rather than a multiplication of two bracketed subexpressions. I suggest retyping it with an explicit *.

So, as entered, it has (a-`b&delta;`)(c-delta) instead of (a-b*delta)*(c-delta).
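
To see why that second form is a problem: in Maple, juxtaposed brackets denote function application, and applying a sum of names distributes the application over its terms. A small illustration:

> (x - y)(s - t);   # function application, not a product
                       x(s - t) - y(s - t)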

Once changed, Maple can do the integral directly, without expansion by hand, using lowercase int() instead of evalf(Int(...)). That produces the result containing erf() calls.

If you intend to do numeric integration with evalf(Int(...)) then you'll need to supply numeric values for the unknowns a, b, c, sigma, and mu in advance of the call.

If you encounter poor accuracy from numeric integration, or from floating-point evaluation of the exact symbolic integration result (containing erf calls), then you could try increasing the environment variable Digits. See ?Digits.

Alternatively, is this something you could more simply compute using the Statistics package?
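
For instance, something roughly along these lines (note that ExpectedValue integrates over the whole real line, whereas your integral runs over delta=0..a/b, so this is only a sketch of the idea):

> with(Statistics):
> X := RandomVariable( Normal(mu, sigma) ):
> ExpectedValue( (a - b*X)*(c - X) );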

If you continue to have difficulties with 2D Math input then you could consider switching from a Document to a Worksheet and from 2D Math input to 1D Maple input.

acer

The error message occurs because the procedure has a break statement in it (2nd line from the end) which is not inside any for..do...end loop. Try removing it.

What are you trying to accomplish with that break? Did you intend that the procedure would instead exit (return) at the line with that invalid break? If so, then you could use an error statement instead of merely printf'ing that error message. Ie, replace,

  printf("Error: At least 2 planes do not have cut point. Please change the plane equation, and try again.");
  break;

with this,

  error "At least 2 planes do not have cut point. Please change the plane equation, and try again.";

acer

> res:=evalc(int(1/x,x));
                            /1   1          \   
                ln(|x|) + I |- - - signum(x)| Pi
                            \2   2          /   

> convert(res,piecewise);
                        {  ln(-x) + Pi I          x < 0
                        {
                        { ln(x) + 1/2 I Pi        x = 0
                        {
                        {      ln(x)              0 < x

> convert(res,abs);
                               /1     x  \   
                   ln(|x|) + I |- - -----| Pi
                               \2   2 |x|/   

I believe that the value for the condition x=0 in the above piecewise may be affected by the environment settings for signum(0) and Heaviside(0).
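
For example, by default signum(0) returns unevaluated, and it can be given a value via the _Envsignum0 environment variable:

> signum(0);
                             signum(0)

> _Envsignum0 := 1:
> signum(0);
                                 1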

Also discussed here and here, including G A Edgar's suggestion,

> Re(int(1/x,x));
                          ln(abs(x))

From the help-page ?ln

      For complex-valued expressions x, ln(x) = ln(abs(x))+I*argument(x), where
       -Pi<argument(x)<=Pi.   Throughout Maple, this computation is taken to be the
      definition of the principal branch of the logarithm. 

acer

You are overlooking the effects of round-off error during the floating-point computation (ie. during the evaluations of diff(sCARA4,mu) at floating-point values). Remember that the Digits environment variable controls the working precision and is not a target accuracy tolerance.

> restart:

> sCARA4 := -ln(-(mu/sigma^2)^(mu^2/(mu-sigma^2))*(sigma^2/mu)^(mu^2/(mu-sigma^2))
> +(sigma^2/mu)^(mu^2/(mu-sigma^2))*((exp(phi)*sigma^2+mu-sigma^2)
> *exp(-phi)/sigma^2)^(mu^2/(mu-sigma^2))+1)/phi:

> dsCARA4:=diff(sCARA4,mu):

> eval(dsCARA4,[mu=29.38,phi=1.0,sigma=1.0]);
-2.499130625

> evalf[20](eval(dsCARA4,[mu=29.38,phi=1.0,sigma=1.0]));
0.99919104807402453122

Note also that, even though simplification (using `simplify`, with or without assumptions) seems to help below, there is no guarantee in general that it will generate a form of a symbolic expression with better conditioning during floating-point evaluation.

> eval(dsCARA4,[mu=29.38,phi=1.0,sigma=1.0]);
-2.499130625
> eval(simplify(dsCARA4),[mu=29.38,phi=1.0,sigma=1.0]);
1.204768329
> eval(simplify(dsCARA4),[mu=29.38,phi=1.0,sigma=1.0]) assuming positive;
0.9991907685

You can also look at ?evalr
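
For instance, a tiny sketch of range arithmetic with evalr (the interval here is arbitrary):

> evalr( sqrt( INTERVAL(2..3) ) );   # bounds on sqrt over the interval 2..3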

You may also be overlooking the role of assumptions in the simplification of sCARA4 itself. You might compare the output from these,

simplify(sCARA4);

simplify(sCARA4) assuming positive;

acer

The two results are "the same" in the sense that one is merely a scaled version of the other. See below, where the final versions differ by just a small amount (reflecting round-off error during either the solving or the subsequent manipulation).

> p[R1] := (214-.7067494572*q[O]-.3533747286*q[R1]+.1766873643*q[S]^2-.1766873643*q[S]*q[R1]+5.879032258*q[S])/(1+0.7258064513e-1*q[S]):

> a1:=rhs(isolate(p[R1],q[R1])):

> a2:=solve(p[R1],q[R1]):

> fnormal(a2); # or expand(numer(a2))/denom(a2)
                                                     2                   
    1211.178857 - 4.000000000 q[O] + 1.000000000 q[S]  + 33.27364286 q[S]
    ---------------------------------------------------------------------
                                  2. + q[S]                              

> expand((numer(a1)/0.176687364))/expand(denom(a1)/0.176687364);
                                                     2                   
    1211.178859 - 4.000000007 q[O] + 1.000000002 q[S]  + 33.27364292 q[S]
    ---------------------------------------------------------------------
                       2.000000003 + 1.000000002 q[S]                    

But they may well behave differently when evaluated at particular numeric values for the unknowns, because numeric round-off error can build up in different ways in each form.

Which is more numerically stable? In general, I doubt either method would always be "better" than the other. The lesson here should instead be that it's not always a great idea to mix floats and symbolics and expect every popular top-level routine to handle them exactly.

You could also consider converting p[R1] to rational. See ?convert,rational

note: The calling sequences shown on help-page ?convert,rational are incomplete. The first argument can also be an expression, in which case the conversion maps over it. So, in this example, convert(p[R1],rational) is a possibility.
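
A small illustration of that mapping behaviour (with a made-up expression):

> convert( 0.25*x^2 + 0.3333333333*x, rational );   # returns x^2/4 + x/3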

acer

You asked,
"if I hold the list as a private module variable and export a module procedure to return the weights, is Maple going to make a copy of the list every time a task invokes my module's "GetWeights" procedure?"

I believe that, if I understand your question properly, the answer is no. Maple only has one unique copy of any given list at any given time. If you assign into a list (or use subsop to try and cheat) then Maple creates a new list because (as Roman mentioned) lists are actually immutable. They may just appear to be mutable, by a Maple kernel sleight-of-hand.

Consider the following example, and ask why the higher-level list L is not changed. The answer is not that Maple passes a full copy of L into procedure p. It is that the later act of altering an entry of p's local T (which initially shares the same address as p's formal parameter LL) creates a new list. Note that the initial address of local T is the same as that of L, prior to the assignment into an entry of T. The copying of L occurred when T[3] was altered, not when L was passed into `p`. Moral: if you don't try to alter LL (or T), you should be OK as far as non-copying goes for performance. So avoiding the rtable locks, by converting the weights to lists (after all weights are initially computed, of course) looks OK.

> L:=[1,3,5,7]:
> addressof(L);
181177616
> p:=proc(LL)
> local T;
> T:=LL;
> printf("initial address of T: %a\n",addressof(T));
> T[3]:=z;
> printf("subsequent address of T: %a\n",addressof(T));
> T;
> end proc:

> p(L);
initial address of T: 181177616
subsequent address of T: 193667904
[1, 3, z, 7]

> L;
                           [1, 3, 5, 7]

> addressof(L);
                            181177616

Joke: If Maple didn't behave like this then its list performance would be trash, and we'd all be passing addressof(L) into procs which initially set up list locals with pointto().

acer

What the replies of hirnyk and John mean is that Maple is not changing the variable's identity on you. It is a question of how the name `xi` gets printed and displayed in output, not of its identity.

I don't see any convenient way to suppress that 2D display of all given symbols. And I didn't see `xi` in the Typesetting:-RuleAssistant() Maplet.

It might be that Library routines inside the Typesetting package produce something like `#mi(xi)` or whatever, in the final expression sent to the GUI for display. And it might even be that one could hack the Typesetting package to adjust that behaviour. But it could be a lot of effort, that might not even work.

Perhaps you could submit a Software Change Request, about suppressing the 2D Math display of chosen names?

acer

There are four subtypes of rtable: Array, Matrix, Vector[row], and Vector[column].

The indices of an Array can start from an arbitrary integer (including 0, or negative values) while the Matrix and Vector indices start from 1.

One basic idea is that a Vector represents a member of a vector space, while a Matrix represents a linear mapping between vector spaces. Hence these objects are understood by the LinearAlgebra package, and the noncommutative `.` operator uses the traditional linear algebra multiplication for them (because it leads to useful places). A Vector is a 1D rtable, and a Matrix is a 2D rtable, and both have a mathematical meaning.

In contrast, an Array can be of any dimension (number of distinct indices), not just 1 or 2. And it doesn't have a specific mathematical meaning (ie. flexible). Hence `.` acts elementwise when "multiplying" such objects.

So, if you want to use rtables for higher dimension indexed computation (eg. tensor stuff) then Arrays are a viable choice.
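
For example, a quick sketch of a 3-dimensional Array (with arbitrary bounds, the first index starting at 0):

> T := Array(0..1, 1..2, 1..2):   # three distinct indices
> T[0, 2, 1] := a:
> rtable_dims(T);
                        0 .. 1, 1 .. 2, 1 .. 2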

You can build all of these rtables using the rtable() constructor, and you can pick them apart with sundry utility functions such as rtable_options(), etc.

A 1D Array whose index starts from 1 can be converted into a Vector, in-place, using the rtable_options command to change the 'subtype'. Similarly for 1-based 2D Arrays to and from Matrix.

> A:=Array(1..2,1..2,[[a,b],[c,d]]):

> A.A;
                [ a^2  b^2 ]
                [          ]
                [ c^2  d^2 ]

> rtable_options(A,'subtype');
                                    Array

> rtable_options(A,'subtype'='Matrix');

> A.A;
                [ a^2+b c      a b+b d]
                [                     ]
                [ c a+d c      b c+d^2]

> rtable_options(A,'subtype');
                                   Matrix

What else? The Vector() and Matrix() constructors have angle-bracket shortcut syntax (which used to be slower due to overhead, but isn't anymore). The shortcut syntax doesn't allow you to specify options such as datatype, shape, etc.

The Matrix() and Vector() constructors both tend to flatten their arguments. By which I mean: if you pass them a collection of smaller Matrix/Vector objects then they will try to flatten the result out into one larger object. But on the other hand, there's nothing to stop you from assigning anything, unflattened, into individual entries of a pre-existing Vector, Matrix, or Array.
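
For example, a small sketch of the shortcut syntax and that flattening behaviour:

> V := < 1, 2, 3 >:   # column Vector shortcut
> R := < 1 | 2 | 3 >: # row Vector shortcut
> M := < V | V >;     # two column Vectors flatten into a 3x2 Matrix
                              [1  1]
                              [    ]
                              [2  2]
                              [    ]
                              [3  3]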

acer

Interesting question.

Smart money says that someone could improve the following,

> PE := A.X+B.U+L.(Y-C.X-D.U);
            A . X + B . U + L . (Y - C . X - D . U)

> rx:=proc(x,z)
  local ox,sx,i;
  # split a product into its `.` factor (ox) and the remaining scalar factors (sx)
  if type(x,`*`) then
    ox,sx:=selectremove(t->type(t,`.`),[op(x)]);
    ox:=op(ox);
  else
    ox,sx:=x,1;
  end if;
  # if the `.` term ends in z, return [its left coefficient, z]; otherwise pass x through
  if type(ox,`.`) and op(nops(ox),ox)=z then
      [`*`(op(sx),`.`(seq(op(i,ox),i=1..nops(ox)-1))),z];
  else x;
  end if;
end proc:

> rightcollect:=proc(x,z)
local spl,t,locx;
  # temporarily rename `.` to `&*` so that expand() will distribute it
  locx:=subs(`&*`=`.`,expand(subs(`.`=`&*`,x)));
  if type(x,`+`) then
    # separate the terms ending in z, and collect their coefficients on the left of z
    spl:=selectremove(t->type(t,list),map(rx,[op(locx)],z));
    `+`(op(spl[2]))+`+`(seq(op(1,t),t in spl[1])).z;
  else x;
  end if;
end proc:

> rightcollect(PE,X);
          B . U + L . Y - L . D . U + (A - L . C) . X

> PredictorEstimator := diff(X(t), t) = A.X(t)+B.U+L.(Y-C.X(t)-D.U);
   d                                                          
  --- X(t) = A . (X(t)) + B . U + L . (Y - C . (X(t)) - D . U)
   dt                                                         

> map(rightcollect,PredictorEstimator,X(t));
   d                                                         
  --- X(t) = B . U + L . Y - L . D . U + (A - L . C) . (X(t))
   dt                                                        

A related item: is it just an oversight that `expand` can deal with `&*` but not with `.`? Does anyone else have trouble writing an `expand/.` routine to do x->subs(`&*`=`.`,expand(subs(`.`=`&*`,x)))? Is it anything to do with `expandon`?
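
For reference, the behaviour I'm referring to (expand distributes the neutral operator `&*`, but leaves `.` alone):

> expand( (A + B) &* C );
                       (A &* C) + (B &* C)

> expand( (A + B) . C );
                          (A + B) . C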

acer
