These are answers submitted by acer

What the replies of hirnyk and John mean is that Maple is not changing the variable's identity on you. It is a question of how the name `xi` gets printed and displayed in output, not of its identity.

I don't see any convenient way to suppress that 2D display for given symbols. And I didn't see `xi` in the Typesetting:-RuleAssistant() Maplet.

It might be that Library routines inside the Typesetting package produce something like `#mi(xi)`, or whatever, in the final expression sent to the GUI for display. And it might even be possible to hack the Typesetting package to adjust that behaviour. But it could be a lot of effort that might not even work.

Perhaps you could submit a Software Change Request, about suppressing the 2D Math display of chosen names?

acer

There are four subtypes of rtable: Array, Matrix, Vector[row], and Vector[column].

The indices of an Array can start from an arbitrary integer (including 0, or negative values) while the Matrix and Vector indices start from 1.

One basic idea is that a Vector represents a member of a vector space, while a Matrix represents a linear mapping between vector spaces. Hence these objects are understood by the LinearAlgebra package, and the noncommutative `.` operator uses the traditional linear algebra multiplication for them (because it leads to useful places). A Vector is a 1D rtable, and a Matrix is a 2D rtable, and both have a mathematical meaning.

In contrast, an Array can have any number of dimensions (ie. distinct indices), not just 1 or 2. And it doesn't have a specific mathematical meaning (ie. it is flexible). Hence `.` acts elementwise when "multiplying" such objects.

So, if you want to use rtables for higher dimension indexed computation (eg. tensor stuff) then Arrays are a viable choice.
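
For instance, a quick sketch (the names here are arbitrary):

> T := Array(1..3, 1..3, 1..3):   # a 3-index Array, e.g. for tensor-like work
> T[1,2,3] := q:
> T[1,2,3];
                                       q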

You can build all these rtables using the rtable() constructor. You can pick them apart with sundry utility functions such as rtable_options(), etc.

A 1D Array whose index starts from 1 can be converted into a Vector, in-place, using the rtable_options command to change the 'subtype'. Similarly, a 1-based 2D Array can be converted to and from a Matrix.

> A:=Array(1..2,1..2,[[a,b],[c,d]]):

> A.A;
                [a^2    b^2]
                [          ]
                [c^2    d^2]

> rtable_options(A,'subtype');
                                    Array

> rtable_options(A,'subtype'='Matrix');

> A.A;
                [ a^2+b c      a b+b d]
                [                     ]
                [ c a+d c      b c+d^2]

> rtable_options(A,'subtype');
                                   Matrix

What else? The Vector() and Matrix() constructors have angle-bracket shortcut syntax (which used to be slower due to overhead, but not anymore). The shortcut syntax doesn't allow you to specify options such as datatype, shape, etc.
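
For instance (just a sketch, with arbitrary values):

> V := <1, 2, 3>:                     # column Vector
> R := <1 | 2 | 3>:                   # row Vector
> M := <<1 | 2>, <3 | 4>>:            # 2x2 Matrix
> M2 := Matrix(2, 2, [[1, 2], [3, 4]], datatype=float[8]):   # options need the full constructor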

The Matrix() and Vector() constructors both tend to flatten arguments. By which I mean, if you pass them a collection of other smaller Matrix/Vector objects then they will try to flatten out the result as one larger object. But on the other hand there's nothing to stop you from assigning anything, unflattened, into individual entries of a pre-existing Vector, Matrix, or Array.
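
For example (a rough sketch):

> M1 := Matrix([[1,2],[3,4]]):
> Matrix([[M1, M1]]);        # the constructor flattens the two blocks into one Matrix
                              [1    2    1    2]
                              [                ]
                              [3    4    3    4]

> A := Array(1..2):
> A[1] := M1:                # but an individual Array entry can hold the whole Matrix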

acer

Interesting question.

Smart money says that someone could improve the following,

> PE := A.X+B.U+L.(Y-C.X-D.U);
            A . X + B . U + L . (Y - C . X - D . U)

> rx:=proc(x,z)
  local ox,sx,i;
  if type(x,`*`) then
    ox,sx:=selectremove(t->type(t,`.`),[op(x)]);
    ox:=op(ox);
  else
    ox,sx:=x,1;
  end if;
  if type(ox,`.`) and op(nops(ox),ox)=z then
      [`*`(op(sx),`.`(seq(op(i,ox),i=1..nops(ox)-1))),z];
  else x;
  end if;
end proc:

> rightcollect:=proc(x,z)
local spl,t,locx;
  locx:=subs(`&*`=`.`,expand(subs(`.`=`&*`,x)));
  if type(x,`+`) then
    spl:=selectremove(t->type(t,list),map(rx,[op(locx)],z));
    `+`(op(spl[2]))+`+`(seq(op(1,t),t in spl[1])).z;
  else x;
  end if;
end proc:

> rightcollect(PE,X);
          B . U + L . Y - L . D . U + (A - L . C) . X

> PredictorEstimator := diff(X(t), t) = A.X(t)+B.U+L.(Y-C.X(t)-D.U);
   d                                                          
  --- X(t) = A . (X(t)) + B . U + L . (Y - C . (X(t)) - D . U)
   dt                                                         

> map(rightcollect,PredictorEstimator,X(t));
   d                                                         
  --- X(t) = B . U + L . Y - L . D . U + (A - L . C) . (X(t))
   dt                                                        

A related item: is it just an oversight that `expand` can deal with `&*` but not with `.`? Does anyone else have trouble writing an `expand/.` routine to do x->subs(`&*`=`.`,expand(subs(`.`=`&*`,x)))? Is it anything to do with `expandon`?

acer

inner_proc:=proc(M::integer,
                 A::Vector(datatype=float[8]),
                 B::Vector(datatype=float[8]))
local count::integer, i::integer;
count:=0;
for i from 1 to M do
  if sqrt(A[i]^2+B[i]^2) < 0.5 then
    count:=count+1;
  end if;
end do;
return 4.0*count/M;
end proc:

cinner_proc:=Compiler:-Compile(inner_proc):
p:=proc(N) # N number of points
local F,X,Y,n,inner_proc,P;
  n:=N;
  F:=Statistics:-RandomVariable(Uniform(-1/2, 1/2));
  X:=Statistics:-Sample(F,n);
  Y:=Statistics:-Sample(F,n);
  cinner_proc(n,X,Y);
end proc:

st:=time(): p(2*10^5); time()-st;

                      3.14218000000000020

                             0.063

I tried to resist the strong temptation to change how it works, and to be faithful to the given method.

acer

inner_proc:=proc(M::integer,
                 A::Vector(datatype=float[8]),
                 B::Vector(datatype=float[8]))
local count::integer, i::integer;
count:=0.0;
for i from 1 to M do
  if sqrt(A[i]^2+B[i]^2) < 0.5 then
    count:=count+1.0;
  end if;
end do;
return 4.0*count/M;
end proc:

cinner_proc:=Compiler:-Compile(inner_proc):
p:=proc(N) # N number of points
local F,X,Y,n,inner_proc,P;
  n:=N;
  F:=Statistics:-RandomVariable(Uniform(-1/2, 1/2));
  X:=Statistics:-Sample(F,n);
  Y:=Statistics:-Sample(F,n);
  cinner_proc(n,X,Y);
end proc:

st:=time(): p(2*10^5); time()-st;

                      3.14250000000000007

                             0.078

acer

What you are doing now is a bit like this (simplified to just 1 loop),

> restart:

> lc:=1:
> for ii from 1 to 3 do
> T[ii]:=i->lc;
> lc:=lc+1;
> end do:

> T[1](), T[2](), T[3]();
4, 4, 4

> lc:=17:
> T[1](), T[2](), T[3]();
17, 17, 17

But it seems like you want behaviour more like this,

> restart:

> lc:=1:
> for ii from 1 to 3 do
> T[ii]:=unapply(lc,i);
> lc:=lc+1;
> end do:

> T[1](), T[2](), T[3]();
1, 2, 3

> lc:=17:
> T[1](), T[2](), T[3]();
1, 2, 3

Is that closer to what you are after?

Notice also that there is no need to have the T indices be the same as the formal parameters of the operators (ie. i, j, k). Making them the same may just be adding to the confusion.

acer

Sorry, but 100*cos(20) is an exact quantity. (Recall that Maple is a computer algebra system, in which this is an important distinction, since algebraic manipulation of exact quantities is useful.) You can apply evalf or evalhf to it, to get a floating-point quantity.

Or, you could ensure that the argument to sin is itself a float, so that it returns a float. This is probably a better solution for you than using explicit evalf/evalhf calls, if you intend to use Compile on the procedure.

Maple will not automatically cast an exact numeric quantity to a float when assigning into a float[8] Matrix, except in the cases that the exact quantity is an integer or a rational.
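
For example, roughly (the exact error wording may differ):

> M := Matrix(1, 1, datatype=float[8]):
> M[1,1] := 1/3:              # an exact rational is converted to a float automatically
> M[1,1] := cos(20);          # an exact non-rational is rejected
Error, unable to store 'cos(20)' when datatype=float[8]
> M[1,1] := evalf(cos(20)):   # fine, once it is already a float
> cos(20), cos(20.0);         # a float argument makes cos itself return a float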

acer

> with(LinearAlgebra):
>
> A:=Matrix([[A11,A12],[A21,A22]]):
>
> InvA := Matrix([[(A11-A12&*A22^(-1)&*A21)^(-1),
>           -A11^(-1)&*A12&*(A22-A21&*A11^(-1)&*A12)^(-1)],
>         [-A22^(-1)&*A21&*(A11-A12&*A22^(-1)&*A21)^(-1),
>          (A22-A21&*A11^(-1)&*A12)^(-1)]]);
InvA :=
 
    [             1
    [--------------------------- ,
    [      //        1 \       \
    [A11 - ||A12 &* ---| &* A21|
    [      \\       A22/       /
 
     // 1        \                 1             \]
    -||--- &* A12| &* ---------------------------|]
     |\A11       /          //        1 \       \|]
     |                A22 - ||A21 &* ---| &* A12||]
     \                      \\       A11/       //]
 
    [ // 1        \                 1             \
    [-||--- &* A21| &* ---------------------------| ,
    [ |\A22       /          //        1 \       \|
    [ |                A11 - ||A12 &* ---| &* A21||
    [ \                      \\       A22/       //
 
                 1             ]
    ---------------------------]
          //        1 \       \]
    A22 - ||A21 &* ---| &* A12|]
          \\       A11/       /]
 
>
> map(normal, A . subs(`&*`=`*`,InvA) );
                                   [1    0]
                                   [      ]
                                   [0    1]
 
>
> data:={A11=RandomMatrix(2,2), A12=RandomMatrix(2,2),
>        A21=RandomMatrix(2,2), A22=RandomMatrix(2,2)}:
>
> A:=Matrix(eval([[A11,A12],[A21,A22]],data)):
>
> InvA:=Matrix(eval(subs(`&*`=`.`,
>           [[(A11-A12&*A22^(-1)&*A21)^(-1),
>             -A11^(-1)&*A12&*(A22-A21&*A11^(-1)&*A12)^(-1)],
>            [-A22^(-1)&*A21&*(A11-A12&*A22^(-1)&*A21)^(-1),
>             (A22-A21&*A11^(-1)&*A12)^(-1)]] ),data) ):
>
> A . InvA;
                              [1    0    0    0]
                              [                ]
                              [0    1    0    0]
                              [                ]
                              [0    0    1    0]
                              [                ]
                              [0    0    0    1]

acer

@Hamidreza Is there a timing difference if you compute

add(add(evalf(f[i, j](i, j, x, y)), i = 1 .. 50), j = 1 .. 10);

instead of

evalf(add(add(f[i, j](i, j, x, y), i = 1 .. 50), j = 1 .. 10));

Also, could you use evalhf instead of evalf, in the former of those two?
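
A rough way to time those two orderings, using a stand-in summand (your f[i,j] isn't shown here, so the g below is just hypothetical):

> g := (i, j) -> sin(i*Pi/7) + cos(j*Pi/11):
> st := time(): add(add(evalf(g(i, j)), i = 1 .. 50), j = 1 .. 10): time() - st;
> st := time(): evalf(add(add(g(i, j), i = 1 .. 50), j = 1 .. 10)): time() - st;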

And what is D1()? Is it always the same, or does it change for each i and j (using globals)? If unchanging, then can you pull it out of the summation as a multiplicative constant, or evalf(D1()) up front? That too could make a difference.

And, taking another tack, how about first finding Q:=sum(...,i=1..N) to get a formula, finding Q50:=eval(Q,N=50), and then computing add(evalf(Q50),j=1..100) or similar?

If you raise Digits and then recompute, you can get some evidence that those relatively small imaginary components are merely artefacts of doing the computation with floating-point numbers (ie. as an approximation).

For this example, you might be able to convince yourself of this with an assignment of only Digits:=16.

acer

Your procedure `inertia` expects the global xyzc to be an indexable object with three scalar values. That's how `inertia` uses xyzc. And it works fine the first time you call it.

But then you assign to xyzc (at the top level) something which is a list of three equations. This happens when you compute Equate(cog(...),[xc,yc,zc]) and assign the result to the global xyzc. After that, xyzc is no longer in a form that some of the commands inside `inertia` can handle: its three entries are now equations rather than scalar values. So the next invocation of `inertia` fails with an error, going wrong when it tries to square the entries of xyzc.

For example,

> (4.99=xc)^2;
Error, (in simpl/reloprod) invalid terms in product

acer

Must a local optimization (least squares) method find the global optimum for any nonlinear fit?

You could try a few things, such as setting it up yourself as a least squares problem, and then using different starting points or using a global optimizer.
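
Here is a minimal sketch of the do-it-yourself route, with made-up data and a made-up model (LSSolve minimizes the sum of squared residuals, and you can vary initialpoint, or hand the residuals to a global solver instead):

> xdata := [1, 2, 3, 4, 5]:
> ydata := [2.1, 3.9, 9.2, 15.8, 26.5]:
> model := t -> a*exp(b*t):                          # hypothetical model, parameters a and b
> resids := [seq(model(xdata[i]) - ydata[i], i = 1 .. 5)]:
> Optimization:-LSSolve(resids, initialpoint = {a = 1.0, b = 0.5});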

I'd like to see Maple automatically detect the presence of GlobalOptimization, and use it within NonlinearFit if found. (Using yet another global optimizer might be possible but could get tricky, as it would have to accept the same syntax as Optimization and also somehow be registered for detection.)

acer

Thanks Alec! That's better for Primes, but worse for me, as the inability-to-post problem looks to be at my end.

A related question: is there anyone for whom the 2D Math displayed in Alec's response looks good? For me, it doesn't even look as good as the 2D Math in the Online Help, using the exact same browser and session. It's small, and more jagged than even a lack of antialiasing usually gives.

acer

Give it the "bug" tag, Alec.

It's an outright bug that assumptions are put on x merely by calling `series`. It is present in 13.01 (and probably 13.02 too), which makes it present in the version most people are using at this date.

"Du musst Amboss oder Hammer sein." - Goethe

acer

Does LinearAlgebra:-SylvesterSolve do this?

You started with this,

> Z.C.X + C = D;

(Z . C . X) + C = D

So, now if you right-multiply by the inverse of X, you get,

> Z.C.X.X^(-1) + C.X^(`-1`) = D.X^(`-1`);

          (Z . C) + (C . (X^(-1))) = D . (X^(-1))

Now, if you read the help-page for LinearAlgebra:-SylvesterSolve, it says that it solves Matrix equations of the form A . X + isgn * X . B = scale*C.

And you seem to have that form, except that your C is their X, your Z is their A, your X^(-1) is their B, and your D.X^(-1) is their C. If that's correct, then how about calling,

LinearAlgebra:-SylvesterSolve(Z, X^(-1), D.X^(-1) );

You'd want to check that I didn't make a mistake in all that. If it doesn't work, maybe you could post or upload your actual Matrices.
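
If SylvesterSolve is happy with the arguments, a quick numeric sanity check of that mapping might look like the following, with random float Matrices standing in for your data (D1 is used below only because the name D is protected in Maple):

> with(LinearAlgebra):
> Z := RandomMatrix(3, 3, generator = 0.0 .. 1.0):
> X := RandomMatrix(3, 3, generator = 0.0 .. 1.0):
> D1 := RandomMatrix(3, 3, generator = 0.0 .. 1.0):
> C := SylvesterSolve(Z, MatrixInverse(X), D1 . MatrixInverse(X)):
> Norm(Z . C . X + C - D1);    # should be at roundoff level if the mapping is right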

acer
