These are answers submitted by acer

M:=[(s-s1)*(s-s2), (s-s0)*(s-s2), (s-s0)*(s-s1)]:

zip(int,M,[s,s0,s1]);

Or, if you really do have Matrices (instead of lists as above),

M:=Matrix([[(s-s1)*(s-s2), (s-s0)*(s-s2), (s-s0)*(s-s1)]]):

zip(int,M,Matrix([[s,s0,s1]]));

acer

@Christopher2222 It might be more sensible to first ask whether it can be done, and save the question of how for later.

No, you cannot really add another subpackage (like Astronomy) "under" ScientificConstants. Well, not without unprotecting it and replacing & resaving the whole package with a modified version. And you wrote that you didn't want to do that. And it's not the easiest approach, anyway.

It is not true that a separate package cannot make use of ScientificConstants routines, as you supposed in a comment responding to another answer.

The AddConstant help-page suggests placing definitions for new constants inside an initialization file. But that doesn't really bundle them nicely for re-use by other people. You could, as an alternative, put such definitions (via AddConstant) into the ModuleLoad of a package which you savelib to a Library archive. Once you do that, the definitions will be loaded whenever that package is first read by Maple in a session.

The module below is not part (or a subpackage) of ScientificConstants.

First let's write the source of the module.

Astronomy:=module() option package;
export GetConstant,GetValue,GetError,GetUnit,Constant;
local ModuleLoad;
   ModuleLoad:=proc()
      ScientificConstants:-AddConstant(':-mass_of_Jupiter',
                                       'symbol'=':-MJupiter',
                                       'value'=1.90e27,'units'=':-kg')
   end proc:
   Constant:=ScientificConstants:-Constant;
   GetConstant:=ScientificConstants:-GetConstant;
   GetValue:=ScientificConstants:-GetValue;
   GetError:=ScientificConstants:-GetError;
   GetUnit:=ScientificConstants:-GetUnit;
end module:

Now let's savelib it to a Library archive.

try
  LibraryTools:-Create("c:/temp/astro.mla",WRITE):
catch "there is already an archive":
end try;

libname:="c:/temp/astro.mla",libname:
savelib(Astronomy);

Now let's restart, and load it from that archive.

restart:

libname:="c:/temp/astro.mla",libname:

with(Astronomy);

      [Constant, GetConstant, GetError, GetUnit, GetValue]

GetConstant(MJupiter);
    mass_of_Jupiter, symbol = MJupiter, value = 1.90*10^27,
      uncertainty = undefined, units = kg

GetValue(Constant(MJupiter));

                           1.90*10^27

Note that the exports of the module above are not strictly needed. You could just as well load ScientificConstants and use its exports instead. I put in those exports just to save a step. The key bit is the ModuleLoad.

Notice that `evalf/Sum` fails here, and plot is going to try using `evalhf` and `evalf`, but not `value`.

I don't see what the problem is with `add`, except of course that it is not the fastest. And you have to be careful about avoiding evalhf mode.

restart;

S := x->1+Sum(floor((x/(Sum(floor(cos(Pi*(factorial(n-1)+1)/n)^2),
              n = 1 .. r)))^(1/x)), r = 1 .. 2^x):

seq(value(S(x)), x = 1 .. 7);

                     2, 3, 5, 7, 11, 13, 17

seq(evalf(S(x)), x = 1 .. 7);

                  2., 3., 3., 5., 11., 13., 6.

S := x->1+add(floor((x/(add(floor(cos(Pi*(factorial(n-1)+1)/n)^2),
              n = 1 .. r)))^(1/x)), r = 1 .. 2^x):

seq(S(x), x = 1 .. 7);

                     2, 3, 5, 7, 11, 13, 17

As to the speed of this, you could try to optimize in at least two ways. The inner addition (in n) might be replaced with calls to a procedure that has `option remember`. And the outer addition (in r) could be done using a recursive procedure. That would avoid reproducing all previous work in order to generate each successive result.
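
Here is a minimal sketch of the first idea (the procedure name innersum is just illustrative). With `option remember`, each partial sum in n is built from the previous one instead of being recomputed from scratch.

innersum := proc(r::nonnegint) option remember;
   # running sum over n of floor(cos(Pi*((n-1)!+1)/n)^2)
   if r = 0 then 0
   else innersum(r-1) + floor(cos(Pi*(factorial(r-1)+1)/r)^2)
   end if;
end proc:

S := x->1+add(floor((x/innersum(r))^(1/x)), r = 1 .. 2^x):

seq(S(x), x = 1 .. 7);

                     2, 3, 5, 7, 11, 13, 17

Since innersum(r) produces exactly the same terms as the inner add, the results agree.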

Also, you should try to plot `S`, not `S(x)`, since S is not prepared for a symbolic `x` argument (when using `add`). And did you want a point-plot? That is, do you really want this to try to plot for non-posint x? Note that the `evalhf` mode of floating-point evaluation goes wrong for the `add` example I gave, but that is easily avoided.

plot(evalf@S,sample=[seq(i,i=1..7)],adaptive=false,
     view=[1..7,1..20]);

acer

Are you trying to use commands from the Physics package (e.g. KroneckerDelta) but might have forgotten to load that package?

Try issuing the command,

  with(Physics):

before your other work.

acer

How accurate do you need the approximation to be? Maybe you could consider numeric approximation instead of series.

Does it need to have a relative error of 1e-10 say? Or is it just for plotting, in which case an error like 1e-3 may be adequate? For the higher accuracy you could try the numapprox package and one of its routines such as minimax or chebyshev. For just plotting you could compute an Array of values and then interpolate with the CurveFitting package or other means.
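
For instance, here is a hedged sketch of the minimax route; the target function below is just a stand-in, since yours wasn't posted.

approx := numapprox:-minimax(exp(-x)*sin(3*x), x = 0..1, [5,5], 1, 'maxerr'):

maxerr;  # estimated maximum error of the [5,5] rational approximant on 0..1

Evaluating the resulting rational function is then very cheap compared to the original.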

Just how expensive is it to do a single evaluation at a given value of x?

Could you not upload a worksheet containing the function, and attach it to your post?

acer

If you want to create an image format file of the plot, then start with the help-page for the plotsetup command.

If you want to export the data of a plot then you could look at plottools:-getdata (or directly compute an Array of data yourself) and an exporting command such as ExportMatrix. For example,

f:=sin(x)+cos(Pi*x):

P:=plot(f,adaptive=false,sample=[seq(x,x=0..10,10/(5000-1))]):

plottools:-getdata(P)[3];
                          [ 5000 x 2 Matrix      ]
                          [ Data Type: float[8]  ]
                          [ Storage: rectangular ]
                          [ Order: Fortran_order ]

...and then you can apply ExportMatrix (or writedata, etc.) to that 5000x2 Matrix.
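
For instance (the file path here is just a hypothetical example),

ExportMatrix("c:/temp/plotdata.txt", plottools:-getdata(P)[3]);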

acer

If you know that the function is periodic, and if you know the period, then you should only need to evaluate the function enough to compute plot data over a single period. A compound plot over several periods can then be done without any further evaluation of the function, merely by a translation of the single-period plot.

pforce:=p*sin(omega1*t)+3*cos(3*omega1*t);

d:=[p=3,omega1=4.2];

period:=eval(2*Pi/omega1,d);

P:=plot( eval(pforce,d), t=eval(0..2*Pi/omega1,d) ):

P;

plots:-display( seq(plottools:-translate(P,period*i,0), i=-3..3) );

acer

There is a distinction between casting the whole Matrix when it is needed as datatype=sfloat (simply use evalf on the no-datatype Matrix, and create a new Matrix), and casting each entry as it is written into the Matrix. The former is likely more efficient if you only have to do it once, but is costly if you have to keep repeating the action, while the latter adds a cost to every individual entry access. Your Question asks for the latter, it seems.

You could use a customized (user-defined) rtable indexing function as a corrective gatekeeper.

> restart:

> `index/coerce_float` := proc(idx,M,val)
> local g,idx1;
>    idx1 := op(idx);
>    if nargs = 2 then
>       # retrieval
>       M[idx1];
>    else
>       g := op(val);
>       # storage
>       if type(g,sfloat) then
>          M[idx1] := g;
>       elif type(g,{hfloat,integer}) then
>          M[idx1] := SFloat(g);
>       elif type(g,realcons) then
>          M[idx1] := evalf(g);
>       else
>          M[idx1] := g; # watch it burn
>       end if;
>    end if;
> end proc:

> M := Matrix(2,2,'shape'='coerce_float','datatype'='sfloat');

                                     [0.  0.]
                                M := [      ]
                                     [0.  0.]

> M[1,1]:=-4.5:
> M;
                                 [-4.5  0.]
                                 [        ]
                                 [  0.  0.]

> lprint(M);
Matrix(2, 2, {(1, 1) = -4.5}, datatype = sfloat, storage = rectangular,
order = Fortran_order, shape = [coerce_float])

> M[1,1]:=sin(3):
> M;

                             [0.1411200081  0.]
                             [                ]
                             [          0.  0.]

> lprint(M);
Matrix(2, 2, {(1, 1) = .1411200081}, datatype = sfloat, storage = rectangular,
order = Fortran_order, shape = [coerce_float])

> M[1,1]:=HFloat(2.3):
> M;

                          [2.29999999999999982  0.]
                          [                       ]
                          [                 0.  0.]

> lprint(M);
Matrix(2, 2, {(1, 1) = 2.29999999999999982}, datatype = sfloat, storage = rectangular,
order = Fortran_order, shape = [coerce_float])

> M[1,1]:=Int(x^2,x=0..1):
> M;

                             [0.3333333333  0.]
                             [                ]
                             [          0.  0.]

> lprint(M);
Matrix(2, 2, {(1, 1) = .3333333333}, datatype = sfloat, storage = rectangular,
order = Fortran_order, shape = [coerce_float])

> M[1,1]:=x:
Error, (in index/coerce_float) unable to store 'x' when datatype=sfloat

acer

One easy way is to use "multiple assignment". For example,

  evals, evecs := Eigenvectors(DD);

Another way is to use indexing.

  result := Eigenvectors(DD);

  result[2];
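
A self-contained illustration (the small sample Matrix here merely stands in for your own):

  with(LinearAlgebra):

  DD := Matrix([[2,0],[1,3]]):

  evals, evecs := Eigenvectors(DD):

  evals;   # column Vector of eigenvalues
  evecs;   # Matrix whose columns are the corresponding eigenvectors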

You should not be using `D` as a variable, especially at the top level. It is the name of the differential operator (i.e., it differentiates procedures). That is why it is protected at the top level.

acer

The two-argument way of using the `eval` command, along with the `seq` command, can be useful for this kind of manipulation.

For example,

sol:=[fsolve(x^12-x-4,{x})];

            [{x = -1.093000628}, {x = 1.146281083}]

form:=sin(x)-x;

                           sin(x) - x

seq( eval(form,S), S in sol );

                  0.2049899606, -0.2350425820

seq( eval([x,form],S), S in sol );

   [-1.093000628, 0.2049899606], [1.146281083, -0.2350425820]

seq( eval(x,S), S in sol );

                   -1.093000628, 1.146281083

sort(sol, (a,b)->rhs(a[1])>rhs(b[1]) );

            [{x = 1.146281083}, {x = -1.093000628}]

I'll also offer the opinion that using `assign` to give the name `x` a particular value from an `fsolve` (or `solve`) solution is generally a more awkward method.
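
For contrast, a quick sketch of the assign route and the cleanup it forces:

assign(sol[1]);   # now the name x itself is assigned -1.093000628

form;             # evaluates numerically, but x is no longer a free name

x := 'x':         # x must be unassigned before the second solution can be used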

acer

I don't see why you would create N0 and N1 as operators. But, as operators or expressions, this is easy to accomplish using the map command and `.` to get the outer product.

restart:

N0:=zeta->Matrix([[-2*(1-zeta)*(zeta-1/2)],[4*zeta*(1-zeta)],[(2*zeta)*(zeta-1/2)]])^%T:
N1:=zeta->N0(zeta)^%T:
map(int,N1(zeta).N0(zeta),zeta);

restart:

N0:=Matrix([[-2*(1-zeta)*(zeta-1/2)],[4*zeta*(1-zeta)],[(2*zeta)*(zeta-1/2)]])^%T:
N1:=N0^%T:
map(int,N1.N0,zeta);

acer

This routine by Joe seems to fix it.

Simply run that in a worksheet, calling it with the full name of your problem file (as a string) as the procedure's argument. It should write out a fixed version, with _fixed in the filename.

acer

My first choice for this kind of thing would be to store the Matrix as a member of a module, and have all the working procedures also be exports of that module. You could have the routine which constructs the Matrix be one of the module exports and, if you want, store the Matrix as a local of that module (thus still ensuring that it is accessible to all its members).

You have a lot of questions, implicit and explicit. The answers all interrelate, so I hope the following addresses most of it.

Note that Maple does not make a copy of an rtable (Matrix, Array, or Vector) when passing it as an argument to a procedure. That is true for several if not most of Maple's mutable data structures (of which rtables are examples). It would be a catastrophe for performance if it did. One consequence of not doing full copy/evaluations is that in-place operations are possible on rtable arguments. In my experience using Maple heavily for floating-point linear algebra since Maple 6, the ability to do in-place operations on hardware datatype=float[8] rtables is one of the most important factors in getting higher-performance behaviour.

Maple does not yet have a data structure which represents an rtable whose data is stored (permanently or semi-permanently) in the GPU's memory. It is still the case, when CUDA is enabled in Maple, that main-memory float[8] hardware double-precision rtable data is passed to the GPU's memory with each call to a (the) CUDA function. This involves a very high data-transfer cost. Each time you call it, Maple has to push the data of two float[8] Matrices up to the card, and then pull the data of the result float[8] Matrix back down from the card. This is probably so expensive that only an O(n^3) operation like matrix-matrix multiplication would be worthwhile using this behavioural model. It might even be hitting a law of diminishing returns. (See this very recent review, by someone who is usually pretty aware of how to wring the most out of big-M math software.) Also, using `.` or LinearAlgebra:-MatrixMatrixMultiply to invoke the CUDA version of the underlying matrix-matrix function will produce a new container for the result. Doing so repeatedly will mean that the old results get memory-managed and garbage-collected by Maple, which is additional overhead.

You do not mention your Operating System. It matters. It sounds as if you are going to be using the BLAS function dgemm, repeatedly, via an entry point from Maple. A datatype=float[8] rtable is a Maple data structure whose data portion is a contiguous portion of main memory. It is a rectangular table (hence, rtable). A Matrix is one flavour of rtable. Maple simply passes the address(es) of the data portions of the relevant float[8] rtables as the relevant arguments of a call to a dgemm function in some shared library. If using CUDA, it will be the dgemm in the CUDA runtime libraries. On MS-Windows, it will be the dgemm in the MKL runtime bundled with Maple. On Linux, it will be the dgemm in a shared ATLAS library. And on OSX it will be a generic dgemm. On Windows, the MKL will automatically detect how many cores and cpus are available, and use all of them by default. That scales pretty well. On Linux, depending on the cpu vendor, Maple will only use up to 2 cores or so, but you can build an ATLAS tuned for a host with a greater number of cores and then drop it in as a replacement.

If Maple is thus already using all available cores, then there is no need to try to break up the matrix-matrix multiplication yourself (to do Strassen, say) using Maple's Threads/Task facilities. In any event, note that the external calls to the shared libs (with the dgemm) are not made with the THREAD_SAFE option, which means that Maple only allows one concurrent instance of dl-opening those external libs to run. So only one Maple thread would ever invoke the external dgemm at a given time, anyway.

The external BLAS routines do not put a mutex lock on the data of the rtables, and those functions are themselves threaded.

Note that Maple's time() routine reports something like the sum of all cycles used by all cores. Hence it may spuriously appear that a float[8] Matrix operation doesn't get sped up with the number of cores, even when it does! An alternative is to measure with the variant time[real]() and so see how long it actually takes in wall-clock time. Of course, doing this effectively will entail ensuring that your machine's load is otherwise as low as possible.
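
For instance (a hedged sketch; the size is arbitrary):

A := LinearAlgebra:-RandomMatrix(2000,2000,'outputoptions'=['datatype'=float[8]]):
B := LinearAlgebra:-RandomMatrix(2000,2000,'outputoptions'=['datatype'=float[8]]):

st := time():       A.B:  time() - st;        # CPU time, summed across all cores
st := time[real](): A.B:  time[real]() - st;  # elapsed wall-clock time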

Ok, where am I? All these topics double back on each other.

Right, let's talk about dgemm. That is the Basic Linear Algebra Subprogram which does double-precision general matrix-matrix multiplication. It works like this: given Matrices A, B, C, and scalars alpha and beta, it performs

            alpha*A*B + beta*C -> C

There are several things to admire about that. The first is that the Matrix C, which will contain the result, is acted upon in-place. You could re-use the same C (possibly another module local) as the result container for multiple calls (and incur no memory management, as it wouldn't be garbage-collected after each use). If beta=0 then the previous contents of C do not get re-added, but of course you'll still want to fill C with zeroes before calling, if you'd used it before. You do not have to scale A or B before calling dgemm, as you can just use alpha to effect that. Also, there are additional arguments to dgemm which denote whether the Matrices are to be considered as transposed! So you need never waste time transposing the Matrices beforehand. (In fact there is no transposition function in all of BLAS or LAPACK, because nobody should ever waste time doing that when it can be accommodated with mere changes to the indexing used in the functions.)

Yes, there is a way to access dgemm from Maple so as to make use of such optional subtlety. It's not directly available, but I can post an example if anyone is interested.

In Maple, memory management of large rtables -- even relatively lean ones with the float[8] datatype -- is expensive. In contrast, a routine like ArrayTools:-Fill can zero out a large float[8] rtable in very short time. This all contributes to why inplace operations can often win the day for numerical linear algebra in Maple.

My general advice would include things like these points:

- use only datatype=float[8] (or complex[8]) rtables (or Matrices, Vectors, Arrays)

- try very hard to have the code act inplace on a reusable container

- if transposing often, either do so with the Transpose(...,inplace) command, or use a low-level dgemm entry point to get rid of all such explicit transpositions

- try to make all scalar operations in the code run in evalhf mode, or be Compilable (you cannot do external calls in either mode, so make the procedures which do the matrix-matrix multiply and linear algebra bits separate routines from any other hopefully-evalhf'able computations)

- never make a computational routine create a Matrix. Instead, if possible, have it accept the Matrix (into which results will be put) as an additional argument. You can create a separate parent procedure which both creates any needed rtables and then invokes the computational procedures. If you always write your routines this way, then beautiful inplace-style savings may fall from the sky (see the sketch after this list)

- don't focus on using CUDA, for now

- forget about using Maple's Threads/Task facilities for the matrix-matrix multiplication here, since your cores are likely all being used in each individual dgemm call already. If you want to use Maple's threading for other parts of the whole job, then split that off from the matrix-matrix multiplication (since dlopens of the relevant external libs are blocking -- only one at a time)

- if you want to Thread other parts of the whole job, do so using evalhf'able procedures and calls. For embarrassingly parallelizable jobs I've been able to get an optimal linear speedup with core count. Maple does not put a lock on all the Matrix entries at once -- the mutex seems to give finer-grained access -- and the fortunate algorithm will involve no mutex lock at all, though we are not always so lucky in our work. But I have had consistent experiences showing that single (serial use of) Compiled procedures acting inplace and optimally on float[8] rtables perform about eight times as fast as Task/Threaded parallelized procedures also acting inplace and optimally under evalhf. So the Compiler can often beat the Task model of threading in Maple, up to about 4-8 cores
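
Here is a minimal sketch (names like dowork are hypothetical, and this is untested) of the container-reuse pattern from that list: the computational routine receives its output Matrix as an argument and acts in-place, while a parent procedure creates the rtables just once.

dowork := proc(A::Matrix, B::Matrix, C::Matrix)
   # C <- A, then C <- C.B in-place, so no new Matrix is created per call
   ArrayTools:-Copy(A, C);
   LinearAlgebra:-MatrixMatrixMultiply(C, B, 'inplace'=true);
   NULL;
end proc:

parent := proc(n::posint)
   local A, B, C;
   A := LinearAlgebra:-RandomMatrix(n,n,'outputoptions'=['datatype'=float[8]]);
   B := LinearAlgebra:-RandomMatrix(n,n,'outputoptions'=['datatype'=float[8]]);
   C := Matrix(n,n,'datatype'=float[8]);   # reusable result container
   dowork(A, B, C);
   C;
end proc: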

acer

Could you try making your module a named module? (There seemed to be a hint, in the debugger, at the juncture where it went wrong.)

restart:
with(CodeTools):

M := module m() option package; export f; f := proc() end proc; end module;

         module m () export f; option package; end module

EncodeName(M:-f);

_Inert_ASSIGNEDLOCALNAME("f", "PROC", 0, _Inert_ATTRIBUTE(
  _Inert_EXPSEQ(_Inert_EQUATION(_Inert_NAME("modulename"),
  _Inert_ASSIGNEDNAME("m", "MODULE", _Inert_ATTRIBUTE(_Inert_NAME(
  "protected", _Inert_ATTRIBUTE(_Inert_NAME("protected")))))),
  _Inert_NAME("protected",
  _Inert_ATTRIBUTE(_Inert_NAME("protected"))))))

DecodeName(%);

                              m:-f

I'll have to test, but I wonder whether it would also be OK if the module were being read in from a .mla archive.

acer

Curiously, something very similar was asked just a day or so ago. The entries of an rtable are not automatically re-evaluated: after assume replaces lambda by lambda~, the Vector's stored entries still contain the old lambda, so the substitution finds nothing to match until rtable_eval forces evaluation of the entries.

> restart:

> b:=Vector([lambda-2, lambda+23]):

> assume(lambda, real);

> subs(lambda=1, b);

                                [lambda~ - 2 ]
                                [            ]
                                [lambda~ + 23]

> eval(b, lambda=1);   
                                [lambda~ - 2 ]
                                [            ]
                                [lambda~ + 23]

> subs(lambda=1, rtable_eval(b));

                                     [-1]
                                     [  ]
                                     [24]

> eval(rtable_eval(b), lambda=1);
                                     [-1]
                                     [  ]
                                     [24]

> rtable_eval(b,inplace): # only need this once

> subs(lambda=1, b);

                                     [-1]
                                     [  ]
                                     [24]

> eval(b, lambda=1);             
                                     [-1]
                                     [  ]
                                     [24]

acer
