These are answers submitted by acer

Fortran(subs(unknown=cg,[codegen[optimize](f)]));

acer

I am not a big fan of this much code abstraction. But I suspect that this eventually goes out to external, compiled code. (It is not only kernel builtins for which one cannot see the source.)

Sometimes a module is implemented to be appliable [sic], by putting code inside a procedure which is its local ModuleApply member. This seems true of Finance:-BrownianMotion. You can call it with arguments, and it is a module, and its ModuleApply procedure is where the work first starts getting done. (Sorry, if you knew that already.)
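As a toy illustration of the mechanism (a hypothetical module, not part of Finance), the ModuleApply member is what gets invoked when the module itself is applied to arguments:

m := module()
  local ModuleApply;
  ModuleApply := proc(x) x^2 end proc;  # applying m dispatches here
end module:

m(3);   # returns 9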

> restart:
> kernelopts(opaquemodules=false):

> print(Finance:-BrownianMotion:-ModuleApply);

proc(x0::{Vector, realcons})
  if type(x0, ':-realcons') then
    return BrownianMotion1D(args)
  else
    return BrownianMotion(args)
  end if;
end proc;

> print(Finance:-BrownianMotion:-BrownianMotion1D);

module () local GetStochasticVariables, GetStochasticFactors, ModuleApply,
    ProcessOptions, StochasticFactorMap; end module

> print(Finance:-BrownianMotion:-BrownianMotion1D:-ModuleApply);

proc()
local parms, rest;
  parms, rest := ProcessOptions(args);
  if 0 < nops(rest) then error "unexpected parameters: %0", op(rest) end if;
  try
    return Utilities:-MakeStochasticProcess(':-_X', parms[1], parms[2])
  catch:
    error 
  end try;
end proc;

> print(Finance:-Utilities:-MakeStochasticProcess);

     proc()
     local x, p, y, t;
       if type([args[1 .. 2]], [':-name', {
         ':-_FinanceObject'(':-MapleMarkovProcess'), 
         ':-_FinanceObject'(':-MapleMarkovProcess1D')}]) then
         y := `tools/genglobal`(args[1]);
         t := `tools/genglobal`(':-_StochasticProcess');
         assign(t, args[2]);
         setattribute(y, t);
         protect(y);
         return y;
       elif type([args[1 .. 3]], [':-name', ':-string', ':-table']) then
         return procname(args[1], MakeFinanceObject(args[2 .. -1]))
       end if;
     end proc;

> showstat(Finance:-Utilities:-MakeFinanceObject);

Finance:-Utilities:-MakeFinanceObject := proc(extfunc::string, parameters::table, printfunc::procedure := proc () module () end module end proc)
local m, i, p, q, f, data, external;
   1   data := Finance:-Utilities:-GetImplementation(parameters);
   2   p := [op(sort(map2(op,1,[indices(parameters)]))), ModulePrint];
   3   q := map(convert,p,':-name');
   4   f := Finance:-ExternalSupport:-InitializeExternalFunction(extfunc);
   5   external := f(data);
   6   m := eval(subs(('__Y__') = ('external'),('__Z__') = op(q),parse("module() option implementation = __Y__; export __Z__; end module")));
   7   for i to nops(p)-1 do
   8     m[q[i]] := parameters[p[i]]
       end do;
   9   m[':-ModulePrint'] := eval(printfunc);
  10   return eval(m,1)
end proc

acer

You'll want to check this reasoning.

It seems to me that you are trying to compute something like a sum of terms of OEIS A001511. As Axel mentioned, finding the largest factor of the form 2^k in N! involves summing the exponents of 2 in each even term of the factorial. Hence the summation. And at first glance this is nasty because we want N = 2011!, so N! = (2011!)! is far too large to compute explicitly.

But there is a formula (due to Daniele Parisse) for terms of that A001511 sequence, as a function of terms of the OEIS A000120 sequence, namely:

2-A000120(n)+A000120(n-1)

And so it looks to me as if a sum of such A000120 terms will be a telescoping sum. Hoo-rah.

If I've done it right, then procedure `PP` below computes that telescoped sum. You can enjoy checking whether it is OK. It might be off by one, or...

And there in `PP` you can see n!, since we want the whole 2^k factor of (2011!)! when 2011 is the input to `PP`. The key fact is Legendre's formula: the exponent of 2 in M! equals M minus the binary digit sum of M, and dividing M by 2 only shifts bits without changing that digit sum, which is why `PP` computes n! - Q(n!/2).

PP:=proc(n) n! - Q(n!/2); end proc:

# Q and QQ both produce OEIS A000120, http://oeis.org/A000120
Q:=proc(n) local w, m, i;
w := 0; m := n;
while m > 0 do
i := m mod 2;
w := w+i;
m := (m-i)/2;
end do;
w;
end:

# QQ: a (slower?) alternative to Q, via a formula of Benoit Cloitre
QQ:=proc(n)
log[2](((2*n)!/(n!)^2)/numer(binomial(2*n,n)/4^n));
end proc:

seq(PP(i),i=2..10);

1, 4, 22, 116, 716, 5034, 40314, 362874, 3628789

for j from 2 to 10 do
st:=time():
if irem((j!)!,2^PP(j)) = 0 and
irem((j!)!,2^(PP(j)+1)) > 0 then
print(j,"pass",time()-st);
else
print(j, "fail",time()-st);
end if;
end do:

2, "pass", 0.
3, "pass", 0.
4, "pass", 0.
5, "pass", 0.
6, "pass", 0.
7, "pass", 0.
8, "pass", 0.125
9, "pass", 2.547
10, "pass", 56.125

# PP(2011); # if you want a result to look at (don't worry, it is fast, less than a page)

acer

Using Maple 12.02 or 14.01 (Windows 7, 32bit Maple),

s:=Sockets:-Open("Tomorrowsgaspricetoday.com",80):
Sockets:-Write(s,"GET /rssfeed.php?city=16 HTTP/1.1\nContent-Type: text/plain\nUser-Agent: Maple\nHost: tomorrowsgaspricetoday.com:80\n\n"):
R:="": count:=0:
while count<100 do
  r:=Sockets:-ReadLine(s,30);
  if r=false then break;
  else R:=cat(R,"\n",r);
       count:=count+1;
  end if;
end do:
R;

In Maple 15 (Windows 7, 32bit Maple),

HTTP:-Get("http://tomorrowsgaspricetoday.com/rssfeed.php?city=16");

Try this,

  Explore(plot(a*sin(x*b),x=-2*Pi..2*Pi,view=[-2*Pi..2*Pi,-10..10]));

That will give you an initial pop-up window. Do not check the boxes to skip either a or b. Click the Explore button. Then a new worksheet should appear, with an embedded Plot Component and two sliders for a and b. Move the sliders. (No plot curve will appear while the `a` slider is at value 0, of course.)

In the pop-up window, giving the end points of the ranges (for a or for b) as floats instead of whole numbers makes the slider(s) take on values "smoothly" rather than just at discrete integer values.

acer

Have a look at section 15.7 of the (new) Programming Guide. It discusses `verify`, as one approach to writing tests and unit tests.

It also mentions CodeTools:-Test, which is new to Maple 15 (and the help-page for which doesn't yet show up on the Online Help pages).
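For instance (a minimal sketch; the exact reporting behavior and the tolerance semantics of the float verification are described on the verify and CodeTools:-Test help pages):

verify( int(x, x), (1/2)*x^2 );            # returns true
verify( 1.0000000001, 1, 'float'(8) );     # true: compared as floats, to roughly 8 digits
CodeTools:-Test( int(x, x), (1/2)*x^2 );   # quiet on success, reports a diagnostic on failure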

acer

Clearly the innermost nested computation, euclid(s,n), is of central importance to the performance.

Joe's suggestion to use `option cache` is helpful. But it seems that `option remember` may perform better. (Is some critical previous value being evicted from the cache? Anyway, it is easy enough to check.)

But there is a problem with the coding. That while loop is too open-ended in its stopping criteria. At a given low working precision there is no reason why the two compared values should be within the stated tolerance, no matter how many more terms get considered in computing euclid(s,n). If round-off has a big enough effect, then at the given value of Digits the approximation of euclid(s,n) may not even be converging, let alone converging to Zeta(s).

Look at it another way. If euclid(s,n) is altered to compute a floating-point approximation, then the whole task becomes very fast for low q=1..5. Notice however that the required number of terms n-1 gets large quickly for the s=2 case, as q rises. But if the evaluation to floating-point of euclid(s,n) is not accurate enough then the loop will overshoot the true required/minimal n-1 and keep on ineffectually using more and more terms in the product formula.

So it may be tricky to manage the amount of needed working precision, with the loop structure coded as it is. How would one know how high to take the working precision Digits, when computing euclid(s,n)?

Other questions I did not consider in depth: Is evalf(Zeta(s)) accurate to Digits' worth of decimal digits? Is that necessary in order to only have to raise Digits inside `euclid` and not for the absolute comparison?

So, how high should Digits be inside `euclid`? How high should Digits be globally (for Zeta(s) and the comparison with 10^(-q))? It needs to be at least 14 for that second question, at q=7.

How does one tell that the while-loop has missed the right value of n-1, and whether it is just endlessly churning?

It looks as if the task can be made fast, if optimal working precision (dependent upon q?) can be used. It looks as if the while-loop runs away if the working precision is not high enough.

restart:
euclid := proc(s,n)
option remember;
  if n = 1 then
    evalf[200]( 1/(1-1/2^s) );
  else
    evalf[200](procname(s,n-1)/(1-1/ithprime(n)^s));
  end if;
end proc:
Digits:=14:
for q from 1 to 7 do
  stq:=time();
  m := 10.0^(-q);
  for s from 2 to 10 do
   Z:=evalf(Zeta(s));
   n := 1;
   b := 1;
    while b >= m do
     b := evalf(abs(Z-euclid(s, n)));
     n := n+1
   end do;
   print([q, evalf[3](m), s, n-1]);
 end do;
 print([time()-stq]);
end do:
                        [1, 0.100, 2, 3]
                        [1, 0.100, 3, 1]
                        [1, 0.100, 4, 1]
                        [1, 0.100, 5, 1]
                        [1, 0.100, 6, 1]
                        [1, 0.100, 7, 1]
                        [1, 0.100, 8, 1]
                        [1, 0.100, 9, 1]
                       [1, 0.100, 10, 1]
                              [0.]
                       [2, 0.0100, 2, 12]
                       [2, 0.0100, 3, 3]
                       [2, 0.0100, 4, 2]
                       [2, 0.0100, 5, 1]
                       [2, 0.0100, 6, 1]
                       [2, 0.0100, 7, 1]
                       [2, 0.0100, 8, 1]
                       [2, 0.0100, 9, 1]
                       [2, 0.0100, 10, 1]
                              [0.]
                      [3, 0.00100, 2, 54]
                       [3, 0.00100, 3, 6]
                       [3, 0.00100, 4, 3]
                       [3, 0.00100, 5, 2]
                       [3, 0.00100, 6, 2]
                       [3, 0.00100, 7, 1]
                       [3, 0.00100, 8, 1]
                       [3, 0.00100, 9, 1]
                      [3, 0.00100, 10, 1]
                            [0.016]
                     [4, 0.000100, 2, 294]
                      [4, 0.000100, 3, 12]
                      [4, 0.000100, 4, 5]
                      [4, 0.000100, 5, 3]
                      [4, 0.000100, 6, 2]
                      [4, 0.000100, 7, 2]
                      [4, 0.000100, 8, 2]
                      [4, 0.000100, 9, 1]
                      [4, 0.000100, 10, 1]
                            [0.015]
                    [5, 0.0000100, 2, 1809]
                     [5, 0.0000100, 3, 27]
                      [5, 0.0000100, 4, 8]
                      [5, 0.0000100, 5, 5]
                      [5, 0.0000100, 6, 3]
                      [5, 0.0000100, 7, 3]
                      [5, 0.0000100, 8, 2]
                      [5, 0.0000100, 9, 2]
                     [5, 0.0000100, 10, 2]
                            [0.047]
                   [6, 0.00000100, 2, 12106]
                     [6, 0.00000100, 3, 63]
                     [6, 0.00000100, 4, 14]
                     [6, 0.00000100, 5, 7]
                     [6, 0.00000100, 6, 4]
                     [6, 0.00000100, 7, 4]
                     [6, 0.00000100, 8, 3]
                     [6, 0.00000100, 9, 2]
                     [6, 0.00000100, 10, 2]
                            [0.374]
                    [7, 1.00 10^(-7), 2, 86226]
                     [7, 1.00 10^(-7), 3, 155]
                      [7, 1.00 10^(-7), 4, 24]
                      [7, 1.00 10^(-7), 5, 10]
                       [7, 1.00 10^(-7), 6, 6]
                       [7, 1.00 10^(-7), 7, 4]
                       [7, 1.00 10^(-7), 8, 4]
                       [7, 1.00 10^(-7), 9, 3]
                      [7, 1.00 10^(-7), 10, 3]
                            [3.276]

[Edited. I already know that this might not converge for q=8. Does it or doesn't it? And if not, then what values of Digits (at the two places it's set in the code) would make it succeed? And so on, for larger q.]

If you try and keep the result from `euclid` exact, until you need to `shake` it, then you might have to pay the penalty of forming such a big exact product. Rock and hard place.

acer

You can do a change of variable (or a reflection), along with forced tickmarks.

restart:
I2:=g->(200*g)/sqrt(0.04+g^2):

plot(I2(g),g=0..1); 

plot(I2(1-g),g=0..1,tickmarks=[[seq(1-0.2*i=0.2*i,i=1..5)],"default"]);

acer

The batch file in question is part of the MSVC++ compiler (and SDK). It just sets up the PATH, LIB, and INCLUDE environment variables for that C/C++ compiler, appropriate for the 64bit version.

If you haven't installed the MSVC++ then you won't have that batch file.

If you don't have it, then don't worry. Everything in Maple should work fine without it, except the Compiler package (which in turn gets used automatically by almost nothing in Maple -- it's a means to speed some things up, is all).

The Compiler package also provides a Setup command to configure the external (in this case, 64bit MSVC++) compiler as a separate step, from within any later Maple session.
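If you do install the compiler later, something along these lines should hook it up and confirm that it works (a sketch; the test procedure is just a throwaway, and Setup can also be pointed at a specific compiler location):

Compiler:-Setup();                      # detects or prompts for the external C compiler
p := proc(x::float) x^2 end proc:
cp := Compiler:-Compile(p):             # errors out if no compiler is configured
cp(3.0);                                # 9.0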

So, don't panic, if you can't set this part up at installation time.

acer

It's easy enough to print a single float using a customized format. That's what printf does. Note that one drawback of fixed-notation printing is that it can hide trailing-digit information. You can work around that, using Digits, say, to control the fixed-format width as below (or even more dynamically, also using the length of trailing nonzero digits, which I don't show here).

> restart:

> 1/1e7;
                                      -7
                        1.000000000 10  

> printf("%.10f\n", 1/1e7);
0.0000001000

> Digits:=20:

> printf("%.10f\n", 1/1e7);
0.0000001000

> printf(cat("%.",Digits,"f\n"), 1/1e7);
0.00000010000000000000

It gets harder if you want to customize how floats get printed as they appear in expressions. This does not seem to be configurable with a procedure like `print/float` (or similar) as an extension. And the various Standard GUI options for configuring digits-elision, etc, don't seem to cover your situation. So the simple answer to what you're asking might be no, if you want the effect for whole expressions and not just simple floats or collections of floats.

For expressions which contain floats, a solution involving just printing (and artefacts of printing only) would be nice, since then the displayed result could still be used directly in any subsequent calculation.

All I can think of right now is actual substitution into expressions, i.e. replacing the floats with names that get printed closer to what you've requested. If you convert the floats in the expression G below to names, then the numeric quantity can be recovered later using `parse`, and so used in subsequent computation.

Unfortunately, the float-name objects get displayed in the Standard GUI in italics (so, not quite as nice as you see below). Also, they don't necessarily appear in front, as you'd normally expect from numeric coefficients.

> restart:

> convert(sprintf("%.10f", 1/1e7),`name`);

                             0.0000001000

> parse(%);

                                      -7
                              1.000 10  

> G:=sin(y*1e-7)*exp(5e-7*x);

                        /    -7  \    /    -7  \
                     sin\1 10   y/ exp\5 10   x/

> conv:=proc(x)
>    convert(sprintf(cat("%.",Digits,"f"), x),`name`);
> end proc:

> subsindets(G,float,conv);

               sin(y 0.0000001000) exp(x 0.0000005000)

> Digits:=20:

> subsindets(G,float,conv);

     sin(y 0.00000010000000000000) exp(x 0.00000050000000000000)

> subsindets(%,name,parse);

          /                  -7  \    /                  -7  \
       sin\1.0000000000000 10   y/ exp\5.0000000000000 10   x/

Another variation of the above is to use something like atomic identifiers (or TypeMK or whatever undocumented thing one wants to call it...). In this form, the printing in the GUI is in an upright font, not italic. But now there is no easy recovery of the numeric object, that I know of, so use in subsequent computation is problematic.

And the numeric-names still don't appear in front as coefficients.

> restart:

> G:=sin(y*1e-7)*exp(5e-7*x);

                        /    -7  \    /    -7  \
                     sin\1 10   y/ exp\5 10   x/

> conv:=proc(x)
>    cat(`#mo(`,sprintf("%.10f",x),`)`);
> end proc:

> subsindets(G,float,conv);

                  sin(y 0.0000001000) exp(x 0.0000005000)

> Digits:=20:

> conv:=proc(x)
>    cat(`#mo(`,sprintf(cat("%.",Digits,"f"),x),`)`);
> end proc:

> subsindets(G,float,conv);

         sin(y 0.00000010000000000000) exp(x 0.00000050000000000000)

acer

The difference is that in the case of your example 2 the argument to sqrt is first computed as a single float. That causes the `sqrt` command to follow the path of computation appropriate for that input.

> restart:
> trace(sqrt):

> evalf(sqrt(3.0*(1/4)), 5);

execute sqrt, args = .7500000000
{--> enter sqrt, args = .7500000000
                            0.86603
<-- exit sqrt (now at top level) = .86603}
                            0.86603

In your example 3, the `sqrt` command evaluates symbolically, up front and before `evalf` gets in on the action. And what sqrt does is pull out the 1/2 factor. This turns it into your example 4.

> restart:
> trace(sqrt):

> evalf(sqrt(3*(1/4)), 5);

execute sqrt, args = 3/4
{--> enter sqrt, args = 3/4
execute sqrt, args = 12
{--> enter sqrt, args = 12
                               3
                               1
                               1
                               12
                               6
                               2
                               2
                               1
                               3
                               2
                               1
                               1
                               1
                               1
                               3
<-- exit sqrt (now in sqrt) = 2*3^(1/2)}
                            1  (1/2)
                            - 3     
                            2       
<-- exit sqrt (now at top level) = (1/2)*3^(1/2)}
                            0.86605

And your example 1 gets handled not much differently from how example 4 gets handled.

Let's address your highly sensible question:

    "What would be the right command to get an accurate rounded result for
     1/2)*sqrt(3)?"

In order to answer that properly we'll need a little bit of knowledge about Maple's "model of floating-point computation", ie. its (stated or unstated) rules and behavior.

Digits=5 means that Maple's working precision is 5. It doesn't mean that 5 digits of accuracy are promised for floating-point computation of general compound expressions. There is no environment variable which relates to accuracy in the way that Digits relates to the working precision.

So by reducing Digits to 5 (or somewhat equivalently by calling evalf[5] or evalf with 5 as second argument) you are reducing the working precision and making results less accurate than usual. You are not at all ensuring that the results are accurate to 5 digits. Indeed, you are making that rather less likely to occur.

For your very simple example, you can display 5 accurate digits of result by allowing Maple to compute an intermediate result at a higher working precision (even the default Digits=10 suffices here) and simultaneously controlling how that intermediate result gets printed. Here are two ways to get that, done in the Standard GUI as 1D input in a Worksheet,

restart:
interface(displayprecision=5):
evalf( 1/2*sqrt(3) );
                            0.86603
restart:
ans:=evalf( 1/2*sqrt(3) ):
evalf[5](ans);
                            0.86603

Interestingly, that first result above is only being printed as 0.86603 to the display. Maple's engine still knows that its value is actually 0.8660254040 for subsequent computation. (True also for cut'n'paste outside Maple.) If you don't like that, then the second result above is stored internally just as it shows.

I hope that helps.

Some people might now ask, how can you know how high to raise Digits, to ensure that there are enough correct digits in the result to display? One answer to that involves the `shake` command. ...and I'm overdue for finishing a promised Post on that.
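For the curious, the call looks something like this (a sketch; the returned endpoints will vary with the expression and the requested accuracy):

shake( (1/2)*sqrt(3), 5 );   # returns an INTERVAL(a .. b) whose endpoints bracket the true value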

acer

It looks like a bug in either the 2D Math parser or the Standard GUI.

Here's a simpler example to reproduce.

Worksheet (not Document), 2D Math input mode, a single execution group:

On the first line,

restart:

On the next line (after <shift>-<enter>), a Matrix with, say, numerical entries entered using the Matrix palette.

M := Matrix([[1,2],[3,4]])

Then <enter>. The above displays nonsense output that looks like M:=_rtable[5163856]. That appears to be a bogus handle ("handles" being what the GUI uses internally to refer to rtables such as Matrices in output, when talking to the kernel). If one subsequently queries M[1,1], say, a value is not obtained (let alone the correct value).

I will submit this as a bug report.

acer

Create the container Matrix like one of the following (depending on whether it is symmetric),

M:=Matrix(47008,47008,storage=sparse,datatype=float[8]):

MS:=Matrix(47008,47008,shape=symmetric,storage=sparse,datatype=float[8]):

That will create a Matrix whose data is stored in three contiguous arrays in memory. You shouldn't need to access these. And you can insert entries as you go, with Maple automatically adjusting the size of the internal storage. (Actually, it adds a little buffer whenever you insert past the current amount of stored space. These are details that you don't normally need to know.)

Now you can add entries. If you want, you can also query how many entries are stored, allocated, etc.

> rtable_num_elems(MS,'Stored');
0

> rtable_num_elems(MS,'Allocated');
0

> MS[4,3]:=3:
> MS[3,4];
3.0

> rtable_num_elems(MS,'Stored');
2

> rtable_num_elems(MS,'Allocated');
10

> for i from 1 to 10 do
> MS[i,i+3]:=sin(1.0*i):
> end do:

> rtable_num_elems(MS,'Stored');
22

> rtable_num_elems(MS,'Allocated');
30

So you can then write regular Maple code to assign entries into the Matrix. You can also call LinearSolve on that Matrix and a datatype=float[8] (non-sparse!) Vector as the right-hand-side. That should automatically invoke an appropriate sparse linear solver (NAG's f11daf, f11dcf). See here for an example showing that.

Note that datatype=float[8] is key. Without it, the storage=sparse option to the Matrix constructor produces a different kind of structure (a sparse table, internally) which is not suitable for those sparse linear solvers.
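A quick way to see the distinction (a sketch; rtable_options simply reports how a container was created):

A := Matrix(5, 5, storage=sparse):                     # sparse table storage, default datatype
B := Matrix(5, 5, storage=sparse, datatype=float[8]):  # the form the sparse float[8] solvers need
rtable_options(A, 'datatype'), rtable_options(B, 'datatype');   # anything, float[8]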

It's likely a bad idea to try and form a set of the indexed entries (as suggested in this answer), as the set itself will likely be comparable in size to, or maybe even much larger than, the sparse float[8] Matrix itself. It'll likely be important to avoid storing the data in any form except the final target sparse float[8] container Matrix (even temporarily). If you run into problems with that, maybe ask again (with some code posted?). Hope that makes sense.

acer

The good news is that regular Maple already has (another, completely separate) copy of several of these f11 NAG Library routines embedded within it. These work right from the LinearAlgebra package in the Maple Library, without the Connector or any extra NAG download or install.

> restart:

> with(LinearAlgebra):

> a := Vector[row](22,[1.0,1.0,-1.0,2.0,2.0,
>                      3.0,-2.0,1.0,-2.0,1.0,1.0],
>                  datatype=float[8]):

> irow := Vector[row](22,[1,1,2,2,2,3,3,4,4,4,4],
>                  datatype=integer[kernelopts('wordsize')/8]):

> icol := Vector[row](22,[2,3,1,3,4,1,4,1,2,3,4],
>                  datatype=integer[kernelopts('wordsize')/8]):

> A:=Matrix(4,4,storage=sparse,datatype=float[8]):
> for i from 1 to 11 do
>    A[irow[i],icol[i]]:=a[i];
> end do:

> A;
                             [ 0.   1.  1.   0.]
                             [                 ]
                             [-1.   0.  2.   2.]
                             [                 ]
                             [ 3.   0.  0.  -2.]
                             [                 ]
                             [ 1.  -2.  1.   1.]

> V:=Vector(4,(i)->i,datatype=float[8]);
                                       [1.]
                                       [  ]
                                       [2.]
                                  V := [  ]
                                       [3.]
                                       [  ]
                                       [4.]

> infolevel[LinearAlgebra]:=6:

> X:=LinearSolve(A,V);

LinearSolve: using method SparseIterative
LinearSolve: using method   SparseIterative
LinearSolve: calling external function
LinearSolve: using CGS method
LinearSolve: preconditioning with incomplete LU factorization
LinearSolve: level of fill =  0
LinearSolve: using complete pivoting strategy
LinearSolve: dimension of workspaces for preconditioner =  30
LinearSolve: using infinity norm in stopping criteria
LinearSolve: setting maximum iterations to  200
LinearSolve: setting tolerance to  0.10e-7
LinearSolve: NAG hw_f11zaf
LinearSolve: NAG hw_f11daf
LinearSolve: NAG hw_f11dcf
LinearSolve: number of iterations 1
LinearSolve: residual computed last as HFloat(4.440892098500626e-16)
                             [                   -16]
                             [4.44089209850063 10   ]
                             [                      ]
                             [     -1.50000000000000]
                        X := [                      ]
                             [      2.50000000000000]
                             [                      ]
                             [     -1.50000000000000]

> A.X-V;

unknown: hw_SpMatVecMulRR
                          [                     0.]
                          [                       ]
                          [                     0.]
                          [                       ]
                          [                    -16]
                          [-4.44089209850063 10   ]
                          [                       ]
                          [                     0.]

As you can see, the above uses f11daf, followed by f11dcf. Those are Fortran Library versions of the C Library functions f11dac and f11dcc.

There isn't an easy way to preserve the "incomplete LU factorization" results from f11daf, and to re-use them in multiple successive calls to f11dcf. Instead, both f11daf and f11dcf get called together, each time LinearSolve gets called. So if one has multiple RHS Vectors then it's better to solve them all at once, so as to factorize the Matrix only once. (An alternative would involve poking into the LinearAlgebra routine which makes the individual external calls to the compiled wrappers of f11daf and f11dcf, and replicating the set-up of workspaces, external calls, etc, relevant to this real nonsymmetric sparse case.)

The SparseIterative method for nonsymmetric, numeric Matrices can also be forced in the call to LinearSolve, and some options specified. Eg,

   method=SparseIterative,methodoptions=[....]
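So, reusing `A` and `V` from above, forcing that method looks something like this (a sketch; the particular entries permitted inside methodoptions are listed on the LinearSolve help page, so I won't guess at them here):

X := LinearAlgebra:-LinearSolve( A, V, method=SparseIterative );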

Note that I did not have to use the triple of Vectors `irow`, `icol`, and `a` above. I could have formed the sparse nonsymmetric Matrix `A` in any fashion I wanted. Yes, it is true that the nonzero entries and their indices of a datatype=float[8] Matrix with storage=sparse are internally stored in three contiguous portions of memory: two arrays of hardware integers and one of hardware doubles. But there's no easy way to access those at the Maple Library command level without making an external call to a function which picks them out as three Vectors (using the "external API").

I better repeat the important bit: all the above works in Maple 14, 13, 12, etc, right from the LinearAlgebra package in the Maple Library, without the Connector or any extra NAG download or install. It should work in plain, installed Maple.

Now on to your more specific question about the Connector (if you still need to know, in light of the above).

You need a copy of the NAG C Library in order to run the commands in the Maple-NAG Connector product. (See point 1 of this help-page section "Before You Begin" for a mention of this fact.) See here, at NAG's own website, for some detail on obtaining the Mark 8 NAG C Library. (Their latest, the Mark 9, might not work.)

The underlying Chapter F11 sparse linear algebra functions of the NAG C Library are proprietary to NAG Ltd, UK, and as such won't run from the Maple-NAG Connector without a valid license to accompany that NAG C Library. This means that one has to purchase a NAG C Library license (from NAG), over and above the usual Maple license, in order to run such license-locked commands from the Connector.

The Connector is a bridge between these two distinct commercial products: Maple, and the NAG C Library. You need both products, and valid licenses for both products, in order to use the Connector in full.

Not all NAG C Library commands invoke license-locked NAG C Library functions. But the f11 ones are locked, if I recall correctly.

acer

restart:
st:=time():
evalf(Int(exp(-t)/(mul(-3+2*exp(-.1*(1-.1)^i*t), i = 0 .. 4)), t = 0 .. infinity));
time()-st;
                         -0.5821551499
                             25.803

restart:
st:=time():
evalf(Int(unapply(exp(-t)/(mul(-3+2*exp(-.1*(1-.1)^i*t), i = 0 .. 4)),t), 0 .. infinity));
time()-st;
                         -0.5821551499
                             0.031

restart:
st:=time():
evalf(Int(exp(-t)/(mul(-3+2*exp(-.1*(1-.1)^i*t), i = 0 .. 4)), t=0 .. infinity,
          method=_d01amc));
time()-st;
                         -0.5821551499
                             0.032

What I suspect is going on is that a lot of computational effort is (by default) being put into determining whether the integrand is singular in the given range. By forcing a method, or concealing the expression form of the integrand inside a procedure, some or all of this checking might be bypassed. (And, naturally, that would only be safe when one knows something of the qualitative behavior of the integrand.) That's my conjecture, anyway, though one could trace through `evalf/int/control` or something to see.
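If someone wants to chase down that conjecture, one way to start is simply to trace the internal routine while redoing the slow call (a sketch; `evalf/int/control` is only my guess at the relevant internal name, so substitute whichever evalf/Int routine exists in your version):

trace(`evalf/int/control`):
evalf(Int(exp(-t)/(mul(-3+2*exp(-.1*(1-.1)^i*t), i = 0 .. 4)), t = 0 .. infinity)):
untrace(`evalf/int/control`):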

acer
