MaplePrimes Activity


These are answers submitted by acer

Attached is an example. The commands are in the Action code behind the buttons.

The basic idea is to use the DocumentTools:-SetProperty command together with the properties of a PlotComponent (with an animation as its value) listed here.

animatortask.mw

I'm not sure how well the dial (approx. frames/sec) will work in general. It may be better to give up the FPS idea and just go with a frame-delay instead.
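As a rough illustration of the first part of that idea (a minimal sketch only: the component name "Plot0" and the animation itself are placeholders, and the `play` property name is an assumption here; the actual animation-control property names are the ones listed on the page linked above), a button's action code might do something like:

use DocumentTools in
    # Build an animation and push it into the Plot component as its value.
    SetProperty("Plot0", 'value',
                plots:-animate(plot, [sin(a*x), x = 0 .. 2*Pi], a = 1 .. 4,
                               frames = 25));
    # Start playback; other properties control frame position, looping, etc.
    SetProperty("Plot0", 'play', true);
end use;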

One of the actions that I miss is the ability to do a distributed divide (or multiply). By distributed I mean expanded. Sometimes I want the divided (or multiplied) factor distributed across a sum of terms, and sometimes I don't. So a choice would be nice.
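(To illustrate what I mean by "distributed"; the expression below is just a made-up example.)

expr := a + b + c:

expr/x;                  # undistributed:  (a+b+c)/x

map(`/`, expr, x);       # distributed across the sum:  a/x + b/x + c/x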

I find it awkward to cancel leading minus signs on both sides of an equation.

And I'd like to be able to customize the priorities, so that `normal` (which must seem like magic, and be uninstructive, to many students) doesn't supersede more understandable entries such as those for expand, factor, combine, etc.

Here is a humble attempt, hopefully in the direction you were aiming for.

spmanip.mw

acer

restart:

ee:=(exp(a+b)+exp(a+c))*exp(-a):

expand(ee);

                        exp(b) + exp(c)

acer

If you want to join the plotted data values by lines, then you can sort with respect to the first of the pair of columns being plotted.

A:=ArrayTools:-RandomArray(4,4);

plot(sort(convert(A[..,[1,3]],listlist)));

plot(A[sort(A[..,1],output=permutation),[1,3]]);

If you only want a point-plot (no joining lines) then it's easier.

plot(A[..,[1,3]],style=point);

acer

Your Maple procedure `mp` is heavy in terms of producing collectible garbage. And it's not just the sorting. The repeated unioning of temporary sets, conversion to temporary tables, and disposal of all of these (memory management) is expensive.

That can be illustrated by modifying your original `mp` procedure into something like the following, where the partial results (computed in the loops within a single, non-recursed call to the central procedure) are instead stored in a single table whose entries are extracted into a set only once, just prior to the procedure's return.

Since `mp3` is recursive it still entails quite a bit of switching between temporary sets and tables, so this modified version is still prone to slowdown as the problem size grows -- but the point is just to show that reducing that overhead improves performance.

mp3 := proc(n)
    option remember;
    local count, rec, onef, res, f, asg, targ, t;
    if n=1 then return {}; fi;
    onef := op(2, ifactors(n))[1];  # first prime factor of n, as [prime, exponent]
    rec := mp3(n/onef[1]);          # recurse with one copy of that prime removed
    if nops(rec) = 0 then return {[n=1]} fi;
    count:=0:
    for f in rec do
        t := table(f);
        targ := onef[1];
        if type(t[targ], integer) then
            t[targ] := t[targ] + 1
        else
            t[targ] := 1;
        fi;
        count:=count+1: res[count]:=op(op(t)); # store partial result in table res
        for asg in f do
            t := table(f);
            targ := onef[1]*op(1, asg);
            if type(t[targ], integer) then
                t[targ] := t[targ]+1;
            else
                t[targ] := 1;
            fi;
            if t[op(1,asg)]>1 then
                t[op(1,asg)] := t[op(1,asg)]-1;
            else
                t[op(1,asg)] := evaln(t[op(1,asg)]); 
            fi;
            count:=count+1: res[count]:=op(op(t));
        od;
    od;
    {entries(res,':-nolist')[]};    # build the final set just once, at the end
end:

On my machine your original `mp` handled 9! in about 35sec, the edited `mp3` above took 1.2sec, and Factorings [due to Kitonum, Joe Riel, and Carl Love] took 0.14sec. Of course, more important than single timings is how the performance scales as the problem size grows.
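(For reference, such a comparison can be reproduced with something along the following lines; the timings will of course differ by machine, and `mp` and Factorings are not reproduced here.)

CodeTools:-Usage( mp3( 9! ) ):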

acer

In the Standard GUI,

restart:
test := proc(x)
  local out;
  out:=cat(`#mrow(mo(\"x is equal to: \",mathcolor=\"#00aa00\"),`,
                sprintf("%a",convert(Typesetting:-Typeset(x),`global`)),`)`);
  return out;
  #print(out);
  #return NULL;
end proc:

test(2);

test(Pi/sqrt(2));

test(Int(f(x),x));

This post may also be of interest.

acer

x2:=y=a+b;

                           y = a + b

map2(map,f,x2);

                       f(y) = f(a) + f(b)

acer

You may want to adjust the domain (wider than -20..20), or the quadrature tolerance (smaller than 1e-5), or the plotting options.


restart:

ee:=(1/2)*(-4*dilog(-(exp(2*t)*s-(-s^2+1)^(1/2)+1)/(-1+(-s^2+1)^(1/2)))
*exp(4*t)+arctanh((-1+s)/(-s^2+1)^(1/2))*s^2+arctanh((exp(2*t)*s
-exp(2*t)-s+1)/((exp(2*t)+1)*(-s^2+1)^(1/2)))*s^2+8*(-s^2+1)^(1/2)
*exp(4*t)+4*dilog((exp(2*t)*s+(-s^2+1)^(1/2)+1)/(1+(-s^2+1)^(1/2)))
*exp(4*t)+4*exp(4*t)*arctanh((-1+s)/(-s^2+1)^(1/2))-8*arctanh((exp(2*t)
*s-exp(2*t)-s+1)/((exp(2*t)+1)*(-s^2+1)^(1/2)))*exp(4*t)*s^2*t
-4*ln(1+(-s^2+1)^(1/2))*exp(4*t)*s^2*t+4*ln(1-(-s^2+1)^(1/2))
*exp(4*t)*s^2*t-4*ln(exp(2*t)*s-(-s^2+1)^(1/2)+1)*exp(4*t)*s^2*t
+4*ln(exp(2*t)*s+(-s^2+1)^(1/2)+1)*exp(4*t)*s^2*t+12*(-s^2+1)^(1/2)
*exp(4*t)*t-16*arctanh((exp(2*t)*s-exp(2*t)-s+1)/((exp(2*t)+1)
*(-s^2+1)^(1/2)))*exp(4*t)*t-8*ln(1+(-s^2+1)^(1/2))*exp(4*t)*t
+8*ln(1-(-s^2+1)^(1/2))*exp(4*t)*t-8*ln(exp(2*t)*s-(-s^2+1)^(1/2)+1)
*exp(4*t)*t+8*ln(exp(2*t)*s+(-s^2+1)^(1/2)+1)*exp(4*t)*t-(-s^2+1)^(1/2)
*exp(2*t)*s+8*arctanh((exp(2*t)*s-exp(2*t)-s+1)/((exp(2*t)+1)
*(-s^2+1)^(1/2)))*exp(2*t)*s+4*exp(2*t)*arctanh((-1+s)/(-s^2+1)^(1/2))*s
-(-s^2+1)^(1/2)*exp(6*t)*s-8*arctanh((exp(2*t)*s-exp(2*t)-s+1)/((exp(2*t)+1)
*(-s^2+1)^(1/2)))*exp(6*t)*s+4*exp(6*t)*arctanh((-1+s)/(-s^2+1)^(1/2))*s
+2*dilog((exp(2*t)*s+(-s^2+1)^(1/2)+1)/(1+(-s^2+1)^(1/2)))*exp(4*t)*s^2
+2*(-s^2+1)^(1/2)*exp(4*t)*s^2-arctanh((exp(2*t)*s-exp(2*t)-s+1)/((exp(2*t)+1)
*(-s^2+1)^(1/2)))*exp(8*t)*s^2+exp(8*t)*arctanh((-1+s)/(-s^2+1)^(1/2))*s^2
+2*exp(4*t)*arctanh((-1+s)/(-s^2+1)^(1/2))*s^2-6*(-s^2+1)^(1/2)*ln(exp(4*t)*s
+2*exp(2*t)+s)*exp(4*t)+6*(-s^2+1)^(1/2)*ln(s)*exp(4*t)-2*dilog(-(exp(2*t)*s
-(-s^2+1)^(1/2)+1)/(-1+(-s^2+1)^(1/2)))*exp(4*t)*s^2)/((-s^2+1)^(1/2)
*exp(8*t)*s^2-2*arctanh((-s^2+1)^(1/2)/(1+s))*exp(8*t)*s^2+4*(-s^2+1)^(1/2)
*exp(6*t)*s-8*arctanh((-s^2+1)^(1/2)/(1+s))*exp(6*t)*s+2*(-s^2+1)^(1/2)
*exp(4*t)*s^2-4*arctanh((-s^2+1)^(1/2)/(1+s))*exp(4*t)*s^2+4*(-s^2+1)^(1/2)
*exp(4*t)-8*arctanh((-s^2+1)^(1/2)/(1+s))*exp(4*t)+4*(-s^2+1)^(1/2)
*exp(2*t)*s-8*arctanh((-s^2+1)^(1/2)/(1+s))*exp(2*t)*s+(-s^2+1)^(1/2)*s^2
-2*arctanh((-s^2+1)^(1/2)/(1+s))*s^2):

ff:=simplify(ee,size) assuming s>0, s<1, t::real:

H:=S->evalf(Int(unapply(eval(ff,s=S),t),-20..20,epsilon=1e-5,method=_d01ajc)):

CodeTools:-Usage( H(0.5) );

memory used=54.83MiB, alloc change=4.00MiB, cpu time=733.00ms, real time=769.00ms

-.3439176104

CodeTools:-Usage( plot(H,0..1,numpoints=40,adaptive=false,smartview=false) );

memory used=1.84GiB, alloc change=0 bytes, cpu time=25.71s, real time=31.16s

(plot of H over 0 .. 1 was displayed here)

Download numint.mw

acer

If you are willing to admit an approximating expression of the form P(R_i_t)/Q(R_i_t), where P and Q are both polynomials, then you can get pretty good behaviour near the right end-point of R_i_t=1.

Also, you can get some speed-up of the original omega_i evaluations by delaying the `Quantile` computation in G_PD until R_i_t takes on numeric values. Of course, the timing doesn't matter once you obtain the approximating expression, which will itself evaluate very quickly. But it may help to have the attempts at computing the approximant run faster. It seems to be about a factor of 3, for the problem at hand. The speed-up helps even more if you are not aware that the lower end-point for R_i_t should be 0.25, and try it from say 0.0 instead.

You might compare these two versions, then.

restart;
with(Statistics):
i=F:
C_i:=50:
M_i:=2.5:
LGD_i:=0.75:
PD_i:=(1-R_i_t)/LGD_i:
b_i:=(0.11852-0.05478*ln(PD_i))^2:
B_i:=(1-1/(exp(C_i*PD_i)))/(1-1/exp(C_i)):
R_i:=0.12*B_i+0.24*(1-B_i): 
G_C:=3.09023230616741:
G_PD:=unapply('Quantile'(Normal(0,1),PD_i),R_i_t,numeric):
t:=sqrt(1/(1-R_i))*G_PD(R_i_t)+sqrt(R_i/(1-R_i))*G_C:
N_t:=CumulativeDistributionFunction('Normal'(0,1),t):
k_i:=(LGD_i*N_t-PD_i*LGD_i)*((1+(M_i-2.5)*b_i)/(1-1.5*b_i)):
omega_i:=12.5*k_i:

CodeTools:-Usage( plot(omega_i,R_i_t=0.25..1.0) );

AP:=CodeTools:-Usage(numapprox:-minimax(omega_i, R_i_t= 0.25..0.9999, [6,1]));
plot([AP],R_i_t=0.25..1.0);


restart;
with(Statistics):
i=F:
C_i:=50:
M_i:=2.5:
LGD_i:=0.75:
PD_i:=(1-R_i_t)/LGD_i:
b_i:=(0.11852-0.05478*ln(PD_i))^2:
B_i:=(1-1/(exp(C_i*PD_i)))/(1-1/exp(C_i)):
R_i:=0.12*B_i+0.24*(1-B_i): 
G_C:=3.09023230616741:
G_PD:=Quantile(Normal(0,1),PD_i):
t:=sqrt(1/(1-R_i))*G_PD+sqrt(R_i/(1-R_i))*G_C:
N_t:=CumulativeDistributionFunction('Normal'(0,1),t):
k_i:=(LGD_i*N_t-PD_i*LGD_i)*((1+(M_i-2.5)*b_i)/(1-1.5*b_i)):
omega_i:=12.5*k_i:

CodeTools:-Usage( plot(omega_i,R_i_t=0.25..1.0) );

AP:=CodeTools:-Usage(numapprox:-minimax(omega_i, R_i_t= 0.25..0.9999, [6,1]));
plot([AP],R_i_t=0.25..1.0);

acer

Apply the `combine` command to your expression. You could also apply `simplify` after that.
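(We don't see the expression in question here, but as a generic sketch of the pattern, with a made-up expression:)

expr := sin(x)*cos(x) + exp(a)*exp(b):

combine(expr);        # the exp factors and the trig product get combined

simplify(%);          # optional further simplification of the combined form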

acer

It is PDEtools:-declare:-`PDEtools/declare` which you can view by, say,

showstat((PDEtools::declare)::`PDEtools/declare`);

or by,

kernelopts(opaquemodules=false):              
interface(verboseproc=3):
eval((PDEtools:-declare):-`PDEtools/declare`);

One way to discover this is to issue stopat(PDEtools:-declare), run an example to get into the debugger, then step into the call to `PDEtools/declare`, and then query `where` in the debugger.
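For example, a minimal sketch of that discovery process (the example call to `declare` is arbitrary, and the debugger interaction is shown only as comments):

stopat(PDEtools:-declare):

PDEtools:-declare(u(x,t));   # execution stops in the debugger
# Step with the debugger command `into` until the call to `PDEtools/declare`
# is entered, then issue the debugger command `where`; the displayed call
# stack reveals the fully qualified name of that procedure.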

acer

The behaviour of embedding the content into the current document is new to Maple 17. See here.

acer

The function `_d01amc` in the external NAG dynamic library (libnag.so, nag.dll) gets passed an integrand function; the NAG routine calls that function with the various values of `x` (double precision), and that function in turn calls back to Maple to evaluate the integrand at each such `x`.

There are two modes in which this callback can occur: the default hardware double-precision evalhf mode (EvalhfDataProc), and a fallback software-float, arbitrary-precision eval mode (EvalMapleProc). There are userinfo statements which report which mode is being tried, under `evalf/int` (and Optimization, too).

It wasn't always the case, but in Maple 17 a non-evalhf'able integrand can be used with method=_d01amc. The failure in an initial evalhf callback is caught, and the software-float eval callback may then be tried.

restart:
infolevel[`evalf/int`]:=2:

T:=proc(xi) []; exp(-(xi-0.0)^2)*exp(xi) end proc: # The empty list stymies evalhf

evalf(Int(T,-infinity..infinity,method=_d01amc)); 

  evalf/int/control: integrating on -infinity .. infinity the integrand
  proc(xi)  ...  end;
  Control: Entering NAGInt
  trying d01amc (nag_1d_quad_inf)
  d01amc: trying evalhf callbacks
  d01amc: trying eval callbacks
  d01amc: result=2.27587579446874644
  d01amc: abserr=.207787272088890576e-11; num_subint=7; fun_count=390
  result=2.27587579446874644

                                     2.275875794

So, what happened with the original example? As others have pointed out, it failed because the second multiplicand in the integrand expression can overflow, and a NaN can be the final result. E.g., at the problem point x=935.0,

T(935.0);

                     9.373433646*10^(-379266)

evalhf(%);
     
                               0.

evalhf(T(935.0));

                        Float(undefined)

evalhf(DBL_MAX);

                    9.9999999000000001*10^307

evalhf(exp(-935.0^2)), exp(-935.0^2);

                   0., 8.064141315*10^(-379672)

evalhf(exp(935.0)), exp(935.0);

               Float(infinity), 1.162359795*10^406

evalhf(exp(-935.0^2)*exp(935.0)), exp(-935.0^2)*exp(935.0);

              Float(undefined), 9.373433646*10^(-379266)

And the above is what makes the original example fail in evalhf callback mode.

restart:

T:=proc(xi) exp(-(xi-0.0)^2)*exp(xi) end proc:

evalf(Int(T,-infinity..infinity,method=_d01amc)); 

Error, (in evalf/int) NE_QUAD_ROUNDOFF_TOL:
  Round-off error prevents the requested tolerance from
  being achieved:  epsabs = 5.0e-013,  epsrel =  5.0e-010.

I don't see why the behaviour cannot be improved. That is, perhaps the arbitrary-precision (software-float) `eval` callback mode could be tried if the above round-off error were caught during the initial evalhf callback mode. It may just be that the round-off error could get added to the "catch" statements which trap non-evalhf'ability.
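In the meantime, one workaround (my own sketch, not demonstrated by the output above) is to combine the exponentials by hand, so that the intermediate overflow never arises in the evalhf callback:

restart:

# exp(-xi^2)*exp(xi) = exp(-xi^2+xi), and the combined form underflows
# cleanly to 0. at large |xi| instead of producing 0.*Float(infinity).
T:=proc(xi) exp(-(xi-0.0)^2+xi) end proc:

evalf(Int(T,-infinity..infinity,method=_d01amc));

The exact value of that integral is sqrt(Pi)*exp(1/4), in agreement with the 2.275875794 computed earlier via the eval callbacks.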

acer

The Explore command emits Embedded Components, which are not supported by the Classic interface.

acer

Try having that Button's action instead be,

Do(%Plot3=(plots:-display(%Plot3,
                          plot([[%Slider2,%Slider1]],style=point,colour=blue,
                               symbol=solidcircle,symbolsize=18),gridlines=false)));

Another possibility is to have a checkbox rather than that button, so that the action code behind all the sliders could query that and, if it's true/selected, incorporate the vertex into the plots they generate. This could introduce you to the operator form of Maple's `if` command.
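A rough sketch of that checkbox idea (the component name %CheckBox0 and the x-range are assumptions here, and depending on version a CheckBox's value may be returned as the string "true" rather than a boolean):

Do(%Plot3 = plots:-display(
       plot(%Slider0*(x-%Slider2)^2+%Slider1, x=-10..10),
       `if`(%CheckBox0 = "true" or %CheckBox0 = true,
            plot([[%Slider2,%Slider1]], style=point, colour=blue,
                 symbol=solidcircle, symbolsize=18),
            NULL)));

This would go in the action code of each slider, so that moving any slider redraws the plot and includes the vertex point only when the checkbox is selected.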

N.B. There is a mistake in the slider code controlling the value `a`. Make its argument to the plot call be,

  %Slider0*(x-%Slider2)^2+%Slider1

instead of your current,

  %Slider0*x(x-%Slider2)^2+%Slider2

acer
