These are answers submitted by acer

You have assigned an expression to T, not a procedure. The distinction is fundamental. You shouldn't call the expression form with an argument here.

Also, if you want so-called prime notation to denote differentiation with respect to d then you need to inform Maple of that desire. (I used Typesetting:-Settings for that, in the first example below.)

So, here are several ways: the first with T as an expression, and the latter with T as a procedure (operator). You can use diff or D (as appropriate), or prime notation, in each scenario.

restart

 

This first example will use an expression for T.

 

T := 8*Pi*d*(1/(Pi*d^2))+Pi*((1/2)*d)^2

8/d+(1/4)*Pi*d^2

diff(T, d)

-8/d^2+(1/2)*Pi*d

Typesetting:-Settings(prime = d)

diff(T(x), x)

-8/d^2+(1/2)*Pi*d

restart

 

The next example will use a procedure (operator) instead of an expression.

 

T := proc (d) options operator, arrow; 8*Pi*d/(Pi*d^2)+(1/4)*Pi*d^2 end proc

proc (d) options operator, arrow; 8/d+(1/4)*Pi*d^2 end proc

(D(T))(d)

-8/d^2+(1/2)*Pi*d

D(T)

proc (d) options operator, arrow; -8/d^2+(1/2)*Pi*d end proc

(D(T))(d)

-8/d^2+(1/2)*Pi*d

restart

 

The system will also interpret as a procedure (operator) the kind of 2D Input assignment to T done below.

I am not a fan of that syntax for creating operators, but it is equivalent to the previous example's assignment to T.

 

"T(d):=(Pi*d*8)/(Pi*d^(2))+Pi*(d/(2))^(2)"

proc (d) options operator, arrow, function_assign; 8/d+(1/4)*Pi*d^2 end proc

(D(T))(d)

-8/d^2+(1/2)*Pi*d

D(T)

proc (d) options operator, arrow, function_assign; -8/d^2+(1/2)*Pi*d end proc

(D(T))(d)

-8/d^2+(1/2)*Pi*d


Download primenotation.mw

(I used Maple 2019.2 for this, to match the Question's heading.)

It is unclear what you are trying to ask about "mathtype".

Is this something like what you are trying to accomplish in Maple 2016?

help_limit_ac.mw

You haven't been very specific about what aspect you don't think is good. Is it that the over-symbol is in italics? Is it the offset distance from the central character? Is it the relative size?

You could try these (there are a few other choices, but you'd need to be clearer about what you want for me to try...)

Also, it's not clear to me whether you want a check mark or something else.

`#mover(mi("π"),mi("^"))`; 
`#mover(mi("π"),mo("^"))`;
`#mover(mi("π"),mn("^"))`;
`#mover(mi("π"),mi("∨"))`;
`#mover(mi("π"),mo("∨"))`;

Here is one way, which you can easily adjust.

Wis_BS2_Taak2_2019-2020_20200319-2_ac.mw

For your Linux you might try the following (I was on ubuntu 18.04).

This is a before and after comparison. I am guessing that this will affect the inmem=false use of Compiler:-Compile, where that would use gcc on a generated C source file. I am less sure about the default LLVM for inmem=true.

I'm supposing that you already have a test case for which you expect to be able to detect the difference.

restart;

kernelopts(version);

`Maple 2019.2, X86 64 LINUX, Oct 30 2019, Build ID 1430966`

Compiler:-_DumpConfig();


Platform: unix
System: X86_64_LINUX
GCC Version: 7.5.0
MAPLE_TEMPDIR: <<NULL>>
Temp: /tmp
Compile Command: gcc -c -fPIC -I. -DMCLIENT_DLL_EXPIMP= -DX86_64_LINUX=1 -I/usr/local/maple/maple2019.2/extern/include -O2   -w  DUMMY.c -o DUMMY.o
Link Command: gcc -shared -L. -L/usr/local/maple/maple2019.2/bin.X86_64_LINUX  DUMMY.o -o DUMMY.so -lhf -lmaple -lmrt -lm
Post-Link Command: NULL
CCOUTPUT_FLAG: -o
OBJSUFFIX: .o
LDLIBS: -lhf -lmaple -lmrt -lm
EXTRACFLAGS:
EXTRALDFLAGS:
CPPFLAGS: -I. -DMCLIENT_DLL_EXPIMP= -DX86_64_LINUX=1 -I/usr/local/maple/maple2019.2/extern/include
LDFLAGS: -L. -L/usr/local/maple/maple2019.2/bin.X86_64_LINUX
LD: gcc
LDSHFLAGS: -shared
CCOPT: -O2
COMPILE_ONLY_FLAG: -c
TMPDIR: /tmp
LDOUTPUT_FLAG: -o
CC: gcc
EXESUFFIX:
DLLSUFFIX: .so
CCDEBUG:  
DLLPREFIX: lib
CCSHFLAGS: -fPIC
CFLAGS: -w

__oldkopts:=kernelopts(':-opaquemodules'=false):
#Compiler:-Build:-buildvars["CCOPT"];
Compiler:-Build:-buildvars["CCOPT"]:="-O3":
kernelopts(':-opaquemodules'=__oldkopts): # restore

Compiler:-_DumpConfig();

 

Platform: unix
System: X86_64_LINUX
GCC Version: 7.5.0
MAPLE_TEMPDIR: <<NULL>>
Temp: /tmp
Compile Command: gcc -c -fPIC -I. -DMCLIENT_DLL_EXPIMP= -DX86_64_LINUX=1 -I/usr/local/maple/maple2019.2/extern/include -O3   -w  DUMMY.c -o DUMMY.o
Link Command: gcc -shared -L. -L/usr/local/maple/maple2019.2/bin.X86_64_LINUX  DUMMY.o -o DUMMY.so -lhf -lmaple -lmrt -lm
Post-Link Command: NULL
CCOUTPUT_FLAG: -o
OBJSUFFIX: .o
LDLIBS: -lhf -lmaple -lmrt -lm
EXTRACFLAGS:
EXTRALDFLAGS:
CPPFLAGS: -I. -DMCLIENT_DLL_EXPIMP= -DX86_64_LINUX=1 -I/usr/local/maple/maple2019.2/extern/include
LDFLAGS: -L. -L/usr/local/maple/maple2019.2/bin.X86_64_LINUX
LD: gcc
LDSHFLAGS: -shared
CCOPT: -O3
COMPILE_ONLY_FLAG: -c
TMPDIR: /tmp
LDOUTPUT_FLAG: -o
CC: gcc
EXESUFFIX:
DLLSUFFIX: .so
CCDEBUG:  
DLLPREFIX: lib
CCSHFLAGS: -fPIC
CFLAGS: -w

 

Download compiler_opt.mw

You wrote, "... to prevent this inflation of random variables..."

Does that mean that you are trying to improve performance or conserve memory?

The modules that contain the details of the distribution instances take up some memory, and there may (eventually) be a noticeable overhead due to global name accumulation and the cost of initial .mla checks against the additional global names.

If you unprotect and unassign the _ProbabilityDistributionXXX instances then their referenced modules may still hang around (with memory-based names like _m140554330413728 and so on). Getting rid of them altogether could require getting rid of all references to them (including _RXXX names and anything else).

Or do you have only a modest number of such distributions, and you simply want to be neat and tidy?

Or do you have some other motivation?

note: I know that it can get quite awkward (because it's not convenient to utilize in the context of eval or use or assuming), but is it not at all possible to utilize the generic RV instantiation and re-assign specific values to the parameter name? For example,

restart;
with(Statistics):
x:=RandomVariable(Exponential(a)):
a:=2:
Mean(x), Sample(x,1);
              2, [8.23569715700137]
a:=1:
Mean(x), Sample(x,1);
              1, [1.43494745936376]

Maple has no facilities for creating animation files (eg. mp4, mov, avi, etc) from multiple images.  Animated gif from plots is all it has.

Yes, this is a sorry state of affairs. It's been requested many times over the past two decades. (Sister product MapleSim can do mpeg-2 export of some simulations.)
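
As a partial workaround, an animation can at least be written out as an animated GIF by switching the plot device before generating it. A minimal sketch (the filename and animation are just examples):

```
# Sketch: route plot output to an animated GIF file (the filename
# "anim.gif" is hypothetical), generate the animation, then restore
# the default GUI plot device.
plotsetup(gif, plotoutput = "anim.gif",
          plotoptions = "height=400,width=400");
plots:-animate(plot, [sin(a*x), x = -Pi .. Pi], a = 1 .. 3);
plotsetup(default);
```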

The ImageTools:-Embed command can only insert once per Execution Group or Document Block. Later instances within the same Group will overwrite the contents of the embedded Task region. That GUI limitation is shared with Explore, Tabulate, InsertContent, and a few other things. IIRC it is documented in a few places. 
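
One way to work within that limitation is to pass ImageTools:-Embed a list of images, so that a single call (and hence a single embedded region) displays them all. A minimal sketch, using a blank made-up image:

```
with(ImageTools):
# A blank test image, purely for illustration.
img := Create(100, 100, channels = 3):
# A single Embed call can take a list, displaying several images
# from one Execution Group without a later call overwriting an
# earlier one.
Embed([img, img]);
```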

Did you want to export it as a single-column Matrix? For example,

restart

Digits := 8:

with(plots):

with(CurveFitting):

with(plottools):

with(ExcelTools):

v := .7:

Disp := 20:

esp := 1000000:

k := 0:

E := proc (x, t) options operator, arrow; Int(exp((-esp*w^4+Disp*w^2+k)*t)*cos(w*(x+v*t))/Pi, w = 0 .. infinity, epsilon = 0.1e-6) end proc;

proc (x, t) options operator, arrow; Int(exp((-esp*w^4+Disp*w^2+k)*t)*cos(w*(x+v*t))/Pi, w = 0 .. infinity, epsilon = 0.1e-6) end proc

f := proc (x) options operator, arrow; 15.5*exp(-(1/3710000)*(x-12590)^2)+14.55*exp(-(1/3000000)*(x-16100)^2) end proc:


u := proc (x, t) options operator, arrow; Int(E(x-xi, t)*f(xi), xi = 0 .. 0.2e5, epsilon = 0.1e-4) end proc;

proc (x, t) options operator, arrow; Int(E(x-xi, t)*f(xi), xi = 0 .. 0.2e5, epsilon = 0.1e-4) end proc


uu := evalf(Int(E(0-xi, i)*f(xi), xi = 0 .. 20000, method = _NCrule, epsilon = 10^(-6)))

Int((Int(.31830988*exp((-1000000.*w^4+20.*w^2)*i)*cos(w*(-1.*xi+.7*i)), w = 0. .. Float(infinity)))*(15.5*exp(-0.26954178e-6*(xi-12590.)^2)+14.55*exp(-0.33333333e-6*(xi-16100.)^2)), xi = 0. .. 20000.)

M := [seq(evalf(Int(E(0-xi, 39000-i)*f(xi), xi = 0 .. 20000, method = _NCrule, epsilon = 10^(-6))), i = 10000 .. 20000, 250)]:

plots:-listplot(M);

ExportMatrix(cat(kernelopts(homedir), "/Documents/mapleprimes/M.txt"), `<|>`(`<,>`(M)));

435

 


Download getD1_ac.mw

The GUI does not send individual commands in the same session to separate kernel instances. A "session" implies a kernel, communicating input and output back and forth with the interface.

The GUI does not control or run try...catch, the kernel does.

If the kernel instance crashes then there's nothing else (on top of that, controlling the computation, with access to the session's computational history).
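
For instance, try...catch does trap errors raised during a computation, precisely because it runs inside the kernel; it just cannot outlive the kernel process itself:

```
# try...catch is executed by the kernel, so it can trap a runtime
# error like division by zero ...
try
    1/0;
catch "numeric exception":
    lastexception;
end try;
# ... but if the kernel process itself crashes, no catch clause (or
# anything else in that session) survives to handle it.
```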

You need to wait for MapleSim 2020, if you want it to work properly with Maple 2020.

MapleSim 2019 works properly with Maple 2019, and not Maple 2020.

MapleSim 2020 won't be available for another few weeks.

You don't have to suppress output (using full colons, say), if running this in the GUI.

Instead you could either,
1) Set  interface(typesetting=standard)
or,
2) Retain interface(typesetting=extended)  which is default, while
    also setting  interface(prettyprint=1)
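
In 1-D code those two alternatives might look like the following sketch (the captured filename is hypothetical):

```
# Alternative 1: plainer typesetting altogether.
interface(typesetting = standard):

# Alternative 2: keep (default) extended typesetting, but lower the
# prettyprint level so that redirected output is written in a plain
# 1-D form.
interface(typesetting = extended):
interface(prettyprint = 1):

writeto("capture.txt");   # hypothetical output filename
sin(x) + exp(x);
writeto(terminal);        # restore output to the session
```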

writeto_gui.mw

The second argument passed to plots:-animate should be a list, the entries of which get passed to the first argument (here, the plot command).

But you are trying to use the parametric calling sequence of plot, ie, a list of component formulas and a range.

So the second argument passed to plots:-animate should be a list whose first entry is that list (of component formulas and range).

I've adjusted the range of the animating parameter. You may need to adjust the lines' component formulas or their own range so that the lines are the appropriate lengths (I'm guessing you want them to end at the outer curve intersection points).

restart;

with(plots):

rec:=(fx,fy)->arctan(-diff(fy,s),diff(fx,s)):

x[0]:=0.5*(s-2)^2:
y[0]:=-s^3-0.5*(s-0.5)^2:
a[0]:=-1.5:
b[0]:=1.5:
w[0]:=0.2:

x[1]:=s:
y[1]:=s^3+2*s^2:
a[1]:=-2.3:
b[1]:=1.2:
w[1]:=0.2:

eq := 1:

c := plot([x[eq],y[eq],s=a[eq]..b[eq]],color=blue):

l := plot([x[eq]+w[eq]*sin(rec(x[eq],y[eq])),y[eq]+w[eq]*cos(rec(x[eq],y[eq])),s=a[eq]..b[eq]]):

r := plot([x[eq]-w[eq]*sin(rec(x[eq],y[eq])),y[eq]-w[eq]*cos(rec(x[eq],y[eq])),s=a[eq]..b[eq]]):

m := diff(y[eq],s)/diff(x[eq],s)

3*s^2+4*s

B:=-1.5:
line := plot([x[eq], subs(s=B,m) * (x[eq]-subs(s=B,x[eq])) + subs(s=B,y[eq]), s=B..B+0.5], color=red):

[x[eq], subs(s=A,m) * (x[eq]-subs(s=A,x[eq])) + subs(s=A,y[eq]), s=A..A+0.5];

[s, (3*A^2+4*A)*(s-A)+A^3+2*A^2, s = A .. A+.5]

line := animate(plot,[[x[eq], subs(s=A,m) * (x[eq]-subs(s=A,x[eq])) + subs(s=A,y[eq]), s=A..A+0.5]],
                A=-2.3..1.2):

display(c,l,r,line);

 

Download agentpath_ac.mw

Is this what you mean, in your followup question? (I find your description somewhat unclear.)

xx := [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]:
yy := [3, 3, 3, 4, 5, 6, 8, 7, 6, 5]:
s := CurveFitting:-Spline(xx, yy, v):

plots:-animate(plot3d,[[s, th, v], v=0..y, th=0..2*Pi,
                       coords=cylindrical, style=surface],
               y=0..9);

Similarly, it seems to me as if your first example could be done in a straightforward manner.

restart;
s := exp(-0.1*x)*(2+sin(x)):

plots:-animate(plot3d,[[s, th, x], x=0..y, th=0..2*Pi,
                       coords=cylindrical, style=surface],
               y=0..6*Pi, frames=60);

Naturally, you can easily adjust the number of frames, surface style, title, or color, etc.

This produces a list of the 12 Matrices, ie. the 3d data for the vertices of each of the 5-sided polygons.

You can export those Matrices as you see fit.
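
For example, the Matrices could be written out one file apiece with ExportMatrix. A sketch, assuming L is that list of 12 Matrices (the filename pattern is made up):

```
# Sketch: L is assumed to be the list of 12 Matrices of vertex data.
# Write each one to its own delimited text file.
for i to nops(L) do
    ExportMatrix(cat("polygon_", i, ".txt"), L[i]);
end do;
```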

export_ac.mw

If that's not the kind of data that you are after then you should explain properly what you want -- in a followup Reply/Comment here, not a duplicate Question. 

 

# Approach 1)  0.63 seconds real-time
#
# Utilize unapply to avoid invoking J0 and P1 for
# every numeric value of s.
# Also force NAG d01akc method for oscillatory integrand,
# which also has the effect of preventing evalf(Int(...))
# from trying to discern (symbolically) discontinuity points
# of the integrand.
#
# note: This passes Int calls to plot, which will evalf them.
#
restart;
P1:=(r,R)->(2/Pi)*(arccos(r/(2*R))-(r/(2*R))*sqrt(1-(r/(2*R))^2)):
J0:=(r,shk)-> BesselJ(0, 2*Pi*r*shk):
Jhk:=unapply((1/s)*Int(P1(r,R)*J0(r,shk)*sin(2*Pi*r*s), r=0..2*R,
                        method=_d01akc),[s,shk,R]);
CodeTools:-Usage(plot(s->Jhk(s,2.14,38), 0..5)):

proc (s, shk, R) options operator, arrow; (Int(2*(arccos((1/2)*r/R)-(1/4)*r*(4-r^2/R^2)^(1/2)/R)*BesselJ(0, 2*Pi*r*shk)*sin(2*Pi*r*s)/Pi, r = 0 .. 2*R, method = _d01akc))/s end proc

memory used=11.34MiB, alloc change=0 bytes, cpu time=623.00ms, real time=629.00ms, gc time=0ns

# Approach 2)   2.1 seconds real-time
#
# Utilize unapply to avoid invoking J0 and P1 for
# every numeric value of s.
# Utilize another unevaluated 'unapply' of the integrand
# within the Int call. This has the effect of preventing
# evalf(Int(...)) from trying to discern (symbolically)
# discontinuity points of the integrand. Notice that each
# call to Jhk will invoke unapply -- the overhead of that
# is greater than that of the Approach 1), and less
# than that of the Approach 3) below.
#
# note: This passes Int calls to plot, which will evalf them.
#
restart;
P1:=(r,R)->(2/Pi)*(arccos(r/(2*R))-(r/(2*R))*sqrt(1-(r/(2*R))^2)):
J0:=(r,shk)-> BesselJ(0, 2*Pi*r*shk):
Jhk:=unapply((1/s)*Int('unapply'(P1(r,R)*J0(r,shk)*sin(2*Pi*r*s),r), 0..2*R),[s,shk,R]);
CodeTools:-Usage(plot(s->Jhk(s,2.14,38), 0..5)):

proc (s, shk, R) options operator, arrow; (Int(unapply(2*(arccos((1/2)*r/R)-(1/4)*r*(4-r^2/R^2)^(1/2)/R)*BesselJ(0, 2*Pi*r*shk)*sin(2*Pi*r*s)/Pi, r), 0 .. 2*R))/s end proc

memory used=33.43MiB, alloc change=14.98MiB, cpu time=2.10s, real time=2.10s, gc time=0ns

# Approach 3)   4.8 seconds real-time
#
# Much like Kitonum's suggestion (given at end), in
# that the Jhk is the same.
#
# Utilize unapply to avoid invoking J0 and P1 for
# every numeric value of s.
#
# Here we retain the evalf within that unapply call. The
# effect is that the Pi within the sin call becomes an
# explicit float. This only mitigates the overhead of
# evalf(Int(...)) trying to discern (symbolically) discontinuity
# points of the integrand.
#
# note: The evalf call is not present within Jhk, and it is
# done here too by plot itself.
#
restart;
P1:=(r,R)->(2/Pi)*(arccos(r/(2*R))-(r/(2*R))*sqrt(1-(r/(2*R))^2)):
J0:=(r,shk)-> BesselJ(0, 2*Pi*r*shk):
Jhk:=unapply(evalf((1/s)*Int(P1(r,R)*J0(r,shk)*sin(2*Pi*r*s), r=0..2*R)),[s,shk,R]);
CodeTools:-Usage(plot(s->Jhk(s,2.14,38), 0..5)):

proc (s, shk, R) options operator, arrow; (Int(.6366197722*(arccos(.5000000000*r/R)-.2500000000*r*(4.-1.*r^2/R^2)^(1/2)/R)*BesselJ(0., 6.283185308*r*shk)*sin(6.283185308*r*s), r = 0. .. 2.*R))/s end proc

memory used=418.61MiB, alloc change=25.99MiB, cpu time=5.02s, real time=4.77s, gc time=485.94ms

# Approach 4)   3.9 seconds real-time
#
# Much like Kitonum's suggestion, and also much like the
# previous attempt, in that the Jhk is the same.
#
# note: I've added adaptive=false alongside numpoints=200, since
# the latter alone will not disable adaptive plotting and *might*
# consume more time than with (default) numpoints=100.
# We could add   numpoints=200, adaptive=false  to the previous
# attempts, to make them a little faster, but they are fast
# enough and adaptive plotting can be useful for plots with
# these qualitative aspects.
#
# note: The evalf call is not present within Jhk, and it is
# done here too by plot itself.
#
restart;
P1:=(r,R)->(2/Pi)*(arccos(r/(2*R))-(r/(2*R))*sqrt(1-(r/(2*R))^2)):
J0:=(r,shk)->BesselJ(0, 2*Pi*r*shk):
Jhk:=unapply(evalf((1/s)*Int(P1(r,R)*J0(r,shk)*sin(2*Pi*r*s), r=0..2*R)),s,shk,R);
CodeTools:-Usage(plot(Jhk(s,2.14,38), s=0..5, numpoints=200, adaptive=false)):

proc (s, shk, R) options operator, arrow; (Int(.6366197722*(arccos(.5000000000*r/R)-.2500000000*r*(4.-1.*r^2/R^2)^(1/2)/R)*BesselJ(0., 6.283185308*r*shk)*sin(6.283185308*r*s), r = 0. .. 2.*R))/s end proc

memory used=329.58MiB, alloc change=25.99MiB, cpu time=4.09s, real time=3.90s, gc time=399.44ms

# Kitonum's suggestion)  4.8 seconds real-time
#
# See explanation for Approach 3)
#
restart;
P1:=(r,R)->(2/Pi)*(arccos(r/(2*R))-(r/(2*R))*sqrt(1-(r/(2*R))^2)):
J0:=(r,shk)->BesselJ(0, 2*Pi*r*shk):
Jhk:=unapply(evalf((1/s)*Int(P1(r,R)*J0(r,shk)*sin(2*Pi*r*s), r=0..2*R)),s,shk,R);
CodeTools:-Usage(plot(Jhk(s,2.14,38), s=0..5, numpoints=200, size=[1000,300])):

proc (s, shk, R) options operator, arrow; (Int(.6366197722*(arccos(.5000000000*r/R)-.2500000000*r*(4.-1.*r^2/R^2)^(1/2)/R)*BesselJ(0., 6.283185308*r*shk)*sin(6.283185308*r*s), r = 0. .. 2.*R))/s end proc

memory used=418.58MiB, alloc change=25.99MiB, cpu time=5.03s, real time=4.76s, gc time=526.53ms

 

Download unapply_quadrature_examples.mw

[edited] I forgot to provide the following simpler revision to the code originally posted.

The point of the unapply in this variant is not to avoid repeated calls to J0 and P1 (which will still occur for each separate numeric value of s from plot). The point of it here is that it calls evalf(Int(...)) with an operator for the integrand, rather than an expression. That makes the integrand more like a black-box, and prevents evalf(Int(...)) from incurring the large overhead of examining the integrand for discontinuities in r.

Note that here I uneval-quote the call to Jhk in the plot call, to avoid premature evaluation. I could also have plotted with plot(s->Jhk(s,2.14,38), 0..5).

This is pretty fast, though not quite as fast as with the oscillatory NAG method forced as well.

restart;

P1:=(r,R)->(2/Pi)*(arccos(r/(2*R))-(r/(2*R))*sqrt(1-(r/(2*R))^2)):
J0:=(r,shk)-> BesselJ(0, 2*Pi*r*shk):
Jhk:=(s,shk,R)-> evalf((1/s)*Int(unapply(P1(r,R)*J0(r,shk)*sin(2*Pi*r*s), r), 0..2*R));
CodeTools:-Usage( plot('Jhk'(s,2.14,38), s=0..5) );

proc (s, shk, R) options operator, arrow; evalf((Int(unapply(P1(r, R)*J0(r, shk)*sin(2*Pi*r*s), r), 0 .. 2*R))/s) end proc

memory used=44.62MiB, alloc change=52.89MiB, cpu time=2.11s, real time=2.09s, gc time=98.09ms

 

Download _unapply_quadrature_followup.mw

It would be useful (and much simpler to explain) if evalf(Int(...)) simply got a new keyword option to forcibly disable discont checks. It's not always easy or even possible to get that effect if a NAG method is not suitable or if there is deep nesting of integrals or procedure calls.
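
In the meantime, the usual trick is to pass evalf(Int(...)) an operator rather than an expression, so the integrand is a black box that cannot be examined symbolically. A minimal sketch (the integrand is just an example):

```
# Passing a procedure (operator) as the integrand prevents
# evalf/Int from analyzing it symbolically for discontinuities;
# the integrator can only sample the black box numerically.
F := proc(r) evalf(BesselJ(0, r)*sin(r)) end proc:
evalf(Int(F, 0 .. 10));
```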
