MaplePrimes Activity

These are replies submitted by acer

@C_R The evalf inside F is not actually required there. The commands plot, Maximize, and fsolve will evalf themselves.
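As a small illustration (the integrand here is purely hypothetical), plot and fsolve will each apply evalf themselves to an operator whose return value is an unevaluated Int:

```maple
# Hypothetical example: H returns an unevaluated Int, and plot's own
# numeric evaluation resolves it at each sampled point.
H := y -> Int(x -> sin(y*x), 0 .. 1):
plot(H, 0 .. 5);      # plot evalf's H(y) at each sample
fsolve(H, 5 .. 7);    # fsolve too (H vanishes near 2*Pi)
```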

You also asked why the following did not work (without substitution y=Y, or :-y=y if you use lowercase y as the name of the parameter of F).

F := y->evalf(Int(unapply(op(1,ig),x),rhs(op(2,ig))));

The y that is the parameter of procedure F is not the same as the global name y already present in op(1,ig).

Even the first of these next two examples has two different "y" names in play.

expr := sin(y*x):

f := y -> expr:



g := y -> unapply(expr,x):


proc (x) options operator, arrow; sin(y*x) end proc

The command unapply exists to make it easier to build procedures from given expressions.

In the variants under discussion, I was building a procedure that returns an Int call containing a procedure (as integrand). And the original form of the integrand is a given existing expression.

Building procedures that return procedures (ie. a "procedure factory"), substituting for different variables at each level, is pretty awkward without unapply.

The first pair of these look understandable. But that last one, lacking unapply altogether, is clunkier. Without any unapply, things can be even clunkier if you want the procedure factory to contain a literal sin(x*y) instead of the reference name expr.

expr := sin(x*y):

G1 := Y -> unapply(eval(expr,y=Y),x):


proc (x) options operator, arrow; sin(2*x) end proc


G2 := Y -> subs(y=Y, unapply(expr,x)):


proc (x) options operator, arrow; sin(2*x) end proc


G3 := unapply('unapply(eval(expr,y=Y),x)',Y):


proc (x) options operator, arrow; sin(2*x) end proc


G4 := YY -> subs(Y=YY, X->eval(expr,[x=X,y=Y])):


proc (X) options operator, arrow; eval(expr, [x = X, y = 2]) end proc


A variant on G2 above:

expr := sin(x*y):

template := unapply(expr,x):

G2b := Y -> subs(y=Y, eval(template)):


proc (x) options operator, arrow; sin(2*x) end proc



@sand15 That procedure F uses unapply to get an operator form for the integrand.

The result of
  unapply(expression_in_x, x)
is an operator (procedure).  As such, it no longer has a direct dependency on the name x in particular, or on any other name.

And so those commands like Int, fsolve, plot, etc, would take a mere range as second argument, as opposed to how they'd take name=range to accompany an expression.

In the case of Int, the numeric integration is done with respect to the arguments that will be passed to the operator-form integrand. In the case of fsolve, the root-finding is done with respect to that. Etc.

Forgive me if the following is too long-winded. (It's the kind of thing I'd put early in a book on Maple, if I were to write one.) You might well already know all this.



For expressions, the following commands take a second
argument in the form      name = range

That's consistent behavior, between the following commands.

expression := x^3 - 1/2;


evalf( Int( expression, x = 0 .. 1 ) );


plot( expression, x = 0 .. 1 );

fsolve( expression, x = 0 .. 1 );


Optimization:-Minimize( expression, x = 0 .. 1 );

[HFloat(-0.49999999999999994), [x = HFloat(3.4847423234192726e-6)]]


For operators/procedures, the following commands take a second
argument in the form      range

There is no particular "variable" connected to the operator, at a
level outside it.

This is the case whether the body of the operator is written out
manually, or formed using unapply.

That's consistent behavior, between the following commands.


p1 := x -> x^3 - 1/2 ;

proc (x) options operator, arrow; x^3-1/2 end proc

evalf( Int( p1, 0 .. 1 ) );


plot( p1, 0 .. 1 );

fsolve( p1, 0 .. 1 );


Optimization:-Minimize( p1, 0 .. 1 );

[-.49999999999999994, Vector(1, {(1) = 0.34847423234192726e-5})]

p2 := unapply(expression, x);

proc (x) options operator, arrow; x^3-1/2 end proc

plot( p2, 0 .. 1 );

evalf( Int( p2, 0 .. 1 ) );


fsolve( p2, 0 .. 1 );


Optimization:-Minimize( p2, 0 .. 1 );

[-.49999999999999994, Vector(1, {(1) = 0.34847423234192726e-5})]

You could call an operator using any(!) name you want.

The following uses an unevaluated function-call, which is
treated like an expression. Hence the following commands
take the second argument in the form   name = range

The name beside the range matches the name used in the
unevaluated function-call. Below, I use s.

evalf( Int( 'p2(s)', s = 0 .. 1 ) );


plot( 'p2(s)', s = 0 .. 1 );

fsolve( 'p2(s)', s = 0 .. 1 );


Optimization:-Minimize( 'p2(s)', s = 0 .. 1 );

[HFloat(-0.49999999999999994), [s = HFloat(3.4847423234192726e-6)]]


By using the option numeric with the unapply command, the resulting function call like p3(s) can be more easily used without unevaluation quotes.

That makes it safer/easier to treat p3(s) like an expression, because it stays unevaluated unless s is replaced by something numeric.










p3 := unapply(expression, x, numeric):





eval(p3(s), s=0.5);




@Hullzie16 The optionsimplicit=[gridrefine=1] option gets passed by plots:-inequal to plots:-implicitplot (which the former uses internally). You could experiment with other implicitplot options there, to get better resolution in the inequal plot. (That can make it take longer, say.)

The various digits, epsilon, and method options are used in the Int calls. Those get passed to the numeric integrator when those Int calls get evalf'd. Here it's tricky because you have nested Int calls.

In such situations the inner numeric integration often needs a more accurate result (smaller epsilon) so that the outer numeric integration can attain convergence for each evaluation of its own, outer integrand.

The plotting command (or implicitplot) may be able to handle a coarser epsilon for the outer integration, which may make that faster. That may also be necessary to get the outer integration to converge (for the various values of the plotting variables).

Balancing all that, and the methods, can be tricky. Unfortunate choices may make it take a very long time, or not work at all.
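For instance (with a made-up integrand g, purely illustrative, not your actual example), one might give the inner Int a tighter tolerance than the outer:

```maple
# Illustrative only: g is a made-up integrand. The inner Int gets a
# smaller epsilon so that the outer quadrature can converge.
g := (x, y) -> exp(-x^2*y):
inner := x -> evalf(Int(y -> g(x, y), 0 .. 1,
                        epsilon = 1e-9, method = _d01ajc)):
outer := Int(inner, 0 .. 1, epsilon = 1e-5, method = _Dexp):
evalf(outer);
```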

@NIMA112 This is not super fast, and some computational effort is duplicated.

But it allows you to find the ranges for h[1] and h[2], and then merge them into a common range from which shared contour values may be constructed.

@C_R My original choice of op was unnecessarily confusing, sorry.

I've edited my Answer. Hopefully it's more clear now.

Let me know if you still have queries on it that Preben has not explained.

Best of the season.

ps. Other people (eg. Joe Riel) have pointed out in the past that there are situations in which it would be programmatically convenient if there were a named, stock "identity" function, ie. a stock name for x->x .

If it were called, say, ident then all I'd have to do is set `print/ident` to Typesetting:-Typeset and then utilize it here with calls like %ident(1) .
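A rough sketch of that idea (ident is a hypothetical name, not stock Maple):

```maple
# Sketch only; ident is hypothetical, not a stock Maple name.
ident := x -> x:
`print/ident` := Typesetting:-Typeset:
```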

@Scot Gould It might also be possible to construct an appliable module which accepts two arguments. Through Explore, its first argument could be hooked up to a Slider while its second argument could be hooked up to a listbox/combobox/radio-button.

The Slider might provide 1 or 2 digits (to the right of the decimal mark) at a time, while the other control might specify the current scale. The appliable module could store the running total, eg. a+b/100+c/10000 or what have you.

That might be done reasonably cleanly and tightly, and hopefully be easy to use.

I suppose that a neat and tidy set of such dynamic controls might (ideally? one day) customize any Explore Slider, with a hidden State component storing the running values of each scale.
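A bare-bones sketch of such an appliable module (all names hypothetical): the first argument supplies a digit value, the second the current scale, and the module accumulates the running total across scales.

```maple
# Hypothetical sketch of an appliable module storing a running total.
M := module()
    local vals;
    export ModuleApply;
    vals := table();
    ModuleApply := proc(d, scale)
        vals[scale] := d;   # remember the digits supplied at this scale
        add(vals[s]*10^(-s), s = [indices(vals, 'nolist')]);
    end proc;
end module:
M(37, 2);   # ie. 37/100
```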

@sand15 Yes, this is the very kind of thing that I meant when I referred to splitting a parameter.

I suspect that if the Sliders were long enough (and maybe snap-to-ticks) then one might be able to usefully & reliably get two powers of 10 (ie. two digits) from each.

So that'd allow for pinpoint specification of 1e-4 precision with just a pair, and might be quite nice.

I have another idea, but it'll have to wait until tonight...

In your old Maple 13 you might also look at something like, say,

   numtheory[sigma](n) - n

You should be able to use the following attachment to investigate alpha>0 and beta>0, without having to wait too long.

I am not seeing any parameter values in the quadrant alpha>0 and beta>0 where eq7 > eq8.  You can extend the ranges out further (eg. alpha=0..2, beta=0..3, etc), but when I tried that the results were similar.

I've only gone out as far positive as alpha=1 and beta=1, below, just so that I can get finer plot3d detail without having to raise the grid size too much.

You can manually rotate the plot3d result, to get more of a visual feel for it.

You can get rid of the max/min in the plot3d call, if you want. I put those in just to get better GUI behavior (eg. less flashing) when forcing a restricted view in the case of extremely high +/- values.









eq7:=subs(Int(exp(-_z1^2*(_z1^2*beta - 2*alpha)/4), _z1 = 0 .. 1)=L,eq6):
eq7 := subsindets(eq6, specfunc(Int),
                  u->op(0,u)(op(u), digits=15, epsilon=1e-6, method=_d01ajc));

(1/2)*2^(1/2)*(-3*beta*t^2+alpha)*exp(-(1/2)*(Int(exp(-(1/4)*_z1^2*(_z1^2*beta-2*alpha)), _z1 = 0 .. t, digits = 15, epsilon = 0.1e-5, method = _d01ajc))^2/(Int(exp(-(1/4)*_z1^2*(_z1^2*beta-2*alpha)), _z1 = 0 .. 1, digits = 15, epsilon = 0.1e-5, method = _d01ajc))^2+(1/4)*t^2*(beta*t^2-2*alpha))/(Pi^(1/2)*(Int(exp(-(1/4)*_z1^2*(_z1^2*beta-2*alpha)), _z1 = 0 .. 1, digits = 15, epsilon = 0.1e-5, method = _d01ajc)))

GG:=Int(exp(-_z1^2*(_z1^2*beta - 2*alpha)/4), _z1 = 0 .. 1, digits=15, epsilon=1e-11, method=_d01ajc);

Int(exp(-(1/4)*_z1^2*(_z1^2*beta-2*alpha)), _z1 = 0 .. 1, digits = 15, epsilon = 0.1e-10, method = _d01ajc)



HH := proc(a,b) option remember; local res;
  if not [a,b]::list(numeric) then return 'procname'(args); end if;
  res := evalf(eval(Int(eval(eval(eq7,L=GG),[:-alpha=a,:-beta=b]), t=-25..25,

                        digits=15, epsilon=1e-2, method=_Dexp))
               - eval(GG,[:-alpha=a,:-beta=b]));
  #if res::extended_numeric then evalf[5](res); else undefined; end if;
end proc:


         , alpha=-2..1, beta=-2..1
         , optionsimplicit=[gridrefine=1]

memory used=18.41GiB, alloc change=226.31MiB, cpu time=4.38m, real time=3.94m, gc time=39.58s

         , alpha=-2..1, beta=-2..1, adaptmesh=false
         , color=[piecewise(HH(alpha,beta)<-4,1,1/5+(2+signum(HH(alpha,beta)))/5),
                  piecewise(HH(alpha,beta)<-4,1,1), colortype=HSV]
         , grid=[30,30], lightmodel=none
         , view=-5e0 .. 5e0, orientation=[-90,0,0]

memory used=9.08GiB, alloc change=39.99MiB, cpu time=2.36m, real time=2.12m, gc time=21.75s




I can have a look at alpha=0..1.5 and beta=0..2.

Originally you were looking at alpha=0..1 and beta=0..1, effectively? I mean, 1e-9..1e-0, and 1e-5..1e-0 . I switched the ranges because it seemed to me at the time that that's where the plot got interesting.

The epsilon is not a stepsize. It is an accuracy tolerance for evalf/Int. (If one only needs a plot, then usually that's enough accuracy...)
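For instance, epsilon can be supplied directly in the Int call (a simple stand-in integrand here):

```maple
# epsilon is an accuracy tolerance for the numeric quadrature,
# not a stepsize.
evalf(Int(sin(x)/x, x = 0 .. 1, epsilon = 1e-4));
                                 # approximately 0.946083
```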

@C_R The functionality for handling units in some plotting commands, fsolve, int, etc, has been growing. It's relatively new, and there are still quirks.

I didn't have the time to figure out precisely why that one example (that involved units) failed in your followup attachment. (Offhand, I'd guess that something is checking for names but not being careful to sieve that through depends.)

But here is a workaround for that example:

It was certainly not deleted as "spam", since the spam-identification & removal mechanism would have deactivated your account. So that's a typo/mistake in the message generator.

I noticed that you had already asked another Question with that same query embedded in it (amongst other things), ie, "I can load an image in Sketcher and draw lines on it, but can't figure out how to access the lengths of those lines".

That second Question was posted before the first one was deleted.

Why are you posting duplicate Question threads!?


[(cough) Some people believe that there was only one Question asked on that fateful day, and discount the very existence of a so-called second Question.]

The .maple workbook filename extension is not handled properly by this site.

It could be put in an uploaded & attached .zip file instead.

@C_R Your slow (5 secs on my machine) plot was set up with int instead of Int. But that's not really what made it so slow, because when plot pumps in float values for r__mx then int gets a float range and in consequence attempts numeric rather than symbolic integration.

I believe that the slowness comes from the underlying evalf/Int process attempting to find discontinuities of the integrand expression when no special method is forced. There are two ways around that:
1) Make the integrand an operator, which sidesteps that check because it's now a "black box". It's a little tricky here since you also want to subs values for sigma and the running plotting variable r__mx into that operator-form integrand.

2) Force a special method. Going this route you should not call evalf on the Int expression, because that does nothing good while the parameters are not yet numeric, and it even strips off the method!

I believe that my original answer did both 1) and 2).
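A hedged sketch of those two workarounds combined (the integrand and parameter names here are stand-ins, not your actual example):

```maple
# Stand-in integrand; s mimics a parameter like sigma, r the plotting
# variable r__mx. The operator form 1) sidesteps the discontinuity
# check, and 2) a method is forced.
F := (r, s) -> evalf(Int(subs(:-sigma = s,
                              x -> exp(-x^2/(2*:-sigma^2))),
                         0 .. r,
                         method = _d01ajc)):
plot(r -> F(r, 1.0), 0 .. 3);
```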

ps. When you pass numpoints=10 to plot it will still compute at more points than that. If you only want exactly 10 points sampled then you'd also need to pass adaptive=false.
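For example, to get exactly 10 sampled points:

```maple
# numpoints alone still triggers adaptive refinement; adding
# adaptive=false restricts plot to exactly 10 sample points.
plot(x -> x^3 - 1/2, 0 .. 1, numpoints = 10, adaptive = false);
```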

notes: Such hybrid symbolic-numeric quadrature approaches are well-intentioned. Applying a numeric quadrature rule across discontinuities might otherwise result in wrong or inaccurate results, hence a search for where to split the range, etc.

A (new) dedicated and documented option to forcibly disable continuity checking would be more user-friendly than operator-form for the integrand, which can be tricky to code. Eg. it can be problematic in the presence of nested integrals, or when having to shoehorn in values for additional parameters (like the value taken by a plotting variable, your case).

The NAG methods like _d01ajc are for purely real-valued integrands. Sometimes (but not always) forcing method=_Dexp can also disable potentially costly symbolic analysis, and that can handle complex-valued integrands.
