MaplePrimes Activity


These are replies submitted by acer

@nm The calls to DEtools:-DEplot in your Question above do call dsolve/numeric; for example,

   dsolve({diff(y(x), x) = x*(x^2+9)^(1/2), y(-4) = 0}, {y(x)},
               output = listprocedure, type = numeric, method = rkf45,
               range = -9.950000000 .. 9.950000000)

That kind of thing happens in `DEtools/DEplot/drawlines`.

You can see it with,

  showstat(`DEtools/DEplot/drawlines`)

and notice how the result from dsolve gets used to produce the values for the lines/curves.

This might be getting off topic, though. What might be relevant to your original question is that going out-of-range (as far as the obsrange option is concerned) could be a slightly tricky consideration, and if floats are used then there could be a tolerance gotcha.

[edit] Note that the OP has augmented his Reply (immediately above this) with an additional example only after I added this Reply. That is unhelpful. It would be better to add it in a new Reply, so that the material would all appear in order and make more sense. Adding new examples to earlier queries/replies can make later responses look as if they've overlooked them.

Anyway, the smaller view that plot gets for this added followup example is due to the internal behaviour controlled by its smartview option. With smartview=false one gets a taller y range, and with the default smartview=true one gets the shorter range (which slightly obscures the asymptote at the singularity, imo). Internally, it adds a VIEW substructure to restrict what parts of the computed data are rendered.
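As a rough illustration (using a hypothetical expression with a vertical asymptote, not the OP's example), the difference can be seen by comparing:

   plot(tan(x), x = 0 .. Pi);                     # default smartview=true: shorter y-view
   plot(tan(x), x = 0 .. Pi, smartview = false);  # taller y range, showing all computed data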

In my Maple 2024.2 the call odetest(sol,ode) returns 0 directly.

As I suspected, the change in typesetting of BesselJ came in the same release in which the default typesetting "level" changed from standard to extended.

It is documented in the compatibility updates for Maple 2017.
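If you want to compare against the older rendering, the level can be queried and switched with the interface command (shown here only as a way to compare the display):

   interface(typesetting);              # query the current level (extended, by default, since Maple 2017)
   interface(typesetting = standard):   # temporarily revert to the older default
   BesselJ(0, x);
   interface(typesetting = extended):   # restore the current default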

@sand15 Are you sure that your results are correct?

Can you show that the difference between the integrand and the derivative of your result is piecewise constant, either by plotting or by simplification?

Or, can you do it even if you don't assemble the piecewise, but just try that on the two pieces separately, under their respective assumptions?

I wasn't able to confirm your results in such a way, even with Maple 2015 (as you've used). It's possible I lost the thread of your code, though.

Another way to do that,

restart;

integrand:=-3*(Pi-2*arcsin(tau))
           *(tau+1)^(1/2)*(tau+(tau^2-1)^(1/2))^(2*(tau^2-1)^(1/2)/(tau-1)^(1/2)/(tau+1)^(1/2))*(tau-1)^(1/2)
           *(-16/3*tau^2+Pi-2*arcsin(tau)+8/3)/(4*tau^2-4):


First, a piecewise representation and simplification of the integrand, if tau is taken to be real.
 

new := piecewise(tau<-1, (combine(simplify(integrand)) assuming tau < -1),
                 tau>-1, (combine(simplify(integrand)) assuming tau > -1));

new := piecewise(tau < -1, -(8*tau^2-3*arccos(tau)-4)*arccos(tau)/((tau+sqrt(tau^2-1))^2*sqrt(tau^2-1)), -1 < tau, (8*tau^2-3*arccos(tau)-4)*(tau+sqrt(tau^2-1))^2*arccos(tau)/sqrt(tau^2-1))

ans := combine(simplify( int( expand(evala(new)), tau ) )):


We can check the answer, expecting here a piecewise constant.
 

simplify(combine( new - diff(ans,tau) ))

piecewise(tau = -1, undefined, 0)


And, only because the piecewise ans doesn't display nicely when inlined on
Mapleprimes, here are its pieces:
 

ans assuming tau < -1;

(1/4)*(((6+4*I-12*tau^2)*arcsin(tau)^2+(12*Pi*tau^2-2+(-6-4*I)*Pi)*arcsin(tau)+(16*tau^4+(-16-12*I)*tau^2+6*I)*arccos(tau)+(4*I)*tau^4+(-3*Pi^2+6-4*I)*tau^2+(3/2+I)*Pi^2+(-3+(1/2)*I))*(tau^2-1)^(1/2)-4*(tau+1)*(-3*arcsin(tau)^2+3*Pi*arcsin(tau)+(4*tau^2+(-2-3*I))*arccos(tau)+I*tau^2-(3/4)*Pi^2+3/2-(1/2)*I)*tau*(tau-1))/(tau^2-1)^(1/2)

ans assuming tau > -1;

-(3/4)*((1/6+(4/3)*(tau^2-3*arccos(tau)-1/2)*tau*(tau^2-1)^(1/2)+(4/3)*arcsin(tau)^2-(4/3)*Pi*arcsin(tau)+2*(-2*tau^2+1)*arccos(tau)+(1/3)*Pi^2+(4/3)*tau^4-(4/3)*tau^2)*(-tau^2+1)^(1/2)+(2*(2*tau^2-1)*arcsin(tau)^2+2*(1/3-2*Pi*tau^2+Pi)*arcsin(tau)+(16/3)*(-tau^4+tau^2)*arccos(tau)+(tau^2-1/2)*(Pi^2-2))*(tau^2-1)^(1/2)+(tau+1)*(-(16/3)*arccos(tau)*tau^2+Pi^2-4*Pi*arcsin(tau)+4*arcsin(tau)^2+(8/3)*arccos(tau)-2)*tau*(tau-1))/(tau^2-1)^(1/2)


Download nm_int2.mw

I get that error using Maple 2024.2 on Linux.

In Maple 2023.2.1 on Linux it returned unevaluated.

MmaTranslator:-FromMma("sol = y -> Function[{x}, Sin[x] + C[1]]");

`MmaTranslator/Assign`(sol, y = unapply(sin(x)+_C1, x))

MmaTranslator:-FromMma("IC /. sol");

eval(IC, sol)

Download FromMma_ex.mw

I too don't understand what janhardo's trying to say. Perhaps he's trying to ensure that you're using https rather than http. Or perhaps not.

@dharr I vote up both this and sand15's Answer. (Utilizing unapply and eval is a very natural way to tackle this.)
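As a minimal sketch of that unapply/eval idea (the particular solution form and initial condition below are just hypothetical stand-ins):

   sol := y = unapply(sin(x) + C1, x):   # the solution, as an equation assigning an operator to y
   IC := y(0) = 1:                       # a hypothetical initial condition
   eval(IC, sol);                        # the operator gets applied, giving C1 = 1
   solve(%, C1);                         # 1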

A short note to users of that other software:

MmaTranslator:-FromMma("sol = y -> Function[{x}, Sin[x] + C[1]]");

`MmaTranslator/Assign`(sol, y = unapply(sin(x)+_C1, x))

MmaTranslator:-FromMma("IC /. sol");

eval(IC, sol)


Translation is not always perfect, but in this case it produces essentially the same approach. It's often worthwhile to try it.

@C_R Even when not computing adaptively (i.e., not doing any further subdivision-refinement of an initial subdivision) the internal routines are applying some small jitter offsets to the values sampled strictly between the end-points.

It behaves that way even in old Maple 16 (2012).

You can see the various steps, either in the debugger (stopat) or by trace. Eg,

restart;
trace(`plot/adaptive`):
plot(x,x=0..0.5,adaptive=false,numpoints=7);
untrace(`plot/adaptive`):

[edit] I'll try to ask around, to get a sense of whether the jitter is applied in the adaptive=false case by oversight or intentionally (say, partly as a means to avoid "bad" points somewhat common to an even split...).

@C_R I did not cite the point alpha=0.2952771861 in order to convey that it was one of the points involved in,
   plot(a, alpha=0..0.5, adaptive=false, numpoints=7)

I cited the point alpha=0.2952771861 because you mentioned and used it here.

@C_R I'm sorry but I don't understand what you mean by, "...but still cannot explain why numpoints comes up with x=0.08718861663 for the second plot point".

It's not clear to which Maple input and scenario that refers. Could you please provide the inputs to reproduce that explicitly, as well as whatever aspect of it is of concern?

@C_R For a non-polynomial, fsolve returns a single root, as is documented.

The fact that fsolve returns all roots (which it sorts) for polynomials, or that allvalues can also sort multiple results is not key to understanding that behavior when just a single root is computed.

As alpha changes, the sequence of iterates in a numeric root-finding algorithm (Newton, secant, etc) can be different, even using the same initial point.

As alpha changes, the nature/shape/locations of the wells of convergence of these methods may also change -- which again means that the same initial value may lead to different iterates, and that the first sequence of initial-point and ensuing iterates (that converges) might also change.

If this is not clear then I can demonstrate with concrete values and plots. But later, sorry, as I have to go watch the Santa Claus parade...

As a short note, consider the formula for Newton's method. If the sign of the evaluation of f(x) or D(f)(x) changes (with a different alpha, but similar x value) then the increment by which the next iterate is computed might be totally different. Even in the lucky case that everything computes in the reals the increment could change sign at some iterate, as the extrema of f (and where the sign of D(f)(x) flips) can change with alpha.
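To make that concrete, here is a rough sketch (the cubic family, starting point, and iteration count are hypothetical, chosen only so that the sign of f at the starting point flips between the two alpha values):

   restart;
   f := (x, alpha) -> x^3 - 3*x + alpha:         # a hypothetical family of functions
   NewtonIterates := proc(alpha, x0, n)
     local x, i;
     x := evalf(x0);
     for i to n do
       x := x - f(x, alpha)/D[1](f)(x, alpha);   # Newton increment
       printf("alpha=%a  iterate %d : %a\n", alpha, i, x);
     end do;
   end proc:

   NewtonIterates(1.99, 1.005, 6);   # f(x0,alpha) < 0 here, so the first step moves right
   NewtonIterates(2.01, 1.005, 6);   # f(x0,alpha) > 0, so the very first increment flips sign

The two runs start identically yet their iterates head off in different directions almost immediately.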

@WD0HHU To avoid GUI sluggishness sand15 has suppressed the final display call (terminating the line with a colon) and then manually inserted an image of the plot (e.g., from a file).

So you could manually delete that inserted image, change the final line to end with a semicolon, and then execute.

You can play with the various cases, the size (versus, say, constrained scaling), the end values for the contours, and adjust according to which region most needs the finer grid.

plot-help_sand15_ac24_r.mw
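In case the attached worksheet isn't at hand, the kinds of adjustments meant above look roughly like this (the expression, ranges, grid, contour values, and size below are hypothetical placeholders, not those from the attachment):

   with(plots):
   F := sin(x)*cos(y):                                  # a hypothetical expression
   contourplot(F, x = 0 .. 2*Pi, y = 0 .. 2*Pi,
               grid = [101, 101],                       # a finer grid, for the region that needs it
               contours = [seq(k/10.0, k = -9 .. 9, 3)],# choose the end values/spacing of the contours
               size = [500, 500], scaling = constrained);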

I've often mentally relied on the general notion that specifying a specific class of simplification (trig, exp, radical, power, etc) to the simplify command would prevent it from doing simplifications from the other classes.

But the conversion from the form (x^2)^(1/2) to csgn(x)*x is a manipulation of a radical. I dislike the fact that it can happen even when the trig option is forced. Maybe it's a bug.

simplify( sqrt(z^2) );

csgn(z)*z

simplify( sqrt(cos(z)^2), trig ); # OK

(cos(z)^2)^(1/2)

simplify( 1-sin(z)^2, trig );

cos(z)^2

simplify( sqrt(1-sin(z)^2), trig ); # ooof

csgn(cos(z))*cos(z)

I would like to mention that the OP did not ask about merely how to replace all instances of cos(x)^2 with 1-sin(x)^2.

Doing that simple substitution is indeed straightforward, with several ways to accomplish it. That's true even if you code it for arbitrary names/subexpressions rather than just `x`.

Rather, the OP asked about how to simplify sqrt(1-cos(x)^2) to sqrt(sin(x)^2). That implies that the OP considers the latter to be simpler. And I suspect that stems from its having fewer terms (with the same class of elementary call), etc, and not because it has sin rather than cos, and so on.

So it doesn't seem (to me) to be in the spirit of the OP's request to have an approach that would also change any and all individual cos(...)^2 calls into the sums like 1-sin(...)^2 .

And the OP is very likely to have more complicated expressions, perhaps in terms of other variable names, etc. So examining the expressions to see which have a standalone -cos(v)^2 as opposed to 1-cos(v)^2 isn't in the vein of such programmatic simplification.

The following change (which might occur anywhere in some larger expression of the OP) might not be the kind of blanket effect that the OP wants:

expr := sqrt(cos(x)^2);

(cos(x)^2)^(1/2)

simplify(expr, {1-cos(x)^2=sin(x)^2});

(-sin(x)^2+1)^(1/2)

algsubs(1-cos(x)^2=sin(x)^2, expr);

(-sin(x)^2+1)^(1/2)


ps. I'm not saying that I agree that shortness is a key thing to being simple here. But it might be, for some people.

Also, there are some tricky issues lurking about. Maple prefers cos to sin, in some cases. We could consider how to avoid turning standalone cos(v)^2 into the longer 1-sin(v)^2 , while also being able to turn (with generic variable names),
    2-cos(x)^2 - cos(y)^2
into,
     sin(x)^2 + sin(y)^2

Here's some idiosyncratic behavior:

restart;

simplify( 1 - cos(x)^2 );

sin(x)^2

I'm hoping now to get sin(x)^2+sin(y)^2

simplify( 1 - cos(x)^2 + sin(y)^2 );

-cos(y)^2+2-cos(x)^2

restart;

simplify( 1 - sin(x)^2 );

cos(x)^2

Oh, sure, the other way works, because Maple likes cos ...

simplify( 1 - sin(x)^2 + cos(y)^2 );

cos(x)^2+cos(y)^2
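Given that preference, one blunt programmatic route is to rewrite every cos(..)^2 via subsindets (a rough sketch; note it deliberately ignores the standalone-cos caveat above, so it isn't a full answer):

   expr := 2 - cos(x)^2 - cos(y)^2:
   subsindets(expr, 'specfunc(anything, cos)^2',
              c2 -> 1 - sin(op([1, 1], c2))^2);   # the constants cancel, leaving sin(x)^2 + sin(y)^2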

1 2 3 4 5 6 7 Last Page 2 of 600