_Maxim_

729 Reputation

12 Badges

8 years, 295 days

MaplePrimes Activity


These are replies submitted by _Maxim_

@vv

It's true that Maple doesn't know how to expand an arbitrary branch of RootOf and can give an expansion that is valid for a different branch.

In this case, the constant term w-sqrt(w^2+2*w) is already not valid for all w>0 (it's valid for w>sqrt(2)-1 though).

But it doesn't seem too difficult to work out the correct branch. Take a constant value of w. Compute the (one-sided) derivatives of nu3 and of nu3r at u=0. Now everything is numeric, and the evala machinery can be used to symbolically determine which branches to take (note that there are no non-polynomial RootOfs in nu3r and its derivatives). Then it should be possible to prove that the result can be extended to an appropriate range of values of w.

The difference between series(ee, ...) and series(sqrt(1/x)*erf(sqrt(x)), ...) probably stems from this:

ee assuming x < 0;
                             sqrt(1/x)*erf(sqrt(x))

sqrt(1/x)*erf(sqrt(x)) assuming x < 0;
                          I*sqrt(-1/x)*erf(I*sqrt(-x))

and then series throws an error if given the second output. I don't know why the results of assuming differ here.

@AndreaAlp

You're welcome. The rotation isn't necessary, just convenient, as it doesn't change the shape of the ellipse. When you eliminate a variable from the equations, you instead get a projection of the ellipse onto one of the coordinate planes. But a parallel projection preserves the center of an ellipse, so conceptually there's nothing wrong with your original method:

applyop(CompleteSquare, [2, 1],
  eliminate([y-model_M_femoral_condyle, z-model_M_tibial_plateau], y));
       [{y = -.2564522229*z+519.1377414-0.8752086210e-1*x}, {0.134954854256928e-1*(z-82.5916599186723)^2+
       0.331826535967322e-1*(x-50.3331780529672)^2-1.01086211811955}]

You get the x and z coordinates of CP_M_fem_M_tibia from the second equation (the implicit rhs is zero) and then y from the first equation.

Also, a<b<c doesn't work in 1-D input mode (thanks to Preben Alsholm for pointing that out), so in the clip code you need to change it to (a<b and b<c):

m -> ir[2, 1] < Mean(m[.., ir[1]]) and Mean(m[.., ir[1]]) < ir[2, 2]
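As a quick illustration (just a sketch, not your data): in 1-D input a chained inequality like 1 < 2 < 3 is a syntax error, while the explicit conjunction parses and evaluates as expected:

evalb(1 < 2 and 2 < 3);
                              true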

 

As another variation of #1, it's not at all obvious that an eval is needed here:

(simplify@series)(arccosh(csc(t)), t = Pi/2, 3) assuming t < Pi/2;
                   signum(t-(1/2)*Pi)*(t-(1/2)*Pi)+O((t-(1/2)*Pi)^3)

(simplify@eval@series)(arccosh(csc(t)), t = Pi/2, 3) assuming t < Pi/2;
                           -t+(1/2)*Pi+O((t-(1/2)*Pi)^3)

 

@ecterrab

Regarding #3, there seems to be an extra step involved: assume(x(0)>0) always gives an error, but of the two simplify/assuming constructs, one just doesn't do the expected thing while the other gives an error.

My point is more or less simply "I think it shouldn't try to place assumptions on 0 and instead it should just work."

 

@vv

I just noticed that simplify((sqrt(1-I)*sqrt(1+I)-sqrt(2))*(t+1), constant) gives 0, so I suppose the question can be made more specific: "Should Maple be more aggressive in applying simplify(..., constant) routines?"

@Ronan

Usually you can type ? followed by an operator to look it up:

?:-

access the function NormalForm exported by the module Groebner;

?elementwise

(or the third entry in the search results for ?~): element-wise subtraction of two lists. Could have been just [a,b]-[c,d]. Brackets needed because your variables are sequences;

?@

composition of functions; the same as tdeg(op(indets(Phi[1]))).

 

@AndreaAlp

Can you explicitly write out the equations for both surfaces as F(x,y,z)=0 and G(x,y,z)=0, please? Otherwise there is some confusion. If expr is not an equation, implicitplot3d of expr means the plot of the surface expr=0.

If your quadric is in fact given by the equation y-my_quadric=0, then it's better to visualize it using the parametric plot

plot3d([x, my_quadric, z], x=xrange, z=zrange, view=[default, yrange, default])

This way the computation is done on a 2D grid, not a 3D grid, so the grid can be made denser.
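For instance (with a made-up quadric y = x^2+z^2 standing in for my_quadric, and placeholder ranges):

plot3d([x, x^2+z^2, z], x = -2 .. 2, z = -2 .. 2, grid = [100, 100], view = [default, 0 .. 4, default]);

Refining the grid here is cheap because it's 100*100 sample points, not 100^3.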

But with your equation of the plane, there are no intersections:

SolveTools:-SemiAlgebraic(convert({z - my_plane, y - my_quadric}, rational, exact)); # no real solutions
                               []

 

@vv

I see, for polynomial roots that seems to do the trick. But there's the same issue for non-polynomial roots:

rr := RootOf(x-cos(x), x, fsolve(x-cos(x)));
               RootOf(_Z - cos(_Z), 0.7390851332)

f := x -> piecewise(x > rr, 1);

f(1);
                               0

rr is a uniquely defined root, but there's the same issue again. Perhaps Maple cannot decide whether the imaginary part is exactly zero or not.

This is not directly related, but while the RootOf page does say "a placeholder for representing all the roots of an equation in one variable", it does things like this:

Im(RootOf(_Z^3-_Z-1));
                               0

which are clearly not valid for all roots. So it's really hard to tell sometimes how RootOf is going to be interpreted in a given context.

Also, if RootOf is a multi-valued object, for which a comparison doesn't make sense, maybe Maple should say something.

Because comparisons do not work with RootOf, this happens:

f := x -> piecewise(x > RootOf(_Z^3-_Z-1), 1);

f(2.);
                               0

plot(f(x), x = 1 .. 2); # identical zero

plot(evalf(f(x)), x = 1 .. 2); # a unit jump
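One possible workaround, if a floating-point answer is acceptable: evalf the RootOf before it reaches the comparison, so that piecewise compares two numbers:

f := x -> piecewise(x > evalf(RootOf(_Z^3-_Z-1)), 1);

f(2.);
                               1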

 

Three possible methods. Our chief weapons are the Newton polygon and the method of dominant balance. And simply applying algcurves:-puiseux, as mentioned in the post.

@tsunamiBTP

http://ocw.nctu.edu.tw/course/fourier/supplement/hewitt-hewitt1979.pdf, Theorem F.

@vv

Yes, forgot to add that condition: if f is absolutely integrable and f(x0-0) and f(x0+0) exist.

It's fine if the series is divergent at x0; the point of the theorem is that it's Cesàro summable to the midvalue.

@tsunamiBTP

If you mean that the maximum of Sn (the nth partial sum) can be below the maximum of f, sure, that can happen for some values of n (Sn decreases towards 0 while f increases), but is that really significant? The proofs of the results related to the Gibbs phenomenon require just some basic calculus; they boil down to the fact that a certain sum is in fact a Riemann sum for one particular function. So the limiting behavior is always the same.

For the infinite series, if the Dirichlet conditions hold, the value of the sum is (f(x0+0)+f(x0-0))/2 at every point x0 (even more, if f is just absolutely integrable, the series is summable to that value). So at which point x0 is the overshoot going to occur? The limit of the maximum of Sn is not the same as the maximum of the limit of Sn.
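For a unit jump, that Riemann-sum limit can be evaluated in closed form: the maxima of Sn tend not to the jump size but to the Wilbraham-Gibbs constant (a sketch, not tied to any particular f):

evalf(2/Pi*int(sin(t)/t, t = 0 .. Pi)); # = 2*Si(Pi)/Pi
                          1.178979744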

?solve,series only gives examples where the solve variable is the same as the expansion variable. E.g., the series for the inverse function can be found in this manner:

ser := series(y-f(x)+O((x-x0)^3), x = x0);

iser := solve(ser, x);
      x0+(y-f(x0))/(D(f))(x0)-(1/2)*((D@@2)(f))(x0)*(y-f(x0))^2/(D(f))(x0)^3+O((y-f(x0))^3)

is(iser = series((f@@(-1))(y), y = f(x0), 3)); # same as the series for f@@(-1)
                          true

series(subs(x = iser, ser), y = f(x0)); # OK
                     O((y-f(x0))^3)
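A concrete instance of the first formulation (taking f = exp, so the inverse series should come out as the series of ln(y) at y = 1):

ser := series(y-exp(x), x = 0, 4);

solve(ser, x);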

But we can formulate this differently, and instead of solving y-f(x)=0, solve f(y)-x=0:

ser := series(f(y)-x+O((x-x0)^3), x = x0);

iser := solve(ser, y);
Error, (in solve/series) invalid input: series received x-x0, which is not valid for its 2nd argument, eqn

Should it work? Sometimes it does:

ser := series(ln(y)-x+O(x^6), x = 0);

iser := solve(ser, y); # OK
                 1+x+(1/2)*x^2+(1/6)*x^3+(1/24)*x^4+(1/120)*x^5+O(x^6)

series(subs(y = iser, ser), x = 0);
        ln(1+x+(1/2)*x^2+(1/6)*x^3+(1/24)*x^4+(1/120)*x^5+O(x^6))-x+O(x^6)

But now the substitution doesn't work. Should it?
