MaplePrimes Activity


These are replies submitted by acer

@eslamelidy You could try a modestly high-order series solution, for various values of eq(z).

eslam_acc_complex.mw

@xinwmath What particular form of generally invalid result from combine(...,symbolic) or simplify(...,symbolic) are you looking for?

restart;

expr3 := 3*(8*L__a+4*lambda+(4*I)*sqrt(3)*lambda)^(2/3)
         *(8*L__a-(4*I)*sqrt(3)*lambda+4*lambda)^(2/3)*lambda^2*(L__a-lambda);

3*(8*L__a+4*lambda+(4*I)*3^(1/2)*lambda)^(2/3)*(8*L__a-(4*I)*3^(1/2)*lambda+4*lambda)^(2/3)*lambda^2*(L__a-lambda)

ans3 := simplify(expand(combine(expr3,symbolic)),symbolic);

48*(L__a^2+L__a*lambda+lambda^2)^(2/3)*lambda^2*(L__a-lambda)

simplify(expand(combine(ans3 - expr3,symbolic)));

48*lambda^2*((L__a^2+L__a*lambda+lambda^2)^(2/3)-((L__a^2+L__a*lambda+lambda^2)^2)^(1/3))*(L__a-lambda)

simplify(expand(combine(ans3 - expr3,symbolic)),symbolic);

0

 

Download symbolic_sick.mw

@vv That is good and simple. Vote up.

But if the "polynomial entries in a single variable" have floating-point coefficients, then it does not treat the concept of rank in the same manner as the Rank command (or Matlab's, etc., which use singular values, a fairly standard approach). I mean following substitution of numeric values for the unknown b, of course.

floatRank.mw

@Kitonum Another kick at the can, for Joe's query. (If `solve` returns an inequality, or a non-equality, then it'll be more difficult still.)

Of course, this is the complement.

restart;
with(LinearAlgebra):

A := <b,1,3 | 4,b,6>:
B := <b,1,3 | 4,b+1,6>:

GE1 := GaussianElimination(A):
S1 := [solve({`*`(seq(GE1[i,i], i=1..2))=0}, b)]:
select(u->Rank(eval(A, u))<2, S1);
                           [{b = 2}]

GE2 := GaussianElimination(B):
S2 := [solve({`*`(seq(GE2[i,i], i=1..2))=0}, b)]:
select(u->Rank(eval(B, u))<2, S2);

                               []

@Kitonum Thanks for that observation. How about this amendment?

restart;
A := <b,1,3 | 4,2,6>:
GE := LinearAlgebra:-GaussianElimination(A):

solve({`*`(seq(GE[i,i], i=1..2))<>0}, b);

                            {b <> 2}

@Joe Riel Another way would be to perform Gaussian elimination, and solve for the set of restrictions that all diagonal elements are nonzero.

For example,

restart;

A := <1, 1, -2, -3|-1, -9, b, 11|-1, 3, 2, -1|b, -10, -4, 6>:

cols := [1,2,3]:
GE := LinearAlgebra:-GaussianElimination(A[..,cols]):
solve({seq(GE[i,i]<>0, i=1..nops(cols))}, b);

                            {b <> 2}

solve({seq(Or(GE[i,i]>0, GE[i,i]<0), i=1..nops(cols))}, b);

                        {2 < b}, {b < 2}

cols := [1,2]:
GE := LinearAlgebra:-GaussianElimination(A[..,cols]):
solve({seq(GE[i,i]<>0, i=1..nops(cols))}, b);

                            {b = b}

The help page for the topic DocumentTools/Components/Plot has, as its very last example, a 3D animation that is embedded with a custom scaling (zoom). That's the only way I currently know to get a 3D plot shown automatically with a non-square viewing window as well as a nice zoom. The point is that there is none of the irritating white space that one often gets at top and bottom.

I have a procedure (somewhere, on some hard drive) that takes a constrained 3D plot and applies a transformation to get custom aspect ratios of the axes (eg, x vs z, and y vs z). The point here is that currently Maple either displays an unconstrained 3D plot with its axes as a cube, or it matches the axes to the numeric ranges of the data. But there's nothing to allow arbitrary, pleasing axes-ratios in the case that the x- or y-data are on a completely different scale than the z-data.
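Here, for illustration, is a minimal sketch of the idea. The surface and the zratio factor are hypothetical stand-ins, not the actual procedure I mentioned:

P := plot3d(1e5*sin(x)*y, x = 0 .. 2*Pi, y = 0 .. 1):
zratio := evalf(2*Pi/1e5):   # hypothetical factor: make the z-extent match the x-extent
T := plottools:-transform((x, y, z) -> [x, y, zratio*z]):
plots:-display(T(P), scaling = constrained);

One caveat of this simple scheme is that the z-axis tickmark values then reflect the rescaled data rather than the original.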

That is not too hard to set up with numeric integration. You can even make a "black box" procedure which accepts a numeric value for c and returns the float approximation of the integral. That could be plotted as a function of c, etc.

(If taking a numeric approach then I'd recommend trying either the Monte-Carlo or the _CubaCuhre method for evalf(Int(...)), since the integrand will be discontinuous at the boundary, because outside the region it will be zero. Most other methods rely on smoothness, and the cost of splitting at the implicit boundary will not be nice, especially if nesting a 1D integrator.)
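Here is a minimal sketch of such a black-box procedure. The integrand F, the region, and the ranges are hypothetical stand-ins for the actual problem:

# F vanishes outside an implicitly defined region, hence the discontinuity.
F := (x, y, c) -> piecewise(x^2 + y^2 <= c, exp(-x*y), 0):

G := proc(c)
  # Return unevaluated for non-numeric input, so that G can be plotted, etc.
  if not type(c, numeric) then return 'procname'(c); end if;
  evalf(Int((x, y) -> F(x, y, c), [-2 .. 2, -2 .. 2], method = _CubaCuhre));
end proc:

G(1.0);
plot(G, 0.5 .. 2.0);   # the float approximation of the integral, as a function of c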

Or are you really hoping for an explicit formula? Whether that can be accomplished will depend on the example. In general it won't be possible.

@Mac Dude Yes, UseHardwareFloats=true will cause some computations to be done faster. But preventing software float computation across the board will break far more computations than it speeds up.

The UseHardwareFloats environment variable was introduced in Maple 6, and for several major releases almost the only effect it had was on whether datatype=float acted like datatype=sfloat or datatype=float[8] for rtables.

A few years later the scalar HFloat was devised, and a few people sought out ways to make fast scalar floating-point computations easier to accomplish. (There will always be some people whose wish for a silver bullet defies cold logic. HFloats are not immediate and still need memory management, and do not bring the same degree of performance benefit as evalhf, let alone the Compiler.) The option hfloat for procedures arose around the same time, and allowed more flexibility than evalhf even if not as much performance benefit. Then UseHardwareFloats was used to also control default HFloat creation upon extraction of scalars from float[8] rtables, and in modern Maple it can play a role similar to option hfloat, but at the top level. Alas, the UseHardwareFloats documentation is thin.

The reason I'm describing some history is that it's important to realize that a very large portion of the Maple Library (many hundreds of thousands of lines of code) was designed, and existed for decades, under the scheme that increasing Digits would normally allow more accurate floating-point computation. This aspect is still relied upon in many places in the Library.

But if you set UseHardwareFloats to true then, in modern Maple, that will strictly prevent higher-precision software computation, and thus prevent more accurate results from being attained, in quite a few routines, some of them key.

And there are additional, important nuances, aside from just high working precision. The hardware float environment has restricted range (roughly +-10^308 down to about +-10^(-308), as I reckon you know). But with software floats much larger or smaller exponents can be used, even with default Digits=10. There are Library commands which rely on that in order to function as designed.

Consider the expression exp(750.1 - x) where x is in the range 760 .. 765. This produces values which are not implausible for an underlying physical setting or model. But if one happens to expand that symbolically, then under forced hardware floating-point the result becomes Float(infinity)/exp(x), which will bring no joy for x in the stated range. So, here, with UseHardwareFloats=true a reasonable problem has suddenly become intractable and requires considerably more care and effort to handle. Those are just a few examples, but note that many more problematic cases can arise. Float computations can be problematic under all settings, but this hardware float setting introduces a lot of issues which Maple's software floating-point arena handles nicely.
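As a minimal sketch of that scenario (using the same numbers as above):

restart;
expand(exp(750.1 - x));    # software floats: exp(750.1) is representable (about 0.58e326)
eval(%, x = 762.0);        # a sane, finite value

restart;
UseHardwareFloats := true:
expand(exp(750.1 - x));    # Float(infinity)/exp(x), due to hardware overflow
eval(%, x = 762.0);        # no useful value recoverable for x in 760 .. 765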

You wrote, "If it isn't working or useful, then why is the option even there?" Now, I most certainly did not say or imply that UseHardwareFloats=true "isn't working or useful"! I don't know how you managed to make that non sequitur. My opinion that UseHardwareFloats=true is not a good top-level, initialized setting does not at all imply that it is never useful.

Just as you can set option hfloat on a procedure of your own devising, you can also set UseHardwareFloats=true. Within a procedure, or for a limited kind of top-level calculation (pure float linear-algebra, say), it can indeed work and be useful. As Joe mentioned, as an environment variable its value is inherited by child procedure calls, but setting it within a procedure does not affect the parent.
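For example, here is a minimal sketch of that kind of targeted use, with a hypothetical procedure:

fastdot := proc(U::Vector, V::Vector)
  # As an environment variable this is inherited by child calls and is
  # restored upon exit, so the caller's setting is untouched.
  UseHardwareFloats := true;
  LinearAlgebra:-DotProduct(U, V);
end proc:

W := LinearAlgebra:-RandomVector(1000, generator = 0. .. 1.):
fastdot(W, W);
UseHardwareFloats;   # still at its default ('deduced') at the top level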

So it can be useful, in a targeted, specific computational subprocess like a custom procedure for some task you might have. But setting it at the top-level, as a blanket setting, is going to break stuff.

Generating a procedure with Localize affects the numeric output from the original Sol returned by dsolve.

If the procedure generated by Localize is ever used to change/set its own parameter then it affects Sol in different ways, according to whether the new parameter value was previously utilized in a call to Localize or not.

A call to query the current parameters, made to the result from Localize, contains the global rather than local names.

In other words, idiosyncratic (but not outright wrong or unexplainable) behaviour follows if the Localize results are ever used to change their parameter value, or if the original solution from dsolve is utilized. And the remember table of Localize needs clearing if memory is to be recovered, even following unassignment of the Localize procedures and gc. All in all, I think that this is not manageable by the common man. And the current behavior might change in the future.

foobar.mw

@bliengme You used = instead of := and so did not actually assign the result from solve.
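A minimal illustration of the distinction (the equation is just a stand-in):

sol = solve(x^2 = 4, x);    # merely an equation with lhs sol; nothing gets assigned
sol := solve(x^2 = 4, x);   # an assignment; sol is now the sequence 2, -2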

Are you really using Maple version 16, or is it perhaps version 2016?

@Carl Love I'd like to address some of the points in your last Comment.

[edit: I now suspect that in the paragraph below I may have been mistakenly responding to statements by Carl about code by Preben, and not by me. Upon re-reading, I now suspect that by "your code" Carl did not mean something that I wrote. But I'll let the paragraph stand...]
Yes, the whole point of my post was that, using a naive/obvious approach with the procedure returned by dsolve(numeric), one should never call plot3d like plot3d(Y(A,X), X= -2..2, A= -2..2, ...) where A is the parameter and X the independent variable (of the ODE). So my approach never "gets that wrong", since I am advocating never doing it. If one insists on doing it that way then it's not my approach. (I know that by "your code" you are referring to the first argument passed to plot3d, the procedure. But I feel that the wording was imprecise enough that I can justify the preceding clarification for the sake of other readers.)

[edit: The rest of this Comment I'll leave, as I was planning on writing it anyway.]

When I wrote this post I was aware that with effort I could get a speedup with use of either remember tables or the instantiation being discussed. (My earlier response to you was about a particular way in which I thought you wanted to get your hands on that. I now realize that I may have given the wrong impression. However, I did not ever consider the sscanf@sprintf approach, which is very ingenious.)

And, yes, I have been interpreting "instantiation" as used in Comments above to mean it in the particular way that you last described it, where one (if not the) key feature is that the instantiated procedure will not be affected by changes to the parent (or by the spawning off of other instantiations). The instantiation is done so as to obtain a solving procedure for a given value of the parameter, where that procedure persists independently.

But now I would like to state my position about the approaches of such instantiation, or remember table use. I consider them ingenious but inferior to the two main methods I've given, which lay down the solution at one parameter value per slice. I'll list three considerations that I consider important about such approaches:

  1. They are much more complicated, to lay down in code and for most people to understand.
  2. They create allocations which would eventually have to be cleaned up, to avoid the effect of a memory leak. While not terrible in itself, that is one more aspect of complication and possible burden during use.
  3. There can be integration schemes for the IVP solver where, for a given parameter, it is possible to lay down the solution very efficiently (as independent variable/time t increases). That might not always be leveraged by use of plot3d, but it could be beneficial in use by plots:-odeplot (for a 3D plot or 2D animation).

Over the years my viewpoint has shifted somewhat, against implementations which require memory management (either manual/custom, or by stock kernel functionality). I find that I now have a marked preference for structural efficiency, when other considerations are mostly the same. That's just a general comment about point 2) above.

So it is my belief that leveraging the way that plot3d runs over the GRID points is important and germane.

I completely understand, if anyone wishes to disagree.

What I would really like to see, in the future, is for the plots:-odeplot command to get new and additional functionality that provided high efficiency for constructing 2D animations or 3D plots where the parameters of an IVP system were utilized for one of the independently changing values. I think that numeric ODE solving is important and common enough to warrant the effort.

I recall discussion about improving DEtools[DEplot] and/or plots:-odeplot with respect to using IVP parameters way back in 2011. There was also side talk about remember table use there and -- even if not as beneficial or sophisticated as your and Preben's comments above -- it's worth noting that it illustrated that manual management of memoization techniques can be a struggle for even strong non-expert Maple users. I think that any sophisticated approach to this whole issue, whether structural or memory oriented, is better positioned within a convenient interface of a stock Library command, for the majority of users.

 

@Adam Ledger I don't think that you are going to be able to get that to work satisfactorily. As mentioned, Maple does not ship with debug versions of its binaries (executables and shared/dynamic libraries).

@Carl Love I don't know a clean and easy way to localize the dsolve solution procedures, when instantiated at particular parameter values.

Preben, perhaps I ought to have phrased it more like a description of the behaviour seen. That is to say, even when the parameter value is not being changed, the mere act of setting it seems to degrade the performance of the mixed usage of setting the parameter and computing at a specific point of the variable. For all I know, that aspect might be a bug. But it would still be natural and expected to have to generate the output values by walking the variable outermost and the parameter innermost.

Here is the shortest code I have to obtain the two possible choices for the parameter & variable, as first or second independent axis. It's not something I consider easy to find.
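To make it self-contained: the names WV, U0, and the range endpoints come from earlier in this thread. A minimal hypothetical setup, with a stand-in ODE, might look like this:

restart;
sys := {diff(V(t), t) = -V(t) + U0, V(0) = 1}:
Sol := dsolve(sys, numeric, parameters = [U0], output = listprocedure):
WV := eval(V(t), Sol):    # the procedure that computes V at a given t
U0lo := 0.0:  U0hi := 2.0:  tlo := 0.0:  thi := 5.0: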

VofU0check := proc(par, var)
  # Reset the parameter only if it differs from its current value, to avoid
  # the cost of needlessly re-initializing the IVP solver.
  if eval(U0, convert(WV('parameters'), `global`)) <> par then
    WV('parameters' = ['U0' = par]);
  end if;
  WV(var);
end proc:

# t on the first display axis, U0 on the second (via the swap in the parametric form)
plot3d([(x,y)->y, (x,y)->x, VofU0check], U0lo..U0hi, tlo..thi,
       labels=["t","U0","V(t)"]);

# U0 on the first display axis, t on the second
plot3d(VofU0check, U0lo..U0hi, tlo..thi,
       labels=["U0","t","V(t)"]);

I am not aware of a way to call plots:-odeplot to force the much better performance here. If someone could show a way then I'd be delighted. But I suspect that plots:-odeplot has not been taught, yet, how to work best with these issues of instantiating parameters.
