

MaplePrimes Activity

These are replies submitted by acer

@CyberRob There is no doubt a more efficient way to do this using coeff/coeffs. But I'm trying to figure out whether this is what you're after.

I still don't understand why in your second example you wanted a factor like,
   nurdel*(dnub*nur + dnur*nub)
while in your first example you wanted a factor like,
   dnub*nur*nurdel + dnur*nub*nurdel


vars := [nur, nub, dnur, dnub, nurdel, nubdel, dnurdel, dnubdel]:

expr1 := c4*dnub*kpbr*ksr*nur*nurdel + c4*dnur*kpbr*ksr*nub*nurdel:

targ1 := c4*kpbr*ksr*(dnub*nur*nurdel+dnur*nub*nurdel):

expr2 := c4*dnub*kpbr*ksr*nur*nurdel + c4*dnur*kpbr*ksr*nub*nurdel + nub:

targ2 := c4*kpbr*ksr*nurdel*(dnub*nur + dnur*nub)+nub:

expr3 := c4*dnub*kpbr*ksr*nur*nurdel + c4*dnur*kpbr*ksr*nub*nurdel
         + (c4*nus0 + c5)*dnub+dnub:

targ3 := c4*kpbr*ksr*nurdel*(dnub*nur + dnur*nub) + (c4*nus0 + c5+1)*dnub:

step1 := proc(ee) local K, C, T, U, u, i;
         # The body was lost in extraction; this is a plausible
         # reconstruction consistent with the outputs shown below:
         # split ee into monomials in vars (with coefficients in the
         # remaining names), group the monomials that share a
         # coefficient, and factor each group.
         C := [coeffs(collect(ee, vars, ':-distributed'), vars, 'K')];
         T := [K];
         U := [op({op(C)})];   # the distinct coefficients
         [seq(factor(add(`if`(C[i] = u, T[i], 0), i = 1 .. nops(C))),
              u in U)], U;
         end proc:

step2 := proc(ee) local temp, i;
         # reconstruction: recombine step1's two lists into the
         # collected target form
         temp := [step1(ee)];
         add(temp[1][i]*temp[2][i], i = 1 .. nops(temp[1]));
         end proc:



For expr3 (compare targ3):

[nurdel*(dnub*nur+dnur*nub), dnub], [c4*kpbr*ksr, c4*nus0+c5+1]

For expr2 (compare targ2):

[nurdel*(dnub*nur+dnur*nub), nub], [c4*kpbr*ksr, 1]

For expr1 (compare targ1):

[nurdel*(dnub*nur+dnur*nub)], [c4*kpbr*ksr]




An alternative for step1 (possibly better) might be,

step1 := proc(ee) local K, C, T, G;
         # The body was lost in extraction; one possible variant is to
         # group the monomials by their coefficient using
         # ListTools:-Classify instead of scanning for distinct values.
         C := [coeffs(collect(ee, vars, ':-distributed'), vars, 'K')];
         T := [K];
         G := ListTools:-Classify(i -> C[i], {seq(1 .. nops(C))});
         [seq(factor(add(T[i], i in G[u])),
              u in [indices(G, ':-nolist')])],
         [indices(G, ':-nolist')];
         end proc:

@Kitonum For the very first example the OP's expected result is not the "simplest" form (lowest leaf count, shortest, fully factored, etc). So, while your comment is correct, I don't see how it figures in here.

Start by showing us the code you used to produce this plot.

@mmcdara The rotation option was introduced for the plots:-textplot command in Maple 2018.
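For illustration, a minimal call (assumes Maple 2018 or later; the point and angle are made up):

```
plots:-textplot([1.5, 2.5, "slanted label"], 'rotation' = Pi/4);
```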

Please don't start a new thread about how to do this kind of thing, i.e. without involving the global names, etc.

Put the followup here instead. The whole topic should stay together.

@Stretto But it will break if the global name x is assigned a value (eg, 4) prior to calling Iter.

@Stretto Your attempt with  C@@(3)(5)  has incorrect syntax; the working form is  (C@@3)(5).

And your attempt using f(g) is incorrect syntax for functional composition. If it is corrected to f@g then, lo, Iter returns a result in terms of @@.


C := n -> n^2:

Iter := proc(f, n)
    local g, i;
    g := f;
    for i from 1 to n do
        g := f@g;   # the loop body, end do, and return were lost
    end do;         # in extraction; restored here
    g;
end proc:

# e.g. Iter(C, 2) builds C@C@C, so Iter(C, 2)(5)
# computes ((5^2)^2)^2 = 390625

@arashghgood I have already read your post. I did that before I made various kinds of plots from it, for both parts (i) and (slightly more straightforward) (ii).

I would like to know precisely what kind of plots you want. From the details provided so far it is unclear what you want in the case that K or Q is not purely real or purely imaginary. It is also unclear what values you want for any parameters you might want fixed.

If you are not going to provide the full details for me to choose between various approaches then I am done here.

@arashghgood Yes, I was able to produce a number of plots.

But I'm going to wait until you answer all my queries for specific details thoroughly and properly.

@Carl Love In some cases Optimization can automatically differentiate a procedure form of the objective. I suspect that might work for your example above, say if the userinfo is replaced by a suitable printf call.

Or, perhaps, if the order of objective calls is not required then one might use a remember table on the objective procedure.

Or the corresponding gradient procedure could be supplied explicitly to the Minimize command.

The above are a few ways in which one might show (or possibly retain) the objective evaluations which are not used merely for numeric approximation of the gradient derivatives.
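As a sketch of the remember-table idea (the quadratic objective here is an invented stand-in, not from the thread):

```
# option remember caches each distinct evaluation, so repeated calls
# at the same point (e.g. during numeric gradient estimation) are not
# recomputed, and the remember table itself records the points seen
obj := proc(x, y) option remember;
    (x - 1)^2 + (y - 2)^2;
end proc:

sol := Optimization:-Minimize(obj, initialpoint = [0, 0]):
# op(4, eval(obj)) then shows the remembered (point, value) pairs
```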


What kind of optimization problem are you doing?

The concept of "iteration" depends on the method.

Do you want to see the result attained for each "major iteration" step (whatever that might mean), or each and every functional evaluation?

Note also that some methods might attempt to approximate derivatives or bounds numerically (which requires functional evaluations of the original objective).

If you want to record or see each and every functional evaluation of the objective then you could construct an objective procedure which does that.
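A sketch of such a recording objective (the particular quadratic is invented for illustration):

```
# print every functional evaluation the solver makes, including those
# used only for numeric approximation of derivatives
count := 0:
obj := proc(x, y)
    global count;
    count := count + 1;
    printf("call %a at x=%.5f, y=%.5f\n", count, x, y);
    (x - 1)^2 + (y - 2)^2;
end proc:

Optimization:-Minimize(obj, initialpoint = [0, 0]);
```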

I forgot to mention that the second and first approaches which I described above (which have since been shown explicitly in the Answers of vv and Kitonum) were also possible ten years ago. In fact such functionality has been possible for much longer than that.

The OP has not included any direct link to the earlier Question thread, which would be so helpful; without it we cannot tell how much general functionality was wanted earlier. The present example's formulation can easily be solved (manually) for variable end-points, or recast in polar form, etc, but that is not the case for more general problems of this type.

I know three ways to get a reasonably smooth surface without having to ramp up the grid resolution (so high that GUI responsiveness to rotation becomes very sluggish).

One is to use polar coordinates (and the parametric calling sequence). That's not possible, or easy, for all examples.

Another is to use formulaic boundaries for the end-points of one variable's range. This requires that the domain be convex in one direction.

Another way is to write code that adapts the points in a MESH plotting structure to the boundary at which the surface formula becomes nonreal. (I have a few schemes that implement this...)
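To illustrate the first two ways, using the upper hemisphere as a stand-in example:

```
# Way 1: polar coordinates, via plot3d's parametric calling sequence
plot3d([r*cos(t), r*sin(t), sqrt(1 - r^2)], r = 0 .. 1, t = 0 .. 2*Pi);

# Way 2: formulaic boundaries for the end-points of the inner range
plot3d(sqrt(1 - x^2 - y^2), x = -1 .. 1,
       y = -sqrt(1 - x^2) .. sqrt(1 - x^2));
```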

I'll write more later, when I have better than a cellphone.

[While I was typing this the member `vv` has given an Answer with the second approach I've mentioned.]

Note that none of these will do the task of figuring out the calling sequence (of the plot3d command) from a set of inequalities, in the way that the Mma command does.

I agree with vv, that the matter of obtaining a target accuracy is key here. In my opinion it is the central distinction between the models of numeric computation of Maple and Mathematica.

I was shying off this conversation until now because I'm on holiday, and because the focus was more on the easier issue of final rounding (and/or significant figures).

I have previously (three separate times) implemented a mechanism as vv describes, using shake and evalr. Naturally such approaches will always be slower than necessary because they are not kernel builtin.

The question of how to specify the "known" accuracy/precision of all numeric inputs is another detail.

In at least one of my implementations I made my `feval` procedure have special evaluation rules, and also to emulate 2-argument eval (substitution).

Naturally, these mechanisms hinged on using evalr to estimate the error and then recomputing. Estimating the working precision needed to obtain a target accuracy -- with no duplicated effort -- is what I call the inverse interval computation, and it is more involved.
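A rough sketch of such a shake/evalr mechanism (the name `feval`, the tolerance test, and the precision-stepping scheme are my own choices here, not the actual implementation):

```
# evaluate expr so the result can be trusted to `target` digits, by
# widening the interval precision until shake's enclosure is tight
feval := proc(expr, target::posint := 10)
    local d, lo, hi;
    for d from target by 10 to 10*target do
        # evalr(shake(...)) returns INTERVAL(lo .. hi)
        lo, hi := op(op(1, evalr(shake(expr, d))));
        if hi - lo < Float(5, -target)*max(abs(lo), abs(hi)) then
            return evalf[target]((lo + hi)/2);
        end if;
    end do;
    FAIL;   # could not certify the requested accuracy
end proc:
```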

@Carl Love You seem to be mixing up hyperthreading with multithreading.

His machine has 4 physical cores, with 8 virtual cores if hyperthreading is enabled.

Any ratio greater than 1 shows an improvement due to *multithreading*.

A ratio greater than 4 would show an improvement that could be ascribed to *hyperthreading*, on his machine. (Tom's machine showed it. The OP's did not. Their chipsets both have a similar number of physical and virtual cores.)

It's true that we do not know whether the hosts were otherwise idle, or even whether the OP has hyperthreading enabled. But it seems doubtful that he would post all this and not have that set.
