MaplePrimes Activity

These are replies submitted by acer

Did you intend that restart?

Earlier, you had defined a polynomial in p (assigned to v?). The restart gets rid of that. Perhaps you just forgot to assign it again.

@das1404 My answer already demonstrated how the terms in the lhs (or rhs) of an equation could be reduced by common constant content. The `simplify` command doesn't do that.

If you want an answer to some specific question then ask a specific question.
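As a minimal sketch of the kind of reduction I'm describing (not necessarily the method from my answer; the equation here is made up, and `content` computes the common constant content of a polynomial):

```maple
# Reduce an equation by the common constant content of its terms.
eq := 6*x + 9*y = 12:
c := content(lhs(eq) - rhs(eq), [x, y]);   # common constant content, here 3
map(t -> expand(t/c), eq);                 # both sides divided through by c
```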

@Ronan At nelems=10^9 I would not be surprised if your 64GB machine started swapping RAM, and if that happens then you might as well stop the run, as you'd be timing your OS's ability to swap rather than Maple's ability to compute.

But why bother storing the results in an Array anyway? You could compute them without assigning them to anything, and measure the timing of that.
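For instance (a sketch; `f` and `nelems` are placeholders for your actual operation and problem size):

```maple
# Time just the computations, discarding the results instead of
# storing them in an Array (f and nelems are placeholders).
st := time[real]():
for i to nelems do
    f(i);    # result is computed and then dropped
end do:
time[real]() - st;   # elapsed real time in seconds
```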

@Ronan So it seems you got 400sec for nelems=100mil with this code.

Not a bad speedup from approx. 1530sec.

One aspect to note is that 300 of those 400 seconds are spent in the garbage collector. I wonder how much allocated memory it'd take to hold any of these examples if gc were (somehow) effectively turned off. I suppose the answer to that is the final value of kernelopts(bytesused) and is likely prohibitively high.

If we subtract the realgctime from the realtime for your 10mil and 100mil runs with the (so far) optimal code then the following observation can be made.

   10mil:   realgctime = 7sec      (totalrealtime - realgctime) = 10sec

  100mil:   realgctime = 300sec    (totalrealtime - realgctime) = 100sec

So the real time to do the actual computations went up by a factor of 10 (from 10sec to 100sec) as the problem size went from 10mil to 100mil. That's even better than I'd have guessed, since the maximal exponent goes up with problem size for your example. But the garbage collection real time went up by a factor of 42 (from 7sec to 300sec), which is noticeably higher. I don't know whether that could be mitigated.

@nm On MS-Windows you could look for something like,


(It's a little confusing because, on MS-Windows, the name of this preferences/resource file is like the name of a Maple initialization file. But the locations are different. And only the latter contains actual Maple source code.)

@Chouette You can fill out this Software Change Request (SCR) form. You can get to it from the Mapleprimes top menu.

@mmcdara I constructed the example so that the denominator would be (mathematically equivalent to) zero upon the substitution x=5.

At the top-level I used 1-level evaluation in order to print it without the denominator becoming zero. Within a procedure that printing can be done directly since access of the procedure's assigned locals only gets 1-level evaluation.

The purpose was to prevent the evaluation of GAMMA(10/3) after the substitution x=5. After evaluation of that GAMMA(10/3) call the whole denominator would become zero. Hence accessing the evaluated, substituted expression would produce a numeric exception.



eval(GAMMA(10/3), 1);


And so,

expr := 1/(GAMMA(x*2/3)-56*Pi*sqrt(3)/(81*GAMMA(2/3)));




TD := subs(x=5, denom(expr));




eval(TD, 1);


lprint(eval(TD, 1));


I just happened to use lprint so that the output was terse. At the top-level I could call lprint or print after the substitution by using 1-level eval of the argument, to prevent the division by zero that would occur upon full evaluation. Within the procedure I could call lprint or print directly.

The goal was to illustrate that using subs can incur additional risk of hiding a zero-valued denominator, and that the situation can be trickier still if it happens within a procedure.

@Ronan You've probably already noticed that there can be noticeable variation in timing even for repeats of the very same method.

So it gets a little tricky to compare when the difference between two methods is on the same order as the variation. Just wanted to mention it.

@mmcdara I was mostly thinking of the ways in which the lack of evaluation can cause surprises (later on).


expr := 1/(GAMMA(x*2/3)-56*Pi*sqrt(3)/(81*GAMMA(2/3))):

T := subs([x=5], expr):



# It's inconvenient to be surprised by this only later. Eg.
T;
Error, numeric exception: division by zero

# It may not be evident, that the problem is this:

Error, numeric exception: division by zero

# It's often convenient to find out sooner.
eval(expr, [x=5]);

Error, numeric exception: division by zero

# Inside a procedure the local `T` sent to `print`
# is only evaluated 1-level, and the problem can be
# missed.
F := proc()
  local expr, T;
  expr := 1/(GAMMA(x*2/3)-56*Pi*sqrt(3)/(81*GAMMA(2/3)));
  T := subs([x=5], expr):
  print(T); # non-useful debug attempt
end proc:





Perhaps this will provide background.

Yes, both Explore and seq have special evaluation rules on their first parameter.

The primary difference between Explore and seq in your situation is that seq is a kernel builtin and can pretty much do anything it wants to the underlying DAG structure when it finally has to evaluate the expression. On the other hand, Explore is an interpreted Library routine and its ability to substitute and evaluate is mostly restricted to use of stock commands. (Having Explore walk DAG structures and mess with pointto etc would be a bit of a rabbit hole. Essentially the only way to undo certain consequences of a Library procedure with special evaluation rules is to mimic -- slowly -- all that the kernel does. It's probably not possible to ever get it exactly right.)

In my opinion it was a mistake for Explore to ever have its first parameter be specified as uneval. Far better, more consistent, and more explainable would be for it to behave like plots:-animate and frontend, which separate the outer operation from the arguments and where the arguments naturally get uneval quotes as needed. But what's done is done.

You may use uneval (single right-tick) quotes to work around your given example, but for some other examples using uneval quotes in a natural manner gets messed up by the Library-side substitution/evaluation that is accommodating that special evaluation of the first argument.
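As a small illustration of what the kernel's special evaluation rules buy you (this shows seq, not Explore; the values are made up):

```maple
# seq, a kernel builtin with special evaluation rules, copes even
# when the index name already has a value: it saves and restores it.
i := 10:
seq(i^2, i = 1 .. 3);   # the assigned value of i does not interfere
i;                      # i still has its prior value afterward
```

An interpreted Library routine receiving an unevaluated first argument has to reproduce that kind of bookkeeping with stock substitution/evaluation commands, which is where the corner cases creep in.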

@Chouette I have submitted a bug report that contains all the examples (also run with eliminate and a dummy variable workaround).

@tomleslie Regarding your bullet point 3), those warnings related to changecoords and arrow come from the use of `with`, and in very old Maple `with` could issue some warnings when rebindings clobbered earlier rebindings.

They don't seem to be the cause of a problem here. But I believe that they do indeed arise from what's in the worksheet (and need not arise from what's in some initialization file).

@tomleslie I suspect that a significant portion of the total real time for this computation is garbage collection.

The kernel can use kernelopts(gcmaxthreads) for parallel memory management, even without use of Threads by the code.

So, if there is much garbage that can be collected in parallel, then gcmaxthreads=4 cores could be in use by gc and numcpus=4 cores by the Threads computation.

This is partly why I suggested running under CodeTools:-Usage with realtime and gcrealtime being reported as partial output.

If the virtual hyperthreads actually interfere with optimal performance of the physical cores for this example (possible, but certainly machine and OS dependent) then one could try setting both the numcpus and gcmaxthreads kernelopts settings to half of the number of physical cores.
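A sketch of what I mean (the counts here assume an 8-physical-core machine, so half is 4; adjust for yours, and `MyComputation` is a placeholder for the actual Threads-based run):

```maple
# Restrict both the Threads computation and parallel gc to half the
# number of physical cores (values assume 8 physical cores).
kernelopts(numcpus = 4):
kernelopts(gcmaxthreads = 4):

# Report elapsed real time and gc real time alongside the result.
CodeTools:-Usage( MyComputation(), output = ['realtime', 'gcrealtime'] );
```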

@Ronan You may have misunderstood: nothing in my previous comment was intended to address your example specifically.

I was simply pointing out to Tom that numcpus can actually be set.

I suspect that garbage collection is a significant aspect to this example. I would run it under a single procedure call, wrapped by CodeTools:-Usage so that it returns gcrealtime, realtime, and the result.

One can meaningfully compare gcrealtime and realtime. (Other timing measurements that represent cumulative times across threads cannot be meaningfully compared.)

I believe you if you say that Threads slows it for you with immediate integers but speeds it up with big integers. I haven't run the example.

I'd be tempted to use ArrayTools:-Alias rather than form a whole new set of Arrays.
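For instance (a sketch; the 1-D Array and the target dimensions are made up):

```maple
# Create a 2 x 3 view onto the same underlying data, without copying.
A := Array(1 .. 6, i -> 10*i):
B := ArrayTools:-Alias(A, [2, 3]):
B[2, 1] := 0:   # writes through to A, since no copy was made
```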
