acer


MaplePrimes Activity


These are replies submitted by acer

Yes, both Explore and seq have special evaluation rules on their first parameter.

The primary difference between Explore and seq in your situation is that seq is a kernel builtin and can pretty much do anything it wants to the underlying DAG structure when it finally has to evaluate the expression. On the other hand, Explore is an interpreted Library routine and its ability to substitute and evaluate is mostly restricted to use of stock commands. (Having Explore walk DAG structures and mess with pointto etc. would be a bit of a rabbit hole. Essentially the only way to undo certain consequences of a Library procedure with special evaluation rules is to mimic -- slowly -- all that the kernel does. It's probably not possible to ever get it exactly right.)

In my opinion it was a mistake for Explore to ever have its first parameter be specified as uneval. Far better, more consistent, and more explainable would be for it to behave like plots:-animate and frontend, which separate the outer operation from the arguments, and where the arguments naturally get uneval quotes as needed. But what's done is done.
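For illustration of that calling style (this is just the stock animate example pattern, not your code):

    plots:-animate( plot, [sin(a*x), x = 0 .. 2*Pi], a = 1 .. 5 );

Here the outer command (plot) is passed separately from its arguments, so any uneval quotes can go exactly where they are needed inside the argument list.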

You may use uneval (single right-tick) quotes to work around your given example, but for some other examples using uneval quotes in a natural manner gets messed up by the Library-side substitution/evaluation that is accommodating that special evaluation of the first argument.
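As a small side illustration of what special evaluation of a first argument buys (again, not your example): seq does not evaluate its first argument up front, so a previously assigned index name does no harm, whereas sum evaluates its arguments first and complains.

    i := 100:
    seq(i^2, i = 1 .. 3);   # 1, 4, 9
    sum(i^2, i = 1 .. 3);   # error: summation variable previously assigned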

@Chouette I have submitted a bug report that contains all the examples (also run with eliminate and a dummy variable workaround).

@tomleslie Regarding your bullet point 3), those Warnings related to changecoords and arrow come from the use of with, and in very old Maple with could issue some warnings when rebindings clobbered earlier rebindings.

They don't seem to be the cause of a problem here. But I believe that they do indeed arise from what's in the worksheet (and need not arise from what's in some initialization file).

@tomleslie I suspect that a significant portion of the total real time for this computation is garbage collection.

The kernel can use kernelopts(gcmaxthreads) for parallel memory management, even without use of Threads by the code.

So, if there is much garbage that can be collected in parallel, then gcmaxthreads=4 cores could be in use by gc and numcpus=4 cores by the Threads computation.

This is partly why I suggested running under CodeTools:-Usage with realtime and gcrealtime being reported as partial output.

If the virtual hyperthreads actually interfere with optimal performance of the physical cores for this example (possible, but certainly machine and OS dependent) then one could try setting both the numcpus and gcmaxthreads kernelopts settings to half of the number of physical cores.
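A rough sketch of what I have in mind (work() is just a stand-in name for the actual computation):

    kernelopts(numcpus = 2):        # e.g. half the physical cores, set early in the session
    kernelopts(gcmaxthreads = 2):
    CodeTools:-Usage( work(), output = [realtime, gcrealtime, output] );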

@Ronan You may have misunderstood: nothing in my previous comment was intended to address your example specifically.

I was simply pointing out to Tom that numcpus can actually be set.

I suspect that garbage collection is a significant aspect to this example. I would run it under a single procedure call, wrapped by CodeTools:-Usage so that it returns gcrealtime, realtime, and the result.

One can meaningfully compare gcrealtime and realtime. (Other timing measurements that represent cumulative times across threads cannot be meaningfully compared.)

I believe you if you say that Threads slows it for you with immediate integers but speeds it up with big integers. I haven't run the example.

I'd be tempted to use ArrayTools:-Alias rather than form a whole new set of Arrays.
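A minimal sketch of what Alias does (the Array here is made up, not taken from your worksheet):

    A := Array(1 .. 4, 1 .. 4, (i, j) -> i + j, datatype = float[8]):
    B := ArrayTools:-Alias(A, [1 .. 16]):   # a 1-D view onto A's storage, no copying
    B[1] := 0.0:                            # writes through to A[1, 1]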

@tomleslie 

You can set kernelopts(numcpus) once per kernel session/restart. Usually that has to be done at the very beginning of the session (before anything else is done).
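For example (the value 8 is just a placeholder; use whatever suits your machine):

    restart;
    kernelopts(numcpus = 8):   # set it right at the start of the session
    kernelopts(numcpus);       # query the value now in effect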

There is kernel code which examines the processor specifications and tries to determine the appropriate value for kernelopts(numcpus) automatically, which is the value you see when you first query it.

For some very new CPU chipsets (e.g. Intel Skylake) the Maple facility which examines the processor specs may not be fully up to date. And so -- for example -- it may under-report, initially setting kernelopts(numcpus) to the number of physical cores instead of the number of logical cores (which could include hyperthreading). Or it may under-report by not properly recognizing that hyperthreading is enabled.

On at least one of my recent Linux boxes the initial value of kernelopts(numcpus) is set to the number of physical cores, and some Threads:-Task examples speed up for me if I manually set it to twice that (thus accounting for all logical cores under hyperthreading) at the start of the session.

When hyperthreading was first introduced, years ago, the memory buses and cache access were not handled well, and highly intensive, parallelized code would often do worse if the additional virtual cores also contended for those resources. So on such very old machines Maple sets kernelopts(numcpus) initially to just the number of physical cores. But nowadays the situation has much improved, and quite often there are still measurable gains to be had by allowing the virtual cores to participate, even in previously problematic examples. The current fly in the ointment is that the facility may need updating for spanking new machines -- or manual setting of kernelopts(numcpus) at session start.

On my 64bit Linux Maple 2019.0 I can run this command to see details of what Maple's kernel thinks my machine can support.
   ssystem("processor full");
That runs the `processor` binary from the kernelopts(bindir) subdirectory.

I am not promising that altering kernelopts(numcpus) will improve your particular example. It might even make performance worse. I just thought that I'd tack on these details as commentary to your prior reply, since you wrote that kernelopts(numcpus) could not be changed.

I find that the benefits of Threads:-Task are generally platform dependent.

The use of name n in the example seems unnecessarily confusing, since it is scoped within doTask, a parameter of setupTask, and assigned at the top-level.

@Preben Alsholm It's only documented as accompanying the parametric option.

But it can now sometimes have an effect on things other than just polynomial examples. (I've seen other, less trivial, univariate examples where it helps. But I've also seen examples where it prevents success.)

It bothers me that the solve,details help page is out of date. Another case is univariate trig expressions alongside inequalities, where solve can sometimes now return multiple solutions within a specified finite range. The extra functionality is nice, but it contradicts what the help pages say.
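A hedged illustration of the kind of thing I mean (made up here, and the exact behavior is version dependent):

    solve({sin(x) = 1/2, x >= 0, x <= 2*Pi}, x);
    # in recent versions this can return both x = Pi/6 and x = 5*Pi/6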

@cinderella Since your f(x) doesn't depend on x, the quantities M and N (and Ecost, Dcost, and Val) are univariate polynomials in x.

So fsolve of them can return multiple solutions.
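For instance (with a made-up cubic, just to show the shape of the issue):

    fsolve(x^3 - 6*x^2 + 11*x - 6, x);
                        1., 2., 3.

fsolve returns all the real roots of a univariate polynomial as an expression sequence.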

Your code doesn't do anything to handle that situation.

What do you want to happen when Val has multiple roots?

@Christopher2222 If DownloadMap is a local but not an export of the GoogleMaps module then you could toggle kernelopts(opaquemodules=false) to access it.

You could try it again, in that mode. 

If that works then you might also be able to toggle it off, assign eval of DownloadMap to something (a local of your own module say) and then toggle it back to true. Depends on how much it bothers you to run in that mode. 
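A rough sketch of that second approach (assuming DownloadMap really is a local of GoogleMaps, which I haven't verified):

    kernelopts(opaquemodules = false):
    DM := eval(GoogleMaps:-DownloadMap):   # grab the procedure while module locals are visible
    kernelopts(opaquemodules = true):
    # ...now call DM(...) as needed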

@vv The original question in this thread asked for random permutations of the numbers 1..n for n::posint, and that is handled directly by combinat:-randperm(n), including in Maple 15 as you've noted.

In case anyone's interested, I noticed what could be an improvement in combinat:-randperm for the case of a general list. The performance difference is, naturally, only significant if a very large list is permuted or a small list is permuted very many times.

randperm15.mw

[edit] Of course, the case of a set (which combinat:-randperm supports by returning a list), could be handled separately. For example the map(op,p,N) call in randperm could be replaced by `if`(N::list,N[p],[N[]][p]) .

And I'm aware that the syntax for indexing in list N by permutation list p could also be simply N[p] .
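A quick illustration of that indexing (with a throwaway list):

    N := ["w", "x", "y", "z"]:
    p := combinat:-randperm(nops(N)):   # a random permutation of 1..4
    N[p];                               # N rearranged according to p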

randperm20190.mw

@Racine65 What do you mean by "non-truncated" and "all the components"?

Do you understand what it means for a number to be irrational?

@Racine65 The evalf command will return a floating-point approximation. Any floating-point number can only be a rounded (or truncated) approximation here, since as Carl mentioned the exact value is an irrational number.

restart;

evalf((1/2)!);   # from double-precision evalhf, 16 places correct
                0.886226925452758052

evalf((1/2)!);   # remembered result, rounded to default Digits=10
                     0.8862269255

evalf[30]((1/2)!);
             0.886226925452758013649083741671

evalhf((1/2)!);   # double-precision evalhf, 16 places correct
                0.886226925452758052

I can't see what other form you might want here, other than the exact symbolic sqrt(Pi)/2, a floating-point approximation, or a rational approximation.

(Note to self: it's a quirk that evalf((1/2)!) returns the evalhf result the first time, and then the rounded remembered result when re-executed. I'll submit an SCR.)

@Racine65 I already did that.

convert((1/2)!, GAMMA)

is just one command applied to (1/2)!

Here are some others.

convert(factorial(1/2), elementary);

                         (1/2)*Pi^(1/2)

convert((1/2)!, elementary);

                         (1/2)*Pi^(1/2)

convert(factorial(1/2), GAMMA);

                         (1/2)*Pi^(1/2)

convert((1/2)!, GAMMA);

                         (1/2)*Pi^(1/2)

simplify((1/2)!); # as vv noted

                         (1/2)*Pi^(1/2)

In my experience it can often be productive to focus more on the way that symbolic manipulations occur and precisely how and when they are targeted than on the brevity of the command. But what do I know.

@dharr One of my points was that if one doesn't require the A,B,C expressions then a closer match to the originally requested form can be had with just this (e.g. following your initial parfrac conversion to produce Gc2),

    coeff(Gc2,s,0)*collect(Gc2,s,u->u/coeff(Gc2,s,0));

The OP didn't specify whether he wanted to extract the A,B,C expressions explicitly.

Another point was that the distribution of the A term can be prevented, with an even closer match of the pretty-printed output, using the inert multiplication.
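To show what I mean by the inert multiplication (with a toy expression, not your Gc2):

    expr := 3 %* (1 + 1/s);   # %* prevents the automatic distribution of 3 over the sum
    value(expr);              # gives 3 + 3/s once you actually want the active product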

I understand that you already know all this, but it seems likely that at least some of it would be news to the OP.

The prettier form is displayed if you have the typesetting level set to standard.

You can accomplish that as a GUI setting, or by issuing the command
   interface(typesetting=standard):

By default it is set to extended in Maple 2017 onwards.

restart;

interface(version);

`Standard Worksheet Interface, Maple 2018.2, Linux, November 16 2018 Build ID 1362973`

u := D[1,2,2,3](f)(5,10,15);

(D[1, 2, 2, 3](f))(5, 10, 15)

v := convert( u, Diff ):

interface(typesetting); # default

extended

v;

eval(Diff(f(t1, t2, t3), t1, t2, t2, t3), {t1 = 5, t2 = 10, t3 = 15})

interface(typesetting=standard):

v;

eval(Diff(f(t1, t2, t3), t1, t2, t2, t3), {t1 = 5, t2 = 10, t3 = 15})

 

Download leibnitz.mw

[edit] Moreover, it seems that eval(Diff(f(t1,t2,t3),t1,t2,t2,t3),{t1=5}) may print in the prettier way regardless of the typesetting interface level. It may be that some aspects of all this were overlooked when the change to the new default typesetting=extended was done for Maple 2017.
