acer

MaplePrimes Activity


These are replies submitted by acer

I will submit a report.

kernelopts(version);

`Maple 2022.0, X86 64 LINUX, Mar 8 2022, Build ID 1599809`

restart;

with(Units:-Simple):

a := (x, y) -> exp(y*x):

lprint(eval(a));

(x, y) -> Units:-Simple:-exp(Units:-Simple:-`*`(y,x))

dismantle(eval(a));


PROC(11) #[operator, arrow]
   EXPSEQ(3)
      NAME(4): x
      NAME(4): y
   EXPSEQ(1)
   EXPSEQ(3)
      NAME(5): operator
      NAME(4): arrow
   EXPSEQ(1)
   FUNCTION(3)
      MEMBER(3)
         NAME(4): Units:-Simple #[modulename = Units, protected]
         NAME(4): exp #[protected, _syslib]
      EXPSEQ(2)
         FUNCTION(3)
            MEMBER(3)
               NAME(4): Units:-Simple #[modulename = Units, protected]
               NAME(4): `*` #[protected]
            EXPSEQ(3)
               PARAM(2): [2]
               PARAM(2): [1]
   EXPSEQ(1)
   EXPSEQ(1)
   EXPSEQ(1)
   BINARY(2)
      0x2
   EXPSEQ(3)
      LIST(2)
         EXPSEQ(6)
            STRING(4): ""
            INTNEG(2): -1
            INTPOS(2): 4
            INTPOS(2): 21
            INTPOS(2): 36
      LIST(2)
         EXPSEQ(6)
            STRING(4): ""
            INTNEG(2): -1
            INTPOS(2): 4
            INTPOS(2): 12
            INTPOS(2): 36
 

D[1](a);

D[1](a)

restart;

with(Units:-Simple):

a := proc (x, y) options operator, arrow; exp(y*x) end proc

lprint(eval(a));

(x, y) -> Units:-Simple:-exp(Units:-Simple:-`*`(y,x))

dismantle(eval(a));


PROC(11) #[operator, arrow]
   EXPSEQ(3)
      NAME(4): x
      NAME(4): y
   EXPSEQ(1)
   EXPSEQ(3)
      NAME(5): operator
      NAME(4): arrow
   EXPSEQ(1)
   FUNCTION(3)
      NAME(4): Units:-Simple:-exp #[protected, modulename = Units:-Simple]
      EXPSEQ(2)
         FUNCTION(3)
            NAME(4): Units:-Simple:-`*` #[protected, modulename = Units:-Simple]
            EXPSEQ(3)
               PARAM(2): [2]
               PARAM(2): [1]
   EXPSEQ(1)
   EXPSEQ(1)
   EXPSEQ(1)
   BINARY(2)
      0x2
   EXPSEQ(1)
 

D[1](a);

Error, (in anonymous procedure called from PD/PD) too many levels of recursion

 

Download problem_etian2_rep.mw

@Chorux Sorry, I left in a reference to another list I'd made. That should now be corrected, above.

Also, the code snippet $nops(L) may get rejected by the 2D parser, so I changed it to $1..nops(L), which should work in both 1D and 2D input modes.
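For example (a sketch, with a placeholder list L), the range form works at the top level, and seq offers an equivalent:

L := [a, b, c]:

$1..nops(L);

1, 2, 3

seq(1..nops(L));

1, 2, 3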

@Rouben Rostamian  I saw it on the Maple 2022.0 Help page for the topic plots:-display.

     redraw : boolean; specifies whether to allow redrawing of static
     2-D plots by combining the original plot calls into a single call
     rather than just displaying them together

I had previously observed a (hopefully rare, and now reported) case in which that new default of redraw=true caused a slowdown. So I was lucky to have had a little head start here.

Mentally I lump it in with explicitly passing the old default adaptive=true option to plot, which can help avoid some new slowdowns due to the adaptive=geometric default in Maple 2022.0. Hopefully these things will be ironed out in 2022.1.
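For example (a sketch; the particular curves here are just placeholders), both older behaviors can be requested explicitly:

p1 := plot(sin(x), x = 0 .. 2*Pi, adaptive = true):  # old default sampling scheme

p2 := plot(cos(x), x = 0 .. 2*Pi, adaptive = true):

plots:-display(p1, p2, redraw = false);  # display together, without the new redraw combining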

@vv I believe that the OP also wants this effect:

ff := (x/3-2/3)^k;

((1/3)*x-2/3)^k

expand(simplify(content(ff)))*simplify(ff/content(ff));

(x-2)^k/3^k

And someone might have an easier means of attaining that.

@Christian Wolinski It appeared in Maple 2019.

See updates/Maple2019/Language.

I believe that I am making some progress, although I have to overcome the hurdle of the Control and Info arguments (structs, say) in the UMFPACK solver. But if I get it right then it could be exciting.

@C_R Please don't spawn off a separate Question thread for this (highly related, if not essentially duplicate) query, even if you haven't yet received a response here that satisfies you.

It runs quickly for me in Maple 2020.1 and 2020.2 for Linux.

@tomleslie This does not correspond to the provided definition.

Why do you want to write your own root-finding implementation for this, instead of using fsolve?
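For example (a generic equation as a stand-in, not the actual problem here):

fsolve(x^3 - 2*x - 5 = 0, x = 1 .. 3);

2.094551482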

@pik1432 You should add this as followup to one of your earlier Question threads on the same underlying goal. By splitting this across several Question threads you unhelpfully split the details of the goal and the previous responses.

If you insist on spawning multiple Question threads on this topic you could at least use the Branch button from one of the earlier Question threads, so that cross-referencing links are included at both ends.

It would also be much better if you showed your expected result (lprinted, or as an actual expression identical to what you want), not as an image.
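For example, lprint produces a plain-text form of an expression that others can copy and paste directly (a simple stand-in expression here):

lprint(x/3 - 2/3);

1/3*x-2/3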

@charlie_fcl 

One way is to programmatically embed an animation inside a Plot Component, and have it automatically start playing as part of the programmatic statement that generates and embeds it. I believe that I had an example of that in the attachments in this old Answer.
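A minimal sketch of that approach, assuming the worksheet already contains a Plot Component whose identity is "Plot0" (that name is just a placeholder):

with(DocumentTools):

anim := plots:-animate(plot, [sin(a*x), x = -Pi .. Pi], a = 1 .. 2):

SetProperty("Plot0", 'value', anim, 'refresh');

SetProperty("Plot0", 'play', true, 'refresh');  # start the animation playing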

Two other ways are to use the Explore command (though by itself this does not usually create the kind of plot that can be exported to an animated .gif file); see animate_auto.mw

Your attached worksheet Q20220425.mw is empty.

Great. After I mentioned parallelization of Jac1, I realized that it was generating a Matrix with storage=sparse, and that it would likely be more important to construct it via a compressed form (a triple of Vectors). Your second worksheet appears to be working along those lines, so that's a place to focus. For your earlier sheet I'd noticed that the cost of construction by Jac1 was even more important than that of the linear-solving as NN grew.

For your first worksheet my machine takes 25.7 sec for NN=100 (dim of j00=10404, with WEN03).
After removing the overhead of calling UMFPACK's direct solver, my machine takes 22.7 sec for the same NN=100. Phasefield2DBaseCodeParametric_ac0.mw
Running your 2nd worksheet, my machine takes 11.3 sec for a similar ntot=100.

I would like to look closely at construction of the sparse Matrix ("jacobian"), especially via direct compressed formats.

I'd like to look at replacing both hw_PardisoFactor and hw_SpUMFPACK_MatFactor with some custom wrapper revisions that might allow reusing preconditioner/"factorizer" construction.

I have a hazy recollection of once reading that a sparse iterative approach might begin to beat a sparse direct approach (say, for the kind of block-banded solve that comes up so often in these kinds of pde problems) as the matrix dimension grows above about 10k-by-10k. It would be interesting to investigate any such possible cross-over. But that would of course require using the very same "jacobian" construction for both. And ideally that would also need optimal "re-use" for both, as mentioned.
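For reference, both approaches can be invoked through LinearSolve's method option (a sketch; the symmetric tridiagonal Matrix here is just a stand-in for the actual jacobian):

with(LinearAlgebra):

n := 1000:

A := Matrix(n, n, {seq((i, i) = 4.0, i = 1 .. n),
                   seq((i, i+1) = -1.0, i = 1 .. n-1),
                   seq((i+1, i) = -1.0, i = 1 .. n-1)},
            storage = sparse, datatype = float[8]):

b := Vector(n, fill = 1.0, datatype = float[8]):

x1 := LinearSolve(A, b, method = 'SparseDirect'):     # direct

x2 := LinearSolve(A, b, method = 'SparseIterative'):  # iterative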

I might get some time, starting tomorrow evening.

note: Out of curiosity, your 2nd worksheet was apparently saved last by you in Maple 2021.1, and you use Linux IIRC, and it contains a define_external to hw_PardisoFactor. Did you run it in 2022.0 or actually in 2021.1?

note: If I have to compile any new custom wrapper would it be ok (say, to start with) if I supplied a linked .so file (Linux) rather than C source?

There looks to be a real possibility for improvement.

In the best-case scenario the common sparsity pattern of the Matrix j00 might be leveraged, by reusing the "symbolic" analysis generated by UMFPACK. But this may require effort, as Maple's pre-built external wrapper function for that initial stage returns only handles (to objects in memory) which might not be re-usable in that way. A new hand-crafted wrapper might be constructed, calling umfpack_di_symbolic once and then umfpack_di_numeric for each new Matrix.
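A rough Maple-side shape for that idea (a hypothetical sketch: UMFSymbolic, UMFNumeric, UMFSolve, niter, u, and bvec are all invented placeholder names, and the wrappers would have to be built with define_external against umfpack_di_symbolic, umfpack_di_numeric, and umfpack_di_solve):

Sym := UMFSymbolic(j00):            # analyze the sparsity pattern, once

for iter from 1 to niter do
    j00 := Jac1(u):                 # new numeric entries, same pattern
    Num := UMFNumeric(j00, Sym):    # cheaper per-iteration numeric factorization
    x := UMFSolve(Num, bvec):
end do: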

That could be quite beneficial if the three linear-solving computations altogether consume about 2/3 of the time for each iteration at that NN=100.

On the other hand, a certain amount of Maple overhead might be cut out, though the savings there might not amount to much: perhaps 20% of the total time, at that size.

But if the problem size is increased then it looks as if some of the other (Compiled, custom) procedures such as Jac1 might benefit relatively more from improvement -- possibly including parallelization.

So here are three ideas: re-usable factorization, reduction of Library call overhead, and parallelization of the custom procedures. I can see definite paths for the latter two.
