acer

Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@goli I was not originally convinced that simplex[convexhull] was the best way to approach your goal.

The problem here is that some portions of your latest set of data points (the ones you hope to connect as if they lay on a nice convex curve segment) are only "close to" being convex.

We can see that the very first stage fails to do what was expected of it.

restart:

ch1 := [[0.5e-2, -.57], [0.1e-1, -.56], [0.25e-1, -.57], [0.35e-1, -.555], [0.8e-1, -.485],
 [0.9e-1, -.42], [.115, -.43], [.125, -.36], [.155, -.365], [.16, -.3], [.17, -.34],
 [.175, -.275], [.195, -.24], [.23, -.24], [.23, -.18], [.245, -.215], [.25, -.145],
 [.275, -.1], [.28, -.155], [.3, -.12], [.31, -0.35e-1], [.315, -0.25e-1], [.32, -0.85e-1],
 [.345, -0.4e-1], [.355, 0.55e-1], [.36, 0.65e-1], [.375, 0.15e-1], [.38, .11], [.39, .135],
 [.405, .175], [.41, .19], [.415, .205], [.425, .11], [.425, .24], [.425, .425], [.425, .43],
 [.425, .435], [.425, .44], [.43, .12], [.43, .265], [.43, .405], [.43, .41], [.43, .44],
 [.43, .445], [.435, .295], [.435, .38], [.435, .385], [.435, .445], [.44, .445], [.445, .445],
 [.45, .445], [.465, .195], [.465, .44], [.48, .43], [.485, .245], [.485, .425], [.49, .42],
 [.5, .29], [.5, .405], [.505, .31], [.505, .395], [.51, .345], [.51, .35], [.51, .355],
 [.51, .36], [.51, .365], [.51, .37]]:

ch11 := simplex[convexhull](ch1,output=[hull]):

ch1b:={op(ch1)} minus {op(ch11)}:

plots:-display( plot(ch11,style=point,symbol=solidbox,symbolsize=10,color=cyan),
                plot(ch11,color=cyan),
                plot(ch1b,style=point,symbol=solidbox,symbolsize=10)
               );

You should be able to see the problem points in the above plot. The problem is that the `convexhull` command does not accept a tolerance option to denote how far off a point can be while still being acceptable. That's unfortunate.

Did you round the data for those points? If you rounded, and if you are lucky, then there may be more success with the unrounded data. For example the point [.17, -.34] might originally have been closer to [.17, -.3401] or so.

If retaining more decimal digits in the data doesn't cure this main issue then it still may be possible to write a more careful piece of (involved) code. It might fit a curve to the computed hull segment, and then find out which points are "close enough". And then adjust the set that was the computed hull. And so on.
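For illustration only, and in Python rather than Maple, the "close enough to the hull" idea might be sketched as follows. The hull routine here is a hand-rolled monotone chain, and the tolerance value is a made-up parameter, not anything `convexhull` offers:

```python
import math

# Illustrative sketch only (Python, not Maple): classify points that lie
# within a tolerance of the convex hull of a 2D point set.

def cross(o, a, b):
    # z-component of (a-o) x (b-o); positive for a counter-clockwise turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    # Andrew's monotone-chain algorithm; returns hull vertices in CCW order
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def dist_to_segment(p, a, b):
    # distance from point p to the segment from a to b
    dx, dy = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / ((dx * dx + dy * dy) or 1.0)
    t = max(0.0, min(1.0, t))
    return math.hypot(p[0] - (a[0] + t * dx), p[1] - (a[1] + t * dy))

def near_hull(pts, tol):
    # points within tol of some hull edge; candidates to merge into the hull
    hull = convex_hull(pts)
    edges = list(zip(hull, hull[1:] + hull[:1]))
    return [p for p in pts
            if min(dist_to_segment(p, a, b) for a, b in edges) <= tol]
```

For example, with a tolerance of 0.05, a point like (0.5, 0.01) sitting just inside the bottom edge of the unit square would be accepted, while a deep interior point would not.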

Sorry, I don't have spare time to write that.

@tfr@nanophysics.dk I think that you have misunderstood. The first task is to decode the base64 and the dotm encodings of the 2D Math represented as Typesetting calls. Those two decoding steps, the first steps above, would have to be done if you want to get at the code in any way. And they do work. But they decode to an internal representation of 2D Math (a non-plaintext Maple language) marked up in Typesetting code.

Yes, of course it does not work to print the Typesetting form (of 2D Math) in a 1D environment. I only included that `print` call to demonstrate that the initial decoding steps had in fact worked to obtain a necessary intermediate form.

And then I subsequently added a Comment, illustrating how the 2D Math Typesetting representation might further be parsed and even finally evaluated/interpreted/executed as 1D Maple Notation code. And that worked too. But perhaps you had not yet read and digested it before you drew a conclusion about the purpose and success of the first steps.
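For anyone following along outside of Maple: the first of those decoding steps is plain base64, which can be sketched generically. The regex and element layout below are assumptions for illustration, not the actual .mw worksheet schema:

```python
import base64
import re

# Generic sketch (not the actual .mw schema): pull base64-looking payloads
# out of worksheet-like XML text and decode them to bytes. What those bytes
# contain (here, dotm-encoded 2D Math) still needs its own decoding step.

def decode_blobs(xml_text):
    blobs = re.findall(r'>([A-Za-z0-9+/]{16,}={0,2})<', xml_text)
    return [base64.b64decode(b) for b in blobs]
```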

Your solution is pretty much the same as mine: try to handle the remaining points also with convexhull, and then try to join the pieces.

In general this may get tough. The posted example is next to trivial, and doesn't do justice to the potential difficulty. Someone will have a collection of points to be joined as a collection of many pieces, with inflections.

But I am wondering where these points came from. Isn't the submitter also asking questions about DEs? If these come from numerically solving them, or from simulating, then it might be that DEplot or odeplot can handle the problem directly (and so avoid all this effort of figuring out the spacecurve discretized as an ordered sequence of points).

acer

A few more ideas, for manipulating this result. (This is crude. You'd likely want it much more bulletproof and sophisticated. And it'd likely end up much different, if done carefully and properly.)

Continuing from the above...

> H:=Typesetting:-Parse(op(fromdotm)):

> HH:=remove(type,
>        StringTools:-Split(convert(op(-1,eval(H,1)),string),"1;"),
>        identical(" ",""));

                     ["A := 4", " B := 7"]

> seq(eval(parse(t)), t in HH):

> A, B;
                              4, 7

I'm not sure that I understand what part you plan for Python in this. If you were to implement the entire scripting process using commandline Maple itself then you might more easily utilize (or get more ideas from) the exports of the `Worksheet`, `XMLTools`, and `Typesetting` packages.

[Also, some curiosities of scripting with commandline Maple.]

acer
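As a generic analogy for the split-and-parse step above (plain Python, nothing Maple-specific; the separator and integer-valued assignments are assumptions for illustration):

```python
# Generic analogy of splitting recovered statements and evaluating them:
# split on a separator, drop empty pieces, and parse "name := value" pairs.

def parse_assignments(text, sep=";"):
    result = {}
    for stmt in text.split(sep):
        stmt = stmt.strip()
        if not stmt:
            continue
        name, _, value = stmt.partition(":=")
        result[name.strip()] = int(value)
    return result
```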

Right. Good job. For a few major releases now, non-evalhf'able commands can be executed within an evalhf'd procedure if they are wrapped in an `eval` call.

The (faster) evalhf callback from the openmaple API may be usable, then, with this trick. But the evaluation of the particular piece inside that extra `eval` is not actually interpreted under evalhf. The extra `eval` is behaving like a temporary escape from evalhf back to Maple's regular interpreter.

Exceptions to this behaviour include module member calls, like A:-B(blah), for which evalhf will still complain. One way to get around that is to instead call eval(H(blah)) where H is another procedure which itself calls A:-B.
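The pattern is like a restricted fast evaluator with an explicit escape hatch back to the general one. A loose analogy in Python follows; none of this is Maple's actual mechanism, just the shape of the idea:

```python
import math

# Loose analogy only: a "fast mode" that knows a few operations, plus an
# escape callback that hands anything else back to a general evaluator,
# much as the extra eval() hands a call back to Maple's regular interpreter.

FAST_OPS = {"sin": math.sin, "cos": math.cos, "exp": math.exp}

def fast_eval(name, x, escape=None):
    if name in FAST_OPS:
        return FAST_OPS[name](x)      # handled in the restricted fast path
    if escape is not None:
        return escape(name, x)        # temporary exit from the fast path
    raise ValueError(name + " is not supported in fast mode")
```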

acer

@icegood As far as I know, the `LerchPhi` command is not evalhf'able.

As an illustration of my points, here is a comparison of two ways to get a 4th derivative as a procedure or operator.

The first way, obtaining operator `Fxxxx`, is quite similar to what you do in your worksheet. The alternative way involves just using the `diff` command, with `unapply` used on the result just once.

The alternative way, obtaining operator `otherFxxxx`, takes about 1000 times less time to produce the derivative operator, and evaluates that numerically at a point about 100 times faster than does the original way's `Fxxxx` operator.

restart:

expr:=1/7*sin(5/7*x+exp(3/7*x))/exp(2/7*(1/7*sin(5/7*(1/11*sin(5/11*x
      +exp(3/11*x))/exp(2/11*((1/11*sin(5/11*x+exp(3/11*x))/exp(2/11*x)))
      +1/7*ln(3/7*x)))))+1/7*ln(3/7*x)):

F:=unapply(expr,x):

Fx:=CodeTools:-Usage( unapply(evalf(simplify(diff(F(x),x))),x) ):
memory used=13.92MiB, alloc change=11.37MiB, cpu time=249.00ms, real time=244.00ms

Fxx:=CodeTools:-Usage( unapply(evalf(simplify(diff(Fx(x),x))),x) ):
memory used=62.31MiB, alloc change=29.74MiB, cpu time=780.00ms, real time=771.00ms

Fxxx:=CodeTools:-Usage( unapply(evalf(simplify(diff(Fxx(x),x))),x) ):
memory used=0.75GiB, alloc change=47.87MiB, cpu time=11.08s, real time=11.09s

Fxxxx:=CodeTools:-Usage( unapply(evalf(simplify(diff(Fxxx(x),x))),x) ):
memory used=7.03GiB, alloc change=363.68MiB, cpu time=4.75m, real time=4.76m

otherFxxxx:=CodeTools:-Usage( unapply((diff(expr,x,x,x,x)),x) ):
memory used=328.11KiB, alloc change=0 bytes, cpu time=0ns, real time=4.00ms

CodeTools:-Usage( Fxxxx(2.3) );
memory used=89.73MiB, alloc change=0 bytes, cpu time=1.37s, real time=1.38s

                         -0.8163268164

CodeTools:-Usage( otherFxxxx(2.3) );
memory used=1.55MiB, alloc change=0 bytes, cpu time=16.00ms, real time=19.00ms

                         -0.8163268116

The alternative way is so fast that it could also be used to produce a separate operator for each of the 1st, 2nd, 3rd, and 4th derivatives. But each subsequent derivative would be produced by applying `diff` to the previous expression rather than to function applications, and there would be no dubious combined `simplify` and `evalf` actions going on. Keeping each of the four derivatives, and producing four operators, should be only about four times slower than the alternative shown above, not one thousand times slower.
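The underlying principle (keep each derivative as an expression, and build a callable only once at the end) can be shown in a toy Python setting, with polynomials represented as coefficient lists (lowest degree first):

```python
# Toy analogy, not Maple: differentiate a polynomial four times as *data*
# (a coefficient list), and only build a callable from the final result.

def poly_diff(coeffs):
    # derivative of sum(c[i]*x^i): shift down and scale by the exponent
    return [i * c for i, c in enumerate(coeffs)][1:]

def poly_func(coeffs):
    # one conversion from expression (list) form to a callable, done last
    return lambda x: sum(c * x**i for i, c in enumerate(coeffs))

c = [0, 0, 0, 0, 1]          # x^4
for _ in range(4):
    c = poly_diff(c)          # stay in expression form at every step
f4 = poly_func(c)             # 4th derivative of x^4 is the constant 24
```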

You can mess around with option `numeric` on all your original `unapply` calls. But I believe that the approach is still fundamentally misguided.

Especially unfortunate is using `simplify` on a symbolic expression which contains floating-point coefficients, as this often tends to get the opposite effect and produce a much longer expression rather than a simpler one. But for your example, with all those LerchPhi calls, it might even be just `simplify` alone which is the biggest problem.

Here is generation of distinct operators for all the derivatives from 1st to 4th, but without the evalf@simplify,

restart:

expr:=1/7*sin(5/7*x+exp(3/7*x))/exp(2/7*(1/7*sin(5/7*(1/11*sin(5/11*x
      +exp(3/11*x))/exp(2/11*((1/11*sin(5/11*x+exp(3/11*x))/exp(2/11*x)))
      +1/7*ln(3/7*x)))))+1/7*ln(3/7*x)):

otherFxxxx:=CodeTools:-Usage( unapply((diff(expr,x,x,x,x)),x) ):
memory used=386.71KiB, alloc change=255.95KiB, cpu time=0ns, real time=8.00ms

st:=time():
F:=unapply(expr,x):
Fx_expr:=diff(F(x),x):
Fx:=unapply(Fx_expr,x): # don't create, if not to be used
Fxx_expr:=diff(Fx(x),x):
Fxx:=unapply(Fxx_expr,x): # don't create, if not to be used
Fxxx_expr:=diff(Fxx(x),x):
Fxxx:=unapply(Fxxx_expr,x): # don't create, if not to be used
Fxxxx_expr:=diff(Fxxx(x),x):
time()-st;
                             0.015

Fxxxx:=CodeTools:-Usage( unapply(Fxxxx_expr,x) ):
memory used=512 bytes, alloc change=0 bytes, cpu time=16.00ms, real time=3.00ms

CodeTools:-Usage( otherFxxxx(2.9) );
memory used=1.57MiB, alloc change=1.25MiB, cpu time=21.00ms, real time=22.00ms

                          -3.567059374

CodeTools:-Usage( Fxxxx(2.9) );
memory used=1.52MiB, alloc change=1.25MiB, cpu time=16.00ms, real time=17.00ms

                          -3.567059374

acer

@Samir Khan Thanks!

(Grist for the efficiency mill.)

Please forgive me if I've missed it in some hidden code block or region in that uploaded worksheet, but I don't see any code to reproduce these images in Maple.

I don't see how anyone could properly assess whether Maple can produce such plots or images using a reasonable amount of time and memory resources, without source code or a worksheet which successfully reproduces the results.

acer
