acer


MaplePrimes Activity


These are replies submitted by acer

@tomleslie Your #2 is not so strange. It's only the calls to Units:-Unit themselves which have such attributes, and not the products or arithmetic expressions which contain them.

restart:

A:=1*Units:-Unit(m):
B:=2*Units:-Unit(m):
C:=1.0*Units:-Unit(m):

lprint(A);

Units:-Unit(m)

map(`[]`@attributes,[indets(A,specfunc(Units:-Unit))[]]);

[[Units:-UnitStruct(1, metre, contexts:-SI), inert]]

lprint(B);

2*Units:-Unit(m)

map(`[]`@attributes,[indets(B,specfunc(Units:-Unit))[]]);

[[Units:-UnitStruct(1, metre, contexts:-SI), inert]]

lprint(C);

1.0*Units:-Unit(m)

map(`[]`@attributes,[indets(C,specfunc(Units:-Unit))[]]);

[[Units:-UnitStruct(1, metre, contexts:-SI), inert]]

F:=2.3*K*Unit(sec^2/mile)+1.7*Unit(day)/(Unit(m)+Unit(cm));

2.3*K*Units:-Unit(('s')^2/('mi'))+1.7*Units:-Unit('d')/(Units:-Unit('m')+Units:-Unit('cm'))

lprint(F);

2.3*K*Units:-Unit(s^2/mi)+1.7*Units:-Unit(d)/(Units:-Unit(m)+Units:-Unit(cm))

map(`[]`@attributes,[indets(F,specfunc(Units:-Unit))[]]);

[[Units:-UnitStruct(1/100, metre, contexts:-SI), inert], [Units:-UnitStruct(1, day, SI), inert], [Units:-UnitStruct(1, metre, contexts:-SI), inert], [Units:-UnitStruct(1, second, SI)^2/Units:-UnitStruct(1, mile, standard), inert]]

 


Download NoSoStrange.mw

Commands such as Units:-Standard's `+` and its `*` separate the Units:-Unit calls from their coefficients, in order to convert to SI base units and combine them. It's when they look for these attributes on the Units:-Unit calls themselves that the behavior you noticed arises.
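As a minimal sketch of that mechanism, loading Units:-Standard rebinds `+` and `*` so that mixed units are converted and combined automatically (output not shown; the particular operands here are just illustrative):

```
restart;
with(Units:-Standard):
# Under Units:-Standard, `+` inspects the Units:-Unit calls'
# attributes, converts both operands to a common (SI base) unit,
# and combines them into a single term.
1.0*Unit(m) + 50*Unit(cm);
```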

 

@AmusingYeti I didn't really show how to get faster performance in the case that the working precision for the evaluations of the inner integrand has to be greater than 15 (ie, trunc(evalhf(Digits))). It just happens that a higher working precision does not seem to be required for your example.

In fact the _cuhre method does just as well, if not better, at Digits=15. That's understandable, as its purpose is to avoid the hit of doing iterated single-integrals numerically. (That difference can get more severe as the number of dimensions gets higher.) I illustrate this in the attached worksheet.

So if you encounter an example for which Digits>15 is in fact required to keep the roundoff error of the inner integrand down, let us know. It's trickier to set up, but often there is something that can be done even then.

L_sum_int_3.mw

A few guidelines may help:

1) evalf(Int(...)) offers controls for both working precision and target accuracy (tolerance), via its `digits` and `epsilon` options. These are often useful, especially if you need to use increased working precision to obtain only a coarser result.

2) Using an operator for the integrand can help prevent evalf@Int from expanding the integrand (as an expression). It doesn't like a mix of operators and expressions between its nested calls, though, hence the somewhat convoluted use in my code of the `eval` and `unapply` commands in the inner integral form.

3) When using the `epsilon` option in a (nested) multiple-integral case it is usually necessary to ensure that the inner integral's accuracy target is finer than that of the outer integral. This is because the outer scheme needs results that are stable in a few more digits, so that it can ascertain that it is succeeding. The degree to which the inner integral's `epsilon` tolerance needs to be finer (smaller) than the tolerance one level higher can be problem specific -- for this example it seems that it needs to be at least two or more factors of 10 finer.

4) Avoid calling `simplify` with no options on expressions that contain floating-point coefficients (and radicals, or those and fractions of polynomials, etc) because it can introduce coefficients with much higher powers of 10 (and their purported canceling factors, elsewhere). This can sometimes make roundoff issues more severe, perhaps even leading to a false impression that higher working precision is strictly necessary.
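Putting guidelines 1) through 3) together, a nested double-integral might be set up along these lines (the integrand here is hypothetical, not the one from the worksheet; note the inner epsilon two factors of 10 finer than the outer):

```
restart;
# hypothetical integrand, for illustration only
f := (r1, r2) -> exp(-r1^2*r2^2)/(1 + r1 + r2):
# inner integral as an operator in r2 (keeps evalf@Int from
# expanding an expression form), with a finer tolerance
inner := r2 -> evalf(Int(r1 -> f(r1, r2), 0 .. 1, epsilon = 1e-12)):
# outer integral, with a coarser tolerance
evalf(Int(inner, 0 .. 1, epsilon = 1e-10));
```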

With those guidelines, it can often help to have the mindset that the problem Can Indeed Be Done Quickly, and that one's task is to Find The Way. Sometimes this means that if the computation appears to be running slowly, one must interrupt and adjust. Often the adjustment involves finding the balance between inner and outer tolerances, or the digits vs epsilon balance at a single level. And repeat...

It can also help to take an instance of the inner integral at a particular value of the variable coming from the outer level, to test whether the inner integral computes quickly enough given that fixed value. Eg, given a numeric value for `r2`, how does the inner integration perform under its devised scheme?

I rather doubt that the reported result in the sheet is very accurate, for at least two reasons: 1) it is dependent on using 35 partitions in the boole method, and 2) the raw `simplify` command can introduce very large and very small exponents (base 10) in coefficients of the altered integrand. Matching very many digits of the originally reported result doesn't appear to be an especially good criterion for success, though I'd be glad to hear why someone considers it much more accurate.

What is the degree of accuracy that you want? It's a key detail.

You may have raised Digits to help deal with roundoff error during evaluations of the integrands. But it may be that you don't want nearly such a tight accuracy tolerance as anything like 10^(-30).

Your method using the Quadrature command (from a package) gives a result like 0.26683977722550432669157299491441 when using Digits=32 with the `boole` method and partitions=35. But I suspect that only about ten decimal digits of that are correct. Either that accuracy is unacceptable for you (in which case you need to take another tack), or it is acceptable. But there are faster ways to get ten correct decimal digits for your given example. So I am also curious as to why you are using the Quadrature command.

acer

@Bendesarts Create it somewhere else other than the Maple installation location! You should not be experimenting with creating/modifying/deleting files there, at risk of breaking something. (Trying all this with OS admin privileges would be even worse.)

Just make a folder somewhere else. Some new folder under your Documents, say. You don't need to adjust libname in order to build the module and save it to the .mla archive, and in fact it's safer if you don't. Just make a new folder, and assign `path` accordingly.

You only need to adjust `libname` if you want to be able to call and test and run the exports of your module after `restart`, when you aren't building the .mla or executing the code that defines the module.

Your process would be like this:

Step 0, create the archive file

path:=cat(myfolder, "/mypackage.mla");
LibraryTools:-Create(path);

Step 1, build the module and save it

mypackage := module()
                ...
             end module:
path:=cat(myfolder, "/mypackage.mla");
LibraryTools:-Save(mypackage, path);

Step 2, restart and test or use it

restart;
libname:=cat(myfolder, "/mypackage.mla"), libname;
with(mypackage);

Did you first execute a call to LibraryTools:-Create to construct that .mla file?

You don't need to do it before each call to LibraryTools:-Save, but you need to have done it once.

The argument you pass could be the same as the second argument to your call to LibraryTools:-Save.

Eg,

fn := cat(libname[1], "/TrigoTransform.mla");

LibraryTools:-Create(fn);

LibraryTools:-Save(TrigoTransform, fn);

acer

Have you considered generating images in Maple, rather than plots, and then exporting them as image files?

Even if you decide to use plots rather than images, then why polygons instead of point-plots, if you have on the order of 10^5 or 10^6 individual items to render together?
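For instance, a point-plot of 10^5 random points renders far more cheaply than the corresponding polygon structures would (a hypothetical sketch; the data here is random rather than your actual items):

```
restart;
# 10^5 random 2-D points stored compactly as a float[8] Matrix
pts := LinearAlgebra:-RandomMatrix(10^5, 2, generator = 0.0 .. 1.0,
                                   datatype = float[8]):
# style each item as a single pixel-like point, not a polygon
plots:-pointplot(pts, symbol = point);
```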

acer

@Scot Gould ... your point is what, then?

@DSkoog That's nice.

It might be nicer still if Export[interactive] could be given the ability to process extra arguments, such as plotoptions or, say, height only (or if it would pick them up from plotsetup()).

I notice that the maplet plotdevice won't pick up the height or width that have been specified using plotoptions, and use them to fill in either of the Dimensions textfields. Eg,

plotsetup(maplet,plotoptions="height=200");

Also, the Export[interactive] command reinstates the prior plotdevice, but it wipes out the prior plotoptions value in doing so (even if it is cancelled without an actual export). It could be made gentler, so as to reinstate more of the prior plotsetup conditions (as my code does, I hope). Currently I see it do this instead,

plotsetup(jpeg,plotoptions="height=200");
Warning, plotoutput file set to `plot.jpg`

plotsetup();
       preplot = [], postplot = [], plotdevice = jpeg, 

         plotoutput = plot.jpg, plotoptions = height=200

Export[interactive](plot(sin(x),x=0..2*Pi));
Warning, plotoutput file set to `plot.jpg`

plotsetup();
        preplot = [], postplot = [], plotdevice = jpeg, 

          plotoutput = plot.jpg, plotoptions = 

While Export[interactive] seems to provide all the interactive aspects that the OP mentioned, it might still be nice to have also some maplet variant which closes down immediately after export, and perhaps which did not itself render the plot visually.

The GUI will run out of memory or blow the Java heap before it can display all of your 200,000 plots.

But this looks like 2D Math input, inside a paragraph in a Document (as opposed to being in an execution group or a Worksheet, say). If that's right then you won't see any results displayed for that as a collapsed Document Block, from either print or printf, until the entire outer loop completes. And then it'll try to render them all, in one shot.

How many of these do you really want to be able to view at any one time? One? Ten? I suggest you figure that out and then we could suggest other methodologies such as using Embedded Components, smaller sequence of plots as displayed as animation(s), using Explore, etc.

acer

@jbuddenh Sorry, I meant to say Shift-Enter, not Ctrl-Enter. I'm in a Friday afternoon fog.

@taro No, his original problem was with the creation of an operator, to pick up the right instance of x. And as you too showed, unapply provides the desired functionality for that aspect.

Assignment to f(x) is not key to resolving this, and is an unnecessary complication that muddies the issues. It also gets in the way of `f` also being made an operator.

restart;                                                                       

sol:=dsolve({f(0) = 12, diff(f(x), x) = 2*x+6});                               
                                                               2
                                                sol := f(x) = x  + 6 x + 12

rhs(sol);  # reasonable                                                        
                                                        2
                                                       x  + 6 x + 12

eval(f(x), sol); # reasonable                                                  
                                                        2
                                                       x  + 6 x + 12

assign(sol): f(x); # unreasonably unnecessary assignment to remember table of f
                                                        2
                                                       x  + 6 x + 12
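To get an actual operator from the dsolve result, rather than a remember-table entry, unapply is the natural route (the name F below is my own choice, so as not to collide with the unknown function f in the solution):

```
restart;
sol := dsolve({f(0) = 12, diff(f(x), x) = 2*x + 6});
F := unapply(rhs(sol), x);   # F is a genuine operator
F(2);                        # evaluates x^2 + 6*x + 12 at x = 2
```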

@Doug Meade In the 64bit Linux version of each of Maple 17.00, 2015.2, and 2016.0 I am seeing,

seq( int( abs( cos(n*x) ), x=0..Pi ), n=1..24 );

                2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2

Nasser Abbasi's website contains his third-party runs of suites of examples of symbolic integration and symbolic differential-equation solving, and may be of general interest to members of this community.

He has assembled results and compared performance of various recent versions of Maple and Mathematica, including timing and memory use as well as general pass-fail results. Source code is provided for the various problems.

For differential equations I find Nasser's results interesting in several respects. One such aspect is that he provides at least two full comparison sets, one with Maple 18.02 and Mathematica 10.0.2, and another with Maple 2015.2 and Mathematica 10.3.1. Another is that he provides both mean and total computation times and leaf-counts (size). And another aspect I find useful is that he provides individual problem results, as well as source code. The 1940 different examples are taken from Kamke's book, Differentialgleichungen, 3rd ed., and the tables and examples listed on Nasser's site make it clear that this is a significantly broad collection of problems in multiple classes. I find it remarkable that not only does Maple outperform Mathematica on the pass-fail success rate (92% vs 76%) and compactness (size) of results, but Maple is reported as doing it much faster on average, with a mean CPU time of 0.5 sec for Maple 2015.2 vs 28.3 sec for Mathematica 10.3.1.

For integration Nasser has used several collections of problems. On the one hand his results comparing Maple 2015.1 and Mathematica 10.1 show only pass-fail numbers rather than the optimal/non-optimal/fail numbers that Albert Rich has shown (for older versions). But here too Nasser has assembled mean and total values for both computation time and leaf-count (size). According to these sites Maple's combined optimal/non-optimal pass rate has increased slightly from 85.1% in Maple 18 (A. Rich) to a total pass rate of 88.3% in Maple 2015.2 (Nasser). The Mathematica total pass rate was reported by both at approximately 97%. The average size of the Maple results (Nasser) is much larger than that of Mathematica, while Maple computes them much faster. This all indicates to me that Maple can afford to use some additional computation time to work harder (change-of-variables attempts, say) to find solutions, as well as to simplify results (w.r.t. size, if not via combine).

Nasser recently wrote that he would try to find time to repeat these comparisons using Maple 2016. Full results using Mathematica 10.4.x would be interesting.

acer
