acer
MaplePrimes Activity


These are replies submitted by acer

This is a well-worded explanation of a good general approach.

The waste of resources that can occur when `plot` receives an unevaluated `int` call as its first argument should not be ignored, in general. As Carl mentioned, Maple may attempt in vain to compute the integral symbolically. But we can make some observations about when that can occur.

Whether such a fruitless attempt at symbolic computation of the integral occurs can depend on where the dummy plotting variable appears within the `int` call.

If the dummy plotting variable occurs only in the integrand then Maple may (depending on the particular example) waste a very large amount of resources re-trying to compute the integral symbolically. A separate attempt at symbolic integration can occur for every substituted floating-point value of the independent dummy plotting variable.

If the dummy plotting variable occurs only in the integration range then each evaluation of the `int` call at a floating-point value of that variable is supposed to trigger a reasonably quick dispatch off to evalf(Int(...)), unless a symbolic `method` is specified. In other words, int(f(x),x=a..b) should dispatch off to evalf(Int(...)) if either of `a` or `b` is a float and the other is of type realcons. This is supposed to be roughly the same as calling `int` with its `numeric` option. There may be a small overhead above the cost of calling evalf(Int(...)) directly, but it is not supposed to be great.
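
A minimal sketch of that dispatch, using a placeholder integrand: with a float endpoint, `int` should return a float via numeric quadrature rather than attempting a symbolic antiderivative.

restart:
int(exp(-exp(x^2)), x = 0 .. 1.5);          # float endpoint: should dispatch to evalf(Int(...))
int(exp(-exp(x^2)), x = 0 .. 1.5, numeric); # roughly equivalent, per the above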

The first two examples below use quite similar resources (give or take fluctuation seen when repeating), and both do no symbolic integration in my Maple 17.01.

restart:
f:= x-> exp(-exp(x^2)):
P:=CodeTools:-Usage(plot(z-> Int(f, 0..z), -3.0..3.0)):

memory used=5.94MiB, alloc change=24.00MiB, cpu time=156.00ms, real time=160.00ms

restart:
f:= x-> exp(-exp(x^2)):
P:=CodeTools:-Usage(plot(z-> int(f, 0..z), -3.0..3.0)):

memory used=6.30MiB, alloc change=24.00MiB, cpu time=172.00ms, real time=165.00ms

The next example does a small amount of symbolic integration, since plot seems to pass an exact -3 for the plotting variable while evaluating at the left end-point.

restart:
f:= x-> exp(-exp(x^2)):
P:=CodeTools:-Usage(plot(z-> int(f, 0..z), -3..3)):

memory used=27.04MiB, alloc change=28.02MiB, cpu time=530.00ms, real time=537.00ms

One other nice thing about using `Int` is that the `epsilon` tolerance of the numerical integration performed by evalf(Int(...)) can easily be relaxed, which can make some problematic examples more tractable. If the only goal is to produce a plot then high accuracy of the plotted results may not be a requirement.
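
For example, a looser tolerance can be supplied directly inside the Int call. (A sketch only; the 1e-5 value is just an illustrative choice.)

restart:
f := x -> exp(-exp(x^2)):
P := CodeTools:-Usage(plot(z -> evalf(Int(f, 0 .. z, epsilon = 1e-5)), -3.0 .. 3.0)):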

acer

This is a bug in Maple 17, illustrated by the behaviour of `subs`.

It worked properly in Maple 16.02 and at least 15.01, 14.01, and 11.02.

It works for 2-by-1 Matrices in Maple 17.01. But it is not working properly for the Vector case in Maple 17.01.

There is nothing unusual about the following way of constructing Vector V, which is similar in essence to one of your ways. It is a natural way to stack Vectors, and here it produces a (column) Vector. The `subs` action in question is broken in Maple 17.01. The problem also occurs for `eval`, and it's also broken in the row-Vector analogue.

restart:
V0:=Vector([a]):
V1:=Vector([b]):

V:=Vector([V0,V1]);
                                       [a]
                                  V := [ ]
                                       [b]

P:=subs(a=5,V); # produced a Vector containing 5 in Maple 16, but is broken in 17.01

                                       [a]
                                  P := [ ]
                                       [b]

rtable_eval(P); # this at least should work, but doesn't in Maple 17.01

                                     [a]
                                     [ ]
                                     [b]

The following constructs a 2-by-1 Matrix instead. That is not a usual way of constructing a Vector (since it constructs a Matrix, rather than a Vector). The `subs` action in question is not broken in Maple 17.01 for Matrix (or Array).

restart:
V0:=Vector[row]([a]):
V1:=Vector[row]([b]):

M:=<V0,V1>;

                                       [a]
                                  M := [ ]
                                       [b]

P:=subs(a=5,M);
                                       [5]
                                  P := [ ]
                                       [b]
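
A possible workaround (my sketch, not a fix of the underlying bug) is to substitute elementwise, which sidesteps the broken substitution on the whole rtable:

restart:
V0 := Vector([a]): V1 := Vector([b]):
V := Vector([V0, V1]):
W := map(z -> subs(a = 5, z), V);  # elementwise subs returns a new Vector containing 5 and b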

acer

Do you have the typesetting level set to `extended`? If so, then I suppose that you are seeing an enhanced prettyprinting of a call to hypergeom.
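
You can check or change that level with the interface command:

interface(typesetting);              # query the current typesetting level
interface(typesetting = standard);   # revert to standard prettyprinting, if desired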

acer

@Alejandro Jakubi Thanks, but ScientificConstants has several times that much information on isotopes. For example,

restart:
with(ScientificConstants):
select(t->evalb(op(0,t)=H),convert([GetIsotopes()],`global`));
map(GetElement,%);

I have previously found several such partial sets of data at NIST and related sites. But what is needed, I suspect, is a much fuller collection which is also named (so that it can be cited and referenced for comparison at a later date).

For example, there is some mention of a 2001 published data set here, with a later update here in 2005. The 2001 data might be accessible only by subscription, and the 2005 update might have its central numbers available in the linked abstract. I'd like to hear an expert's opinion.

@Carl Love Hi Carl. Darin might have a good answer for you, but I'll chip in with some anecdotal evidence if that's OK.

I was using the Task model to split (halve) some of my embarrassingly parallelizable numeric escape-time fractal code. At first I imagined that I'd get optimal performance by just using kernelopts(numcpus) to figure out the best base case. I.e., the code could split if the "current" size were not less than 1/numcpus times the original total size.

But in practice I found that the OS (64-bit Windows and 64-bit Linux) could ramp up more quickly if I instead used a value higher than numcpus. Both Linux `top` and Windows' Task Manager showed all cores reaching a higher load more quickly if the Maple Task mechanism was instructed to split more times than just the value of numcpus. E.g., on an 8-core Intel i7 or a 4-core i5 I got a measurably better total real time for the entire computation if I made the code split until the size was, say, 1/15th to 1/20th of the original.
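
Here is a minimal sketch of that splitting pattern using the Threads:-Task model; the work function, problem size, and oversplit factor below are all hypothetical placeholders, not my fractal code.

restart:
f := x -> evalf(sin(x)^2):    # stand-in for the real per-point work
N := 10^5:
oversplit := 16:              # split finer than kernelopts(numcpus), per the above
cutoff := ceil(N / oversplit):

doWork := proc(lo, hi, cutoff)
    local mid;
    if hi - lo + 1 <= cutoff then
        add(f(0.001 * i), i = lo .. hi)   # base case: handle the subrange serially
    else
        mid := iquo(lo + hi, 2);
        Threads:-Task:-Continue(`+`,      # combine the two halves' results
            Task = [doWork, lo, mid, cutoff],
            Task = [doWork, mid + 1, hi, cutoff])
    end if
end proc:

total := Threads:-Task:-Start(doWork, 1, N, cutoff);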

I'd be interested if anyone else had seen behaviour that was similar (or radically different).

In my experience the 2010 release of the CODATA collection of values for the fundamental physical constants was easily found on the web as a single plaintext file.

I once wrote a Maple routine which processed the CODATA 2010 .txt data file and saved the data into Maple using the ScientificConstants package. This was quite a straightforward task, given the single text file with the data.
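
The registration side of such a routine might look roughly like this (a sketch only: the name, symbol, and figures below are illustrative stand-ins, not values parsed from the real file):

with(ScientificConstants):
# Register one entry, as if parsed from a line of the CODATA text file
AddConstant(my_electron_mass, symbol = m[my],
            value = 9.10938291e-31, uncertainty = 0.00000040e-31,
            units = kg):
evalf(Constant(my_electron_mass));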

But finding the latest data for isotopes (or nuclides) in a single collection that is recognized by NIST seems more difficult. Does anyone know the location of such a data set, as plaintext or XML?

acer

@Mac Dude It seems like a bug in ScientificConstants, where it doesn't properly accommodate the new system.

This next looks ok,

restart:

with(ScientificConstants):
em:=Constant(electron_mass):

GetValue(em),GetUnit(em);

                            -31                   
              9.109381882 10   , Units:-Unit('kg')

But now, with a system in which energy and action are to be simplified in terms of MeV and MeV*s respectively, the returned value changes while the reported unit stays kg (an inconsistent pair):

restart:

Units:-AddSystem('Accelerator',Units:-GetSystem('SI'),MeV,MeV*s); 
Units:-UseSystem('Accelerator');

with(ScientificConstants):
em:=Constant(electron_mass):

GetValue(em),GetUnit(em);

                            -18                   
              5.685626500 10   , Units:-Unit('kg')

Answers about how to do this best (or perhaps just better) may depend on the particular nature of `a1`. Could you provide some details in the form of a fully functioning, explicit example?

acer

@Markiyan Hirnyk I interpreted the question as being about two things: the holes in the plot, and the long computation time.

Maple can get quirky in strange ways when Digits is set below 5, so having it that low is not a great idea.

The holes in the plot occur because the individual quadrature attempts failed (for those input pairs in the plane). evalf/Int infers the value for epsilon from Digits when that option is not supplied explicitly. But if Digits is set too low then even the inferred, looser tolerance may not be attainable, as there might not be enough working precision to satisfy it. Hence it quite often helps to keep a higher working precision (the default Digits might do) while forcing a looser tolerance separately.

And evalf/Int might converge enough to satisfy a looser tolerance more quickly. Hence I suggested leaving Digits at its default value (10) while supplying looser tolerance (larger epsilon). That appears to help with both the failing values as well as the speed.
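
A sketch of what I mean, with a placeholder integrand and ranges rather than the original example: keep Digits at its default 10 but pass a larger epsilon to each quadrature.

restart:
F := (x, y) -> evalf(Int(exp(-x*t^2 - y*t), t = 0 .. 10, epsilon = 1e-5)):
plot3d(F, 0.1 .. 1, 0.1 .. 1);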

Raising Digits to 8 (or leaving it at the default 10) might fix the holes. But reducing Digits from the default 10 down to 8 doesn't do as good a job of speeding things up as supplying a looser tolerance does. So typing in Digits:=8, or what have you, just seems to be unnecessary typing that might also obscure what matters more. I did not test whether Digits=10 is needed; it might be, but even if not, there seems little reason to reduce it from its default, since doing so does not by itself cure the speed issues.

And the same may go for other tweaks, such as using a non-iterated quadrature method or reducing the number of plot points.

I googled "Kovacic algorithm" and the first hit was this; most of the first page of hits seemed relevant. I mention that first hit because its references indicate a preprint (1979?) by Kovacic as well as an implementation by D. Saunders given at ACM 1981.

Another hit that stood out was this Maple help-page (even if its implementation may differ), which cites Kovacic at the end of its References section:

Kovacic, J. "An algorithm for solving second order linear homogeneous equations". J. Symb. Comp. Vol. 2. (1986): 3-43.

acer

@Carl Love I was not claiming that it is a bug in 2-argument eval. I stated that `eval/if` relies on the behaviour of 2-argument eval.

I'm not sure that `eval/if` is quite right. There are other corners, too.
