I am the manager of the Mathematical Software Group, working mostly on the Maple library. I have been working at Maplesoft since 2007, mostly on the Statistics and Units packages and on contract work with industry users. My background is in abstract algebra, in which I completed a PhD at Eindhoven University of Technology. During my studies I was always searching for interesting questions at the crossroads of math and computer science. When I came to Canada in 2007, I had nothing but a work permit, some contacts at Maplesoft (whom I had met at a computer algebra conference a year earlier), and a plan to travel around beautiful Canada for a few months. Since Maplesoft is a company that solves more interesting math and computer science related questions than just about anywhere else, I was quite eager to join them, and after about three months, I could start.

MaplePrimes Activity

These are answers submitted by epostma

Hi vuishl,

Restart is probably not the best solution, since it only works at the top level (see ?restart) - that is, not inside a loop. But there's a framework that you can use to run these sorts of things much more efficiently, if the objective is to plot: ?dsolve/numeric/Events, which uses the numeric ODE solver instead of the exact one.

I'm trying to reverse engineer what you're doing with RootFinding:-NextZero etc., and I think this is more or less what happens: you want to follow a solution until x(t) is plus or minus one, then negate the derivative of x(t). Is that correct? If so, a more efficient way to generate such a series of plots starts as follows, in a fresh session:

ODE := diff(diff(x(t), t), t)+.2*(diff(x(t), t))-x(t) = aa*sin(t);
ICs := x(0)=mm, D(x)(0)=nn;
solution := dsolve([ODE, ICs], numeric, parameters=[aa, mm, nn],
events=[[x(t)^2-1, diff(x(t), t)=-diff(x(t), t)]], range=0..100):

Now solution is a procedure that contains everything necessary for simulation of your system, including the "bouncing back" of the derivative. The parameters aa, mm, and nn are placeholders that you need to substitute values for before simulation starts (I'll show how in a second). Then you can request the values at a particular point in time, say t = 5, as follows:

solution(parameters = [aa = 1.5, mm = 0.999, nn = -2.5]);
solution(5);

This will return a list of equations saying that (approximately) t = 5, x(t) = -0.56, and diff(x(t), t) = -1.09. But much more interesting is a routine that will do the phase plot for you:

plots:-odeplot(solution, [x(t), diff(x(t), t)], t=0..100, refine = 1);

This we can use in a loop.

for a from 1.0 to 2.0 by 0.4 do
  for m from 0.990 to 1 by 0.004 do
    for n from -3.0 to -2.0 by 0.4 do
      solution(parameters = [aa = a, mm = m, nn = n]);
      plotsetup(ps, plotoutput = sprintf("c:/tmp/partA[%a]M[%a]N[%a].eps", a, m, n), plotoptions = `color,portrait,noborder,width=16cm,height=12cm`);
      plots:-odeplot(solution, [x(t), diff(x(t), t)], t=0..100, refine=1);
    end do;
  end do;
end do:

As you see, there is no call to dsolve inside the loop - that's the beauty of using parameters! All that happens is that the simulation is run and the plot is drawn. Note that the name of the loop variable is different from the name of the parameter - that's always a good idea.

I would try to see if you can run this loop on your older system first. If that doesn't work, we can still break it up into smaller pieces, but I'd think that this is going to be much more efficient than what you had before.

One note: your original version always counted to 200 "bounces", whereas I bounded the simulation by letting it always run to t=100 irrespective of the number of bounces. I think we can make the simulation always run to 200 bounces, if you'd strongly prefer that; it would be a bit more complicated though. And we'd need a reasonable upper bound on how long that's going to take. Let us know if that is necessary.

Hope this helps,

Erik Postma

Another possibility: manjees may have some function implemented in C++ that he or she wants to call straight from Maple, then plot the output. This can be useful, e.g., if you want to do some visually guided exploration of where the function has interesting behaviour, and then make a refined plot of the interesting region without having to regenerate the data file, reread it into Maple, etc. On the other hand, this would almost certainly require a much bigger programming investment than using a text file.

This sort of thing can be done in Maple using ?external_calling. A C++ function will need to be declared extern "C". There are many different ways to use the external calling API, which range from relatively straightforward to quite subtle, so if the help pages don't get you what you want, feel free to ask for more details.
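As a very rough sketch of what that looks like (the function name, signature, and library name here are invented for illustration; see ?define_external for the real details of your setup):

```maple
# Suppose the C++ side, compiled into a shared library (say mylib.dll or
# libmylib.so), defines:
#     extern "C" double square( double x ) { return x * x; }
# Then on the Maple side you can bind and call it like this:
square := define_external('square',
    'x' :: float[8],
    'RETURN' :: float[8],
    'LIB' = "mylib.dll"):
square(3.5);  # runs the compiled C++ code
```

The float[8] type corresponds to a C double; for anything more involved (arrays, strings, callbacks), the ?external_calling pages describe the options.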

Hope this helps,

Erik Postma

P.S.: I see now that manjees was explicitly talking about output, so this interpretation is maybe less likely. I'll still preserve the comment for posterity.

...are the commands you need: ?isprime and ?ifactor.
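For example:

```maple
isprime(97);   # true
ifactor(360);  # (2)^3 * (3)^2 * (5)
```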

Hope this helps,

Erik Postma

That's absolutely correct, Alec. What we can see here is that determining the RREF is a numerically very unstable problem when you're close to singular matrices. (The condition number for the lower-right entry of the RREF is infinite, as it were.) Also - here you happen to get the right answer because these fractions can be represented exactly in decimal floating point; but for example, if there were a fraction of 1/3 in there, it is likely that no number of digits would lead to the correct answer.

So really, if you have floating point input data, there is almost always a better approach than using the RREF. For example, for estimating the rank, you might want to examine the SVD (implemented as ?LinearAlgebra/SingularValues in Maple) instead of the RREF. (That is what ?LinearAlgebra/Rank does, if there are float entries in the matrix.)
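To illustrate (with a made-up nearly singular matrix), the singular values show directly how close the matrix is to being rank deficient - one of them will be tiny compared to the other:

```maple
M := Matrix([[1.0, 2.0], [2.0, 4.000001]]):
LinearAlgebra:-SingularValues(M);  # one singular value is orders of magnitude smaller
LinearAlgebra:-Rank(M);            # for float input, this is computed via the SVD
```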

Olerass, what were you planning to use the RREF for? Let's see if we can find a good alternative.

Hope this helps,

Erik Postma

Hi Econometrician,

First of all, I see that you're assigning into a list, e.g. R1[q] := ... or Options[i] := ... . That's generally a bad idea; lists are immutable data structures in Maple, so if you change an element, Maple generates a completely new list for you with that one element changed and returns that. Because of this inefficiency, assigning into lists of length greater than 100 has been disabled completely, so your program will stop working if the size of your input grows beyond that. If you use an Array or a Vector, you don't have that problem.
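A quick way to see the difference (the exact error message depends on your Maple version):

```maple
A := Array(1 .. 200):
A[150] := 42;                  # fine: Arrays are mutable
L := [seq(0, i = 1 .. 200)]:
L[150] := 42;                  # error: assigning to a list with more than
                               # 100 elements is not permitted
```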

Another option is to keep using lists, but create them as a whole, as it were, instead of element by element. For example, to create R1, you can either set R1 := Array(1 .. 15) (or 0..14?) and keep your current program (in which case you will clearly have an Array), or do the initialization in one fell swoop by running

R1 := map(i -> A[i, 1], select(i -> A[i, 2] = 4 or A[i, 2] = 0, [seq(1 .. 90)]));

in which case you will have a list.

Now for your main question. I'm not entirely sure I understand your explanation or your program - but the first line of your description tells me that you might want to consider using the combinat[randcomb] command, which will give you a random selection of m entries of a list.
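For example, to draw 5 random row numbers out of 90 (the result differs from run to run, so no output is shown):

```maple
with(combinat):
randcomb([seq(1 .. 90)], 5);  # a random 5-element selection of the row numbers
```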

General tip: if you want to do more than only displaying the card names, it might be useful to just use row number internally instead of the card names, so you could simplify the command for R1 above to just

R1 := select(i -> A[i, 2] = 4 or A[i, 2] = 0, [seq(1 .. 90)]);

and perform the map(i -> A[i, 1], ...) only upon output. That way, you have easy access to the information in the other 14 columns of your table as well.

Hope this helps,

Erik Postma


Yes, what Alec suggests above definitely works. But there is indeed also a select option to the NonIsomorphicGraphs command, and it has a potential efficiency advantage, so here's a way to use it. I'll first demonstrate a way to use it that's actually less efficient than what Alec suggests, but it's a useful step in the explanation:

myFilter := g -> evalb(min(GraphTheory:-DegreeSequence(g)) > 0);
[GraphTheory:-NonIsomorphicGraphs(8, output = graphs, outputform = graph, 
                                  select = myFilter, selectform = graph)];

The procedure myFilter expects to find its argument as a GraphTheory-understandable graph structure; that's why we need the selectform = graph option. However, constructing all of these graph structures takes time: the internal algorithm doesn't use these structures at every step. A more efficient alternative is selectform = adjacency. This gives the graph as an adjacency matrix with datatype = integer[1]. We can use that as follows:

myMatrixFilter := proc(m)
  local i, n;
  n := LinearAlgebra:-RowDimension(m);
  for i to n do
    if rtable_num_elems(m[i], 'NonZero') = 0 then
      return false;
    end if;
  end do;
  return true;
end proc;
[GraphTheory:-NonIsomorphicGraphs(8, output = graphs, outputform = graph, 
                                  select = myMatrixFilter, selectform = adjacency)];

This is already slightly faster than Alec's version; however, since almost all matrices satisfy the property we are interested in, almost all graph structures still need to be constructed in the end. If, however, we are content with the adjacency graphs (or indeed with the number of graphs), then this is substantially faster:

[GraphTheory:-NonIsomorphicGraphs(8, output = graphs, outputform = adjacency, 
                                  select = myMatrixFilter, selectform = adjacency)];

My timings:
5.800 for Alec's original code (for 8 nodes instead of 5),
9.900 for the version with myFilter,
5.640 for the version with myMatrixFilter,
2.140 for the version with outputform = adjacency.

(Alec's approach can of course also be sped up if you only need the adjacency matrices; I got 2.230 seconds for that.)

As I said earlier, a large fraction of all graphs has this property; for 8 vertices, the number is 11302 out of 12346. You can expect a greater advantage to using the select option instead of the select command if the fraction of graphs that satisfies the criterion is small. For example, if you would be interested in the set of graphs that have at least one vertex of maximal degree, n - 1, then you will find only 1044 out of 12346 graphs; you can write a similar procedure to myMatrixFilter and get a speedup from 2.300 to 0.750 seconds for 8 vertices.

Hope this helps,

Erik Postma

P.S.: It looks like the formatting is lost for now - I'll see what I can do to get it working again. In the mean time, here is an unformatted copy:


I can think of two ways to achieve this. One is to use lprint to print your equation. This will turn off "pretty-printing" and just show the expression as plain text:

eqn := expand((x + y)^2000) = 0:

A different way would be to set the interface parameter that controls term elision (which is the name of this feature), for example:

interface(elisiontermsthreshold = infinity);
eqn := expand((x+y)^2000) = 0;

Hope this helps,

Erik Postma

In principle, Maple can certainly do what you want (as can any Turing-complete programming language), but how easy it is and how well it performs depends on how you intend to represent your data. Do you already have data in Maple and code that you need to work with, or are you starting from scratch? If you already have data and code, what data structure are you using? If not, what type of data are you planning to work with, and approximately how much of it? Do you expect to do frequent "JOIN"s in your algorithm, or only a few times per session (maybe as pre- or postprocessing)?

Erik Postma


I ran some experiments, and am not very convinced that the theorem you are citing holds in this case. I'm referring to the statement that

P(F > F[nu1, nu2; 1-alpha]) = Sum(p[i] * P(F[nu1 + 2*i, nu2] > nu1 * F[nu1, nu2; 1-alpha] / (nu1 + 2*i)), i = 0 .. infinity),

where F[a, b; q] denotes the q quantile of the (central) F distribution with parameters a and b, and p[i] = exp(-lambda) * lambda^i / i! are the Poisson weights,

if F follows a noncentral F distribution with parameters nu1, nu2, lambda. I know that the code for Quantile and CDF in Maple is different from the code for Sample, so I thought I would check the inconsistency between f and g, above, in a crude way by sampling the appropriate distributions, as follows.

genSample := proc(d, npts, $)
local nu1, nu2, delta, s1, s2;

    if op(0, d) = NonCentralFRatio then
        # Work around the NonCentralFRatio sampling issue by building the
        # ratio from a noncentral and a central chi-square sample.
        nu1, nu2, delta := op(d);
        s1 := Statistics:-Sample(NonCentralChiSquare(nu1, delta), npts);
        s2 := Statistics:-Sample(ChiSquare(nu2), npts);

        return (s1 /~ s2) * (nu2 / nu1);
    else
        return Statistics:-Sample(d, npts);
    end if;
end proc;

compareQuantiles := proc(d1, d2, q1, {ratio := 1, npts := 10^6}, $)
    local s1, s2, t, subS2;

    s1 := genSample(d1, npts);
    s2 := genSample(d2, npts);

    t := ratio * Statistics:-Quantile(s1, q1);
    subS2 := Statistics:-SelectInRange(s2, t .. max(s2) + 1);

    return evalf(op(1, subS2) / npts);
end proc;

compareQuantiles(FRatio(3, 3), NonCentralFRatio(3, 3, 0.82), 0.95, npts=10^7); # compare to f(1)
add(exp(-0.82) * 0.82^i / i! * compareQuantiles(FRatio(3, 3), FRatio(3 + 2*i, 3), 0.95, ratio=3/(3+2*i)), i=0..50); # compare to g(1)

Note I had to work around the NonCentralFRatio sampling, because it doesn't seem to work properly for these parameters - that's something I might work on for the next release! Anyway, I got approximately the same answers out of these as for f and g defined above, whereas this uses a different code base. (Convergence for the sum is very good - the last three terms in that sum are ~3E-66, 4E-68, and 7E-70, so it seems unlikely that further terms will add seriously to this sum.)

That doesn't mean it's impossible that the theorem holds, but before I put more effort in finding the discrepancy between the two results, I'd like to be convinced more that it's true.


Erik Postma

Hi qin,

I tried to do your computation, and by substituting values in, I did get an answer. Let me show you what I got:

integral := int(x*(1-sum(binomial(n, k)*p(x, t)^k * (1 - p(x, t))^(n-k), k=0 .. ceilz)), x=t-5 .. 5);
result := eval(integral, [p(x, t) = 1/2, t = 3, n = 17, ceilz = 5]);

Here integral just evaluates to an integral (which is to be expected, I think), and result evaluates to 32487/32768 * hypergeom([-11,1], [7], -1), or approximately 9.7468 using evalf. Note I used ceilz instead of actually typing ceil(z-2); it would possibly work as well if you typed ceil(z-2), but I have sometimes had issues when mixing discrete and continuous values like that.

I'm not sure if this solves your problem. Was the issue that you had expected to find a closed form solution (one without integrals or summations) for the fully general function, with all the variables still in it? In some sense, "most" sufficiently complicated integrals do not have closed form solutions; there just do not exist expressions in terms of elementary functions that are equal to the integral, so that would probably be an unrealistic expectation.
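For instance, an integrand as innocent-looking as the following has, as far as I know, no antiderivative in terms of the functions Maple knows, so the integral typically comes back unevaluated:

```maple
int(sin(sin(x)), x);  # returned unevaluated
```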

Hope this helps,

Erik Postma

So if I understand correctly, you're trying to find the probability that a random variable Y that is distributed according to the noncentral F-ratio distribution with parameters nu=3, omega=3*n, delta=0.82, is greater than the 0.95 quantile of a random variable distributed according to the (central) F-ratio distribution with parameters nu' = nu = 3, omega' = omega = 3*n; correct?

I'll give the first steps of such a solution; it leads to a rather messy expression, and before you get a useful answer it'll be a bit more work, but I'd first like to get a bit of feedback whether I understand your problem correctly :) . So here we go:

X := RandomVariable(FRatio(3, 3*n));
Y := RandomVariable(NonCentralFRatio(3, 3*n, 82/100));
Probability(Y > Quantile(X, 95/100));

As I said, this gives a huge expression that won't help you a lot.

Ok, after writing this I couldn't resist and wanted to try to make it useful anyway :) so here's what I'd suggest: you may have noticed that I substituted fractions for your floating point numbers above. That generally leads to better symbolic answers. However, I think that this is a situation where you can't really expect to find a useful, usable symbolic answer. So I'd suggest going the numeric route instead. That means that given a certain value for n, we now want to find the number corresponding to the expression above, and if possible we'd like it as quickly as possible, please, because we might need to iterate the process for rootfinding or something similar.

For iterating the process, it'll turn out to be useful to define a procedure that computes this probability for us, given a value for n, as follows:

f := proc(n)
uses ST = Statistics;
local quantile, X, Y;
  X := ST:-RandomVariable(FRatio(3, 3*n));
  Y := ST:-RandomVariable(NonCentralFRatio(3, 3*n, 0.82));
  quantile := ST:-Quantile(X, 0.95, numeric);
  return ST:-Probability(Y > quantile, numeric);
end proc;

If we try this procedure on some test inputs, we get answers that seem plausible at face value. Unfortunately, the probability does not grow very quickly for increasing n: I get f(1) = 0.068, f(10) = 0.096, f(100) = 0.102, f(1000) = 0.103.

Hope this helps,

Erik Postma

Hi Ryan,

Here's an example; it's for an ODE, but I think it demonstrates some of the ideas you might be looking for.

make_equations := proc(lambda :: positive, max_t :: positive, $)
local
  appearance_times,  # times at which an event appears
  appearance_deltas,  # times between events
  sample_length,  # number of events contemplated
  sample_generator,  # procedure to generate appearance_deltas
  i;  # counter
global x, t; # variables in the resulting equations
  # We start by taking twice as many appearance times as would be necessary if they're all equal to the expected value:
  sample_length := ceil(2*max_t/lambda);
  sample_generator := Statistics:-Sample(Poisson(lambda));
  appearance_deltas := sample_generator(sample_length);
  appearance_times := convert(appearance_deltas^%T . Matrix(sample_length, fill=1, shape=triangular[upper]), list);
  while appearance_times[-1] < max_t do
    # Add another sample_length event times (is very unlikely to be necessary, but...)
    appearance_deltas := sample_generator(sample_length);
    appearance_times := [op(appearance_times), op(convert(appearance_deltas^%T . Matrix(sample_length, fill=1, shape=triangular[upper]) +~ appearance_times[-1], list))];
  end do;
  return [x(0) = 0,
    diff(x(t), t) =
      piecewise(t < appearance_times[1], 0,
        seq('(t < appearance_times[i + 1], t - appearance_times[i])', i = 1 .. sample_length - 1),
        t - appearance_times[sample_length])];
end proc;

This procedure will return a pair of equations that describe an ODE with a number of jumps in the derivative, where the time between the jumps is Poisson-distributed with parameter lambda (the first parameter) and where the jumps have been calculated out to at least t = max_t (the second parameter). You can just examine the equations, or feed them to dsolve/numeric, for example like this:

make_equations(400, 2000);
solution := dsolve(%, numeric);
plots:-odeplot(solution, t=0..2000);

Hope this helps, and if not then feel free to ask,

Erik Postma

This is not an easy question to answer, first of all because people who want to know such things sometimes want different types of answers. I'll show you two methods that can be used, and only the first one will give you a true order calculation - but it's often a lot harder to do than the second one. I'll first give you a somewhat more theoretical overview, then examples.

Basically, you count the number of basic operations that Maple needs to do in order to finish your algorithm, as a function of the size of your input, say n. Often, the number of steps will depend on the actual input, not just its size, and in this case you need to find an upper bound on the number of basic operations. A basic operation should in this case be read as something that can always execute in a bounded amount of time, independent of the parameters of the operation. This can be very difficult to determine, because you have to know what algorithms Maple uses internally for certain operations; this is generally not too difficult for library-level Maple routines, but for Maple kernel routines it is less easy. As an example, assigning an entry in a table or Vector or Matrix is a basic operation, but adding an element to a list is not (in Maple).

If you succeed in determining the number of basic operations, this will give you some algebraic expression involving n. You are then allowed (technically not required) to omit all but the eventually largest term in an addition of a fixed number of terms, and omit constant factors, to simplify this algebraic expression.

Let's take a simple bubble-sort-like sorting algorithm as an example - as you said, we hope to get out O(n^2). We'll take a Vector of real numeric constants as input.

bsort := proc(v :: Vector(numeric), $) :: Vector(numeric);
local
  i :: posint,
  j :: posint,
  n :: nonnegint := LinearAlgebra:-Dimension(v);

  for i from n to 2 by -1 do
    for j to i - 1 do
      if v[j] > v[j+1] then
        v[j], v[j+1] := v[j+1], v[j];
      end if;
    end do;
  end do;

  return v;
end proc;

The if-block inside the loop will always execute in bounded time (*), so we'll count that as one operation, adding in the bookkeeping overhead of increasing j and such in the inner for loop. Then the inner for loop takes i - 1 basic steps. The outer loop also has some bookkeeping overhead in each iteration, which we can account for by saying that the full inner loop plus bookkeeping for one iteration of the outer loop can be done in i basic steps. For the full outer loop, we get Sum(i, i=2..n) = (n+1)^2/2 - n/2 - 3/2 steps in total. This is the algebraic expression that's our upper bound for the number of basic operations of the algorithm as a whole. If we expand that (using Maple's expand command), we get n^2/2 + n/2 - 1. Eventually, the quadratic term will be bigger than the other two terms, so we can omit the linear and the constant term. Then we can also omit the constant factor of one half, and we get the answer we were looking for: the algorithm is O(n^2).
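The summation step is something Maple itself can check for us:

```maple
expand(sum(i, i = 2 .. n));  # n^2/2 + n/2 - 1
```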

OK - that's it for the first method, which as I said is the only one that can actually get you a true "order" of an algorithm. However, people who ask for a big-O calculation are sometimes more interested in how well the algorithm will scale to large inputs in practice. The second method gives you such information: just time your procedure, then fit a function to the timings. This is a piece of software engineering that you can do very easily for rough results, or do very well, spending a lot of time accounting for cache warmup etc. I'll show a very rough and easy solution here.

timeit := proc(n :: posint, k :: posint, $) :: numeric;
local
  vs :: list(Vector) := [seq(LinearAlgebra:-RandomVector(n, generator = 1. .. 2.), i = 1 .. k)],
  i :: posint,
  t, t0;

  t0 := time();
  map(bsort, vs);
  t := time() - t0;

  return t / k;
end proc;
nvalues := [seq(ceil(5 * 1.2^k), k=0..29)];
times := [seq(timeit(n, min(5000, ceil(10^6/n^2))), n in nvalues)];
plots:-pointplot(nvalues, times, axis=[mode=log]);
Statistics[PowerFit](nvalues[4..], times[4..], n);

This yields, after some computation, first the following plot:

We see that the points are almost on a straight line, except for the first three points (which I therefore discard). Then we find the fitted expression 0.00000276 * n^1.92 (with more digits, but let's forget about those) (and this number is from my new core i7 720QM laptop). This suggests approximately quadratic behaviour.

Hope this helps,

Erik Postma

(*) Here's an example of how difficult it is to do this in a general-purpose computer algebra system. In an earlier version of this comment, the procedure took a Vector(realcons) as an argument instead of a Vector(numeric). When I wrote the sentence with the footnote mark, I realized that the procedure wouldn't work in general. Writing if v[j] > v[j+1] is a somewhat dangerous thing to do, specifically because Maple has been written so that you can write that and know that it will execute in bounded time. The price to pay, however, is that the comparison is not very powerful. If v[j] and v[j+1] are both of type numeric, then you're safe. However, symbolic constants like Pi cannot be compared like this, because it might require a long computation to ascertain which is the bigger one (or indeed, if the two constants are different). For example, if you would write if x > y then ... while having x = Pi and y = 2, then Maple will complain. If you want to risk the somewhat more lengthy computation, you can write if is(x > y) or if you like living dangerously, if evalf(x - y > 0). However, then the count for the big-O computation cannot be bounded easily without further knowledge of the structure of x and y.
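To see the difference concretely:

```maple
x := Pi: y := 2:
# if x > y then ... end if;
#   -> error: Maple cannot determine whether 2 < Pi is true or false
is(x > y);  # true, after a potentially lengthy computation
```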

Hi Erik,

For what it's worth, you can enter something that looks correctly as follows:

  1. Make sure you have the "layout" palette on the left of your Maple window. You can do this by fiddling with View -> Palette options.
  2. Click the 10th icon in this palette, which looks like bA and has hover text "presup". This will insert the symbols bA where your cursor is currently.
  3. Type "He" (without the quotes) over the letter A, which should be automatically selected.
  4. Hit tab. This will select b.
  5. Type "4" (without the quotes).
  6. Use the right arrow to get to the bottom of He.
  7. Type "_2".

If you run the command "lprint" on this, you'll see that you've done something very similar to what jakubi suggested: this is the expression


internally. As an alternative, use the 13th icon (hover text "prepost") and hit delete every time you come across a sub- or superscript you don't want.

Hope this helps,

Erik Postma

Hi emoh,

I think those two are the same - the same curve is described by y = sqrt(x) for x=0..4 and x = y^2 for y=0..2. How the surface of revolution looks does not depend on the way in which you write it.

There is a difference, however, once you specify which way you want to rotate; with the above convention, rotating around the y-axis will generate a shape with an acute apex, whereas rotating around the x-axis will generate a paraboloid.

The command you show rotates around the y-axis; if you're looking to obtain the other figure, then either switch to

VolumeOfRevolution(y^2, y=0..2, axis='vertical', output='plot');

or to

VolumeOfRevolution(sqrt(x), x=0..4, axis='horizontal', output='plot');

(They are the same, up to interchanging the axes in the resulting plot.)

Hope this helps,

Erik Postma
