
I am the manager of the Mathematical Software Group, working mostly on the Maple library. I have been working at Maplesoft since 2007, mostly on the Statistics and Units packages and on contract work with industry users. My background is in abstract algebra, in which I completed a PhD at Eindhoven University of Technology. During my studies I was always searching for interesting questions at the crossroads of math and computer science. When I came to Canada in 2007, I had nothing but a work permit, some contacts at Maplesoft (whom I had met at a computer algebra conference a year earlier), and a plan to travel around beautiful Canada for a few months. Since Maplesoft is a company that solves more interesting math and computer science related questions than just about anywhere else, I was quite eager to join them, and after about three months, I could start.

MaplePrimes Activity

These are answers submitted by epostma

"unable to delimit strings / identifiers" sounds like there is a problem with your use of double quotes (for strings) or backquotes (for identifiers). The problem should be apparent from the single line that throws the error.
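For reference, the two kinds of delimiters work like this (a minimal illustration, unrelated to your actual code):

s := "a string";           # double quotes delimit strings
nm := `an identifier`;     # backquotes delimit identifiers (names)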

You write "RETURN". Are you using the old RETURN function? That has been deprecated for a very long time; we recommend use of the "return" statement now. See ?return.  This may be completely unrelated to your error message, though.
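In case it helps, here is a minimal sketch of the modern syntax (again unrelated to your actual code):

myAbs := proc(x)
  if x < 0 then
    return -x;   # the "return" statement, not the deprecated RETURN function
  end if;
  return x;
end proc:

myAbs(-3);  # returns 3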

Hope this helps,

Erik Postma

In the interest of completeness, I believe the other option would be as follows:

floatProcessor := proc(x)
local mantissa, exponent;
  mantissa, exponent := op(x);
  if abs(mantissa) < 10 and exponent = 0 then
    return Float(mantissa * 10, -1);
  end if;
  return x;
end proc;
a := x - 3.;

subsindets(a, float, floatProcessor);

This returns x - 3.0. Hope this helps!

Erik Postma

Hi Jonas,

I looked at your worksheet, and I'd like to address both questions that you ask.

For the first, I'm not sure why you set f(x) + f'(x) equal to zero. I would approach it as follows. If we take a tangent at x0, then the value of that tangent at x is f(x0) + (x - x0) * f'(x0). This should be equal to -3 for x=-1: so we need to solve f(x0) + (-1 - x0)*f'(x0) = -3. This will give you two solutions, which both lead to a correct answer.
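Here is a sketch of that computation, using a hypothetical f (substitute your own function from the worksheet):

f := x -> x^2:   # hypothetical f, for illustration only
# The tangent at x0, evaluated at x = -1, must equal -3:
solve(f(x0) + (-1 - x0) * D(f)(x0) = -3, x0);
# for this particular f, this gives the two candidate points x0 = 1 and x0 = -3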

Also, when you obtain 1/2*x^2 + 3*x + 5 and try to solve that, I think you obtain an incorrect answer. Indeed, if you plot that expression, you'll see that it does not have any real roots (which confirms the solutions -3 + I, -3 - I). The formula you use for solving the quadratic doesn't seem to be correct: the discriminant should be "b^2 - 4*a*c", which is in your case 3^2 - 4*(1/2)*5 = 9 - 10 = -1.
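You can verify both claims directly in Maple:

solve((1/2)*x^2 + 3*x + 5 = 0, x);   # the two complex roots -3 + I and -3 - I
discrim((1/2)*x^2 + 3*x + 5, x);     # the discriminant b^2 - 4*a*c, which is -1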

Then on to your question about regression. The ExponentialFit command that you show gives the same result that you give for f(x), although it's written slightly differently. I could not find a very easy way to change the exp(0.043*x) into 1.04^x, by the way; I'm sure some of the experts in the forum can come up with something though. If desperate, you can always do something like this:

Q := ExponentialFit(X, Y, x);
expr := indets(Q, specfunc(anything, exp))[1];
newQ := subs(expr = exp(op([1, 1], expr))^op([1, 2], expr), Q);

For the value of R^2: what's your definition, exactly? I tried several things, but could not get the number 0.9596 that you suggest. This is what I would think gives the correct value (taking some cues from Wikipedia's page about the coefficient of determination):

QFunction := unapply(Q, x);
QValues := QFunction~(X);
option1 := Correlation(Y, QValues)^2; # returns 0.9831 (approx)
option2 := Correlation(ln~(Y), ln~(QValues))^2; # returns 0.9963 (approx)

Here I liberally use Maple 13's elementwise syntax with the tilde operator.

Hope this helps,

Erik Postma

Hi Hamidreza,

I'm not sure I completely understand your problem (what does the fragment := '`θ_e`' at the end mean?), but here's a guess. I think the problem is simplify: it is a very complicated piece of code that can do incredible things, but it can take a long time to complete. If the expression should simplify to a real number, your best bet might be to reduce it to a floating-point number by running evalf on it. That will often be much faster.
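For example, with a made-up expression standing in for yours:

expr := sin(Pi/7)^2 + cos(Pi/7)^2 + exp(1):
evalf(expr);   # approximately 3.718, i.e. 1 + exp(1), typically much faster than simplify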

Hope this helps,

Erik Postma

Hi biethb,

I think the best way to obtain that sort of information is to use the output=piecewise option in your call to dsolve. That requires that you also give the range with this call immediately, so your code would look somewhat like this:

solution := dsolve({sys_ode, ics}, type=numeric, stiff=true, output=piecewise, range=0..1);

This will give you a list of equations where the right hand sides are piecewise expressions; the boundaries between these are the points computed.

Note that this will not give you a solution procedure, like in the normal output. Note also that if you're interested in the values of the variables at specific points in time which you know beforehand, then you can use an Array for the output option to obtain this information efficiently; that's another calling sequence that does not give you solution procedures. More details are in the ?dsolve,numeric help page.
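For example, reusing your sys_ode and ics, requesting the solution at five prescribed times would look something like this (a sketch; see ?dsolve,numeric for the exact calling sequence):

solution := dsolve({sys_ode, ics}, numeric, stiff = true,
                   output = Array([0., 0.25, 0.5, 0.75, 1.0]));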

Hope this helps,

Erik Postma

Hi all,

What's going on here is that the construction of the plot and the display of the plot are separated in Maple.

OK, that sounds a bit complicated. Let's try with a bit more detail.

When you ask for plot(x^2), Maple detects that the only variable occurring in the expression is x and uses the default range of -10 .. 10 to construct the plot: it selects a set of values for x between -10 and 10, and then samples the value of the expression at those points to get a reasonable approximation of the curve. It then displays the full plot for you. The selection of points is (by default) adaptive, so that more sampling points are chosen in a region where that is necessary. This is explained in the ?plot,options help page.

Now if you add the view option, Maple still uses the default range of -10 .. 10 to construct the plot, but then displays only part of it. In other words, using the view option is somewhat like zooming into a plot using the point-and-click tools in the GUI. This is the intended functionality for the view option, and in principle it is working fine. The issue is that the same sample points are used as before, and sample points chosen for the range -10 .. 10 may not be a very good choice for the displayed range from the view option. In particular, in your example, there apparently is no sample point very close to 2.

Now how do you get the plot you actually wanted? That's not too difficult: just tell Maple to construct the plot using the range 1.995 .. 2.005! That is, use a command like plot(x^2, x=1.995 .. 2.005). This gives a plot where the x-range is 1.995 .. 2.005 and the vertical range is something like 3.98 .. 4.02 (the full range of values assumed by x^2 for x between 1.995 and 2.005). If you need to specify the vertical range, then do both; for example, you could use plot(x^2, x = 1.995 .. 2.005, view = [1.995 .. 2.005, 3.99 .. 4.01]) to obtain the plot that you probably expected to see.

Moral of the story: the view option can be useful, but only in fairly specific cases, and in most of those, you'll want to use the form of plot where you specify the range for the independent variable explicitly.

Hope this helps!


Erik Postma

Hi Christopher2222 and acer,

I agree with everything that acer said, but for completeness' sake I wanted to add that there is code in Maple for doing exactly what you asked: the Units:-Converter:-AddUnit command. But there are a few issues with it:

  • First of all, you need to run the command within the same session in which the assistant runs. For example, if you have the default settings and you run Units:-Converter:-AddUnit(parsec/year[Julian]) in a regular worksheet and then open the unit conversion assistant, that unit won't show up if you select the dimension "speed". I can think of two ways to make it show up, and one more which turns out not to work:

    1) Run the command in the worksheet that contains the unit conversion assistant.
    2) Put the command in your Maple initialization file. (See e.g. the ?worksheet,reference,initialization help page.)

    What could also have worked is making Maple not start a new session when you start the assistant: open Tools -> Options -> General (the default tab) and change the setting under "How should Maple handle the creation of new Math Engines?". However, the unit conversion assistant runs the "restart" command when it opens, which undoes any modification you might have done before; so this method doesn't work.
  • For the example you mention, conversion of fuel consumption units, there is an additional issue that will prevent the unit conversion assistant from being able to do this conversion: it has been written to only perform conversions from and to units that have the same dimension. The units 'mile per gallon of petroleum' and 'litre of petroleum per 100 km' have inverse dimensions (viz., 'length / (length of petroleum)^3' and '(length of petroleum)^3 / length'), so the unit conversion assistant cannot do the computation.

    Of course, you could add the code to do that yourself...
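If all you need is the number, you can do the reciprocal conversion by hand; here is a sketch (assuming that gal denotes the US gallon in your Units environment, so double-check that against the unit you intend):

# litres per 100 km corresponding to a given miles-per-gallon figure
mpgToLPer100km := mpg -> 100 * convert(1.0, 'units', 'gal'/'mi', 'L'/'km') / mpg:
mpgToLPer100km(30.0);   # approximately 7.84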

Hope this helps,

Erik Postma

Hey Domeneca,


I do get an answer for the input you describe. How did you define M, C, K, and F? They need to be scalars in your procedure, because Y[1] and Y[2] are also scalars and dsolve/numeric expects to find scalars in YP as well.

If you extend this approach to actual matrices and vectors, you'll run up against an issue in Maple where using LinearAlgebra commands prevents the use of evalhf mode. This means that your simulation will run relatively slowly. There are ways around it, but an easier way might be the following: write your system of equations in matrix form and use Maple to extract a scalar ODE system out of it!

This would go as follows. Toy example here. I use t for the independent variable. yv is the vector of dependent variables y[1], y[2]. I use the tilde notation quite a bit here, which was introduced in Maple 13; you can probably use map and zip in earlier versions.

# Problem setup:
n := 2;
M := t -> <1, t; 0, 1>:
C := t -> <0, 1; 1, 0>:
K := t -> <2, 0; 0, -1>:
F := t -> <t, -1>:
yv := t -> Vector(n, i -> y[i](t)):
# The system as a vector of equations:
sys := M(t) . diff~(yv(t), t, t) + C(t) . diff~(yv(t), t) + K(t) . yv(t) =~ F(t);
ICs := [op(convert(yv(0) =~ Vector(n), 'list')), op(convert(eval(diff~(yv(t), t), t = 0) =~ Vector(n), 'list'))];
sol := dsolve([op(convert(sys, list)), op(convert(ICs, D))], numeric):

This gives as answer [t = 1., y[1](t) = .597526858519554227, diff(y[1](t), t) = 2.01928839702482810, y[2](t) = -.688215532611217928, diff(y[2](t), t) = -1.80053479986208664].

Hope this helps,

Erik Postma

Hi hanswurst,

You could try something like the following procedure:

p := proc(cube :: list([realcons, realcons]), $) :: list(list(integer));
local i :: 'posint',
      j :: 'posint',
      point :: 'Array'('integer'),
      npoints :: 'nonnegint',
      lowerbounds :: 'list'('integer'),
      upperbounds :: 'list'('integer'),
      result :: 'Array'({'list'('integer'), 'identical'(0)});
  lowerbounds := map(range -> ceil(range[1]), cube);
  upperbounds := map(range -> floor(range[2]), cube);

  point := Array(lowerbounds);
  npoints := mul(upperbounds[i] - lowerbounds[i] + 1, i = 1 .. nops(cube));
  result := Array(1 .. npoints);

  j := 1;
  while j <= npoints do
    # Store the current point.
    result[j] := convert(point, 'list');
    j := j + 1;

    for i from nops(cube) to 1 by -1 do
      # Try increasing coordinate i.
      if point[i] < upperbounds[i] then
        # Success; move on to the next point.
        point[i] := point[i] + 1;
        break;
      else
        # Failure; reset the ith coordinate and carry over to coordinate i-1.
        point[i] := lowerbounds[i];
      end if;
    end do;
  end do;

  return convert(result, 'list');
end proc:

There are a couple of things you might want to vary according to your needs: possibly you don't want to store all the points but process them in place, or you may want a different data representation. I would argue in favour of using an rtable data type (such as Array) to hold the point that is being changed all the time, because a list cannot be changed in place.

Hope this helps,

Erik Postma

Hi fredbel6,

I can see one issue with your procedures that causes some of your troubles. The issue is that i is a global variable in all of your procedures. If you insert a statement 'local i;' into each of your procedures, then test(1) returns 7+2*q.

There is one more thing that I don't quite understand: you call alpha with two arguments (in beta), yet alpha only has one formal argument. That is perfectly allowed in Maple (the extra argument becomes part of the _rest special sequence; see e.g. the ?using_parameters help page), but here it looks like it might be an oversight.

Finally, some advice: if you expect to call sum with a range delimited by actual numbers (such as the i=0..1 range in beta, or even the z=0..c range in alpha if you call alpha with a numerical argument) instead of formal parameters, the add procedure is generally recommended. sum is a procedure that calculates a closed form for a symbolic sum as a formula, whereas add just adds up a sequence of numbers. (See e.g. the ?sum help page.)
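A quick illustration of the difference:

add(i^2, i = 0 .. 3);   # just adds four numbers: returns 14
sum(i^2, i = 0 .. n);   # computes a closed form in n, equivalent to n*(n+1)*(2*n+1)/6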


Hope this helps,

Erik Postma

Hi Deadstar,

I'm assuming this relates to a similar problem as what you described in this question, right?

It would probably help if you would give us some more details on exactly what the statistical computations are that you'd like to perform, inside the loop. In particular: how do you generate your samples, and exactly what properties do you need?

Generally, I think the trick to doing all of this efficiently is twofold:

  1. First, use acer's trick of generating a large number of random values up front, then deal with them in a bunch. Here you need to experiment a bit with what amount of data your laptop is comfortable dealing with; your previous post suggests something like 5*10^6.
  2. The second step is performing the statistical calculation on a "slice" of your big, precomputed random data set. You can use the command ArrayTools[Alias] to obtain such a slice. A simple example would go as follows:

results := Array(1 .. 100):

sampler := Sample(Uniform(0, 1)):

for i to 10 do
  # Generate enough random data for 10 computations
  sample := sampler(10^6):

  for j from 0 to 9 do 
    # Operate on the jth slice of the current (ith) sample
    slice := ArrayTools[Alias](sample, j*10^5, [10^5]):

    results[10 * (i-1) + j + 1] := Mean(slice):
  end do:
end do:

You could also try to use Statistics[Bootstrap]. That would work as follows with the example above:

X := RandomVariable(Uniform(0, 1));
Bootstrap('Mean', X, samplesize=10^5, replications=100, output=array);

However, a quick experiment suggests that this procedure does not reuse memory efficiently either, so I'd suggest going with the approach above.

I will try revisiting this post in a couple days, to see if you've posted a reply with more details.

All the best,

Erik Postma

Hi Willie,

In general, Maple will assume that any variable you use can have a complex value. If one of them is real (as in your example) then you can specify this using the "assuming" facility. So you could use something like:

solve({f(U, V) = 0, Re(g(U, V)) = 0}) assuming V :: real;

If this does not give the desired answer, feel free to post again with more details about your system of equations.


Erik Postma

If you know beforehand that you will need to add 100 to each entry, then this is equivalent:

X := RandomVariable(Normal(100, 1));
A := Sample(X, 1000):
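Equivalently, you could shift a standard normal sample elementwise afterwards (using Maple 13's tilde syntax); the result has the same distribution:

A := Sample(Normal(0, 1), 1000) +~ 100: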


Erik Postma

Hi tbasket,

It seems to me that the best way to go about generating correlated samples of normal distributions with mean and standard deviation different from 0 and 1, respectively, would be to first generate standard normal samples with the right correlation and then shift and scale those. So for the example that you give, I would do this (using Maple 13's new elementwise operators in the definition of X3 and Y3):


rho := 1/2:   # or whatever
mu := 120:
sigma := 40:

# First create uncorrelated samples
X1 := Sample(Normal(0, 1), 10^5):
Y1 := Sample(Normal(0, 1), 10^5):

# Now create correlated sample with mean 0 and standard dev 1
Y2 := rho * X1 + sqrt(1 - rho^2) * Y1:

# Finally, scale to obtain sample with desired mean and standard dev
X3 := X1 * sigma +~ mu:
Y3 := Y2 * sigma +~ mu:

# Check all properties:
Correlation(X3, Y3);

I am sure you could do it the other way around as well, starting with samples that have the right mean and standard deviation.  We just need to adjust the formula for combining the two samples. To determine the right formulas, you can use Maple's ability to deal with random variables with symbolic parameters:


assume(a :: real, b :: real, c :: real,  mu :: real, sigma > 0, rho >= -1, rho <= 1);
X := RandomVariable(Normal(mu, sigma)):
Y := RandomVariable(Normal(mu, sigma)):
Z := a * X + b * Y + c:

equations := [Mean(Z) = mu, StandardDeviation(Z) = sigma, Correlation(X, Z) = rho];
_EnvExplicit := true;
solve(equations, {a, b, c});

This shows that we can take a = rho, b = sqrt(1 - rho^2), c = mu * (1 - rho - sqrt(1 - rho^2)) and obtain the same result. Let us verify this:


rho := 1/2:   # or whatever
mu := 120:
sigma := 40:
a := rho:
b := sqrt(1 - rho^2):
c := mu * (1 - rho - sqrt(1 - rho^2));

# Create original uncorrelated samples
X1 := Sample(Normal(mu, sigma), 10^5):
Y1 := Sample(Normal(mu, sigma), 10^5):

# Create correlated sample
Y2 := a * X1 + b * Y1 +~ c:

# Verify the properties
Correlation(X1, Y2);

Best regards,

Erik Postma

Hi Vitro,

Thank you for reporting this. As a first point, let me say that I'm afraid you've run into an inaccuracy in the documentation -- or at least a spot where it's rather unclear. The issue is with the frequencyscale option that you already discussed. The help page for Statistics[Histogram] states:

frequencyscale=relative or absolute
This option controls whether the absolute or relative data frequencies should be plotted. If frequencyscale is set to relative then the histogram will be rescaled so that the total area covered by the bars is equal to 1.

This seems to imply that the default is absolute. That's not true; the default is relative. We will try to make this clearer in the next version of Maple.


Now for the actual issue you were discussing. I tried running the commands you posted:

  • If I add the option frequencyscale = absolute, I get bars running up to integer values; that seems to be correct.
  • If I don't, as you say, the first bar is about 0.8 units high. According to the help page, the total area of the bars added together should be one; we could verify that this is true. First, we need to assign the plot data structure to a variable.

    dt := [2.5, 3.9, 2.9, 2.4, 2.9, .8, 9.1, .8, .7, .6, 7.9, 1.5, 1.8, 1.9, .8, 6.5, 1.6, 5.8, 1.3, 1.2, 2.7];
    histogram := Statistics[Histogram](dt);

    Now we need to get at the list of coordinates of the bars.

    bars := select(type, convert(op(histogram), list), listlist);

    We now compute the area of these bars.

    barArea := proc(polygon :: listlist, $) :: nonnegative;
    local xCoordinates, yCoordinates;
        xCoordinates := map(point -> point[1], polygon);
        yCoordinates := map(point -> point[2], polygon);
        return (max(xCoordinates) - min(xCoordinates)) * (max(yCoordinates) - min(yCoordinates));
    end proc;

    add(barArea(bar), bar in bars);

    This returns 1 for me (with a small rounding error); I tried it in Maple 12. So this seems to also conform to the documentation.

A separate issue is whether either of the two options does what you are trying to achieve. Maybe you would like the heights, instead of the areas, of the bars to add up to one? I don't think that Statistics[Histogram] can do that directly, but we can use plottools[transform] to do it for us: we use the histogram with absolute values and divide the height of the bars by the number of points. First let's get the histogram.

dt := [2.5, 3.9, 2.9, 2.4, 2.9, .8, 9.1, .8, .7, .6, 7.9, 1.5, 1.8, 1.9, .8, 6.5, 1.6, 5.8, 1.3, 1.2, 2.7];
histogram := Statistics[Histogram](dt, frequencyscale = absolute);

Now we use the transformation.

transformation := plottools[transform]( (x, y) -> [x, y / nops(dt)] ):
transformation(histogram);

This gives a plot where the leftmost bar is approximately 0.24 units high.

I hope that this answers your question; if not, feel free to ask for more detail.


Erik Postma.
