I am the manager of the Mathematical Software Group, working mostly on the Maple library. I have been working at Maplesoft since 2007, mostly on the Statistics and Units packages and on contract work with industry users. My background is in abstract algebra, in which I completed a PhD at Eindhoven University of Technology. During my studies I was always searching for interesting questions at the crossroads of math and computer science. When I came to Canada in 2007, I had nothing but a work permit, some contacts at Maplesoft (whom I had met at a computer algebra conference a year earlier), and a plan to travel around beautiful Canada for a few months. Since Maplesoft is a company that solves more interesting math and computer science related questions than just about anywhere else, I was quite eager to join them, and after about three months, I could start.

MaplePrimes Activity

These are answers submitted by epostma

Hi itsme,

The integration scheme used is Euler-Maruyama. This is implemented in QuantLib (we use version 0.3.11, by the way) in what their documentation calls the Monte Carlo framework -- in particular, in the PathGenerator<...> class. Their documentation page for this framework contains the following fragment:

The Black-Scholes equation

diff(f,t) + sigma^2/2*diff(f,x,x) + nu*diff(f,x) - r*f = 0

where r is the risk-free rate, sigma is the volatility of the underlying, and nu = r - sigma^2/2, has the form of a diffusion process. According to this heuristic interpretation, paths followed by the logarithm of the underlying would be Brownian random walks with a constant drift nu per unit time and a standard deviation sigma*sqrt(T) over a time T.

Therefore, the paths to be generated for a Monte Carlo model of the Black-Scholes equation will be vectors of successive variations of the logarithm of the underlying price over M consecutive time intervals Delta t_i, i = 0 ... M-1. Each such variation will be drawn from a Gaussian distribution with average nu Delta t_i and standard deviation sigma sqrt(Delta t_i) --- or possibly nu_i Delta t_i and sigma_i sqrt(Delta t_i) should nu and sigma vary in time.

That sounds like a pretty decent explanation of Euler-Maruyama to me.

Just to clarify: Wikipedia seems to suggest that Euler-Maruyama means that all time intervals are the same, but that's not the case for QuantLib: you can use any sequence of time points you like.
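
For concreteness, here is a minimal sketch of that scheme in Maple (my own illustration, not QuantLib's implementation): the logarithm of the underlying is advanced over an arbitrary grid of time points using independent Gaussian increments.

```maple
with(Statistics):

# Euler-Maruyama for d(log S) = nu*dt + sigma*dW on a (possibly non-uniform)
# grid of time points.
EMPath := proc(x0, nu, sigma, times :: list)
local Z, path, dt, i;
    Z := Sample(Normal(0, 1), numelems(times) - 1);
    path := Vector(numelems(times), 'datatype' = float);
    path[1] := x0;
    for i from 2 to numelems(times) do
        dt := times[i] - times[i - 1];
        path[i] := path[i - 1] + nu * dt + sigma * sqrt(dt) * Z[i - 1];
    end do;
    return path;
end proc:

EMPath(0., 0.03, 0.2, [0., 0.1, 0.3, 0.6, 1.0]);  # one sample path of log S
```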

Hope this helps,

Erik Postma

Hi Bornin1992,

If you need to know the probability of more complicated events, you can use the ?Statistics/Probability command:

with(Statistics):
X := RandomVariable(Binomial(10, 1/2)):
Y := RandomVariable(Geometric(2/3)):
Probability(X + Y < 7);
# returns an exact fraction
Probability(X + Y < 7, 'numeric');
# returns 0.723...
# now to verify:
s := Sample(X + Y, 10^6):
SelectInRange(s, -infinity .. 6.999);
# should show a vector of roughly 723000 elements

Hope this helps,

Erik Postma

Hi GaoCG,

You'll need to provide initial values for your parameters so that Maple has a starting point for iterating towards a locally optimal solution. You can do this with the initialvalues option explained on the Statistics/Fit help page. Also, there seems to be an extra parenthesis in the call to Fit; I assume that's the one just before ', X, Y, sigma', right?

It's not so easy to find values of the parameters that lead to numbers that are anywhere near reasonable. Maybe you know the actual value of the parameters for some other instance of the model? Since there's a factor of sigma^2-(650*10^6)^2 in the denominator, where sigma^2 is on the order of a few hundred, this is a big negative number; since we need to find values that are positive, we need to compensate for that. One option would be to set A to a big negative number.

Another thing to remember is that there might be bad cancellation going on: because you're subtracting such a big number from sigma^2, if you want to see the effect of sigma on the result, you'll need sufficiently many digits to represent it accurately. It might be useful to increase Digits to a higher number, like 25.
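
As a toy illustration of both suggestions (a and b are placeholder parameters of my own, not the ones from your model):

```maple
with(Statistics):
Digits := 25:   # more working precision, to guard against cancellation
Xd := Vector([1., 2., 3., 4., 5.]):
Yd := Vector([2.7, 7.4, 20.1, 54.6, 148.4]):   # roughly exp(x)
Fit(a*exp(b*x), Xd, Yd, x, 'initialvalues' = [a = 1., b = 1.]);
```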

Hope this helps,

Erik Postma

Hi hossayni,

What does currentdir(); print in these two worksheets? Maple will search for .m files in that directory, and if it doesn't match the location of your files, you either need to specify the path more fully or change the current directory. See ?currentdir.
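
For example (the directory and file name below are hypothetical):

```maple
currentdir();                          # where Maple currently looks for files
currentdir("C:/Users/me/worksheets"):  # change the current directory, or ...
read "mydata.m";                       # ... give a full path in the read call
```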

Hope this helps,

Erik Postma

Hi strawfields93,

The issue is in your use of 2n - if you use 2*n (in all three places where it occurs) it should work. (It did for me, at least.)


Erik Postma

Hi upperhilsdale,

I'm wondering what your input is given as before you turn it into splines: is it an unordered set of data points, or of weighted data points? If it's either of those, then the easiest solution is to use the Statistics package.

# Generate x-values and weights:
X1 := Statistics:-Sample(Normal(5, 2), 10^4):
W1 := Statistics:-Sample(ChiSquare(3), 10^4):
X := cos~(X1) + X1:
W := abs~(sin~(X1 + W1) + W1):

Statistics:-PointPlot(W, 'xcoords' = X);
Statistics:-Median(X, 'weights' = W);

Median gives you the 50% quantile; if you want to explore other values, you can use the Quantile command.
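
For example, continuing with the X and W above (I believe Quantile accepts the weights option in the same way Median does):

```maple
Statistics:-Quantile(X, 0.25, 'weights' = W);  # first quartile
Statistics:-Quantile(X, 0.90, 'weights' = W);  # 90th percentile
```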

If your original data truly represents 'points on a curve', then it's almost certainly best to do what acer suggests, but just for fun you could possibly generate random data points according to the curve and then take the median of those points - however, it's quite tricky to get Statistics:-Sample to work with a piecewise-defined PDF (I didn't manage to make it work in the short time that I tried).


Erik Postma

Hi PrimZor,

You ask how to transform the result into a "usable" result. What would you like to do with your results? For example, if you want to find the values of x^3 - 1 at the solution(s) of your system of equations, you can write:

a1 := solve({x^2=1, x>0}, x);
eval(x^3-1, a1);


a2 := solve({x^2=1, x>-5}, x);
eval~(x^3-1, [a2]);

Unfortunately, you'll need to know beforehand whether there will be 0, 1, or more solutions with this syntax. It's a bit more predictable if you enclose the variable(s) you want to solve for in list brackets:

solve({x^2=1, x>0}, [x]);

returns a list of solutions (just the one in this case), each being a list of equations (also just one in this case) defining the solution. So this way you can always do

a3 := solve({x^2=1, x>0}, [x]);
eval~(x^3-1, a3);
a4 := solve({x^2=1, x>-5}, [x]);
eval~(x^3-1, a4);

and get a list of the values at all the solutions of your equation.

Hope this helps,

Erik Postma

Hi Mac Dude,

The answer really is: it depends. In general, Vectors and Matrices are a safe choice, and for most applications they are probably the best one, so I'd stay with what you're doing now unless you have a pressing reason to switch. If you run into particular performance problems with them, you can always reconsider.

By the way, lists are not a superset of Vectors; they're different animals altogether. All of the types you've posted are distinct. (There is a common term for Vectors and Matrices (and Arrays), though, which is rtable, for rectangular table.)
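
You can see the distinction with the type command:

```maple
type([1, 2, 3], 'list'), type([1, 2, 3], 'Vector');    # true, false
type(<1, 2, 3>, 'Vector'), type(<1, 2, 3>, 'rtable');  # true, true
type(Array([1, 2, 3]), 'rtable');                      # true
```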

Hope this helps,

Erik Postma

On the two machines where I tested it, this seems to be about 10% faster:

listhist2 := proc(L)
    # Pair each distinct entry of L with its number of occurrences in L.
    [seq([i, numboccur(i, L)], i = {op(L)})];
end proc;


Hi mitko,

I don't know if you're interested in getting the execution of this single line of code sped up; it's unlikely to help you much at this point, but I saw an opportunity for improvement and figured I'd let you, and maybe more importantly the rest of the forum, know.

There are in fact two options for speedup that I see here. The first is to use ArrayTools:-Alias instead of convert(..., Vector). In particular, if your Matrix is called M, then the following alternatives will - often - give you the same result, or at least a result that looks the same:

v1 := convert(M, 'Vector');
v2 := ArrayTools:-Alias(M, [numelems(M)]);

There are two differences between v1 and v2. The first is the cause of the word 'often' above. Maple can store Matrix data in a number of ways: influenced by its indexing function (a.k.a. shape; see ?rtable_indexfcn), and its storage and order options (see ?Matrix and ?rtable_options). The default is no indexing function, rectangular storage, and Fortran order; if that's the case, then v1 and v2 will be the same Vector.

The other difference is the cause for ArrayTools:-Alias to be faster - especially if your initial Matrix is large. convert(..., Vector) creates a new Vector and copies the Matrix data into it. ArrayTools:-Alias, on the other hand, simply creates a new reference to the same block of memory, treating it as a Vector now. (Read the warnings on the package: this can go horribly wrong if you strip the attributes off the resulting Vector, and let the original Matrix be destroyed while your new reference is still around.) If you are constructing temporary objects, to be used for the construction of a different object, as you are doing here, then ArrayTools:-Alias is an excellent choice.

This gives a speedup ranging from a factor of 20 for a 3x3 Matrix to almost 6000 for a 300x300 Matrix. (I like to test this sort of thing using the new features of the iterations option of CodeTools:-Usage in Maple 16, where you can say 'iterations = 2*Unit(s)' to have it choose the number of iterations so that it runs for about 2 seconds.)

The other option for speedup is using the kernel function `<|>`, which you normally wouldn't call explicitly but which is the functionality behind the Matrix/Vector shortcut syntax <a | b | c>. (It does this together with its sibling, `<,>`.) In particular, if vs is a sequence of column Vectors, then

M1 := ArrayTools:-Concatenate(2, vs);
M2 := `<|>`(vs);

will return the same Matrices - but in the test I did (with 10 Vectors of 10 elements each), `<|>` was about a hundred times faster. This is because `<|>` is pure kernel code whereas ArrayTools:-Concatenate has a substantial amount of library code.

Erik Postma

Hi mah00,

The case where sigma is degenerate (which is the case you are in) needs special handling. The PDF for this case is not well-defined. Let that sink in: there is no PDF for alpha - it's impossible. Wikipedia has a nice paragraph about the degenerate case.

So you need to consider what the alternatives are. You don't write why you want to know the PDF, but I think what I would recommend in most cases is to find a 12 x n transformation matrix A and the vector of means m such that alpha = A . X + m, where X is an n-dimensional vector of independent standard Gaussians (i.e. mean 0 and standard deviation 1) and A has rank n. You can then do whichever analysis you need to do using X.

You can find this as follows. First, find the vector of means: this is simply the vector containing the ExpectedValue of each alpha_i. Subtract this from alpha to obtain alpha'. Now alpha' has mean 0 in every entry.

Now compute your matrix of covariances with respect to alpha'. Let's call this matrix sigma'. Like sigma, it will be a 12 x 12 symmetric matrix of incomplete rank. That means that it has some singular values that are zero. Compute these with

u, s := LinearAlgebra:-SingularValues(sigma_prime, 'output' = ['U', 'S']);

Check which entries of s are zero; for example, maybe these are numbers 11 and 12. That means the actual information is in entries 1 .. 10, and we have found that n = 10. Now set

A := u[.., 1 .. 10] . LinearAlgebra:-DiagonalMatrix(sqrt~(s[1 .. 10]));

This will be your matrix A. You have alpha = A . X + m; use that instead of alpha.
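
Here is a small self-contained sketch of the whole construction with made-up data (a rank 2 covariance Matrix of size 3, so n = 2 here):

```maple
with(LinearAlgebra):
B := Matrix([[1., 0.], [0., 1.], [1., 1.]]):   # hypothetical 3 x 2 factor
sigma_prime := B . Transpose(B):               # rank 2: one zero singular value
u, s := SingularValues(sigma_prime, 'output' = ['U', 'S']):
n := 2:                                        # s[3] is (numerically) zero
A := u[.., 1 .. n] . DiagonalMatrix(sqrt~(s[1 .. n])):
Norm(A . Transpose(A) - sigma_prime);          # essentially zero
```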

Let us know if you need any more help.

Erik Postma

Hi Kitonum and Xilyte,

Kitonum found a great solution, but you can see that Maple's local search methods can get stuck away from it. I played with the problem for a bit and adapted it to the following approach, where we first fit one exponent's time constant (assuming the other one is twice as large) and then, fixing the first, fit the second exponent:

restart: with(plots): with(Statistics):

X:=<0, 7.17, 18.11, 34.34, 57.95, 91.84, 139.98, 207.94, 303.48, 437.57, 625.87, 890.96, 1265.91, 1800>:
Y:=<.44, .43, .42, .41, .40, .39, .38, .37, .36, .35, .34, .33, .32, .31>:

expr := a+b1*exp(-t/c1)+b2*exp(-t/c2);

fit:=Fit(eval(expr, c2 = 2*c1), X, Y, t, parameterranges=[a=0..1, c1=1..100000], output=[parametervalues, leastsquaresfunction]);
fitplot1 := plot(fit[2], t=0..1800, thickness=2):
graph1 := pointplot(X, Y, symbol = BOX, symbolsize=15, axes = BOXED):
display(graph1, fitplot1);

fit2:=Fit(eval(expr, c1 = 191.),X, Y, t, parameterranges=[a=0..1, c2=1..100000]);
fitplot2 := plot(fit2, t=0..1800, thickness=2, color=red):
graph1 := pointplot(X, Y, symbol = BOX, symbolsize=15, axes = BOXED):
fitplot1 := display(fitplot1, transparency=0.5):
display(graph1, fitplot1, fitplot2);

Hope this helps,

Erik Postma

In the Appendix at the end of the Programming Guide (available on the web), there is a lot of information about tables. In particular, you can see that a Maple-language table is a TABLE dag, the contents of which are in a HASHTAB dag, which in turn consists of HASH dags. So these guys are hash tables. That means they're pretty efficient - the relevant operations are amortized O(1) in principle.
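
If you want to see these dags for yourself, the dismantle command prints the internal representation of any expression (the exact output varies between versions):

```maple
t := table(["a" = 1, "b" = 2]):
dismantle(eval(t));   # prints a TABLE dag containing a HASHTAB of HASH buckets
```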

If you have specific questions about the implementation, let us know.

Hope this helps,

Erik Postma

I think you'd have to write this yourself, but it wouldn't be too much work. The equivalent of the ratweight command would simply write the weights into a table (global, or better, optionally user-supplied), and you'd write a command called something like evaluateWithRatWtLvl that takes an expression and an integer (and optionally the weight table, otherwise using the global one), expands the expression, and throws away terms whose weight is too high.
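
A minimal sketch of what I mean (all names are my own, hypothetical choices; I take the weight of a term to be the weighted sum of its degrees in the weighted variables):

```maple
# Weight of a term: the sum of wts[v] * degree(t, v) over the weighted names.
termWeight := proc(t, wts :: table)
local v;
    add(wts[v] * degree(t, v), v in [indices(wts, 'nolist')]);
end proc:

# Expand e, then drop every term whose weight exceeds lvl.
evaluateWithRatWtLvl := proc(e, lvl :: nonnegint, wts :: table)
local ee, t;
    ee := expand(e);
    add(`if`(termWeight(t, wts) <= lvl, t, 0),
        t in `if`(type(ee, `+`), [op(ee)], [ee]));
end proc:

wts := table([eps = 1]):
evaluateWithRatWtLvl((1 + eps)^3, 2, wts);   # 1 + 3*eps + 3*eps^2
```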

If you'd like, you could write a package that overloads certain operators, like + and ^, to do an automatic call to evaluateWithRatWtLvl after performing their computation. This will only apply to code you type at the top level, though, not to internal computations done in Maple library code.

Hope this helps,

Erik Postma

The parameters in your expression to be fitted, Amplitude, seem to be x, t, n, and S (after, as Axel pointed out, you substitute I for i). However, in the call to NonlinearFit, you give parameter ranges for a wholly different set of parameters - De, S, and tau. That doesn't work - you should either rewrite Amplitude to depend (only) on De, S, and tau, or supply ranges for (only) x, t, n, and S.
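
Schematically, with a toy model of my own (De, S, and tau here just play the role of your parameters, and x is the only independent variable):

```maple
with(Statistics):
model := S * exp(-De * x) + tau:
Xd := Vector([0., 1., 2., 3., 4.]):
Yd := Vector([3.0, 1.74, 1.27, 1.10, 1.04]):   # roughly 2*exp(-x) + 1
NonlinearFit(model, Xd, Yd, x,
    'parameterranges' = [De = 0 .. 10, S = 0 .. 10, tau = 0 .. 5]);
```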

Hope this helps,

Erik Postma
