epostma

I am the manager of the Mathematical Software Group, working mostly on the Maple library. I have been working at Maplesoft since 2007, mostly on the Statistics and Units packages and on contract work with industry users. My background is in abstract algebra, in which I completed a PhD at Eindhoven University of Technology. During my studies I was always searching for interesting questions at the crossroads of math and computer science. When I came to Canada in 2007, I had nothing but a work permit, some contacts at Maplesoft (whom I had met at a computer algebra conference a year earlier), and a plan to travel around beautiful Canada for a few months. Since Maplesoft is a company that solves more interesting math and computer science related questions than just about anywhere else, I was quite eager to join them, and after about three months, I could start.

MaplePrimes Activity


These are answers submitted by epostma

The parameters in your expression to be fitted, Amplitude, seem to be x, t, n, and S (after, as Axel pointed out, you substitute I for i). However, in the call to NonlinearFit, you give parameter ranges for a wholly different set of parameters - De, S, and tau. That doesn't work - you should either rewrite Amplitude to depend (only) on De, S, and tau, or supply ranges for (only) x, t, n, and S.
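To illustrate the shape of the call (the data, model, and ranges below are entirely made up; the point is just that the names in parameterranges must be exactly the names that occur as parameters in the model):

with(Statistics):
X := Vector([1, 2, 3, 4, 5], datatype = float):
Y := Vector([2.0, 1.2, 0.8, 0.5, 0.3], datatype = float):
model := a*exp(-b*x):                                     # parameters a, b; independent variable x
NonlinearFit(model, X, Y, x, parameterranges = [a = 0 .. 10, b = 0 .. 2]);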

Hope this helps,

Erik Postma
Maplesoft.

Hi alix,

This type of thing is usually easier in Maple with expressions than with functions. If you have an expression

expr := x + 3;

then you can get the operands x and 3 using ?op:

op(expr)

will return x, 3. The function combining these to get expr back can be obtained with op(0, expr); it will return `+` in this case.
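For instance, you can put the pieces back together like this:

expr := x + 3:
f := op(0, expr):     # the `+` function
f(op(expr));          # applies `+` to x, 3 and rebuilds x + 3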

Things get a little complicated if there are data structures such as ?tables or ?Vectors involved, or indeed procedures such as in your example.

Hope this helps,

Erik Postma
Maplesoft.

You can't really define w as both a bivariate function w(ξ,s) and a univariate function w(ξ). I'd suggest using notation like

w2 := (ξ,s) -> N(s) . w1(ξ);

where w2 is the bivariate and w1 the univariate function. Note that I wrote the function definition in the unambiguous arrow notation; writing it as

w2(ξ,s) := N(s) . w1(ξ);

may work, depending on your settings, but the former always does. Note also the use of the period (.) for matrix and vector multiplication.

If you show us a little more of what you're trying to achieve, I bet we'll find that it's even better not to define w2 itself, but to use only a definition equation

w2def := w2 = (ξ,s) -> N(s) . w1(ξ);

and then use eval(something, w2def) in some well-chosen spots. This way, you can control when Maple applies this definition of w2; otherwise, it is always applied immediately when Maple encounters the function w2 and you will never get a result that involves w2.
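For example (a sketch; the expression here is made up, and N and w1 stand for whatever you have defined them to be):

expr := w2(a, s) + w2(b, s):    # some intermediate result still containing w2
eval(expr, w2def);              # only at this point is the definition of w2 applied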

As to your other question: this is not always easy. We would need to see a bit more of the problem you're trying to solve in order to answer it. By the way, it may or may not work to use the same symbol s both for the subscript of x and for its argument; if you substitute a value like 2 or 5 for s, you would get x[2](2) and x[5](5), not x[s](2) or x[s](5). It might be better to write the name as a single unsubscripted word, say xs, and use xs(s) - Maple will never break up a single "word" like that. (There are other solutions involving things called "inert identifiers" - but I'd advise doing that only if it's really worth the added complexity.)

Hope this helps,

Erik Postma
Maplesoft.

CartProdSeq := proc(L::seq(list))
local Seq,i,j;
option `Copyright (C) 2007, Joseph Riel. All rights reserved.`;
    eval([subs(Seq=seq, foldl(Seq
                              , [cat(i,1..nargs)]
                              , seq(cat(i,j)=L[j],j=nargs..1,-1)
                             ))]);
end proc:
minsum := proc(ls :: list(posint), $)
local pmvectors;
  pmvectors := CartProdSeq([1, -1] $ nops(ls));
  return min(abs~(map(`.`, map(Vector, pmvectors), Vector(ls))));
end proc:
minsum([2,1,4]);
                                   1

Note that CartProdSeq is Joe's brainchild from http://www.mapleprimes.com/posts/41838-Cartesian-Products-Of-Lists.

If you need not the result, but the coefficients, you can do something like

minsumcoeffs := proc(ls :: list(posint), $)
local pmvectors, results, position;
  pmvectors := CartProdSeq([1, -1] $ nops(ls));
  results := abs~(map(`.`, map(Vector, pmvectors), Vector(ls)));
  member(min(results), results, 'position');
  return `%+`(op(pmvectors[position] *~ ls));
end proc:

This will return an inert addition (in terms of `%+`), which you can convert to a regular sum by calling  ?value  on it.
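For example (which of the equally good sign combinations you get back may vary):

minsumcoeffs([2, 1, 4]);    # an inert sum such as `%+`(2, 1, -4)
value(%);                   # the corresponding signed sum; its absolute value equals minsum([2, 1, 4])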

Hope this helps,

Erik Postma
Maplesoft.

(Edit: typo in minsumcoeffs)

Hi Sam54,

The code you posted seems to work for me - if I run s("abc"), I see 'counts' being updated. Did you want to return the table counts? Then add the statement

return eval(counts);

just before the final end of the procedure. (Usually if you want to return a table it's best to evaluate the table name to the actual table; otherwise the user of your command will get the local variable counts as the name for the table, which might be confusing, especially if she calls your procedure multiple times.)
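For illustration, here is a minimal character-counting procedure along those lines (the counting logic is just a guess at what your s might be doing):

s := proc(str :: string)
local counts, c;
    counts := table(sparse):                  # unassigned entries evaluate to 0
    for c in StringTools:-Explode(str) do
        counts[c] := counts[c] + 1;
    end do;
    return eval(counts);                      # return the table itself, not the local name
end proc:
s("abc");                                     # a table with entries "a" = 1, "b" = 1, "c" = 1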

Hope this helps,

Erik Postma
Maplesoft.

Hi Pete3431,

I would set things up in such a way that as soon as a non-integer value is produced, the search stops immediately rather than continuing. If you prefer the approach of defining a procedure with a remember table, then we can keep the definition of x as is (including the assignments to x(0), x(1), x(2)). After that, I would do:

testIntegers := proc(seqname :: appliable,
                     {rng :: range(integer) := 0 .. 20},
                     $)
local i;
    for i from lhs(rng) to rhs(rng) do
        if not type(seqname(i), 'integer') then
            return false;
        end if;
    end do;

    return true;
end proc;

Then you can ask for testIntegers(x), or, if the first 21 values are not enough, testIntegers(x, 'rng' = 0 .. 30) to test the first 31. It will return true in both cases, because those entries are all integers, and both calls are relatively quick. More importantly, if we run forget(x) to clear the stored values of x and then use new initial conditions 1, 2, 3, the computation of x(5) will already trigger stopping the loop.

In this way you get the best of both worlds: it's reasonably fast if indeed the sequence consists of only integers (because the in-memory size of the integers involved is a lot smaller than the size of fractions), and typically even faster if there are non-integer values.

If you'd be OK with tracking integer or non-integer status only until the values get too large to fit into an 8-byte signed integer (about 10^18), we could look at optimizing the code further to be able to use the Compiler.

Hope this helps,

Erik Postma
Maplesoft.

They're called environment variables. See the help page.

Hope this helps,

Erik Postma
Maplesoft.

Hi Pete,

Use of the do loop should be fine. Initially I thought it might be the use of the data structure that's causing the slowness - for example, if you're building a list element by element. See e.g. this section of the programming manual for more information. It will be much faster if you use a mutable data structure, such as a Vector. A first version would look like this:

xVector := proc(n :: integer, x1, x2, x3)
local i, x;
  x := Vector(n, [x1, x2, x3]);
  for i from 4 to n do
    x[i] := (x[i-1]^2*x[i-2]^2+x[i-2]^2+x[i-1]^3)/x[i-3];
  end do;
  return x;
end proc;

When I ran this code, I realized that the real issue with this computation is probably that e.g. xVector(15, 1,2,3) (the Vector of x1 ... x15 starting with x1 = 1, x2 = 2, x3 = 3) already has pages and pages of digits to represent the last entry. This is also the reason for the bad performance - I'm revising my statement above: Maple needs to deal with huge integers for the numerators and denominators of those fractions, and that simply takes a lot of time, no matter how you arrange the computation. This suggests that fractions are maybe not the right thing to use. What exactly do you want to know about these sequences? Is it just whether a non-integer turns up eventually? That might be easier to compute. Or would you be happy with a floating point approximation? xVector(15, 1., 2., 3.) returns much more quickly (the last entry now shows up as approximately 0.54E207363).

Normally, if there is a question about speeding a computation up, the first thing to try is to see if we can create a version of the code that ?Compiler/Compile can handle. However, that will only work with either hardware floats (which will run out long before 10^207363) or integers of at most 64 bits. Another option would potentially be to study the values modulo a large prime, but I'm not sure if there's anything useful for your application that you can find from such results.
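In case the modular route is useful, here is a sketch of what that could look like (the recurrence is the one from your question; the prime is arbitrary, and the computation will raise an error if an entry happens to be divisible by it):

xVectorMod := proc(n :: integer, x1, x2, x3, p :: posint)
local i, x;
  x := Vector(n, [x1, x2, x3]);
  for i from 4 to n do
    # the division is performed modulo p, using the modular inverse of x[i-3]
    x[i] := (x[i-1]^2*x[i-2]^2 + x[i-2]^2 + x[i-1]^3) / x[i-3] mod p;
  end do;
  return x;
end proc:
xVectorMod(100, 1, 2, 3, nextprime(10^9));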

If you need symbolic values for members of the sequence (such as computed by Axel's answer), then it's going to be tricky - even, say, the 8th entry is a mind-bogglingly huge expression.

Hope this helps,

Erik Postma
Maplesoft.

For linear programming, the thing you want to optimize needs to be an expression that is linear in all your parameters (with numerical coefficients). There's no way that you can express the fifth percentile of that data set in that way (unless all rows of R happen to be positive multiples of each other). I'd recommend looking at the GlobalOptimization toolbox (or DirectSearch), unless a local optimum is good enough for you, in which case you should set it up as an operator problem with ?Optimization/NLPSolve.

Hope this helps,

Erik Postma
Maplesoft.

Hi tieuvodanh,

Thanks for your report. The ProbabilityTable issue was fixed for Maple 13 (and later versions as well, of course). It was definitely a bug - you should be able to use any unassigned (and unprotected) global names you like that don't start with an underscore.

The difficulty with debugging that you describe is, I think, pretty much universal: debugging is simply hard. The first thing I typically do after an error is call ?tracelast, to see whether any of the last few procedures on the stack had obviously bogus arguments, and then start from the last one that still had good input. A typical complication for Statistics commands is that the user-accessible commands all have a try ... catch block at the top level which rethrows errors coming from the inside, so that it is clear to the user which command caused the error (an internal procedure name may not mean much to them). To work around that, you might want to run stopat on the top-level Statistics command, step 'into' the calls until execution jumps to the catch block, note which internal command was being called at that point, then call that internal command directly and restart your debugging process from there. This will often require setting kernelopts(opaquemodules = false).
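In terms of concrete commands, that workflow might look roughly like this (Mean is just a stand-in for whichever command raises the error, and bad_input for the arguments that trigger it):

kernelopts(opaquemodules = false):     # allow inspecting module locals
stopat(Statistics:-Mean):              # break on entry to the top-level command
Statistics:-Mean(bad_input);           # re-run the failing call inside the debugger
# step with 'into'/'next' until execution jumps to the catch clause, note which
# internal procedure was being evaluated at that point, then stopat() that
# procedure and start over from there.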

Hope this helps,

Erik Postma
Maplesoft.


Do you actually want matlab notation (as also suggested by the .m ending of the file name)? Otherwise you could simply do

fprintf(fd, "%a\n", mylist);
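(Here fd is a file descriptor obtained from ?fopen; a minimal, self-contained version, with a made-up file name, would be:)

fd := fopen("output.txt", WRITE):
fprintf(fd, "%a\n", mylist):
fclose(fd):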

If you need matlab notation but the domain is as restricted as you say, i.e. rational function equations, you could just create the strings by hand, using a recursive (possibly overloaded) procedure or something similar. It won't necessarily be extremely fast, though. If that's OK, I can elaborate. I don't think it's easy to make the floats that CodeGeneration outputs look particularly nice, so if that's what you're after, it might be more difficult.

Hope this helps,

Erik Postma
Maplesoft. 

Hi JerseyJohn,

It looks like you're using the regular variable e to represent the base of the natural logarithm. That doesn't work in Maple - it's just another variable. So Maple is solving a much more general differential equation.

You can use several different notations for specifying the base of the natural logarithm. The easiest one is often using the function ?exp. If I replace e^(-x) by exp(-x) in your input, I get the same answer that you show.
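For example (a made-up equation, just to illustrate the notation):

ode := diff(y(x), x) + y(x) = exp(-x):
dsolve(ode, y(x));     # the solution involves exp(-x), e.g. y(x) = (x + _C1)*exp(-x)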

Hope this helps,

Erik Postma
Maplesoft.

Hi sama,

The RKF45 implementation in dsolve/numeric uses a variable step size: it takes smaller steps where necessary and larger steps where possible. See ?dsolve/numeric/Error_Control for details. In principle you could use the minstep and maxstep options to constrain the step size, which might be instructive from an educational point of view. However, if you want the best tradeoff between accuracy and computation effort, let Maple vary the step size freely and just set relative and absolute error targets.
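For example (the equation and tolerances here are placeholders):

sol := dsolve({diff(y(x), x) = -x*y(x), y(0) = 1}, numeric, method = rkf45,
              abserr = 1.0e-8, relerr = 1.0e-8):
sol(2.0);              # evaluate the numerical solution at x = 2
# to constrain the step size explicitly, you could add options such as
# minstep = 1.0e-6 and maxstep = 0.1 to the dsolve call.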

Hope this helps,

Erik Postma
Maplesoft.

Try something like the following:

with(GraphTheory):
t := SpecialGraphs:-CompleteBinaryTree(3);
DrawGraph(t);

You would first need to convert your data into the GraphTheory data structure; you can probably use the ?GraphTheory:-Graph command for that. Another option would be to start from the complete binary tree, then start removing vertices (and probably use ?GraphTheory:-RelabelVertices to give vertices a different label than just the number they have by default). This would have the advantage that a tree-like layout is built in; if starting from GraphTheory:-Graph, you would have to be content with one of the layouts that are found automatically, or set the coordinates manually for each vertex.
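To give an idea of the first route (the vertex names and edges are made up; the tree layout assumes the graph really is a tree):

g := Graph({{"root", "L"}, {"root", "R"}, {"L", "LL"}, {"L", "LR"}}):
DrawGraph(g, style = tree, root = "root");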

Hope this helps,

Erik Postma
Maplesoft.

An easy but very limited solution: ?plottools/transform. It's not entirely turn-key: you have to specify the projection explicitly, independently of the orientation of the 3-D plot. Say you want to project onto the plane spanned by <1,0,1> and <0,1,-2>. Then you first construct an orthonormal basis for that plane and the corresponding projection operator:

project := proc(u, v)
local uu, vv;
    uu := u / LinearAlgebra:-Norm(u, 2);     # normalize the first spanning vector
    vv := v - (v . uu) * uu;                 # remove the component of v along uu
    vv := vv / LinearAlgebra:-Norm(vv, 2);   # and normalize the remainder
    return w -> [w . uu, w . vv];            # coordinates of w in this orthonormal basis
end proc:
pp := project(<1,0,1>, <0, 1, -2>);
                       w -> [w . uu, w . vv]
pxyz := pp(<x, y, z>) assuming real;
        [1/2*2^(1/2)*x + 1/2*2^(1/2)*z, 1/3*3^(1/2)*x + 1/3*3^(1/2)*y - 1/3*3^(1/2)*z]

Now you can use this to create a transform. Unfortunately, the 'shading = zhue' option has no equivalent in 2D plots; and worse, the corresponding element is not even taken out of the plot structure. We can do that ourselves with eval, though.

pt  := plottools:-transform(unapply(pxyz, [x, y, z])):
eval(pt(p), SHADING=(()->()));

Here p is the plot you defined above. Alas, since the shading is gone, you really can't see anymore what's going on...

Hope this inspires a better solution,

Erik Postma
Maplesoft. 
