
Editor-in-Chief of Maple Transactions (www.mapletransactions.org), longtime Maple user (1st use 1981, before Maple was even released). Most obscure piece of the library that I wrote? Probably `convert/MatrixPolynomialObject` which is called by LinearAlgebra[CompanionMatrix] to compute linearizations of matrix polynomials in several different bases. Do not look at the code. Seriously. Do not look. You have been warned.

MaplePrimes Activity

These are replies submitted by rcorless

I think what you want is the RealTriangularize command in the RegularChains package.  I am trying it now; it's taking a while because it's using exact arithmetic.  In the end, if it succeeds, it will give exactly all the solutions (and a proof that there are no more).  If it doesn't succeed, I will ask the designer (Marc Moreno Maza) if he can make it work with some extra polishing of the input somehow.
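In case it helps while that runs, a toy calling sequence looks something like this (my own illustrative system, not yours, and written from memory, so do check the RealTriangularize help page):

```maple
with(RegularChains):
R := PolynomialRing([x, y]):
# Exactly describe all real solutions of a small polynomial system
sys := [x^2 + y^2 - 1, x - y]:
RealTriangularize(sys, R);
```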

@justauser We rewrote the modified equations code at least twice since 1997.  The most recent version is, however, ten years old (it's in my 2013 book with Nic Fillion). 

Here are two implementations from 2015




These are from the paper "Variations on a Theme of Euler" which is available for free at



You will see that the code is not at all general: it is applied to a single first order equation.  It is, however, extremely simple.

I should work up some general code, but most of my work nowadays is "ad hoc".
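To give the flavour, here is a fresh ad hoc sketch of my own (not the code from the book or the paper): for explicit Euler applied to y' = f(y), posit a modified field Y' = f(Y) + h*g1(Y) and match Taylor expansions in the stepsize h.

```maple
# Leading term of the modified equation for explicit Euler on y' = f(y)
F := y -> f(y) + h*g1(y):              # candidate modified field, to O(h)
# Taylor expansion of the exact flow of Y' = F(Y) over one step of size h
flow := y + h*F(y) + h^2/2*D(F)(y)*F(y):
euler := y + h*f(y):                   # the numerical map
residual := expand(flow - euler):
solve(coeff(residual, h, 2) = 0, g1(y));
# gives g1(y) = -(1/2)*D(f)(y)*f(y),
# i.e. Y' = f(Y) - (h/2)*f'(Y)*f(Y) + O(h^2)
```

Matching at the next order (add h^2*g2(Y) to F and one more Taylor term to flow) proceeds the same way.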



Yes, I have done work on modified equations :)

I have a few papers, but the most recent stuff is in Chapter 13 of my book with Nic Fillion.

Chapter 11 is on finite differences.


If your library has the Springer Bundle, then you will have free access to the PDF of the book (and can buy a paperback copy for $25, or 25 euros if you are in Europe, or 25 pounds if you are in the UK).

What I am hearing from you is that there is scope for a post here on Maple Primes on this, or maybe a paper in Maple Transactions.  

Ok, will put it on my list.  If you have a specific example that you would like to analyze, I'd like to try that.  Modified equations were first invented for PDE but they work better for ODE.



I've dug it up, but not the worksheet it came from (I have the Maple program for it).  It might be of interest; I'll try to update it for Maple 2023.  It was based on 1995 work with Jacek Rokicki of TU Warsaw, who sadly passed away not too long ago.  It was updated in 2004 and 2005 by Jichao Zhao and myself; we used it for compact finite difference schemes in finance and in cardiac modelling.


The trouble with this kind of work is usually notation: what one person likes and understands, another person thinks is opaque and clumsy.  At least I have some historical quotes and references in the talk, anyway.


@sursumCorda Any nontrivial computer program has bugs.  It is *always* the user's responsibility to ensure correctness and fidelity.

(That maxim is well-known: the corollary is that if your program is bug-free, it is trivial :)

@Zeineb I see no variable Pe.  Later you say that one should take Z to infinity.  But you don't specify anything about the functions, so taking the limit of (say) f( x, y*Z^(1/3)) depends on the asymptotics of f as its second variable goes to infinity.  So I think that you have not specified enough information for anyone to get an answer.


Unless I am misunderstanding.

@abdulganiy I learned how to do this many years ago in an undergraduate numerical analysis course.  This was about the time that the wonderful book by Ascher, Mattheij, and Russell was being written on the subject.  Learning from that book would be the most thorough way to learn this material even today. https://epubs.siam.org/doi/book/10.1137/1.9781611971231


For a gentler introduction, you might do well to work from other sources.  My own chapter (Chapter 13 of A Graduate Introduction to Numerical Methods) is very short, gives only the barest introduction, and depends heavily on other material.  There are the wonderful videos by Gil Strang and Cleve Moler (they start gently but get to interesting places very quickly): https://www.youtube.com/watch?v=ghjOS7Q82s0 

(full course at https://ocw.mit.edu/courses/res-18-009-learn-differential-equations-up-close-with-gilbert-strang-and-cleve-moler-fall-2015/ )

More references at http://www.scholarpedia.org/article/Boundary_value_problem
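For concreteness, here is a minimal two-point BVP in Maple (a toy example of my own; the exact solution is sin(x)/sin(1), so you can check the numbers against it):

```maple
ode := diff(y(x), x, x) = -y(x):
bcs := y(0) = 0, y(1) = 1:
sol := dsolve({ode, bcs}, numeric):  # dsolve recognizes the BVP automatically
sol(0.5);                            # compare with evalf(sin(0.5)/sin(1))
```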


@Pepini Following the instructions in the "parse" help page, it works for me (to make the last one work, one would have to insert a * in front of the microamp symbol, or otherwise deal with it).  Of course you can put the answers wherever you want; I just put them in a table called "piece" for convenience.  


This is an example of a problem with a "mass matrix", of the form M f' = K f  where M and K are both matrices.  In your case the matrix is a Vandermonde matrix (using nodes 1, 1/3, and 2/3).  Applying RK2 to the system *without* inverting M is an interesting thing to do; this approach (not with RK2 but with other RK methods) leads to good methods for stiff systems.

Is that what you were thinking about?  

If you use the approach for larger dimension, the condition number of the matrix M gets exponentially large, although there are very good and stable ways to solve it even so (I can point you to papers by Demmel and Koev if you are interested).  
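To sketch what I mean (illustrative names and random data standing in for your actual K; one explicit midpoint step, solving with M at each stage rather than inverting it):

```maple
with(LinearAlgebra):
nodes := [1, 1/3, 2/3]:
M := Matrix(3, 3, (i, j) -> nodes[i]^(j - 1)):  # the Vandermonde mass matrix
K := RandomMatrix(3, 3):                        # stand-in for your K
f0 := Vector([1, 1, 1]):  h := 0.1:
# One explicit midpoint (RK2) step for M . f' = K . f
k1 := LinearSolve(M, K . f0):
k2 := LinearSolve(M, K . (f0 + (h/2) * k1)):
f1 := f0 + h * k2:
```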

A nice kind of problem to think about.  Might investigate later.

@Joe Riel thanks for "thisproc".  I did use "procname" before but I was genuinely mistaken as to its meaning.  The keyword "thisproc" is correct.


g := proc(n)
  option remember;
  if n < 2 then
    1;
  else
    procname(n - 1) + procname(n - 2);
  end if;
end proc;

G := Compiler:-Compile(g);

CodeTools:-Usage( G(40) ); # Takes 1.36 seconds on my machine (!)

CodeTools:-Usage(g(40)); # Records 0ns.  Um, what?

Now, the first time G is invoked it might need something.  So if one progressively calls G(10), G(20), G(30), one sees the time creep up.  Then G(40) takes 1.3 seconds.

Or, don't bother.  It still takes about the same time, vastly longer than the uncompiled version.

Perhaps the lesson is, only compile more expensive programs?  There's some kind of overhead involved here?  Except that the larger the input, the longer it takes.  I suspect that the problem is "option remember" here, which is not being respected by the compiler.

This seems to be correct.  The "option remember" is being ignored, and the cost becomes exponential in n (as is well-known).  A non-recursive g works much better:

g := proc(n::nonnegint)
  local k, g0, g1, g;
  g0 := 1;  g1 := 1;  g := 1;  # g := 1 covers the n < 2 cases
  for k from 2 to n do
    g := g0 + g1;
    g0 := g1;
    g1 := g;
  end do;
  return g;
end proc;

G := Compiler:-Compile(g);

Now G(91) gives the correct answer (no rounding error!  which makes sense: the compiled code is working in hardware integers, not floats), but G(92) does not give a negative answer; instead it raises the overflow exception. 

Also the time taken is not detectable by CodeTools:-Usage; too fast.

Ok, thanks; I learned something.


Aha! I used procname later, not thisproc. I shall investigate the differences, if any.

@Kitonum add( (-1)^(n+1)/(2*n-1), n=1..4) is *much* more intelligible than your (admittedly correct, just obscure) solution :)

"sum" does symbolic summation

"add" is (like seq) a command with special evaluation rules but is meant for adding together a fixed number of terms.


This is not an ordinary differential equation, but a functional differential equation instead.  Other examples include simple delay differential equations like y'(t) = a*y(t-1), for which you have to specify the "history" on an interval (say 0 <= x <= 1).  It gets complicated.  More fun functional equations include things like y'(t) = a*y(t/2), which can be started from t=0 (it doesn't need a history).  Yours is of a type I have never seen before: the derivative at x=0 depends on the behaviour of the function at x=infinity!  If you specified the function on 0 < x <= 1 then it looks like it might be solvable on x > 1, but I am not so sure. 
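For the pantograph-type example y'(t) = a*y(t/2) with y(0) = 1, matching Taylor coefficients gives the recurrence (n+1)*c[n+1] = a*c[n]/2^n, which is easy to iterate (a throwaway sketch of my own):

```maple
# Taylor coefficients of the solution of y'(t) = a*y(t/2), y(0) = 1
c[0] := 1:
for n from 0 to 4 do
  c[n + 1] := a * c[n] / ((n + 1) * 2^n);
end do:
add(c[n] * t^n, n = 0 .. 5);
# begins 1 + a*t + (a^2/4)*t^2 + ...
```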

I'll think about this a little bit now---might try a Laplace transform, for instance---but yes as has already been observed, f(x) = 0 identically is a solution to this equation (but I have no idea if there are any others).

@Carl Love the word "height" was more commonly used for polynomials; there is a paper by Mahler for instance, but maybe the usage goes back to Littlewood.  The "height" of a polynomial is the infinity norm of the vector of coefficients (usually in the monomial basis); its "length" is the 1-norm.  In other words, the height of a polynomial was the largest absolute value of any coefficient.

Similarly, the height of a matrix is the largest absolute value of any entry.  If the entries are all integers, then there is a minimal possible height as well (and this matters: of course one can scale any matrix so its height is 1; but only at the cost of possibly introducing small entries).  The zero matrix has height 0 (and it's the only one).  A matrix like the Mandelbrot matrix of https://doi.org/10.5206/mt.v1i1.14037 has height 1, and this is pretty obviously the smallest interesting height.  It's astounding that the characteristic polynomial has height (we say "characteristic height", to distinguish from the height of the matrix) that is exponential in the degree, and is thus vastly ill-conditioned for evaluation and rootfinding.
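In Maple terms (a throwaway illustration; note that `length` is a protected name, hence the abbreviations):

```maple
p := 3*x^4 - 7*x^2 + 2:
ht  := max(map(abs, [coeffs(p, x)]));     # 7: the largest |coefficient|
len := add(abs(c), c in [coeffs(p, x)]);  # 12: the 1-norm of the coefficients
A := Matrix([[1, -5], [0, 2]]):
max(map(abs, A));                         # 5: the height of the matrix
```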


The name "Bohemian" comes from the acronym "BOunded HEight Matrix of Integers" (BOHEMI) which is close enough.


intsolve was written in 1991, and fracdiff in 2004.  I'm kind of pleased that MapleMathMatt managed to force intsolve to work with fracdiff.  Nice work!
