ChrisRackauckas

MaplePrimes Activity


These are replies submitted by ChrisRackauckas

I think the transformation can always be done, but does it always preserve the index of the DAE?

Mathematica has:

>"MassMatrix": reduce the system to the form m(t).x'(t) == f(t, x(t)) if possible

http://reference.wolfram.com/language/tutorial/NDSolveDAE.html

I wonder if I'm still missing something.

Here is the Excel sheet for the table:

de_solver_software_comparsion.xlsx

>Despite my confirmative proof on RADAU, "you are only inclined to agree with me", perhaps this is a generation gap, and the younger generation is trained to always sound positive and confident despite making errors.

No, how did that sound positive and confident? I said I am inclined to agree with you, meaning that you made a very good point and I need to think about it some more. You didn't provide a proof that the transformation can always be done, but the simple examples I was thinking of 10 minutes before jumping on a flight were all fixed by your method, which makes me think there's a good possibility it covers everything. 13 boring hours later on a plane with not much else to do, I will at least agree that pretty much any problem people encounter in practical situations admits this transformation, though I don't know how one would prove that it's always the case. Still, that's good enough that I am convinced implicit ODEs aren't as big of a deal as I thought they were. Maybe it's because I'm in the math department, but I won't say I am positive there are no equations that can't be transformed until I know of a proof; I will say it is unlikely any user will have such an example.

In that case, yes, Radau does really well. For methods with mass matrices, fully implicit RK methods (Radau) and Rosenbrock methods are the two families that can incorporate mass matrices "with no cost". It is possible to handle mass matrices in SDIRK methods, but it actually requires a separate extra linear solve for methods with more than 2 stages (it's discussed a little bit on the ARKODE page, but you only really understand it when trying to implement it), and that step requires directly inverting the mass matrix, so these can't handle DAEs where the mass matrix is singular. Together, Hairer's benchmarks (to be specific, these are in Hairer II) and the DiffEq benchmarks (both published, and the same results came up in chatroom talks if the archives are searched) show that when mass matrices can be used, high order Rosenbrock methods do best at higher tolerances and Radau does better at lower tolerances, with BDF methods only doing well when the system is large or the f call is expensive. Thus, given that we now agree that mass matrix form is at least practically always possible, having only BDF methods isn't sufficient, and a suite really needs fully implicit RK and Rosenbrock methods if it wants to handle all DAEs well at all tolerances. Are we in agreement?
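To make the mass matrix form concrete, here is a minimal sketch (assuming the current DifferentialEquations.jl mass-matrix API) of the classic Robertson problem posed as a mass-matrix DAE; the zero row in the mass matrix turns the third equation into an algebraic constraint, and both a Rosenbrock method and Radau apply directly:

    using OrdinaryDiffEq, LinearAlgebra

    # Robertson kinetics: two differential equations plus the
    # conservation-law constraint y1 + y2 + y3 = 1.
    function rober!(du, u, p, t)
        y1, y2, y3 = u
        du[1] = -0.04y1 + 1.0e4 * y2 * y3
        du[2] =  0.04y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2^2
        du[3] = y1 + y2 + y3 - 1.0      # row with zero mass: 0 = g(y)
    end

    M = Diagonal([1.0, 1.0, 0.0])       # singular mass matrix
    prob = ODEProblem(ODEFunction(rober!, mass_matrix = M),
                      [1.0, 0.0, 0.0], (0.0, 1.0e5))

    solve(prob, Rodas4())                                      # Rosenbrock, default tolerances
    solve(prob, RadauIIA5(), abstol = 1e-10, reltol = 1e-10)   # Radau at tight tolerances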

However, I am still going to keep a separate row for implicit ODEs because there is still a use case. While I agree that one can almost certainly transform the equation to mass matrix form, I have been specifically asked to address this capability and have seen it used to build modeling tools. It's a usability feature for people who don't want to handle the transformation themselves, and a much requested one.

In that case, I forget the higher purpose this was being argued for, but it seems to point negatively at Sundials since it misses these important categories. It's also a knock on Mathematica: it has Radau and IDA, but missing the Rosenbrock methods should count against it. Maple has the Rosenbrock methods, which is a plus. Maple is missing Radau but has the others, which is probably fine in most cases. R's deSolve has DASPK and Radau.

I made a few other changes. deSolve does have dede for delay differential equations, but it doesn't do any discontinuity tracking, and I (and others; there's a literature in this exact area) already have numerous examples showing that strategy doesn't converge well, so I changed those from "None" to "Poor".
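For contrast, a sketch of what declaring the lag looks like in DelayDiffEq.jl (hedged against the API as I know it); the declared constant lag is exactly what lets the method-of-steps solver place steps at the propagated discontinuities t = 1, 2, 3, ... instead of stepping over them:

    using DelayDiffEq, OrdinaryDiffEq

    # u'(t) = -u(t - 1) with history u(t) = 1 for t <= 0
    f(u, h, p, t) = -h(p, t - 1)
    hist(p, t) = 1.0
    prob = DDEProblem(f, 1.0, hist, (0.0, 10.0); constant_lags = [1.0])

    # MethodOfSteps wraps an ODE method and uses the declared lag to hit
    # the discontinuity times exactly.
    sol = solve(prob, MethodOfSteps(Tsit5()))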

>I have to still conclude that your current ranking is arbitrary

Well, it's arbitrary but informed. If I wanted to write down the exact details of why I made every single choice I'd be doing this forever, which wasn't the point of this exercise. The point for me was to simply take a look at what everyone has to offer, see what's missing or unique about each package, and see what others have that I still don't. I was doing this to re-align development goals and set milestones for what a 4.0 release should include (which I am going to lay out in a blog post soon that describes the 3.0 release). Back in pre-1.0 I created a checklist of "Boost has GPU support, Radau is important, etc.", and after hitting all of those checkboxes I wanted to look back through, see what's still missing, and try to understand how much it matters by comparing against existing benchmarks and user requests. In the end, I wouldn't say it's perfect (it's a living document which will probably keep getting refined and keep serving as a development roadmap), but it's at least useful to me and maybe useful to someone else.

>If you want me to add to this, post an editable chart or document, I will add my rebuttal and ranking for each entry in the table for MATLAB, desolve, Hairer, Maple and Mathematica and we can continue this interesting discussion.

Sure thing, let's keep refining it. I'll add it in a sec.

Hey,

  I've responded to the other replies. Feel free to post some more critiques.

>Also, you gave FAIR for Mathematica's ODE solver efficiency. I am not very knowledgeable in Mathematica, but I have used its numeric differential equation solver where it calls IDA and it works just fine and perhaps better than MATLAB

Yes, Mathematica uses IDA for DAEs and that is great. But it's not as well rounded. Its nonstiff solvers are best in class: Bogacki-Shampine 5(4) and the higher order Verner methods. The Mathematica documentation is actually where I first heard about these, and DiffEqBenchmarks.jl shows those tableaus are more efficient, as in they achieve less error than similar nonstiff RK methods for the same number of steps (on a given set of test problems). However, where it lacks is stiff solvers. From this page

http://reference.wolfram.com/language/tutorial/NDSolveOverview.html

it only tends to use implicit RK methods for stiff problems. It does have Radau IIA, and as a bonus it includes less common implicit symplectic solvers as well. But both Hairer's and my benchmarks tend to show that Radau methods are only the most efficient stiff solvers when you want high accuracy (and given that Hairer II was essentially a book for showing off the Radau methods, it's pretty interesting that he points out that Rosenbrock methods do better at lower accuracies. These "lower accuracies" tend to be where the tolerance is set by default, so I think that's pretty important). That's why I am impressed that Maple uses the 4th order Rodas method as its default stiff solver. Mathematica would be fine if it had other choices, but as it stands, even a good implementation will be inefficient when stuck with a single solver for this case (and one which is not really the best in the standard cases to begin with!).

MATLAB is set at "Poor", so pointing out that Mathematica does better than MATLAB (which it does) isn't saying much.

>For your ODE model use y1' = z1 and y2' = z2; now you have a system of DAEs with 2 ODEs and 2 AEs, trivially solved using RADAU.

True, you bring up an excellent point. Now that I see that, I am inclined to agree with you: I can't think of one that can't be linearized this way. I did read Hairer's paper a while ago but don't recall this part about transformations being in there. I'll have to go back and ask the modeling systems developers why they need implicit ODE solvers. Maybe it's just because it's easier to throw equations at?
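To spell the transformation out on a generic example: given a fully implicit system

f1(y1', y2', y1, y2) = 0

f2(y1', y2', y1, y2) = 0

introduce z1 = y1' and z2 = y2' to get

y1' = z1

y2' = z2

0 = f1(z1, z2, y1, y2)

0 = f2(z1, z2, y1, y2)

which has mass matrix M = diag(1, 1, 0, 0): a linearly implicit DAE in mass matrix form that Radau and Rosenbrock methods accept directly. The remaining question, as above, is whether the index of the enlarged system always matches that of the original.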

>Hope I am not being overly critical and obnoxious. I admire your efforts in Julia. Your blog got my attention thanks to my research scientist Chintan Pathak. I am not a Maple developer, just a long-time user with my own long list of complaints about the same.  My aim as a professor is to insist on accuracy and make sure people are not misguided. For example, I have lost a lot of my time trying to use algorithms on different platforms. If the methods don't work, then fine. But if they are poorly implemented, it is not the algorithm's fault.

No problem at all. These were just a set of personal notes that I put on my blog since I was putting together a roadmap for what to implement next (i.e., look around, see what everyone else has, why they have it, and how to do it). I put them out there because I knew the feedback would be very enlightening, not only about what I didn't find in other suites, but also because the things people focus on are a good indicator of what they care about (the answer seems to be DAEs and PDEs. PDEs weren't part of the post, but there were so many questions about doing something for them. And all the other extended discussions seemed to focus on DAEs). I've actually learned a lot about Maple through this.

>PS, if you have time and bandwidth, it will be super cool to collaborate on a research/educational article on this topic with detailed examples that I can provide.

I'm looking to put together an article with a bunch of comparisons between algorithms (with similar implementations). Comparisons between RK methods of the same order but different tableaus (explicit, SDIRK, fully implicit), comparisons between Rosenbrock tableaus, comparisons between tableaus when used in a DDE solver (this makes a surprising amount of difference; here you can see that only some achieve the right qualitative behavior in the small component), and a re-analysis of residual control DDE solvers. In addition, testing the efficiency difference between Runge-Kutta methods applied to 2nd order ODEs vs the high order (and adaptive) Runge-Kutta-Nystrom methods, showing whether they really matter or whether newer RK methods are simply efficient enough. And maybe also showing efficiency in terms of energy error for symplectic vs RKN/RK with projections. I have a lot of these benchmarks around and just want to formalize them and share them.

Just write one down where the implicitness is nonlinear. In pseudo-math:

sqrt(y1*y1') - exp(y2*y2') = 0

log(y1'*y2') - sqrt(y1*y2) = 0

That's an implicit ODE which I don't think you can put in mass matrix form. Or you can even add some algebraic variables:

sqrt(y1*y1'*z1) - exp(y2*y2'*z2) = 0

log(y1'*y2'*z1) - sqrt(y1*y2*z2) = 0

z1 + z2 = 1
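For reference, fully implicit systems like this are what residual-based solvers are for. A hedged sketch of how the first system above would be posed to IDA through DifferentialEquations.jl (the initial values are placeholders; a real solve needs consistent initialization, which this sketch doesn't attempt):

    using DifferentialEquations, Sundials

    # Residual form F(du, u, p, t) = 0
    function residual!(out, du, u, p, t)
        y1, y2 = u
        out[1] = sqrt(y1 * du[1]) - exp(y2 * du[2])
        out[2] = log(du[1] * du[2]) - sqrt(y1 * y2)
    end

    u0  = [1.0, 1.0]   # placeholder: must be chosen consistently in practice
    du0 = [1.0, 1.0]   # placeholder: must satisfy the residual at t = 0
    prob = DAEProblem(residual!, du0, u0, (0.0, 1.0),
                      differential_vars = [true, true])
    # with consistent u0/du0, this would then be: sol = solve(prob, IDA())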

 

>You have given Excellent for SUNDIALS for implicitly defined DAE solvers. What is the index here? RADAU handles DAEs better than anything from SUNDIALS (missing a trivial transformation is not a weakness).

Radau can't do implicitly-defined DAEs, just DAEs in mass matrix form, and not all implicit ODEs are linearly implicit, so this is more than a transformation. I added this row because the developers of modeling suites like Modia/Modelica, Sims.jl, etc. care about this capability, since their modeling software easily generates implicit ODEs which need these kinds of solvers. IDA and its ancestors are definitely considered top-of-the-line tools in this domain since they are quite efficient and scale well to large problems.

>The previous version of the table and the ranking on Maple seem to have been made in a rush, making me doubt the validity of other entries as well. I am not well versed with the other platforms to accept or deny your conclusions.

Maple is the one I am not very well versed in because I haven't had a license in years, so I checked through the documentation to see what changed, but it's clear I missed some things. I am sorry about that, but I think it has all been fixed up. In the end, my opinion of Maple's suite has changed a lot. For example, looking at the delay diffeq suite, it looks like it's designed in a similar way to what we did in DelayDiffEq.jl, where the idea is to get a stiff solver by extending a Rosenbrock method. I am sure the Maple devs saw the same reuse tricks that can be done in this case. The accuracy shown in your extended post on the state-delay problem, along with the docs example, points to there being some form of discontinuity tracking as well, probably sitting silently on top of their events system if they also followed the DKLAG paper, since otherwise those examples wouldn't get errors that tight. So in the end these choices look quite similar to the choices we made, and it's nice to see some precedent. I now think Maple's delay equation setup is very good; I just don't get why it's documented so poorly.

I still think the choices for the explicit RK tableaus are odd and suboptimal, and I hope the Maple devs update those to more modern tableaus. But those are the kinds of "sugar" things. In general, for "most people's problems at standard tolerances" some explicit RK method or some higher order Rosenbrock method seems to do well, and we have all of Hairer's benchmarks and DiffEqBenchmarks.jl to go off of for that, so it's pretty clear that Maple is hitting that area pretty solidly.

Still, it's missing some of the "sugar" like fully-implicit RK methods (Radau), which our and Hairer's benchmarks say are important for high accuracy solving of stiff ODEs. It's also missing SDIRK methods, which our benchmarks say trade off with Rosenbrock methods as being more efficient in some cases. The mentioned case was a semilinear stiff quorum sensing model. I'll see if we can get that added to DiffEqBenchmarks.jl, but basically what's shown is that we agree with Hairer's benchmarks that the methods from his SDIRK4 were uncompetitive, while TRBDF2 and the newer Kvaerno and Kennedy & Carpenter methods (which are the basis of ARKODE) are much better than the SDIRK methods benchmarked in Hairer. All of these cases essentially have a semilinearity where the fact that Jacobians must be re-calculated every step in a Rosenbrock method, whereas in SDIRK implicit steps they only enter the Newton iterations, actually made a big difference, since the standard Hairer-style SDIRK implementation then allows Jacobian calculations and re-factorizations to be skipped. But again, these are edge cases looking for a little more efficiency on some problems, while explicit RK + Rosenbrock + LSODE covers quite a bit of ground. That, plus the fact that Maple lets you compile the functions, bumped my opinion of Maple's set of solvers up to very good but not excellent. I am sure that down the line we can write a Julia-to-Maple bridge to do more extensive benchmarking (Julia links to Sundials/LSODA/etc., so there's a direct way to do quite a few comparisons), but since I have found that implementations don't differ much these days from what Hairer described, I'll assume the Maple devs know what they're doing and would get efficiency similar to anyone else writing a compiled solver.
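As a sketch of the trade-off being described (the toy problem here is made up; it is not the quorum sensing model), the two families are a one-line swap in DiffEq:

    using OrdinaryDiffEq

    # A made-up semilinear stiff system: strong linear part, mild nonlinearity
    f!(du, u, p, t) = (du .= -1000.0 .* u .+ sin.(u))
    prob = ODEProblem(f!, ones(100), (0.0, 1.0))

    solve(prob, KenCarp4())   # SDIRK (Kennedy & Carpenter): Jacobian reuse across steps
    solve(prob, Rodas4())     # Rosenbrock: Jacobian recomputed every step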

But I will say that Maple should definitely document not just how to use their stuff, but what they are doing. It's really hard to know what Maple is doing in its solvers sometimes. Most of the other suites have some form of publication that details exactly which methods they implemented and why. Maple's docs don't seem to hit that, and I only found out some of those details through forum links here. What I can find on Maple are Shampine's old PSE papers, but it seems a lot has changed since then.

>Regarding your comment on 10,000 or more ODEs, are you talking about a well defined system with a well defined pattern for the Jacobian matrix, or arbitrarily working with random matrices? I agree that Maple is weak for this, but MATLAB switches to sparse solvers for large systems.

Yes. A quintessential example is a method-of-lines discretization of a reaction-diffusion equation. The Brusselator is a standard example, but I like to use a system of 8 reactants or something like that. These PDE discretizations usually have a natural banded structure, and things like Sundials have built-in banded linear solver choices for handling it, or suites give the ability to pass in user-defined linear solvers. It definitely depends on the audience, but "using ODE solvers to write PDE solvers" is definitely a large group of users from what I've found, so handling this well matters to those who are building scientific software on top of the ODE suites.
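Here is a sketch of what "give the solver the structure" looks like (hedged; this leans on the jac_prototype machinery as I understand it) for a 1D method-of-lines heat equation, where declaring the Jacobian tridiagonal gets a banded factorization instead of a dense one:

    using OrdinaryDiffEq, LinearAlgebra

    N = 1000
    function heat!(du, u, p, t)   # u_t = u_xx with zero boundary conditions
        dx2 = p
        du[1] = (u[2] - 2u[1]) / dx2
        for i in 2:N-1
            du[i] = (u[i+1] - 2u[i] + u[i-1]) / dx2
        end
        du[N] = (u[N-1] - 2u[N]) / dx2
    end

    jac_proto = Tridiagonal(ones(N - 1), ones(N), ones(N - 1))
    f = ODEFunction(heat!, jac_prototype = jac_proto)
    prob = ODEProblem(f, rand(N), (0.0, 1.0), (1 / (N + 1))^2)
    sol = solve(prob, Rodas4(autodiff = false))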

>I have had bad experience with shooting methods (in particular single shooting) for BVPs. It will be trivial to break any such code for DAE BVPs.

Yes, and honestly the BVP solvers were a tough call between "Fair" and "Good" here. I put Julia's as "Good" because of flexibility. The impetus for finishing and releasing them was that we were talking about them in our chat channel and someone wanted to use a boundary constraint that the maximum of the velocity was 1 over the interval. Since our setup involves writing constraints on (a possibly continuous extension of) the solution, this was possible. And then we had some people use it for multipoint BVPs without having to do any transformations. So that, plus the fact that our MIRK-based method can do (stiff) multipoint BVPs and has a specialized form for banded Jacobian handling when it's a two-point BVP, is why I ended up giving it a "Good". But that doesn't mean it's anywhere close to complete. The Shooting methods are nice, but many problems are too sensitive to the initial condition to use them, so we really need a continuous extension, mass matrices, singularity handling, and adaptivity to complete our MIRK-based method. Given what's missing, it's clear some problems can't be solved well by this setup, so one could also put a "Fair" on it, but I don't think you can justify a "Poor" because it does handle so many unique things. MATLAB actually does quite well in this area, with bvp4c solving two-point BVPs with singularities and stiffness just fine, but it's only a "Good" because it doesn't go any further. Maple's documentation explicitly states that it shouldn't be used for stiff BVPs, and it's unclear to me whether it can do more than two-point BVPs, so I think that justifies the "Fair" rating.
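To illustrate the flexibility point with a sketch (written against the BoundaryValueDiffEq.jl API as I know it): the boundary condition is a function of the whole solution object, so interior-point and solution-dependent constraints are written the same way as endpoint constraints:

    using BoundaryValueDiffEq, OrdinaryDiffEq

    # Pendulum u'' = -sin(u) as a first-order system
    function pendulum!(du, u, p, t)
        du[1] = u[2]
        du[2] = -sin(u[1])
    end

    # Residuals are written against sol, the continuously extended solution,
    # so sol(t) can be evaluated anywhere in the interval.
    function bc!(residual, sol, p, t)
        residual[1] = sol(0.0)[1] - 1.0   # u(0) = 1
        residual[2] = sol(pi / 2)[2]      # u'(pi/2) = 0
    end

    prob = BVProblem(pendulum!, bc!, [1.0, 0.0], (0.0, pi / 2))
    sol = solve(prob, Shooting(Tsit5()))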

I did miss in the table that COLSYS is so flexible, along with COLDAE. If anything deserves an "Excellent", Netlib would be it. And the sensitivity analysis of DASPK is missing. That will get updated.

Of course, simple tables with rankings can only be ballparks because it's all about the details, so I try to flesh out the details as much as possible in the text and get the table as close to reasonable as possible. I really dropped the ball in the first version of the table for Maple and I'm sorry about that. But as to:

>I am not well versed with the other platforms to accept or deny your conclusions.

let me just share the other main concerns that have been brought up:

1. Someone on Hacker News wondered why I omitted Intel's ODE solvers. They were fine but were discontinued in 2011, and they are just vanilla solvers without event handling or other special features.

2. Someone emailed me about Mathematica having GPU usage (and now it looks like someone else posted a comment about it on my site), but that was GPU support in general; their GPU stuff doesn't apply to ODEs.

3. Someone on Twitter mentioned that SciPy does do event handling. I can't find it in the docs at all, so I am waiting for confirmation. From what I can find, there are just sites showing how to hack in event handling, and these don't even make use of dense output (I cannot find a dense output method in the docs, and the dev channels seem to show nobody has picked up the implementation as a project yet). So unless I'm missing something big, it seems like this won't change. Edit: Looks like we are in agreement here now.

So it seems Maple is the only area that really had some big valid objections, and those have since been corrected. With the amount of press this somehow got (man, these were just mental notes I was sharing while comparing what's available to find out what to do next, haha), I think there's some confidence to be gained from the fact that devs in other languages haven't really voiced objections. Though I'm sure there are corrections that can and will be made as other issues are brought up.

>If stochastic ODEs are to be solved, I would look at StochSS of Petzold before concluding that other software may be the best.

StochSS doesn't do stochastic ODEs. It does discrete stochastic equations via Gillespie-type algorithms. I didn't include that here because it's too niche and DifferentialEquations.jl is the only one in the table that handles it, so the comparison would be to the wrong places (it would be interesting to compare DiffEq's discrete stochastic solvers vs StochSS, but not vs SciPy). So that's a topic for another day.
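For anyone curious, a hedged sketch of what a Gillespie-type (exact SSA) simulation looks like on the DiffEq side, using the jump-process API (JumpProcesses.jl in current versions) on a simple birth-death process:

    using JumpProcesses

    # Birth-death process: 0 -> X at rate b, X -> 0 at rate d*X
    birth_rate(u, p, t) = p.b
    birth!(integrator) = (integrator.u[1] += 1)
    death_rate(u, p, t) = p.d * u[1]
    death!(integrator) = (integrator.u[1] -= 1)

    base = DiscreteProblem([0], (0.0, 100.0), (b = 10.0, d = 0.5))
    prob = JumpProblem(base, Direct(),
                       ConstantRateJump(birth_rate, birth!),
                       ConstantRateJump(death_rate, death!))
    sol = solve(prob, SSAStepper())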

>In your blog, I have seen claims of SDIRK type methods doing better than other algorithms, I would like to see some examples if possible. 

Just look at the benchmarks. Well, we need to post more. If you go back through the history of the Gitter channel to August you'll find a few. We will be digging those up in a bit to formalize them, and adding ARKODE to those benchmarks.

>As you might know IDA came from DASSL/DASKR. Among all the BDF type solvers for index 1 DAEs, DASPK and DASPK3.0 may be the best. Check, http://www.cs.ucsb.edu/~cse/software.html. It has more options for initialization of DAEs.

Yeah. We have DASKR and IDA wrapped, but not DASPK. Honestly I didn't know what DASPK did that the others didn't. But I don't see any of these in Maple?

>The first successful implementation of sparse solver with BDF type solver was done by Paul Barton at MIT. Check, DAEPACK, and JACOBIAN at https://yoric.mit.edu/software.

The more modern way is to just allow the user to swap out any of the linear algebra routines and to provide the Jacobian type. Sundials does that, and so does DiffEq. Some of Hairer's later codes do to some extent as well. In the end that's vastly more flexible, and this exact trait got a row in the table for that reason.

@itsme Thanks! Yep, it's Euler-Maruyama.

>I don't think MATLAB can handle nonlinear complex ODEs using BDF type formula.

It can. All of the MATLAB ODE solvers are actually written in MATLAB (which is also why ode15s is technically not BDF but NDF, though that's a detail which doesn't mean much here), and being native MATLAB code means they can use complex numbers.

> I am not sure why you think Maple won't solve any complex ODE that other platforms can solve. The comparison should be made based on the ability to solve a given model, not based on the implementation of a particular algorithm

Didn't I already tell you I goofed here? I am not sure what you're adding to the conversation with this.

> Note that dsolve/numeric, if properly used, can be tricked into running any explicit RK method by giving the Runge-Kutta coefficients in the proper form, as long as you give two formulas for prediction (to calculate the error)

Not all RK methods are strictly defined by their tableaus. A good example is the 5th order Bogacki-Shampine method.

>Regarding events, I think I didn't make it clear. The current documented events page in Maple handles much more than what MATLAB can do. There are much more options builtin, but not documented yet.  Check https://www.mapleprimes.com/questions/125273-Dsolve-Events-How-To-Control-For-A-Sign-Change

Sorry, I don't see anything but basic events in there: halting, one-sided events, and combinatorial events are mentioned. Anything else? It's not clear to me that there's something more, but it is a long conversation so I could have missed something. I don't see anything like changing the equation that's being solved, changing the size of the system, etc.

>I have run IDA, RADAU, etc. For a very small number of ODEs (less than 100 or so), Maple's dsolve numeric will be as fast as anything else for non-stiff and moderately stiff systems. 

Yes, that's to be expected because it's using a Rosenbrock method; I already mentioned that. But what about hundreds of thousands of ODEs?

>On an another note, what about DAE BVP? As of now, only COLDAE can handle it.

In mass-matrix form, DifferentialEquations.jl will handle it via a Shooting method. Not the best, but it does exist.

@itsme does it say anywhere what method it's using? Since it requires the number of timesteps, it's clear that it's not adaptive, but it still doesn't mention the method. And since it's non-diagonal and there's no mention of Wiktorsson-type approximations, I am inclined to think it's Euler-Maruyama?
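For context on why that detail matters, a hedged sketch of non-diagonal noise in DifferentialEquations.jl: Euler-Maruyama works with an arbitrary noise matrix precisely because it is strong order 0.5 and needs no Levy-area/Wiktorsson approximation, which higher strong order methods require for non-commutative noise:

    using StochasticDiffEq

    f!(du, u, p, t) = (du .= -u)

    # Non-diagonal noise: du is the 2x2 noise coefficient matrix g(u, t)
    # multiplying a 2-dimensional Wiener process.
    function g!(du, u, p, t)
        du[1, 1] = 0.3 * u[1];  du[1, 2] = 0.1 * u[2]
        du[2, 1] = 0.1 * u[1];  du[2, 2] = 0.3 * u[2]
    end

    prob = SDEProblem(f!, g!, [1.0, 1.0], (0.0, 1.0),
                      noise_rate_prototype = zeros(2, 2))
    sol = solve(prob, EM(), dt = 1 / 1000)   # fixed-step Euler-Maruyama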

Hey,

  The author here. Thanks for informing me about this discussion. I'll update the tables as I get more information. I will say that Maple is definitely the one on the list for which I have the least experience, so please bear with me. But I find some of these comments odd.

> Maple can solve state dependent delay differential equations, Mathematica can't. The table makes me conclude that the author has no clue

I completely missed its documentation on the delay equation solver. However, that documentation doesn't discuss state-dependent delays. (Edit: found the example with state-dependent delays.) Could you point me to where it documents its discontinuity handling strategy? I couldn't find any details.

As for Mathematica, the table also says that Mathematica can't do it. I'm not sure we actually disagree here.

> Maple's dsolve solver is faster than MATLAB (in compiled form)

That's already true in the table. I am not sure we actually disagree here either.

>Event options are better in Maple compared to matlab or anything else in that table, but poorly documented.

Well, how am I supposed to know if it isn't written down? Can you give me the lowdown and point to where this is documented? As for "compared to matlab or anything else in that table": I agree that MATLAB's event handling is constrained, but what can it do that DifferentialEquations.jl's events can't? I would be surprised if there's an answer, because you can literally change anything in the integrator object.
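To back that up with a concrete sketch (the bouncing ball, which is about the simplest event there is): the condition is a function whose zero crossing triggers the event, and the affect! receives the full integrator, so it can rewrite anything:

    using OrdinaryDiffEq

    # Ball under gravity: u[1] = height, u[2] = velocity
    f!(du, u, p, t) = (du[1] = u[2]; du[2] = -9.81)

    condition(u, t, integrator) = u[1]   # event fires when the height crosses zero
    affect!(integrator) = (integrator.u[2] = -0.9 * integrator.u[2])   # lossy bounce
    cb = ContinuousCallback(condition, affect!)

    prob = ODEProblem(f!, [10.0, 0.0], (0.0, 15.0))
    sol = solve(prob, Tsit5(), callback = cb)

Because affect! gets the integrator itself, the same mechanism covers changing parameters mid-solve, swapping behavior, or resizing the state, which is the "change anything" point above.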

>BVP solvers are available in Maple as well and marked as unavailable in the table.

I did miss this. Can Maple handle multipoint BVPs or higher order constraints?

>The comment on the wrapper for lsode is wrong. Maple has a wrong implementation of lsode (linearized Newton-Raphson)

I don't get what you're saying here. Do you mean the documentation is wrong? It clearly states here that Maple has an LSODE wrapper:

https://www.maplesoft.com/support/help/maple/view.aspx?path=dsolve%2flsode

>Maple can be used to solve ODEs with arbitrary precision, also ODEs in the complex domain. It looks like the blogger hasn't used a recent version of Maple.

With its native methods, yes. Not with the wrapped methods, right? So do complex numbers and arbitrary precision work with LSODE? But yes, this does work with a lot of the methods, so it should probably be upgraded to "Good", or if it somehow works with LSODE as well, it should get an "Excellent".
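The same split is easy to demonstrate on the Julia side, for what it's worth: the native solvers are generic over the number type, so arbitrary precision is just a matter of passing BigFloat data, while a wrapped Fortran code like LSODE is locked to double precision. A sketch:

    using OrdinaryDiffEq

    f(u, p, t) = 1.01 * u
    prob = ODEProblem(f, big"0.5", (big"0.0", big"1.0"))

    # runs in BigFloat end to end; a wrapped Fortran solver could not
    sol = solve(prob, Vern9(), abstol = big"1e-30", reltol = big"1e-30")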
