I am really impressed by your analysis of lsode.
In fact I rarely use Maple for solving ODEs numerically. As I do a lot of statistical work, my "reference" tool for numerical computation is R (https://cran.r-project.org).
Before using Maple to solve the problem I presented in my original question, I had solved it in R (the equations were not written in the same form, of course, but they represented essentially the same physical system) using the function lsode from the deSolve R package.
In the deSolve package documentation, it is written:
The R function
lsodar provides an interface to the FORTRAN ODE solver of the same name, written by Alan C. Hindmarsh and Linda R. Petzold.
(lsodar allows you to invoke lsode, lsoda, lsodar, ... and, contrary to Maple, it supports events.)
The solver to which this sentence refers is part of the Fortran library ODEPACK.
I naively thought that Maple had used the same strategy, in the sense that the function lsode provides an interface to some function of the same name ...
For instance from ODEPACK (Fortran) or SUNDIALS (C, C++), which are highly respected ODE solver libraries.
This is because I thought it would have been a bit stupid to redevelop something that already works well elsewhere.
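For what it's worth, this "interface to a proven Fortran solver" strategy is exactly what SciPy does in Python: scipy.integrate.odeint is an interface to LSODA from the same ODEPACK library. The little sketch below is just my own illustration of that strategy; it has nothing to do with Maple's lsode.

```python
# SciPy's odeint is a Python interface to LSODA from the Fortran ODEPACK
# library: the "reuse a proven solver" strategy discussed above.
import numpy as np
from scipy.integrate import odeint

def decay(y, t):
    return -y               # y' = -y, exact solution y(t) = exp(-t)

t = np.linspace(0.0, 2.0, 21)
y = odeint(decay, 1.0, t)   # the actual integration is done by LSODA
print(y[-1, 0])             # close to exp(-2) ~= 0.13534
```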
But, as the opening of the email the Maple support team sent me shows (« The problem is that lsode is a direct translation of... »), and as your sound analysis seems to suggest too, Maple has probably coded its own version of lsode ... and maybe has done so in a way that is not above reproach (?)
Thanks again for your truly exciting analysis.
Here is the « in extenso » version of the Maple team's answer:
« The problem is that lsode is a direct translation of an involved Fortran implementation that uses common blocks (aka global variables) to store critical data, and needless to say the exact nature of these is not documented.
The issue is simple to describe:
Say you obtain a solution 'sol' to an ODE problem using LSODE, and you do this:

    sol(1);
    sol(2);

Then do the same thing, but instead do only this:

    sol(2);
One might expect the two values of sol(2) to agree, but they will not, and here's why:
In the first sequence, the lsode solver is called to integrate from 0 to 1, and obtain values at 1.
For efficiency, when the value at 2 is requested, we do not integrate all the way from zero again (though we probably should) but instead use the value at t=1 as a starting point.
The problem is that if we directly request the value at t=2 instead, the integration is highly unlikely to choose t=1 as a natural point for one of its steps, so the computation process will be different, and the values will differ.
The solvers where it was possible to see what is actually going on (how the integrator works) were updated to store the last step before a modified step (i.e. modified by the endpoint) and resume the integration from there when another point is requested.
So as an example, in integrating from 0 to 1, we might evaluate at the following time values:
t=0, 0.11, 0.207, 0.292, 0.35, 0.38, 0.41, 0.48, ..., 0.902, 0.949, 1.00
where in effect the last two evaluation points were altered from the natural step size of the solver as driven by the error control so that the endpoint would be hit. The altered solvers save their state before the alteration, effectively at 0.902, and resume from there once a request is made for a point past t=1, so for example you might see this:
t=0.902, 0.97, 1.03, ...
The main solvers are even more intelligent in how they do this, as there is no repetition of computation: they store a natural interpolant, so for a request at t=1 they would take the steps 0.902, 0.97, 1.03, then simply interpolate the t=1 value over the 0.97..1.03 region.
It was not possible to do this for lsode. »
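The behaviour described in that email is not specific to Maple: any adaptive-step integrator shows it when a run is continued from an endpoint instead of being restarted from t=0. Here is a sketch of the effect using SciPy's solve_ivp; this is my own Python illustration, not Maple code, and the step sequences are the solver's, not the ones quoted above.

```python
# Illustration (not Maple code): with an adaptive-step solver, continuing
# from t=1 and integrating straight to t=2 take different step sequences,
# so the two values at t=2 agree only to within the error tolerance.
import math
from scipy.integrate import solve_ivp

def f(t, y):
    return [-y[0]]          # y' = -y, exact solution y(t) = exp(-t)

y0 = [1.0]
opts = dict(rtol=1e-6, atol=1e-9)

# Sequence 1: integrate 0 -> 1, then resume from the computed value at t = 1.
leg1 = solve_ivp(f, [0.0, 1.0], y0, **opts)
leg2 = solve_ivp(f, [1.0, 2.0], leg1.y[:, -1], **opts)
val_continued = leg2.y[0, -1]

# Sequence 2: integrate 0 -> 2 in a single call.
val_direct = solve_ivp(f, [0.0, 2.0], y0, **opts).y[0, -1]

exact = math.exp(-2.0)
print(val_continued, val_direct)
print("difference:", abs(val_continued - val_direct))
```

Both values match exp(-2) to roughly the requested tolerance, but they are generally not bit-for-bit identical; that is exactly the discrepancy the Maple team describes. The interpolant mentioned at the end of their email corresponds to what solve_ivp offers through its dense_output=True option.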