MaplePrimes Activity


These are replies submitted by C_R

@sand15 

X0, Y0, X and Y are column vectors.

The first error message is generated in Step 1 here:

Download Compare_outputs_2015_your_version_reply.mw

@sand15 

I did not know about GPE.
It is an interesting approach in general. It seems to allow tuning the fit fidelity in regions of interest.

In your comments you use L as a synonym for lambda. Correct?

I am impressed by how little information is required for such a good fit. Hard to believe.
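
To check my understanding, here is a minimal noise-free GP/Kriging predictor written from scratch (this is not your KrigingModel; the data and the hyperparameters sigma and lambda are made up):

with(LinearAlgebra):
# Minimal noise-free GP emulator with a squared-exponential kernel.
# X, Y, sigma and lambda are illustrative assumptions, not your settings.
X := Vector([0., 0.5, 1.0, 1.5, 2.0]):
Y := Vector([0., 0.479, 0.841, 0.997, 0.909]):   # samples of sin(x)
n := Dimension(X):
sigma := 1.0:  lambda := 0.5:                    # lambda = the "L" above
k := (u, v) -> sigma^2*exp(-(u - v)^2/(2*lambda^2)):
K := Matrix(n, n, (i, j) -> k(X[i], X[j])) + 1e-10*IdentityMatrix(n):
w := LinearSolve(K, Y):                          # weights K^(-1).Y
predict := x0 -> add(w[i]*k(x0, X[i]), i = 1 .. n):
predict(0.75);                                   # close to sin(0.75) = 0.682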

Could you have a look at the attachment? Something is not working in the proc KrigingModel.

Download Iterative_Gaussian_Process_Emulator_2025.mw

@nm

I did not see your post. I just discovered that Windows had restarted my computer. When I now run

help("ExcelTools:-Export")

and click on an error message

the help page for the error message opens.

That is strange. Since my IP address has not changed, I expected the firewall to block me again. I suspected my IP address of being part of the problem and wanted to try accessing the help page via VPN.

@dharr @sand15

Noise-free

I am impressed by the fidelity of the rational fit. It’s much better than the modulated trig function, without adding too many extra parameters. In the meantime, I have found a symbolic regression tool (TuringBot). I could run it in demo mode on 50 data points with a custom model y=f1(x)+cos(f2(x)). I am getting close to sand15's attempts.

The tool tells me that it has tested close to a billion “formulas” (models satisfying the custom model). Even in its demo mode the tool is interesting, since it provides many of the options you have mentioned in your discussion. The “Search Metric” Mean Error seems to work better on this data set. While writing, I noticed that the tool has stopped optimizing the model above (blue curve) and found something better.

So far (in its default mode) the tool has not come up with a rational model. Instead, a pretty good fit was achieved involving special functions: simple in terms of length, but complicated in terms of the functions involved. That is not what I want. However, the tool is at least helpful for me (probably not for you) in exploring classes of models and finding initial conditions for subsequent optimization.

The data set:

The data set I added to my original post describes, by way of example, the error of an approximation to an exact solution of a non-linear differential equation.

My intention is to improve the approximation with an error-compensating term. The approximation and the compensation term(s) should be short and simple enough to be copied by hand, and efficient to evaluate on simple processors that do not come with sophisticated libraries.

From your comments I have learned that the noise of an (unknown) process plays an important role in the selection of model optimization methods. With that knowledge, I would have written my original post differently, mentioning that the data set is noise-free.

I also realized that providing only a data set is (in some instances) insufficient input for selecting a model and an optimization method. For instance, there could be an additional constraint of “convergence to a linearized solution” at some data points. This means that a fit should reproduce the value and/or the slope at selected data points. For the exemplary data set this would mean no residual at x=0. When fitting the solution of the non-linear DE rather than the error, matching the slope there becomes a desirable constraint as well.
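
A sketch of what I mean by baking a constraint into the model, using made-up stand-in data (not the real error set): a leading factor x forces the fit through the origin, so the x=0 residual is exactly zero.

with(Statistics):
# Stand-in data with f(0) = 0; the real error data set is not reproduced here.
X := Vector([seq(0.2*i, i = 0 .. 10)], datatype = float):
Y := Vector([seq(evalf(0.2*i*exp(-0.2*i)), i = 0 .. 10)], datatype = float):
# The leading factor x forces model(0) = 0, i.e. no residual at x = 0.
model := x*(a + b*x)/(1 + c*x + d*x^2):
fit := NonlinearFit(model, X, Y, x, initialvalues = [a = 1, b = 0, c = 0, d = 0]);
eval(fit, x = 0);   # exactly 0 by construction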

It looks like I have to bake all the extra constraints into the models before I can start optimizing and profiting from the code and the background you have provided.

I still think that your discussion and insights merit a larger audience. It was a pleasure studying it.

Thank you again for all your time!

@sand15 

A big thanks for your time. I will study this (including your last point) in depth over the weekend.

@sand15 

I had time to study your code. Great stuff, really!

Your code contains two essential elements required to automate model finding: generating models from a model type and determining residuals. I assume that dedicated symbolic regression software packages allow for further user input such as:

- complexity (length of the model, leaf count, ...)

- accuracy (allowed error)

- objective function for the residuals (Maple's Optimization package has an option for that; see the sketch after this list)

- model type selector 
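
On the objective-function point, a small sketch with hypothetical data: minimizing the maximum residual (instead of the sum of squares) can be posed as a linear program with an auxiliary variable t.

with(Optimization):
# Hypothetical data; fit a line y = a*x + b by minimizing the maximum
# residual (minimax) instead of least squares.
X := [0, 1, 2, 3, 4]:
Y := [0.1, 0.9, 2.2, 2.8, 4.1]:
cons := {seq(a*X[i] + b - Y[i] <= t, i = 1 .. 5),
         seq(Y[i] - a*X[i] - b <= t, i = 1 .. 5)}:
LPSolve(t, cons);   # returns the optimal t together with a and b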

Please make the reply above an answer so that I can vote it up.

Please also consider making the reply with your last comments a post so that more users can read it. I think symbolic regression is of general interest, and Maple already has many functions on board that are required for a dedicated command or application.

@sand15 

Yes, I am very interested. Nothing I have tried so far has been satisfying.

@dharr 

My requirement: An algorithm (not me) that searches for a model that fits a given data set best.

My best guess for the dataset above is a periodic function with a modulated argument. With that guess I could establish a few models and then try non-linear regression on them and compare the fidelity of the fits.

Symbolic regression is supposed to do this for me (including even models I do not imagine).
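
For the manual route, a sketch of how I would test one such model with Statistics:-NonlinearFit; the data below is made up from a known modulated cosine, just to illustrate the workflow:

with(Statistics):
# Made-up data from a known modulated cosine (not the data set above).
X := Vector([seq(0.1*i, i = 0 .. 60)], datatype = float):
Y := Vector([seq(evalf(cos(0.1*i + 0.3*(0.1*i)^2)), i = 0 .. 60)], datatype = float):
# Candidate model: amplitude a, argument modulated by the c*x^2 term.
model := a*cos(b*x + c*x^2 + d):
NonlinearFit(model, X, Y, x, initialvalues = [a = 1, b = 1, c = 0.1, d = 0]);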

@Ronan 

I have added an example to my original post. CurveFitting offers 9 models. I would have to try them all to find out which one is best. Splines, for example, work well for the example, but the returned expression is large and piecewise. That is not what I want.

Ideally symbolic regression returns "simple" models (in a compact form) automatically. Maybe a trig function with a "somehow" modulated argument could fit well. 

The interactive option is indeed helpful for trying out models. Thank you.
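
For reference, a toy-data sketch of both points (the data here is made up):

with(CurveFitting):
X := [0, 1, 2, 3]:  Y := [0, 0.8, 0.9, 0.1]:
Spline(X, Y, x);      # exact through the points, but a long piecewise expression
# Interactive(X, Y);  # launches the interactive curve-fitting assistant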

@acer 

Now it makes sense. Thank you!

@acer 

I would be interested to know whether this was done intentionally. Thank you!

@acer 
In the above I have replaced x by alpha:

x=0.08718861663 alpha=0.08718861663

This is the value of alpha that plot computes for the second plot point and uses in the JacobiCN call.

a := RootOf(JacobiCN(sqrt(2)*sqrt(alpha), sqrt(2)*_Z/2)^2*_Z^2 + _Z^2 - 2):
subs(alpha=0.08718861663,a);
RootOf(JacobiCN(0.2952771861*2^(1/2), (1/2)*2^(1/2)*_Z)^2*_Z^2 + _Z^2 - 2)

Sorry again for not being clear.

@acer

Sorry for not being clear.

Tracing your example,
plot(a, alpha=0..0.5, adaptive=false, numpoints=7)
I see that plot calls fsolve for the second plot point in this way:

fsolve(JacobiCN(0.2952771861*2^(1/2), 1/2*2^(1/2)*t)^2*t^2 + t^2 - 2);

where 0.2952771861=sqrt(alpha).
This squared

0.2952771861^2 <> 0.5/(7 - 1);
                 0.08718861663 <> 0.08333333333

does not exactly match the sampling I would have expected for numpoints=7 over the range length of 0.5.
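
For comparison, a sketch of the grid I expected versus what plot actually samples (with a simple placeholder function instead of the RootOf expression):

# Uniform sampling for numpoints=7 on 0 .. 0.5 would be 0.5*i/6:
[seq(evalf(0.5*i/6), i = 0 .. 6)];
P := plot(x^2, x = 0 .. 0.5, adaptive = false, numpoints = 7):
op([1, 1], P);   # the (x, y) pairs plot actually sampled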

@one man 

The animation runs with the changes.

INSCRIBED_SPHERES_CILL_FOR_-_2025.mw

@acer 

I now understand that fsolve returns two roots at alpha=0. With the following conversion I can no longer reproduce the change of roots:

a := RootOf(JacobiCN(sqrt(2)*sqrt(alpha), sqrt(2)*_Z/2)^2*_Z^2 + _Z^2 - 2);

b := convert(a, Elliptic_related);
plot(b, alpha = 0 .. 0.5);
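
As a quick check of the two roots at alpha = 0 (JacobiCN(0, k) evaluates to 1, so the equation degenerates to a quadratic):

eval(a, alpha = 0);   # reduces to RootOf(2*_Z^2 - 2)
allvalues(%);         # the two roots: 1 and -1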

This might be because sn looks considerably different in the above range and no change of sign of D(f)(x) might occur.

plot3d(JacobiSN(sqrt(2)*sqrt(alpha), sqrt(2)*_Z/2), alpha = 0 .. 0.2952771861^2, _Z = -20 .. 20);

The problem occurs elsewhere (with an unexpected drop in magnitude that should not be there; but this is off-topic).

I have no more root-finding questions, but I still cannot explain why, with numpoints=7, plot comes up with alpha=0.08718861663 for the second plot point.
