
MaplePrimes Activity

These are replies submitted by mmcdara


Wonderful explanation (even though I still need to delve deeper into your answer to fully understand it).
As a numerical and data analyst I'm much more familiar with the numerical aspects of ODEs. You have opened up a world for me.

Great thanks


Do you have any idea why dpolyform fails to find an ODE that g(x) satisfies?

e := g(x) = 1/(1+x):
PDEtools:-dpolyform(e, no_Fn);
      [g(x) = 1/(1+x)] &where []

as it is obvious that 

diff(g(x), x) = -g(x)^2:
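As a quick cross-check of that claim, here is a symbolic verification (in Python/SymPy, purely for illustration, since the question itself is about PDEtools:-dpolyform) that g(x) = 1/(1+x) does satisfy this first-order ODE:

```python
import sympy as sp

x = sp.symbols('x')
g = 1 / (1 + x)

# residual of the candidate ODE  g'(x) = -g(x)^2
residual = sp.simplify(sp.diff(g, x) + g**2)
print(residual)  # -> 0
```

Since the residual vanishes identically, the ODE diff(g(x), x) = -g(x)^2 is indeed one that dpolyform could have been expected to return.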



Sure. The only difficulty is that minimizing the sum of orthogonal distances (known as "orthogonal regression") is no longer a linear problem, and one has to use an optimization procedure.

BTW, here is a very instructive application which visually explains what ordinary least squares regression is:
  Start > StatisticsAndProbability > Least Squares Approximation
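To make the contrast concrete, here is a minimal sketch (in Python/NumPy, for illustration only, not Maple) of orthogonal regression for the special case of a straight line. In this linear case a closed-form solution happens to exist: after centering the data, the direction minimizing the sum of squared orthogonal distances is the first right singular vector of the data matrix (for general curves an iterative optimizer is indeed required):

```python
import numpy as np

# Orthogonal (total least squares) line fit: minimize the sum of squared
# orthogonal distances from the points to the line. After centering,
# the best-fit direction is the first right singular vector.
def orthogonal_fit(x, y):
    xm, ym = x.mean(), y.mean()
    U, S, Vt = np.linalg.svd(np.column_stack((x - xm, y - ym)))
    dx, dy = Vt[0]                   # direction of the best-fit line
    slope = dy / dx
    return slope, ym - slope * xm    # slope, intercept

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                    # points exactly on y = 2x + 1
slope, intercept = orthogonal_fit(x, y)   # -> (2.0, 1.0) up to rounding
```

For noisy data the slope returned here differs from the ordinary least squares slope, which only measures vertical distances.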

Please load your worksheet by using the big green arrow in the menu bar.


I like the geometry package for the elegant way it can solve a problem, but I have to admit that your solution is even nicer (although of a black-box type).
I vote up

@Scot Gould  

In my experience, it is best to name the rows and columns as strings.
I would even say it's more natural, logical, and consistent with what a DataFrame is meant to be.
As the columns and the rows of a DataFrame are supposed to represent classes, it would be odd for a class to have the same name as one of its elements.

Moreover, the "names as strings" restriction introduces some safety, and I agree with you, Scot.


The R software possesses exactly the same structure.
One can read:
A data frame is the most common way of storing data in R and, generally, is the data structure most often used for data analyses. Under the hood, a data frame is a list of equal-length vectors. Each element of the list can be thought of as a column and the length of each element of the list is the number of rows. As a result, data frames can store different classes of objects in each column (i.e. numeric, character, factor). In essence, the easiest way to think of a data frame is as an Excel worksheet that contains columns of different types of data but are all of equal length rows.

I'm sure people who are used to Excel give columns and rows names that differ from the content of a cell.
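The same design shows up in Python's pandas (shown here only as a side-by-side illustration): row and column labels are kept distinct from cell contents, and each column holds its own type, exactly as in the R description quoted above:

```python
import pandas as pd

# Hypothetical toy data: string labels for rows and columns,
# and a different type in each column (str, int, bool).
df = pd.DataFrame(
    {"Name": ["Ann", "Bob"], "Age": [34, 41], "Member": [True, False]},
    index=["row1", "row2"],
)
print(df.loc["row1", "Age"])   # -> 34
print(list(df.dtypes))         # one dtype per column
```

Under the hood a pandas DataFrame is likewise a collection of equal-length columns, one type per column.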

@Carl Love 

Yes and no.
Yes: "//" has a special meaning in Linux when it appears at the beginning of a path.
No: "//" has no special meaning when it appears within a path, where it is simply equivalent to "/".
Linux collapses any interior run "//...//" into a single "/":

a := currentdir():
M := ExcelTools:-Import(cat(a, "/A_sample.xlsx"));
M := ExcelTools:-Import(cat(a, "///A_sample.xlsx"));

b := StringTools:-SubstituteAll(a, "/", "//"):
M := ExcelTools:-Import(cat(b, "/////A_sample.xlsx"));
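The same behaviour can be checked outside Maple; for instance Python's posixpath.normpath (used here only as an independent illustration of the POSIX rule) collapses interior slash runs but preserves exactly two leading slashes, whose meaning POSIX leaves implementation-defined:

```python
import posixpath

# Interior runs of "/" collapse to a single one...
print(posixpath.normpath("/home/user///A_sample.xlsx"))  # -> /home/user/A_sample.xlsx

# ...but exactly two leading slashes are implementation-defined in POSIX,
# so normpath preserves them; three or more collapse to one.
print(posixpath.normpath("//server/share"))  # -> //server/share
print(posixpath.normpath("///a/b"))          # -> /a/b
```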


Sorry for not being able to verify this right now (I'm writing from my phone), but I think you should begin by replacing all backslashes ( \ ) with forward slashes ( / ).

@Preben Alsholm 

Why not this ?

f := (a, f, t) -> x -> add(a[i]*cos(f[i]*x+t[i]), i=1..numelems(a)):
plot(f(A, F, Theta)(x), x=-3..10);

which has the advantage of being (IMO) more flexible:

plot(f(A, F, Theta2)(x), x=-3..10);



If you read carefully what I wrote, I am talking about converting an mw file to an mpl file "with Maple". And it takes very little time.
Maybe I misunderstood your problem and this is not what you are looking for. In which case I apologize for interfering in this discussion and confusing the issue.
In any case, nothing justifies your rudeness.



  • Function stepAIC from package MASS (stepwise regression)
    • Searching for the best model in the AIC (Akaike Information Criterion) sense, from the lower model [1] (intercept only) to the upper model [1, X1, ..., X10] (complete polynomial of total degree 1 in 10 indeterminates), returns [1, X1, ..., X10].
      Thus stepAIC = Ordinary Least Squares regression [OLS] (function lm in R).
    • Searching for the best model in the AIC sense, from the lower model [1] to the upper model [1, X1, ..., X10, X1*X2, ..., X9*X10] (complete polynomial of total degree 2 in 10 indeterminates), returns as the best model
       Y ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 + 
          X10 + X4:X8 + X2:X5 + X5:X10 + X1:X9 + X6:X8 + X6:X9 + X4:X9 + 
          X2:X8 + X2:X3 + X3:X5 + X1:X2
      A still more complex model, very far from the 2-regressor or 3-regressor model you expect.
  • Function cv.glmnet from package glmnet (optimal lambda parameter in LASSO regression)
    • The answer is a value of lambda around 10^(-6) (recall that lambda=0 corresponds to OLS).
      Thus LASSO will select as a set of regressors... [X1, ..., X10] itself.
      Maybe some normalization of the regressors could help select a more parsimonious model.


    My implementation of LASSO (which still has to be improved) returns results of the same order as those of cv.glmnet: the LASSO model has between 8 and 10 regressors (getting a single model requires very intensive computations using cross-validation [your Train & Test base randomly split in a large number of ways] that I have not developed).
    Note that cv.glmnet uses this strategy and returns a result almost instantly: without clever coding, a naive Maple procedure would take an unbearable time to run.
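For readers curious about what such an implementation involves, here is a minimal coordinate-descent LASSO sketch (in Python/NumPy purely for illustration; the names lasso_cd and soft_threshold are mine, not glmnet's, and the cross-validation loop over lambda is omitted):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=500):
    """Minimize (1/(2n))*||y - X b||^2 + lam*||b||_1 by cyclic coordinate
    descent; columns of X are assumed centered and scaled."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # residual excluding feature j
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return b

# Noiseless toy data: only the first two of five regressors matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1]
b = lasso_cd(X, y, lam=0.05)
```

With a moderate lambda the irrelevant coefficients are driven exactly to zero, which is the feature-selection behaviour discussed above; this also illustrates why the regressors must be normalized for the penalty to act evenly.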


I'm developing a Maple code for LASSO regression.
I still have a few problems to fix.

I will use R (this will validate my implementation of LASSO in Maple). I will send you the results in 5 to 6 hours, as I have something else to do right now.


I missed your comment because it probably overlapped with my answer

A few comments:

  • we need to apply stepwise regression or lasso regression ... 
    Neither of these methods exists in Maple.
    I wrote a piece of code over a decade ago that does ridge regression (another possibility); I'll dig around and see if I can get my hands on it (actually, I'm fed up with implementing algorithms in Maple and prefer using R directly).
  • If no such model comes with two or three regressor variables even after a certain number of runs
    I suppose you consider only polynomial models of degree 1 (at most) in all regressors?
    Is the model y=x1+x1^2+x1*x2 a two-regressor model for you?

    For information, the best 3-regressor model is the 4th one overall (models being ranked by increasing values of the Akaike criterion) and has regressors [X3, X6, X9].
  • Step 4
    There is no difficulty at all in exporting the data with ExcelTools:-Export, nor in generating drawings.
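For completeness, the AIC value used for this kind of ranking can be computed in a few lines; here is a sketch (Python/NumPy for illustration, using the Gaussian least-squares form AIC = n*ln(RSS/n) + 2k, additive constants dropped):

```python
import numpy as np

def aic_ols(X, y):
    """AIC of a Gaussian least-squares fit, up to an additive constant:
    n*ln(RSS/n) + 2k, where k is the number of fitted coefficients."""
    n, k = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ b) ** 2))
    return n * np.log(rss / n) + 2 * k

# Toy comparison: a model containing the true regressor beats intercept-only.
rng = np.random.default_rng(1)
x0 = rng.standard_normal(100)
y = 2.0 * x0 + 0.5 * rng.standard_normal(100)
ones = np.ones((100, 1))
aic_full = aic_ols(np.column_stack((ones, x0)), y)   # intercept + x0
aic_null = aic_ols(ones, y)                          # intercept only
```

Ranking candidate regressor subsets by this value is exactly what stepAIC automates, trading goodness of fit (RSS) against model size (k).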

Subsidiary question: do you use any data analysis software (beyond the Excel statistics package maybe)?
If not, would you be interested in the results that R would provide on your data?

I made a huge mistake: I didn't see that the response (Y) was in the first column; I imagined it was in the rightmost one.
The correction is easy:

XY   := Data[2..-1]:
N, P := LinearAlgebra:-Dimensions(XY):
P    := P-1:
N, P:

XY := XY[.., [$2..P+1, 1]]:  # add this line to put Y in rightmost column

Train_fraction := 0.7:

Here is the corrected file (you will see that the Akaike criterion doesn't seem well suited here, which probably argues for the use of something more reliable).


I've just updated my answer with a procedure to find the solution in a more workable way.


I experienced the same issue when I decided to put the code I develop under version control.
The first step was to convert (as @Rouben Rostamian said) all the mw files (about 20, in 1D text) into mpl files by opening each of them and exporting them as mpl files.

Based on an operation that lasts 1 minute per file, this means you will have done this job (admittedly not very pleasant) in 1 hour.
Writing a conversion script in some language will probably take longer.
