Carl Love

Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

@Rouben Rostamian  

I don't understand the mathematics here, so this comment may be irrelevant: Before you get too deeply involved with this problem in a Mapleish way, remember that Maple's numeric pdsolve currently handles only PDEs with exactly two independent variables. There is no such hard limit for symbolic solutions.

You were using Maple's 2D input, and in the first line you typed the epsilon too close to the following left parenthesis, so it was interpreted as if epsilon were a function rather than a coefficient. If you use Maple's 1D input, this and numerous other problems will be avoided. In the second line, you will get better results using eval rather than subs; eval forces the evaluation of the derivatives. So, change these two lines to

eq:= diff(x(t),t$2) - epsilon*(alpha*x(t)^4 - beta)*diff(x(t),t) - x(t) + x(t)^3;
eq2:= eval(eq, x(t)= 1 + epsilon*x[1](t) + epsilon^2*x[2](t));
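
To see the difference between subs and eval on a derivative, here is a minimal illustration (my own example, not taken from your worksheet):

ex:= diff(x(t), t):
subs(x(t)= t^2, ex);   # returns the unevaluated derivative diff(t^2, t)
eval(ex, x(t)= t^2);   # forces the differentiation, returning 2*t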

 

 

@Christopher2222 How does one hide a code edit region? I see the option to collapse it, but that doesn't completely hide it.

@MDD 

If you need your procedure to be able to detect empty Matrices, then I recommend checking the ColumnDimension rather than the RowDimension because, as I said, any reasonable attempt to create an empty Matrix out of square brackets leads to a Matrix with one row and zero columns. (<<[][]>> creates a Matrix with zero rows and one column.)

If you need your procedure to create empty Matrices, then use Matrix() or, if speed is critical, rtable(1..0, 1..0, subtype= Matrix).
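
For example (a small illustration of my own, assuming the behavior described above):

M:= Matrix([[]]):                          # built from square brackets: 1 row, 0 columns
LinearAlgebra:-ColumnDimension(M);         # 0, so checking the columns detects the emptiness
E:= rtable(1..0, 1..0, subtype= Matrix):   # truly empty 0 x 0 Matrix, the fast construction
op(1, E);                                  # 0, 0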

If you post your procedure, I can make further suggestions.

@Haley_VSRC 

The corresponding mathematical reason is that b must be less than or equal to the minimal value of t in order to avoid nonreal values. The graphical result is that the omitted points are very close to the regression curve horizontally.
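
For example, with the power-law model A*(t-b)^c discussed elsewhere on this page, a data point with t < b produces a nonreal value (hypothetical numbers, just for illustration):

evalf((2e-5 - 6.3e-5)^0.42);   # negative base to a fractional power: a nonreal (complex) result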

@Markiyan Hirnyk 

No, the {} indicates that the set of elements is the empty set, which is different from the empty set being an element.
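
A quick illustration of the distinction:

{};                     # the empty set: zero elements
{{}};                   # a set whose single element is the empty set
nops({}), nops({{}});   # 0, 1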

@MDD 

NonzeroRowsAndColumns:= (M::Matrix)-> [
     add(`if`(ArrayTools:-IsZero(M[j,..]), 0, 1), j=1..op([1,1], M)),
     add(`if`(ArrayTools:-IsZero(M[..,j]), 0, 1), j=1..op([1,2], M))
]:
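
For example, applied to a small test Matrix (my own hypothetical example), it returns the counts of nonzero rows and nonzero columns:

M:= Matrix([[1,0,0], [0,0,0], [2,0,3]]):
NonzeroRowsAndColumns(M);   # [2, 2]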

 

@Markiyan Hirnyk 

When I said that my fit had lower residuals, I was using the fit that treated the first four points as outliers (my Sol_Carl5), and I compared it to doing the equivalent with DirectSearch:-DataFit, using all nine fitmethods and taking the minimal residuals. Done this way, my method has less than half the residuals.

But, if I use all four methods crossed with all nine fitmethods (for a total of 36 calls to DataFit), I can make a small improvement over my method. Here's the worksheet:

 

restart:

TW:= <<2.00E-05|0.00723>,<4.86E-05|0.033171>,<7.31E-05|0.047349>,<9.77E-05|0.057941>,<0.000122|0.066835>,<0.000147|0.074711>,<0.000171|0.081864>,<0.000196|0.088463>,<0.000221|0.094621>,<0.000245|0.10042>,<0.00027|0.10591>,<0.000295|0.11114>,<0.000319|0.11614>,<0.000344|0.12091>,<0.000368|0.12544>,<0.000393|0.12977>,<0.000418|0.13391>,<0.000442|0.13788>,<0.000467|0.1417>,<0.000492|0.14539>,<0.000516|0.14896>,<0.000541|0.15244>,<0.000565|0.15582>,<0.00059|0.15911>,<0.000615|0.16231>,<0.000639|0.16544>,<0.000664|0.1685>,<0.000689|0.17148>,<0.000713|0.1744>,<0.000738|0.17725>,<0.000762|0.18005>,<0.000787|0.18279>,<0.000812|0.18548>,<0.000836|0.18811>,<0.000861|0.1907>,<0.000885|0.19324>,<0.00091|0.19574>,<0.000935|0.19819>,<0.000959|0.2006>,<0.000984|0.20297>,<0.001008|0.20531>,<0.001033|0.20761>,<0.001058|0.20987>,<0.001082|0.2121>,<0.001107|0.2143>,<0.001131|0.21647>,<0.001156|0.2186>,<0.00118|0.22071>,<0.001205|0.22279>,<0.00123|0.22484>,<0.001254|0.22687>,<0.001279|0.22886>,<0.001303|0.23083>,<0.001328|0.23278>,<0.001352|0.2347>,<0.001377|0.2366>,<0.001401|0.23847>,<0.001425|0.24032>,<0.00145|0.24215>,<0.001474|0.24396>,<0.001498|0.24575>,<0.001523|0.24751>,<0.001547|0.24926>,<0.001571|0.25099>,<0.001596|0.25277>,<0.00162|0.25433>,<0.001644|0.25571>,<0.001668|0.25708>,<0.001693|0.25843>,<0.001717|0.25978>,<0.001741|0.26112>,<0.001765|0.26245>,<0.001789|0.26377>,<0.001813|0.26508>,<0.001838|0.26638>,<0.001862|0.26767>,<0.001886|0.26895>,<0.00191|0.27022>,<0.001934|0.27148>,<0.001958|0.27273>,<0.001982|0.27396>,<0.002006|0.27519>,<0.00203|0.27641>,<0.002054|0.27762>,<0.002078|0.27881>,<0.002102|0.28>,<0.002126|0.28118>,<0.00215|0.28235>,<0.002174|0.28352>,<0.002198|0.28467>,<0.002222|0.28582>,<0.002246|0.28695>,<0.00227|0.28808>,<0.002294|0.2892>,<0.002318|0.29032>,<0.002342|0.29142>,<0.002366|0.29252>,<0.00239|0.29361>,<0.002414|0.29469>,<0.002438|0.29577>,<0.002462|0.29684>,<0.002486|0.2979>,<0.002509|0.29895>,<0.002533|0.3>,<0.002557|0.30104>,<0.002581|0.30207>,<0.002605|0.3031>,<0.002629|0.30412>,<0.002653|0.30513>,<0.002677|0.30614>,<0.002701|0.30714>,<0.002724|0.30813>,<0.002748|0.30912>,<0.002772|0.3101>,<0.002796|0.31107>,<0.00282|0.31204>,<0.002844|0.313>,<0.002868|0.31396>,<0.002891|0.31491>,<0.002915|0.31586>>:

RecenteredPowerFit:= proc(TW::Matrix, b_range::range(realcons))
local
     T:= TW[..,1], W:= TW[..,2],
     b:= Optimization:-NLPSolve(
          b-> Statistics:-PowerFit(T-~b, W, output= residualsumofsquares),
          b_range, method= branchandbound, objectivetarget= 0
     )[2][1],
     Res:= Statistics:-PowerFit(T-~b, W)
;
     ['A' = Res[1], ':-b' = b, 'c'= Res[2]]
end proc:

 

Sol_Carl5:= RecenteredPowerFit(TW[5.., ..], -1..TW[5,1] - 1e-7);

[A = HFloat(3.682236898705571), b = HFloat(6.28308061404132e-5), c = HFloat(0.4158452235037357)]

model:= A*(t-b)^c:

FitMethods:= [lms, lad, lts, lws, minimax, median, quantile, trimean, lfad]:

T5:= TW[5.., 1]:  W5:= TW[5.., 2]:
Min:= infinity:
for f in FitMethods do
     Sol_DS[f]:= DirectSearch:-DataFit(
          model, T5, W5, t,
          fitmethod= f, evaluationlimit= 30000, usewarning= false
     );
     V:= add(abs((eval(eval(model, Sol_DS[f][2]), t= T5[k]) - W5[k])^2), k= 1..op(1,T5));
     if V < Min then  (Min,Min_f):= V,f  end if  
end do:

RSS_DS:= Min;

HFloat(0.005773814784137672)

Min_f;

trimean

RSS_Carl:= add((eval(eval(model, Sol_Carl5), t= T5[k]) - W5[k])^2, k= 1..op(1,T5));

HFloat(8.687836999527558e-4)

Now try all methods and all fitmethods of  DataFit.

Methods:= [cdos, brent, powell, quadratic]:

Min:= infinity:
for f in FitMethods do  for m in Methods do
     Sol_DS[f,m]:= DirectSearch:-DataFit(
          model, T5, W5, t,
          fitmethod= f, evaluationlimit= 30000, usewarning= false,
          method= m, strategy= globalsearch
     );
     V:= add(abs((eval(eval(model, Sol_DS[f,m][2]), t= T5[k]) - W5[k])^2), k= 1..op(1,T5));
     if V < Min then  (Min,Min_f,Min_m):= V,f,m  end if  
end do  end do:

 

RSS_DS:= Min;

HFloat(6.044545213233039e-4)

Min_f, Min_m;

lms, quadratic

 

 

Download RecenteredPowerFit_vs_DirectSeach.mw

@Markiyan Hirnyk I'm not trying to blame anybody! I was just trying to tell you that both issues were caused by the same mistake. Geez, I can't even say that you're right without you finding some way to criticize!

@capea 

You didn't execute the expandoff command! That is necessary to prevent the simplification of the residues. Also, don't forget to restart before the whole process.

 

@Markiyan Hirnyk You're absolutely right. I made a mistake. I will correct the Answer now.

Regarding the expand portion, what you're seeing is a consequence of the same mistake. After I make the correction to the Answer, try expand(Psi(3)) before and after the expandoff commands.
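
In other words, a test along these lines (a rough sketch; use the exact expandoff calls from the corrected Answer):

restart:
expand(Psi(3));     # before the expandoff commands
expandoff(Psi):     # one plausible form; the Answer has the precise calls
expand(Psi(3));     # after: Psi(3) should now be left unexpanded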

@MDD Try using the Maple help: Enter ?Psi at the Maple prompt.

Psi(z) = diff(ln(GAMMA(z)), z)
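
You can check this identity directly in Maple (a quick verification of my own, not part of the help page):

simplify(diff(ln(GAMMA(z)), z) - Psi(z));   # 0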

@MDD You simply cut, paste, and execute the code that Roman gave, then 

with(PolynomialIdeals):
EliminationIdeal(<x>,{a,b,c});

@Markiyan Hirnyk 

By what criterion are you saying that the result produced by DirectSearch:-DataFit is better than that produced by my RecenteredPowerFit (which calls Statistics:-PowerFit)? You can't directly compare the residuals as reported by the various fitmethods because they are each computed by their own formula. If you compare the sum of the squares of the residuals, my fit is several times better than yours. It is also clear from the plots that my fit is better than yours. (My fit is the red curve in my plot; ignore the blue curve.)

Any result with > 1 is completely ridiculous, not just "not so good". Try plotting them. Of the nine fitmethods, seven produce such results for me (on the first run).
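
For example, a plot like this (a sketch reusing the names from my worksheet elsewhere on this page, for any fitmethod f) makes the bad fits obvious:

plots:-display(
     plot(TW, style= point),
     plot(eval(model, Sol_DS[f][2]), t= TW[5,1]..TW[-1,1], color= red)
);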

Furthermore, the results of DirectSearch:-DataFit are not entirely reproducible. Rerunning the exact same call (of course, without restart) often gives a completely different result, frequently turning a decent result into a completely ridiculous one.

The vote up may be from the OP, and it may reflect the fact that I worked back and forth with her over three days and produced code that she could execute and that gave her what she considered to be "perfect" results.

 

@Jerome BENOIT I also don't care what the shown material is called. I just didn't want other readers to get the impression that it was ever possible to see the complete code of a module---not by eval or by any other means. Your usage of "body" was likely to give that false impression.
