axiuluo6611

0 Reputation

2 Badges

12 years, 214 days

MaplePrimes Activity


These are replies submitted by axiuluo6611

@Thomas Richard 

hi,

Thanks for your help! I think I have found the solution to my earlier problem, but when the program runs, other problems appear: some of the estimated variables end up exactly at the upper or lower bound I gave, and those values are followed by a ".". For example, I set all the ranges to [-100, 100], and the result is

[-98.4102393621806329, [x[1] = -18.6701519728996, x[2] = -0.963910334624946, x[3] = 6.28476620992306, x[4] = -100., x[5] = -0.156737723341160e-1, x[6] = -.967524702985621, x[7] = 0.118493169662852e-3, x[8] = .948306917362862, x[9] = -33.7788970201065, x[10] = 100., x[11] = -11.4103630538657, x[12] = 9.87363631705384]] 

(In fact, some of them are clearly too large according to the related literature, but when I narrow the range, the value of the likelihood function unfortunately decreases...)

Please note x[4] and x[10] in particular.

Here is another example:

[-81.6089352616458542, [x[1] = 9.06738879205833, x[2] = 10., x[3] = 10., x[4] = -10., x[5] = 0.899940265957144e-3, x[6] = .856687363109218, x[7] = -0.101798430849125e-2, x[8] = .999031908281083, x[9] = -10., x[10] = -10.0000000000000, x[11] = -10.0000000000000, x[12] = -10.]]

Because of the appearance of -10.0000000000000, I suspect that Maple has not found the optimal solution. Does the "." mean that the bounds are so narrow that I need to widen them?

 

Furthermore, why do different ranges give rise to different results, while some variables always end up exactly at a bound? Isn't this a GLOBAL optimization? I hope you can give me some suggestions about this. Thanks a lot!
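For reference, here is a minimal sketch of how I check which estimates land on a bound. The objective LL below is only a placeholder (my real log-likelihood uses the data b and c from the attached worksheet), and the bound value 100 and tolerance are illustrative:

```maple
with(GlobalOptimization):
# Placeholder objective -- replace with the actual log-likelihood expression.
LL := add(x[i]^2, i = 1 .. 12):
# All twelve parameters bounded in the same box, as described above.
rng := seq(x[i] = -100 .. 100, i = 1 .. 12):
sol := GlobalSolve(LL, rng, maximize):
# Flag any estimate that sits on a bound, a sign the box may be too tight.
atBound := select(e -> abs(abs(rhs(e)) - 100) < 1e-6, sol[2]);
```

If atBound is nonempty, the corresponding bounds are binding and probably need to be widened before the interior optimum can be found.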

Best regards

 

Mr. Richard,

Thanks for your help and suggestion! 

b and c are arrays of data used to estimate the parameters; the details can be seen in the uploaded worksheet test3.mw and the dataset data.xlsx.

I am using Maple 15 with the Global Optimization Toolbox (GOT).

Best regards

test3.mw

data.xlsx
