Axel Vogt

MaplePrimes Activity


These are replies submitted by Axel Vogt

Thank you both for the answers to my vague (and initially even wrongly posted) question.

Alec's explanation of the convention used and Doug's explanation of sum vs. add made it worth embarrassing myself.

stupid me, of course :-((

sum(a,j=0..-2);

                                  -a

Is it still that strange rule (which I never understood)?
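
For reference, a minimal comparison of the two commands; as I understand it, sum works with an antidifference S and evaluates S(n+1) - S(m), while add literally loops over the range:

  sum(a, j = 0..-2);   # S(j) = a*j, so a*(-2+1) - a*0 = -a
  add(a, j = 0..-2);   # empty loop, so 0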

Thank you - though I maintain that basic arithmetic (even over the complex numbers)
should be correct, and there should be no need for workarounds.

PS: I did not even get an automatic notification that the SCR / bug report arrived.
Sometimes I really have doubts whether such etiquette is all that common.

It was years ago and it was a pain ... Actually one should first check
whether the model ever *might* be applicable to the data:

  assume(0 <= a0): assume(0 <= a1): assume(0 <= b1):
  additionally(a1 + b1 < 1);
  h(n+1) = a0 + a1*x(n)^2 + b1*h(n);
  h(1) = a0/(1 - a1 - b1);

  epsilon(n) = x(n)/sqrt(h(n)); epsilon = N(0,1), iid; # errors

That no longer uses the language of a stochastic process, but it
stays within the discrete setting.
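
As a minimal sketch of that recursion as an actual computation (the proc name garch_h and the data Array X are my own choices, not taken from the thread):

  # assumes X is an Array(1..N, datatype = float[8]) holding the observations x(n)
  garch_h := proc(a0, a1, b1, X, N)
    local h, n;
    h := Array(1..N, datatype = float[8]);
    h[1] := a0/(1 - a1 - b1);               # stationary start value
    for n to N-1 do
      h[n+1] := a0 + a1*X[n]^2 + b1*h[n];   # the recursion from above
    end do;
    h;       # then X[n]/sqrt(h[n]) should look like N(0,1), iid
  end proc: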

Ignoring that, there are (as explained in the Excel sheet) 2 ways:
either you enforce that the solution at least comes close to the
descriptive statistics, but possibly violates the conditions. Or conversely.

The posted Maple solution considers both.

And it explains that the bottleneck is the sum: it is slow. And
one can easily improve that (either through a DLL, by evalhf'ing, or with
the command Compiler:-Compile).
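
A sketch of what that can look like; the proc name negLL, the type declarations, the 1.0e30 safeguard outside a1 + b1 < 1, and the Gaussian log-likelihood (taken up to its additive constant) are my assumptions here, not the posted worksheet:

  negLL := proc(a0::float, a1::float, b1::float,
                X::Array(datatype = float[8]), N::integer)::float;
    # 1/2 * Sum( ln(h(n)) + x(n)^2/h(n) ), with h(n) from the recursion above
    local h, s, n;
    if a1 + b1 >= 1.0 then
      s := 1.0e30;                       # crude penalty, keeps the logs real
    else
      h := a0/(1.0 - a1 - b1);
      s := ln(h) + X[1]^2/h;
      for n from 2 to N do
        h := a0 + a1*X[n-1]^2 + b1*h;
        s := s + ln(h) + X[n]^2/h;
      end do;
      s := 0.5*s;
    end if;
    s;
  end proc:

  cnegLL := Compiler:-Compile(negLL):    # the same summation, run as native code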

One does not need analytic gradients, though one could feed them to the
Optimizer (which is not handy at all, and I will not do it again); an easy
way is to ignore the last condition and check it afterwards.
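
For instance (only a sketch: the wrapper obj, the box bounds, the starting point, and the operator form of Optimization:-NLPSolve are my choices; the condition a1 + b1 < 1 is simply tested on the result):

  obj := (a0, a1, b1) -> cnegLL(a0, a1, b1, X, N):   # X, N, cnegLL as in the sketch above
  sol := Optimization:-NLPSolve(obj, 1e-8 .. 1.0, 0 .. 1.0, 0 .. 1.0,
                                initialpoint = [0.01, 0.05, 0.90]);
  p := sol[2];                                       # fitted (a0, a1, b1)
  evalb(p[2] + p[3] < 1);                            # the ignored condition, checked afterwards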

Practical note: there is no reason the process should stay the same
over time, and on the other hand you need lots of data to get any
reasonable guess for the 3 parameters. Thus its actual usage is
left to those specialists who actually know about it and its
limitations (I only know that I stay away from it). Or just feed some
software like R and pray.

The model has some constraints (you can find them in the link I gave) and needs some assumptions on positivity; the easiest is to square to enforce real logs (which I would always do, ditto for squares), since I guess Maple's optimizer works over the reals.

Well, it can be simplified: you need the maximum likelihood, which means summing. But if you switch to evalhf and 'add', or even compile that very part (say by putting it into a proc working on numerical Arrays), then it should be doable without computing the gradients explicitly or using a DLL (which does just that).
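
For example, reusing the names from the sketch further up (negLL, cnegLL, X, N are my own placeholders, not part of any posted worksheet), one can check agreement and the speed-up directly:

  negLL(0.01, 0.05, 0.90, X, N) - cnegLL(0.01, 0.05, 0.90, X, N);         # should be ~ 0
  st := time[real](): negLL(0.01, 0.05, 0.90, X, N):  time[real]() - st;  # interpreted loop
  st := time[real](): cnegLL(0.01, 0.05, 0.90, X, N): time[real]() - st;  # compiled version
  # evalhf'ing the uncompiled proc is the intermediate option mentioned above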

Then take the Excel version axelvogt.de/axalom/simpleGarch11.zip and change the topic to "Is Excel easier than Maple?", to which my immediate response will be "yes".

:-)

Thank you, that explains it: w3 is non-real in general, while the simplified form is abs(something), hence always real.

I submitted a short error report
