MrMarc

3163 Reputation

18 Badges

17 years, 133 days

MaplePrimes Activity


These are answers submitted by MrMarc

really easily?? Can you please show me :-)

the problem with the Cholesky decomposition, however, is that the correlation matrix must be positive definite (all eigenvalues positive), which is

such a pain when we have negative serial correlation.
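A quick way to see (and patch) the positive-definiteness issue, sketched in Python/NumPy rather than Maple so it is easy to check numerically. The function name `nearest_psd_correlation` is mine, and the eigenvalue clip is only a crude illustrative repair, not Higham's full nearest-correlation-matrix algorithm:

```python
import numpy as np

def nearest_psd_correlation(C, eps=1e-8):
    """Clip negative eigenvalues, then rescale to restore the unit diagonal."""
    w, V = np.linalg.eigh(C)
    w = np.clip(w, eps, None)            # force all eigenvalues positive
    A = (V * w) @ V.T                    # rebuild V diag(w) V^T
    d = np.sqrt(np.diag(A))
    return A / np.outer(d, d)            # congruence keeps it positive definite

# a correlation matrix with strong negative serial correlation: not positive definite
C = np.array([[ 1.0, -0.9,  0.5],
              [-0.9,  1.0, -0.9],
              [ 0.5, -0.9,  1.0]])
try:
    L = np.linalg.cholesky(C)
except np.linalg.LinAlgError:            # Cholesky fails on an indefinite matrix
    C = nearest_psd_correlation(C)       # repair, then factor
    L = np.linalg.cholesky(C)
```

The rescaling step matters: after clipping, the diagonal drifts away from 1, so dividing by the outer product of the square-rooted diagonal restores a valid correlation matrix.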

 

I got a little bit sidetracked. I am currently trying to prove that the slope of the rescaled range plot [ ln(R/S) vs ln(N) ] is equal to the

Hurst exponent, which for random data should be 0.5... however, I have not proven that yet.
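A numerical check of that claim, sketched in Python/NumPy. This is a simplified R/S: instead of the classical block-splitting of one long series, it averages the statistic over independent samples of each length. For iid Gaussian noise the fitted slope should come out near 0.5, typically a bit above it because the R/S statistic is biased upward in small samples (the Anis-Lloyd correction addresses this):

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic: range of the demeaned cumulative sum over the std dev."""
    y = np.cumsum(x - x.mean())
    return (y.max() - y.min()) / x.std()

rng = np.random.default_rng(0)
sizes = [64, 128, 256, 512, 1024, 2048]
avg_rs = [np.mean([rescaled_range(rng.standard_normal(N)) for _ in range(200)])
          for N in sizes]

# slope of ln(R/S) vs ln(N) estimates the Hurst exponent H
H, _ = np.polyfit(np.log(sizes), np.log(avg_rs), 1)
```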

 

I did try to simulate fractional Brownian motion using the code below (but it turned out to be quite useless)


restart;
with(Statistics);
randomize();
a := 1.5;
b := 2;
n := 100;
H := .5;
mu := 0;
sigma := abs(b-a)^(2*H);
r := Sample(RandomVariable(Normal(mu, sigma)), n);
s[1] := .1;
for t from 2 to n do
  s[t] := s[t-1]*exp(r[t])
end do;

ss := `<,>`(seq(s[i], i = 1 .. n));
plot([seq(i, i = 1 .. n)], ss, color = black, thickness = 2, labels = ["Time", "Stock Price"], font = [Times, Roman, 14]);

R[1] := 0;
for i from 2 to n do
R[i] := ss[i]-ss[i-1] end do;

RR := [seq(R[i], i = 1 .. n)];
tt := [seq(i, i = 1 .. n)]; plot(tt, RR, color = black, thickness = 2, labels = ["time", "Returns"], font = [Times, Roman, 16]);

lagL := 20;
plot([seq(i, i = 1 .. lagL)], [evalf(seq(Correlation(RR[1 .. nops(RR)-i], RR[i+1 .. nops(RR)]), i = 1 .. lagL), 3)], view = [1 .. lagL, -1 .. 1], labels = ["lags", "Correlation"], font = [Times, Roman, 16])
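For the fBm attempt above, one standard exact method is to Cholesky-factor the fBm covariance, cov(B_H(s), B_H(t)) = 0.5*(s^(2H) + t^(2H) - |s-t|^(2H)), and multiply the factor by a standard normal vector. A sketch in Python/NumPy (the function name `fbm` is mine); it is O(n^3), so only practical for modest n:

```python
import numpy as np

def fbm(n, H, seed=0):
    """Exact fBm sample on t = 1..n via Cholesky of the fBm covariance."""
    t = np.arange(1, n + 1, dtype=float)
    s, u = np.meshgrid(t, t)
    # cov(B_H(s), B_H(t)) = 0.5*(s^2H + t^2H - |s-t|^2H)
    cov = 0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H))
    L = np.linalg.cholesky(cov)          # valid: fBm covariance is positive definite
    z = np.random.default_rng(seed).standard_normal(n)
    return L @ z

path = fbm(200, H=0.7)   # H > 0.5: persistent (positively correlated) increments
```

For H = 0.5 the covariance reduces to min(s, t), i.e. ordinary Brownian motion, which is a handy sanity check.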
 

ok, thanx for your input Georgios.

I know how to simulate pure unit roots and geometric Brownian motions (even though I don't understand completely what alec

has done, but that is ok), but what I still haven't figured out is how to simulate a fractional unit root or fractional Brownian motion.

I think it should be like an ARIMA(1,1) model, which means it should be non-stationary with

some sort of serial dependence, but more than that I don't know... would be nice to set up one in Maple.....
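For what it's worth, the usual way a "fractional unit root" is written is the ARFIMA form (1-L)^d x_t = eps_t, where d need not be an integer; expanding (1-L)^(-d) gives MA weights psi_0 = 1, psi_k = psi_{k-1}*(k-1+d)/k. For 0 < d < 0.5 the process is stationary with long memory, for d >= 0.5 it is non-stationary, and d = 1 is the pure unit root. A sketch in Python/NumPy (function name mine, truncating the infinite MA at the sample length):

```python
import numpy as np

def arfima_0d0(n, d, seed=0):
    """Fractionally integrated noise: x_t = sum_k psi_k * eps_{t-k},
    with psi_k from the binomial expansion of (1-L)^(-d)."""
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):
        psi[k] = psi[k-1] * (k - 1 + d) / k
    eps = np.random.default_rng(seed).standard_normal(n)
    # truncated MA(infinity) representation via convolution
    return np.convolve(eps, psi)[:n]

x = arfima_0d0(2000, d=0.3)                    # stationary, long memory
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]          # theory: rho(1) = d/(1-d) ~ 0.43
```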

Thanx Scott03, that works great! Saved me a couple of hours or days, he he :-)

The "slider tool" is probably good when you only have a simple equation,

but I have some problems using it when plotting a slightly more complicated equation, such as

a geometric Brownian motion, which takes the form

restart;
with(Statistics);
randomize();
n := 100;
mu := 0;
sigma := 1;
`&Delta;t` := evalf(1/250, 3);

 r := Sample(RandomVariable(Normal(mu*`&Delta;t`, sigma*sqrt(`&Delta;t`))), n);

s[1] := 100;
for t from 2 to n do s[t] := s[t-1]*exp(r[t]) end do;

ss := `<,>`(seq(s[i], i = 1 .. n));

plot([seq(i, i = 1 .. n)], ss, color = black, thickness = 2, labels = ["time", "Stock Price"], font = [Times, Roman, 14])
 

 

The idea is that I want to set the mu and sigma as "sliders" and get the plot updated automatically

 

thanx for your input! but in my case I don't think that will work... since I want to assign the input value of the slider

to a variable, and not to a MathContainer, so I can change the plot by tweaking the input variable (slider).

 

 

 

ok thanx Robert.

Can you please explain in a simple step-by-step manner how I can derive the autocorrelation function for an AR(1), AR(2), MA(1),

MA(2) and an ARMA(1,1) process?! If I can find the particular autocorrelation for each lag, I can probably reverse engineer it so I

can force the process to have my predefined autocorrelation at each lag.....

I mean, I can do it with brute force by simply collecting data,

calculating the serial correlation coefficient of the returns, and then using the code below to generate serially dependent drawings

restart:
randomize():
with(Statistics):
n:=1000:
p := .7;   # target first-order serial correlation
r := Sample(RandomVariable(Normal(0, 1)), n);
x[1] := 0;
for i from 2 to n do
x[i] := p*x[i-1]+r[i] end do;
rr := [seq(x[i], i = 1 .. n)];
Correlation(rr[1 .. n-1], rr[2 .. n])

 

but there must be a more elegant approach? maybe bootstrapping?!
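The standard closed forms (worth checking against a time-series textbook) are: AR(1): rho(k) = phi^k; MA(1): rho(1) = theta/(1 + theta^2) and rho(k) = 0 for k >= 2; AR(2) and ARMA(1,1) follow from the Yule-Walker recursion rho(k) = phi1*rho(k-1) + phi2*rho(k-2). Inverting these gives the "reverse engineering" above. A sketch in Python/NumPy for the two invertible cases:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# AR(1): rho(k) = phi^k, so phi equals the target lag-1 correlation directly
phi = 0.7
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t-1] + eps[t]

# MA(1): rho(1) = theta/(1 + theta^2); solve for the invertible root
# (an MA(1) can only reach |rho(1)| <= 0.5)
target = 0.4
theta = (1 - np.sqrt(1 - 4*target**2)) / (2*target)
e = rng.standard_normal(n)
y = e[1:] + theta * e[:-1]

acf1 = lambda z: np.corrcoef(z[:-1], z[1:])[0, 1]   # sample lag-1 correlation
```

For target = 0.4 the inversion gives theta = 0.5, and the sample lag-1 correlations of x and y land on 0.7 and 0.4 respectively.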

Thanks Robert! It looks good taken at face value.

I just realized, though (correct me if I am wrong), that even if we sample random

drawings from, for example, serially dependent (empirical) data, we still won't get serially correlated drawings, or?

It would be nice to find a way to collect empirical data, calculate the returns,

sample a sequence of the returns (with the original statistical properties, such as serial correlation),

and then use that sample to simulate a unit root.

 

The solution was staring me in the face !

A first-order serially correlated variable rr with a correlation

parameter p = 0.9 is given by an autoregressive process of order one, AR(1):

 

restart;
with(Statistics);
randomize();
n := 1000;
p := .9;
r := Sample(RandomVariable(Normal(0, 1)), n);
x[1] := 0;
for i from 2 to n do
x[i] := p*x[i-1]+r[i] end do;
rr := [seq(x[i], i = 1 .. n)];
Correlation(rr[1 .. n-1], rr[2 .. n])

 

We have to model serial correlation in a pure unit root and in a unit root with a stochastic trend separately,

by using correlated drawings.

Hi Acer, ok.

I want to make inference about serial correlation in a pure unit root (which should be zero, since a pure unit root is the limit:

everything above a pure unit root, for example a unit root with a stochastic trend or a trend-stationary process, will have more serial

correlation, in my understanding). I am simulating the pure unit root with the recursive equation s(t) = s(t-1) + rand, where

rand is a drawing from a standard normal distribution (code below). Note that I tried to use a drawing from a Pareto distribution

(not a pure unit root) as well, but I have not got a good result yet, since I cannot figure out what a and c stand for in Pareto(a, c).

restart:
randomize():
with(Statistics):
a := 0: b := 1: n := 100:
randd := Sample(RandomVariable(Normal(0, 1)), n):
s[1] := 100:
for i from 2 to n do
s[i] := a+b*s[i-1]+randd[i] end do:
s := [seq(s[i], i = 1 .. n)]:
tt := [seq(i, i = 1 .. n)]:
plot(tt, s, color = black, thickness = 2, labels = ["t", "Stock_Price"])

 

I then ran the code

Statistics:-Correlation(s[1 .. n-1], s[2 .. n]) 

where s is the table that holds all the "stock" prices to calculate serial correlation

But I got a serial correlation coefficient of 0.90, so something is wrong (since it is a random process).

I then realized that maybe we have to make the series stationary (first difference s(t)-s(t-1)=rand ) before we can calculate

serial correlation.  So I did that and I got a serial correlation coefficient of 0.003 which is close to zero. good !
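The same check sketched in Python/NumPy: the levels of a pure unit root show a lag-1 correlation near one regardless of the innovations, while the first differences are essentially uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
s = 100 + np.cumsum(rng.standard_normal(n))   # pure unit root: s_t = s_{t-1} + eps_t

lag1 = lambda z: np.corrcoef(z[:-1], z[1:])[0, 1]
level_corr = lag1(s)                          # near 1: levels are non-stationary
return_corr = lag1(np.diff(s))                # near 0 after first differencing
```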

I now tried to experiment with the formula for how we calculate returns. Instead of

s[i] := a+b*s[i-1]+randd[i]

we can use a formula with a larger time increment, for example

s[i] := a+b*s[i-3]+randd[i]

and this is where I initially found serial correlation (I think I did something wrong).

I calculate the serial correlation by using the below code.

inc := 5;
for i to n-inc do
r[i] := s[i+inc]-s[i] end do;
rr := [seq(r[i], i = 1 .. n-inc)];
tt := [seq(i, i = 1 .. n-inc)];
plot(tt, rr, color = black, thickness = 3, labels = ["t", "Returns"]);
Correlation(rr[1 .. nops(rr)-1], rr[2 .. nops(rr)])

 

I guess my question is: if we simulate 1000 pure unit roots, then the average serial correlation coefficient should be close to zero

regardless of the time increment in the return, right?! I mean, it should not matter if we calculate returns as the price tomorrow

minus the price today, or if we took the price in a week or the price in a month minus the price today...?

Also, regarding the code, can you find any errors in it?

 

 

Even when I take the first difference, I find serial correlation, especially when I increase the return

increment: for example, instead of P(t) - P(t-1) we can take P(t) - P(t-4) or P(t) - P(t-8).

any suggestions??!
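One subtlety that could explain this, sketched in Python/NumPy: if the k-period returns s(t+k) - s(t) are computed at every t, consecutive returns share k-1 of the same innovations, so even a pure random walk gives a lag-1 correlation of roughly (k-1)/k by construction. Sampling non-overlapping returns removes it.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 20_000, 5
s = np.cumsum(rng.standard_normal(n))   # pure unit root

r_overlap = s[k:] - s[:-k]              # k-period returns at every t (overlapping)
r_disjoint = r_overlap[::k]             # non-overlapping k-period returns

lag1 = lambda z: np.corrcoef(z[:-1], z[1:])[0, 1]
# theory: lag1(r_overlap) ~ (k-1)/k = 0.8, lag1(r_disjoint) ~ 0
```

So a nonzero serial correlation in multi-period returns is not necessarily evidence against randomness; it can be an artifact of overlapping windows.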

ok I think I got it:

correlation is based upon the assumption that the series is stationary, so we must take the first difference.

The serial correlation is then -0.049, almost zero.

ok, maybe I now understand what is going on, but.....

I was just under the impression that you had to start by defining the two functions in F(f(t), g(t)):

 

f(t) = V    and    g(t) = U

 

and then we defined the differentials of these two functions, given by dV and dU.

 

Chiang starts by defining the differentials dU and dV and derives U and V. Humm..... ok.

This means that in the introductory example, where we want to take the derivative of F(f(t), g(t)),

 

we should start by defining

 

dU = diff(g(t), t) dt     dV = diff(f(t), t) dt

 

(we should not start by defining U and V), which then gives us

 

U = g(t)    V = f(t)
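Writing it out (my own summary of the setup, with F taken as a function of V = f(t) and U = g(t)): whichever pair you define first, the chain rule gives the same total differential, so the order is presentation rather than substance.

```latex
U = g(t), \quad V = f(t)
\quad\Longleftrightarrow\quad
dU = g'(t)\,dt, \quad dV = f'(t)\,dt,

dF = \frac{\partial F}{\partial V}\,dV + \frac{\partial F}{\partial U}\,dU
   = \left[ \frac{\partial F}{\partial V}\,f'(t)
          + \frac{\partial F}{\partial U}\,g'(t) \right] dt .
```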

 

alright I think I got it.

 

For me it is all about sequential steps and presentation: if you need to start by defining dU and dV to get the correct solution, then

Chiang should not have presented an initial example where he started by defining U and V (for me this logic is not intact, and it just

makes me confused)....... humm. There should have been clear instructions that it is critical that we start from dU and dV, and not from U

and V.... makes sense, right?

 
