MaPal93

175 Reputation

6 Badges

2 years, 332 days

MaplePrimes Activity


These are replies submitted by MaPal93

@acer Sure, will be waiting for your feedback on the solve() step. 

I now better understand indets(). Thanks.

What are the downsides of keeping subs instead of eval now that RealDomain is no longer loaded (as in the latest script I attached)? Wouldn't they behave the same?
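For what it's worth, here is a minimal sketch of where I understand the two to differ (toy expression, not from my script): subs performs purely syntactic substitution, while eval substitutes and then evaluates.

```maple
expr := sin(x) + x^2:
subs(x = 0, expr);   # sin(0): substituted, but the function call is not evaluated
eval(expr, x = 0);   # 0: substitutes and evaluates
```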

@acer thanks for the details. I updated my script accordingly: 250423_Finding_Chi_updated_again.mw

Now indets() finally produces the set of my actual free parameters without repetitions (output (19)), but it also includes the "building blocks" random variables, which are not really free parameters the way the sigmas and the rhos are. Is this an issue? By "free parameter set" I mean the set of free name parameters that I will eventually fix to some values at the calibration step later in my script (when I will need to interpret my solutions).

As you can see at the bottom of the script, my end goal is to isolate in X_1, X_2, X_3 the coefficients on the random variables nu[1] and nu[2] (or their respective placeholders nnu[1] and nnu[2]), which themselves are combinations of the "building blocks" random variables (see output (10) and (11) for a better understanding).
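If the building-block random variables ever get in the way, one sketch of a workaround (the names _R1, _R2 below stand for the session names Maple assigned to the random variables; yours may differ) is simply to subtract them from the indets result:

```maple
expr   := sqrt(sigma__1)*_R1 + rho*_R2:       # toy expression
rvs    := {_R1, _R2}:                         # building-block random variables
params := indets(expr, 'name') minus rvs;     # {rho, sigma__1}
```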

Back to the main issue now: how to best set up the solve() taking into account my expressions for FOC_1, FOC_2, and FOC_3? 

@acer first, thanks for looking into this. I am quite new to Maple... does your latest comment about RealDomain supersede your previous one? 

By just commenting (#) out the with(RealDomain): line of code on top of the script:

  • it fixes the parameter repetition issue for _R, _R0, _R1, _R2, but indets still produces sqrt(-rho^2_nu_1,2+1) as a separate parameter from rho_nu_1,2. Why?
  • the infolevel() command finally produces some text output ("Main: Entering solver with 3 equations in 3 variables" etc.), but the output is now so long that I can no longer scroll down the window in Maple.
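Regarding the sqrt term: if I understand the behaviour correctly, indets with no type filter returns every indeterminate subexpression, radicals included, whereas restricting to type 'name' keeps only the bare symbols (toy expression below, not from the script):

```maple
expr := sqrt(1 - rho^2) + sigma*rho:
indets(expr);           # {rho, sigma, sqrt(1 - rho^2)} -- the radical is included
indets(expr, 'name');   # {rho, sigma}
```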

I was thinking, should I:

  1. "Hide"/Comment out infolevel()?
  2. Remove all simplify() since all I am interested in is the final output at the bottom of the script rather than the compactness of intermediate outputs
  3. Add my assumptions for the free parameters listed by indets somewhere at the top? Note that all the sigmas are variances and all the rhos are correlations, so all sigmas should be assumed > 0 and all rhos between -1 and +1. (Can I avoid loading RealDomain if I enforce these assumptions from the start? Of course I want to make sure everything stays in the real domain, but is loading RealDomain even necessary?)

Is any of the above possibly interfering with solve()? What else do you suggest? For convenience, please feel free to directly work on 250423_Finding_Chi_updated.mw with with(RealDomain): commented out and to send me back your amended version if you wish. Thanks
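On point 3, a minimal sketch of stating the domain restrictions without RealDomain (the parameter names here are placeholders for my actual sigmas and rhos):

```maple
assume(sigma__1 > 0, sigma__2 > 0, -1 < rho, rho < 1);
simplify(sqrt(sigma__1^2));   # sigma__1 (displayed with a trailing ~), no csgn/abs artifacts
is(1 - rho^2 > 0);            # true under the assumptions
```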

@dharr thanks for clarifying. I fixed the script accordingly:

250423_Finding_Chi_updated.mw

Can you please consider this latest version? Maybe the phrasing of my question was confusing. What I am trying to do is straightforward: I want to solve my three equations FOC_1, FOC_2, FOC_3 for my three variables X_1, X_2, X_3. After fixing the script, the only problem I now encounter is that solve() is stuck in "evaluating" status. Given my expressions for FOC_1, FOC_2, FOC_3, what's the best approach here?
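In the meantime, a sketch of how I could at least bound the attempt rather than let it hang (Eqs and Vars as in the worksheet; the 600-second cap is arbitrary):

```maple
try
    sol := timelimit(600, solve(Eqs, Vars));
catch "time expired":
    sol := FAIL;   # give up after 10 minutes
end try;
```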

Also, if you check output (20), you see that indets(Eqs) minus Vars produces the list of free parameters, but _R, _R0, _R1, _R2 each appear three times, and sqrt(-rho^2_nu_1,2+1) shows up as a separate free parameter even though rho_nu_1,2 is already in the list. Why?

@dharr thanks for your reply. As I kind of anticipated your point, I changed the body of the question and included:

Question 1: is conditional_distributions_Version2.mw consistent with (is it the right interpretation of/the correct implementation of) the conditional distribution of a multivariate normal distribution?

If the answer to Question 1 is yes, then Question 2: solve() in Finding_Chi_Version2.mw does not produce that error anymore but evaluation still gets stuck. How to solve the system?  

You also wrote: "Also your 1-D math inputs have superscripts and subscripts, which seem strange - for example b2 instead of b^2." For example where?

@acer ahah thank you Acer!

I still have an issue that I am trying to solve, so I migrated it into a related question.

@acer Why does only coeff(Omega__*, u[1]) not work? Please check the line of code highlighted in green: collect_complete_ac_coeffissue.mw.

Also, how do I use coeff() to isolate a, that is, the "leftover" term (the coefficient not sticking to any variable)?
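For the "leftover" term, my understanding (toy polynomial below) is that the third argument of coeff selects the power, so degree 0 picks out the constant term:

```maple
expr := a + b*u[1] + c*u[1]^2:
coeff(expr, u[1], 0);   # a
coeff(expr, u[1], 1);   # b
```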

@acer this is exactly what I needed...using placeholders! Thank you acer.

@acer thanks for your answer. There were indeed some earlier calculations, in which I define and correlate the various random variables so that the linear combination Omega takes that form.

This is the full script (you can for now ignore step 2.1 and go straight to 2.2 to deal with my question):

 collect_complete.mw

If you want to better understand why I would even want to do something like this, check the total script below:

total_script.mw

Now my question should make more sense to you.

@mmcdara 

sorry if my notation created confusion and misunderstanding. I will try to fix the recode procedure myself, that should not be too much of an issue.

But if you see the script above, I encounter a few more issues in the steps that follow. For example (please check the script to better understand what I mean):

1) How do I compute Cov[ nu[1], nu[2] | S[1], S[2] ]? Should I compute it as E[ nu[1]*nu[2] | S[1], S[2] ] - E[ nu[1] | S[1], S[2] ]*E[ nu[2] | S[1], S[2] ], where E[ nu[1] | S[1], S[2] ] and E[ nu[2] | S[1], S[2] ] are computed using the procedure? And how do I compute E[ nu[1]*nu[2] | S[1], S[2] ]? (Just by defining a new RV nu[1]*nu[2] and conditioning it on S with the same procedure?)
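In other words, what I have in mind is the following sketch (CondMean is a hypothetical stand-in for the conditioning procedure from your reply; nothing below is actual worksheet code):

```maple
# Cov[X, Y | S] = E[X*Y | S] - E[X | S]*E[Y | S]
# (CondMean(Z, S) is assumed to return E[Z | S] via your procedure)
CondCov := (X, Y, S) -> CondMean(X*Y, S) - CondMean(X, S)*CondMean(Y, S);
```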

 

2) Is there a way to use the "collect" or "coeffs" built-in functions but for random variables instead of variables? (I need to re-organize the linear combination Omega by collecting the coefficients on the RVs nu[1], nu[2], u[1], u[2], u[3] - see script to understand what I mean). Alternatively, is there a way to "redefine" them as standard variables so that I can standardly use collect() and coeffs()? 
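One idea I had for point 2 (sketch only; r1, r2 stand for two of the random variables, and the placeholder names nnu[1], nnu[2] are made up): swap the RVs for fresh names, collect, then swap back if needed.

```maple
Omega := 3*X*r1 + 2*r1 + 5*r2:                 # toy linear combination
ph    := [r1 = nnu[1], r2 = nnu[2]]:           # placeholder substitutions
collect(subs(ph, Omega), [nnu[1], nnu[2]]);    # (3*X + 2)*nnu[1] + 5*nnu[2]
```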


I think after solving these two doubts it should be alright. I should be able to take care of the rest.

 

thanks again! Sorry if I created confusion. 

@mmcdara OK, thanks. I think my understanding was wrong in the first place, and your first comment about the "sufficient statistics" was already enough to answer the first of the two questions.

Just to double-check: the script below (see the highlighted comment at the bottom) is correct, isn't it?

restart

with(LinearAlgebra):
with(Statistics):
with(RealDomain):

N := 2

epsilon := Vector(N, [RandomVariable(Normal(0, sigma__epsilon1)),RandomVariable(Normal(0, sigma__epsilon2))]);
Variance(epsilon[1]);
Variance(epsilon[2]);
Correlation(epsilon[1], epsilon[2]) assuming sigma__epsilon1 > 0, sigma__epsilon2 > 0;

Vector(2, {(1) = _R, (2) = _R0})

 

sigma__epsilon1^2

 

sigma__epsilon2^2

 

0

(1)

MarginalStDev := Vector(N, symbol=sigma__v);
cor_Matrix    := Matrix(N$2, (i, j) -> `if`(j=i, 1, rho__v[i, j]), shape=symmetric);
cov_Matrix    := DiagonalMatrix(MarginalStDev) . cor_Matrix . DiagonalMatrix(MarginalStDev);

Vector(2, {(1) = sigma__v[1], (2) = sigma__v[2]})

 

Matrix(2, 2, {(1, 1) = 1, (1, 2) = rho__v[1, 2], (2, 1) = rho__v[1, 2], (2, 2) = 1})

 

Matrix(2, 2, {(1, 1) = sigma__v[1]^2, (1, 2) = rho__v[1, 2]*sigma__v[1]*sigma__v[2], (2, 1) = rho__v[1, 2]*sigma__v[1]*sigma__v[2], (2, 2) = sigma__v[2]^2})

(2)

# Conditions for cor_Matrix to be positive definite

rhos := indets(cor_Matrix):

# Sylvester's criterion: the N main minors must be strictly positive.

det_11 := cor_Matrix[1, 1]:        # this minor is obviously positive
det_33 := Determinant(cor_Matrix):

rho_conds := solve(det_33 > 0, rhos):
print~(rho_conds):


 

-1 < rho__v[1, 2]

 

rho__v[1, 2] < 1

(3)

# Cholesky decomposition assuming rho_conds are met



conditions := map(x -> op([x > -1, x < 1]), indets(cor_Matrix))[], map(x -> x > 0, indets(MarginalStDev))[];
LUDecomposition(cov_Matrix, method='Cholesky'):
L := simplify(%) assuming conditions;

conditions := -1 < rho__v[1, 2], rho__v[1, 2] < 1, 0 < sigma__v[1], 0 < sigma__v[2]

 

Warning, Matrix is non-hermitian; using only data from upper triangle

 

Matrix(2, 2, {(1, 1) = sigma__v[1], (1, 2) = 0, (2, 1) = rho__v[1, 2]*sigma__v[2], (2, 2) = sqrt(1 - rho__v[1, 2]^2)*sigma__v[2]})

(4)

# Correlation by linear combination:
#
# step 1: Let A a random variable of independent centered gaussian variables

A := `<,>`(seq(RandomVariable(Normal(0, 1)), i=1..N));

Vector(2, {(1) = _R1, (2) = _R2})

(5)

# step 2: Define the random vector B this way

B := L . A

Vector(2, {(1) = `#msub(mi("&sigma;",fontstyle = "normal"),mi("v"))`[1]*_R1, (2) = `#msub(mi("&rho;",fontstyle = "normal"),mi("v"))`[1, 2]*`#msub(mi("&sigma;",fontstyle = "normal"),mi("v"))`[2]*_R1+sqrt(-`#msub(mi("&rho;",fontstyle = "normal"),mi("v"))`[1, 2]^2+1)*`#msub(mi("&sigma;",fontstyle = "normal"),mi("v"))`[2]*_R2})

(6)

# step 3: check B

'Expectation(B)' = Mean~(B);
'Cov(B)' =  simplify(Mean~(B . Transpose(B)))

Expectation(B) = (Vector(2, {(1) = 0, (2) = 0}))

 

Cov(B) = Matrix(2, 2, {(1, 1) = sigma__v[1]^2, (1, 2) = rho__v[1, 2]*sigma__v[1]*sigma__v[2], (2, 1) = rho__v[1, 2]*sigma__v[1]*sigma__v[2], (2, 2) = sigma__v[2]^2})

(7)

# final step: Define the random vector C as B + the vector of desired means.
#             Check C

nu := B + Vector(N, symbol=nu__0):

'Expectation(nu)' = Mean~(nu);
'Cov(nu)' =  simplify( Mean~(nu . Transpose(nu))  - Mean~(nu) . Transpose(Mean~(nu)) );
Covariance(nu[1],nu[2]);

Expectation(nu) = (Vector(2, {(1) = nu__0[1], (2) = nu__0[2]}))

 

Cov(nu) = Matrix(2, 2, {(1, 1) = sigma__v[1]^2, (1, 2) = rho__v[1, 2]*sigma__v[1]*sigma__v[2], (2, 1) = rho__v[1, 2]*sigma__v[1]*sigma__v[2], (2, 2) = sigma__v[2]^2})

 

sigma__v[1]*rho__v[1, 2]*sigma__v[2]

(8)

S := nu + epsilon;
'Expectation(S)' = Mean~(S);
'Cov(S)' = simplify(Mean~(S . Transpose(S)) - Mean~(S) . Transpose(Mean~(S)));
Covariance(S[1], S[2]) assuming sigma__epsilon2 > 0;

Expectation(S) = (Vector(2, {(1) = nu__0[1], (2) = nu__0[2]}))

 

Cov(S) = Matrix(2, 2, {(1, 1) = sigma__v[1]^2 + sigma__epsilon1^2, (1, 2) = rho__v[1, 2]*sigma__v[1]*sigma__v[2], (2, 1) = rho__v[1, 2]*sigma__v[1]*sigma__v[2], (2, 2) = sigma__v[2]^2 + sigma__epsilon2^2})

 

sigma__v[1]*rho__v[1, 2]*sigma__v[2]

(9)

All in all, since the bivariate S := nu + epsilon, with nu and epsilon independent, Cov[S[1],S[2]] IS THE SAME AS Cov[nu[1],nu[2]] (assuming sigma__epsilon2 > 0). The correlation, instead, is not a sufficient statistic: the correlation coefficient between nu[1] and nu[2] differs from the one between S[1] and S[2], since the latter takes into account the new variances.  


Download Cov_S1S2.mw

 

@mmcdara 

You asked: "what are S[1] and S[2] for they do not appear in the expression of Omega?" I defined them in the first question above: S[1]=nu[1]+epsilon[1] and S[2]=nu[2]+epsilon[2]. They do not appear in Omega, but nu[1] and nu[2] do.

You wrote: "Basically the variance and covariance are sufficient statistics from which other statistics can be build (the linear correlation coefficient for instance), not the opposite.
Thus you cannot do some transformation on random variables, believe the correlation coefficient is an invariant, and deduce the covariance from its value and the two variances."
 Ok, this I understand. BUT, the covariance matrix I am expecting is COV_S_correct defined here below (instead of COV_S):

@mmcdara 

Thank you for clarifying the mathematics of Sylvester's criterion. I have "branched out" two further doubts into another question, since they can be treated independently.

thanks a lot @mmcdara. Always very helpful and supportive. In theory I should now have all the ingredients to solve the optimization problem (highlighted in dark green are all my fundamental random variables: bivariate {nu_1, nu_2}, trivariate {u_1, u_2, u_3}, eps_1, eps_2):

I might encounter issues with implementing it. Can I get back to you in case?

 

A small detail about Sylvester's criterion: if I want to calculate conditional moments for bivariates instead of trivariates (by setting N=2 at the top of the script), am I missing any condition if I generalize det_22 := Minor(cor_Matrix, 3, 3) into det_22 := Minor(cor_Matrix, N, N), like here below? Any other code changes I'd need to take care of? I think it is useful to work with code that is as generic as possible...
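A sketch of one fully generic alternative I was considering (assuming cor_Matrix, rhos, and N as defined earlier in the script): check all N leading principal minors at once instead of naming det_11, det_22, det_33 separately.

```maple
with(LinearAlgebra):
# Sylvester's criterion for generic N: every leading principal minor positive
minors    := {seq(Determinant(cor_Matrix[1 .. k, 1 .. k]), k = 1 .. N)}:
rho_conds := solve(map(m -> m > 0, minors), rhos):
```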

@mmcdara thanks a lot, I think this script is exactly what I needed. But you wrote: "As the formula will be very lengthy, I wrote them symbolically (remove the simple quotes to evaluate the conditional statistics)"... so why do I get the following error once I remove the quotes? Maybe I misunderstood something... but I do need to check the explicit calculations.

 

I understand your Points 1, 2, and 3 now. You also wrote: "If these variables are the result of some computation, just provide the formulas to derive them." Yes, those alphas and betas are the results of calculations, but please don't worry about them for now... in fact, those alphas and betas should be the coefficients of the optimal X_1, X_2, and X_3 equations I am now trying to re-calculate in the correct way through the optimization procedure, which is the topic of this question.
