MaPal93

175 Reputation

6 Badges

2 years, 332 days

MaplePrimes Activity


These are replies submitted by MaPal93

@acer my end goal is to find the three lambdas as a simple, readable function of the free distribution parameters. 

Here, in 240323_simpler_mmcdara.mw (thanks @mmcdara), the solution to the 4x4 system is found in just a few minutes and, while it exceeds the output limit of 1000000, running a calibration followed by simplify(allvalues~([calibrated_sol])) leads to my end goal, as you can see at the bottom of the script. I really can't see why 120523_problem_parallel.mw behaves differently; I even calibrated my system before solving it, which I did not do for the simpler one.

To my eyes, there are just two main differences between the simpler and complex versions:

  1. Simpler version is a 4x4 system (versus 6x6)
  2. Simpler version has much simpler alphas and betas, as they are only expressions of the mus and lambdas I want to solve for (versus alphas and betas which also depend on a few more parameters)

Why can't the parallel solver overcome these two differences?

UPDATE:

I noticed that 3 of the 6 equations are linear in mu_1, mu_2, and mu_3, so I tried to solve this 3x3 sub-system first. I am getting the following error:

120523_problem_parallel_try_3by3_first.mw

How can I fix this?
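For reference, the split I am attempting looks roughly like this (a sketch with placeholder equation names eq1..eq6; the real equations are in the worksheet):

```maple
# Sketch with placeholder names (eq1..eq6 stand in for the actual equations).
# Step 1: solve the 3 equations that are linear in the mus.
mu_sol := solve({eq1, eq2, eq3}, {mu__1, mu__2, mu__3});
# Step 2: substitute into the remaining 3 equations, leaving only the lambdas.
rem_sys := eval({eq4, eq5, eq6}, mu_sol);
lam_sol := solve(rem_sys, {lambda__1, lambda__2, lambda__3});
```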

@acer The 3 mus appear in only 3 of the 6 equations, and I can solve for them; but the 3 lambdas appear in all 6 equations. 
Is 120523_problem_parallel_6by6.mw set up correctly (looking for all 6 variables using all 6 equations simultaneously), or is there a smarter way (e.g., are some equations redundant)?

Please recommend some more ways to analyse my system. What other tools do I have that I am not aware of? Thanks!

@acer then it just means that the system cannot be split into two sub-systems in the way I did, right? In fact, {mu_1, mu_2, mu_3} appear in only 3 of the 6 equations, but {lambda_1, lambda_2, lambda_3} appear in all 6.

  • Do I need all of them to solve for the lambdas?
  • What if it's an overdetermined system in the lambdas?
  • What if, after solving for the mus, I solve for the lambdas using 3 other equations (different from the 3 I picked in the original script in the body of this question)? How do I find the optimal 3-equation sub-system?
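For the last bullet, a brute-force sketch (placeholder names: rem stands for the set of equations left after the mus have been substituted out) would be to try every 3-equation subset:

```maple
# Sketch: enumerate all 3-equation subsets and keep those that
# actually determine the lambdas.
for s in combinat:-choose(rem, 3) do
    cand := [solve(s, {lambda__1, lambda__2, lambda__3})];
    if cand <> [] then print(s, cand) end if;
end do:
```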

In the meantime, I am trying to use all my 6 equations here: 120523_problem_parallel_6by6.mw

with this as the solve block:

StringTools:-FormatTime("%H:%M:%S");
infolevel[solve] := 4;
P := indets(MyEqs, name) minus MyVars;
EqN := ((numer @ evala @ :-Norm @ numer)~ @ eval)(MyEqs, P =~ 1);
kernelopts(numcpus);
SolE := CodeTools:-Usage([SolveTools:-PolynomialSystem(EqN, MyVars,
            engine = groebner, backsubstitute = false, preservelabels)]);
length(SolE), length~(SolE), nops(SolE);
StringTools:-FormatTime("%H:%M:%S");

 

All my CPUs look very busy, but most sit at or below 50% usage, with just a few occasionally peaking at 100%.

@acer Sorry, I am not following. What do you mean?

Why are you using "assuming" given my assume() commands right after "local gamma" at the top of my script? I thought SolveTools:-PolynomialSystem could overcome the "Warning, solve may be ignoring assumptions on the input variables" that is typical of solve().

Tagging @Carl Love (whom I mentioned in the body of the question) and @acer (who has provided useful inputs in some other threads about parallelization), but everyone else is welcome to help! Thank you all.

@acer great, interesting to see that something like this can be done with Maple. Thank you! I think this will be useful especially if someone has to deal with larger expressions.

@mmcdara thank you for the detailed and attentive answer, as usual! In particular, I think the linear sensitivity analysis will be useful to my problem at hand, later. Why don't you post it as an answer? I can't give you any kind of credit for your efforts otherwise.

@acer thanks! I wanted to mark yours as the best answer but cannot do so. Did you not post it as an answer?

This is the script with all 9 expressions I want to express in terms of each other: rearranging9terms.mw

 

"I'd be interested to know whether your wider investigations might have any use for expressing them all ("compactly") in terms of additional temporary variables."

Mmm, I need to think more about it. The ultimate goal is to 'eyeball' the sensitivity of each of these 9 expressions with respect to changes in the underlying parameters (eyeball meaning without computing partial derivatives; see my comment above to mmcdara). But how did you "find" these additional temporary variables? How do you know that this is the most compact way? (Here it is quite immediate, since the expressions are few and quite simple... but what if they were not? I thought Maple could do it with a single command.)

 

"I made those additional temporary substitutions as assignments. But they could also be formulated as a sequence of equations (for substitution into each other, up the chain)."

Similarly to here https://www.mapleprimes.com/maplesoftblog/201455-Rearranging-The-expression-Of-Equations?

 

"You could optionally (telescopically) collapse a portion of that sequence of substitutions, from the top down. Eg, you could retain only the knowledge in a5 & a6, or what have you."

Interesting... could you show me how to do this with the script attached in this comment? Is it any more beneficial than doing it with assignments?

 

"Also, such a scheme can also support a wider set of operations (expression forms) than just mere polynomials. For polynomials there are various techniques (basis, triangularization, etc) to produce such a substitution chain. But there are also mechanisms for doing it for a wider (non-polynomial) class."

I probably don't need this much for my simple expressions here but, just for the sake of learning, can you provide examples that show how powerful these techniques and mechanisms can be?

 

@mmcdara you are of course right, no other expressions are simpler than the initial ones. However, I did not mention anything about the conditional variances because I actually wanted to learn about the abstract problem (outside of my context), that is, whether Maple allows these re-arrangements of expressions relative to each other. Maybe I should have used a different notation for the example above...

The idea was that it might facilitate interpretation when eyeballing sensitivities. Something like: "if the denominator parameters of t__1 increase, t__1 decreases, and since t__2 = something - t__1, then t__2 increases"... but I understand that the 'something' in t__2 might itself depend on those parameters of t__1 I play around with, so it might not be so straightforward. I guess eventually I'd like to look at the partial derivatives of these t parameters wrt the underlying distribution parameters...
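If it comes to that, the derivative check itself is straightforward (a sketch; t__1, the parameter names, and the calibration point are all placeholders):

```maple
# Sketch: the signs of the partial derivatives, evaluated at a calibration
# point, give the direction of each sensitivity without eyeballing.
params := [sigma__1, rho__12]:             # placeholder parameter names
calib  := [sigma__1 = 1, rho__12 = 1/2]:   # placeholder calibration point
seq(p = signum(eval(diff(t__1, p), calib)), p in params);
```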

@acer Sure, understood. But @mmcdara was following up, and his answer is now deleted together with the thread. I hope he gets notified of the deletion and continues here (since this thread doesn't have an answer yet).

Here's the latest script where I'd like my two remarks above to be addressed: 080523_Chis.mw

@mmcdara yes, I did suspect the syntax was wrong. I had also tried your fixed version before, as it made more sense to me, but it does not output what I need:

080523_Chis.mw

It's still collecting only wrt _R0 next to _R; the _R1 and _R2 terms remain 'hidden' in the 'xi' coefficients. Essentially, I want my 'xi' to be combinations of only free parameters (exactly like K1_wrt_R[1] just above). Thanks!
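In case it helps, I would have expected something like a single collect over the whole list of random variables to do it (a sketch; chi_1 stands in for the actual solution expression):

```maple
# Sketch: collect over all four random variables at once. The 'distributed'
# option yields one coefficient per monomial in the _R's (including cross
# terms like _R1*_R2), and the final argument (here simplify) is applied to
# each coefficient, so the xi's come out in terms of free parameters only.
rvs := [_R, _R0, _R1, _R2]:
collect(chi_1, rvs, 'distributed', simplify);
```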

 

Moreover:

REMARK (primary concern):

I think that your recursive collection procedure is exactly what I need, but from the yellow-highlighted line following output 19 in your script, I noticed that you are trying to collect the solutions with respect to RVS instead of randomvars as I wished. Why? (I assume you have a reason)

{ _R, _R0, _R1, _R2, _R3, _R4, _R5 } are only the "primitive building blocks" for the correlated vectors I actually care about, which are those you see in the outputs of section 1 of the script. For example, _R1 and _R2 form the Gaussian vector A, but the vector I care about is nu, which was constructed according to the procedure you helped me with some time ago. Do you see what I mean? I admit that I might be misunderstanding the whole thing; if that's the case, please explain why you are collecting on RVS instead of randomvars. I do understand that the elements of randomvars do not inherit the type "RandomVariable" from { _R, _R0, _R1, _R2, _R3, _R4, _R5 }, but conceptually I am interested in the coefficients on randomvars (this is why I needed 'wrappers' for randomvars). Pardon me if I called randomvars "random variables" when they are technically not of this type in Maple.

 

@mmcdara thanks in advance for addressing the two remarks above!!! (And don't worry too much about the complexity of the expressions. A simple calibration of the sigma and rho parameters right before collection already reduces their length quite a lot... but I think it's worth proceeding with the uncalibrated problem first.)

I feel like mmcdara's approach is the right direction, but @acer don't hesitate to provide alternative ideas if you have any. Thanks.

REMARK 1 (primary concern):

I think that your recursive collection procedure is exactly what I need, but from the yellow-highlighted line following output 19 in your script, I noticed that you are trying to collect the solutions with respect to RVS instead of randomvars as I wished. Why? (I assume you have a reason)

{ _R, _R0, _R1, _R2, _R3, _R4, _R5 } are only the "primitive building blocks" for the correlated vectors I actually care about, which are those you see in the outputs of section 1 of the script. For example, _R1 and _R2 form the Gaussian vector A, but the vector I care about is nu, which was constructed according to the procedure you helped me with some time ago. Do you see what I mean? I admit that I might be misunderstanding the whole thing; if that's the case, please explain why you are collecting on RVS instead of randomvars. I do understand that the elements of randomvars do not inherit the type "RandomVariable" from { _R, _R0, _R1, _R2, _R3, _R4, _R5 }, but conceptually I am interested in the coefficients on randomvars (this is why I needed 'wrappers' for randomvars). Pardon me if I called randomvars "random variables" when they are technically not of this type in Maple.

 

REMARK 2

Separately from remark 1, using your script I verified that my three solutions all depend on the random variables { _R, _R0, _R1, _R2 }. Given the output at the bottom of your script, AS1_eq appears nonlinear in the first two random variables, so I assume that each of AS1_eq, AS2_eq, and AS3_eq is nonlinear in the four random variables (and, of course, each has different coefficients on the random variables). Essentially, I have 3 nonlinear algebraic expressions in 4 random variables.

Now, would you kindly extend the last block to collect AS1_eq, AS2_eq, and AS3_eq wrt { _R, _R0, _R1, _R2 } recursively, in a compact form? I tried 070523_collectingRVs.mw but it does not work (it collects only wrt _R and _R0... how do I fix it to also take into account interacting or isolated _R1 and _R2 terms?)

  1. Do AS1_eq, AS2_eq, and AS3_eq share the same nonlinear form?
  2. Once unpacked, would each coefficient (yes, there will be quite a few) look like K1_wrt_R[1]/output 19 in your script, that is, a relatively simple combination of free parameters?

Thank you 

 

Attempt 2 works. I solved my system of 3 equations in 3 variables, and my solutions are expressions combining random variables. However, I am still having issues isolating my random variables in expressions with multiple random variables. In particular, I need to collect the coefficients on such random variables, which later enter another optimization problem. Please check the two lines highlighted in red in the latest script:

290423_Chi_version1.mw 

I feel like the trick with placeholders and subs did the job in section 2.2 because the expression Omega was created ex ante with the placeholders; that is, I knew ex ante the form of Omega and where the random variables would appear in it.

What if a linear combination of random variables is the result of a computation (as is the case for chi_1, chi_2, and chi_3 in the script)?

Correlated components of Gaussian vectors are expressed as sums. As an example, let RV be such a component (this is exactly nu[2] in my script; check output 2 to see how I created it). How can I make Maple interpret the random variable RV as an "independent" block whose addends remain untouched by any sort of computation? How do I "wrap" RV? This is important. A simple multiplication plus expand illustrates my issue (a more common operation like simplify() might also affect the addends in RV):

RV := sigma__v[2]*rho__v[1, 2]*_R1 + sigma__v[2]*sqrt(1 - rho__v[1, 2]^2)*_R2 + nu__0[2];

expand(RV*(rho__v[1, 2]*sigma__v[2] + nu__0[2]));

# output: sigma__v[2]^2*rho__v[1,2]^2*_R1 + sigma__v[2]*rho__v[1,2]*_R1*nu__0[2]
#         + sigma__v[2]^2*sqrt(1-rho__v[1,2]^2)*_R2*rho__v[1,2]
#         + sigma__v[2]*sqrt(1-rho__v[1,2]^2)*_R2*nu__0[2]
#         + nu__0[2]*sigma__v[2]*rho__v[1,2] + nu__0[2]^2
# (the addends of RV are scattered across the expanded product)

RV*(rho__v[1, 2]*sigma__v[2] + nu__0[2]);

# output: (sigma__v[2]*rho__v[1,2]*_R1 + sigma__v[2]*sqrt(1-rho__v[1,2]^2)*_R2
#         + nu__0[2]) * (rho__v[1,2]*sigma__v[2] + nu__0[2])
# (unexpanded, RV survives as an intact factor)

 

Importantly, whatever wrapper I use for my RV "to protect it during computations", I still want to be able to compute Mean(), Variance(), and other Statistics operations as if I were applying them to the "nested" RV itself.

  • How can I do this in a smart way? 
  • Where is it best to introduce these wrappers in the script I attached above? (In fact, you can see how all computations following Omega__* produced expressions that are no longer explicit in "repl"/the placeholders and lost compactness.)
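One candidate wrapper I have come across is the stock freeze/thaw mechanism (a sketch below; I am not sure it is what you had in mind). The caveat is exactly my last point: Statistics commands will not see through a frozen name, so the expression has to be thawed before Mean() or Variance().

```maple
# Sketch: freeze(RV) returns an inert name that expand/simplify treat as an
# indivisible atom; thaw restores the original sum afterwards.
RV  := sigma__v[2]*rho__v[1, 2]*_R1
       + sigma__v[2]*sqrt(1 - rho__v[1, 2]^2)*_R2 + nu__0[2]:
fRV := freeze(RV):
e   := expand(fRV*(rho__v[1, 2]*sigma__v[2] + nu__0[2]));  # RV stays intact
thaw(e);   # unwrap before applying any Statistics commands
```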

@acer thanks. Since my three equations FOC_1, FOC_2, and FOC_3 are linear in X__1, X__2, X__3, I thought that an alternative approach to solving my system would be the following:

#ATTEMPT 2

Q := Matrix([[coeff(FOC_1, X__1), coeff(FOC_1, X__2), coeff(FOC_1, X__3)],
             [coeff(FOC_2, X__1), coeff(FOC_2, X__2), coeff(FOC_2, X__3)],
             [coeff(FOC_3, X__1), coeff(FOC_3, X__2), coeff(FOC_3, X__3)]]);
P := Matrix(3, 1, [eval(FOC_1, [X__1, X__2, X__3] =~ 0),
                   eval(FOC_2, [X__1, X__2, X__3] =~ 0),
                   eval(FOC_3, [X__1, X__2, X__3] =~ 0)]);
-(LinearAlgebra:-MatrixInverse(Q) . P);   # FOC = Q.X + P = 0, so X = -Q^(-1).P
                                          # (evalm and &* are deprecated for Matrix)

NULL

 

Because of MatrixInverse(Q), this might be an inefficient way to solve my system, but I do not foresee any conceptual mistake (am I wrong?). However, I get an error already at the first line:

260423_Finding_Chi_attempts.mw (Error, unable to compute coeff)

Interestingly, I noticed that collect(FOC_*, [X__1, X__2, X__3]) works smoothly for all three FOCs but, somehow, coeff(FOC_*, X__*) does not. Why?
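Two workarounds I am considering (sketches using only stock commands): collect first so that coeff sees a genuinely collected polynomial form, or skip the manual coefficient extraction entirely with LinearAlgebra:-GenerateMatrix, which builds the coefficient Matrix and right-hand side in one call.

```maple
# Workaround 1: coeff after collect (coeff generally needs the expression
# collected in the variable it extracts from).
q11 := coeff(collect(FOC_1, X__1), X__1, 1);

# Workaround 2: build the linear system directly. For expressions understood
# as "= 0", GenerateMatrix returns A, b with A . <X__1, X__2, X__3> = b.
A, b := LinearAlgebra:-GenerateMatrix([FOC_1, FOC_2, FOC_3], [X__1, X__2, X__3]);
X := LinearAlgebra:-LinearSolve(A, b);   # avoids forming MatrixInverse(A)
```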
