
MaplePrimes Activity

These are replies submitted by MaPal93


Thank you for your latest script. It's quite late in my time zone right now, so I will check the details tomorrow morning. At first glance the script looks really thorough.

Can I get back to you tomorrow just in case I still have doubts about it? I will also try to adapt it to my "complex" setup and see what it leads to.


I really appreciate your efforts. Not only did you keep looking for a solution, you also took care to explain everything in detail and to provide the most exhaustive and instructive answers. I learned a lot in this thread. I am sure many other beginners in the Maple community will find your insights helpful in solving their own problems. Thank you.


Since I haven't gone through the details of your latest script yet, here is an example of a readable solution with a simple calibration (am I making sense?):

Main: solving successful - now forming solutions


Main: Exiting solver returning 2 solutions


Polynomial: handling a polynomial with a decomposition factor of z^2


RootUnit: computing the symbolic 2nd roots


Polynomial: handling a polynomial with a decomposition factor of z^2


RootUnit: computing the symbolic 2nd roots


allvalues(eval(sol3[][1], [Sigma__0jk = Sigma, Sigma__0ki = Sigma, sigma__uji = sigma, sigma__ujk = sigma, sigma__uki = sigma, rho__XX__23 = 0, q__0jk = 0, q__0ki = 0]))

{mu__jk = 0, mu__ki = 0, lambda__jk = -(1/8)*8^(1/2)*(Sigma/sigma^2)^(1/2)*((9/4)*Sigma^4+2*Sigma^3)/(-(3/2)*Sigma^4+4*Sigma^3), lambda__ki = (1/8)*8^(1/2)*(Sigma/sigma^2)^(1/2)}, {mu__jk = 0, mu__ki = 0, lambda__jk = (1/8)*8^(1/2)*(Sigma/sigma^2)^(1/2)*((9/4)*Sigma^4+2*Sigma^3)/(-(3/2)*Sigma^4+4*Sigma^3), lambda__ki = -(1/8)*8^(1/2)*(Sigma/sigma^2)^(1/2)}


allvalues(eval(sol3[][2], [Sigma__0jk = Sigma, Sigma__0ki = Sigma, sigma__uji = sigma, sigma__ujk = sigma, sigma__uki = sigma, rho__XX__23 = 0, q__0jk = 0, q__0ki = 0]))

{mu__jk = mu__ki, mu__ki = mu__ki, lambda__jk = (1/8)*Sigma*8^(1/2)/((Sigma/sigma^2)^(1/2)*sigma^2), lambda__ki = (1/8)*8^(1/2)*(Sigma/sigma^2)^(1/2)}, {mu__jk = mu__ki, mu__ki = mu__ki, lambda__jk = -(1/8)*Sigma*8^(1/2)/((Sigma/sigma^2)^(1/2)*sigma^2), lambda__ki = -(1/8)*8^(1/2)*(Sigma/sigma^2)^(1/2)}




@Carl Love thank you again for your insights.

The solutions look promising in the sense that at least mu_jk and mu_ki depend only on q_0jk and q_0ki (mu_jk = q_0jk and mu_ki = q_0ki, as I suspected, for reasons that go beyond this thread). The lambda_jk and lambda_ki, instead, are extremely long and reported in implicit form. If you remove the 1000000 output size limit you can display them. What would you be able to tell by looking at these?


We could play around with this calibration to make it easier to interpret my solutions:

`Σ__0jk` := Sigma

`Σ__0ki` := Sigma

`σ__uji` := sigma

`σ__ujk` := sigma

`σ__uki` := sigma

`ρ__XX__23` := 0

q__0jk := 0 or q

q__0ki := 0 or q
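If it helps, the calibration can also be applied without global assignments by passing a substitution list to eval, so the names Sigma__0jk etc. stay free for later runs. A minimal sketch, assuming the solutions are stored in a variable such as sol3, as elsewhere in this thread:

```
# Apply the calibration via eval instead of assigning globally,
# so the calibrated names remain unassigned symbols afterwards.
calib := [Sigma__0jk = Sigma, Sigma__0ki = Sigma,
          sigma__uji = sigma, sigma__ujk = sigma, sigma__uki = sigma,
          rho__XX__23 = 0]:

# the two variants for the q parameters:
allvalues(eval(sol3[][1], [calib[], q__0jk = 0, q__0ki = 0]));
allvalues(eval(sol3[][1], [calib[], q__0jk = q, q__0ki = q]));
```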


Sorry for the misunderstanding. I thought I was clear. I upvoted your answer and I will definitely "select this answer as the best one" once the answer is complete. 

Your latest script, UPDATED VERSION, addresses the problem summarized in the attachment in my original post in this thread. In this sense, my original question never changed. Only in my latest comment did I introduce a more complicated version of the same problem.

"So I won't try to solve this umpteenth variant, given that your previous problem doesn't have any solution."

Why is the solve command in your latest script not something like the following?

MyEqs := {eqljk = lambda__jk, eqlki = lambda__ki, eqmujk = mu__jk, eqmuki = mu__ki};
MyVars := {mu__jk, mu__ki, lambda__jk, lambda__ki};
infolevel[solve] := 4;
solve(MyEqs, MyVars);

Please check the following:

I found solutions, but they are incredibly long and involve implicit _Z variables. How can I simplify my results?

For example, the following calibration might make it easier to interpret my solutions:

`Σ__0jk` := Sigma

`Σ__0ki` := Sigma

`σ__uji` := sigma

`σ__ujk` := sigma

`σ__uki` := sigma

`ρ__XX__23` := 0

q__0jk := 0 or q

q__0ki := 0 or q
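Regarding the _Z variables specifically: solve() packages algebraic numbers as RootOf expressions with _Z as the bound variable, and allvalues expands them into explicit roots. A hedged sketch, reusing the MyEqs and MyVars names from the command above:

```
sols := [solve(MyEqs, MyVars)]:     # each solution may contain RootOf(..., _Z)
explicit := map(allvalues, sols):   # expand every RootOf into its explicit roots

# simplification often needs sign assumptions on the parameters
simplify(explicit) assuming Sigma > 0, sigma > 0;
```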


Thanks for explaining.


This is the "simple" version:

This is the "complex" version:


  • "So tell what eqmujk is meant to represent and I'll return your file corrected."

I just wanted to collect my solutions into variables to be first evaluated for my definitions of alphas and betas, and later used in solve() at the end of the script.

For the simple version:

  • eqmujk = mu_jk
  • eqmuki = mu_ki
  • eqljk = lambda_jk
  • eqlki = lambda_ki 

For the complex version:

  • eqmujk = mu_jk
  • eqmuki = mu_ki
  • eqljk1 = lambda_jk1
  • eqljk2 = lambda_jk2
  • eqljk3 = lambda_jk3
  • eqlki1 = lambda_ki1 
  • eqlki2 = lambda_ki2
  • eqlki3 = lambda_ki3  


Thank you for all the details. How can I fix the following?

As mentioned in my original post, after obtaining my [mu_jk, lambda_jk, mu_ki, lambda_ki] from the two minimization problems, I need to plug in my definitions for alphas and betas and, eventually, pin down my [mu_jk, lambda_jk, mu_ki, lambda_ki] as explicit expressions of my RVs distribution parameters. What am I doing wrong?

Thank you. 


I understand your points now. You are of course right.

However, I am not simply reproducing the paper in the screenshot. My problem differs slightly: I need to extend it to "two dimensions", and the computations become significantly more convoluted. A correlation term is introduced as well.

Please check the attachment: how would you translate this into an executable, efficient Maple algorithm? (Later I will also need to apply it to a setup with "multiple dimensions".)

@mmcdara I am quite new to Maple and to the Statistics package, I hope you understand. The following script explains what I need to translate into code. Thank you.

Below, instead, I address your questions related to the stylized problem in the paper in the screenshot.

  • "E seems to denote the "Mathematical Expectation" and E[Q] seems to represent the expectation of the random variable Q?"

Yes, correct.

  • "Here Q is a combination of 2 random variables v tilde and u tilde , whose expectations and variances are given on the line which follows equation (6)."

Yes, but we also have the four constants lambda, beta, mu, and alpha. These four constants are linked together by the two definitions for alpha and beta, inline and right after (8b).  

  • "What this equation seems to represent."

Why are you introducing a variance here? I want to minimize the expectation of something of the form (a-b-c)^2, where (a-b-c) includes RVs and constants.

  • "Equation (7) is just the expansion of equation (6) given the expectations, variances and correlation of v tilde and u tilde."

Yes, but correlation between vtilde and utilde is 0. 

  • "So you search for the value of lambda such that the variance of RV is minimal. I understand that what is called "first order condition" is nothing but the expectation ("mean") of"

No. First order conditions (FOCs) here mean partial derivatives set to zero. For example, the FOC with respect to mu means computing the partial derivative of our expression with respect to mu and setting it to zero (equation (8a)). The FOC with respect to lambda means computing the partial derivative with respect to lambda and setting it to zero (equation (8b)). Then you plug (8a) into (8b) and solve (8b) for lambda. Finally, (8a) and (8b) become (9a) and (9b) by plugging in our definitions for alpha and beta in terms of mu and lambda. (9a) now depends only on the expectation of vtilde, while (9b) depends only on the variance of vtilde and the variance of utilde (the expectation of utilde is zero).
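To make the FOC recipe concrete, here is a hedged Maple sketch. The quadratic below is only a placeholder for the expanded equation (7): p0, Sigma__0 and sigma__u stand in for E[vtilde], Var(vtilde) and Var(utilde), and are not the paper's actual expressions.

```
# placeholder for the expanded objective in equation (7)
obj := (p0 - mu)^2 + (1 - lambda)^2*Sigma__0 + lambda^2*sigma__u^2:

foc_mu     := diff(obj, mu) = 0:       # analogue of (8a)
foc_lambda := diff(obj, lambda) = 0:   # analogue of (8b)

solve({foc_mu, foc_lambda}, {mu, lambda});
# -> {mu = p0, lambda = Sigma__0/(Sigma__0 + sigma__u^2)}
```

Consistent with (9a)/(9b), mu ends up depending only on the expectation and lambda only on the two variances.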

@Carl Love

I simplified my setup as much as possible. Please check the attachment.

While I think I managed to obtain some analytical solutions, they look a bit strange for two reasons:

1) They do not depend on the exogenous parameters as I expected. In fact, mu_jk and mu_ki should only depend on q_0jk and q_0ki, while lambda_jk and lambda_ki should only depend on BigSigma_0jk, BigSigma_0ki, smallsigma_ujk and smallsigma_uki.

2) Strong dependence on q_0jk and q_0ki: if I set these two parameters to zero or to the same value, I can't obtain solutions anymore (especially for the lambdas). Does it mean that they are not really "free" parameters? I noticed that if I combine the two equations from the FOCs of mu_jk and mu_ki into one system (is this even legit?), I get q_0jk = -q_0ki*(lambda_jk/lambda_ki). Why?
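One way to see where that relation comes from is Maple's eliminate command, which removes chosen unknowns from a system and reports any leftover constraint among the remaining parameters. A sketch, where foc_mujk and foc_muki are hypothetical names for the two mu FOC equations:

```
# eliminate the mu's from their two FOC equations;
# foc_mujk and foc_muki are placeholder names for those equations
result := eliminate({foc_mujk, foc_muki}, {mu__jk, mu__ki});

# result is a list [solved_mus, leftover_constraints]: a nonempty
# second entry is exactly a tie such as q__0jk = -q__0ki*lambda__jk/lambda__ki,
# meaning the q's are no longer free once both mu FOCs are imposed.
```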

I am quite sure that the computations are correct (I checked multiple times), but I am now questioning my setup. Essentially, I am trying to extend the following problem. As you see below, mu depends only on p_0 (the one-dimensional equivalent of my q_0jk and q_0ki) and lambda depends only on BigSigma_0 and smallsigma_u (the one-dimensional equivalents of my BigSigma_0jk, BigSigma_0ki, smallsigma_ujk and smallsigma_uki).


If the current post has morphed into something unrelated to the original post of this thread, I'd be fine with creating a new thread.

@Carl Love 

You asked: "Why do you think that there's a solution?"

I am not sure that there's a solution given my conjectures as in equation (1) of the PDF file Problem_stylized.pdf. That is, I do think that equations (2) to (5) (and their translations into Maple) are correct, but equation (5) (i.e., the predictions, as simply derived from a normality assumption) could map to conjectures which are not linear.

Now I ask you:

Given the form of the beta's, alpha's and natlq2 and natlq3, can you think of polynomial conjectures with more reasonable degrees? Maybe quadratic? Square root? Any other power law?

Previously in this thread you wrote: "At least all the equations are algebraic functions (rational functions & fractional exponents) with the highest exponent being 4 and the only fractional exponent being 1/2." Is this still true for the latest attached scripts? Did you verify this with some Maple command? Most importantly, how can I use this information to think of a cleverer form for my conjectures?
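Both claims can be checked mechanically. A hedged sketch, where eqs stands for the set of residual equations in the script and vars for the set of unknowns (both placeholder names):

```
# clear denominators first, since degree() expects polynomials
polys := map(e -> numer(lhs(e) - rhs(e)), eqs):

max(map(p -> degree(p, vars), polys));  # highest total degree (FAIL if non-polynomial)
indets(eqs, 'radical');                 # lists fractional exponents, e.g. (...)^(1/2)
```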

I think it is important to emphasize that I am trying to solve a two-step problem. In the first step (not in this thread), I look for the beta's and alpha's as combinations of the lambda's and mu's. In the second step (this whole thread), I try to pin down the lambda's and mu's as expressions of the exogenous parameters and then plug them back into the beta's and alpha's to fully characterize my equilibrium.

That is, in the first step I use the linear conjecture for natlq2 and natlq3 (equation (1)) to find the equations for the beta's and alpha's in the first place. Therefore, if that conjecture for natlq2 and natlq3 turns out to be non-linear, then the equations for the beta's and alpha's, which are those given at the beginning of the scripts, would also need to be revised accordingly.

Finally, as you suggested, I attach here my attempts with numerical approaches.


With this calibration (calibration 1) I obtain some values for my lambda's and mu's, but the numerical solutions look a bit strange, i.e. tiny. I ran fsolve with the avoid option to check how the solutions vary across runs. For each variable, while the signs of the solutions seem to be preserved across runs, their values differ and they are all very tiny. What could this mean?
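For reference, a sketch of the avoid loop (eqs and vars are placeholder names for the calibrated system and its unknowns):

```
# repeatedly call fsolve, excluding roots already found via the avoid option
found := {}:
for i to 5 do
    s := fsolve(eqs, vars, avoid = found);
    if not type(s, set) then break end if;  # fsolve returned unevaluated: no new root
    found := found union {s};
end do:
found;   # the distinct numerical solutions collected so far
```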


Here I use a more realistic calibration of my parameters (calibration 2). In this case fsolve fails, but I don't know why. What could this mean?


Finally, I use DirectSearch[SolveEquations]. Contrary to the root-finder fsolve(), this optimiser almost always returns some sort of least-squares solution, as shown by the relatively small value of the sum of the squares of the residuals (the first two items in the list returned by the command). For calibration 2, I finally obtain some results, but the caveat is that the outputs are not necessarily what we would ordinarily call solutions (is my understanding correct?). Moreover, the results I obtain for calibration 1 differ from those I got using fsolve, as described in 1). All in all, what could this mean?


In general, how do I know whether there are numerical errors? Is there a way to verify the accuracy of the numerical results?
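One standard accuracy check is to substitute each numerical solution back into the equations and inspect the residuals at higher precision; if the residuals shrink as the working precision grows, the root is genuine. A sketch with placeholder names sol (one fsolve result) and eqs (the equation set):

```
# residuals of the system at the candidate solution, evaluated at 30 digits
residuals := evalf[30](eval(map(e -> lhs(e) - rhs(e), eqs), sol)):
max(map(abs, residuals));   # should be tiny relative to the equations' scale

# independent cross-check: re-solve at higher working precision
Digits := 30:
fsolve(eqs, vars);
```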


Again, I thank you for looking into this with such care.  


@Carl Love 

You have been tremendously helpful with your comments and detailed explanations. I learned a lot. Now I finally have time to look again into the problem I introduced in this thread and I am again trying to obtain an analytical solution for my 6 lambda's and 2 mu's.

I attach here a PDF file with a stylized description of my problem, as well as the associated Maple script with the parallel computing execution block you suggested above. I left it running for more than 10 hours with no results. I do not get any "kernel connection lost" error, but the execution seems to be stuck. Despite my powerful machine (its characteristics are detailed above in this thread), I observed that the parallelization is not efficient, as most of my CPU cores remain unused.

1) Do you see any error in my script that could cause the computation to get stuck in an infinite loop?

2) Do you see any other error in how I translate my problem into Maple?

3) Based on the expressions for natlq2 and natlq3, do you have any other idea?

Thank you for the continuous support!



@Carl Love do you also have issues with the displayed 3D output of @mmcdara's code? Why are the plots fully black when I re-execute the worksheet?

@mmcdara thank you. I will dig deeper into the issue. As Carl Love said, the restart was indeed the issue, of course.

It may be a stupid (newbie) question, but why, when I run your code, is the 3D plot correct but fully black?

@mmcdara what am I missing?

@Carl Love this is perfect. Thanks a lot!

Now I have another doubt. I also tag @mmcdara.

Let's say I have a matrix COEFF, and its structure is given by the follow-up matrix Type

I'd like to track, again over the 10 runs/rows, the following comparisons:

  1. k_1A (1st col) vs. k_1B (2nd col)
  2. k_1A (1st col) vs. k_2A (3rd col) vs. k_3A (5th col)
  3. k_1B (2nd col) vs. k_2B (4th col) vs. k_3B (6th col)
  4. k_2A (3rd col) vs. k_2B (4th col)
  5. k_3A (5th col) vs. k_3B (6th col)  

Given your experience, what would be the best way to visualise these 5 comparisons over the 10 runs?

Comparisons 2. and 3. are actually the most important to me. I know that something similar to the previous plots can be designed (in the sense that I can always split my 10x6 matrix into two 10x3 sub-matrices), but I was wondering whether Maple could build a 3D plot for these comparisons in particular?
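For comparisons 2 and 3 in particular, one simple option is plots:-display with one point-line curve per column, with the run index on the x-axis. A sketch, assuming COEFF is the 10 x 6 matrix described above:

```
with(plots):
# one curve per column of COEFF; rows (runs) 1..10 on the x-axis
col := j -> [seq([i, COEFF[i, j]], i = 1 .. 10)]:

# comparison 2: k_1A vs k_2A vs k_3A (columns 1, 3, 5)
display(
    plot(col(1), style = pointline, color = red,   legend = "k_1A"),
    plot(col(3), style = pointline, color = blue,  legend = "k_2A"),
    plot(col(5), style = pointline, color = green, legend = "k_3A")
);
```

The same call with columns 2, 4 and 6 gives comparison 3; for a true 3D view of all six columns at once, plots:-matrixplot(COEFF, heights = histogram) is one possibility.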


@Carl Love this is great. This is what I meant. How do I do the 3 side-by-side dualaxisplots (one for each column) as you mention (without splitting my original matrices)? I do need this indeed. How do I change the j(s)?


These A_jk and A_ki have the first column dominating the others, but for other matrices I need to analyse, this combined dualaxisplot should give a better output. So, yes, I do need both the combined dualaxisplots and the 3 side-by-side dualaxisplots.
