60 Reputation

2 Badges

1 year, 283 days

MaplePrimes Activity

These are replies submitted by MapleEnthusiast


Thanks for your response and suggestions regarding LinearSolve() and IterativeFormula(), which I've played around with quite a bit. I'm still not sure, however, how I can use those tools to study the stability of the equilibrium. Let me elaborate:

The general solver LinearSolve() confirms the existence of a solution, and IterativeFormula() suggests that this solution can be reached using one of the standard iterative methods (such as Gauss-Seidel, Jacobi, etc.). I guess the fact that these methods can handle the case of the spectral radius of M being 1 is not so surprising, since the singular matrix (I-M) from equation (5) of my write-up can be inverted using the Moore-Penrose pseudoinverse and still yields an (up-to-scale) unique solution. Presumably, however, IterativeFormula() applies a different type of iteration than the one you coded above (where we start with an initial, model-consistent guess of 0's for the unknowns on the right-hand side (RHS), calculate the left-hand side (LHS), plug those values back into the RHS, etc.), for which we get convergence in Ex1 and divergence in Ex2. (Presumably, IterativeFormula() plugs the LHS values into the RHS with some sort of 'damping' factor to ensure convergence to the existing solution. However, the existence of a solution that can be reached iteratively, e.g., using IterativeFormula(), does not yet imply the stability of the equilibrium, right?)

Hence, I wonder whether the divergence in Example2 (under the "mechanical" updating of the RHS using the LHS) is some indication that the equilibrium is unstable and, if so, how I can formally tackle this question.
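To make the "mechanical" updating concrete, here is a generic Python sketch with made-up 2x2 matrices (not the matrices from my worksheet): the RHS-to-LHS updating is the fixed-point iteration y <- A y + x, which converges precisely when the spectral radius of A is below 1.

```python
def iterate(A, x, y0, n):
    """Run n steps of y <- A y + x, returning the final iterate."""
    y = y0[:]
    for _ in range(n):
        y = [sum(A[i][j] * y[j] for j in range(len(y))) + x[i]
             for i in range(len(y))]
    return y

x = [1.0, 1.0]
A_conv = [[0.3, 0.2], [0.1, 0.4]]   # eigenvalues 0.5 and 0.2: spectral radius < 1
A_div  = [[1.3, 0.2], [0.1, 1.4]]   # eigenvalues 1.5 and 1.2: spectral radius > 1

print(iterate(A_conv, x, [0.0, 0.0], 100))   # settles at the fixed point of y = A y + x
print(iterate(A_div,  x, [0.0, 0.0], 20))    # grows without bound
```

The same (up-to-scale unique) solution exists in both cases; only the iteration behaves differently.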

I would really appreciate any advice. Thank you.

Dear @tomleslie

Thank you very much for your response. To properly comment on your observations, I need to step back, introduce the symbolic notation, and give you a bit of the background behind my problem, which I summarized in the attached 1-pager (and coded in the accompanying Maple file). I apologize in advance for the lengthy write-up but, unfortunately, I couldn't find a more succinct way to address your well-thought-out comments other than to give you a full(er) picture (with my above posts being just an excerpt thereof).

Consider the symbolic linear system of equations from the write-up (and the accompanying Maple file). This system has an (up-to-scale) unique solution for each of the two sets of parameter values, {Parameters1} and {Parameters2}. Once we bring this system into matrix form, we again find the same solutions, either via the Moore-Penrose pseudoinverse or via a regular matrix inverse. However, while for {Parameters1} this solution can be reached iteratively (using your code above), the iterative updating process under {Parameters2} diverges (which corresponds to the spectral radius of the underlying matrix being larger than 1). This makes me wonder whether the (up-to-scale) unique solution under {Parameters2} may constitute an unstable equilibrium and, if so, how I can verify this (using Maple)? Please let me know if I can provide you with any further information.

Thank you so much for your time and amazing help -- I really appreciate it! 


Dear @tomleslie, @Carl Love, et al.

May I ask a conceptual follow-up question related to the numerical exercise above? 

Consider the following two examples (see the attached worksheets). Both are generated from the same linear system (of 4 equations in 4 unknowns) of the following type:

(EQ1): y[1]=f(y[1], y[2], y[3], y[4]) 

(EQ2): y[2]=f(y[1], y[2], y[3], y[4]) 

(EQ3): y[3]=f(y[1], y[2], y[3], y[4]) 

(EQ4): y[4]=f(y[1], y[2], y[3], y[4])

The difference between the two examples is that for the parameter values in Example1 the iterative process described above converges, while for the parameter values in Example2 it diverges.

Importantly, in both cases there exists an (up-to-scale unique) positive real solution to the above system of equations (referred to henceforth as an equilibrium); in Example1 we can converge to it iteratively, while in Example2 we cannot.

In view of the above, my question is: Does the fact that the solution in Example2 cannot be reached iteratively reveal something about the stability of the equilibrium in this case? Simply put, is the equilibrium in Example2 unstable and, if so, how do I test this in Maple?

I should also mention the following piece of information that may be relevant to help you think about my question: The above system of equations can be written in the following matrix form:

y=(I-A)^{-1} * x,

where y is a column vector of unknowns, I is an identity matrix, A is a square matrix of parameters, and x is a column vector of independent variables (in my model, these are the "shocks" that I feed into the system to study their effect on the vector of y's for given values of A).

For the parameter values underlying Example1, the spectral radius (rho) of matrix A is smaller than one, i.e., rho(A) < 1. In this case, the matrix (I-A)^{-1} can be rewritten as a Neumann series, and the nth partial sum of this power series corresponds to the values of the unknowns after n iterations of the above system of linear equations. In contrast, for the parameter values underlying Example2, the spectral radius of A is larger than 1, rho(A) > 1, which precludes the Neumann series decomposition of this matrix. So, to paraphrase my question in matrix notation: does the case rho(A) > 1 reveal something about the stability of the equilibrium vector y and, if so, how do I test the stability in Maple?
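As a sanity check of the Neumann-series claim, here is an illustrative Python sketch with a made-up 2x2 matrix A with rho(A) < 1 (not the matrix from my model): the partial sums of (I + A + A^2 + ...) applied to x reproduce the solution of (I - A) y = x, and they are exactly the iterates of the updating scheme started from zero.

```python
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[0.3, 0.2], [0.1, 0.4]]   # eigenvalues 0.5 and 0.2, so rho(A) < 1
x = [1.0, 1.0]

# Partial sum of the Neumann series applied to x: (I + A + A^2 + ...) x
term, total = x[:], x[:]
for _ in range(200):
    term = matvec(A, term)
    total = [t + s for t, s in zip(total, term)]

# Exact solution of (I - A) y = x via the explicit 2x2 inverse formula
a, b = 1 - A[0][0], -A[0][1]
c, d = -A[1][0], 1 - A[1][1]
det = a * d - b * c
exact = [(d * x[0] - b * x[1]) / det, (-c * x[0] + a * x[1]) / det]

print(total, exact)   # the partial sums agree with the exact solution
```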

Thank you very much in advance!

Thank you very much, @Carl Love -- that works. 

Dear @tomleslie, thank you so much for your response and great help -- this is exactly what I was looking for! 

May I ask you a quick follow-up question in a related context? In contrast to the original system, which had a unique solution, consider another system of (linear) equations, which doesn't converge (see the attached Maple sheet). I would nevertheless like to see the first N (e.g., 100) iterations to get a better sense of *how* it diverges. When I use your original code on this new system, I get the error message "Error, Array index out of range". Unfortunately, I couldn't figure out what I should change in the code to make it display some iterations (even if they diverge) -- may I ask you one more time for your help? Thank you very much!

@Carl Love, may I follow up on this thread?

Let's exclude the possibility of zeroes in the denominators of rows 3 and 4 by restricting the parameters pi, p, and gamma to lie strictly between 0 and 1, i.e., pi, p, gamma ∈ (0,1). How would you apply Gershgorin circles to show that the real parts of all of M's eigenvalues are ≤ 1?
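For reference, here is a sketch (in Python, with illustrative matrices rather than my symbolic M) of the bound that Gershgorin circles give: every eigenvalue lies in a disc centred at a diagonal entry M[i][i] with radius equal to the sum of the absolute off-diagonal entries of that row, so Re(lambda) is at most the maximum of (centre + radius) over rows. When the off-diagonal entries are nonnegative and each row sums to 1, that bound is exactly 1; with a negative entry the bound can exceed the row sum, which is exactly where the argument gets harder.

```python
def gershgorin_real_bound(M):
    """Upper bound on Re(lambda) for any eigenvalue lambda of M:
    max over rows of (diagonal entry + sum of |off-diagonal| entries)."""
    return max(row[i] + sum(abs(v) for j, v in enumerate(row) if j != i)
               for i, row in enumerate(M))

# Rows sum to 1 and off-diagonal entries are nonnegative: the bound is the row sum, 1.
M = [[0.2, 0.5, 0.3],
     [0.4, 0.1, 0.5],
     [0.6, 0.3, 0.1]]
print(gershgorin_real_bound(M))   # 1.0 (up to rounding)

# With a negative entry, the bound exceeds the row sum even though rows still sum to 1.
M_neg = [[1.2, -0.5, 0.3],
         [0.4,  0.1, 0.5],
         [0.6,  0.3, 0.1]]
print(gershgorin_real_bound(M_neg))   # 2.0 (up to rounding): no longer tight
```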

@vv, you're right that 1 is always an eigenvalue of M. For instance, for the parameter values from my post above, the 4 eigenvalues are [1, 0.5, -1.32, -0.03]. What I need to prove, in general, is that the eigenvalues that cause the spectral radius of M to be > 1 are always "in the negative range" or, in other words, that the spectral bound of M is 1.

Thank you very much for your help.




Thanks @vv:

"how": matrix M was obtained from a system of linear equations, and

"why": I would like to conduct a power (Neumann) series decomposition of the matrix (I-(M-Q))^{-1}, where Q is the stationary distribution, which is needed because (I-M) is singular and, hence, not invertible. The "issue" here is that the spectral radius of the matrix (M-Q) may be larger than 1, which would preclude the Neumann series decomposition. Hence, I was thinking about applying some "matrix splitting" techniques to transform matrix M into another one that is decomposable. But the latter approach requires a proof that the real parts of M's eigenvalues are ≤ 1. Please let me know if you have any advice on how to prove that.
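For concreteness, here is a tiny Python sketch of the deflation I have in mind, on a hypothetical 2x2 row-stochastic matrix M (not my actual matrix): subtracting Q, whose rows are all the stationary distribution q, replaces the eigenvalue 1 of M by 0, so the powers of M - Q decay and the Neumann series for (I-(M-Q))^{-1} converges whenever the remaining eigenvalues lie inside the unit circle. (It does not, by itself, settle my case where rho(M-Q) may exceed 1.)

```python
# Hypothetical 2x2 row-stochastic matrix M = [[1-a, a], [b, 1-b]];
# its stationary distribution is q = (b, a)/(a + b), and its eigenvalues
# are 1 and 1 - a - b.
a, b = 0.3, 0.5
M = [[1 - a, a], [b, 1 - b]]
q = [b / (a + b), a / (a + b)]

# Deflate: D = M - Q, where every row of Q is q. This replaces the
# eigenvalue 1 of M by 0, leaving only 1 - a - b = 0.2.
D = [[M[i][j] - q[j] for j in range(2)] for i in range(2)]

# Powers of D shrink to zero, so I + D + D^2 + ... converges.
P = [row[:] for row in D]
for _ in range(50):
    P = [[sum(P[i][k] * D[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
mx = max(abs(v) for row in P for v in row)
print(mx)   # on the order of 0.2**50: the deflated matrix is a contraction here
```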

On a related note, if you have any general advice (or a reference to existing discussions on this forum) on obtaining a Neumann series for a matrix with spectral radius > 1, I would also appreciate it.

Thank you very much in advance!

@Carl Love, thanks for all the links. While I'm still struggling through the theory, I thought I'd answer your specific questions, in case you have any advice for me regarding the implementation:

- What I need for my proof is to show that the real part of any of M's eigenvalues is ≤ 1.

- The domains of the parameters are as follows: for all pi, p, and gamma, we have pi, p, gamma ∈ [0,1], as well as epsilon > 0.

Do you happen to know of a specific approach/Maple routine (using the Gershgorin circle theorem or any other existing result) that would allow me to conduct this proof?

Thank you very much!

Thanks @Carl Love,

While it is true that all rows of matrix M sum to one, some of its entries may be negative (i.e., M is not a non-negative matrix, which is typically required for stochastic matrices, as in the Wikipedia article above); see, e.g., the following permissible parameter values of M:

subs(epsilon = 4, gamma[1] = 1/2, gamma[2] = 1/2, pi[1, 1] = 6/10, pi[1, 2] = 2/10, p[1, 1] = 5/10, p[1, 2] = 4/10, M)

Do you happen to know of a theorem/result that would generalize (the Collatz-Wielandt formula?) to right-stochastic matrices containing negative entries and prove that the largest eigenvalue (not in absolute value) is 1? Thank you.
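One small observation that does survive negative entries (a Python sketch with a made-up matrix, not my M): if every row of M sums to one, then M maps the all-ones vector to itself, so 1 is always an eigenvalue of M regardless of sign patterns. What fails without nonnegativity is only the Perron-Frobenius-type argument that 1 is the largest eigenvalue.

```python
# Row sums are 1, but M has negative entries, so M is not a stochastic matrix.
M = [[ 1.5, -0.8,  0.3],
     [-0.2,  0.7,  0.5],
     [ 0.4,  0.9, -0.3]]

ones = [1.0, 1.0, 1.0]
image = [sum(M[i][j] * ones[j] for j in range(3)) for i in range(3)]
print(image)   # each entry equals 1 (up to rounding): M @ 1 = 1, so 1 is an eigenvalue
```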

Thanks for your reply, @acer -- it's a good question:

The goal is to show analytically that the largest eigenvalue (but not in absolute value!) is ≤ 1 (which is needed for some additional proofs involving the spectral bound of this matrix). I have consistently observed numerically (i.e., for given parameter values of pi, p, gamma, and epsilon) that this seems to be the case for this matrix, and I was hoping that I could prove it analytically (i.e., for any permissible parameter values). Of course, this is only possible if the symbolic eigenvalues can be simplified to some manageable expression. Please let me know if you have any advice. Thanks!

Thanks @Carl Love, it works and I'm really grateful for your continuous help! 

I was wondering whether I may kindly draw your attention to another question of mine from a different thread. There you've already shown me how to find a numerical solution of a non-linear system using the fsolve command; I was additionally wondering whether there is any way to store the iterates from each iteration? Thank you!


@Carl Love, thank you very much for your response.

I've transformed the original matrix to make sure that it's stochastic (i.e., all its rows sum to one; see the bottom of the attached file). Unfortunately, neither the original "steadyStateVector" routine nor the two solutions suggested in your response delivers a steady-state vector within a reasonable amount of time. Hence, I'm still wondering whether I'm doing something wrong? Thanks.


[This updates my post above:]

I'm trying to solve a system of equations iteratively and, to better understand the dynamics behind this process, would like to know the new/updated values of the unknowns after each iteration. Unfortunately, I wasn't able to implement this myself and was wondering whether you could give me a hand?

Specifically, consider the following system of 4 non-linear equations in 4 unknowns: y[1], y[2], y[3], y[4]:

(EQ1): y[1]=f(y[1], y[2], y[3], y[4])

(EQ2): y[2]=f(y[1], y[2], y[3], y[4])

(EQ3): y[3]=f(y[1], y[2], y[3], y[4])

(EQ4): y[4]=f(y[1], y[2], y[3], y[4])

The goal is to set up an iterative process along the following lines:

Step 1: Set the values of y[1] through y[4] on the right-hand side (RHS) equal to 1 and calculate the respective values of y[1] through y[4] on the left-hand side (LHS). Store the latter LHS values as {y[1]^0, y[2]^0, y[3]^0, y[4]^0}.

Step 2: Substitute {y[1]^0, y[2]^0, y[3]^0, y[4]^0} on the RHS, update the LHS values and store those as {y[1]^1, y[2]^1, y[3]^1, y[4]^1}.

Iterate Step 2 until convergence (which should yield exactly the same result as solving the above system of 4 equations numerically using the fsolve(.) command). Importantly, I would like to "retrieve" the values of each variable y[k]^i at each iteration i, where k=1,2,3,4 and i=0,1,..., and use them for further analysis.
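In generic Python terms (with a placeholder contraction standing in for my actual equations, which are in the attached worksheet), what I'm after is something like:

```python
def iterate_and_store(f, y0, max_iter=100, tol=1e-12):
    """Repeatedly apply y <- f(y), keeping every iterate y^0, y^1, ..."""
    history = [list(y0)]
    for _ in range(max_iter):
        y_new = f(history[-1])
        history.append(list(y_new))
        if max(abs(a - b) for a, b in zip(history[-1], history[-2])) < tol:
            break
    return history

# Placeholder update (a contraction, so Step 2 converges):
# y[k] = 0.1*(y[1] + y[2] + y[3] + y[4]) + 0.5 for every k.
f = lambda y: [0.1 * sum(y) + 0.5 for _ in y]

hist = iterate_and_store(f, [1.0, 1.0, 1.0, 1.0])
print(len(hist) - 1, hist[-1])   # number of iterations and the converged values
# hist[i][k-1] is the stored value y[k]^i after iteration i
```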

The specific system of equations that I am trying to solve iteratively is attached. I would like to thank @Carl Love for showing me how to get the numerical solution using the fsolve command in the first place (but now I would additionally like to know the values after each iteration).


Dear @Carl Love, that's exactly what I've been looking for -- thank you so much! 

Just one last follow-up question on this topic: If I understand correctly, the "{vz[]}=~ 0.1..infinity" at the end of your fsolve command defines the domain of the solutions (between 0.1 and ∞). Interestingly enough, when I change those values to, say, "{vz[]}=~ 0.9..infinity", fsolve doesn't converge and I get:

even though the numerical values found in your original solution (with the 0.1 lower bound) are all above 0.9. Do you happen to have an idea why the choice of the lower bound should affect the (non-)convergence?

Thanks again for your amazing help -- I truly appreciate it!


Dear @Carl Love,

Thank you so much for your, as always, extremely helpful response! Do you mind if I follow-up on it?

What I'm ultimately after are numerical solutions for the 3 unknown variables {y[2,2], y[4,1], y[4,2]} in terms of real numbers (as opposed to functions of other unknowns), where I have set the 4th unknown variable, y[2,1], as a scaling factor (say, y[2,1]=1). May I ask how I can amend your code to achieve this goal?

To briefly comment on some of the issues raised above: Yes, your assumption that all variables are positive numbers is correct. Also, I have some reason to believe that the solution to the above system of equations is unique up to scale (i.e., once one picks the scaling factor, say y[2,1]=1, all other unknowns can be expressed in terms of real numbers). Potentially, this solution may be reached through an iterative process along the following lines (using the Gauss-Newton method):

1. Fix the value of, say, y[2,1]=1 and "guess" any values of the "right-hand side variables". Calculate the "left-hand-side values" of the three unknowns {y[2,2], y[4,1], y[4,2]}.

2. Plug in the values from previous step on the right-hand side and calculate again the "left-hand-side values".

Iterate Step 2 until the solution converges.

Thanks again!
