MrMarc

3163 Reputation

18 Badges

17 years, 134 days

MaplePrimes Activity


These are replies submitted by MrMarc

@acer You seem to misunderstand my intentions a bit. All I want is to be able to express all
my numerous different problems in all kinds of forms, i.e. algebraic or matrix form.
Now the notation for matrix form is more complex, hence it is more difficult to set up such
problems, not to mention the notation for NLP[Matrix Form], which is even more complex.

That was the reason why I liked the Optimization:-Convert:-AlgebraicForm:-
LPToMatrix(problem) solution: it gave you something to hold on to.
Instead of just trying stuff randomly and getting error messages slapped in your face
when you are trying to convert your problem to matrix form, you could actually see what
the final result of your specific problem should look like in matrix form,
which made things a little bit easier.

However, I recently discovered that such a solution is not necessarily the "holy grail", because
even though it gives you the final result it does not explain how it got there. For simple
problems that might be easy to figure out, but for complex problems that is a whole different
ball game. As you pointed out, you can't just do Optimization:-Convert:-AlgebraicForm:-
LPToMatrix(problem) and hope for a speed-up; you need to rewrite the output as float
matrices, hence you need to understand how those matrices were generated in
order to be able to set up more general problems, e.g. if you change the number of data
columns (the number of stocks) or the number of observations, etc.
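As a toy illustration of what "understanding how the matrices are generated" buys you, here is a sketch (in Python rather than Maple, with stand-in numbers) of packing two algebraic constraints into the float matrix form A.w <= b, so that the rows scale automatically with nstock:

```python
# Sketch of the kind of bookkeeping LPToMatrix does for you: packing
# the algebraic constraints  sum(w) <= 1  and  EV.w >= pr  into the
# matrix form  A.w <= b  (a >= row is negated to become a <= row).
# EV and pr are hypothetical stand-ins, not values from the worksheet.

nstock = 4
EV = [1.0, 2.0, 0.5, 1.5]   # stand-in expected returns
pr = 1.2                     # stand-in return target

A = []
b = []

# sum(w) <= 1  ->  a row of ones
A.append([1.0] * nstock)
b.append(1.0)

# EV.w >= pr  ->  -EV.w <= -pr
A.append([-e for e in EV])
b.append(-pr)

print(len(A), len(A[0]))  # 2 constraint rows, nstock columns
print(b)
```

Once the constraints live in A and b, changing the number of stocks or observations only changes the loop bounds, not the structure.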

It's not like I don't believe your advice. I do believe that for this particular problem
QPSolve[Matrix Form] might be the fastest and most appropriate to use; however, that
does not mean that in the future I will not be faced with the task of converting another NLP to
matrix form. Hence, it would be nice at this stage to have such knowledge so the conversion
can be as painless as possible. This specific problem represents just one of many
different problems I have on my computer.

Also, I am glad that you pointed out that Min(Norm(R.W)) is not linear, hence LP can't be used!
Then I don't have to bang my head against the wall trying to figure it out. (I always have the
insecure feeling that LP is better than QP, since some people use LP to solve problems with tens
of thousands of variables and constraints (not in Maple, though), but this might be wrong. I also
have the insecure feeling that cone programming is the most efficient of them all, but this might
also be wrong.)

For example, if you look at the beginning of Chapter 10 in this reference (Boyd et al.; Boyd is
apparently an expert in optimization):

http://www.ee.ucla.edu/~vandenbe/103/reader.pdf

They state that the solution to the least-norm problem, i.e. minimize norm(x) subject to A.x = b, is unique and given by
x = A'.(A.A')^(-1).b, which means (I quote) "that we can solve the least-norm problem by solving
(A.A').z = b  (10.2) and then calculating x̂ = A'.z. The equations (10.2) are a set of m linear equations
in m variables, and are called the normal equations associated with the least-norm problem."

My specific problem is a little bit different (even though they are both min-norm problems), hence it
was difficult to determine whether my problem was linear or not.
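For what it's worth, the normal-equations recipe is easy to check numerically. Below is a small pure-Python sketch (outside Maple, with made-up numbers) that solves a 2x3 least-norm problem exactly as the quote describes:

```python
# Least-norm sketch: minimize ||x|| s.t. A.x = b via the normal
# equations (A.A').z = b, then x = A'.z  (eq. 10.2 in the reader).
# Tiny illustrative 2x3 system, so a hand-rolled 2x2 solve suffices
# and no external packages are needed.

A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
b = [2.0, 2.0]

# Gram matrix G = A.A'  (2 x 2, the "m linear equations in m variables")
G = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(2)]
     for i in range(2)]

# Solve G.z = b with the 2x2 inverse formula
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
z = [( G[1][1] * b[0] - G[0][1] * b[1]) / det,
     (-G[1][0] * b[0] + G[0][0] * b[1]) / det]

# x = A'.z is the minimum-norm solution
x = [sum(A[i][j] * z[i] for i in range(2)) for j in range(3)]

print(x)                      # [2/3, 2/3, 4/3]
print(sum(v * v for v in x))  # squared norm 8/3; any other feasible
                              # point, e.g. [2, 2, 0], has a larger norm
```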

@herclau sorry about that. I have to copy from Maple, paste into WordPad, and then
rearrange everything to be able to post the code here.

I have updated the constraints... but it still doesn't work as it should.

Hmmm, I am not sure they are independent though; I might be wrong.
It would be nice to prove that with Maple...


acer, you are absolutely correct. It is a QP minimization problem in disguise!

S = R'.R, where S is the covariance matrix and R is the upper triangular
factor of the Cholesky decomposition.

w'.R' = (R.w)'    sqrt(x'.x) = norm(x)

w'.S.w < r^2  ⇔  w'.R'.R.w < r^2  ⇔  (R.w)'.(R.w) < r^2  ⇔  sqrt((R.w)'.(R.w)) < sqrt(r^2)  ⇔  norm(R.w) < r
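As a quick sanity check of that chain (outside Maple), here is a pure-Python sketch with a small illustrative 2x2 SPD matrix, showing that w'.S.w equals norm(R.w)^2 when S = R'.R:

```python
import math

# Sketch of the identity above: with S = R'.R (R the upper Cholesky
# factor), the quadratic form w'.S.w equals ||R.w||^2, so the
# constraint w'.S.w <= r^2 is exactly the norm constraint ||R.w|| <= r.
# Illustrative 2x2 matrix, not the worksheet's covariance matrix.

S = [[4.0, 2.0],
     [2.0, 3.0]]

# Upper Cholesky factor of a 2x2 SPD matrix, so that S = R'.R
r11 = math.sqrt(S[0][0])
r12 = S[0][1] / r11
r22 = math.sqrt(S[1][1] - r12 * r12)
R = [[r11, r12],
     [0.0, r22]]

w = [1.0, 2.0]

# quadratic form w'.S.w
Sw = [S[0][0] * w[0] + S[0][1] * w[1], S[1][0] * w[0] + S[1][1] * w[1]]
quad = w[0] * Sw[0] + w[1] * Sw[1]

# ||R.w||^2
Rw = [R[0][0] * w[0] + R[0][1] * w[1], R[1][0] * w[0] + R[1][1] * w[1]]
norm_sq = Rw[0] ** 2 + Rw[1] ** 2

print(quad, norm_sq)  # equal up to rounding (24.0)
```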


Unfortunately, I already know that the QP and the NLP optimizations produce
the same allocations (even though the objective functions return different values, since the
objectives are different).

And yes, the first part can be made much more efficient.

# later comment-1
And yes, you are right, the two problems can be solved equivalently as:

problem := r, {con1, con4, con5}, seq(w[i] = 0 .. 1, i = 1 .. nstock):
NLPSolve(Norm(R.W, 2, conjugate = false), {con1, con2, con3, con4});
NLPSolve(problem) ;


My question remains, though: how do I convert it to NLP[Matrix Form] using
Optimization:-Convert:-AlgebraicForm:-NLPToMatrix(problem)?

# later comment-2
or alternatively, how do you solve Min(Norm(R.W)) with LP?


restart:
randomize():
with(ListTools):
with(LinearAlgebra):
with(ArrayTools):
with(Statistics):
with(plots):
with(Optimization):

n := 40:
nstock := 10:
A := RandomMatrix(n, nstock, generator = -15 .. 15, outputoptions = [datatype = float[8]]):
Cov := CovarianceMatrix(A):
EV := Vector([seq(ExpectedValue(Column(A, i)), i = 1 .. nstock)]):
R := LUDecomposition(Cov, 'method' = 'Cholesky', output = 'U'):
W := Vector([seq(w[i], i = 1 .. nstock)]):

pr := .6*max([seq(ExpectedValue(Column(A, i)), i = 1 .. nstock)]):

con1 := add(W[i], i = 1 .. nstock) <= 1:
con2 := seq(W[i] <= 1, i = 1 .. nstock):
con3 := seq(W[i] >= 0, i = 1 .. nstock):
con4 := EV.W-pr >= 0:
con5 := Norm(R.W, 2, conjugate = false):

problem := r, {con1, con4, con5}, seq(w[i] = 0 .. 1, i = 1 .. nstock):
problem_matrix_form := Optimization:-Convert:-AlgebraicForm:-NLPToMatrix(problem):

QPSolve(Transpose(W).Cov.W, {con1, con2, con3, con4}); 
NLPSolve(problem);   
NLPSolve(problem_matrix_form[[1, 2, 8, 9]])

@acer ok, let's close this discussion.
I have managed to solve my problem with your help, which I am grateful for.

@acer Yes, I think I understand this; however, what I don't understand is
why I should have to waste weeks of time trying to convert a problem to matrix form
when there are SIMPLE and UNDOCUMENTED ways of doing so directly.
To be honest with you, the speed-up is not that great to begin with.

I understand that you think that the below code is the wrong way of doing
things (I am grateful that you pointed it out, and I can agree to some degree);
however, it is also tremendously helpful for me, because if I want to express the
problem in matrix form then I can simply run such code and then
manually convert it as you did in your initial example.

However, if you don't run the below code, how are you planning to make the
conversion? You might have thousands of constraints; no human being is able
to figure that out in their head. Even if you said that you did it in your head,
to be honest with you, I would not believe you!

problem := ur+dr, {con1, con4, con5}, seq(w[i] = 0 .. 1, i = 1 .. N):
problem_matrix_form := Optimization:-Convert:-AlgebraicForm:-LPToMatrix(problem):

Optimization:-LPSolve(problem, maximize = true);
Optimization:-LPSolve(problem_matrix_form[2 .. 4], maximize = true);

@acer I think it is an understatement to say that I have a love-hate relationship with Maple.
I admit I was starting to get overly self-assertive, because the above
problem_matrix_form[2 .. 4] conversion method worked for all
LP and QP problems I tried, so by the logic of induction it should also work on NLP problems.

However, that was not the case!! I have attached the worksheet:

Maple-S.mw

Error, (in Optimization:-NLPSolve) constraints must be specified as a set or list of procedures






ok, I see what you did.

This Optimization:-Convert:-AlgebraicForm:-LPToMatrix(problem)
should really be documented in the help pages.

Why would anyone in their right state of mind try to do it manually?
And guess what QPToMatrix(problem) does, he he.

So now it is trivial to convert any LP or QP expressed in algebraic form
to matrix form... as it should be!


restart:
randomize():
with(Optimization):
with(Statistics):
with(LinearAlgebra):

N := 20:
R := RandomMatrix(N, outputoptions = [datatype = float[8]]):
ER := Vector[column]([seq(ExpectedValue(Column(R, j)), j = 1 .. N)]):
W := Vector(N, symbol = w):
S := Vector(N, fill = 1, datatype = float[8]):
z := Multiply(R, Matrix(W)):

con1 := Transpose(W).S = 1:
con4 := seq(z[i][1]-dr >= 0, i = 1 .. N):
con5 := expand(Transpose(W).ER)-ur >= 0:

problem := ur+dr, {con1, con4, con5}, seq(w[i] = 0 .. 1, i = 1 .. N):
problem_matrix_form := Optimization:-Convert:-AlgebraicForm:-LPToMatrix(problem):

Optimization:-LPSolve(problem, maximize = true);
Optimization:-LPSolve(problem_matrix_form[2 .. 4], maximize = true);



 [9.69759373436402, [dr = -1.73365859794041200, ur = 11.4312523323044335,

   w[1] = 0.0385004672149683522, w[2] = 0.0279096045003266874, w[3] = 0.,

   w[4] = 0., w[5] = 0., w[6] = 0.00473442818034913379,

   w[7] = 0.0545558008360255282, w[8] = 0.131509202761866478, w[9] = 0.,

   w[10] = 0.180359459715276033, w[11] = 0., w[12] = 0., w[13] = 0.,

   w[14] = 0.143042755803694244, w[15] = 0., w[16] = 0.213292414638054567,

   w[17] = 0., w[18] = 0.158674622402634968, w[19] = 0.0474212439468062047,

   w[20] = 0.]]


[9.69759373436402, [1 .. 22 Vector[column], Data Type: float[8], Storage: rectangular, Order: Fortran_order]]


