MrMarc

3163 Reputation

18 Badges

17 years, 134 days

MaplePrimes Activity


These are replies submitted by MrMarc

I am glad to report that the issue has been fixed in Maple 16 :-)
It gives me a warm sensation in my body when order is restored to the universe!

I think Microsoft Access is a joke! I loaded quite a lot of data into it and it
just shuts down on me. It freezes up. Pivot tables are a nightmare with a capital N.

Back to the SQL Server. How much data have you loaded into Maple, and approximately how fast?
I am not going to go and buy a dedicated server if I don't know it is going to be sweet.


Well, I had that SQL Server crap running on my laptop. However, the result was not that great.
Now I am thinking I might have to buy my own server, install Apache HTTP Server,
and then try to store Yahoo Finance data on it so I can access it through Maple.

In your expertise, how much data can you load into Maple, and how fast?!

"You might think that everyone else should be able to figure out, without explanation beforehand, your intended meanings that df = num.rows - num.cols, or that "ey" means "Estimated Y" and is computed via A.x, and that data1 is the solution in the code while `x` is the solution elsewhere, and X is the Matrix in the code while A is the Matrix elsewhere, and that sometimes Y means lowercase y while X does not mean lowercase x. But it does make it harder to read. I think that it isn't as polite as can be, for your audience."


I agree with this. However, I am just learning this stuff as of now, which is why I am posting
on MaplePrimes, i.e. I have no clue what I am doing. I need to clarify a lot of things
before I can decide whether the notation is appropriate. Otherwise
I might change the notation, only to discover that I should have used a different notation,
and change it back, etc.

In the end, though, I am planning to submit a well laid out and well motivated case to the
Application Center. However, it is way too early for that now.

Now I want to focus on trying to find a more exact solution than SVD to a Row-Dominant Matrix.

Again, here I am using the term Row-Dominant Matrix instead of overdetermined matrix.
Two hours ago I thought "overdetermined matrix" was a much better term. However, I realized
that it is quite ambiguous; I find Row-Dominant Matrix more descriptive...


@pagan "Your (earlier, and these) plots of Y don't say anything about what the norm
of XY-rhs (or Ax-y) might be."

That is completely untrue!

In the plot you can see the original y = Vector(N, fill = 5) and you can see the estimated y, i.e. ey.
Now Norm(A.x - y) = sqrt( (y[1]-ey[1])^2 + (y[2]-ey[2])^2 + ... + (y[n]-ey[n])^2 ), which is what least squares minimizes.
Hence, it is not so hard to figure out what the norm might be by looking at the plots.
However, for the completion of the argument I will give these explicitly:

                  "Norm[A.x - y] LS" = 77.3349121979146475
             "Norm[A.x - y] QR-SVD" = 3.66635920566521172 10^-14  

                  "Norm[A.x - y] LS" = 62.7320909536222900
                "Norm[A.x - y] QR-SVD" = 62.7320909506343228
                
                 "Norm[A.x - y] LS" = 0.0101910722839492572            
             "Norm[A.x - y] QR-SVD" = 3.56737169093675050 10^-13  


Note 1: parameter values are defined as the estimates of x
Note 2: df is defined as the # of rows minus the # of columns in matrix A

I am a bit offended by your language and what you write, i.e. "Please stop abusing symbols and notation".
I will not post any more in this thread!

The last thing I will say is that I am concerned with the poor performance of
QR and SVD for an overdetermined (df > 0) system of equations, and I am
concerned about the poor performance of LS for both underdetermined (df < 0)
and overdetermined (df > 0) systems of equations.


"Do you understand how singular value decomposition is used in order to do
least squares solving of linear systems?
"

To be honest with you, no. However, I have realized that the QR and SV decompositions
are not what I am looking for. I was under the impression that they would completely
orthogonalize the matrix A, which (I speculate a bit) would make
norm(A.x - y) close to zero even for non-square matrices.

However, that did not happen! Below you can see the results for underdetermined (df < 0),
overdetermined (df > 0) and square (df = 0) systems of equations.

The only one that produced good results was the square system A[1..n, 1..n].

I will try to look into polynomial fitting to see if I can get some better results.
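Sketched outside Maple, in plain Python with a tiny made-up system, this is the behaviour I ran into: for an overdetermined system the least-squares solution (however it is computed, whether by normal equations, QR, or SVD) generally leaves a nonzero residual, simply because no exact solution exists.

```python
import math

# Overdetermined toy system: 3 equations, 1 unknown (A is 3x1), no exact solution.
A = [1.0, 1.0, 1.0]
y = [1.0, 2.0, 4.0]

# Least-squares solution via the normal equations: x = (A'A)^(-1) A'y.
x = sum(a * yi for a, yi in zip(A, y)) / sum(a * a for a in A)  # = 7/3

# The residual norm is clearly nonzero no matter how x was obtained.
residual = math.sqrt(sum((a * x - yi) ** 2 for a, yi in zip(A, y)))
print(x, residual)
```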


Ok, I think I figured out one of the questions.

Now the only one remaining is: how do I get access to the transformed X-matrix?
I want to plot the eigenvalues/eigenvectors etc. for the original vs the transformed X-matrix.


restart:
randomize():
with(LinearAlgebra):
with(Statistics):

N := 15:
X := RandomMatrix(10, N, outputoptions = [datatype = float]):
Y := Vector([seq(i, i = 1 .. 10)], datatype = float):

data1 := evalf(MatrixInverse(Transpose(X).X) . Transpose(X) . Y, 5):  # normal equations
data2 := LinearAlgebra[LeastSquares](X, Y):

LineChart([data1, data2], color = [red, blue], legend = ["LS", "QR and SVD"])

 

I don't understand this. If I run the code below then
I get the same parameter values... hmmm.

I thought the whole point of making the X-matrix orthogonal
was so we could get better estimates.

Also, how can I access the transformed X-matrix?

restart:
with(LinearAlgebra):

X := RandomMatrix(10, 5, outputoptions = [datatype = float]):
Y := Vector([seq(i, i = 1 .. 10)], datatype = float):

LeastSquares(X, Y);
LeastSquares(X, Y, method = 'SVD');

LeastSquares(X, Y):

                         [  -0.00505224347921694270]
                         [    0.0669360830835003668]
                         [    0.0389288386786362516]
                         [-0.0000351045813488561612]
                         [    0.0366669159507587926]

LeastSquares(X, Y, method = 'SVD'):

                         [  -0.00505224347921688372]
                         [    0.0669360830835002696]
                         [    0.0389288386786361754]
                         [-0.0000351045813488755684]
                         [    0.0366669159507586398]


I noticed that the LeastSquares procedure in Maple 15 has been
significantly updated compared to my Maple 13 version.

Now LeastSquares actually has an option method = 'SVD'.

That is cool...

 

I am looking for this as well...

@Robert Israel

i) Why do you use sqrt(p*(1-p)/n) and not Variance(Binomial(n, p)) = p*(1-p)*n?

ii) I don't understand this: "sqrt(p*(1-p)/n) <= p/100 or n >= 10000*p/(1-p) = 2707240000."
Could you explain more, please?
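My own attempt at untangling question i), sketched in plain Python with an arbitrary illustrative p (not the p from the thread): p*(1-p)*n is the variance of the raw count of successes, while the sample proportion count/n has variance p*(1-p)/n, and sqrt(p*(1-p)/n) is its standard error, which is what the inequality bounds.

```python
import math

p = 0.001                  # illustrative success probability, chosen arbitrarily
n = 10_000 * (1 - p) / p   # n at which sqrt(p*(1-p)/n) equals p/100 exactly

se_proportion = math.sqrt(p * (1 - p) / n)   # standard error of count/n
var_count = n * p * (1 - p)                  # variance of the raw count

# The two formulas are linked: se_proportion = sqrt(var_count) / n.
print(se_proportion, p / 100)
```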



In regards to your question:

"But then, you have to date left unanswered queries as to how large your typical problems will be."

For me as a financial economist (not specifically trained in mathematics or optimization), you
are basically asking the question "how rich do you want to be?".

There is no answer to that question.

If I have software that lets me optimize a portfolio over a global universe of, let's say, 10,000 securities,
that would be nice. However, if I could optimize the portfolio over a global universe of, let's
say, a million securities, that would be even better.

This efficient-portfolio concept is quite relative: there is no guarantee that the portfolio
you are holding is the most efficient one. There might be better securities to include in such a portfolio,
hence the sample size of the global universe is quite important.

Also, the optimization process in itself is no guarantee that the portfolio you have found
will actually outperform. Which makes my question "how rich do you want to be?" not completely accurate.


@acer You seem to misunderstand my intentions a bit. All I want is to be able to express all
my numerous different problems in all different kinds of forms, i.e. algebraic or matrix form.
Now the notation for matrix form is more complex, hence it is more difficult to set up such
problems, not to mention the notation for NLP [matrix form], which is even more complex.

That was the reason why I liked the Optimization:-Convert:-AlgebraicForm:-
LPToMatrix(problem) solution: it gave you something to hold on to.
Instead of just trying stuff randomly and getting error messages slapped in your face
when you are trying to convert your problem to matrix form, you could actually see what
the final result of your specific problem should look like in matrix form.
That made things a little bit easier.

However, I recently discovered that such a solution is not necessarily the "holy grail", because
even though it gives you the final results, it does not explain how it got there. For simple
problems that might be easy to figure out, but for complex problems that is a whole different
ball game. As you pointed out, you can't just do Optimization:-Convert:-AlgebraicForm:-
LPToMatrix(problem)
and hope for a speed-up; you need to rewrite the output as float
matrices, hence you need to understand how those matrices were generated in
order to be able to set up more general problems, e.g. if you change the number of data
columns (i.e. the number of stocks) or the number of observations, etc.

It's not that I don't believe your advice. I do believe that for this particular problem
QPSolve [matrix form] might be the fastest and most appropriate to use. However, that
does not mean that in the future I will not be faced with the task of converting another NLP to
matrix form. Hence, it would be nice at this stage to have such knowledge, so the conversion
can be as painless as possible. This specific problem represents just one of many
different problems I have on my computer.

Also, I am glad that you pointed out that Min(Norm(R.W)) is not linear, hence LP can't be used!
Then I don't have to bang my head against the wall trying to figure it out.
(I always have the
insecure feeling that LP is better than QP, since some people use LP to solve problems with tens
of thousands of variables and constraints (not in Maple, though), but this might be wrong. I also
have the insecure feeling that cone programming is the most efficient of them all, but this might
also be wrong.)

For example, if you look at the beginning of Chapter 10 in this reference (by Boyd et al., who
apparently are experts in optimization):

http://www.ee.ucla.edu/~vandenbe/103/reader.pdf

They state that the solution to the least-norm problem, i.e. minimize norm(x) subject to A.x = b, is unique and given by
x = A'.(A.A')^(-1).b, which means (I quote) "that we can solve the least-norm problem by solving
(AA').z = b (10.2) and then calculating x̂ = A'.z. The equations (10.2) are a set of m linear equations
in m variables, and are called the normal equations associated with the least-norm problem."

My specific problem is a little bit different (even though they are both min-norm problems), hence it
was difficult to determine whether my problem was linear or not.
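To convince myself that this formula does what it claims, here is a plain-Python sketch (not Maple) with a made-up 2x3 system: solve the normal equations (A.A').z = b, which is a 2x2 system here, set x = A'.z, then check that A.x = b holds and that this x has a smaller norm than another particular solution.

```python
import math

# Made-up underdetermined system: 2 equations, 3 unknowns.
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
b = [1.0, 2.0]

# G = A.A', the 2x2 Gram matrix of the rows of A.
G = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(2)] for i in range(2)]

# Solve the normal equations G.z = b with the 2x2 inverse formula.
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
z = [( G[1][1] * b[0] - G[0][1] * b[1]) / det,
     (-G[1][0] * b[0] + G[0][0] * b[1]) / det]

# Least-norm solution x = A'.z.
x = [sum(A[i][k] * z[i] for i in range(2)) for k in range(3)]

# x really does solve A.x = b ...
assert all(abs(sum(A[i][k] * x[k] for k in range(3)) - b[i]) < 1e-12 for i in range(2))

# ... and its norm beats another particular solution, e.g. (1, 2, 0).
other = [1.0, 2.0, 0.0]
print(math.sqrt(sum(v * v for v in x)), math.sqrt(sum(v * v for v in other)))
```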
