acer

32495 Reputation

29 Badges

20 years, 10 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@alex_01 You've forgotten to adjust the objective by that factor of 2.

@alex_01 You're most welcome.

If I may belabour the point: when LP or QP problems are passed to Optimization, internal routines of that package analyse the input, dice it and slice it, and produce something like that very same "Matrix form" before passing it out to the compiled external solvers.

So supplying the problem in Matrix form has a first benefit that it allows you to formulate a problem without explicitly generating the symbolic equations and formulas for the constraints, bounds, and objective. A float[8] Matrix H takes up much less space than does W^%T.H.W where W is a Vector of named elements (variables).

A second benefit is that, by supplying the data in Matrix form, the package does not have to analyse the symbolic expressions and convert them to Matrix form before passing them along. That overhead is removed.

And the reason that such forms are used is that the compiled (C language) external solvers can evaluate the objective or constraints at a numeric (multidimensional) point very quickly with only the pure numeric data at hand. No callback into Maple to evaluate any symbolic formula is needed, or done, for the LP and QP classes of problem. The external solvers already have compiled functions to evaluate the objective or constraints quickly, because their form is known and hard-coded for these classes of problem.

The remaining end-user concern is learning how to avoid forming the symbolic expressions, both before and after calling QPSolve, whenever that is avoidable. Even the Vector W of symbolic variable names may not be needed.
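As a rough sketch of the shape of the Matrix-form data (the dimension and names here are illustrative, and the calling sequence is from memory, so check the QPSolve help page):

n := 600:
H := Matrix(n, n, datatype=float[8], shape=symmetric):  # quadratic coefficients
c := Vector(n, datatype=float[8]):                      # linear coefficients
# ... fill H and c with the problem's numeric data, then call, e.g.,
#   sol := Optimization:-QPSolve([c, H]);
# No symbolic Vector W, and no expanded W^%T.H.W expression, ever gets built.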

We've seen that a 600 variable QP problem can be solved pretty quickly, and fits reasonably nicely in memory. How large are your largest QP problems?

One problematic scenario for current Optimization is a very large, sparse QP problem where in principle the nonzero data could fit in memory, but not if it has to be cast as a full `rectangular` storage set of Matrices and Vectors.

I've branched off comments that contained (basic, new, or functionality) questions about how to use Maple commands.

Links to branched items are above (just below the end of the original parent Post).

acer

Tell us which part of this, if any, you disagree with or dispute.

You have a quadratic programming (QP) problem with linear constraints.

Your Matrix Cov is symmetric positive definite and so your objective is convex. Your linear constraints bound your objective from below. So you don't need to resort to an expensive global solver. Optimization:-QPSolve should do fine.

QPSolve finds the answer for your example in less than 1/10th sec on a fast machine, about 10 times faster than NLPSolve (which will use evalhf for this example to evaluate objective and constraints, and compiled C for the external solver).

The most efficient way to solve this in current Maple is to use the Matrix-form of input to QPSolve. This means that all data and coefficients get passed as data in float[8] rtables. It means you just supply Cov, and not W^%T.Cov.W as the objective. In your MCone.mw attachment you did not use this more efficient form; instead you used the more commonplace and less efficient algebraic-form.

When using that Matrix-form for the QP solver there is no need to form the objective or the constraints as explicit symbolic formula or (in)equalities. This lets you encapsulate much larger problems, using less memory.
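For instance, a Matrix-form call for a small portfolio-style problem might look like the following. This is a sketch from memory: the Cov entries and the sum-to-one constraint are made up for illustration, and the exact calling sequence (including the use of NoUserValue for omitted inequality-constraint entries) should be verified against the QPSolve help page. Note that Matrix form takes the objective as (1/2)*x^%T.H.x + c^%T.x, which is where the factor-of-2 adjustment comes in.

with(Optimization):
n := 3:
Cov := Matrix([[2.0,0.5,0.3],[0.5,1.0,0.2],[0.3,0.2,1.5]], datatype=float[8]):
c := Vector(n, 0.0, datatype=float[8]):
Aeq := Matrix(1, n, fill=1.0, datatype=float[8]):  # weights sum to one
beq := Vector([1.0], datatype=float[8]):
QPSolve([c, Cov], [NoUserValue, NoUserValue, Aeq, beq], assume=nonnegative);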

For this kind of problem, DirectSearch's global solver will likely be less efficient than GlobalOptimization (which would use evalhf). And GlobalOptimization in turn would be far less efficient than a compiled local solver like NLPSolve (which would be OK to use here, due to convexity, etc). And NLPSolve in turn would be measurably less efficient than a specialized compiled QP solver (even one using an active-set based algorithm) which would evaluate objective and constraints in compiled C code (possible, because coefficients in hardware rtables are all that's needed!). And for very large problems, or when absolute speed is critical, the Matrix-form of QPSolve is much more efficient than the algebraic-form which uses the explicit symbolic formulae.

An interior-point method for LP problems is not going to give you an interior-point method for QP problems. So an implementation of the Affine Scaling algorithm is not going to help with your QP problem (even when cast as a Second-Order Cone Problem, or SOCP).

But why care so much about SOCP anyway? Is your problem too large for Matrix-form QPSolve, or one that QPSolve cannot solve fast enough? Do you have a concrete portfolio optimization problem for which you can demonstrate that (the most efficient currently possible) Matrix-form QPSolve does not perform well enough? Can you upload it?

Suppose that you demonstrate an SOC Problem for which Matrix-form QPSolve is too slow. You can see from the Wikipedia page for SOCP that one of the existing semidefinite solvers (CSDP) originated in a project involving H. Wolkowitz, a math professor at U Waterloo who is also listed in the reference paper for the newer LP interior-point method of LPSolve. You could submit an SCR requesting CSDP in a future Maple.

Or perhaps you could compile a shared library for CSDP now, and connect to it using wrapperless external calling. (I have not tried this.)

acer

Z:=evalhf(exp(354+I));

     2.97086840260185732*10^153 + 4.62685339914530199*10^153*I

evalhf(1/Z);

     9.82630470844021329*10^(-155) - 1.53035628577376246*10^(-154)*I

Z:=evalhf(exp(355+I));

     8.07565759353578019*10^153 + 1.25770915178406420*10^154*I

evalhf(1/Z);

     0.

I think that some hardware float libraries (eg. cephes? I forget offhand. Axel probably knows) might be able to do boundary case reciprocals like this with less problem (ie. without dividing by abs(Z) say, or using more special care).

evalhf(abs(Z));

     Float(infinity)
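For comparison, the usual scaled ("Smith style") complex reciprocal avoids forming abs(Z) or abs(Z)^2 as an intermediate, and so survives boundary cases like this one. Here is an untested sketch of that idea as an evalhf-able procedure (the name recipz is mine):

recipz := proc(zr, zi)
  local t, d;
  # 1/(zr+zi*I) = (zr-zi*I)/(zr^2+zi^2), but computed with scaling
  # so that no intermediate overflows when zr^2+zi^2 would.
  if abs(zr) >= abs(zi) then
    t := zi/zr; d := zr + zi*t;
    1/d - (t/d)*I;
  else
    t := zr/zi; d := zr*t + zi;
    t/d - (1/d)*I;
  end if;
end proc:

evalhf(recipz(8.07565759353578019e153, 1.25770915178406420e154));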

acer


@okoolo You can see a list of the possible outputs from Statistics:-Fit in the Help system, here and here.
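For example, with a made-up linear model (the output option names here are as I recall them, so check them against the Fit help page):

X := Vector([1., 2., 3., 4., 5.], datatype=float[8]):
Y := Vector([2.1, 3.9, 6.2, 7.8, 10.1], datatype=float[8]):
Statistics:-Fit(a*x + b, X, Y, x);                          # default: the fitted formula
Statistics:-Fit(a*x + b, X, Y, x, output=parametervalues);  # just the values of a and b
Statistics:-Fit(a*x + b, X, Y, x, output=residuals);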

 


One of the key aspects that makes the Affine Scaling interior point method important is that it can be implemented efficiently.

The above implementation is, unfortunately, quite inefficient. It produces a lot of temporary objects which must be memory-managed. The collectible garbage consists of many unnecessary temporary non-scalar objects (eg. Matrices, Vectors, and lists). Every line but one in the do-loop involves avoidable creation of one or more Matrix/Vector/list objects, and managing that memory is costly. An efficient implementation at the Maple "Library" level would involve operating inplace on float[8] datatype Matrices and Vectors (or Arrays).
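For example, an iterate update of the form x := x + alpha*d inside such a loop can reuse preallocated float[8] storage instead of building a fresh Vector on every pass (a sketch; I believe VectorAdd accepts an inplace option, but verify against its help page):

n := 1000:
x := Vector(n, datatype=float[8]):
d := Vector(n, datatype=float[8]):
alpha := 0.25:
# overwrites x with 1*x + alpha*d; no new Vector is allocated
LinearAlgebra:-VectorAdd(x, d, 1, alpha, inplace=true):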

Also, what is posted is not the Affine Scaling algorithm, which handles an arbitrary LPP. It's a hard-coded example for just a single given LPP.

Readers might be interested to know that Maple 15 has an interior point method implementation suitable for larger, sparse LPPs.
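In Matrix form that would be along these lines (a sketch with made-up data; I recall the option being method=interiorpoint in Maple 15, but verify against the LPSolve help page):

c := Vector([1., 1., 2.], datatype=float[8]):
A := Matrix([[-1., -2., -1.]], datatype=float[8]):  # encodes x1+2*x2+x3 >= 1 as A.x <= b
b := Vector([-1.0], datatype=float[8]):
Optimization:-LPSolve(c, [A, b], assume=nonnegative, method=interiorpoint);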

acer

When I qualified my suggestion with "if efficiency matters" and "or critical code" I had in mind other scenarios like pure computation, as opposed to this task of printing and viewing. Sorry if I wasn't clear enough about that.

acer


@Robert Israel That's like what you answered in another Question posted by the same member.
