These are answers submitted by acer

> alpha:=0.618:

> sprintf( "The value of alpha is %a.", alpha );
                 "The value of alpha is .618."

acer

A tab control character can be printed by using a backslash escape sequence, \t. For example,

fprintf("c:/temp/text.txt","%a\t%a",A,B):

fclose("c:/temp/text.txt");

And now that file text.txt contains,

A	B

with a tab between them.

See ?backslash and ?printf

acer

h:=(1+i)^2+(1+i)^3*A+(1+i)^4*B;

                      2          3            4  
               (1 + i)  + (1 + i)  A + (1 + i)  B

algsubs(1+i=g,h);

                         2      3      4
                        g  + A g  + B g 

Of course, in this simple example subs will also work, but one first has to "solve" for `i` (easily done mentally, in this example),

subs(i=g-1,h);

                         2      3      4
                        g  + A g  + B g 

This kind of attempt with subs is also possible here (automating the "solving"),

subs(isolate(i+1=g,i),h);

                         2      3      4
                        g  + A g  + B g 

acer

> restart:

> h:=Statistics:-Distribution(PDF=(x->Dirac(x-b))):

> eval(h[':-PDF']);

                               x -> Dirac(x - b)

> map(FromInert,indets(remove(type,[op(ToInert(eval(h[':-PDF'])))],
>             specfunc(anything,{_Inert_OPTIONSEQ,_Inert_LOCALSEQ,
>                                _Inert_PARAMSEQ})),
>               specfunc(string,_Inert_NAME)));

                                  {Dirac, b}

You should be able to sieve that result, or the inert body, to remove names of applied function calls (like `Dirac`).
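
For example, here is one hedged way such a sieve might go (a sketch, assuming that applied calls show up in the inert form as _Inert_FUNCTION with the called name as its first operand):

inert := ToInert(eval(h[':-PDF'])):
applied := map(FromInert, map2(op, 1,
               indets(inert, specfunc(anything, _Inert_FUNCTION)))):
{Dirac, b} minus applied;  # expecting {b}, with applied name Dirac sieved out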

acer

It looks like Maple 9 is confused about what to do with the structure of the float object itself.

> kernelopts(version);

            Maple 9.03, IBM INTEL LINUX, Oct 1 2003 Build ID 141050

> dismantle([10000.]); # writedata works ok for this list         

LIST(2)
   EXPSEQ(2)
      FLOAT(3): 10000.
         INTPOS(2): 10000
         INTPOS(2): 0

> dismantle(map(x->1/x, [.0001])); # writedata fails for this list   

LIST(2)
   EXPSEQ(2)
      FLOAT(3): .1e5
         INTPOS(2): 1
         INTPOS(2): 4

These both seem to do "better", in 9.03,

> restart:

> B:=[ 3.5, 0.0001, 73.45]; A:=[map(x->1.0*1/x,B)];

                           B := [3.5, 0.0001, 73.45]

               A := [[0.2857142857, 10000.00000, 0.01361470388]]

>  writedata("a.txt",A); # ok      
                
>  writedata("a.txt",A,float); # ok

and produce,

% more a.txt
0.2857142857	10000	0.01361470388

It's interesting. The printf and fprintf commands also seem to malfunction here.

> B:=[ 3.5, 0.0001, 73.45]; A:=[[seq(1/op(j,B), j=1..3)]];

                           B := [3.5, 0.0001, 73.45]

                 A := [[0.2857142857, 10000., 0.01361470388]]

> printf("%{n}.11g\n", Matrix(A));       
                 
0.2857142857 1 0.01361470388

>  B:=[ 3.5, 0.0001, 73.45]; A:=[map(x->1.0*1/x,B)];    
  
                           B := [3.5, 0.0001, 73.45]

               A := [[0.2857142857, 10000.00000, 0.01361470388]]

> printf("%{n}.11g\n", Matrix(A));          
        
0.2857142857 10000 0.01361470388

It looks like this was fixed in the next major release, 9.5.

acer

There are several ways to determine whether two lines are perpendicular. Which is easiest to use can depend on the particular form in which the lines are given. See here, in the "Graph of functions" section, for two easy ways, both of which should require no deep thinking to do in Maple.
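
For instance (a minimal sketch, with made-up lines in slope-intercept form), the slope test is a one-liner in Maple:

L1 := y = 2*x + 1:
L2 := y = -1/2*x + 3:
m1, m2 := coeff(rhs(L1), x), coeff(rhs(L2), x):
evalb(m1*m2 = -1);  # nonvertical lines are perpendicular iff slopes have product -1

                                     true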

acer

Is this just a QP minimization problem in disguise, rather than an NLP minimization problem? By which I mean, a quadratic objective and linear constraints?

Sure, your con3 constraint is nonlinear, involving a 2-norm. But con3 is the only constraint in which the variable `r` appears, and your objective is simply `r`. So can't you just make the norm part of con3 the new objective, and then drop con3 as a constraint?

Note that the setup of this problem gets expensive quickly as `nstock` grows, and there is a lot of room for improvement in the first parts, which generate the randomized data. So, unless all that is just a stand-in for some other means of data acquisition, you can get a bit of speedup simply by refining that code, meaning everything up to the Cholesky decomposition of `Cov`. On the machine used for the timings below, at nstock=200 the construction of `Cov`, `R` and the constraints takes about 70 seconds. What comes before constructing `Cov` can likely be reduced to under 4-5 seconds.

I set nstock=200 and compared the two Optimization calls below. I'll leave out all the setup here, except for the constraint assignments,

con1 := add(W[i], i = 1 .. nstock) <= 1:
con2 := EV.W-pr >= 0:

NRW2 := Norm(R.W, 2, conjugate = false):
con3 := NRW2 - r <= 0:

# As you solved it with NLPSolve...

sol1 := CodeTools:-Usage(NLPSolve(r,{con1,con2,con3},seq(w[i]=0..1,i=1..nstock)));

memory used=1.74GiB, alloc change=55.49MiB, cpu time=60.68s, real time=60.70s

sort(select(t -> rhs(t)>0 and lhs(t)<>r, sol1[2]));

        [w[21] = 0.0270237593173089, w[25] = 0.0322056751304918, 

          w[35] = 0.000847852100225448, w[67] = 0.0492087037211113, 

          w[78] = 0.0438242963135371, w[86] = 0.0416518751919245, 

          w[88] = 0.0417036500369194, w[89] = 0.0717631323305688, 

          w[100] = 0.00837739752003451, w[107] = 0.137179654344253, 

          w[109] = 0.107669160523004, w[125] = 0.0289124908386128, 

          w[131] = 0.160793105594057, w[153] = 0.0236924836513717, 

          w[157] = 0.0341339386566807, w[180] = 0.0119978162730577, 

          w[185] = 0.0405857945037750, w[190] = 0.129027000241401, 

          w[192] = 0.00940221371168578]

r = eval(r, sol1[2]);

                            r = 2.03629907765324

# And now...

sol2 := CodeTools:-Usage(QPSolve(NRW2^2,{con1,con2},seq(w[i]=0..1,i=1..nstock)));

memory used=448.67MiB, alloc change=0 bytes, cpu time=7.71s, real time=7.72s

sort(select(t -> rhs(t)>0 and lhs(t) <> r, sol2[2]));

        [w[21] = 0.0270237589337289, w[25] = 0.0322056750319796, 

          w[35] = 0.000847852090717340, w[67] = 0.0492087042316293, 

          w[78] = 0.0438242958978819, w[86] = 0.0416518740369907, 

          w[88] = 0.0417036488838085, w[89] = 0.0717631314373649, 

          w[100] = 0.00837739703763107, w[107] = 0.137179652525067, 

          w[109] = 0.107669163079282, w[125] = 0.0289124902420816, 

          w[131] = 0.160793108650386, w[153] = 0.0236924831211481, 

          w[157] = 0.0341339380256316, w[180] = 0.0119978167550648, 

          w[185] = 0.0405857946727649, w[190] = 0.129027001732921, 

          w[192] = 0.00940221361388309]

r = sqrt(eval(NRW2^2, sol2[2])); # `r` in the original sense of `con3`

                            r = 2.03629907765382

Unless I did something quite wrong, those two attempts produce essentially the same solution, but the QPSolve call is nearly 8 times faster.

And the QPSolve use can likely also be improved using Matrix form for its input.
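
For what it's worth, here is a hedged sketch (untested, and assuming that EV is the 1 x nstock row of expected values, that pr is the scalar from the original setup, and with names like sol2m, H, c below being my own) of what that Matrix form might look like. QPSolve's Matrix form minimizes (1/2)*x^T.H.x + c^T.x subject to A.x <= b and bl <= x <= bu.

H := 2*LinearAlgebra:-Transpose(R).R:       # since NRW2^2 = W^T.(R^T.R).W
c := Vector(nstock, datatype = float[8]):   # no linear term in the objective
A := Matrix(2, nstock, (i, j) -> `if`(i = 1, 1.0, -EV[j]),
            datatype = float[8]):           # row 1: con1, row 2: con2 negated
b := Vector([1, -pr], datatype = float[8]):
bl := Vector(nstock, datatype = float[8]):             # lower bounds, all 0
bu := Vector(nstock, fill = 1, datatype = float[8]):   # upper bounds, all 1
sol2m := Optimization:-QPSolve([c, H], [A, b], [bl, bu]):

The `r` of the original con3 would then be sqrt(sol2m[1]), since the objective value returned is NRW2^2.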

acer

> evalc(ln(x)) assuming x<0;

                         ln(-x) + I Pi

acer

Try this code by Joe Riel.

acer

Maple's Gaussian elimination (GE), and Gauss-Jordan elimination (GJE), sometimes switch around the rows, but not the columns, so it's unclear what you mean by "...the columns and see where they are after..."

The row switches can be seen in the pivot (permutation) Vector/Matrix, which is one of the outputs of LUDecomposition (LU being GE in mild disguise). It can be "applied" after the fact to a Vector of names.
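
For example (a small sketch, using a made-up Matrix):

with(LinearAlgebra):
M := Matrix([[0, 2, 1], [1, 0, 3], [4, 1, 0]]):
P, L, U := LUDecomposition(M, output = ['P', 'L', 'U']):
# P is the permutation Matrix, with M = P . L . U, so applying
# Transpose(P) to a Vector of row labels shows the pivoted row order.
Transpose(P) . Vector([r1, r2, r3]);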

acer

Here is a short example which might help you interpret and see the relationship between the algebraic and Matrix forms accepted by LPSolve.

There is no exposed conversion utility (likely because one of the central purposes of the Matrix form is to never form the large expressions in the algebraic form at all). But you might experiment with the internal routine which I use below, in order to see what happens when there are no strict equality constraints, or no inequality constraints, or no simple variable bounds, and so on.

> restart:

> problem := -x-2*y, {7 >= 3*x+5*y, 113 >= -11*x-13*y, 17*x+19*y=23}, x=-29..31, y=-37..41:

> Optimization:-LPSolve(problem);

     [-2.92857142857143, [x = -0.642857142857142, y = 1.78571428571429]]

> kernelopts(opaquemodules=false):

> problem_matrix_form := Optimization:-Convert:-AlgebraicForm:-LPToMatrix(problem)[2..4];

        problem_matrix_form := [-1., -2.],

          [Matrix([[-11., -13.],
                   [  3.,   5.]]), [113., 7.], Matrix([[17., 19.]]), [23.]],

          [[-29., -37.], [31., 41.]]

> Optimization:-LPSolve(problem_matrix_form);

      [-2.92857142857143, Vector([-0.642857142857142, 1.78571428571429])]

[edited] I might as well just do it. The first two places represent variables ur and dr, and the remaining N places represent w[i], i=1..N.

restart:
randomize():

with(Optimization):
with(Statistics):
with(LinearAlgebra):

N := 500:
R := RandomMatrix(N, outputoptions = [datatype = float[8]]):
ER := Vector[column]([seq(ExpectedValue(Column(R, j)), j = 1 .. N)]):

# algebraic form
st:=time():
W := Vector(N, symbol = w):
S := Vector(N, fill = 1, datatype = float[8]):
z := Multiply(R, Matrix(W)):
con1 := Transpose(W).S = 1:
con4 := seq(z[i][1] - dr >= 0, i = 1 .. N):
con5 := expand(Transpose(W).ER) - ur >= 0:
sol1:=LPSolve(ur+dr, {con1, con4, con5}, seq(w[i]=0..1,i=1..N), maximize = true):
sol1[1];
                        2.68825855741790
time()-st;
                             6.296

# Matrix form
st:=time():
A:=Matrix(N+1,N+2,datatype=float[8]):
A[2..-1,3..-1]:=R:
A[2..-1,1]:=Vector(N,fill=-1,datatype=float[8]):
A[1,3..-1]:=ER:
A[1,2]:=-1:
MatrixScalarMultiply(A,-1,inplace):
b:=Vector(N+1,datatype = float[8]):
#A,b;
Aeq:=Matrix(1,N+2,fill=1,datatype=float[8]):
Aeq[1,1..2]:=Vector(2,datatype=float[8]):
beq:=Vector(1,[1],datatype=float[8]):
#Aeq,beq;
c:=Vector(N+2,[1,1],datatype=float[8]):
#c;
bl:=Vector(N+2,datatype=float[8]):
bl[1..2]:=Vector(2,fill=-infinity):
bu:=Vector(N+2,fill=1,datatype=float[8]):
bu[1..2]:=Vector(2,fill=infinity,datatype=float[8]):
#bl,bu;
sol2:=Optimization:-LPSolve(c,[A,b,Aeq,beq],[bl,bu],maximize = true):
sol2[1];
                        2.68825855741790
time()-st;
                             3.735

So, half the time, for N=500. There are also memory savings, on the order of the size of all the constraints as symbolic expressions, which I didn't bother to measure. A little more time might also be saved by using ArrayTools routines like BlockCopy and Fill, as sketched below.
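
For instance (a hedged sketch of just one such tweak, not benchmarked), the Aeq construction above might become:

Aeq := Matrix(1, N+2, datatype = float[8]):
ArrayTools:-Fill(1.0, Aeq):               # fill the whole Matrix with 1.0
Aeq[1, 1], Aeq[1, 2] := 0.0, 0.0:         # zero the ur and dr positions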

acer

Obviously we cannot be completely sure about the cause of the problem, if we cannot see the full worksheet that exhibits the problem.

But I suspect that Georgios K. is on the right tack, when looking at the incomplete worksheet containing just the final formulae.

I suspect roundoff as the underlying cause because, as Georgios mentions, the plots stabilize when computed at a higher working precision. I can actually get `function1` to plot in what seems to be a reliable and accurate manner with Digits set only to 33, provided that I first manipulate `function1` with the `evalc` command. That command will treat `theta` as purely real, and under that assumption will simplify the formula enough to remove all merely ostensible occurrences of the imaginary unit `I`. Not only does evalc(function1) appear to plot accurately without as high a Digits setting as `function1` requires, but it also does so measurably faster.

The other reason I suspect roundoff is that the order of terms in PROD and SUM dags still depends on memory address for its ordering. As I tried to illustrate in this old blog post, the order in which multiplicands are stored internally in a product data structure can greatly affect floating-point evaluation. The same is true of sums (which I hope to blog about soon, though I must apologize, for I have written that before...). The upshot is that session-dependent ordering of arithmetic terms can magnify numerical difficulties at a given working precision. How high Digits, the working precision, must be raised in order to overcome this is yet another tough topic.
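
A tiny illustration of that general phenomenon (made up, not the OP's formula): at Digits=10 the same three float terms can produce different results depending only on the grouping and order in which they are added,

Digits := 10:
(1.0e-3 + 1.0e8) - 1.0e8;  # the small term is absorbed, then cancelled

                                      0.

1.0e-3 + (1.0e8 - 1.0e8);  # the same terms, grouped differently

                                    0.001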

Again, without the Original Poster's full worksheets, we cannot investigate and discover the underlying cause in a definitive way. But some of the familiar bells that are ringing seem to include roundoff and memory-based ordering of terms.

To trlewis, please feel free to contact me if you would consider my investigating your worksheets in a confidential way.

Since simplification (including with evalc, say) does not necessarily improve the numeric robustness of a formula to be evaluated in floating-point, for now I'd simply concur with Georgios: increase Digits.

Example_alt01.mw Roundoff may be more severe in your original full worksheets than in this modified sheet, depending on term ordering issues. It seems like a possibility.

I do not know what, if anything, MuPAD might try to do automatically to accommodate numerical issues for such examples.

acer

The seven curves are functions of x alone, each for some fixed value of A. It's natural to interpolate, to obtain values for A between the seven curves.

But you might find that linear interpolation gives too sharp a change, for some `x`, as `A` varies across some of the seven given curves' values. A higher order interpolation scheme may smooth over those changes while still allowing the interpolated surface to match, and be coincident with, all seven curves.

Below, I try to show how you can produce surfaces using both lower and higher order interpolation, with a visual difference (discernible upon closer inspection) in the smoothness in the A-direction. Hopefully my indexing and bookkeeping are correct, but that can be checked. Also, if you feel lucky, you can extrapolate down from A=5 to A=1.

I uploaded only small replicas of surfaces plotted with glossiness and light models. This may cause the inlined images to appear a little splotchy, viewed from Mapleprimes. Here's the full sheet: surf_eq_seven_curve.mw

restart;
# A=5
R5 := (38.269-14.640*x+5.1792*x^2+0.74587e-2*x^3)/(1.-.42387*x+.86530*x^2):
# A=10
R10 := (37.653+47.812*x+4.3844*x^2+.48174*x^3)/(1.+.67811*x+2.9990*x^2):
# A=15
R15 := (36.927+70.867*x+16.401*x^2+.31751*x^3)/(1.+.93439*x+4.4198*x^2):
# A=20
R20 := (37.373+22.213*x+28.413*x^2-.25340*x^3)/(1.-0.54937e-1*x+3.5737*x^2):
# A=25
R25 := (38.018-9.2287*x+28.549*x^2-.28952*x^3)/(1.-.56866*x+2.5447*x^2):
# A=30
R30 := (38.575-22.618*x+25.073*x^2-.22292*x^3)/(1.-.70178*x+1.8478*x^2):
# A=35
R35 := (39.005-27.849*x+21.729*x^2-.16576*x^3)/(1.-.69438*x+1.4182*x^2):

# Form an Array of sample points, along the 7 regularly spaced (in A) curves.
# The more points used, the smoother the surface will be in the x-direction.
# (Further below, we see how we may smooth the surface in the A-direction.)

N := floor((27.0-0.1)/(0.1)):

Q := Array(1 .. 7, 1 .. N,
          [seq([seq(eval(r, x = 0.1+j*0.1), j=0..N-1)],
               r = [seq(cat(R, 5*i), i = 1 .. 7)])], datatype = float[8]):

# Vectors of the sample points (each in 1 dimension only).

P := Vector(7,i->5*i):

T := Vector(N,i->(i)*0.1):

# Form a point plot of the sample points, only for illustration purposes.

plot_pts := plots:-pointplot3d([seq(seq([P[i],T[j],Q[i,j]],
                                       j=1..N),i=1..7)],
                              'symbolsize'=10,'color'='red'):

# First, a surface plot with low order interpolation between the curves.
# Notice that it matches the sample points exactly.
# Notice some sharp changes where A crosses the seven curves, due to the
# low order interpolation, e.g. at A=20, x=27.0.

P1:=plots:-surfdata(Q, 5..35, 0.1..27.0, orientation=[74,77,-11], labels=['A','x',``],
                axes=box,color=gold,glossiness=0.9,lightmodel=light4):

plots:-display(P1, plot_pts, orientation=[74,77,-11]);

# It would be nice to have a smoother surface, which still matches the sample
# points but without the sharp changes (as A crosses any of the seven curves).
# So create a plot'able procedure, with higher order spline interpolation.
# See ?CurveFitting:-ArrayInterpolation for details on the available schemes.

B := (a,b) -> CurveFitting:-ArrayInterpolation(
                               [P,T], Q,
                               Array(1..1, 1..1, 1..2, [[[a,b]]]),
                               'method' = 'spline')[1,1]:

# Plot procedure B. Notice that it too matches the sample points exactly.
# Notice the surface is smoother for fixed x (eg. along x=27.0, esp. at A=20)

P2 := plot3d(B, 5..35, 0.1..27.0, orientation=[74,77,-11], labels=['A','x',``],
             axes=box,color=RGB(0.0,0.3,0.3),glossiness=0.9,lightmodel=light4):

plots:-display(P2, plot_pts, orientation=[74,77,-11]);

# Show the surfaces without gridlines or the sampled data points.

plots:-display(Array(1..2,[P1,P2]),style=patchnogrid,orientation=[88,70,1]);

# It is also easy to extrapolate the plot of `B`, from A=5 down to A=1.

plot3d(B, 1..35, 0.1..27.0, orientation=[74,77,-11], labels=['A','x',``],
             axes=box,color=RGB(0.0,0.3,0.3),glossiness=0.9,lightmodel=light4);

 

acer

You'd be better off using Quantile's facility for purely numeric computation. Also, you have it prematurely evaluating procedure `t` at symbolic (nonnumeric) `n`.

with(Statistics):

t:=(p,df)->Quantile(RandomVariable(StudentT(df)),p,numeric):

s:=0.5: beta:=0.95: delta:=0.14:

st:=time():

fsolve(n->t((beta+1)/2,n-1)*s/sqrt(n)-delta,2..infinity);


                          51.43599587
time()-st;

                             1.061


delta:=0.11:

st:=time():

fsolve(n->t((beta+1)/2,n-1)*s/sqrt(n)-delta,2..infinity);

                          81.80057685

time()-st;

                             1.498

In your version here's what you were actually passing to fsolve.

restart:
with(Statistics):
t:=(p,df)->Quantile(RandomVariable(StudentT(df)),p):
s:=0.5: beta:=0.95: delta:=0.11:
t((beta+1)/2,n-1)*s/sqrt(n)=delta; # this is what you were passing to fsolve

   0.5*RootOf(40*hypergeom([1/2, (1/2)*n], [3/2], -_Z^2/(n-1))*_Z*GAMMA((1/2)*n)
              - 19*(Pi*(n-1))^(1/2)*GAMMA((1/2)*n - 1/2))/n^(1/2) = 0.11

In contrast, Statistics:-Quantile knows how to compute a purely numeric result much faster than that symbolic mess can be evaluated numerically for given values of n.

acer

For one thing, you should be using round brackets like () instead of square brackets [] in the definition of w(r).

Square brackets are the list constructor. But you want round brackets as expression delimiters there.
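
For example (a small illustration, not your actual w), compare:

expr1 := [x + 1];   # square brackets construct a list
expr2 := (x + 1);   # round brackets merely delimit the expression
whattype(expr1), whattype(expr2);

                                   list, +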

acer
