Robert Israel

University of British Columbia
Associate Professor Emeritus
North York, Ontario, Canada

MaplePrimes Activity


These are answers submitted by Robert Israel

Use print or printf.  For example (before returning the result):

> printf("Standard Deviation of R1\n");

 

The bigomega function requires its argument to be an integer (literally an integer, not just a variable assumed to be an integer).  You gave it the symbolic argument j instead.

If n has an integer value, you could use add instead of sum.  Thus:

> add(bigomega(j), j = 1 .. 10);

                    15

Alternatively, you could define a new procedure Bigomega that calls bigomega if its argument is an integer, and otherwise returns unevaluated.  Thus:

> Bigomega:= proc(n) 
    if type(n,integer) then numtheory:-bigomega(n) else 'procname'(n) end if 
  end proc;

Then, for example,

> sum(Bigomega(j), j = 1 .. n);

                sum(Bigomega(j),j = 1 .. n)

> eval(%, n = 10);

                  15

For the intersection of vector spaces, use IntersectionBasis in the LinearAlgebra package.
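For example, here is a minimal sketch (the two spanning sets are made up): the xy-plane and the yz-plane in R^3 intersect in the y-axis, and IntersectionBasis returns a basis of that intersection.

> with(LinearAlgebra):
  U := [<1, 0, 0>, <0, 1, 0>]:   # spans the xy-plane
  W := [<0, 1, 0>, <0, 0, 1>]:   # spans the yz-plane
  IntersectionBasis([U, W]);     # one basis Vector, a multiple of <0, 1, 0>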

Finding a zero-one vector in a vector space could be formulated as an integer linear programming problem, and solved using LPSolve in the Optimization package.  Thus suppose B is a Matrix whose rows span your vector space.  Then NullSpace(B) is a set of column Vectors corresponding to constraints defining that space: a vector v is in the row space of B if and only if v . x = 0 for every x in NullSpace(B).  For example:

> with(LinearAlgebra): with(Optimization):
   B:= Matrix([[ 5, -2,  0, -1,   5, -5,  2, -1], 
               [ 4,  5, -4, -4,  -2,  2,  5, -3], 
               [ 3,  4, -4,  5,   2,  4,  3,  5],
               [-5,  0,  1,  1,  -3, -3, -1, -3], 
               [-5,  8, -3,-10, -14,  3,  5, -12]]);
   n:= ColumnDimension(B):
   c:= Vector(n, 1):                  # objective: maximize the number of 1 entries
   N:= NullSpace(B): m:= nops(N):
   M:= <op(map(Transpose,N))>;        # rows of M are the nullspace vectors, so
   b:= Vector(m):                     # M . x = b (= 0) says x is in the row space of B
   LPSolve(c, [NoUserValue, NoUserValue, M, b], assume=binary, maximize=true);
   map(round, Transpose(%[2]));       # second entry of the LPSolve output is the point found

[0, 0, 0, 1, 0, 1, 1, 1]

Another possibility that might be worth trying is LLL in the IntegerRelations package.  This finds a "reduced" basis of the lattice generated by a list of Vectors (i.e. all their integer-coefficient linear combinations).  If there is such a vector with zero-one entries, it will probably occur at or near the beginning of the reduced basis.  For this example:

 > IntegerRelations[LLL]([Row](B,1..-1));

[[0, 0, 0, 1, 0, 1, 1, 1], [-5, 0, 1, 3, -3, -1, 1, -1], [-2, 4, -3, 3, -1, -2, -1, -1], [0, -2, 1, 3, 2, -5, 4, -1], [1, 1, 0, -5, -4, 2, 6, -4]]

 However, this only works if the zero-one vector is a linear combination of the rows of B with integer coefficients. 

 

Note that for a*l^2+b > 0,
1/(exp(a*l^2+b)+1) = Sum((-1)^(n+1)/exp(n*a*l^2+n*b),n = 1 .. infinity)

Multiply by l^2 and integrate term-by-term: if a > 0 and b > 0, I get

int(l^2/(exp(a*l^2+b)+1),l = 0 .. infinity) = -1/4*Pi^(1/2)/a^(3/2)*polylog(3/2,-exp(-b))
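(As a quick numerical sanity check, with arbitrarily chosen values a = 2 and b = 3, the following two results should agree.)

> a:= 2:  b:= 3:
  evalf(Int(l^2/(exp(a*l^2+b)+1), l = 0 .. infinity));
  evalf(-1/4*Pi^(1/2)/a^(3/2)*polylog(3/2, -exp(-b)));
  a:= 'a':  b:= 'b':   # unassign a and b again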

Hmm, looks familiar.  Do you and manjees know each other?

 

BA as a two-digit number is 10*B+A, if A and B are integers 0 to 9.  But the simplest way to "solve" this is by a brute-force search through all possibilities.  With only a few variables, there are not many possibilities to look at, so it will be very fast.

> for A from 0 to 9 do
      for B from 1 to 9 do
         if 10*B+A = A^B then
           print(10*B+A = A^convert(B,name))
         end if
    end do end do:

25 = 5^`2`

> for A from 0 to 9 do
       for B from 0 to 9 do
        for C from 1 to 9 do
          if 100*C+10*B+A = A^(B+C) then
            print(100*C+10*B+A = A^(convert(B,name)+convert(C,name)))
          end if
     end do end do end do:

125 = 5^(`2`+`1`)

216 = 6^(`2`+`1`)

This works for me: highlight the output (Maple 13 Standard or Classic) by double-clicking on the initial [, copy with Ctrl-C, click in a new Notepad (Vista Home Premium), paste with Ctrl-V. 

NonlinearFit in the Statistics package seems to have trouble with this one.   I needed to use the parameterranges option, otherwise it took forever.

> with(Statistics):
  f:=NonlinearFit(a*(1-exp(b*x)), xx, yy, x,
    parameterranges=[a=10..100000, b=-0.1 .. 0]);

 

f := 24885.6408528876674-24885.6408528876674*exp(-0.427257479738294214e-2*x)

 

> plots[display](ScatterPlot(xx,yy,colour=black),
    plot(f, x = min(xx) .. max(xx)));

 

 

As you see, the fit is not very good.  That's hardly surprising, since the data points are very widely scattered.

Normally you'd have to reduce this to equations in the entries of the matrix.  However, in this particular case, without any calculation I can tell you there are no solutions.  The equation A^2 = A implies the eigenvalues of A are all solutions of x^2 = x, thus the only possible eigenvalues are 0 and 1.  But det(A) is the product of the eigenvalues (counted by algebraic multiplicity), and there's no way to get 3 by multiplying 0's and 1's.
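If you'd like to see Maple confirm this, here is a quick check for the 2 x 2 case, reducing A^2 = A and det(A) = 3 to equations in the entries as described above; solve returns nothing, i.e. there are no solutions.

> with(LinearAlgebra):
  A := Matrix(2, 2, symbol = a):
  eqs := {seq(seq((A . A - A)[i, j] = 0, j = 1 .. 2), i = 1 .. 2), Determinant(A) = 3}:
  solve(eqs, {a[1,1], a[1,2], a[2,1], a[2,2]});   # no output: the system has no solutions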

 

Something like this, perhaps.

>  infile:= "d:/myfolder/myfile.txt":
   # wait for the first indented line
   do
      S:= readline(infile);
      if S[1] = " " then break end if
   end do:
   # then wait for the first un-indented line
   do
      S:= readline(infile);
      if S[1] <> " " then break end if
   end do:
   # now process each line until end of file (readline returns 0) or a blank line
   for count while S <> 0 and S <> "" do
      nelts:= op(sscanf(S, "%d"));                          # first integer = number of items that follow
      L[count]:= sscanf(S, cat("%d ", "%a "$nelts))[2..-1]; # drop that count, keep the items
      S:= readline(infile);
   end do;
   # return the results as a list
   [seq](L[i], i = 1..count-1);


 

Do you want the function to fit these points exactly?  Then you might try PolynomialInterpolation or RationalInterpolation from the CurveFitting package.  But these may give poor results if your data points have some error, as interpolation is numerically unstable.  You might get better results with LeastSquares, which will give you an approximation of a given form depending linearly on the coefficients (or Fit in the Statistics package, which also allows nonlinear least-squares fitting).
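For instance, a small sketch with made-up data (X and Y below are arbitrary):

> with(CurveFitting):
  X := [1, 2, 3, 4]:  Y := [1.2, 1.9, 3.1, 3.9]:
  PolynomialInterpolation(X, Y, x);         # cubic passing through all four points exactly
  LeastSquares(X, Y, x, curve = a + b*x);   # best straight line through the data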

You seem to be doing a 3 x 3 example rather than the 2 x 2 you stated at the beginning.  But let's go with it.

Suppose you have the quadratic form

> Q := -x^2+3*x*y-5*x*z+3*y^2-z^2;

The corresponding matrix is

> M := 1/2 * VectorCalculus[Hessian](Q, [x,y,z]);

Now to calculate eigenvalues and eigenvectors:

> with(LinearAlgebra):
   E,V := Eigenvectors(M);

V is a Matrix whose columns are the eigenvectors.  It thus diagonalizes M in the sense that V^%T . M . V is diagonal (where ^%T denotes transpose).  Some simplification is necessary to see this:

> map(rationalize, V^%T . M . V);

This corresponds to a change of variables.  Suppose we make new variables u,v,w so that

[ x ]       [ u ]
[ y ]  =  V [ v ]
[ z ]       [ w ]


then Q = [ x y z ] M [x y z]^%T = [ u v w ] V^%T M V [u v w]^%T will only have terms in u^2, v^2 and w^2.

 

 

You want to start out by diagonalizing the symmetric matrix corresponding to the quadratic form, using Eigenvectors in the LinearAlgebra package. 
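For instance, with a small made-up 2 x 2 quadratic form:

> with(LinearAlgebra):
  Q := 2*x^2 + 2*x*y + 2*y^2:
  M := 1/2 * VectorCalculus[Hessian](Q, [x, y]);
  E, V := Eigenvectors(M):
  V^%T . M . V;   # diagonal: each entry is an eigenvalue times the squared length
                  # of the corresponding eigenvector column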

First of all, for future reference: in Maple, e is not 2.71828..., it's just another variable.  You probably want exp(-z[11]/t[1]) etc.

Second: equations involving different exponentials are generally hard to solve, but in this case the four equations depend on z[11], ..., z[22] only through exp(-z[11]/t[1]), exp(-z[21]/t[1]), exp(-z[22]/t[2]), exp(-z[12]/t[2]), so we might as well use those as variables instead of z[11] to z[22]: I'll call them x[11] to x[22].  You tried to solve for z[11] to z[22], but u[11] to u[22] also seem to be variables.  Note that z[ij] = 0 corresponds to x[ij] = 1, so the constraints u[ij]*(x[ij]-1) = 0 can be included as equations too.  So:

> eqs:= { x[11]*(p[21]*x[21]+p[22]*x[22])=t[1]*(1-w-u[11])/w/p[11], 
          x[21]*(p[11]*x[11]+p[12]*x[12])=t[1]*(1-w-u[21])/w/p[21], 
          x[12]*(p[21]*x[21]+p[22]*x[22])=t[2]*(1-w-u[12])/w/p[12], 
          x[22]*(p[11]*x[11]+p[12]*x[12])=t[2]*(1-w-u[22])/w/p[22],
          u[11]*(x[11]-1)=0, u[12]*(x[12]-1)=0, u[21]*(x[21]-1)=0, u[22]*(x[22]-1)=0 };
> solve(eqs, {x[11],x[12],x[21],x[22],u[11],u[12],u[21],u[22]});

The result is a sequence of 12 solutions.  Whether any of them satisfies the nonnegativity constraints may depend on the constants w, p[11],p[12],p[21],p[22],t[1],t[2].  For example:

> eqs1:= eval(eqs, {w=1,p[11]=2,p[12]=3,p[21]=4,p[22]=5,t[1]=6,t[2]=7});
  S1:= {solve(eqs1, {x[11],x[12],x[21],x[22],u[11],u[12],u[21],u[22]})}; 

Remove the solutions where some u[ij] < 0 or some x[ij] <= 0 or some x[ij] > 1:
 

> remove(s -> ormap(is, subs(s, 
    {seq(u[ij]< 0, ij = [11,12,21,22]), seq(x[ij] <= 0, ij = [11,12,21,22]),
     seq(x[ij] > 1, ij = [11,12,21,22])})), S1);

{{u[11] = 0, u[12] = 0, u[21] = 0, u[22] = 0, x[11] = -3/2*x[12], x[12] = x[12], x[21] = -5/4*x[22], x[22] = x[22]}}

But that's not acceptable either: since the x[ij] are exponentials and hence positive, x[12] > 0 and x[22] > 0 would force x[11] = -3/2*x[12] < 0 and x[21] = -5/4*x[22] < 0.

 

 
