Robert Israel

University of British Columbia
Associate Professor Emeritus
North York, Ontario, Canada

MaplePrimes Activity


These are replies submitted by Robert Israel

Integer linear programming is NP-hard.  Maple's Optimization package handles it using a branch-and-bound method, but this may take a very long time in some cases.  Integer nonlinear programming may be even harder.  For some of these problems, there may be no method much better than brute force.  If you have 50 or so variables, it may be unrealistic to expect to find the optimal solution.
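
For what it's worth, here is a minimal sketch of how an integer LP is posed with the Optimization package (a toy problem, not yours); it is the integervariables option (or assume = integer) that makes LPSolve resort to branch-and-bound.

> with(Optimization):
   LPSolve(3*x + 2*y,                                   # toy objective, maximized below
       {x + 2*y <= 14, 3*x - y >= 0, x - y <= 2},
       assume = nonnegative, integervariables = {x, y}, maximize);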

Sorry about that.  This makes the problem much more interesting... I'm working on it....

The next thing you might try is the case where c is small.

For c = 0, the solutions of z^a = w are z = G(a,w,k) = exp((ln(w) + 2*Pi*I*k)/a)  for integers k (where ln(w) is the principal value of the logarithm, as Maple uses).
Each of these is analytic in a for a <> 0.

For c <> 0 but small, these solutions can be expanded as a series in c; it appears to be

z = G(a,w,k) - sum( product(i*a-1-j, i=1..j-1)*G(a,w,k)^(j+1)*(c/w/a)^j/j!, j = 1 .. infinity )

I think these can be obtained from the Lagrange Inversion Formula.
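
For what it's worth, assuming the equation being solved is z^a + c*z = w (that is what the c = 0 case above corresponds to), a quick numerical check of the first-order term of this series might look like:

> G:= (a, w, k) -> exp((ln(w) + 2*Pi*I*k)/a):
   a:= 2.5:  w:= 1.0 + 2.0*I:  c:= 0.01:  k:= 0:     # arbitrary sample values
   z1:= G(a,w,k) - G(a,w,k)^2*c/(w*a):               # the series truncated after the j = 1 term
   abs(evalf(z1^a + c*z1 - w));                      # residual should be of order c^2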

 

I don't know why dsolve doesn't work in this case, but note that the solution is quite underdetermined: there are only four initial conditions, and there should be 8, because the system is second order in each of four dependent variables.  You can proceed as follows.

> S:= dsolve([sys]);   # the rather complicated general solution of the system of DE's, with 8 arbitrary parameters _C1 to _C8
    S0:= eval(S, x=0);    # the solution at x=0
   ceqs:= eval([ics], S0); # the initial conditions evaluated using the solution, giving four linear equations in _C1 to _C8
   solve(evalf(ceqs));     # using evalf gets rid of the RootOf's that make this complicated
   
_C1 = -_C2+1.000000002, _C2 = _C2, _C3 = -_C4-.3485784846, _C4 = _C4, _C5 = 1.205682403-_C6, _C6 = _C6, 
  _C7 = -1.857103920-_C8, _C8 = _C8

So these give four constraints on parameters _Ci, leaving _C2, _C4, _C6 and _C8 arbitrary.
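
If you then want the four-parameter family of solutions itself, one way (a sketch, reusing S and ceqs from above) is to substitute these constraints back into the general solution:

> S4:= subs(solve(evalf(ceqs)), S);   # general solution with only _C2, _C4, _C6, _C8 left free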

 

 


The point was that jakubi (and dale0228923-1 too, judging from the error message) already had sys assigned a set, and then was calling dsolve with a set enclosed inside a list or set.  You have sys and ics as expression sequences, and so dsolve([sys, ics], ...) is OK.
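
To see the difference with a toy ODE (not the original system), something like this illustrates it:

> ode:= diff(y(x),x) = y(x):  ic:= y(0) = 1:
   sys:= {ode}:             # sys already assigned a set, as in jakubi's worksheet
   dsolve([sys, ic]);       # the set is nested inside the list: this is the problematic call
   dsolve([op(sys), ic]);   # flattening the set first avoids the problem
   sys:= ode:               # with sys an expression sequence (here just one equation) ...
   dsolve([sys, ic]);       # ... [sys, ic] is already a flat list, so this works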

It looks to me like the regions V(eta, xi) < .0162 and V(eta, xi) > .0162 will both have infinite area.  But I suppose you mean
the small oval region near the origin.

To insert a plot, first export the plot (to gif or jpeg), then click on the green up-arrow button, upload the file, and click OK to insert it here.

> with(plots):
  Vd := (eta, xi) -> (-0.2893279480e-4*xi^6 + 0.2068912225e-5*xi^8
      + 0.9156907035e-3*xi^2 + 0.2109192340e-3*xi^4)*cos(eta)
      + (-0.6071284941e-4*xi^7 + 0.9066131648e-3*xi^5
      - 0.6001066340e-2*xi^3 - 0.4141132509e-1*xi)*sin(eta) + 0.5000000000*kp*eta^2;
  kp := 4.8486;
  implicitplot(Vd(eta, xi) = 0.0162, eta = -1 .. 1, xi = -10 .. 10, gridrefine = 3);

It looks like an appropriate parametrization for this oval would be scaled polar coordinates, eta = .1 * r * cos(theta), xi = 3 * r * sin(theta), theta = 0 .. 2*Pi.

> R:= theta -> fsolve(Vd(0.1*r*cos(theta), 3*r*sin(theta)) = .0162, r = 0 .. 1.5);
   

The map (r, theta) -> (eta, xi) has Jacobian (0.1)*(3)*r = 0.3*r, so the element of area is 0.3*r dr dtheta; integrating in r from 0 to R(theta) gives 0.15*R(theta)^2 dtheta.  Numerical integration will then get the area.

> A:= evalf(Int(.15*R^2, 0 .. 2*Pi));

A := .8633774570
 

 


You might try this:

> z:= r*exp(I*t);                              # point of the unit disk in polar form
   a:= 1/(z^3 - z^(-3)+sqrt(5));
   V:= [I*(z^2 - z^(-2)), z^2 + z^(-2), 2*I/3*(z^3 + z^(-3))];
   M:= map(Re, expand(a*V))+[0,0,1/2] assuming r>0,r<1,t::real;
   B:= map(`*`, M, 1/(M[1]^2 + M[2]^2 + M[3]^2));   # invert M through the unit sphere
   plot3d(B,r=0.002 .. 1, t = 0 .. 2*Pi, grid=[50,200], orientation=[0,0],style=patchnogrid,
      lightmodel=light3,glossiness = 0.8,scaling=constrained); P:= %:
   plotsetup(jpeg, plotoutput="c:/test/bryantkusner.jpg",plotoptions="height=600,width=600");   # export the plot to a JPEG file
   P;
   plotsetup(default);                          # restore the default plot device

 

One thing Maple doesn't have is the ability to draw a grid on the surface that is coarser than the grid actually used to compute
the surface.
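
One possible workaround (just a sketch, reusing B, r and t from the code above) is to draw the shaded surface on a fine grid and overlay the same surface as a coarser wireframe:

> fine:= plot3d(B, r = 0.002 .. 1, t = 0 .. 2*Pi, grid = [50,200], style = patchnogrid):
   coarse:= plot3d(B, r = 0.002 .. 1, t = 0 .. 2*Pi, grid = [10,40], style = wireframe, color = black):
   plots:-display(fine, coarse, orientation = [0,0], scaling = constrained, lightmodel = light3);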


Yes, the actual probability should be p = 1/binomial(52,4) = 1/270725.  But you shouldn't expect to get that exact value from a simulation, no matter how big a sample you use.  If you run your simulation n times, the standard deviation of the result you get (i.e. of the empirical probability) will be sqrt(p*(1-p)/n).  So to have a good chance of being within about 1% of the correct value you'd want sqrt(p*(1-p)/n) <= p/100, i.e. n >= 10000*(1-p)/p = 2707240000.
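
A quick check of that bound (just the arithmetic, not a simulation):

> p:= 1/binomial(52,4);        # 1/270725
   n:= 10000*(1-p)/p;          # 2707240000
   evalf(sqrt(p*(1-p)/n));     # one standard deviation of the empirical probability: p/100, about 3.7*10^(-8)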

 
