Robert Israel

University of British Columbia
Associate Professor Emeritus
North York, Ontario, Canada

MaplePrimes Activity


These are replies submitted by Robert Israel

Taking the natural log of both sides, it is sufficient to show that ln(1+1/n) > R, where

> R := 8/(n^(1/3) + (n+1)^(1/3))^3;

According to asympt, this is true for sufficiently large n.

> asympt(ln(1+1/n) - R, n);

1/(6480*n^5)+O(1/(n^6))
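As an independent cross-check outside Maple (a sketch in Python, assuming the mpmath library is available; not part of the original worksheet), the difference can be compared numerically with 1/(6480*n^5) at a large n. High precision is essential, since the difference is about 1e-34 when n = 10^6:

```python
from mpmath import mp, mpf, log, cbrt

mp.dps = 60  # the two terms agree to ~28 digits, so doubles are useless here

def difference(n):
    # ln(1+1/n) - 8/(n^(1/3) + (n+1)^(1/3))^3
    n = mpf(n)
    return log(1 + 1/n) - 8/(cbrt(n) + cbrt(n + 1))**3

n = mpf(10)**6
ratio = difference(n) / (1/(6480*n**5))
print(ratio)  # close to 1, as the asymptotic expansion predicts
```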

In fact, if we define Q as the derivative of the difference with respect to n, Q < 0 for
sufficiently large n.

> Q:= diff(ln(1+1/n) - R, n);

Q := -1/(n^2*(1+1/n))+24/(n^(1/3)+(n+1)^(1/3))^4*(1/(3*n^(2/3))+1/(3*(n+1)^(2/3)))

> asympt(Q, n, 8);

-1/(1296*n^6)+1/(432*n^7)+O(1/(n^8))
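The leading term is negative, so Q < 0 for large n. A numeric spot-check over a range of n (again a Python/mpmath sketch of my own, with high precision because Q is a tiny difference of nearly equal terms) is consistent with Q being negative throughout:

```python
from mpmath import mp, mpf, cbrt

mp.dps = 50  # Q(1) is about -8.8e-5 and shrinks like -1/(1296 n^6)

def Q(n):
    # -1/(n^2*(1+1/n)) + 24/(n^(1/3)+(n+1)^(1/3))^4*(1/(3*n^(2/3))+1/(3*(n+1)^(2/3)))
    n = mpf(n)
    s, t = cbrt(n), cbrt(n + 1)
    return -1/(n**2*(1 + 1/n)) + 24/(s + t)**4*(1/(3*s**2) + 1/(3*t**2))

samples = [mpf('0.1'), mpf('0.5'), 1, 2, 10, 100, 10**4, 10**6]
vals = [Q(n) for n in samples]
print(all(v < 0 for v in vals))
```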

Note that Q is continuous for n > 0 and, by the asymptotics above, negative for large n.  It is thus sufficient to show that Q is never 0 for n > 0: then Q < 0 everywhere, so the difference ln(1+1/n) - R is strictly decreasing, and since it tends to 0 as n -> infinity it must be positive for all n > 0.

Now solve doesn't find a solution.

> solve(Q, n);

But solve isn't always reliable.  Instead, let's reduce the question to one involving rational functions,
by substituting new variables s = n^(1/3) and t = (n+1)^(1/3).

> subs(n=s^3,subs(n^(1/3)=s, n^(-2/3)=s^(-2), (n+1)^(1/3)=t, (n+1)^(-2/3)=t^(-2), Q));
 
-1/(s^6*(1+1/(s^3)))+24/(s+t)^4*(1/(3*s^2)+1/(3*t^2))

> Q1:= numer(normal(%));

Q1 := 7*t^2*s^4-4*t^3*s^3-6*s^2*t^4-4*s*t^5-t^6+8*s*t^2+8*s^6+8*s^3

Now eliminate t, using the equation t^3 - s^3 = 1.

> factor(resultant(Q1,t^3-s^3-1,t));

-(s+1)^2*(s^2-s+1)^2
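The elimination can be reproduced in SymPy (an independent cross-check, not part of the original worksheet), and it agrees with Maple's factored resultant:

```python
from sympy import symbols, resultant, factor, expand

s, t = symbols('s t')
Q1 = (7*t**2*s**4 - 4*t**3*s**3 - 6*s**2*t**4 - 4*s*t**5 - t**6
      + 8*s*t**2 + 8*s**6 + 8*s**3)

# eliminate t using the relation t^3 - s^3 = 1
r = factor(resultant(Q1, t**3 - s**3 - 1, t))
print(r)  # equals -(s^3+1)^2, i.e. -(s+1)^2*(s^2-s+1)^2
```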

Since s^2 - s + 1 has negative discriminant, this resultant has no positive real roots.  Hence Q1 and t^3 - s^3 - 1 have no common zero with s > 0, so Q is never 0 for n > 0, and that completes the proof.

As I said, it's not easy, partly because of the singularity at (0,0).  I wasn't able to obtain a very good fit at all, which may indicate that this system is not a good model for your data.  I'll try to write something up in a few days.  Meanwhile, to show that I've been doing something, here's a plot showing the best fit I could find to the first six data points (treating the first as specifying the initial conditions).  Your data points are in red, the corresponding points on the solution are in blue.

But int(abs(z),z) has no correct answer in the complex plane: only analytic functions have antiderivatives, and abs(z) is analytic nowhere.
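To see why abs(z) is not analytic, write it as u + I*v with u = sqrt(x^2+y^2) and v = 0; the Cauchy-Riemann equations would force u_x = v_y = 0 and u_y = -v_x = 0, but the partials of u are nonzero away from the origin. A quick SymPy sketch (my own illustration, not from the original reply):

```python
from sympy import symbols, sqrt, diff, simplify, Rational

x, y = symbols('x y', real=True)
u = sqrt(x**2 + y**2)  # real part of abs(z); the imaginary part v is 0

ux = simplify(diff(u, x))  # would have to equal v_y = 0 for analyticity
uy = simplify(diff(u, y))  # would have to equal -v_x = 0

print(ux, uy)                 # x/sqrt(x**2+y**2), y/sqrt(x**2+y**2)
print(ux.subs({x: 3, y: 4}))  # 3/5, nonzero: Cauchy-Riemann fails off the origin
```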

This should certainly be described in any calculus text.  The tangent line to the curve y = f(x) at
x = x0 has slope f'(x0) [that's what derivatives are all about] and passes through the point 
[x0, f(x0)].  The equation of a line with that slope passing through that point is 
y = f(x0) + (x-x0) * f'(x0).
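For instance (a made-up example for illustration, not from the original question), for f(x) = x^2 at x0 = 3 the formula gives y = 9 + 6*(x - 3) = 6*x - 9; in SymPy:

```python
from sympy import symbols, diff, expand

x = symbols('x')
f = x**2   # example curve, chosen only for illustration
x0 = 3

# tangent line: y = f(x0) + (x - x0) * f'(x0)
tangent = f.subs(x, x0) + (x - x0)*diff(f, x).subs(x, x0)
print(expand(tangent))  # 6*x - 9
```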
Hmmm, some interesting statements there, such as: "1 does not automatically simplify to 1".  This may be the result of automatic simplification in the pretty-printer when changing 1D Maple input to 2D.  The help page (as viewed in Classic) says

* x*0 and 0/x do not automatically simplify to 0 if the value of x renders this simplification invalid.
* 0^x does not automatically simplify to 0 unless x is an unassigned name.
* x/x does not automatically simplify to 1. This is invalid if, for example, x is a function that produces random numbers.
* x*x does not automatically simplify to x^2. This is invalid if, for example, x is a function that produces random numbers.
* Constant operators do not simplify to numbers. For example, x -> 1 does not simplify to 1.
* Expressions such as x -> f(x) do not simplify to f. This is invalid if f can also take fewer or greater than one argument.
 

It looks like the Standard GUI parser doesn't respect these rules.  For example (in Maple 13 Standard)

> f:= proc(x) x*0; 0/x;  x/x; x*x; end proc;

                           f := proc(x)  0; 0; 1; x^2  end proc

It's not just cosmetic.  Consider

> g:= proc(x) seq(x()*x(), i=1..10) end proc:
   g(rand(1..10));

Standard produces
                    49, 100, 36, 4, 16, 36, 25, 1, 64, 25

(note that these are all squares)

while Classic produces
                 70, 12, 24, 5, 40, 20, 8, 24, 90, 16
 

So it seems that the Standard parser really has simplified x()*x() to x()^2.  With x = rand(1..10) this produces one
random number and squares it.  Classic, on the other hand, produces two random numbers and multiplies them.
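The same pitfall is easy to reproduce in any language (a Python analogue of the experiment above, my own illustration, not from the post): rewriting f()*f() as f()**2 changes the behavior of a program with side effects.

```python
from math import isqrt
import random

rng = random.Random(0)          # seeded stand-in for Maple's rand(1..10)
x = lambda: rng.randint(1, 10)  # each call draws a fresh random number

products = [x() * x() for _ in range(10)]  # two draws per entry (Classic's reading)
squares  = [x() ** 2 for _ in range(10)]   # one draw, squared (Standard's rewrite)

print(squares)
print(all(isqrt(v)**2 == v for v in squares))  # True: every entry is a perfect square
```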
 

Several SCR's to make...

There may still be problems.  Consider

> A := <<1 | 1>, <2 | -2>, <2 | -2>>;
   b:= <5,2,2>;

where you want 0 <= x <= 1 and 0 <= y <= 1.

> LeastSquares(A,b);
                                 [3]
                                 [ ]
                                 [2]

So you would replace both these entries with 1.  But the optimal solution with constraints is not <1,1> but <1, 4/9>.
Conclusion: some variables that are out-of-range in the initial LSSolve solution may be in the interior of the range in
the final solution, and it's not so easy to tell which. 
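The claim is easy to verify numerically (a NumPy sketch of my own, assuming NumPy; the original used Maple's LeastSquares): the unconstrained solution is <3,2>, and clipping it to <1,1> leaves a strictly worse residual than the constrained optimum <1, 4/9>.

```python
import numpy as np

A = np.array([[1.0, 1.0], [2.0, -2.0], [2.0, -2.0]])
b = np.array([5.0, 2.0, 2.0])

# unconstrained least squares: (3, 2), both entries out of the range [0, 1]
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

resid = lambda x: np.sum((A @ x - b)**2)
clipped = np.clip(x_ls, 0, 1)        # the "replace with 1" guess: (1, 1)
optimum = np.array([1.0, 4.0/9.0])   # true constrained minimizer

print(x_ls)            # approximately [3, 2]
print(resid(clipped))  # 17.0
print(resid(optimum))  # 1152/81, about 14.22: strictly better
```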

I think when you have your algorithm sufficiently refined to be correct, you'll essentially end up with the active set method.

In general there is no algorithm to verify whether a complicated function is nonnegative on an interval.  This is a mathematical theorem, not just a limitation of Maple.  See D. Richardson, "Some Undecidable Problems Involving Elementary Functions of a Real Variable", Journal of Symbolic Logic 33 (1968) 514-520, www.jstor.org/stable/2271358
Maple can sometimes recognize that a function is positive, and can sometimes recognize that it is not, but in many cases it can't do either, so verify would return FAIL in these cases.  Moreover, different ways of representing the same function may well produce different results.  Thus an expression that is explicitly a sum of several squares might be easier to deal with than a "simplified" form that is not so obviously a sum of squares.
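The representation issue is easy to illustrate (a SymPy analogue of the phenomenon, my own sketch; the original discussion is about Maple): the explicit sum-of-squares form is recognized as nonnegative, while the expanded equivalent is left undecided.

```python
from sympy import symbols, expand

x = symbols('x', real=True)

sos = (x**2 - 1)**2 + (x - 2)**2  # visibly a sum of squares
flat = expand(sos)                # x**4 - x**2 - 4*x + 5, the same function

print(sos.is_nonnegative)   # True: each square is nonnegative
print(flat.is_nonnegative)  # None: the assumption system cannot decide
```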

Linear least squares with linear constraints is a convex quadratic programming problem.  According to the help page
?Optimization,General,Methods, LSSolve uses an active-set method in this case.
