TechnicalSupport
Maplesoft, Waterloo, Ontario, Canada

MaplePrimes Activity


These are Posts that have been published by TechnicalSupport

We occasionally get asked questions about methods of Perturbation Theory in Maple, including the Lindstedt-Poincaré Method. Presented here is the most famous application of this method.

Introduction

At the dawn of the 20th century, one problem that troubled astronomers and astrophysicists was the precession of the perihelion of Mercury. Even when the gravitational effects of the other planets and objects in the solar system were taken into account, the equations of Newtonian mechanics could not account for the discrepancy between the observed and predicted precession.

One of the early successes of Einstein's General Theory of Relativity was that the new model was able to capture the precession of Mercury, in addition to the orbits of all the other planets. The Einsteinian model, when applied to the orbit of Mercury, was in fact a non-negligible perturbation of the old model. In this post, we show how to use Maple to compute the perturbation, and derive the formula for calculating the precession.

In polar coordinates, the Einsteinian model can be written in the following form, where u(theta) = a*(1-e^2)/r(theta), with theta being the polar angle, r(theta) being the radial distance, a being the semi-major axis length, and e being the eccentricity of the orbit:

diff( u(theta), theta, theta ) + u(theta) = 1 + epsilon * u(theta)^2,  with  u(0) = 1 + e  and  D(u)(0) = 0.

# Original system.
f := (u,epsilon) -> -1 - epsilon * u^2;
omega := 1;
u0, du0 := 1 + e, 0;
de1 := diff( u(theta), theta, theta ) + omega^2 * u(theta) + f( u(theta), epsilon );
ic1 := u(0) = u0, D(u)(0) = du0;


The small parameter epsilon (along with the amount of precession) can be found in terms of the physical constants, but for now we leave it arbitrary:
 

# Parameters.
P := [
    a = 5.7909050e10 * Unit(m),
    c = 2.99792458e8 * Unit(m/s),
    e = 0.205630,
    G = 6.6740831e-11 * Unit(N*m^2/kg^2), 
    M = 1.9885e30 * Unit(kg), 
    alpha = 206264.8062, 
    beta = 415.2030758 
];
epsilon = simplify( eval( 3 * G * M / a / ( 1 - e^2 ) / c^2, P ) ); # approximately 7.987552635e-8


Note that c is the speed of light, G is the gravitational constant, M is the mass of the sun, alpha is the number of arcseconds per radian, and beta is the number of revolutions per century for Mercury.

We will show that the radial distance predicted by Einstein's model is close to that of the ellipse predicted by Newton's model, but that the perturbation accounts for the observed precession (42.98 arcseconds/century). During one revolution, the precession can be determined to be approximately
 

sigma = simplify( eval( 6 * Pi * G * M / a / ( 1 - e^2 ) / c^2, P ) ); # approximately 5.018727337e-7


and so, per century, it is alpha*beta*sigma, which is approximately 42.98 arcseconds/century.
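
This conversion can be checked directly with a one-line computation, written in the same style as the lines above (a quick sketch reusing the parameter list P):

evalf( simplify( eval( alpha * beta * 6 * Pi * G * M / a / ( 1 - e^2 ) / c^2, P ) ) ); # approximately 42.98
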
It is worth checking out this question on Stack Exchange, which includes an animation generated numerically using Maple for a similar problem that has a more pronounced precession.

Lindstedt-Poincaré Method in Maple

In order to obtain a perturbation solution of the perturbed differential equation u''+omega^2*u=1+epsilon*u^2 which is periodic, we need to write both u and omega as a series in the small parameter epsilon. Otherwise, the solution would contain unbounded oscillatory terms ("secular terms"). Using the Lindstedt-Poincaré method, we substitute arbitrary series in epsilon for u and omega into the initial value problem, and then choose the coefficient constants/functions so that the initial value problem is satisfied and no secular terms appear. Note that a first-order approximation already agrees well with the measured precession, but higher-order approximations can be obtained.
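To see where the secular terms come from, here is a small illustrative computation (a sketch, separate from the worksheet code below): substituting a naive series u = p0 + epsilon*p1 into the equation and collecting the first-order terms gives p1'' + p1 = p0^2, whose forcing contains 2*e*cos(theta) and therefore resonates with the homogeneous solution.

# Illustrative only: the first-order correction of a naive (unscaled) expansion contains a secular term.
p0 := theta -> 1 + e * cos(theta): # solution of the epsilon = 0 problem with the given initial conditions
dsolve( { diff( p1(theta), theta, theta ) + p1(theta) = p0(theta)^2, p1(0) = 0, D(p1)(0) = 0 } );
# The solution contains a term e*theta*sin(theta), which grows without bound as theta increases.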

To perform this in Maple, we can do the following:
 

# Transformed system, with the new independent variable being the original times a series in epsilon.
de2 := op( PDEtools:-dchange( { theta = phi/b }, { de1 }, { phi }, params = { b, epsilon }, simplify = true ) );
ic2 := ic1;

# Order and series for the perturbation solutions of u(phi) and b. Here, n = 1 is sufficient.
n := 1;
U := unapply( add( p[k](phi) * epsilon^k, k = 0 .. n ), phi );
B := omega + add( q[k] * epsilon^k, k = 1 .. n );

# DE in terms of the series.
de3 := series( eval( de2, [ u = U, b = B ] ), epsilon = 0, n + 1 );

# Successively determine the coefficients p[k](phi) and q[k].
for k from 0 to n do

    # Specify the initial conditions for the kth DE, which involves p[k](phi).
    # The original initial conditions appear only in the coefficient functions with index k = 0,
    # and those for k > 0 are all zero.
    if k = 0 then
        ic3 := op( expand( eval[recurse]( [ ic2 ], [ u = U, epsilon = 0 ] ) ) );
    else
        ic3 := p[k](0) = 0, D(p[k])(0) = 0;
    end if:

    # Solve kth DE, which can be found from the coefficients of the powers of epsilon in de3, for p[k](phi).
    # Then, update de3 with the new information.
    soln := dsolve( { simplify( coeff( de3, epsilon, k ) ), ic3 } );
    p[k] := unapply( rhs( soln ), phi );
    de3 := eval( de3 );

    # Choose q[k] to eliminate secular terms. To do this, use the frontend() command to keep only the terms in p[k](phi)
    # which have powers of phi, and then solve for the value of q[k] which makes the expression zero.
    # Note that frontend() masks the phi-dependence within the sine and cosine terms.
    # Note also that this method may need to be amended, based on the form of the terms in p[k](phi).
    if k > 0 then
        q[k] := solve( frontend( select, [ has, p[k](phi), phi ] ) = 0, q[k] );
        de3 := eval( de3 );
    end if;

end do:

# Final perturbation solution.
'u(theta)' = eval( eval( U(phi), phi = B * theta ) ) + O( epsilon^(n+1) );

# Angular precession in one revolution.
sigma := convert( series( 2 * Pi * (1/B-1), epsilon = 0, n + 1 ), polynom ):
epsilon := 3 * G * M / a / ( 1 - e^2 ) / c^2;
'sigma' = sigma;

# Precession per century.
xi := simplify( eval( sigma * alpha * beta, P ) ); # returns approximately 42.98
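
To first order, sigma = 2*Pi*epsilon = 6*Pi*G*M/(a*(1-e^2)*c^2), which is the value (approximately 5.018727337e-7 radians per revolution) quoted in the Introduction, and multiplying by alpha*beta recovers the precession of approximately 42.98 arcseconds per century.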


Maple Worksheet: Lindstedt-Poincare_Method.mw

Maple users often want to write a derivative evaluated at a point using Leibniz notation, as a matter of presentation, with appropriate variables and coordinates. For instance, they may want D[1,2,2,3](f)(a,b,c) displayed as the mixed fourth-order partial derivative of f(x,y,z), taken once with respect to x, twice with respect to y, and once with respect to z, and evaluated at the point (x,y,z) = (a,b,c).


Now, Maple uses the D operator for evaluating derivatives at a point, but this can be a little clunky:

p := D[1,2,2,3](f)(a,b,c);

q := convert( p, Diff );

u := D[1,2,2,3](f)(5,10,15);

v := convert( u, Diff );

How can we tell Maple, programmatically, to print this in a nicer way? We amended the print command (see below) to do this. For example:

print( D[1,2,2,3](f)(a,b,c), [x,y,z] );

print( D[1,2,2,3](f)(5,10,15), [x,y,z] );

print( 'D(sin)(Pi/6)', theta );

Here's the definition of the custom version of print:

# Type to check if an expression is a derivative using 'D', e.g. D(f)(a) and D[1,2](f)(a,b).
TypeTools:-AddType(
        'Dexpr',
        proc( f )
               if op( [0,0], f ) <> D and op( [0,0,0], f ) <> D then
                       return false;
               end if;
               if not type( op( [0,1], f ), 'name' ) or not type( { op( f ) }, 'set(algebraic)' ) then
                       return false;
               end if;
               if op( [0,0,0], f ) = D and not type( { op( [0,0,..], f ) }, 'set(posint)' ) then
                       return false;
               end if;
               return true;
        end proc
):
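
# Quick check that the 'Dexpr' type behaves as expected.
type( D[1,2](f)(a,b), 'Dexpr' ); # true
type( D(f)(a), 'Dexpr' );        # true
type( sin(x), 'Dexpr' );         # false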


# Create a local version of 'print', which will print expressions like D[1,2](f)(a,b) in a custom way,
# but otherwise print in the usual fashion.
local print := proc()

        local A, B, f, g, L, X, Y, Z;

        # Check that a valid expression involving 'D' is passed, along with a variable name or list of variable names.
        if ( _npassed < 2 ) or ( not _passed[1] :: 'Dexpr' ) or ( not _passed[2] :: 'Or'('name','list'('name')) ) then
               return :-print( _passed );
        end if;

        # Extract important variables from the input.
        g := _passed[1]; # expression
        X := _passed[2]; # variable name(s)
        f := op( [0,1], g ); # function name in expression
        A := op( g ); # point(s) of evaluation

        # Check that the number of variables is the same as the number of evaluation points.
        if nops( X ) <> nops( [A] ) then
               return :-print( _passed );
        end if;

        # The differential operator.
        L := op( [0,0], g );

        # Find the variable (univariate) or indices (multivariate) for the derivative(s).
        B := `if`( L = D, X, [ op( L ) ] );

        # Variable name(s) as expression sequence.
        Y := op( X );

        # Check that the point(s) of evaluation is/are distinct from the variable name(s).
        if numelems( {Y} intersect {A} ) > 0 then
               return :-print( _passed );
        end if;

        # Find the expression sequence of the variable names.
        Z := `if`( L = D, X, X[B] );

        return print( Eval( Diff( f(Y), Z ), (Y) = (A) ) );

end proc:

Do you use Leibniz Notation often? Or do you have an alternate method? We’d love to hear from you!

A common question to our tech support team is about completing the square for a univariate polynomial of even degree, and how to do that in Maple. We’ve put together a solution that we think you’ll find useful. If you have any alternative methods or improvements to our code, let us know!

restart;

# Procedure to complete the square for a univariate
# polynomial of even degree.

CompleteSquare := proc( f :: depends( 'And'( polynom, 'satisfies'( g -> ( type( degree(g,x), even ) ) ) ) ), x :: name )

       local a, g, k, n, phi, P, Q, r, S, T, u:

       # Degree and parameters of polynomial.
       n := degree( f, x ):
       P := indets( f, name ) minus { x }:

       # General polynomial of square plus constant.
       g := add( a[k] * x^k, k=0..n/2 )^2 + r:

       # Solve for unknowns in g.
       Q := indets( g, name ) minus P:

       S := map( expand, { solve( identity( expand( f - g ) = 0, x ), Q ) } ):

       if numelems( S ) = 0 then
              return NULL:
       end if:

       # Evaluate g at the solution, and re-write square term
       # so that the polynomial within the square is monic.

       phi := u -> lcoeff(op(1,u),x)^2 * (expand(op(1,u)/lcoeff(op(1,u),x)))^2:  
       T := map( evalindets, map( u -> eval(g,u), S ), `^`(anything,identical(2)), phi ):

       return `if`( numelems(T) = 1, T[], T ):

end proc:


# Examples.

CompleteSquare( x^2 + 3 * x + 2, x );
CompleteSquare( a * x^2 + b * x + c, x );
CompleteSquare( 4 * x^8 + 8 * x^6 + 4 * x^4 - 246, x );

m, n := 4, 10;
r := rand(-10..10):
for i from 1 to n do
       CompleteSquare( r() * ( x^(m/2) + randpoly( x, degree=m-1, coeffs=r ) )^2 + r(), x );
end do;

# Compare quadratic examples with Student:-Precalculus:-CompleteSquare()
# (which is restricted to quadratic expressions).

Student:-Precalculus:-CompleteSquare( x^2 + 3 * x + 2 );
Student:-Precalculus:-CompleteSquare( a * x^2 + b * x + c );

For a higher-order example:

f := 5*x^4 - 70*x^3 + 365*x^2 - 840*x + 721;
g := CompleteSquare( f, x ); # 5 * ( x^2 - 7 * x + 12 )^2 + 1
h := evalindets( g, `*`, factor ); # 5 * (x-3)^2 * (x-4)^2 + 1
p1 := plot( f, x=0..5, y=-5..5, color=blue ):
p2 := plots:-pointplot( [ [3,1], [4,1] ], symbol=solidcircle, symbolsize=20, color=red ):
plots:-display( p1, p2 );

This tells us that the minimum value of the expression is 1, and that it occurs at x = 3 and x = 4.
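
As a quick sanity check (a sketch using the names f, g, and h from the block above), both completed-square forms expand back to the original polynomial:

expand( g - f ); # 0
expand( h - f ); # 0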

Problem:

Suppose you have a bunch of 2D data points which:

  1. May include points with the same x-value but different y-values; and
  2. May be unevenly-spaced with respect to the x-values.

How do you clean up the data so that, for instance, you are free to construct a connected data plot, or perform a Discrete Fourier Transform? Please note that Curve Fitting and the Lomb–Scargle Method, respectively, are more effective techniques for these particular applications. Let's start with a simple example for illustration. Consider this Matrix:

A := < 2, 5; 5, 8; 2, 1; 7, 8; 10, 10; 5, 7 >;

Consolidate:

First, sort the rows of the Matrix by the first column, and extract the sorted columns separately:

P := sort( A[..,1], output=permutation ); # permutation to sort rows by the values in the first column
U := A[P,1]; # sorted column 1
V := A[P,2]; # sorted column 2

We can regard each sorted run of equal x-values in U as a step in a staircase, and the goal is to replace the y-values in V lying on each step with their average.

Second, determine the indices for the first occurrences of values in U, by selecting the indices which give a jump in x-value:

m := numelems( U );
K := [ 1, op( select( i -> ( U[i] > U[i-1] ), [ seq( j, j=2..m ) ] ) ), m+1 ];
n := numelems( K );

The element m+1 is appended for later convenience. Here, we can quickly define the first column of the consolidated Matrix:

X1 := U[K[1..-2]];

Finally, to define the second column of the consolidated Matrix, we take the average of the values in each step, using the indices in K to tell us the ranges of values to consider:

Y1 := Vector[column]( n-1, i -> add( V[ K[i]..K[i+1]-1 ] ) / ( K[i+1] - K[i] ) );

Thus, the consolidated Matrix is given by:

B := < X1 | Y1 >;
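
For the sample Matrix A above, the two y-values at x = 2 average to 3 and the two y-values at x = 5 average to 15/2, so the consolidated Matrix is:

B; # Matrix( [ [ 2, 3 ], [ 5, 15/2 ], [ 7, 8 ], [ 10, 10 ] ] )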

Spread Evenly:

To spread out the x-values, we can use a sequence with a fixed step size:

X2 := evalf( Vector[column]( [ seq( X1[1]..X1[-1], (X1[-1]-X1[1])/(m-1) ) ] ) );

For the y-values, we will interpolate:

Y2 := CurveFitting:-ArrayInterpolation( X1, Y1, X2, method=linear );

This gives us a new Matrix, which has both evenly-spaced x-values and consolidated y-values:

C := < X2 | Y2 >;

Plot:

plots:-display( Array( [
        plots:-pointplot( A, view=[0..10,0..10], color=green, symbol=solidcircle, symbolsize=15, title="Original Data", font=[Verdana,15] ),
        plots:-pointplot( B, view=[0..10,0..10], color=red, symbol=solidcircle, symbolsize=15, title="Consolidated Data", font=[Verdana,15] ),
        plots:-pointplot( C, view=[0..10,0..10], color=blue, symbol=solidcircle, symbolsize=15, title="Spread-Out Data", font=[Verdana,15] )
] ) );

Sample Data with Noise:

For another example, let’s take data points from a logistic curve, and add some noise:

# Noise generators
f := 0.5 * rand( -1..1 ):
g := ( 100 - rand( -15..15 ) ) / 100:

# Actual x-values
X := [ seq( i/2, i=1..20 ) ];

# Actual y-values
Y := evalf( map( x -> 4 / ( 1 + 3 * exp(-x) ), X ) );

# Matrix of points with noise
A := Matrix( zip( (x,y) -> [x,y], map( x -> x + f(), X ), map( y -> g() * y, Y ) ) );

Using the method outlined above, and the general procedures defined below, define:

B := ConsolidatedMatrix( A );
C := EquallySpaced( B, 21, method=linear );

Visually:

plots:-display( Array( [
    plots:-pointplot( A, view=[0..10,0..5], symbol=solidcircle, symbolsize=15, color=green, title="Original Data", font=[Verdana,15] ),
    plots:-pointplot( B, view=[0..10,0..5], symbol=solidcircle, symbolsize=15, color=red, title="Consolidated Data", font=[Verdana,15]  ),
    plots:-pointplot( C, view=[0..10,0..5], symbol=solidcircle, symbolsize=15, color=blue, title="Spread-Out Data", font=[Verdana,15] )
] ) );

  

Generalization:

Below are more generalized custom procedures, which are used in the above example. These also account for special cases.

# Takes a matrix with two columns, and returns a new matrix where the new x-values are unique and sorted,
# and each new y-value is the average of the old y-values corresponding to the x-value.
ConsolidatedMatrix := proc( A :: 'Matrix'(..,2), $ )

        local i, j, K, m, n, P, r, U, V, X, Y:
  
        # The number of rows in the original matrix.
        r := LinearAlgebra:-RowDimension( A ):

        # Return the original matrix should it only have one row.
        if r = 1 then
               return A:
        end if:

        # Permutation to sort first column of A.
        P := sort( A[..,1], ':-output'=permutation ):       

        # Sorted first column of A.
        U := A[P,1]:

        # Corresponding new second column of A.
        V := A[P,2]:

        # Return the sorted matrix should all the x-values be distinct.
        if numelems( convert( U, ':-set' ) ) = r then
               return < U | V >:
        end if:

        # Indices of first occurrences for values in U. The element m+1 is appended for convenience.
        m := numelems( U ):
        K := [ 1, op( select( i -> ( U[i] > U[i-1] ), [ seq( j, j=2..m ) ] ) ), m+1 ]:
        n := numelems( K ):

        # Consolidated first column.
        X := U[K[1..-2]]:

        # Determine the consolidated second column, using the average y-value.
        Y := Vector[':-column']( n-1, i -> add( V[ K[i]..K[i+1]-1 ] ) / ( K[i+1] - K[i] ) ):

        return < X | Y >:

end proc:
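
As a quick check, applying ConsolidatedMatrix() to the sample Matrix from the start of this post reproduces the Matrix B obtained from the step-by-step construction above:

ConsolidatedMatrix( < 2, 5; 5, 8; 2, 1; 7, 8; 10, 10; 5, 7 > ); # Matrix( [ [ 2, 3 ], [ 5, 15/2 ], [ 7, 8 ], [ 10, 10 ] ] )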

# Procedure which takes a matrix with two columns, and returns a new matrix of specified number of rows
# with equally-spaced x-values, and interpolated y-values.
# It accepts options that can be passed to ArrayInterpolation().
EquallySpaced := proc( M :: 'Matrix'(..,2), m :: posint )

        local A, i, r, U, V, X, Y:

        # Consolidated matrix, the corresponding number of rows, and the columns.
        A := ConsolidatedMatrix( M ):
        r := LinearAlgebra:-RowDimension( A ):
        U, V := evalf( A[..,1] ), evalf( A[..,2] ):

        # If the consolidated matrix has only one row, return it.
        if r = 1 then
               return A:
        end if:

        # If m = 1, i.e. only one equally-spaced point is requested, then return a matrix of the averages.
        if m = 1 then
               return 1/r * Matrix( [ [ add( U ), add( V ) ] ] ):
        end if:

        # Equally-spaced x-values.
        X := Vector[':-column']( [ seq( U[1] + (i-1) * (U[-1]-U[1]) / (m-1), i = 1..m ) ] ):

        # Interpolated y-values.
        Y := CurveFitting:-ArrayInterpolation( U, V, X, _rest ):    

        return < X | Y >:

end proc:
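
Since EquallySpaced() passes any additional arguments on to CurveFitting:-ArrayInterpolation() via _rest, other interpolation methods can be requested in the same call. For example (a sketch, reusing the noisy Matrix A from above):

EquallySpaced( A, 21, method=spline ); # spline interpolation instead of linear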


Maple users frequently solve differential equations. If you want to use the results later in Maple, you need to deconstruct the solution and then assign the functions yourself -- something that isn't done automatically. We wrote a multi-purpose routine to help you out. For instance, suppose you solve a simple linear system of equations:

restart;

eqs := { x + y = 3, x - y = 1 };
soln := solve( eqs ); # { x = 2, y = 1 }
x, y; # plain x and y

To assign the values from the solution to the corresponding variables:

assign( soln );
x, y; # 2, 1

This won't work for solutions of differential equations:

restart;

sys := { D(x)(t) = y(t), D(y)(t) = -x(t), x(0) = 1, y(0) = 0 };
soln := dsolve( sys ); # { x(t) = cos(t), y(t) = -sin(t) }
assign( soln );
x(s), y(s); # plain x(s) and y(s)

To make this work, we wrote this multi-purpose routine:

restart;

# Type for a variable expression, e.g. x=5.
TypeTools:-AddType( 'varexpr', u -> type( u, 'And'('name','Non'('constant'))='algebraic' ) ):

# Type for a function expression, e.g. f(x)=x^2.
TypeTools:-AddType( 'funcexpr', u -> type( u, 'function'('And'('name','Non'('constant')))='algebraic' ) ):
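
# Quick check of the two types defined above.
type( x = 5, 'varexpr' );           # true
type( f(x,y) = x + y, 'funcexpr' ); # true
type( Pi = 3, 'varexpr' );          # false, since Pi is a constant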

# Procedure to assign variable and function expressions.
my_assign := proc( u :: {
        varexpr, 'list'(varexpr), 'rtable'(varexpr), 'set'(varexpr),
        funcexpr, 'list'(funcexpr), 'rtable'(funcexpr), 'set'(funcexpr)
}, $ )

        local F, L, R, V:       

        # Map the procedure if input is a data container, or apply regular assign(), where applicable.
        if not u :: {'varexpr','funcexpr'} then
               map( procname, u ):
               return NULL:
        elif u :: 'varexpr' then
               assign( u ):
               return NULL:
        end if:       

        L, R := lhs(u), rhs(u):
        F := op(0,L): 
        V := [ op( L ) ]: # arguments of the function, in the order in which they appear

        map( assign, F, unapply( R, V ) ):
        return NULL:

end proc:

# Example 1.

eqs := { x + y = 3, x - y = 1 };
my_assign( solve( eqs ) );
'x' = x, 'y' = y; # x=2, y=1

# Example 2.

unassign( 'x', 'y' ):
E := [ f(x,y) = x + y, g(x,y) = x - y ];
my_assign( E );
'f(u,v)' = f(u,v), 'g(u,v)' = g(u,v); # f(u,v)=u+v, g(u,v)=u-v

# Example 3.

sys := { D(x)(t) = y(t), D(y)(t) = -x(t), x(0) = 1, y(0) = 0 };
soln := dsolve( sys );
my_assign( soln ):
'x(s)' = x(s); # x(s)=cos(s)
'y(s)' = y(s); # y(s)=-sin(s)