Product Tips & Techniques

Tips and Tricks on how to get the most out of Maple and MapleSim

We are currently in the process of updating the support FAQs at https://faq.maplesoft.com. We’ve been working on updating the existing content for clarity, and have added several new articles already.

 

The majority of our FAQs come from questions people ask us in Technical Support at support@maplesoft.com, but we’d also like to add content from other sources.

Since we have such a great community here at MaplePrimes, we wanted to reach out and ask if there are any articles or questions that you'd like to see added to our FAQ.

 

We look forward to hearing your feedback!

I experienced a significant obstacle while trying to generate independent random samples with Statistics:-Sample on different nodes of a Grid multi-processing environment. After many hours of trial-and-error, I discovered an astonishing workaround, and I achieved excellent time and memory performance. Since this seems like a generally useful computation, I thought that it was worthy of a Post.

This Post is also worth reading to learn how to use Grid when you need to initialize a substantial environment on each node before using Grid:-Map or Grid:-Seq.

All remaining details are in the following worksheet.
 

How to use Statistics:-Sample in the `Grid` environment

Author: Carl Love <carl.j.love@gmail.com> 1 August 2019

 

I experienced a significant obstacle while trying to generate independent random samples with Statistics:-Sample on the nodes of a multi-processor Grid (on a single computer). After several hours of trial-and-error, I discovered that two things are necessary to do this:

1. The random number generator needs to be seeded differently in each node. (The reason for this is easy to understand.)

2. The random variables generated by Statistics:-RandomVariable need to have different names in each node. This one is mind-boggling to me. After all, each node has its own kernel and thus its own memory. It's as if the names of the random variables were stored in a disk file that all kernels access, even though the generator has been seeded differently in each node.

 

Once these things were done, the time and memory performance of the computation were excellent.
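In outline, a minimal sketch of just these two fixes might look like the following (the names InitNode, RV, and MySample are illustrative only, and the calling pattern simply mirrors the Grid:-Set / Grid:-Run / Grid:-Seq usage shown in the full worksheet below, which gives the complete, tested setup):

#A minimal sketch of the two fixes (illustrative names; see the full worksheet below):
InitNode:= proc()
local node:= Grid:-MyNode();
   randomize(randomize() + node);  #1. Seed the RNG differently on each node.
   #2. Give the random variable a name that differs from node to node
   #(names built with || are automatically global):
   RV||node:= Statistics:-RandomVariable('Normal'(0,1));
   :-MySample:= (m::posint)-> Statistics:-Sample(RV||node, m);
   node
end proc:
Grid:-Set(InitNode):
Grid:-Run(InitNode, [], 'wait'):             #initialize every node
S:= [Grid:-Seq(MySample(k), k= [100 $ 4])]:  #4 independent samples, each of size 100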

restart
:

Digits:= 15
:

#Specify the size of the computation:
(n1,n2,n3):= (100, 100, 1000):
# n1 = size of each random sample;
# n2 = number of samples in a batch;
# n3 = number of batches.

#
#Procedure to initialize needed globals on each node:
Init:= proc(n::posint)
local node:= Grid:-MyNode();
   #This is wrapped in parse so that it'll act globally. Otherwise, an environment
   #variable would be reset when this procedure ends.
   parse("Digits:= 15;", 'statement');

   randomize(randomize()+node); #Initialize independent RNG for this node.
   #If repeatability of results is desired, remove the inner randomize().

   (:-X,:-Y):= Array(1..n, 'datatype'= 'hfloat') $ 2;

   #Perhaps due to some oversight in the design of Statistics, it seems necessary that
   #r.v.s in different nodes **need different names** in order to be independent:
   N||node:= Statistics:-RandomVariable('Normal'(0,1));
   :-TRS:= (X::rtable)-> Statistics:-Sample(N||node, X);
   #To verify that different names are needed, change N||node to N in both lines.
   #Doing so, each node will generate identical samples!

   #Perform some computation. For the pedagogical purpose of this worksheet, all that
   #matters is that it's some numeric computation on some Arrays of random Samples.
   :-GG:= (X::Array, Y::Array)->
      evalhf(
         proc(X::Array, Y::Array, n::posint)
         local s, k, S:= 0, p:= 2*Pi;
            for k to n do
               s:= sin(p*X[k]);  
               S:= S + X[k]^2*cos(p*Y[k])/sqrt(2-sin(s)) + Y[k]^2*s
            od
         end proc
         (X, Y, n)
      )      
   ;
   #Perform a batch of the above computations, and somehow numerically consolidate the
   #results. Once again, pedagogically it doesn't matter how they're consolidated.  
   :-TRX1:= (n::posint)-> add(GG(TRS(X), TRS(Y)), 1..n);
   
   #It doesn't matter much what's returned. Returning `node` lets us verify that we're
   #actually running this on a grid.
   return node
end proc
:

The procedure Init above uses the :- syntax to set variables globally for each node. The variables set are X, Y, N||node, TRS, GG, and TRX1. Names constructed by concatenation, such as N||node, are always global, so :- isn't needed for those.
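To illustrate that scoping rule in isolation (ScopeDemo, V, and W are throwaway names, not part of the worksheet):

#Illustration only: :- forces an assignment to the global name, and a name
#built with || is global even without :- .
ScopeDemo:= proc(k::posint)
local V:= "local V";
   :-V:= "global V";      #assigns the global V, not the local one
   W||k:= k^2;            #W||k (here W7) is automatically a global name
   V                      #the local V is untouched
end proc:
ScopeDemo(7);    # returns "local V"
:-V, W7;         # "global V", 49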

#
#Time the initialization:
st:= time[real]():
   #Send Init to each node, but don't run it yet:
   Grid:-Set(Init)
   ;
   #Run Init on each node:
   Nodes:= Grid:-Run(Init, [n1], 'wait');
time__init_Grid:= time[real]() - st;

Array(%id = 18446745861500764518)

1.109

The only purpose of the array Nodes is to let us count the nodes and to verify that Grid:-MyNode() returned a different value on each node.

num_nodes:= numelems(Nodes);

8

#Time the actual execution:
st:= time[real]():
   R1:= [Grid:-Seq['tasksize'= iquo(n3, num_nodes)](TRX1(k), k= [n2 $ n3])]:
time__run_Grid:= time[real]() - st

4.440

#Just for comparison, run it sequentially:
st:= time[real]():
   Init(n1):
time__init_noGrid:= time[real]() - st;

st:= time[real]():
   R2:= [seq(TRX1(k), k= [n2 $ n3])]:
time__run_noGrid:= time[real]() - st;

0.16e-1

24.483

R1 and R2 will be different because different random numbers were used, but they should have similar histograms.

plots:-display(
   Statistics:-Histogram~(
      <R1 | R2>, #side-by-side plots
      'title'=~ <<"With Grid\n"> | <"Without Grid\n">>,
      'gridlines'= false
   )
);

(Plot output deleted because MaplePrimes cannot handle side-by-side plots!)

They look similar enough to me!

 

Let's try to quantify the benefit of using Grid:

speedup_factor:= time__run_noGrid / time__run_Grid;

5.36319824753560

Express that as a fraction of the theoretical maximum speedup:

efficiency:= speedup_factor / num_nodes;

.670399780941950

I think that that's really good!

 

The memory usage of this code is insignificant, which can be verified with an external memory monitor such as Windows Task Manager. It's just a little bit more than what is needed to start a kernel on each node. It's also possible to measure the memory usage programmatically. Doing so for a Grid:-Seq computation is a little bit beyond the scope of this worksheet.

 


 

Download GridRandSample.mw

Here are the histograms:

 

Animated 3-D cascade of dolls

 

3d_matryoshkas_en.mws

 

We occasionally get asked questions about methods of Perturbation Theory in Maple, including the Lindstedt-Poincaré Method. Presented here is the most famous application of this method.

Introduction

At the dawn of the 20th century, one problem that bothered astronomers and astrophysicists was the precession of the perihelion of Mercury. Even when considering the gravity from other planets and objects in the solar system, the equations from Newtonian Mechanics could not account for the discrepancy between the observed and predicted precession.

One of the early successes of Einstein's General Theory of Relativity was that the new model was able to capture the precession of Mercury, in addition to the orbits of all the other planets. The Einsteinian model, when applied to the orbit of Mercury, was in fact a non-negligible perturbation of the old model. In this post, we show how to use Maple to compute the perturbation, and derive the formula for calculating the precession.

In polar coordinates, the Einsteinian model can be written in the form u''(theta) + omega^2*u(theta) = 1 + epsilon*u(theta)^2 (with omega = 1), where u(theta) = a(1-e^2)/r(theta), theta is the polar angle, r(theta) is the radial distance, a is the semi-major axis length, and e is the eccentricity of the orbit:
 

# Original system.
f := (u,epsilon) -> -1 - epsilon * u^2;
omega := 1;
u0, du0 := 1 + e, 0;
de1 := diff( u(theta), theta, theta ) + omega^2 * u(theta) + f( u(theta), epsilon );
ic1 := u(0) = u0, D(u)(0) = du0;


The small parameter epsilon (along with the amount of precession) can be found in terms of the physical constants, but for now we leave it arbitrary:
 

# Parameters.
P := [
    a = 5.7909050e10 * Unit(m),
    c = 2.99792458e8 * Unit(m/s),
    e = 0.205630,
    G = 6.6740831e-11 * Unit(N*m^2/kg^2), 
    M = 1.9885e30 * Unit(kg), 
    alpha = 206264.8062, 
    beta = 415.2030758 
];
epsilon = simplify( eval( 3 * G * M / a / ( 1 - e^2 ) / c^2, P ) ); # approximately 7.987552635e-8


Note that c is the speed of light, G is the gravitational constant, M is the mass of the sun, alpha is the number of arcseconds per radian, and beta is the number of revolutions per century for Mercury.

We will show that the radial distance predicted by Einstein's model is close to that of the ellipse predicted by Newton's model, but that the perturbation accounts for the precession (42.98 arcseconds/century). During one revolution, the precession is approximately
 

sigma = simplify( eval( 6 * Pi * G * M / a / ( 1 - e^2 ) / c^2, P ) ); # approximately 5.018727337e-7


and so, per century, it is alpha*beta*sigma, which is approximately 42.98 arcseconds/century.
It is worth checking out this question on Stack Exchange, which includes an animation generated numerically using Maple for a similar problem that has a more pronounced precession.
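As a quick numeric check of that product, using the same eval/simplify construction as above:

# Precession per century, in arcseconds:
evalf( simplify( eval( alpha * beta * 6 * Pi * G * M / a / ( 1 - e^2 ) / c^2, P ) ) ); # approximately 42.98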

Lindstedt-Poincaré Method in Maple

In order to obtain a periodic perturbation solution of the perturbed differential equation u'' + omega^2*u = 1 + epsilon*u^2, we need to write both u and omega as series in the small parameter epsilon; otherwise, the solution would contain unbounded oscillatory terms ("secular terms"). Using the Lindstedt-Poincaré Method, we substitute arbitrary series in epsilon for u and omega into the initial value problem, and then choose the coefficient constants/functions so that the initial value problem is satisfied and no secular terms appear. Note that a first-order approximation already agrees well with the measured precession, but higher-order approximations can be obtained.
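Concretely, the ansatz is phi = b*theta with b = 1 + epsilon*b[1] + epsilon^2*b[2] + ..., together with u = u[0] + epsilon*u[1] + epsilon^2*u[2] + ...; the constants b[1], b[2], ... are chosen, order by order, so that the resonant terms that would otherwise grow without bound are eliminated. (The symbols u[0], u[1], b[1], ... here are just illustrative names for the series coefficients; the code below introduces the frequency series through the parameter b.)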

To perform this in Maple, we can do the following:
 

# Transformed system, with the new independent variable being the original times a series in epsilon.
de2 := op( PDEtools:-dchange( { theta = phi/b }, { de1 }, { phi }, params = { b, epsilon }, simplify = true ) );
ic2 := ic1;

# Order and