Maplesoft Blog

The Maplesoft blog contains posts coming from the heart of Maplesoft. Find out what is coming next in the world of Maple, and get the best tips and tricks from the Maple experts.

We are currently in the process of updating the support FAQs at https://faq.maplesoft.com. We’ve been working on updating the existing content for clarity, and have added several new articles already.

 

The majority of our FAQs come from questions people ask us in Technical Support at support@maplesoft.com, but we’d also like to add content from other sources.

Since we have such a great community here at MaplePrimes, we wanted to reach out and ask if there are any articles or questions that you'd like to see added to our FAQ.

 

We look forward to hearing your feedback!

We occasionally get asked questions about methods of Perturbation Theory in Maple, including the Lindstedt-Poincaré Method. Presented here is the most famous application of this method.

Introduction

At the dawn of the 20th century, one problem that bothered astronomers and astrophysicists was the precession of the perihelion of Mercury. Even when considering the gravity from other planets and objects in the solar system, the equations of Newtonian mechanics could not account for the discrepancy between the observed and predicted precession.

One of the early successes of Einstein's General Theory of Relativity was that the new model was able to capture the precession of Mercury, in addition to the orbits of all the other planets. The Einsteinian model, when applied to the orbit of Mercury, was in fact a non-negligible perturbation of the old model. In this post, we show how to use Maple to compute the perturbation, and derive the formula for calculating the precession.

In polar coordinates, the Einsteinian model can be written in the following form, where u(theta)=a(1-e^2)/r(theta), with theta being the polar angle, r(theta) being the radial distance, a being the semi-major axis length, and e being the eccentricity of the orbit:
 

# Original system.
f := (u,epsilon) -> -1 - epsilon * u^2;
omega := 1;
u0, du0 := 1 + e, 0;
de1 := diff( u(theta), theta, theta ) + omega^2 * u(theta) + f( u(theta), epsilon );
ic1 := u(0) = u0, D(u)(0) = du0;


The small parameter epsilon (along with the amount of precession) can be found in terms of the physical constants, but for now we leave it arbitrary:
 

# Parameters.
P := [
    a = 5.7909050e10 * Unit(m),
    c = 2.99792458e8 * Unit(m/s),
    e = 0.205630,
    G = 6.6740831e-11 * Unit(N*m^2/kg^2), 
    M = 1.9885e30 * Unit(kg), 
    alpha = 206264.8062, 
    beta = 415.2030758 
];
epsilon = simplify( eval( 3 * G * M / a / ( 1 - e^2 ) / c^2, P ) ); # approximately 7.987552635e-8


Note that c is the speed of light, G is the gravitational constant, M is the mass of the sun, alpha is the number of arcseconds per radian, and beta is the number of revolutions per century for Mercury.

We will show that the radial distance, predicted by Einstein's model, is close to that for an ellipse, as predicted by Newton's model, but the perturbation accounts for the precession (42.98 arcseconds/century). During one revolution, the precession can be determined to be approximately
 

sigma = simplify( eval( 6 * Pi * G * M / a / ( 1 - e^2 ) / c^2, P ) ); # approximately 5.018727337e-7


and so, per century, it is alpha*beta*sigma, which is approximately 42.98 arcseconds/century.
It is worth checking out this question on Stack Exchange, which includes an animation generated numerically using Maple for a similar problem that has a more pronounced precession.

Lindstedt-Poincaré Method in Maple

In order to obtain a periodic perturbation solution of the perturbed differential equation u''+omega^2*u=1+epsilon*u^2, we need to write both u and omega as series in the small parameter epsilon; otherwise, the solution would contain unbounded oscillatory terms ("secular terms"). Using the Lindstedt-Poincaré method, we substitute arbitrary series in epsilon for u and omega into the initial value problem, and then choose the coefficient constants/functions so that the initial value problem is satisfied and there are no secular terms. Note that a first-order approximation already provides good agreement with the measured precession, but higher-order approximations can be obtained.

To perform this in Maple, we can do the following:
 

# Transformed system, with the new independent variable being the original times a series in epsilon.
de2 := op( PDEtools:-dchange( { theta = phi/b }, { de1 }, { phi }, params = { b, epsilon }, simplify = true ) );
ic2 := ic1;

# Order and series for the perturbation solutions of u(phi) and b. Here, n = 1 is sufficient.
n := 1;
U := unapply( add( p[k](phi) * epsilon^k, k = 0 .. n ), phi );
B := omega + add( q[k] * epsilon^k, k = 1 .. n );

# DE in terms of the series.
de3 := series( eval( de2, [ u = U, b = B ] ), epsilon = 0, n + 1 );

# Successively determine the coefficients p[k](phi) and q[k].
for k from 0 to n do

    # Specify the initial conditions for the kth DE, which involves p[k](phi).
    # The original initial conditions appear only in the coefficient functions with index k = 0,
    # and those for k > 0 are all zero.
    if k = 0 then
        ic3 := op( expand( eval[recurse]( [ ic2 ], [ u = U, epsilon = 0 ] ) ) );
    else
        ic3 := p[k](0) = 0, D(p[k])(0) = 0;
    end if:

    # Solve kth DE, which can be found from the coefficients of the powers of epsilon in de3, for p[k](phi).
    # Then, update de3 with the new information.
    soln := dsolve( { simplify( coeff( de3, epsilon, k ) ), ic3 } );
    p[k] := unapply( rhs( soln ), phi );
    de3 := eval( de3 );

    # Choose q[k] to eliminate secular terms. To do this, use the frontend() command to keep only the terms in p[k](phi)
    # which have powers of phi, and then solve for the value of q[k] which makes the expression zero.
    # Note that frontend() masks the phi-dependence within the sine and cosine terms.
    # Note also that this method may need to be amended, based on the form of the terms in p[k](phi).
    if k > 0 then
        q[k] := solve( frontend( select, [ has, p[k](phi), phi ] ) = 0, q[k] );
        de3 := eval( de3 );
    end if;

end do:

# Final perturbation solution.
'u(theta)' = eval( eval( U(phi), phi = B * theta ) ) + O( epsilon^(n+1) );

# Angular precession in one revolution.
sigma := convert( series( 2 * Pi * (1/B-1), epsilon = 0, n + 1 ), polynom ):
epsilon := 3 * G * M / a / ( 1 - e^2 ) / c^2;
'sigma' = sigma;

# Precession per century.
xi := simplify( eval( sigma * alpha * beta, P ) ); # returns approximately 42.98


Maple Worksheet: Lindstedt-Poincare_Method.mw

I just wanted to let everyone know that the Call for Papers and Extended Abstracts deadline for the Maple Conference has been extended to June 14.

The papers and extended abstracts presented at the 2019 Maple Conference will be published in the Communications in Computer and Information Science Series from Springer. We welcome topics that fall into the following broad categories:

  • Maple in Education
  • Algorithms and Software
  • Applications of Maple

You can learn more about the conference or submit your paper or abstract here: 

https://www.maplesoft.com/mapleconference/Papers-and-Presentations.aspx

Hope to hear from you soon!

We’re excited to announce that we have just released a new version of MapleSim. The MapleSim 2019 family of products helps you get the answers you need from your models, with improved performance, increased modeling scope, and more ways to connect to your existing toolchain. Improvements include:
 

  • Faster simulation speeds, both within MapleSim and when using exported MapleSim models in other tools

  • More simulation options are now available when running models imported from other systems

  • Additional options for FMI connectivity, including support for variable-step solvers with imported FMUs, and the ability to export models that use variable-step solvers via the MapleSim FMI Connector add-on

  • Improved interactive analysis apps for Monte Carlo analysis, Optimization, and Parameter Sweep

  • Expanded modeling scope in hydraulics, electrical, multibody, and more, with new built-in components and support for more external Modelica libraries

  • New add-on library: MapleSim Engine Dynamics Library from Modelon provides specialized tools for modeling, simulating, and analyzing the performance of combustion engines

  • New add-on connector: The B&R MapleSim Connector gives automation projects a powerful, model-based ability to test and visualize control strategies from within B&R Automation Studio
     

See What’s New in MapleSim 2019 for more information about these and other improvements!

Submit your paper or extended abstract to the Maple Conference!

The papers and extended abstracts presented at the 2019 Maple Conference will be published in the Communications in Computer and Information Science Series from Springer. 

The deadline to submit is May 27, 2019. 

This conference is an amazing opportunity to contribute to the development of technology in academia. I hope that you, or your colleagues and associates, will consider making a contribution.

We welcome topics that fall into the following broad categories:

  • Maple in Education
  • Algorithms and Software
  • Applications of Maple

You can learn more about the conference or submit your paper or abstract here: 

https://www.maplesoft.com/mapleconference/Papers-and-Presentations.aspx


Maple 2019 has a new add-on package Maple Quantum Chemistry Toolbox from RDMChem for computing the energies and properties of molecules.  As a member of the team at RDMChem that developed the package, I would like to tell the story of its origins and provide a brief demonstration of the package.  

 

Thinking about Quantum Chemistry at Harvard

 

The story of the Maple Quantum Chemistry Toolbox begins with my graduate studies in Chemical Physics at Harvard University in the late 1990s.  Even in 1998 programs for computing the energies and properties of molecules were extremely complicated and nonintuitive.  Many of the existing programs had begun in the 1970s on computers whose programs would be recorded on punchcards.

Fig. 1: Used Punchcard by Pete Birkinshaw from Manchester, UK CC BY 2.0

 

Even today some of these programs have remnants of their early versions such as input files that must start on the second column to account for the margin of the now non-existent punchcards.  As a student, I made a bound copy of one of these manuals at a local Kinkos photocopy shop and later found myself in Harvard Yard, thinking that there must be a better way to present quantum chemistry computations.  The idea for a Maple-like package for quantum chemistry was born in that moment.

 

At the same time I was learning about something called the two-electron reduced density matrix (2-RDM).  The basic variable in quantum chemistry is the wave function, which is the probability amplitude for finding each of the electrons in a molecule.  Because electrons are indistinguishable with pairwise interactions, the wave function contains much more information than is needed for computing the energies and electronic properties of molecules.  The energies and properties of any molecule with any number of electrons can be expressed as a functional of a two-electron matrix, the 2-RDM [1-3].  A quantum chemistry based on the 2-RDM, it was known, would have potentially significant advantages over wave function calculations in terms of accuracy and computational cost, especially for molecules far from the mean-field limit.  A 2-RDM approach to quantum chemistry became the focus of my Ph.D. thesis.

 

Representing Many Electrons with Only Two Electrons

 

The idea of using the 2-RDM in quantum chemistry can be attributed to four scientists: two physicists Kodi Husimi and Joseph Mayer, a chemist Per-Olov Lowdin, and a mathematician John Coleman [1-3].  In the early 1940s Husimi first published the idea in a Japanese physics journal, but in the midst of World War II the paper was not widely disseminated in the West.  In the summer of 1951 John Coleman, while attending a physics conference at Chalk River, realized that the ground-state energy of any atom or molecule could be expressed as a functional of the 2-RDM, and similar ideas later occurred to Per-Olov Lowdin and Joseph Mayer, who published their ideas in Physical Review in 1955.  It was soon recognized that computing the ground-state energy of an atom or molecule with the 2-RDM was potentially difficult because not every two-electron density matrix corresponds to an N-electron density matrix or wave function.  The search for the appropriate constraints on the 2-RDM, known as N-representability conditions, became known as the N-representability problem [1-3].

 

Beginning in the late 1990s and early 2000s, Carmela Valdemoro and Diego Alcoba at the Consejo Superior de Investigaciones Científicas (Madrid, Spain), Hiroshi Nakatsuji, Koji Yasuda, and Maho Nakata at Kyoto University (Kyoto, Japan), Jerome Percus and Bastiaan Braams at the Courant Institute (New York, USA), John Coleman and Robert Erdahl at Queens University (Kingston, Canada), and my research group and I at The University of Chicago (Chicago, USA) began to make significant progress in the computation of the 2-RDM without computing the many-electron wave function [1-3].  Further contributions were made by Eric Cances and Claude Le Bris at CERMICS, Ecole Nationale des Ponts et Chaussées (Marne-la-Vallée, France), Paul Ayers at McMaster University (Hamilton, Canada), and Dimitri Van Neck at the University of Ghent (Ghent, Belgium) and their research groups.  By 2014 several powerful 2-RDM methods had emerged for the computation of molecules.  The Army Research Office (ARO) issued a proposal call for a company to develop a modern, built-from-scratch package for quantum chemistry that would contain two newly developed 2-RDM-based methods from our group: the parametric 2-RDM method [1] and the variational 2-RDM method with a fast algorithm for solving the semidefinite program [4,5,6].   The company RDMChem LLC was founded to work with the ARO to develop such a package built around RDMs, and hence, the name of the company RDMChem was selected as a hybrid of the RDM abbreviation for Reduced Density Matrices and the Chem colloquialism for Chemistry.  To achieve a really new design for an electronic structure package with access to numeric and symbolic computations as well as advanced visualizations, the team at RDMChem and I developed a partnership with Maplesoft to build something new that became the Maple Quantum Chemistry Package (or Toolbox), which was released with Maple 2019 on Pi Day.

 

Maple Quantum Chemistry Toolbox

The Maple Quantum Chemistry Toolbox provides a powerful, parallel platform for quantum chemistry calculations that is directly integrated into the Maple 2019 environment.  It is optimized for both cutting-edge research and chemistry education.  The Toolbox can be used from the worksheet, document, or command-line interfaces.  Plus there is a Maplet interface for rapid exploration of molecules and their properties.  Figure 2 shows the Maplet interface being applied to compute the ground-state energy of 1,3-dibromobenzene by density functional theory (DFT) in a 6-31g basis set.

Fig. 2: Maplet interface to the Quantum Chemistry Toolbox 2019, showing a density functional theory (DFT) calculation         

After entering a name into the text box labeled Name, the user can click on: (1) the button Web to import the geometry from an online database containing more than 96 million molecules,  (2) the button File to read the geometry from a standard XYZ file, or (3) the button Input to enter the geometry.  As soon as the geometry is entered, the Maplet displays a 3D picture of the molecule in the window on the right of the options.  Dropdown menus allow the user to select the basis set, the electronic structure method, and a boolean for geometry optimization.  The user can click on the Compute button to perform the computation.  When the quantum computation completes, the total energy appears in the box labeled Total Energy.  The dropdown menu Analyze contains a list of data tables, plots, and animations that can be selected and then displayed by clicking the Analyze button.  The Maplet interface contains nearly all of the options available in the worksheet interface.   The Help Pages of the Toolbox include extensive curricula and lessons that can be used in undergraduate, graduate, and even high school chemistry courses.  Next we look at some sample calculations in the worksheet interface.

 

Reproducing an Early 2-RDM Calculation

 

One of the earliest variational calculations of the 2-RDM was performed in 1975 by Garrod, Mihailović, and Rosina [1-3].  They minimized the electronic ground-state energy of the 4-electron beryllium atom as a functional of only two electrons, the 2-RDM.  They imposed semidefinite constraints on the particle-particle (D), hole-hole (Q), and particle-hole (G) metric matrices.  They solved the resulting optimization problem of minimizing the energy as a linear function of the 2-RDM subject to the semidefinite constraints, known as a semidefinite program, by a cutting-plane algorithm.  Due to limitations of the cutting-plane algorithm and computers circa 1975, the calculation was a difficult one, likely taking a significant amount of computer time and memory.

 

With the Quantum Chemistry Toolbox we can use the command Variational2RDM to reproduce the calculation on a Windows laptop.  First, in a Maple 2019 worksheet we load the commands of the Add-on Quantum Chemistry Toolbox:

with(QuantumChemistry);

[AOLabels, ActiveSpaceCI, ActiveSpaceSCF, AtomicData, BondAngles, BondDistances, Charges, ChargesPlot, CorrelationEnergy, CoupledCluster, DensityFunctional, DensityPlot3D, Dipole, DipolePlot, Energy, FullCI, GeometryOptimization, HartreeFock, Interactive, Isotopes, MOCoefficients, MODiagram, MOEnergies, MOIntegrals, MOOccupations, MOOccupationsPlot, MOSymmetries, MP2, MolecularData, MolecularGeometry, NuclearEnergy, NuclearGradient, Parametric2RDM, PlotMolecule, Populations, RDM1, RDM2, ReadXYZ, SaveXYZ, SearchBasisSets, SearchFunctionals, SkeletalStructure, Thermodynamics, Variational2RDM, VibrationalModeAnimation, VibrationalModes, Video]

(1.1)

Then we define the atom (or molecule) using a Maple list of lists that we assign to the variable atom:

atom := [["Be",0,0,0]];

[["Be", 0, 0, 0]]

(1.2)

 

We can then perform the variational 2-RDM method with the Variational2RDM command to compute the ground-state energy and properties of beryllium in a minimal basis set like the one used by Rosina and his collaborators.  By default the method uses the D, Q, and G N-representability conditions and the minimal "sto-3g" basis set.  The calculation, which completes in seconds, contains a wealth of information in the form of a convenient Maple table that we assign to the variable data.

data := Variational2RDM(atom);

table(%id = 18446744313704784158)

(1.3)

 

The table contains the total ground-state energy of the beryllium atom in the atomic unit of energy (hartrees)

data[e_tot];

HFloat(-14.40370016681039)

(1.4)

 

We also have the atomic orbitals (AOs) employed in the calculation

data[aolabels];

Vector(5, {(1) = "0 Be 1s", (2) = "0 Be 2s", (3) = "0 Be 2px", (4) = "0 Be 2py", (5) = "0 Be 2pz"})

(1.5)

 

as well as the Mulliken populations of these orbitals

data[populations];

Vector(5, {(1) = 1.9995807710723152, (2) = 1.7913484714571852, (3) = 0.6969023822632789e-1, (4) = 0.6969026475511847e-1, (5) = 0.6969029119010149e-1})

(1.6)

 

We see that 2 electrons are located in the 1s orbital, 1.8 electrons in the 2s orbital, and about 0.2 electrons in the 2p orbitals.  By default the calculation also returns the 1-RDM

data[rdm1];

Matrix(5, 5, {(1, 1) = 1.9999258249189755, (1, 2) = -0.37784860208539793e-2, (1, 3) = 0., (1, 4) = 0., (1, 5) = 0., (2, 1) = -0.37784860208539793e-2, (2, 2) = 1.7910034176105256, (2, 3) = 0., (2, 4) = 0., (2, 5) = 0., (3, 1) = 0., (3, 2) = 0., (3, 3) = 0.6969023822632789e-1, (3, 4) = 0., (3, 5) = 0., (4, 1) = 0., (4, 2) = 0., (4, 3) = 0., (4, 4) = 0.6969026475511847e-1, (4, 5) = 0., (5, 1) = 0., (5, 2) = 0., (5, 3) = 0., (5, 4) = 0., (5, 5) = 0.6969029119010149e-1})

(1.7)

 

The eigenvalues of the 1-RDM are the natural orbital occupations

LinearAlgebra:-Eigenvalues(data[rdm1]);

Vector(5, {(1) = 1.9999941387490443+0.*I, (2) = 1.7909351037804568+0.*I, (3) = 0.6969023822632789e-1+0.*I, (4) = 0.6969026475511847e-1+0.*I, (5) = 0.6969029119010149e-1+0.*I})

(1.8)

 

We can display the density of the 2s-like 2nd natural orbital using the DensityPlot3D command providing the atom, the data, and the orbitalindex keyword

DensityPlot3D(atom,data,orbitalindex=2);

 

 

Similarly,  using the DensityPlot3D command, we can readily display the 2p-like 3rd natural orbital

DensityPlot3D(atom,data,orbitalindex=3);

 

 

By using Maple keyword arguments in the Variational2RDM command, we can readily change the basis set, use point-group symmetry, add active orbitals with or without self-consistent-field, change the N-representability conditions, as well as explore many other options.  Having reenacted one of the first variational 2-RDM calculations ever, let's examine a more complicated molecule.

 

Explosive TNT

 

We consider the molecule TNT that is used as an explosive. Using the command MolecularGeometry, we can import the experimental geometry of TNT from the online PubChem database.

mol := MolecularGeometry("TNT");

[["O", .5454, -3.514, 0.12e-2], ["O", .5495, 3.5137, 0.8e-3], ["O", 2.4677, -2.4539, -0.5e-3], ["O", 2.4705, 2.4513, 0.3e-3], ["O", -3.5931, -1.0959, 0.4e-3], ["O", -3.5922, 1.0993, 0.6e-3], ["N", 1.2142, -2.454, 0.2e-3], ["N", 1.217, 2.4527, 0], ["N", -2.9846, 0.15e-2, 0.1e-3], ["C", 1.2253, -0.6e-3, -0.9e-3], ["C", .5271, -1.2082, -0.8e-3], ["C", .5284, 1.2078, -0.8e-3], ["C", -1.5646, 0.8e-3, -0.4e-3], ["C", -.8678, -1.2074, -0.6e-3], ["C", -.8666, 1.2084, -0.6e-3], ["C", 2.7239, -0.16e-2, 0.11e-2], ["H", -1.4159, -2.1468, -0.3e-3], ["H", -1.4137, 2.1483, -0.3e-3], ["H", 3.1226, .2418, -.9891], ["H", 3.0863, .6934, .7662], ["H", 3.3154, -.8111, .4109]]

(1.9)

 

The command PlotMolecule generates a 3D ball-and-stick plot of the molecule

PlotMolecule(mol);

 

 

We perform a variational calculation of the 2-RDM of TNT in an active space of 10 electrons and 10 orbitals by setting the keyword active to the list [10,10].  The keyword casscf is set to true to optimize the active orbitals during the calculation.  The keyword basis is used to set the basis set to a minimal basis set sto-3g for illustration.   

data := Variational2RDM(mol, active=[10,10], casscf=true, basis="sto-3g");

table(%id = 18446744493271367454)

(1.10)

 

The ground-state energy of TNT in hartrees is

data[e_tot];

HFloat(-868.8629631593426)

(1.11)

 

Unlike beryllium, the electric dipole moment of TNT in debyes is nonzero

data[dipole];

Vector(3, {(1) = .5158925019252739, (2) = -0.5985274393363119e-1, (3) = .1277528280025474})

(1.12)

 

We can easily visualize the dipole moment relative to the molecule's ball-and-stick model with the DipolePlot command

DipolePlot(mol,method=Variational2RDM, active=[10,10], casscf=true, basis="sto-3g");

 

 

The 1-RDM is returned by default

data[rdm1];

_rtable[18446744313709602566]

(1.13)

 

The natural molecular-orbital (MO) occupations are the eigenvalues of the 1-RDM

data[mo_occ];

_rtable[18446744313709600150]

(1.14)

 

All of the occupations can be viewed at once by converting the Vector to a list

convert(data[mo_occ], list);

[HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(1.9110133620349001), HFloat(1.8984139688344246), HFloat(1.6231436866358906), HFloat(1.6158489471020905), HFloat(1.6145310163161273), HFloat(0.38920731792133734), HFloat(0.387039366894289), HFloat(0.37786347287813526), HFloat(0.09734187094597906), HFloat(0.08559699476985069), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0)]

(1.15)

 

We can visualize these occupations with the MOOccupationsPlot command

MOOccupationsPlot(mol,method=Variational2RDM, active=[10,10], casscf=true, basis="sto-3g");

 

 

The occupations, we observe, show significant deviations from 0 and 2, indicating that the electrons have substantial correlation beyond the mean-field (Hartree-Fock) limit.  The blue lines indicate the first N/2 spatial orbitals where N is the total number of electrons while the red lines indicate the remaining spatial orbitals.  We can visualize the highest "occupied" molecular orbital (58) with the DensityPlot3D command

DensityPlot3D(mol,data, orbitalindex=58);

 

 

Similarly, we can visualize the lowest "unoccupied" molecular orbital (59) with the DensityPlot3D command

DensityPlot3D(mol,data, orbitalindex=59);

 

 

Comparison of orbitals 58 and 59 reveals an increase in the number of nodes (changes in the phase of the orbitals denoted by green and purple), which reflects an increase in the energy of the orbital.

 

Looking Ahead

 

The Maple Quantum Chemistry Toolbox 2019, a new add-on for Maple 2019 from RDMChem, provides an easy-to-use, research-grade environment for the computation of the energies and properties of atoms and molecules.  In this blog we discussed its origins in graduate research at Harvard, its reproduction of an early 2-RDM calculation of beryllium, and its application to the explosive molecule TNT.  We have illustrated only some of the many features and electronic structure methods of the Maple Quantum Chemistry package.  There is much more chemistry and physics to explore.  Enjoy!

 

Selected References

 

[1] D. A. Mazziotti, Chem. Rev. 112, 244 (2012). "Two-electron Reduced Density Matrix as the Basic Variable in Many-Electron Quantum Chemistry and Physics"

[2]  Reduced-Density-Matrix Mechanics: With Application to Many-Electron Atoms and Molecules (Adv. Chem. Phys.) ; D. A. Mazziotti, Ed.; Wiley: New York, 2007; Vol. 134.

[3] A. J. Coleman and V. I. Yukalov, Reduced Density Matrices: Coulson’s Challenge (Springer-Verlag,  New York, 2000).

[4] D. A. Mazziotti, Phys. Rev. Lett. 106, 083001 (2011). "Large-scale Semidefinite Programming for Many-electron Quantum Mechanics"

[5] A. W. Schlimgen, C. W. Heaps, and D. A. Mazziotti, J. Phys. Chem. Lett. 7, 627-631 (2016). "Entangled Electrons Foil Synthesis of Elusive Low-Valent Vanadium Oxo Complex"

[6] J. M. Montgomery and D. A. Mazziotti, J. Phys. Chem. A 122, 4988-4996 (2018). "Strong Electron Correlation in Nitrogenase Cofactor, FeMoco"

 

Download QCT2019_PrimesV17_05.05.19.mw

While googling around for Season 8 spoilers, I found data sets that can be used to create a character interaction network for the books in the A Song of Ice and Fire series, and the TV show they inspired, Game of Thrones.

The data sets are the work of Dr Andrew Beveridge, an associate professor at Macalester College (check out his Network of Thrones blog).

You can create an undirected, weighted graph using this data and Maple's GraphTheory package.
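
For instance, a weighted, undirected graph can be built directly from a list of character pairs and interaction counts. The sketch below is illustrative only: the names and counts are made-up placeholders, not Dr Beveridge's actual data.

with(GraphTheory):

# Placeholder interaction data: [character 1, character 2, number of interactions].
interactions := [ ["Eddard", "Robert", 192], ["Eddard", "Catelyn", 88],
                  ["Catelyn", "Tyrion", 46], ["Tyrion", "Robert", 15] ]:

# Weighted undirected edges have the form [ {v1, v2}, weight ].
G := Graph( { seq( [ {e[1], e[2]}, e[3] ], e = interactions ) } );

Vertices(G);
WeightMatrix(G);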

Then, you can ask yourself really pressing questions like

  • Who is the most influential person in Westeros? How has their influence changed over each season (or indeed, book)?
  • How are Eddard Stark and Randyll Tarly connected?
  • What do eigenvectors have to do with the battle for the Iron Throne, anyway?

These two applications (one for the TV show, and another for the novels) have the answers, and more.

The graphs for the books tend to be more interesting than those for the TV show, simply because of the far broader range of characters and the intricacy of the interweaving plot lines.

Let’s look at some of the results.

This is a small section of the character interaction network for the first book in the A Song of Ice and Fire series (this is the entire visualization - it's big, simply because of the sheer number of characters)

The graph was generated by GraphTheory:-DrawGraph (with style = spring, which models the graph as a system of charged particles repelling each other, connected by springs).

The highlighted vertices are the most influential characters, as determined by their Eigenvector centrality (more on this later).

 

The importance of a vertex can be described by its centrality, of which there are several variants.

Eigenvector centrality, for example, is given by the dominant eigenvector of the adjacency matrix; it uses the number and importance of neighboring vertices to quantify influence.
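
As a sketch of how this can be computed directly (reusing the toy graph from the earlier sketch; newer Maple versions may also provide dedicated centrality commands), the dominant eigenvector of the weight matrix can be extracted with LinearAlgebra:

with(GraphTheory): with(LinearAlgebra):

# Same placeholder graph as in the earlier sketch.
G := Graph( { [{"Eddard","Robert"},192], [{"Eddard","Catelyn"},88],
              [{"Catelyn","Tyrion"},46], [{"Tyrion","Robert"},15] } ):

A := evalf( WeightMatrix(G) ):        # weighted adjacency matrix
vals, vecs := Eigenvectors(A):        # numeric eigen-decomposition

# Locate the dominant (largest) eigenvalue and take its eigenvector.
i := max[index]( convert( map(Re, vals), list ) ):
v := map( abs, Column(vecs, i) ):
v := v / max(v):                      # scale so the top character scores 1

# Pair each character with its score and sort by influence.
sort( [ seq( [ Vertices(G)[k], v[k] ], k = 1 .. nops(Vertices(G)) ) ],
      (a, b) -> a[2] > b[2] );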

This plot shows the 15 most influential characters in Season 7 of the TV show Game of Thrones. Jon Snow is the clear leader.

Here’s how the eigenvector centrality of several characters changes over the books in the A Song of Ice and Fire series.

A clique is a group of vertices that are all connected to every other vertex in the group. Here’s the largest clique in Season 7 of the TV show.
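
For a toy illustration (with made-up interactions, not the Season 7 data), the GraphTheory commands MaximumClique and CliqueNumber do the work:

with(GraphTheory):

# Cersei, Jaime and Tyrion all interact with one another; Euron interacts only with Cersei.
G := Graph( { {"Cersei","Jaime"}, {"Cersei","Tyrion"}, {"Jaime","Tyrion"},
              {"Cersei","Euron"} } ):

MaximumClique(G);   # a largest clique, e.g. ["Cersei", "Jaime", "Tyrion"]
CliqueNumber(G);    # its size, here 3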

Game of Thrones has certainly motivated me to learn more about graph theory (yes, seriously, it has). It's such a wide, open field with many interesting real-world applications.

Enjoy tinkering!

I recently had a wonderful and valuable opportunity to meet with some primary school students and teachers at Holbaek by Skole in Denmark to discuss the use of technology in the classroom. The Danish education system has long been an advocate of using technology and digital learning solutions to augment learning for its students. One of the technology solutions they are using is Maple, Maplesoft’s comprehensive mathematics software tool designed to meet the unique and complex needs of STEM courses. It is rare to find Maple being used at the primary school level, so it was fascinating to see first-hand how Maple is being incorporated at the school.

In speaking with some of the students, I asked them what their education was like before Maple was incorporated into their course. They told me that before they had access to Maple, the teacher would put an example problem on the whiteboard and they would have to take notes and work through the solution in their notebooks. They definitely prefer the way the course is taught using Maple. They love the fact that they have a tool that lets them work through the solution and provides context to the answer, as opposed to just giving them the solution. It forces them to think about how to solve the problem. The students expressed to me that Maple has transformed their learning and they cannot imagine going back to taking lectures using a whiteboard and notebook.

Here, I am speaking with some students about how they have adapted Maple to meet their needs ... and about football. Their team had just won 12-1.

 

Mathematics courses, and on a broader level, STEM courses, deal with a lot of complex materials and can be incredibly challenging. If we are able to start laying the groundwork for competency and understanding at a younger age, students will be better positioned for not only higher education, but their careers as well. This creates the potential for stronger ideas and greater innovation, which has far-reaching benefits for society as a whole.

Jesper Estrup and Gitte Christiansen, two passionate primary school teachers, were responsible for introducing Maple at Holbaek by Skole. It was a pleasure to meet with them and discuss their vision for improving mathematics education at the school. They wanted to provide their students with experience using a technology tool so they would be better equipped to handle learning in the future. With the use of Maple, the students achieved the highest grades in their school system. As a result of this success, Jesper and Gitte decided to develop primary school level content for a learning package to further enhance the way their students learn and understand mathematics, and to benefit other institutions seeking to do the same. Their efforts resulted in the development of Maple-Skole, a new educational tool, based on Maple, that supports mathematics teaching for primary schools in Denmark.

Maplesoft has a long-standing relationship with the Danish education system. Maple is already used in high schools throughout Denmark, supported by the Maple Gym package. This package is an add-on to Maple that contains a number of routines to make working with Maple more convenient within various topics. These routines are made available to students and teachers with a single command that simplifies learning. Maple-Skole is the next step in the country’s vision of utilizing technology tools to enhance learning for its students. And having the opportunity to work with one tool all the way through their schooling will provide even greater benefit to students.

(L-R) Henrik and Carolyn from Maplesoft meeting with Jesper and Gitte from Holbaek by Skole

 

Maple-Skole helps foster greater knowledge and competency in primary school students by developing a passion for mathematics early on. This is a big step and one that we hope will revolutionize mathematics education in the country. It is exciting to see both the great potential of the Maple-Skole package and the fact that young students are already embracing Maple in such a positive way.

For us at Maplesoft, this exciting new package provides a great opportunity to not only improve upon our relationships with educational institutions in Denmark, but also to be a part of something significant, enhancing the way students learn mathematics. We strongly believe in the benefits of Maple-Skole, which is why it will be offered to schools at no charge until July 2020. I truly believe this new tool has the potential to revolutionize mathematics education at a young age, leaving students better prepared as they move forward in their education.

Maple users often want to write a derivative evaluated at a point using Leibniz notation, as a matter of presentation, with appropriate variables and coordinates. For instance:

 

Now, Maple uses the D operator for evaluating derivatives at a point, but this can be a little clunky:

p := D[1,2,2,3](f)(a,b,c);

q := convert( p, Diff );

u := D[1,2,2,3](f)(5,10,15);

v := convert( u, Diff );

How can we tell Maple, programmatically, to print this in a nicer way? We amended the print command (see below) to do this. For example:

print( D[1,2,2,3](f)(a,b,c), [x,y,z] );

print( D[1,2,2,3](f)(5,10,15), [x,y,z] );

print( 'D(sin)(Pi/6)', theta );

Here's the definition of the custom version of print:

# Type to check if an expression is a derivative using 'D', e.g. D(f)(a) and D[1,2](f)(a,b).

TypeTools:-AddType(
        'Dexpr',
        proc( f )
               if op( [0,0], f ) <> D and op( [0,0,0], f ) <> D then
                       return false;
               end if;
               if not type( op( [0,1], f ), 'name' ) or not type( { op( f ) }, 'set(algebraic)' ) then
                       return false;
               end if;
               if op( [0,0,0], f ) = D and not type( { op( [0,0,..], f ) }, 'set(posint)' ) then
                       return false;
               end if;
               return true;
        end proc
):

# Create a local version of 'print', which will print expressions like D[1,2](f)(a,b) in a custom way,
# but otherwise print in the usual fashion.

local print := proc()

        local A, B, f, g, L, X, Y, Z;

        # Check that a valid expression involving 'D' is passed, along with a variable name or list of variable names.
        if ( _npassed < 2 ) or ( not _passed[1] :: 'Dexpr' ) or ( not _passed[2] :: 'Or'('name','list'('name')) ) then
               return :-print( _passed );
        end if;

        # Extract important variables from the input.
        g := _passed[1]; # expression
        X := _passed[2]; # variable name(s)
        f := op( [0,1], g ); # function name in expression
        A := op( g ); # point(s) of evaluation

        # Check that the number of variables is the same as the number of evaluation points.
        if nops( X ) <> nops( [A] ) then
               return :-print( _passed );
        end if;

        # The differential operator.
        L := op( [0,0], g );

        # Find the variable (univariate) or indices (multivariate) for the derivative(s).
        B := `if`( L = D, X, [ op( L ) ] );

        # Variable name(s) as expression sequence.
        Y := op( X );

        # Check that the point(s) of evaluation is/are distinct from the variable name(s).
        if numelems( {Y} intersect {A} ) > 0 then
               return :-print( _passed );
        end if;

        # Find the expression sequence of the variable names.
        Z := `if`( L = D, X, X[B] );

        return print( Eval( Diff( f(Y), Z ), (Y) = (A) ) );

end proc:

Do you use Leibniz Notation often? Or do you have an alternate method? We’d love to hear from you!

Last year, I read a fascinating paper that presented evidence of an exoplanet, inferred through the “wobble” (or radial velocity) of the star it orbits, HD 3651. A periodogram of the radial velocity revealed the orbital period of the exoplanet – about 62.2 days.

I found the experimental data and attempted to reproduce the periodogram. However, the data was irregularly sampled, as is most astronomical data. This meant I couldn’t use the standard Fourier-based tools from the signal processing package.

I started hunting for the techniques used in the spectral analysis of irregularly sampled data, and found that the Lomb-Scargle approach was often used for astronomical data. I threw together some simple prototype code and successfully reproduced the periodogram in the paper.

 

After some (not so) gentle prodding, Erik Postma’s team wrote their own, far faster and far more robust, implementation.

This new functionality makes its debut in Maple 2019 (and the final worksheet is here.)

From a simple germ of an idea, to a finished, robust, fully documented product that we can put in front of our users – that, for me, is incredibly satisfying.

That’s a minor story about a niche I’m interested in, but these stories are repeated time and time again.  Ideas spring from users and from those that work at Maplesoft. They’re filtered to a manageable set that we can work on. Some projects reach completion in under a year, while other, more ambitious, projects take longer.

The result is software developed by passionate people invested in their work, and used by passionate people in universities, industry and at home.

We always pack a lot into each release. Maple 2019 contains improvements for the most commonly used Maple functions – the ones nearly everyone uses, such as solve, simplify and int – as well as features that target specific groups (such as those that share my interest in signal processing!)

I’d like to highlight a few of the new features that I find particularly impressive, or that have just caught my eye because they’re cool.

Of course, this is only a small selection of the shiny new stuff – everything is described in detail on the Maplesoft website.

Edgardo, a research fellow at Maplesoft, recently sent me an independent comparison of Maple’s PDE solver versus those in Mathematica (in case you’re not aware, he’s the senior developer for that function). He was excited – this test suite demonstrated that Maple was far ahead of its closest competitor, both in the number of PDEs solved and in the time taken to return those solutions.

He’s spent another release cycle working on pdsolve – it’s now more powerful than before. Here’s a PDE that Maple now successfully solves.
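
The PDE itself isn’t reproduced in this text, so as a stand-in here is a generic illustration of calling pdsolve (on the one-dimensional heat equation, not the example Edgardo had in mind):

# Illustrative only: the 1-D heat equation.
pde := diff( u(x,t), t ) = k * diff( u(x,t), x, x );
sol := pdsolve( pde );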

Maplesoft tracks visits to our online help pages - simplify is well inside the top ten most visited pages. It’s one of those core functions that nearly everyone uses.

For this release, R&D has made many improvements to simplify. For example, Maple 2019 better simplifies expressions that contain powers, exponentials and trig functions.

Everyone who touches Maple uses the same programming language. You could be an engineer that’s batch processing some data, or a mathematical researcher prototyping a new algorithm – everyone codes in the same language.

Maple now supports C-style increment, decrement, and assignment operators, giving you more concise code.
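
For illustration, here is roughly what that looks like in 1-D (plain-text) Maple input; this is just a sketch, and the full set of supported operators is listed on the corresponding help page.

i := 10:
i += 5;     # i is now 15
i -= 3;     # i is now 12
i *= 2;     # i is now 24
i++;        # post-increment: returns 24, then i becomes 25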

We’ve made a number of improvements to the interface, including a redesigned start page. My favorite is the display of large data structures (or rtables).

You now see the header (that is, the top-left) of the data structure.

For an audio file, you see useful information about its contents.

I enjoy creating new and different types of visualizations using Maple's sandbox of flexible plots and plotting primitives.

Here’s a new feature that I’ll use regularly: given a name (and optionally a modifier), polygonbyname draws a variety of shapes.

In other breaking news, I now know what a Reuleaux hexagon looks like.

Since I can’t resist talking about another signal processing feature, FindPeakPoints locates the local peaks or valleys of a 1D data set. Several options let you filter out spurious peaks or valleys.

I’ve used this new function to find the fundamental frequencies and harmonics of a violin note from its periodogram.

Speaking of passionate developers who are devoted to their work, Edgardo has written a new e-book that teaches you how to perform tensor computations using the Physics package. You get this e-book when you install Maple 2019.

The new LeastTrimmedSquares command fits data to an equation while not being significantly influenced by outliers.

In this example, we:

  • Artificially generate a noisy data set with a few outliers, but with the underlying trend Y = 5X + 50
  • Fit straight lines using CurveFitting:-LeastSquares and Statistics:-LeastTrimmedSquares

The LeastTrimmedSquares function correctly predicts the underlying trend.
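
Here is a sketch of that workflow. Note that I have assumed the calling sequence of Statistics:-LeastTrimmedSquares mirrors that of CurveFitting:-LeastSquares; check its help page for the exact syntax and options.

with(Statistics):

# Artificial data with underlying trend Y = 5*X + 50, plus noise and a few outliers.
N := 50:
X := [ seq( evalf(i), i = 1 .. N ) ]:
noise := convert( Sample( Normal(0, 5), N ), list ):
Y := [ seq( 5*X[i] + 50 + noise[i], i = 1 .. N ) ]:
Y := subsop( 10 = 500, 25 = -300, 40 = 450, Y ):   # inject outliers

# Ordinary least squares is pulled around by the outliers ...
CurveFitting:-LeastSquares( X, Y, x );

# ... while the robust fit should stay close to 5*x + 50.
# (Calling sequence assumed; see the LeastTrimmedSquares help page.)
LeastTrimmedSquares( X, Y, x );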

We try to make every release faster and more efficient. We sometimes target key changes in the core infrastructure that benefit all users (such as the parallel garbage collector in Maple 17). Other times, we focus on specific functions.

For this release, I’m particularly impressed by this improved benchmark for factor, in which we’re factoring a sparse multivariate polynomial.

On my laptop, Maple 2018 takes 4.2 seconds to compute and consumes 0.92 GiB of memory.

Maple 2019 takes a mere 0.27 seconds, and only needs 45 MiB of memory!
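
The benchmark polynomial isn’t included in the post, but if you want to time a comparable factorization yourself, CodeTools:-Usage reports both the time and the memory used; the polynomial below is simply made up for illustration.

# A made-up sparse multivariate polynomial (not the benchmark from the post).
p := expand( randpoly( [x, y, z], terms = 40, degree = 12 )
           * randpoly( [x, y, z], terms = 40, degree = 12 ) ):

# Usage reports CPU time, real time, and memory consumed by the call.
CodeTools:-Usage( factor( p ) ):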

I’m a visualization nut, and I always get a vicarious thrill when I see a shiny new plot, or a well-presented application.

I was immediately drawn to this new Maple 2019 app – it illustrates the transition between day and night on a world map. You can even change the projection used to generate the map. Shiny!

 

So that’s my pick of the top new features in Maple 2019. Everyone here at Maplesoft would love to hear your comments!

It is my pleasure to announce the return of the Maple Conference! On October 15-17th, in Waterloo, Ontario, Canada, we will gather a group of Maple enthusiasts, product experts, and customers, to explore and celebrate the different aspects of Maple.

Specifically, this conference will be dedicated to exploring Maple’s impact on education, new symbolic computation algorithms and techniques, and the wide range of Maple applications. Attendees will have the opportunity to learn about the latest research, share experiences, and interact with Maple developers.

In preparation for the conference we are welcoming paper and extended abstract submissions. We are looking for presentations which fall into the broad categories of “Maple in Education”, “Algorithms and Software”, and “Applications of Maple” (a more extensive list of topics can be found here).

You can learn more about the event, plus find our call-for-papers and abstracts, here: https://www.maplesoft.com/mapleconference/

Recently, my research team at the University of Waterloo was approached by Mark Ideson, the skip for the Canadian Paralympic men’s curling team, about developing a curling end-effector, a device to give wheelchair curlers greater control over their shots. A gold medalist and multi-medal winner at the Paralympics, Mark has a passion to see wheelchair curling performance improve and entrusted us to assist him in this objective. We previously worked with Mark and his team on a research project to model the wheelchair curling shot and help optimize their performance on the ice. The end-effector project was the next step in our partnership.

The use of technology in the sports world is increasing rapidly, allowing us to better understand athletic performance. We are able to gather new types of data that, when coupled with advanced engineering tools, allow us to perform more in-depth analysis of the human body as it pertains to specific movements and tasks. As a result, we can refine motions and improve equipment to help athletes maximize their abilities and performance. As a professor of Systems Design Engineering at the University of Waterloo, I have overseen several studies on the motor function of Paralympic athletes. My team focuses on modelling the interactions between athletes and their equipment to maximize athletic performance, and we rely heavily on Maple and MapleSim in our research and project development.

The end-effector project was led by my UW students Borna Ghannadi and Conor Jansen. The objective was to design a device that attaches to the end of the curler’s stick and provides greater command over the stone by pulling it back prior to release.  Our team modeled the end effector in Maple and built an initial prototype, which has undergone several trials and adjustments since then. The device is now on its 7th iteration, which we felt appropriate to name the Mark 7, in recognition of Mark’s inspiration for the project. The device has been a challenge, but we have steadily made improvements with Mark’s input and it is close to being a finished product.

Currently, wheelchair curlers use a device that keeps the stone static before it’s thrown. Having the ability to pull back on the stone and break the friction prior to release will provide great benefit to the curlers. As a curler, if you can only push forward and the ice conditions aren’t perfect, you’re throwing at a different speed every time. If you can pull the stone back and then go forward, you’ve broken that friction and your shot is far more repeatable. This should make the game much more interesting.

For our team, the objective was to design a mechanism that not only allowed curlers to pull back on the stone, but also had a release option with no triggers on the curler’s hand. The device we developed screws on to the end of the curler’s stick, and is designed to rest firmly on the curling handle. Once the curler selects their shot, they can position the stone accordingly, slide the stone backward and then forward, and watch the device gently separate from the stone.

For our research, the increased speed and accuracy of MapleSim’s multibody dynamic simulations, made possible by the underlying symbolic modelling engine, Maple, allowed us to spend more time on system design and optimization. MapleSim combines principles of mechanics with linear graph theory to produce unified representations of the system topology and modelling coordinates. The system equations are automatically generated symbolically, which enables us to view and share the equations prior to a numerical solution of the highly-optimized simulation code.

The Mark 7 is an invention that could have significant ramifications in the curling world. Shooting accuracy across wheelchair curling is currently around 60-62%, and if new technology like the Mark 7 is adopted, that number could grow to 70 or 75%. Improved accuracy will make the game more enjoyable and competitive. Having the ability to pull back on the stone prior to release will eliminate some instability for the curlers, which can help level the playing field for everyone involved. Given the work we have been doing with Mark’s team on performance improvements, it was extremely satisfying for us to see them win the bronze medal in South Korea. We hope that our research and partnership with the team can produce gold medals in the years to come.

 

Throughout the course of a year, Maple users create wildly varying applications on all sorts of subjects. To mark the end of 2018, I thought I’d share some of the 2018 submissions to the Maple Application Center that I personally found particularly interesting.

Solving the 15-puzzle, by Curtis Bright. You know those puzzles where you have to move the pieces around inside a square to put them in order, and there’s only one free space to move into?  I’m not good at those puzzles, but it turns out Maple can help. This is one of collection of new, varied applications using Maple’s SAT solvers (if you want to solve the world’s hardest Sudoku, Maple’s SAT solvers can help with that, too).

Romeo y Julieta: Un clasico de las historias de amor... y de las ecuaciones diferenciales [Romeo and Juliet: A classic story of love..and differential equations], by Ranferi Gutierrez. This one made me laugh (and even more so once I put some of it in google translate, which is more than enough to let you appreciate the application even if you don’t speak Spanish). What’s not to like about modeling a high drama love story using DEs?

Prediction of malignant/benign of breast mass with DNN classifier, by Sophie Tan. Machine learning can save lives.

Hybrid Image of a Cat and a Dog, by Samir Khan. Signal processing can be more fun than I realized. This is one of those crazy optical illusions where the picture changes depending on how far away you are.

Beyond the 8 Queens Problem, by Yury Zavarovsky. In true mathematical fashion, why have 8 queens when you can have n?  (If you are interested in this problem, you can also see a different solution that uses SAT solvers.)

Gödel's Universe, by Frank Wang.  Can’t say I understood much of it, but it involves Gödel, Einstein, and Hawking, so I don’t need to understand it to find it interesting.

A common question to our tech support team is about completing the square for a univariate polynomial of even degree, and how to do that in Maple. We’ve put together a solution that we think you’ll find useful. If you have any alternative methods or improvements to our code, let us know!

restart;

# Procedure to complete the square for a univariate
# polynomial of even degree.

CompleteSquare := proc( f :: depends( 'And'( polynom, 'satisfies'( g -> ( type( degree(g,x), even ) ) ) ) ), x :: name )

       local a, g, k, n, phi, P, Q, r, S, T, u:

       # Degree and parameters of polynomial.
       n := degree( f, x ):
       P := indets( f, name ) minus { x }:

       # General polynomial of square plus constant.
       g := add( a[k] * x^k, k=0..n/2 )^2 + r:

       # Solve for unknowns in g.
       Q := indets( g, name ) minus P:

       S := map( expand, { solve( identity( expand( f - g ) = 0, x ), Q ) } ):

       if numelems( S ) = 0 then
              return NULL:
       end if:

       # Evaluate g at the solution, and re-write square term
       # so that the polynomial within the square is monic.

       phi := u -> lcoeff(op(1,u),x)^2 * (expand(op(1,u)/lcoeff(op(1,u),x)))^2:  
       T := map( evalindets, map( u -> eval(g,u), S ), `^`(anything,identical(2)), phi ):

       return `if`( numelems(T) = 1, T[], T ):

end proc:


# Examples.

CompleteSquare( x^2 + 3 * x + 2, x );
CompleteSquare( a * x^2 + b * x + c, x );
CompleteSquare( 4 * x^8 + 8 * x^6 + 4 * x^4 - 246, x );

m, n := 4, 10;
r := rand(-10..10):
for i from 1 to n do
       CompleteSquare( r() * ( x^(m/2) + randpoly( x, degree=m-1, coeffs=r ) )^2 + r(), x );
end do;

# Compare quadratic examples with Student:-Precalculus:-CompleteSquare()
# (which is restricted to quadratic expressions).

Student:-Precalculus:-CompleteSquare( x^2 + 3 * x + 2 );
Student:-Precalculus:-CompleteSquare( a * x^2 + b * x + c );

For a higher-order example:

f := 5*x^4 - 70*x^3 + 365*x^2 - 840*x + 721;
g := CompleteSquare( f, x ); # 5 * ( x^2 - 7 * x + 12 )^2 + 1
h := evalindets( g, `*`, factor ); # 5 * (x-3)^2 * (x-4)^2 + 1
p1 := plot( f, x=0..5, y=-5..5, color=blue ):
p2 := plots:-pointplot( [ [3,1], [4,1] ], symbol=solidcircle, symbolsize=20, color=red ):
plots:-display( p1, p2 );

The plot tells us that the minimum value of the expression is 1, and that it occurs at x=3 and x=4.
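
As a quick check (not part of the original code), Maple's minimize command with the location option should confirm the same minimum and locations:

minimize( f, x, location );   # expect 1, attained at x = 3 and x = 4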

Over the holidays I reconnected with an old friend and occasional
chess partner who, upon hearing I was getting soundly thrashed by run
of the mill engines, recommended checking out the ChessTempo site.  It
has online tools for training your chess tactics.  As you attempt to
solve chess problems your rating is computed depending on how well you
do.  The chess problems, too, are rated and adjusted as visitors
attempt them.  This should be familiar to any chess or table-tennis
player.  Rather than the Elo rating system, the Glicko rating system is
used.

You have a choice of the relative difficulty of the problems.
After attempting a number of easy puzzles and seeing my rating slowly
climb, I wondered what was the most effective technique to raise my
rating (the classical blunder).  Attempting higher rated problems would lower my
solving rate, but this would be compensated by a smaller loss and
larger gain.  Assuming my actual playing strength is greater than my
current rating (a misconception common to us patzers), there should be a
rating that maximizes the rating gain per problem.

The following Maple module computes the expected rating change
using the Glicko system.

Glicko := module()

export DeltaRating
    ,  ExpectedDelta
    ,  Pwin
    ;

    # Return the change in rating for a loss and a win
    # for player 1 against player2
    DeltaRating := proc(r1,rd1,r2,rd2)
    local E, K, g, g2, idd, q;

        q := ln(10)/400;
        g := rd -> 1/sqrt(1 + 3*q^2*rd^2/Pi^2);
        g2 := g(rd2);
        E := 1/(1+10^(-g2*(r1-r2)/400));
        idd := q^2*(g2^2*E*(1-E));

        K := q/(1/rd1^2+idd)*g2;

        (K*(0-E), K*(1-E));

    end proc:

    # Compute the probability of a win
    # for a player with strength s1
    # vs a player with strength s2.

    Pwin := proc(s1, s2)
    local p;
        p := 10^((s1-s2)/400);
        p/(1+p);
    end proc:

    # Compute the expected rating change for
    # player with strength s1, rating r1 vs a player with true rating r2.
    # The optional rating deviations are rd1 and rd2.

    ExpectedDelta := proc(s1,r1,r2,rd1 := 35, rd2 := 35)
    local P, l, w;
        P := Pwin(s1,r2);
        (l,w) := DeltaRating(r1,rd1,r2,rd2);
        P*w + (1-P)*l;
    end proc:

end module:

Assume a player has a rating of 1500 but an actual playing strength of 1700.  Compute the expected rating change for a given puzzle rating, then plot it.  As expected the graph has a peak.

 

Ept := Glicko:-ExpectedDelta(1700,1500,r2):
plot(Ept, r2 = 1000..2000);

Compute the optimum problem rating

 

fsolve(diff(Ept,r2));

                     {r2 = 1599.350691}

As your rating improves, you'll want to adjust the rating of the problems (the site doesn't allow that fine tuning). Here we plot the optimum puzzle rating (r2) for a given player rating (r1), assuming the player's strength remains at 1700.

Ept := Glicko:-ExpectedDelta(1700, r1, r2):
dEpt := diff(Ept,r2):
r2vsr1 := r -> fsolve(eval(dEpt,r1=r)):
plot(r2vsr1, 1000..1680);

Here is a Maple worksheet with the code and computations.

Glicko.mw

Later

After pondering this, I realized there is a more useful way to present the results. The shape of the optimal curve is independent of the user's actual strength. Showing that is trivial: just substitute a symbolic value for the player's strength, offset the ratings from it, and verify that the result does not depend on the strength.

Ept := Glicko:-ExpectedDelta(S, S+r1, S+r2):
has(Ept, S);
                    false

Here's the general curve, shifted so the player's strength is 0, r1 and r2 are relative to that.

r2_r1 := r -> rhs(Optimization:-Maximize(eval(Ept,r1=r), r2=-500..0)[2][]):
p1 := plot(r2_r1, -500..0, 'numpoints'=30);

Compute and plot the expected points gained when playing the optimal partner and your rating is r-points higher than your strength.

EptMax := r -> eval(Ept, [r1=r, r2=r2_r1(r)]):
plot(EptMax, -200..200, 'numpoints'=30, 'labels' = ["r","Ept"]);

When your playing strength matches your rating, the optimal opponent has a relative rating of

r2_r1(0);
                       -269.86

The expected points you win is

evalf(EptMax(0));
                       0.00956