Maplesoft Blog

The Maplesoft blog contains posts coming from the heart of Maplesoft. Find out what is coming next in the world of Maple, and get the best tips and tricks from the Maple experts.

Submit your paper or extended abstract to the Maple Conference!

The papers and extended abstracts presented at the 2019 Maple Conference will be published in the Communications in Computer and Information Science Series from Springer. 

The deadline to submit is May 27, 2019. 

This conference is an amazing opportunity to contribute to the development of technology in academics. I hope that you, or your colleagues and associates, will consider making a contribution.

We welcome topics that fall into the following broad categories:

  • Maple in Education
  • Algorithms and Software
  • Applications of Maple

You can learn more about the conference or submit your paper or abstract here: 

https://www.maplesoft.com/mapleconference/Papers-and-Presentations.aspx

 

 

 

Maple 2019 includes a new add-on package, the Maple Quantum Chemistry Toolbox from RDMChem, for computing the energies and properties of molecules.  As a member of the team at RDMChem that developed the package, I would like to tell the story of its origins and provide a brief demonstration of the package.  

 

Thinking about Quantum Chemistry at Harvard

 

The story of the Maple Quantum Chemistry Toolbox begins with my graduate studies in Chemical Physics at Harvard University in the late 1990s.  Even in 1998, programs for computing the energies and properties of molecules were extremely complicated and unintuitive.  Many of the existing programs had begun in the 1970s, on computers whose programs were recorded on punch cards.

Fig. 1: Used Punchcard by Pete Birkinshaw from Manchester, UK CC BY 2.0

 

Even today some of these programs retain remnants of their early versions, such as input files that must start in the second column to account for the margin of the now non-existent punch cards.  As a student, I made a bound copy of one of these manuals at a local Kinko's photocopy shop and later found myself in Harvard Yard, thinking that there must be a better way to present quantum chemistry computations.  The idea for a Maple-like package for quantum chemistry was born in that moment.

 

At the same time I was learning about something called the two-electron reduced density matrix (2-RDM).  The basic variable in quantum chemistry is the wave function, which is the probability amplitude for finding each of the electrons in a molecule.  Because electrons are indistinguishable and interact pairwise, the wave function contains much more information than is needed for computing the energies and electronic properties of molecules.  The energies and properties of any molecule, with any number of electrons, can be expressed as a functional of a two-electron matrix, the 2-RDM [1-3].  It was known that a quantum chemistry based on the 2-RDM would have potentially significant advantages over wave function calculations in terms of accuracy and computational cost, especially for molecules far from the mean-field limit.  A 2-RDM approach to quantum chemistry became the focus of my Ph.D. thesis.

 

Representing Many Electrons with Only Two Electrons

 

The idea of using the 2-RDM in quantum chemistry can be attributed to four scientists: two physicists, Kodi Husimi and Joseph Mayer; a chemist, Per-Olov Lowdin; and a mathematician, John Coleman [1-3].  In the early 1940s Husimi first published the idea in a Japanese physics journal, but in the midst of World War II the paper was not widely disseminated in the West.  In the summer of 1951 John Coleman, while attending a physics conference at Chalk River, realized that the ground-state energy of any atom or molecule could be expressed as a functional of the 2-RDM; similar ideas later occurred to Per-Olov Lowdin and Joseph Mayer, who published their ideas in Physical Review in 1955.  It was soon recognized that computing the ground-state energy of an atom or molecule with the 2-RDM was potentially difficult because not every two-electron density matrix corresponds to an N-electron density matrix or wave function.  The search for the appropriate constraints on the 2-RDM, known as N-representability conditions, became known as the N-representability problem [1-3].  

 

Beginning in the late 1990s and early 2000s, Carmela Valdemoro and Diego Alcoba at the Consejo Superior de Investigaciones Científicas (Madrid, Spain), Hiroshi Nakatsuji, Koji Yasuda, and Maho Nakata at Kyoto University (Kyoto, Japan), Jerome Percus and Bastiaan Braams at the Courant Institute (New York, USA), John Coleman and Robert Erdahl at Queen's University (Kingston, Canada), and my research group and I at The University of Chicago (Chicago, USA) began to make significant progress in the computation of the 2-RDM without computing the many-electron wave function [1-3].  Further contributions were made by Eric Cances and Claude Le Bris at CERMICS, Ecole Nationale des Ponts et Chaussées (Marne-la-Vallée, France), Paul Ayers at McMaster University (Hamilton, Canada), and Dimitri Van Neck at the University of Ghent (Ghent, Belgium) and their research groups.  By 2014 several powerful 2-RDM methods had emerged for the computation of molecules.  The Army Research Office (ARO) issued a call for proposals for a company to develop a modern, built-from-scratch package for quantum chemistry that would contain two newly developed 2-RDM-based methods from our group: the parametric 2-RDM method [1] and the variational 2-RDM method with a fast algorithm for solving the semidefinite program [4,5,6].  The company RDMChem LLC was founded to work with the ARO to develop such a package built around RDMs; hence the name RDMChem was chosen as a hybrid of RDM, for reduced density matrices, and Chem, for chemistry.  To achieve a truly new design for an electronic structure package, with access to numeric and symbolic computations as well as advanced visualizations, the team at RDMChem and I partnered with Maplesoft to build what became the Maple Quantum Chemistry Toolbox, which was released with Maple 2019 on Pi Day.

 

Maple Quantum Chemistry Toolbox

The Maple Quantum Chemistry Toolbox provides a powerful, parallel platform for quantum chemistry calculations that is directly integrated into the Maple 2019 environment.  It is optimized for both cutting-edge research and chemistry education.  The Toolbox can be used from the worksheet, document, or command-line interfaces.  There is also a Maplet interface for rapid exploration of molecules and their properties.  Figure 2 shows the Maplet interface being applied to compute the ground-state energy of 1,3-dibromobenzene by density functional theory (DFT) in a 6-31g basis set.

Fig. 2: Maplet interface to the Quantum Chemistry Toolbox 2019, showing a density functional theory (DFT) calculation         

After entering a name into the text box labeled Name, the user can click on: (1) the button Web to import the geometry from an online database containing more than 96 million molecules, (2) the button File to read the geometry from a standard XYZ file, or (3) the button Input to enter the geometry.  As soon as the geometry is entered, the Maplet displays a 3D picture of the molecule in the window to the right of the options.  Dropdown menus allow the user to select the basis set, the electronic structure method, and a boolean for geometry optimization.  The user can click on the Compute button to perform the computation.  When the quantum computation completes, the total energy appears in the box labeled Total Energy.  The dropdown menu Analyze contains a list of data tables, plots, and animations that can be selected and then displayed by clicking the Analyze button.  The Maplet interface contains nearly all of the options available in the worksheet interface.  The Help Pages of the Toolbox include extensive curricula and lessons that can be used in undergraduate, graduate, and even high school chemistry courses.  Next we look at some sample calculations in the worksheet interface.

 

Reproducing an Early 2-RDM Calculation

 

One of the earliest variational calculations of the 2-RDM was performed in 1975 by Garrod, Mihailović, and Rosina [1-3].  They minimized the ground-state energy of the four-electron beryllium atom as a functional of only two electrons, the 2-RDM.  They imposed semidefinite constraints on the particle-particle (D), hole-hole (Q), and particle-hole (G) metric matrices.  They solved the resulting optimization problem of minimizing the energy as a linear function of the 2-RDM subject to the semidefinite constraints, known as a semidefinite program, by a cutting-plane algorithm.  Due to limitations of the cutting-plane algorithm and of computers circa 1975, the calculation was a difficult one, likely taking a significant amount of computer time and memory.

 

With the Quantum Chemistry Toolbox we can use the command Variational2RDM to reproduce the calculation on a Windows laptop.  First, in a Maple 2019 worksheet we load the commands of the Add-on Quantum Chemistry Toolbox:

with(QuantumChemistry);

[AOLabels, ActiveSpaceCI, ActiveSpaceSCF, AtomicData, BondAngles, BondDistances, Charges, ChargesPlot, CorrelationEnergy, CoupledCluster, DensityFunctional, DensityPlot3D, Dipole, DipolePlot, Energy, FullCI, GeometryOptimization, HartreeFock, Interactive, Isotopes, MOCoefficients, MODiagram, MOEnergies, MOIntegrals, MOOccupations, MOOccupationsPlot, MOSymmetries, MP2, MolecularData, MolecularGeometry, NuclearEnergy, NuclearGradient, Parametric2RDM, PlotMolecule, Populations, RDM1, RDM2, ReadXYZ, SaveXYZ, SearchBasisSets, SearchFunctionals, SkeletalStructure, Thermodynamics, Variational2RDM, VibrationalModeAnimation, VibrationalModes, Video]

(1.1)

Then we define the atom (or molecule) using a Maple list of lists that we assign to the variable atom:

atom := [["Be",0,0,0]];

[["Be", 0, 0, 0]]

(1.2)

 

We can then perform the variational 2-RDM method with the Variational2RDM command to compute the ground-state energy and properties of beryllium in a minimal basis set like the one used by Rosina and his collaborators.  By default the method uses the D, Q, and G N-representability conditions and the minimal "sto-3g" basis set.  The calculation, which completes in seconds, returns a wealth of information in the form of a convenient Maple table that we assign to the variable data.

data := Variational2RDM(atom);

table(%id = 18446744313704784158)

(1.3)

 

The table contains the total ground-state energy of the beryllium atom in the atomic unit of energy (hartrees)

data[e_tot];

HFloat(-14.40370016681039)

(1.4)

 

We also have the atomic orbitals (AOs) employed in the calculation

data[aolabels];

Vector(5, {(1) = "0 Be 1s", (2) = "0 Be 2s", (3) = "0 Be 2px", (4) = "0 Be 2py", (5) = "0 Be 2pz"})

(1.5)

 

as well as the Mulliken populations of these orbitals

data[populations];

Vector(5, {(1) = 1.9995807710723152, (2) = 1.7913484714571852, (3) = 0.6969023822632789e-1, (4) = 0.6969026475511847e-1, (5) = 0.6969029119010149e-1})

(1.6)

 

We see that 2 electrons are located in the 1s orbital, 1.8 electrons in the 2s orbital, and about 0.2 electrons in the 2p orbitals.  By default the calculation also returns the 1-RDM

data[rdm1];

Matrix(5, 5, {(1, 1) = 1.9999258249189755, (1, 2) = -0.37784860208539793e-2, (1, 3) = 0., (1, 4) = 0., (1, 5) = 0., (2, 1) = -0.37784860208539793e-2, (2, 2) = 1.7910034176105256, (2, 3) = 0., (2, 4) = 0., (2, 5) = 0., (3, 1) = 0., (3, 2) = 0., (3, 3) = 0.6969023822632789e-1, (3, 4) = 0., (3, 5) = 0., (4, 1) = 0., (4, 2) = 0., (4, 3) = 0., (4, 4) = 0.6969026475511847e-1, (4, 5) = 0., (5, 1) = 0., (5, 2) = 0., (5, 3) = 0., (5, 4) = 0., (5, 5) = 0.6969029119010149e-1})

(1.7)

 

The eigenvalues of the 1-RDM are the natural orbital occupations

LinearAlgebra:-Eigenvalues(data[rdm1]);

Vector(5, {(1) = 1.9999941387490443+0.*I, (2) = 1.7909351037804568+0.*I, (3) = 0.6969023822632789e-1+0.*I, (4) = 0.6969026475511847e-1+0.*I, (5) = 0.6969029119010149e-1+0.*I})

(1.8)

 

We can display the density of the 2s-like 2nd natural orbital using the DensityPlot3D command providing the atom, the data, and the orbitalindex keyword

DensityPlot3D(atom,data,orbitalindex=2);

 

 

Similarly,  using the DensityPlot3D command, we can readily display the 2p-like 3rd natural orbital

DensityPlot3D(atom,data,orbitalindex=3);

 

 

By using Maple keyword arguments in the Variational2RDM command, we can readily change the basis set, use point-group symmetry, add active orbitals with or without a self-consistent field, change the N-representability conditions, and explore many other options.  Having reenacted one of the first variational 2-RDM calculations ever, let's examine a more complicated molecule.

 

Explosive TNT

 

We consider TNT, a molecule used as an explosive. Using the command MolecularGeometry, we can import the experimental geometry of TNT from the online PubChem database.

mol := MolecularGeometry("TNT");

[["O", .5454, -3.514, 0.12e-2], ["O", .5495, 3.5137, 0.8e-3], ["O", 2.4677, -2.4539, -0.5e-3], ["O", 2.4705, 2.4513, 0.3e-3], ["O", -3.5931, -1.0959, 0.4e-3], ["O", -3.5922, 1.0993, 0.6e-3], ["N", 1.2142, -2.454, 0.2e-3], ["N", 1.217, 2.4527, 0], ["N", -2.9846, 0.15e-2, 0.1e-3], ["C", 1.2253, -0.6e-3, -0.9e-3], ["C", .5271, -1.2082, -0.8e-3], ["C", .5284, 1.2078, -0.8e-3], ["C", -1.5646, 0.8e-3, -0.4e-3], ["C", -.8678, -1.2074, -0.6e-3], ["C", -.8666, 1.2084, -0.6e-3], ["C", 2.7239, -0.16e-2, 0.11e-2], ["H", -1.4159, -2.1468, -0.3e-3], ["H", -1.4137, 2.1483, -0.3e-3], ["H", 3.1226, .2418, -.9891], ["H", 3.0863, .6934, .7662], ["H", 3.3154, -.8111, .4109]]

(1.9)

 

The command PlotMolecule generates a 3D ball-and-stick plot of the molecule

PlotMolecule(mol);

 

 

We perform a variational calculation of the 2-RDM of TNT in an active space of 10 electrons and 10 orbitals by setting the keyword active to the list [10,10].  The keyword casscf is set to true to optimize the active orbitals during the calculation.  The keyword basis is used to set the basis set to a minimal basis set sto-3g for illustration.   

data := Variational2RDM(mol, active=[10,10], casscf=true, basis="sto-3g");

table(%id = 18446744493271367454)

(1.10)

 

The ground-state energy of TNT in hartrees is

data[e_tot];

HFloat(-868.8629631593426)

(1.11)

 

Unlike beryllium, the electric dipole moment of TNT in debyes is nonzero

data[dipole];

Vector(3, {(1) = .5158925019252739, (2) = -0.5985274393363119e-1, (3) = .1277528280025474})

(1.12)

 

We can easily visualize the dipole moment relative to the molecule's ball-and-stick model with the DipolePlot command

DipolePlot(mol,method=Variational2RDM, active=[10,10], casscf=true, basis="sto-3g");

 

 

The 1-RDM is returned by default

data[rdm1];

_rtable[18446744313709602566]

(1.13)

 

The natural molecular-orbital (MO) occupations are the eigenvalues of the 1-RDM

data[mo_occ];

_rtable[18446744313709600150]

(1.14)

 

All of the occupations can be viewed at once by converting the Vector to a list

convert(data[mo_occ], list);

[HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(2.0), HFloat(1.9110133620349001), HFloat(1.8984139688344246), HFloat(1.6231436866358906), HFloat(1.6158489471020905), HFloat(1.6145310163161273), HFloat(0.38920731792133734), HFloat(0.387039366894289), HFloat(0.37786347287813526), HFloat(0.09734187094597906), HFloat(0.08559699476985069), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0), HFloat(0.0)]

(1.15)

 

We can visualize these occupations with the MOOccupationsPlot command

MOOccupationsPlot(mol,method=Variational2RDM, active=[10,10], casscf=true, basis="sto-3g");

 

 

We observe that the occupations show significant deviations from 0 and 2, indicating that the electrons have substantial correlation beyond the mean-field (Hartree-Fock) limit.  The blue lines indicate the first N/2 spatial orbitals, where N is the total number of electrons, while the red lines indicate the remaining spatial orbitals.  We can visualize the highest "occupied" molecular orbital (58) with the DensityPlot3D command

DensityPlot3D(mol,data, orbitalindex=58);

 

 

Similarly, we can visualize the lowest "unoccupied" molecular orbital (59) with the DensityPlot3D command

DensityPlot3D(mol,data, orbitalindex=59);

 

 

Comparison of orbitals 58 and 59 reveals an increase in the number of nodes (changes in the phase of the orbitals denoted by green and purple), which reflects an increase in the energy of the orbital.

 

Looking Ahead

 

The Maple Quantum Chemistry Toolbox 2019, a new add-on for Maple 2019 from RDMChem, provides an easy-to-use, research-grade environment for computing the energies and properties of atoms and molecules.  In this post we discussed its origins in graduate research at Harvard, its reproduction of an early 2-RDM calculation on beryllium, and its application to the explosive molecule TNT.  We have illustrated only some of the many features and electronic structure methods of the Maple Quantum Chemistry Toolbox.  There is much more chemistry and physics to explore.  Enjoy!

 

Selected References

 

[1] D. A. Mazziotti, Chem. Rev. 112, 244 (2012). "Two-electron Reduced Density Matrix as the Basic Variable in Many-Electron Quantum Chemistry and Physics"

[2]  Reduced-Density-Matrix Mechanics: With Application to Many-Electron Atoms and Molecules (Adv. Chem. Phys.) ; D. A. Mazziotti, Ed.; Wiley: New York, 2007; Vol. 134.

[3] A. J. Coleman and V. I. Yukalov, Reduced Density Matrices: Coulson’s Challenge (Springer-Verlag,  New York, 2000).

[4] D. A. Mazziotti, Phys. Rev. Lett. 106, 083001 (2011). "Large-scale Semidefinite Programming for Many-electron Quantum Mechanics"

[5] A. W. Schlimgen, C. W. Heaps, and D. A. Mazziotti, J. Phys. Chem. Lett. 7, 627-631 (2016). "Entangled Electrons Foil Synthesis of Elusive Low-Valent Vanadium Oxo Complex"

[6] J. M. Montgomery and D. A. Mazziotti, J. Phys. Chem. A 122, 4988-4996 (2018). "Strong Electron Correlation in Nitrogenase Cofactor, FeMoco"

 

Download QCT2019_PrimesV17_05.05.19.mw

While googling around for Season 8 spoilers, I found data sets that can be used to create a character interaction network for the books in the A Song of Ice and Fire series, and the TV show they inspired, Game of Thrones.

The data sets are the work of Dr. Andrew Beveridge, an associate professor at Macalester College (check out his Network of Thrones blog).

You can create an undirected, weighted graph using this data and Maple's GraphTheory package.

Then, you can ask yourself really pressing questions like

  • Who is the most influential person in Westeros? How has their influence changed over each season (or indeed, book)?
  • How are Eddard Stark and Randyll Tarly connected?
  • What do eigenvectors have to do with the battle for the Iron Throne, anyway?

These two applications (one for the TV show, and another for the novels) have the answers, and more.

The graphs for the books tend to be more interesting than those for the TV show, simply because of the far broader range of characters and the intricacy of the interweaving plot lines.

Let’s look at some of the results.

This is a small section of the character interaction network for the first book in the A Song of Ice and Fire series (this is the entire visualization - it's big, simply because of the sheer number of characters)

The graph was generated by GraphTheory:-DrawGraph (with method = spring, which models the graph as a system of charged particles that repel one another, connected by springs).

The highlighted vertices are the most influential characters, as determined by their Eigenvector centrality (more on this later).

 

The importance of a vertex can be described by its centrality, of which there are several variants.

Eigenvector centrality, for example, is given by the dominant eigenvector of the adjacency matrix; it uses the number and importance of neighboring vertices to quantify influence.
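As a rough sketch of the idea (using a tiny, made-up network in place of the real data set - the characters and weights below are purely illustrative), you can compute eigenvector centralities directly from the dominant eigenvector of the weight matrix:

```maple
with(GraphTheory): with(LinearAlgebra):
# Tiny illustrative interaction network; edge weights count interactions.
G := Graph( {[{"Jon","Sansa"},4], [{"Jon","Tyrion"},2],
             [{"Sansa","Tyrion"},1], [{"Jon","Davos"},3]} ):
A := evalf( WeightMatrix(G) ):
# Eigenvector centrality: the entries of the eigenvector
# belonging to the largest eigenvalue of the weight matrix.
vals, vecs := Eigenvectors( A ):
i := max[index]( map(Re, vals) ):
centrality := map( abs, Column( vecs, i ) );
```

The entries are ordered to match Vertices(G), so the largest entry identifies the most influential character in this toy network.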

This plot shows the 15 most influential characters in Season 7 of the TV show Game of Thrones. Jon Snow is the clear leader.

Here’s how the eigenvector centrality of several characters changes over the books in the A Song of Ice and Fire series.

A clique is a group of vertices in which each vertex is connected to every other vertex in the group. Here’s the largest clique in Season 7 of the TV show.
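For a sense of how this is computed, the GraphTheory package provides a MaximumClique command (the small graph below is purely illustrative):

```maple
with(GraphTheory):
# Toy graph: a triangle a-b-c plus a pendant vertex d.
G := Graph( {{"a","b"}, {"a","c"}, {"b","c"}, {"c","d"}} ):
# The largest clique in this graph is the triangle a-b-c.
MaximumClique( G );
```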

Game of Thrones has certainly motivated me to learn more about graph theory (yes, seriously, it has). It's such a wide, open field with many interesting real-world applications.

Enjoy tinkering!

I recently had a wonderful and valuable opportunity to meet with some primary school students and teachers at Holbaek by Skole in Denmark to discuss the use of technology in the classroom. The Danish education system has long been an advocate of using technology and digital learning solutions to augment learning for its students. One of the technology solutions they are using is Maple, Maplesoft’s comprehensive mathematics software tool designed to meet the unique and complex needs of STEM courses. It is rare to find Maple being used at the primary school level, so it was fascinating to see first-hand how Maple is being incorporated at the school.

In speaking with some of the students, I asked them what their education was like before Maple was incorporated into their course. They told me that before they had access to Maple, the teacher would put an example problem on the whiteboard and they would have to take notes and work through the solution in their notebooks. They definitely prefer the way the course is taught using Maple. They love the fact that they have a tool that lets them work through the solution and provides context for the answer, as opposed to just giving them the solution. It forces them to think about how to solve the problem. The students told me that Maple has transformed their learning and that they cannot imagine going back to lectures with a whiteboard and notebook.

Here, I am speaking with some students about how they have adapted Maple to meet their needs ... and about football. Their team had just won 12-1.

 

Mathematics courses, and on a broader level, STEM courses, deal with a lot of complex materials and can be incredibly challenging. If we are able to start laying the groundwork for competency and understanding at a younger age, students will be better positioned for not only higher education, but their careers as well. This creates the potential for stronger ideas and greater innovation, which has far-reaching benefits for society as a whole.

Jesper Estrup and Gitte Christiansen, two passionate primary school teachers, were responsible for introducing Maple at Holbaek by Skole. It was a pleasure to meet with them and discuss their vision for improving mathematics education at the school. They wanted to provide their students with experience using a technology tool so they would be better equipped for learning in the future. With the use of Maple, the students achieved the highest grades in their school system. As a result of this success, Jesper and Gitte decided to develop primary-school-level content for a learning package to further enhance the way their students learn and understand mathematics, and to benefit other institutions seeking to do the same. Their efforts resulted in Maple-Skole, a new educational tool, based on Maple, that supports mathematics teaching in primary schools in Denmark.

Maplesoft has a long-standing relationship with the Danish education system. Maple is already used in high schools throughout Denmark, supported by the Maple Gym package. This package is an add-on to Maple containing a number of routines that make working with Maple in various topics more convenient. These routines are made available to students and teachers with a single command, which simplifies learning. Maple-Skole is the next step in the country’s vision of using technology tools to enhance learning for its students. And having the opportunity to work with one tool all the way through their schooling will provide even greater benefit to students.

(L-R) Henrik and Carolyn from Maplesoft meeting with Jesper and Gitte from Holbaek by Skole

 

Maple-Skole helps foster greater knowledge and competency in primary school students by developing a passion for mathematics early on. This is a big step, and one that we hope will revolutionize mathematics education in the country. It is exciting to see both the great potential of the Maple-Skole package and the fact that young students are already embracing Maple in such a positive way.

For us at Maplesoft, this exciting new package provides a great opportunity to not only improve our relationships with educational institutions in Denmark, but also to be a part of something significant: enhancing the way students learn mathematics. We strongly believe in the benefits of Maple-Skole, which is why it will be offered to schools at no charge until July 2020. I truly believe this new tool has the potential to revolutionize mathematics education at a young age, leaving students better prepared as they move forward in their education.

Maple users often want to write a derivative evaluated at a point using Leibniz notation, as a matter of presentation, with appropriate variables and coordinates. For instance:

 

Now, Maple uses the D operator for evaluating derivatives at a point, but this can be a little clunky:

p := D[1,2,2,3](f)(a,b,c);

q := convert( p, Diff );

u := D[1,2,2,3](f)(5,10,15);

v := convert( u, Diff );

How can we tell Maple, programmatically, to print this in a nicer way? We amended the print command (see below) to do this. For example:

print( D[1,2,2,3](f)(a,b,c), [x,y,z] );

print( D[1,2,2,3](f)(5,10,15), [x,y,z] );

print( 'D(sin)(Pi/6)', theta );

Here's the definition of the custom version of print:

# Type to check if an expression is a derivative using 'D', e.g. D(f)(a) and D[1,2](f)(a,b).

TypeTools:-AddType(
        'Dexpr',
        proc( f )
               if op( [0,0], f ) <> D and op( [0,0,0], f ) <> D then
                       return false;
               end if;
               if not type( op( [0,1], f ), 'name' ) or not type( { op( f ) }, 'set(algebraic)' ) then
                       return false;
               end if;
               if op( [0,0,0], f ) = D and not type( { op( [0,0,..], f ) }, 'set(posint)' ) then
                       return false;
               end if;
               return true;
        end proc
):

# Create a local version of 'print', which will print expressions like D[1,2](f)(a,b) in a custom way,
# but otherwise print in the usual fashion.

local print := proc()

        local A, B, f, g, L, X, Y, Z;

        # Check that a valid expression involving 'D' is passed, along with a variable name or list of variable names.
        if ( _npassed < 2 ) or ( not _passed[1] :: 'Dexpr' ) or ( not _passed[2] :: 'Or'('name','list'('name')) ) then
               return :-print( _passed );
        end if;

        # Extract important variables from the input.
        g := _passed[1]; # expression
        X := _passed[2]; # variable name(s)
        f := op( [0,1], g ); # function name in expression
        A := op( g ); # point(s) of evaluation

        # Check that the number of variables is the same as the number of evaluation points.
        if nops( X ) <> nops( [A] ) then
               return :-print( _passed );
        end if;

        # The differential operator.
        L := op( [0,0], g );

        # Find the variable (univariate) or indices (multivariate) for the derivative(s).
        B := `if`( L = D, X, [ op( L ) ] );

        # Variable name(s) as expression sequence.
        Y := op( X );

        # Check that the point(s) of evaluation is/are distinct from the variable name(s).
        if numelems( {Y} intersect {A} ) > 0 then
               return :-print( _passed );
        end if;

        # Find the expression sequence of the variable names.
        Z := `if`( L = D, X, X[B] );

        # Display the derivative in Leibniz notation using the built-in print.
        return :-print( Eval( Diff( f(Y), Z ), (Y) = (A) ) );

end proc:

Do you use Leibniz Notation often? Or do you have an alternate method? We’d love to hear from you!

Last year, I read a fascinating paper that presented evidence of an exoplanet, inferred through the “wobble” (or radial velocity) of the star it orbits, HD 3651. A periodogram of the radial velocity revealed the orbital period of the exoplanet – about 62.2 days.

I found the experimental data and attempted to reproduce the periodogram. However, the data was irregularly sampled, as is most astronomical data. This meant I couldn’t use the standard Fourier-based tools from the signal processing package.

I started hunting for techniques used in the spectral analysis of irregularly sampled data, and found that the Lomb-Scargle approach is often used for astronomical data. I threw together some simple prototype code and successfully reproduced the periodogram in the paper.

 

After some (not so) gentle prodding, Erik Postma’s team wrote their own, far faster and far more robust, implementation.

This new functionality makes its debut in Maple 2019 (and the final worksheet is here.)

From a simple germ of an idea, to a finished, robust, fully documented product that we can put in front of our users – that, for me, is incredibly satisfying.

That’s a minor story about a niche I’m interested in, but these stories are repeated time and time again.  Ideas spring from users and from those that work at Maplesoft. They’re filtered to a manageable set that we can work on. Some projects reach completion in under a year, while other, more ambitious, projects take longer.

The result is software developed by passionate people invested in their work, and used by passionate people in universities, industry and at home.

We always pack a lot into each release. Maple 2019 contains improvements to the most commonly used Maple functions that nearly everyone uses - such as solve, simplify and int - as well as features that target specific groups (such as those who share my interest in signal processing!)

I’d like to highlight a few of the new features that I find particularly impressive, or that have caught my eye because they’re cool.

Of course, this is only a small selection of the shiny new stuff – everything is described in detail on the Maplesoft website.

Edgardo, a research fellow at Maplesoft, recently sent me an independent comparison of Maple’s PDE solver versus those in Mathematica (in case you’re not aware, he’s the senior developer for that function). He was excited - this test suite demonstrated that Maple was far ahead of its closest competitor, both in the number of PDEs solved and in the time taken to return those solutions.

He’s spent another release cycle working on pdsolve – it’s now more powerful than before. Here’s a PDE that Maple now successfully solves.

Maplesoft tracks visits to our online help pages – simplify is well inside the top ten most visited pages. It’s one of those core functions that nearly everyone uses.

For this release, R&D has made many improvements to simplify. For example, Maple 2019 better simplifies expressions that contain powers, exponentials and trig functions.
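As an illustration of the kind of mixed expression involved (my own example, not one taken from the release notes):

expr := exp(x)^2 * sin(x)^2 + exp(2*x) * cos(x)^2;
simplify( expr );   # exp(2*x)

Combining the powers of exp and applying the Pythagorean identity collapses this expression to a single term.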

Everyone who touches Maple uses the same programming language. You could be an engineer batch-processing data, or a mathematical researcher prototyping a new algorithm – everyone codes in the same language.

Maple now supports C-style increment, decrement, and assignment operators, giving you more concise code.
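For instance (a quick sketch of the new syntax):

x := 10:
x += 2:    # x is now 12
x *= 3:    # x is now 36
x++:       # x is now 37
--x:       # x is now 36

These are handy in loops and in interactive work where you’d otherwise write x := x + 1.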

We’ve made a number of improvements to the interface, including a redesigned start page. My favorite is the display of large data structures (or rtables).

You now see the header (that is, the top-left) of the data structure.

For an audio file, you see useful information about its contents.

I enjoy creating new and different types of visualizations using Maple's sandbox of flexible plots and plotting primitives.

Here’s a new feature that I’ll use regularly: given a name (and optionally a modifier), polygonbyname draws a variety of shapes.

In other breaking news, I now know what a Reuleaux hexagon looks like.

Since I can’t resist talking about another signal processing feature: FindPeakPoints locates the local peaks or valleys of a 1D data set, and several options let you filter out spurious peaks or valleys.

I’ve used this new function to find the fundamental frequencies and harmonics of a violin note from its periodogram.
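As a minimal sketch of the calling sequence, on made-up data and with the default options:

with( SignalProcessing ):
V := Vector( [0., 2., 1., 5., 3., 4., 0.], datatype = float[8] ):
FindPeakPoints( V );

For real signals like the violin periodogram, the filtering options are what keep noise-induced wiggles from being reported as peaks.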

Speaking of passionate developers who are devoted to their work, Edgardo has written a new e-book that teaches you how to perform tensor computations using the Physics package. You get this e-book when you install Maple 2019.

The new LeastTrimmedSquares command fits data to an equation without being significantly influenced by outliers.

In this example, we:

  • Artificially generate a noisy data set with a few outliers, but with the underlying trend Y = 5X + 50
  • Fit straight lines using CurveFitting:-LeastSquares and Statistics:-LeastTrimmedSquares

The LeastTrimmedSquares fit correctly recovers the underlying trend.
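Here is a sketch of those two steps, assuming LeastTrimmedSquares uses the same calling sequence as CurveFitting:-LeastSquares:

r := Statistics:-Sample( Normal(0, 2), 20 ):
X := Vector( [seq( evalf(i), i = 1..20 )] ):
Y := Vector( [seq( 5*X[i] + 50 + r[i], i = 1..20 )] ):
Y[3] := 300.:  Y[12] := 280.:   # inject a few artificial outliers
CurveFitting:-LeastSquares( X, Y, v );        # dragged away from the trend by the outliers
Statistics:-LeastTrimmedSquares( X, Y, v );   # close to 5*v + 50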

We try to make every release faster and more efficient. We sometimes target key changes in the core infrastructure that benefit all users (such as the parallel garbage collector in Maple 17). Other times, we focus on specific functions.

For this release, I’m particularly impressed by this improved benchmark for factor, in which we’re factoring a sparse multivariate polynomial.

On my laptop, Maple 2018 takes 4.2 seconds to compute and consumes 0.92 GiB of memory.

Maple 2019 takes a mere 0.27 seconds, and only needs 45 MiB of memory!
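You can make this kind of measurement yourself with CodeTools:-Usage, which reports both computation time and memory. Here’s a small, made-up (and far easier) example of timing the factorization of an expanded multivariate polynomial:

p := expand( (x^3*y^2 + x*z + y*z^2 + 1) * (x^2*z^3 + x*y^3 + z + 1) ):
CodeTools:-Usage( factor(p) );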

I’m a visualization nut, and I always get a vicarious thrill when I see a shiny new plot, or a well-presented application.

I was immediately drawn to this new Maple 2019 app – it illustrates the transition between day and night on a world map. You can even change the projection used to generate the map. Shiny!

 

So that’s my pick of the top new features in Maple 2019. Everyone here at Maplesoft would love to hear your comments!

It is my pleasure to announce the return of the Maple Conference! From October 15th to 17th in Waterloo, Ontario, Canada, we will gather a group of Maple enthusiasts, product experts, and customers to explore and celebrate the different aspects of Maple.

Specifically, this conference will be dedicated to exploring Maple’s impact on education, new symbolic computation algorithms and techniques, and the wide range of Maple applications. Attendees will have the opportunity to learn about the latest research, share experiences, and interact with Maple developers.

In preparation for the conference we are welcoming paper and extended abstract submissions. We are looking for presentations which fall into the broad categories of “Maple in Education”, “Algorithms and Software”, and “Applications of Maple” (a more extensive list of topics can be found here).

You can learn more about the event, plus find our call-for-papers and abstracts, here: https://www.maplesoft.com/mapleconference/

Recently, my research team at the University of Waterloo was approached by Mark Ideson, the skip for the Canadian Paralympic men’s curling team, about developing a curling end-effector, a device to give wheelchair curlers greater control over their shots. A gold medalist and multi-medal winner at the Paralympics, Mark has a passion to see wheelchair curling performance improve and entrusted us to assist him in this objective. We previously worked with Mark and his team on a research project to model the wheelchair curling shot and help optimize their performance on the ice. The end-effector project was the next step in our partnership.

The use of technology in the sports world is increasing rapidly, allowing us to better understand athletic performance. We are able to gather new types of data that, when coupled with advanced engineering tools, allow us to perform more in-depth analysis of the human body as it pertains to specific movements and tasks. As a result, we can refine motions and improve equipment to help athletes maximize their abilities and performance. As a professor of Systems Design Engineering at the University of Waterloo, I have overseen several studies on the motor function of Paralympic athletes. My team focuses on modelling the interactions between athletes and their equipment to maximize athletic performance, and we rely heavily on Maple and MapleSim in our research and project development.

The end-effector project was led by my UW students Borna Ghannadi and Conor Jansen. The objective was to design a device that attaches to the end of the curler’s stick and provides greater command over the stone by pulling it back prior to release.  Our team modeled the end effector in Maple and built an initial prototype, which has undergone several trials and adjustments since then. The device is now on its 7th iteration, which we felt appropriate to name the Mark 7, in recognition of Mark’s inspiration for the project. The device has been a challenge, but we have steadily made improvements with Mark’s input and it is close to being a finished product.

Currently, wheelchair curlers use a device that keeps the stone static before it’s thrown. Having the ability to pull back on the stone and break the friction prior to release will provide great benefit to the curlers. As a curler, if you can only push forward and the ice conditions aren’t perfect, you’re throwing at a different speed every time. If you can pull the stone back and then go forward, you’ve broken that friction and your shot is far more repeatable. This should make the game much more interesting.

For our team, the objective was to design a mechanism that not only allowed curlers to pull back on the stone, but also had a release option with no triggers on the curler’s hand. The device we developed screws on to the end of the curler’s stick, and is designed to rest firmly on the curling handle. Once the curler selects their shot, they can position the stone accordingly, slide the stone backward and then forward, and watch the device gently separate from the stone.

For our research, the increased speed and accuracy of MapleSim’s multibody dynamic simulations, made possible by the underlying symbolic modelling engine, Maple, allowed us to spend more time on system design and optimization. MapleSim combines principles of mechanics with linear graph theory to produce unified representations of the system topology and modelling coordinates. The system equations are automatically generated symbolically, which enables us to view and share the equations prior to a numerical solution of the highly-optimized simulation code.

The Mark 7 is an invention that could have significant ramifications in the curling world. Shooting accuracy across wheelchair curling is currently around 60-62%, and if new technology like the Mark 7 is adopted, that number could grow to 70 or 75%. Improved accuracy will make the game more enjoyable and competitive. Having the ability to pull back on the stone prior to release will eliminate some instability for the curlers, which can help level the playing field for everyone involved. Given the work we have been doing with Mark’s team on performance improvements, it was extremely satisfying for us to see them win the bronze medal in South Korea. We hope that our research and partnership with the team can produce gold medals in the years to come.

 

Throughout the course of a year, Maple users create wildly varying applications on all sorts of subjects. To mark the end of 2018, I thought I’d share some of the 2018 submissions to the Maple Application Center that I personally found particularly interesting.

Solving the 15-puzzle, by Curtis Bright. You know those puzzles where you have to move the pieces around inside a square to put them in order, and there’s only one free space to move into?  I’m not good at those puzzles, but it turns out Maple can help. This is one of a collection of new, varied applications using Maple’s SAT solvers (if you want to solve the world’s hardest Sudoku, Maple’s SAT solvers can help with that, too).

Romeo y Julieta: Un clasico de las historias de amor... y de las ecuaciones diferenciales [Romeo and Juliet: A classic of love stories... and of differential equations], by Ranferi Gutierrez. This one made me laugh (and even more so once I put some of it into Google Translate, which is more than enough to let you appreciate the application even if you don’t speak Spanish). What’s not to like about modeling a high-drama love story using DEs?

Prediction of malignant/benign of breast mass with DNN classifier, by Sophie Tan. Machine learning can save lives.

Hybrid Image of a Cat and a Dog, by Samir Khan. Signal processing can be more fun than I realized. This is one of those crazy optical illusions where the picture changes depending on how far away you are.

Beyond the 8 Queens Problem, by Yury Zavarovsky. In true mathematical fashion, why have 8 queens when you can have n?  (If you are interested in this problem, you can also see a different solution that uses SAT solvers.)

Gödel's Universe, by Frank Wang.  Can’t say I understood much of it, but it involves Gödel, Einstein, and Hawking, so I don’t need to understand it to find it interesting.

A common question to our tech support team is about completing the square for a univariate polynomial of even degree, and how to do that in Maple. We’ve put together a solution that we think you’ll find useful. If you have any alternative methods or improvements to our code, let us know!

restart;

# Procedure to complete the square for a univariate
# polynomial of even degree.

CompleteSquare := proc( f :: depends( 'And'( polynom, 'satisfies'( g -> ( type( degree(g,x), even ) ) ) ) ), x :: name )

       local a, g, k, n, phi, P, Q, r, S, T, u:

       # Degree and parameters of polynomial.
       n := degree( f, x ):
       P := indets( f, name ) minus { x }:

       # General polynomial of square plus constant.
       g := add( a[k] * x^k, k=0..n/2 )^2 + r:

       # Solve for unknowns in g.
       Q := indets( g, name ) minus P:

       S := map( expand, { solve( identity( expand( f - g ) = 0, x ), Q ) } ):

       if numelems( S ) = 0 then
              return NULL:
       end if:

       # Evaluate g at the solution, and re-write square term
       # so that the polynomial within the square is monic.

       phi := u -> lcoeff(op(1,u),x)^2 * (expand(op(1,u)/lcoeff(op(1,u),x)))^2:  
       T := map( evalindets, map( u -> eval(g,u), S ), `^`(anything,identical(2)), phi ):

       return `if`( numelems(T) = 1, T[], T ):

end proc:


# Examples.

CompleteSquare( x^2 + 3 * x + 2, x );
CompleteSquare( a * x^2 + b * x + c, x );
CompleteSquare( 4 * x^8 + 8 * x^6 + 4 * x^4 - 246, x );

m, n := 4, 10;
r := rand(-10..10):
for i from 1 to n do
       CompleteSquare( r() * ( x^(m/2) + randpoly( x, degree=m-1, coeffs=r ) )^2 + r(), x );
end do;

# Compare quadratic examples with Student:-Precalculus:-CompleteSquare()
# (which is restricted to quadratic expressions).

Student:-Precalculus:-CompleteSquare( x^2 + 3 * x + 2 );
Student:-Precalculus:-CompleteSquare( a * x^2 + b * x + c );

For a higher-order example:

f := 5*x^4 - 70*x^3 + 365*x^2 - 840*x + 721;
g := CompleteSquare( f, x ); # 5 * ( x^2 - 7 * x + 12 )^2 + 1
h := evalindets( g, `*`, factor ); # 5 * (x-3)^2 * (x-4)^2 + 1
p1 := plot( f, x=0..5, y=-5..5, color=blue ):
p2 := plots:-pointplot( [ [3,1], [4,1] ], symbol=solidcircle, symbolsize=20, color=red ):
plots:-display( p1, p2 );

The plot tells us that the minimum value of the expression is 1, and that it occurs at both x=3 and x=4.

Over the holidays I reconnected with an old friend and occasional
chess partner who, upon hearing I was getting soundly thrashed by run
of the mill engines, recommended checking out the ChessTempo site.  It
has online tools for training your chess tactics.  As you attempt to
solve chess problems your rating is computed depending on how well you
do.  The chess problems, too, are rated and adjusted as visitors
attempt them.  This should be familiar to any chess or table-tennis
player.  Rather than the Elo rating system, the Glicko rating system is
used.

You have a choice of the relative difficulty of the problems.
After attempting a number of easy puzzles and seeing my rating slowly
climb, I wondered what was the most effective technique to raise my
rating (the classical blunder).  Attempting higher rated problems would lower my
solving rate, but this would be compensated by a smaller loss and
larger gain.  Assuming my actual playing strength is greater than my
current rating (a misconception common to us patzers), there should be a
rating that maximizes the rating gain per problem.

The following Maple module computes the expected rating change
using the Glicko system.

Glicko := module()

export DeltaRating
    ,  ExpectedDelta
    ,  Pwin
    ;

    # Return the change in rating for a loss and a win
    # for player 1 against player2
    DeltaRating := proc(r1,rd1,r2,rd2)
    local E, K, g, g2, idd, q;

        q := ln(10)/400;
        g := rd -> 1/sqrt(1 + 3*q^2*rd^2/Pi^2);
        g2 := g(rd2);
        E := 1/(1+10^(-g2*(r1-r2)/400));
        idd := q^2*(g2^2*E*(1-E));

        K := q/(1/rd1^2+idd)*g2;

        (K*(0-E), K*(1-E));

    end proc:

    # Compute the probability of a win
    # for a player with strength s1
    # vs a player with strength s2.

    Pwin := proc(s1, s2)
    local p;
        p := 10^((s1-s2)/400);
        p/(1+p);
    end proc:

    # Compute the expected rating change for
    # player with strength s1, rating r1 vs a player with true rating r2.
    # The optional rating deviations are rd1 and rd2.

    ExpectedDelta := proc(s1,r1,r2,rd1 := 35, rd2 := 35)
    local P, l, w;
        P := Pwin(s1,r2);
        (l,w) := DeltaRating(r1,rd1,r2,rd2);
        P*w + (1-P)*l;
    end proc:

end module:

Assume a player has a rating of 1500 but an actual playing strength of 1700.  Compute the expected rating change for a given puzzle rating, then plot it.  As expected the graph has a peak.

 

Ept := Glicko:-ExpectedDelta(1700,1500,r2):
plot(Ept, r2 = 1000..2000);

Compute the optimum problem rating

 

fsolve(diff(Ept,r2));

                     {r2 = 1599.350691}

As your rating improves, you'll want to adjust the rating of the problems (the site doesn't allow that fine tuning). Here we plot the optimum puzzle rating (r2) for a given player rating (r1), assuming the player's strength remains at 1700.

Ept := Glicko:-ExpectedDelta(1700, r1, r2):
dEpt := diff(Ept,r2):
r2vsr1 := r -> fsolve(eval(dEpt,r1=r)):
plot(r2vsr1, 1000..1680);

Here is a Maple worksheet with the code and computations.

Glicko.mw

Later

After pondering this, I realized there is a more useful way to present the results. The shape of the optimal curve is independent of the user's actual strength. Showing that is trivial, just substitute a symbolic value for the player's strength, offset the ratings from it, and verify that the result does not depend on the strength.

Ept := Glicko:-ExpectedDelta(S, S+r1, S+r2):
has(Ept, S);
                    false

Here's the general curve, shifted so the player's strength is 0; r1 and r2 are relative to that.

r2_r1 := r -> rhs(Optimization:-Maximize(eval(Ept,r1=r), r2=-500..0)[2][]):
p1 := plot(r2_r1, -500..0, 'numpoints'=30);

Compute and plot the expected points gained when playing the optimal partner and your rating is r-points higher than your strength.

EptMax := r -> eval(Ept, [r1=r, r2=r2_r1(r)]):
plot(EptMax, -200..200, 'numpoints'=30, 'labels' = ["r","Ept"]);

When your playing strength matches your rating, the optimal opponent has a relative rating of

r2_r1(0);
                       -269.86

The expected points you win is

evalf(EptMax(0));
                       0.00956

From time to time, people ask me about visualizing knots in Maple. There's no formal "Knot Theory" package in Maple per se, but it is certainly possible to generate many different knots using a couple of simple commands. The following shows various examples of knots visualized using the plots:-tubeplot and algcurves:-plot_knot commands.

The unknot can be defined by the following parametric equations:

 

x = cos(t)

y = sin(t)

z = 0

 

plots:-tubeplot([cos(t),sin(t),0,t=0..2*Pi],
   radius=0.2, axes=none, color="Blue", orientation=[60,60], scaling=constrained, style=surfacecontour);

 


 

The trefoil knot can be defined by the following parametric equations:

 

x = sin(t) + 2*sin(2*t)

y = cos(t) - 2*cos(2*t)

z = -sin(3*t)

 

plots:-tubeplot([sin(t)+2*sin(2*t),cos(t)-2*cos(2*t),-sin(3*t), t= 0..2*Pi],
   radius=0.2, axes=none, color="Green", orientation=[90,0], style=surface);

 


 

The figure-eight knot can be defined by the following parametric equations:


x = (2 + cos(2*t)) * cos(3*t)

y = (2 + cos(2*t)) * sin(3*t)

z = sin(4*t)

 

plots:-tubeplot([(2+cos(2*t))*cos(3*t),(2+cos(2*t))*sin(3*t),sin(4*t),t=0..2*Pi],
   numpoints=100, radius=0.1, axes=none, color="Red", orientation=[75,30,0], style=surface);

 


 

The Lissajous knot can be defined by the following parametric equations:

 

x = cos(t*n[x]+phi[x])

y = cos(t*n[y]+phi[y])

z = cos(t*n[z] + phi[z])

Where n[x], n[y], and n[z] are integers and the phase shifts phi[x], phi[y], and phi[z] are any real numbers.
The 8_21 knot (n[x] = 3, n[y] = 4, and n[z] = 7) appears as follows:
 

plots:-tubeplot([cos(3*t+Pi/2),cos(4*t+Pi/2),cos(7*t),t=0..2*Pi],
   radius=0.05, axes=none, color="Brown", orientation=[90,0,0], style=surface);

 


 

A star knot can be defined by using the following polynomial:
 

f = -x^5+y^2

 

f := -x^5+y^2
algcurves:-plot_knot(f,x,y,epsilon=0.7,
   radius=0.25, tubepoints=10, axes=none, color="Orange", orientation=[60,0], style=surfacecontour);

 

 

By switching x and y, different visualizations can be generated:

 

g=(y^3-x^7)*(y^2-x^5)+y^3

 

g:=(y^3-x^7)*(y^2-x^5)+y^3;
plots:-display(<
algcurves:-plot_knot(g,y,x, epsilon=0.8, radius=0.1, axes=none, color="CornflowerBlue", orientation=[75,30,0])|
algcurves:-plot_knot(g,x,y, epsilon=0.8, radius=0.1, axes=none, color="OrangeRed", orientation=[75,0,0])>);

 

 

f = (y^3-x^7)*(y^2-x^5)

 

f:=(y^3-x^7)*(y^2-x^5);
algcurves:-plot_knot(f,x,y,
  epsilon=0.8, radius=0.1, axes=none, orientation=[35,0,0]);

 

 

h=(y^3-x^7)*(y^3-x^7+100*x^13)*(y^3-x^7-100*x^13)

 

h:=(y^3-x^7)*(y^3-x^7+100*x^13)*(y^3-x^7-100*x^13);

algcurves:-plot_knot(h,x,y,
   epsilon=0.8, numpoints=400, radius=0.03, axes=none, color=["Blue","Red","Green"], orientation=[60,0,0]);

 

Please feel free to add more of your favourite knot visualizations in the comments below!

You can interact with the examples or download a copy of these examples from the MapleCloud here: https://maple.cloud/app/5654426890010624/Examples+of+Knots

Problem:

Suppose you have a bunch of 2D data points which:

  1. May include points with the same x-value but different y-values; and
  2. May be unevenly-spaced with respect to the x-values.

How do you clean up the data so that, for instance, you are free to construct a connected data plot, or perform a Discrete Fourier Transform? Please note that Curve Fitting and the Lomb–Scargle Method, respectively, are more effective techniques for these particular applications. Let's start with a simple example for illustration. Consider this Matrix:

A := < 2, 5; 5, 8; 2, 1; 7, 8; 10, 10; 5, 7 >;

Consolidate:

First, sort the rows of the Matrix by the first column, and extract the sorted columns separately:

P := sort( A[..,1], output=permutation ); # permutation to sort rows by the values in the first column
U := A[P,1]; # sorted column 1
V := A[P,2]; # sorted column 2

We can regard each run of equal values in U as a step in a staircase, and the goal is to replace the y-values in V located on each step with their average.

Second, determine the indices for the first occurrences of values in U, by selecting the indices which give a jump in x-value:

m := numelems( U );
K := [ 1, op( select( i -> ( U[i] > U[i-1] ), [ seq( j, j=2..m ) ] ) ), m+1 ];
n := numelems( K );

The element m+1 is appended for later convenience. Here, we can quickly define the first column of the consolidated Matrix:

X1 := U[K[1..-2]];

Finally, to define the second column of the consolidated Matrix, we take the average of the values in each step, using the indices in K to tell us the ranges of values to consider:

Y1 := Vector[column]( n-1, i -> add( V[ K[i]..K[i+1]-1 ] ) / ( K[i+1] - K[i] ) );

Thus, the consolidated Matrix is given by:

B := < X1 | Y1 >;

Spread Evenly:

To spread-out the x-values, we can use a sequence with fixed step size:

X2 := evalf( Vector[column]( [ seq( X1[1]..X1[-1], (X1[-1]-X1[1])/(m-1) ) ] ) );

For the y-values, we will interpolate:

Y2 := CurveFitting:-ArrayInterpolation( X1, Y1, X2, method=linear );

This gives us a new Matrix, which has both evenly-spaced x-values and consolidated y-values:

C := < X2 | Y2 >;

Plot:

plots:-display( Array( [
        plots:-pointplot( A, view=[0..10,0..10], color=green, symbol=solidcircle, symbolsize=15, title="Original Data", font=[Verdana,15] ),
        plots:-pointplot( B, view=[0..10,0..10], color=red, symbol=solidcircle, symbolsize=15, title="Consolidated Data", font=[Verdana,15] ),
        plots:-pointplot( C, view=[0..10,0..10], color=blue, symbol=solidcircle, symbolsize=15, title="Spread-Out Data", font=[Verdana,15] )
] ) );

Sample Data with Noise:

For another example, let’s take data points from a logistic curve, and add some noise:

# Noise generators
f := 0.5 * rand( -1..1 ):
g := ( 100 - rand( -15..15 ) ) / 100:

# Actual x-values
X := [ seq( i/2, i=1..20 ) ];

# Actual y-values
Y := evalf( map( x -> 4 / ( 1 + 3 * exp(-x) ), X ) );

# Matrix of points with noise
A := Matrix( zip( (x,y) -> [x,y], map( x -> x + f(), X ), map( y -> g() * y, Y ) ) );

Using the method outlined above, and the general procedures defined below, define:

B := ConsolidatedMatrix( A );
C := EquallySpaced( B, 21, method=linear );

Visually:

plots:-display( Array( [
    plots:-pointplot( A, view=[0..10,0..5], symbol=solidcircle, symbolsize=15, color=green, title="Original Data", font=[Verdana,15] ),
    plots:-pointplot( B, view=[0..10,0..5], symbol=solidcircle, symbolsize=15, color=red, title="Consolidated Data", font=[Verdana,15]  ),
    plots:-pointplot( C, view=[0..10,0..5], symbol=solidcircle, symbolsize=15, color=blue, title="Spread-Out Data", font=[Verdana,15] )
] ) );

  

Generalization:

Below are more generalized custom procedures, which are used in the above example. These also account for special cases.

# Takes a matrix with two columns, and returns a new matrix where the new x-values are unique and sorted,
# and each new y-value is the average of the old y-values corresponding to the x-value.
ConsolidatedMatrix := proc( A :: 'Matrix'(..,2), $ )

        local i, j, K, m, n, P, r, U, V, X, Y:
  
        # The number of rows in the original matrix.
        r := LinearAlgebra:-RowDimension( A ):

        # Return the original matrix should it only have one row.
        if r = 1 then
               return A:
        end if:

        # Permutation to sort first column of A.
        P := sort( A[..,1], ':-output'=permutation ):       

        # Sorted first column of A.
        U := A[P,1]:

        # Corresponding new second column of A.
        V := A[P,2]:

        # Return the sorted matrix should all the x-values be distinct.
        if numelems( convert( U, ':-set' ) ) = r then
               return < U | V >:
        end if:

        # Indices of first occurrences for values in U. The element m+1 is appended for convenience.
        m := numelems( U ):
        K := [ 1, op( select( i -> ( U[i] > U[i-1] ), [ seq( j, j=2..m ) ] ) ), m+1 ]:
        n := numelems( K ):

        # Consolidated first column.
        X := U[K[1..-2]]:

        # Determine the consolidated second column, using the average y-value.
        Y := Vector[':-column']( n-1, i -> add( V[ K[i]..K[i+1]-1 ] ) / ( K[i+1] - K[i] ) ):

        return < X | Y >:

end proc:

# Procedure which takes a matrix with two columns, and returns a new matrix of specified number of rows
# with equally-spaced x-values, and interpolated y-values.
# It accepts options that can be passed to ArrayInterpolation().
EquallySpaced := proc( M :: 'Matrix'(..,2), m :: posint )

        local A, i, r, U, V, X, Y:

        # Consolidated matrix, the corresponding number of rows, and the columns.
        A := ConsolidatedMatrix( M ):
        r := LinearAlgebra:-RowDimension( A ):
        U, V := evalf( A[..,1] ), evalf( A[..,2] ):

        # If the consolidated matrix has only one row, return it.
        if r = 1 then
               return A:
        end if:

        # If m = 1, i.e. only one equally-spaced point is requested, then return a matrix of the averages.
        if m = 1 then
               return 1/r * Matrix( [ [ add( U ), add( V ) ] ] ):
        end if:

        # Equally-spaced x-values.
        X := Vector[':-column']( [ seq( U[1] + (i-1)*(U[-1]-U[1])/(m-1), i = 1..m ) ] ):

        # Interpolated y-values.
        Y := CurveFitting:-ArrayInterpolation( U, V, X, _rest ):    

        return < X | Y >:

end proc:


Maple users frequently solve differential equations. If you want to use the results later in Maple, you need to deconstruct the solution, and then assign the functions -- something that isn't done automatically in Maple. We wrote a multi-purpose routine to help you out. For instance, suppose you solve a simple linear system of equations:

restart;

eqs := { x + y = 3, x - y = 1 };
soln := solve( eqs ); # { x = 2, y = 1 }
x, y; # plain x and y

To assign the values from the solution to the corresponding variables:

assign( soln );
x, y; # 2, 1

This won't work for solutions of differential equations:

restart;

sys := { D(x)(t) = y(t), D(y)(t) = -x(t), x(0) = 1, y(0) = 0 };
soln := dsolve( sys ); # { x(t) = cos(t), y(t) = -sin(t) }
assign( soln );
x(s), y(s); # plain x(s) and y(s)

To make this work, we wrote this multi-purpose routine:

restart;

# Type for a variable expression, e.g. x=5.
TypeTools:-AddType( 'varexpr', u -> type( u, 'And'('name','Non'('constant'))='algebraic' ) ):

# Type for a function expression, e.g. f(x)=x^2.
TypeTools:-AddType( 'funcexpr', u -> type( u, 'function'('And'('name','Non'('constant')))='algebraic' ) ):

# Procedure to assign variable and function expressions.
my_assign := proc( u :: {
        varexpr, 'list'(varexpr), 'rtable'(varexpr), 'set'(varexpr),
        funcexpr, 'list'(funcexpr), 'rtable'(funcexpr), 'set'(funcexpr)
}, $ )

        local F, L, R, V:       

        # Map the procedure if input is a data container, or apply regular assign(), where applicable.
        if not u :: {'varexpr','funcexpr'} then
               map( procname, u ):
               return NULL:
        elif u :: 'varexpr' then
               assign( u ):
               return NULL:
        end if:       

        L, R := lhs(u), rhs(u):
        F := op(0,L): 
        V := [ indets( L, 'And'( 'name', 'Non'('constant') ) )[] ]:    

        map( assign, F, unapply( R, V ) ):
        return NULL:

end proc:

# Example 1.

eqs := { x + y = 3, x - y = 1 };
my_assign( solve( eqs ) );
'x' = x, 'y' = y; # x=2, y=1

# Example 2.

unassign( 'x', 'y' ):
E := [ f(x,y) = x + y, g(x,y) = x - y ];
my_assign( E );
'f(u,v)' = f(u,v), 'g(u,v)' = g(u,v); # f(u,v)=u+v, g(u,v)=u-v

# Example 3.

sys := { D(x)(t) = y(t), D(y)(t) = -x(t), x(0) = 1, y(0) = 0 };
soln := dsolve( sys );
my_assign( soln ):
'x(s)' = x(s); # x(s)=cos(s)
'y(s)' = y(s); # y(s)=-sin(s)

Fourteen-year-old Lazar Paroski is an exceptional student. Not only does he excel academically, but he has a passion for helping struggling students, and for thinking of innovative ways to help them learn. Lazar is particularly fond of math, and in his interactions with other students, he noticed that many of them have a hard time with it.

Putting on his creative cap, Lazar came up with the idea of an easily accessible “Math Wall” that explains simple math concepts for students in a very visual way.

“The Music Wall on Pinterest was my inspiration,” says Lazar. “I thought I can use the same idea for Math, and why not a Math Wall”?

"The math wall is basically all the tools you'll have on the wall in your classroom outside," said Lazar. Making the Math Wall and getting it set up, took time and effort. But he had help along the way, which, fueled by his passion and enthusiasm, helped turn his creative dream into reality. Lazar received a grant of $6000 from the local government to implement the project; his teachers, principal and family helped promote it; and the community of parents provided encouragement.

The Math Wall covers fundamental math concepts learnt in grades 1 to 6. Lazar engaged with over 450 students in the community to understand what would really be helpful for students to see in this Math Wall, and then he carefully picked the top themes he wanted to focus on.

The three-meter Math Wall is located in the Morrison community park, and was officially inaugurated by the Mayor of Kitchener in July 2018. Many students have already found it to be useful and educational. Parents who bring their children to the park stop by to give their kids a quick math lesson.

At Maplesoft, we love a math story like this! And that too in our backyard! We wanted to appreciate and encourage Lazar and his efforts in making math fun and easy and accessible to students. So we invited Lazar to our offices, gifted him a copy of Maple, and heard more about his passion and future plans. “In many ways, Lazar embodies the same qualities that Maplesoft stands for – making it easy for students to understand and grasp complex STEM concepts,” said Laurent Bernardin, Maplesoft’s Chief Operating Officer. “We try to impress upon students that math matters, even in everyday life, and they can now use advanced, sophisticated technology tools to make math learning fun and efficient.”

We wish Lazar all the very best as he thinks up new and innovative ways to spread his love for math to other kids. Well done, Lazar!

 

 
