## Introducing the Pauli Algebra Package

by: Maple

Allow me to introduce the Pauli Algebra package for Maple. This package implements the Clifford algebra of physical space, Cl3, through the use of complex paravectors. The syntax of the package is similar to the notation popularized by the work of Dr. William E. Baylis. For more information, check out the Wikipedia entry on paravectors and the APS workbook available on Dr. Baylis' website.
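For readers new to the algebra, here is a rough illustration of what the package computes, written in plain Python rather than the package's Maple syntax: a paravector p0 + p·σ can be represented by the 2×2 complex matrix p0·I + p·σ built from the Pauli matrices, and the Clifford product is then ordinary matrix multiplication.

```python
# Illustrative sketch only -- not the package's syntax or implementation.
# A paravector p = p0 + p1*s1 + p2*s2 + p3*s3 is represented by the 2x2
# complex matrix p0*I + p.sigma built from the Pauli matrices.

def paravector(p0, p1, p2, p3):
    """2x2 complex matrix of the paravector (p0; p1, p2, p3)."""
    return [[p0 + p3, p1 - 1j * p2],
            [p1 + 1j * p2, p0 - p3]]

def clifford_product(a, b):
    """Clifford (geometric) product = ordinary 2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Defining relation of the algebra: for a pure vector v, the product v v
# equals |v|^2 times the identity matrix.
v = paravector(0, 1, 2, 3)        # the vector (1, 2, 3), zero scalar part
vv = clifford_product(v, v)
print(vv[0][0], vv[1][1], vv[0][1], vv[1][0])  # (14+0j) (14+0j) 0j 0j
```

Here |v|² = 1² + 2² + 3² = 14, so vv is 14 times the identity, as the algebra requires.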

To install the Pauli Algebra package from within a Maple session, use the PackageTools:-Install command with the package's MapleCloud URL.

Alternatively, you can find the Pauli Algebra package in the MapleCloud, and choose the install button on the far right. Upon installation, several help files become available in Maple Help to assist you with the syntax.

As you will see in the Pauli.mpl source code of the package workbook, a number of Maple functions have been overloaded to handle the package's Paravector datatype. If you wish to overload additional Maple functions, let me know, and I'll include them in future updates.

I'll sign off by attaching a Maple worksheet/pdf containing some examples.

Enjoy!

## Kinematics and Dynamics of the solid

Maple 18

These files cover the kinematics and dynamics of a rigid body, using a new technique (ALT + ENTER) to display results inline and thus save space in the Maple worksheet. The analysis takes a relative-motion approach. Intended for engineering students; in Spanish.

Kinematics_and_relative_dynamics_of_a_solid.mw

Flat_kinetic_of_a_rigid_body.mw

Lenin Araujo Castillo

## Van Aubel's theorem with Maple

by:

Recently I looked through an interesting book, D. Wells' "The Penguin Book of Curious and Interesting Geometry", and came across a result I did not know about before: starting with a given quadrilateral, construct a square on each side. Van Aubel's theorem states that the two line segments between the centers of opposite squares are of equal length and at right angles to one another. See the picture below. Interestingly, this is true not only for a convex quadrilateral but for an arbitrary one, even a self-intersecting one. This post is devoted to proving the result in Maple. The proof turned out to be very short and simple. Note that the coordinates of points are given not by lists but by vectors, which is convenient when using the  LinearAlgebra:-DotProduct  and  LinearAlgebra:-Norm  commands.

The code of the proof (the notation of the points on the picture coincide with their names in the code):

```restart;
with(LinearAlgebra):
# Vertices of the quadrilateral as symbolic vectors
assign(seq(P||i=<x[i],y[i]>, i=1..4)):
P||5:=P||1:
# Q||i is the center of the square built on side P||i .. P||(i+1):
# the midpoint of the side plus half the side rotated by -90 degrees
assign(seq(Q||i=(P||i+P||(i+1))/2+<0,1; -1,0>.(P||(i+1)-P||i)/2, i=1..4)):
# The segments Q1Q3 and Q2Q4 are perpendicular ...
expand(DotProduct((Q||3-Q||1),(Q||4-Q||2), conjugate=false));
# ... and of equal length
is(Norm(Q||3-Q||1, 2)=Norm(Q||4-Q||2, 2));
```

The output:

0
true

VA.mw

## How do I Insert a Row or Column in a Matrix?

Maple

Hi MaplePrimes Users!

It’s your friendly neighborhood tech support team, here to share some tips and tricks from issues we help users with on a daily basis.

A customer contacted us through a Help Page feedback form, asking how to add a row or column in a Matrix. The form came from the Row Operations help page, but the wording of the message suggested that the customer actually wanted to insert a new row or column altogether. Such manipulations can often be accomplished by a command in the ArrayTools package, but the only Insert command currently available is the one for Vectors and 1-D Arrays. Using the Concatenate command from that package, and various commands from the LinearAlgebra package (including the SubMatrix command), we were able to write two custom procedures to perform these manipulations:

```InsertRow := proc (A::rtable, n::integer, v::Vector[row])
local R, C, top, bottom;
uses LinearAlgebra;
R := RowDimension(A); C := ColumnDimension(A);
top := SubMatrix(A, [1 .. n-1], [1 .. C]);
bottom := SubMatrix(A, [n .. R], [1 .. C]);
return ArrayTools:-Concatenate(1, top, v, bottom);
end proc:

InsertColumn := proc (A::rtable, n::integer, v::Vector[column])
local R, C, left, right;
uses LinearAlgebra;
R := RowDimension(A); C := ColumnDimension(A);
left := SubMatrix(A, [1 .. R], [1 .. n-1]);
right := SubMatrix(A, [1 .. R], [n .. C]);
return ArrayTools:-Concatenate(2, left, v, right)
end proc:

# test cases:

M := Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]]):
v := Vector[row]([2, 2, 2]):
v2 := Vector[column]([2, 2, 2]):
seq(InsertRow(M, i, v), i = 1 .. 4);
seq(InsertColumn(M, i, v2), i = 1 .. 4);```

We then reworked this problem using some handy indexing and construction notation that lets us shorten the previous procedures, saving on variable construction and syntax:

```InsertRow := proc( A :: rtable, V :: Vector[row], r :: posint )
return < A[1..r-1,..]; V; A[r..-1,..] >:
end proc:

InsertColumn := proc( A :: rtable, V :: Vector[column], c :: posint )
return < A[..,1..c-1] | V | A[..,c..-1] >:
end proc:

M := Matrix(3, 3, [seq(i, i = 1 .. 9)]);
A := convert(M, Array);
U := Vector[row]( [ a, b, c ] );
V := convert( U, 'Vector[column]' );
seq(InsertRow( A, U, i ), i=1..4);
seq(InsertColumn( A, V, i ), i=1..4);
seq(InsertRow( M, U, i ), i=1..4);
seq(InsertColumn( M, V, i ), i=1..4);```

## Discrete Haar Wavelet Image Compression

by:

In order to explore the use of the signal processing package in image processing, @Samir Khan and I created this application, which performs a discrete Haar wavelet transform on images to achieve both lossy (irreversible) and lossless (reversible) compression.

Haar wavelet compression modifies the image matrix so as to increase the number of zero entries, resulting in a sparse matrix that can be stored more efficiently, thus reducing the file size. A Haar wavelet transform groups adjacent entries of the matrix and stores the average and the difference of each pair. Note that this computation is reversible: knowing the difference a = x1 - x2 and the average b = (x1 + x2)/2, we can solve for x1 and x2. Haar wavelet compression takes advantage of the fact that neighboring pixels in an image usually have very similar values; hence recursively applying the Haar wavelet transform to the rows and columns of an image matrix significantly increases the number of zero entries. In the application we achieved a compression ratio of 1.296 (number of non-zero entries in the original matrix : number of non-zero entries in the processed matrix).
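The pairwise average/difference step and its inverse can be sketched in a few lines of Python (an illustration of the arithmetic, not the Maple code used in the application):

```python
# One level of the pairwise Haar transform: each pair (x1, x2) is replaced
# by its average b = (x1 + x2)/2 and its difference a = x1 - x2.
def haar_step(row):
    averages = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    differences = [row[i] - row[i + 1] for i in range(0, len(row), 2)]
    return averages + differences

def haar_step_inverse(coeffs):
    half = len(coeffs) // 2
    averages, differences = coeffs[:half], coeffs[half:]
    row = []
    for b, a in zip(averages, differences):
        row.extend([b + a / 2, b - a / 2])  # x1 = b + a/2, x2 = b - a/2
    return row

row = [96, 98, 100, 100, 50, 50, 48, 52]
t = haar_step(row)
print(t)                     # [97.0, 100.0, 50.0, 50.0, -2, 0, 0, -4]
print(haar_step_inverse(t))  # recovers the original row (as floats)
```

Because neighboring pixels are similar, the difference half of the output is mostly zeros or near-zeros, which is exactly the sparsity the compression exploits.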

The fact that Haar wavelet transform is reversible means that we can use it to perform lossless image compression: the decompressed image will be exactly the same as the image before compression. Transmission and temporary storage of the data would be made more efficient without any loss of details in the image.

But what if the file size is still too big, or we simply don’t need that much detail in the image? That is when lossy compression comes into use. By discarding some detail/fidelity, lossy compression achieves a notably smaller file size. In this application, we apply a filter to the transformed image matrix, setting entries that are close to zero to actual zeros (i.e. we pick a positive number ϵ such that all entries with |x| < ϵ are changed to 0 in the matrix). The value of ϵ directly impacts the quality of the decompressed image, so it should be chosen carefully in practice. In this application we chose ϵ = 0.01, which results in a compression ratio of 19.38, but produces a very blurry image after reversing the compression. (Left: original image; right: lossy compression with ϵ = 0.01)
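The ϵ filter itself amounts to very little code; here is a Python sketch (the matrix, ϵ, and the resulting ratio below are made-up illustration values, not the application's data):

```python
# Lossy step: zero out transformed entries with |x| < eps, then measure the
# compression ratio as (non-zeros before) / (non-zeros after).
def threshold(matrix, eps):
    return [[0 if abs(x) < eps else x for x in row] for row in matrix]

def nonzeros(matrix):
    return sum(1 for row in matrix for x in row if x != 0)

m = [[0.5, 0.004, -0.002],
     [0.008, 1.2, 0.0],
     [-0.006, 0.003, 2.0]]
filtered = threshold(m, 0.01)
ratio = nonzeros(m) / nonzeros(filtered)
print(filtered)  # only 0.5, 1.2 and 2.0 survive
print(ratio)     # 8/3, about 2.67
```

Unlike the lossless transform, the zeroed entries cannot be recovered, which is why decompression yields a blurrier image as ϵ grows.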

The application can be accessed here for more details.

## Improvements in solving PDE & BC in Maple 2018.1

Maple 2018

For Maple 2018.1, there are improvements in pdsolve's ability to solve PDE with boundary and initial conditions. This is work done together with E.S. Cheb-Terrab. The improvements include an extended ability to solve problems involving non-homogeneous PDE and/or non-homogeneous boundary and initial conditions, as well as improved simplification of solutions and better handling of functions such as piecewise in the arguments and in the processing of solutions. This is also an ongoing project, with updates being distributed regularly within the Physics Updates.

Solving more problems involving non-homogeneous PDE and/or non-homogeneous boundary and initial conditions

Example 1: Pinchover and Rubinstein's exercise 6.17: we have a non-homogeneous PDE, and boundary and initial conditions that are also non-homogeneous:

How we solve the problem, step by step:

Example 2: the PDE is homogeneous but the boundary conditions are not. We solve the problem through the same process, which means we end up solving a non-homogeneous PDE with homogeneous BC as an intermediate step:

How we solve the problem, step by step:
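In outline, the reduction to homogeneous boundary conditions is the classical one (a textbook sketch, not necessarily pdsolve's exact internal steps); for the heat equation on 0 < x < L:

```
u_t - k\,u_{xx} = f(x,t), \qquad u(0,t) = a(t), \quad u(L,t) = b(t);
w(x,t) := a(t) + \tfrac{x}{L}\bigl(b(t) - a(t)\bigr), \qquad v := u - w;
v(0,t) = v(L,t) = 0, \qquad v_t - k\,v_{xx} = f(x,t) - w_t(x,t).
```

The problem for v has homogeneous boundary conditions and a modified (in general non-homogeneous) source term, and can then be attacked by eigenfunction expansion.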

Example 3: a wave PDE with a source that does not depend on time:

How we solve the problem, step by step:

Example 4: Pinchover and Rubinstein's exercise 6.23: we have a non-homogeneous PDE and initial condition:


If we now make the functions f and g into specific mappings, we can compare pdsolve's solutions to the general and specific problems:

Here is what pdsolve's solution to the general problem looks like when taking into account the new values of f(x) and g(x,t):


Here is pdsolve's solution to the specific problem:


And the two solutions are equal:

Improved simplification in integrals, piecewise functions, and sums in the solutions returned by pdsolve

Example 1: exercise 6.21 from Pinchover and Rubinstein is a non-homogeneous heat problem. Its solution used to include unevaluated integrals and sums, but is now returned in a significantly simpler format.


Example 2: example 6.46 from Pinchover and Rubinstein is a non-homogeneous heat equation with non-homogeneous boundary and initial conditions. Its solution used to involve two separate sums with unevaluated integrals, but is now returned with only one sum and unevaluated integral.


More accuracy when returning series solutions that have exceptions for certain values of the summation index or a parameter

Example 1: the answer to this problem was previously returned with an incorrect expression for the exceptional value of the summation index; it is now given as it should be:


Example 2: the answer to exercise 6.25 from Pinchover and Rubinstein is now given in a much simpler format, with the special limit case for w = 0 calculated separately:


Improved handling of piecewise, eval/diff in the given problem

Example 1: this problem, which contains a piecewise function in the initial condition, can now be solved:


Example 2: this problem, which contains a derivative written using eval/diff, can now be solved:


References:

Pinchover, Y., and Rubinstein, J. An Introduction to Partial Differential Equations. Cambridge University Press, 2005.

Katherina von Bülow

## Removal of Periodic, Salt and Pepper Noises from...

by: Maple

In an attempt to explore the field of image processing, @Samir Khan and I created an application (download here) that demonstrates the removal of two types of noises from an image through frequency and spatial filtering.

Periodic noise and salt & pepper noise are two common types of image noise, usually caused by errors during image capture or data transmission. Periodic noise results in repetitive patterns being added onto the original image, while salt & pepper noise is the irregular appearance of dark pixels in bright areas and bright pixels in dark areas of the image. In this application, we artificially generate these noises and pollute a clean picture in order to demonstrate the removal techniques. (Fig 1: Picture of the Waterloo office, taken by Sophie Tan. Fig 2: Converted to greyscale for processing, with the two noises added.)

In order to remove periodic noise from the image, we apply a 2D Fourier transform to convert the image from the spatial domain to the frequency domain, where periodic noise appears as separate, discrete spikes that can be visually detected and easily removed. (Fig 3: Frequency domain of the magnitude of the image)

One way to remove salt and pepper noise is to apply a median filter to the image. In this application, we run a 3 by 3 kernel across the image matrix, replacing each entry with the median of the 9 elements under the kernel, so that the whole image is median-filtered.
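Here is a sketch of that median filter in Python (an illustration of the algorithm, not the application's Maple implementation; borders are simply left unchanged in this sketch):

```python
# 3x3 median filter: each interior pixel is replaced by the median of its
# 3x3 neighborhood; an isolated bright/dark outlier is discarded outright.
def median_filter_3x3(img):
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]  # borders are copied unchanged
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = sorted(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]  # median of the 9 sorted values
    return out

# A single "salt" pixel in a dark region is removed entirely:
img = [[10] * 5 for _ in range(5)]
img[2][2] = 255
print(median_filter_3x3(img)[2][2])  # 10
```

Because the median ignores extreme values, isolated salt/pepper pixels vanish while edges are preserved far better than with a simple averaging blur.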

Comparison of the image before and after noise removal: please refer to the application for more details on the implementation of the two removal techniques.

## A Beginner’s guide to using the DNN Classifier...

by: Maple 2018

Hello, everyone! My name’s Sophie and I’m an intern at Maplesoft. @Samir Khan asked me to develop a couple of demonstration applications using the DeepLearning package - my work is featured on the Application Center

I thought I’d describe two critical commands used in the applications – DNNClassifier() and DNNRegressor().

The DNNClassifier calls tf.estimator.DNNClassifier from the Tensorflow Python API. This command builds a feedforward multilayer neural network that is trained with a set of labeled data in order to perform classification on similar, unlabeled data.

The dataset used for training and validating the classifier has the type DataFrame in Maple. In the "Prediction of malignant/benign breast mass" example, the training set is a DataFrame with 32 columns in total, with column labels “ID Number”, “Diagnosis”, “radius”, “texture”, etc. Note that labeling the columns of the dataset is mandatory, as the neural network later needs to identify which feature column corresponds to which list of values.

Feature columns are what come between the raw input data and the classifier model; they are required by Tensorflow to specify how the input data should be transformed before being given to the model. Maple now supports three types of feature columns:

• NumericColumn, which represents real, numerical data,
• CategoricalColumn, which denotes categorical (ordinal) data,
• BucketizedColumn, which organizes continuous data into a discrete number of buckets with specified boundaries.

In this application, the input data consists of 30 real, numeric values that represent physical traits of a cell nucleus, computed from a digitized image of the breast mass. We create a list of NumericColumns by calling

```with(DeepLearning):
fc := [seq(NumericColumn(u, shape=[1]), u in cols[3..])]:```

where cols is a list of column labels and shape indicates that each data input is just a single numeric value.

When we create a DNNClassifier, we need to specify the feature columns (input layer), the architecture of the neural network (hidden layers), and the number of classes (output layer). Recall that the DNNClassifier builds a feedforward multilayer neural network; hence, when we call the function, we need to indicate how many hidden layers we want and how many nodes there should be in each layer. This is done by passing a list of positive integers as the parameter hidden_units. In the example, we did:

`classifier := DNNClassifier(fc, hidden_units=[20,40,20],num_classes=2):`

where we set 3 hidden layers with 20, 40, and 20 nodes respectively. In addition, there are 30 input nodes (i.e. the number of feature columns) and 1 output node (i.e. binary classification). The diagram below illustrates a simpler example, with an input layer of 3 nodes, 2 hidden layers with 7 and 5 nodes, and an output layer with 1 node. (Created using NN-SVG, https://github.com/zfrenchee/NN-SVG)

After we built the model, we can train it by calling

`classifier:-Train(train_data[3..32], train_data, steps = 256, num_epochs = 3, shuffle = true):`

where we

1. Give the training data (`train_data[3..32]`) and the corresponding labels (`train_data`) to the model.
2. Specify that the entire dataset will be passed to the model three times, with each pass consisting of 256 steps.
3. Specify that data batches for training will be created by randomly shuffling the tensors.

Now that the training process is complete, we can use the validation set to evaluate the effectiveness of our model.

`classifier:-Evaluate(test_data[3..32],test_data, steps = 32);`

The output indicates an accuracy of ~92.11% in this case. There are more metrics, like accuracy_baseline, auc, and average_loss, that help us decide if we need to modify the architecture for better performance.

We then build a predictor function that takes an arbitrary set of measurements as a DataSeries and returns a prediction generated by the trained DNN classifier.

`predictor := proc (ds) classifier:-Predict(Transpose(DataFrame(ds)), num_epochs = 1, shuffle = false) end proc;`

Now we can pass a DataSeries with 30 labeled rows to the predictor (recall that cols is a list of the column names):

```ds := DataSeries([11.49, 14.59, 73.99, 404.9, 0.1046, 8.23E-02, 5.31E-02, 1.97E-02, 0.1779, 6.57E-02, 0.2034, 1.166, 1.567, 14.34, 4.96E-03, 2.11E-02, 4.16E-02, 8.04E-03, 1.84E-02, 3.61E-03, 12.4, 21.9, 82.04, 467.6, 0.1352, 0.201, 0.2596, 7.43E-02, 0.2941, 9.18E-02], labels = cols[3..]);
predictor(ds);
```

The output indicates that the probability of this data belonging to the predicted class_id is ~90.79%. In other words, according to our model, the probability of this breast mass cell being benign is ~90.79%.

The use of the DNNRegressor is very similar (almost identical) to that of the classifier; the only significant difference is that while the classifier predicts discrete labels as classes, the regressor predicts a continuous quantitative result from the provided data (note that CategoricalColumn is still applicable). For more details about the basic usage of the DNNRegressor, please refer to Predicting the burnt area of a forest fires with DNN Regressor.

## Games with pseudo-fractals

by: Maple

Mukhametshina Liya

Games with pseudo-fractals

Homothety_Fractals.mw

## Animated Picture on the Coordinate Plane

by: Maple

Aleksandrov Denis,
secondary school #57 of Kazan

_ANIMATED_PICTURE_ON_THE_COORDINATE_PLANE_Aleksandrov_D..mw

## World Cup 2018 simulation

vv, if you could please help adjust your code: I've adjusted the start of the eurocup code to match the World Cup, but I haven't decoded your coding and probably won't have time before the World Cup starts. I've got as far as adding the teams, flags and ratings of each team.

Let me just say that while copying and pasting the flag bytes into the code, Maple became a bitch to work with (pardon my language), and I became so frustrated because my laptop locked up twice. The more I worked with Maple, the slower it got, until it froze right up. Copying and pasting large data in Maple is almost near IMPOSSIBLE... perhaps this could be a side conversation.

Here's the world cup file so far.

2018_World_Cup.mw

Fixed flag sizes, a couple of other fixes in other stats, and some additional stats added:
2018_World_Cup7.mw

## ANIMATED image of cascade of opening matryoshkas

by: Maple

E.R. Ibragimova

ИбрагимоваЭ.Р_03_Казань_Матрёшки.mws

## Flask with bubbles

by: Maple

Simulation of the animated image "Flask with bubbles":

FLASC_Murtazin_S.A..mw

## A limit from an undergraduate competition

by: Maple

At a recent undergraduate competition the students had to compute the following limit:

```Limit( n * Diff( (exp(x)-1)/x, x$n ), n=infinity ) assuming x<>0;
```

Maple is able to compute the symbolic n-fold derivative, and I hoped that the limit would be computed at once.

Unfortunately, it is not so easy.
Maybe someone will find a more straightforward way.

```restart;
f := n * diff( (exp(x)-1)/x, x$n );
limit(%, n=infinity);
simplify(%) assuming x>0;
```

So, Maple cannot compute the limit directly.
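For context (this is not in the original worksheet): since (exp(x)-1)/x = Int(exp(x*u), u = 0..1), the n-fold derivative equals Int(u^n*exp(x*u), u = 0..1), and n times this integral tends to exp(x) as n grows. A quick numerical check in Python, using the substitution t = u^(n+1) and a midpoint rule:

```python
import math

# Numeric check that n * Int(u^n * exp(x*u), u = 0..1) -> exp(x) as n grows.
# Substituting t = u^(n+1) removes the sharp peak of u^n near u = 1:
#   n * I(n) = n/(n+1) * Int(exp(x * t^(1/(n+1))), t = 0..1).
def n_times_integral(x, n, steps=100000):
    total = sum(math.exp(x * ((k + 0.5) / steps) ** (1.0 / (n + 1)))
                for k in range(steps))
    return n / (n + 1) * total / steps

x = 2.0
for n in (100, 1000, 10000):
    print(n, n_times_integral(x, n))  # approaches exp(2) = 7.389...
```

The error behaves like O(1/n), consistent with the limit exp(x) that the worksheet obtains below.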

```convert(f, Int) assuming n::posint;
J := simplify(%) assuming n::posint;
L := convert(J, Int) assuming n::posint;
L := subs(_k1=u, L);
```

Now it should be easy, but Maple needs help.

```with(IntegrationTools):
L1 := Change(L, u^n = t, t) assuming n::posint;
limit(L1, n=infinity);  # OK
```

Note that the limit can also be computed using an integration by parts, but Maple refuses to finalize:

```Parts(L, exp(u*x)) assuming n::posint;
simplify(%);
limit(%, n=infinity);
value(%);  # we are almost back!
```

## The use of manipulators as multi-axis CNC machines...

by: Maple 17

This post can be considered a continuation of the theme “Determination of the angles of the manipulator with the help of its mathematical model. Inverse problem”.
Here we consider the use of manipulators as multi-axis CNC machines: a three-link manipulator with 5 degrees of freedom. In these examples, one of the restrictions on the movement of the manipulator links is that the position of the last link coincides with the normal to the surface along the entire trajectory of the working point.
That is, we mathematically transform a system with many degrees of freedom into an analogue of a lever mechanism with one degree of freedom, so that we can do the necessary work in a way convenient to us.
This approach seems fully applicable directly to multi-axis CNC machines.

(In the texts of the programs, the normalization is carried out with respect to the coordinates of the last point, so that the length of the integration interval coincides with the path length.)

MAN_3_5_for_MP.mw
MAN_3_5_for_MP_TR.mw