## How do I use dsolve to numerically solve a differe...

Asked by:

The following is a simple example of what I would like to do:

M := Array(1 .. 10, 1 .. 2);

for i to 10 do
M[i, 1] := i;
M[i, 2] := 3*i;
end do;

Mt := Interpolation:-LinearInterpolation(M);
E := t -> Mt(t);
diffeq := D(C)(t) = E(t);
dsolve({diffeq, C(0) = 0}, {C(t)}, numeric);
Error, (in dsolve/numeric/process_input) unknown Interpolation:-LinearInterpolation([Vector(10, [1.,2.,3.,4.,5.,6.,7.,8.,9.,10.], datatype = float[8], attributes = [source_rtable = (Vector(10, [1.,2.,3.,4.,5.,6.,7.,8.,9.,10.], datatype = float[8], attributes = [source_rtable = (Array(1..10, 1..2, [[1.,3.],[2.,6.],[3.,9.],[4.,12.],[5.,15.],[6.,18.],[7.,21.],[8.,24.],[9.,27.],[10.,30.]], datatype = float[8]))]))])],Vector(10, [3.,6.,9.,12.,15.,18.,21.,24.,27.,30.], datatype = float[8], attributes = [source_rtable = (Array(1..10, 1..2, [[1.,3.],[2.,6.],[3.,9.],[4.,12.],[5.,15.],[6.,18.],[7.,21.],[8.,24.],[9.,27.],[10.,30.]], datatype = float[8]))]),uniform = true,verify = false) present in ODE system is not a specified dependent variable or evaluatable procedure
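A common workaround (a sketch, based on dsolve/numeric's documented `known` option; the numeric guard inside the procedure is an assumption about how to shield the interpolant from dsolve's symbolic preprocessing):

```maple
M := Array(1 .. 10, 1 .. 2):
for i to 10 do
    M[i, 1] := i;
    M[i, 2] := 3*i;
end do:
Mt := Interpolation:-LinearInterpolation(M):

# Return unevaluated for non-numeric arguments so that dsolve's symbolic
# preprocessing never tries to evaluate the interpolant at a symbol.
E := proc(t)
    if not type(t, numeric) then
        return 'procname'(t);
    end if;
    Mt(t);
end proc:

diffeq := D(C)(t) = E(t):
# Declare E as a known (externally defined) function
sol := dsolve({diffeq, C(0) = 0}, numeric, known = E):
sol(2.5);
```

With `known = E`, dsolve treats E as an evaluatable black-box procedure instead of trying to parse the Interpolation object symbolically.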

## How to generate a ColumnGraph with bars centered o...

Asked by:

Hello,

I would like to generate a ColumnGraph such that the bars are centered over 1, 2, and 3. For example, I have a vector:

x:= Vector([1,2,3]);

When you execute ColumnGraph(x), you get the first bar centered over 0.375 instead of 1, the second bar centered over 1.375 instead of 2, and so on.

Thanks for your help.
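One possible fix (a sketch, assuming ColumnGraph's `width` and `offset` options behave as documented; the default bar width of 0.75 is inferred from the 0.375 centre you observed):

```maple
with(Statistics):
x := Vector([1, 2, 3]):
# By default the first bar spans 0 .. 0.75, so its centre sits at 0.375.
# Shifting every bar right by 1 - 0.75/2 = 0.625 centres them over 1, 2, 3.
ColumnGraph(x, width = 0.75, offset = 0.625);
```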

## What causes multiple ">" symbols in one block...

Asked by:

I have seen this quite a bit in blocks of code: the > symbol seems to appear erratically. I don't know how to reproduce it specifically. Does it mean something? I would post the worksheet, but it will not run without the package.

## How to make (if possible) this procedure ThreadSaf...

Asked by:

Hello

I have the following procedure that uses the Lie Derivatives of a vector field to build a set of equations.

LieDerList:=proc(h::symbol,f::list,vars::list)
description "This function returns the system of equations based on the Lie derivative.":
local i,n:=numelems(vars),L:=Array(0..n):
L[0]:=h:
for i from 1 to n do
L[i]:=inner(f,map((a,b) -> diff(b,a),vars,L[i-1])):
end do:
return(zip((w,v)->simplify(w-v),[seq(L)],[seq](cat(h,i),i=0..n))):
end proc:


Below is an example of how to call the procedure.

I used CodeTools:-ThreadSafetyCheck to check LieDerList itself and all the procedures used within it, and no problems were reported. However, when I try to run

LieEq4:=Threads:-Map(w->LieDerList(x,w,[x,y,z]),models4):

where models4 is a list of 1765 elements, Maple returns "Error, (in factors) attempting to assign to LinearAlgebra:-Modular:-Create which is protected". If I change Threads to Grid, there is no problem at all.

What am I overlooking? Is there a method to ensure the procedure is thread-safe?

Many thanks.

PS. I found one problem: inner, which belongs to the LinearAlgebra package, is not thread-safe.
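If inner is indeed the culprit, one possible rewrite (a sketch, untested under Threads; it replaces the inner call with an explicit add, though simplify itself may still have its own thread-safety issues):

```maple
LieDerList := proc(h::symbol, f::list, vars::list)
    description "Returns the system of equations based on the Lie derivative.":
    local i, k, n := numelems(vars), L := Array(0 .. n):
    L[0] := h:
    for i from 1 to n do
        # explicit sum instead of inner(), which relies on LinearAlgebra internals
        L[i] := add(f[k]*diff(L[i-1], vars[k]), k = 1 .. n):
    end do:
    return zip((w, v) -> simplify(w - v), [seq(L)], [seq](cat(h, i), i = 0 .. n)):
end proc:
```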

## reproducible server crash (Kernel connection has ...

Asked by:

This is not new in Maple 2024; it also happens in Maple 2023. It causes the famous "kernel connection has been lost" error each time.

I think many of the "kernel connection has been lost" problems I see might be related. It could be a memory problem in this case. I wonder if someone is able to track down what causes it. Is it expected that server.exe crashes on such input?

ps. Reported to Maplesoft support.

 > interface(version);

 > Physics:-Version();

 > restart;

 > integrand:=(2*x^2022+1)/(x^2023+x);

 > int(integrand,x);

Download reproducible_server_crash_test.mw

For reference, another computer algebra system returns a result for this integral instantly and with no crash:

My PC is windows 10, with 128 GB RAM.

## new error (in int/gparse/gmon) too many levels of ...

Asked by:

Maybe someone can find what causes this new internal error in Maple 2024.

I have already reported it to Maplesoft. It does not happen in Maple 2023. Both worksheets are attached.

 > interface(version);

 > Physics:-Version();

 > integrand:=(d*x)^m/(a+b*arctanh(c*x^n))^2;

 > int(integrand,x);

Error, (in int/gparse/gmon) too many levels of recursion

 > interface(version);

 > integrand:=(d*x)^m/(a+b*arctanh(c*x^n))^2;

 > int(integrand,x);

Download int_gparse_gmon_NO_error_maple_2023.mw

## Option to display or not display a default message...

Asked by:

I set up my package to display a message when it is loaded. It is quite convenient, but I don't need it all the time. Obviously I can comment it out ("#") in the code to hide it permanently. I was wondering if there is a way to optionally turn it on/off. Something along the lines of:

with(RationalTrigonometry,false) or with(RationalTrigonometry)[false]....

Or put something in the .ini file to set the default behaviour.

restart:with(RationalTrigonometry):

"Default global settings:-
  GeomClr = "Blue",
  Prntmsg = true,
  Prjpsn = 3 (can be set to 1),
  Normalgpt = 1 (or set to 0),
  Metric is a 3 x 3 symmetric matrix, defaults to the identity matrix"
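One common pattern (a hypothetical sketch; the module name MyPkg and the global flag MyPkg_Quiet are invented for illustration) is to guard the message inside ModuleLoad with a global flag that you can preset in maple.ini:

```maple
MyPkg := module()
    option package;
    export Greet;
    local ModuleLoad;

    Greet := proc() "hello" end proc;

    # Runs automatically when the package is loaded from a library archive
    ModuleLoad := proc()
        global MyPkg_Quiet;
        if not (assigned(MyPkg_Quiet) and MyPkg_Quiet = true) then
            printf("Default global settings: GeomClr = Blue, Prntmsg = true, ...\n");
        end if;
    end proc;
end module:
```

Putting MyPkg_Quiet := true: in maple.ini (or executing it before the with call) would then suppress the message, while leaving the default behaviour unchanged otherwise.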



## How to multiply two row vector component wise?...

Asked by:

Suppose I have a row vector R = [R1, R2, R3] and a column vector C = [C1, C2, C3]. I need a multiplication like the following:

RC = [ R1*C, R2*C, R3*C ], so that RC will be a 3 x 3 matrix.
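What you describe is the outer product of the column vector with the row vector; a minimal sketch (the entries R1..R3, C1..C3 are left symbolic):

```maple
R := Vector[row]([R1, R2, R3]):
C := Vector[column]([C1, C2, C3]):

# The outer product C . R is the 3 x 3 matrix with entry (i, j) = C[i]*R[j],
# i.e. its j-th column is R[j] times the column vector C.
RC := C . R;

# Equivalent explicit construction:
Matrix(3, 3, (i, j) -> C[i]*R[j]);
```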

## DeepLearning example in Maple 2024...

Asked by:

Recognizing Handwritten Digits with Machine Learning

Introduction

Using the DeepLearning  package, this application trains a neural network to recognize the numbers in images of handwritten digits. The trained neural network is then applied to a number of test images.

The training and testing images are a very small subset of the MNIST database of handwritten digits; these consist of 28 x 28 pixel images of handwritten digits, ranging from 0 to 9. A sample image for the digit zero is included in the worksheet.

Ultimately, this application generates a vector of weights for each digit; think of the weights as a marking grid for a multiple-choice exam. When reshaped into a matrix, the weight vector for the digit 0 might look like the plot shown in the worksheet.

When attempting to recognize the number in an image

 • If a pixel with a high intensity lands in the red area, the evidence is high that the handwritten digit is zero
 • Conversely, if a pixel with a high intensity lands in the blue area, the evidence is low that the handwritten digit is zero

The DeepLearning package is a partial interface to TensorFlow, an open-source machine learning framework. To learn about the machine learning techniques used in this application, please consult these references (the next section, however, features a brief overview):

 •
 •

Notes

Introduction

We first build a computational (or dataflow) graph. Then, we create a TensorFlow session to run the graph. TensorFlow computations involve tensors; think of tensors as multidimensional arrays.

Images

Each 28 x 28 image is flattened into a list with 784 elements. Once flattened, the training images are stored in a tensor x, with a shape of [none, 784]. The first index is the number of training images ("none" means that we can use an arbitrary number of training images).

Labels

Each training image is associated with a label.

 • Labels are a 10-element list, where each element is either 0 or 1
 • All elements apart from one are zero
 • The location of the non-zero element is the "value" of the image

So for an image that displays the digit 5, the label is [ 0,0,0,0,0,1,0,0,0,0]. This is known as a one-hot encoding.
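The one-hot encoding described above can be generated directly; a small sketch (the helper name one_hot is invented here):

```maple
one_hot := proc(d::nonnegint)
    # 10-element list: 1 in the position for digit d (0-based), 0 elsewhere
    [seq(`if`(i = d, 1, 0), i = 0 .. 9)]
end proc:

one_hot(5);   # [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
```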

All the labels are stored in a tensor y_ with a shape of [none, 10].

Training

The neural network is trained via multinomial logistic regression (also known as softmax).

Step 1

Calculate the evidence that each image is in a selected class. Do this by performing a weighted sum of the pixel intensities for the flattened image:

evidence[i] = Σ_j ( W[j, i] * x[j] ) + b[i]

where

 • W[j, i] and b[i] are the weight and the bias for digit i and pixel j. Think of W as a matrix with 784 rows (one for each pixel) and 10 columns (one for each digit), and b as a vector with 10 entries (one for each digit)
 • xj is the intensity of pixel j

Step 2

Normalize the evidence into a vector of probabilities with softmax:

y[i] = exp(evidence[i]) / Σ_k exp(evidence[k])

Step 3

For each image, calculate the cross-entropy of the vector of predicted probabilities and the actual probabilities (i.e. the labels):

H(y_, y) = − Σ_i y_[i] * log(y[i])

where

 • y_ is the true distribution of probabilities (i.e. the one-hot encoded label)
 • y is the predicted distribution of probabilities

The smaller the cross entropy, the better the prediction.

Step 4

The mean cross-entropy across all training images is then minimized to find the optimum values of W and b

Testing

For each test image, we will generate 10 ordered probabilities that sum to 1. The location of the highest probability is the predicted value of the digit.

Miscellaneous

This application consists of

 • this worksheet
 • and a very small subset of images from the MNIST handwritten digit database

in a single zip file. The images are stored in folders; the folders should be extracted to the same location as this worksheet.

Load Packages and Define Parameters

 > restart: with(DeepLearning): with(DocumentTools): with(DocumentTools:-Layout): with(ImageTools):
 > LEARNING_RATE := 0.01: TRAIN_STEPS   := 40:

Number of training images to load for each digit (maximum of 100)

 > N := 22:

Number of labels (there are 10 digits, so this is always 10)

 > L := 10:

Number of test images

 > T := 50:

Import Training Images and Generate Labels

Import the training images, where images[n] is a list containing the images for digit n.

 > path := "C:/Users/Wilfried/Documents/Maple/Examples/ML/":
   for j from 0 to L - 1 do
       images[j] := [seq(Import(cat(path, j, "/", j, " (", i, ").PNG")), i = 1 .. N)];
   end do:

Generate the labels for digit j, where label[n] is the label for image[n].

 > for j from 0 to L - 1 do    labels[j] := ListTools:-Rotate~([[1,0,0,0,0,0,0,0,0,0]\$N],-j)[]: end do:

Display training images

 > Embed([seq(images[i-1], i = 1 .. L)]);

Training

Flatten and collect images

 > x_train := convert~([seq(images[i - 1][], i = 1 .. L)], list):

Collect labels

 > y_train := [seq(labels[i - 1], i = 1 .. L)]:

Define placeholders x and y_ to feed the training images and labels into

 > SetEagerExecution(false): x  := Placeholder(float[4], [none, 784]): y_ := Placeholder(float[4], [none, L]):

Define weights and bias

 > W := Variable(Array(1 .. 784, 1 .. L), datatype = float[4]): b := Variable(Array(1 .. L), datatype = float[4]):

Define the classifier using multinomial logistic regression

 > y := SoftMax(x.W + b):

Define the cross-entropy (i.e. the cost function)

 > cross_entropy := ReduceMean(-ReduceSum(y_ * log(y), reduction_indices = [1])):

Get a Tensorflow session

 > sess := GetDefaultSession():

Initialize the variables

 > init := VariablesInitializer(): sess:-Run(init):

Define the optimizer to minimize the cross entropy

 > optimizer := Optimizer(GradientDescent(LEARNING_RATE)): training  := optimizer:-Minimize(cross_entropy):

Repeat the optimizer many times

 > for i from 1 to TRAIN_STEPS do
       sess:-Run(training, {x in x_train, y_ in y_train}):
       if i mod 200 = 0 then
           print(cat("loss = ", sess:-Run(cross_entropy, {x in x_train, y_ in y_train})));
       end if:
   end do:

Import Test Images and Predict Numbers

Randomize the order of the test images.

 > i_rand := combinat:-randperm([seq(i, i = 1 .. 100)]);
 (6.1)

Load and flatten test images.

 > path := "C:/Users/Wilfried/Documents/Maple/Examples/ML/test_images":
   x_test_images := [seq(Import(cat(path, "/", "test (", i, ").png")), i in i_rand[1 .. T])]:
   x_train := convert~(x_test_images, list):

For each test image, generate 10 probabilities, one for each digit from 0 to 9

 > pred := sess:-Run(y, {x in x_train})
 (6.2)

For each test image, find the predicted digit associated with the greatest probability

 > predList := seq( max[index]( pred[i, ..] ) - 1, i = 1 .. T )
 (6.3)
 > Val := Vector(10, 0): Val_mean := Vector(10, 0):
   for k from 1 to 10 do
       L1 := []:
       for i from 1 to 50 do
           if predList[i] = k - 1 then L1 := [op(L1), L[i]] end if:
       end do:
       Val(k) := evalf(L1, 3):
       Val_mean(k) := Statistics:-Mean(Array(L1)):
   end do:
   Val, Val_mean
 >
 (6.4)

Consider the first test image

 > Embed(x_test_images[1])

The ten probabilities associated with this image are

 > pred[1, ..]
 (6.5)

Confirm that the probabilities add up to 1

 > add(i, i in pred[1, ..])
 (6.6)

The maximum probability occurs at this index

 > maxProbInd := max[index](pred[1, ..])
 (6.7)

Hence the predicted number is

 > maxProbInd - 1
 (6.8)
 >
 >

We now display all the predictions

 > T1 := Table(Row(seq(predList[k], k = 1 .. 25)),
              Row(seq(predList[k], k = 26 .. 50))):
   InsertContent(Worksheet(T1)):
 Visualize Weights

I have problems running the file with Maple 2024. It runs fine with Maple 2020.2 (except the very last part, which is not essential). The problem occurs at the SoftMax command, even if I use Softmax. It seems to be a Python conversion problem in Maple 2024. Please let me know what the remedy is. You need to modify the data path because it is set to my computer.

Wilfried

Download HDR.mw

## why Maple likes to write exp(2*a) as (exp(a))^2 ?...

Asked by:

This is just cosmetic, but it looks ugly to me. For some reason Maple converts exp(2*a) to (exp(a))^2 under certain operations such as expand:

expr:=exp(2*a);
expand(%);
simplify(%);
expand(%)

.

This happens in a worksheet under typesetting level extended or standard.

Is there a specific reason why Maple likes to rewrite exp(2*a) as (exp(a))^2, and is there a way to tell it not to do that?
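There may be no way to stop expand from doing this, but combine with the exp rule reverses it; a minimal sketch:

```maple
expr := exp(2*a):
e2 := expand(expr);        # (exp(a))^2
combine(e2, exp);          # exp(2*a) again
latex(combine(e2, exp));   # {\mathrm e}^{2 a}
```

Applying combine(..., exp) just before calling latex would keep the generated LaTeX in the exp(2*a) form.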

PS. It is actually a little more than cosmetic: it affects the generated LaTeX.

latex(expr)
{\mathrm e}^{2 a}

latex(expand(expr))
\left({\mathrm e}^{a}\right)^{2}


Maple 2024 on windows 10

## The 'gridlines' option deactivates when using the ...

Asked by:

Hi,

I successfully animated my trigonometric problem on the wheel. Just a few details: 1) The 'gridlines' option deactivates when using the 'tickmarks' option. 2) The animation takes a long time to compute. Any suggestions? Thank you for your advice on optimizing this animation.

RoueAnimation.mw
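On point 1), passing gridlines explicitly together with tickmarks appears to restore them; a minimal sketch on a plain plot (the tick positions here are illustrative, not taken from the worksheet):

```maple
plot(sin(x), x = 0 .. 2*Pi,
     tickmarks = [[seq(i*Pi/2, i = 0 .. 4)], default],
     gridlines = true);
```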

## Error, (in simplify/trig/do/1) expression independ...

Asked by:

Is this internal error expected? Why does it happen? Reported to Maplesoft just in case.

 > interface(version);

 > restart;

 > e:= RootOf(csc(_Z)); simplify(e);

Error, (in simplify/trig/do/1) expression independent of, _Z

 > e:= RootOf(csc(x)); simplify(e);

Error, (in simplify/trig/do/1) expression independent of, _Z

Download simplify_Z_error_maple_2024_march_22_2024.mw


## how to disable the new Scrollable Matrices in Mapl...

Asked by:

I do not like this new feature, called Scrollable Matrices, at all:

https://mapleprimes.com/maplesoftblog/224789-Discover-Whats-New-In-Maple-2024

Is there a way to turn it off?

In Maple 2024, when I display a wide matrix, it no longer wraps to fit a narrower worksheet window as it did in Maple 2023. I prefer the 2023 behavior.

A:=Matrix(3,4,{(1, 1) = (y(x) = RootOf(-Intat(1/(_a^(3/2)+1),_a = _Z+x)+x+Intat(1/
(_a^(3/2)+1),_a = 0))), (1, 2) = "explicit", (1, 3) = "", (1, 4) = false, (2, 1
) = (y(x) = -1/2+1/2*I*3^(1/2)-x), (2, 2) = "explicit", (2, 3) = "", (2, 4) =
false, (3, 1) = (x = -2/3*ln(((y(x)+x)^(3/2))^(1/3)+1)+1/3*ln(((y(x)+x)^(3/2))^
(2/3)-((y(x)+x)^(3/2))^(1/3)+1)+2/3*3^(1/2)*arctan(1/3*(2*((y(x)+x)^(3/2))^(1/3
)-1)*3^(1/2))+1/9*3^(1/2)*Pi), (3, 2) = "implicit", (3, 3) = "", (3, 4) = true}
,datatype = anything,storage = rectangular,order = Fortran_order,shape = []);


Screen shot on Maple 2023

Screen shot on Maple 2024