Applications, Examples and Libraries

Share your work here

Hi again all,

I was trying to be helpful at

mathforums.com

and made these two Maple files.  

simple_square_root_loop.mw

simple_square_root_loop.pdf

Hope that helps.

Maple is the best :-)

goodbye for now.

 

Matthew


Under the name mmcdara (unfortunately inaccessible since the major July 2025 MaplePrimes outage, and probably lost forever; God rest his soul), I published a post two years ago about the Multivariate Normal Distribution.

The current post continues in the same vein and presents the construction of a few new multivariate random variables (MRVs for short): Multinomial (see for instance this recent question), Dirichlet, Categorical, and related compound distributions.
I advise interested readers to take a quick look at these names on Wikipedia (more specific references are given at the top of the worksheet).

As I explained (in fact as my alter ego did) in Multivariate Normal Distribution, the Statistics package is limited to univariate random variables, and thus implementing MRVs requires a little cunning.
Here is a list of a few problems you face:

  • Whereas the expectation (sometimes named "mean") of a univariate random variable is a number or an expression, the expectation of an MRV is a vector (or a list, an n-tuple, ...) of numbers or expressions.

So far, so good, except that the Mean attribute of Distribution can only be a scalar quantity. So if you want to assign a vector to Mean, you have to encode it in some way and do something like Decode(Mean(My_MRV)) to get the expectation back in vector form (see the sketch after this list).
 

  • The Variance case is even more tricky, because the variance of an MRV is a matrix.
     
  • Beyond this, some very useful attributes like ParentName and Parameters cannot be instantiated in the definition of user random variables (whether they are MRVs or not), implying here again a bit of gymnastics in order to, if not really instantiate these attributes, at least be able to retrieve them when needed.
     
  • Finally, last but not least, RandomSample is not appropriate for sampling MRVs, for reasons explained in the attached worksheet.
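To give the flavour of the encoding trick for the Mean attribute, here is a minimal sketch (MVMean and Decode are illustrative names of my own, and the assumption that the Mean attribute passes an inert function through unchanged is mine; the worksheet's actual procedures are more elaborate):

    with(Statistics):
    # Store the vector-valued expectation inside an inert function so that
    # it fits in the scalar Mean attribute of a user-defined Distribution.
    MyMRV := Distribution(
        PDF  = (t -> piecewise(0 <= t and t <= 1, 1, 0)),  # dummy univariate PDF
        Mean = 'MVMean'(n*p[1], n*p[2], n*p[3])            # coded vector expectation
    ):
    X := RandomVariable(MyMRV):
    Decode := m -> Vector([op(m)]):   # unpack the coded attribute
    Decode(Mean(X));                  # Vector([n*p[1], n*p[2], n*p[3]])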


The file below contains more than 20 procedures enabling the definition of the studied MRVs, the decoding of the coded attributes, visualization (which is not that immediate, because the supports of the MRVs I focus on are simplexes), parameter estimation from empirical observations (frequentist and Bayesian points of view), and so on.

Multinomial_Dirichlet_and_so_on.mw

Nevertheless, there is still a lot missing, but at some point I believe we need to decide that the work is over.

 


The Summer Issue of Maple Transactions has been published.  There are articles from a range of interests: research, education, and personal stories.

Have a look, and I hope you find something of value in the issue.

 

We are pleased to announce that the registration for the Maple Conference 2025 is now open!

Like the last few years, this year’s conference will be a free virtual event. Please visit the conference page for more information on how to register.

This year we are offering a number of new sessions, including more product training options, and an Audience Choice session.
Also included in this year's registration is access to an in-depth Maple workshop day presented by Maplesoft's R&D members following the conference.  You can find an overview of the program on the Sessions page. Those who register before September 14th, 2025 will have a chance to vote for the topics they want to learn more about during the Audience Choice session.

We hope to see you there!

Hi,

look at this Maple code.

short_list_prime_factorization_fun.mw

short_list_prime_factorization_fun.pdf

Have a good day

Matthew

Hi,

check out this Maple code

positive_odd_integer_factorization_data.pdf

positive_odd_integer_factorization_data.mw

that is all

Regards,

Matthew

Hi,

have some Maple code to share.

prime_triplet_0_4_6.mw

prime_triplet_0_4_6.pdf
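In case you don't want to open the attachments: judging by the file name, the idea is prime triplets with offsets (0, 4, 6), i.e. p, p+4 and p+6 all prime. A minimal sketch of such a loop (my reconstruction, not the attached code):

    # Find the first few prime triplets (p, p+4, p+6) using isprime().
    count := 0:
    for p from 3 by 2 while count < 10 do
        if isprime(p) and isprime(p + 4) and isprime(p + 6) then
            count := count + 1;
            print([p, p + 4, p + 6]);
        end if;
    end do: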

Enjoy

Matthew

ps Prime numbers are fun

see https://t5k.org/

 

Hi,

Don't laugh.

Some of you are not familiar with Wagstaff prime numbers

see Wolfram Math World

also, this Maple code is essentially a loop and the isprime() function

for your edification


have a look
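Here is a minimal sketch of such a loop (my reconstruction, not the attached code); Wagstaff numbers have the form (2^p + 1)/3 for odd prime p:

    # Test Wagstaff numbers (2^p + 1)/3 for odd primes p, using isprime().
    for p from 3 by 2 to 200 do
        if isprime(p) and isprime((2^p + 1)/3) then
            printf("p = %d gives the Wagstaff prime %d\n", p, (2^p + 1)/3);
        end if;
    end do: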

just wanted to contribute my two cents to the Maple community

good day

We are excited to announce that the Maple Conference will be held November 5-7, 2025!

Please join us at this free virtual event as it will be an excellent opportunity to meet other members of the Maple community, get the latest news about our products, and hear from the experts about the challenges and opportunities that our technology brings to teaching, learning, and research. More importantly, it's a chance for you to share the work you've been doing with Maple and Maple Learn. 

The Call for Participation is now open. We are inviting submissions of presentation proposals on a range of topics related to Maple, including Maple in education, algorithms and software, and applications. We also encourage submission of proposals related to Maple Learn. 

You can find more information about the themes of the conference and how to submit a presentation proposal at the Call for Participation page. Applications are due July 25, 2025.

After the conference, all accepted presenters and invited speakers will be invited to submit related content to the Maple Transactions journal for consideration.

Registration for attending the conference will open in July.  Watch for further announcements in the coming weeks.

We hope all of you in the Maple Primes community will join us for this event!

Kaska Kowalska
Contributed Program Co-Chair

Thanks to @salim-barzani, I became interested in the Laplace Adomian Decomposition method for solving partial differential equations. It produces a sum of analytical components that converge to the exact solution. It is probably not that efficient for numerical solutions, but the possibility of finding an exact solution to nonlinear PDEs, by finding a formula for the components and therefore the infinite sum, is interesting.

The version here covers many of the common forms of PDEs, but is still a work in progress. The method for finding the Adomian polynomials could be improved by using one of the recurrence formulas in the literature. Extensions to more PDE forms, ODEs, etc. are possible, and I will attempt some of these before uploading an improved version to the Application Center. In the meantime, feedback about bugs, usability, etc. is appreciated.
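To convey the flavour of the decomposition for readers who have not seen it, here is a toy sketch for a simple ODE (my illustration, not procedure LAD; for a first-order equation, the inversion of the time derivative, done via the Laplace transform in LAD, reduces to integration from 0 to t):

    # Solve u'(t) = u(t)^2, u(0) = 1 (exact solution 1/(1-t)) by Adomian
    # decomposition: u = Sum(u[n]), with u^2 expanded in Adomian polynomials A[n].
    Nmax := 8:
    u[0] := 1:                        # from the initial condition
    for n from 0 to Nmax - 1 do
        # A[n] is the coefficient of lambda^n in (Sum(lambda^k*u[k]))^2:
        A[n] := coeff(expand(add(lambda^k*u[k], k = 0 .. n)^2), lambda, n);
        u[n + 1] := int(A[n], t);     # components vanish at t = 0
    end do:
    add(u[k], k = 0 .. Nmax);         # 1 + t + t^2 + ... + t^8, a partial sum of 1/(1-t)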

Laplace Adomian Decomposition method to solve partial differential equations.
 D.A. Harrington, June 2025, v 1.16.
 Procedure LAD is in the startup code region.

Arguments:

1. Partial differential equation in one "time" variable and several other variables (called spatial variables here). Specified as an equation, or an expression implicitly equal to zero.
Comprising: (1) one time-derivative term of order m (denoted L), (2) linear terms that are derivatives in the spatial variables (R), (3) nonlinear terms with derivatives in the spatial variables (N). Together, (1)-(3) form a multivariate polynomial in the dependent variable, its derivatives, and any parameters. (4) inhomogeneous terms in the time and spatial variables that do not contain the dependent variable or its derivatives (G). The general form is L = R+N-G.
Any symbolic parameters are assumed to be real. Any numeric coefficients or parameters should be of type algebraic. Floating point values will be converted to rationals with convert(value, rational).

2. Function to solve for, e.g., u(x, y, t).

3. m initial condition(s) (at time zero). For m > 1, these must be in a list, and are the values of the function and its successive derivatives with respect to time, evaluated at time zero.

4. Name of the time variable.

5. Number of iterations.

6. (optional) Order for a fractional derivative, specified as fracorder = name or fracorder = numeric value (floating point values are converted to type rational). The permissible values alpha are related to the value of m: m-1 < alpha and alpha <= m, i.e., an mth-order time derivative is specified in the pde and m initial conditions are provided, as well as the value of alpha in the correct range. A symbolic alpha is assumed to be in this range.

infolevel[LAD] may be set to different values to print out additional information, as follows; greater values include the information from smaller ones.

1. The nonlinear expansion variables (those in the Adomian polynomials).

2. The different parts of the pde (L, R, N, G) in jet notation.

3. Progress indication: the time at each iteration.

4. Values of the components of the solution as they are produced (one per iteration).

restart

It may be useful to load the Physics package if derivatives contain abs. LAD converts these to a form without abs but with the conjugate of the dependent variable. The Physics package affects the internal simplifications and may affect the form of the output or the running time.

PDEtools:-declare(U(x, t), quiet)

Example from A-M. Wazwaz, Applied Mathematics and Computation 111 (2000) 53. Sec. 4.

pde := diff(U(x, t), t)+U(x, t)^2*(diff(U(x, t), x)); inx := 3*x

diff(U(x, t), t)+U(x, t)^2*(diff(U(x, t), x))

3*x

infolevel[LAD] := 2

approx := LAD(pde, U(x, t), inx, t, 10)

LAD: L = U[t]; R = 0; N = -U^2*U[x]; G = 0.

LAD: nonlinear expansion variables (conjugated functions denoted by C) are: [U, U[x]]

175692092397588*t^10*x^11-5650915252554*t^9*x^10+184670433090*t^8*x^9-6155681103*t^7*x^8+210450636*t^6*x^7-7440174*t^5*x^6+275562*t^4*x^5-10935*t^3*x^4+486*t^2*x^3-27*t*x^2+3*x

By extrapolating to an infinite number of terms, we can find an exact solution. (Wazwaz calculates the coefficient of x^4*t^3 incorrectly and deduces an incorrect exact solution.) Multiplying by t gives a series in y = x*t for which guessgf can use the coefficients to find the exact solution.

algsubs(x*t = y, expand(approx*t)); [seq(coeff(%, y, i), i = 0 .. degree(%, y))]; gfun:-guessgf(%, y)[1]; exact := (eval(%, y = x*t))/t

175692092397588*y^11-5650915252554*y^10+184670433090*y^9-6155681103*y^8+210450636*y^7-7440174*y^6+275562*y^5-10935*y^4+486*y^3-27*y^2+3*y

[0, 3, -27, 486, -10935, 275562, -7440174, 210450636, -6155681103, 184670433090, -5650915252554, 175692092397588]

6*y/(1+(1+36*y)^(1/2))

6*x/(1+(36*t*x+1)^(1/2))

Check

pdetest(U(x, t) = exact, [pde, U(x, 0) = inx])

[0, 0]

An example with inhomogeneous terms (but no nonlinear part). Ex. 3 from R. Shah et al, Entropy 21 (2019) 335.

pde := diff(U(x, t), t)+diff(U(x, t), `$`(x, 3)) = -sin(Pi*x)*sin(t)-Pi^3*cos(Pi*x)*cos(t); inx := sin(Pi*x); exact := sin(Pi*x)*cos(t); pdetest(U(x, t) = exact, [pde, U(x, 0) = inx])

diff(U(x, t), t)+diff(diff(diff(U(x, t), x), x), x) = -sin(Pi*x)*sin(t)-Pi^3*cos(Pi*x)*cos(t)

sin(Pi*x)

sin(Pi*x)*cos(t)

[0, 0]

approx := LAD(pde, U(x, t), inx, t, 10)

LAD: L = U[t]; R = -U[x,x,x]; N = 0; G = -sin(Pi*x)*sin(t)-Pi^3*cos(Pi*x)*cos(t).

LAD: nonlinear expansion variables (conjugated functions denoted by C) are: []

sin(Pi*x)-cos(Pi*x)*Pi^3*sin(t)+sin(Pi*x)*(-1+cos(t))-Pi^3*(Pi^3*(-1+cos(t))*sin(Pi*x)-cos(Pi*x)*sin(t))+(Pi^3*(-sin(t)+t)*cos(Pi*x)+sin(Pi*x)*(-1+cos(t)))*Pi^6-(1/2)*Pi^9*(Pi^3*(t^2+2*cos(t)-2)*sin(Pi*x)+2*cos(Pi*x)*(-sin(t)+t))-(1/6)*Pi^12*(Pi^3*(t^3+6*sin(t)-6*t)*cos(Pi*x)-3*sin(Pi*x)*(t^2+2*cos(t)-2))+(1/24)*(Pi^3*(t^4-12*t^2-24*cos(t)+24)*sin(Pi*x)+4*cos(Pi*x)*(t^3+6*sin(t)-6*t))*Pi^15+(1/120)*Pi^18*(Pi^3*(t^5-20*t^3-120*sin(t)+120*t)*cos(Pi*x)-5*sin(Pi*x)*(t^4-12*t^2-24*cos(t)+24))-(1/720)*Pi^21*(Pi^3*(t^6-30*t^4+360*t^2+720*cos(t)-720)*sin(Pi*x)+6*cos(Pi*x)*(t^5-20*t^3-120*sin(t)+120*t))-(1/5040)*(Pi^3*(t^7-42*t^5+840*t^3+5040*sin(t)-5040*t)*cos(Pi*x)-7*sin(Pi*x)*(t^6-30*t^4+360*t^2+720*cos(t)-720))*Pi^24+(1/40320)*Pi^27*(Pi^3*(t^8-56*t^6+1680*t^4-20160*t^2-40320*cos(t)+40320)*sin(Pi*x)+8*cos(Pi*x)*(t^7-42*t^5+840*t^3+5040*sin(t)-5040*t))+(1/362880)*Pi^30*(Pi^3*(t^9-72*t^7+3024*t^5-60480*t^3-362880*sin(t)+362880*t)*cos(Pi*x)-9*sin(Pi*x)*(t^8-56*t^6+1680*t^4-20160*t^2-40320*cos(t)+40320))

Expand the exact solution as a series in t to the same order and verify that it is the same as above.

series(exact-approx, t, 11, oterm = false)

0

An example with a complex solution

pde := I*(diff(U(x, t), t))+diff(U(x, t), `$`(x, 2))+2*abs(U(x, t))^2*U(x, t); inx := sech(x); exact := sech(x)*exp(I*t); `assuming`([simplify(pdetest(U(x, t) = exact, [pde, U(x, 0) = inx]))], [real])

I*(diff(U(x, t), t))+diff(diff(U(x, t), x), x)+2*abs(U(x, t))^2*U(x, t)

sech(x)

sech(x)*exp(I*t)

[0, 0]

approx := LAD(pde, U(x, t), inx, t, 10)

LAD: L = I*U[t]; R = -U[x,x]; N = -2*U^2*C; G = 0.

LAD: nonlinear expansion variables (conjugated functions denoted by C) are: [C, U]

sech(x)+I*sech(x)*t-(1/2)*sech(x)*t^2-((1/6)*I)*sech(x)*t^3+(1/24)*sech(x)*t^4+((1/120)*I)*sech(x)*t^5-(1/720)*sech(x)*t^6-((1/5040)*I)*sech(x)*t^7+(1/40320)*sech(x)*t^8+((1/362880)*I)*sech(x)*t^9-(1/3628800)*sech(x)*t^10

The series for exp(I*t) is apparent.

collect(approx, sech); series(approx-exact, t, 11, oterm = false)

(1+I*t-(1/2)*t^2-((1/6)*I)*t^3+(1/24)*t^4+((1/120)*I)*t^5-(1/720)*t^6-((1/5040)*I)*t^7+(1/40320)*t^8+((1/362880)*I)*t^9-(1/3628800)*t^10)*sech(x)

0

An example with fractional order differentiation wrt t of order alpha. For m-1 < alpha and alpha <= m, the derivative must be entered with order m and there must be a list of m initial conditions: f(x, 0), (D[2](f))(x, 0), (D[2, 2](f))(x, 0), ...

 Ex. 1 from R. Shah et al, Entropy 21 (2019) 335

pde := diff(U(x, t), t) = -2*(diff(U(x, t), x))-(diff(U(x, t), `$`(x, 3))); inx := sin(x)

diff(U(x, t), t) = -2*(diff(U(x, t), x))-(diff(diff(diff(U(x, t), x), x), x))

sin(x)

Symbolic order is accepted.

approx := LAD(pde, U(x, t), inx, t, 6, fracorder = alpha)

LAD: L = U[t]; R = -2*U[x]-U[x,x,x]; N = 0; G = 0.

LAD: nonlinear expansion variables (conjugated functions denoted by C) are: []

sin(x)-cos(x)*t^alpha/GAMMA(1+alpha)-sin(x)*t^(2*alpha)/GAMMA(1+2*alpha)+cos(x)*t^(3*alpha)/GAMMA(1+3*alpha)+sin(x)*t^(4*alpha)/GAMMA(1+4*alpha)-cos(x)*t^(5*alpha)/GAMMA(1+5*alpha)-sin(x)*t^(6*alpha)/GAMMA(1+6*alpha)

Giving a numerical value for the order will generally be more efficient. For alpha = 1/2, we can find an exact solution.

approx := collect(LAD(pde, U(x, t), inx, t, 20, fracorder = 1/2), [sin, cos])

LAD: L = U[t]; R = -2*U[x]-U[x,x,x]; N = 0; G = 0.

LAD: nonlinear expansion variables (conjugated functions denoted by C) are: []

(1-t+(1/2)*t^2-(1/6)*t^3+(1/24)*t^4-(1/120)*t^5+(1/720)*t^6-(1/5040)*t^7+(1/40320)*t^8-(1/362880)*t^9+(1/3628800)*t^10)*sin(x)+(-2*t^(1/2)/Pi^(1/2)+(4/3)*t^(3/2)/Pi^(1/2)-(8/15)*t^(5/2)/Pi^(1/2)+(16/105)*t^(7/2)/Pi^(1/2)-(32/945)*t^(9/2)/Pi^(1/2)+(64/10395)*t^(11/2)/Pi^(1/2)-(128/135135)*t^(13/2)/Pi^(1/2)+(256/2027025)*t^(15/2)/Pi^(1/2)-(512/34459425)*t^(17/2)/Pi^(1/2)+(1024/654729075)*t^(19/2)/Pi^(1/2))*cos(x)

The first series is exp(-t); the second series is not so obvious. After some scaling, the second series may be passed to guessgf.

ser2 := `assuming`([simplify(expand(sqrt(Pi)*op(2, approx)/(sqrt(t)*cos(x))))], [t > 0]); [seq(coeff(ser2, t, i), i = 0 .. 9)]; ser2ex := gfun:-guessgf(%, t)[1]

-2+(4/3)*t-(8/15)*t^2+(16/105)*t^3-(32/945)*t^4+(64/10395)*t^5-(128/135135)*t^6+(256/2027025)*t^7-(512/34459425)*t^8+(1024/654729075)*t^9

-exp(-t)*Pi^(1/2)*erfi(t^(1/2))/t^(1/2)

So the full solution is

exact := exp(-t)*sin(x)+ser2ex*sqrt(t)*cos(x)/sqrt(Pi)

exp(-t)*sin(x)-exp(-t)*erfi(t^(1/2))*cos(x)

Check. fracdiff's default method returns an integral and method = laplace fails, but method = series works, with cancellation of all terms except the "left over" last half-integer-order one.

Ord := 20; fracdiff(exact, t, 1/2, method = series, methodoptions = [order = Ord])-series(eval(rhs(pde), U(x, t) = exact), t, Ord)

20

-(1048576/319830986772877770815625)*sin(x)*t^(39/2)/Pi^(1/2)+O(t^(39/2))-O(t^20)

A second-order example (linear)

pde := diff(U(x, t), t, t) = -c^2*(diff(U(x, t), `$`(x, 2))); inx := [exp(I*x/c), exp(I*x/c)]; exact := exp(t+I*x/c)

diff(diff(U(x, t), t), t) = -c^2*(diff(diff(U(x, t), x), x))

[exp(I*x/c), exp(I*x/c)]

exp(t+I*x/c)

pdetest(U(x, t) = exact, [pde, U(x, 0) = inx[1], (D[2](U))(x, 0) = inx[2]])

[0, 0, 0]

approx := LAD(pde, U(x, t), inx, t, 6)

LAD: L = U[t,t]; R = -c^2*U[x,x]; N = 0; G = 0.

LAD: nonlinear expansion variables (conjugated functions denoted by C) are: []

exp(I*x/c)*(1+t)+(1/6)*exp(I*x/c)*t^2*(3+t)+(1/120)*exp(I*x/c)*t^4*(t+5)+(1/5040)*exp(I*x/c)*t^6*(7+t)+(1/362880)*exp(I*x/c)*t^8*(9+t)+(1/39916800)*exp(I*x/c)*t^10*(t+11)+(1/6227020800)*exp(I*x/c)*t^12*(13+t)

series(exact-approx, t, 13, oterm = false)

0

Fractional order example

approx := collect(LAD(pde, U(x, t), inx, t, 10, fracorder = 3/2), [exp, t])

LAD: L = U[t,t]; R = -c^2*U[x,x]; N = 0; G = 0.

LAD: nonlinear expansion variables (conjugated functions denoted by C) are: []

(1+(1/20922789888000)*t^16+(1/1307674368000)*t^15+(32768/6190283353629375)*t^(29/2)/Pi^(1/2)+(16384/213458046676875)*t^(27/2)/Pi^(1/2)+(1/6227020800)*t^13+(1/479001600)*t^12+(4096/316234143225)*t^(23/2)/Pi^(1/2)+(2048/13749310575)*t^(21/2)/Pi^(1/2)+(1/3628800)*t^10+(1/362880)*t^9+(512/34459425)*t^(17/2)/Pi^(1/2)+(256/2027025)*t^(15/2)/Pi^(1/2)+(1/5040)*t^7+(1/720)*t^6+(64/10395)*t^(11/2)/Pi^(1/2)+(32/945)*t^(9/2)/Pi^(1/2)+(1/24)*t^4+(1/6)*t^3+(8/15)*t^(5/2)/Pi^(1/2)+(4/3)*t^(3/2)/Pi^(1/2)+t)*exp(I*x/c)

There appear to be two separate series in t, with integer and half-integer powers.

t_series := approx/exp(I*x/c); t1, t2 := selectremove(proc (w) options operator, arrow; (degree(w, t))::integer end proc, t_series)

1+(1/20922789888000)*t^16+(1/1307674368000)*t^15+(1/6227020800)*t^13+(1/479001600)*t^12+(1/3628800)*t^10+(1/362880)*t^9+(1/5040)*t^7+(1/720)*t^6+(1/24)*t^4+(1/6)*t^3+t, (32768/6190283353629375)*t^(29/2)/Pi^(1/2)+(16384/213458046676875)*t^(27/2)/Pi^(1/2)+(4096/316234143225)*t^(23/2)/Pi^(1/2)+(2048/13749310575)*t^(21/2)/Pi^(1/2)+(512/34459425)*t^(17/2)/Pi^(1/2)+(256/2027025)*t^(15/2)/Pi^(1/2)+(64/10395)*t^(11/2)/Pi^(1/2)+(32/945)*t^(9/2)/Pi^(1/2)+(8/15)*t^(5/2)/Pi^(1/2)+(4/3)*t^(3/2)/Pi^(1/2)

The integer-powers series is the series for exp(t) with every third term missing: missing powers 2, 5, 8, 11, ...

t1sum := simplify(exp(t)-(sum(t^(3*i+2)/factorial(3*i+2), i = 0 .. infinity))); series(%, t, 17)

(2/3)*exp(t)+(1/3)*(cos((1/2)*3^(1/2)*t)+3^(1/2)*sin((1/2)*3^(1/2)*t))*exp(-(1/2)*t)

series(1+t+(1/6)*t^3+(1/24)*t^4+(1/720)*t^6+(1/5040)*t^7+(1/362880)*t^9+(1/3628800)*t^10+(1/479001600)*t^12+(1/6227020800)*t^13+(1/1307674368000)*t^15+(1/20922789888000)*t^16+O(t^17),t,17)

For the other one, the following scaled series shows coefficients that are powers of two divided by double factorials, again with every third term missing: missing powers 2, 5, 8, 11, ...

expand(sqrt(Pi)*t2/(4*t^(3/2))); all := add(2^i*t^i/doublefactorial(2*(i+1)+1), i = 0 .. 9); missing := add(eval(2^i*t^i/doublefactorial(2*(i+1)+1), i = 3*j+2), j = 0 .. 3)

(8192/6190283353629375)*t^13+(4096/213458046676875)*t^12+(1024/316234143225)*t^10+(512/13749310575)*t^9+(128/34459425)*t^7+(64/2027025)*t^6+(16/10395)*t^4+(8/945)*t^3+(2/15)*t+1/3

1/3+(2/15)*t+(4/105)*t^2+(8/945)*t^3+(16/10395)*t^4+(32/135135)*t^5+(64/2027025)*t^6+(128/34459425)*t^7+(256/654729075)*t^8+(512/13749310575)*t^9

(4/105)*t^2+(32/135135)*t^5+(256/654729075)*t^8+(2048/7905853580625)*t^11

t2sum := 4*t^(3/2)*(sum(2^i*t^i/doublefactorial(2*(i+1)+1), i = 0 .. infinity)-(sum(eval(2^i*t^i/doublefactorial(2*(i+1)+1), i = 3*j+2), j = 0 .. infinity)))/sqrt(Pi)

4*t^(3/2)*((1/4)*(Pi^(1/2)*exp(t)-2*t^(1/2)-Pi^(1/2)*erfc(t^(1/2))*exp(t))/t^(3/2)-(4/105)*t^2*hypergeom([1], [3/2, 11/6, 13/6], (1/27)*t^3))/Pi^(1/2)

Putting it together suggests

exact := exp(I*x/c)*simplify(t1sum+t2sum)

(1/3)*exp(I*x/c)*(Pi^(1/2)*exp(-(1/2)*t)*sin((1/2)*3^(1/2)*t)*3^(1/2)+Pi^(1/2)*exp(-(1/2)*t)*cos((1/2)*3^(1/2)*t)+3*Pi^(1/2)*exp(t)*erf(t^(1/2))-(16/35)*t^(7/2)*hypergeom([1], [3/2, 11/6, 13/6], (1/27)*t^3)+2*Pi^(1/2)*exp(t)-6*t^(1/2))/Pi^(1/2)

Check.

Ord := 20; fracdiff(exact, t, 3/2, method = series, methodoptions = [order = Ord+1])-series(eval(rhs(pde), U(x, t) = exact), t, Ord)

20

O(t^(39/2))-(1048576/319830986772877770815625)*exp(I*x/c)*t^(39/2)/Pi^(1/2)-O(t^(41/2))

Numerical calculation of a soliton

pde := I*(diff(U(x, t), t))+diff(U(x, t), `$`(x, 2))+U(x, t)^2*conjugate(U(x, t)); exact := sqrt(2*B)*sech(sqrt(B)*(2*k*t-x-x__0))*exp(I*(k*x+(-k^2+B)*t)); `assuming`([simplify(pdetest(U(x, t) = exact, pde))], [real, B > 0]); inx := eval(exact, t = 0)

I*(diff(U(x, t), t))+diff(diff(U(x, t), x), x)+U(x, t)^2*conjugate(U(x, t))

2^(1/2)*B^(1/2)*sech(B^(1/2)*(2*k*t-x-x__0))*exp(I*(k*x+(-k^2+B)*t))

0

2^(1/2)*B^(1/2)*sech(B^(1/2)*(-x-x__0))*exp(I*k*x)

With numerical parameters it runs faster. Float parameters will be converted to rationals.

params := {B = 2, k = -1/2, x__0 = -2}; inxnum := eval(inx, params); exactnum := eval(exact, params)

2*sech(2^(1/2)*(-x+2))*exp(-((1/2)*I)*x)

2*sech(2^(1/2)*(-t-x+2))*exp(I*(-(1/2)*x+(7/4)*t))

infolevel[LAD] := 3; approx := LAD(pde, U(x, t), inxnum, t, 28)

LAD: L = I*U[t]; R = -U[x,x]; N = -U^2*C; G = 0.

LAD: nonlinear expansion variables (conjugated functions denoted by C) are: [C, U]

LAD: calculating component 1 at time 0.

LAD: calculating component 2 at time .221

[Edited for brevity]

LAD: calculating component 27 at time 90.414

LAD: calculating component 28 at time 103.870

LAD: completed at time 105.309

For larger x and t, the approximation diverges at a point that depends on the number of iterations.

plot3d([abs(exactnum), abs(approx)], t = 0 .. 2, x = -3 .. 3, color = [red, blue], view = [default, default, 0 .. 5], orientation = [-30, 60])


Download Laplace_Adomian_Decomposition_procedure8b.mw

 

The link below goes to the Proceedings of the Maple 2024 Conference, which includes several articles that will be of interest to the readers of Maple Primes.

There may be one more paper coming in to the proceedings later as per policy; since most things are ready, away we go!

Proceedings of the Maple 2024 Conference

In Optical-Illusion--Impossible-Prisms, mcarvalho provides a Maple Learn worksheet to illustrate an intriguing optical illusion.  Here I do that illustration in Maple, and in 3D! 

The graphic below shows 4 bars. Or is it 3 bars?

Download the worksheet, stare at the graphics for short while to grasp the nature of the optical illusion, and then rotate the 3D graph with the mouse and see the illusion fall apart.

optical-illusion.mw

Geometrical entertainment in the form of rolling without slipping, now inside a torus.
You can also print out the corresponding equations (they are commented out with # in the text), but it is better to do this separately from the geometric animation, because textplot3d takes up a lot of resources.

Please consider it not as a Maple program, but simply as an idea for a corresponding algorithm.
for_TORUS_IN_TORUS_for.mw
 

The workbook includes the Excel data file, but I have also included the worksheet and data file separately for versions that cannot load workbooks.

 

Average_World_Temperature_workbook-.maple

Average_World_Temperature_since_1850_worksheet.mw

HadCRUT5.zip

Last week, DeepMind announced their new AlphaEvolve software, along with a number of new results improving solutions to optimization problems in several areas. One of those results is a new, smaller tensor decomposition for multiplying two 4 × 4 matrices in 48 scalar multiplications, improving on the previous best decomposition, which needed 49 multiplications (Strassen 1969). While the DeepMind paper was mostly careful about stating this, when writing the introduction the authors mistakenly/confusingly wrote:

For 56 years, designing an algorithm with rank less than 49 over any field with characteristic 0 was an open problem. AlphaEvolve is the first method to find a rank-48 algorithm to multiply two 4 × 4 complex-valued matrices

This gives the impression that this new result is the way to multiply 4 × 4 matrices over a field with the fewest number of field multiplications. However, it's not even the case that Strassen's 1969 paper gave the fastest way to multiply 4 × 4 matrices. In fact, Winograd's 1968 paper on computing inner products faster could be applied to 4 × 4 matrix multiplication to get a formula using only 48 multiplications. Winograd uses the trick

    a1*b1 + a2*b2 = (a1 + b2)*(a2 + b1) - a1*a2 - b1*b2

which changes two multiplications of ab's into one multiplication of ab's and two multiplications involving only a's or only b's. The latter multiplications can be pre-computed and saved when calculating all the inner products in a matrix multiplication:

    # Assume A and B are 4 x 4 Matrices with commuting entries.
    # i = 1..4: 4*2 = 8 multiplications
    p := [seq(-A[i, 1]*A[i, 2] - A[i, 3]*A[i, 4], i = 1 .. 4)]:
    # 4*2 = 8 multiplications
    q := [seq(-B[1, i]*B[2, i] - B[3, i]*B[4, i], i = 1 .. 4)]:
    # i = 1..4, j = 1..4: 4*4*2 = 32 multiplications
    C := Matrix(4, 4, (i, j) -> p[i] + q[j]
            + (A[i, 1] + B[2, j])*(A[i, 2] + B[1, j])
            + (A[i, 3] + B[4, j])*(A[i, 4] + B[3, j])):

It is simple to verify that C[i,j] = A[i,..] . B[..,j] and that the above formula calculates all of the entries of C=A.B with 8+8+32=48 multiplications.
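As a quick symbolic check in Maple (a sketch; a and b are generic commuting entries):

    A := Matrix(4, 4, symbol = a):  B := Matrix(4, 4, symbol = b):
    # execute the p, q, C statements above, then:
    map(expand, C - A . B);   # the 4x4 zero Matrix: 48 multiplications suffice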

So, if Winograd achieved a formula with 48 multiplications in 1968, why is the DeepMind result still interesting? Well, the relative shortcoming of Winograd's method is that it only works if the entries of A and B commute, while tensor decomposition formulas for matrix multiplication work even over noncommutative rings. That seems very esoteric, but everyone's favorite noncommutative ring is the ring of n × n matrices. So if a formula applies to a matrix of matrices, that means it gives a recursive formula for matrix multiplication - and tensor decomposition formulas do just that.

The original 1969 Strassen result gave a tensor decomposition of 2 × 2 matrix multiplication using only 7 scalar multiplications (rather than the 8 required by the inner-product method) and leads to a recursive matrix multiplication algorithm using O(N^log[2](7)) scalar multiplications; the scheme is written out below. Since then, it has been proved that 7 is the smallest rank of a tensor decomposition for the 2 × 2 case, and so researchers have been interested in the 3 × 3 and larger cases. Alexandre Sedoglavic at the University of Lille catalogs the best tensor decompositions at https://fmm.univ-lille.fr/ with links to a Maple file with an evaluation formula for each.
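Here is Strassen's 2 × 2 scheme in Maple (a sketch; no product ever commutes an A-entry past a B-entry, which is why the same formulas work recursively on blocks):

    # Strassen's seven products for C = A.B with A, B 2x2:
    M1 := (A11 + A22)*(B11 + B22):  M2 := (A21 + A22)*B11:
    M3 := A11*(B12 - B22):          M4 := A22*(B21 - B11):
    M5 := (A11 + A12)*B22:          M6 := (A21 - A11)*(B11 + B12):
    M7 := (A12 - A22)*(B21 + B22):
    C  := Matrix([[M1 + M4 - M5 + M7, M3 + M5],
                  [M2 + M4, M1 - M2 + M3 + M6]]):
    AB := Matrix([[A11, A12], [A21, A22]]) . Matrix([[B11, B12], [B21, B22]]):
    map(expand, C - AB);   # zero Matrix: 7 multiplications instead of 8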

The previous best 4 × 4 tensor decomposition was based on using the 2 × 2 Strassen decomposition recursively (a 2 × 2 matrix of 2 × 2 matrices), which led to 7 2 × 2 matrix multiplications each requiring 7 scalar multiplications, for a total of 49. The new DeepMind result reduces that to 48 scalar multiplications, which leads to a recursive algorithm using O(N^log[4](48)) scalar multiplications: O(N^2.7925) vs. O(N^2.8074) for Strassen. This is a theoretical improvement over Strassen, but as of 2024 the best known multiplication algorithm has much lower complexity: O(N^2.3713) (see https://arxiv.org/abs/2404.16349). Now, there might be some chance that the DeepMind result could be used in practical implementations, but its reduction in the number of multiplications comes at the cost of many more additions. Before doing any optimizations, I counted 1264 additions in the DeepMind tensor, compared to 318 in the unoptimized 4 × 4 Strassen tensor (which can be optimized to 12*7+4*12 = 132). Finally, the DeepMind tensor decomposition involves 1/2 and sqrt(-1), so it cannot be used for fields of characteristic 2, or fields without sqrt(-1). Of course, the restriction on characteristic 2 is not a big deal, since the DeepMind team found a 4 × 4 tensor decomposition of rank 47 over GF(2) in 2023 (see https://github.com/google-deepmind/alphatensor).
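The quoted exponents are easy to check:

    evalf([log[4](48), log[2](7)]);   # approximately [2.7925, 2.8074]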

Now if one is only interested in 4 × 4 matrices over a commutative ring, the Winograd result isn't even the best possible. In 1970, Waksman (https://ieeexplore.ieee.org/document/1671519) demonstrated an improvement of Winograd's method that uses only 46 multiplications, as long as the ring allows division by 2. Waksman's method has since been improved by Rosowski to remove the divisions; see https://arxiv.org/abs/1904.07683. The attached Rosowski.mw worksheet below contains that nice compact straight-line program computing C = A · B in 46 multiplications.

Attached here are five Maple worksheets that create the explicit formulas for the 4 × 4 matrix methods mentioned in this post, verify their operation counts, and check that they are correct.

Strassen_444.mw  Winograd.mw  DM_444.mw Waksman.mw Rosowski.mw

If you are interested in some of the 2022 results, I posted some code to my GitHub account for creating formulas from tensor decompositions, and verifying them on symbolic matrix multiplications: https://github.com/johnpmay/MapleSnippets/tree/main/FastMatrixMultiplication

The clear follow-up question to all of this is: does Maple use any of these fast multiplication schemes? The answer to that is mostly hidden in low-level BLAS code, but generally the answer is: no. The straightforward inner-product scheme for multiplying matrices optimizes memory access and thus usually ends up being the fastest choice in practice.
