Applications, Examples and Libraries

Share your work here

Some misguided individuals insist that perpetual motion machines are impossible. Here is a proof that they are wrong!

One of these units hooked up to an electrical generator should be enough to supply all your household electrical needs as well as charge your Tesla in the garage.

If you build one and find out that it doesn't work as demonstrated here, then surely you must have misread the specs. Try it again and again until you succeed.

Download perpetual-motion-machine-corrected.mw

Hi again all

You can enjoy this simple Maple code to find the proper divisors of a given positive integer (whole number).
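The worksheet itself is attached below. As a rough illustration of the idea (a minimal sketch of my own, not necessarily the code in the worksheet), the proper divisors can be computed by simple trial division:

ProperDivisors := proc(n::posint)
  # keep every d < n that divides n exactly (simple trial division)
  select(d -> irem(n, d) = 0, [$1 .. n-1]);
end proc:

ProperDivisors(28);   # returns [1, 2, 4, 7, 14]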

 

Download proper_divisor.mw

Here is the some_proper_divisor_examples_3.pdf file with examples.

Hope this helps

Matt Anderson

 

The https://sites.google.com/view/aladjevbookssoft/home site contains free books in English and Russian, along with software, created under the guidance of the principal author, Prof. V. Aladjev, in areas such as the general theory of statistics, the theory of cellular automata, and programming in the Maple and Mathematica systems. Each book is archived, including its cover and book block in PDF format. The software, released under a freeware license, is designed for Maple and Mathematica.

I’m excited to announce the launch of a new math tool called Maple Flow. Here, I’ll outline our motivation for developing this product, and talk about its features.

A large fraction of Maple users are professional engineers.

All use Maple, but very few say that they do math for a living, in much the same way a plumber wouldn’t say they use a wrench for a living.

They say things like:

  • I design concrete retaining walls
  • I simulate the transients on a transmission line
  • I design heat exchangers
  • I model the absorbency of diapers
  • I design subsea pipelines
  • I need to optimize the trajectory of a space shuttle
  • I work for a power generation company doing load flow analysis
  • I model how a robot arm needs to move

Some of these applications are mathematically simple (but are based on scientific principles, such as the conservation of heat, mass and momentum). The equations consist of basic arithmetic operations, trig and log functions, sprinkled with the occasional numeric integration.

Sometimes, the equations are already formalized in design guides, published by organizations like the IEEE, ASME or ISO. Given the specific physical context, engineers just need to implement the calculations in the right order (this is especially true for Civil and Structural engineering). These applications require you to think at an engineering level.

These are what we call design calculations, done by design engineers.

On the other end of the spectrum, some of these applications are mathematically complex. You might need to derive equations, manipulate PDEs, work with quaternions or transformation matrices, or do some programming. These applications require you to think at a mathematical level.

Let’s call the engineers doing this type of work research engineers. Research engineers are often more closely aligned with mathematicians than design engineers.

So we have design engineers and research engineers (and of course we have engineers with feet in both camps, to a varying degree).

Research engineers and design engineers do different mathematical things, and have different mathematical needs. Both groups use Maple, but one size doesn’t always fit well. Either the toe pinches a little, or the shirt is a mite too baggy.

This is where Maple Flow enters stage right.

Maple Flow is a new tool that we’ve built (and are continuing to expand and improve) with the needs of design engineers in mind.

  • The worksheet lets you put math anywhere – just point, click and type
  • The evaluation model is forward-in-space (unlike Maple’s forward-in-time evaluation model). This means the execution order is given explicitly by the position of the math on the canvas.
  • The worksheet updates automatically, so results are never stale
  • We’ve made several simplifications to massage away some of the complexity of the Maple programming language.
  • You can use nearly all of the tools in the Maple programming language.

Here’s how we see people using Maple Flow. They

  • Enter a few major equations somewhere, followed by some parameters scattered around
  • Make the equations “see” the parameters by moving the parameters above the equations
  • Insert any parameters or equations they’ve forgotten, and move them into position, shifting the existing content out of the way to make room
  • Add text, and perhaps an image or plot
  • Finally, align math and format text for a presentable document

I’ve been using Maple Flow for a while now. I like the fact that the nature of Maple Flow means that you don’t have to start with a grand plan, with every computational detail planned out in advance. You’re encouraged to make things up as you go along, and gradually sculpt your calculations into shape.

Basically, Maple Flow doesn’t issue stiff penalties for making mistakes. You fix them, and then move on.

I also like that Maple Flow makes you feel like you’re “touching” your equations, shifting things about easily with either the mouse or the keyboard. There’s a certain tactility and immediacy to Maple Flow that gives me a micro dose of dopamine every time I use it.

Maple Flow’s freeform interface lets you experiment with space, alignment and layout, drawing attention to different groups of equations.

For example, you can design calculation documents that look like this.

You can use nearly all of the Maple programming language in Flow. Here’s a command from the plots package.
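(The command and its output appear as an image in the original post; as a stand-in of the same flavour, my example rather than the one pictured, a filled contour plot works like this.)

with(plots):
# a filled contour plot, as one example of a plots-package command
contourplot(sin(x)*cos(y), x = -Pi .. Pi, y = -Pi .. Pi, contours = 12, filledregions = true);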

Here’s fsolve in action.

The Maple Flow website has more information, including a demo video.

As ever, your feedback is gratefully received.

Yesterday, user @lcz, while responding as a third party to one of my Answers regarding GraphTheory, asked about breadth-first search. So, I decided to write a more interesting example of it than the relatively simple example used in that Answer. I think that this is general enough to be worthy of a Post.

This application generates all maximal paths in a graph that begin at a given vertex. (I'm calling a path maximal if it cannot be extended and remain a path.) The code requires Maple 2019 or later and 1-D input. It works for both directed and undirected graphs. Weights, if present, are ignored.

restart:

AllMaximalPaths:= proc(G::GRAPHLN, v)
description 
    "All maximal paths of G starting at v by breadth-first search"
;
option `Author: Carl Love <carl.j.love@gmail.com> 2021-Mar-17`;
uses GT= GraphTheory;
local 
    P:= [rtable([v])], R:= rtable(1..0),
    VL:= GT:-Vertices(G), V:= table(VL=~ [$1..nops(VL)]),
    Departures:= {op}~(GT:-Departures(G))
;
    while nops(P) <> 0 do
        P:= [
            for local p in P do
                local New:= Departures[V[p[-1]]] minus {seq}(p);
                if New={} then R,= [seq](p); next fi;                
                (
                    for local u in New do 
                        local p1:= rtable(p); p1,= u
                    od
                )       
            od
        ]
    od;
    {seq}(R)  
end proc
:
#large example:
GT:= GraphTheory:
K9:= GT:-CompleteGraph(9):
Pa:= CodeTools:-Usage(AllMaximalPaths(K9,1)):
memory used=212.56MiB, alloc change=32.00MiB, 
cpu time=937.00ms, real time=804.00ms, gc time=312.50ms

nops(Pa);
                             40320
#fun example:
P:= GT:-SpecialGraphs:-PetersenGraph():
Pa:= CodeTools:-Usage(AllMaximalPaths(P,1)):
memory used=0.52MiB, alloc change=0 bytes, 
cpu time=0ns, real time=3.00ms, gc time=0ns

nops(Pa);
                               72

Pa[..9]; #sample paths
    {[1, 2, 3, 4, 10, 9, 8, 5], [1, 2, 3, 7, 8, 9, 10, 6], 
      [1, 2, 9, 8, 7, 3, 4, 5], [1, 2, 9, 10, 4, 3, 7, 6], 
      [1, 5, 4, 3, 7, 8, 9, 2], [1, 5, 4, 10, 9, 8, 7, 6], 
      [1, 5, 8, 7, 3, 4, 10, 6], [1, 5, 8, 9, 10, 4, 3, 2], 
      [1, 6, 7, 3, 4, 10, 9, 2]}

Notes on the procedure:

The two dynamic data structures are

  • P: a list of vectors of vertices. Each vector contains a path which we'll attempt to extend.
  • R: a vector of lists of vertices. Each list is a maximal path to be returned.

The static data structures are

  • V: a table mapping vertices (which may be named) to their index numbers.
  • Departures: a list of sets of vertices whose kth set is the possible next vertices from vertex number k.

On each iteration of the outer loop, P is completely reconstructed because each of its entries, a path p, is either determined to be maximal or it is extended. The set New contains the vertices that can be appended to p, i.e., those connected to the final vertex p[-1] that are not already in p. If New is empty, then p is maximal, and it gets moved to R.


The following code constructs an array plot of all the maximal paths in the Petersen graph. I can't post the array plot, but you can see it in the attached worksheet: BreadthFirst.mw

#Do an array plot of each path embedded in the graph:
n:= nops(Pa):
c:= 9: 
plots:-display(
    (PA:= rtable(
        (1..ceil(n/c), 1..c),
        (i,j)-> 
            if (local k:= (i-1)*c + j) > n then  # linear index of the (i,j) cell (c columns per row)
                plot(axes= none)
            else 
                GT:-DrawGraph(
                    GT:-HighlightTrail(P, Pa[k], inplace= false), 
                    stylesheet= "legacy", title= typeset(Pa[k])
                )
            fi
    )),
    titlefont= [Times, Bold, 12]
);

#And recast that as an animation so that I can post it:
plots:-display(
    [seq](`$`~(plots:-display~(PA), 5)),
    insequence
); 

 

Wirtinger Derivatives in Maple 2021

Generally speaking, there are two contexts for differentiating complex functions with respect to complex variables. In the first context, classical complex analysis, the derivatives of the complex components (abs, argument, conjugate, Im, Re, signum) with respect to complex variables do not exist (they do not satisfy the Cauchy-Riemann conditions), except where they happen to be holomorphic functions. All computer algebra systems implement the complex components in this context, and computationally represent all of abs(z), argument(z), conjugate(z), Im(z), Re(z), signum(z) as functions of z. Then, viewed as functions of z, none of them are analytic, so differentiability becomes an issue.

 

In the second context, first introduced by Poincaré (also called Wirtinger calculus), in brief, z and its conjugate conjugate(z) are taken as independent variables, and all six derivatives of the complex components become computable, also with respect to conjugate(z). Technically speaking, Wirtinger calculus permits extending complex differentiation to non-holomorphic functions provided that they are ℝ-differentiable (i.e. differentiable functions of the real and imaginary parts, taking f(z) = f(x, y) as a mapping ℝ² → ℝ²).

 

In simpler terms, this subject is relevant because, in mathematical-physics formulations using paper and pencil, we frequently use Wirtinger calculus automatically. We take z and its conjugate conjugate(z) as independent variables, with d conjugate(z)/d z = 0 and d z/d conjugate(z) = 0, and we compute with the operators ∂/∂z and ∂/∂conjugate(z) as partial differential operators that behave as ordinary derivatives. With that, all of abs(z), argument(z), conjugate(z), Im(z), Re(z), signum(z) become differentiable, since they are all expressible as functions of z and conjugate(z).

 

 

Wirtinger derivatives were implemented in Maple 18, years ago, in the context of the Physics package. There is a setting, Physics:-Setup(wirtingerderivatives), that when set to true - and that is the default value when Physics is loaded - redefines the differentiation rules, turning on Wirtinger calculus. The implementation, however, was incomplete, and the subject escaped through the cracks until it was recently mentioned in this Mapleprimes post.

 

Long intro. This post is to present the completion of Wirtinger calculus in Maple, distributed for everybody using Maple 2021 within the Maplesoft Physics Updates v.929 or newer. Load Physics and set the imaginary unit to be represented by I

 

with(Physics); interface(imaginaryunit = I)

 

The complex components are represented by the computer algebra functions

(FunctionAdvisor(complex_components))(z)

[Im(z), Re(z), abs(z), argument(z), conjugate(z), signum(z)]

(1)

They can all be expressed in terms of z and conjugate(z)

map(u -> u = convert(u, conjugate), [Im(z), Re(z), abs(z), argument(z), conjugate(z), signum(z)])

[Im(z) = ((1/2)*I)*(-z+conjugate(z)), Re(z) = (1/2)*z+(1/2)*conjugate(z), abs(z) = (z*conjugate(z))^(1/2), argument(z) = -I*ln(z/(z*conjugate(z))^(1/2)), conjugate(z) = conjugate(z), signum(z) = z/(z*conjugate(z))^(1/2)]

(2)

The main differentiation rules in the context of Wirtinger derivatives, that is, taking z and conjugate(z) as independent variables, are

map(%diff = diff, [Im(z), Re(z), abs(z), argument(z), conjugate(z), signum(z)], z)

[%diff(Im(z), z) = -(1/2)*I, %diff(Re(z), z) = 1/2, %diff(abs(z), z) = (1/2)*conjugate(z)/abs(z), %diff(argument(z), z) = -((1/2)*I)/z, %diff(conjugate(z), z) = 0, %diff(signum(z), z) = (1/2)/abs(z)]

(3)

Since in this context conjugate(z) is taken as - say - a mathematically-atomic variable (the computational representation is still the function conjugate(z)) we can differentiate all the complex components also with respect to  conjugate(z)

map(%diff = diff, [Im(z), Re(z), abs(z), argument(z), conjugate(z), signum(z)], conjugate(z))

[%diff(Im(z), conjugate(z)) = (1/2)*I, %diff(Re(z), conjugate(z)) = 1/2, %diff(abs(z), conjugate(z)) = (1/2)*z/abs(z), %diff(argument(z), conjugate(z)) = ((1/2)*I)*z/abs(z)^2, %diff(conjugate(z), conjugate(z)) = 1, %diff(signum(z), conjugate(z)) = -(1/2)*z^2/abs(z)^3]

(4)

For example, consider the following algebraic expression, starting with conjugate

eq__1 := conjugate(z)+z*conjugate(z)^2

conjugate(z)+z*conjugate(z)^2

(5)

Differentiating this expression with respect to z and conjugate(z) taking them as independent variables, is new, and in this example trivial

(%diff = diff)(eq__1, z)

%diff(conjugate(z)+z*conjugate(z)^2, z) = conjugate(z)^2

(6)

(%diff = diff)(eq__1, conjugate(z))

%diff(conjugate(z)+z*conjugate(z)^2, conjugate(z)) = 1+2*z*conjugate(z)

(7)

Switch to something less trivial: replace conjugate by the real part Re.

eq__2 := eval(eq__1, conjugate = Re)

Re(z)+z*Re(z)^2

(8)

To verify results further below, also express eq__2 in terms of conjugate

eq__22 := simplify(convert(eq__2, conjugate), size)

(1/4)*(z^2+z*conjugate(z)+2)*(z+conjugate(z))

(9)

New: differentiate eq__2 with respect to z and  conjugate(z)

(%diff = diff)(eq__2, z)

%diff(Re(z)+z*Re(z)^2, z) = 1/2+Re(z)^2+z*Re(z)

(10)

(%diff = diff)(eq__2, conjugate(z))

%diff(Re(z)+z*Re(z)^2, conjugate(z)) = 1/2+z*Re(z)

(11)

Note these results (10) and (11) are expressed in terms of Re(z), not conjugate(z). Let's compare with the derivative of eq__22 where everything is expressed in terms of z and conjugate(z). Take for instance the derivative with respect to z

(%diff = diff)(eq__22, z)

%diff((1/4)*(z^2+z*conjugate(z)+2)*(z+conjugate(z)), z) = (1/4)*(2*z+conjugate(z))*(z+conjugate(z))+(1/4)*z^2+(1/4)*z*conjugate(z)+1/2

(12)

To verify this result is mathematically equal to (10) expressed in terms of Re(z) take the difference of the right-hand sides

rhs((%diff(Re(z)+z*Re(z)^2, z) = 1/2+Re(z)^2+z*Re(z))-(%diff((1/4)*(z^2+z*conjugate(z)+2)*(z+conjugate(z)), z) = (1/4)*(2*z+conjugate(z))*(z+conjugate(z))+(1/4)*z^2+(1/4)*z*conjugate(z)+1/2)) = 0

Re(z)^2+z*Re(z)-(1/4)*(2*z+conjugate(z))*(z+conjugate(z))-(1/4)*z^2-(1/4)*z*conjugate(z) = 0

(13)

One quick way to verify the value of expressions like this one is to replace z = a + I*b and simplify "assuming" a and b are real.

`assuming`([eval(Re(z)^2+z*Re(z)-(1/4)*(2*z+conjugate(z))*(z+conjugate(z))-(1/4)*z^2-(1/4)*z*conjugate(z) = 0, z = a+I*b)], [a::real, b::real])

a^2+(a+I*b)*a-(1/2)*(3*a+I*b)*a-(1/4)*(a+I*b)^2-(1/4)*(a+I*b)*(a-I*b) = 0

(14)

normal(a^2+(a+I*b)*a-(1/2)*(3*a+I*b)*a-(1/4)*(a+I*b)^2-(1/4)*(a+I*b)*(a-I*b) = 0)

0 = 0

(15)

The equivalent differentiation, this time replacing conjugate in eq__1 by abs; also construct the equivalent expression in terms of z and conjugate(z) for verifying the results.

eq__3 := eval(eq__1, conjugate = abs)

abs(z)+abs(z)^2*z

(16)

eq__33 := simplify(convert(eq__3, conjugate), size)

(z*conjugate(z))^(1/2)+conjugate(z)*z^2

(17)

Since these two expressions are mathematically equal, their derivatives should be too, and the derivatives of eq__33 can be verified by eye since z and  conjugate(z) are taken as independent variables

(%diff = diff)(eq__3, z)

%diff(abs(z)+abs(z)^2*z, z) = (1/2)*conjugate(z)/abs(z)+z*conjugate(z)+abs(z)^2

(18)

(%diff = diff)(eq__33, z)

%diff((z*conjugate(z))^(1/2)+conjugate(z)*z^2, z) = (1/2)*conjugate(z)/(z*conjugate(z))^(1/2)+2*z*conjugate(z)

(19)

Eq (18) is expressed in terms of abs(z) while (19) is in terms of conjugate(z). Comparing as done in (14)

rhs((%diff(abs(z)+abs(z)^2*z, z) = (1/2)*conjugate(z)/abs(z)+z*conjugate(z)+abs(z)^2)-(%diff((z*conjugate(z))^(1/2)+conjugate(z)*z^2, z) = (1/2)*conjugate(z)/(z*conjugate(z))^(1/2)+2*z*conjugate(z))) = 0

(1/2)*conjugate(z)/abs(z)-z*conjugate(z)+abs(z)^2-(1/2)*conjugate(z)/(z*conjugate(z))^(1/2) = 0

(20)

`assuming`([eval((1/2)*conjugate(z)/abs(z)-z*conjugate(z)+abs(z)^2-(1/2)*conjugate(z)/(z*conjugate(z))^(1/2) = 0, z = a+I*b)], [a::real, b::real])

(1/2)*(a-I*b)/(a^2+b^2)^(1/2)-(a+I*b)*(a-I*b)+a^2+b^2-(1/2)*(a-I*b)/((a+I*b)*(a-I*b))^(1/2) = 0

(21)

simplify((1/2)*(a-I*b)/(a^2+b^2)^(1/2)-(a+I*b)*(a-I*b)+a^2+b^2-(1/2)*(a-I*b)/((a+I*b)*(a-I*b))^(1/2) = 0)

0 = 0

(22)

To mention but one not so familiar case, consider the derivative of the sign of a complex number, represented in Maple by signum(z). So our testing expression is

eq__4 := eval(eq__1, conjugate = signum)

signum(z)+z*signum(z)^2

(23)

This expression can also be rewritten in terms of z and  conjugate(z) 

eq__44 := simplify(convert(eq__4, conjugate), size)

z/(z*conjugate(z))^(1/2)+z^2/conjugate(z)

(24)

This time differentiate with respect to conjugate(z),

(%diff = diff)(eq__4, conjugate(z))

%diff(signum(z)+z*signum(z)^2, conjugate(z)) = -(1/2)*z^2/abs(z)^3-z^3*signum(z)/abs(z)^3

(25)

Here again, the differentiation of eq__44, that is expressed entirely in terms of z and  conjugate(z), can be computed by eye

(%diff = diff)(eq__44, conjugate(z))

%diff(z/(z*conjugate(z))^(1/2)+z^2/conjugate(z), conjugate(z)) = -(1/2)*z^2/(z*conjugate(z))^(3/2)-z^2/conjugate(z)^2

(26)

Eq (25) is expressed in terms of abs(z) while (26) is in terms of conjugate(z). Comparing as done in (14),

rhs((%diff(signum(z)+z*signum(z)^2, conjugate(z)) = -(1/2)*z^2/abs(z)^3-z^3*signum(z)/abs(z)^3)-(%diff(z/(z*conjugate(z))^(1/2)+z^2/conjugate(z), conjugate(z)) = -(1/2)*z^2/(z*conjugate(z))^(3/2)-z^2/conjugate(z)^2)) = 0

-(1/2)*z^2/abs(z)^3-z^3*signum(z)/abs(z)^3+(1/2)*z^2/(z*conjugate(z))^(3/2)+z^2/conjugate(z)^2 = 0

(27)

`assuming`([eval(-(1/2)*z^2/abs(z)^3-z^3*signum(z)/abs(z)^3+(1/2)*z^2/(z*conjugate(z))^(3/2)+z^2/conjugate(z)^2 = 0, z = a+I*b)], [a::real, b::real])

-(1/2)*(a+I*b)^2/(a^2+b^2)^(3/2)-(a+I*b)^4/(a^2+b^2)^2+(1/2)*(a+I*b)^2/((a+I*b)*(a-I*b))^(3/2)+(a+I*b)^2/(a-I*b)^2 = 0

(28)

simplify(-(1/2)*(a+I*b)^2/(a^2+b^2)^(3/2)-(a+I*b)^4/(a^2+b^2)^2+(1/2)*(a+I*b)^2/((a+I*b)*(a-I*b))^(3/2)+(a+I*b)^2/(a-I*b)^2 = 0)

0 = 0

(29)



 

Download Wirtinger_Derivatives.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

restart;

with(LinearAlgebra):

Recently @Nour asked (in part) if Maple can simplify 2*(x^2 - x*y - x*z + y^2 - y*z + z^2) to (x - y)^2 + (y - z)^2 + (z - x)^2. User @vv pointed out that a sum of squares form is not unique. I'm not sure if @Nour's intent was to show that this was always non-negative, but even if the sum of squares form is not unique, it does help to demonstrate that the expression will be nonnegative for all real x, y, z. This reminded me that I have over the years had cases where I wanted to know if a quadratic form (a multivariate polynomial with all terms of degree 2) would always be non-negative (a quadratic form is zero if all variables are zero, so we know it won't always be positive). I have typically done this by hand in the following way, which I demonstrate for the simpler case

p1:=2*x^2-3*x*y+4*y^2;

2*x^2-3*x*y+4*y^2

We make a (column) vector of the variables, v = [x,y] and then construct the symmetric Matrix A for which Transpose(v).A.v is the required expression. For the squared term 2*x^2, the coefficient 2 is put in the diagonal entry A[1,1] and likewise 4*y^2 leads to A[2,2] = 4. The term -3*x*y comes from the sum A[1,2]+A[2,1] and so half of the coefficient -3 goes to each of A[2,1] and A[1,2]. The Matrix is therefore

A:=Matrix(2,2,[2,-3/2,-3/2,4]);v:=Vector([x,y]):simplify(v^+.A.v);

A := Matrix(2, 2, {(1, 1) = 2, (1, 2) = -3/2, (2, 1) = -3/2, (2, 2) = 4})

2*x^2-3*x*y+4*y^2

The Matrix is found to be positive definite (positive eigenvalues), which means that the quadratic form is positive for all non-zero real vectors v.

IsDefinite(A,query='positive_definite');evalf(Eigenvalues(A));

true

Vector[column]([[4.802775638], [1.197224362]])

One way to get the sum of squares form is to use the Cholesky decomposition (for positive definite or semidefinite matrices), which yields A = L.Transpose(L) = L.I.Transpose(L), and then note that Transpose(L).v is a change of variables to a diagonal quadratic form

L:=LUDecomposition(A,method='Cholesky');
L.L^+;

L := Matrix(2, 2, {(1, 1) = 2^(1/2), (2, 1) = -(3/4)*2^(1/2), (2, 2) = (1/4)*46^(1/2)}, storage = triangular[lower], shape = [triangular[lower]])

Matrix([[2, -3/2], [-3/2, 4]])

so we have

v2:=L^+.v;
v2^+.v2;
map(simplify,%);

v2 := Vector(2, {(1) = 2^(1/2)*x-(3/4)*2^(1/2)*y, (2) = (1/4)*46^(1/2)*y})

(2^(1/2)*x-(3/4)*2^(1/2)*y)^2+(23/8)*y^2

(1/8)*(4*x-3*y)^2+(23/8)*y^2

and we have rewritten the quadratic form in an explicitly nonnegative form.

 

Another way to get the sum of squares is via the orthogonal similarity A = U.Lambda.Transpose(U), where U is an orthogonal matrix of eigenvectors and Lambda is diagonal with the eigenvalues on the diagonal. This form works even if the matrix is not positive (semi)definite.
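As a minimal sketch of this route for the same p1 (my illustration, reusing the A and v defined above; for a real symmetric matrix with distinct eigenvalues the eigenvectors are orthogonal, so normalizing them gives an orthogonal U):

lambda, E := Eigenvectors(A):                                            # eigenvalues and eigenvectors of A
U := <Normalize(E[.., 1], Euclidean) | Normalize(E[.., 2], Euclidean)>:  # orthonormal columns
w := U^+ . v:                                                            # change of variables w = Transpose(U).v
simplify(expand(lambda[1]*w[1]^2 + lambda[2]*w[2]^2));                   # should reduce to 2*x^2 - 3*x*y + 4*y^2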

 

In this simple case, Maple has some other ways of showing that the quadratic form is non-negative for real x,y

Student[Precalculus]:-CompleteSquare(p1,x);

2*(x-(3/4)*y)^2+(23/8)*y^2

Without assumptions, is correctly decides p1 is not always nonnegative, and an example of this for complex x and y is easily found. With assumptions, is fails.

is(p1,nonnegative);
eval(p1,{x=I,y=I});
is(p1,nonnegative) assuming real;

false

-3

FAIL

At least in Maple 2017, solve suggests any x and y will do, which is not quite right

solve(p1>=0);

{x = x, y = y}

In the next worksheet, I implement this method in a general procedure IsQformNonNeg, which works as well as IsDefinite does. For most cases of interest (integer or rational coefficients) there are no problems. For floating point coefficients, as always, numerical roundoff issues arise, and I dealt with this by giving warnings. (Rank is more robust in these cases than IsDefinite, but since this case can be better dealt with by converting floats to rationals, I haven't worked too hard on this - perhaps IsDefinite will be improved.)
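The gist of such a procedure, shown here as a hypothetical sketch of my own (the actual IsQformNonNeg in the worksheet may differ in its name handling, float handling and warnings), is:

IsQformNonNegSketch := proc(q, vars::list)
  local p, n, A;
  p := expand(q);     # make sure the coefficients can be read off
  n := nops(vars);
  # symmetric coefficient matrix: squared terms on the diagonal,
  # half of each cross-term coefficient in the two off-diagonal positions
  A := Matrix(n, n, (i, j) -> `if`(i = j, coeff(p, vars[i], 2),
          coeff(coeff(p, vars[i], 1), vars[j], 1)/2));
  # the form is nonnegative for all real values of vars iff A is positive semidefinite
  LinearAlgebra:-IsDefinite(A, query = 'positive_semidefinite');
end proc:

IsQformNonNegSketch(2*(x^2 - x*y - x*z + y^2 - y*z + z^2), [x, y, z]);   # true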

 

The original example, 2*(x^2 - x*y - x*z + y^2 - y*z + z^2), is dealt with in the next worksheet.

 

 

Download QformPost.mw

Maple strings contain extended ASCII characters (codes 1..255).
International characters such as î, ș, Å, Ø, Ă, Æ, Ç are multi-byte encoded. They are ignored by the Maple engine but are known to the user interface, which uses the UTF-8 encoding.
The StringTools package does not support this encoding. However, it is not difficult to manage these characters using Python commands (now included in the Maple distribution).
Here are UTF-8 versions of the Maple procedures length and substring.
You may want to improve these procedures, or to add new ones (my knowledge of Python is very limited); see the sketch after the examples below.

LEN:=proc(s::string) Python:-EvalFunction("len",s) end proc:

SS:=proc(s::string, mn::{integer, range(({integer,identical()}))})
  local m,n;
  if type(mn,integer) then m,n := mn$2 else m:=lhs(mn); n:=rhs(mn) fi; 
  if m=NULL then m:=1 fi; if n=NULL then n:=-1 fi;
  if n=-1 then n:=LEN(s) elif n<0 then n:=n+1 fi;
  if m=-1 then m:=LEN(s) elif m<0 then m:=m+1 fi;
  Python:-EvalString(cat("\"",  s, "\"", "[", m-1, ":", n, "]"  ));
end proc:

LEN("Canada"), length("Canada");
                              6, 6

s:="România":
LEN(s), length(s);
                              7, 8

SS(s, 1..), SS(s, 1..-3), SS(s, 1..1), SS(s, -7..2), SS(s,1), SS(s,-1); 
            "România", "Român", "R", "Ro", "R", "a"

 

Download FeynmanIntegrals.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

diff(abs(z), z) returns abs(1, z), and the latter, for a numeric z, is defined only for a nonzero real z.
However, functions such as abs(I + sin(t)) are (real) differentiable for real t, and diff should work. It usually does, but not always.

restart
f:= t -> abs(GAMMA(2*t+I)):  # We want D(f)(1)
D(f)(1): 
evalf(%);  # Error, (in simpl/abs) abs is not differentiable at non-real arguments
D(f)(1); simplify(%); 
evalf(%);   # 0.3808979508 + 1.161104935*I,  wrong

The same wrong results are obtained with diff instead of D

diff(f(t),t):   # or  diff(f(t),t) assuming t::real; 
eval(%,t=1);
simplify(%); evalf(%);   # wrong, should be real

To obtain the correct result, we could use the definition of the derivative:

limit((f(t)-f(1))/(t-1), t=1); evalf(%); # OK 
fdiff(f(t), t=1);    # numeric, OK

   

       0.8772316598
       0.8772316599

Note that abs(1, GAMMA(2 + I)) returns 1; this is also wrong: it should produce an error, because GAMMA(2+I) is not real.
 

Let's redefine now `diff/abs`  and redo the computations.

restart
`diff/abs` := proc(u, z)   # implements d/dx |f(x+i*y)| when f is analytic and f(...) <> 0
local u1:=diff(u,z);
1/abs(u)*( Re(u)*Re(u1) + Im(u)*Im(u1) )
end:
f:= t -> abs(GAMMA(2*t+I));
D(f)(1); evalf(%);   # OK now

               

         0.8772316597

Now diff works too.

diff(f(t),t);
eval(%,t=1);
simplify(%); evalf(%);   # it works

This is probably a very old bug which may make diff(u, x) fail for expressions having subexpressions abs(...) depending on x.

However, it works using assuming x::real, but only if evalc simplifies u.
The problem is actually more serious, because diff/ for Re, Im and conjugate should be revised too, plus some other related things.
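For completeness, analogous redefinitions could look like the following sketch (mine, not tested against all corner cases), valid under the same assumption that the differentiation variable is real:

`diff/Re`        := proc(u, z) Re(diff(u, z)) end proc:         # d/dx Re(f(x)) = Re(f'(x)) for real x
`diff/Im`        := proc(u, z) Im(diff(u, z)) end proc:         # d/dx Im(f(x)) = Im(f'(x)) for real x
`diff/conjugate` := proc(u, z) conjugate(diff(u, z)) end proc:  # d/dx conjugate(f(x)) = conjugate(f'(x)) for real x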

The Putnam 2020 Competition (the 81st) was postponed to February 20, 2021 due to the COVID-19 pandemic, and held in an unofficial mode with no prizes or official results.

Four of the problems have surprisingly short Maple solutions.
Here they are.

A1.  How many positive integers N satisfy all of the following three conditions?
(i) N is divisible by 2020.
(ii) N has at most 2020 decimal digits.
(iii) The decimal digits of N are a string of consecutive ones followed by a string of consecutive zeros.

# a number with m ones followed by n-m zeros is (10^m - 1)/9 * 10^(n-m);
# since gcd(9, 2020) = 1, the factor 1/9 does not affect divisibility by 2020
add(add(`if`( (10&^m-1)*10&^(n-m) mod 2020 = 0, 1, 0), 
n=m+1..2020), m=1..2020);

       508536

 

A2.  Let k be a nonnegative integer.  Evaluate  

sum(2^(k-j)*binomial(k+j,j), j=0..k);

        4^k

 

A3.  Let a(0) = π/2, and let a(n) = sin(a(n-1)) for n ≥ 1. 
Determine whether the series sum(a(n)^2, n = 1..infinity) converges.

asympt('rsolve'({a(n) = sin(a(n-1)),a(0)=Pi/2}, a(n)), n, 4);

            

Since a(n)^2 is asymptotically equivalent to 3/n, the series diverges (by comparison with the harmonic series).
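A quick numerical check of this asymptotic claim (my addition, not part of the original solution):

a := evalf(Pi/2):
for i to 10000 do a := sin(a) od:  # iterate a(n) = sin(a(n-1)) numerically
evalf(10000*a^2);                  # evaluates to a value close to 3, consistent with a(n)^2 ~ 3/n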

 

B1.  For a positive integer n, define d(n) to be the sum of the digits of n when written in binary
 (for example, d(13) = 1+1+0+1 = 3). 

Let S = sum((-1)^d(k)*k^3, k = 1..2020).
Determine S modulo 2020.

d := n -> add(convert(n, base,2)):
add( (-1)^d(k) * k^3, k=1..2020 ) mod 2020; 

        1990

 

Presenters:

Affiliated Research Professor Mohammad Khoshnevisan

Physics Department, Northeastern University

United States of America

&

Behzad Mohasel Afshari, Admitted Ph.D. student

School of Advanced Manufacturing & Mechanical Engineering

University of South Australia

Australia

Note: This webinar is free. For registration, please send an e-mail to m.khoshnevisan@ieee.org. The registration link will be sent to all participants on April 26, 2021. Maximum number of participants: 60.

 

Vibrational Mechanics - Practical Applications - Animation 1

Vibrational Mechanics - Practical Applications - Animation 2

Vibrational Mechanics - Practical Applications - Animation 3

I made a Maple worksheet for generating the Pythagorean triples ternary tree:

Around 10,000 records in the matrix currently!

You can set your desired size, or export the Matrix as text ...

I would also like to learn better techniques from you, if you have any suggestions.

MaplePrimes doesn't load my worksheet for preview, so I put a screenshot instead.

Pythagoras_ternary.mw

Pythagoras_ternary_data.mw

Pythagoras_ternary_maple.mw
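For reference, here is a minimal sketch of the classical three-matrix (Berggren) ternary-tree construction; it may or may not match the approach used in the worksheets above.

# each primitive triple [a, b, c] has exactly three children in the tree,
# obtained by multiplying the triple (as a column vector) by these fixed matrices
T1 := Matrix([[ 1, -2, 2], [ 2, -1, 2], [ 2, -2, 3]]):
T2 := Matrix([[ 1,  2, 2], [ 2,  1, 2], [ 2,  2, 3]]):
T3 := Matrix([[-1,  2, 2], [-2,  1, 2], [-2,  2, 3]]):

TripleTree := proc(depth::nonnegint)
  local level, next_level, t, M, result;
  level := [Vector([3, 4, 5])];   # root triple [3, 4, 5]
  result := level;
  to depth do
    next_level := [seq(seq(M . t, M = [T1, T2, T3]), t = level)];
    result := [op(result), op(next_level)];
    level := next_level;
  od;
  map(convert, result, list);     # return the triples as lists [a, b, c]
end proc:

TripleTree(2);   # 1 + 3 + 9 = 13 primitive Pythagorean triples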

 

 

 

We announce the release of a new book, titled Fourier Transforms for Chemistry, which is in the form of a Maple worksheet. This book is freely available through the Maple Application Centre, either as a Maple worksheet with no output from commands or as a .pdf file with all output and plots.

This interactive electronic book, in the form of a Maple worksheet, comprises six chapters containing Maple commands, plus an introductory overview (chapter 0). The chapters have content as follows.

  -   1    continuous Fourier transformation

  -   2    electron diffraction of a gaseous sample

  -   3    x-ray diffraction of a crystal and a powder

  -   4    microwave spectrum of a gaseous sample

  -   5    infrared and Raman spectra of a liquid sample

  -   6    nuclear magnetic resonance of various samples

This book will be useful in courses on physical chemistry or courses devoted to the determination of molecular structure by physical methods. Some content, duly acknowledged, has been derived and adapted from other authors, with permission.
