Education

Teaching and learning about math, Maple and MapleSim

A string is wound symmetrically around a circular rod. The string goes exactly
4 times around the rod. The circumference of the rod is 4 cm and its length is 12 cm.
Find the length of the string.
Show all your work.

(It was presented at a meeting of the European Mathematical Society in 2001,
"Reference levels in mathematics in Europe at age16").

Can you solve it? You may want to try before seeing the solution.
[I sometimes train olympiad students at my university, so I like such problems].

restart;
eq:= 2/Pi*cos(t), 2/Pi*sin(t), 3/2/Pi*t; # The equations of the helix for t in 0 .. 8*Pi: radius = 4/(2*Pi) = 2/Pi, and z rises to 12 after 4 turns
               
p:=plots[spacecurve]([eq, t=0..8*Pi],scaling=constrained,color=red, thickness=5, axes=none):
plots:-display(plottools:-cylinder([0,0,0], 2/Pi, 12, style=surface, color=yellow),
                         p, scaling=constrained,axes=none);
 

VectorCalculus:-ArcLength(<eq>, t=0..8*Pi);

                           20

 

Let's look at the first loop around the rod.
If we unroll (develop) the corresponding 1/4 of the cylinder, we obtain a rectangle whose sides are 4 and 12/4 = 3.
Its diagonal is 5 (ask Pythagoras why), so the length of the string is 4*5 = 20.
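For a quick check of the unrolled-rectangle argument in Maple (a one-line sketch of my own, not part of the original post):

# Each of the 4 turns unrolls to a 3 x 4 right triangle, so the total string length is
4*sqrt(3^2 + 4^2);   # 20, matching the ArcLength computation above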

 

This presentation is on an undergrad intermediate Quantum Mechanics topic. Tackling the problem within a computer algebra worksheet in the way shown below is actually the novelty, using the Physics package to formulate the problem with quantum operators and related algebra rules in tensor notation.

 

Quantization of the Lorentz Force

 

Pascal Szriftgiser (1) and Edgardo S. Cheb-Terrab (2)

(1) Laboratoire PhLAM, UMR CNRS 8523, Université Lille 1, F-59655, France

(2) Maplesoft

 

We consider the case of a quantum, non-relativistic particle with mass m and charge q evolving under the action of an arbitrary time-independent magnetic field B = Curl(A(x, y, z)), where A is the vector potential. The Hamiltonian for this system is

H = (p - q*A(X))^2/(2*m)

where `#mover(mi("p",mathcolor = "olive"),mo("&rarr;"))` is the momentum of the particle, and the force acting in this particle, also called the Lorentz force, is given by

 

`#mover(mi("F",mathcolor = "olive"),mo("&rarr;"))` = m*(diff(v(t), t))

 

where `#mover(mi("v",mathcolor = "olive"),mo("&rarr;"))` is the quantized velocity of the particle, and all of  H, `#mover(mi("p",mathcolor = "olive"),mo("&rarr;"))`, `#mover(mi("v",mathcolor = "olive"),mo("&rarr;"))`, `#mover(mi("B",mathcolor = "olive"),mo("&rarr;"))`, `#mover(mi("A",mathcolor = "olive"),mo("&rarr;"))` and `#mover(mi("F",mathcolor = "olive"),mo("&rarr;"))` are Hermitian quantum operators representing observable quantities.

 

In the classical (non-quantum) case, F for such a particle in the absence of an electric field is given by

 

`#mover(mi("F"),mo("&rarr;"))` = `&x`(q*`#mover(mi("v"),mo("&rarr;"))`, `#mover(mi("B"),mo("&rarr;"))`) ,

 

Problem: Departing from the Hamiltonian, show that in the quantum case the Lorentz force is given by [1]

 

`#mover(mi("F",mathcolor = "olive"),mo("&rarr;"))` = (1/2)*q*(`&x`(`#mover(mi("v",mathcolor = "olive"),mo("&rarr;"))`, `#mover(mi("B",mathcolor = "olive"),mo("&rarr;"))`)-`&x`(`#mover(mi("B",mathcolor = "olive"),mo("&rarr;"))`, `#mover(mi("v",mathcolor = "olive"),mo("&rarr;"))`))

 

[1] Photons et atomes, Introduction à l'électrodynamique quantique, p. 179, Claude Cohen-Tannoudji, Jacques Dupont-Roc and Gilbert Grynberg, EDP Sciences, January 1987.

 

Solution

 

We choose to tackle the problem in Heisenberg's picture of quantum mechanics, where the state of a system is static and only the quantum operators evolve in time according to


diff(O(t), t) = I*Physics:-Commutator(H, O(t))/`&hbar;`

 

Also, the algebraic manipulations are simpler using abstract tensor notation instead of the standard 3D vector notation. We therefore start by setting the framework for the problem: a system of coordinates X, a tensor space of dimension 3 with Euclidean metric, and lowercase Latin letters to represent tensor indices. In addition, not necessary but convenient, we set the lowercase Latin i to represent the imaginary unit and request automaticsimplification so that all output comes automatically simplified in size.

 

restart; with(Physics); interface(imaginaryunit = i)

Setup(mathematicalnotation = true, automaticsimplification = true, coordinates = X, dimension = 3, metric = Euclidean, spacetimeindices = lowercaselatin, quiet)

[automaticsimplification = true, coordinatesystems = {X}, dimension = 3, mathematicalnotation = true, metric = {(1, 1) = 1, (2, 2) = 1, (3, 3) = 1}, spacetimeindices = lowercaselatin]

(1)

 

Next we indicate the letters we will use to represent the quantum operators with which we will work, and also the standard commutation rules between position and momentum, always the starting point when dealing with quantum mechanics problems

 

Setup(quantumoperators = {F}, hermitianoperators = {A, B, H, p, r, v, x}, realobjects = {`&hbar;`, m, q}, algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = I*`&hbar;`*KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0})

[algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = I*`&hbar;`*Physics:-KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0}, hermitianoperators = {A, B, H, p, r, v, x}, quantumoperators = {A, B, F, H, p, r, v, x}, realobjects = {`&hbar;`, m, q, x1, x2, x3, %dAlembertian, Physics:-dAlembertian}]

(2)

 

Note that we deliberately do not indicate F as Hermitian at this point, in order to arrive at that result. The quantum operators A, B, and F are explicit functions of X, so to avoid redundant display of this functional dependence on the screen we use

 

CompactDisplay((A, B, F)(X))

A(x1, x2, x3)*`will now be displayed as`*A

 

B(x1, x2, x3)*`will now be displayed as`*B

 

F(x1, x2, x3)*`will now be displayed as`*F

(3)

Now define as tensors the quantum operators that we will use in tensorial notation (recall that, for these, Einstein's sum rule for repeated indices is automatically applied when simplifying)

 

Define(x, p, v, A, B, F, quiet)

{A, B, F, p, v, x, Physics:-Dgamma[a], Physics:-Psigma[a], Physics:-d_[a], Physics:-g_[a, b], Physics:-KroneckerDelta[a, b], Physics:-LeviCivita[a, b, c], Physics:-SpaceTimeVector[a](X)}

(4)

The Hamiltonian,

H = (`#mover(mi("p",mathcolor = "olive"),mo("&rarr;"))`-q*`#mover(mi("A",mathcolor = "olive"),mo("&rarr;"))`(X))^2/(2*m)

in tensorial notation, is given by

H = (p[n]-q*A[n](X))^2/(2*m)

H = (1/2)*Physics:-`^`(p[n]-q*A[n](X), 2)/m

(5)

Generally speaking, to arrive at F = (1/2)*q*(v &x B - B &x v), what we now need to do is

1) Express this Hamiltonian (5) in terms of the velocity

 

And, recalling that, in Heisenberg's picture, quantum operators evolve in time according to

diff(O(t), t) = I*Physics:-Commutator(H, O(t))/`&hbar;`

 

2) Take the commutator of H with the velocity itself to obtain its time derivative; since F = m*diff(v(t), t), that commutator is already the force up to some constant factors.

 

To get in contact with the basic commutation rules between position and momentum behind quantum phenomena, the quantized velocity itself can be computed as the time derivative of the position operator, i.e. as the commutator of x[k] with H

I*Commutator(H = (1/2)*Physics[`^`](p[n]-q*A[n](X), 2)/m, x[k])/`&hbar;`

I*Physics:-Commutator(H, x[k])/`&hbar;` = (1/2)*(I*q^2*Physics:-AntiCommutator(A[n](X), Physics:-Commutator(A[n](X), x[k]))-I*q*Physics:-AntiCommutator(p[n], Physics:-Commutator(A[n](X), x[k]))-2*(q*A[n](X)-p[n])*Physics:-KroneckerDelta[k, n]*`&hbar;`)/(`&hbar;`*m)

(6)

This expression for the velocity, which involves commutators between the potential A[n](X), the position x[k] and the momentum p[n], can be simplified taking into account the basic quantum algebra rules between position and momentum. We assume that A[n](X) can be decomposed into a formal power series (possibly infinite) in the x[k], hence all the A[n](X) commute among themselves as well as with all the x[k]

 

{%Commutator(A[k](X), x[l]) = 0, %Commutator(A[k](X), A[l](X)) = 0}

{%Commutator(A[k](X), x[l]) = 0, %Commutator(A[k](X), A[l](X)) = 0}

(7)

(Note: in some cases, this is not true, but those cases are beyond the scope of this worksheet.)

 

Add these rules to the algebra rules already set so that they are all taken into account when simplifying things

 

Setup(algebrarules = {%Commutator(A[k](X), x[l]) = 0, %Commutator(A[k](X), A[l](X)) = 0})

[algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = I*`&hbar;`*Physics:-KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0, %Commutator(A[k](X), x[l]) = 0, %Commutator(A[k](X), A[l](X)) = 0}]

(8)

Simplify(I*Physics[Commutator](H, x[k])/`&hbar;` = (1/2)*(I*q^2*Physics[AntiCommutator](A[n](X), Physics[Commutator](A[n](X), x[k]))-I*q*Physics[AntiCommutator](p[n], Physics[Commutator](A[n](X), x[k]))-2*(q*A[n](X)-p[n])*Physics[KroneckerDelta][k, n]*`&hbar;`)/(`&hbar;`*m))

I*Physics:-Commutator(H, x[k])/`&hbar;` = (-A[k](X)*q+p[k])/m

(9)

The right-hand side of (9) is then the kth component of the velocity tensor quantum operator; the relationship is the same as in the classical case

v[k] = rhs(I*Physics[Commutator](H, x[k])/`&hbar;` = (-A[k](X)*q+p[k])/m)

v[k] = (-A[k](X)*q+p[k])/m

(10)

and with this the Hamiltonian (5) can now be rewritten in terms of the velocity, completing step 1)

simplify(H = (1/2)*Physics[`^`](p[n]-q*A[n](X), 2)/m, {SubstituteTensorIndices(k = n, (rhs = lhs)(v[k] = (-A[k](X)*q+p[k])/m))})

H = (1/2)*m*Physics:-`^`(v[n], 2)

(11)

For step 2), to compute

 `#mover(mi("F",mathcolor = "olive"),mo("&rarr;"))` = m*(diff(v(t), t)) and m*(diff(v(t), t)) = I*m*Physics:-Commutator(H, v(t)[k])/`&hbar;` 

 

we need the commutator between the different components of the quantized velocity which, contrary to what happens in the classical case, do not commute. For this purpose, take the commutator of (10) with itself after replacing the free index

Commutator(v[k] = (-A[k](X)*q+p[k])/m, SubstituteTensorIndices(k = n, v[k] = (-A[k](X)*q+p[k])/m))

Physics:-Commutator(v[k], v[n]) = -q*(Physics:-Commutator(A[k](X), p[n])+Physics:-Commutator(p[k], A[n](X)))/m^2

(12)

To simplify (12), we use the fact that if f is a commutative mapping that can be decomposed into a formal power series in all the complex plane (which is assumed to be the case for all A[n](X)), then

Physics:-Commutator(p[k], f(x, y, z)) = -I*`&hbar;`*`&PartialD;`[k](f(x, y, z))

where p[k]"=-i `&hbar;` `&PartialD;`[k] " is the momentum operator along the x[k] axis. This relation reads in tensor notation:

Commutator(p[k], A[n](X)) = -I*`&hbar;`*d_[k](A[n](X))

Physics:-Commutator(p[k], A[n](X)) = -I*`&hbar;`*Physics:-d_[k](A[n](X), [X])

(13)

Add this rule to the rules previously set in order to automatically take it into account in (12)

Setup(Physics[Commutator](p[k], A[n](X)) = -I*`&hbar;`*Physics[d_][k](A[n](X), [X]))

[algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(p[k], A[n](X)) = -I*`&hbar;`*Physics:-d_[k](A[n](X), [X]), %Commutator(x[k], p[l]) = I*`&hbar;`*Physics:-KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0, %Commutator(A[k](X), x[l]) = 0, %Commutator(A[k](X), A[l](X)) = 0}]

(14)

Physics[Commutator](v[k], v[n]) = -q*(Physics[Commutator](A[k](X), p[n])+Physics[Commutator](p[k], A[n](X)))/m^2

Physics:-Commutator(v[k], v[n]) = -I*q*`&hbar;`*(Physics:-d_[n](A[k](X), [X])-Physics:-d_[k](A[n](X), [X]))/m^2

(15)

Also add this other rule so that it is taken into account automatically

Setup(Physics[Commutator](v[k], v[n]) = -I*q*`&hbar;`*(Physics[d_][n](A[k](X), [X])-Physics[d_][k](A[n](X), [X]))/m^2)

[algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(p[k], A[n](X)) = -I*`&hbar;`*Physics:-d_[k](A[n](X), [X]), %Commutator(v[k], v[n]) = -I*q*`&hbar;`*(Physics:-d_[n](A[k](X), [X])-Physics:-d_[k](A[n](X), [X]))/m^2, %Commutator(x[k], p[l]) = I*`&hbar;`*Physics:-KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0, %Commutator(A[k](X), x[l]) = 0, %Commutator(A[k](X), A[l](X)) = 0}]

(16)

Recalling now the expression (11) of the Hamiltonian as a function of the velocity, one can compute the components of the force operator, F[k] = m*diff(v[k], t) = I*m*Commutator(H, v[k])/`&hbar;`

F[k](X) = I*m*%Commutator(rhs(H = (1/2)*m*Physics[`^`](v[n], 2)), v[k])/`&hbar;`

F[k](X) = I*m*%Commutator((1/2)*m*Physics:-`^`(v[n], 2), v[k])/`&hbar;`

(17)

Simplify this expression for the quantized force taking the quantum algebra rules (16) into account

Simplify(F[k](X) = I*m*%Commutator((1/2)*m*Physics[`^`](v[n], 2), v[k])/`&hbar;`)

F[k](X) = (1/2)*q*(-Physics:-`*`(Physics:-d_[n](A[k](X), [X]), v[n])+Physics:-`*`(Physics:-d_[k](A[n](X), [X]), v[n])-Physics:-`*`(v[n], Physics:-d_[n](A[k](X), [X]))+Physics:-`*`(v[n], Physics:-d_[k](A[n](X), [X])))

(18)

It is not difficult to verify that this is the antisymmetrized vector product v &x B. Departing from B = VectorCalculus[Nabla] &x A expressed using tensor notation,

B[c](X) = LeviCivita[c, n, m]*d_[n](A[m](X))

B[c](X) = -Physics:-LeviCivita[c, m, n]*Physics:-d_[n](A[m](X), [X])

(19)

and taking into account that

Component(v &x B, k) = `&epsilon;`[b, c, k]*v[b]*B[c](X)

multiply both sides of (19) by `&epsilon;`[b, c, k]*v[b], getting

LeviCivita[k, b, c]*v[b]*(B[c](X) = -Physics[LeviCivita][c, m, n]*Physics[d_][n](A[m](X), [X]))

Physics:-LeviCivita[b, c, k]*Physics:-`*`(v[b], B[c](X)) = -Physics:-LeviCivita[b, c, k]*Physics:-LeviCivita[c, m, n]*Physics:-`*`(v[b], Physics:-d_[n](A[m](X), [X]))

(20)

Simplify(Physics[LeviCivita][b, c, k]*Physics[`*`](v[b], B[c](X)) = -Physics[LeviCivita][b, c, k]*Physics[LeviCivita][c, m, n]*Physics[`*`](v[b], Physics[d_][n](A[m](X), [X])))

Physics:-LeviCivita[b, c, k]*Physics:-`*`(v[b], B[c](X)) = Physics:-`*`(v[m], Physics:-d_[k](A[m](X), [X]))-Physics:-`*`(v[n], Physics:-d_[n](A[k](X), [X]))

(21)

Finally, replacing the repeated index m by n 

SubstituteTensorIndices(m = n, Physics[LeviCivita][b, c, k]*Physics[`*`](v[b], B[c](X)) = Physics[`*`](v[m], Physics[d_][k](A[m](X), [X]))-Physics[`*`](v[n], Physics[d_][n](A[k](X), [X])))

Physics:-LeviCivita[b, c, k]*Physics:-`*`(v[b], B[c](X)) = Physics:-`*`(v[n], Physics:-d_[k](A[n](X), [X]))-Physics:-`*`(v[n], Physics:-d_[n](A[k](X), [X]))

(22)

Likewise, for

Component(B &x v, k) = `&epsilon;`[b, c, k]*B[c](X)*v[b]

multiplying (19), this time from the right instead of from the left, we get

Simplify(((B[c](X) = -Physics[LeviCivita][c, m, n]*Physics[d_][n](A[m](X), [X]))*LeviCivita[k, b, c])*v[b])

Physics:-LeviCivita[b, c, k]*Physics:-`*`(B[c](X), v[b]) = Physics:-`*`(Physics:-d_[k](A[m](X), [X]), v[m])-Physics:-`*`(Physics:-d_[n](A[k](X), [X]), v[n])

(23)

SubstituteTensorIndices(m = n, Physics[LeviCivita][b, c, k]*Physics[`*`](B[c](X), v[b]) = Physics[`*`](Physics[d_][k](A[m](X), [X]), v[m])-Physics[`*`](Physics[d_][n](A[k](X), [X]), v[n]))

Physics:-LeviCivita[b, c, k]*Physics:-`*`(B[c](X), v[b]) = Physics:-`*`(Physics:-d_[k](A[n](X), [X]), v[n])-Physics:-`*`(Physics:-d_[n](A[k](X), [X]), v[n])

(24)

Simplifying now the expression (18) for the quantized force taking into account (22) and (24) we get

simplify(F[k](X) = (1/2)*q*(-Physics[`*`](Physics[d_][n](A[k](X), [X]), v[n])+Physics[`*`](Physics[d_][k](A[n](X), [X]), v[n])-Physics[`*`](v[n], Physics[d_][n](A[k](X), [X]))+Physics[`*`](v[n], Physics[d_][k](A[n](X), [X]))), {(rhs = lhs)(Physics[LeviCivita][b, c, k]*Physics[`*`](v[b], B[c](X)) = Physics[`*`](v[n], Physics[d_][k](A[n](X), [X]))-Physics[`*`](v[n], Physics[d_][n](A[k](X), [X]))), (rhs = lhs)(Physics[LeviCivita][b, c, k]*Physics[`*`](B[c](X), v[b]) = Physics[`*`](Physics[d_][k](A[n](X), [X]), v[n])-Physics[`*`](Physics[d_][n](A[k](X), [X]), v[n]))})

F[k](X) = (1/2)*q*Physics:-LeviCivita[b, c, k]*(Physics:-`*`(v[b], B[c](X))+Physics:-`*`(B[c](X), v[b]))

(25)

i.e.

`#mover(mi("F",mathcolor = "olive"),mo("&rarr;"))` = (1/2)*q*(`&x`(`#mover(mi("v",mathcolor = "olive"),mo("&rarr;"))`, `#mover(mi("B",mathcolor = "olive"),mo("&rarr;"))`)-`&x`(`#mover(mi("B",mathcolor = "olive"),mo("&rarr;"))`, `#mover(mi("v",mathcolor = "olive"),mo("&rarr;"))`))

in tensor notation. Finally, we note that this operator is Hermitian as expected

(F[k](X) = (1/2)*q*Physics[LeviCivita][b, c, k]*(Physics[`*`](v[b], B[c](X))+Physics[`*`](B[c](X), v[b])))-Dagger(F[k](X) = (1/2)*q*Physics[LeviCivita][b, c, k]*(Physics[`*`](v[b], B[c](X))+Physics[`*`](B[c](X), v[b])))

F[k](X)-Physics:-Dagger(F[k](X)) = 0

(26)



Download:  Quantization_of_the_Lorentz_force.mw,   Quantization_of_the_Lorentz_force.pdf


Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
Editor, Computer Physics Communications

Students using Maple often have different needs than non-students. Students need more than just a final answer; they are looking to gain an understanding of the mathematical concepts behind the problems they are asked to solve and to learn how to solve problems. They need an environment that allows them to explore the concepts and break problems down into smaller steps.

The Student packages in Maple offer focused learning environments in which students can explore and reinforce fundamental concepts for courses in Precalculus, Calculus, Linear Algebra, Statistics, and more. For example, Maple includes step-by-step tutors that allow students to practice integration, differentiation, finding limits, and more. The Integration Tutor, shown below, lets a student evaluate an integral by selecting an applicable rule at each step. Maple will also offer hints or show the next step, if asked. The tutor doesn't just demonstrate how to obtain the result; it is designed for practicing and learning.
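For instance, the Integration Tutor described above can also be launched programmatically (a small sketch; the integrand x*cos(x) is my own choice, not taken from the post):

with(Student:-Calculus1):
IntTutor(x*cos(x), x);   # opens the interactive Integration Tutor on this integrand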

For this blog post, I’d like to focus in on an area of great interest to students: showing step-by-step solutions for a variety of problems in Maple.

Several commands in the Student packages can show solution steps as either output or inline in an interactive pop-up window. The first few examples return the solution steps as output.

Precalculus problems:

The Student:-Basics sub-package provides a collection of commands to help students and teachers explore fundamental mathematical concepts that are core to many disciplines. It features two commands, both of which return step-by-step solutions as output.

The ExpandSteps command accepts a product of polynomials and displays the steps required to expand the expression:

with(Student:-Basics):
ExpandSteps( (a^2-1)/(a/3+1/3) );

The LinearSolveSteps command accepts an equation in one variable and displays the steps required to solve for that variable.

with(Student:-Basics):
LinearSolveSteps( (x+1)/y = 4*y^2 + 3*x, x );

This command also accepts some nonlinear equations that can be reduced down to linear equations.
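For example (an illustrative equation of my own choosing; whether this exact form is accepted may depend on your Maple version), an equation whose quadratic terms cancel reduces to a linear one:

with(Student:-Basics):
LinearSolveSteps( x*(x + 2) = x^2 + 6, x );   # expands, cancels x^2, and solves 2*x = 6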

Calculus problems:

The Student:-Calculus1 sub-package is designed to cover the basic material of a standard first course in single-variable calculus. Several commands in this package provide interactive tutors where you can step through computations and step-by-step solutions can be returned as standard worksheet output.

Tools like the integration, differentiation, and limit method tutors are interactive interfaces that allow for exploration. For example, similar to the integration-methods tutor above, the differentiation-methods tutor lets a student obtain a derivative by selecting the appropriate rule that applies at each step or by requesting a complete solution all at once. When done, pressing “Close” prints out to the Maple worksheet an annotated solution containing all of the steps.

For example, try entering the following into Maple:

with(Student:-Calculus1):
x*sin(x);

Next, right-click on the expression and choose “Student Calculus1 -> Tutors -> Differentiation Methods…”

The Student:-Calculus1 sub-package is not alone in offering this kind of step-by-step solution finding. Other commands in other Student packages are also capable of returning solutions.

Linear Algebra Problems:

The Student:-LinearAlgebra sub-package is designed to cover the basic material of a standard first course in linear algebra. This sub-package features similar tutors to those found in the Calculus1 sub-package. Commands such as the Gaussian Elimination, Gauss-Jordan Elimination, Matrix Inverse, Eigenvalues, and Eigenvectors tutors show step-by-step solutions for linear algebra problems in interactive pop-up tutor windows. Of these tutors, a personal favourite has to be the Gauss-Jordan Elimination tutor which, were I still a student, would have saved me a lot of time and effort searching for simple arithmetic errors while row-reducing matrices.

For example, try entering the following into Maple:

with(Student:-LinearAlgebra):
M:=<<77,9,31>|<-50,-80,43>|<25,94,12>|<20,-61,-48>>;

Next, right-click on the Matrix and choose “Student Linear Algebra -> Tutors -> Gauss-Jordan Elimination Tutor”

This tutor makes it possible to step through row-reducing a matrix by using the controls on the right side of the pop-up window. If you are unsure where to go next, the “Next Step” button can be used to move forward one step. Pressing “All Steps” returns all of the steps required to row-reduce this matrix.

When this tutor is closed, it does not return results to the Maple worksheet; however, it is still possible to use the Maple interface to step through elementary row operations and capture the output in the Maple worksheet. By loading the Student:-LinearAlgebra package, you can simply use the right-click context menu to apply elementary row operations to a Matrix, stepping through the reduction and capturing all of your steps along the way!
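At the command level, the same idea can be captured explicitly; here is a minimal sketch (my own choice of operations, using the top-level LinearAlgebra:-RowOperation command and the matrix M defined above):

with(LinearAlgebra):
M1 := RowOperation(M, [1, 2]):              # swap rows 1 and 2
M2 := RowOperation(M1, 1, 1/M1[1, 1]):      # scale row 1 so its leading entry becomes 1
M3 := RowOperation(M2, [2, 1], -M2[2, 1]);  # add a multiple of row 1 to row 2 to clear the entry below the pivot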

An interactive application for showing steps for some problems:

While working on this blog post, it struck me that we did not have any online interactive applications that could show solution steps, so using the commands that I’ve discussed above, I authored an application that can expand, solve linear problems, integrate, differentiate, or find limits. You can interact with this application here, but note that this application is a work in progress, so feel free to email me (maplepm (at) Maplesoft.com) any strange bugs that you may encounter with it.

More detail on each of these commands can be found in Maple’s help pages.

 

We assume that the radius of the outer stationary circle is 1. Once we set the radius x of the inner stationary circle, all the other circles are uniquely determined by solving the system Sys. We must have x <= 1/3; if x = 1/3 then all the inner circles have radius 1/3. The following picture explains the meaning of the symbols in the procedure Circles:

                                   

 

 

Circles:=proc(x)

local OO, O1, O2, O3, O4, O2x, O2y, O3x, O3y, OT, T1, T2, T3, s, t, dist, Sys, Sol, sol, y, u, v, z, C0, R0, P;

uses plottools, plots;

OO:=[0,0]: O1:=[x+y,0]: O2:=[O2x,O2y]: O3:=[O3x,O3y]: O4:=[-x-z,0]: OT:=[x+2*y-1,0]:

T1:=(O2*~y+O1*~u)/~(y+u): T2:=(O3*~u+O2*~v)/~(u+v): T3:=(O4*~v+O3*~z)/~(v+z):

solve({(T2-T1)[1]*(s-((T1+T2)/2)[1])+(T2-T1)[2]*(t-((T1+T2)/2)[2])=0, (T3-T2)[1]*(s-((T2+T3)/2)[1])+(T3-T2)[2]*(t-((T3+T2)/2)[2])=0}, {s,t}):

assign(%);

dist:=(A,B)->sqrt((B[1]-A[1])^2+(B[2]-A[2])^2):

Sys:={dist(O1,O2)^2=(y+u)^2, dist(OO,O2)^2=(x+u)^2, dist(O2,O3)^2=(u+v)^2, dist(OO,O3)^2=(x+v)^2, dist(O3,O4)^2=(z+v)^2, x+y+z=1, dist(O2,OT)^2=(1-u)^2, dist(O3,OT)^2=(1-v)^2};

Sol:=op~([allvalues([solve(Sys)])]);

sol:=select(i->is(eval(convert([y>0,u>0,v>0,z>0,O2y>0,x<=y,u<=y,v<=u,z<=v],`and`),i)), Sol)[];

assign(sol);

O1:=[x+y,0]: O2:=[O2x,O2y]: O3:=[O3x,O3y]: O4:=[-x-z,0]: OT:=[x+2*y-1,0]:

C0:=eval([s,t],sol);

R0:=eval(dist(T1,C0),sol):

P:=proc(phi)

local eq, r1, r, R, Ot, El, i, S, s, t, P1, P2;

uses plots,plottools;

eq:=1-dist([r*cos(s),r*sin(s)],OT)=r-x;

r1:=solve(eq,r);

r:=eval(r1,s=phi);

R[1]:=evalf(r-x);

Ot[1]:=evalf([r*cos(phi),r*sin(phi)]);

El:=plot([r1*cos(s),r1*sin(s),s=0..2*Pi],color="Green",thickness=3);

for i from 2 to 6 do

S:=[solve({1-dist(OT,[s,t])=dist(Ot[i-1],[s,t])-R[i-1], 1-dist(OT,[s,t])=dist(OO,[s,t])-x})];

P1:=eval([s,t],S[1]); P2:=eval([s,t],S[2]);

Ot[i]:=`if`(evalf(Ot[i-1][1]*P1[2]-Ot[i-1][2]*P1[1])>0,P1,P2);

R[i]:=dist(Ot[i],OO)-x;

od;

display(El,seq(disk(Ot[k],0.012),k=1..6),circle(C0,R0,color=gold,thickness=3),circle([x+2*y-1,0],1, color=blue,thickness=4), circle(OO,x, color=red,thickness=4), seq(circle(Ot[k],R[k], thickness=3),k=1..6), scaling=constrained, axes=none);

end proc:

animate(P,[phi], phi=0..Pi, frames=120);

end proc:  

 

Example of use (I got x = 0.22 just by measuring the originally displayed animation with a ruler):

Circles(0.22);

                               

 

 

The curve in the following animation is an astroid (a special case of a hypocycloid); see the wiki for details. The Hypocycloid procedure creates an animation for any hypocycloid. Parameters of the procedure: R is the radius of the outer circle, r is the radius of the inner circle.

Hypocycloid:=proc(R,r)

local A, B, f, g, F;

uses plots,plottools;

A:=circle(R,color=green,thickness=4):

B:=display(circle([R-r,0],r,color=red,thickness=4),line([R-r,0],[R,0],color=red,thickness=4)):

f:=t->plot([(R-r)*cos(s)+r*cos((R-r)/r*s),(R-r)*sin(s)-r*sin((R-r)/r*s),s=0..t],color=blue,thickness=4):

g:=t->rotate(rotate(B,-R/r*t,[R-r,0]),t):

F:=t->display(A,f(t),g(t),scaling=constrained):

animate(F,[t], t=0..2*Pi*denom(R/r), frames=90);

end proc:

 

Examples of use:

Hypocycloid(4,1); 

                                      

 

 

Hypocycloid(5,3);

                                      

 

 

 Круги.mw

A prime producing polynomial.

 

Observations on the trinomial n^2 + n + 41.

 

by Matt C. Anderson

 

September 3, 2016

 

The story so far

 

We assume that n is an integer.  We focus our attention on the polynomial n^2 + n + 41.

 

Further, we analyze the behavior of the factorization of integers of the form

 

h(n) = n^2 + n + 41                                          (expression 1)

 

where n is a non-negative integer.  It was shown by Legendre, in 1798, that if 0 ≤ n < 40 then h(n) is a prime number.

 

Certain patterns become evident when considering points (a,n) where

 

h(n) ≡ 0 mod a.                                             (expression 2)

 

The collection of all such points produces what we are calling a "graph of discrete divisors", due to certain self-similar features.  From experimental data we find that the integer points in this bifurcation graph lie on a collection of parabolic curves indexed by pairs of relatively prime integers.  The expression for the middle parabolas is:

 

p(r,c) = (c*x - r*y)^2 - r*(c*x - r*y) - x + 41*r^2.           (expression 3)

 

The restrictions are that 0 < r < c, gcd(r,c) = 1, and all four of r, c, x, and y are integers.

 

Each such pair (r,c) yields (again determined experimentally and by observation of calculations) an integer polynomial a*z^2 + b*z + c, and the quartic h(a*z^2 + b*z + c) then factors non-trivially over the integers into two quadratic expressions.  We call this our "parabola conjecture".  Certain symmetries in the bifurcation graph are due to elementary relationships between pairs of co-prime integers.  For instance, if m < n are co-prime integers, then there is an observable relationship between the parabola determined by (m, n) and that formed from (n-m, n).
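As a concrete illustration of the kind of factorization described in the conjecture (my own example, not taken from this note): for the quadratic z^2 + 2*z + 41, the quartic h(z^2 + 2*z + 41) factors over the integers into two quadratics.

h := n -> n^2 + n + 41:
factor(h(z^2 + 2*z + 41));   # (z^2 + z + 41)*(z^2 + 3*z + 43)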

 

We conjecture that all composite values of h(n) arise by substituting integer values of z into h(a*z^2 + b*z + c), where this quartic factors algebraically over Z for a*z^2 + b*z + c a quadratic polynomial determined by a pair of relatively prime integers.  We name this our "no stray points conjecture" because all the points in the bifurcation graph appear to lie on a parabola.

 

We further conjecture that the minimum x-values for parabolas corresponding to (r, c) with gcd(r, c) = 1 are equal for fixed n.  Further, these minimum x-values line up at 163*c^2/4 where c = 2, 3, 4, ...  The numerical evidence seems to support this.  This is called our "parabolas line up" conjecture.

 

The notation gcd(r, c) used above is defined here.  The greatest common divisor of two integers is the largest whole number that divides both of those integers.
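In Maple the greatest common divisor of two integers is computed with igcd, for example:

igcd(36, 60);   # 12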

 

Theorem 1 - Consider h(n) with n a non-negative integer. 

h(n) never has a factor less than 41.

 

We prove Theorem 1 with a modular construction.  We make a residue table with all the primes less than 41.  The fundamental theorem of arithmetic states that any integer greater than one is either a prime number, or can be written as a unique product of prime numbers (ignoring the order).  So if h(n) never has a prime factor less than 41, then by extension it never has an integer factor greater than 1 and less than 41.
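A minimal Maple sketch (mine, not the residue table of the original document) that carries out this modular check for every prime below 41:

# For each prime p < 41, collect the residues of h(n) = n^2 + n + 41 for n = 0 .. p-1;
# h(n) is divisible by p for some n exactly when 0 appears among these residues.
h := n -> n^2 + n + 41:
smallprimes := select(isprime, [$2 .. 40]):
residues := [seq(p = {seq(h(n) mod p, n = 0 .. p - 1)}, p in smallprimes)]:
select(e -> member(0, rhs(e)), residues);   # expected: [] (no prime below 41 ever divides h(n))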

 

For example, to determine that h(n) is never divisible by 2, note the first column of the residue table.  If n is even, then h(n) is odd.  Similarly, if n is odd then h(n) is also odd.  In either case, h(n) is not divisible by 2.

 

Also, for divisibility by 3, there are 3 cases to check: n = 0, 1, and 2 mod 3.  h(0) mod 3 is 2, h(1) mod 3 is 1, and h(2) mod 3 is 2.  In all three cases, h(n) is never divisible by 3.  This is the second column of the residue table.

 

The number 0 is first found in the residue table for the cases h(0) mod 41 and h(40) mod 41.  This means that if n is congruent to 0 mod 41 then h(n) will be divisible by 41.  Similarly, if n is congruent to 40 mod 41 then h(n) is also divisible by 41.

After the residue table, we observe a bifurcation graph which has a point (x, y) whenever h(y) is divisible by x.  These points (x, y) can be seen on the bifurcation graph.

 

< insert residue table here >

 

Thus we have shown that h(n) never has a factor less than 41.

 

Theorem 2

 

Since h(a) = a^2 + a + 41, we want to show that h(a) = h( -a -1).

 

Proof of Theorem 2

Note that h(a) = a*(a + 1) + 41.

Now h(-a - 1) = (-a - 1)*(-a - 1 + 1) + 41.

So h(-a - 1) = (-a - 1)*(-a) + 41 = a*(a + 1) + 41.

Therefore h(-a - 1) = h(a), which is what we wanted.

End of proof of theorem 2.
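A quick symbolic check of Theorem 2 in Maple (a one-liner of my own, not part of the original note):

h := a -> a^2 + a + 41:
expand(h(a) - h(-a - 1));   # 0, confirming h(a) = h(-a - 1)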

 

Corollary 1

Further, if h(b) ≡ 0 mod c, then h(c - b - 1) ≡ 0 mod c.
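A numeric spot-check (my own, using the statement above): h(3) = 53, so taking b = 3 and c = 53, both residues below are 0.

h := n -> n^2 + n + 41:
b := 3:  c := h(b):                  # c = 53 divides h(b) by construction
h(b) mod c, h(c - b - 1) mod c;      # 0, 0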

 

We can observe interesting patterns in the “graph of discrete divisors” on a following page.

 

The presentation below is on undergrad Quantum Mechanics. Tackling this topic within a computer algebra worksheet in the way it's done below, however, is an exciting novelty and illustrates well the level of abstraction that is now possible using the Physics package.

 

Quantum Mechanics: Schrödinger vs Heisenberg picture

Pascal Szriftgiser (1) and Edgardo S. Cheb-Terrab (2)

(1) Laboratoire PhLAM, UMR CNRS 8523, Université Lille 1, F-59655, France

(2) Maplesoft

 

Within the Schrödinger picture of Quantum Mechanics, the time evolution of the state of a system, represented by a Ket "| psi(t) >", is determined by Schrödinger's equation:

I*`&hbar;`*(diff(Ket(psi, t), t)) = H*Ket(psi, t)

where H, the Hamiltonian, as well as the quantum operators O__S representing observable quantities, are all time-independent.

 

Within the Heisenberg picture, a Ket Ket(psi, 0) representing the state of the system does not evolve with time, but the operators O__H(t) representing observable quantities, and through them the Hamiltonian H, do.

 

Problem: Departing from Schrödinger's equation,

  

a) Show that the expected value of a physical observable in Schrödinger's and Heisenberg's representations is the same, i.e. that

Bra(psi, t)*O__S*Ket(psi, t) = Bra(psi, 0)*O__H(t)*Ket(psi, 0)

  

b) Show that the evolution equation of an observable O__H in Heisenberg's picture, equivalent to Schrödinger's equation,  is given by:

diff(O__H(t), t) = (-I*Physics:-Commutator(O__H(t), H))*(1/`&hbar;`)

where in the right-hand-side we see the commutator of O__H with the Hamiltonian of the system.

Solution: Let O__S and O__H respectively be operators representing one and the same observable quantity in Schrödinger's and Heisenberg's pictures, and H be the operator representing the Hamiltonian of a physical system. All of these operators are Hermitian. So we start by setting up the framework for this problem accordingly, including that the time t and Planck's constant are real. To automatically combine powers of the same base (happening frequently in what follows) we also set combinepowersofsamebase = true. The following input/output was obtained using the latest Physics update (Aug/31/2016) distributed on the Maplesoft R&D Physics webpage.

with(Physics):

Physics:-Setup(hermitianoperators = {H, O__H, O__S}, realobjects = {`&hbar;`, t}, combinepowersofsamebase = true, mathematicalnotation = true)

[combinepowersofsamebase = true, hermitianoperators = {H, O__H, O__S}, mathematicalnotation = true, realobjects = {`&hbar;`, t}]

(1)

Let's consider Schrödinger's equation

I*`&hbar;`*(diff(Ket(psi, t), t)) = H*Ket(psi, t)

I*`&hbar;`*(diff(Physics:-Ket(psi, t), t)) = Physics:-`*`(H, Physics:-Ket(psi, t))

(2)

Now, H is time-independent, so (2) can be formally solved: psi(t) is obtained from the solution psi(0) at time t = 0, as follows:

T := exp(-I*H*t/`&hbar;`)

exp(-I*t*H/`&hbar;`)

(3)

Ket(psi, t) = T*Ket(psi, 0)

Physics:-Ket(psi, t) = Physics:-`*`(exp(-I*t*H/`&hbar;`), Physics:-Ket(psi, 0))

(4)

To check that (4) is a solution of (2), substitute it in (2):

eval(I*`&hbar;`*(diff(Physics[Ket](psi, t), t)) = Physics[`*`](H, Physics[Ket](psi, t)), Physics[Ket](psi, t) = Physics[`*`](exp(-I*H*t/`&hbar;`), Physics[Ket](psi, 0)))

Physics:-`*`(H, exp(-I*t*H/`&hbar;`), Physics:-Ket(psi, 0)) = Physics:-`*`(H, exp(-I*t*H/`&hbar;`), Physics:-Ket(psi, 0))

(5)

Next, to relate the Schrödinger and Heisenberg representations of an Hermitian operator O representing an observable physical quantity, recall that the value expected for this quantity at time t during a measurement is given by the mean value of the corresponding operator (i.e., bracketing it with the state of the system Ket(psi, t)).

So let O__S be an observable in the Schrödinger picture: its mean value is obtained by bracketing the operator with equation (4):

Dagger(Ket(psi, t) = Physics[`*`](exp(-I*H*t/`&hbar;`), Ket(psi, 0)))*O__S*(Ket(psi, t) = Physics[`*`](exp(-I*H*t/`&hbar;`), Ket(psi, 0)))

Physics:-`*`(Physics:-Bra(psi, t), O__S, Physics:-Ket(psi, t)) = Physics:-`*`(Physics:-Bra(psi, 0), exp(I*t*H/`&hbar;`), O__S, exp(-I*t*H/`&hbar;`), Physics:-Ket(psi, 0))

(6)

The composed operator within the bracket on the right-hand-side is the operator O in Heisenberg's picture, O__H(t)

Dagger(T)*O__S*T = O__H(t)

Physics:-`*`(exp(I*t*H/`&hbar;`), O__S, exp(-I*t*H/`&hbar;`)) = O__H(t)

(7)

Analogously, inverting this equation,

(T*(Physics[`*`](exp(I*H*t/`&hbar;`), O__S, exp(-I*H*t/`&hbar;`)) = O__H(t)))*Dagger(T)

O__S = Physics:-`*`(exp(-I*t*H/`&hbar;`), O__H(t), exp(I*t*H/`&hbar;`))

(8)

As an aside to the problem, we note from these two equations, and since the operator T = exp((-I*H*t)*(1/`&hbar;`)) is unitary (because H is Hermitian), that the switch between Schrödinger's and Heisenberg's pictures is accomplished through a unitary transformation.
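As a small verification sketch (my own addition, relying on the Setup above; the exact simplification behaviour may depend on the Physics version), one can check that T times its Dagger collapses to 1, since H is Hermitian and powers of the same base are being combined:

Dagger(T);                # exp(I*t*H/`&hbar;`), because H is Hermitian and t, `&hbar;` are real
Simplify(T * Dagger(T));  # expected: 1, showing T is unitary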

 

Inserting now this value of O__S from (8) in the right-hand-side of (6), we get the answer to item a)

lhs(Physics[`*`](Bra(psi, t), O__S, Ket(psi, t)) = Physics[`*`](Bra(psi, 0), exp(I*H*t/`&hbar;`), O__S, exp(-I*H*t/`&hbar;`), Ket(psi, 0))) = eval(rhs(Physics[`*`](Bra(psi, t), O__S, Ket(psi, t)) = Physics[`*`](Bra(psi, 0), exp(I*H*t/`&hbar;`), O__S, exp(-I*H*t/`&hbar;`), Ket(psi, 0))), O__S = Physics[`*`](exp(-I*H*t/`&hbar;`), O__H(t), exp(I*H*t/`&hbar;`)))

Physics:-`*`(Physics:-Bra(psi, t), O__S, Physics:-Ket(psi, t)) = Physics:-`*`(Physics:-Bra(psi, 0), O__H(t), Physics:-Ket(psi, 0))

(9)

where, on the left-hand-side, the Ket representing the state of the system is evolving with time (Schrödinger's picture), while on the right-hand-side the Ket `&psi;__0` is constant and it is O__H(t), the operator representing an observable physical quantity, that evolves with time (Heisenberg picture). As expected, both pictures result in the same expected value for the physical quantity represented by O.

 

To complete item b), the derivation of the evolution equation for O__H(t), we take the time derivative of the equation (7):

diff((rhs = lhs)(Physics[`*`](exp(I*H*t/`&hbar;`), O__S, exp(-I*H*t/`&hbar;`)) = O__H(t)), t)

diff(O__H(t), t) = I*Physics:-`*`(H, exp(I*t*H/`&hbar;`), O__S, exp(-I*t*H/`&hbar;`))/`&hbar;`-I*Physics:-`*`(exp(I*t*H/`&hbar;`), O__S, H, exp(-I*t*H/`&hbar;`))/`&hbar;`

(10)

To rewrite this equation in terms of the commutator  Physics:-Commutator(O__S, H), it suffices to re-order the product  H  exp(I*H*t/`&hbar;`) placing the exponential first:

Library:-SortProducts(diff(O__H(t), t) = I*Physics[`*`](H, exp(I*H*t/`&hbar;`), O__S, exp(-I*H*t/`&hbar;`))/`&hbar;`-I*Physics[`*`](exp(I*H*t/`&hbar;`), O__S, H, exp(-I*H*t/`&hbar;`))/`&hbar;`, [exp(I*H*t/`&hbar;`), H], usecommutator)

diff(O__H(t), t) = I*Physics:-`*`(exp(I*t*H/`&hbar;`), H, O__S, exp(-I*t*H/`&hbar;`))/`&hbar;`-I*Physics:-`*`(exp(I*t*H/`&hbar;`), Physics:-`*`(H, O__S)+Physics:-Commutator(O__S, H), exp(-I*t*H/`&hbar;`))/`&hbar;`

(11)

Normal(diff(O__H(t), t) = I*Physics[`*`](exp(I*H*t/`&hbar;`), H, O__S, exp(-I*H*t/`&hbar;`))/`&hbar;`-I*Physics[`*`](exp(I*H*t/`&hbar;`), Physics[`*`](H, O__S)+Physics[Commutator](O__S, H), exp(-I*H*t/`&hbar;`))/`&hbar;`)

diff(O__H(t), t) = -I*Physics:-`*`(exp(I*t*H/`&hbar;`), Physics:-Commutator(O__S, H), exp(-I*t*H/`&hbar;`))/`&hbar;`

(12)

Finally, to express the right-hand-side in terms of  Physics:-Commutator(O__H(t), H) instead of Physics:-Commutator(O__S, H), we take the commutator of the equation (8) with the Hamiltonian

Commutator(O__S = Physics[`*`](exp(-I*H*t/`&hbar;`), O__H(t), exp(I*H*t/`&hbar;`)), H)

Physics:-Commutator(O__S, H) = Physics:-`*`(exp(-I*t*H/`&hbar;`), Physics:-Commutator(O__H(t), H), exp(I*t*H/`&hbar;`))

(13)

Combining these two expressions, we arrive at the expected result for b), the evolution equation of a given observable O__H in Heisenberg's picture

eval(diff(O__H(t), t) = -I*Physics[`*`](exp(I*H*t/`&hbar;`), Physics[Commutator](O__S, H), exp(-I*H*t/`&hbar;`))/`&hbar;`, Physics[Commutator](O__S, H) = Physics[`*`](exp(-I*H*t/`&hbar;`), Physics[Commutator](O__H(t), H), exp(I*H*t/`&hbar;`)))

diff(O__H(t), t) = -I*Physics:-Commutator(O__H(t), H)/`&hbar;`

(14)


Download:    Schrodinger_vs_Heisenberg_picture.mw     Schrodinger_vs_Heisenberg_picture.pdf

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
Editor, Computer Physics Communications

Hanze University of Applied Sciences Groningen has created 105 questions related to engineering mechanics for structures (statics/construction). These 105 randomised questions with graphics are used for first year students in civil engineering, structural engineering, architectural engineering and building engineering.

The topics of the course modules are as follows:
- Force Vectors (10)
- Support Reactions (26)
- Internal Forces (31)
- Stress (21)
- Trusses (17)

All questions have a translation button which makes it easy to switch from English to any other language. The questions are first shown in Dutch [NL] but by clicking [UK] in the Preview, the English version is shown. The text can easily be edited and changed into the language of choice in the Maple T.A. question editor. Only the button needs adjustment in the question source.

60 of the questions are “exercises”, which means that these questions have extended feedback. The remaining 45 questions are “tests”, meaning that the questions include no feedback.

Cone.zip - construction exercises (60 questions)

Cont.zip - construction tests (45 questions)

Jonny
Maplesoft Product Manager, Online Education Products

Hi,

In the following example I introduce some commutation rules that are standard in Quantum Mechanics. A major feature of the Maple Physics package is that it is possible to define tensors as quantum operators. This is of great interest because powerful tensor simplification rules can then be used in Quantum Mechanics. For an example, see the commutation rules of the components of the angular momentum operator in ?Physics,Examples. Here, I focus on a possible issue: when destroying all quantum operators, the pre-defined commutation rules still apply, which should not be the case. As shown in the post, this is linked to the fact that these operators are also tensors.
 


 

Physics:-Version()[2]

`2016, August 16, 18:56 hours`

(1)


restart; with(Physics); interface(imaginaryunit = I)

First, set a 3D Euclidean space

Setup(mathematicalnotation = true, dimension = 3, signature = `+`, spacetimeindices = lowercaselatin, quiet)

[dimension = 3, mathematicalnotation = true, signature = `+ + +`, spacetimeindices = lowercaselatin]

(2)

Define two rank 1 tensors

Define(x[k], p[k])

`Defined objects with tensor properties`

 

{Physics:-Dgamma[a], Physics:-Psigma[a], Physics:-d_[a], Physics:-g_[a, b], p[k], x[k], Physics:-KroneckerDelta[a, b], Physics:-LeviCivita[a, b, c]}

(3)

Now, further define these tensors as quantum operators and give the usual commutation rules between position and momentum operators (Quantum Mechanics).

Setup(hermitianoperators = {p, x}, algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = I*`&hbar;`*KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0}, realobjects = {`&hbar;`})

[algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = I*`&hbar;`*Physics:-KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0}, hermitianoperators = {p, x}, realobjects = {`&hbar;`}]

(4)

As expected:

(%Commutator = Commutator)(p[a], x[b])

%Commutator(p[a], x[b]) = -I*`&hbar;`*Physics:-KroneckerDelta[a, b]

(5)

Now, reset all the Hermitian operators, so that all quantum operators are destroyed. This is useful if, for instance, one needs to compare some result with the commutative case.

Setup(redo, hermitianoperators = {})

[hermitianoperators = none]

(6)

As expected, there are no quantum operators anymore...

Setup(quantumoperators)

[quantumoperators = {}]

(7)

...so that the following expressions should commute (result should be true)

Library:-Commute(p[a], x[b])

false

(8)

Result should be 0

Commutator(p[a], x[b])

-I*`&hbar;`*Physics:-KroneckerDelta[a, b]

(9)

p[a], x[b]

p[a], x[b]

(10)


Below is just a copy & paste of the above section. The only difference is that "Define(x[k], p[k])" has been commented out, so that x[k] and p[k] are not tensors. In that case, everything behaves as expected (but, of course, the interesting feature of tensors is not available).


restart; with(Physics); interface(imaginaryunit = I)

First, set a 3D Euclidean space

Physics:-Setup(mathematicalnotation = true, dimension = 3, signature = `+`, spacetimeindices = lowercaselatin, quiet)

[dimension = 3, mathematicalnotation = true, signature = `+ + +`, spacetimeindices = lowercaselatin]

(11)

#Define two rank 1 tensors

Now, further define these tensors as quantum operators and give the usual commutation rules between position and momentum operators (Quantum Mechanics).

Physics:-Setup(hermitianoperators = {p, x}, algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = Physics:-`*`(Physics:-`*`(I, `&hbar;`), Physics:-KroneckerDelta[k, l]), %Commutator(x[k], x[l]) = 0}, realobjects = {`&hbar;`})

[algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = I*`&hbar;`*Physics:-KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0}, hermitianoperators = {p, x}, realobjects = {`&hbar;`}]

(12)

As expected:

(%Commutator = Physics:-Commutator)(p[a], x[b])

%Commutator(p[a], x[b]) = -I*`&hbar;`*Physics:-KroneckerDelta[a, b]

(13)

Now, reset all the Hermitian operators, so that all quantum operators are destroyed.

Physics:-Setup(redo, hermitianoperators = {})

[hermitianoperators = none]

(14)

As expected, there are no quantum operators anymore...

Physics:-Setup(quantumoperators)

[quantumoperators = {}]

(15)

...so that the following expressions should commute (result should be true)

Physics:-Library:-Commute(p[a], x[b])

true

(16)

Result should be 0

Physics:-Commutator(p[a], x[b])

0

(17)

p[a], x[b]

p[a], x[b]

(18)



Download Quantum_operator_as_Tensors_August_23_2016.mw

Just reading this story... a large quantity of information does not add up for me. Like, he denies it is his work and says it was done by Hamilton? Also, it's blasted across the net that his proof is valid, so he can't say he doesn't want fame, because it's been all over the internet. And the sheer lack of logic of refusing money based on a moral code of conduct? Then give it to charity, pay for a dozen scholarships. Or what, the money is dirty?

 

Pretty sure this whole thing is a pile of crap made up by someone.

A more honest and specific version of lemma 3.

CONGRUENT_FUNCTIONS_OF_THE_FRACTIONAL_PART_OVER_Q_LEMMA_4.mw


Download CONGRUENT_FUNCTIONS_OF_THE_FRACTIONAL_PART_OVER_Q_LEMMA_4.mw

It is very important that you learn to pose and solve equations in practical problems. Ernst Mach, a famous scientist of the nineteenth century, said that algebra is characterized by a lightening of the mind: once you have built the equation for a problem, you can "forget" the whole practical situation and focus on the mathematical expression; everything that is not necessary for solving the problem no longer interferes with your mind. Another famous scientist, Isaac Newton, wrote that the language of algebra is the equation. To state a problem concerning abstract relations of numbers or quantities, simply translate the problem from colloquial language into algebraic language. Here I leave the application for first-order equations, developed in Maple 2016.
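As a tiny illustration of that translation step (my own example, not taken from the linked application): "a number increased by its third part equals 24" becomes an equation that Maple solves at once.

eq := x + x/3 = 24:
solve(eq, x);   # 18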

 

Aplicativo_Ecuaciones.mw

(In Spanish)

Lenin Araujo Castillo

Ambassador of Maple - Perú

 

Hello, I was just looking back on some stuff I did a few months back, and although I'm aware there is already a function in a Maple package for generating the set of primes up to a given number, I'm just curious to know how this one measures up in terms of computational efficiency etc.

 

Anyway, this is the code; if anyone has the time to give it a try and let me know what they think, i.e. a faster or more logical way to go about it, any feedback is appreciated. Cheers.

 

restart;
interface(showassumed = 0, rtablesize = infinity);
with(plots); with(numtheory); with(Statistics); with(LinearAlgebra); with(RandomTools); with(codegen, makeproc); with(combinat); with(Maplets[Elements]);
unprotect(real, rational, integer, complex);
alias(P[In] = CurveFitting[PolynomialInterpolation]); alias(L[In] = CurveFitting[LeastSquares]); alias(R[In] = CurveFitting[RationalInterpolation]); alias(S[In] = CurveFitting[Spline]); alias(B[In] = CurveFitting[BSplineCurve]); alias(L[In] = CurveFitting[ThieleInterpolation], rho = frac); alias(`&Nscr;` = Count); alias(`&Dopf;` = numtheory:-divisors); alias(sigma = numtheory:-sigma); alias(`&Fscr;` = ListTools['Flatten']); alias(`&Sopf;` = seq);
delta := proc (x, y) options operator, arrow; piecewise(x = y, 1, x <> y, 0) end proc;
`&Mopf;` := proc (X, Y) options operator, arrow; map(X, Y) end proc;
`&Cscr;`[S, L] := proc (X) options operator, arrow; convert(X, 'list') end proc;
`&Cscr;`[L, S] := proc (X) options operator, arrow; convert(X, 'set') end proc;
`&Popf;` := proc (N) options operator, arrow; `minus`({`&Sopf;`(k*delta(`&Nscr;`(`&Fscr;`(`&Cscr;`[S, L](`&Mopf;`(`&Cscr;`[S, L], `&Mopf;`(`&Dopf;`, `&Dopf;`(k)))))), 3), k = 1 .. N)}, {0}) end proc;
n[P] := proc (N) options operator, arrow; `&Nscr;`(`&Cscr;`[S, L](`&Popf;`(N)))-1 end proc;
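As a quick way to try it out (a sketch of my own, assuming the definitions above have been executed), compare the posted constructor against a plain isprime filter on the same bound and check that the results agree:

N := 10000:
t0 := time(): P1 := `&Popf;`(N):                       t1 := time() - t0:
t0 := time(): P2 := {op(select(isprime, [$2 .. N]))}:  t2 := time() - t0:
t1, t2, evalb(P1 = P2);   # timings in seconds, and the two sets should be equal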




Download prime_subset_up_to_N.mw

This post is about the relationship between the number of processors used in parallel processing with the Threads package and the resultant real times and cpu times for a computation.

In the worksheet below, I perform the same computation using each possible number of processors on my machine, one through eight. The computation is adding a list of 32 million pre-selected random integers. The real times and cpu times are collected from each run, and these are analyzed with a variety of metrics that I devised. Note that garbage-collection (gc) time is not an issue in the timings; as you can see below, the gc times are zero throughout.

My conclusion is that there are severely diminishing returns as the number of processors increases. There is a major benefit in going from one processor to two; there is a not-as-great-but-still-substantial benefit in going from two processors to four. But the real-time reduction in going from four processors to eight is very small compared to the substantial increase in resource consumption.

Please discuss the relevance of my six metrics, the soundness of my test technique, and how the presentation could be better. If you have a computer capable of running more than eight threads, please modify and run my worksheet on it.

Diminishing Returns from Parallel Processing: Is it worth using more than four processors with Threads?

Author: Carl J Love, 2016-July-30 

Set up tests

 

restart:

currentdir(kernelopts(homedir)):
if kernelopts(numcpus) <> 8 then
     error "This worksheet needs to be adjusted for your number of CPUs."
end if:
try fremove("ThreadsData.m") catch: end try:
try fremove("ThreadsTestData.m") catch: end try:
try fremove("ThreadsTest.mpl") catch: end try:

#Create and save random test data
L:= RandomTools:-Generate(list(integer, 2^25)):
save L, "ThreadsTestData.m":

#Create code file to be read for the tests.
fd:= FileTools:-Text:-Open("ThreadsTest.mpl", create):
fprintf(
     fd,
     "gc();\n"
     "read \"ThreadsTestData.m\":\n"
     "CodeTools:-Usage(Threads:-Add(x, x= L)):\n"
     "fd:= FileTools:-Text:-Open(\"ThreadsData.m\", create, append):\n"
     "fprintf(\n"
     "     fd, \"%%m%%m%%m\\n\",\n"
     "     kernelopts(numcpus),\n"
     "     CodeTools:-Usage(\n"
     "          Threads:-Add(x, x= L),\n"
     "          iterations= 8,\n"
     "          output= [realtime, cputime]\n"
     "     )\n"
     "):\n"
     "fclose(fd):"
):
fclose(fd):

#Code review
fd:= FileTools:-Text:-Open("ThreadsTest.mpl"):
while not feof(fd) do
     printf("%s\n", FileTools:-Text:-ReadLine(fd))
end do:

fclose(fd):

gc();
read "ThreadsTestData.m":
CodeTools:-Usage(Threads:-Add(x, x= L)):
fd:= FileTools:-Text:-Open("ThreadsData.m", create, append):
fprintf(
     fd, "%m%m%m\n",
     kernelopts(numcpus),
     CodeTools:-Usage(
          Threads:-Add(x, x= L),
          iterations= 8,
          output= [realtime, cputime]
     )
):
fclose(fd):

 

Run the tests

restart:

kernelopts(numcpus= 1):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.79MiB, alloc change=0 bytes, cpu time=2.66s, real time=2.66s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=2.26s, real time=2.26s, gc time=0ns

 

Repeat above test using numcpus= 2..8.

 

restart:

kernelopts(numcpus= 2):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.79MiB, alloc change=2.19MiB, cpu time=2.73s, real time=1.65s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=2.37s, real time=1.28s, gc time=0ns

 

restart:

kernelopts(numcpus= 3):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.79MiB, alloc change=4.38MiB, cpu time=2.98s, real time=1.38s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=2.75s, real time=1.05s, gc time=0ns

 

restart:

kernelopts(numcpus= 4):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.80MiB, alloc change=6.56MiB, cpu time=3.76s, real time=1.38s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=3.26s, real time=959.75ms, gc time=0ns

 

restart:

kernelopts(numcpus= 5):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.80MiB, alloc change=8.75MiB, cpu time=4.12s, real time=1.30s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=3.74s, real time=910.88ms, gc time=0ns

 

restart:

kernelopts(numcpus= 6):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.81MiB, alloc change=10.94MiB, cpu time=4.59s, real time=1.26s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=4.29s, real time=894.00ms, gc time=0ns

 

restart:

kernelopts(numcpus= 7):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.81MiB, alloc change=13.12MiB, cpu time=5.08s, real time=1.26s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=4.63s, real time=879.00ms, gc time=0ns

 

restart:

kernelopts(numcpus= 8):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.82MiB, alloc change=15.31MiB, cpu time=5.08s, real time=1.25s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=4.69s, real time=845.75ms, gc time=0ns

 

Analyze the data

restart:

currentdir(kernelopts(homedir)):

(R,C):= 'Vector(kernelopts(numcpus))' $ 2:
N:= Vector(kernelopts(numcpus), i-> i):

fd:= FileTools:-Text:-Open("ThreadsData.m"):
while not feof(fd) do
     (n,Tr,Tc):= fscanf(fd, "%m%m%m\n")[];
     (R[n],C[n]):= (Tr,Tc)
end do:

fclose(fd):

plot(
     (V-> <N | 100*~V>)~([R /~ max(R), C /~ max(C)]),
     title= "Raw timing data (normalized)",
     legend= ["real", "CPU"],
     labels= [`number of processors\n`, `%  of  max`],
     labeldirections= [HORIZONTAL,VERTICAL],
     view= [DEFAULT, 0..100]
);

The metrics:

 

R[1] /~ R /~ N:          Gain: The gain from parallelism expressed as a percentage of the theoretical maximum gain given the number of processors

C /~ R /~ N:               Evenness: How evenly the task is distributed among the processors

1 -~ C[1] /~ C:           Overhead: The percentage of extra resource consumption due to parallelism

R /~ R[1]:                   Reduction: The percentage reduction in real time

1 -~ R[2..] /~ R[..-2]:  Marginal Reduction: Percentage reduction in real time by using one more processor

C[2..] /~ C[..-2] -~ 1:  Marginal Consumption: Percentage increase in resource consumption by using one more processor

 

plot(
     [
          (V-> <N | 100*~V>)~([
               R[1]/~R/~N,             #gain from parallelism
               C/~R/~N,                #how evenly distributed
               1 -~ C[1]/~C,           #overhead
               R/~R[1]                 #reduction
          ])[],
          (V-> <N[2..] -~ .5 | 100*~V>)~([
               1 -~ R[2..]/~R[..-2],   #marginal reduction rate
               C[2..]/~C[..-2] -~ 1    #marginal consumption rate        
          ])[]
     ],
     legend= typeset~([
          'r[1]/r/n',
          'c/r/n',
          '1 - c[1]/c',
          'r/r[1]',
          '1 - `Delta__%`(r)',
          '`Delta__%`(c) - 1'       
     ]),
     linestyle= ["solid"$4, "dash"$2], thickness= 2,
     title= "Efficiency metrics\n", titlefont= [HELVETICA,BOLD,16],
     labels= [`number of processors\n`, `% change`], labelfont= [TIMES,ITALIC,14],
     labeldirections= [HORIZONTAL,VERTICAL],
     caption= "\nr = real time,  c = CPU time,  n = # of processors",
     size= combinat:-fibonacci~([16,15]),
     gridlines
);

 

 

Download Threads_dim_ret.mw

In a recent conversation I explained why LSODE was giving wrong results (http://www.mapleprimes.com/questions/210948-Can-We-Trust-Maple#comment230167). After a lot of confusion and weird infinite loops for answers, it turned out that Newton-Raphson was not properly done.

Both LSODE and MEBDFI are currently incompletely implemented (only one iteration is done instead of Newton-Raphson iterated to convergence). Maplesoft should update the help files accordingly.

The post below explains how better results are obtained with method = mgear. To run the command mgear you will need Maple 6 or earlier versions. For lsode, any current version is fine.  Unfortunately Maple deprecated an algorithm that worked fine. From Maple 8, the algorithm moved to Rosenbrock methods for stiff equations. This is still not ideal.

If Maple had a working algorithm, I am hoping that Maplesoft folks would consider bringing it back in future versions. (At least with the same functionality as in Maple 6).

PLEASE NOTE, the issue is not with solving this example (Very simple). This example is chosen to show how a popular algorithm in the literature is wrongly implemented.

 

Here Maple's lsode is forced to take only one step and use the first-order backward difference formula to integrate from 0 to 1.  LSODE mimics backward Euler using the options given below. The post shows that LSODE does not do Newton-Raphson and performs only one iteration for nonlinear equations.

restart;

Digits:=15;

Digits := 15

(1)

eq:=diff(y(t),t)=-y(t);

eq := diff(y(t), t) = -y(t)

(2)

C:=array([0$22]);

C := Vector[row](22, {(1) = 0, (2) = 0, (3) = 0, (4) = 0, (5) = 0, (6) = 0, (7) = 0, (8) = 0, (9) = 0, (10) = 0, (11) = 0, (12) = 0, (13) = 0, (14) = 0, (15) = 0, (16) = 0, (17) = 0, (18) = 0, (19) = 0, (20) = 0, (21) = 0, (22) = 0})

(3)

C[9]:=1;

C[9] := 1

(4)

sol:=dsolve({eq,y(0)=1},type=numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1):

sol(0.1);

[t = .1, y(t) = .909090909090834]

(5)

subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq);

0.1e2*y1-0.1e2 = -y1

(6)

fsolve(%,y1=0.5);

.909090909090909

(7)

While it gave the expected result for the linear problem, it gives wrong results for nonlinear problems.

sol1:=dsolve({eq,y(0)=1},type=numeric):

sol1(0.1);

[t = .1, y(t) = .904837355407810]

(8)

eq:=diff(y(t),t)=-y(t)^2*exp(-y(t))-10*y(t)*(1+0.01*exp(y(t)));

eq := diff(y(t), t) = -y(t)^2*exp(-y(t))-10*y(t)*(1+0.1e-1*exp(y(t)))

(9)

sol:=dsolve({eq,y(0)=1},type=numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1):

sol(0.1);

[t = .1, y(t) = .501579294869466]

(10)

subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq);

0.1e2*y1-0.1e2 = -y1^2*exp(-y1)-10*y1*(1+0.1e-1*exp(y1))

(11)

fsolve(%,y1=1);

.488691779256025

(12)

sol1:=dsolve({eq,y(0)=1},type=numeric):

The expected answer is correctly obtained with the default tolerances as

sol1(0.1);

[t = .1, y(t) = .349614721994122]

(13)

The results obtained are worse than a single Newton iteration using the Jacobian.

eq2:=(lhs-rhs)(subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq));

eq2 := 0.1e2*y1-0.1e2+y1^2*exp(-y1)+10*y1*(1+0.1e-1*exp(y1))

(14)

jac:=unapply(diff(eq2,y1),y1);

jac := proc (y1) options operator, arrow; 20.+2*y1*exp(-y1)-y1^2*exp(-y1)+.10*exp(y1)+.10*y1*exp(y1) end proc

(15)

f:=unapply(eq2,y1);

f := proc (y1) options operator, arrow; 0.1e2*y1-0.1e2+y1^2*exp(-y1)+10*y1*(1+0.1e-1*exp(y1)) end proc

(16)

y0:=1;

y0 := 1

(17)

dy:=-evalf(f(y0)/jac(y0));

dy := -.508796088545793

(18)

ynew:=y0+dy;

ynew := .491203911454207

(19)
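For comparison, here is a minimal sketch (my own, not part of the original worksheet) of the same backward-Euler step with the Newton iteration carried to convergence; it reproduces the fsolve value obtained above rather than the single-iteration value.

# One backward-Euler step from t = 0 to t = 0.1 for y' = f(y), Newton iterated to convergence
rhsF := y -> -y^2*exp(-y) - 10*y*(1 + 0.01*exp(y)):   # the ODE right-hand side
h := 0.1:  yprev := 1.0:  ycur := yprev:
for iter from 1 to 25 do
    G    := ycur - yprev - h*rhsF(ycur):     # residual of the implicit step
    dG   := 1 - h*evalf(D(rhsF)(ycur)):      # derivative of the residual with respect to ycur
    ycur := ycur - G/dG:                     # Newton update
end do:
ycur;   # approximately .488691779, matching the fsolve result above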

The following calls confirm that it is indeed evaluating the procedure only at 0 and 0.1, with backdiag giving slightly better results.

myfun:= proc(x,y) if not type(x,'numeric') or not type(evalf(y),numeric)then 'procname'(x,y);
    else lprint(`Request at x=`,x); -y^2*exp(-y)-10*y*(1+0.01*exp(y)); end if; end proc;

myfun := proc (x, y) if not (type(x, 'numeric') and type(evalf(y), numeric)) then ('procname')(x, y) else lprint(`Request at x=`, x); -y^2*exp(-y)-10*y*(1+0.1e-1*exp(y)) end if end proc

(20)

sol1:=dsolve({diff(y(x),x)=myfun(x,y(x)),y(0)=1},numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1,known={myfun}):

sol1(0.1);

`Request at x=`, 0.

`Request at x=`, 0.

`Request at x=`, .1

`Request at x=`, .1

[x = .1, y(x) = .501579304183583]

(21)

sol2:=dsolve({diff(y(x),x)=myfun(x,y(x)),y(0)=1},numeric,method=lsode[backdiag],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1,known={myfun}):

sol2(0.1);

`Request at x=`, 0.

`Request at x=`, 0.

`Request at x=`, .1

`Request at x=`, .1

[x = .1, y(x) = .497831388424072]

(22)

 

Download Lsodeanalysistrunc.mws

 

Next, see how dsolve with method = mgear works just fine in Maple 6 (it gives the expected answer up to 3 digits of accuracy). To run this code you will need Maple 6 or an earlier version. Maple 7 has this algorithm, but I don't know how to use it as it is hidden. I would like to get support from other members to get Maplesoft's attention to bring this algorithm back.

If M*dy/dt = f(y) is solved using the mgear algorithm (instead of dy/dt = f), then one can have a good DAE solver based on this (M being singular). 

 

restart;

myfun:= proc(x,y) if not type(x,'numeric') or not type(evalf(y),numeric)then 'procname'(x,y);
    else lprint(`Request at x=`,x); -y^2*exp(-y)-10*y*(1+0.01*exp(y)); end if; end proc;

myfun := proc (x, y) if not (type(x, 'numeric') and type(evalf(y), numeric)) then ('procname')(x, y) else lprint(`Request at x=`, x); -y^2*exp(-y)-10*y*(1+0.1e-1*exp(y)) end if end proc

(1)

sol2:=dsolve({diff(y(x),x)=myfun(x,y(x)),y(0)=1},{y(x)},numeric,method=mgear[mstepnum],stepsize=0.1,minstep=0.1,errorper=1):

sol2(0.1);

`Request at x=`, 0.

`Request at x=`, .1

`Request at x=`, .1

`Request at x=`, .1

[x = .1, y(x) = .4887165263]

(2)

 

 

Download Mgearworks.mws
