Maplesoft Blogger Profile: Robert Lopez

Maple Fellow

Dr. Robert J. Lopez, Emeritus Professor of Mathematics at the Rose-Hulman Institute of Technology in Terre Haute, Indiana, USA, is an award-winning educator in mathematics and the author of several books, including Advanced Engineering Mathematics (Addison-Wesley, 2001). For over two decades, Dr. Lopez has also been a visionary figure in the introduction of Maplesoft technology into undergraduate education. Dr. Lopez earned his Ph.D. in mathematics from Purdue University, his MS from the University of Missouri - Rolla, and his BA from Marist College. He has held academic appointments at Rose-Hulman (1985-2003), Memorial University of Newfoundland (1973-1985), and the University of Nebraska - Lincoln (1970-1973). His publication and research history includes manuscripts and papers in a variety of pure and applied mathematics topics. He has received numerous awards for outstanding scholarship and teaching.

Posts by Robert Lopez

While preparing for a recent webinar, I ran across something that didn't behave the same way in Maple 2017 as it did in previous releases. In particular, the over-dot notation for t-derivatives failed to display with the over-dot. It turns out that this is due to a change in typesetting behavior that was detailed on the What's New page for Maple 2017, a page I had looked at many times in the last few months but apparently hadn't fully comprehended. The details are below.

Prior to Maple 2017, under the aegis of extended typesetting, the following two lines of code would alert Maple that the over-dot notation for t-derivatives should be used in the output display.

However, this changed in Maple 2017. Extended typesetting is now the default, but these two lines of code are no longer sufficient to induce Maple to display the over-dot in output. Indeed, we would now have

as output. The change is documented in the following paragraph

lifted from the help page 

Thus, it now takes the additional command

to induce Maple to display the over-dot notation in output.

I must confess that, even though I pored over the "What's New" pages for Maple 2017, I completely missed the import of this change to typesetting. I stumbled over the issue while preparing for an upcoming webinar, and frantically sent out help calls to the developers back in the building. Fortunately, I was quickly set straight on the matter, but was disappointed in my own reading of all the implications of the typesetting changes in Maple 2017. So perhaps this note will alert other users to the changes, and to the help page wherein one finds those essential bits of information needed to complete the tasks we set for ourselves.

And one more thing - I was cautioned that the "= true" was essential. Without it, the command would act as a query, echoing the present state of the setting, and not making the desired change to the setting.
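The commands themselves appeared as images in the original post and did not survive the trip to this page. As best I can reconstruct them (an assumption on my part; consult the Typesetting help pages for the authoritative version), the pattern for a function y(t) is along these lines:

with(Typesetting):
Suppress(y(t)):                              # suppress the explicit (t) dependence in output
Settings(typesetdot = true):                 # the additional call required in Maple 2017; the "= true" matters
diff(y(t), t);                               # should now display with the over-dot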
 

Recently, I came across an addendum to a problem that appears in many calculus texts, an addendum I had never explored. It intrigued me, and I hope it will capture your attention too.

The problem is that of girding the equator of the earth with a belt, then extending the radius of the circle so formed by one unit (here, taken as one foot). The question is: by how much does the circumference of the belt increase? This problem usually appears in the section of the calculus text dealing with linear approximation by the differential. It turns out that the circumference of the enlarged band is 2*Pi ft greater than that of the original band.

(An alternate version of this has the circumference of the band increased by one foot, with the radius then increasing by about 0.16 ft.)
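In symbols, both versions follow immediately from the circumference formula:

$$C = 2\pi r \;\Rightarrow\; \Delta C = 2\pi\,\Delta r, \qquad \Delta r = 1\ \text{ft} \Rightarrow \Delta C = 2\pi \approx 6.28\ \text{ft}, \qquad \Delta C = 1\ \text{ft} \Rightarrow \Delta r = \tfrac{1}{2\pi} \approx 0.16\ \text{ft}.$$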

The addendum to the problem then asked how high the enlarged band would be above the surface of the earth if it were lifted at one point and drawn as tight as possible around the equator. At first, I didn't know what to think. Would the height be some surprisingly large number? And how would one go about calculating this height?

It turns out that the enlarged and lifted band would be some 616.67 feet above the surface of the earth! This is significantly larger than the increase in the diameter of the original band. So, the result is a surprise, at least to me.
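Before opening the worksheet, here is a rough sketch of the geometry behind that number (my own outline, with an assumed Earth radius of about 3959 miles; the worksheet's figures may differ slightly). Near the lift point the band leaves the sphere along two tangent lines, and the slack of 2*Pi feet is exactly the excess of those tangent lengths over the arcs they replace:

R := 3959*5280.0:                                      # Earth's radius in feet (assumed value)
s := evalf(2*Pi):                                      # slack in the band, in feet
theta := fsolve(2*R*(tan(t) - t) = s, t = 0 .. 0.1):   # half-angle subtended by the lifted portion
h := R*(1/cos(theta) - 1);                             # height of the lift point: roughly 615 ft with this radius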

This is the kind of amusement that retirement affords. I heartily recommend both the amusement and the retirement. The supporting calculations can be found in the attached worksheet: Girding.mw

At 3:00 PM EST on Thursday, December 15, Maplesoft hosted a momentous hour in my life, my "retirement party" ending my career at Maplesoft. It was a day I had planned some four years ago when I dropped to a lighter schedule, and a day my wife has been awaiting for six years.

Jim Cooper, CEO at Maplesoft, presented a very brief sketch of some milestones in my life, including my high school graduation in 1958, BA in 1963, MS in 1966, PhD in 1970, jobs at the University of Nebraska-Lincoln, Memorial University of Newfoundland, and the Rose-Hulman Institute of Technology. There was a picture of me taken from my high school graduation yearbook. There was a cake. There were kind words about my contributions to Maple, including "Clickable Calculus," the term and its meaning.

I was handed the microphone - I knew what I wanted to say. My wife was present in the gathering. I pointed to her and said that all the congratulations should go to her who had waited so patiently for my retirement for six years. I thanked Maplesoft and all its employees for nearly 14 of the best years of my life, for I have thoroughly enjoyed my return to Canada and my work (more like play) at Maplesoft. 

It's been a great opportunity to be part of the Maple experience, and now it's time for new ones. There'll be more woodworking in my basement woodshop where I make mostly noise and sawdust, some extra travel, more exercise and fresh air, long-delayed household projects, and whatever else my mate of 49 years asks.

But the best part of all is that I'll still have a connection to Maplesoft - I'll continue doing two webinars a month, will maintain and update much of the content I've created for Maple while at Maplesoft, and contribute additional content of relevance to the Maple community. 

A population p(t) governed by the logistic equation with a constant rate of harvesting satisfies the initial value problem dp/dt = (2/5)*p*(1 - p/100) - h, p(0) = a. This model is typically analyzed by setting the derivative equal to zero and finding the two equilibrium solutions p = 50 ± 5*sqrt(100 - 10*h). A sketch of solutions p(t) for different values of a suggests that the larger equilibrium is stable; the smaller, unstable.

 

When a is less than the unstable equilibrium, p(t) becomes zero at a time t[e], and the population becomes extinct. If p(t) is not interpreted as a population, the solution exists beyond t[e], and actually has a vertical asymptote separating the two branches of its graph.

 

In the worksheet "Logistic Model with Harvesting", two questions are investigated, namely,

 

  1. How does the location of this vertical asymptote depend on a and h?
  2. How does the extinction time t[e], the time at which p(t) = 0, depend on a and h?

To answer the second question, an explicit solution p = p(a, h, t), readily provided by Maple, is set equal to zero and solved for t[e] = t[e](a, h). It turns out to be difficult both to graph the surface t[e](a, h) and to obtain a contour map of the level sets of this function. Instead, we solve for a = a(t[e], h) and obtain a graph of a(h) with t[e] as a slider-controlled parameter.
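A minimal sketch of the computation just described (my own, not the worksheet's code):

ode := diff(p(t), t) = (2/5)*p(t)*(1 - p(t)/100) - h:
sol := dsolve({ode, p(0) = a}, p(t)):        # the explicit solution p = p(a, h, t)
te := solve(rhs(sol) = 0, t);                # extinction time t[e](a, h); may need simplify or assumptions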

 

To answer the first question, the explicit solution, which has the form alpha*tan(phi(a, h, t))*beta(h) + 50, exhibits its vertical asymptote when phi(a, h, t) = -Pi/2. Solving this equation for t[a] = t[a](a, h) gives the time at which the vertical asymptote is located, a function that is as difficult to graph as t[e]. Again the remedy is to solve for, and graph, a = a(h), with t[a] as a slider-controlled parameter.

 

Download the worksheet: Logistic_with_Harvesting.mw

The Saturday edition of our local newspaper (Waterloo Region Record) carries, as part of The PUZZLE Corner column, a weekly puzzle "STICKELERS" by Terry Stickels. Back on December 13, 2014, the puzzle was:

What number comes next?

1   4   18   96   600   4320   ?

The solution given was the number 35280, obtained by setting k = 7 in the general term k⋅k!.

On September 5, 2015, the column contained the puzzle:

What number comes next?

2  3  3  5  10  13  39  43  172  177  ?

The proposed solution was the number 885, obtained as a[10] from the recursion that alternately adds and then multiplies by successive integers, a[2k-1] = a[2k-2] + k and a[2k] = k*a[2k-1] for k = 1, 2, 3, ..., where a[0] = 2.

As a youngster, one of my uncles delighted in teasing me with a similar question for the sequence 36, 9, 50, 55, 62, 71, 79, 18, 20. Ignoring the fact that there is a missing entry between 9 and 50, the next member of the sequence is "Bay Parkway," which is what 22nd Avenue is actually called in the Brooklyn neighborhood of my youth. These are subway stops on what was then called the West End line of the subway that went out to Stillwell Avenue in Coney Island.

Armed with the skepticism inspired by this provincial chestnut, I looked at both of these puzzles with the attitude that the "next number" could be anything I chose it to be. After all, a sequence is a mapping from the (nonnegative) integers to the reals, and unless the mapping is completely specified, the function values are not well defined.

Indeed, for the first puzzle, the polynomial f(x) interpolating the points


(1, 1), (2, 4), (3, 18), (4, 96), (5, 600), (6, 4320)

reproduces the first six members of the given sequence, and gives 18593 (not 35280) for f(7). In other words, the pattern k⋅k! is not a unique representation of the sequence, given just the first six members. The worksheet NextNumber derives the interpolating polynomial f and establishes that f(n) is an integer for every nonzero integer n.

Likewise, for the second puzzle, the polynomial g(x) interpolating the points

(1, 2), (2, 3), (3, 3), (4, 5), (5, 10), (6, 13), (7, 39), (8, 43), (9, 172), (10, 177)

reproduces the first ten members of the given sequence, and gives -7331 (not 885) for g(11). Once again, the pattern proposed as the "solution" is not unique; the worksheet NextNumber contains both g(x) and a proof that g(n) is an integer for every integer n.
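Both extrapolated values are easy to confirm with the top-level interp command (a quick independent check; the attached worksheet remains the full treatment):

f := interp([1, 2, 3, 4, 5, 6], [1, 4, 18, 96, 600, 4320], x):
eval(f, x = 7);                                        # 18593, not 35280
g := interp([seq(k, k = 1 .. 10)], [2, 3, 3, 5, 10, 13, 39, 43, 172, 177], x):
eval(g, x = 11);                                       # -7331, not 885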

The upshot of these observations is that without some guarantee of uniqueness, questions like "what is the next number" are meaningless. It would be far better to pose such challenges with the words "Find a pattern for the given members of the following sequence" and warn that the function capturing that pattern might not be unique.

I leave it to the interested reader to prove or disprove the following conjecture: Interpolate the first n terms of either sequence. The interpolating polynomial p will reproduce these n terms, but for k>n, p(k) will differ from the corresponding member of the sequence determined by the stated patterns. (Results of limited numerical experiments are consistent with the truth of this conjecture.)

Attached: NextNumber.mw

In a recent blog post, I found a single rotation that was equivalent to a sequence of Givens rotations, the underlying message being that teaching, learning, and doing mathematics is more effective and efficient when implemented with a tool like Maple. This post has the same message, but the medium is now the Householder reflection.

Given the vector x = <5, -2, 1>, the Householder matrix H = I - 2uu^T reflects x to y = Hx, where I is the appropriate identity matrix, u = (x - y)/||x - y|| is a unit normal for the plane (or hyperplane) across which x is reflected, and y necessarily has the same norm as x. The matrix H is orthogonal, but its determinant is -1, making it a reflection instead of a rotation.
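For instance (a sketch of my own, not the attached worksheet's code), the Householder matrix reflecting x = <5, -2, 1> onto the x-axis can be assembled directly from these formulas:

with(LinearAlgebra):
x := <5, -2, 1>:
y := Norm(x, 2)*<1, 0, 0>:                   # a vector of the same length along the x-axis
u := Normalize(x - y, 2):                    # unit normal of the reflecting plane
H := IdentityMatrix(3) - 2*u . Transpose(u):
simplify(H . x);                             # should return <sqrt(30), 0, 0>, i.e., y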

Starting with x and u, H can be constructed and the reflection y calculated. Starting with x and y, u and H can be determined. But what does any of this look like? Besides, when the Householder matrix is introduced as a tool for upper triangularizing a matrix, or for putting it into upper Hessenberg form, a recipe such as the one stated in Table 1 is the starting point.

In other words, the recipe in Table 1 reflects x to a vector y in which all entries below the kth are zero. Again, can any of this be visualized and rendered more concrete? (The chair who hired me into my first job averred that there are students who can learn from the general to the particular. Maybe some of my classmates in graduate school could, but in 40 years of teaching, I've never met one such student. Could that be because all things are known through the eyes of the beholder?)

In the attached worksheet, Householder matrices that reflect x = <5, -2, 1> to vectors y along the coordinate axes are constructed. These vectors and the reflecting planes are drawn, along with the appropriate normals u. In addition, the recipe in Table 1 is implemented, and the recipe itself examined. If you look at the worksheet, I believe you will agree that without Maple, the explorations shown would have been exceedingly difficult to carry out by hand.

Attached: RHR.mw

This post describes how Maple was used to investigate the Givens rotation matrix, and to answer a simple question about its behavior. The "Givens" part is the medium, but the message is that it really is better to teach, learn, and do mathematics with a tool like Maple.

The question: If Givens rotations are used to take the vector Y = <5, -2, 1> to a vector Y2 lying along the x-axis, about what axis and through what angle will a single rotation accomplish the same thing?

The Givens matrix G21 takes Y to the vector Y1, and the Givens matrix G31 takes Y1 to Y2. Graphing the vectors Y, Y1, and Y2 reveals that Y1 lies in the xz-plane and that Y2 is parallel to the x-axis. (These geometrical observations should have been obvious, but the typical usage of the Givens technique to "zero out" elements in a vector or matrix obscured this, at least for me.)
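For readers who want to reproduce the setup, here is a sketch (my own, not the worksheet's code) of how the two Givens matrices can be built and composed:

with(LinearAlgebra):
Y := <5, -2, 1>:
r1 := sqrt(Y[1]^2 + Y[2]^2):
G21 := Matrix([[Y[1]/r1, Y[2]/r1, 0], [-Y[2]/r1, Y[1]/r1, 0], [0, 0, 1]]):      # zeros the second entry
Y1 := simplify(G21 . Y):                                                        # lies in the xz-plane
r2 := sqrt(Y1[1]^2 + Y1[3]^2):
G31 := Matrix([[Y1[1]/r2, 0, Y1[3]/r2], [0, 1, 0], [-Y1[3]/r2, 0, Y1[1]/r2]]):  # zeros the third entry
Y2 := simplify(G31 . Y1):                                                       # parallel to the x-axis
G := simplify(G31 . G21):                                                       # the single equivalent rotation
NullSpace(simplify(G - IdentityMatrix(3)));                                     # spans the axis (eigenvalue 1)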

The matrix G = G31·G21 rotates Y directly to Y2; is the axis of rotation the vector W = Y × Y2, and is the angle of rotation the angle between Y and Y2? To test these hypotheses, I used the RotationMatrix command in the Student[LinearAlgebra] package to build the corresponding rotation matrix R. But R did not agree with G. I had either the axis or the angle (actually both) incorrect.

The individual Givens rotation matrices are orthogonal, so their product G is also orthogonal. It will have 1 as its single real eigenvalue, and the corresponding eigenvector V is actually the direction of the axis of the rotation. The vector W is a multiple of <0, 1, 2>, but the eigenvector V = <a, b, 1> (with a and b as computed in the worksheet) is not. Clearly, W and V are not parallel.

The rotation matrix that rotates about the axis V through the angle between Y and Y2 isn't the matrix G either. The correct angle of rotation about V turns out to be the angle between the projections of Y and Y2 onto the plane orthogonal to V. That came as a great surprise, one that required a significant adjustment of my intuition about spatial rotations. So again, the message is that teaching, learning, and doing mathematics is so much more effective and efficient when done with a tool like Maple.

A discussion of the Givens rotation, and a summary of the actual computations described above are available in the attached worksheet, What Gives with Givens.mw.

I have two linear algebra texts [1, 2]  with examples of the process of constructing the transition matrix Q that brings a matrix A to its Jordan form J. In each, the authors make what seems to be arbitrary selections of basis vectors via processes that do not seem algorithmic. So recently, while looking at some other calculations in linear algebra, I decided to revisit these calculations in as orderly a way as possible.

 

First, I needed a matrix A with a prescribed Jordan form. Actually, I started with a Jordan form, and then constructed A via a similarity transform on J. To avoid introducing fractions, I sought transition matrices P with determinant 1.

 

Let's begin with J, obtained with Maple's JordanBlockMatrix command.

 

Tools ≻ Load Package: Linear Algebra

Loading LinearAlgebra

J := JordanBlockMatrix([[2, 3], [2, 2], [2, 1]])

Matrix([[2, 1, 0, 0, 0, 0], [0, 2, 1, 0, 0, 0], [0, 0, 2, 0, 0, 0], [0, 0, 0, 2, 1, 0], [0, 0, 0, 0, 2, 0], [0, 0, 0, 0, 0, 2]])

 


The eigenvalue lambda = 2 has algebraic multiplicity 6. There are sub-blocks of size 3×3, 2×2, and 1×1. Consequently, there will be three eigenvectors, supporting chains of generalized eigenvectors having total lengths 3, 2, and 1. Before delving further into structural theory, we next find a transition matrix P with which to fabricate A = P·J·P^(-1).

 

The following code generates random 6×6 matrices of determinant 1, and with integer entries in the interval [-2, 2]. For each, the matrix A = P·J·P^(-1) is computed. From these candidates, one A is then chosen.

 

L := NULL:
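Only that initialization survived the trip to this page; the search loop itself might have read something like the following (my reconstruction, using RandomMatrix's generator option, not the original code):

for i to 50 do
    P := LinearAlgebra:-RandomMatrix(6, 6, generator = -2 .. 2):
    if LinearAlgebra:-Determinant(P) = 1 then
        L := L, P . J . P^(-1):              # a candidate A = P.J.P^(-1)
    end if:
end do:
nops([L]);                                   # how many candidates this trial produced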

 

 

After several such trials, the matrix A was chosen as

 

A := Matrix([[-8, -8, 4, -8, -1, 5], [-1, 3, 1, -2, 2, -1], [-13, -9, 8, -11, 1, 5], [3, 3, -1, 4, 1, -2], [7, 5, -3, 6, 2, -3], [-6, -2, 3, -7, 2, 3]])

 

 

for which the characteristic and minimal polynomials are

 

factor(CharacteristicPolynomial(A, lambda))

(lambda-2)^6

factor(MinimalPolynomial(A, lambda))

(lambda-2)^3

 

 

So, if we had started with just A, we'd now know that the algebraic multiplicity of its one eigenvalue lambda = 2 is 6, and there is at least one 3×3 sub-block in the Jordan form. We would not know if the other sub-blocks were all 1×1, or a 1×1 and a 2×2, or another 3×3. Here is where some additional theory must be invoked.
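One quick way to see that extra theory in action (my own check, not part of the original worksheet): the nullities of successive powers of A - 2I count the Jordan blocks.

seq(6 - LinearAlgebra:-Rank((A - 2*LinearAlgebra:-IdentityMatrix(6))^k), k = 1 .. 3);
                                             # returns 3, 5, 6
# The differences 3, 2, 1 give the number of blocks of size at least 1, 2, and 3,
# so there are exactly three blocks, of sizes 3, 2, and 1.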


The null spaces M[k] of the matrices (A - 2I)^k are nested: M[1] ⊂ M[2] ⊂ M[3], as depicted in Figure 1, where the vectors a[k], k = 1, ..., 6, are basis vectors.

 

Figure 1   The nesting of the null spaces M[k] 

 

 

The vectors a[1], a[2], a[3] are eigenvectors, and form a basis for the eigenspace M[1]. The vectors a[k], k = 1, ..., 5, form a basis for the subspace M[2], and the vectors a[k], k = 1, ..., 6, form a basis for the space M[3], but the vectors a[4], a[5], a[6] are not yet the generalized eigenvectors. The vector a[6] must be replaced with a vector b[6] that lies in M[3] but is not in M[2]. Once such a vector is found, then a[4] can be replaced with the generalized eigenvector b[4] ≡ (A - 2I)·b[6], and a[1] can be replaced with b[1] ≡ (A - 2I)·b[4] = (A - 2I)^2·b[6]. The vectors b[1], b[4], b[6] are then said to form a chain, with b[1] being the eigenvector, and b[4] and b[6] being the generalized eigenvectors.

 

If we could carry out these steps, we'd be in the state depicted in Figure 2.

 

Figure 2   The null spaces M[k] with the longest chain determined

 

 

Next, basis vector a[5] is to be replaced with b[5], a vector in M[2] but not in M[1], and linearly independent of b[4]. If such a b[5] is found, then a[2] is replaced with the generalized eigenvector b[2] ≡ (A - 2I)·b[5]. The vectors b[2] and b[5] would form a second chain, with b[2] as the eigenvector, and b[5] as the generalized eigenvector.


Define the matrix C = A-2*I by the Maple calculation

 

C := A - 2      # for a square Matrix, Maple interprets A - 2 as A - 2*IdentityMatrix(6)

Matrix([[-10, -8, 4, -8, -1, 5], [-1, 1, 1, -2, 2, -1], [-13, -9, 6, -11, 1, 5], [3, 3, -1, 2, 1, -2], [7, 5, -3, 6, 0, -3], [-6, -2, 3, -7, 2, 1]])

 


and note

 

N := convert(NullSpace(C), list)

[<1/2, 1/2, 1, 0, 0, 1>, <-1/2, -1/2, -2, 0, 1, 0>, <-2, 1, -1, 1, 0, 0>]

NN := convert(LinearAlgebra:-NullSpace(C^2), list)

[<2/5, 0, 0, 0, 0, 1>, <0, 0, 0, 0, 1, 0>, <-1, 0, 0, 1, 0, 0>, <2/5, 0, 1, 0, 0, 0>, <-3/5, 1, 0, 0, 0, 0>]

 


The dimension of M[1] is 3, and of M[2], 5. However, the basis vectors Maple has chosen for M[2] do not include the exact basis vectors chosen for M[1].

 

We now come to the crucial step, finding b[6], a vector in M[3] that is not in M[2] (and consequently, not in M[1] either). The examples in [1, 2] are simple enough that the authors can "guess" at the vector to be taken as b[6]. What we will do instead is take an arbitrary vector in M[3], project it onto the 5-dimensional subspace M[2], and take the component orthogonal to M[2] as b[6].


A general vector in M[3] is

 

Z := `<,>`(u || (1 .. 6))

Vector[column]([[u1], [u2], [u3], [u4], [u5], [u6]])

 


A matrix that projects onto M[2] is

 

P := ProjectionMatrix(NN)

Matrix([[42/67, -15/67, 10/67, -25/67, 0, 10/67], [-15/67, 58/67, 6/67, -15/67, 0, 6/67], [10/67, 6/67, 63/67, 10/67, 0, -4/67], [-25/67, -15/67, 10/67, 42/67, 0, 10/67], [0, 0, 0, 0, 1, 0], [10/67, 6/67, -4/67, 10/67, 0, 63/67]])

 


The component of Z orthogonal to M[2] is then Z - P·Z. This vector can be simplified by choosing the parameters in Z appropriately. The result is taken as b[6].

 

b[6] := (67/5)*eval(Z - P . Z, Equate(Z, UnitVector(1, 6)))

Vector[column]([[5], [3], [-2], [5], [0], [-2]])


 


The other two members of this chain are then

 

b[4] := C . b[6]

Vector[column]([[-132], [-12], [-169], [40], [92], [-79]])

b[1] := C . b[4]

Vector[column]([[-67], [134], [67], [67], [0], [134]])

 


A general vector in M[2] is a linear combination of the five vectors that span the null space of C^2, namely, the vectors in the list NN. We obtain this vector as

 

ZZ := add((u || k)*NN[k], k = 1 .. 5)

Vector[column]([[(2/5)*u1-u3+(2/5)*u4-(3/5)*u5], [u5], [u4], [u3], [u2], [u1]])

 


A vector in M[2] that is not in M[1] is the component of ZZ orthogonal to the space spanned by the eigenvectors spanning M[1] and the vector b[4]. The matrix projecting onto that space is

 

PP := LinearAlgebra:-ProjectionMatrix(convert(`union`(LinearAlgebra:-NullSpace(C), {b[4]}), list))

Matrix([[69/112, -33/112, 19/112, -17/56, 0, 19/112], [-33/112, 45/112, 25/112, 13/56, 0, 25/112], [19/112, 25/112, 101/112, 1/56, 0, -11/112], [-17/56, 13/56, 1/56, 5/28, 0, 1/56], [0, 0, 0, 0, 1, 0], [19/112, 25/112, -11/112, 1/56, 0, 101/112]])

 


The component of ZZ orthogonal to that space, taken as b[5], is then

 

b[5] := 560*eval(ZZ - PP . ZZ, Equate(`<,>`(u || (1 .. 5)), LinearAlgebra:-UnitVector(4, 5)))

Vector[column]([[-9], [-59], [17], [58], [0], [17]])

 


Replace the vector a[2] with b[2], obtained as

 

b[2] := C . b[5]

Vector[column]([[251], [-166], [197], [-139], [-112], [-166]])

 

 

The columns of the transition matrix Q can be taken as the vectors b[1], b[4], b[6], b[2], b[5], and the remaining eigenvector, here taken as N[3]. Hence, Q is the matrix

 

Q := `<|>`(b[1], b[4], b[6], b[2], b[5], N[3])

Matrix([[-67, -132, 5, 251, -9, -2], [134, -12, 3, -166, -59, 1], [67, -169, -2, 197, 17, -1], [67, 40, 5, -139, 58, 1], [0, 92, 0, -112, 0, 0], [134, -79, -2, -166, 17, 0]])

 


Proof that this matrix Q indeed sends A to its Jordan form consists in the calculation

 

Q^(-1) . A . Q = Matrix([[2, 1, 0, 0, 0, 0], [0, 2, 1, 0, 0, 0], [0, 0, 2, 0, 0, 0], [0, 0, 0, 2, 1, 0], [0, 0, 0, 0, 2, 0], [0, 0, 0, 0, 0, 2]])

 


The bases for M[k], k = 1, 2, 3, are not unique. The columns of the matrix Q provide one set of basis vectors, but the columns of the transition matrix generated by Maple, shown below, provide another.

 

JordanForm(A, output = 'Q')

Matrix([[-5, -43/5, -9/5, 7/5, -14/5, -3/5], [10, -4/5, -6/25, 1/5, -6/25, -3/25], [5, -52/5, -78/25, 13/5, -78/25, -39/25], [5, 13/5, 38/25, -2/5, 38/25, 4/25], [0, 6, 42/25, -1, 42/25, 21/25], [10, -29/5, -11/25, 1/5, -11/25, 7/25]])

 


I've therefore added to my to-do list the investigation into Maple's algorithm for determining an appropriate set of basis vectors that will support the Jordan form of a matrix.

 

References

 


[1] Linear Algebra and Matrix Theory, Evar Nering, John Wiley and Sons, Inc., 1963

[2] Matrix Methods: An Introduction, Richard Bronson, Academic Press, 1969

 



Download JordanForm_blog.mw

A wealth of knowledge is on display in MaplePrimes as our contributors share their expertise and step up to answer others’ queries. This post picks out one such response and further elucidates the answers to the posted question. I hope these explanations appeal to those of our readers who might not be familiar with the techniques embedded in the original responses.

Before I begin, a quick note that the content below was primarily created by one of our summer interns, Pia, with guidance and advice from me.

The Question: Rearranging the expression of equations

SY G wanted to be able to re-write an equation in terms of different variables.  SY G presented this example: 

I have the following two equations:

x1 = a-y1-d*y2;
x2 = a-y2-d*y1;

I wish to express the first equation in terms of y1 and x2, so that

x1 = c - b*y1+d*x2;

where c=a-a*d and b=1-d^2. How can I get Maple to rearrange the original equation x1 in terms of y1, x2, c and b?

This question was answered by nm who provided code with a systematic approach:

restart;
eq1:=x1=a-y1-d*y2:
eq2:=x2=a-y2-d*y1:
z:=expand(subs(y2=solve(eq2,y2),eq1)):
z:=algsubs((a-a*d)=c,z):
algsubs((1-d^2)=b,z);

On the other hand, Carl Love answered this enquiry with a more direct and simpler piece of code:

simplify(x1=a-y1-d*y2, {a-y2-d*y1= x2, 1-d^2= b, a-a*d= c});

Let’s talk more about the expand, algsubs, subs, and simplify commands

First let’s take a look at the method nm used to solve the problem using the commands expand, subs, solve and algsubs.

The expand command, expand(expr, expr1, expr2, ..., exprn), distributes products over sums. This is done for all polynomials. For quotients of polynomials, only sums in the numerator are expanded; products and powers are left alone.

The solve command, solve(equations, variables), solves one or more equations or inequalities for their unknowns.

The subs command, subs(x=a,expr), substitutes a for x in the expression expr.

The algsubs function, algsubs(a = b, f), performs an algebraic substitution, replacing occurrences of a with b in the expression f. It is a generalization of the subs command, which only handles syntactic substitution.

Let’s tackle the Maple code written by nm step by step:

1) restart;
The restart command is used to clear Maple’s internal memory

2)  eq1:=x1=a-y1-d*y2:
      eq2:=x2=a-y2-d*y1:
The names eq1 and eq2 were assigned to the equations SY G provided.

3) z:=expand(subs(y2=solve(eq2,y2),eq1)):
A new variable, z, was created, which will end up being x1 written in the terms SY G wanted.

  • solve(eq2,y2)
    • the solve command was used to solve the expression eq2 for the variable y2.

  • subs(y2=solve(eq2,y2),eq1)
    • The subs command was used to replace y2 in the expression eq1 with the value found by the solve step.

  • expand(subs(y2=solve(eq2,y2),eq1))
    • The expand command was used to distribute products over sums. Note: this step served to ensure that the final output looked exactly how SY G wanted.

4) z:=algsubs((a-a*d)=c,z):
First, nm equated a-a*d to c, so later the algsubs command could be applied to substitute the new variable c into the expression z.

5) algsubs((1-d^2)=b,z);
Again, nm equated 1-d^2 to b, so later the algsubs command could be applied to substitute the new variable b into the expression z.
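Putting the five steps together, the by-hand elimination that this code automates is simply:

$$y_2 = a - x_2 - d\,y_1 \quad\Longrightarrow\quad x_1 = a - y_1 - d\,(a - x_2 - d\,y_1) = (a - a d) - (1 - d^2)\,y_1 + d\,x_2 = c - b\,y_1 + d\,x_2.$$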

An alternate approach

Now let us check out Carl Love’s approach. Carl Love uses the simplify command in conjunction with side relations.

The simplify command has many calling sequences, and one of them, simplify(expr, eqns), is known as simplify/siderels. A simplification of expr with respect to the side relations eqns is performed. The result is an expression which is mathematically equivalent to expr but which is in normal form with respect to the specified side relations. Basically, you are telling Maple to simplify the expression (expr) using the relations (eqns) you gave it.

 

I hope that you find this useful. If there is a particular question on MaplePrimes that you would like further explained, please let me know. 

A wealth of knowledge is on display in MaplePrimes as our contributors share their expertise and step up to answer others’ queries. This post picks out one such response and further elucidates the answers to the posted question. I hope these explanations appeal to those of our readers who might not be familiar with the techniques embedded in the original responses.

Before I begin, a quick note that the content below was primarily created by one of our summer interns, Pia, with guidance and advice from me.

The Question: Source Code of Math Apps

Eberch, a new Maple user, was interested in learning how to build his own Math Apps by looking at the source code of some of the already existing Math Apps that Maple offers.

Acer helpfully suggested that he look into the Startup Code of a Math App, in order to see definitions of procedures, modules, etc. He also recommended Eberch take a look at the “action code” that most of the Math Apps have, which consists of function calls to procedures or modules defined in the Startup Code. The Startup Code can be accessed from the Edit menu. The function calls can be seen by right-clicking on the relevant component and selecting Edit Click Action.

Acer’s answer is correct and helpful. But for those just learning Maple, I wanted to provide some additional explanation.

Let’s talk more about building your own Math Apps

Building your own Math Apps can seem like something that involves complicated code and rare commands, but Daniel Skoog demonstrates an easy and straightforward method for doing this in his latest webinar. He provides a clear definition of a Math App, a step-by-step approach to creating a Math App using the Explore and Quiz commands, and ways to share your applications with the Maple community. It is highly recommended that you watch the entire webinar if you would like to learn more about the core concepts of working with Maple, but you can find the Math App information starting at the 33:00 mark.

I hope that you find this useful. If there is a particular question on MaplePrimes that you would like further explained, please let me know. 

A wealth of knowledge is on display in MaplePrimes as our contributors share their expertise and step up to answer others’ queries. This post picks out one such response and further elucidates the answers to the posted question. I hope these explanations appeal to those of our readers who might not be familiar with the techniques embedded in the original responses.

Before I begin, a quick note that the content below was primarily created by one of our summer interns, Pia, with guidance and advice from me.

MaplePrimes member Thomas Dean wanted 1/2*x^(1/2) + 1/13*x^(1/3) + 1/26*x^(45/37)  to become  0.5*x^0.500000 + 0.07692307692*x^0.333333 + 0.03846153846*x^1.216216216  using the evalf command.

Here you can see the piece of code that Thomas Dean wrote in Maple:

eq:=1/2*x^(1/2) + 1/13*x^(1/3) + 1/26*x^(45/37);
evalf(eq);

Carl Love replied simply and effectively with a piece of code, using the evalindets command instead:

evalindets(eq, fraction, evalf);

As always, Love provided an accurate response, and it is absolutely correct. But for those just learning Maple, I wanted to provide some additional explanation.

The evalindets command, evalindets( expr, atype, transformer, rest ), is a particular combination of calls to eval and indets that allows you to efficiently transform all subexpressions of a given type by some algorithm. It encapsulates a common "pattern" used in expression manipulation and transformation.

Each subexpression of type atype is transformed by the supplied transformer procedure. Then, each subexpression is replaced in the original expression, using eval, with the corresponding transformed expression.
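Applied to the expression in question, the call should therefore produce exactly the form Thomas Dean was after, since every rational constant, including the exponents, is of type fraction (digits shown at the default setting):

evalindets(1/2*x^(1/2) + 1/13*x^(1/3) + 1/26*x^(45/37), fraction, evalf);
# 0.5000000000*x^0.5000000000 + 0.07692307692*x^0.3333333333 + 0.03846153846*x^1.216216216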

 

Note: the parameter rest is an optional expression sequence of extra arguments to be passed to transformer. In this example it was not used.

I hope that you find this useful. If there is a particular question on MaplePrimes that you would like further explained, please let me know. 

A wealth of knowledge is on display in MaplePrimes as our contributors share their expertise and step up to answer others’ queries. This post picks out one such response and further elucidates the answers to the posted question. I hope these explanations appeal to those of our readers who might not be familiar with the techniques embedded in the original responses.

Before I begin, a quick note that the content below was primarily created by one of our summer interns, Pia, with guidance and advice from me.

The Question: why is 2*cos(x)^2-1 simpler than 1-2*sin(x)^2

The author, nm, asked why 2*cos(x)^2-1  was simpler than 1-2*sin(x)^2 according to Maple. nm wrote:

I looked at help trying to understand why Maple thinks 2*cos(x)^2-1 is simpler than 1-2*sin(x)^2 but did not see it. I was expecting to see cos(2*x) as a result.

Preben Alsholm answered nm’s question by recommending the use of the combine command to obtain the result he was expecting to see, as well as a further explanation on how the simplify command works. Alsholm wrote:

Use combine to obtain what you want:
combine(1-2*sin(x)^2);

simplify has a general preference for cos over sin. That doesn't mean however, that it turns sin into cos at all costs:

simplify(sin(x));
##Try also
simplify(1-2*sin(x)^2,size);

simplify doesn't necessarily get you the simplest result in the common sense of the word 'simplify'. Try as another example

expand((x+y)^3);
simplify(%);
factor(%);

As always, Alsholm provided an accurate, thoughtful response. But for those just learning Maple, I thought some additional explanation could be helpful.

Let’s talk more about the simplify command and combine function

The simplify command applies simplification rules to an expression. Its parameters can be any expression.

The combine function applies transformations which combine terms in sums, products, and powers into a single term. For many functions, the transformations applied by combine are the inverse of the transformations that are applied by expand. For example, consider the well-known identity:

sin(a + b) = sin(a) cos(b) + cos(a) sin(b)

The combine function applies the identity from right to left, whereas the expand function does the reverse.
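A quick illustration of this inverse pair, together with the transformation nm was originally after:

expand(sin(a + b));                          # sin(a)*cos(b) + cos(a)*sin(b)
combine(sin(a)*cos(b) + cos(a)*sin(b));      # sin(a + b)
combine(1 - 2*sin(x)^2);                     # cos(2*x)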

 

I hope that you find this useful. If there is a particular question on MaplePrimes that you would like further explained, please feel free to contact me.

 

You are teaching linear algebra, or perhaps a differential equations course that contains a unit on first-order linear systems. You need to get across the notion of the eigenpair of a matrix, that is, eigenvalues and eigenvectors: what they mean, how they are found, and for what they are useful.

Of course, Maple's Context Menu can, with a click or two of the mouse, return both eigenvalues and eigenvectors. But that does not satisfy the needs of the student: an answer has been given but nothing has been learned. So, of what use is Maple in this pedagogical task? How can Maple enhance the lessons devoted to finding and using eigenpairs of a matrix?

In this webinar I am going to demonstrate how Maple can be used to get across the concept of the eigenpair, to show its meaning, to relate this concept to the by-hand algorithms taught in textbooks.

Ah, but it's not enough just to do the calculations - they also have to be easy to implement so that the implementation does not cloud the pedagogic goal. So, an essential element of this webinar will be its Clickable format, free of the encumbrance of commands and their related syntax. I'll use a syntax-free paradigm to keep the technology as simple as possible while achieving the didactic goal.

Notes added on July 7, 2015:

Fourteen Clickable Calculus examples have been added to the Teaching Concepts with Maple area of the Maplesoft web site. Four are sequence and series explorations taken from algebra/precalculus, four are applications of differentiation, four are applications of integration, and two are problems from the lines-and-planes section of multivariate calculus. By my count, this means some 111 Clickable Calculus examples have now been posted to the section.

Thirteen Clickable Calculus examples have been added to the Teaching Concepts with Maple section of the Maplesoft web site. The additions include examples in algebra, differential and integral calculus, lines-and-planes in multivariate calculus, and linear algebra. By my count, this means some 97 Clickable Calculus examples are now available.

In the Algebra/Precalculus section, examples of an
