MaplePrimes Posts

MaplePrimes Posts are for sharing your experiences, techniques and opinions about Maple, MapleSim and related products, as well as general interests in math and computing.

Latest Post
  • Notation is one of the most important things to communicate with others in science. It is remarkable how many people use or do not use a computer algebra package just because of its notation. For those reasons, in the context of the Physics package, strong emphasis is put on using textbook notation as much as possible regarding input and output, including, for that purpose, as people here know, significant developments in Maple typesetting.

    Still, for historical reasons, when using the Physics package, the label used to refer to a coordinate system had to be a single capital letter, as in X, Y, ... It was not possible to use, e.g., X' or x.

    That has changed. Starting with the Maplesoft Physics Updates v.1308, any symbol can be used as a coordinate system label. The lines below demo this change.
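
    As a quick illustration of the kind of input that now works (a hedged sketch, not the contents of the worksheet attached below; the labels x and xi are arbitrary choices):

      with(Physics):
      # Before this update, only single capital letters such as X or Y could label a coordinate system.
      # With the Maplesoft Physics Updates v.1308 or newer, other symbols are accepted as labels:
      Coordinates(x);     # a lowercase label
      Coordinates(xi);    # a Greek-letter label
      # the new labels can then be used wherever a coordinate-system label was expected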

     

    Download new_coordinates_labels.mw

    Edgardo S. Cheb-Terrab
    Physics, Differential Equations and Mathematical Functions, Maplesoft

    As we head back to school, I want to take a moment to thank all the math teachers out there who take on the demanding yet overlooked task of educating our children, teenagers, and young people. 

    I'm where I am today because my calculus teacher, Prof. Srinivasan, was unwavering in her belief that my classmates and I could master any math topic, including calculus. Her conviction in me gave me the confidence to believe I could 'do' math. While Prof. Srinivasan made teaching look easy, I'm acutely aware that teaching math is no easy feat. Speaking with math educators regularly, I can appreciate how challenging teaching math is today compared to a decade ago. Not only do they have to teach the subject, but they must be able to teach it in person and online, to a group of students that may not be up to speed on the prerequisite material, and in an era where disruptive technologies vie for their students' attention. No wonder math educators are so anxious about returning to the classroom this fall!

    And while I wish I could abracadabra your worries away, what I can do is offer you the opportunity to use Maple Learn, a tool built to support the utopian vision of a world where all students love math. A world where math is for everyone, not just the gifted, and the purpose of math class is to explore and marvel at the wonders of the universe, not just get to the correct answer.

    Slightly more concretely, Maple Learn is a flexible interactive environment for exploring concepts, solving problems, and creating rich online math content. I've seen educators use Maple Learn to help their students in a variety of ways.

    I’ve talked to lots of instructors, in math, and in courses like economics and physics that use math, who have lots of ideas of how to engage their students and deepen their understanding through interactive online activities. What they don’t have are the tools, programming experience, deployment platform, or time to implement their vision. Fortunately, Maple Learn makes it incredibly easy to develop and share your own content, and all you need are your ideas and a web browser. But you don’t need to start from scratch. You can choose from an extensive, constantly growing repository of ready-made, easily customizable content covering a wide range of topics. I think you’ll be pleasantly surprised by how easy it is, but since we are well aware that instructors are extremely busy people, we also have content development services that can help you transform your static content into interactive lessons.

    If you haven't looked at Maple Learn, or it's been a while since you last saw it, you can visit Reinventing Math Education with Maple Learn for more information, including an upcoming webinar you might be interested in attending and a special offer on Maple Learn for Maple campuses. And if you ever want to discuss ways Maple Learn might help you, or have ideas on how to make it better, please reach out. I'm always up for good conversation. 

    And for all the dedicated teachers who are taking a deep breath and heading back into the classroom this fall, thank you.

    Welcome back to another document walkthrough! Today, I thought we'd take a look at a non-math example, like chemistry. The document we'll be using is "Finding Average Atomic Mass". Before we get too into it, I'd like to define some terms. Average atomic mass is defined as the average of the masses of an element's isotopes, weighted by their natural abundances. An elemental isotope can be thought of as a "version" of the element: the same element at its core, but with a different mass and slightly different physical properties. This is because isotopes have the same number of protons but a different number of neutrons.

    This document is, of course, about finding that average atomic mass. See the picture below for our problem, which states the element, the isotopes, and their separate masses and relative abundance.

    The average atomic mass can then be calculated using sum notation. To calculate, take the weighted mean of the isotopes’ atomic masses, as shown in the overview section of the Average Atomic Mass document.
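
    For instance, the computation amounts to a few lines in Maple; the boron-10/boron-11 numbers below are approximate textbook values used purely as an illustration, not the data from the document's problem:

      # isotope masses (in u) and fractional abundances (the abundances must sum to 1)
      masses := [10.013, 11.009]:
      abundances := [0.199, 0.801]:
      # weighted mean: sum over isotopes of abundance * mass
      avg_mass := add(abundances[i]*masses[i], i = 1 .. 2);   # approximately 10.81 u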

    Once you’ve tried solving the problem yourself, take a look at the answer in group four, or one of the practice problems in group five. We have three examples on this topic (Average Atomic Mass Example 1, Average Atomic Mass Example 2, and Average Atomic Mass Example 3), so take a look at them all!

    I hope you enjoyed learning just a bit of chemistry today, and let us know in the comments if there are any documents you’d specifically like to see explained, or any topics you’d like us to talk about!

     

    Welcome back to another post on the Maple Learn Calculus collection! Previously on this series we looked at the Limit subcollection, and today we are going to look at the Derivative subcollection in the Maple Learn Document Gallery.

    There are many different types of documents in this subcollection, so let’s take a look at one of them. We’ll start with the very first question people ask when learning about derivatives: What is a derivative?

     

    This document starts us off with an example of f(x) := x^2. The example provides the background information for the rest of the document, and a visualization with a slider.

    Then, we give both the geometric and the algebraic definition of a derivative. This allows us to understand the concept in two different ways, which is very useful for students as they explore other topics within calculus.
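
    As a quick illustration (a sketch in Maple, not code taken from the document), the algebraic definition can be checked directly:

      f := x -> x^2:
      # difference quotient, then the limit as h -> 0, i.e. the algebraic definition of the derivative
      limit((f(x + h) - f(x))/h, h = 0);   # returns 2*x
      diff(f(x), x);                       # agrees with Maple's built-in derivative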

    Finally, the document suggests two more documents for future learning: Derivatives: Notation, for more information on the notation used in derivatives, and the Formal Definition of a Derivative document, for more information on how derivatives are formally defined and derived. Make sure to check them out too!

    Now, that’s just the start. We’ve got practice problems, definitions and visualizations of rules, information on points without derivatives, and much more. They’re useful for both new learning and as a refresher, so take a look!

    We can’t wait to see you next time, when we continue with the rest of the Calculus collection. Once the Calculus collection showcase posts are done, let us know if there’s another collection you’d like to see showcased!

     

     

    Combining a Prismatic Joint component with an Elasto Gap component does not always give correct results. When combined incorrectly (the red mass below), a force is generated even though the distance between the flanges is greater than the relaxed spring length. A force is exerted on the mass (instead of no force, as stated here), which leads to a smaller deflection (9.81 m is expected).

    This happened to me although I connected flange_a to flange_a and flange_b to flange_b in configuration A below. Configuration B works with inverted flanges, and configuration C works with an inverted unit vector of the prismatic joint. By reversing the direction of gravity, configuration A becomes a valid configuration and configurations B and C become invalid configurations.

    It seems that in invalid configurations the value of the flange distance s_rel can have a large magnitude but a negative sign, which generates significant forces even though the flanges are not in contact.

    So much for the observations.

     

    Would a change of the contact condition prevent invalid configurations, or do we have to live with it for principal reasons that I am overlooking?
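
    For reference, here is a hedged sketch, written as a Maple piecewise expression with made-up parameter names, of the ElastoGap-style force law as I understand it from the underlying Modelica component; it is only meant to illustrate why a negative s_rel of large magnitude produces a force:

      # c: contact stiffness, d: damping, s_rel0: relaxed spring length, v_rel: relative velocity
      F_contact := (s_rel, v_rel) -> piecewise(
          s_rel > s_rel0, 0,                # flanges separated: no contact force
          c*(s_rel - s_rel0) + d*v_rel);    # flanges in contact: spring-damper force
      # If the joint's sign convention makes s_rel large and negative while the flanges are actually
      # apart, the first branch never applies and a large spurious force is generated.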

    If we have to live with it, I don't see a foolproof method to avoid invalid configurations. Instead, I can only suggest measuring the flange distance of the Elasto Gap component, as in the attached model. If negative values of large magnitude occur, the configuration is invalid.

    Assuming that a beginner would intuitively connect flange_a to flange_a and flange_b to flange_b, there is a 50% chance that the configuration is invalid (A instead of C). This is too much to be acceptable, especially since verifying results in complex assemblies is often not possible.

    It is worth noting that the contact condition comes from the underlying Modelica component and not from Maplesoft.

    Prismatic_Joint_with_Elasto_Gap.msim

    UNIVERSIDAD AUTÓNOMA METROPOLITANA
    UNIDAD XOCHIMILCO

    15º Foro de Investigación de las Matemáticas Aplicadas a las Ciencias Sociales "Reflexiones sobre Educación y Matemáticas" (15th Research Forum on Mathematics Applied to the Social Sciences, "Reflections on Education and Mathematics"),

    "Learning of mathematical functions using applications with Maple, for students of Social Sciences"

    Students in the first cycles show a low level of learning in the subject of functions, and the arrival of the pandemic worsened their understanding of this content. Greatly increasing the use of ICT improved learning.
    The objective was to determine the relationship between learning and mathematical functions using applications built with Maple, for students of the Social Sciences. The experimental method was used, applying the scientific software Maple with students of the Social Sciences.

    In Spanish.

     

    Lenin Araujo Castillo

    Ambassador of Maple

     

     

    Following the previous post on The Electromagnetic Field of Moving Charges, this is another non-trivial exercise, the derivation of 4-dimensional relativistic Lorentz transformations,  a problem of a 3rd-year undergraduate course on Special Relativity whose solution requires "tensor & matrix" manipulation techniques. At the end, there is a link to the Maple document, so that the computation below can be reproduced, and a link to a corresponding PDF file with all the sections open.

    Deriving 4D relativistic Lorentz transformations

    Freddy Baudine(1), Edgardo S. Cheb-Terrab(2)

    (1) Retired, passionate about Mathematics and Physics

    (2) Physics, Differential Equations and Mathematical Functions, Maplesoft

     

    Lorentz transformations are a six-parameter family of linear transformations Lambda that relate the values of the coordinates x, y, z, t of an event in one inertial reference system to the coordinates x', y', z', t' of the same event in another inertial system that moves at a constant velocity relative to the former. An explicit form of Lambda can be derived from physics principles, or in a purely algebraic mathematical manner. A derivation from physics principles is done in an upcoming post about relativistic dynamics, while in this post we derive the form of Lambda mathematically, as rotations in a (pseudo) Euclidean 4-dimensional space. Most of the presentation below follows the one found in Jackson's book on Classical Electrodynamics [1].

     

    The computations below in Maple 2022 make use of the Maplesoft Physics Updates v.1283 or newer.

    Formulation of the problem and ansatz Lambda = exp(`𝕃`)

     

     

    The problem is to find a group of linear transformations,

      x'^mu = Lambda^mu_nu x^nu

    that represent rotations in a 4D (pseudo) Euclidean spacetime, and so leave invariant the norm of the 4D position vector x^mu; that is,

      x'^mu x'_mu = x^mu x_mu

    For the purpose of deriving the form of Lambda^mu_nu, a relevant property can be inferred by rewriting the invariance of the norm in terms of Lambda^mu_nu. In steps, from the above,

      g[alpha, beta] x'^alpha x'^beta = g[mu, nu] x^mu x^nu

      g[alpha, beta] Lambda^alpha_mu x^mu Lambda^beta_nu x^nu = g[mu, nu] x^mu x^nu

    from where,

      g[alpha, beta] Lambda^alpha_mu Lambda^beta_nu = g[mu, nu]

    or, in matrix (4 x 4) form, with Lambda^alpha_mu ≡ Lambda and g[alpha, beta] ≡ g,

    Lambda^T*g*Lambda = g

    where Lambda^T is the transpose of Lambda. Taking the determinant of both sides of this equation, and recalling that det(Lambda^T) = det(Lambda), we get

     

    det(Lambda) = ±1

     

    The determination of Lambda is analogous to the determination of the matrix R (the 3D tensor R[i, j]) representing rotations in 3D space, where the same line of reasoning leads to det(R) = ±1. To exclude reflection transformations, which have det(Lambda) = -1 and cannot be obtained through any sequence of rotations because they do not preserve the relative orientation of the axes, the sign that represents our problem is +. To explicitly construct the transformation matrix Lambda, Jackson proposes the ansatz

      Lambda = exp(`𝕃`)   

    Summarizing: the determination of Lambda^mu_nu consists of determining the matrix `𝕃`[`~mu`, nu] entering Lambda = exp(`𝕃`) such that det(Lambda) = 1, followed by computing the exponential of the matrix `𝕃`.

    Determination of `𝕃`[nu]^mu

     

    In order to compare results with Jackson's book, we use the same signature he uses, "(+---)", and lowercase Latin letters to represent space tensor indices, while spacetime indices are represented using Greek letters, which is already Physics' default.

     

    restart; with(Physics)

    Setup(signature = "+---", spaceindices = lowercaselatin)

    [signature = `+ - - -`, spaceindices = lowercaselatin]

    (1)

    Start by defining the tensor `𝕃`[nu]^mu whose components are to be determined. For practical purposes, define a macro LM = `𝕃` to represent the tensor and use L to represent its components

    macro(LM = `𝕃`, %LM = `%𝕃`); Define(Lambda, LM, quiet)

    LM[`~mu`, nu] = Matrix(4, symbol = L)

    `𝕃`[`~mu`, nu] = (4 x 4 Matrix with unassigned components L[i, j])

    (2)

    "Define(?)"

    {Lambda, `𝕃`[`~mu`, nu], Physics:-Dgamma[mu], Physics:-Psigma[mu], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-gamma_[a, b], Physics:-LeviCivita[alpha, beta, mu, nu]}

    (3)

    Next, from Lambda^T*g*Lambda = g (see above in Formulation of the problem) one can derive the form of `𝕃`. To work algebraically with `𝕃`, Lambda, g representing matrices, set these symbols as noncommutative

    Setup(noncommutativeprefix = {LM, Lambda, g})

    [noncommutativeprefix = {`𝕃`, Lambda, g}]

    (4)

    From

    Lambda^T*g*Lambda = g

    Physics:-`*`(Physics:-`^`(Lambda, T), g, Lambda) = g

    (5)

    it follows that

    (1/g*(Physics[`*`](Physics[`^`](Lambda, T), g, Lambda) = g))/Lambda

    Physics:-`*`(Physics:-`^`(g, -1), Physics:-`^`(Lambda, T), g) = Physics:-`^`(Lambda, -1)

    (6)

    eval(Physics[`*`](Physics[`^`](g, -1), Physics[`^`](Lambda, T), g) = Physics[`^`](Lambda, -1), Lambda = exp(LM))

    Physics:-`*`(Physics:-`^`(g, -1), Physics:-`^`(exp(`𝕃`), T), g) = Physics:-`^`(exp(`𝕃`), -1)

    (7)

    Expanding the exponential using exp(`𝕃`) = Sum(`𝕃`^k/factorial(k), k = 0 .. infinity), and taking into account that the matrix product g^(-1)*`𝕃`^k*g can be rewritten as (g^(-1)*`𝕃`*g)^k, the left-hand side of (7) can be written as exp(g^(-1)*`𝕃`^T*g)

    exp(LM^T/g*g) = rhs(Physics[`*`](Physics[`^`](g, -1), Physics[`^`](exp(`𝕃`), T), g) = Physics[`^`](exp(`𝕃`), -1))

    exp(Physics:-`*`(Physics:-`^`(g, -1), Physics:-`^`(`𝕃`, T), g)) = Physics:-`^`(exp(`𝕃`), -1)

    (8)

    Multiplying by exp(`𝕃`)

    (exp(Physics[`*`](Physics[`^`](g, -1), Physics[`^`](`𝕃`, T), g)) = Physics[`^`](exp(`𝕃`), -1))*exp(LM)

    Physics:-`*`(exp(Physics:-`*`(Physics:-`^`(g, -1), Physics:-`^`(`𝕃`, T), g)), exp(`𝕃`)) = 1

    (9)

    Recalling that g^(-1) = g_[~mu, ~alpha], g = g_[beta, nu], and that for any matrix `𝕃`, (`𝕃`^T)[alpha, ~beta] = `𝕃`[~beta, alpha],

    g^(-1) `𝕃`^T g = g_[~mu, ~alpha] `𝕃`[~beta, alpha] g_[beta, nu]

    Physics:-`*`(Physics:-`^`(g, -1), Physics:-`^`(`𝕃`, T), g) = Physics:-`*`(Physics:-g_[`~mu`, `~alpha`], `𝕃`[`~beta`, alpha], Physics:-g_[beta, nu])

    (10)

    subs([Physics[`*`](Physics[`^`](g, -1), Physics[`^`](`𝕃`, T), g) = Physics[`*`](Physics[g_][`~mu`, `~alpha`], `𝕃`[`~beta`, alpha], Physics[g_][beta, nu]), LM = LM[`~mu`, nu]], Physics[`*`](exp(Physics[`*`](Physics[`^`](g, -1), Physics[`^`](`𝕃`, T), g)), exp(`𝕃`)) = 1)

    Physics:-`*`(exp(Physics:-g_[`~alpha`, `~mu`]*Physics:-g_[beta, nu]*`𝕃`[`~beta`, alpha]), exp(`𝕃`[`~mu`, nu])) = 1

    (11)

    To allow for the combination of the exponentials, now that everything is in tensor notation, remove the noncommutative character of `𝕃`

    Setup(clear, noncommutativeprefix)

    [noncommutativeprefix = none]

    (12)

    combine(Physics[`*`](exp(Physics[g_][`~alpha`, `~mu`]*Physics[g_][beta, nu]*`𝕃`[`~beta`, alpha]), exp(`𝕃`[`~mu`, nu])) = 1)

    exp(`𝕃`[`~beta`, alpha]*Physics:-g_[beta, nu]*Physics:-g_[`~alpha`, `~mu`]+`𝕃`[`~mu`, nu]) = 1

    (13)

    Since every tensor component of this expression is real, taking the logarithm on both sides and simplifying the tensor indices

    `assuming`([map(ln, exp(`𝕃`[`~beta`, alpha]*Physics[g_][beta, nu]*Physics[g_][`~alpha`, `~mu`]+`𝕃`[`~mu`, nu]) = 1)], [real])

    `𝕃`[`~beta`, alpha]*Physics:-g_[beta, nu]*Physics:-g_[`~alpha`, `~mu`]+`𝕃`[`~mu`, nu] = 0

    (14)

    Simplify(`𝕃`[`~beta`, alpha]*Physics[g_][beta, nu]*Physics[g_][`~alpha`, `~mu`]+`𝕃`[`~mu`, nu] = 0)

    `𝕃`[nu, `~mu`]+`𝕃`[`~mu`, nu] = 0

    (15)

    So the components of `𝕃`[`~mu`, nu]

    LM[`~μ`, nu, matrix]

    `𝕃`[`~μ`, nu] = (4 x 4 Matrix with components L[i, j], as defined in (2))

    (16)

    satisfy (15). Using TensorArray  the components of that tensorial equation are

    TensorArray(`𝕃`[nu, `~mu`]+`𝕃`[`~mu`, nu] = 0, output = setofequations)

    {2*L[1, 1] = 0, 2*L[2, 2] = 0, 2*L[3, 3] = 0, 2*L[4, 4] = 0, -L[1, 2]+L[2, 1] = 0, L[1, 2]-L[2, 1] = 0, -L[1, 3]+L[3, 1] = 0, L[1, 3]-L[3, 1] = 0, -L[1, 4]+L[4, 1] = 0, L[1, 4]-L[4, 1] = 0, L[3, 2]+L[2, 3] = 0, L[4, 2]+L[2, 4] = 0, L[4, 3]+L[3, 4] = 0}

    (17)

    Simplifying, taking these equations into account, results in the form of `𝕃`[`~mu`, nu] we were looking for

    "simplify(?,{2*L[1,1] = 0, 2*L[2,2] = 0, 2*L[3,3] = 0, 2*L[4,4] = 0, -L[1,2]+L[2,1] = 0, L[1,2]-L[2,1] = 0, -L[1,3]+L[3,1] = 0, L[1,3]-L[3,1] = 0, -L[1,4]+L[4,1] = 0, L[1,4]-L[4,1] = 0, L[3,2]+L[2,3] = 0, L[4,2]+L[2,4] = 0, L[4,3]+L[3,4] = 0})"

    `𝕃`[`~μ`, nu] = Matrix([[0, L[1, 2], L[1, 3], L[1, 4]], [L[1, 2], 0, L[2, 3], L[2, 4]], [L[1, 3], -L[2, 3], 0, L[3, 4]], [L[1, 4], -L[2, 4], -L[3, 4], 0]])

    (18)

    This is equation (11.90) in Jackson's book [1]. By eye we see there are only six independent parameters in `𝕃`[`~mu`, nu], or via

    "indets(rhs(?), name)"

    {L[1, 2], L[1, 3], L[1, 4], L[2, 3], L[2, 4], L[3, 4]}

    (19)

    nops({L[1, 2], L[1, 3], L[1, 4], L[2, 3], L[2, 4], L[3, 4]})

    6

    (20)

    This number is expected: a rotation in 3D space can always be represented as the composition of three rotations, and so is characterized by 3 parameters: the rotation angles measured on each of the space planes (x, y), (y, z), (z, x). Likewise, a rotation in 4D space is characterized by 6 parameters: rotations on each of the three space planes, parameters L[2, 3], L[2, 4] and L[3, 4], and rotations on the spacetime planes (t, x), (t, y), (t, z), parameters L[1, j]. Define now `𝕃`[`~mu`, nu] using (18) for further computing with it in the next section

    "Define(?)"

    {Lambda, `𝕃`[`~mu`, nu], Physics:-Dgamma[mu], Physics:-Psigma[mu], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-gamma_[a, b], Physics:-LeviCivita[alpha, beta, mu, nu]}

    (21)

    Determination of Lambda[`~mu`, nu]

     

    From the components of `𝕃`[`~mu`, nu] in (18), the components of Lambda[`~mu`, nu] = exp(`𝕃`[`~mu`, nu]) can be computed directly using the LinearAlgebra:-MatrixExponential command. Then, following Jackson's book, in what follows we also derive a general formula for `𝕃`[`~mu`, nu] in terms of beta = v/c and gamma = 1/sqrt(1 - beta^2), shown in [1] as equation (11.98), finally showing the form of Lambda[`~mu`, nu] as a function of the relative velocity of the two inertial systems of reference.

     

    An explicit form of Lambda[`~mu`, nu] in the case of a rotation on the (t, x) plane can be computed by taking equal to zero all the parameters in (19) except L[1, 2], and substituting into (18)

    `~`[`=`](`minus`({L[1, 2], L[1, 3], L[1, 4], L[2, 3], L[2, 4], L[3, 4]}, {L[1, 2]}), 0)

    {L[1, 3] = 0, L[1, 4] = 0, L[2, 3] = 0, L[2, 4] = 0, L[3, 4] = 0}

    (22)

    "subs({L[1,3] = 0, L[1,4] = 0, L[2,3] = 0, L[2,4] = 0, L[3,4] = 0},?)"

    `𝕃`[`~μ`, nu] = Matrix([[0, L[1, 2], 0, 0], [L[1, 2], 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])

    (23)

    Computing the matrix exponential,

    "Lambda[~mu,nu]=LinearAlgebra:-MatrixExponential(rhs(?))"

    Lambda[`~μ`, nu] = (the 4 x 4 matrix exponential, with entries written in terms of exp(L[1, 2]) and exp(-L[1, 2]))

    (24)

    "convert(?,trigh)"

    Lambda[`~μ`, nu] = Matrix([[cosh(L[1, 2]), sinh(L[1, 2]), 0, 0], [sinh(L[1, 2]), cosh(L[1, 2]), 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

    (25)

    This is formula (4.2) in Landau & Lifshitz's book [2]. An explicit form of Lambda[`~mu`, nu] in the case of a rotation on the (x, y) plane can be computed by taking equal to zero all the parameters in (19) except L[2, 3]

    `~`[`=`](`minus`({L[1, 2], L[1, 3], L[1, 4], L[2, 3], L[2, 4], L[3, 4]}, {L[2, 3]}), 0)

    {L[1, 2] = 0, L[1, 3] = 0, L[1, 4] = 0, L[2, 4] = 0, L[3, 4] = 0}

    (26)

    "subs({L[1,2] = 0, L[1,3] = 0, L[1,4] = 0, L[2,4] = 0, L[3,4] = 0},?)"

    `𝕃`[`~μ`, nu] = Matrix([[0, 0, 0, 0], [0, 0, L[2, 3], 0], [0, -L[2, 3], 0, 0], [0, 0, 0, 0]])

    (27)

    "Lambda[~mu, nu]=LinearAlgebra:-MatrixExponential(rhs(?))"

    Lambda[`~μ`, nu] = Matrix([[1, 0, 0, 0], [0, cos(L[2, 3]), sin(L[2, 3]), 0], [0, -sin(L[2, 3]), cos(L[2, 3]), 0], [0, 0, 0, 1]])

    (28)


    Rewriting `%𝕃`[`~mu`, nu] = K[`~i`]*Zeta[i]+S[`~i`]*omega[i]

     

    Following Jackson's notation, for readability, redefine the 6 parameters entering `𝕃`[`~mu`, nu] as

    '{LM[1, 2] = `ζ__1`, LM[1, 3] = `ζ__2`, LM[1, 4] = `ζ__3`, LM[2, 3] = `ω__3`, LM[2, 4] = -`ω__2`, LM[3, 4] = `ω__1`}'

    {`𝕃`[1, 2] = zeta__1, `𝕃`[1, 3] = zeta__2, `𝕃`[1, 4] = zeta__3, `𝕃`[2, 3] = omega__3, `𝕃`[2, 4] = -omega__2, `𝕃`[3, 4] = omega__1}

    (29)

    (Note in the above the surrounding unevaluation quotes '...' to prevent a premature evaluation of the left-hand sides; that is necessary when using the Library:-RedefineTensorComponent command.) With this redefinition, `𝕃`[`~mu`, nu] becomes

    Library:-RedefineTensorComponent({`𝕃`[1, 2] = zeta__1, `𝕃`[1, 3] = zeta__2, `𝕃`[1, 4] = zeta__3, `𝕃`[2, 3] = omega__3, `𝕃`[2, 4] = -omega__2, `𝕃`[3, 4] = omega__1})

    LM[`~μ`, nu, matrix]

    `𝕃`[`~μ`, nu] = (the 4 x 4 matrix of (18) with its parameters renamed as in (29); displayed in the worksheet)

    (30)

    where each parameter is related to a rotation angle on one plane. Any Lorentz transformation (rotation in 4D pseudo-Euclidean space) can be represented as the composition of these six rotations, and to each rotation, corresponds the matrix that results from taking equal to zero all of the six parameters but one.

     

    The set of six parameters can be split into two sets of three parameters each, one representing rotations on the (t, x__j) planes, parameters `ζ__j`, and the other representing rotations on the (x__i, x__j) planes, parameters `ω__j`. With that, following [1], (30) can be rewritten in terms of four 3D tensors, two of them with the parameters as components and the other two with matrices as components, as follows:

    Zeta[i] = [`ζ__1`, `ζ__2`, `ζ__3`], omega[i] = [`ω__1`, `ω__2`, `ω__3`], K[i] = [K__1, K__2, K__3], S[i] = [S__1, S__2, S__3]

    Zeta[i] = [zeta__1, zeta__2, zeta__3], omega[i] = [omega__1, omega__2, omega__3], K[i] = [K__1, K__2, K__3], S[i] = [S__1, S__2, S__3]

    (31)

    Define(Zeta[i] = [zeta__1, zeta__2, zeta__3], omega[i] = [omega__1, omega__2, omega__3], K[i] = [K__1, K__2, K__3], S[i] = [S__1, S__2, S__3])

    {Lambda, `𝕃`[mu, nu], Physics:-Dgamma[mu], K[i], Physics:-Psigma[mu], S[i], Zeta[i], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-gamma_[a, b], omega[i], Physics:-LeviCivita[alpha, beta, mu, nu]}

    (32)

    The 3D tensors K[i] and S[i] satisfy the commutation relations

    Setup(noncommutativeprefix = {K, S})

    [noncommutativeprefix = {K, S}]

    (33)

    Commutator(S[i], S[j]) = LeviCivita[i, j, k]*S[k]

    Physics:-Commutator(S[i], S[j]) = Physics:-LeviCivita[i, j, k]*S[`~k`]

    (34)

    Commutator(S[i], K[j]) = LeviCivita[i, j, k]*K[k]

    Physics:-Commutator(S[i], K[j]) = Physics:-LeviCivita[i, j, k]*K[`~k`]

    (35)

    Commutator(K[i], K[j]) = -LeviCivita[i, j, k]*S[k]

    Physics:-Commutator(K[i], K[j]) = -Physics:-LeviCivita[i, j, k]*S[`~k`]

    (36)

    The matrix components of the 3D tensor K__i, related to rotations on the t, x__j planes, are

    K__1 := matrix([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])

    Matrix([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])

    (37)

    K__2 := matrix([[0, 0, 1, 0], [0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]])

    Matrix([[0, 0, 1, 0], [0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]])

    (38)

    K__3 := matrix([[0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0]])

    Matrix([[0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0]])

    (39)

    The matrix components of the 3D tensor S__i, related to rotations on the (x__i, x__j) 3D space planes, are

    S__1 := matrix([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])

    Matrix([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])

    (40)

    S__2 := matrix([[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, -1, 0, 0]])

    Matrix([[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, -1, 0, 0]])

    (41)

    S__3 := matrix([[0, 0, 0, 0], [0, 0, -1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])

    Matrix([[0, 0, 0, 0], [0, 0, -1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])

    (42)


    Verifying the commutation relations between S[i] and K[j]

       

    The `𝕃`[`~mu`, nu] tensor is now expressed in terms of these objects as

    %LM[`~μ`, nu] = omega[i].S[i]+Zeta[i].K[i]

    `%𝕃`[`~μ`, nu] = K[`~i`]*Zeta[i]+S[`~i`]*omega[i]

    (50)

    where the right-hand side, without free indices, represents the matrix form of `%𝕃`[`~mu`, nu]. This notation makes explicit the fact that any Lorentz transformation can always be written as the composition of six rotations

    SumOverRepeatedIndices(`%𝕃`[`~μ`, nu] = K[`~i`]*Zeta[i]+S[`~i`]*omega[i])

    `%𝕃`[`~μ`, nu] = zeta__1*K[`~1`]+zeta__2*K[`~2`]+zeta__3*K[`~3`]+omega__1*S[`~1`]+omega__2*S[`~2`]+omega__3*S[`~3`]

    (51)

    Library:-RewriteInMatrixForm(`%𝕃`[`~μ`, nu] = zeta__1*K[`~1`]+zeta__2*K[`~2`]+zeta__3*K[`~3`]+omega__1*S[`~1`]+omega__2*S[`~2`]+omega__3*S[`~3`])

    `%𝕃`[`~μ`, nu] = zeta__1*K__1 + zeta__2*K__2 + zeta__3*K__3 + omega__1*S__1 + omega__2*S__2 + omega__3*S__3  (each term written out as its explicit 4 x 4 matrix in the worksheet)

    (52)

    Library:-PerformMatrixOperations(`%𝕃`[`~μ`, nu] = zeta__1*K[`~1`]+zeta__2*K[`~2`]+zeta__3*K[`~3`]+omega__1*S[`~1`]+omega__2*S[`~2`]+omega__3*S[`~3`])

    `%𝕃`[`~μ`, nu] = Matrix([[0, zeta__1, zeta__2, zeta__3], [zeta__1, 0, -omega__3, omega__2], [zeta__2, omega__3, 0, -omega__1], [zeta__3, -omega__2, omega__1, 0]])

    (53)


    which is the same as the starting point (30).

    The transformation Lambda[`~mu`, nu] = exp(`%𝕃`[`~mu`, nu]), where  `%𝕃`[`~mu`, nu] = K[`~i`]*Zeta[i], as a function of the relative velocity of two inertial systems

     

     

    As seen in the previous subsection, in `𝕃`[`~mu`, nu] = K[`~i`]*Zeta[i]+S[`~i`]*omega[i], the second term, S[`~i`]*omega[i], corresponds to 3D rotations embedded in the general form of 4D Lorentz transformations, and K[`~i`]*Zeta[i] is the term that relates the coordinates of two inertial systems of reference that move with respect to each other at constant velocity v. In this section, K[`~i`]*Zeta[i] is rewritten in terms of that velocity, arriving at equation (11.98) of Jackson's book [1]. The key observation is that the 3D vector Zeta[i] can be rewritten in terms of arctanh(beta), where beta = v/c and c is the velocity of light (for the rationale of that relation, see [2], sec. 4, the discussion before formula (4.3)).

     

    Use a macro, say ub, to represent the atomic variable β̂ (entered in the worksheet as a 2-D atomic variable; in general, to create atomic variables, see the section on Atomic Variables of the 2DMathDetails help page).

     

    macro(ub = `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`)

    ub[j] = [ub[1], ub[2], ub[3]], Zeta[j] = ub[j]*arctanh(beta)

    β̂[j] = [β̂[1], β̂[2], β̂[3]], Zeta[j] = β̂[j]*arctanh(beta)

    (54)

    Define(β̂[j] = [β̂[1], β̂[2], β̂[3]], Zeta[j] = β̂[j]*arctanh(beta))

    {Lambda, `𝕃`[mu, nu], Physics:-Dgamma[mu], K[i], Physics:-Psigma[mu], S[i], Zeta[i], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-gamma_[a, b], omega[i], Physics:-LeviCivita[alpha, beta, mu, nu], β̂[j]}

    (55)

    With these two definitions, and excluding the rotation term S[`~i`]*omega[i] we have

    %LM[`~μ`, nu] = Zeta[j]*K[j]

    `%𝕃`[`~μ`, nu] = Zeta[j]*K[`~j`]

    (56)

    SumOverRepeatedIndices(`%𝕃`[`~μ`, nu] = Zeta[j]*K[`~j`])

    `%𝕃`[`~μ`, nu] = arctanh(beta)*(K[`~1`]*β̂[1] + K[`~2`]*β̂[2] + K[`~3`]*β̂[3])

    (57)

    Library:-PerformMatrixOperations(`%𝕃`[`~μ`, nu] = arctanh(beta)*(K[`~1`]*β̂[1] + K[`~2`]*β̂[2] + K[`~3`]*β̂[3]))

    `%𝕃`[`~μ`, nu] = Matrix([[0, β̂[1]*arctanh(beta), β̂[2]*arctanh(beta), β̂[3]*arctanh(beta)], [β̂[1]*arctanh(beta), 0, 0, 0], [β̂[2]*arctanh(beta), 0, 0, 0], [β̂[3]*arctanh(beta), 0, 0, 0]])

    (58)

     

    From this expression, the form of Lambda[`~mu`, nu] can be obtained as in (24) using LinearAlgebra:-MatrixExponential and simplifying the result taking into account that β̂[j] is a unit vector

    SumOverRepeatedIndices(ub[j]^2) = 1

    β̂[1]^2 + β̂[2]^2 + β̂[3]^2 = 1

    (59)

    exp(lhs((58))) = simplify(LinearAlgebra:-MatrixExponential(rhs((58))), {(59)})

    exp(`%𝕃`[`~μ`, nu]) = (4 x 4 Matrix in terms of beta and the components of β̂; displayed in the worksheet)

    (60)

    It is useful at this point to analyze the dependency of this matrix on the components of β̂[j]

    "map(u -> indets(u,specindex(ub)), rhs(?))"

    (4 x 4 Matrix listing, for each entry, the components of β̂ it depends on; displayed in the worksheet)

    (61)

    We see that the diagonal element [4, 4] depends on two instead of only one component of β̂[j]. That is due to the simplification with respect to side relations performed in (60), which constructs an elimination Groebner basis that cannot reduce at once, using the single equation (59), the dependency of all of the elements [2, 2], [3, 3] and [4, 4] to a single component of β̂[j]. So, to further reduce the dependency of the [4, 4] element, this component of (60) requires one more simplification step, using a different elimination strategy, explicitly requesting the elimination of {β̂[1], β̂[2]}

    "rhs(?)[4,4]"

    ((β̂[1]^2 + β̂[2]^2)*(-beta^2+1)^(1/2) - β̂[1]^2 - β̂[2]^2 + 1)/(-beta^2+1)^(1/2)

    (62)

     

    simplify(((β̂[1]^2 + β̂[2]^2)*(-beta^2+1)^(1/2) - β̂[1]^2 - β̂[2]^2 + 1)/(-beta^2+1)^(1/2), {β̂[1]^2 + β̂[2]^2 + β̂[3]^2 = 1}, {β̂[1], β̂[2]})

    (-(-beta^2+1)^(1/2)*β̂[3]^2 + β̂[3]^2 + (-beta^2+1)^(1/2))/(-beta^2+1)^(1/2)

    (63)

    This result involves only β̂[3], and with it the form of Lambda[`~mu`, nu] = exp(`%𝕃`[`~mu`, nu]) becomes

    "subs(((β̂[1]^2 + β̂[2]^2)*(-beta^2+1)^(1/2) - β̂[1]^2 - β̂[2]^2 + 1)/(-beta^2+1)^(1/2) = (-(-beta^2+1)^(1/2)*β̂[3]^2 + β̂[3]^2 + (-beta^2+1)^(1/2))/(-beta^2+1)^(1/2),?)"

    exp(`%𝕃`[`~μ`, nu]) = (4 x 4 Matrix displayed in the worksheet)

    (64)

    Replacing now the components of the unit vector β̂[j] by the components of the vector β divided by its modulus beta

    seq(ub[j] = beta[j]/beta, j = 1 .. 3)

    β̂[1] = beta[1]/beta, β̂[2] = beta[2]/beta, β̂[3] = beta[3]/beta

    (65)

    and recalling that

    exp(`%𝕃`[`~μ`, nu]) = Lambda[`~μ`, nu]

    exp(`%𝕃`[`~μ`, nu]) = Lambda[`~μ`, nu]

    (66)

    to get equation (11.98) in Jackson's book it suffices to introduce (the customary notation)

    1/sqrt(-beta^2+1) = gamma

    1/(-beta^2+1)^(1/2) = gamma

    (67)

    "simplify(subs(β̂[1] = beta[1]/beta, exp(`%𝕃`[`~μ`,nu]) = Lambda[`~μ`,nu], 1/(-beta^2+1)^(1/2) = gamma, (1/(-beta^2+1)^(1/2) = gamma)^(-1),?))"

    Lambda[`~μ`, nu] = (4 x 4 Matrix: equation (11.98) of [1], in terms of gamma and the components of beta; displayed in the worksheet)

    (68)

     

    This is equation (11.98) in Jackson's book.

     

    Finally, to get the form of this general Lorentz transformation excluding 3D rotations, directly expressed in terms of the relative velocity v of the two inertial systems of references, introduce

    v[i] = [v__x, v__y, v__z], beta[i] = v[i]/c

    v[i] = [v__x, v__y, v__z], beta[i] = v[i]/c

    (69)

    At this point it suffices to Define (69) as tensors

    Define(v[i] = [v__x, v__y, v__z], beta[i] = v[i]/c)

    {Lambda, `𝕃`[mu, nu], Physics:-Dgamma[mu], K[i], Physics:-Psigma[mu], S[i], Zeta[i], beta[i], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-gamma_[a, b], omega[i], v[i], Physics:-LeviCivita[alpha, beta, mu, nu], `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[j]}

    (70)

    and remove beta and gamma from the formulation using

    (rhs = lhs)(1/(-beta^2+1)^(1/2) = gamma), beta = v/c

    gamma = 1/(-beta^2+1)^(1/2), beta = v/c

    (71)

    "simplify(subs(gamma = 1/(-beta^2+1)^(1/2),simplify(?)),size) "

    Lambda[`~μ`, nu] = (4 x 4 Matrix in terms of the velocity components v__x, v__y, v__z and c; displayed in the worksheet)

    (72)


    References

     

    [1] J.D. Jackson, "Classical Electrodynamics", third edition, 1999.

    [2] L.D. Landau, E.M. Lifshitz, "The Classical Theory of Fields", Course of Theoretical Physics V.2, 4th revised English edition, 1975.


    Download Deriving_the_mathematical_form_of_Lorentz_transformations.mw

    Deriving_the_mathematical_form_of_Lorentz_transformations.pdf

    Edgardo S. Cheb-Terrab
    Physics, Differential Equations and Mathematical Functions, Maplesoft

    We've just released Maple Flow 2022.1. We've squeezed in a few new features as requested by our users - I'll describe them below.

    Before we get to that, I'd like to give everyone an open invitation to grab a Maple Flow trial - I'd love to know what you think. I'm fanatically devoted to making Flow better, but I can only do that if you give me your feedback.

    You can specify whether you want your results to be globally displayed using engineering, scientific, or fixed notation.

    Supporting images can be cut and pasted from another source directly into Maple Flow using standard clipboard operations.

    You can now insert a time stamp in headers and footers. And you can optionally place a border around the header, footer or body of the page.

    New content in the help system makes it easier to get started with advanced features, including techniques for optimization and signal processing.

    Go here to learn more...and don't forget to grab a trial.

     

    Yes, you read that right! Steps documents are a feature in Maple Learn that we wanted to highlight this week, as they can be of great use in understanding concepts and solving problems. They show all the steps to solve a problem, including reminders of any formulas used! They can be found at the homepage for steps documents. A list of them all, with links, can be found below the image.

    All steps documents:

    All of our documents follow the same format, which I’ll show you using 3 different documents: Derivatives Steps, Factoring Steps, and Matrix Determinant Steps. They will be shown from left to right in that order, so you can see the different steps and how they work.

     

    The first thing you’ll see on any steps document is the place to enter the equation. Each equation can be entered in the appropriate box, in different styles to fit your needs and the problem asked.

    Then, you click on the show steps button, which is the same for all of the documents:

    This is where the magic happens. The steps will appear line by line, in great detail. The actual steps are generated by Maple, and presented in Maple Learn through scripting. Because of this, please don’t click off the group the steps appear in, or they’ll appear in the new group as well!

    There are many more steps documents than the ones we have covered here, and we will be adding more as time goes on. Please keep an eye out, and enjoy the updates! We hope this was helpful to you all, and let us know if there are any other steps you’d like to see.

     

    According to Wikipedia

    "In computing, a programming language reference or language reference manual is
    part of the documentation associated with most mainstream programming languages. It is
    written for users and developers, and describes the basic elements of the language and how
    to use them in a program. For a command-based language, for example, this will include details
    of every available command and of the syntax for using it.

    The reference manual is usually separate and distinct from a more detailed
     programming language specification meant for implementors of the language rather than those who simply use it to accomplish some processing task."

     

    And no, Maple's Programming Guide is not a language reference manual. It is a guide to how to program in Maple, but it is not a reference manual for the language itself, i.e., a description of the language itself.

    Examples of Language reference manuals are

    C

    Original C++ 

    Python

    Ada

    Fortran 77

     

    and so on.

    Why is there no LRM for Maple? Should there be one? I find Maple semantics sometimes confusing, and many times I find out how things work by trial and error. Having a reference to look at would be good.

    I know that for a language as large as Maple, this is not easy to do. But it would be good for Maplesoft to invest in making one.

    Registration for Maple Conference 2022 is now open! Please go to our conference home page and click on the "Register Now" button.

    A tentative schedule has also been posted on the conference agenda page.

    The call for participation has closed but please keep in mind that you still have until September 22 to submit creative works.

    We hope to see you at the conference!

    Have you ever heard of a matrix kernel or nullspace? If not, or you’d like a refresher on the topic, keep reading! We’re doing a Maple Learn document walkthrough today on Fundamental Subspaces.

    The document starts by defining the nullspace/kernel and nullity of a matrix. Nullity is defined as the number of vectors in a basis of the kernel of the given matrix. This makes sense, as the nullspace is defined as the set of all vectors the matrix maps to the zero vector:

                                        Null(A) = { x : A x = 0 }

    This may still not make sense to you, and that’s okay! We have an example for a reason, where we try to find a basis for Null(A) and state the dimension of the subspace (nullity).


    I won’t go through the solution here, as trying it yourself is always important. But one hint! If you get really stuck, you can find the Reduced Row Echelon Form (RREF) and the kernel using Maple Learn’s context panel, or check out the rest of our Matrices collection for other helpful documents on this topic.
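
    If you want to check your work in Maple itself, the same computation can be sketched in a few lines (the matrix A below is made up for illustration; it is not the one from the document):

      with(LinearAlgebra):
      A := Matrix([[1, 2, 3], [2, 4, 6]]):
      ReducedRowEchelonForm(A);   # one nonzero row, so rank(A) = 1
      NullSpace(A);               # a basis of Null(A); it has 3 - 1 = 2 vectors, so the nullity is 2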

     

    Please let us know what you thought of this walkthrough or if there are any specific documents or topics you’d like to see in the comments below this post. I hope you enjoyed this walkthrough!

    We’ve decided to start a new series of blog posts, where we take a closer look at the collections available in Maple Learn. What collection are we looking at first, you ask? Our largest, the Calculus collection! This collection has around 250 documents, and was one of the first to be added to the Maple Learn document gallery.

    Because it’s so big, we can’t talk about it all in one post. Instead, we’re going to break it up into three posts: Limits (this one!), Derivatives, and Integration. Keep an eye out for those other ones!

    Let’s dive into it. If you’re learning limits for the first time, the first document you’ll want to take a look at is our document on the formal definition of limits.

    And of course, just as the document title says, we start with the formal definition of a limit: lim(x -> a) f(x) = L means that for every epsilon > 0 there exists a delta > 0 such that |f(x) - L| < epsilon whenever 0 < |x - a| < delta.

    From there, like many of our other documents, there’s a visualization to the left, and an explanation to the right. Seems fairly simple, right?

    Well, what if you wanted to dig further into the topic?

    That’s what the rest of this collection is for! We have documents on many topics relating to limits, such as The Squeeze Theorem, or The Fundamental Trig Limit (don’t forget to use the slider!). We also have a steps document, to help you solve any limits problems you’ve created or found.
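
    And if you ever want to double-check one of these limits in Maple itself, a one-liner does it (an illustrative sketch, not part of the Learn documents):

      limit(sin(x)/x, x = 0);   # the Fundamental Trig Limit: returns 1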

    We can’t wait to see you next time, when we dive into the Derivative documents. Let us know, after the Calculus collection showcase, if there’s another collection you’d like to see summarized!

    Today I’m here with a document walkthrough under the subject of graph theory! Do you know what an Eulerian path is? Have you ever tried to find one?

    An Eulerian path is a path that uses every edge in the graph exactly once. Vertices can be revisited, just not the edges. There are mathematical ways to find an Eulerian path, but at the level of math I’m at, I just use my eyes!
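
    If you would rather let Maple do the checking, the GraphTheory package can at least test whether such a path exists; this is a hedged sketch with a made-up graph, not the script behind the quiz document:

      with(GraphTheory):
      # a small graph in which every vertex has even degree, so an Eulerian circuit exists
      G := Graph({{1, 2}, {2, 3}, {3, 1}, {3, 4}, {4, 5}, {5, 3}}):
      IsEulerian(G);   # returns true; the quiz, by contrast, asks you to trace such a path yourself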

                                                          

    In the document Eulerian Paths Quiz, we focus on trying to find an Eulerian path. This document, created using Maple scripting, uses the click on plot feature, allowing you to click on the edges and check your answer. When an edge is clicked, it turns red, and feedback is given.

    If you make a mistake, there are a few options. If the most recent edge chosen is the mistake, you can simply click on it again to undo the selection. However, if the mistake is several edges back, or you need to undo the whole thing, you can click the blue reset button.

                                                                                        

    Once you’ve done one, you might want to try another graph. That’s why we have a try another button, to give you another random new graph.

    We hope you enjoy this document! If you’re curious about how this document was scripted, you can see our script HERE. Please let us know if there are any specific documents you’d like as a walkthrough in the comments below, and check out our other graph theory documents.

    For a long time I could not understand how to do a kinematic analysis of this device based on the coupling equations (like here). The equations were drawn up relative to the ends of the grips (horns), that is, relative to the coordinates of 4 points. That's 12 equations. But then only a finite number of mechanism positions exist (as shown by RootFinding[Isolate]); it turns out that there is no continuous transition between these positions. It is then natural to assume that the movement can be obtained with the help of a small deformation. It seemed that if we discard the condition of a constant distance between the midpoints of the horns (this is the f7 equation at the very beginning), this would allow us to obtain minimal distortion during movement. In fact, the maximum distortion during the movement was in the second decimal place.
    (It seems these authors came to a similar result, but analytically: "Configuration analysis of the Schatz linkage", C-C Lee and J S Dai, Department of Tool and Die-Making Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung, Taiwan, and Department of Mechanical Engineering, King's College, University of London, UK.)
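
    For readers who have not used RootFinding[Isolate], the kind of call involved looks like this (a generic, made-up polynomial system for illustration; the actual coupling equations are in the attached worksheets):

      # RootFinding:-Isolate returns the finitely many real solutions of a square polynomial system
      eqs := [x^2 + y^2 - 1, x - y^2]:
      RootFinding:-Isolate(eqs, [x, y]);   # here: two real solutions, returned numerically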

    The left racks are input.


    The first worksheet is used to calculate the trajectory, and the other two show design options based on the data received.
    For data transfer, a disk called E is used.
    Schatz_mechanism_2_0.mw
    OF_experimental_1_part_3_Barrel.mw
    OF_experimental_1_part_3.mw
