ecterrab

MaplePrimes Activity


These are replies submitted by ecterrab

Hi
Just revising the documentation about dsolve's speed to see whether the information I have is also available to you, Mac Dude. Please check ?dsolve/rkf45 and ?dsolve/numeric/efficiency: the current dsolve runs as fast as C code when you specify the compile = true option in the call to dsolve/numeric. With this option, dsolve actually writes the C code for you, compiles it, and uses it to solve the DE system, without you even noticing. It is all automatic, as said in the presentation. The limitations of "dsolve at the speed of C code" are also just those of the regular C code you may nevertheless prefer to write yourself for use with a connection Toolbox: it only works with real-valued DE systems that contain at most elementary functions (no special functions allowed) - see the list of functions in ?Compile,compile under Runtime Support.
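
As a concrete illustration, here is a hedged sketch of what such a call can look like (the pendulum ODE and the numbers are made up for illustration; the options numeric and compile = true are the ones documented in ?dsolve,numeric):

```maple
# A pendulum-type ODE, solved numerically with compiled evaluation
ode := diff(theta(t), t, t) = -sin(theta(t)):
ics := theta(0) = 0.1, D(theta)(0) = 0:
# compile = true makes dsolve generate, compile and use C code automatically
sol := dsolve({ode, ics}, numeric, compile = true):
sol(1.0);  # evaluate the (compiled) solution at t = 1
```

The returned procedure sol is used exactly as in the non-compiled case; the C generation and compilation happen behind the scenes.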

Edgardo S. Cheb-Terrab
Physics, Maplesoft

Hi
These polemic questions and the answers I present in this talk actually passed through my mind at different times and reflect, honestly, what I think.

In 1996 I was interested in computing Poincaré sections in the context of a general relativity problem. There was Fortran code for that (see Calzetta and El Hasi, Phys.Rev. D 51 - 1995). After struggling badly, because the number of numerical experiments to be done was really large, it turned out to be much simpler for me to write a complete program to compute Poincaré sections within a computer algebra system than to keep using the existing Fortran one. That program is today found in Maple, as DEtools[poincare].
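
For the curious, here is a minimal, hedged sketch of calling DEtools[poincare] (the Hénon-Heiles Hamiltonian and the option values below are illustrative choices, not from the original post; see ?DEtools,poincare for the documented calling sequence):

```maple
with(DEtools):
# Hénon-Heiles Hamiltonian in the canonical variables p1, p2, q1, q2
H := p1^2/2 + p2^2/2 + q1^2/2 + q2^2/2 + q1^2*q2 - q2^3/3:
# One initial condition [t0, p1, p2, q1, q2]; section plotted in the (q2, p2) plane
poincare(H, t = -100 .. 100, {[0, 0.1, 0.1, 0.1, 0.1]},
         stepsize = 0.5, iterations = 5, scene = [q2, p2]);
```

The routine integrates Hamilton's equations numerically and plots the intersections of the trajectory with the chosen surface of section.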

I learned two things: 1) the advantages of running numerical computations within a computer algebra environment were just enormous; 2) the drawback at that time was speed. I didn't like the speed. No, it was not satisfactory. C was about 30 to 100 times faster.

In 2000 this speed issue got on my nerves. I talked to Allan Wittkopf, my friend, at that time a Ph.D. student and an expert in C. We came up with some interesting ideas that ended up in the Maple DNA project for fast numerical ODE solutions (google "DNA Maple 6"). Surprisingly, with simple ideas we got a 15x to 30x speed-up - hey! I understood one more thing: the numerical slowness of computer algebra systems was not a real or serious obstacle.

Over the next ten years Allan became the Maple numerical DE guy, great - and voila the speed, Dude. It is there. For real. Even DNA is now in the dust. Try your examples. Compare. I - honestly - believe and fully stand by this answer to polemic question 1. Yes, you can also use a connection Toolbox instead, as you say, and some people will prefer it, perhaps because they already have C or Fortran code they can reuse. Still, I think the CAS worksheet is really better at both small and large scale, for the opportunities for reuse, the symbolic preprocessing not available in Fortran or C, and, yes, also the speed.

About polemic question #2: I agree with you that the computer does not exclude books. I see from your comment that the presentation could be misunderstood as suggesting "this or that". The intention is only to reflect a practice, though: people tend more and more to use one or the other, as in "only go to the books if the computer (including the web) is insufficient". And not a few people ask me about this regarding DEs. The last time I consulted DE books was, I believe, ~10 years ago. For ODEs the computers went far beyond the textbooks. Not beyond humans. I currently do consult the arXiv.

You mentioned Abramowitz. True, I have my copy too, and always looked at it as a monumental piece of work. But then in 1999 I moved away from the paper version of it (google "abramowitz pdf", second link). Then I saw the DLMF (Digital Library of Mathematical Functions) and Marichev's and other projects emerging as static repositories of special function information. The FunctionAdvisor in Maple came after all that, with a different idea in mind: do not present 'static' information. Instead: process information, interrelating it on the fly, according to the user's input, using an increasing number of new algorithms popping up all around.

There is still a long way to go. I think, however, that the paradigm has shifted already. Textbooks and static presentations are receding in the rear-view mirror. Core pieces of mathematical information, processed on the fly with each-day-more-varied algorithms, are in front of us. This is what is illustrated in the attached presentation.

The core of the special functions part was also presented in "Special Functions in the Digital Age" (IMA 2002), while a previous version of most of this DE material, including the questions, was also presented in the session on "Teaching and Learning Differential Equations" at the CMS 2000 meeting.

Edgardo S. Cheb-Terrab 
Physics, Maplesoft

Hi
Thanks for your comments. The learning curve: basic or advanced quality textbooks? Struggling to do seemingly trivial things, like casting expressions into certain forms … I'd say compact-and-basic is the remedy, with compact & quality in bold. For teaching in Brazil, at the State University of Rio de Janeiro, I tried to write such a mini-and-basic text for physics students; I may revamp it. For the advanced case: what would you like to see in such a text that would smooth out the learning curve?

Examples and textbook notation: I'm glad to hear that you find these examples useful. They are mostly the ones shown in ?Physics,Examples. Physics and mathematical methods are not the same thing, and Feynman's and Landau's books (not only theirs, of course) are great in that their problems illustrate both for real. BTW, three of the four examples under "Mechanics" are from Landau's vol. 1.

The 'notation' issue actually refers to keyboard input and the display of results on the screen, so not to palettes. To enter things, say, as you do when computing with paper and pencil, and to see the output as you see it in textbooks. It is impressive to me how this speeds up matters in my brain. Even after working with computer algebra for so many years, I'm still not used to (it sounds alien to me) a computerish style that represents mathematical objects so differently from the way we do it by hand, and ditto for the output. In Physics, every effort is put into not doing that.

New: we now provide access around the clock to the version of Physics under development (http://www.maplesoft.com/products/maple/features/physicsresearch.aspx). This includes ways for you to present your suggestions, report bugs, and give feedback. The downloadable package, post 17.01, is at "zero known bugs" at this moment, and we intend to keep it that way. No more waiting for adjustments until the next release.

Edgardo S. Cheb-Terrab
Physics, Maplesoft




restart; with(Physics)

 

The default metric already has the signature [+,-,-,-] you are asking about. Just recall that the component "0" is mapped into the component "4", and that the signature you ask about, the same one used by default in the Landau books and others, is

 

 

g_[]

g_[mu, nu] = (Matrix(4, 4, {(1, 1) = -1, (1, 2) = 0, (1, 3) = 0, (1, 4) = 0, (2, 1) = 0, (2, 2) = -1, (2, 3) = 0, (2, 4) = 0, (3, 1) = 0, (3, 2) = 0, (3, 3) = -1, (3, 4) = 0, (4, 1) = 0, (4, 2) = 0, (4, 3) = 0, (4, 4) = 1}))

(1)

 

You can always also use 0 or 4 interchangeably to refer to the components, as in:

 

g_[0, 0] = g_[4, 4];

1 = 1

(2)

 

Independently of that, the keyword 'signature' is from old times and needs to be either removed or updated, because it allows only for [+,-,-,-] or [+,+,+,+] (adapted to the dimension of spacetime, which is also settable).

 

The issue is that, in current versions of Physics, you can actually set the metric to whatever you want, making that value of 'signature' irrelevant. For example:

 

 

Setup(coordinates = cartesian, metric = a*dx^2+b*dy^2+c*dz^2+d*dt^2);

`* Partial match of  'coordinates' against keyword 'coordinatesystems'`

 

`Default differentiation variables for d_, D_ and dAlembertian are: `*{X = (x, y, z, t)}

 

`Systems of spacetime Coordinates are: `*{X = (x, y, z, t)}

 

[coordinatesystems = {X}, metric = {(1, 1) = a, (2, 2) = b, (3, 3) = c, (4, 4) = d}]

(3)

g_[]

g_[mu, nu] = (Matrix(4, 4, {(1, 1) = a, (1, 2) = 0, (1, 3) = 0, (1, 4) = 0, (2, 1) = 0, (2, 2) = b, (2, 3) = 0, (2, 4) = 0, (3, 1) = 0, (3, 2) = 0, (3, 3) = c, (3, 4) = 0, (4, 1) = 0, (4, 2) = 0, (4, 3) = 0, (4, 4) = d}))

(4)




Download you_can_set_g_to_wha.mw

@ecterrab 




@peter137 Yes, I have additional documentation in mind; that is key in this project, and chunks are added at each release - with some people now complaining about the current large size of ?Physics,examples ... These examples are mainly from the Landau books or from the other books shown in the references (in the footnote of ?Physics).

What would be your suggestion for showing a use of the package that is at the same time of interest to you and, you think, of wide-ranging interest?

Edgardo S. Cheb-Terrab
Physics, Maplesoft


@Alejandro Jakubi As said in this and another thread, this problem is fixed. But you are suggesting something else. Note that in the output you show from simplify/size you have w * conjugate(w) - that is a non-simplified product, and as such it is not good as the output of a simplify command. The output after the fix is in fact like the one you show from simplify/size, but with that w * conjugate(w) also simplified.

Beyond that, there is simplify/size itself, still being discovered. To have in mind: when writing it, I tried thoroughly to avoid performing any mathematical operation - nothing - only collecting coefficients (including those of frozen subexpressions) and, at most, the decomposition of fractional powers, which can be done fast. Anything else would ruin the performance of this wonderful routine, which was originally developed for, and is systematically in use in, all the Maple DE, Physics and MathematicalFunctions code; it is not used by simplify by default only because of backwards-compatibility debates.
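
A minimal, hedged illustration of calling it directly (the expression is made up; as described above, simplify/size only regroups the expression to reduce its length, performing no mathematical operations):

```maple
expr := a*x + a*y + b*x + b*y:
# The 'size' option asks simplify to minimize the length of the expression,
# essentially by collecting coefficients, without mathematical simplification
simplify(expr, size);
```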

And one idea to keep in mind: wouldn't it be beautiful if *every* output, not just that of simplify, were always preprocessed with simplify/size? With exceptions made only (off the top of my head) for the output of collect, factor and expand. I am talking about a kind of setting, one that you could turn ON/OFF optionally, say as simply as typing > on; or > off;, and voila, the thing activated/deactivated :) No? :) I think so, and I work every day with this feature implemented - it becomes addictive. At every release I consider adding it to the system. Perhaps ... It would need some tuning, but mostly it means facing a significant break with so many years of (for me boring, or frequently wallpaper) computer algebra output.

Edgardo S. Cheb-Terrab
Physics, Maplesoft

Hi
The comparison you mention consists mainly of 3 pages, of which less than half is about mathematical features (starting at 'Graph theory' and finishing at 'Cayley tables'), so to begin with there are obvious and significant omissions.

To mention but a few in areas I am more familiar with: the new Maple 17 functionality in Physics, Differential Equations and Differential Geometry - not mentioned at all in the comparison - is not only not available in Mathematica 9 or previous versions, but is also probably 5 to 10 years of development ahead of what you see in Mathematica. This is the opposite of what is claimed in that comparison advertisement. Moreover, this distance has been favorable to Maple, consistently, since the year 2000, and it consistently increases every year. For Differential Equations the difference started shockingly (that is the word) at least as early as 1997, and Mathematica just never recovered.

Independently of these stark omissions, regarding the half page of mathematics that is mentioned in that comparison: either the person who wrote it does not understand the topics, or they distorted the facts.

For example, quoting literally from this comparison you mention, it says that Maple 17: "a) introduced Visualization: visualize branch cuts, b) this Maple's functionality is not equivalent to Mathematica's automatic branch cut detection—it allows you to request a visualization of the branch cuts of individual functions but does not detect branch cuts in functions that you try to plot, c) that Mathematica introduced this in 2007". Statement c) is supposed to indicate that Mathematica introduced this functionality before Maple.

Both statements a) and b) about Maple are incorrect and the implication of statement c) is false.

Maple introduced visualization of branch cuts with its plots:-plotcompare not in Maple 17 but already in 2002, so 5 years before Mathematica, and in a way still not seen in Mathematica 9; the Maple visualization is not restricted to the cuts of individual functions, and of course the Maple plotting routines detect the branch cuts of the expressions you try to plot. See for instance the examples in the help page ?plotcompare. More important, Maple 17 a) introduced *algebraic representations* for the branch cuts of *algebraic expressions*, not just individual functions, and this functionality does not exist at all in Mathematica - in fact it is new research work, see the references in ?FunctionAdvisor,branch_cuts; and b) Maple introduced *algebraic representations* for the branch cuts of individual functions already in 2003, with the FunctionAdvisor project, and neither those algebraic representations for the cuts even of individual functions, nor anything like the FunctionAdvisor project, ever existed in Mathematica - also not in the new Mathematica 9.

You see ...

Edgardo S. Cheb-Terrab
Physics, Maplesoft

This one actually got fixed on June 19, before you posted. The problem you mention is the same one mentioned in http://www.mapleprimes.com/questions/148559-Problem-With-Code-Calculation-In-Maple-17 (I gave some details there). In short, simplify/conjugate, a very old routine (predating 1994), got entirely rewritten.

Edgardo S. Cheb-Terrab
Physics, Maplesoft

Hi Carl,
There is a line (showstat(`simplify/conjugate`, 11)), in place since 1994 as far as I can see, that is problematic in that it introduces significant algebraic complication (see the reply by Alejandro) while trying to simplify f*conjugate(f) = abs(f)^2. Until Maple 15 inclusive, the result of that attempt was discarded whenever it didn't result in 'less conjugate(..)' around, and thus the algebraic complication it would introduce went unnoticed. But the logic underlying these lines 11 to 13 fails in an example like f*conjugate(f) + conjugate(f), where replacing the first term by abs(f)^2 does not lead to 'less conjugate(..)' around - you still have conjugate(f); this got fixed in Maple 16, but then the result of this line 11 does not always end up discarded, in M16 through today ... and then you have the snowball I see with the example posted in this thread. Of course the real problem is that, in lines 11..13, the simplification f*conjugate(f) -> abs(f)^2 can be done more economically, avoiding these algebraic-swelling problems. It is easy to fix. Unfortunately I do not see a workaround for you without editing the code. I am not sure the fix can still go into 17.01.

Edgardo S. Cheb-Terrab
Physics, Maplesoft


dsolve has a number of optional arguments that may be of use to you, among others: useInt (instead of int), to avoid computing the integrals entering the solution (which may result in a LambertW function), or 'implicit', to avoid solving the implicit form of the solution for y (solving may result in LambertW). See ?dsolve,details; at other times, choosing the solving method as explained in that page may also help, typically for nonlinear higher-order equations.

So, for instance, regarding your example x - y/y' = y, try dsolve(ode, implicit) and you receive x - (_C1 - ln(y)) y = 0.
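
In full, a hedged sketch of the two calls (the ODE is entered here solved for y', which is one way of writing the example above; the option name is as documented in ?dsolve,details):

```maple
# x - y/y' = y, rewritten as y' = y/(x - y)
ode := diff(y(x), x) = y(x)/(x - y(x)):
dsolve(ode);            # explicit form; may involve LambertW
dsolve(ode, implicit);  # implicit form; avoids solving for y(x)
```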

 

Edgardo S. Cheb-Terrab
Physics, Maplesoft

@Alejandro Jakubi You compute. Things are displayed - not as you input them. There are, then, two different processes happening. In Maple, part of the display (typesetting) is also implemented at the library level, as if it were a computation. That doesn't mean you compute with the display, hey! It also doesn't mean that there is some kernel connection missing, as you suggest.

The idea is simple though. Here is an example of something working in Maple 17:

> PDEtools:-declare(A(x,y));
             A(x, y) will now be displayed as A

> A(x,y);
                                    A

So you input A(x, y) and it is displayed as A. Is the above equal to A or A(x, y)? Check it with lprint

> lprint(%);
                                  A(x, y)

So the display is one thing, and the object behind this display is another one. Now, you compute with the object behind, not with the display, of course, so if you input > %; it refers to A(x, y), not to the letter A with no dependency.

This display is implemented at the library level, as a "computation", in the Maple language. Hence you can reproduce the display itself, without the object behind it. How? In this case, by invoking the `print/A` function that produces it:

> `print/A`(x, y);
                                    A

Note that this time there is no object behind, it is pure display:

> lprint(%);
Typesetting:-mcomplete(Typesetting:-mi("A"), Typesetting:-mrow(Typesetting:-mi("A"), Typesetting:-mo("⁡"), Typesetting:-mfenced(Typesetting:-mi("x"), Typesetting:-mi("y"))))

Of course you do not intend to compute with display. And why not just 'A' for output of `print/A`? Because we want copy and paste working. That is what mcomplete provides within display: if you mark the A returned by `print/A`(x, y) above, copy, and paste into a new line, you receive A(x, y), not A nor the Typesetting structure.

Summarizing: the same approach can be used to have the display of v_satellite^2 be the same as that of v[satellite]^2, while you continue computing with v_satellite^2 (there is no kernel connection missing whatsoever). Implementing this with mcomplete resolves the copy-and-paste issue. This shows that what Erik is asking for is perfectly doable in current Maple. Not by you, because you do not have access to the display of powers, but yes by the people who do have that access, in case they agree with Erik's suggestion.

Edgardo S. Cheb-Terrab
Physics, Maplesoft

 

 

 

 


@Alejandro Jakubi Note I am saying "this approach could be implemented within Typesetting for the purpose of having powers of the new subscripted atomic variables displayed more correctly". I.e.: display. See how a Physics:-Ket displays - and no, you do not compute with the display. Within Typesetting, as also within the old print/foo, the display is totally independent of the object being displayed. That is why copy & paste doesn't work with print/foo: you paste "display". But now with Typesetting:-mcomplete - an internal mechanism, not documented for that reason, subject to change in the next releases, etc. - still, it is a proof of concept that there is an easy way to resolve the issue posted, already in Maple 17.0x.

I do not advertise unimplemented features - it doesn't matter to me what Wolfram does, nor do I see him as the paradigm; nor do I think he really advertises unimplemented features ... but that is beside the point, a topic I'm not interested enough in to spend time on. PrintTools is just a natural thing to think of to address the limitations of print/foo, after the advent of TypeTools addressing the equivalent problem regarding type/foo. ExpandTools, SimplifyTools, etc. But all these were mentioned at other times as well, by other people.

Anyway, until the limitations of `command/foo` are addressed, we can still use them for what they do well, and in the case of `print/foo`, use it with mcomplete to provide the copy-and-paste functionality.

Regarding development with constant user feedback: the idea is old, and many of us think it is good, but the exact implementation of it is not straightforward. Anyway, MaplePrimes - and now it seems we have a moderator who helps - is a forum that can be used concretely for this purpose of a collaborative development environment. You need to keep in mind, however, that if we were to reply to all the different opinions expressed and the derivations of them (e.g. in this thread), we would - honestly - have little time to do any development.


Edgardo S. Cheb-Terrab
Physics, Maplesoft
