Alejandro Jakubi

MaplePrimes Activity


These are replies submitted by Alejandro Jakubi

@acer 

This table in ASCII format contains some useful information.

@ecterrab 

Indeed, my experience is that keeping expression size/complexity as low as possible along the intermediate steps frequently helps in obtaining simpler results. Certainly, for linear systems, the arbitrariness in the choice of the solution basis plays a role in the simplicity of the solution. But in this example a more important role is probably played by the trigonometric functions in the coefficients. Expressions containing them may swell in size if the trigonometric identities are applied the "wrong" way. I suspect that there is nothing beyond heuristics (i.e. no general theory or algorithm) for transforming (generic) trigonometric expressions into their smallest form. So, if you start from shorter expressions, and diagonalization provides them, it seems to me more likely that the result will end up simpler. It also looks likely that smaller intermediate expressions imply less memory and time usage for the whole computation.
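As a toy illustration of this swell effect (sketched here in Python with SymPy rather than Maple, purely for illustration; the expression is my own example, not one from the thread), two equivalent trigonometric forms can differ noticeably in size:

```python
# Equivalent trigonometric forms of the same function can differ in size;
# picking the "wrong" rewrite direction makes expressions swell.
import sympy as sp

x = sp.symbols('x')
compact = sp.cos(2*x)                 # small form
expanded = sp.expand_trig(compact)    # rewritten via the double-angle identity

# Both forms represent the same function...
assert sp.simplify(compact - expanded) == 0
# ...but the expanded form carries more operations.
assert sp.count_ops(compact) < sp.count_ops(expanded)
print(expanded, sp.count_ops(compact), sp.count_ops(expanded))
```

In a long computation, applying such identities in the growing direction at each intermediate step compounds quickly, which is why starting from (and maintaining) shorter forms tends to pay off.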

In particular, I see no problem in using the assumption that the independent variable is real, if that were necessary to get a simpler solution, as long as that is all that is needed. Indeed, this is frequently the case in applications where the independent variable is e.g. time or a coordinate.

I understand your point that the matrix approach was the standard way until the mid-90s. However, now that the differential-algebraic approach is in place, it seems interesting to compare the two again, the more so as the linear algebra sector of the system has changed a great deal in the meantime. In particular, I have noticed that DEtools[matrixDE] seems somewhat frozen at linalg times.

@J F Ogilvie 

Once again, the experimental data in the package is quite outdated; see ?Screfs . E.g. it states that CODATA was last accessed eleven years ago... Also, some links in this help page are already broken. The most reasonable explanation of this situation is that this subject is of too low a priority for Maplesoft management to be properly maintained with the available human resources, even as experimental data keeps coming in.

Given this situation, in my opinion it is a waste of time to request that Maplesoft maintain it. The most sensible request to Maplesoft would be that the code related to this data become open source, so that it could be maintained by the community. I do not think that anything else in the system depends on this code, so nothing else could be affected by what might be put there. Nor do I see that Maplesoft could lose any revenue (likely, it would be the other way around).

Preben has shown the steps to obtain the solution in simple form, except for showing the origin of the algebraic steps that decouple the system (adding and subtracting equations and dependent variables). This gap can be filled by writing the system in matrix form, diagonalizing the matrix, solving the uncoupled system, and rewriting back in terms of the original variables. For the record, I show this route here as well. The advantage of this approach is that it generalizes to any linear system whose matrix can be diagonalized. I also compare the time and memory usage that I get with respect to using dsolve in the default way (which follows differential-algebraic decoupling instead). Roughly, these numbers show a 10x improvement.

> M:=<<-(4*cos(x))/(sin(x)*(cos(x)^2-9))|
> -(cos(x)^2+3)/(sin(x)*(cos(x)^2-9))>,
> <-(cos(x)^2+3)/(sin(x)*(cos(x)^2-9))|-(4*cos(x))/(sin(x)*(cos(x)^2-9))>>:

> st,ba,bu := time(),kernelopts(bytesalloc),kernelopts(bytesused):
> L,V:=LinearAlgebra:-Eigenvectors(M) assuming real;
L, V :=

    [                    2       2            2           2 1/2 ]
    [-4 cos(x) + (-cos(x)  sin(x)  + 16 cos(x)  + 9 sin(x) )    ]
    [---------------------------------------------------------- ]
    [                                 2                         ]
    [                   sin(x) (cos(x)  - 9)                    ]  [-1    1]
    [                                                           ], [       ]
    [                     2       2            2           2 1/2]  [ 1    1]
    [  4 cos(x) + (-cos(x)  sin(x)  + 16 cos(x)  + 9 sin(x) )   ]
    [- ---------------------------------------------------------]
    [                                  2                        ]
    [                    sin(x) (cos(x)  - 9)                   ]

> L1:=simplify(L) assuming real;
                             [       cos(x) - 1        ]
                             [   -------------------   ]
                             [   (cos(x) + 3) sin(x)   ]
                       L1 := [                         ]
                             [         sin(x)          ]
                             [-------------------------]
                             [(cos(x) - 1) (cos(x) - 3)]

> MD:=LinearAlgebra:-DiagonalMatrix(L1);
                  [    cos(x) - 1                                  ]
                  [-------------------                0            ]
                  [(cos(x) + 3) sin(x)                             ]
            MD := [                                                ]
                  [                                sin(x)          ]
                  [         0             -------------------------]
                  [                       (cos(x) - 1) (cos(x) - 3)]

> SD:=DEtools[matrixDE](MD,x)[1] ;
                         [            1/2                   ]
                         [(1 + cos(x))                      ]
                         [---------------           0       ]
                         [            1/2                   ]
                         [(cos(x) + 3)                      ]
                   SD := [                                  ]
                         [                               1/2]
                         [                   (cos(x) - 1)   ]
                         [       0           ---------------]
                         [                               1/2]
                         [                   (cos(x) - 3)   ]

> CC:=<_C1,_C2>:
> 
> S:=V^(-1).SD.CC;
                [                 1/2                       1/2    ]
                [     (1 + cos(x))    _C1       (cos(x) - 1)    _C2]
                [-1/2 ------------------- + 1/2 -------------------]
                [                   1/2                       1/2  ]
                [       (cos(x) + 3)              (cos(x) - 3)     ]
           S := [                                                  ]
                [                1/2                       1/2     ]
                [    (1 + cos(x))    _C1       (cos(x) - 1)    _C2 ]
                [1/2 ------------------- + 1/2 ------------------- ]
                [                  1/2                       1/2   ]
                [      (cos(x) + 3)              (cos(x) - 3)      ]

> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                            0.304, 2101248, 16744216

> odetest({y(x) = S[1], z(x) = S[2]}, [sys[1], sys[2]]) ;
                                     [0, 0]

For comparison, these are the usage numbers that I get from dsolve:

> CodeTools:-Usage(dsolve(sys, [y(x), z(x)]));
memory used=31.6MB, alloc=64.3MB, time=0.67
memory used=87.5MB, alloc=68.3MB, time=1.62
memory used=143.6MB, alloc=68.3MB, time=2.44
memory used=155.60MiB, alloc change=62.01MiB, cpu time=2.59s, real time=3.13s       
[long output]

These numbers were obtained with Maple 17.01 CLI on Linux 32-bit.

The big improvement in simplicity and efficiency of this method in this example suggests that it may be worthwhile to compare it systematically against dsolve on a sample of ODE systems whose matrices are diagonalizable.

@Christopher2222 

Actually, two things would be desirable, in my opinion.

1. A more frequent publication of updates of packages and other components of the system.

2. A package manager. It could warn when new versions become available, download them from a repository, and install them automatically, together with any other required components. I see this scheme as much more practical than downloading individual files and installing them by hand, or using custom installers made for every update.

@ecterrab 

The class of problems is given by the set of matrices that can be diagonalized, a well-known area of linear algebra. So I would not call it "nada"... I agree that matrixDE decoupling should be compared with differential-algebraic lexicographical decoupling on a good sample of ODE systems, for both simplicity of results and efficiency, provided that both give correct results (I do not see how any advantage could be shown otherwise, and certainly I do not have time to go into such testing). In this particular case, the matrix method is a clear winner in both simplicity and efficiency (for the record, I will add a post in that thread later). Additionally, I do not know whether the applicability sets of these two methods coincide. If neither is included in the other, there would be an additional motivation for looking into matrix decoupling, I think.

@ecterrab 

About the ODE system in "Play a simple melody": I think that Preben has shown the steps to obtain that solution in simple form, except for showing the origin of the algebraic steps that decouple the system (adding and subtracting equations and dependent variables). This gap can be filled by writing the system in matrix form, diagonalizing the matrix, solving the uncoupled system, and rewriting back in terms of the original variables. I have just tried these steps and it works fine (and actually more efficiently than dsolve on the original system). So, a priori, rather than an expectation difficult to fulfill, it looks to me like something quite straightforward and convenient to implement.
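The route can be sketched outside Maple as well. Here is a minimal illustration in Python with SymPy, on a toy constant-coefficient system of my own choosing (not the system from that thread): diagonalize the coefficient matrix, solve the uncoupled scalar equations, and transform back.

```python
# Decoupling a linear ODE system x' = M x by diagonalization (toy example).
import sympy as sp

t = sp.symbols('t')
C1, C2 = sp.symbols('C1 C2')

# Toy coupled system: y' = z, z' = y, i.e. coefficient matrix M below.
M = sp.Matrix([[0, 1], [1, 0]])

# Diagonalize: M = V * D * V**-1
V, D = M.diagonalize()

# In the variables u = V**-1 * (y, z) the system decouples to
# u_i' = D[i,i] * u_i, solved directly by exponentials.
u = sp.Matrix([C1 * sp.exp(D[0, 0] * t),
               C2 * sp.exp(D[1, 1] * t)])

# Rewrite back in terms of the original variables.
sol = V * u

# Check that the solution satisfies the original system x' = M x.
assert sp.simplify(sol.diff(t) - M * sol) == sp.zeros(2, 1)
print(sol)
```

The same scheme carries over verbatim to any linear system whose (constant or suitably structured) matrix can be diagonalized; for variable coefficients, the scalar equations are solved by quadrature instead of plain exponentials.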

Yes, it converges for the truncated sum, starting from Digits=10, by using a different algorithm:

> Digits:=10:
> `evalf/Sum1`(I^n*BesselJ(n,2*34^(1/2))*exp(-I*n*arctan(5/3))*exp(1/4*I*n*Pi),
  n = 1 .. 30);
                       0.05400552045 - 0.7873756700 I

> `evalf/Sum1`(I^n*BesselJ(n,2*34^(1/2))*exp(-I*n*arctan(5/3))*exp(1/4*I*n*Pi),
  n = 1 .. infinity);
           infinity
            -----
             \      n                1/2
              )    I  BesselJ(n, 2 34   ) exp(-I n arctan(5/3)) exp(1/4 I n Pi)
             /
            -----
            n = 1

> Digits:=24:
> `evalf/Sum/infinite`(I^(1+i)*BesselJ(1+i,2*34^(1/2))*exp(-I*(1+i)*arctan(5/3))
  *exp(1/4*I*(1+i)*Pi), i, false, true);
         0.0540055204234803729319964 - 0.787375670007105789851047 I

So, which method should an improved algorithm try first: raising Digits, or truncating?
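For what it is worth, the truncated sum can be cross-checked outside Maple. A minimal sketch in Python with mpmath (an illustration only; the reference values are those quoted from Maple above):

```python
# Cross-check of the truncated sum
#   sum_{n=1..30} I^n * BesselJ(n, 2*sqrt(34)) * exp(-I*n*arctan(5/3)) * exp(I*n*Pi/4)
import mpmath as mp

mp.mp.dps = 20                                # working precision (digits)
z = 2 * mp.sqrt(34)
# The two exponential factors combine into a single phase per term.
phi = mp.pi / 4 - mp.atan(mp.mpf(5) / 3)

s = sum(mp.mpc(0, 1) ** n * mp.besselj(n, z) * mp.exp(mp.mpc(0, n * phi))
        for n in range(1, 31))
print(s)  # should agree with Maple's quoted 0.05400552045 - 0.7873756700*I
```

Since BesselJ(n, z) decays very fast once n exceeds the argument (here about 11.7), truncation at n = 30 already reproduces the quoted 10-digit value, consistent with the convergence observed above.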

I find your post quite interesting. About a decade ago, a new Java sector was added to the Maple system, and it has been actively developed ever since, notably in the area of the Standard GUI. In this sense, I think that Java code has been given a high priority. However, a lot of this new functionality is available only through graphical interaction, for instance in the area of plotting. This subthread provides an interesting illustration of the apparent difficulty in providing a programmable interface to this functionality, as described by the developer. So, I wonder how far the limitations that you describe for the programmatic creation of Java objects could explain the alleged difficulty of implementing more programmatic access to this Java functionality.

@Mac Dude 

Yes, probably this list (as some other parts of Maple) has a Unix bias. In particular, the help from the menu of this X11 device window states:

Under the File menu, Hardcopy allows you to create a file to be printed of the plot.  There is a choice of Postscript, Color Postscript, LN03, Imagen, PIC, UNIX Plot or HP Laserjet formats.

Note in any case that the X11 device (or x11) works in the CLI but not in Standard (on Linux).

@dharr 

Remember that 2-D math is a "somewhat different language"; see e.g. this post (former blog).

@Lark 

No, it is not me who would have more success, but you. This Linux Mint installation has the latest updates, and for this Linux distro they come every day or so. In particular, IcedTea and family were updated just a while ago, but it makes no difference. So, if you are not addressing some of the most used OSs/browsers, it is your problem...

On the other hand Axel's security concerns are quite valid, and a non-Java, more standard implementation would be much better.

@Lark 

Altering a stable distribution by introducing alternative untested components does not seem like good advice. Rather, if this site is intended to reach a wide audience, it should implement the required fallbacks.

@PatrickT 

I have just tried with Chrome and Chromium on the same Linux Mint 13, with versions:

$ chromium-browser --version
Chromium 28.0.1500.52 Built on Ubuntu 12.04, running on LinuxMint 13

$ google-chrome --version
Google Chrome 28.0.1500.71 

With (apparently) the same "Java" plugin, after about:plugins,

IcedTea-Web Plugin (using IcedTea-Web 1.2.3 (1.2.3-0ubuntu0.12.04.2)) - Version: 1.2.3

the results are quite different. Following the same steps as described above for Firefox, Chrome does not work either, also becoming very unresponsive and issuing a similar (if not the same) long list of Java error messages to the console. On the other hand, with Chromium all the examples apparently load fine, and no error messages are thrown to the console. Really intriguing...
