Items tagged with "blog"

Here's a first working shot at an external, programmatic mechanism for opening .mw worksheets/Documents as new tabs in an already running Maple Standard GUI session.

This involves a `sh` shell script, runnable in Unix/Linux/OSX/cygwin. Maybe someone could post a MS-Windows .bat batch file equivalent.

The basic idea is this: you have a GUI session open. But you want to be able to open other .mw files in that session without having to go through the GUI's File->Open menu every time ...

Maple's Standard GUI has context-sensitive menus. Those are the menus that appear when one right-clicks on output (or on input, in a Document).

Those context-menus can be customized.

Below is an example which adds a new submenu. The new submenu is populated automatically according to the types of the things found within the object itself.

It's an alternative to a menu-item that already exists, which shows up as "Help on Command". But that existing item only...
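
To give a flavour of the mechanism, here is a rough sketch of my own, assuming the ContextMenu package's Entries:-Add command. The caption, the "%EXPR" placeholder action, and the type filter below are only illustrative; the automatically populated submenu in the post goes well beyond a single static entry.

ContextMenu:-CurrentContext:-Entries:-Add(
    "Integer factorization",     # caption shown in the right-click menu
    "ifactor(%EXPR)",            # action applied to the selected expression
    'integer'                    # only offer this entry for integer output
);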

In this previous post, an example is shown that demonstrates the potential problems that can arise following symbolic conversions such as from sqrt(x^2) to x^(1/2).

Here x is an unknown symbol. The difficulties include the fact that, while `sqrt` can be smart about simplifying numeric values (e.g. integers, rationals), the `^` operator has no such opportunity. Once the conversion from `sqrt`...

subsop example...

> q := x^(1/2):

> type(q, `^`);
                                     true
> op(0,q);
                                       ^

> subsindets(q, `^`, f->subsop(0=H,f));
Error, (in unknown) improper op or subscript selector

What's wrong with that last one? It's modelled on the first Example in the ?subsindets help-page.

Are there cases where...
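
A small experiment, not from the original post, that may help isolate where things go awry: apply the same zero-operand substitution directly, once to an unevaluated function call and once to the power itself. My recollection is that only the former succeeds, since op 0 of a `^` structure is not a replaceable operand in the way a function name is.

> subsop(0 = H, g(x, y));   # replacing op 0 of an unevaluated function call
> subsop(0 = H, x^(1/2));   # the same attempt, directly on the power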

It would be nicer if the nprofile command-line utility had its own script in $MAPLE/bin, similar to the maple and xmaple scripts.

That is the place where one usually looks for the Maple executables, or which one appends to one's PATH. A single location makes more sense and is easier.

It would also be nicer if the nprofile help-page mentioned something like the exprofile help-page's comment that, "The preferred method for creating the output file is with writeto() and/or appendto()."

I was reminded of this by another thread.

It is faster to add a large storage=sparse float[8] Matrix in-place into a new, empty storage=rectangular float[8] Matrix than it is to do that conversion using the Matrix() or rtable() constructors.

Here's an example. First I'll do it with in-place Matrix addition. And then after that with a call to Matrix(). I measure the time to execute as well as the increase in bytes-allocated and bytes-used.

> with(LinearAlgebra):

> N := 500:
> A := RandomMatrix(N,'density'=0.1,
>                   'outputoptions'=['storage'='sparse',
>                                    'datatype'=float[8]]):

> st,ba,bu := time(),kernelopts(bytesalloc),kernelopts(bytesused):

> B := Matrix(N,'datatype'=float[8]):
> MatrixAdd(B,A,'inplace'=true):

> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                            0.022, 2489912, 357907
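
The excerpt stops before the second measurement. For reference, the constructor-based conversion being compared against could look something like the following (my sketch: the 'storage'='rectangular' option is my assumption about how the target storage was forced, and I have not reproduced the timing figures):

> st,ba,bu := time(),kernelopts(bytesalloc),kernelopts(bytesused):

> C := Matrix(A,'storage'='rectangular','datatype'=float[8]):

> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;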

Suppose you want to solve a large dense linear system AX=B over the rationals - what should you do? Well, one thing you should probably not do is directly apply Gaussian elimination. It does O(n^3) arithmetic operations, but the size of the numbers blows up, leading to an exponential bit complexity. Don't believe me? Try it:

with(LinearAlgebra):
for N from 5 to 9 do
  A := RandomMatrix(2^N, 2^N+1,generator=-10^5..10^5):
  TIMER := time(GaussianElimination(A...
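
The excerpt cuts off mid-call; one guess at how such a timing loop continues (the printf formatting is mine):

with(LinearAlgebra):
for N from 5 to 9 do
  A := RandomMatrix(2^N, 2^N+1, generator = -10^5..10^5):
  TIMER := time(GaussianElimination(A)):
  printf("size %d : %.2f seconds\n", 2^N, TIMER);
end do: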

As Demmel and others have noted, SVD is both more reliable and more expensive than QR as a method of solving rank-deficient least squares problems.

SVD is the method that LinearAlgebra:-LeastSquares will choose when the Matrix has more columns than rows (n>m), unless instructed otherwise using the optional 'method' parameter.
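
As a small illustration of the 'method' option (my own sketch, not from the post; here the shape is overdetermined, and I am assuming 'SVD' is among the accepted method values):

with(LinearAlgebra):
m, n := 2000, 20:      # overdetermined shape, where the QR-based path is the default
A := RandomMatrix(m, n, 'outputoptions' = ['datatype' = float[8]]):
b := RandomVector(m, 'outputoptions' = ['datatype' = float[8]]):
x1 := LeastSquares(A, b):                    # default method for this shape
x2 := LeastSquares(A, b, 'method' = 'SVD'):  # explicitly request the SVD-based solver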

LinearAlgebra:-SingularValues always computes a full U and Vt. But for least squares computations, such as when n>m, this is not necessary. Including the smaller singular values may just be (re-)introducing noise. See here for more detail.

Here's a 20x2000 example, using wrapperless external calling and the SVD routine dgesvd in the CLAPACK library. The effective speedup from using the thin SVD for that 20x2000 least squares example is about a factor of 100 (i.e., 2000/20), with a similar reduction in additional memory allocation.

I have been wondering how an independent patch Library for Maple would be received.

I'll lay out a few ideas, and then maybe some criticism (or indifference) might follow.

I'm a bullet-point sort of person:

  • sourceforge project with a few willing experts for vetting and gatekeeping of submissions.
  • Patch .mla Library archive, going into /toolbox/Patch/lib/ so as to get picked up automatically by libname.
  • Self-building, with...

A few weeks ago I mentioned the ncrunch comparison of "mathematical programs for data analysis" in a comment in another thread.  There is now a new, 5th release of that review. The systems reviewed are:

  • GAUSS
  • Maple
  • Mathematica
  • Matlab
  • O-Matrix
  • Ox
  • SciLab

The review is skewed towards statistical computation and data manipulation, but it includes several interesting comparisons of the major computer algebra systems (CAS).

There is a comparative performance section, and the worksheets used for that benchmarking are available for download. Here is the Maple worksheet, which was used with Maple 11.

There have been a few posts on mapleprimes about numerically solving systems of procedures. The most recent of those, until now, was this.

Here's some code to implement the method. Since the algorithm is basically very simple, I've added a few bells and whistles as optional arguments.

The essence of it is as follows. The number of procedures must match the number of parameters of each and every procedure. It makes at most maxtries attempts at choosing a random starting point, and then does at most maxiter iterations. A solution is accepted only if the norm of the last change in the vector (point) x is less than xtol, and if the forward error norm(F(x)) is less than ftol. The Jacobian of F may optionally be supplied as a Matrix of procedures, or a method for computing the Jacobian may be supplied. The methods are fdiff, which only uses Maple's numerical differentiation routine fdiff, or hybrid, which attempts symbolic differentiation via Maple's D[] operator and then falls back to fdiff via the nifty evalf@D equivalence.
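
To give a flavour of the core iteration, here is a bare-bones Newton-type sketch of my own. It is not the posted code: it omits the maxtries random restarts and most safeguards, the two example procedures are made up, and the Jacobian is formed with the evalf@D trick mentioned above.

# Two procedures in two unknowns (purely illustrative).
f1 := proc(x, y) x^2 + y^2 - 4 end proc:
f2 := proc(x, y) exp(x) + y - 1 end proc:
F := [f1, f2]:  n := 2:
X := Vector([1.0, -1.0]):        # starting point
xtol := 1e-10:
for iter to 25 do
    FX := Vector([seq(evalf(F[i](seq(X[j], j=1..n))), i=1..n)]);
    # Jacobian entries: symbolic D where possible, numeric fallback under evalf
    J := Matrix(n, n, (i, k) -> evalf(D[k](F[i])(seq(X[j], j=1..n))));
    dX := LinearAlgebra:-LinearSolve(J, -FX);
    X := X + dX;
    if LinearAlgebra:-Norm(dX, 2) < xtol then break end if;
end do:
X;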

This forum question led to a discussion of a bitwise magazine review that compared Mathematica 5.2 and Maple 10. In that review the author struggled to get the following numeric integral to compute accurately and quickly in Maple.

evalf(Int(BesselJ(0, 50001*x)*x*exp(I*(355*x^2*1/2)), x = .35 .. 1));

Below, I reproduce an attempt at computing an accurate result quickly in Maple. I'm copying it here because that thread got quite long and messy.
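
For context, the knobs such an attempt ends up turning are mainly Digits and the accuracy/method options of evalf/Int. The following is only a generic sketch of that kind of control, not the resolution reached in that thread; splitting into real and imaginary parts is my own framing, and _d01akc is merely a plausible NAG method choice for an oscillatory integrand.

Digits := 15:
re_part := evalf(Int(BesselJ(0, 50001*x)*x*cos(355*x^2/2), x = 0.35 .. 1,
                     'epsilon' = 1e-8, 'method' = _d01akc)):
im_part := evalf(Int(BesselJ(0, 50001*x)*x*sin(355*x^2/2), x = 0.35 .. 1,
                     'epsilon' = 1e-8, 'method' = _d01akc)):
re_part + I*im_part;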

What could be done with a module whose ModuleLoad routine redefined itself?

Could such a routine do some action, and then cover its tracks effectively by overwriting itself?

Would there be any way to use march() to examine the .mla archive member, in which that ModuleLoad routine is stored, without accessing the name of the module? Presumably any invocation of the actual module name would result in its being accessed from the library and hence trigger its ModuleLoad routine.
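
To make the first question concrete, here is a bare sketch of the sort of module I mean. The names are made up, and whether this genuinely covers its tracks once saved into an .mla is part of what I am asking.

SneakyModule := module()
    option package;
    export Doit;
    local ModuleLoad;
    ModuleLoad := proc()
        # ...perform some one-time action here...
        # then overwrite this very routine with a do-nothing version
        ModuleLoad := proc() NULL end proc;
        NULL;
    end proc;
    Doit := proc() "hello" end proc;
end module:

# Listing the archive's members without evaluating the module's name might
# be attempted with something like march('list', "<path to the .mla>").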

There are some routines in Maple's library which, when called the first time, redefine themselves.

One plausible explanation for this is that the new versions are session dependent (external calls, say) while also more efficient to call (repeatedly).

For example, consider StringTools:-Join which seems typical of that package. First, consider it before it's been called at all.

> restart:
> showstat(StringTools:-Join);
 
StringTools:-Join := proc(...
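
The natural follow-up, cut off in the excerpt above, is to call the routine once and then inspect it again; something along these lines (the particular arguments to Join are just an example):

> StringTools:-Join(["a", "b", "c"], "-"):
> showstat(StringTools:-Join);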

There are a number of facilities in Maple which may be extended. Included amongst those are `type`, `print`, `evalf`, and `latex`. The help-page ?extension_mechanism claims that all the built-in functions allow for extension. It also mentions a few system Library routines such as `verify` (but does not mention `latex`).

There are some descriptions of varying completeness in a few...
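
As a small concrete instance of one such hook, the `type` extension mechanism dispatches type(expr, 'foo') to a procedure assigned to `type/foo`. A toy example of my own:

# Defining `type/evenposint` makes type(expr, 'evenposint') call this procedure.
`type/evenposint` := proc(e) evalb(type(e, 'even') and e > 0) end proc:

type(8, 'evenposint');     # expected: true
type(-2, 'evenposint');    # expected: false
type(x, 'evenposint');     # expected: false (not an integer)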
