acer


MaplePrimes Activity


These are Posts that have been published by acer

This page from the Online Help suggests that GMP 4.2.1 is still being used in Maple 15.

But the latest "stable" release is GMP 4.3.2; see the release notes here (they mention various fixes and improvements throughout the 4.3.x series).

Perhaps this could be updated in a Maple 15.xx service release.

The goal of computing only a select number of eigenvectors of a real symmetric floating-point Matrix comes up now and then. For very large Matrices the memory requirement can be a tighter constraint than the computation time.

The attached worksheet and code compute this, more quickly and with significantly less memory allocation than the usual approach of computing all eigenvectors. By using the supplied Matrix itself as a partial "workspace", the amount of additional workspace and memory allocation for the task is negligible.

For example, once the very large Matrix has been created, essentially no significant further memory allocation is required to compute the largest eigenvalue and its associated eigenvector.

A little about this routine `SelectedEigenvectors` follows.

It only works in hardware double precision. It expects a float[8] datatype Matrix (because you are serious about using minimal memory!). It calls the CLAPACK function dsyevx via the "wrapperless" version of Maple's external-calling mechanism. It seems to work fine on the systems I've tried so far: Maple 13 and 14 on both 32-bit and 64-bit Linux and Windows.
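
For example, a suitable input can be built as follows. This is just a quick sketch: any real symmetric Matrix with datatype=float[8] will do, and symmetrizing a random Matrix is merely one convenient way to get one.

    with(LinearAlgebra):
    n := 1000:
    A := RandomMatrix(n, n, 'generator' = -1.0 .. 1.0,
                      'outputoptions' = ['datatype' = 'float[8]']):
    M := A + A^%T:   # symmetric, and the float[8] datatype is preserved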

Whether it computes and returns the selected eigenvectors (alongside the selected eigenvalues, which are always returned) is controlled by the 'vectors=truefalse' optional argument. By default it uses the Matrix argument as partial workspace and so destroys the original data, but this can be overridden with the 'preserve=true' optional argument. The requested accuracy can be relaxed with the 'epsilon=float' optional argument, which might sometimes speed it up.

The input Matrix is presumed to be symmetric. By default the routine uses the data in the lower triangle, but this can be switched to the upper triangle with the 'uplo' optional argument.

The choice of eigenvalues is controlled by the two integer arguments `il` and `iu`, which index the eigenvalues in ascending order. If il=iu=n then only the largest eigenvalue (the nth in ascending order) is computed. If il=1 and iu=4 then the four smallest eigenvalues are computed.
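
For instance (hypothetical calls; I'm assuming here that il and iu are supplied positionally after the Matrix, but the attached worksheet is the authority on the calling sequence):

    SelectedEigenvectors(M, n, n):   # only the largest eigenvalue
    SelectedEigenvectors(M, 1, 4):   # the four smallest eigenvalues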

It returns three things: a Vector of dimension n whose first m entries are the selected eigenvalues, an n x m Matrix whose columns are the m associated eigenvectors, and a Vector of dimension n whose entries indicate whether the corresponding eigenvectors failed to converge.
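
So a session might look like the following sketch. The names evals, evecs, and ifail are mine, and I'm assuming the three results come back as an expression sequence that can be caught with Maple's multiple assignment:

    evals, evecs, ifail := SelectedEigenvectors(M, n-1, n,
                                                'vectors' = true,
                                                'preserve' = true):
    evals[1 .. 2];       # the m=2 selected eigenvalues
    evecs[1 .. -1, 1];   # the eigenvector associated with evals[1]
    ifail[1 .. 2];       # flags any eigenvectors that failed to converge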

I didn't enable float arguments such as `vl` and `vu`, which in principle could allow one to supply a floating-point range in which to find the eigenvalues.

I didn't implement the optimization of having it do an initial "dummy" external call in which no calculation is done, but which instead queries for (and subsequently utilizes) the optimal-performance additional float workspace size, i.e. the standard LAPACK lwork = -1 workspace query.

For reasons mysterious to me, on Windows the 64-bit version runs almost exactly half as fast as the 32-bit version.

Usually, the workspace for eigen-solving is implemented to be at least O(n^2) for an n x n Matrix. But this routine does only O(m+n) extra workspace allocation to compute the m eigenvectors. That is linear, and that is the Big Deal.

A 5000x5000 datatype=float[8] (i.e. hardware double precision) Matrix takes 200MB of memory. With the preserve=true option, this routine can compute just the largest eigenvalue and its eigenvector with only about 200MB of additional allocation. And if the original Matrix is no longer required, then with the preserve=false option this routine can do that task with less than 1MB of further allocation. In comparison, the regular LinearAlgebra:-Eigenvectors command would require about 600MB of additional memory allocation while computing all eigenvectors.
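
That 200MB figure is just storage arithmetic, and kernelopts(bytesalloc) lets you watch the allocation for yourself:

    evalf(5000^2 * 8 / 10^6);    # 200.0, the megabytes in a 5000x5000 float[8] Matrix
    before := kernelopts('bytesalloc'):
    M := LinearAlgebra:-RandomMatrix(5000, 5000,
             'outputoptions' = ['datatype' = 'float[8]']):
    evalf((kernelopts('bytesalloc') - before)/10^6);   # roughly 200 (MB)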

At size 5000x5000 this routine is only about four times faster than LinearAlgebra:-Eigenvectors. I suspect that is because it still has to compute the full reduction to tridiagonal form.

Download dsyevx.mw

This is just a programmatic twist on Robert Lopez's very nice original post on this topic.

I'd rather be able to control such declarations with code than have to manipulate the palettes or context menus in order to get what are -- in essence -- additional programming and authoring tools. Such code can be dynamic and flexible, and it can be inserted in Code Edit or Startup Code regions.

Download atomicpartials.mw

While doing some Maple plotting I found myself asking why a high-end scientific computation application like Maple, which is capable of essentially arbitrary-precision floating-point computation, sometimes makes only crude use of hardware-precision plot drivers. I looked around a bit, and found that this and related issues are not restricted to Maple.

Let's look first at Maxima. Here's my first example, in Maxima (not Maple) syntax,

I notice that a Search on this site for the word "cipher" fails to show this result as a hit. Why is that?

It is by no means the first time that a keyword Search on MaplePrimes has failed to return a hit for an old comment/post/answer of mine. I'd really like to know why there are so many such misses.
