MaplePrimes Commons General Technical Discussions

The primary forum for technical discussions.

I congratulate Robert Israel on being the first to attain the MaplePrimes Maple Master badge. It is well deserved.

I was about to post this as a "How-do-I" question, but while composing my question, I stumbled upon the solution.  In case this "discovery" would be useful, I'm posting it here.

For some coursework, I'm developing a method for fitting sinusoids to experimental data (a poor man's Fourier analysis).  At one point, I do a brute-force least-squares computation, one step of which involves computing the sum of a sine over N equally-spaced points around the circle.  This...
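In case it helps, the sum in question can be checked symbolically in Maple. This is a minimal sketch; the `N::posint` assumption is mine, and Maple should reduce the sum to 0 (the identity itself holds for any positive integer N):

```maple
# Sum of a sine over N equally-spaced points around the circle.
# For a positive integer N this sum is identically zero.
S := sum(sin(2*Pi*k/N), k = 0 .. N - 1):
simplify(S) assuming N::posint;
```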

As some of you know, I'm hoping to, some day, find a closed form expression for the MRB constant.

Here is my latest little nugget.

Let x=MRB constant.

(1-604*x)/(28+209*x) = log(x) with an error < ...
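The approximation can be checked numerically. A sketch, where the digits assigned to x below are an assumption on my part (a truncated numeric value of the MRB constant, which is not a built-in Maple constant):

```maple
# Hypothetical check of the rational/log approximation at a
# truncated numeric value of the MRB constant:
x := 0.1878596424620671:
evalf((1 - 604*x)/(28 + 209*x) - ln(x));
```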

(This is a reply to PatrickT, who asked about a certain ArrayTools:-Copy example.)

Suppose that you need to do a computation which requires, in part, a certain subportion of a Vector V. Let's call the routine which does the work `f`, and let's suppose that `f` is a system command rather than something that we write ourselves. One natural way to call `f` and supply the subvector is to call `f` like so:

   f( V[a..b] );

Here the inner range a..b denotes...
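A minimal sketch of why the form of the subvector matters: indexing with a range builds a fresh Vector, while an alias writes through to the original. The `ArrayTools:-Alias` alternative shown here is my own illustration, not necessarily what the excerpt above goes on to discuss:

```maple
# V[a..b] constructs a new Vector (a copy), whereas
# ArrayTools:-Alias can provide an in-place view.
V := Vector([1., 2., 3., 4., 5.], datatype = float[8]):
W := V[2 .. 4]:      # a fresh copy of entries 2..4
W[1] := 99.:         # does not affect V
V[2];                # still 2.
A := ArrayTools:-Alias(V, 1, [3]):  # view of 3 entries, offset 1
A[1] := 99.:         # writes through to V[2]
```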

Perhaps you have heard the terms "ordering difference" or "session dependent" applied to the results of some Maple computation. They were heard more often before Maple 12, when elements of sets in Maple were ordered according to memory address.

What would typically happen would be something like this. A computation gets done by looping either over the variables (symbol indets) of an expression, or over named elements of some set. And the computation happens to be very sensitive to which variables get chosen and used first. But the order in which they get chosen follows the rule by which the elements of Maple sets are ordered. So if the address in memory of some of the variable names just happens to be different, then they get selected in a different order, and an entirely different computation path might get taken.

Now, merely using a name at some apparently unrelated earlier point in the Maple session can cause that name to get a different memory address than if the name is first used later on. The upshot was that apparently innocuous and unrelated use of variables at some earlier stage in a session could actually completely change a later result of an algorithm which happened to be sensitive to the order of set member access.

Since the difference in the final result in such cases would depend on ostensibly harmless earlier activity in the session, the whole effect was labelled "session dependence". And it caused a lot of grief.

As a wild (fictitious) example: you merely enter the name `x` early in a worksheet, and somehow the later result of an integral has a totally different (valid, but more complicated or unsimplified) form.

Since Maple 12, sets' entries are ordered internally by Maple's kernel, and the appearance of session dependence has been much reduced. But I was reminded recently (on usenet) that ordering differences can also apply to adding and multiplying expressions. And by coincidence this also ties in to something else that I've been thinking about: roundoff error.

Here's the simple example that came up. When you enter x/y you form a Maple expression whose internal representation (DAG) stores it as the product of x and of y^(-1). But which of those two multiplicands is stored first? It matters! When x and y get evaluated at floating-point values then it matters which is evaluated first. If y^(-1) gets evaluated first then roundoff error may occur. But if x is evaluated first then the float division might happen to occur without roundoff error.

Let's look closely at the example that came up.

> restart:

> ee := x/y;
                               x
                               -
                               y

> eval(ee, [x=2.1, y=2.4]);
                          0.8750000000

> restart:

> invee:=y/x:

> ee:=1/invee;
                               x
                               -
                               y

> eval(ee, [x=2.1, y=2.4]);
                          0.8750000001

In both sessions above, ee equals x/y. But only the second incurs roundoff error; the second result differs from the first in the last decimal digit. It turns out that the order in which x and y^(-1) get stored in the product DAG differs between these two examples. If y^(-1) gets evaluated at y=2.4 before x gets evaluated, then Maple computes the intermediary term 1/2.4 = 0.4166666667, which of course has roundoff error. But if x gets evaluated first then Maple sees 2.1/2.4 as the division to compute, and that can get done with no roundoff due to fortuitous cancellation of factors of 3, i.e. it is the same as 0.7/0.8 = 0.875.

You can see the internal storage distinction as follows. I'll repeat the above pair of examples, but this time also dismantling `ee` to show its structure. Notice the order of `x` and y^(-1) inside the printed DAG.

> restart:
> ee := x/y;
                               x
                               -
                               y
> dismantle(ee);

PROD(5)
   NAME(4): x
   INTPOS(2): 1
   NAME(4): y
   INTNEG(2): -1

> eval(ee, [x=2.1,y=2.4]);
                          0.8750000000

> restart:
> invee:=y/x:
> ee:=1/invee;
                               x
                               -
                               y

> dismantle(ee);

PROD(5)
   NAME(4): y
   INTNEG(2): -1
   NAME(4): x
   INTPOS(2): 1

> eval(ee, [x=2.1,y=2.4]);
                          0.8750000001

OK, so perhaps you are thinking: who cares, the difference is tiny. Well, small differences are easily isolated and magnified when they occur inside compound expressions. And so the ensuing results might be entirely different.

Take just a slightly more involved example,

> restart:
> ee := 1/(sqrt(0.8750001 - x/y)) - 3162;

                           1                 
                  -------------------- - 3162
                                 (1/2)       
                  /            x\            
                  |0.8750001 - -|            
                  \            y/            

> eval(ee, [x=2.1,y=2.4]);

                            0.277660

> restart:
> invee:=y/x:
> ee := 1/(sqrt(0.8750001 - 1/invee)) - 3162;

                           1                 
                  -------------------- - 3162
                                 (1/2)       
                  /            x\            
                  |0.8750001 - -|            
                  \            y/            

> eval(ee, [x=2.1,y=2.4]);
                            1.859986

Look at what caused the difference. All I did was assign y/x to invee and then re-use that to form `ee`. The expression `ee` looks the same on the surface. But inside it, the term x/y is stored with a different order of its multiplicands. And the final results are really different.

I suspect that an improvement might be had if evaluation at a point (2-argument eval) were instead done by substituting floats into numerator terms (positive integer exponent) before denominator terms (negative integer exponent). But investigation would have to precede that.

Finally, you might wonder what can be done with the errant version of `ee` from the last example. Simplification may or may not help,

> eval(simplify(ee), [x=2.1,y=2.4]);

                          0.2776606276

> eval(fnormal(ee), [x=2.1,y=2.4]);
                            0.277660

Is either of those last two results correct to ten places, you might ask? Well, no.

> Digits:=5000:

> eval(ee, [x=2.1,y=2.4]): evalf[10](%);
                          0.2776601684

acer

Hello all,

I have encountered a curious bug in the EigenConditionNumbers
procedure, in particular for a pencil (A,B) with B singular,
and precision higher than hardware precision.

The following code, for Digits=40, produces a Float(undefined) rather
than a Float(infinity) for the infinite eigenvalue, but returns alpha
and beta values that correspond to an infinite eigenvalue.

Digits:=trunc(evalhf(Digits));
A:=Matrix([[1,0],[0,2]]);
B:=Matrix([[1,0],[0,0]]);
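For completeness, the call exhibiting the behaviour is presumably along these lines; the exact invocation was elided from the excerpt, and the plain two-argument pencil form is my assumption:

```maple
with(LinearAlgebra):
A := Matrix([[1, 0], [0, 2]]):
B := Matrix([[1, 0], [0, 0]]):  # B singular, so the pencil (A,B) has an infinite eigenvalue
EigenConditionNumbers(A, B);
```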

A couple of days ago I found out that gzread from the zlib library can be used for fast reading of binary files in Maple from disk to memory - about 100 times faster than readbytes - as in the following simplified example,

A:=Array(1..2^26,99,datatype=integer[1]):

time(writebytes("A",A));close("A");

                                9.360

B:=Array(1..2^26,1,datatype=integer[1]):
time(readbytes("A",B));close("A");

                                8.065
B[1],B[-1];

                                99, 99

myreadbytes:=proc(f)
local gzopen, gzread, gzclose, n, p, A;
# Bind the three zlib entry points from the dll.
gzopen:=define_external('gzopen',
    'path'::string,
    'mode'::string,
    'RETURN'::integer[4],
    'LIB'="zlibwapi.dll");
gzread:=define_external('gzread',
    'file'::integer[4],
    'buf'::REF(ARRAY(datatype=integer[1])),
    'len'::integer[4],
    'RETURN'::integer[4],
    'LIB'="zlibwapi.dll");
gzclose:=define_external('gzclose',
    'file'::integer[4],
    'RETURN'::integer[4],
    'LIB'="zlibwapi.dll");
n:=FileTools:-Size(f);
A:=Array(1..n,datatype=integer[1]);  # preallocate the whole buffer
try p:=gzopen(f,"rb");
# gzread returns the number of bytes actually read.
if gzread(p,A,n)=n
then return A end if
finally gzclose(p)
end try
end proc:
time(assign(C=myreadbytes("A")));

                                0.062

C[1],C[-1];

                                99, 99

'time(myreadbytes("A"))'$5;


                  0.078, 0.062, 0.046, 0.046, 0.046

E:=Array(1..2^26,2,datatype=integer[1]):
time(ArrayTools:-Copy(A,E));

                                0.093

This needs some tweaking, because it works only on uncompressed files. If a file ("A" in this example) had been gzipped, then gzread would have decompressed n (uncompressed) bytes of it instead of copying it into memory - but in general that is not a big deal.

Does anybody know of a similar replacement for writebytes? gzwrite doesn't work for copying (it compresses the array).

I used the zlibwapi.dll library from http://www.winimage.com/zLibDll/index.html; it is a version of zlib 1.2.5 (written by Jean-Loup Gailly and Mark Adler) built by Gilles Vollant. The code is for a 32-bit system (Windows). It should work on 32-bit Linux after replacing that dll with the standard libz.so.1, and on 64-bit systems after replacing integer[4] with integer[8] in most places.

Click Maple Math, enter x^2/(1+x).   The Preview disappears as soon as I press "/", and the result is not pretty-printed.

x^2/(1+x)

I've been making some use of the Maple Cloud for a while now, and thought that I'd share some comments.

So far, it's been quite useful to me, and I like it. This surprised me a bit. I expected not to find it useful, and to dismiss it with an old-timer's "Bah, humbug... as useless as Maple+twitter!" But, to the contrary, I've found a use for it; a need that isn't otherwise...

Quite often, when plotting an expression involving trigonometric functions applied to (a rational polynomial of) the main variable, it is desirable to have the major ticks along the x-axis labeled by multiples of Pi. In particular, it can be much more appealing to have those tickmarks labeled with short rationals multiplied by the 2D Math symbol π.
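One way to get such labels is a sketch like the following, assuming a Maple release that supports the `spacing` tickmarks structure:

```maple
# Major x-ticks at multiples of Pi/2, which Maple renders as
# rational multiples of the symbol Pi:
plot(sin(x), x = 0 .. 2*Pi, tickmarks = [spacing(Pi/2), default]);
```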

In examining ODE models containing parameters, I don't like to assign values to the parameters, but instead use the form

param:={a=2, b=7};
and then
eval(expression, param);

This works well. There are situations, however, where this method can be cumbersome, such as this one:

plot(a*sin(x),x=0..b);

You could do

plot(op(eval([a*sin(x),x=0..b],param)));

But maybe an operator having a syntax similar to 'assuming' could be useful.
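For instance, one could roll a small neutral operator for this. The name `&where` and the whole approach below are my own sketch, not a built-in:

```maple
# A user-defined neutral operator that substitutes parameter
# values into an unevaluated expression, then evaluates it.
`&where` := (e, prm) -> eval(e, prm):
param := {a = 2, b = 7}:
# Uneval quotes delay the plot call until a and b are substituted.
'plot(a*sin(x), x = 0 .. b)' &where param;
```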

If I assign a list to a variable named `depth`, it causes invalid-object errors in for loops in Maple 12.

depth:=[1,2,3,4,5,6,7,8]:

for i from 1 to 4 do
  i;
end do;

Error, invalid object: mspace(height = "0.0ex",width = "0.0em",[1, 2, 3, 4, 5, 6, 7, 8] = "0.0ex",linebreak = "newline") and  1 additional error.

 

 

Let's compare the performance of two methods of computing the inverse of a large datatype=float[8] Matrix.


The two methods are that of calling `MatrixInverse`,...
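A sketch of the sort of comparison meant; the excerpt is truncated, and the LinearSolve-with-identity alternative and the size n are my assumptions:

```maple
with(LinearAlgebra):
n := 2000:
M := RandomMatrix(n, n, datatype = float[8]):
Id := Matrix(n, n, (i, j) -> `if`(i = j, 1.0, 0.0), datatype = float[8]):
t1 := time(MatrixInverse(M));
t2 := time(LinearSolve(M, Id));  # solve M.X = I instead of inverting
```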

If you want better performance, then don't use 2D Math mode to enter procedures which call `.` (dot).

The following timings are not the result of the order in which the cases are performed. The timings stay roughly the same if the blocks delimited by restarts are executed out of order.
