Perhaps you have heard the terms "ordering difference" or "session dependent" applied to the results of some Maple computation. You used to hear them more often before Maple 12, when the elements of Maple sets were ordered by memory address.
What would typically happen is something like this. A computation loops either over the variables (symbol indets) of an expression or over the named elements of some set, and it happens to be very sensitive to which variables get chosen and used first. But the order in which they get chosen follows the rule by which the elements of Maple sets are ordered. So if the memory addresses of some of the variable names just happen to differ, the names get selected in a different order, and an entirely different computation path may be taken.
Now, merely using a name at some apparently unrelated earlier point in the Maple session can cause that name to get a different memory address than if it is first used later on. The upshot was that apparently innocuous and unrelated use of variables at some earlier stage in a session could completely change a later result of an algorithm which happened to be sensitive to the order of set member access.
Since the difference in the final result in such cases would depend on ostensibly harmless earlier activity in the session, the whole effect was labelled "session dependence". And it caused a lot of grief.
As a wild (fictitious) example: you merely enter the name `x` early in a worksheet, and somehow a later integration result takes a totally different (valid, but more complicated or unsimplified) form.
But since Maple 12, set entries have been "ordered" canonically (internally, by Maple's kernel), and the appearance of session dependence is much reduced. But I was reminded recently (on Usenet) that ordering differences can also apply to adding and multiplying expressions. And by coincidence this also ties in to something else that I've been thinking about: roundoff error.
Here's the simple example that came up. When you enter x/y you form a Maple expression whose internal representation (DAG) stores it as the product of x and y^(-1). But which of those two multiplicands is stored first? It matters! When x and y get evaluated at floating-point values, it matters which is evaluated first. If y^(-1) gets evaluated first then roundoff error may occur. But if x is evaluated first then the float division might happen to occur without roundoff error.
Let's look closely at the example that came up.
> restart:
> ee := x/y;
                                   x
                                   -
                                   y
> eval(ee, [x=2.1, y=2.4]);
0.8750000000
> restart:
> invee:=y/x:
> ee:=1/invee;
                                   x
                                   -
                                   y
> eval(ee, [x=2.1, y=2.4]);
0.8750000001
In both sessions above, ee equals x/y. But only the second incurs roundoff error: the second result differs from the first in the last decimal digit. It turns out that the order in which x and y^(-1) get stored in the product DAG differs between these two examples. If y^(-1) gets evaluated at y=2.4 before x does, then Maple computes the intermediate term 1/2.4 = 0.4166666667, which of course has roundoff error. But if x gets evaluated first then Maple sees 2.1/2.4 as the division to compute, and that can get done with no roundoff due to fortuitous cancellation of factors of 3, i.e. it is the same as 0.7/0.8 = 0.875.
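That arithmetic is easy to reproduce outside Maple. Below is a small Python sketch (my own illustration, not Maple code) that uses the standard decimal module at 10 significant digits to mimic Maple's default Digits = 10 software floats, showing the two orders of evaluation:

```python
from decimal import Decimal, getcontext

getcontext().prec = 10  # mimic Maple's default Digits = 10

x, y = Decimal("2.1"), Decimal("2.4")

# Order 1: x is seen first, so the division 2.1/2.4 happens directly.
# The factors of 3 cancel (it equals 0.7/0.8), so no roundoff occurs.
direct = x / y
print(direct)            # 0.875

# Order 2: y^(-1) is evaluated first, producing the already-rounded
# intermediate 1/2.4 = 0.4166666667, whose error then propagates.
via_inverse = x * (Decimal(1) / y)
print(via_inverse)       # 0.8750000001
```

The point is not Python-specific: any 10-digit decimal arithmetic gives 0.875 for the direct quotient but 0.8750000001 through the rounded reciprocal.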
You can see the internal storage distinction as follows. I'll repeat the above pair of examples, but this time also dismantle `ee` to show its structure. Notice the order of `x` and y^(-1) inside the printed DAG.
> restart:
> ee := x/y;
                                   x
                                   -
                                   y
> dismantle(ee);
PROD(5)
NAME(4): x
INTPOS(2): 1
NAME(4): y
INTNEG(2): -1
> eval(ee, [x=2.1,y=2.4]);
0.8750000000
> restart:
> invee:=y/x:
> ee:=1/invee;
                                   x
                                   -
                                   y
> dismantle(ee);
PROD(5)
NAME(4): y
INTNEG(2): -1
NAME(4): x
INTPOS(2): 1
> eval(ee, [x=2.1,y=2.4]);
0.8750000001
Ok, so perhaps you are thinking: who cares, the difference is tiny. Well, small differences are easily isolated and magnified if they occur inside compound expressions. And so the ensuing results might be entirely different.
Take just a slightly more involved example,
> restart:
> ee := 1/(sqrt(0.8750001 - x/y)) - 3162;
                                 1
                      -------------------- - 3162
                                     (1/2)
                      /            x\
                      |0.8750001 - -|
                      \            y/
> eval(ee, [x=2.1,y=2.4]);
0.277660
> restart:
> invee:=y/x:
> ee := 1/(sqrt(0.8750001 - 1/invee)) - 3162;
                                 1
                      -------------------- - 3162
                                     (1/2)
                      /            x\
                      |0.8750001 - -|
                      \            y/
> eval(ee, [x=2.1,y=2.4]);
1.859986
Look at what caused the difference. All I did was assign y/x to invee and then reuse that to form `ee`. The expression `ee` looks the same on the surface. But inside it, the term x/y is stored with a different order of its multiplicands. And the final results are really different.
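To see the magnification concretely, here is the same compound expression in the 10-digit decimal model (again a Python sketch using the stdlib decimal module to imitate Digits = 10, not actual Maple evaluation):

```python
from decimal import Decimal, getcontext

getcontext().prec = 10  # imitate Maple's default Digits = 10

x, y = Decimal("2.1"), Decimal("2.4")

def f(q):
    # 1/sqrt(0.8750001 - q) - 3162, all in 10-digit decimal arithmetic
    return Decimal(1) / (Decimal("0.8750001") - q).sqrt() - 3162

print(f(x / y))                    # 0.277660  (clean division, no roundoff)
print(f(x * (Decimal(1) / y)))     # 1.859986  (x/y formed via rounded 1/2.4)
```

The subtraction inside the square root shrinks the operands to a magnitude where the tiny error in 0.8750000001 occupies the leading digits, so the final results disagree wildly.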
I suspect that an improvement might be had if evaluation at a point (2-argument eval) were instead done by substituting floats into numerator terms (positive integer exponent) before denominator terms (negative integer exponent). But investigation would have to precede that.
Finally, you might wonder what can be done with the errant version of `ee` from the last example. Simplification may or may not help,
> eval(simplify(ee), [x=2.1,y=2.4]);
0.2776606276
> eval(fnormal(ee), [x=2.1,y=2.4]);
0.277660
Is either of those last two results correct to ten places, you might ask? Well, no.
> Digits:=5000:
> eval(ee, [x=2.1,y=2.4]): evalf[10](%);
0.2776601684
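The rescue-by-precision can also be imitated in the decimal model: carry more working digits through the errant evaluation order, then round the result back to ten. (A Python sketch with the stdlib decimal module, standing in for raising Digits in Maple; localcontext switches the working precision.)

```python
from decimal import Decimal, localcontext

def f_errant(prec):
    # 1/sqrt(0.8750001 - x*(1/y)) - 3162 at x=2.1, y=2.4, with x/y
    # deliberately formed through the rounded reciprocal 1/y,
    # using `prec` decimal digits throughout.
    with localcontext() as ctx:
        ctx.prec = prec
        x, y = Decimal("2.1"), Decimal("2.4")
        q = x * (Decimal(1) / y)   # the roundoff-prone evaluation order
        return Decimal(1) / (Decimal("0.8750001") - q).sqrt() - 3162

print(f_errant(10))                # 1.859986  (wrong, as in the worksheet)

# With 30 working digits the reciprocal 1/2.4 is accurate enough that
# rounding back to 10 digits recovers the correct value.
with localcontext() as ctx:
    ctx.prec = 10
    print(+f_errant(30))           # 0.2776601684
```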
How high does Digits have to be raised to evaluate such constant numeric expressions accurately, you might also ask? That's the subject of a follow-up post.
acer