acer

32490 Reputation

29 Badges

20 years, 8 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

I get uneasy in the presence of inconsistencies, the cause of which I cannot explain easily.

Why does the second of these see float contagion, while the first does not?

> dsolve( (4*diff(y(x),x)-3.0*y(x)^2)/(2-3.0*y(x)) );

                                         4      
                             y(x) = ------------
                                    -3 x + 4 _C1

> simplify( (4*diff(y(x),x)-3.0*y(x)^2)/(2-3.0*y(x)) );

                            /   / d      \          2\
                         1. |4. |--- y(x)| - 3. y(x) |
                            \   \ dx     /           /
                       - -----------------------------
                                 -2. + 3. y(x)    

While I agree with Jacques that the user ought to understand that a floating-point number is not, in general, a good substitute for an exact quantity (and still less an equivalent one), I also think that computer algebra systems are out on a limb when it comes to mixing floats and symbolics. There's a pretty decent body of understanding of the discipline of exact symbolic computation. And there's a large body of understanding of the theory and practice of floating-point computation. But the mixture of the two is fledgling in comparison.

When floats were introduced in Maple, perhaps it seemed like a good idea to just "allow" a float coefficient on a symbolic quantity. But in all seriousness, I'm not sure that I really understand what kind of animal that is supposed to be. How should it behave? What are the rules for symbolically manipulating it? What are the rules for numerically manipulating it, while the symbol stays a symbol? Does the scale of the value that the symbol could take on affect the arithmetic with such animals?
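To make the puzzle concrete, here is a tiny sketch of the automatic behavior I mean (printed forms may vary a little across releases and Digits settings):

```maple
# Maple's automatic simplifier combines exact and float coefficients,
# with the float "contaminating" the exact 3:
2.0*x + 3*x;          # 5.0*x

# Subtracting the "same" exact term now hinges on what that 3.0 is
# supposed to mean; the coefficients combine to a float zero:
3.0*x - 3*x;
```

Is that last result identically zero, or merely zero to the current working precision? The system has to pick one behavior, and the user has to know which.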

Let me give another example. Why is discont allowed to return an answer here, and what is that answer supposed to mean? Is it supposed to represent any of the particular three literal floating-point numbers appearing inside `expr`? If not, then what does it in fact mean? (The result from `fdiscont`, on the other hand, can be much better explained: there may be discontinuities, when calculating at some specific working precision, within the returned range.)

> restart:

> Digits:=10:

> expr := 1/((x-10.81665)*(x-10.81665382)*(x-10.816653826392));

                                         1                           
       expr := ------------------------------------------------------
               (x - 10.81665) (x - 10.81665382) (x - 10.816653826392)

> # What aspect of `expr`, literal or otherwise, does this
> # exact rational result represent?
> discont(expr, x);

                             /54078437215095621\ 
                            { ----------------- }
                             \5000000000000000 / 

> # Even if we evaluate to floating-point, the meaning is unclear.
> evalf(%);

                                {10.81568744}

> # Don't misunderstand. I know how to compute and get that last, single result.
> # I just don't know for sure, what it's supposed to *mean*. I guess it means
> # something like: here is a result due to approximating the original problem to
> # the current working precision. But that doesn't tell me anything about how
> # accurate that result is as an approximation to any (or all) of the
> # discontinuities of the *original* problem.
> # As a user, how would I know that `discont` is maybe not what I want to apply
> # for this example?
> # What about applying `diff`, will that be safe and appropriate? Or `int`? Etc.

> fsolve(expand(1/expr), x);

                                 10.81568744

> # Next result is more clear: discontinuities may lie in this range.
> fdiscont(expr, x=0..100);

                    [10.8160639211926 .. 10.8172942195832]

So, why is `discont` even allowed to return a result for that `expr`?

This next bit is important: nobody wants a wrong result, or one for which the explanation (of why it's "an answer") is unclear. So it might not matter that you are clever enough to construct examples where there is no such confusion. That still leaves the genuine problem cases to worry about. And it's not just the outright problem cases that need worrying about. It's also the fine line that marks the boundary between the meaningful and the nonmeaningful results that should concern us.

Another example. Do you believe that the derivative of the piecewise below, `pw`, is discontinuous at x=2 or not? Or do you think that I've already cheated by writing "the derivative", as if it couldn't depend conceptually upon Digits or something else, and vary? Or perhaps you have your own philosophy of such mixed exact-symbolic/float expressions.

> restart:

> pw := piecewise( x>2, (x^2-13.)/(x-3.605551275), x+(13)^(1/2) );

                   pw :=  /     2                         
                          |    x  - 13.                   
                          | ---------------        2 < x  
                         <  x - 3.605551275               
                          |                               
                          |         (1/2)                 
                          \   x + 13             otherwise

> # Let's differentiate, and only afterwards convert to exact form.
> dpw:=diff(pw, x);

         dpw :=  /                  1.                       x < 2.
                 |                                                 
                 |           Float(undefined)                x = 2.
                 |                                                 
                <                         / 2      \               
                 |      2. x           1. \x  - 13./               
                 | --------------- - ------------------      2. < x
                 | x - 3.605551275                    2            
                 \                   (x - 3.605551275)             

> simplify(map(expand@rationalize,map(identify,dpw)));

                          / undefined        x = 2  
                         {                          
                          \     1          otherwise

> # Now, instead, convert to exact form before differentiating.
> simplify(map(expand@rationalize,map(identify,pw)));

                                       (1/2)
                                 x + 13     

> # Obviously, this is not discontinuous at x=2.
> diff(%,x);

                                      1

> discont(pw, x);

                               /   144222051\ 
                              { 2, --------- }
                               \   40000000 / 

> evalf(%);

                              {2., 3.605551275}

> fdiscont(pw, x=-20..20);

                                     []

acer

Also possible is:

intk:=proc(d,n,A,Lambda)
  evalf(Int(k->lambdak(k,n,A,d,Lambda),0.0..1.0));
end proc:

intk(2.,14,1.,1.);

acer

With the start of a new school year, the MapleCloud is flooded with dozens of posts (mostly empty or near-empty). It's a new form of social networking: giving a shout out to your peeps via Cloud message title.

acer

@mnhoff Roundoff error, upon floating-point evaluation following numeric substitution, can vary greatly according to the particular form of the symbolic expression. Symbolic reformulation such as is done by the simplify command does not necessarily produce a rearrangement that will be prone to less roundoff, in general.
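As a toy illustration of that first point (the classic textbook case, not your sol1/sol2/sol3):

```maple
restart:
Digits := 10:
f := (x - 1)^7:
g := expand(f):              # algebraically the same function as f
pt := 1.0001:
evalf(eval(f, x = pt));      # computed as (0.0001)^7: tiny and accurate
evalf(eval(g, x = pt));      # cancellation among O(1)-sized terms ruins it
evalf[30](eval(g, x = pt));  # extra working precision rescues g, as it did sol3
```

The factored and expanded forms are equivalent symbolically, but at ten digits the expanded form loses essentially all significance near the root.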

Yes, sol3 seems to suffer more at the numeric data point you mentioned. I don't see a compelling reason why this should (or should not, I'm afraid) be true for all other data inputs.

> forget(evalf):
> evalf({eval(sol1,[x1=-17.275,x2=-180.93,y1=238.59,y2=-156.49])}):
> sort(%);                                                                
          {-143.4745136, -0.01428010101, 0.01428010101, 143.4745136}

> forget(evalf):
> evalf[100]({eval(sol1,[x1=-17.275,x2=-180.93,y1=238.59,y2=-156.49])}):
> sort(evalf(%));                                                         
          {-143.4744804, -0.01427413114, 0.01427413114, 143.4744804}

> 
> forget(evalf):                                                          
> evalf(eval(sol2,[x1=-17.275,x2=-180.93,y1=238.59,y2=-156.49])):         
> sort(indets({%},identical(M)=anything));                                
  {M = -143.4584940, M = -0.01427433450, M = 0.01427433450, M = 143.4584940}

> forget(evalf):                                                   
> evalf[100](eval(sol2,[x1=-17.275,x2=-180.93,y1=238.59,y2=-156.49])):
> sort(indets({evalf(%)},identical(M)=anything));                       
  {M = -143.4744804, M = -0.01427413114, M = 0.01427413114, M = 143.4744804}

> 
> forget(evalf):  
> evalf(eval(sol3,[x1=-17.275,x2=-180.93,y1=238.59,y2=-156.49])):
> sort(indets({%},identical(M)=anything));
  {M = -295.0142973, M = -0.01425070076, M = 0.01425070076, M = 295.0142973}

> forget(evalf):                                                   
> evalf[100](eval(sol3,[x1=-17.275,x2=-180.93,y1=238.59,y2=-156.49])):
> sort(indets({evalf(%)},identical(M)=anything));
  {M = -143.4744804, M = -0.01427413114, M = 0.01427413114, M = 143.4744804}

It seems like a reasonable supposition that, "in the main", lengthier symbolic reformulations are more at risk of such float-evaluation roundoff. But it is not true in general.

ps. I've been wanting to do a followup to an earlier bloggish post relating roundoff to session-dependent "order issues" (i.e. about sums rather than products). But you may find this of some interest. It won't help you decide whether your `sol1` is better for use in exported code (C, Matlab, or other double-precision computational environments).
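Pending that followup, the ten-digit cartoon of the summation-order issue looks like this:

```maple
restart:
Digits := 10:
# 1.0e10 + 1.0 needs 11 significant digits, so the 1.0 is rounded away:
(1.0e10 + 1.0) - 1.0e10;   # 0.
# Grouped the other way, the inner subtraction is exact in 10 digits:
1.0e10 + (1.0 - 1.0e10);   # 1.
```

Same terms, same precision; only the association order differs.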

acer

What data structure do you use to handle the data? How do you add new items? Can you post any sample code?

acer

@itsme I believe that is also a problem with the usual `Explore` command. Basically, it (must) use not only a new worksheet but also a new kernel/engine. So it does not allow (lexical or other) scoping of variables to get their values from the original worksheet. The thing to be explored has to be "fully explicit" in the sense you describe.

I have been experimenting with the version of Explore which inserts into the current worksheet. That should solve the scoping problem. (Kamel B. also asked, I think...) But it's been very busy at work and so I've had very little spare time. I'll try and remember to send you a note if I get a chance to make it workable (if crude).
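For what it's worth, the stop-gap I've been trying in the meantime is to splice the worksheet-level values right into an unevaluated call, so the fresh kernel never has to resolve the names. An untested sketch (`__A` is just a placeholder of my own):

```maple
A := 3:    # known only to this worksheet's kernel/engine

# subs does not evaluate its result, so thecall stays an inert
# Explore call with the literal value 3 already baked in:
thecall := subs('__A' = A, 'Explore(plot(__A*sin(k*x), x = 0 .. 10))'):

eval(thecall);   # now let it fire
```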

@mnhoff For the R2008b to R2009b releases of Matlab one could configure the symbolic toolbox to use Maple instead of MuPAD as the symbolic engine. See this note. One had to have a working Maple installed. Maybe you have access to such an older Matlab? (I don't know which Maple versions it would be compatible with.)

It does work in Maple 13, which may well be what the poster wants. Resourceful!

This is also a good time to say thanks to Markiyan Hirnyk for his significant efforts on this forum, and congratulations as today his reputation moved into the top ten.

ps. This approach doesn't work in Maple 10, but I vaguely recall some `solve` issues restricted to just that release.

acer

@Stavros Probably true. The referenced code does not properly handle the non-full-rank situations (especially those in which leading entries of the RREF end up in columns beyond m, the row dimension).

A:=Matrix(2,4,symbol=a):
A[2,1..2]:=Vector[row](2):
A;
E:=ElemDecomp(A[1..2,[1,3]]); # otherwise falls down
Einv:=seq(LinearAlgebra:-MatrixInverse(E[i],method=pseudo),i=nops([E])..1,-1);
map(normal,`.`(Einv).A);
LinearAlgebra:-LUDecomposition(A,output=R);

Easier to write one's own simple GJ, which forms (directly, without inverting) and stores the E[i]^(-1) as it goes.
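By that I mean something along these lines: an untested sketch with my own naming, assuming exact entries and using `normal` as a crude zero-test. Since a row swap is self-inverse, scaling by 1/p inverts to scaling by p, and adding -c times a row inverts to adding +c, each E[i]^(-1) can be written down directly as the reduction proceeds, with no call to MatrixInverse.

```maple
GJwithInverses := proc(A::Matrix)
  local m, n, B, Einvs, r, i, j, k, p, c, E, Id, tmp;
  uses LinearAlgebra;
  m := RowDimension(A);  n := ColumnDimension(A);
  B := Matrix(A);                      # working copy, reduced in place
  Id := () -> Matrix(m, m, (a, b) -> `if`(a = b, 1, 0));  # mutable identity
  Einvs := [];                         # inverses of the row ops, in order applied
  r := 1;
  for j to n while r <= m do
    i := r;                            # find a pivot at or below row r
    while i <= m and normal(B[i, j]) = 0 do i := i + 1 end do;
    if i > m then next end if;         # no pivot in this column
    if i <> r then                     # swap rows r and i; a swap is self-inverse
      tmp := B[r, 1 .. n];  B[r, 1 .. n] := B[i, 1 .. n];  B[i, 1 .. n] := tmp;
      E := Id();  E[r, r] := 0;  E[i, i] := 0;  E[r, i] := 1;  E[i, r] := 1;
      Einvs := [op(Einvs), E];
    end if;
    p := B[r, j];
    if p <> 1 then                     # scale row r by 1/p; inverse scales by p
      B[r, 1 .. n] := map(normal, B[r, 1 .. n]/p);
      E := Id();  E[r, r] := p;
      Einvs := [op(Einvs), E];
    end if;
    for k to m do
      c := B[k, j];
      if k <> r and normal(c) <> 0 then  # eliminate; inverse adds +c*(row r) back
        B[k, 1 .. n] := map(normal, B[k, 1 .. n] - c*B[r, 1 .. n]);
        E := Id();  E[k, r] := c;
        Einvs := [op(Einvs), E];
      end if;
    end do;
    r := r + 1;
  end do;
  B, Einvs;    # so that A = Einvs[1] . Einvs[2] . ... . B
end proc:
```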
