acer

32328 Reputation

29 Badges

19 years, 317 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

I didn't notice the comma in the rhs of the original post. That, and the use of round-brackets, threw me off. I suppose now that the OP was actually using a syntax for a vector-valued function (just not Maple syntax). Sorry.

acer

Have a look at the help-page ?allvalues

minor point: Above, you use the word "evaluate" when it seems you actually mean computing a floating-point approximation. The same term is also misused here. In Maple, the term "evaluation" means something quite distinct. The routine evalf does floating-point approximate evaluation. See also ?radnormal if you are interested in the treatment of expressions containing radicals of exact symbolic values.

acer

This is partly related. The `int` routine is not as smart as `D` at handling operators. It evaluates the procedure at some name (_X, say). But that may not produce what was intended.

For example,

> restart:

> f := proc(x)
>    if is(x>0)=true then x; else Pi; end if;
> end proc:

> int(f, 1..2); # oops
                                      Pi
 
> forget(`int/int`):
> int(f, a..b) assuming a>0, b>a; # oops
                                  Pi (b - a)
 
> D[1](f); # good
                                    D[1](f)
 
> D[1](f)(1); # good
                                    D(f)(1)
 
> evalf(%); # right
                                  1.000000000
 
> D[1](f)(-1); # good
                                   D(f)(-1)
 
> evalf(%); # right
                                      0.

Here's another example,

> restart:

> h := proc(x) sign(x); end proc:

> int(h, -1..0);
                                       1
 
> evalf(Int(h, -1..0));
                                 -1.000000000
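A rough Python analogue (not Maple internals) of the failure mode above: substituting a placeholder name into a branching procedure commits to whichever branch the undecidable test falls into. The names `f` and `Placeholder` here are invented for illustration.

```python
import math

def f(x):
    # analogue of: proc(x) if is(x>0)=true then x else Pi end if
    # the test succeeds only when x is known to be a positive number
    if isinstance(x, (int, float)) and x > 0:
        return x
    return math.pi

class Placeholder:
    """Stands in for the unassigned name (_X, say) that gets substituted."""

# Evaluating f at a placeholder takes the "else" branch unconditionally,
# so any scheme built on that single evaluation sees only the constant Pi:
print(f(Placeholder()))   # math.pi, regardless of the eventual numeric value
print(f(1.5))             # 1.5 -- the branch an actual positive number takes
```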

acer

Sure, last-name-eval and remember tables are known to be a risky mix.

I'd be mildly surprised if something couldn't be changed to allow it to work better in `int`, though.

There is a remember table in use by `int/int`. That routine has options `remember` and `system`. That means that, while it will remember arguments, its memory can be cleared during garbage collection. That probably explains why you got slightly different results in some layouts -- gc was happening and "fixing" things by forgetting some of the undesirable last-name-eval proc name entries from the remember table.

Maple 8 can get examples wrong too. (It's not the case that `int/int` did not have option `remember` in Maple 8. It did.) For example,

> kernelopts(version);
          Maple 8.00, SUN SPARC SOLARIS, Apr 22 2002 Build ID 110847
 
> f := x->2*x:
> int( f, 0..1 );
                                       1
 
> f := x->3*x:
> int( f, 0..1 );
                                       1
 
> restart:
> f := x->3*x:
> int( f, 0..1 );
                                      3/2
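The risky mix can be sketched in Python (an analogue, not Maple internals): a cache keyed on the procedure's *name* keeps returning a stale result after the name is rebound, exactly like the Maple 8 transcript above. The `crude_int` quadrature routine is invented for illustration.

```python
cache = {}

def crude_int(name, env, a=0.0, b=1.0, n=1000):
    # memoize on the *name*, not on the value currently bound to it --
    # the risky mix of name-based lookup with a remember table
    if name not in cache:
        h = (b - a) / n
        cache[name] = sum(env[name](a + (i + 0.5) * h) * h for i in range(n))
    return cache[name]

env = {"f": lambda x: 2 * x}
print(crude_int("f", env))   # ~1.0

env["f"] = lambda x: 3 * x   # rebind the name, as with f := x->3*x
print(crude_int("f", env))   # still ~1.0: the stale remembered result

cache.clear()                # analogue of restart (or forget)
print(crude_int("f", env))   # ~1.5, as expected
```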

acer

Note that I didn't write that `solve` certainly needed a comprehensive rewrite for this. I said that such a need is conceivable.

There are a variety of approaches that might be taken, to try to teach `solve` about assumptions. One way might be to inject more use of (a strengthened) `is` into a major rewrite. Another way might be to utilize only a (strong) post-processing filter. Another might involve converting some of the assumptions into additional inequality arguments, and strengthening the part of `solve` that handles such. And so on.
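The post-processing-filter idea can be sketched in Python rather than Maple library code (a toy, with an invented quadratic solver standing in for `solve`):

```python
import math

def solve_quadratic(a, b, c):
    # all roots of a*x^2 + b*x + c = 0, assuming a != 0 and real roots exist
    d = math.sqrt(b * b - 4 * a * c)
    return [(-b + d) / (2 * a), (-b - d) / (2 * a)]

def solve_assuming(a, b, c, assumption):
    # the "(strong) post-processing filter" approach: solve first,
    # then discard any candidate solution violating the assumption
    return [x for x in solve_quadratic(a, b, c) if assumption(x)]

print(solve_quadratic(1, 0, -4))                  # [2.0, -2.0]
print(solve_assuming(1, 0, -4, lambda x: x > 0))  # [2.0]
```

Of course, a real filter would need symbolic tests (an `is`-like oracle), not numeric predicates, which is where the difficulty lies.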

One reason why doing such improvements well is hard is that, in order to keep the expended effort reasonable, it's important to get a handle on the likely success of the various approaches. It can save a great deal of effort if that can be discovered near the start of the work. But it can be very hard to figure out in advance.

If this advance deduction is not done well enough then a lot of effort can be put into a new scheme which subsequently hits a practical wall. Limits to the new scheme's practical utility can suddenly appear on the horizon.

Isn't figuring out the "best" places to put development effort the hardest part of CAS improvement?

acer

[edited, after further reflection]

It is possible to construct an alternate Matrix Browser that is not implemented within a Maplet.

There are a variety of possibilities about how to do it. I hope to find time to post some prototypes to illustrate what (tricks) I have in mind.

How satisfactory these might be would be subjective -- each individual may place differing value on the qualities of a Matrix Browser. Those could include such things as non-locking control, a separate floating window, dynamic interaction with the data, display in a different tab, and the flexibility to switch on-the-fly between any of the above. And so on.

It also raises these questions: is it better to get an alternative non-Maplet Matrix browser right now, or would it be better to implement a sophisticated "array data" analyzer? (The DataMatic, folks. It slices, it dices...) Or would a few simple context-menu driven pointplots suffice?

acer

Try issuing the FunctionAdvisor(BesselI) command. You may be interested in particular in the differentiation rule. And indeed you can also issue,

FunctionAdvisor(BesselI,"differentiation_rule");

which should corroborate this:

> diff(BesselI(0,f(x)),x);
                                           /d      \
                          BesselI(1, f(x)) |-- f(x)|
                                           \dx     /
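The rule can also be corroborated numerically; here is a Python sketch using a truncated ascending series for BesselI (the series implementation and the inner function f are mine, chosen for illustration):

```python
import math

def besseli(nu, x, terms=40):
    # truncated ascending series for the modified Bessel function I_nu,
    # integer order nu >= 0
    return sum((x / 2) ** (2 * k + nu) / (math.factorial(k) * math.factorial(k + nu))
               for k in range(terms))

f = lambda x: x ** 2          # an arbitrary smooth inner function
x0, h = 0.7, 1e-6

# central difference for d/dx BesselI(0, f(x)) at x0
numeric  = (besseli(0, f(x0 + h)) - besseli(0, f(x0 - h))) / (2 * h)

# the differentiation rule: BesselI(1, f(x)) * diff(f(x), x)
symbolic = besseli(1, f(x0)) * 2 * x0

print(abs(numeric - symbolic))  # small: the identity holds to rounding
```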
 

acer

Thanks very much for the mention, Axel. But I would decline. Having said that...

I would hope that the choice of winner can bring with it the opportunity to get good feedback on key ways to improve Maple.

There is of course quite a bit of feedback provided already -- in the posts, replies and commentary of the most active posters to this site. Attentively reading (all) the posts here can itself lead to insight. There are quite a few important common themes. Some of that may add extra weight for making design and development decisions and choosing directions. Some of it might lead to a few "Aha!"s. Some of it might corroborate views held already. Some parts may be surprising and controversial.

It would be interesting to read the views on improving Maple of any of the nominees, if they chose to post them here explicitly.

acer

It's not clear what you mean by "complex expressions" in the previous reply.

I haven't actually checked which of the scenarios I describe below pertains to your case.

On the one hand, it could mean that you see expressions containing symbolic variables (names) and `I` as well. But it is possible that such terms vanish upon simplification or upon further instantiation of the variables with values. You mentioned a cubic (polynomial, presumably). The "formulae" for the roots of a cubic may appear to be complex-valued, with `I` present, even though evaluation and symbolic simplification lead to a theoretically expected purely real answer.
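This can be seen numerically in Python (an analogue of the exact symbolic situation): Cardano's formula for a cubic with three real roots passes through genuinely complex intermediates whose imaginary parts cancel only up to rounding. The particular cubic below is chosen for illustration.

```python
import cmath

# depressed cubic x^3 + p*x + q = 0 with three real roots: 1, 2, -3
p, q = -7.0, 6.0

# Cardano: u^3 = -q/2 + sqrt(q^2/4 + p^3/27); here the radicand is
# negative, so u is complex even though every root is real
u = (-q / 2 + cmath.sqrt((q / 2) ** 2 + (p / 3) ** 3)) ** (1 / 3)

roots = []
for k in range(3):
    w = u * cmath.exp(2j * cmath.pi * k / 3)   # the three cube roots of u^3
    roots.append(w - p / (3 * w))              # x = w - p/(3w)

print(sorted(round(r.real, 9) for r in roots))  # [-3.0, 1.0, 2.0]
print(max(abs(r.imag) for r in roots))          # tiny: pure rounding artefact
```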

Or it could just be that you see nonzero imaginary components with very small floating-point coefficients. That is what I tried to explain in my previous reply. Here's a simple example. Suppose you have the sqrt of some subexpression, and you expect that subexpression to evaluate to zero under exact arithmetic when the variables are instantiated at exact values. Instead, the subexpression is evaluated as a floating-point computation, and the result may come out as a very small negative value rather than precisely zero. Taking sqrt of that will then introduce a very small nonzero floating-point imaginary component. Usually, increasing the working precision for the floating-point computations will lead to those imaginary artefacts shrinking in magnitude.
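Here is the same phenomenon in miniature, in Python rather than Maple:

```python
import cmath
from fractions import Fraction

# Exactly zero in exact arithmetic, but a tiny *negative* number in
# binary floating point:
sub = 0.3 - (0.1 + 0.2)
print(sub)                       # about -5.55e-17
print(cmath.sqrt(sub))           # a tiny spurious imaginary component

# With exact rational arithmetic the subexpression is precisely zero,
# so no imaginary artefact appears:
exact = Fraction(3, 10) - (Fraction(1, 10) + Fraction(2, 10))
print(cmath.sqrt(float(exact)))  # 0j
```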

acer

The fact that other names (in some same class) are protected is not in itself a good reason to protect some name.

A good reason could be that assignment to the name causes things to break.

A name that is protected is a name that is also removed from the namespace available to users. Maple is thus "improved" by having fewer names protected. And hence a name should not be protected without good reason.

If examples are found of unprotected names for which their assignment breaks Maple then there are two reasonable courses of action: protect the name, or fix the instances of use -- if possible -- by quoting, etc.

Do you have examples where assignment to these unprotected dagtag names causes parts of Maple (not user-authored parts) to break?

nb. "parts of Maple" should really also cover code (or objects) that is emitted by Maple itself.

acer
