acer

MaplePrimes Activity


These are replies submitted by acer

Start by stating clearly your operating system and version, and your Maple version.

acer

Is your F evalhf'able? I mean, does F work if you call it like evalhf(F( somenumericvalue )) ?
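For example, a quick check along these lines (with 1.2 standing in for one of your actual numeric values):

try
    evalhf(F(1.2));
catch:
    print("not evalhf'able: ", lastexception);
end try;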

Why not show us your F? Perhaps we could find suggestions to make it faster, even if it's not evalhf'able.

The `plot` command will try to work adaptively, by default. It may figure out that fewer points are required in some subregions of `t`, and more in others. And it may well be computing fewer points than you imagine. You could use getdata to figure out just how many points are in the plot structure.

The plot renderer may also be interpolating to make the curve appear smooth. Perhaps you could do something similar, by generating a modest number of values and then calling ArrayInterpolation to extend to 1000 values.
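For example, something along these lines (a rough sketch; I'm assuming a single curve and a t-range of 0..1, so adjust to your actual F and range):

P := plot(F, 0 .. 1):
dat := plottools:-getdata(P):    # ["curve", ranges, Matrix of the points actually computed]
M := dat[3]:
tnew := Vector(1000, i -> (i - 1)*1.0/999, datatype = float[8]):
vnew := CurveFitting:-ArrayInterpolation(M[.., 1], M[.., 2], tnew, method = spline):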

It's difficult to suggest more without knowing details of F or the resulting plot.

acer

@Preben Alsholm One of the important uses of the condition number is in estimating the error in a numerically computed solution to a linear system. For example this lapack doc page illustrates using

ERRBD = EPSMCH / RCOND

for estimating a bound on the error, where EPSMCH is "machine epsilon". I believe that is why the nag doc page you cite focuses on which side RCOND might be inaccurate, and by how much. (People often want to know how many decimal digits are correct, etc.)
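For a rough illustration of that bound in Maple (a Hilbert-like Matrix is used here only because it's a familiar ill-conditioned example):

A := Matrix(8, 8, (i, j) -> 1.0/(i + j - 1), datatype = float[8]):
rcond := 1/LinearAlgebra:-ConditionNumber(A, 1):
errbd := evalhf(DBL_EPSILON)/rcond;     # EPSMCH/RCOND, as in that lapack doc page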

Perhaps LinearAlgebra:-ConditionNumber might be modified for the real/complex float case where the norm p=1 or p=infinity, so as to offer both the definition [ie, the explicit formula norm(A)*norm(inverse(A))] as well as lapack dgecon/zgecon. Caution may well still be required when computing via the definition though, as the conditioning of the Matrix is going to affect the accuracy of the intermediate Matrix inverse result!

And of course, documentation. All these details could be described and illustrated by example.

 

@Antonvh Your Question makes it clear that you've already discovered "Tools ->Options ->Precision -> Round screen display", which is also provided by interface(displayprecision=...) .

But your "annoying" example seems to be covered by the suggestion of Patrick T. in that page cited by Markiyan. eg,

interface(displayprecision=5):
interface(typesetting=extended):
Typesetting:-Settings(striptrailing=true):

I used 5 for your X.

Would it be possible to put a vertical axis on (at least) one of those, so that we could see how they match the wavelength values?

acer

@Preben Alsholm I agree. Setting infolevel[LinearAlgebra]:=1 shows that in the float case it uses f06raf (lapack dlange,  which computes the norm of the Matrix), followed by f07adf (lapack dgetrf, which computes the LU decomposition) and then f07agf (lapack dgecon, which estimates the condition number using those intermediary results).

Online documentation of the lapack function dgecon says,

An estimate is obtained for norm(inv(A)), and the reciprocal of the
condition number is computed as
    RCOND = 1 / ( norm(A) * norm(inv(A)) )

And online documentation of nag f07adf cites,
   Higham N J (1988) FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation ACM Trans. Math. Software 14 381–396

So it looks as if the discrepancy is that the estimation of norm(inv(A)) by dgecon is not always as accurate as it could be.

Regardless of whether that norm(inv(A)) estimate is useful (perhaps it is often adequate for commonplace purposes such as bounding error estimates, counting correct decimal digits, etc), it might be good if LinearAlgebra:-ConditionNumber provided the functionality to force the choice of method.

That is, in the 1-norm and infinity-norm floating-point case ConditionNumber could offer the choice of the current (presumably faster?) estimate using lapack dgecon or the use of the explicit formula norm(A) * norm(inv(A)) .
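Something like the following would let one compare the two (just a sketch, with a random float Matrix standing in for an example):

B := LinearAlgebra:-RandomMatrix(50, 50, generator = -1.0 .. 1.0,
                                 outputoptions = [datatype = float[8]]):
est := LinearAlgebra:-ConditionNumber(B, 1);     # the dgecon-style estimate
expl := LinearAlgebra:-Norm(B, 1)*LinearAlgebra:-Norm(LinearAlgebra:-MatrixInverse(B), 1);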

 

@akiel123 You've ignored most if not all of my points and suggestions.

1) Your defined operator F__Msl does not compute F__s1a(something) * b__1(somethingelse) , as you claim.

Instead it is computing F__s1a * (something) * b__1(somethingelse) .

I wrote as much in my Answer above.

2) In consequence of the above syntax mistake your operator F__Msl does not return a numeric value when it is called with numeric arguments. I suggested in my Answer that you try that yourself. If you did you'd likely discover the above problem.

3) You didn't answer why you are trying to use lowercase minimize and maximize instead of uppercase Minimize and Maximize from the Optimization package you loaded and used earlier. [edit] Of course if you intend Maximize/Minimize then something will need adjustment, to obtain objective values.

4) You haven't explained how you intend the constraints to be used in your line that currently calls minimize and maximize. Did you somehow expect them to affect all three calls? (They don't; see the small example below.)
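For what it's worth, with Optimization the constraints must be supplied explicitly to each call that is supposed to use them, e.g. (a toy example, not your objective):

Optimization:-Minimize(x^2 + y^2, {x + y >= 1}, x = 0 .. 2, y = 0 .. 2);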

There doesn't seem to be much point in discussing the sqrt's while there are these other issues.

In your latest worksheet an earlier call to your OptimizeSpring returned a numeric result. I don't know whether you want local or global optima. But even if you choose to use DirectSearch instead of Optimization you'd have the same issues as mentioned above in your later computation.

In my opinion the heavy use of operators has just made your whole sheet much more difficult to navigate and work with. It seems that at least some of them are unnecessary and have led to some of the confusion. The calls to evalf don't help efficiency either (you can get float evaluations for your computations without them, and without them and the operators the optimization could perform better under evalhf). And 2D Math seems to have led to problems too.

I'm sorry but I've spent all the time I can on this.

In your definition of the operator assigned to F__Msl you have

F__s1a*(l__s1(theta__3), l__s1(theta__4))*b__1(theta__1, theta__2, theta__3)

because of a space between F__s1a and the opening bracket that follows it. That space gets parsed as an implicit multiplication.

I don't understand what you're trying to do in that computation for `Bestangles`. Which calculations are the constraints supposed to affect? Do you really intend `minimize` rather than `Minimize`? Are your bounds intended to keep things real?

What about the earlier errors, with `theta` in OptimizeSpring?

Is it really necessary to take (so many) square roots?

Perhaps before attempting to optimize you could ensure that the objective behaves as you intend for simple numeric inputs?

acer

@John Dolese I wrote float[8] not evalf[8], meaning the data type for the Matrices and Arrays.
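For example:

M := Matrix(500, 3, datatype = float[8]):      # hardware double-precision storage
A := Array(1 .. 1000, datatype = float[8]):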

Yes, you'd have to adjust your approach in order to use evalhf. Probably a good idea anyway.

Combining into fewer, well interacting procedures does not necessarily bring performance improvements on its own. But it does let you leverage Maple's acceleration techniques to greater effect. E.g. evalhf. 

Tom and I seem to be letting you know the same kind of thing wrt atan and atan2.

I would set a target of doing 100s if not 1000s of wavelength-to-colour conversions within a few seconds, if not less.

GambiaMan, every time you ask a Question here you add a great many meaningless small tags to your post.

Could you please stop doing that?

acer

@fbackelj Right. This side discussion about Normalizer is about only the possible forms of valid exact results for your example. I hope that no other readers accidentally get the idea that I'm suggesting something necessary to get a valid exact result equivalent to exact 120 for your example.

For your example:

1) If you leave it as the default Normalizer=eval(normal) then LinearAlgebra:-LUDecomposition detects that and examines the Matrix. It finds the trig terms in your example Matrix and assigns Normalizer:=(x -> normal(simplify(x, 'trig'))). This results in a valid but more complicated answer, but has the virtue that Testzero is stronger and more likely to not use hidden zeros as pivots because it can reduce the intermediary expressions involving trig calls. That virtue may not bear on your example Matrix.

showstat(LinearAlgebra:-LUDecomposition,241..242); # Maple 2015.0

 241     if eval(Normalizer) = eval(normal) then
 242       Normalizer := DeduceNormalizer(mU)
         end if;

2) If you force the assignment Normalizer:=(x->normal(x)) then LinearAlgebra:-LUDecomposition detects that it's been assigned away from the default, since now evalb(eval(Normalizer) = eval(normal)) returns false, and it respects your choice and does not automatically examine whether it should make Normalizer something stronger. This happens to result in a less complicated valid result for your particular example Matrix.

As I mentioned, notice that these are not identical,

evalb( eval(x->normal(x)) = eval(normal) );

                                     false

If in situation 2) above you don't also reassign to Testzero then Testzero will only be as strong as your forced choice of Normalizer. And of course normal is not generally suitable for detecting that expressions containing trig calls may be zero. So it's dubious to assign to Normalizer without considering whether Testzero remains strong enough.
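For instance, if one did insist on the weaker Normalizer then one might pair it with a stronger Testzero, along these lines (just a sketch):

Normalizer := x -> normal(x):                     # only as strong as normal
Testzero := x -> evalb(simplify(x, trig) = 0):    # but keep zero-detection stronger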

So I did something subtle in my followup comment. I dumbed down Normalizer to be only as strong as normal, but without it actually being normal. I was interested in seeing whether ConditionNumber would produce a different or more compact result. I was deliberately disabling the automatic promotion of Normalizer as done by LUDecomposition when Normalizer actually is identically normal. I wanted Normalizer as strong as normal, without actually being identical to normal.

Within LUDecomposition (for the exact symbolic case) Testzero is used to test whether candidate pivots are nonzero, while Normalizer is used during row-reduction to keep down expression-swell.

 

@fbackelj You still sound like you think it is not working "fine" in the exact case, by default. Such a claim, which you originally put forth in your Question, is not true.

In fact your posted example did not exhibit a bug in ConditionNumber. The exact result you got for n=5 is, well, exactly equal to 120.

And ConditionNumber is returning a valid exact result that can be simplified (somehow) to exact 120, whether one uses the default Normalizer or not.

Your Question did exhibit the fact that the simplify command is not powerful enough on its own to reduce every constant trig expression to a rational (if it happens to equal such).

I changed Normalizer only to show that a more compact exact result could be attained. That doesn't affect the fact that the default answer is also valid in the exact case. In both cases an expression is returned which involves constant trig terms, and which some form of exact simplification can reduce to 120.

Sorry if you found my remarks about Normalizer unclear.

In LinearAlgebra one can determine that MatrixInverse and LUDecomposition call an internal DeduceNormalizer routine.

That DeduceNormalizer routine checks whether eval(Normalizer)=eval(normal) and if so it examines the types of the Matrix entries and tries to set Normalizer appropriately. This happens automatically. And when it happens then Testzero is strengthened by virtue of the fact that by default Testzero calls Normalizer. The old linalg package does not do this, and it will produce some incorrect results if it uses a hidden zero (not detectable by just normal) as an elimination pivot.

When I set Normalizer:=x->normal(x) then I am making Normalizer only as strong as normal, but (and this is key) it is no longer true that evalb(eval(Normalizer)=eval(normal)) will return true. So in doing that I have disabled the safer computation mode of LinearAlgebra. For your example that may not affect correctness, as a hidden-zero pivot situation may not have been around, but it does affect the size of the exact result. That is all. I was trying to say that if one does (unwisely!) forcibly dumb down Normalizer in order to try and obtain such more compact exact results then one had better (!) forcibly strengthen Testzero so that it does not stay as weak as normal.

In other words: If I leave Normalizer=eval(normal) then LinearAlgebra will detect that default situation and attempt to strengthen Normalizer (and Testzero, in consequence) to protect against accidentally using a hidden zero as an elimination pivot. But if I assign anything else to Normalizer then LinearAlgebra will recognize that and use my assigned choice, and in that case I should prudently ensure that Testzero is adequately strong. If I don't change Normalizer and Testzero away from their defaults then LinearAlgebra attempts to strengthen them.

Note that x->normal(x) is not the very same procedure as normal. They have different addresses and are distinct under evalb. They happen to do the same thing, of course.
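One can see that directly:

evalb( addressof(eval(x -> normal(x))) = addressof(eval(normal)) );

                                     false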

Are you just trying to test whether any entry of a Matrix is negative? Is each of the entries constant?

Or do you need to detect the minus sign inside an entry like, say, exp(-x) ?

acer

What do you see if you lprint it?

acer

@akiel123 Did you intend Pi/4 instead of 90 inside your calls to tan? Maple's trig functions work with radians.
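For example:

tan(Pi/4);              # exact, argument in radians
tan(45*Pi/180);         # 45 degrees, converted to radians explicitly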

Whether 90 or Pi/4, some of your constraints are evaluating to impossible inequalities, such as 0 <= -6 . You'll need to figure those out and correct them.
