Carl Love

19361 Reputation

24 Badges

8 years, 20 days
Mt Laurel, New Jersey, United States
My name was formerly Carl Devore.

MaplePrimes Activity

These are replies submitted by Carl Love

Acer said:
> I'm not sure that I understand why Digits=180 is necessary.

Perhaps when the Asker said that the solutions were arbitrarily close, s/he meant that they could not be distinguished at a lower value of Digits. Perhaps an approach is needed where Digits is gradually pushed up until they can be distinguished. And perhaps this theorem will be useful:

 A root r of f is a multiple root iff D(f)(r) = 0.

So, if D(f)(r) is close to 0 (say, fnormal maps it to 0), then increase Digits and redo the fsolve in a very narrow window, while avoiding r.
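For concreteness, here is a rough, untested sketch of that loop, in which f (two simple roots 1e-9 apart), the window width, and the doubling of Digits are all made-up choices, not a tested recipe:

f := x -> (x - 1)*(x - 1 - 1e-9):   # made-up f: two simple roots 1e-9 apart
r := fsolve(f(x), x = 0 .. 2);
while fnormal(D(f)(r)) = 0. do      # derivative numerically 0: suspect a near-duplicate root
    Digits := 2*Digits;             # push up the working precision
    r2 := fsolve(f(x), x = r - 1e-3 .. r + 1e-3, avoid = {x = r});
    if not type(r2, numeric) then break end if;  # none found: r is likely a true multiple root
    r := r2
end do;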

I need to see a worksheet or a more-detailed code snippet to tell you what's going wrong. In particular, I need to see where you assign the values of t2, q, and especially sol. Also, I'd like to explicitly see two executions of Az(r) that produce different results. What you describe is definitely not the intended behavior. Maple should be able to handle the large numbers.

Kitonum: You need to divide by n-1 for the sample variance as opposed to the population variance.
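For example, with some made-up data (and, if I recall correctly, Statistics:-Variance already uses the n-1 divisor):

with(Statistics):
X := [2., 4., 4., 4., 5., 5., 7., 9.]:    # made-up sample
n := nops(X):
Variance(X);                              # sample variance: divides by n-1
add((x - Mean(X))^2, x = X)/(n - 1);      # the same thing, computed directly
add((x - Mean(X))^2, x = X)/n;            # population variance: divides by n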

Well, you've reached that 10-point threshold.

@AliKhan But what you call C in your original post is the "simple addition" of A and B. Could you describe more explicitly the pattern that you want?

You call that an example? Please provide example values for the parameters lambda, n, and p such that RootFinding:-Analytic does not find all the roots.

How about posting an example?

@acer It's easy to account for any duplication of the sort that your example shows. Continuing right from the end of your example:

length(sprintf("%m", eval({a,c})));

                      78914

So, to check the total memory used by any group of variables, use the %m trick on the set of them.
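For instance (a made-up illustration, with two names sharing one Matrix; the exact length will vary):

A := LinearAlgebra:-RandomMatrix(100):
B := A:                                   # B and A share the same Matrix
length(sprintf("%m", eval({A, B})));      # the shared Matrix is serialized once
length(sprintf("%m", eval(A))) + length(sprintf("%m", eval(B)));  # double-counts it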

You asked the same question before, on 13 Jan 2013, and I answered it. Did you try using that answer?

@PatD 

I did what I suggested: converting each expression to an optimized procedure, and then using fsolve in list-of-procedures mode. It still returned unevaluated, but after only about five minutes, because the optimized procedures evaluate about 200 times faster (if I recall correctly) than the original expressions. I don't know how long it took for you. I think that the trouble is numerical instability, as you suggest. I'm guessing that what's needed is to have the internal computations done at a higher precision than what's needed for the final result. As far as I can see at ?fsolve,details, fsolve has no such capability. But I think that there are other packages, like DirectSearch, that offer that capability.
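In outline, what I did was something like this, with trivial made-up stand-ins here for your actual expressions:

e1 := x^2 + y^2 - 4:     # stand-ins for the real system
e2 := exp(x) + y - 1:
F := codegen[optimize](codegen[makeproc](e1, [x, y])):  # expression -> optimized procedure
G := codegen[optimize](codegen[makeproc](e2, [x, y])):
fsolve([F, G]);          # fsolve in list-of-procedures mode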

I will continue this in the new thread that you started.
