pagan

5147 Reputation

23 Badges

17 years, 122 days

 

 

"A map that tried to pin down a sheep trail was just credible,

 but it was an optimistic map that tried to fix a path made by the wind,

 or a path made across the grass by the shadow of flying birds."

                                                                 - _A Walk through H_, Peter Greenaway

 

MaplePrimes Activity


These are replies submitted by pagan

@hirnyk I meant while keeping the originally supplied range. With the range kept, what I wrote accurately describes what happens. When I wrote about providing an initial point or not, I meant to imply that other options, such as the supplied range, would remain the same. I don't always bother to say, "all else staying the same."

I think there's interplay amongst the presence of a range option, the presence of an initial-point option, whatever starting points fsolve's Newton iteration may generate internally, and whether it may converge to something other than what's in the `avoid` option. I don't think the SCR form will suffer if all doubtful cases are logged. Let's use that instead of nitpicking.
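As a toy illustration of that interplay (my own example, not from the thread): a quadratic with two real roots, where the supplied range and the avoid option together steer which root fsolve returns.

```maple
# Toy example: x^2 - 4*x + 3 has roots 1 and 3, both inside the range.
f := x^2 - 4*x + 3:
S1 := fsolve(f, x = 0 .. 4);                   # one root in the range
S2 := fsolve(f, x = 0 .. 4, avoid = {x = S1}); # the other root, same range
```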

It looks like it may be an old bug (Maple 7). The behaviour may be better now, but that doesn't help you right now with your version.

The improved modern version (Maple 14) can find S2 different from S1 (if a different initial point, or none at all, is supplied).

And the M14 syntax also allows for avoid={S1,S2} so as to keep looking. But that syntax generates a usage error in M7.

The best I can think of is to supply differing initial points (chosen randomly, within the valid ranges, say). For example, if I remove the initial-point optional argument, and replace by just {a,b,c,d} then I get a different solution than S1. (That's on Maple 7 on Solaris.)

I tried setting _EnvAllSolutions:=true and then `solve`, but it's taking a long time to return.
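A rough sketch of that random-restart idea, for a system in the unknowns a, b, c, d. The equation set `eqs` and the 0 .. 10 ranges are placeholders for the real problem; fsolve returns unevaluated when it fails, which the type check filters out.

```maple
# Sketch with placeholder names: eqs is the set of equations, and each
# unknown is assumed to lie in 0 .. 10; adjust both to the real problem.
r := rand(0 .. 10):
sols := {}:
for i to 25 do
    s := fsolve(eqs, {a = r(), b = r(), c = r(), d = r()});
    if type(s, set) then          # fsolve returns unevaluated on failure
        sols := sols union {s};
    end if;
end do:
sols;
```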

Did you omit the word `fsolve` in those calls?

Could you post the equations, or upload them as a worksheet (green up-arrow in the editing menu bar)?

Using reflect alone is not adequate, in that it does not meet the requirement that the numbering along the y-axis be reversed.
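For concreteness, a small sketch of the issue (my own example): plottools:-reflect mirrors the geometry about the x-axis, but the y-axis tick labels still increase upward, so the numbering requirement is not met by reflect alone.

```maple
with(plots): with(plottools):
p := plot(x^2, x = 0 .. 2):
# The curve is mirrored about the x-axis (the line through [0,0] and
# [1,0]), but the y-axis numbering is not reversed; relabelling the
# ticks would be a separate step.
display(reflect(p, [[0, 0], [1, 0]]));
```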

You might also want to make the end-points for the a-slider and b-slider floating-point values, i.e. 10.0 instead of the exact integer 10. That will allow a and b to vary more smoothly, rather than taking only integer values. This is not related to the check-box "floating-point computation" in the Explore pop-up.

Notice what happens when, for some fixed a, b slides from being less than a to being greater than a. And vice versa.
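As a sketch of the end-point idea, using a stand-in expression of my own (in more recent Maple versions the parameter ranges can be given programmatically; in older versions the same end-points are typed into the Explore pop-up):

```maple
# Floating-point end-points (0.0 .. 10.0) let a and b vary smoothly;
# integer end-points (0 .. 10) would restrict the sliders to integers.
Explore(plot(sin(a*x) + cos(b*x), x = 0 .. 2*Pi, view = [0 .. 2*Pi, -2 .. 2]),
        parameters = [a = 0.0 .. 10.0, b = 0.0 .. 10.0]);
```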

Modifying the tags on an entry, or changing it between being a Question/Post/Comment, or branching it off from its parent post, or even deleting it (see spam), are sometimes useful changes.

But modifying the content of someone else's post should not be allowed.

(Offensive partial content should be handled by offering the submitter the chance to modify, or to edit and repost following temporary deletion.)

See "moral rights" as discussed here, for example.

An acceptable exception to this might possibly be when the system software has accidentally removed the original formatting, and the edit is purely aimed at re-establishing the original layout.

The question was about symbolic computation, while the current CUDA functionality relating to the LinearAlgebra package (eg. Matrix-Matrix multiplication) is entirely floating-point.

@alex_01 Maybe you could set the constraints as w[i] >= eps where you take for eps approximately the same value as for feasibilitytolerance (whether implicitly its default, or explicitly supplied)? Maybe that would be enough to ensure that w[i] are all nonnegative? (Worth a try, perhaps...).

I'm guessing that there may be lots of rapid change for some w[i] near zero, in which case you might want to allow the w[i] to attain 0.0 very closely from above but never get below it.

Certainly it looks like you really want to keep Digits<=evalhf(Digits) so that it sticks with fast hardware double precision computations externally.
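A minimal sketch of that suggestion, with a placeholder objective, placeholder n, and a guessed eps; only the eps-bounded constraints and the matching feasibilitytolerance are the point here.

```maple
with(Optimization):
eps := 1e-6:   # roughly the feasibility tolerance; a guessed placeholder
n := 3:        # placeholder problem size
obj := add((w[i] - 1.0/n)^2, i = 1 .. n):      # stand-in objective
cons := {add(w[i], i = 1 .. n) = 1.0,
         seq(w[i] >= eps, i = 1 .. n)}:        # keep each w[i] above eps
NLPSolve(obj, cons, feasibilitytolerance = eps);
```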
