pagan

5147 Reputation

23 Badges

17 years, 122 days

 

 

"A map that tried to pin down a sheep trail was just credible,

 but it was an optimistic map that tried to fix the path made by the wind,

 or a path made across the grass by the shadow of flying birds."

                                                                 - _A Walk through H_, Peter Greenaway

 

MaplePrimes Activity


These are replies submitted by pagan

It's quite likely that Maple implements well-known specific methods for summation of floating-point numbers (eg. this). But those methods might only get used if your program is written in a particular subset of the possible ways.

In Maple, there are usually many different ways to code a task, and some of those ways might sidestep any careful summation implementations.

For example, we don't know whether you have used `sum` or `add`. And there may be other relevant aspects, relating to `subs`, `eval`, or perhaps many other things.

Sometimes the best way to figure out what's going on is to use Maple's debugger or tracing facilities, or post the code and hope that someone else might do that.
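To make "careful summation" concrete: one well-known family of such methods is compensated summation. Here is a minimal Python sketch of Neumaier's variant of Kahan summation -- an illustration of the general technique, not a claim about what Maple actually implements internally:

```python
def neumaier_sum(xs):
    """Compensated (Neumaier) summation: track the low-order bits
    that plain floating-point addition would otherwise discard."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for x in xs:
        t = total + x
        if abs(total) >= abs(x):
            comp += (total - t) + x  # low-order bits of x were lost
        else:
            comp += (x - t) + total  # low-order bits of total were lost
        total = t
    return total + comp

xs = [1e16, 1.0, -1e16]
print(sum(xs))           # 0.0 -- the 1.0 is absorbed and lost
print(neumaier_sum(xs))  # 1.0 -- the compensation recovers it
```

Naive left-to-right summation loses the 1.0 entirely when it is added to 1e16; the compensated version carries it along and restores it at the end.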

Here is a very simple example where the order in which the addition is performed can bring about a difference for floating-point numbers. The difference does not manifest itself for exact quantities.

The scaling by 'd' is only there below to show that the overall effect can be scaled, and hence appear at any given level of inaccuracy. But it is the computation of a+b-c whose result importantly differs.

> restart:

> temp := a+b: (temp - c)/d; # (a+b-c)/d
                                   a + b - c
                                   ---------
                                       d
 
> temp := a-c: (temp + b)/d; # (a+b-c)/d
                                   a + b - c
                                   ---------
                                       d
 
> a:=0.7:
> b:=0.1e-14:
> c:=0.7:
> d:=0.1e-6:

> temp := a+b: (temp - c)/d; # (a+b-c)/d
                                      0.
 
> temp := a-c: (temp + b)/d; # (a+b-c)/d
                                              -7
                               0.1000000000 10

The next thing to realize is that symbolic terms in a DAG (Maple's internal object structure) that represents a sum may be ordered by memory address, at least insofar as how they get added together following evaluation to values.

In the example above, the order in which the terms a, b, and -c get added together is forced to vary, by doing the addition in two steps. That's just to show how the order affects the result. In a Maple expression involving the above symbolic subterms, their memory addresses can come into play -- i.e. the pair with the lowest addresses in memory might get added together first, and the pair for which that is true might differ in a new session.

This is a case of a remaining aspect of "session dependence" in some computations in Maple. A lot of that kind of thing was fixed when Maple's "set" structure lost its session dependence on memory address ordering of members. But some other kinds of the behaviour are still present.
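The same order dependence exists in hardware binary doubles, not just Maple's decimal software floats. A small Python analogue of the Maple session above, with values chosen to make the effect show up in doubles:

```python
a, b, c = 1e16, 1.0, 1e16

# (a + b) - c: the small b is absorbed when added to the much larger a
print((a + b) - c)  # 0.0

# (a - c) + b: the large terms cancel first, so b survives
print((a - c) + b)  # 1.0
```

Mathematically both expressions equal 1, but the first rounds b away before the cancellation can expose it.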

Isn't it a bad idea to write out to any file under your Maple installation, unless it's to a subfolder that's specifically for the purpose of handling user data? Why risk messing up your Maple installation with an unlucky file name choice, or with any other related unlucky possibility?

Why not set currentdir to something more sensible, like under your home directory, and put that in your initialization file?

It's a Bad Thing if Maple has such a silly default for currentdir when launching from the desktop shortcut. I call it a bad bug, waiting to bite the hapless.

Maple is trying to write to that file name in the directory/folder returned by the command currentdir(). You can use that same command, with an argument, to change the "working directory" location.

It is silly if Maple is using something like bin.XXX under kernelopts(mapledir) as the default currentdir(). Wouldn't it be much better to use a user-writable folder under kernelopts(mapledir), or even kernelopts(homedir)?
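The same habit applies outside Maple too. A hedged Python analogue of pointing the working directory at the user's home folder, so that data files never land inside an installation tree:

```python
import os

# switch the working directory to the user's home folder,
# rather than whatever directory the application launched from
os.chdir(os.path.expanduser("~"))
print(os.getcwd())
```

In Maple the equivalent one-liner would use currentdir together with kernelopts(homedir), placed in the initialization file.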

I forgot to say: I was disappointed that I couldn't issue the following, as an efficient 2D plot analogue of what's possible for a 3D plot (GRID).

plots:-pointplot(M,color=COLOR(RGB,Zc)):

Could it be that you used rhs() when applying f (the first suggested item), but forgot to wrap the points in a list (the second suggested item)?

There are lots of ways to do effectively the same thing.

> A:=Array(1..5,1..5,rand(1..4)):

> ArrayTools:-AddAlongDimension(A,1)-Array(1..5,i->A[i,i]);
                             [ 7 8 11 12 10 ]

> seq(add(A[i,j],i=1..j-1)+add(A[i,j],i=j+1..5),j=1..5);
                              7, 8, 11, 12, 10

> seq(add(A[i,j],i in {$1..5} minus {j}),j=1..5);
                              7, 8, 11, 12, 10
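For comparison, the same "column sums excluding the diagonal" computation can be written in NumPy (a Python analogue of the Maple above, not Maple itself):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(1, 5, size=(5, 5))  # analogue of rand(1..4)

# sum each column, then subtract that column's diagonal entry
result = A.sum(axis=0) - np.diag(A)

# the same thing with explicit loops, as a sanity check
check = [sum(A[i, j] for i in range(5) if i != j) for j in range(5)]
assert list(result) == check
```

As in the Maple versions, the vectorized form and the explicit double loop agree.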

You wrote that you wanted to take clusters ("max area") of spikes and get the "general max" for them.

So, how far apart do the outermost spikes from two distinct clusters have to be for them to get correctly identified as being from two distinct groups? Let's suppose the answer to that is distance X.

Now, what happens if the end spikes from two distinct clusters fall at less than X distance apart? The algorithm would (mis)identify all spikes from both clusters as being in just one single group. And hence it would identify just a single "general max" despite the fact that there were really two cars (clusters of spikes) present.

If you take X too large, then distinct cars coming close together can get misidentified as being a single car.

Now what happens if you take X as very small? If X is too small, then spikes from a single cluster may fall more than X apart. In that case, such spikes would (by definition of X) get identified as being from separate clusters, and so a single car's spikes could get misidentified as being due to several apparent cars.

So X should be neither too small nor too large, if the "general max" of the clusters is to be correctly identified with the right number of cars. So my question is: what size should X be (not just for your cited .wav file, but for any other data you intend to analyse)?

Now, the distance apart of spikes is not the only possible criterion that can resolve this. Bounding some higher derivative(s) of the smoothing curve might also suffice (but be trickier to specify). Or, in the car example, you might be able to state a range of acceptable cluster widths on the basis of the pitch and the known car speeds.
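The distance criterion described above can be sketched concretely. A minimal Python illustration (the helper name `cluster_spikes` is hypothetical, not from the original thread) of splitting sorted spike positions into clusters wherever a gap exceeds X:

```python
def cluster_spikes(positions, X):
    """Split sorted spike positions into groups separated by gaps > X."""
    clusters = [[positions[0]]]
    for p in positions[1:]:
        if p - clusters[-1][-1] <= X:
            clusters[-1].append(p)  # within X of the last spike: same cluster
        else:
            clusters.append([p])    # gap exceeds X: start a new cluster
    return clusters

spikes = [1.0, 1.2, 1.4, 5.0, 5.3]
print(cluster_spikes(spikes, 1.0))   # two clusters: two cars
print(cluster_spikes(spikes, 10.0))  # X too large: one apparent car
print(cluster_spikes(spikes, 0.1))   # X too small: five apparent cars
```

The three calls exhibit exactly the trade-off discussed: the middle value of X recovers the two clusters, while the extremes merge or shatter them.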

You might be able to use CurveFitting:-ArrayInterpolation for this task. You shouldn't have to convert to lists.
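The basic idea -- interpolating directly on array data, with no conversion to lists -- looks like this in NumPy terms (a Python analogue, not Maple's ArrayInterpolation routine):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = x ** 2                       # sample data, kept as arrays throughout
xi = np.array([0.5, 1.5, 2.5])   # points to interpolate at

yi = np.interp(xi, x, y)         # piecewise-linear interpolation
print(yi)  # [0.5 2.5 6.5]
```

Everything stays in array form from input through output, which is the efficiency point being made about ArrayInterpolation.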
