Carl Love

Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

@Alex Joannou You should probably make a separate question. Replies to Answers do not make a thread come to the top of the Active Conversations queue, which makes it difficult to follow the thread. Also, Replies are not searchable AFAIK.

Can you show an example of what you mean by partitioning an integer into sets? By an example, I mean: can you take a small integer n and list here all the structures that you want to count? It's possible that you want to count Compositions rather than Partitions.

with(combstruct);
count(Partition(10));
allstructs(Partition(10));

See ?combstruct,structures .
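For contrast, here is a Python sketch (illustrative only; the function names are mine, not part of any library) that enumerates both kinds of structures for a small n. The partitions of 5 number 7, while its compositions, where order matters, number 16; and the count of 42 that combstruct reports for Partition(10) matches the enumeration below.

```python
def partitions(n, max_part=None):
    """Yield the integer partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def compositions(n):
    """Yield the compositions of n (ordered sums) as tuples."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

print(len(list(partitions(5))))    # 7 partitions of 5
print(len(list(compositions(5))))  # 16 compositions of 5
```

If the structures you want distinguish the order of the parts, you are counting compositions, and the counts grow much faster (2^(n-1) instead of p(n)).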

@casperyc Your two-stage method and my two-stage method may be effectively the same thing. Either way, it's much, much better time-wise than passing all the polynomial substitutions to simplify at the same time.

@Alejandro Jakubi wrote:

If that complexity is something inherent to the polynomial rather than a measure of how the polynomial is written, does it mean, for instance, that it remains invariant under a linear change of variables?

I am not sure about that. But if expand(p - q) = 0 is true, then p and q have the same complexity, by my measure. Note that SolveTools:-Complexity does not have this property.

My notion of complexity is heuristic at this point. But the idea clearly has some merit, since it makes the simplify commands run thousands of times faster, and the results are obviously fully simplified with respect to the side relations. If you take a look at the worksheets in this thread, I think that you'll see what I mean.
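To illustrate the property claimed above, here is a hedged Python sketch (my own toy representation, not Carl's actual measure or SolveTools:-Complexity): if a polynomial is held in a canonical expanded dict form, then any two polynomials p and q with expand(p - q) = 0 have identical dicts, so any complexity computed from that form, such as the number of nonzero terms, is automatically invariant under rewriting.

```python
from collections import defaultdict

# Toy canonical representation of a univariate polynomial in x:
# a dict {exponent: coefficient} with zero coefficients dropped.

def poly_mul(p, q):
    """Multiply two polynomials in dict form, returning the expanded product."""
    r = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] += c1 * c2
    return {e: c for e, c in r.items() if c != 0}

def complexity(p):
    """A rewriting-invariant measure: term count of the expanded form."""
    return len(p)

x_plus_1 = {0: 1, 1: 1}            # x + 1
sq = poly_mul(x_plus_1, x_plus_1)  # (x + 1)^2, expanded
explicit = {0: 1, 1: 2, 2: 1}      # x^2 + 2*x + 1

print(sq == explicit)   # True: identical canonical forms
print(complexity(sq))   # 3 terms either way
```

Because (x+1)^2 and x^2+2x+1 reach the same canonical dict, they necessarily get the same complexity, which is exactly the property that a measure sensitive to how the expression is written would lack.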

@Jimmy You may be able to do something by incorporating the logarithm directly into the model, rather than applying it to the plot after the fact. For example, the model is currently

i0*(exp(1000*(v-i*rs)/n0/(259/10))-1)-i = 0

You could make that

ln(i0*(exp(1000*(v-i*rs)/n0/(259/10))-1)) - ln(i) = 0

This might distribute the error more evenly along the curve. I am not sure.
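To see why the logged model might help, here is a small Python sketch (synthetic numbers, not Jimmy's data; rs is neglected and the parameter values are made up) comparing raw and log-scale residuals for a diode-like exponential model. A uniform 5% relative error produces raw residuals spanning orders of magnitude, while the logged residuals are essentially constant.

```python
import math

def model(v, i0=1e-9, n0=1.5, vt=25.9e-3):
    # Simplified diode equation i = i0*(exp(v/(n0*vt)) - 1); rs neglected
    return i0 * (math.exp(v / (n0 * vt)) - 1)

# Synthetic data: each measured current is 5% above the model value
data = [(v, model(v) * 1.05) for v in (0.2, 0.4, 0.6)]

raw = [model(v) - i for v, i in data]
logged = [math.log(model(v)) - math.log(i) for v, i in data]

print(raw)     # spans many orders of magnitude; the largest current dominates
print(logged)  # roughly -log(1.05) at every point
```

A least-squares fit of the raw form would be driven almost entirely by the high-current points, whereas the logged form weights all points on a relative scale, which is what "distributing the error more evenly along the curve" amounts to.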

Kitonum wrote: Carl, your code is compact and elegant, but it works too slowly. Can you explain why?

No, I can't explain it. It uses two library procedures: `convert/base` and ListTools:-Reverse. All the rest is kernel. The library procedures are very simple. It would be possible to analyze it to figure out how much time is spent by the library procedures. Although parse is kernel, one must assume that it is quite complex.
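I don't know the exact task the original code performed, but as a loose analogue of the `convert/base` plus reverse plus parse pipeline, here is a hedged Python sketch (hypothetical function names, assuming a digit-reversal-style task) that does the digit work purely arithmetically, avoiding any string parsing step:

```python
def digits(n, base=10):
    """Digits of n, least-significant first (roughly what convert/base returns)."""
    ds = []
    while n:
        n, r = divmod(n, base)
        ds.append(r)
    return ds or [0]

def reverse_number(n):
    """Reverse the decimal digits of n arithmetically, with no string round-trip."""
    rev = 0
    for d in digits(n):           # least-significant digit first
        rev = rev * 10 + d        # so accumulating rebuilds the reversed number
    return rev

print(reverse_number(1230))  # 321
```

The point is only that assembling the result with integer arithmetic sidesteps the parse-a-string step entirely; whether that is where the Maple time actually goes would need profiling, as said above.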

A generalization: this bug manifests whenever there is division by a variable inside any index containing the summation variable, no matter how deeply it is buried:

sum(g(3+a[k+f(1/j)]), k= 1..n);

@Jimmy I am not sure if you are questioning that or just stating a fact. residualmeansquare will always be less than residualsumofsquares because residualmeansquare = residualsumofsquares / degreesoffreedom, and degreesoffreedom is an integer greater than 1 in any practical problem.
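A tiny numeric check of that relationship (made-up residuals and a hypothetical two-parameter fit, purely for illustration):

```python
# residualmeansquare = residualsumofsquares / degreesoffreedom,
# so whenever df > 1 the mean square is the smaller of the two.
residuals = [0.5, -1.0, 0.25, 0.75]
n_points, n_params = len(residuals), 2   # hypothetical fit with 2 parameters
df = n_points - n_params                 # degrees of freedom = 2

rss = sum(r * r for r in residuals)      # residual sum of squares
rms = rss / df                           # residual mean square

print(rss)        # 1.875
print(rms)        # 0.9375
print(rms < rss)  # True
```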

Are your "letters" actually entered as j[0], j[1], ..., j[15]? If they are not, would it be convenient to put them in that style?

Is it possible that any of the exponents are 0 or 1? Are the exponents all integers?

It is the expected behaviour: g(i) is 0 because i is not 1. The standard order of evaluation says that g(i) will be evaluated before being passed to the sum command. The i has no value at that time, so it is not 1.
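A Python analogue of the same evaluation-order issue (the names here are mine): the argument g(i) is evaluated eagerly, before the summation routine ever sees it, so an unassigned i is never equal to 1. Delaying evaluation, here by passing the function itself, which plays the role of Maple's unevaluation quotes 'g(i)', gives the intended result.

```python
def g(i):
    return 1 if i == 1 else 0

def summed(term, n):
    # term is invoked once per index value, so g sees each actual i
    return sum(term(i) for i in range(1, n + 1))

i = "i"              # stand-in for an unassigned symbolic name
premature = g(i)     # evaluated now, while i is not 1
print(premature)     # 0: the sum would then add up a constant 0
print(summed(g, 5))  # 1: only the i = 1 term contributes
```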
