Carl Love

28070 Reputation

25 Badges

13 years, 29 days
Himself
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

About 3/4 of the entries in your sequence have nonzero imaginary part. How do you want to handle that?

There is a difference between the computational precision and the display precision. The computational precision is controlled by the Digits environment variable. The display precision is controlled by interface(displayprecision= ...). Unfortunately, the display precision is measured as the number of digits after the decimal point rather than as the number of significant digits.
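
For example, a minimal sketch (the particular settings are arbitrary):

Digits:= 15:                        # computational precision: 15 significant digits
interface(displayprecision= 4):     # display precision: 4 digits after the decimal point
evalf(Pi);                          # computed to 15 digits, displayed as 3.1416
interface(displayprecision= -1):    # restore the default, which displays all computed digits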

@Alex Joannou You should probably make a separate question. Replies to Answers do not make a thread come to the top of the Active Conversations queue, which makes it difficult to follow the thread. Also, Replies are not searchable AFAIK.

Can you show an example of what you mean by partitioning an integer into sets? By an example, I mean: can you take a small integer n and list here all the structures that you want to count? It's possible that you want to count Compositions rather than Partitions.
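
For example, for n = 3, the partitions are 3, 2+1, and 1+1+1, whereas the compositions also count 1+2 separately from 2+1, giving 3, 2+1, 1+2, and 1+1+1.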

with(combstruct);            # load the combinatorial structures package
count(Partition(10));        # the number of partitions of 10
allstructs(Partition(10));   # list all of those partitions

See ?combstruct,structures .

@casperyc Your two-stage method and my two-stage method may be effectively the same thing. Either way, it's much, much faster than passing all the polynomial substitutions to simplify at the same time.
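
For the record, here is a minimal sketch of what I mean by a two-stage pass; expr, rels1, and rels2 are placeholders for your expression and your two groups of side relations, not your actual worksheet code:

stage1:= simplify(expr, rels1):     # first stage: simplify w.r.t. one group of side relations
result:= simplify(stage1, rels2):   # second stage: apply the remaining side relations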

@Alejandro Jakubi wrote:

If that complexity is something inherent to the polynomial rather than a measure of how the polynomial is written, does it mean, for instance, that it remains invariant under a linear change of variables?

I am not sure about that. But if expand(p - q) = 0 is true, then p and q have the same complexity, by my measure. Note that SolveTools:-Complexity does not have this property.

My notion is heuristic at this point. But the idea clearly has some merit, since it makes the simplify commands run thousands of times faster, and the results are obviously fully simplified with respect to the side relations. If you take a look at the worksheets in this thread, I think that you'll see what I mean.
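
As a crude illustration only (the length of the expanded form is just one possible expansion-invariant measure, not necessarily the one used in the worksheets):

Complexity:= e-> length(expand(e)):     # hypothetical expansion-based measure
p:= (x+y)^2:  q:= x^2 + 2*x*y + y^2:
evalb(Complexity(p) = Complexity(q));   # true, because expand(p - q) = 0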

@Jimmy You may be able to do something by incorporating the logarithm right into the model rather than by incorporating it into the plot after the fact. For example, the model is currently

i0*(exp(1000*(v-i*rs)/n0/(259/10))-1)-i = 0

You could make that

ln(i0*(exp(1000*(v-i*rs)/n0/(259/10))-1)) - ln(i) = 0

This might distribute the error more evenly along the curve. I am not sure.
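
Here is a minimal sketch of the comparison; the parameter values are purely hypothetical and are only meant to show that the second form measures relative rather than absolute error:

model:= i0*(exp(1000*(v-i*rs)/n0/(259/10))-1) - i:
logmodel:= ln(i0*(exp(1000*(v-i*rs)/n0/(259/10))-1)) - ln(i):
params:= [i0= 1e-12, rs= 0.5, n0= 1.5, v= 0.6, i= 1e-3]:   # hypothetical values
evalf(eval(model, params));       # residual on the original (absolute) scale
evalf(eval(logmodel, params));    # residual on the log (relative) scale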

Kitonum wrote: Carl, your code is compact and elegant, but it works too slowly. Can you explain why?

No, I can't explain it. It uses two library procedures: `convert/base` and ListTools:-Reverse. All the rest is kernel. The library procedures are very simple. It would be possible to analyze it to figure out how much time is spent by the library procedures. Although parse is kernel, one must assume that it is quite complex.
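
If one wanted to check, something like the following would time those library calls in isolation; the inputs are placeholders, not the data from this Question:

L:= [seq(k, k= 1..10^5)]:                          # placeholder list
CodeTools:-Usage(convert(123456789, base, 10));    # time and memory for `convert/base`
CodeTools:-Usage(ListTools:-Reverse(L));           # time and memory for ListTools:-Reverse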

A generalization: This bug manifests if there is any division by any variable in any index containing the summation variable, no matter how deeply the division is buried:

sum(g(3+a[k+f(1/j)]), k= 1..n);
