acer

32490 Reputation

29 Badges

20 years, 7 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Obviously a Plot0 component is needed, since my posted code accessed such. I explicitly mentioned the need for a Plot0 component in my original reply.

Also, for me the image updating worked with the code "as is", in Maple 13.01, without any extra DocumentTools:-Do. In fact, that's the whole point: DocumentTools:-Do does not have the ability to refresh incrementally.

acer

I think that one can make this approach faster without having to reduce Digits or the 'digits' option of evalf/Int. This might be done by ensuring that the result (a procedure!) of calling the integrand procedure is evalhf'able. That may be accomplished like so:

integrand:=proc(t)
    return subs({_n = n, _t = t}, eval(proc(x)
    local k;
        return x*(1 - add(binomial(_n, k)*(1/10*x - _t/10 + 1/2)^k*
        (1/2 - 1/10*x + _t/10)^(_n - k), k = 0 .. ceil(10 - 1/5*x) - 2));
    end proc));
end proc:

One can test that. (Using your original, the following would produce an error from evalhf, about lexical scoping.)

> n:=45:
> f:=integrand(10):
> evalhf(f(7));
                              4.91780284967950010

The reason this can be faster is that some of the compiled, external quadrature routines first try to make callbacks into Maple (to evaluate the integrand at a given point) in the faster evalhf mode.

With lexical scoping avoided, I got the following sort of performance,

> numericIntegral:=t->evalf(Int(integrand(t),t-5..t+5,epsilon=1e-5,method =_d01ajc)):

> st:=time():RootFinding:-NextZero(numericIntegral,-10);time()-st;
                                 -1.064932177
 
                                     0.601
 
> n:=35:
> st:=time():RootFinding:-NextZero(numericIntegral,-10);time()-st;
                                 -1.352617315
 
                                     0.695

Using the original, with evalhf disabled due to the lexical scoping in the proc returned by the integrand routine, I got the following performance, fifteen times slower.

> integrand:=proc(t)
>     return proc(x)
>      local k;
>         return x*(1 - add(binomial(n, k)*(1/10*x - t/10 + 1/2)^k*
>         (1/2 - 1/10*x + t/10)^(n - k), k = 0 .. ceil(10 - 1/5*x) - 2));
>     end proc;
> end proc:

> n:=45:
> f:=integrand(10):
> evalhf(f(7));
Error, lexical scoping is not supported in evalhf
> evalf(f(7));
                                  4.917802850

> st:=time():RootFinding:-NextZero(numericIntegral,-10);time()-st;

                                 -1.064932177
 
                                     9.065

Bytes-used went up by about 650MB during that last computation, i.e. from garbage collection during the software evalf fallback-mode callbacks. Now, I was unable to use fsolve with my approach, so I used NextZero instead. (I haven't investigated why.) I consider that an acceptable tradeoff.
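
As an aside, the specialization idea carries over to other languages. Here is a sketch in Python (names hypothetical, not anyone's posted code): the parameter values n and t are baked into the returned function at construction time, rather than left as free lexically scoped names, which is the analogue of the subs({_n = n, _t = t}, ...) trick above.

```python
import math

def make_integrand(n, t):
    # Bake n and t in as default-argument constants at construction time,
    # the Python analogue of substituting _n and _t via subs(...), so that
    # the returned function contains no free lexically scoped names.
    def g(x, n=n, t=t):
        # Maple's add(..., k = 0 .. ceil(10 - x/5) - 2) is inclusive,
        # hence range(0, ceil(10 - x/5) - 1) here.
        s = sum(math.comb(n, k)
                * (x/10 - t/10 + 0.5)**k
                * (0.5 - x/10 + t/10)**(n - k)
                for k in range(0, math.ceil(10 - x/5) - 1))
        return x * (1 - s)
    return g

f = make_integrand(45, 10)
val = f(7.0)   # about 4.9178, matching the evalhf output above
```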

acer

Thanks. The error comes, I suppose, from the objective being returned as an unevaluated Int when the numeric quadrature fails. A better error message would be something like "nonnumeric value of the objective encountered" or similar.

Now, for an Int expression form arising from passing an unquoted call to E (even if there is no `evalf` in that F), the internal Optimization routines will supply the wrapping evalf. But the numeric quadrature might still fail. One could rewrite F, using evalf(Int(...)), to check whether a numeric value is returned. If not, the quadrature could be retried with a looser epsilon tolerance. And if that too fails, then some fake large value such as infinity might be returned, although that could take away the smoothness of the objective, which is something most of the NLP solvers expect. A reduced range for the variables might also help in this respect.
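
That retry-then-penalty scheme can be sketched generically. The following Python sketch (all names hypothetical, and the quadrature routine is a toy stand-in for evalf/Int) tries tolerances from tight to loose and only then falls back to a large finite penalty:

```python
import math

def midpoint_quad(f, a, b, eps):
    # Toy stand-in for evalf(Int(...)): a composite midpoint rule whose
    # panel count grows as the requested tolerance eps shrinks.
    n = max(4, int(eps ** -0.25))
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def robust_objective(f, a, b, eps_list=(1e-8, 1e-4), penalty=1e10):
    # Try the quadrature from tight to loose tolerance; if every attempt
    # fails (raises, or returns a non-finite value), return a large finite
    # penalty.  Note the caveat from the text above: the penalty can destroy
    # the smoothness that most NLP solvers expect.
    for eps in eps_list:
        try:
            val = midpoint_quad(f, a, b, eps)
        except (ArithmeticError, ValueError):
            continue
        if math.isfinite(val):
            return val
    return penalty

val = robust_objective(lambda x: x * x, 0.0, 1.0)   # close to 1/3
```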

acer
Hmm. Shouldn't it instead be this,

> P:=(n,delta)->1-Statistics:-CDF(Statistics:-RandomVariable(
>       NonCentralFRatio(3,3*n,delta)),
>       Statistics:-Quantile(Statistics:-RandomVariable(
>      FRatio(3,3*n)),0.95,numeric),numeric):

> P(1000,0.82);
                                 0.1028025004

It's interesting to see it with delta being a variable argument, though, for the sake of plotting.
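
For what it's worth, that power computation can be cross-checked outside Maple. A sketch in Python, assuming SciPy is available (its ncf distribution uses, as far as I know, the same noncentrality parameterization as NonCentralFRatio):

```python
from scipy.stats import f as fdist, ncf

def power(n, delta, alpha=0.05):
    # 95th percentile of the central F(3, 3n) distribution ...
    crit = fdist.ppf(1.0 - alpha, 3, 3 * n)
    # ... then the upper-tail probability under NonCentralF(3, 3n, delta).
    return 1.0 - ncf.cdf(crit, 3, 3 * n, delta)

p = power(1000, 0.82)   # should agree with the 0.1028025004 above
```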

acer

Hi Robert,

That Warning about first-order conditions is a (somewhat little-known) bug that crops up when using the operator-form calling sequence of NLPSolve. The (Trackered) problem seems to relate to numeric computation of derivatives, i.e. gradients of objectives and Jacobians of constraints.

I have had some prior success with two types of workaround for this bug. One is to use expression form, rather than operator form. In this case, passing unquoted E(c0, c1, c2) to NLPSolve results in an unevaluated integral, which seems "good enough" to serve as an expression for this particular need.

The other type of workaround I've used before is to manually and explicitly supply procedures for the gradient (and jacobian if necessary for non-simple constraints). For that, I have previously used fdiff, because in such cases I expect that NLPSolve has already tried codegen:-Gradient (A.D.) and failed. I have previously mailed examples of such to Joe R., but I'd like to find time to post them here too.
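
To illustrate that second workaround in a language-agnostic way, here is a Python sketch (hypothetical names) of a central-difference gradient procedure, the analogue of what fdiff lets one hand to NLPSolve when automatic differentiation of the objective fails:

```python
def num_gradient(f, h=1e-6):
    # Return a procedure computing the gradient of f by central differences,
    # analogous to building an explicit gradient with fdiff when A.D. of the
    # objective fails.
    def grad(x):
        g = []
        for i in range(len(x)):
            xp = list(x)
            xm = list(x)
            xp[i] += h
            xm[i] -= h
            g.append((f(xp) - f(xm)) / (2.0 * h))
        return g
    return grad

# Example: f(x, y) = x^2 + 3y has gradient (2x, 3), so at (1, 2) it
# is approximately (2, 3).
f = lambda v: v[0] ** 2 + 3.0 * v[1]
g = num_gradient(f)([1.0, 2.0])
```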

Of course, I too replaced `int` with `Int` inside the `F` procedure body.

I also specified the variables in my NLPSolve call, i.e.,

> Optimization:-NLPSolve(E(c0, c1, c2),
>        variables=[c0,c1,c2],
>        initialpoint=[c0=.1035805,c1=1.001279,c2=-.561381]);
Error, (in Optimization:-NLPSolve) complex value encountered

I haven't yet investigated whether this last error is genuine, whether some tiny imaginary component in the numeric integral result might be just a negligible artefact of floating-point computation, or how to work around it.

acer
Shouldn't the call to Probability in proc f instead be to ST:-Probability?

acer

It seems plausible that there is some (other) example for which the "I" terms are not yet collected in the simplify@evalc result, while the direct solve result is not in the desired form.

Anyway, this is all very simple tweaking, so arguing about it is mere quibbling. And beauty is in the eye of the beholder. So, yet another quibble: someone else might prefer the result of,

map((normal@Re)+(normal@Im)*I,simplify(evalc([sol])));

acer

While the random 10x10 Matrix will most likely have a full set of linearly independent eigenvectors, it could of course just be that the OP used RandomMatrix for illustration. It may be that for the OP's typical problems the chance of the input being a defective Matrix is higher. (We don't yet know.)

Such decomposition approaches to computing the floating-point exponential generally fail for defective Matrices. See Method 14 of section 6 of the well-known paper by Moler and Van Loan. I doubt that this method gets "better" by applying it to the mixed float-symbolic problem, which is what we have here for an unassigned symbol `t`. (For a float Matrix, one might be able to compute the condition number quickly, as an initial test.)
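
To see that failure concretely (a Python/NumPy sketch, not Maple): for a 2x2 Jordan block the computed eigenvector matrix is numerically singular, and the eigendecomposition route to exp(A) comes out badly wrong, whereas the exact answer is known in closed form.

```python
import numpy as np

# A defective matrix: eigenvalue 1 with algebraic multiplicity 2, but only
# one linearly independent eigenvector (a 2x2 Jordan block).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Exact matrix exponential of this Jordan block: e * [[1, 1], [0, 1]].
expA_exact = np.e * np.array([[1.0, 1.0],
                              [0.0, 1.0]])

w, V = np.linalg.eig(A)
condV = np.linalg.cond(V)   # enormous: the computed eigenvectors are nearly parallel

try:
    # The decomposition route exp(A) = V diag(exp(w)) inv(V) either blows
    # up outright or returns roughly e*I, missing the off-diagonal entry.
    expA_eig = (V * np.exp(w)) @ np.linalg.inv(V)
    err = np.abs(expA_eig - expA_exact).max()
except np.linalg.LinAlgError:
    err = np.inf
```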

Naturally, I fully expect that Robert is aware of all this. I only mention it for the possible benefit of others.

acer
