These are answers submitted by acer

I suspect that what is happening is that the internal iteration limit of the least-squares solver is being hit before the optimality tolerance (goal) is met, or before some numerical effect takes over the computation.

I've extracted what Statistics:-NonlinearFit passes to Optimization:-LSSolve, thanks to the Maple debugger.

Once that call to LSSolve has been extracted, it can be modified to alter the iterationlimit and also the choice of numerical method. See ?LSSolve for more detail.

First I run with method=sqp and iterationlimit=10000. I also set infolevel[Optimization] to 10, to get more information displayed about what it's doing.

> fnew := proc (w, z, needfi, ruser) local i; for i to 4 do z[i] := evalf(proc (p, r, i) p[1]*(1-p[2]*cos(Pi/(r[i,1]+1)))^(1/2)-r[i,2] end proc(w,ruser,i)) end do end proc:
>
> fjac := proc (w, z, ruser, iuser) local i; for i to 4 do z[i,1] := evalf(proc (p, r, i) (1-p[2]*cos(Pi/(r[i,1]+1)))^(1/2) end proc(w,ruser,i)); z[i,2] := evalf(proc (p, r, i) -1/2*p[1]/(1-p[2]*cos(Pi/(r[i,1]+1)))^(1/2)*cos(Pi/(r[i,1]+1)) end proc(w,ruser,i)) end do end proc:
>
> infolevel[Optimization]:=10:
>
> sol := Optimization:-LSSolve(
  [2, 4],
  fnew,
  datapoints = (Matrix(4, 2, [[8.,-1.73120199199999991],[12.,-1.96658233199999999],[16.,-2.12740288800000021],[20.,-2.18699629199999990]], datatype = float[8])),
  output = tables,
  initialpoint = (Vector(2, [-9.,.699999999999999956], datatype = float[8])),
  objectivejacobian = fjac,
  method = sqp,
  iterationlimit = 10000 );

LSSolve:   calling nonlinear LS solver
SolveGeneral:   using method=sqp
SolveGeneral:   number of problem variables   2
SolveGeneral:   number of residuals   4
SolveGeneral:   number of nonlinear inequality constraints   0
SolveGeneral:   number of nonlinear equality constraints   0
SolveGeneral:   number of general linear constraints   0
PrintSettings:   feasibility tolerance set to   .1053671213e-7
PrintSettings:   optimality tolerance set to   .3256082241e-11
PrintSettings:   iteration limit set to   10000
PrintSettings:   infinite bound set to   .10e21
SolveGeneral:   trying evalhf mode
Warning, limiting number of major iterations has been reached
E04USA:   number of major iterations taken   10000
                       sol := [settingstab, resultstab]
 

> eval(sol[2]);

table(["visible" = ["objectivevalue", "solutionpoint", "residuals",
    "evaluations", "iterations"], "iterations" = 10000,
    "objectivevalue" = 0.0492868830043316011,
                      [-0.0488744033423635441]
    "solutionpoint" = [                      ],
                      [ -1732.28244861786970 ]
    "residuals" = [-0.241298347220388010, -0.0384274262517967369,
    0.110016320471836959, 0.163611172536819893]
    ])

> eval(a*sqrt(1-b*cos(Pi/(v+1))),[a=sol[2]["solutionpoint"][1],
>                                 b=sol[2]["solutionpoint"][2]]);
        -0.0488744033423635441 * (1 + 1732.28244861786970 * cos(Pi/(v+1)))^(1/2)

I notice that the residuals and the objective function above are decreasing quite slowly as the iterationlimit is increased. You may wish to experiment with this a bit more. You could set the method to modifiednewton, or adjust the tolerances or the iterationlimit further. If you decide that roundoff error is preventing it from attaining the best solution then you could increase Digits. What is interesting is that, depending on how I alter those options, very different solution points can be produced, each of which gives quite close to the same residuals.

However, with method=modifiednewton and iterationlimit=100000 I got a result without seeing the warning.

With those settings, I saw userinfo messages like this,

E04FCF:   number of major iterations taken   1474
E04FCF:   number of times the objective was evaluated   15018

and a final result that included this,

    "objectivevalue" = 0.0492807204246035,


                      [-0.0192100345224023965]
    "solutionpoint" = [                      ],
                      [ -11218.7180037680664 ]

> eval(a*sqrt(1-b*cos(Pi/(v+1))),[a=sol[2]["solutionpoint"][1],
>                                 b=sol[2]["solutionpoint"][2]]);
        -0.0192100345224023965 * (1 + 11218.7180037680664 * cos(Pi/(v+1)))^(1/2)

One possible conclusion from this (not discounting Georgios' comments in a post below about the merit of the nonlinear form that you have chosen to fit) is that Statistics:-NonlinearFit could benefit from accepting a new option such as iterationlimit=<value>.

acer

What dharr is trying to tell you, I believe, is that the inert, uppercase-E Eval() will display in the usual notation for "y evaluated at x=0". That uppercase Eval is the so-called inert form of eval. You should be able to assign calls like Eval(y,x=0) to variables, or simply use them in your Document, and so on.

On the other hand, the lowercase-e eval() can actually do the evaluation and is not inert. The two often look good in combination, on either side of an equation for example. Try executing this in Maple

y := cos(x);
Eval(y,x=0) = eval(y,x=0);

acer

There are probably better ways. It might also be possible to get the relational symbols from one of the palettes, and then subs into that beast (lprint it first, say, to see its structure).

> f := (t,s) -> convert(StringTools:-SubstituteAll(
        cat("-T <= ",s," <= T"), "T", convert(t,string)),
                    name):

> f(2,"x");

                                -2 <= x <= 2

acer

Would you accept using T::ppositive, where ppositive is a new type?

Something like this, perhaps.

TypeTools:-AddType( ppositive,
  x->evalb(type(x,positive) or is(x,positive)=true) );
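
A quick sketch of how that type could then be used (the procedure p below is just for illustration):

p := proc( T::ppositive ) sqrt(T) end proc:

p(2);          # accepted, since type(2,positive) is true
assume(s > 0):
p(s);          # also accepted, since is(s,positive) is true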

acer

If one could copy & paste your ode into Maple, as displayed in your post here on mapleprimes, then it would be easier to help.

But the ode appears as an image, in my Firefox, served from some MapleNet server. Does anyone here know how to get that into a new Maple session, without having to enter it all by hand?

acer

Here's an example.

sleep := proc(delay::nonnegative, expr::uneval, $)
local x, endtime;
   print(expr);                            # show the still-unevaluated expression
   if kernelopts(':-platform') = "unix" then
      ssystem(sprintf("sleep %d",delay));  # let the OS do the waiting
   else
      endtime := time() + delay;           # otherwise busy-wait on CPU time
      while time() < endtime do end do;
   end if;
   seq(eval(x), x in args[2..-1]);         # finally evaluate (and return) the expression
end proc:
                                                                                
sleep(5,int(x,x));

It could be even nicer if the seq Modifier described on the help-page ?parameter_modifiers allowed a sequence of uneval parameters. (Ambiguity could be resolved by having each argument taken separately and not as a sequence.)

Sidebar: if system/ssystem calls are disabled only in the Tools->Options section of the Maple 11 Standard GUI, then attempting to use them (as above, for Unix) results in a Warning but not necessarily an Error. But attempting to use system calls after -z is passed as an option to the Maple launcher (TTY or Standard) results in an Error.

acer

How about creating new procedures?

Sin := x->sin(Pi*x/180);
plot(Sin,0..360);

Or you could create and load a module which defines its own routines, so that the function names could appear the same.

my_trig := module()
option package;
export sin, cos;
  sin := x -> :-sin(Pi*x/180);
  cos := x -> :-cos(Pi*x/180);
end module:

with(my_trig);

plot(evalf@sin,0..360);
plot(sin(t),t=-180..180);
plot(sin,0..360); # bug, due to what??

You could add more routines to that module, even though above I only put in sin and cos. Simply define and export whichever you want.
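
For instance, a degree-based tan could be added along the same lines (just a sketch of the pattern):

my_trig := module()
option package;
export sin, cos, tan;
  sin := x -> :-sin(Pi*x/180);
  cos := x -> :-cos(Pi*x/180);
  tan := x -> :-tan(Pi*x/180);
end module: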

If you feel really brave, you could also try this next trick below. I don't particularly recommend it, because apart from the danger of unprotecting sin, it doesn't gain you much more. It even seems to suffer from the same problem as the module, for the last example.

restart:

Sin:=copy(sin):
unprotect(sin):
sin:=x->Sin(Pi*x/180):

plot(evalf@sin,0..360);
plot(sin(t),t=-180..180);
plot(sin,0..360); # again, why?

acer

The z in op(2,a) is an escaped local variable name.

You should be able to use convert(op(2,a),`global`) instead.

a:=FunctionAdvisor( sum_form, tan);
convert(tan(z),Sum) assuming convert(op(2,a),`global`);

acer

Your instincts are right. It can be done using seq(), and a loop which looks like,

S := S, ...

is to be avoided.

But I ought to ask, do you really need to create all 10 operators, which act by evaluating a piecewise?

Or do you just need that f[i](j) returns the value in M[i,j], or possibly that f[i](y) returns M[i,trunc(y)]?

There are two parts to the above question. The first is this. Do you really need 10 operators, f[1]..f[10] (where f was presumably just a Maple table), or would it suffice to have one procedure f which can be called in that indexed manner?

The second is this. Since the operators (10 individual ones, or 1 indexed proc, no matter) are apparently only going to be used to evaluate a piecewise (in x) at x=y, do you really need to utilize piecewise constructs for that? Maybe these operators don't need to use piecewise constructs at all (since those are, relatively speaking, expensive to evaluate at a specific point).

Some ideas follow, which you might find useful.

M := LinearAlgebra:-RandomMatrix(10,18):

f := proc(y)
#global M; # or by lexical scoping
local i;
  if type(procname,indexed) and type(op(1,procname),posint) then
    i:=op(1,procname);
  end if;
  M[i,trunc(y)];
end proc:

f[2](1);
f[2](1.3);
M[2,1];

Perhaps you really did need what you described, for reasons other than just the returned value M[i,j]. You noticed that the invocation below doesn't work, since seq() sees all the arguments flattened out (and hence invalid).

seq( j < x, MM[j,j]*`if`(j=1,0,M[i,j]), j=1..3 );

But maybe you could do something like this,

B := j < x, MM[j,j]*`if`(j=1,0,MM[i,j]);
seq( B, j=1..3 );

acer

Very nice, Axel.

But it seems to me that you have helped Maple out more than you should have had to, by "giving" it the FTOC result.

How about this: instead of,

D(F)(x): int(%,x=0..t) = F(t) - F(0);

you could have,

D(F)(x): Int(%,x=0..t) = int(%,x=0..t,continuous);

so that Maple does more of the work.

acer

You forgot to define beta[1] as some value times Unit(deg).

By the way, I really prefer Units[Standard] to Units[Natural]. The Units[Natural] environment robs the global namespace of too many common symbols which one might otherwise want to use as variables.

> restart:

> with(Units[Standard]):

> alpha[1]:=4.25 * Unit(deg):
> theta[1]:=11.31 * Unit(deg):
> theta[2]:=78.69 * Unit(deg):
> alpha[2]:=-9.25 * Unit(deg):
> a:=1400.00:
> b:=509.90:

> beta[1]:=33.33 * Unit(deg): # I made that up.

> -b*sin(alpha[2]+alpha[1]-beta[1]-theta[2])+b*sin(beta[1]-theta[2]);

                              91.4313519
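
As an aside, here is a small sketch (in a fresh session) of what I mean about Units[Natural] taking over common names:

> restart:
> with(Units[Natural]):
> m;              # m now evaluates to the unit metre, not a free variable
> 2*m + 30*cm;    # handy for unit arithmetic, but m and cm are no longer usable as variables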

acer

Type cs\_e instead of cs_e to get it in 2D Math input.

acer

Check this for correctness. Since (3) is missing in the scan, some deduction was necessary. And sometimes I make mistakes.

The request is to make ETsolver accept a general function f of two arguments (which returns dy/dx). That's fine. But why embed into ETsolver the initial condition y[1]=a when that is only true for the particular supplied initial condition y(0)=0 ? And why not also let ETsolver accept a general endpoint b, instead of hard-coding it to only compute an approximation for y(1)?

> ETsolver := proc( f, a, y_a, h, b )
> local N, y, n;
>   N := trunc( (b-a)/h );
>   y[1] := y_a;   # start from the supplied initial value y(a) = y_a
>   for n from 1 to N do
>     y[n+1] := y[n] + h/2 * ( f(a+(n-1)*h, y[n]) + f(a+n*h, y[n]+h*f(a+(n-1)*h, y[n])) );
>   end do;
>   return y[N+1];
> end proc:

> ETsolver( (x,y)->x+y, 0, 0, 0.1, 1);
0.7140808465

> ETsolver( (x,y)->x+y, 0, 0, 0.01, 1);
0.7182368623

> ETsolver( (x,y)->x+y, 0, 0, 0.001, 1);
0.7182813769

As I mentioned before, the key to understanding this algorithm is to draw the exact curve, and the trapezoid with base from x[n] to x[n+1]=x[n]+h, and to see how it relates (as an approximation) to the fundamental theorem of calculus.

acer

What have you got, so far, with this?

Do you expect complete solutions to your assignment exercises, without making the major contribution yourself? If not, then please just post your instructor's email address so that we can mail in our solutions directly.

It's asking you to write a procedure that implements Euler's method for numerical integration.

That procedure will accept input function f, initial point a, and stepsize h. You could also make it accept final target point b, so as to more generally be able to approximate y(b) instead of y(1).

It refers to equation (3), which is not shown. It's pretty clear, though, that equation (3) is equivalent to (Maple syntax),

y(1) = Int( f(x), x=a..1 );

or

y(b) = Int( f(x), x=a..b );

The procedure could figure out how many steps of length h it will take to get from point a to point 1 (or b). It needs to run a loop, for each such step. Each time through the loop, it will compute y[n+1] using y[n]. It starts off with y[1]:=a .

Here's a hint. To get from the initial point x=a to the final point x=b you need to take N steps of stepsize h = (b-a)/N. So you can deduce that N = (b-a)/h, and hence how many times to go through the loop.

Here's another hint. Try it without a loop, for practice. Try to get it to compute y[2]. From your equation (4) that should equal y[1] + h/2 * (F[1,n]+F[2,n]).
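
Here is a minimal sketch of just that single step (assuming, as above, that f is a function of x alone, and that f, a, h, and the starting value y[1] have already been assigned):

F[1,1] := f(a);        # the integrand at the left end of the first subinterval
F[2,1] := f(a + h);    # the integrand at the right end
y[2] := y[1] + h/2 * ( F[1,1] + F[2,1] );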

Try to draw it by hand, as a picture on paper. What do the F[1,n] and F[2,n] mean? Where is the trapezoid? Is the area of the trapezoid almost equal to the area under the curve f between x=a and x=a+h ? Draw it all.

acer

What else can you say about the nonlinear equations? For example, are they (or their subsets that you created) multivariable polynomials?

Are your data sets exact or floating-point approximations, or formulas involving mixed data? Would you consider conversion of float data to exact rational prior to systematic separation of equations?
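
For example, something along these lines would do that conversion (the equation below is invented, just to show the idea):

eq := 1.25*x^2 + 0.5*x*y - 3.1 = 0:
evalindets( eq, float, convert, rational );   # gives 5/4*x^2 + 1/2*x*y - 31/10 = 0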

It sounds like a dimensional effect, as you describe it. For systems of equations in more than one variable, fsolve uses Newton's method. With many variables, and hence a much "larger" space, it often becomes more and more unlikely that a given starting point will converge. I don't think that fsolve adjusts its number of attempts (distinct initial points) according to the size of the system, and there's no option to specify the number of attempts. So, yes, as the size of the system gets large it might become less and less likely that it will find an approximate root.

Your idea of separating and reducing the system sounds good, then. But, do you do it "by hand"? Maple's `solve` routine might help you automate it. Or you might be able to use Groebner, depending on the type of equations.
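
For instance, if some subset of the equations happens to be polynomial, then something along these lines can pull out a univariate piece (a sketch only, with an invented system, and assuming your Maple has the Groebner:-Basis command):

sys := [ x^2 + y^2 - 5, x*y - 2 ]:
G := Groebner:-Basis( sys, plex(x,y) ):   # lexicographic order eliminates x first
Gy := remove( has, G, x );                # the basis element(s) involving only y
fsolve( Gy[1], y );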

It's possible that someone might be able to do something more with it, if you can upload the equations and some data values to this site.

acer
