What practical limits are there to the size of a system of simultaneous nonlinear equations that fsolve() can solve?
In this case I'm solving 18 nonlinear equations in 18 unknowns (applied to ten different data sets). All variables ought to come out somewhere in the 0.2..0.5 range, and I told fsolve() as much, trying ranges such as 0..1, 0.2..0.5, and no range at all. That didn't work particularly well. I then split the 18 equations into two blocks of 11 and 7 equations/unknowns, which noticeably improved the rate at which sensible outcomes were found. Splitting further into sets of 5, 6, 1, 4 and 2 equations/unknowns (which have to be solved in that order, so that the values found for one set are available when fsolve() is invoked on the next) improved things again: I now get sensible solutions for each of the first three sets, in 5, then 6, then 1 unknowns.
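To make the staged approach concrete, here is a minimal sketch of the same idea using SciPy's fsolve in Python (the actual question concerns a different fsolve, and these toy equations are made up for illustration, not the 18-equation system): solve one block first, then pass its solution into the next block as fixed constants.

```python
# Sketch of staged block solving: solve a small block, then feed its
# solution into the next block as known constants. Equations are toy
# examples, not the poster's system.
from scipy.optimize import fsolve

# Stage 1: a self-contained block in (x, y).
def block1(v):
    x, y = v
    return [x**2 + y - 1.0,   # x^2 + y = 1
            x - y - 0.5]      # x - y = 0.5

x, y = fsolve(block1, [0.5, 0.5])

# Stage 2: a block in z alone; x and y are now fixed parameters.
def block2(v, x, y):
    z, = v
    return [z**3 - x * y - 0.2]   # z^3 = x*y + 0.2

z, = fsolve(block2, [0.5], args=(x, y))

print(x, y, z)
```

Each stage hands fsolve a smaller, better-conditioned problem, which is presumably why the split helped here.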
My problem is that I can't break the remaining set of 4 equations in 4 unknowns down any further to ease fsolve()'s work. (Given correct values for those, the final set of 2 equations in 2 unknowns should be straightforward.)
At first I ran into floating-point trouble: when fsolve() returned results at all, they were extremely bizarre. Scaling all of my unknowns up helped fsolve() immensely in finding sensible results. I suppose it's possible I'm still hitting floating-point error, and perhaps I should scale up some more.
So, are there any hints on how else I can help fsolve(), for instance by putting the equations into a nicer form? Would applying the symbolic solve() first help, for example?