I have a set of 9 equations in 9 unknowns. The equations are massive 9th-order polynomials. They are contained in a vector called "C" in the .m file that follows:
Cvector_with_a.txt (please rename the .txt file to .m since this forum doesn't seem to allow the upload of .m files)
In order to solve the equations, you must first set the values of two extra variables, a1 and a2. I have had plenty of problems with fsolve due to the sheer size of the equations, and I have gotten many good suggestions in another post. More important, however, is the level of accuracy required to solve the problem. I have been testing the equations against a known solution. When you set a1:=1.51401039732306275167060829561 and a2:=5.04882865932273295537727396226, then a known solution to the 9 equations and 9 unknowns, within reason, is:
c=1, hb=0.012, and any combination of c1..c7 such that:
Since a1 and a2 are collected experimentally, they are only accurate to about 4 digits. I would therefore like to get a solution similar to the one above, accurate to about 4 digits, by setting a1:=1.514 and a2:=5.049 instead. The problem is that the equations appear to be highly sensitive to numerical error because of the 9th-order polynomial terms: if you truncate to a1:=1.514 and a2:=5.049, the values above come out off by a factor of 10^5.
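This kind of error amplification is a conditioning issue, not just roundoff: near a repeated (or nearly repeated) root, a perturbation of size eps in a coefficient can move the root by roughly sqrt(eps) or worse. A toy illustration in Python (this quadratic is made up for demonstration and has nothing to do with my actual system):

```python
import math

# Toy conditioning demo: the polynomial (x - 1)^2 - eps has roots
# x = 1 +/- sqrt(eps).  A coefficient perturbation of eps = 1e-4
# therefore moves the root by 1e-2 -- a 100x amplification.  High-order
# polynomials with clustered roots can amplify far more than this.
def roots_of_perturbed_square(eps):
    # Roots of (x - 1)^2 - eps = 0.
    return 1.0 - math.sqrt(eps), 1.0 + math.sqrt(eps)

lo, hi = roots_of_perturbed_square(1e-4)
print(hi - 1.0)  # the root has moved by about 0.01, not 0.0001
```

So it is at least plausible that 4-digit inputs simply cannot deliver 4-digit outputs here unless the formulation itself is changed.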
So far my fsolve command looks like this:
The ranges set on the unknown variables are required, since 9th-order polynomials produce a lot of possible solutions. I was thinking that maybe some sort of minimization procedure might work instead?
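The minimization idea would amount to replacing "solve F(x) = 0 exactly" with "minimize the sum of squared residuals ||F(x)||^2", which degrades gracefully when the data only supports 4 digits. A minimal sketch of that reformulation in Python (the toy 2-equation system, the function names, and the starting point are all invented for illustration; the real 9x9 system would slot into F and J the same way):

```python
# Sketch: recast a square system F(x) = 0 as min ||F(x)||^2 and take
# Gauss-Newton steps.  With a square, invertible Jacobian this reduces
# to Newton's method, but the least-squares view still returns a "best
# fit" point when no exact root exists for perturbed coefficients.

def F(v):
    x, y = v
    # Toy polynomial system with a known root at (1, 1).
    return (x**2 + y**2 - 2.0, x * y - 1.0)

def J(v):
    x, y = v
    # Analytic Jacobian of F.
    return ((2.0 * x, 2.0 * y),
            (y,       x))

def gauss_newton(v, steps=50):
    for _ in range(steps):
        f0, f1 = F(v)
        (a, b), (c, d) = J(v)
        det = a * d - b * c
        if abs(det) < 1e-14:
            break  # Jacobian (numerically) singular; give up
        # Solve J * delta = -F for the 2x2 case by Cramer's rule.
        dx = (-f0 * d + f1 * b) / det
        dy = (-f1 * a + f0 * c) / det
        v = (v[0] + dx, v[1] + dy)
        if dx * dx + dy * dy < 1e-24:
            break  # step is negligible; converged
    return v

root = gauss_newton((2.0, 0.5))
print(root)  # converges near (1.0, 1.0)
```

For the real problem, a library least-squares routine (with the same box ranges used in fsolve supplied as bounds) would replace this hand-rolled loop, and the size of the final residual would indicate how much accuracy the 4-digit a1, a2 can actually support.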