Reproducing the example from the help page of Statistics[LinearFit], we first write the postulated model function:
where p is the independent variable and b is the parameter vector. We then compute the vector of derivatives of f with respect to the parameters, followed by its transpose:
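For a model that is linear in its parameters, the derivative vector does not depend on b at all: it is simply the vector of basis functions evaluated at p. As a minimal sketch in Python, assuming a quadratic model f(p; b) = b1 + b2*p + b3*p^2 (the specific form is my assumption, inferred only from the three-element parameter vector above):

```python
def grad(p):
    # derivative of f with respect to each parameter; for a model linear
    # in the parameters this is just the basis functions evaluated at p
    # (assumed model: f = b1 + b2*p + b3*p^2)
    return [1.0, p, p**2]

def outer(g):
    # outer product g^T . g, the analogue of multiply(gT, g) in the post
    return [[gi * gj for gj in g] for gi in g]

A = outer(grad(2.0))   # the 3x3 matrix A evaluated at the data point p = 2
```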
Setting aside the optimization procedure needed to find the parameters, let us assume that the parameter vector is the one appearing in the Maple help page, while r is the vector of data points:
b:=[1.95999999999999996,0.165000000000000063,0.110714285714285710]: r:=[1, 2, 3, 4, 5, 6]:
We then define the procedure atab to compute the matrix X:
A := multiply(gT, g): m := nops(r):
atab := proc()
  global r, p, A, m;
  local i, X;
  X := matrix(3, 3, 0);    # accumulator for the sum of A over all data points
  for i to m do
    p := r[i];             # evaluate A at the i-th data point
    X := evalm(X + A);
  end do;
  eval(X);
end proc:
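The loop in atab simply accumulates A over the data points, i.e. X is the sum over i of g(r[i]) multiplied by its transpose. A hedged Python translation of the same idea (the quadratic basis in grad is my assumption, and atab_py is a hypothetical name, not part of the Maple worksheet):

```python
def grad(p):
    # basis-function gradient of the assumed quadratic model b1 + b2*p + b3*p^2
    return [1.0, p, p**2]

def atab_py(r):
    # accumulate X = sum over data points of the outer product g.g^T,
    # mirroring the Maple loop that repeatedly adds A into X
    n = 3
    X = [[0.0] * n for _ in range(n)]
    for p in r:
        g = grad(p)
        for i in range(n):
            for j in range(n):
                X[i][j] += g[i] * g[j]
    return X

X = atab_py([1, 2, 3, 4, 5, 6])
```

With r = [1, 2, 3, 4, 5, 6], the (1,1) entry of X is just m = 6 and the other entries are power sums of the data, which makes the result easy to check by hand.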
Finally, the variance-covariance matrix (covmatrix) can be evaluated by:
where of is the residual sum of squares, as given in the Maple help page.
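In formula form this is covmatrix = s^2 * X^(-1), with s^2 = of/(m - n) the usual residual-variance estimate (n being the number of parameters, here 3). A self-contained Python sketch under that assumption; the Gauss-Jordan inverse and the function names are mine, standing in for Maple's inverse:

```python
def mat_inverse(M):
    # invert a small square matrix by Gauss-Jordan elimination with
    # partial pivoting (stands in for Maple's inverse(X))
    n = len(M)
    aug = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(aug[k][col]))
        aug[col], aug[piv] = aug[piv], aug[col]   # swap in the pivot row
        d = aug[col][col]
        aug[col] = [v / d for v in aug[col]]      # normalize the pivot row
        for k in range(n):
            if k != col:
                f = aug[k][col]
                aug[k] = [a - f * b for a, b in zip(aug[k], aug[col])]
    return [row[n:] for row in aug]

def covmatrix(X, rss, m, nparams):
    # variance-covariance matrix: (rss / (m - nparams)) * X^{-1}
    s2 = rss / (m - nparams)
    return [[s2 * v for v in row] for row in mat_inverse(X)]
```

For the post's example one would call covmatrix with the X returned by atab, the residual sum of squares of, m = 6 data points, and nparams = 3.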
I realize that the procedure atab is not the most elegant way to evaluate the matrix, but I am not too ashamed to post it.
You can find the theory behind the above analysis in Himmelblau (1970), Process Analysis by Statistical Methods, pp. 197-198 (nonlinear models).