@JohnS Thanks John, vv,
Here I post my recent findings on my problem.
The idea is to replace the singular mass matrix
[M11 0]
[ 0  0]
with
[M11   0  ]
[ 0  eps*I]
for some small value eps, small enough to perturb the problem only a small amount. Let N = M11^(-1/2), multiply the equation on the left by
[N  0 ]
[0 c*I]
and insert the identity matrix in the form
[N  0 ] [N^-1    0   ]
[0 c*I] [ 0   (1/c)*I]
between the matrices and the eigenvector.
Multiply it all out blockwise and we get what they got, with
I22 = c^2*eps*I.
You want I22 to be the identity matrix, so
c = 1/sqrt(eps).
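A minimal numerical sketch of that blockwise identity, assuming a made-up 2x2 diagonal M11 whose eigenvalues are the ones quoted later in this post, and an arbitrary small eps:

```python
import numpy as np

# Hypothetical small example: M11 is 2x2 SPD (diagonal here for simplicity),
# and the (2,2) block of the mass matrix is zero before the perturbation.
n1, n2 = 2, 1
M11 = np.diag([4e-3, 3e-11])   # eigenvalues taken from the discussion
eps = 3e-12                    # assumed small perturbation

# Perturbed mass matrix: the zero (2,2) block is replaced by eps*I
M_eps = np.block([[M11,                 np.zeros((n1, n2))],
                  [np.zeros((n2, n1)), eps * np.eye(n2)]])

# N = M11^(-1/2) and c = 1/sqrt(eps)  (c is what they call LV below)
N = np.diag(1.0 / np.sqrt(np.diag(M11)))
c = 1.0 / np.sqrt(eps)
T = np.block([[N,                   np.zeros((n1, n2))],
              [np.zeros((n2, n1)), c * np.eye(n2)]])

# Blockwise, T @ M_eps @ T = [[N*M11*N, 0], [0, c^2*eps*I]] = identity
print(T @ M_eps @ T)
```

With c = 1/sqrt(eps), the (2,2) block c^2*eps*I collapses to the identity, which is the whole point of the scaling.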
c is what they called LV. So how small do we have to make eps so as not to affect the problem too much? Having no direct experience with this I don't know, but I would hazard a guess of at least 10 times smaller than the smallest eigenvalue of M11. Here the smallest eigenvalue is 3e-11, so we can plan accordingly. It does lead to some pretty big LV numbers. (The largest eigenvalue is 4e-3, so scaling could be a problem.)
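As a worked instance of that guess (assuming eps is taken 10 times smaller than the smallest eigenvalue of M11 quoted above):

```python
import math

lam_min = 3e-11            # smallest eigenvalue of M11, from the discussion
eps = lam_min / 10.0       # the guessed "10 times smaller" rule
LV = 1.0 / math.sqrt(eps)  # c = 1/sqrt(eps)
print(LV)                  # roughly 5.8e5 -- LV does get pretty big
```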
There are spurious eigenvalues for the problem I have. Note that the original problem has a number of eigenvalues "at infinity", which correspond to large negative eigenvalues of the approximating problem. Those eigenvalues have magnitude of order LV. The number of these eigenvalues "at infinity" equals the dimension of lambda. So these eigenvalues should be ignored -- only the "small" eigenvalues should be kept. The trouble is that as LV becomes large, the smaller eigenvalues are subject to numerical (roundoff) "noise" of size LV*u, where u is the unit round-off. So taking LV = 10^200 would likely destroy any accuracy the small eigenvalues have left. (I could be wrong; it will depend on the exact algorithm used.)
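The filtering step described above could be sketched like this; the eigenvalue list and the cutoff factor are made up purely for illustration:

```python
import numpy as np

LV = 5.8e5                                         # assumed scaling factor
eigs = np.array([1.2, 3.5, 9.1, -5.7e5, -6.1e5])   # hypothetical computed spectrum

# The eigenvalues of magnitude ~LV are the spurious images of the
# eigenvalues "at infinity"; keep only the "small" ones.
cutoff = 0.01 * LV                                 # assumed threshold
small = eigs[np.abs(eigs) < cutoff]
print(small)                                       # the three physical eigenvalues
```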
I have noticed that these kinds of problems are solved very easily using the Eigenvalues(K, M) command in Maple, because this command accepts both the K and M matrices as inputs, regardless of what K and M are.
But does anybody know how this command computes the eigenvalues of a generalized eigenvalue problem?
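I don't know Maple's internals either, but the standard tool for K*x = lambda*M*x with a possibly singular M is the QZ (generalized Schur) algorithm from LAPACK, which never inverts M; SciPy's scipy.linalg.eig(K, M) exposes it, so here is a sketch for comparison (the matrices are made up):

```python
import numpy as np
from scipy.linalg import eig

# Tiny K*x = lambda*M*x with a singular mass matrix (zero (3,3) entry)
K = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 1.0]])
M = np.diag([1.0, 1.0, 0.0])   # singular M

# QZ handles singular M directly; eigenvalues "at infinity" come back
# as inf (or very large values), matching the discussion above.
w = eig(K, M, right=False)

# Here det(K - lambda*M) = (3 - lambda)*(1 - lambda), so the two finite
# eigenvalues are 1 and 3; take the two smallest in magnitude.
finite = np.sort(w[np.argsort(np.abs(w))[:2]].real)
print(finite)
```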