petit loup

MaplePrimes Activity


These are replies submitted by petit loup

@acer Correction. In the first $2$ lines of my last post, replace $A$ with $B=(A^{-1})^+A$.

@acer As I wrote in my first post, I am speaking about the condition number of $P$ (the matrix of eigenvectors of $A$). This gives a good idea of the accuracy of the numerical diagonalization of $A$.

Have a look at the following two calculations (with the same $A$):

restart;
with(LinearAlgebra);
Seed := randomize();
Digits := 20;
roll := rand(-5 .. 5);
A := Matrix(18, (i, j) -> roll());   # exact integer matrix: Eigenvectors works symbolically and is only evalf'd afterwards
t := time();
v, P := evalf(Eigenvectors(Transpose(1/A) . A));
time() - t;
Norm(1/P . (Transpose(1/A) . A) . P - DiagonalMatrix(v)) / Norm(DiagonalMatrix(v));
t := time() - t;
ConditionNumber(P);
                            62.299
               2.0456050687550795426 10^296
                            62.433
               3.0129332318535323794 10^297
A := evalf(A);   # same matrix with floating-point entries: Eigenvectors now uses the numerical solver
gc();
t := time();
v1, P1 := Eigenvectors(Transpose(1/A) . A);
time() - t;
Norm(1/P1 . (Transpose(1/A) . A) . P1 - DiagonalMatrix(v1)) / Norm(DiagonalMatrix(v1));
t := time() - t;
ConditionNumber(P1);
                            0.113
               2.2966423159031532614 10^(-18)
                            0.260
                  94.230954022647572690

Of course the first method (mine) is not the good one. I habitually use the symbolic part of Maple (Gröbner bases) and I am not very comfortable writing numerical procedures. I understand why the first method is very slow, but I don't understand why its result is completely false; note that it works when I choose Digits := a very large integer.

Thanks for your reply.

 

@acer I will answer you this weekend. In fact, even when the condition number was good, the final accuracy was catastrophic and I don't understand why...

@acer No, I probably made some wrong moves.

@vv Thank you very much for your prompt reply.

Of course $(A^{-1})^+A$ should not be calculated explicitly; yet I am amazed at the speed of resolution of the generalized eigenvalue problem; moreover, even for $n=100$, there is very little loss of digits.

For $n=100$ and Digits := 20, we obtain the result, with $16$ significant digits, in 31 s.
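
A minimal sketch of what I mean, assuming the "+" denotes the transpose (as in my code above): since $(A^{-1})^T A x = \lambda x$ is equivalent to $A x = \lambda A^T x$, the pair $(A, A^T)$ can be handed to the generalized eigensolver and the product is never formed.

restart:
with(LinearAlgebra):
roll := rand(-5 .. 5):
A := Matrix(18, (i, j) -> roll(), datatype = float[8]):
# generalized eigenproblem A.x = lambda.A^T.x, equivalent to (A^(-1))^T.A.x = lambda.x
v, P := Eigenvectors(A, Transpose(A)):
ConditionNumber(P);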

Best regards.

@vv 

Yes, if we have a small box (unfortunately, this is not necessarily the case), then we may consider that we are in a neighborhood of the center of the box, say A. If B is the (direct or inverse) Cayley transform of A, then we look for a skew-symmetric matrix that is close to B; I think that we may choose (B - B^T)/2. Good idea...
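
A minimal sketch of this idea, with an illustrative box center A and one common sign convention for the Cayley transform:

restart:
with(LinearAlgebra):
A := Matrix([[0.9, -0.1, 0.1], [0.1, 0.9, -0.1], [-0.1, 0.1, 0.9]]):   # illustrative center of the box
Id := IdentityMatrix(3):
B := (Id - A) . MatrixInverse(Id + A):    # inverse Cayley transform of A
S := (B - Transpose(B))/2:                # nearest skew-symmetric matrix to B
Q := (Id - S) . MatrixInverse(Id + S):    # Cayley transform back: Q is orthogonal
Norm(Transpose(Q) . Q - Id);              # ~0; Q is then a candidate to test against the box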

@Markiyan Hirnyk

OK, your formal calculation works for my toy example, and that is a very good thing. We find a solution for s = 21. The calculation can be continued up to s = 56 (we find other solutions); then the expressions sol[s] become complicated.

I tested one difficult example with 18 inequalities (an example for which I know how to find a solution by numerical calculation). For s = 1..56, there are no solutions. For s = 57..64, the computer gives no answer...

Thanks to Markiyan Hirnyk, Carl Love and vv.

Markiyan, I always answer those who take the trouble to write to me.

Later on, I'll write a post about your "solution in two steps".

Roughly speaking, the problem I consider is: two 3x3 matrices A, B are given. We consider an open box for the 9 entries of a 3x3 matrix X = [x_{i,j}], that is, A < X < B. Does there exist an orthogonal matrix X in the box?

If the answer is YES, then we ask for an approximation of some solution X; of course, an approximation is not, stricto sensu, a proof of existence, but the custom is to be satisfied with that; (*) we will come back to this... The problem may be difficult if all the solutions are close to the edge of the box. Of course, if there is a solution X0 in the box, then a neighborhood of X0 in O(3) is also in the box (3 degrees of freedom).

If the answer is NO and we want a certified answer, then I think (perhaps I am wrong) that we must use a formal calculation (a Gröbner method, for example). If we also want to certify the existence (cf. (*)), we can perform a formal calculation, or construct in the box a segment that contains our approximation and passes from one side of O(3) to the other.

Of course, my procedure with 2 inequalities is a toy example. NLPSolve (cf. vv's post) easily gives an approximate solution. How does it manage with 18 inequalities? I use NLPSolve for optimization problems and I like it. Yet, in difficult cases, it may give only a local extremum.
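
As an illustration, a minimal sketch of the NLPSolve route on the toy problem (my own formulation, with the strict inequalities relaxed to non-strict ones): minimize the orthogonality residual subject to the two box constraints; a minimum close to 0 signals an approximately orthogonal matrix in the box.

restart:
with(LinearAlgebra): with(Optimization):
X := Matrix(3, 3, symbol = x):
Y := Transpose(X) . X - IdentityMatrix(3):
obj := add(add(Y[i, j]^2, j = i .. 3), i = 1 .. 3):     # orthogonality residual
cons := {x[1, 1] <= 1/2, 3/10 <= x[2, 3]}:              # non-strict versions of the box constraints
start := {seq(seq(x[i, j] = 0.3, j = 1 .. 3), i = 1 .. 3)}:
sol := NLPSolve(obj, cons, initialpoint = start);       # sol[1] close to 0 => feasible point found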

Carl Love, you wrote about the library "SolveTools:-SemiAlgebraic"; if you have ideas, I'll take them.

Thanks in advance for your suggestions.


Thanks to Markiyan Hirnyk and Carl Love.

Over the past decade, many papers have been published on polynomial systems in real variables, as described in /arxiv.org/pdf/1409.1534; more recently, algorithms have been announced to calculate the size of some real semi-algebraic sets. I had discussions about these subjects with Safey El Din and Moreno Maza, who work with Maple.
If we are interested in Gröbner methods over C, then we have black boxes in Maple, Sage and, for noncommutative rings, in GAP; and they work pretty well. But over R, we are very far from having such black boxes; only laboratories have complicated assemblies of software. Reviewers of mathematical papers readily accept papers using Gröbner bases over C but drag their feet for papers dealing with real variables.
So I am interested in using "regular chains" for the following question, which I found on a forum and which is close to problems that interest me. Standard optimization methods do not solve all instances.
Question. Can we find an orthogonal 3x3 matrix X = [x_{i,j}] s.t. a_{i,j} < x_{i,j} < b_{i,j} (where the a_{i,j}, b_{i,j} are known and the x_{i,j} are unknown)?
There are 9 unknowns, 6 equations and a maximum of 18 strict inequalities.
What follows is the Maple procedure with only 2 inequalities;
there is the obvious solution X = [e_3, e_1, e_2], where (e_i) is the canonical basis of the space.
 

restart:
with(LinearAlgebra):
with(RAG):                        # external real algebraic geometry library providing HasRealSolutions
Digits := 20:

X := Matrix(3, 3, symbol = x):
Y := Transpose(X) . X - IdentityMatrix(3):
# the 6 orthogonality equations (upper triangle of X^T.X - I)
F := NULL:
for i to 3 do
  for j from i to 3 do
    F := F, Y[i, j] = 0:
  od:
od:
# add the 2 strict inequalities
F := [F, x[1, 1] < 1/2, 3/10 < x[2, 3]]:
# the list of the 9 unknowns
K := NULL:
for i to 3 do
  for j to 3 do
    K := K, x[i, j]:
  od:
od:
K := [K]:
so := HasRealSolutions(F, K):

Thanks in advance.


@Kitonum  Thanks.  Both methods work.

@Kitonum

Thanks for your answer.

 

Following your advice, I use evalf(allvalues(solve(F, K))). This method gives all the complex solutions (that is to say, in the generic case, 2^n solutions). I am only interested in the real ones; of course, we can deduce them immediately.

 

For n = 6 and Digits = 20, we obtain the solutions in 3 min 4 s, while, using Isolate, we obtain the result in 2.2 s.

 

See my answer to acer on this subject, where I give some details about the system I consider and about the methods I tested.

@acer

Thanks for your answer.

I randomly choose n symmetric $n\times n$ matrices A_i and n vectors V_i with integer entries (for example).

I consider the system X^T A_i X + V_i^T X = 1, 1 <= i <= n, where X is the unknown vector.

Generically, such a system has 2^n complex solutions X. Yet, I only look for the real solutions. Clearly, if n >= 3, then it is impossible to calculate the exact values of the x_i.

I use Gröbner bases (with(FGb)). Then we can obtain a triangular system P(x_1) = 0, Q(x_1, x_2) = 0, ... Thus we can easily recover an approximation of the real solutions. The method works up to (I think) n = 7 (n = 6 in 44 s).

Following your advice, I use RootFinding:-Isolate. This method (by Rouillier et al.) works very well. Yet, in an experiment with Digits = 20 and n = 8, I obtained the 14 real solutions in 6 min 32 s. At that rate, a calculation for n = 10 should last several days! I think that Isolate also uses Gröbner bases.
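
A minimal sketch of the construction, with an assumed small size n = 4; the helper randsym, the plain variable names and the call shown are illustrative only:

restart:
with(LinearAlgebra):
n := 4:                                   # illustrative size only
roll := rand(-5 .. 5):
vars := [x1, x2, x3, x4]:                 # the n unknowns, written out for n = 4
X := Vector(vars):
randsym := proc(m) local M; M := Matrix(m, (i, j) -> roll()); M + Transpose(M) end proc:
sys := [seq(expand(Transpose(X) . randsym(n) . X
            + Transpose(Vector(n, i -> roll())) . X - 1), k = 1 .. n)]:
sols := RootFinding:-Isolate(sys, vars, digits = 20):
nops(sols);                               # number of real solutions found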

What can we do for n >= 10? I think that we can find some real solutions with a standard numerical method such as Newton-Raphson. To do that, you need to know the approximate position of a solution. Yet, how can we be sure of having obtained all the real solutions?


@Carl Love Hi Carl, I used your code. The first two methods (with 1/A) run, in CPU time, in 74.45 s and 13.60 s. Yet, the last two methods (with IntInv(A)) don't work on my PC...
Regards.

@tomleslie Usually, I use Maple for symbolic computation; I therefore have trouble with numerical computation.

I agree with your points 2. and 4.

About your point 1: I divide my matrix A by a factor of 10^12 because otherwise the calculation of CharacteristicPolynomial(A, x) does not work (an overflow?), as you can see:

with(LinearAlgebra):
s := 0;
for i to 20 do
  A := Matrix(100, 100, (i, j) -> evalf(rand()), datatype = float[8]);   # random 12-digit entries
  t := time();
  p := CharacteristicPolynomial(A, x);
  s := s + time() - t;
end do;
s;
coeff(p, x, 7);
                             0.140
              Float(infinity) + Float(infinity) I
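
A minimal sketch of the workaround described above (my own illustration): rescaling A by 10^(-12) keeps the float[8] coefficients of the characteristic polynomial finite.

with(LinearAlgebra):
A := Matrix(100, 100, (i, j) -> evalf(rand()), datatype = float[8]):
p := CharacteristicPolynomial(A/10^12, x):
coeff(p, x, 7);    # now a finite float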

About your point 3: I wrote that (when datatype = float[8]) Maple uses Digits := 15; clearly, after a calculation, the obtained result does not necessarily have 15 significant digits; for example, MathInv(A) has about 15 - log_10(ConditionNumber(A)) significant digits.
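
A minimal sketch of that rule of thumb (using MatrixInverse as a stand-in, since I do not reproduce MathInv here): the digits lost to the conditioning and the digits left in the inversion residual should add up to roughly the 15 digits of float[8].

restart:
with(LinearAlgebra):
A := RandomMatrix(100, 100, generator = -1.0 .. 1.0, datatype = float[8]):
resid := Norm(A . MatrixInverse(A) - IdentityMatrix(100)):
evalf(log[10](ConditionNumber(A))), evalf(-log[10](resid));   # the two numbers sum to roughly 15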


Thanks for your answer.
