petit loup

40 Reputation

4 Badges

10 years, 231 days

MaplePrimes Activity


These are replies submitted by petit loup

My mistake: at first I wrote " U,V := SmithForm(A, x, output=['U,V']) ", and with that call the calculations are done over Q.
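For what it's worth, here is a minimal sketch of the call as I understand it should be written (the matrix A is illustrative); each output name must be quoted separately:

```maple
restart;
with(LinearAlgebra):
A := Matrix([[2, 4], [6, 8]]):
# ['U,V'] is a single (meaningless) name; ['S','U','V'] requests
# the Smith form S together with the transformation matrices U, V.
S, U, V := SmithForm(A, output = ['S','U','V'], method = 'integer');
# Check: over the integers, U . A . V equals the Smith form S.
Equal(U . A . V, S);
```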

Thanks.

Yes, that works with it.

In my worksheet I didn't write the second line.
Yet, I don't understand the construction of the second part, "  #method='integer' ) ;  ",

in particular the "#" and the single ")".
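(For the record, my current understanding, sketched below: in Maple everything after # on a line is a comment, so the option method='integer' is commented out, and the lone ")" on the following line simply closes the SmithForm call opened above.)

```maple
S, U, V := SmithForm(A, output = ['S','U','V']
                     #, method = 'integer'
                     );
# With the comment in place, the call reduces to
#   SmithForm(A, output = ['S','U','V']);
```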

Anyway, thank you.

@sand15 

Thanks. I'll try it.

@acer 

Thanks. It's significantly faster.

@acer Correction: in the first 2 lines of my last post, replace A with B = (A^(-1))^+ . A.

@acer As I wrote in my first post, I am speaking about the condition number of P (the matrix of the eigenvectors of A). This gives a good idea of the accuracy of the numerical diagonalization of A.

Have a look at the following two calculations (with the same A):

restart;
with(LinearAlgebra);
Seed := randomize();
Digits := 20;
roll := rand(-5 .. 5);
A := Matrix(18, (i, j) -> roll());
t := time();
v, P := evalf(Eigenvectors(Transpose(1/A) . A));
time() - t;
Norm(1/P . (Transpose(1/A) . A) . P - DiagonalMatrix(v)) / Norm(DiagonalMatrix(v));
t := time() - t;
ConditionNumber(P);
                             62.299
                   2.0456050687550795426 10^296
                             62.433
                   3.0129332318535323794 10^297
A := evalf(A);
gc();
t := time();
v1, P1 := Eigenvectors(Transpose(1/A) . A);
time() - t;
Norm(1/P1 . (Transpose(1/A) . A) . P1 - DiagonalMatrix(v1)) / Norm(DiagonalMatrix(v1));
t := time() - t;
ConditionNumber(P1);
                             0.113
                   2.2966423159031532614 10^-18
                             0.260
                      94.230954022647572690

Of course the first method (mine) is not the good one. I habitually use the symbolic side of Maple (Gröbner bases) and I am not very comfortable writing a numerical procedure. I understand why the first method is very slow, but I don't understand why its result is completely false; note that it works when I choose Digits := a very large integer.

Thanks for your reply.

 

@acer I will answer you this weekend. In fact, even when the condition number was good, the final accuracy was catastrophic, and I don't understand why...

@acer No, I probably made some wrong moves.

@vv Thank you very much for your prompt reply.

Of course (A^(-1))^+ . A should not be calculated explicitly; yet I am amazed at the speed of resolution of the generalized eigenvalue problem; moreover, even for n = 100, there is very little loss of digits.

With n = 100 and Digits := 20, we obtain the result, with 16 significant digits, in 31 s.
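In case it helps others, here is my reading of how the generalized formulation avoids forming (A^(-1))^+ . A explicitly (assuming ^+ denotes the transpose; the random matrix below is illustrative): since (A^(-1))^+ = (A^+)^(-1), the problem (A^(-1))^+ . A . x = lambda . x is equivalent to A . x = lambda . A^+ . x, which Eigenvectors accepts directly as a generalized eigenproblem.

```maple
restart;
with(LinearAlgebra):
Digits := 20:
roll := rand(-5 .. 5):
A := evalf(Matrix(18, (i, j) -> roll())):
# Generalized problem  A . x = lambda . A^+ . x :
# the inverse of A is never computed.
v, P := Eigenvectors(A, Transpose(A)):
ConditionNumber(P);
```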

Best regards.

@vv 

Yes, if we have a small box (unfortunately, this is not necessarily the case), then we may consider that we are in a neighborhood of the center of the box, say A. If B is the (direct or inverse) Cayley transform of A, then we search for a skew-symmetric matrix close to B; I think we may choose 0.5(B - B^T). Good idea...
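A minimal sketch of this projection idea (the matrices below are illustrative): the Cayley transform B of an orthogonal A is skew-symmetric, the skew part 0.5(M - M^T) is the nearest skew-symmetric matrix to M in the Frobenius norm, and the inverse Cayley transform sends it back to an orthogonal matrix.

```maple
restart;
with(LinearAlgebra):
Id := IdentityMatrix(3):
# An orthogonal A (a rotation, so Id + A is invertible):
A := Matrix([[0, 0, 1], [1, 0, 0], [0, 1, 0]]):
# Cayley transform: skew-symmetric because A is orthogonal.
B := (Id - A) . MatrixInverse(Id + A):
# A nearby matrix that is not exactly skew-symmetric:
M := B + Matrix([[0, 0.01, 0], [0, 0, 0.02], [0.03, 0, 0]]):
# Nearest skew-symmetric matrix (Frobenius norm): the skew part.
S := (M - Transpose(M)) / 2:
# Inverse Cayley transform: orthogonal again.
X := (Id - S) . MatrixInverse(Id + S):
Norm(Transpose(X) . X - Id);   # small, up to roundoff
```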

@Markiyan Hirnyk

OK, your formal calculation works for my toy example, and that's a very good thing. We find a solution for s = 21. The calculation can be continued up to s = 56 (we find other solutions). Beyond that, the expressions sol[s] become complicated.

I tested a difficult example with 18 inequalities (an example for which I know how to find a solution by numerical calculation). For s = 1..56, there are no solutions. For s = 57..64, the computer gives no answer...

Thanks to Markiyan Hirnyk, Carl Love and vv.

Markiyan, I always answer those who take the trouble to write to me.

Later on, I'll write a post about your "solution in two steps".

Roughly speaking, the problem I consider is: two 3×3 matrices A, B are given. We consider an open box for the 9 entries of a 3×3 matrix X = [x_{i,j}], that is, A < X < B entrywise. Does there exist an orthogonal matrix X in the box?

If the answer is YES, then we ask for an approximation of some solution X; of course, an approximation is not, stricto sensu, a proof of existence, but the custom is to be satisfied with that; (*) we will come back to this... The problem may be difficult if all the solutions are close to the edge of the box. Of course, if there is a solution X0 in the box, then a neighborhood of X0 in O(3) is also in the box (3 degrees of freedom).

If the answer is NO and we want a certified answer, then I think (perhaps I am wrong) that we must use a formal calculation, a Gröbner method for example. If we also want to certify existence (cf. (*)), we can perform a formal calculation, or construct in the box a segment that contains our approximation and passes from one side to the other, through O(3).

Of course, my procedure with 2 inequalities is a toy example. NLPSolve (cf. vv's post) easily gives an approximate solution. How does it manage with 18 inequalities? I use NLPSolve for optimization problems and I like it. Yet, in difficult cases, it may give only a local extremum.
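To make the 18-inequality case concrete, here is a hedged NLPSolve sketch (the box below is illustrative, centered at the obvious solution X = [e_3, e_1, e_2]; the open box is handled as closed for the numerical solver): minimize the orthogonality residual subject to the 18 entrywise bounds; a minimum near 0 yields an approximate orthogonal matrix in the box.

```maple
restart;
with(LinearAlgebra): with(Optimization):
X := Matrix(3, 3, symbol = x):
R := Transpose(X) . X - IdentityMatrix(3):
# Orthogonality residual: sum of squares of the 9 entries of R.
obj := add(add(R[i, j]^2, j = 1 .. 3), i = 1 .. 3):
# Illustrative box around the known solution [e_3, e_1, e_2]:
X0 := Matrix([[0, 1, 0], [0, 0, 1], [1, 0, 0]]):
cons := { seq(seq(X0[i, j] - 1/4 <= x[i, j], j = 1 .. 3), i = 1 .. 3),
          seq(seq(x[i, j] <= X0[i, j] + 1/4, j = 1 .. 3), i = 1 .. 3) }:
sol := NLPSolve(obj, cons):
sol[1];   # a residual near 0 gives an approximate orthogonal X in the box
```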

Carl Love, you wrote about the library SolveTools:-SemiAlgebraic; if you have ideas, I'll take them.

Thanks in advance for your suggestions.

Thanks to Markiyan Hirnyk and Carl Love.

Over the past decade, many papers have been published on systems of polynomial equations and inequalities in real variables, as described in arxiv.org/pdf/1409.1534; more recently, algorithms have been announced to calculate the size of some real semi-algebraic sets. I have had discussions about these subjects with Safey El Din and Moreno Maza, who work with Maple.
If we are interested in Gröbner methods over C, then we have black boxes in Maple, Sage and, for noncommutative rings, in GAP; and they work pretty well. But over R we are very far from having such black boxes; only laboratories have complicated assemblies of software. Reviewers of mathematical papers readily accept papers using Gröbner bases over C, but drag their feet over papers dealing with real variables.
So I am interested in using "regular chains" for this question, which I found on a forum and which is close to problems that interest me. Standard optimization methods do not solve all instances.
Question. Can we find an orthogonal 3×3 matrix X = [x_{i,j}] such that a_{i,j} < x_{i,j} < b_{i,j} (where the a_{i,j}, b_{i,j} are known and the x_{i,j} are unknown)?
There are 9 unknowns, 6 equations and at most 18 strict inequalities.
What follows is the Maple procedure for only 2 inequalities;
there is the obvious solution X = [e_3, e_1, e_2], where (e_i) is the canonical basis of the space.
 

restart:
with(LinearAlgebra):
with(RAG):
Digits := 20:

X := Matrix(3, 3, symbol = x):
Y := Transpose(X) . X - IdentityMatrix(3):

# The 6 orthogonality equations (Y is symmetric):
F := NULL:
for i to 3 do
  for j from i to 3 do
    F := F, Y[i, j] = 0:
  od:
od:
# ...plus the 2 inequalities of the toy example:
F := [F, x[1, 1] < 1/2, 3/10 < x[2, 3]]:

# The 9 unknowns:
K := [seq(seq(x[i, j], j = 1 .. 3), i = 1 .. 3)]:

so := HasRealSolutions(F, K):

Thanks in advance.


@Kitonum  Thanks.  Both methods work.
