
## @Kitonum Thank you for your answe...

Following your advice, I use evalf(allvalues(solve(F, K))). This method gives all the complex solutions (that is, in the generic case, 2^n solutions). I am only interested in the real ones, but of course they can be read off immediately.
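For readers outside Maple, the same workflow can be sketched in Python/SymPy (with a toy system of my own choosing, not my actual one): solve the polynomial system over C, then keep the solutions whose numerical imaginary parts vanish.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Toy system (my choice): two quadrics in two unknowns,
# so generically 2^2 = 4 complex solutions.
F = [x**2 + y**2 + x - 1, x*y + y - 1]

sols = sp.solve(F, [x, y])          # all complex solutions
# keep only the real ones, by numerical evaluation
real = [s for s in sols
        if all(abs(sp.im(sp.N(v))) < 1e-10 for v in s)]
print(len(sols), len(real))         # 4 complex solutions, 2 of them real
```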

For n=6 and Digits=20, we obtain the solutions in 3 min 4 s, while using Isolate we obtain the result in 2.2 s.

See my answer to acer on this subject, where I give some details about the system I consider and about the methods I tested.

## @acer Thank you for your ans...

I randomly choose n symmetric $n\times n$ matrices A_i and n vectors V_i with integer entries (for example).

I consider the system X^T A_i X + V_i^T X = 1, 1 <= i <= n, where X is the unknown vector.

Generically, such a system has 2^n complex solutions in X, but I search only for the real ones. Clearly, if n >= 3, it is in general impossible to calculate the exact values of the x_i.

I use Gröbner basis theory (with(FGb)). We then obtain a triangular system P(x_1) = 0, Q(x_1, x_2) = 0, ..., from which we can easily recover approximations of the real solutions. The method works up to (I think) n = 7 (n = 6 in 44 s).
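The triangular structure can be illustrated in Python/SymPy (using SymPy's groebner rather than FGb, and a toy system of my own choosing): a Gröbner basis for a lexicographic order contains a univariate polynomial P(x_1); we isolate its real roots and back-substitute.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# Toy system (my choice): generically 2^2 = 4 solutions, here all real.
F = [x1**2 + x2**2 - 4, x1*x2 - 1]

# A lex Groebner basis is triangular: one element involves x1 only.
G = sp.groebner(F, x2, x1, order='lex')
P = next(g for g in G.exprs if g.free_symbols == {x1})    # P(x1) = 0
Q = next(g for g in G.exprs if x2 in g.free_symbols)      # Q(x1, x2) = 0

roots1 = sp.real_roots(P)          # isolated real roots of P
sols = [(r, sp.solve(Q.subs(x1, r), x2)[0]) for r in roots1]
print(len(sols))                   # 4 real solutions
```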

Following your advice, I used RootFinding:-Isolate. This method (by Rouillier et al.) works very well. Yet, in an experiment with Digits=20 and n=8, I obtained the 14 real solutions in 6 min 32 s; at that rate, a calculation for n=10 should last several days! I think that Isolate also uses Gröbner basis theory.

What can we do for n >= 10? I think we can find some real solutions with a standard numerical method such as Newton-Raphson, but that requires knowing the approximate position of a solution beforehand. And how can we be sure of having obtained all the real solutions?
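Here is the kind of Newton-Raphson search from random starting points that I have in mind (a Python/NumPy sketch with toy sizes of my own choosing); it also illustrates the difficulty: it finds distinct real solutions quickly, but gives no certificate that the list is complete.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# Random symmetric A_i and integer vectors V_i for the system
#   X^T A_i X + V_i^T X = 1,  1 <= i <= n   (toy size, my setup).
A = [M + M.T for M in rng.integers(-5, 6, (n, n, n)).astype(float)]
V = rng.integers(-5, 6, (n, n)).astype(float)

def f(X):
    return np.array([X @ A[i] @ X + V[i] @ X - 1 for i in range(n)])

def jac(X):   # Jacobian: row i is 2*A_i X + V_i (A_i symmetric)
    return np.array([2 * A[i] @ X + V[i] for i in range(n)])

def newton(X, iters=60, tol=1e-12):
    for _ in range(iters):
        try:
            step = np.linalg.solve(jac(X), f(X))
        except np.linalg.LinAlgError:
            return None
        X = X - step
        if not np.all(np.isfinite(X)) or np.linalg.norm(X) > 1e8:
            return None               # diverged
        if np.linalg.norm(step) < tol:
            return X
    return None

found = []
for _ in range(300):
    X = newton(rng.normal(scale=2.0, size=n))
    if X is not None and not any(np.allclose(X, Y, atol=1e-6) for Y in found):
        found.append(X)
print(len(found))   # distinct real solutions found; completeness not guaranteed
```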

@Carl Love Hi Carl, I used your code. The first two methods (with 1/A) run in 74.45 s and 13.60 s of CPU time. Yet the last two methods (with IntInv(A)) do not work on my PC...
Regards.

## @tomleslie Usually, I use Maple to do formal calculation...

@tomleslie Usually, I use Maple for formal (symbolic) calculation; I therefore have trouble with numerical computation.

I agree with your points 2 and 4.

About your point 1: I divide my matrix A by a factor of 10^12 because, otherwise, the calculation of CharacteristicPolynomial(A, x) fails (overflow?), as you can see:

```maple
s := 0;
for i to 20 do
  A := Matrix(100, 100, (i, j) -> evalf(rand()), datatype = float[8]);
  t := time();
  p := CharacteristicPolynomial(A, x);
  s := s + time() - t;
end do:
s;                # 0.140
coeff(p, x, 7);   # Float(infinity) + Float(infinity) I
```
About your point 3: I wrote that (when datatype=float[8]) Maple uses Digits := 15; clearly, after a calculation, the result does not necessarily carry 15 significant digits; for example, MatrixInverse(A) has about 15 - log_10(ConditionNumber(A)) significant digits.
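This loss of digits can be checked numerically (a Python sketch using NumPy and SymPy, with the classical Hilbert matrix, whose exact inverse has integer entries): with a condition number near 10^13, the double-precision inverse keeps only a few significant digits.

```python
import numpy as np
import sympy as sp

n = 10
H = sp.Matrix(n, n, lambda i, j: sp.Rational(1, i + j + 1))   # Hilbert matrix
exact = np.array(H.inv().tolist(), dtype=float)   # exact inverse (integer entries)

Hf = np.array(H.tolist(), dtype=float)
approx = np.linalg.inv(Hf)                        # double-precision inverse

kappa = np.linalg.cond(Hf)
rel = np.abs(approx - exact).max() / np.abs(exact).max()
# rule of thumb: correct digits ~ 15 or 16 - log10(kappa)
print(f"cond(H) ~ 10^{np.log10(kappa):.0f}, correct digits ~ {-np.log10(rel):.0f}")
```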

## It works...

@tomleslie I sent a joint answer to all three.

In my previous test, I used integer entries for A but asked for evalf(MatrixInverse(A)), etc. Following your instructions, my second test is as follows (in particular, I now use float entries directly for the matrices considered):

```maple
UseHardwareFloats;   # deduced
s := 0;
for i to 1000 do
  A := Matrix(100, 100, (i, j) -> evalf(rand()/10^12), datatype = float[8]);
  t := time();
  B := 1/A;
  s := s + time() - t;
end do:
s;
s := 0;
for i to 1000 do
  A := Matrix(100, 100, (i, j) -> evalf(rand()/10^12), datatype = float[8]);
  t := time();
  d := Determinant(A);
  s := s + time() - t;
end do:
s;
```

Now the results are correct and come very quickly. The computation times are in seconds:

det(A) in 0.78·10^-3, AB in 1.2·10^-3, MatrixInverse(A) in 1.5·10^-3, and CharacteristicPolynomial(A, x) in 7.5·10^-3.

Indeed, the complexity of computing det(A), AB and MatrixInverse(A) is O(n^3) operations in the field, while for CharacteristicPolynomial(A, x) the complexity is O(n^3 log(n) log(log(n))) operations.
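As a rough sanity check of the O(n^3) growth (a Python/NumPy sketch with sizes of my own choosing; wall-clock timings are noisy, so the numbers are only indicative): doubling n should multiply the times by roughly 2^3 = 8.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

def best_time(op, A, reps=3):
    """Best-of-reps wall-clock time of op(A), in seconds."""
    ts = []
    for _ in range(reps):
        t0 = time.perf_counter()
        op(A)
        ts.append(time.perf_counter() - t0)
    return min(ts)

times = {}
for n in (200, 400, 800):
    A = rng.random((n, n))
    times[n] = (best_time(np.linalg.det, A), best_time(np.linalg.inv, A))
    print(f"n={n}: det {times[n][0]:.2e} s, inv {times[n][1]:.2e} s")
```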

Many thanks.

PS. When we use datatype=float[8], Digits is fixed to 15.

## debugger...

The problem happens with existing worksheets. When I use stoperror('all');, I get the Maple debugger, and with showstat, upon entering the following block, I obtain the warning "kernel connection lost // execution stopped: stack limit reached". I did not find the command showstack; how do I get to it?

While searching for this last command, I invoked the debugger five times, and (miraculously?) my program, and others, returned to normal operation.

I do not really know what happened but, thanks to you, I got rid of this problem. Thank you.

## Thanks to both of you...

Thanks to both of you

## @Markiyan Hirnyk OK, I understand. ...

@Markiyan Hirnyk OK, I understand. I consider numerical samples.

## @Carl Love More generally, to obtain the...

@Carl Love More generally, to obtain the dimension of an algebraic variety V (that depends on a set P of variables), we can specialize these variables in Q. In general, we obtain the desired dimension. I often use this trick, which is linked to Hilbert's irreducibility theorem.

Unfortunately, there is no result that explicitly gives an integer k(P) such that, if k random specializations P_0 in Q of P all give the dimension d, then the probability that d is the true dimension is (for instance) approximately 0.99. To my knowledge, there are only results that (roughly speaking) say that if 10^5 tests give the dimension d, then the probability that the dimension really is d is >= 0.9, while in reality the probability is ~0.99999. If each test lasts 30 s, this type of result is useless. Finally, in a publication, this kind of result can only be presented as a conjecture.

Yet your idea of using finite fields seems good to me. It would be interesting to get an estimate of the required probability, even if it is only heuristic. In particular, you say that "...where the probability that the rank of the numeric matrix is not maximal is a very small but easily computable number." How do we find this easily computable number? In a second step, you increase the size S of the field; the estimated probability must therefore be stable for large enough S.
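If I understand the finite-field idea, the "easily computable number" presumably comes from the Schwartz-Zippel lemma: a nonzero polynomial of total degree d vanishes at a uniformly random point of S^n with probability at most d/|S|, so with S = GF(p) a rank drop at a random point has probability at most deg(minor)/p. A Python sketch of the rank computation over GF(p), with a toy Jacobian of my own choosing:

```python
import random
import sympy as sp

def rank_mod_p(M, p):
    """Rank of an integer matrix over GF(p), by Gaussian elimination."""
    M = [[e % p for e in row] for row in M]
    rank, rows = 0, len(M)
    for c in range(len(M[0]) if rows else 0):
        piv = next((i for i in range(rank, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], p - 2, p)      # modular inverse (p prime)
        M[rank] = [e * inv % p for e in M[rank]]
        for i in range(rows):
            if i != rank and M[i][c]:
                M[i] = [(a - M[i][c] * b) % p for a, b in zip(M[i], M[rank])]
        rank += 1
    return rank

x, y, z = sp.symbols('x y z')
f = sp.Matrix([x**2 + y*z - 1, x*y + z**2 - 2])   # toy map (my choice)
J = f.jacobian([x, y, z])                          # generic rank 2

p = 2**31 - 1
pt = {v: random.randrange(1, p) for v in (x, y, z)}
Jp = [[int(e) for e in row] for row in J.subs(pt).tolist()]
print(rank_mod_p(Jp, p))   # 2, except with probability O(deg/p)
```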

## @Axel Vogt   if I can define J by a...

@Axel Vogt If I can define J by a system of equations, then Maple gives me its dimension; it then remains to find J. The problem comes when I compose q and f: the degree becomes very high. Still, I am going to test your idea. Thanks.

## Axel, I think also that the key is reaso...

Axel, I also think that the key is reasoning by generic cases, but...

If we consider my function r(X), Maple calculates the Jacobian matrix and its generic rank, that is, its maximal rank: indeed, Maple performs a Gaussian elimination and stops the process only when it meets expressions that are FORMALLY zero. Unfortunately:

i) the calculations are enormous and we can handle only simple cases.

ii) this rustic method does not use Gröbner basis theory.

An equation f(X) = 0 gives rise to an ideal.

My question is: can I associate an ideal to the image of a function?
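(For the record, the answer I was looking for: yes, in the sense of the Zariski closure of the image, via elimination. A SymPy sketch with the classic cusp example t -> (t^2, t^3), my choice of example:)

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
# Ideal of the graph of the map t -> (t^2, t^3); eliminating t (lex order
# with t highest) gives the ideal of the Zariski closure of its image.
G = sp.groebner([x - t**2, y - t**3], t, x, y, order='lex')
image_ideal = [g for g in G.exprs if t not in g.free_symbols]
print(image_ideal)   # [x**3 - y**2], the cusp
```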

## an alternative formulation...

Axel, I see what you mean. Let me rewrite my question as follows.

Let f : X -> f(X) be a polynomial function from C^n to C^p, and let r(X) be the rank of the Jacobian matrix of f at X. What is the maximal value of r(X) as X ranges over C^n?

Do you agree with this formulation? Anyway, I think this is a difficult problem.
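For what it is worth, the maximal rank is attained on a Zariski-open set, so the rank at a single random point already equals it with probability 1. A SymPy sketch (toy map of my own choosing):

```python
import random
import sympy as sp

X = sp.symbols('x1 x2 x3')
f = sp.Matrix([X[0]*X[1], X[1]*X[2], X[0]*X[2]])   # toy map C^3 -> C^3
J = f.jacobian(X)

# The rank of J at a random rational point equals the
# generic (maximal) rank, with probability 1.
pt = {v: sp.Rational(random.randrange(1, 10**6), random.randrange(1, 10**6))
      for v in X}
print(J.subs(pt).rank())   # 3: here det(J) = 2*x1*x2*x3, nonzero at this point
```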

## "Gradient" that depends on the definitio...

"Gradient": that depends on the definition; you are right, it is better to say Jacobian matrix. "Zero" is of course the zero matrix. About "parametrization" you are wrong: we consider a parametrization that is not proper, because f is not locally an immersion.

## precision...

OK Axel. "Locally": that is to say, where the gradient of f is not zero. "Parameters": because f is a parametrization of the variety.

You are right; I forgot to write that f is a polynomial.
