Carl Love

Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

So, the ion is always placed on one of the coordinate axes, right? Then it can be done as three 2-D plots: energy vs. position along the axis, one for each axis.
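Something like the following sketch, where E(x, y, z) is a purely hypothetical placeholder energy function used only for illustration:

E:= (x, y, z) -> 1/sqrt(x^2 + y^2 + z^2 + 1):   # placeholder energy; substitute your own
plots:-display(Array([
     plot(E(s, 0, 0), s= -5..5, labels= ["x", "Energy"]),
     plot(E(0, s, 0), s= -5..5, labels= ["y", "Energy"]),
     plot(E(0, 0, s), s= -5..5, labels= ["z", "Energy"])
]));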

Mehdi,

Do you mean that you want to do exact computations? Or do you mean that you want Maple to truncate the decimals rather than rounding them?
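For instance, truncation in the sense of trunc as opposed to round? A trivial illustration:

trunc(2.79);   # 2 -- the fractional part is discarded
round(2.79);   # 3 -- rounded to the nearest integer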

 Here's a comparison of several of Maple's solvers on a 256x256 dense system.

 

Comparison of several dense linear system solvers working at simulated quadruple precision.

restart:

Digits:= 34:

with(LinearAlgebra):

A:= RandomMatrix(2^8):

Make it symmetric positive definite so that we can use Cholesky.

A:= A^%T.A:

B:= RandomVector(2^8):

The modular method is an exact p-adic solver.

X1:= CodeTools:-Usage(LinearSolve(A, B, method= modular)):

memory used=145.09MiB, alloc change=24.00MiB, cpu time=2.09s, real time=2.12s

Verify exactness.

Norm(A.X1 - B);

0

 

Convert random data to floating point for the remaining methods.

A:= Matrix(A, datatype= float):  B:= Vector(B, datatype= float):


X2:= CodeTools:-Usage(LinearSolve(A, B, method= LU)):

memory used=2.31GiB, alloc change=48.00MiB, cpu time=18.06s, real time=17.06s

Norm(A.X2 - B);

0.9485e-27

 

Norm(X1-X2);

0.6429e-30

 

The hybrid method uses LU in hardware double precision with iterative refinement to the full value of Digits.

X3:= CodeTools:-Usage(LinearSolve(A, B, method= hybrid)):

memory used=317.98MiB, alloc change=0 bytes, cpu time=2.97s, real time=2.26s

Norm(A.X3 - B);

0.808e-27

 

Norm(X3 - X1);

0.4583e-30

 

So hybrid is significantly faster and slightly more accurate than plain LU.
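For anyone curious, here is a rough sketch of the idea behind hybrid. This is my own illustration, not Maplesoft's actual code, and the helper hwsolve is made up for the purpose.

# Solve at hardware double precision. Digits is an environment variable,
# so the reduced setting applies only inside this procedure.
hwsolve:= proc(M::Matrix, V::Vector)
     Digits:= trunc(evalhf(Digits));
     LinearAlgebra:-LinearSolve(
          Matrix(M, datatype= float[8]), Vector(V, datatype= float[8])
     )
end proc:

X:= Vector(hwsolve(A, B), datatype= float):   # initial double-precision solve, stored at full Digits
R:= B - A.X:                                  # residual computed at Digits = 34
X:= X + hwsolve(A, R):                        # one refinement step; a real implementation reuses
                                              # the LU factors and iterates until the residual
                                              # stops shrinking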

 

X4:= CodeTools:-Usage(LinearSolve(A, B, method= QR)):

memory used=4.65GiB, alloc change=0 bytes, cpu time=38.58s, real time=35.67s

Norm(A.X4 - B);

0.1055e-26

 

Norm(X4 - X1);

0.10927e-29

 

X5:= CodeTools:-Usage(LinearSolve(A, B, method= Cholesky)):

memory used=1.19GiB, alloc change=0 bytes, cpu time=8.56s, real time=8.12s

Norm(A.X5 - B);

0.5172e-27

 

Norm(X5 - X1);

0.8069e-30

 

Amazingly, the exact solver wins by a wide margin.

 

Download Linear_Solvers.mw

@Kitonum You're using Maple 16, right? I don't know what's wrong. It works in my Maple 16 (and in Maple 17, of course). I think that even in Maple V R4 (my earliest version), `or` took an arbitrary number of arguments.

ISC:=proc(T::list, Eq::`=`)
local f:= unapply((lhs-rhs)(Eq), indets(Eq,name)[]);
     `or`(seq(f(i[]) >= 0, i= T)) and `or`(seq(f(i[]) <= 0, i= T))
end proc:
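For example, here is a hypothetical usage sketch (the circle and the points are made up for illustration). As I read the procedure, it returns true exactly when the given points do not all lie strictly on one side of the curve defined by Eq:

ISC([[0, 0], [3, 0]], x^2 + y^2 = 4);   # true: (0,0) is inside the circle, (3,0) is outside
ISC([[0, 0], [1, 1]], x^2 + y^2 = 4);   # false: both points are inside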

I am having trouble understanding your phrase "vector that spans it". One vector does not determine a line. Do you mean the ray starting at the origin in the direction of the vector?

@spradlig Suppose that N is a large integer, say with 1000 digits, that happens to be a perfect cube. Now suppose that the user enters a lengthy expression that contains N^(2/3). Should Maple take the significant time needed to reduce that? What if the user wanted to avoid using that time? After all, the user could have used surd to force the computation if that was what was wanted. What if the exponent would have cancelled or changed because of something else in the expression? Then the time would've been completely wasted. All automatic simplifications involve a trade-off like that.

Like I said, the issue with the plots is completely different. Changing that requires a fundamental change of the computation mode from complex to real. It is not an issue of automatic simplification.

Now, if you want Maple to be just a fancy graphing calculator for students in elementary classes, then the solution is simple: Make RealDomain the default in an initialization file. But Maple in general is not just for elementary classes.

How does Mathematica handle the case in the first paragraph?

 

@spradlig There are two distinct issues involved in the disposition of a^(p/q) for real a and integer p and q, depending on the sign of a.

The issue for positive a: That Maple does not automatically simplify (when simplification is possible) the expressions for positive a is just an issue of "laziness" rather than an issue of the value of the expression.

The issue for negative a: This is the one that affects the plots; it involves a change in the actual value of the expression. As stated at ?arithop:

For non-integer (complex) numeric powers y and (complex) numeric x, the expression x^y is evaluated as exp(y*ln(x)), where ln(x) is evaluated using the principal branch of the logarithm.
The function surd can be used to compute real odd roots of negative real numbers.
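For example:

evalf((-8)^(1/3));   # principal branch: 1.000000000 + 1.732050808*I
surd(-8, 3);         # real cube root: -2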

 


@digerdiga We can compare the residual sums of squares, and the nonlinear fit is better, but only slightly better.

 

restart:

X:= [51, 61, 71, 81, 101, 121, 141, 161, 201]:

Y:= [1.041, 1.034, 1.0296, 1.0259, 1.021, 1.017, 1.015, 1.013, 1.010]:

Sol1:= Statistics:-Fit(
     1+a*n^b, X, Y, n,
     output= [leastsquaresfunction, residualsumofsquares]
);

[1+HFloat(2.1045354792676307)/n^HFloat(1.0016417508071458), 0.462254043633566e-6]

f1, rsos1:= Sol1[]:

X1:= ln~(X):  Y1:= ln~(Y -~ 1):
Sol2:= Statistics:-Fit(a1+b*n, X1, Y1, n, output= [parametervalues, standarderrors]);

Sol2 := [[a1 = .799926272563527, b = -1.01422767794265], Vector[row](2, {(1) = 0.567833227040259e-1, (2) = 0.122755809934050e-1})]

f2:= eval(1+exp(a1)*n^b, Sol2[1]);

1+HFloat(2.22537685111362)/n^HFloat(1.0142276779426467)

rsos2:= add((eval(f2, n= X[k])-Y[k])^2, k= 1..nops(X));

HFloat(5.959559940306572e-7)

plots:-display([
     plot([f1, f2], n= 50..210),
     plots:-pointplot(zip(`[]`, X, Y))
]);

 

Download Linearized_Fit.mw

I updated the procedure substantially to include the eigenvalues, and I added some color to improve the eye-tracking of the circles. I also added a second example, where the disks are more separated.

@DJJerome1976 

I agree with DJ Jerome's skepticism about RealDomain. I am particularly skeptical about RealDomain:-solve. That's why I said to put

with(RealDomain, `^`);

rather than

with(RealDomain);

into your initialization file. The former will overload only the `^` operator, which is enough to accomplish everything discussed in this thread and which doesn't introduce any problems that I know of.
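A quick sketch of the effect of that line (assuming the session is otherwise untouched):

with(RealDomain, `^`):
(-8)^(1/3);   # -2, the real cube root, rather than the principal complex root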
