acer


MaplePrimes Activity


These are replies submitted by acer

@Markiyan Hirnyk Yes, it is true that the documentation of the solve command is significantly behind its actual capabilities. But what I showed works too, and not just by chance as can be confirmed by thorough examination of the source.

The real option can quite often also (but not always, naturally) be used to good effect for a univariate trigonometric equation, sometimes when combined with the explicit and allsolutions options. It is a good addition to one's arsenal for handling some classes of problems.
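For instance, here is a hedged sketch of that kind of call (the equation is my own illustration, not one from this thread):

restart;

# The real option combined with allsolutions and explicit, on a
# univariate trigonometric equation; the _Zn names parametrize the
# full family of real solutions.
solve(sin(x) = 1/2, x, 'real', 'allsolutions', 'explicit');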

Here are two other ways to handle this particular example.

restart;

f1 := x^2+y^2 = 1:
f2 := y = x^3:

solve({x>-infinity, f1, f2}, [x,y], explicit);

[[x = -(1/6)*6^(1/2)*((108+12*93^(1/2))^(1/3)*((108+12*93^(1/2))^(2/3)-12))^(1/2)/(108+12*93^(1/2))^(1/3), y = -(1/432)*6^(1/2)*((108+12*93^(1/2))^(1/3)*((108+12*93^(1/2))^(2/3)-12))^(3/2)/(9+93^(1/2))], [x = (1/6)*6^(1/2)*((108+12*93^(1/2))^(1/3)*((108+12*93^(1/2))^(2/3)-12))^(1/2)/(108+12*93^(1/2))^(1/3), y = (1/432)*6^(1/2)*((108+12*93^(1/2))^(1/3)*((108+12*93^(1/2))^(2/3)-12))^(3/2)/(9+93^(1/2))]]

select(s->is(eval(x,s),real) and is(eval(y,s),real),
       solve({f1, f2}, [x,y], explicit));

[[x = (1/6)*(6*(108+12*93^(1/2))^(1/3)-72/(108+12*93^(1/2))^(1/3))^(1/2), y = (1/216)*(6*(108+12*93^(1/2))^(1/3)-72/(108+12*93^(1/2))^(1/3))^(3/2)], [x = -(1/6)*(6*(108+12*93^(1/2))^(1/3)-72/(108+12*93^(1/2))^(1/3))^(1/2), y = -(1/216)*(6*(108+12*93^(1/2))^(1/3)-72/(108+12*93^(1/2))^(1/3))^(3/2)]]

 

Download realsolvemore.mw

I consider RealDomain:-solve to be an approach which is in general weaker than the conglomeration of other approaches using :-solve (but not always -- there are exceptions to everything), because its implementation and fundamental design are not strong.

Perhaps the most important thing to note is that there is currently no single command which can handle every example of exact symbolic real-solving that is handled by at least one of the available methods.

Do you mean that it exits?

Or do you just mean that it seems to hang, because you haven't yet entered end proc and so it's still waiting for additional input for the body or termination of the procedure?

@unproductive I've added a link in my Answer to the Programming Guide.

@Carl Love There are a few ways to get the desired result, sure. My main point for the OP was that assuming only r::real should not (in and of itself) be a way to get rid of the conjugate around r^(1/4).

Yes, int has not yet been taught to utilize the ranges as assumptions in all cases.

That includes the case of nested calls, where it's difficult for an inner call to know that it's inside an outer call. Putting special evaluation rules on the first parameter of int could help, by delaying evaluation of the inner call until the outer call has placed its assumptions, etc. But that kind of thing poses some difficult challenges for Library code.
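As a toy illustration of that idea (mine, and not how int is actually implemented):

restart;

# An uneval parameter receives its argument unevaluated, so a wrapper
# could act (e.g. place assumptions) before allowing evaluation.
showme := proc(e::uneval)
  lprint("received unevaluated:", e);
  eval(e);  # evaluate only when we choose to
end proc:

showme(int(x, x = 0 .. 1));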

And so the contrasting behavior below is interesting, using Maple 2018.0.

restart;
Psi00:= exp(-r^2/2)/sqrt(Pi):

int(conjugate(Psi00*r^(1/4))*Psi00*r^(1/4)*r, [phi= 0..2*Pi, r= 0..infinity]):
lprint(%);
  (1/4)*Pi*2^(1/2)/GAMMA(3/4)

restart;
Psi00:= exp(-r^2/2)/sqrt(Pi):

int(int(conjugate(Psi00*r^(1/4))*Psi00*r^(1/4)*r, phi= 0..2*Pi), r= 0..infinity):
lprint(%);
  int(2*conjugate(exp(-(1/2)*r^2)*r^(1/4))*exp(-(1/2)*r^2)*r^(5/4), r = 0 .. infinity)

lprint(simplify(%));
  (1/4)*Pi*2^(1/2)/GAMMA(3/4)

PS. It's minor, but your code above has PI instead of Pi in the assignment to Psi00.

I have learned this by experimenting.

That includes using the mouse to manually select and adjust the appearance of 2D Input (color, style, font), converting to "Atomic Identifier" via the right-click menu, and then using lprint. The resulting so-called TypeMK is an analogue of MathML, as are the function calls Typesetting:-mi, etc.

Another useful export is Typesetting:-Typeset, which can convert an expression to the nested kind of function call that I used above. By examining the effect of color/font changes on the atomic identifiers I made, I was able to guess how the function calls to Typesetting:-mi and friends might be altered in the result from Typesetting:-Typeset.

The Typesetting:-EV procedure seems to cause something like a 1-level evaluation of its argument when passed to Typesetting:-Typeset. That allows processing of an expression that has been assigned to a name.

Here is some fun. One could also construct procedures which target and replace these attributes altogether or individually.

restart;

expr := sqrt(Sum(sin(n*Pi*x),n=1..N));

(Sum(sin(n*Pi*x), n = 1 .. N))^(1/2)

foo:=Typesetting:-Typeset(Typesetting:-EV(expr)):

lprint(foo);

Typesetting:-msqrt(Typesetting:-mrow(Typesetting:-munderover(Typesetting:-mstyle(Typesetting:-mo("∑", Typesetting:-msemantics = "inert"), mathcolor = "#909090"), Typesetting:-mrow(Typesetting:-mi("n"), Typesetting:-mo("="), Typesetting:-mn("1")), Typesetting:-mi("N")), Typesetting:-mo("⁡"), Typesetting:-mrow(Typesetting:-mi("sin", fontstyle = "normal"), Typesetting:-mo("⁡"), Typesetting:-mfenced(Typesetting:-mrow(Typesetting:-mi("n"), Typesetting:-mo("⁢"), Typesetting:-mi("π"), Typesetting:-mo("⁢"), Typesetting:-mi("x"))))))

map[2](op,0,indets(foo,function));

{Typesetting:-mfenced, Typesetting:-mi, Typesetting:-mn, Typesetting:-mo, Typesetting:-mrow, Typesetting:-msqrt, Typesetting:-mstyle, Typesetting:-munderover}

stuff1 := fontfamily="Helvetica", size = 16, fontweight = "bold", mathcolor = "purple":

subsindets(foo, 'specfunc({Typesetting:-msqrt,Typesetting:-mrow,
                           Typesetting:-mi,Typesetting:-mo,Typesetting:-mn,
                           Typesetting:-mfenced,Typesetting:-munderover})',
           u->op(0,u)(op(remove(type,[op(u)],
                         identical(fontfamily,size,
                                   fontweight,mathcolor)=anything)),stuff1));

sqrt(Sum(sin(n*Pi*x), n = 1 .. N))

# including mstyle, to replace gray of inert Sum

subsindets(foo, 'specfunc({Typesetting:-msqrt,Typesetting:-mrow,
                           Typesetting:-mi,Typesetting:-mo,Typesetting:-mn,
                           Typesetting:-mfenced,Typesetting:-munderover,
                           Typesetting:-mstyle})',
           u->op(0,u)(op(remove(type,[op(u)],
                         identical(fontfamily,size,
                                   fontweight,mathcolor)=anything)),stuff1));

sqrt(Sum(sin(n*Pi*x), n = 1 .. N))

 


Download Typesetting_misc.mw

@tomleslie I have never heard of a plan to document this functionality. It would be nice to have. It can be useful to do such things programmatically, for example in `print/...` extensions, ModulePrint procedures, and objects.

I'd like to clarify one aspect for the OP.

In the given examples the round brackets denote function application, and not indexing into the list.

Round brackets can denote indexing into Matrices, Arrays, and Vectors, but not into lists or sets.
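A small sketch of the distinction (my own example):

restart;

L := [sin, cos]:
L[2];             # square brackets index the list: cos
L(x);             # round brackets apply the list as a function: [sin(x), cos(x)]

M := Matrix([[1, 2], [3, 4]]):
M[2, 1], M(2, 1); # both forms index into a Matrix: 3, 3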

@Carl Love 

2) Yes, if the Matrix does not have the symmetric indexing-function then the type-check against 'Matrix(symmetric)' will, alas, get to some interpreted Library code.

restart;

#showstat( `type/rtable/TwoDim`, 147..161 );

stopat(`type/rtable/TwoDim`, 148):

type( Matrix([[1,2],[2,3]], shape=symmetric, datatype=float[8]),
      'Matrix(symmetric, datatype=float[8])' );

                    true

type( Matrix([[1,2],[2,3]], datatype=float[8]),                 
      'Matrix(symmetric, datatype=float[8])' );
false
`type/rtable/TwoDim`:
 148*      if dims[1] <> dims[2] then
               ...
           end if;

DBG> cont

                    true

There should be lots of ways to speed up that mechanism, whether for float[8] or not. (One possibility is sketched below, after these comments.)

3) I added the conversion back to Fortran_order just because it's the default for float[8] Matrices, and the LDLT code constructs a new triangular[lower,unit] Matrix for the return value anyway. Back in 1999 the accelerated BLAS only supported Fortran_order, and LAPACK (and NAG, at that time) was designed for it, which is why it was made the default for float[8] Matrices in Maple 6. Nowadays the Intel MKL has an additional C_order API (and reasonable support for it). LinearAlgebra could be improved to call out to that MKL C-interface instead of making Fortran_order copies when passed C_order Matrices. But until then... it's mostly better to have one's float[8] Matrices be Fortran_order.
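Regarding 2), here is a hedged sketch (my own workaround, not what the type check currently does) of a quicker symmetry test: accept a declared symmetric indexing-function immediately, and otherwise compare against the transpose using compiled LinearAlgebra routines.

restart;

issym := M -> evalb( member('symmetric', [rtable_indfns(M)])
                     or LinearAlgebra:-Equal(M, M^%T) ):

issym( Matrix([[1,2],[2,3]], shape=symmetric, datatype=float[8]) );  # fast path
issym( Matrix([[1,2],[2,3]], datatype=float[8]) );  # compiled comparison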

@Rouben Rostamian  

restart;

kernelopts(version);
    Maple 2018.1, X86 64 LINUX, Jun 8 2018, Build ID 1321769

s := x -> min(frac(x),1-frac(x)):
b := x -> sum(s(2^n*x)/2^n, n=0..4):

limit( ( b(x)-b(9.05) )/(x-9.05), x=9.05 );
                   0.

evalf(Limit( ( b(x)-b(9.05) )/(x-9.05), x=9.05 ));
               3.000000000

MultiSeries:-limit( ( b(x)-b(9.05) )/(x-9.05), x=9.05 );                  
                    3

foo := convert( ( b(x)-b(9.05) )/(x-9.05), rational, exact ):

lprint(foo);
  (min(1-frac(x),frac(x))+1/2*min(1-frac(2*x),frac(2*x))+1/4*min(1-
  frac(4*x),frac(4*x))+1/8*min(
  1-frac(8*x),frac(8*x))+1/16*min(1-frac(16*x),
  frac(16*x))-17/80)/(x-181/20)

MultiSeries:-limit( foo, x=905/100 );                        
                     3

evalf(Limit( foo, x=905/100 ));
                3.000000000

limit( foo, x=905/100 );
                     0

Using Maple 18.02 and 16.02 I get 3. from :-limit, but in Maple 2015.0 and 2015.2 and later I get 0.

@Carl Love Very nice. Vote up.

A few comments:

1) This relates only to the generation of the example Matrix A. For the call to RandomMatrix, it is quite a bit faster to use the syntax generator=0. .. 1. instead of generator=rand(0. .. 1). (A small benchmark sketch follows these comments.)

2) The type-check in A::Matrix(symmetric, datatype= float[8]) on the first parameter of LDLT is, unfortunately, much slower than it ought to be in the case that the first (Matrix) argument happens to be symmetric in its values but does not actually have the symmetric indexing function. So it seems fair to pump in a shaped Matrix to LDLT. I see a timing improvement from about 2.12sec to about 1.56sec by that change of the input.

3) The compiled external function which does the Cholesky decomposition is actually DPOTRF from the Intel MKL, and not a generic LAPACK version such as NAG F07FDF. It is highly optimized to use SIMD or other fancy chipset extensions the MKL detects at runtime. But it is also tuned to minimize cache misses. One aspect of such optimization that can (sometimes) be partially achieved when using either Maple's Compiler or evalhf mode is improving the cache hits. The innermost loop of LDLT is the loop,
    for p to k1 do S:= S + A[i,p]*R[p] od
which accesses successive values along a row of the Matrix. Constructing the workspace Matrix AA of LDLT (which is passed as Matrix A of LDLT_internal) with order=C_order means that its entries are stored by row in memory. This seems to improve the performance. For the 1024x1024 example, with shape=symmetric on the input Matrix, I see the timing improve from about 1.56sec to about 1.21sec.
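Regarding 1), a minimal benchmark sketch (my own; the size is arbitrary and timings are machine-dependent):

restart;

N := 2000:
# The range form of generator lets the filling happen in compiled code,
# while the rand(...) form calls a generator procedure per entry (a
# plausible account of the speed difference).
CodeTools:-Usage( LinearAlgebra:-RandomMatrix(N, N, generator = 0. .. 1., datatype = float[8]) ):
CodeTools:-Usage( LinearAlgebra:-RandomMatrix(N, N, generator = rand(0. .. 1), datatype = float[8]) ):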

LDLT_a.mw

@David Sycamore You can utilize commands which are members of packages by using a long form of the name, or by loading the package and then using the short form of the name.

The NumberTheory package is newer than the numtheory package.

Both of those packages allow the long-form syntax package:-member as well as the long form package[member]. I prefer the former as simple, safe, and unambiguous. The latter can run into name-space issues depending on the context, unless you do more typing to arrive at package[':-member'].
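As a quick illustration of that pitfall (my own example):

restart;

pi := 3.14:             # an unrelated assignment to the global name pi
#numtheory[pi](31);     # the indexed form would now evaluate pi to 3.14 and fail
numtheory[':-pi'](31);  # the unevaluation quotes recover the package member
numtheory:-pi(31);      # the :- form is unaffected by the assignment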

restart;

NumberTheory:-PrimeCounting(31);
                               11
NumberTheory[PrimeCounting](31);
                               11
numtheory:-pi(31);
                               11
numtheory[pi](31);
                               11

restart;

with(NumberTheory):

PrimeCounting(31);
                               11

PrimeCounting(47);
                               15

restart;

with(numtheory):

pi(31);
                               11

pi(47);
                               15

The LDL^T decomposition is suitable for symmetric indefinite matrices, and does pivoting to handle that.

Cholesky LL^T decomposition is for symmetric positive definite.

LDL^T implementations are usually considerably slower than Cholesky LL^T, which is not unexpected.

The OP wrote that his Matrix is "s.p.d". So why the need for LDL^T? Just because some instructor demands it? Or is there some numeric justification? (e.g. close to indefinite, or...?) If there is in fact a numeric justification for preferring LDL^T over LL^T for the given Matrix, then performing LL^T and then adjusting would defeat that goal.
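For concreteness, here is a minimal sketch (my own) of that "perform LL^T and then adjust" route, using the stock Cholesky from LinearAlgebra. Note that it presumes positive definiteness, which is exactly what a pivoted LDL^T does not need:

restart;
with(LinearAlgebra):

A := Matrix([[4., 2.], [2., 3.]], shape = symmetric, datatype = float[8]):
Lc := LUDecomposition(A, method = 'Cholesky'):             # A = Lc . Lc^%T
D1 := DiagonalMatrix([seq(Lc[i, i]^2, i = 1 .. 2)]):       # diagonal D
L1 := Lc . DiagonalMatrix([seq(1/Lc[i, i], i = 1 .. 2)]):  # unit lower triangular L
Norm(L1 . D1 . L1^%T - A);                                 # ~0., so A = L . D . L^T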

I don't recall seeing LDL^T called "Cholesky" by an expert. It seems like an unnecessary and confusing equivocation.

@samen You have incorrectly changed the code I showed above. Your Reply shows,

plots:-animate(K->plots:-matrixplot(p(k)-p(K)),[k],k=0.0001..0.001);

while my code above showed,

plots:-animate(K->plots:-matrixplot(p(0.0001)-p(K)),[k],k=0.0001..0.001);

With your incorrect change it won't work properly for any N.

So first I reverted that change.

Next I replaced the call to solve inside procedure p with a call to fsolve. That sped it up enough that N=19 was bearable, but N=49 was still too slow.

Then I replaced the use of fsolve (or solve) by use of LinearSolve, after generating the Matrix-form of the equations in sys, which are all linear in the variables in vars. That made the case N=49 run reasonably quickly. See the attachment.

crnk_ac.mw

The call to GenerateEquations is now the slowest part. At least it's outside of any procedure like pmat or p, and so only gets called once. But you could avoid that time cost as well if you generated the Matrix-form directly instead of bothering to form the eqs.
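For reference, the Matrix-form pattern described above, sketched on a tiny stand-in system (these sys and vars are hypothetical placeholders, not the worksheet's):

restart;

sys := [2*x + y = 1, x - y = 4]:
vars := [x, y]:
# build the Matrix form A . <vars> = b, then solve it directly
A, b := LinearAlgebra:-GenerateMatrix(sys, vars):
LinearAlgebra:-LinearSolve(A, b);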

I don't know whether your discretization scheme is overall correct. I've focused only on the aspect of solving the equations you've targeted.

If your eventual goal is to numerically evaluate the eigenvectors at floating-point values for all the parameters then you would likely be better off with a procedure (function) that admitted those float values and did the eigenvector calculations in hardware double precision.

That is likely true whether you intend to operate purely within Maple, or in C or Fortran. In the latter cases I mean a purely hardware-float LAPACK eigen-solver, such as one from the Intel MKL, as opposed to exporting the symbolic formulas for the eigenvectors.

Even just within Maple, an efficiently coded procedure that acted inplace on float[8] Matrices should be able to do millions of such eigenvector computations per second. (In a recent bit of work I was able to crank out several million 5x5 eigenvector computations per second in a highly optimized set of procedures in just Maple, and in C/Fortran there should be even less function-call overhead.)
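Here is a hedged sketch (my own, with an arbitrary small Matrix) of the kind of hardware-precision call meant here:

restart;

# float[8] storage makes Eigenvectors dispatch to compiled LAPACK code
M := Matrix(5, 5, (i, j) -> evalf(1/(i + j - 1)), datatype = float[8]):
vals, vecs := LinearAlgebra:-Eigenvectors(M):  # eigenvalue Vector, eigenvector Matrix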

But there is also numerical stability to consider. The eigen-solvers in LAPACK are quite good, but there's no guarantee that the floating-point instantiation of lengthy symbolic eigenvector formulas would be as numerically robust.

And one more.

map( `^`, L1, 2 );

That should faster than using the user-defined operator u->u^2 for very long lists, since it only uses the kernel builtin `^`

[edited] And unlike the elementwise operator ^~ it will work in much older versions of Maple, back to Maple V R5 (released January 1998).
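A quick way to compare the two (my own sketch; timings are machine-dependent):

restart;

L1 := [seq(i, i = 1 .. 10^6)]:
time( map(`^`, L1, 2) );    # maps the kernel builtin directly
time( map(u -> u^2, L1) );  # calls an interpreted operator per element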

@jthress1 The only difference I see is in the order in which the 3 distinct eigenvalues might appear in the Vector, and in whether the (numerators and denominator) entries in the eigenvectors are factored. But scalar multiples of the eigenvectors would also be acceptable.

1D_Euler_Eigenvectors_ac.mw

I don't see anything to object to here. Just be careful to associate the eigenvectors with the right eigenvalues, and account for the order in which they appear.

And if any eigenvalue were associated with more than one independent eigenvector (not the case in this followup) then various linear combinations would also be acceptable.
