acer


MaplePrimes Activity


These are answers submitted by acer

There are cleverer mathematical ways to determine whether there are real roots, but the following is (deliberately) a minor adjustment to your existing code.

S5AléatoireParaboleSommetGraphe_ac.mw
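Just to illustrate one such mathematical route: a quadratic a*x^2+b*x+c has real roots exactly when its discriminant b^2-4*a*c is nonnegative. A minimal sketch (the coefficient names and values here are hypothetical, not taken from your worksheet):

a, b, c := 2, -3, -5:               # hypothetical coefficients of a*x^2+b*x+c
disc := b^2 - 4*a*c;                # discriminant
has_real_roots := evalb(disc >= 0);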

The cited solution holds for complex x and y, including the case where either is real and negative.

restart;

eq:=Z^2=y/x;

Z^2 = y/x

sols := [solve(eq,{Z})];

[{Z = (x*y)^(1/2)/x}, {Z = -(x*y)^(1/2)/x}]

eval~(eq,sols);

[y/x = y/x, y/x = y/x]

Or, perhaps,

[allvalues(solve({eq},Z,explicit=false))];

[{Z = (y/x)^(1/2)}, {Z = -(y/x)^(1/2)}]

eval~(eq,%);

[y/x = y/x, y/x = y/x]

Download solve_res.mw

[edit] In general, if for some reason you wanted to restrict the solution to x>0, y>0 then you could impose those as extra conditions. Here, fortunately, your desired formulation is attained. (These steps are deliberately split.)

restart;

eq:=Z^2=y/x;

Z^2 = y/x

conds := x>0, y>0;

0 < x, 0 < y

temp := solve({eq,conds},Z);

temp := piecewise(0 < x and 0 < y, [{Z = sqrt(y)/sqrt(x)}, {Z = -sqrt(y)/sqrt(x)}], [])

temp assuming conds;

[{Z = y^(1/2)/x^(1/2)}, {Z = -y^(1/2)/x^(1/2)}]

Download solve_ex_conds.mw

The lists [ [1], [1,1], [2], [1,2], [2,2] ] could be padded on the right with 0, so that each has two elements.

Then the list of D terms could be reindexed (sorted) using the permutation that sorts that list of padded lists.

Then the newly sorted list of D terms could be used to sort the original expression with a plex ordering.

restart;

Vars    := [seq(U[i], i=1..2)]:

AtPoint := [seq(P[i], i=1..2)]:

mt      := mtaylor(f(Vars[]), Vars =~ AtPoint, 3);

f(P[1], P[2])+(D[1](f))(P[1], P[2])*(U[1]-P[1])+(D[2](f))(P[1], P[2])*(U[2]-P[2])+(1/2)*(D[1, 1](f))(P[1], P[2])*(U[1]-P[1])^2+(D[1, 2](f))(P[1], P[2])*(U[1]-P[1])*(U[2]-P[2])+(1/2)*(D[2, 2](f))(P[1], P[2])*(U[2]-P[2])^2

K := map(t -> select(has, [op(t)], D)[], select(has,[op(mt)],D));

[(D[1](f))(P[1], P[2]), (D[2](f))(P[1], P[2]), (D[1, 1](f))(P[1], P[2]), (D[1, 2](f))(P[1], P[2]), (D[2, 2](f))(P[1], P[2])]

S1 := K[sort(map(t->[op([0,0,..],t),0][..2], K),output=permutation)];

[(D[1](f))(P[1], P[2]), (D[1, 1](f))(P[1], P[2]), (D[1, 2](f))(P[1], P[2]), (D[2](f))(P[1], P[2]), (D[2, 2](f))(P[1], P[2])]


Other terms at the end:

sort(mt, order=plex(S1[]));

(U[1]-P[1])*(D[1](f))(P[1], P[2])+(1/2)*(U[1]-P[1])^2*(D[1, 1](f))(P[1], P[2])+(U[1]-P[1])*(U[2]-P[2])*(D[1, 2](f))(P[1], P[2])+(U[2]-P[2])*(D[2](f))(P[1], P[2])+(1/2)*(U[2]-P[2])^2*(D[2, 2](f))(P[1], P[2])+f(P[1], P[2])


Other terms at the front:

sort(mt, order=plex(ListTools:-Reverse(S1)[]), ascending);

f(P[1], P[2])+(U[1]-P[1])*(D[1](f))(P[1], P[2])+(1/2)*(U[1]-P[1])^2*(D[1, 1](f))(P[1], P[2])+(U[1]-P[1])*(U[2]-P[2])*(D[1, 2](f))(P[1], P[2])+(U[2]-P[2])*(D[2](f))(P[1], P[2])+(1/2)*(U[2]-P[2])^2*(D[2, 2](f))(P[1], P[2])


Download sort_D.mw

There is a large section of the Help system, entitled "Applications and Example Worksheets". It is organized mainly by subject, and its worksheets often use multiple commands to accomplish more involved computations (differential equations, statistics, plotting, etc.). This is quite different from the kinds of examples usually seen in any single command's Help page.

Also, if you're new to Maple then something to consider is what kind of environment you want.

There is marked-up 2D Input versus plaintext 1D Maple Notation input mode. A Help-page's Examples Section can be toggled from one to the other using an icon in a menu-bar at top.

The defaults for Maple include 2D Input mode and Documents (as opposed to 1D input and Worksheets), and those aspects can both be changed and set as new preferences.

The User Manual has a great deal of 2D Input in Worksheets. The input is marked-up math, entered using the left-panel palettes or command-completion. It also has many basic examples that are solved using the right context-panel menus, makes heavy use of Equation-label references to previous statements, and points users at Tutors and Assistants (popup solvers). Some people love all those convenient and powerful things, and some people gladly get by (easily) without any of them.

It's difficult to describe just how very different two people's work can look. You should probably try out the alternative modes, so that you don't miss out on something that you might prefer.

In my opinion the User Manual has quite a lot of focus on approaches and functionality and convenience that would be useful to a student.

A decent, inexpensive 3rd-party book that uses only 1D input (my own preference, in a Worksheet) is Understanding Maple by Ian Thompson. Perhaps it's fair to suggest that this focuses more on programming, and could be useful for scientific computation work in Maple.

You have attempted to use the syntax for plotting solutions from pdsolve(...,numeric), but your solutions are produced by dsolve(...,numeric), and those are handled differently.

Also, your solutions contain f1(eta), but no f(eta).

sachi_stream_error_3d_ac.mw
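For reference, the usual way to plot a dsolve(...,numeric) result is plots:-odeplot. A minimal sketch (the names sol, f1, and eta, and the plotting range, are assumptions based on your description, not copied from your worksheet):

sol := dsolve({sys, ics}, numeric):           # however you already formed it
plots:-odeplot(sol, [eta, f1(eta)], 0 .. 5);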

Your code attempts to find a general symbolic formula for the result, which seems like a poor approach.

Instead, you might construct a re-usable procedure which does the linear-algebra computations as a purely numeric computation for each numeric input value of z and v.

By the way, did you really intend that the first three columns of P3 and P4 be all zero? That doesn't seem to make doing the eigen-solving very interesting... Perhaps you might check your earlier formulas and methodology?

The numeric linear-algebra could be made even faster, but it might not be worthwhile unless you could first confirm that the formulas and overall approach are correct.

restart;

with(LinearAlgebra):
with(plots):

omega := v/h:
t := a[0]+a[1]*x +a[2]*x^2 +a[3]*sinh(omega*x)+a[4]*cosh(omega*x)+a[5]*sin(omega*x)+a[6]*cos(omega*x):
r := diff(t, x): R:=diff(t,x$3):
b1 := eval(t, x = q+3*h) = y[n+3]:
b2 := eval(r, x = q) = f[n]:
b3 := eval(r, x = q+h) = f[n+1]:
b4 := eval(r, x = q+2*h) = f[n+2]:
b5 := eval(r, x = q+3*h) = f[n+3]:
b6 := eval(r, x = q+4*h) = f[n+4]:
b7 := eval(R, x = q+4*h) = g[n+4]:
c := seq(a[i], i = 0 .. 6):
k := solve({b || (1 .. 7)}, {c}):
l := assign(k):
Cf := t: m := diff(Cf, x$3):
S4 := y[n+4] = combine(simplify(expand(eval(Cf, x = q+4*h)), size),trig):
collect(S4, [y[n+3], f[n], f[n+1], f[n+2],f[n+3],f[n+4], g[n+4]]):
S3 := y[n+2] = simplify(expand(eval(Cf, x = q+2*h)), size):
collect(S3, [y[n+3], f[n], f[n+1], f[n+2],f[n+3],f[n+4], g[n+4]]):
S2 := y[n+1] = simplify(expand(eval(Cf, x = q+h)), size):
collect(S2, [y[n+3], f[n], f[n+1], f[n+2],f[n+3],f[n+4], g[n+4]]):
S1 := y[n] = simplify(expand(eval(Cf, x = q)),size):
collect(S1, [y[n+3], f[n], f[n+1], f[n+2],f[n+3],f[n+4], g[n+4]]):

h := 1:
YN_1 := seq(y[n+k], k = 1 .. 4):
A1, a1 := GenerateMatrix([S || (1 .. 4)], [YN_1]):
YN := seq(y[n-k], k = 3 .. 0, -1):
A2, a2 := GenerateMatrix([S || (1 .. 4)], [YN]):
FN_1 := seq(f[n+k], k = 1 .. 4):
A3, a3 := GenerateMatrix([S || (1 .. 4)], [FN_1]):
FN := seq(f[n-k], k = 3 .. 0, -1):
A4, a4 := GenerateMatrix([S || (1 .. 4)], [FN]):
GN_1 := seq(g[n+k], k = 1 .. 4):
A5, a5 := GenerateMatrix([S || (1 .. 4)], [GN_1]):

P1 := A1-ScalarMultiply(A3, z)-ScalarMultiply(A5, z^3):
P3 := A2+ScalarMultiply(A4, z):

funcP5M := proc(z,v,i::posint) local P1,P3,P4;
  if not [z,v]::list(numeric) then return 'procname'(args); end if;
  P1 := evalf(eval(:-P1,[:-z=z,:-v=v]));
  P3 := evalf(eval(:-P3,[:-z=z,:-v=v]));
  P4 := LinearSolve(P1, P3): #print(P4);
  LinearAlgebra:-Eigenvalues(P4)[i];
end proc:

funcP5M(15,-6,4);

-.102336220176213+0.*I

CodeTools:-Usage(
  implicitplot(Re(funcP5M(z,v,4)) = 1, z = -15 .. 15, v = -15 .. 15,
               gridrefine = 2, filled, coloring = [blue, gray],
               labels = [z, v], axes = boxed)
);

memory used=4.69GiB, alloc change=-8.00MiB, cpu time=28.99s, real time=24.27s, gc time=7.33s

 

Download RASTDFFAM_ac.mw
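To illustrate the earlier remark about making the numeric linear-algebra even faster, here is one possible sketch (the name funcP5Mg is mine): it skips the explicit LinearSolve step by calling the generalized eigensolver, since the eigenvalues of P1^(-1).P3 are the generalized eigenvalues of the pair (P3, P1). Note that the ordering of the returned eigenvalues may differ from that of the earlier procedure.

funcP5Mg := proc(z, v, i::posint) local M1, M3;
  if not [z, v]::list(numeric) then return 'procname'(args); end if;
  M1 := evalf(eval(:-P1, [:-z = z, :-v = v]));
  M3 := evalf(eval(:-P3, [:-z = z, :-v = v]));
  # generalized problem M3.x = lambda*M1.x, same eigenvalues as M1^(-1).M3
  LinearAlgebra:-Eigenvalues(M3, M1)[i];
end proc: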

I am not aware of any shortcut syntax in Maple for linear-solving that is equivalent to Matlab's A\b.

Computing inv(A)*b is not a good "standard" way to solve a floating-point linear system, and it's not what Matlab does when you enter A\b. One reason for that is that explicitly forming the matrix inverse and then multiplying by it is not generally as accurate.

IIRC, the A\b syntax in Matlab is a shortcut for its mldivide command. And mldivide differs from its linsolve by doing some extra tests (possibly expensive for larger matrices).

If A is square then Matlab's linsolve will do an LU decomposition, otherwise it may do QR (i.e. a least-squares approach).

And that's what I'd recommend for you in Maple too. If A is a square floating-point Matrix then try the LinearSolve command, and otherwise try LeastSquares. But I suggest not using MatrixInverse (or a shortcut like A^(-1)), followed by multiplication.

Note: those Maple commands also have options, e.g. method=LU is the default for LinearSolve in the square float case.

restart;

with(LinearAlgebra):

 

A := <<1|2|3>,<1|2|2.99>,<2|4.001|6.001>>;

Matrix(3, 3, {(1, 1) = 1, (1, 2) = 2, (1, 3) = 3, (2, 1) = 1, (2, 2) = 2, (2, 3) = 2.99, (3, 1) = 2, (3, 2) = 4.001, (3, 3) = 6.001})

b := RandomVector(3,datatype=float[8]):

sol := LinearSolve(A,b):

 

Now let's also solve using the explicit matrix inverse of A.

 

Ainv := MatrixInverse(A):

altsol := Ainv.b:


Let's compare the residuals of both solutions.

Norm( A.altsol - b );

0.769162511460308451e-10

Norm( A.sol - b );

0.181898940354585648e-11


The LinearSolve solution (done internally using LU) has a
smaller residual (norm) than does the solution obtained
using the explicit Matrix inverse.

The Matrix A is not very well-conditioned.

ConditionNumber(A);

72024.00198


Let's suppose now that you will eventually get another rhs b to
solve, for the same lhs A.

If you had multiple rhs vectors up front, you could just solve
them all together in one call. It's the scenario in which you only
later get other rhs's that makes it worthwhile to re-use some of
the computation.

Yes, you could gain efficiency by re-using a precomputed
matrix inverse. But even here that is not necessary -- and hence
is still not better.

The following result consists of a Vector of the pivot details, and
the L and U factors superimposed (the diagonal of L is implicitly all 1s).
We will be able to re-use this factorization step in multiple, separate
linear solves.

naglu := [LUDecomposition(A, output=NAG)]:


The same answer as before.

LinearSolve(naglu, b):
Norm(% - sol);

0.

b2 := RandomVector(3,datatype=float[8]):


We can now re-use the LU factorization for rhs b2.

x2 := LinearSolve(naglu, b2):

Norm( A.x2 - b2 );

0.139053213388251606e-10


The same kind of re-use can be done using the factorization
from the QRDecomposition command, and passing that to
subsequent LinearSolve calls.

However, QRDecomposition might not do column-pivoting.
So, if you want to do it as a least-squares computation (e.g. A has
more rows than columns), and you are going to have many
rhs's b which are not all available up front, and you want the best
general accuracy, then you might elect to do an SVD factorization.
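If it helps, here is a rough sketch of that kind of SVD re-use (LinearAlgebra is already loaded above; the tall Matrix and rhs below are randomly generated purely for illustration, and full column rank is assumed):

M  := RandomMatrix(5, 3, generator = 0.0 .. 1.0):
c1 := RandomVector(5, generator = 0.0 .. 1.0):
U, S, Vt := SingularValues(M, output = ['U', 'S', 'Vt']):
nc := ColumnDimension(M):
# U, S, Vt can be kept and re-applied cheaply to each later rhs
lssol := Transpose(Vt) . DiagonalMatrix(map(s -> 1/s, S[1 .. nc]))
         . Transpose(U[.., 1 .. nc]) . c1:
Norm( Transpose(M) . (M . lssol - c1) );     # normal-equations residual, should be tiny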

Tricky to say more, without knowing the actual situation.

 

Download LSexample.mw

ps. Some of the extra tests that Matlab's mldivide does (i.e. more than what linsolve does) involve checks for various special forms (symmetry, etc).

Maple's LinearSolve and LUDecomposition do some of that -- but quickly -- by checking the Matrix for a special shape/storage that was applied as options at Matrix creation time. Scans of the data are not done.
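As a small illustration of that shape/storage idea (the data here is made up, purely a sketch): declaring the shape when the Matrix is created lets LinearSolve exploit it without scanning the entries.

Msym := Matrix(3, 3, (i, j) -> evalf(1/(i + j - 1)),
               shape = symmetric, datatype = float[8]):    # small symmetric example
w := Vector([1.0, 2.0, 3.0], datatype = float[8]):
LinearAlgebra:-LinearSolve(Msym, w);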

restart;

kernelopts(version);

`Maple 2023.2, X86 64 LINUX, Nov 24 2023, Build ID 1762575`

kernelopts(ASSERT=true):
kernelopts(assertlevel=2):

foo:= overload(  
        [
        proc(A::integer,B::integer)::integer; option overload;
            B/5;
        end proc,

        proc(A::integer)::integer; option overload;
            A/3;   
        end proc      
       ]):

foo(1);

Error, (in foo) assertion failed, foo expects its return value to be of type integer, but computed 1/3

foo(12);

4

foo(1,2);

Error, (in foo) assertion failed, foo expects its return value to be of type integer, but computed 2/5

foo(1,10);

2

Download overload_return_type.mw

notes:
1) That syntax is consistent with that for non-overloaded procedures, i.e. the return-type comes right after each proc's parameter specs.
2) The individual procs get separate return-types, because... they should. They could be altogether different.
3) I changed the order in which the two procs appear in the overload definition. (Otherwise I don't fully understand how/when you intended to be able to get to the second proc.)

In your worksheet you strip off the method=_d01ajc option when you call
    unapply(int(op(1 .. 2, original_Int)), y)
because you use only operands 1..2. With an expression-form integrand, and without that method, the hybrid numeric-symbolic analysis of the integrand makes things slow.

If that method were retained then you could get good speed throughout, using the original expression Int. For example,

orig := Int(22.89730452*exp(-3.373250126*10^6*x^2)*sqrt(2)*(erf(1298.701299*sqrt(2)*(sqrt(2.500000000*10^(-7) - x^2) + 1/40*y - 0.001540000000)) + erf(1298.701299*sqrt(2)*(sqrt(2.500000000*10^(-7) - x^2) - 1/40*y + 0.001540000000))), x = -0.0005000000000 .. 0.0005000000000, method = _d01ajc):

CodeTools:-Usage(plot(orig, y = 0 .. 0.25, size = [400, 200]))

memory used=8.30MiB, alloc change=0 bytes, cpu time=151.00ms, real time=152.00ms, gc time=0ns

CodeTools:-Usage(Optimization:-Maximize(orig, y = 0 .. 0.15, variables = [y]))

memory used=0.56MiB, alloc change=0 bytes, cpu time=12.00ms, real time=12.00ms, gc time=0ns

[HFloat(0.0356073228322899), [y = HFloat(0.06159999993348495)]]

CodeTools:-Usage(fsolve(orig - 0.005, y = 0 .. 0.15, maxsols = 2))

memory used=20.87MiB, alloc change=0 bytes, cpu time=334.00ms, real time=334.00ms, gc time=0ns

Download slow_maximize_and_fsolve_nostrip.mw

Comparably good speed of the numeric integration can also be achieved by making the integrand an operator. Then the _d01ajc method is not necessary.

This new, constructed F can also be used to achieve the plot quickly.

When called, this procedure F substitutes its argument Y for y in the integrand, then uses unapply to get a fresh operator form for that.

restart;

ig := Int(22.89730452*exp(-3.373250126*10^6*x^2)*sqrt(2)*(erf(1298.701299*sqrt(2)*(sqrt(2.500000000*10^(-7) - x^2) + 1/40*y - 0.001540000000)) + erf(1298.701299*sqrt(2)*(sqrt(2.500000000*10^(-7) - x^2) - 1/40*y + 0.001540000000))), x = -0.0005 .. 0.0005):

F := Y->evalf(Int(unapply(eval(op(1,ig),y=Y),x),
                  rhs(op(2,ig)))):

CodeTools:-Usage(plot(F, 0 .. 0.25, size = [400, 300]));

memory used=16.97MiB, alloc change=35.01MiB, cpu time=231.00ms, real time=232.00ms, gc time=0ns

CodeTools:-Usage(Optimization:-Maximize(F, 0 .. 0.15));

memory used=1.10MiB, alloc change=0 bytes, cpu time=17.00ms, real time=17.00ms, gc time=0ns

[0.356073228322897e-1, Vector(1, {(1) = 0.615999999351895e-1})]

CodeTools:-Usage(fsolve(F - 0.005, 0 .. 0.15, maxsols = 2))

memory used=75.71MiB, alloc change=38.00MiB, cpu time=665.00ms, real time=581.00ms, gc time=172.77ms

0.2456248637e-1, 0.9863751363e-1

Download slow_maximize_and_fsolve_acc.mw

See also point 1) in this Reply to another recent Question by you. There's more than one way to get a similar effect.
A) What I did in the earlier Question thread: on the fly substitute (using subs) into an Int call that already contains a template operator (a rough sketch is given just after this list).
B) What I did here: on the fly make Int calls with fresh operators from (using unapply on) each substitution into the integrand expression.
By "on the fly" I mean at each distinct evaluation by Maximize, fsolve, or plot.

[edit] I actually don't know yet why fsolve fails (quickly) in the following variant. Some sort of accuracy/convergence thing, I suspect. It would work -- faster than the OP's original, but not as fast as the other approaches above -- if method=_Dexp were used instead.
slow_maximize_and_fsolve_hmm.mw

You could add the following to the command which launches the Maple GUI.

   -standalone

On Linux I simply use that option when I launch the Maple GUI from the commandline terminal.

On Windows I add it to the command property in the desktop icon.

The effect is that I can launch wholly separate instances of the GUI. Each could have its own debugger popup, or its own Help window, or even its own instance of opening the same worksheet.

Each also runs in its own JVM instance, and a GUI crash in either does not affect the other.

Note that such distinct GUI sessions aren't able to share a single Maple kernel with each other, although multiple worksheets opened in the same GUI instance can still share a kernel.

This change to the desktop icon is the first thing I do after installing Maple on my Windows 7 machine. I suppose you could also have another icon, without the change, in case you wanted to be able to share a Maple kernel instance between worksheets by merely dropping the file onto the same desktop icon.

There are several possible variants on an approach using an inert operator for the special construction, a corresponding custom print-extension for it (to get the pretty-printing effect on output), and the simple value command as an easy mechanism to undo it.

For example,

N := x->x: `print/N`:=Typesetting:-Typeset:

 

mass := %N(1) * Unit(m);

%N(1)*Units:-Unit(m)

value(mass);

Units:-Unit(m)

 

mass2 := %N(1) * Unit(cm);

%N(1)*Units:-Unit(cm)

mass + mass2;

%N(1)*Units:-Unit(m)+%N(1)*Units:-Unit(cm)

simplify(value(%));

(101/100)*Units:-Unit(m)

Download unit_fun.mw

To a certain extent you can enlarge a Slider's length, and accompany that with forced larger dimensions for the exploration. Personally, I find that it's easier to utilize the Slider's (albeit limited) precision if it is longer.

For example (the border adjustments are included only to make the layout easier to understand),

Explore( plot(a*x),
         width=700, tablealignment=left,
         tableborders=false, showhiddenborders=always,
         parameters=[ [a=1.0..2.0, width=500]] );

There's a limit to this, however, since there's some internal Explore code that automatically gives a portion of the overall width to the parameter/value labels on each side. Getting a much wider Slider might currently necessitate specifying the whole exploration to be wider than the GUI, or fiddling with Table properties manually, post-insertion. (That functionality might be improved in the future, perhaps by smarter weighting.)

But IIRC the limit of granularity of sampling of a Slider with a floating-point range is on the GUI side, and cannot be adjusted arbitrarily -- though minor tickmarks can give a little relief. The same goes for the numeric formatting of the labelling on the Slider itself.

The displayed value beside the Slider (in a separate Label) might also be improved, if Explore were to allow specified numeric formatting for such, within the parameters option. That might be quite useful.

ps. By manual adjustment I mean using the mouse to turn on all the interior Table borders, adjust the Table Properties, and adjust the Slider length property. This sounds unpleasant, since I don't know if all the weighting aspects could be adequately altered that way.

pps. Really fine granularity might be accomplished by scaling, e.g. replacing the instance of the parameter in the plotting expression by constant+newname, etc. You've probably already considered that kludge. You could even "split" a parameter into a sum of two parameters, each providing a carefully rounded value on one of two distinct scales (i.e. 2-3 base-10 digit places each), as sketched below.
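For instance, a rough sketch of that split-parameter idea (the names a and b and the scaling factor are arbitrary): a supplies the coarse value, and b a finer correction three decimal places down.

Explore( plot((a + 0.001*b)*x, x = 0 .. 1),
         parameters = [ [a = 1.0 .. 2.0], [b = 0.0 .. 1.0] ] );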

restart;

kernelopts(version);

`Maple 2023.1, X86 64 LINUX, Jul 07 2023, Build ID 1723669`

y := -cos(sqrt(x))*x^3/(-x^2 + 24*cos(sqrt(x)) + 12*x - 24);

-cos(x^(1/2))*x^3/(-x^2+24*cos(x^(1/2))+12*x-24)

plot(y, x=0..1, adaptive=true);

Download cannot_plot_ac.mw

I'll file a bug report.

Your phi[1] and phi[2] might agree at x=0, but not at other mirrored x values. And their respective max and min over the given ranges need not agree either.

So when you pass just contours=25, contourplot will automatically choose a different selection of 25 contour values for phi[1] than it does for phi[2]. And if the choices of contour values are not the same for both plots then there's no match-up along x=0, i.e. the y-axis.

But if instead you specify the particular contour values -- the same for both phi[1] and phi[2] -- then the matching at x=0 can be made apparent.

By the way, in Maple 2023 there's no need for using that contplot procedure, since its only purpose is to allow a legend that mimics a color-bar. In Maple 2023 that color-bar is available (by default) from the actual plots:-contourplot command. I haven't changed that aspect.

Below I use Maple 2023, and in older versions you'd need to remove the colorbar option.

restart

Digits := 50

50

beta := 1; lambda := .9; b := 4; a := 6; alpha := 0; y[1] := 1.3; y[2] := 1.3; x[1] := -1.5; x[2] := 1.5; Q[1] := 40; Q[2] := 35; T[1] := 5; T[2] := 4.5

v := n*Pi/b

Delta := exp(2*v*a)*(alpha*v+beta)*(1+lambda)-(1-lambda)*(alpha*v-beta)

omega := Pi/(2*b)

P[1] := (-(1+lambda)*exp(-v*abs(x-xi))-(1-lambda)*exp(v*(x+xi)))*exp(2*v*a)-(1+lambda)*exp(-v*(x+xi))-(1-lambda)*exp(v*abs(x-xi))

P[2] := (-(1+lambda)*exp(-v*abs(x-xi))-(1-lambda)*exp(v*(x+xi)))*exp(2*v*a)+((1+lambda)*exp(-v*(x+xi))+(1-lambda)*exp(v*abs(x-xi)))

P[3] := P[1]*(-alpha^2*v-alpha*beta+alpha*v)+beta*P[2]

G[11] := (sum((alpha*P[1]*(1-lambda)*(alpha*v-beta)*exp(-2*v*a)+(1+lambda)*P[3])*(cos(v*(y-eta))-cos(v*(y+eta)))/(v*(exp(2*v*a)*(alpha*v+beta)*(1+lambda)-(1-lambda)*(alpha*v-beta))), n = 1 .. 40))/(2*b*(1+lambda))-(2*(1+lambda)*alpha*b/Pi*.25)*ln((1+2*exp(-omega*abs(x-xi))*cos(omega*(y-eta))+exp(-2*omega*abs(x-xi)))*(1-2*exp(-omega*abs(x-xi))*cos(omega*(y+eta))+exp(-2*omega*abs(x-xi)))*(1+2*exp(-omega*(2*a+x+xi))*cos(omega*(y-eta))+exp(-2*omega*(2*a+x+xi)))*(1-2*exp(-omega*(2*a+x+xi))*cos(omega*(y+eta))+exp(-2*omega*(2*a+x+xi)))/((1-2*exp(-omega*abs(x-xi))*cos(omega*(y-eta))+exp(-2*omega*abs(x-xi)))*(1+2*exp(-omega*abs(x-xi))*cos(omega*(y+eta))+exp(-2*omega*abs(x-xi)))*(1-2*exp(-omega*(2*a+x+xi))*cos(omega*(y-eta))+exp(-2*omega*(2*a+x+xi)))*(1+2*exp(-omega*(2*a+x+xi))*cos(omega*(y+eta))+exp(-2*omega*(2*a+x+xi)))))/(2*b*(1+lambda))-(2*(1-lambda)*alpha*b/Pi*.25)*ln((1+2*exp(omega*(x+xi))*cos(omega*(y-eta))+exp(2*omega*(x+xi)))*(1-2*exp(omega*(x+xi))*cos(omega*(y+eta))+exp(2*omega*(x+xi)))*(1+2*exp(-omega*(2*a-abs(x-xi)))*cos(omega*(y-eta))+exp(-2*omega*(2*a-abs(x-xi))))*(1-2*exp(-omega*(2*a-abs(x-xi)))*cos(omega*(y+eta))+exp(-2*omega*(2*a-abs(x-xi))))/((1-2*exp(omega*(x+xi))*cos(omega*(y-eta))+exp(2*omega*(x+xi)))*(1+2*exp(omega*(x+xi))*cos(omega*(y+eta))+exp(2*omega*(x+xi)))*(1-2*exp(-omega*(2*a-abs(x-xi)))*cos(omega*(y-eta))+exp(-2*omega*(2*a-abs(x-xi))))*(1+2*exp(-omega*(2*a-abs(x-xi)))*cos(omega*(y+eta))+exp(-2*omega*(2*a-abs(x-xi))))))/(2*b*(1+lambda))

g[12] := lambda*(-(alpha*v+beta)*exp(v*(2*a+x))-(alpha*v-beta)*exp(-v*x))*exp(-v*xi)/(v*Delta)

G[12] := (sum(g[12]*(cos(v*(y-eta))-cos(v*(y+eta))), n = 1 .. 40))/b

phi[1] := int(int(G[11]*Q[1]*Dirac(xi-x[1])*Dirac(eta-y[1]), xi = -a .. 0), eta = 0 .. b)+int(int(G[12]*Q[2]*Dirac(xi-x[2])*Dirac(eta-y[2]), xi = 0 .. infinity), eta = 0 .. b)

N[1] := (-(1+lambda)*exp(-v*abs(x-xi))-(-1+lambda)*exp(-v*(x+xi)))*exp(2*v*a)-(1+lambda)*exp(-v*(x+xi))-(-1+lambda)*exp(-v*abs(x-xi))

N[2] := (-(1+lambda)*exp(-v*abs(x-xi))-(-1+lambda)*exp(-v*(x+xi)))*exp(2*v*a)+((1+lambda)*exp(-v*(x+xi))+(-1+lambda)*exp(-v*abs(x-xi)))

N[3] := N[1]*(-alpha^2*v-alpha*beta+alpha*v)+beta*N[2]

G[22] := (sum((alpha*N[1]*(1-lambda)*(alpha*v-beta)*exp(-2*v*a)+(1+lambda)*N[3])*(cos(v*(y-eta))-cos(v*(y+eta)))/(v*(exp(2*v*a)*(alpha*v+beta)*(1+lambda)-(1-lambda)*(alpha*v-beta))), n = 1 .. 40))/(2*b*(1+lambda))-(2*(1+lambda)*alpha*b/Pi*.25)*ln((1+2*exp(-omega*abs(x-xi))*cos(omega*(y-eta))+exp(-2*omega*abs(x-xi)))*(1-2*exp(-omega*abs(x-xi))*cos(omega*(y+eta))+exp(-2*omega*abs(x-xi)))*(1+2*exp(-omega*(2*a+x+xi))*cos(omega*(y-eta))+exp(-2*omega*(2*a+x+xi)))*(1-2*exp(-omega*(2*a+x+xi))*cos(omega*(y+eta))+exp(-2*omega*(2*a+x+xi)))/((1-2*exp(-omega*abs(x-xi))*cos(omega*(y-eta))+exp(-2*omega*abs(x-xi)))*(1+2*exp(-omega*abs(x-xi))*cos(omega*(y+eta))+exp(-2*omega*abs(x-xi)))*(1-2*exp(-omega*(2*a+x+xi))*cos(omega*(y-eta))+exp(-2*omega*(2*a+x+xi)))*(1+2*exp(-omega*(2*a+x+xi))*cos(omega*(y+eta))+exp(-2*omega*(2*a+x+xi)))))/(2*b*(1+lambda))-(2*(-1+lambda)*alpha*b/Pi*.25)*ln((1+2*exp(-omega*(x+xi))*cos(omega*(y-eta))+exp(-2*omega*(x+xi)))*(1-2*exp(-omega*(x+xi))*cos(omega*(y+eta))+exp(-2*omega*(x+xi)))*(1+2*exp(-omega*(2*a+abs(x-xi)))*cos(omega*(y-eta))+exp(-2*omega*(2*a+abs(x-xi))))*(1-2*exp(-omega*(2*a+abs(x-xi)))*cos(omega*(y+eta))+exp(-2*omega*(2*a+abs(x-xi))))/((1-2*exp(-omega*(x+xi))*cos(omega*(y-eta))+exp(-2*omega*(x+xi)))*(1+2*exp(-omega*(x+xi))*cos(omega*(y+eta))+exp(-2*omega*(x+xi)))*(1-2*exp(-omega*(2*a+abs(x-xi)))*cos(omega*(y-eta))+exp(-2*omega*(2*a+abs(x-xi))))*(1+2*exp(-omega*(2*a+abs(x-xi)))*cos(omega*(y+eta))+exp(-2*omega*(2*a+abs(x-xi))))))/(2*b*(1+lambda))

g[21] := (-(alpha*v+beta)*exp(v*(2*a+xi))-(alpha*v-beta)*exp(-v*xi))*exp(-v*x)/(v*Delta)

G[21] := (sum(g[21]*(cos(v*(y-eta))-cos(v*(y+eta))), n = 1 .. 40))/b

phi[2] := int(int(G[21]*Q[1]*Dirac(xi-x[1])*Dirac(eta-y[1]), xi = -a .. 0), eta = 0 .. b)+int(int(G[22]*Q[2]*Dirac(xi-x[2])*Dirac(eta-y[2]), xi = 0 .. infinity), eta = 0 .. b)

f1 := subs(y = 1.2, x = 0, phi[1])

f2 := subs(y = 1.2, x = 0, phi[2])

evalf(f1-f2)

-0.6e-48

contplot := proc(ee, rng1, rng2)
  local clabels, clegend, i, ncrvs, newP, otherdat, others, tcrvs, tempP;
  clegend, others := selectremove(type, [_rest], identical(:-legend) = anything);
  clabels, others := selectremove(type, others, identical(:-contourlabels) = anything);
  if 0 < nops(clegend) then
    tempP := :-plots:-contourplot(ee, rng1, rng2, others[],
                                  ':-contourlabels' = rhs(clegend[-1]));
    tempP := subsindets(tempP, 'specfunc(:-_HOVERCONTENT)',
                        u -> `if`(has(u, "null"), NULL, (':-LEGEND')(op(u))));
    if 0 < nops(clabels) then
      newP := plots:-contourplot(ee, rng1, rng2, others[],
                                 ':-contourlabels' = rhs(clabels[-1]));
      tcrvs := select(type, [op(tempP)], 'specfunc(CURVES)');
      ncrvs, otherdat := selectremove(type, [op(newP)], 'specfunc(CURVES)');
      return (':-PLOT')(seq((':-CURVES')(op(ncrvs[i]),
                                         op(indets(tcrvs[i], 'specfunc(:-LEGEND)'))),
                            i = 1 .. nops(ncrvs)),
                        op(otherdat))
    else
      return tempP
    end if
  elif 0 < nops(clabels) then
    return plots:-contourplot(ee, rng1, rng2, others[],
                              ':-contourlabels' = rhs(clabels[-1]))
  else
    return plots:-contourplot(ee, rng1, rng2, others[])
  end if
end proc

eval(phi[1], [x = -2, y = 1.5]); eval(phi[2], [x = 2, y = 1.5])

-9.8412713172730248458596904204018436823701850499209

-8.7773504289855657587933228948787812084275286238075

plots:-display(contplot(phi[1], x = -5 .. 0, y = 0 .. b, coloring = ["Red", "Blue"], contours = [seq(-25 .. -1)], legend = false, contourlabels = true, colorbar = false, thickness = 0, filledregions = false), contplot(phi[2], x = 0 .. 5, y = 0 .. b, coloring = ["Green", "Magenta"], contours = [seq(-25 .. -1)], legend = false, contourlabels = true, colorbar = false, thickness = 0, filledregions = false), size = [400, 400], view = [-5 .. 6, 0 .. b])

 

Download h-x_nemodar-e_ac.mw

restart

kernelopts(version)

`Maple 2023.1, X86 64 LINUX, Jul 07 2023, Build ID 1723669`

X := Vector(3, {(1) = 3*Unit('m'), (2) = 9*Unit('m'), (3) = 25*Unit('m')})

Y := Vector(3, {(1) = 1, (2) = .56, (3) = .12})



With the following calling sequence, pointplot shows the units
as axis labels without my having to supply the useunits option.

plots:-pointplot(`<|>`(X, Y), symbolsize = 15, symbol = solidcircle)

plots:-pointplot(`<|>`(X, Y), style = line, color = "Niagara 1")


Download PlotUnit_ac.mw
