acer

Ontario, Canada

Social Networks and Content at Maplesoft.com



These are answers submitted by acer

You could use the rtable_scanblock command for this.

(I'll add a timing, to compare with some other suggestions. It's not clear whether you need it to be very quick.)

restart;

(m,n) := 6,10^5;

6, 100000

M := LinearAlgebra:-RandomMatrix(m,n,density=0.3):

 

First, using just the 3rd row.

 

rtable_scanblock(M[3,..],[1..n],':-NonZeros');

29751

 

Now, for all rows, and measuring performance.

 

CodeTools:-Usage(
   [seq(rtable_scanblock(M[i,..],[1..n],':-NonZeros'),i=1..m)]
);

memory used=4.58MiB, alloc change=3.06MiB, cpu time=18.00ms, real time=17.00ms, gc time=0ns

[29954, 29739, 29751, 29914, 29949, 29709]

ZerosInRow := i -> numelems(op(M[i])[2]):

CodeTools:-Usage(
   [seq(ZerosInRow(i),i=1..m)]
);

memory used=18.05MiB, alloc change=7.53MiB, cpu time=138.00ms, real time=138.00ms, gc time=69.12ms

[29954, 29739, 29751, 29914, 29949, 29709]

with(LinearAlgebra):
CodeTools:-Usage(
   nops~(select~(x -> x <> 0, convert~([Row(M, [1 .. RowDimension(M)])], list)))
);

memory used=29.58MiB, alloc change=12.87MiB, cpu time=200.00ms, real time=200.00ms, gc time=0ns

[29954, 29739, 29751, 29914, 29949, 29709]

Download rtable_scanblock.mw

@emendes Yes, I meant if you added style=line to plots:-odeplot.

I realize that a connected curve is the default behaviour here. But supplying style=line explicitly adds a STYLE(LINE) plotting substructure to the result.

In the past there have been some rare 3D plotting export problems where such a change made the key difference. So I thought I might as well see whether it makes a difference to you here.

Your code produces an array-plot, for which the GUI uses a slightly different mechanism. Does the problem occur if either plot is shown without the encapsulating Array?
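For reference, here is a minimal sketch of what I mean (the ODE system and the names dsn and P are made-up placeholders, not from your worksheet):

    dsn := dsolve({diff(x(t),t) = y(t), diff(y(t),t) = -x(t),
                   diff(z(t),t) = 1, x(0) = 1, y(0) = 0, z(0) = 0},
                  numeric):
    P := plots:-odeplot(dsn, [x(t), y(t), z(t)], t = 0 .. 10,
                        style = line):
    indets(P, specfunc(:-STYLE));  # expect to see STYLE(LINE) here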

The uniquification you show for inequalities is not due to the items being placed within a set. It occurs when the inequalities are first constructed; that is a kind of canonicalization.

But an equality is just one aspect of a syntax (eg. a=b) which has many uses in Maple other than representing a mathematical equivalence (for solving purposes, etc).

An equality is a Maple syntax that is also used to specify equivalences for substitution (2-argument eval, subs, simplify with side-relations), specifying options for procedure calls, and several other purposes. In these, the rhs versus lhs positions are a crucial part of the syntax.

It is therefore important that equalities can be manipulated programmatically in general, including retaining their integrity under enclosure within data structures.

It would be wrong for equalities to uniquify (under set construction, say) as if the lhs/rhs aspect did not make them different. In general Maple usage, they are different.
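A small illustration of the distinction (a sketch of what I'd expect at top level):

    {b > a, a < b};   # b > a canonicalizes to a < b on construction,
                      # so this set ends up with a single member
    {a = b, b = a};   # lhs/rhs orientation is retained, so both members remain
    eval(x^2 + x, x = 2);   # equality as substitution syntax: lhs vs rhs matters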

Direct top-level input of an int call can produce these messages that you find useful. Part of that utility is due (according to some people) to the absence of any burden on the user to adjust a setting -- ie. it's automatic.

For Library routines that are performing broader computations, such messaging arising from internal use of the int command would be mostly undesirable.

A goal is to balance supplying helpful information against flooding the user with unwanted messages.

There are a great many such instances of Library procedures that call int, and to manage this balance of messaging there is a mechanism within the integration subsystem which handles userinfo-style messages, hints, and warnings.

As a start, see the calls to kernelopts(level) inside,
   showstat(int::Exact)
(or more specifically, showstat(int::Exact,33) in Maple 2023.0).

_EnvIntWarning := evalb(kernelopts('level') <= 64
         or kernelopts('level') <= 128 and _Env_assuming_recursion_level = 0);

As you've noticed here, a tricky case involves using value(Int(...)) versus int, eg.,
   value(Int(1/(sqrt(x__0 - x)*sqrt(-x^2 + 1)), x = 0 .. x__0));
If you query kernelopts(level) in the debugger, for the following code under Maple 2023.0, then you may see a value higher than the relevant cutoff. But increasing the cutoff could induce an unwanted flood of messages from other Library code, and so this might be a difficult case to "get just right".

restart;
kernelopts(opaquemodules=false):
stopat(int:-Exact,33);
value(Int(1/(sqrt(x__0 - x)*sqrt(-x^2 + 1)), x = 0 .. x__0));

Managing the whole mechanism is complicated. Memoization effects (remembered results of prior computation), accommodating various (internal) environment variables, and a deeper level for top-level input in 2-D Input mode may also need to be factored in.

There are other parts to this mechanism, and I haven't given a complete explanation. But even aside from complications of the implementation, balancing the situations in which such messages appear automatically (say, for top-level input) against those in which they are suppressed (for internal Library code) is difficult.

You're showing assignments to the "hat" names. If you do that then any rewrite in terms of them will evaluate to your original. So I suggest not making those assignments.

Instead, set them up as equations. And you'll need the new names on the rhs of the equations to be utilized.

restart

eqn2new := rho*V*(diff(x(t), t)) = (conjugate(x)-x)*w+(x__2-`#mscripts(mi("x"),mn("2"),none(),none(),mo("&macr;"),none(),none())`)*w__2+(x__1-`#mscripts(mi("x"),mn("1"),none(),none(),mo("&macr;"),none(),none())`)*w__1

rho*V*(diff(x(t), t)) = (conjugate(x)-x)*w+(x__2-`#mscripts(mi("x"),mn("2"),none(),none(),mo("&macr;"),none(),none())`)*w__2+(x__1-`#mscripts(mi("x"),mn("1"),none(),none(),mo("&macr;"),none(),none())`)*w__1

seqns := [`#mover(mi("x"),mo("&circ;"))` = conjugate(x)-x, `#mscripts(mi("x"),mn("1"),none(),none(),mo("&circ;"),none(),none())` = x__1-`#mscripts(mi("x"),mn("1"),none(),none(),mo("&macr;"),none(),none())`, `#mscripts(mi("x"),mn("2"),none(),none(),mo("&circ;"),none(),none())` = x__2-`#mscripts(mi("x"),mn("2"),none(),none(),mo("&macr;"),none(),none())`]

[`#mover(mi("x"),mo("&circ;"))` = conjugate(x)-x, `#mscripts(mi("x"),mn("1"),none(),none(),mo("&circ;"),none(),none())` = x__1-`#mscripts(mi("x"),mn("1"),none(),none(),mo("&macr;"),none(),none())`, `#mscripts(mi("x"),mn("2"),none(),none(),mo("&circ;"),none(),none())` = x__2-`#mscripts(mi("x"),mn("2"),none(),none(),mo("&macr;"),none(),none())`]

revseqns := `~`[rhs = lhs](seqns)

[conjugate(x)-x = `#mover(mi("x"),mo("&circ;"))`, x__1-`#mscripts(mi("x"),mn("1"),none(),none(),mo("&macr;"),none(),none())` = `#mscripts(mi("x"),mn("1"),none(),none(),mo("&circ;"),none(),none())`, x__2-`#mscripts(mi("x"),mn("2"),none(),none(),mo("&macr;"),none(),none())` = `#mscripts(mi("x"),mn("2"),none(),none(),mo("&circ;"),none(),none())`]

subs(revseqns, eqn2new)

rho*V*(diff(x(t), t)) = w*`#mover(mi("x"),mo("&circ;"))`+w__1*`#mscripts(mi("x"),mn("1"),none(),none(),mo("&circ;"),none(),none())`+w__2*`#mscripts(mi("x"),mn("2"),none(),none(),mo("&circ;"),none(),none())`

eval(eqn2new, revseqns)

rho*V*(diff(x(t), t)) = w*`#mover(mi("x"),mo("&circ;"))`+w__1*`#mscripts(mi("x"),mn("1"),none(),none(),mo("&circ;"),none(),none())`+w__2*`#mscripts(mi("x"),mn("2"),none(),none(),mo("&circ;"),none(),none())`

Download subsrev.mw

Naturally, you could alternatively form the equations directly with the new names on the rhs. (I just thought that you might be interested in knowing how to reverse them.)

Note: I cannot tell from your image whether you want the overbars to denote conjugates, or to be part of atomic names (atomic identifiers, as via right-click menu 2-D Math -> Convert To -> Atomic Variables). I'm supposing that you know which you want, and will enter them accordingly.

plots:-pointplot([Re, Im]~(uu10000));

You can, of course, add options such as symbol, symbolsize, color, etc.
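For example (the option values here are just illustrative choices):

    plots:-pointplot([Re, Im]~(uu10000),
                     symbol = solidcircle, symbolsize = 5,
                     color = blue);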

@C_R As you've seen, one aspect of your original approach is that f(x__0) gets evaluated with x__0 being a mere name. In this case you encountered problems with that.

That aspect is sometimes called "premature evaluation", with a problem ensuing because procedure f is not handling a mere name as you'd prefer. The aspect is that procedure f is being called before its argument gets an actual numeric value.

Previously you were given some ways in which the root-finding could be accomplished (using Maple 2021 at the time).

Your procedure f was defined as,
   f := x__0 -> int(1/sqrt(sin(x__0) - sin(x)), x = 0 .. x__0)

Note that for,
    fsolve(f(x__0) = 2*sqrt(alpha), x__0 = 0 .. 0.9*Pi/2)
the call f(x__0) for x__0 a mere name occurs before fsolve ever receives any arguments. The call f(x__0) is being evaluated prematurely.

In my Maple 2023.0 the approach of delaying the evaluation of f(x__0) does succeed, ie,
    g := alpha -> fsolve('f(x__0) = 2*sqrt(alpha)', x__0 = 0 .. Pi*1/2*0.9):
    g(0.6336176953);

produces the answer 0.5614162053. It is slow, taking about 75 seconds, because the numeric integrations are not fast.

For me, it doesn't take "forever". But it's slow. It's slow because the way in which the numeric integrations are requested makes them slow (not because fsolve itself is slow).

You had used a so-called operator calling sequence in a call to plot. That's another way to avoid the call f(x__0) for x__0 a name. A similar approach for fsolve can also avoid the premature evaluation. This too takes about 75 seconds for me.
   g := alpha -> fsolve(f - 2*sqrt(alpha), 0 .. Pi*1/2*0.9):
   g(0.6336176953);

At the risk of being irritating, I'll mention that (as shown before), with a bit of care numeric integration can do well here. Here's a slightly different tweak from the one I showed before (which used a coarser error tolerance with a slightly higher working precision).
    f := x__0 -> int(1/sqrt(sin(x__0) - sin(x)), x = 0 .. x__0, method = _d01ajc):
    g := alpha -> fsolve(f - 2*sqrt(alpha), 0 .. Pi*1/2*0.9):
    g(0.6336176953);

That produces the answer 0.5614162054 in less than one second, for me.

As Preben has shown, the integral can be solved symbolically under assumptions and a procedure formed from the result using unapply. Previously I too had shown a symbolic integration under assumptions, using Maple 2021 and a change of variables, followed by unapply. The ensuing fsolve attempt is very fast. As you've seen, cases that do not do well here include that in which the integration is attempted symbolically but without the key assumptions.
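For completeness, here is a rough sketch of that symbolic route (assumptions as described above; the precise closed form returned, if any, can depend on your Maple version):

    K := int(1/sqrt(sin(x__0) - sin(x)), x = 0 .. x__0)
         assuming 0 < x__0, x__0 < Pi/2:
    fsym := unapply(K, x__0):
    gsym := alpha -> fsolve(fsym - 2*sqrt(alpha), 0 .. 0.9*Pi/2):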

The most notable aspect of your worksheet is that you have formed the equation (1) before you assign values with units to the parameters. You don't have to do it that way, but I will retain that approach because it's key to some of the things you've noticed.

restart

with(Units)

Automatically loading the Units[Simple] subpackage
 

 

`&alpha;__1` = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2))

alpha__1 = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2))

(1)

l__1 := 50*Unit('mm')

l__2 := 40*Unit('mm')
alpha := 120*Unit('deg')

combine(alpha__1 = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2)), units)

alpha__1 = arctan((5/3)*3^(1/2))

(2)

:-simplify(alpha__1 = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2)))

alpha__1 = arctan((5/3)*3^(1/2))

(3)

Units:-Standard:-simplify(alpha__1 = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2)))

alpha__1 = arctan((5/3)*3^(1/2))

(4)

simplify(alpha__1 = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2)))

alpha__1 = arctan(5*sin(120*Units:-Unit(arcdeg))/(5*cos(120*Units:-Unit(arcdeg))+4))

(5)

Now that the parameters have been assigned their values with units, newly entered input gets parsed with the rebound arithmetic operators.

`&alpha;__1` = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2))

alpha__1 = arctan((5/3)*3^(1/2))

(6)

Download units_after.mw

If the Units (or Units:-Simple, or Units:-Standard) package is loaded then the arithmetic operators `+` and `*` are rebound to package exports of the same names, which are units-aware.

The reason why it works directly if you "retype" the input (after assigning values with units to the parameters) is that the input gets parsed using those units-aware versions.

But your original equation (1) is formed before the parameters have values with units, and so the resulting expression contains calls to the original global (units-unaware) arithmetic operators. Mere re-evaluation does not reparse the input or utilize the rebound units-aware versions, even after the parameters are assigned values with units.

It is unfortunate that Units:-Simple:-simplify is inadequate here, taking longer and not getting the desired result. I will submit a bug report against that aspect. But combine(...,units) or the global :-simplify do fine.

Based on your description, you might consider line-printing, eg.

restart;

with(StringTools):

S := $1..20;

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20

Ls := SubstituteAll(String([S])[2..-2]," ",""):

lprint~([LengthSplit(Ls,10)]):

1,2,3,4,5,
6,7,8,9,10
,11,12,13,
14,15,16,1
7,18,19,20

printf~("%s\n",[LengthSplit(Ls,10)]):

1,2,3,4,5,
6,7,8,9,10
,11,12,13,
14,15,16,1
7,18,19,20

 

Download lineprint_ex1.mw

Your complete requirements are not clear, based only on your original Question. Is a fixed width font wanted? Could your "output" contain non-ASCII characters? Does all whitespace need to be removed? What did you mean by the trailing "/", if something other than the linebreaks? Etc.

This might be as much as I can recover, while also being able to reliably Save&Reopen using Maple 2022.2. (I noticed that your original attachment was last saved in Maple 2022.2.)

Opgaver_til_ULA_13_addition_af_spin_ac22.mw

I suggest that you save a copy of this, on the side as a backup, for now.

Is this what you're after?

L := [a,b,c,d];
foldl(`&^`, L[]);

In a worksheet,

L := [a,b,c,d]

[a, b, c, d]

foldl(`&^`, L[]);

`&^`(`&^`(`&^`(a, b), c), d)

lprint(%);

((a &^ b) &^ c) &^ d

a &^ b &^ c &^ d;

`&^`(`&^`(`&^`(a, b), c), d)

lprint(%);

((a &^ b) &^ c) &^ d

Download foldl_ex1.mw

Since the eigenvectors of a nonsymmetric Matrix with floating-point entries can be nonreal in general, the data structures used for the eigenvalue and eigenvector results are consistently a Vector and Matrix with datatype=complex[8] (at hardware working precision; complex(sfloat) at higher working precision), regardless of whether the values are mixed real/nonreal or all purely real.

In such a case as yours, you are free to cast the results to a float[8] or other Vector and Matrix. See below.

Alternatively, you can change a kernelopts setting so that the visual display of the zero imaginary components is suppressed for such Vectors/Matrices.

restart

kernelopts(display_zero_complex_part = true)

A := Matrix(2, 2, {(1, 1) = .8, (1, 2) = .3, (2, 1) = .2, (2, 2) = .7})

Matrix(%id = 36893628602552940292)

H := LinearAlgebra:-Eigenvectors(A)

Vector[column](%id = 36893628602552927044), Matrix(%id = 36893628602552928724)

simplify([H], zero)[]

Vector[column](%id = 36893628602552925588), Matrix(%id = 36893628602552925708)

Vector(H[1], datatype = float[8]), Matrix(H[2], datatype = float[8])

Vector[column](%id = 36893628602552916196), Matrix(%id = 36893628602552916316)

restart

kernelopts(display_zero_complex_part = false)

A := Matrix(2, 2, {(1, 1) = .8, (1, 2) = .3, (2, 1) = .2, (2, 2) = .7})

Matrix(%id = 36893628602552940292)

LinearAlgebra:-Eigenvectors(A)

Vector[column](%id = 36893628602552926924), Matrix(%id = 36893628602552928604)

NULL

Download Eigenvectors_ac.mw

For this example you could perform simplification after substituting your example numeric values, and in this case avoid the small imaginary artefact due to floating-point round-off error.

For this example, this also happens to give better accuracy than applying evalf without that simplification (even after removal of the spurious imaginary artefact).

restart;

INV := invztrans((z - 1)^2/(a*z^2 + b*z + c), z, n):

INVnew := simplify(allvalues(INV)):

G :=eval(INVnew, [a=2,b=1,c=4,n=16])

-(1/496)*(-(1/65536)*(-2*(-31)^(1/2)+38)*(-1/2+(1/2)*(-31)^(1/2))^16+(2*(-31)^(1/2)+38)*(-1/4-(1/4)*(-31)^(1/2))^16)*(-31)^(1/2)

simplify(G);

-16429327/131072

evalf(%); # decent accuracy

-125.3458176

INVnew2 := rationalize(allvalues(INV)):

H := eval(INVnew2, [a=2,b=1,c=4,n=16]);

(1/3968)*(-16*(-31)^(1/2)*(-1/4+(1/4)*(-31)^(1/2))^16-16*(-31)^(1/2)*(-1/4-(1/4)*(-31)^(1/2))^16+304*(-1/4+(1/4)*(-31)^(1/2))^16-304*(-1/4-(1/4)*(-31)^(1/2))^16)*(-31)^(1/2)

simplify(H);

-16429327/131072

evalf(%); # decent accuracy

-125.3458176

evalf(G);
simplify(fnormal(%, 9),'zero'); # less accurate

-125.3458180-0.1122533138e-7*I

-125.345818

evalf(H);
simplify(%,'zero'); # less accurate

-125.3458181+0.*I

-125.3458181

Download ztrans_ex1.mw

Your attempt at procedure df is calling diff(f(x),x) where x has a numeric value. That does not make sense.

Also, your loop will not terminate if the process is not converging to a root (because of the choice of initial x0).

Here is an adjustment that computes the derivative more properly, and which also terminates the loop after a maximum number of iterations. It also uses evalf in a few key places to help ensure that the computations are done in floating-point even if x0 is numeric but not a float (eg. an exact rational).

I haven't changed your overall process, mostly because I suppose that this is coursework and that you are supposed to figure out and refine the approach. Perhaps it would be better if you could repeat the process at other initial x0 choices. Putting the code into a reusable procedure could make it easier for you to try that.

restart;

f := x -> exp(x^2)*sin(x - 5);
df := D(f);
#df := unapply(diff(f('x'),'x'),'x');
x0 := -1.0;
tol := 0.00001;
for i to 3 do
    x := x0;
    n := 0;
    while n<10 and tol < abs(evalf(f(x))) do
        x := evalf(x - f(x)/df(x));
        n := n + 1;
    end do;
    printf("Root %d: %.5f (after %d iterations f=%.5e)\n", i, x, n, f(x));
    x0 := x + 1.0;
end do:

proc (x) options operator, arrow; exp(x^2)*sin(x-5) end proc

proc (x) options operator, arrow; 2*x*exp(x^2)*sin(x-5)+exp(x^2)*cos(x-5) end proc

-1.0

0.1e-4

Root 1: -1.28319 (after 7 iterations f=9.31896e-10)
Root 2: -13.12697 (after 10 iterations f=4.53770e+74)
Root 3: -11.71441 (after 10 iterations f=3.34134e+59)

Download rf0.mw

By splitting up the surface we can arrive at a nicer result. Not only is the gap narrower here, but the jaggedness along the top and bottom boundaries has cleared up.

Unfortunately this is tricky to do programmatically in general.

The gap gets smaller if you increase the resolution (eg. grid=[200,200]), and the overall appearance looks quite nice with style=surface to suppress the grid-lines.

restart;

f := piecewise(x=0, 0, x <> 0, x*y^2/(x^2+y^4));

f := piecewise(x = 0, 0, x <> 0, x*y^2/(y^4+x^2))

plots:-display(
plot3d([x,y,f], y=-1..0, x=-1..-y^2),
plot3d([x,y,f], y=-1..0, x=-y^2..0),
plot3d([x,y,f], y=-1..0, x=0..y^2),
plot3d([x,y,f], y=-1..0, x=y^2..1),
plot3d([x,y,f], y=0..1, x=-1..-y^2),
plot3d([x,y,f], y=0..1, x=-y^2..0),
plot3d([x,y,f], y=0..1, x=0..y^2),
plot3d([x,y,f], y=0..1, x=y^2..1),
labels=[x,y,'f']);

Download plot3d_ridge.mw

There are other ways to accomplish something similar. (You could even get the gap to vanish altogether...) This is simply the first thing I considered.

1 2 3 4 5 6 7 Last Page 1 of 287