pagan


"A map that tried to pin down a sheep trail was just credible,

 but it was an optimistic map that tried to fix a path made by the wind,

 or a path made across the grass by the shadow of flying birds."

                                                                 - _A Walk through H_, Peter Greenaway

 

MaplePrimes Activity


These are answers submitted by pagan

You can use the Layout palette for inserting that as typset 2D Math input.

Suppose that you are in 2D Math input mode (the default mode, in which you likely were already to get the utilde). If not, then use the F5 key (Linux, MS-Windows) to toggle it on. This enables several palettes, such as the Layout palette.

Choose the "over2" item from the Layout palette. It is second from the left in the second row: the one which looks like a magenta `b` over a green `A`. Click it, and it should get inserted with the `A` bolded for overwrite. Type x. Then hit the Tab key, which should put the `b` into the bold overwrite state.

What you use to replace the `b` is a matter of choice. You can simply type ~ from your keyboard. That looks ok for the 2D Math input, but it prints poorly in the resulting 2D Math output (ie. after you press Return or Enter). In output, the tilde is too small and lies too high above the base name (here, the `x`).

You might find that using the "Tilde" item from the Relational Round palette is a more attractive choice for overwriting the `b`. It is the last item on the right in the top row of the Relational Round palette in my Maple 16. This looks better in the 2D Math output.

The underlying (plaintext) 1D Math notation for entering these names is:

`#mover(mi("x"),mo("˜"))`

and

`#mover(mi("x"),mo("∼"))`

 

note. The following pair of (plaintext) 1D Maple notation names also look good for 2D Math output, but I don't see a way to get them as 2D Math input from the palettes:

`#mover(mi("x"),mo("∼"))`

`#mover(mi("x"),mo("~"))`
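A small illustration, in case it helps: once entered, these backquoted typeset names behave like any other Maple symbol, so you can assign to them or compute with them.

expr := `#mover(mi("x"),mo("˜"))`^2 + 3;
diff(expr, `#mover(mi("x"),mo("˜"))`);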

Are these all ways to get what you want?

They may differ from each other in performance for much larger Vectors. You could instead form X simply as a list -- but using Digits=15 and datatype=float[8] for X as a Vector seems like a fairer functionality comparison with Matlab.

If you just need simple code then maybe you could just use Mean(X[2..-1]-X[1..-2])

I didn't bother assigning to `timing_data` or `delta_t`.

restart:

with(Statistics): with(LinearAlgebra): with(ArrayTools):

Digits:=15:

X := Vector[row]([seq(0.0..5.0,0.01)],datatype=float[8]):

Mean(X[2..-1]-X[1..-2]);

                             0.0100000000000000

Mean(VectorAdd(X[2..-1],X[1..-2],1,-1,inplace));

                             0.0100000000000000

Mean(Alias(X,1,[op(1,X)-1])-Alias(X,0,[op(1,X)-1]));

                             0.0100000000000000

Mean(VectorAdd(X[2..-1],Alias(X,0,[op(1,X)-1]),1,-1,inplace));

                             0.0100000000000000

AddAlongDimension(VectorAdd(X[2..-1],
                            Alias(X,0,[op(1,X)-1]),1,-1,inplace),
                  1)/(op(1,X)-1);

                             0.0100000000000000
pts:=[O = (0, 0), P = (0, b/d), Q = (beta/delta, 0),
      R = ((-d*beta+b*gamma)/(-delta*d+gamma*c),
          (beta*c-delta*b)/(delta*d-gamma*c))]:

J:=Matrix(2, 2, {(1, 1) = beta-2*delta*x-gamma*y, (1, 2) = -x*gamma,
                (2, 1) = -y*c, (2, 2) = b-2*d*y-c*x}):

eval(J,Equate([x,y],[eval(O,pts)]));

eval(J,Equate([x,y],[eval(P,pts)]));

eval(J,Equate([x,y],[eval(Q,pts)]));

eval(J,Equate([x,y],[eval(R,pts)]));
simplify( % );

Check the spelling of the outer call. You have it as FlipDimenstion instead of FlipDimension.

I don't know whether you will accept any of my suggestions, but they are offered with good intentions. Sorry if any of this sounds harsher than necessary.

There are some misconceptions in your approach. The most serious is that you seem to think that you need an explicit algebraic (ie. formulaic) result for the eigenvalues and eigenvectors, just so that you can get them "as a function of a variable" (your wording). But that is not true. It's quite easy to alter your procedure `matrixA` so that it admits a numeric parameter eta and returns the corresponding floating-point solutions.

In fact, such a procedure can likely produce the eigen-solutions for hundreds (if not tens of thousands) of distinct numeric values for eta in the time that could be wasted trying to get a 20- (or 50-, or 100-...) page mixed symbolic-float result as a set of formulas in eta.

It's quite likely that you might not be able to obtain explicit formulas (mixed floats and symbolic eta) for the eigen-solutions at all. The eigenvalues will be the roots of a degree six polynomial (in lambda, say) with complicated float-symbolic coefficients. Maple could churn for a long time before deciding that only an implicit RootOf solution is possible. Try applying the command

solve(LinearAlgebra:-CharacteristicPolynomial(A,lambda),lambda,Explicit)

to that Matrix A your sheet forms (as a global). That computation has already used 1GB memory and 1/2 hour on a fast machine, and it's not finished...

I attach a version of the worksheet where the procedure `matrixA` can compute the eigenvalues, eigenvectors, and inverse of the Matrix of eigenvectors all in under 1/20th of a second for a given floating-point value of eta. There's a chance that whatever you intended on doing with the explicit formulas could be done much faster with such an approach.

I did also change the procedure `matrixA` so that it returned the computed quantities instead of assigning to globals (which isn't very good practice).
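In outline, the reworked procedure looks something like the sketch below. The 2x2 Matrix entries here are placeholders just to show the shape of the idea; the `matrixA` in the attached worksheet builds your actual Matrix from eta.

matrixAsketch := proc(eta::numeric)
  local A, evals, evecs;
  # placeholder entries only -- your matrixA forms the real Matrix here
  A := Matrix(2, 2, [[1.3*eta, 0.7], [eta^2, 2.1]], datatype=float[8]);
  evals, evecs := LinearAlgebra:-Eigenvectors(A);
  return evals, evecs, LinearAlgebra:-MatrixInverse(evecs);
end proc:

evals, evecs, inv := matrixAsketch(0.6):  # purely floating-point, and fast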

FloatsNotHandled2.mw

update: the computation of the explicit eigenvalues of the mixed symbolic-float Matrix A did eventually return on my machine. It took 700 sec, and the result is 812170 words in length. It took half a second to evaluate this at eta=0.6. That means that it takes ten times as long to compute only the float eigenvalues using this formula as it does to compute all of the eigenvalues & eigenvectors & inverse-matrix using the other approach. (The eigenvalue results agree, by the way.)

ps. I applied `simplify` and `fnormal` to the final results returned from the procedure. Those are not necessary steps, and I included them just so that relatively small (mostly imaginary) quantities were removed. These seemed to be insignificant artefacts of the floating-point calculation, but you could remove those calls and see for yourself.
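For instance, the kind of cleanup I mean looks roughly like this (the number below is made up):

z := 2.3456789012 + 4.1*10^(-13)*I:
fnormal(z);                  # the tiny imaginary part becomes 0.*I at default Digits
simplify(fnormal(z), zero);  # and the 0.*I is then dropped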

Is this point good enough?

a = 2.61747201678486, b = -1.71950088324973, c = 2.30934676032938,
d = 1.50338513254071, e = 1.84587928605666
   Matrix(9,[a,0,b],scan=diagonal)
 + Matrix(9,[0,0,b,0,a],scan=diagonal);

Yes, the presence of a float amongst numeric values of the integration range will trigger `int` to do numeric integration (quadrature) as if it had been called like evalf(Int(...)). See the 5th bullet point in the Description section of the help page for the int command.

The presence of a float in the integrand, but not in the range, is not sufficient to trigger this behaviour.
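Here is a small generic illustration of that rule (not using dgig):

int(exp(-t^2), t = 0 .. 1);      # exact endpoints: symbolic result, in terms of erf
int(exp(-t^2), t = 0 .. 1.0);    # float endpoint: numeric quadrature, a float result
int(0.5*exp(-t^2), t = 0 .. 1);  # float only in the integrand: still symbolic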

By the way, the first result you show is the inaccurate one. Basically, the result of symbolic integration done for your first `int` call is a formula that presents more numeric difficulty than does the original integrand. I believe that `int` does essentially the same computation whether I used 1/2 or 0.5 below for the third argument of dgig (in the case of exact, non-float values in the range).

restart:
dgig := proc(psi, chi, lambda, t)
    (psi/chi)^(lambda/2)*t^(lambda - 1)*exp(-(psi*t+chi/t)/2)/\
    (2*BesselK(lambda,sqrt(chi*psi))):
end: ## End of dgig

res:=int(dgig(60, 60, 1/2, t), t = 0..1);

                 1             /   (1/2)   (1/2)\   1            1
          res := - exp(120) erf\2 2      15     / - - exp(120) + -
                 2                                  2            2

So, `res` is the sum of three things: a very large positive quantity, a very large negative quantity, and 1/2. Unless a sufficient number of digits is used when adding the first two of those quantities, roundoff error can cause that pair of values to cancel inaccurately to zero.

(1/2)*exp(120)*erf(2*2^(1/2)*15^(1/2));

                      1             /   (1/2)   (1/2)\
                      - exp(120) erf\2 2      15     /
                      2                               

evalf[100](%): part1:=evalf[10](%);

                                                 51
                          part1 := 6.520904392 10  

-(1/2)*exp(120);

                                  1         
                                - - exp(120)
                                  2         

evalf[100](%): part2:=evalf[10](%);

                                                 51
                         part2 := -6.520904392 10  

part1+part2; # inaccurate!

                                     0.

(By the way, if Maple had first added together the first and third of the addends in the sum, then I'd expect to see catastrophic cancellation wipe out the 1/2, ending up with zero as the inaccurate answer instead of 0.5.)

As the working precision is increased, an accurate result emerges:

restart:
dgig := proc(psi, chi, lambda, t)
    (psi/chi)^(lambda/2)*t^(lambda - 1)*exp(-(psi*t+chi/t)/2)/\
    (2*BesselK(lambda,sqrt(chi*psi))):
end: ## End of dgig

res:=int(dgig(60, 60, 1/2, t), t = 0..1);

                 1             /   (1/2)   (1/2)\   1            1
          res := - exp(120) erf\2 2      15     / - - exp(120) + -
                 2                                  2            2

evalf[40](res): evalf[10](%);
                                0.5000000000

evalf[55](res): evalf[10](%);
                                0.4750000000

evalf[56](res): evalf[10](%);
                                0.4745000000

evalf[57](res): evalf[10](%);
                                0.4743500000

evalf[63](res): evalf[10](%);
                                0.4743543708

The integrand, on the other hand, does not suffer from these issues, and function evaluation (eg. plotting) or numeric integration succeeds easily even at less than double-precision.

> restart:
> dgig := proc(psi, chi, lambda, t)
>     (psi/chi)^(lambda/2)*t^(lambda - 1)*exp(-(psi*t+chi/t)/2)/\
>     (2*BesselK(lambda,sqrt(chi*psi))):
> end: ## End of dgig

> dgig(60, 60, 0.5, t): lprint(%);

0.3529023946e27*exp(-30*t-30/t)/t^.5

> evalf(Int(t->evalf[11](0.3529023946e27*exp(-30*t-30/t)/t^.5), 0..1.0));

                                0.4743543709

Statistics:-Tally is an alternative.

It seems to do about as well as your `listhist` for posints, up until about length 10^6 to 10^7 after which it allocates much more memory.
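For reference, a small usage example (a short list of posints, nothing near the 10^6 lengths mentioned above):

Statistics:-Tally([3, 1, 2, 3, 3, 2]);  # returns a list of value = count equations, such as [1 = 1, 2 = 2, 3 = 3]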

But we cannot so easily see its source, as it seems to be precompiled and accessed as call_external.

showstat(((Statistics::DataManipulation)::Tally)::GetValue);

I wonder whether the memory is high because of the weights. I didn't try to check.

It's not clear whether you want it centered, or whether the font details (upright, italic, monospaced, etc.) matter to you.

> f1:=x^2: f2:=x^3: f3:=x^4:

> printf("%a %a %a\n",f1,f2,f3);
x^2 x^3 x^4

> convert(sprintf("%a %a %a",f1,f2,f3),name);

                                      x^2 x^3 x^4

> nprintf("%a %a %a",f1,f2,f3);

                                      x^2 x^3 x^4

The second and third of those above will display centered in the GUI, and in the usual italic used for 2D Math. 

If you feel energetic you could investigate ways to produce atomic identifiers programmatically, so as to get a centered result with 2D Math output (ie. notice the pretty-printing of the exponents, compared to the previous). For example, the output of this below in the Standard GUI. (It's one long line, and should be entered without a linebreak. Hopefully you can multi-click with your mouse to copy & paste it.)

`#mrow(msup(mi("x"),mn("2")),mo("⁢"),msup(mi("x"),mn("3")),mo("⁢"),msup(mi("x"),mn("4")))`;
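As a rough sketch of the programmatic route, something along these lines builds the same kind of name with nprintf (here the &InvisibleTimes; entity stands in for the invisible multiplication character in the name above; I haven't checked whether every GUI version renders the entity form identically):

exps := [2, 3, 4]:
inner := StringTools:-Join([seq(sprintf("msup(mi(\"x\"),mn(\"%d\"))", i), i in exps)],
                           ",mo(\"&InvisibleTimes;\"),"):
nprintf("#mrow(%s)", inner);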

The problem is that fsolve may not always be converging on a solution. And the Statistics:-NonlinearFit routine is stopped dead by any such failure. It's unlikely that using any (finite, or not) fallback value in fitfunc (for the case of fsolve failure) would allow the numeric solver of NonlinearFit to succeed. So the issue is: how to help fsolve succeed here.

Try raising Digits, to avoid numerical difficulties with evaluating the expression being fsolve'd.

Ie.

fitfunc := proc(V, j1, j2, Rs1, Rs2, n1, n2, Rp1, Rp2)
  Digits:=20;
  fsolve(V = J*Rs2+J*Rp2-j2*Rp2+n2*LambertW(j2*Rp2*exp(Rp2*(-J+j2)/vtherm)/vtherm)
             /vtherm+J*Rs1+J*Rp1+j1*Rp1-n1*LambertW(j1*Rp1*exp(Rp1*(J+j1)/vtherm)
             /vtherm)/vtherm, J);
end proc:

Also, if you know that J should be in some reasonable range, then try changing the last argument to fsolve from just J to J=a..b for some explicit numeric range a..b.

If that doesn't work well enough, try making fsolve work on a procedure instead of an expression, and have that procedure (rather than fitfunc) be the one that raises Digits. (The idea is that fsolve infers an accuracy tolerance from the value of Digits in effect where it is called, while the expression evaluation might have to be done at a higher working precision than that.)
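A rough sketch of that procedure-form idea (the expression is just the one from fitfunc above, rewritten as a residual inside a one-argument procedure; the value 30 for Digits is arbitrary):

fitfunc := proc(V, j1, j2, Rs1, Rs2, n1, n2, Rp1, Rp2)
  local F;
  F := proc(J)
    Digits := 30;  # environment variable, so it is only raised during this evaluation
    evalf(J*Rs2+J*Rp2-j2*Rp2+n2*LambertW(j2*Rp2*exp(Rp2*(-J+j2)/vtherm)/vtherm)
          /vtherm+J*Rs1+J*Rp1+j1*Rp1-n1*LambertW(j1*Rp1*exp(Rp1*(J+j1)/vtherm)
          /vtherm)/vtherm - V);
  end proc;
  fsolve(F);  # fsolve's tolerance is still inferred from the caller's Digits
end proc: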

By the way, you didn't give us the xdata and ydata, which makes it a bit tough to test. What value of vtherm are you using, 38.7?

alpha:=6.34: # does not interfere, if assigned
plots:-display(plot(sin(x),x=0..2*Pi),
               plots:-textplot([1.5,0.5,typeset('alpha'),font=[TIMES,ROMAN,20]]),
               plots:-textplot([4.5,-0.5,typeset('Zeta'),font=[TIMES,ROMAN,12]]));


Try it like this, if you are using Maple 15.01

Digits, oldDigits := 16, Digits:
plots:-logplot(mass,pressure);
Digits := oldDigits:
restart;

eq1:=diff(x(z),z)=a:

sol1:=dsolve({eq1,x(0)=2},numeric,parameters=[a],'output'=listprocedure):
xs:=eval(x(z),sol1);

                               proc(z)  ...  end;

Xm:=proc(aValue,zValue)
  option remember;
  if not (type(aValue,numeric) and type(zValue,numeric)) then
    return 'procname'(args);
  end if;
  xs('parameters'= [':-a' = aValue]);
  #try
    # Call `forget`, or else xs(z) gets remembered for wrong aValue!
    # (...and that is not because this proc Xm has option remember)
    forget(`evalf/int`); forget(evalf);
    # You could experiment with adding option method=_CCquad to the
    # evalf/Int call here, but remove it if Xm gives insufficient accuracy
    evalf[14](1/zValue*Int(xs,0..zValue));
  #catch "unable to store": 
  #catch "not possible":
  #end try;
end proc:

eq2:=diff(a(t),t)=Xm(a(t),2):
sol2:=dsolve({eq2,a(0)=0},numeric,relerr=1e-7,'output'=listprocedure,known=[Xm]):
at:=eval(a(t),sol2);

                                proc(t)  ...  end;

# Now compare with the exact solution
SOL1:=dsolve({eq1,x(0)=2}):
XS:=unapply(eval(x(z),SOL1),a);

                                 a -> a z + 2

XM:=unapply(1/zValue*int(XS(aValue),z=0..zValue),[aValue,zValue]):

EQ2:=diff(a(t),t)=XM(a(t),2):
SOL2:=dsolve({EQ2,a(0)=0}):
AT:=unapply(eval(a(t),SOL2),t);

                               t -> -2 + 2 exp(t)

Xm(8,2), XM(8,2), Xm(10,2), XM(10,2);

                 9.999999999999998, 10, 11.999999999999993, 12

at(4.0): evalf[7](%);

                                107.1963

AT(4.0): evalf[7](%);

                                107.1963

#plots:-display(Array([plot(at, 0..10),
#                      plot(AT, 0..10)]));

See the attached worksheet, which uses dsolve,numeric,parameters for beta, theta0, and thetaR (which all remain unassigned at the top level).

fermions_alter.mw

 

The basic idea is that you can call dsolve just once, to get a "generic" set of resulting procedures. Then, as you do your triple loop, you set each of the parameters following the convention illustrated.

And all three of the global names beta, thetaR, and theta0 can even remain unassigned throughout this whole process.

I cut it off after the first plot.

I just did one instance, without loops. You simply need to add in the loops between the portions where the parameters get set, and use the looping indices to produce the values used in setting the parameters.
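If it helps, the shape of the looping pattern is roughly as follows (the ODE and the parameter values here are placeholders, not the ones from fermions_alter.mw):

sol := dsolve({diff(y(x),x) = -beta*y(x) + thetaR - theta0, y(0) = 1},
              numeric, 'parameters' = [beta, thetaR, theta0],
              'output' = listprocedure):
Y := eval(y(x), sol):
for B in [0.5, 1.0] do
  for TR in [0.1, 0.2] do
    for T0 in [0.0, 0.05] do
      Y('parameters' = [':-beta' = B, ':-thetaR' = TR, ':-theta0' = T0]);
      # ...now use Y as a numeric procedure of x, eg. plot(Y, 0..5)...
    end do;
  end do;
end do;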

2 3 4 5 6 7 8 Last Page 4 of 48