acer

32348 Reputation

29 Badges

19 years, 330 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@opus64 And now you are missing the full details of what came before, in your session/worksheet. (See my Answer's last paragraph, which might describe parts of the cause.)

Guessing at problems is a "huge waste of time."

@Scot Gould The single left-quotes are so-called name-quotes. Here they denote that the word is being used in its operator form (with brackets and a sequence of arguments).

It is a documented part of the Maple language, and during parsing it helps avoid ambiguity with the keyword if that occurs in if...then...end if. There are a number of similar patterns like this in the language.

This syntax helps with some particular programming cases, and is not uncommonly used. It is not an obscure oddity.

[edit] It also allows for additional functional programming, useful for convenience and efficiency.
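To illustrate the operator form, here is a minimal sketch (the names x and y are just hypothetical values for demonstration):

```maple
x := -3:
# operator (function-call) form of if, written with name-quotes
y := `if`(x > 0, x, -x);
# the equivalent statement form
if x > 0 then y := x; else y := -x; end if;
```

The operator form is an expression, so it can appear inline, e.g. inside seq or map calls, where the statement form cannot.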

What do you mean, you "make an User interface"?

Do you mean you build it from Embedded Components, or Maplets? By hand from the palette, or fully programmatically?

You've omitted the key details.

@Scot Gould 

There can be a few purposes for a 'procname'(args) return, say when the arguments are nonnumeric. For example, to avoid various kinds of premature evaluation of the call. The technique is sometimes referred to as making an unevaluated return.

The reason I used it was that I was trying to convince the solver that I had an expression, rather than an operator. But I didn't want the call proc3(K,r) to evaluate up front when passed to Minimize (because I wanted the wrapping evalf to remain, and any local change of Digits to still be in play). I could have tried also passing 'proc3'(K,r) with uneval quotes, but I really wanted to ensure that it wouldn't evaluate in some preprocessing (I was suspicious about the creation of the Gradient).

I wasn't trying to indicate that an actual attempt at numeric integration failed, by the unevaluated return.
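As a minimal sketch of the idiom (the procedure f and its body are hypothetical, just to show the pattern):

```maple
f := proc(x)
  if not type(x, numeric) then
    # return the call unevaluated, so symbolic arguments pass through
    return 'procname'(args);
  end if;
  evalf(sin(x) + x^2);
end proc:

f(2.0);   # a numeric argument, so this computes a float
f(t);     # t is not numeric, so this returns f(t) unevaluated
```

That way a solver or plotter can pass a symbolic name through f without error, and the actual numeric evaluation is deferred until a numeric value is supplied.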

Moving to your next query: I suspect that you would run into trouble if the dummy integration-variable name x evaluates to some assigned value (numeric, say) at the higher scope at which the procedure is called. The procedure won't automatically make such dummy variables of integration into locals. Maple would have flagged f1,f2 because they are used on the LHS of an assignment statement. If you want to see warnings about use/reference within a procedure of names as if they were global, then see the mint utility (or the maplemint command, though it is weaker).
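For example, a sketch of that distinction (the procedure p here is hypothetical):

```maple
p := proc(a)
  local s;
  s := int(a*x, x = 0 .. 1);  # x is referenced as a global name here
  f1 := s + 1;                # assignment: f1 gets implicitly declared local,
                              # with a warning at definition time
  f1;
end proc:

maplemint(p);  # reports names used as globals but not declared, e.g. x
```

If global x happened to have an assigned numeric value at the calling scope, the int call above would receive that value instead of a dummy name.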

Yes, it's generally sloppy to rely on an unprotected name being unassigned at the higher scope, if referenced in the procedure.

The syntax :-K refers to the global name, to distinguish it from the procedure parameter K. Your expression df was defined at the top-level and contained the global :-K. So I substituted the value of parameter K in for the instance of global :-K within df. And similarly for r, say.

As for the unapply, it is complicated. By default evalf(Int(...)) tries to find points of discontinuity at which it may have to split the integration, in part because most of its arsenal of numeric integrators rely on smoothness of the integrand (to attain some accuracy). Nobody wants wrong results. But there are few easy ways to disable such discontinuity checks, which may be quite expensive. Sometimes it can be done by specifying a NAG method (say, _d01ajc) but that's only for real-valued integrands at hardware precision. By changing the integrand expression into an integrand procedure it acts like a black box and doesn't get poked at for such checks. In severe cases the cost of using unapply for each integration call can be measurably less than the overhead of the check. I would really like an easy option for attaining this effect.
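A sketch of that expression-vs-procedure distinction (the integrand here is hypothetical, chosen only because it has non-smooth points):

```maple
expr := exp(-x^2)*abs(sin(3*x)):

# expression form: evalf(Int(...)) may probe the integrand symbolically
# for discontinuities before integrating
evalf(Int(expr, x = 0 .. 2));

# operator (black-box) form: the integrand is only ever evaluated
# numerically, so no symbolic discontinuity analysis is attempted
F := unapply(expr, x):
evalf(Int(F, 0 .. 2));
```

Both forms should return comparable values here; the difference is in the preprocessing cost, which can dominate for integrands that are expensive to analyze symbolically.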

@Carl Love Yes, thanks. There is also the pair DDTTRFB/DDTTRSB in MKL for factoring and backsolving a diagonally dominant tridiagonal matrix. (That factorization returns no pivot list, using only dl,d,du. I do not know which algorithm it uses, eg. whether it's Thomas's algorithm.)

Like you I consider it worthwhile to be able to separate the factorization from the solving step (for a given RHS). For the LU with pivoting that might be possible as a special list return from LUDecomposition, or from the other tridiagonal solver I described in my Answer. That prefactorization list might then be passed into LinearSolve. That's how it works now for Cholesky, QR, and LU (but not banded LU). It's a little awkward, and all those routines have overhead as they figure out how to dispatch, whether it's a float or exact problem, etc.

As an alternative... perhaps it would be worthwhile to create BLAS and LAPACK packages, with all the in-place and other goodness. They could handle hardware and software floats, where possible. But a goal would be low overhead. They would also offer some nice efficiencies, like being able to specify by semaphore that a Matrix argument is to be taken as "transposed", without any copying or computation cost, eg. solve A^%T . x = b, or multiply a*A^%T . B, etc. I wouldn't be surprised if much of the code could be auto-generated. Another benefit is that CodeGeneration[C] might be taught about it. I've thought about this for some time.

@Rouben Rostamian  Thanks.

And yes, indeed, for a different example I might have had to call frontend differently (which is a common phenomenon for many frontend uses...).

So Physics:-diff is a more general solution for this kind of task.

Note: it's often a good idea to allow these default expression types, {`+`,`*`}, through as unfrozen:

  frontend(diff, [x(t)^2 + sin(x(t)+y(t)), x(t)], [{`+`,`*`},{sin}]);
                            2 x(t) + cos(x(t) + y(t))

  frontend(diff, [x(t)^2 + sin(x(t)+y(t)), x(t)], [{`+`,`*`,specfunc(sin)},{}]);
                            2 x(t) + cos(x(t) + y(t))
As the examples get more involved and confusing, even targeted freeze/thaw starts to look somewhat attractive. Having to double-check every answer robs an approach of merit. I guess I ought to have mentioned Physics:-diff first.

@Carl Love This member usually posts worksheets using Maple 18, and that might still be his working version. Your cited post's code would need revising, to run in that version, I believe.

Fyi, when I ramped up the size and number of RHSs to both be very large I sometimes obtained a result that contained Float(undefined).  However I was sometimes able to obtain a solution by other means in that case, ie. I'm not sure that I was encountering inconsistency in the random example. It was rare. I didn't poke around.

Timing performance of your prior code seemed quite similar to that of the LAPACK tridiagonal solver (as I accessed it), although the latter usually produced a slightly smaller residual. The timing differences were affected by the order in which they were run, and by aspects that hinted that the loading of shared external objects was a factor.

@Carl Love Yes, sorry.

@Carl Love Right you are, of course (I knew that but was not thinking, it might be a WFH thing).

I think that it's interesting how much garbage collection is part of some of these timings.

Something that I was considering mentioning was that these kinds of schemes can vary in performance according to which aspect of the problems grows in size. So for this problem it might be whether the number of items or the number of keys (or both) grows large.

When I was writing the simple loop code I showed in my Answer I was primarily thinking about dealing with a small number of unsorted keys relative to the number of items. I would write it differently for a large number of keys and, as I did mention in my Answer, I'd consider using ListTools:-Classify. (I know I used an example with 10^3 keys in comparison with Kitonum's code -- what I do and what I think don't always match.)

For a large enough number of items but a small number of keys (here, distinct eigenvalues lists) the scheme I coded performs modestly better than Carl's scheme. 

But Carl's scheme scales up much better as both the number of keys as well as the number of items grows large.

Another consideration which sometimes comes into play is how many such computations need to be done, and whether any part is repeated.

And that is an important takeaway, I think -- that the scheme should be taken which best suits the character of the actual data.

Unfortunately, in this question we have not been provided with the actual data, so there is no way to know what would be best except for guessing.

We can also observe that, for some sizes of problem, the repetition and ordering of the compared computations matters and has an effect. Therefore a better comparison would involve separate sessions. (The generated examples below are random and hence session-dependent, but should be large enough that the timings are representative and still meaningfully comparable across sessions.)

Here are some more data points, for fun:

restart;

#Acer's code (verbatim):
#
GL := proc(LL)
  local U,C,T2,t,u,i,L;
  T2 := {map[2](op,2,LL)[]};
  for i from 1 to nops(T2) do
    C[T2[i]] := 0:
  end do:
  for L in LL do
    for t in T2 do
      if L[2]=t then
        C[t] := C[t] + 1;
        U[t][C[t]] := L[1];
        break;
      end if;
    end do;
  end do;
  [seq([lhs(u),convert(rhs(u),list)],u=op([1,..],U))];
end proc:
#

N1,N2:= 10^1,10^6:
M := [seq(convert(LinearAlgebra:-RandomVector(3),list),i=1..N1)]:
f:=rand(1..N1):
MyNestedList := [seq([g||i,M[f()]],i=1..N2)]:

CodeTools:-Usage(GL(MyNestedList)):

memory used=459.37MiB, alloc change=106.02MiB, cpu time=11.55s, real time=6.49s, gc time=7.11s

N1,N2:= 5,10^6:
M := [seq(convert(LinearAlgebra:-RandomVector(3),list),i=1..N1)]:
f:=rand(1..N1):
MyNestedList := [seq([g||i,M[f()]],i=1..N2)]:

CodeTools:-Usage(GL(MyNestedList)):

memory used=398.44MiB, alloc change=22.91MiB, cpu time=7.52s, real time=5.12s, gc time=3.37s

N1,N2:= 10^2,10^5:
M := [seq(convert(LinearAlgebra:-RandomVector(3),list),i=1..N1)]:
f:=rand(1..N1):
MyNestedList := [seq([g||i,M[f()]],i=1..N2)]:

CodeTools:-Usage(GL(MyNestedList)):

memory used=151.74MiB, alloc change=-45.81MiB, cpu time=2.60s, real time=1.83s, gc time=1.13s

N1,N2:= 10^4,10^4:
M := [seq(convert(LinearAlgebra:-RandomVector(3),list),i=1..N1)]:
f:=rand(1..N1):
MyNestedList := [seq([g||i,M[f()]],i=1..N2)]:

CodeTools:-Usage(GL(MyNestedList)):

memory used=0.73GiB, alloc change=-7.64MiB, cpu time=15.83s, real time=13.14s, gc time=4.03s

 

 

restart;

#Carl's alternative:
#
GraphEigenvalueClassify:= (L::list([{Graph, name}, list]))->
    [lhs,rhs]~(
        (op@map)(
            g-> [map(`?[]`, g, [1])[]],
            ListTools:-Classify(g-> g[2], L)
        )
    )
:

N1,N2:= 10^1,10^6:
M := [seq(convert(LinearAlgebra:-RandomVector(3),list),i=1..N1)]:
f:=rand(1..N1):
MyNestedList := [seq([g||i,M[f()]],i=1..N2)]:

CodeTools:-Usage(GraphEigenvalueClassify(MyNestedList)):

memory used=0.94GiB, alloc change=179.12MiB, cpu time=28.27s, real time=16.38s, gc time=18.10s

N1,N2:= 5,10^6:
M := [seq(convert(LinearAlgebra:-RandomVector(3),list),i=1..N1)]:
f:=rand(1..N1):
MyNestedList := [seq([g||i,M[f()]],i=1..N2)]:

CodeTools:-Usage(GraphEigenvalueClassify(MyNestedList)):

memory used=0.52GiB, alloc change=47.28MiB, cpu time=9.87s, real time=7.67s, gc time=3.40s

N1,N2:= 10^2,10^5:
M := [seq(convert(LinearAlgebra:-RandomVector(3),list),i=1..N1)]:
f:=rand(1..N1):
MyNestedList := [seq([g||i,M[f()]],i=1..N2)]:

CodeTools:-Usage(GraphEigenvalueClassify(MyNestedList)):

memory used=55.10MiB, alloc change=0 bytes, cpu time=576.00ms, real time=576.00ms, gc time=0ns

N1,N2:= 10^4,10^4:
M := [seq(convert(LinearAlgebra:-RandomVector(3),list),i=1..N1)]:
f:=rand(1..N1):
MyNestedList := [seq([g||i,M[f()]],i=1..N2)]:

CodeTools:-Usage(GraphEigenvalueClassify(MyNestedList)):

memory used=21.54MiB, alloc change=0 bytes, cpu time=95.00ms, real time=95.00ms, gc time=0ns

 

Download grouplistcomp.mw

ps. I know that the code snippet to generate random examples could be improved. I didn't consider it important enough to optimize.

@hakan So, do you mean like this?

I'll also put in Kitonum's procedure, in case you wish to compare. Of course the order of the sublists may differ in the result, but it's not difficult to use a list rather than a set of the keys -- arranged as you prefer.

I repeat each twice for the larger example, interleaved, to show that the timings are not just a consequence of which runs first.

restart;

MyNestedList := [ [g1, [1,2,3]], [g2, [0,1,3]], [g3, [1,2,3]], [g4, [0,1,3]],
                  [g5, [2,4,7]], [g6, [0,1,3]], [g7, [2,4,7]], [g8, [0,-1,9]] ]:

 

GL := proc(LL)
  local U,C,T2,t,u,i,L;
  T2 := {map[2](op,2,LL)[]};
  for i from 1 to nops(T2) do
    C[T2[i]] := 0:
  end do:
  for L in LL do
    for t in T2 do
      if L[2]=t then
        C[t] := C[t] + 1;
        U[t][C[t]] := L[1];
        break;
      end if;
    end do;
  end do;
  [seq([lhs(u),convert(rhs(u),list)],u=op([1,..],U))];
end proc:

 

Ans1 := CodeTools:-Usage( GL(MyNestedList) );

memory used=33.94KiB, alloc change=0 bytes, cpu time=1000.00us, real time=0ns, gc time=0ns

[[[2, 4, 7], [g5, g7]], [[0, -1, 9], [g8]], [[1, 2, 3], [g1, g3]], [[0, 1, 3], [g2, g4, g6]]]

Grouping:=proc(L::listlist)
local L1;
uses ListTools;
L1:=[Categorize((x,y)->x[2]=y[2], L)];
map(p->[p[1,2],map(l->l[1],p)], L1);
#map(p->[p[1,2],map[2](op,1,p)], L1);
end proc:

 

Ans2 := CodeTools:-Usage( Grouping(MyNestedList) );

memory used=0.75MiB, alloc change=0 bytes, cpu time=11.00ms, real time=11.00ms, gc time=0ns

[[[1, 2, 3], [g1, g3]], [[0, 1, 3], [g2, g4, g6]], [[2, 4, 7], [g5, g7]], [[0, -1, 9], [g8]]]

 

nops(Ans1), nops(Ans2), evalb( {Ans1[]} = {Ans2[]} );

4, 4, true

 

N1,N2 := 10^3,10^4:
M := [seq(convert(LinearAlgebra:-RandomVector(3),list),i=1..N1)]:
f:=rand(1..N1):
MyNestedList := [seq([g||i,M[f()]],i=1..N2)]:

 

CodeTools:-Usage( GL(MyNestedList) ):

memory used=135.80MiB, alloc change=48.00MiB, cpu time=2.08s, real time=1.92s, gc time=395.52ms

CodeTools:-Usage( Grouping(MyNestedList) ):

memory used=0.58GiB, alloc change=-4.00MiB, cpu time=8.75s, real time=8.02s, gc time=1.53s

CodeTools:-Usage( GL(MyNestedList) ):

memory used=119.80MiB, alloc change=0 bytes, cpu time=1.72s, real time=1.59s, gc time=275.15ms

CodeTools:-Usage( Grouping(MyNestedList) ):

memory used=0.58GiB, alloc change=0 bytes, cpu time=8.41s, real time=7.66s, gc time=1.57s

 

Download grouplist.mw

@fmoraga So did you not read my earlier response about syntax for multiplication? Making the simple edit of adding explicit multiplication symbols in two places (Eq2, Eq3) seems to clear up the problem.

It is unclear why you prefer one representation of W_dot over others. Perhaps you have reasons for that.

If you want also to eliminate {Q_dot_C,Q_dot_H,T_c,T_h} specifically alongside {W_dot} then you could call eliminate against the union of those together -- and obtain no restrictions on the remaining variables. Or you could simply call solve directly against that union -- since the remaining variables are all free.

And you could then use eval as the second step to extract only the W_dot formula, which is simpler than having another solve call as the second step.

I suppose that you realize that you could also eliminate {Q_dot_C, Q_dot_H, eta} specifically in addition to W_dot. Or, you could simply take Eq1 as the answer, or use only Eq4.

Here are your original elimination, and a couple of others.

restart

Eq1 := W_dot = Q_dot_H-Q_dot_C;

W_dot = Q_dot_H-Q_dot_C

Eq2 := Q_dot_H = UA_H*(T_H-T_h);

Q_dot_H = UA_H*(T_H-T_h)

Eq3 := Q_dot_C = UA_C*(T_c-T_C);

Q_dot_C = UA_C*(T_c-T_C)

Eq4 := eta = W_dot/Q_dot_H;

eta = W_dot/Q_dot_H

Eq5 := eta = 1-T_c/T_h;

eta = 1-T_c/T_h

List := eliminate({Eq1, Eq2, Eq3, Eq4, Eq5}, {Q_dot_C, Q_dot_H, T_c, T_h}):

solve(List[2], W_dot);

{W_dot = UA_C*UA_H*eta*(T_H*eta+T_C-T_H)/(UA_C*eta+UA_H*eta-UA_C-UA_H)}

another := eliminate({Eq1, Eq2, Eq3, Eq4, Eq5}, {Q_dot_C, Q_dot_H, T_c, T_h, W_dot}):

{}

W_dot = eval(W_dot, another[1]);

W_dot = UA_C*UA_H*eta*(T_H*eta+T_C-T_H)/(UA_C*eta+UA_H*eta-UA_C-UA_H)

anotherS := solve({Eq1, Eq2, Eq3, Eq4, Eq5}, {Q_dot_C, Q_dot_H, T_c, T_h, W_dot}):

W_dot = eval(W_dot, anotherS);

W_dot = UA_C*UA_H*eta*(T_H*eta+T_C-T_H)/(UA_C*eta+UA_H*eta-UA_C-UA_H)


alt := eliminate({Eq1, Eq2, Eq3, Eq4, Eq5}, {Q_dot_C, Q_dot_H, T_c, W_dot, eta}):

{}

W_dot = simplify(eval(W_dot, alt[1]), size);

W_dot = (T_H-T_h)*UA_H*(UA_H*(T_H-T_h)+UA_C*(T_C-T_h))/(UA_H*(T_H-T_h)-UA_C*T_h)

new := solve({Eq1, Eq2, Eq3, Eq4, Eq5}, {Q_dot_C, Q_dot_H, T_c, W_dot, eta}):

W_dot = simplify(eval(W_dot, new), size);

W_dot = (T_H-T_h)*UA_H*(UA_H*(T_H-T_h)+UA_C*(T_C-T_h))/(UA_H*(T_H-T_h)-UA_C*T_h)

 

Download Tarea1_ac3.mw

@nikoo This answer is nonsense.

Start off by stating clearly what it means to you for a point to be a solution to this system.

Does it mean that the forward error (substituting results into the rhs-lhs of the equations, and taking absolute values, say) must be less than some specific value, and if so what? At what working precision ought that to hold. Be specific.

Are you looking for a value such that -- at high enough working precision, and if the point is refined to enough accurate decimal places -- that forward error could be made arbitrarily small?

Or would you be content with results in which the points only minimize the forward error (or some other suitable objective) and that it can be approached as an optimization problem? You may be forced into this position due to the fact that your equations already have floating-point coefficients(!). Actual "roots" may not exist, because of imprecise coefficients in the equations.

The nature of your final results (approximates actual roots, only minimizes some objective, etc) may well bear on the validity of use of the results in further scientific computation. Few people consider that properly.
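As a sketch of what a forward-error check might look like (the system and candidate point below are hypothetical, just to show the shape of the computation):

```maple
# hypothetical system with floating-point coefficients, and a candidate point
eqs := { x^2 + y - 3.1 = 0, x*y - 1.2 = 0 }:
sol := { x = 1.543, y = 0.7186 }:

# evaluate the residuals at higher working precision, and take the
# worst-case absolute forward error over all the equations
Digits := 30:
fwderr := max(map(e -> abs(evalf(eval(lhs(e) - rhs(e), sol))), eqs));
```

A stated tolerance on fwderr, at a stated Digits, is one concrete way to make "is a solution" precise for a system like this.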

@Jameel123 If I go here then I see a link to here, which contains a link to this, which in turn has two links, apparently for version 1 and 2 of a package.

The second link has a download link (a compressed tar file named adyz_v2_0.tar.gz, apparently containing a "DESOLVII" instructional worksheet and a Desolv-IIa.mpl file).
