Carl Love

MaplePrimes Activity


These are replies submitted by Carl Love

@jum The last command given in my code above, unapply, converts the matrix-vector expression into an algebraic function with no matrices or vectors. If you end this command with a semicolon rather than a colon, you will see this algebraic representation.
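For instance (a tiny illustrative quadratic form, not your actual matrices):

```maple
with(LinearAlgebra):
A:= <<1,2> | <3,4>>:      # 2x2 Matrix
v:= <x, y>:               # symbolic column Vector
e:= v^%T . A . v:         # scalar expression built by Matrix-Vector algebra
f:= unapply(e, [x, y]);   # purely algebraic function; no Matrix or Vector remains
f(1, 2);                  # 27
```

Displaying `f` shows an ordinary polynomial in `x` and `y`, which is exactly the "algebraic representation" that ending the `unapply` command with a semicolon reveals.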

@Carl Love Multiple assignment can be done like this:

k||(1..2):= $1..2;

@Arif Ullah khan I have read the paper that you sent me. In section 2.2 of the paper, the author describes an ad hoc method for the solution of these equations via a closely related system of two uncoupled single-equation IVPs. The method is very simple, almost too simple to be believed. I haven't implemented it in Maple, but you should try it, and I'll help you if you get stuck.

Later on, the author describes a problem using the shooting method: For some parameter values, there are 2 or 4 solutions to the BVP. I have encountered this problem for the parameter values gamma=1, rho=nu=20. I suspect that the author's ad hoc method also suffers this problem, and it's just a matter of choosing different initial values until something works.

@dharr The OP uses a recent version of Maple (2020, I think) but is attempting to learn from some very old worksheets.

@Ali Hassani Here are some minor corrections and/or refinements of your summary.

You wrote:

  • 1) A process is an "independently running" with no sharing the memory and, consequently, needs more memory than the "Threads" package which fundamentally works based on the shared memory.

Using the Grid package for parallel programming involves starting multiple processes, each of which is like a complete Maple session (but without a separate user interface), so each one uses as much memory as a single Maple session, plus whatever memory is used for your computations: all the globals and locals.

When using Threads, everything happens in a single process. It uses only slightly more memory than a serial computation; only the local variables of procedures running in parallel need duplication; none of the "overhead" of setting up a Maple kernel needs to be duplicated.

  • 2) The word "task" is not essentially limited to "Threads". "Task" is the smallest work unit.

Whereas process has a fairly widely accepted specific meaning (which you can look up as "Process (computing)" in Wikipedia), task does not (also look up "Task (computing)"). But, as for the Threads and Grid packages, I think that it's okay to consider a task to be the smallest unit into which the work is divided by those packages. Certainly, this is the meaning of task meant by the option tasksize, which is used by both packages.

The unit of computation in Threads that is analogous to process in Grid is the thread. Look up "Thread (computing)" in Wikipedia. Actually, this is the Wikipedia article that I think best explains the difference between shared-memory (like Threads) and independent-memory (like Grid) parallel computing.

  • 3) Too large "tasksize" may be resulted in being idle some processors, and too small "tasksize" could lead to more large CPU-time.

A large tasksize may lead to some idle processors and thus a longer real time (and maybe also a shorter CPU time). A small tasksize will certainly lead to more CPU time, and, if it's too small, more real time. One needs to find the "sweet spot" in the middle that minimizes real time.
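For example, one could time a few tasksize values and compare (an illustrative sketch with a threadsafe toy function; if I recall correctly, the option is given as an index to the command):

```maple
f:= x-> add(sqrt(1.0*k), k= 1..10^4):   # threadsafe: pure hardware-float arithmetic
L:= [$1..400]:
CodeTools:-Usage(Threads:-Map(f, L)):                 # default tasksize
CodeTools:-Usage(Threads:-Map[tasksize= 1](f, L)):    # tiny tasks: CPU time grows
CodeTools:-Usage(Threads:-Map[tasksize= 200](f, L)):  # 2 huge tasks: cores may idle
```

The middle of those extremes, found by experiment, is the "sweet spot" that minimizes real time.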

  • 4) If code is threadsafe, "Threads:-Map" command is preceded Grid:-Map.

It can be stated more generally: If the code is threadsafe, then using the Threads package will be more efficient than using Grid, regardless of the specific commands used (Map, Seq, or something else).
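The two Maps have essentially the same calling sequence, so switching between them is easy (trivial example):

```maple
L:= [$1..8]:
Threads:-Map(x-> x^2, L);  # shared-memory threads
Grid:-Map(x-> x^2, L);     # separate processes, each a full kernel
# both return [1, 4, 9, 16, 25, 36, 49, 64]
```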

  • Unfortunately, because of uncontrolled sharing in writing in memory, many commands in Maple are not threadsafe.

Yes, that's 100% correct.

Your test function has a discontinuous noise component: a normal random variable of fixed variance. I don't think that that's what Calvin's algorithm was intended for. Here is an example with a continuous noise component, a Wiener process:

Fi:= Finance:  CF:= CurveFitting:
npts:= 10^4:
WS:= Fi:-SamplePath(
    Fi:-WienerProcess(<<.1>>)(t), t= 0..1, timesteps= npts-1
)[1,1]:
T:= rtable(1..npts, k-> (k-1)/(npts-1), datatype= float[8]):
WF:= t-> CF:-ArrayInterpolation(T, WS, t, uniform):
W:= t-> (t-0.5)^2 + WF(t):
CodeTools:-Usage(Calvin(W, 350));
memory used=457.37MiB, alloc change=0 bytes, cpu time=7.61s, 
real time=7.47s, gc time=437.50ms

            [0.702660743159389, -0.527027281446182]

plot(W, 0..1);

The computations take much longer because I discretized the Wiener process to a very large number of points, 10,000.

@ ArrayTools:-Insert was introduced in Maple 2016. Here's a retrofit for the above code that doesn't use it:

Calvin:= proc(
    X::procedure,
    n::posint, 
    gamma::procedure:= (n-> sqrt(n+1)/ln(n+2))
)
description "Minimize continuous function with random noise on 0..1";
option
    `Reference: `,
        `https://www.sciencedirect.com/science/article/pii/S0885064X01905746 `,
        `James M. Calvin, "A One-Dimensional Optimization Algorithm and Its `,
        `Convergence Rate under the Wiener Measure", _Journal of Complexity_ `,
        `17, 306-344 (2001)`,
    `Maple author: Carl Love <carl.j.love@gmail.com> 2020-Aug-27`
;     
local 
    T:= rtable(1..n+2, [0, 1], datatype= float[8], subtype= Vector[row]),
    XT:= X~(T), Y:= copy(XT),
    t, tmin, M, z:= evalf(gamma(0)), tau:= 1, k, i,
    Insert:= proc(V::Vector, i::posint, k::posint, x)
        V[i+1..k+2]:= V[i..k+1];
        V[i]:= x
    end proc    
;
    (tmin,M):= `if`(XT[1] < XT[2], [T[1],XT[1]], [T[2],XT[2]])[];
    for k to n do
        Y[..k+1]:= XT[..k+1] -~ (M - z);
        i:= 1 + max[index]( (T[2..k+1] - T[..k]) /~ Y[2..k+1] /~ Y[..k]);
        t:= T[i-1] + (T[i]-T[i-1])*Y[i-1]/(Y[i-1]+Y[i]);
        tau:= min(tau, t - T[i-1], T[i] - t);
        z:= min(z, evalf(gamma(k))*sqrt(tau));
        Insert~([T, XT], i, k, [t, evalf(X(t))]); 
        if XT[i] < M then (tmin,M):= (t, XT[i]) fi;
    od;
    [tmin, M]
end proc
:
N:= Statistics:-Sample(Statistics:-RandomVariable(Normal(0, 0.1))):
S:= Array(1..1, datatype= float[8]):
W:= t-> 1 + (t - 0.5)^2 + N(S)[1]:
CodeTools:-Usage(Calvin(W, 350)); 
memory used=20.76MiB, alloc change=0 bytes, cpu time=141.00ms, 
real time=146.00ms, gc time=0ns

             [0.495821549079493, 0.710534027290144]

Regarding the precision, or Digits setting: It's ridiculous to use any precision higher than hardware floats for this algorithm or for an objective function with this much noise.

@dharr In this case, I think that Nu stands for the Nusselt number. There are so many named parameters in boundary-layer fluid dynamics that they usually go by two-letter abbreviations.

@nguyenhuyenag The embedded assignment was introduced in Maple 2018. By using it, I was able to avoid having the same statement, vars:= indets(f) minus kv, appear both before the loop and inside the loop. That's just a stylistic preference, and has little effect on efficiency.
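A self-contained sketch of that pattern (`f` and `kv` here are just illustrative stand-ins, not your actual values):

```maple
f:= a*x^2 + b*x + c:
kv:= {x}:
# The assignment embedded in the condition replaces repeating
# "vars:= indets(f) minus kv" both before and inside the loop:
while (vars:= indets(f) minus kv) <> {} do
    kv:= kv union {vars[1]}   # process one new variable per pass
od:
kv;  # {a, b, c, x}
```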

@ In your code, you've used a sequence theta(n) for what Calvin calls gamma(n). You've defined theta(n) as 1/(1+n), but this does not have the properties listed at the bottom of p 308: "an increasing sequence of positive numbers such that gamma(n) -> infinity, gamma(n)^2*ln(gamma(n))/n -> 0, and gamma(n)/gamma(n+1) -> 1." So, this is your error, or perhaps one of your errors.
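You can check those properties symbolically. The limits below are the mathematical values, which Maple's limit should confirm; the default gamma from my code above satisfies all three, whereas 1/(1+n) fails the first (I use g rather than gamma, which is Maple's protected name for Euler's constant):

```maple
g:= n-> sqrt(n+1)/ln(n+2):
limit(g(n), n= infinity);               # infinity
limit(g(n)^2*ln(g(n))/n, n= infinity);  # 0
limit(g(n)/g(n+1), n= infinity);        # 1
limit(1/(1+n), n= infinity);            # 0, not infinity: fails the first property
```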

@ Okay, I can read the paper now. But I don't see a formal listing of an algorithm or any pseudocode. What part of the paper have you translated into Maple?

I noticed that you added a try...catch statement. What type of errors has it been catching?

@ Your Maple procedure WP declares several variables that aren't used in the code. That's a sign that you missed something. The variable a is initialized for each iteration of the main loop, but it isn't otherwise used.

Do the simplifications depend on anything that's not explicitly in the expressions being simplified? I'm primarily thinking of assume commands.

What's the value of nops(dummy_UU1)?

What's the average CPU time per simplification for the serial code? In other words, what's the number returned by

CodeTools:-Usage(map(simplify, dummy_UU1), output= cputime, quiet)/nops(dummy_UU1);

Have you tried Grid:-Map without the tasksize option?

You didn't reply to my extensive Answer to one of your other Questions: https://www.mapleprimes.com/questions/230277-What-Is-The-Conception-Of-process

Nor did you respond to the two emails that I sent you.

I spent several hours researching and writing that Answer.

@ The body of the Elsevier / Science Direct article that you linked to is not accessible without paying a fee.

Does your Maple version produce the wrong results? Or does it simply take too long? If it's the latter, I can fix it without needing to see the original. 
