These are answers submitted by Carl Love

I don't know if this is the source of your problem, but it is nonetheless an error: The use statement doesn't work correctly in procedures. Instead, change

use XMLTools in

to

uses XMLTools;

and remove the end use;
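
For instance, here is a minimal sketch of the corrected structure (SomeExport is only a stand-in for whichever XMLTools export your procedure actually calls):

MyProc:= proc(s::string)
uses XMLTools;
    SomeExport(s)   # resolved as XMLTools:-SomeExport via the uses clause
end proc: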

@Scot Gould You wrote:

  • Is this a problem with just this function (and probably Maximization) or inherent in the entire Optimization package?

This will be a problem for any of the numerous numerical-analysis procedures that accept input functions in procedure form. This includes fsolve, dsolve, Statistics:-Fit, and numeric integration itself.

  • I believe I interpret it as: “If the list [K,r] is not a list of numerical values then return the procedural call with the arguments included.”

Yes, that's correct.

  • In Maple coding, is this the preferred technique for telling a calling function/procedure that the call failed?

No, it's only useful when the returned unevaluated call may be subject to subsequent evaluation. Genuine failures should be signaled with errors. Acer's technique is very common for procedures that are meant to ignore (rather than reject) nonnumeric input.

  • This may require me to learn more Maple coding than I have historically thought was necessary.

This syntax is very simple: return, procname, and args are keywords that can be used in any procedure. So, returning the unevaluated procedure call is always literally return 'procname'(args).
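
For instance, a minimal sketch (the procedure F and its trivial square-root body are invented just to show the pattern):

F:= proc(x)
    if x::numeric then sqrt(x)
    else return 'procname'(args)   # return the call F(x) unevaluated
    fi
end proc:
F(2.25), F(a);   # 1.500000000, F(a)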

  • In listing local variables, I failed to include the variable x, yet Maple did not flag it like it flagged f1 and f2 when I did not list them as local.

Nonlocals only get flagged when they're on the left side of a direct assignment (:=), the index variable of a for statement, or (in some versions of Maple) the index variable of seq, add, or mul.
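
For instance (throwaway procedures; the warning text is approximately what Maple prints):

f:= proc() y:= 3 end proc;
# Warning, `y` is implicitly declared local to procedure `f`
g:= proc() sin(y) end proc;
# no warning: y is only read, never assigned, so it remains global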

  • Does not listing “x” as a local variable cause problems, or is it simply a case of sloppy coding?

It will cause problems if x has been globally assigned.

  • And yes, it makes sense the K and r of the function are not the arguments K and r. I was unaware of the use of “:-K” as representing the variable of an expression.

The prefix :- on a variable name refers to the global instance of the variable. It's required when there are global and local instances of variables with the same ostensible name. The quotes, ':-K', mean "ignore the variable's assigned value, if any." Note that those are single forward quotes (apostrophes).
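
For instance, a small made-up illustration (the values 5 and 7 are arbitrary):

K:= 5:                                          # the global K has a value
f:= proc() local K:= 7; K, :-K, ':-K' end proc:
f();                                            # returns 7, 5, K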

  • Why does procedure proc4 execute nearly 8 times quicker when the “unapply” command is included within the procedure, as opposed to placing that function call outside of the procedure, i.e., defining df as a function and not an expression?

This is definitely a "down the rabbit hole" question. What Acer is doing is directly instantiating the numeric values of K and r in the body of the procedure constructed by unapply. This makes it a purely numeric procedure, which can thus be processed much faster by the numeric integrator. To go farther down that rabbit hole, see ?evalhf. I'm not saying that evalhf is necessarily being used in this case. There are other instances (for example, externally compiled code) where a purely numeric procedure will make things faster.
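
For instance, a hedged sketch (the logistic-style integrand and the values of K and r are invented; the point is only that unapply bakes the numbers into the procedure it constructs):

K, r:= 10., 0.5:
df:= unapply(r*x*(1 - x/K), x):   # body is now .5*x*(1 - .1000000000*x), purely numeric
evalf(Int(df, 0..1));             # operator-form numeric integration of that procedure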

The appropriate built-in command is rtable_scanblock. Although it is very fast, it has complicated syntax. See ?rtable_scanblock.

FirstPositive:= proc(A::{Vector, Array})
local ind:= (); 
    rtable_scanblock(
        A, [],
        proc(v,i) if v::realcons and is(v>0) then ind:= i[]; true fi end proc,
        'stopresult'= true
    );
    ind
end proc
:
FirstPositive(<0, 0, -1, 2, 0, 5>);
                               4

If your vectors are strictly numeric, the above can be improved. If they are also strictly real, it can be further improved.
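
For instance, a hedged sketch of the first improvement, assuming all entries are of type numeric so that the symbolic is(v>0) test can be replaced by a plain comparison (the further improvement for strictly real hardware-float data is not shown):

FirstPositiveNumeric:= proc(A::{Vector(numeric), Array(numeric)})
local ind:= (); 
    rtable_scanblock(
        A, [],
        proc(v,i) if v > 0 then ind:= i[]; true fi end proc,
        'stopresult'= true
    );
    ind
end proc
:
FirstPositiveNumeric(<0, 0, -1, 2, 0, 5>);   # 4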

Here is a much neater encoding of that theorem. In particular, pay attention to the angle-bracket matrix constructor <...>, which greatly simplifies the construction of block matrices. Also note the use of simultaneous multiple assignment (...):= (...), which eliminates the need for intermediary variables when updating multiple recurrences. (The parentheses are not required for multiple assignment; I just use them for clarity.)

Domineering:= proc(m::posint, n::posint, x::name, y::name)
local G0:= <<1>>, G1:= <<0>>, Z:= <<0>>;
    to n do
        (G1,G0,Z):= (<y*G0, Z; Z, Z>, <G0+G1, x*G0; G0, Z>, <Z, Z; Z, Z>)
    od;
    (G0^m)[1,1]
end proc
:                
Domineering(3,3,x,y); #Example call

The above uses 1D input (aka Maple Input).

In the 4th Reply to this Post, I wrote a tridiagonal solver that's much faster than anything offered by the LinearAlgebra package. As you suggest, my method uses prefactoring followed by back substitution. Thus, if the same coefficient matrix is used with different right-side vectors (as is typical), only the back substitution needs to be repeated.
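
This is not the code from that Reply, but here is a generic sketch of the same idea (the standard Thomas algorithm), assuming the subdiagonal a, main diagonal d, and superdiagonal c are given as Vectors of lengths n-1, n, and n-1:

TriFactor:= proc(a::Vector, d::Vector, c::Vector)
local dd:= copy(d), n:= op(1,d), w, i;
    w:= Vector(n-1);
    for i to n-1 do
        w[i]:= a[i]/dd[i];              # elimination multipliers
        dd[i+1]:= dd[i+1] - w[i]*c[i]   # modified pivots
    od;
    [w, dd, c]   # prefactored data, reusable for every right-side vector
end proc:

TriSolve:= proc(F::list, b::Vector)
local w:= F[1], dd:= F[2], c:= F[3], x:= copy(b), n:= op(1,b), i;
    for i to n-1 do x[i+1]:= x[i+1] - w[i]*x[i] od;                      # forward sweep
    x[n]:= x[n]/dd[n];
    for i from n-1 by -1 to 1 do x[i]:= (x[i] - c[i]*x[i+1])/dd[i] od;   # back substitution
    x
end proc:

A call such as TriSolve(TriFactor(a, d, c), b) then repeats only the cheap back-substitution part for each new b.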

Like this:

d:= x-> x^2+1: #denominator
Adj:= x-> 2*d(x)-x^3;
R:= 2 - (x^3+CurveFitting:-PolynomialInterpolation(
    [[1, Adj(1)], [2, Adj(2)], [3, Adj(3)]], x
))/d(x);

 

As is well documented, ListTools:-Classify is much more efficient than ListTools:-Categorize. It's also much more efficient than Acer's GL, and it can be used for your purpose via a two-line procedure:

GraphEigenvalueClassify:= (L::list([{Graph, name}, list(algebraic)]))->
    ([lhs,rhs]~@op@(g-> [map(`?[]`, g, [1])[]])~)(ListTools:-Classify(g-> g[2], L))
:

Here is an efficiency comparison:

restart
:

#Acer's code (verbatim):
#
GL := proc(LL)
  local U,C,T2,t,u,i,L;
  T2 := {map[2](op,2,LL)[]};
  for i from 1 to nops(T2) do
    C[T2[i]] := 0:
  end do:
  for L in LL do
    for t in T2 do
      if L[2]=t then
        C[t] := C[t] + 1;
        U[t][C[t]] := L[1];
        break;
      end if;
    end do;
  end do;
  [seq([lhs(u),convert(rhs(u),list)],u=op([1,..],U))];
end proc:
#

#Carl's alternative:
#
GraphEigenvalueClassify:= (L::list([{Graph, name}, list]))->
    [lhs,rhs]~(
        (op@map)(
            g-> [map(`?[]`, g, [1])[]],
            ListTools:-Classify(g-> g[2], L)
        )
    )
:

#Acer's efficiency-testing example:
#
N1,N2:= 10^3,10^4:
M := [seq(convert(LinearAlgebra:-RandomVector(3),list),i=1..N1)]:
f:=rand(1..N1):
MyNestedList := [seq([g||i,M[f()]],i=1..N2)]:
#

#Efficiency comparison:
#
Ans1:= CodeTools:-Usage(GL(MyNestedList)):
Ans2:= CodeTools:-Usage(GraphEigenvalueClassify(MyNestedList)):

memory used=131.81MiB, alloc change=7.00MiB, cpu time=1.73s, real time=1.74s, gc time=203.12ms
memory used=12.81MiB, alloc change=-2.00MiB, cpu time=109.00ms, real time=116.00ms, gc time=15.62ms

#Accuracy check:
#
nops(Ans1), nops(Ans2),
evalb(subsindets({Ans1[]}={Ans2[]}, list(name), L-> {L[]}));

1000, 1000, true

 

Download grouplist_test.mw

You haven't indicated whether your lists of eigenvalues are always sorted. If they aren't, and you want to group by sorted eigenvalues, then simply change g[2] to sort(g[2]) in my procedure. This has negligible effect on the efficiency.

Most of the new syntax features introduced in Maple 2018 and Maple 2019, such as do-until loops, only work in 1D input (aka Maple Input) mode. If you change your input mode, then you'll have no syntax errors. The issue raised by Tom Leslie is completely different: it's a logic error, an infinite loop, and logic errors don't produce error messages.
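
For instance, in 1D input the following runs with no syntax error (a throwaway loop just to show the do-until form):

n:= 1:
do  n:= 2*n  until n > 100:
n;   # 128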

By the way, I like your neat coding style and your use of the newer syntax.

To generate a sample of size 99, for example, do

X:= Statistics:-RandomVariable(ProbabilityTable([.2, .5, .1, .1, .1])):
S:= Statistics:-Sample(X, 99):
trunc~([seq(S)])

If the support is not an initial segment of positive integers, then use EmpiricalDistribution instead of ProbabilityTable.
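
For instance, a hedged sketch for the arbitrarily chosen support {3, 7, 11, 13, 17}: one way to encode the probabilities .2, .5, .1, .1, .1 is to repeat each support point proportionally in the data Vector given to EmpiricalDistribution.

V:= <3, 3, 7, 7, 7, 7, 7, 11, 13, 17>:
X:= Statistics:-RandomVariable(EmpiricalDistribution(V)):
S:= Statistics:-Sample(X, 99):
round~([seq(S)])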

There's a factor of 1/sqrt(x) at the start of the expression. Thus, you need to use 

limit(asa, x= 0)

rather than

subs(x= 0, asa)

Do

remove(type, ListTools:-Flatten(w), {constant, linear})

Or, if you want to preserve the sublist structure (no matter how complicated), do

evalindets[2](w, list, type, remove, {constant, linear});

In either case, the resulting list of nonlinear terms can be counted by

nops(ListTools:-Flatten(%))

Maple has no problem doing the animation. The problem was that you didn't state the problem clearly. 

My animation is similar to Acer's, but I used a tangent transformation to smooth the rotation. 

restart:
Cons:= {4*x1+x2 <= 12, x1-x2 >= 2, x1 >= 0, x2 >= 0}:
Obj:= c1*x1 + 2*x2: 
LP:= proc(C1)
option remember;
    if C1::numeric then
         Optimization:-LPSolve(eval(Obj, c1=  C1), Cons, maximize)[1]
    else
        'procname'(C1)
    fi
end proc
:
plots:-animate(
    plot, 
    [
        (LP(tan(c1))-tan(c1)*x1)/2, x1= 0..4, x2= -1..2, thickness= 3, color= red,
        title= 'sprintf'("c1 = %4.2f, Z = %4.2f", tan(c1), LP(tan(c1)))
    ],
    c1= -Pi/2..Pi/2, frames= 300, paraminfo= false,
    background= plots:-inequal(Cons, x1= 0..4, x2= -1..2, nolines)
);

There's no way that you're going to get an interesting result for this, because Maple doesn't even know how to differentiate KummerU with respect to its first parameter. But if you just want a result, any result, instead of an error message, you could do

series(''KummerU''(p, 3/2, t), p);

That's two sets of single quotes, not one set of double quotes.

Your underlying plot command is nonsense because it specifies a plot with respect to two variables, x1 and x2. There can only be one such variable.

Your trouble has nothing to do with the background option.

You can't specify both the value of the step size h and the desired accuracy at the same time because they depend on each other.

You'll also need to specify either the number of steps or the final x-value. Usually n is the number of steps, so are you sure that you're interpreting the instructions correctly?

Your dx and point_list don't make sense because they're loop invariants: their values don't change in the loop.
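
For contrast, here is a generic fixed-step Euler sketch (f, the initial point, h, and n are invented for illustration): note that both x and point_list are updated inside the loop, so neither is a loop invariant.

EulerPoints:= proc(f, x0, y0, h, n::posint)
local x:= x0, y:= y0, point_list:= [[x0, y0]], i;
    for i to n do
        y:= y + h*f(x, y);
        x:= x + h;
        point_list:= [point_list[], [x, y]]
    od;
    point_list
end proc:
EulerPoints((x,y)-> y, 0, 1, 0.1, 10);   # Euler approximation to y'= y, y(0)= 1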
