Carl Love

28070 Reputation

25 Badges

13 years, 29 days
Himself
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

Here's the same code done with Maple 2019 syntax. Note that this also gives a major efficiency boost and shortens the code.

OrbitPartition:= (
    parms::And(set, listlist &under [op]),
    tuple_size::posint,
    permutations_by_index::list(table)
)->
local
    C, i, Done:= table(), 
    T:= proc(T,j) option remember; `if`(assigned(T[j]), T[j], j) end proc,
    conds:= varCoef-> map(x-> T~(permutations_by_index, x), varCoef),
    Orbit:= x-> local r:= x; {do Done[r]:= r until assigned(Done[(r:= conds(r))])}[1]
;   
    {seq}(
        if assigned(Done[(i:= parms[[seq(C+~1)]])]) then else Orbit(i) fi,
        C= Iterator:-Combination(nops(parms), tuple_size)
    )
:
parms:= {seq(seq([i,j], j= 0..9), i= 1..3)}:
T1:= table([2= 3, 3= 2, 5= 6, 6= 5, 7= 9, 9= 7]):
T2:= table([2= 3, 3= 2]):

newabc:= CodeTools:-Usage(OrbitPartition(parms, 7, [T2,T1])): 
memory used=6.13GiB, alloc change=0.77GiB, cpu time=80.48s, 
real time=51.26s, gc time=36.52s
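The helper T above is just a remembered table lookup that falls back to the identity when an index has no entry. A minimal sketch, reusing the same T1 defined above:

T1:= table([2= 3, 3= 2, 5= 6, 6= 5, 7= 9, 9= 7]):
T:= proc(T, j) option remember; `if`(assigned(T[j]), T[j], j) end proc:
T(T1, 2);  # 3 (mapped by the table)
T(T1, 4);  # 4 (identity: 4 has no entry in T1)

Because of option remember, each (table, index) pair is looked up at most once across the whole run.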

 

@emendes That issue is addressed by my next Answer, "New paradigm: Partitioning by orbits".

I don't know what you mean by the "coefficient matrix" for a nonlinear system. Perhaps that concept is defined somewhere, and I'm just not aware of it.

The key to my understanding your problem was your saying that "something like" produced your desired results. Once I have a model that's known to work, it's much easier for me to duplicate its results at higher efficiency.

@emendes Yes, it can be done with Iterator, and there's even an additional factor-of-3 time improvement (due to a few other improvements in addition to Iterator)!

parms:= {seq(seq(alpha[i,j],j=0..9),i=1..3)}:
k:= 6: #tuple size
tab:= table():

T1:= table([2= 3, 3= 2, 5= 6, 6= 5, 7= 9, 9= 7]):
T2:= table([2= 3, 3= 2]):
T:= proc(T::table, j) option remember; `if`(assigned(T[j]), T[j], j) end proc:

conds:= (varCoef::set(specindex(alpha)))->
    map(x-> :-alpha[T~([T2,T1], [op(x)])[]], varCoef)
:

for C in Iterator:-Combination(nops(parms), k) do
    i:= parms[[seq(C+~1)]];
    unassign('tab[conds(i)]');
    tab[i]:= ()  
end do:

newabc:= {indices(tab, nolist)}:

On my computer, this does the 7-tuples in 76 seconds with 292 MB of memory.
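The C+~1 in the loop above compensates for Iterator:-Combination iterating over 0-based combinations while parms is indexed from 1. A small sketch of that behavior:

for C in Iterator:-Combination(4, 2) do
    print([seq(C+~1)])  # 1-based index pairs: [1,2], [1,3], [1,4], [2,3], [2,4], [3,4]
end do;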

@vv I agree that my procedure is not very useful. Its usefulness was not my motivation for posting it. My main point was showing that functions (such as ln) are evaluated correctly, provided that their arguments are evaluated correctly, and you can generally rely on that. It's only the arithmetic operators (the `+` in this case) that you need to worry about.

@vv That would involve careful analysis to determine the number of terms of the series to use. And that would be done at the Maple level. The way I have it, the job is passed to `evalf/ln`, which passes it (after appropriate manipulations) to `evalf/hypergeom`, which (after more Digits manipulations) passes it to `evalf/hypergeom/kernel`. This last procedure is highly optimized, and builtin, so it runs fast. 

If you read several of Maple's `evalf/...` procedures, you'll see that very few of them use series. Most evaluations are ultimately passed to `evalf/hypergeom/kernel`, it being the main procedure where the serious numerical analysis algorithms are used. That's what I meant by "Maple's whole network of evalf procedures is built this way." 

@vv Yes, that's true. With some more thought, I could've reduced that number. The majority of `evalf/...` procedures work by increasing Digits. The increase is localized to the procedure and its children, so the extra memory usage is just a few words.
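The localization works because Digits is an environment variable: an assignment to it inside a procedure is automatically undone when the procedure returns. A minimal sketch:

Digits:= 10:
p:= proc(x) Digits:= 2*Digits; evalf(x) end proc:
p(Pi);   # evaluated at 20 digits
Digits;  # still 10: the increase did not persist past the call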

The sin(t) in your problem doesn't quite make sense because sin should be applied to angles, not to times. So it should be sin(A*t) for some constant A. Presumably, the magnitude of A is 1, but what about its units? Are they radians/minute, degrees/minute, cycles/minute, or perhaps something else? These choices make a big difference in the solution.
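For example (with a hypothetical value), if A were specified in cycles per minute, it would need a 2*Pi factor before being passed to sin, which expects radians:

A_cycles:= 3:        # hypothetical: 3 cycles per minute
A:= 2*Pi*A_cycles:   # radians per minute, as sin expects
expr:= sin(A*t);     # t in minutes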

@tomleslie I got the same weirdness with the numeric indices as you did. 

@baharm31 Note that a least-squares solution is not a solution (in the ordinary sense of the word) nor even an approximation to a solution. It's just values of the variables that (locally) minimize the sum of the squares of the residuals.
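A minimal sketch with a hypothetical overdetermined system: the least-squares x leaves a nonzero residual, so it does not satisfy the equations in the ordinary sense:

A:= Matrix([[1, 1], [1, 2], [1, 3]]):
b:= <1, 2, 2>:
x:= LinearAlgebra:-LeastSquares(A, b);
A.x - b;  # nonzero residual vector: x does not satisfy A.x = b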

@baharm31 No, but the presence of a trivial solution makes other solutions more difficult (though not impossible) to find because most methods will converge on this easy solution. 

@AHSAN Why do you want to use an arcane method such as false position when you could just use fsolve?

Your equation has two free variables, lambda and k. So, it's not clear what you mean by "solving" the equation. And even if you clarify that, there's no hope of doing it with an iterative numerical method.
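As a hypothetical illustration (not your equation, which as posted has two free variables): once an equation is reduced to a single unknown, fsolve handles it directly, and it even accepts a bracketing range much like false position does:

eq:= x*exp(x) = 2:    # hypothetical single-unknown equation
fsolve(eq, x= 0..2);  # ~0.8526, cf. LambertW(2)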

@emendes If there are no multiple copies, and we retain one copy of the target expression, then what is being removed?
