Carl Love

28015 Reputation

25 Badges

12 years, 297 days
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are answers submitted by Carl Love

For a linear, homogeneous system of ODEs such as yours, any Runge-Kutta method can be done as a single matrix-vector multiplication for each solution point: the previous solution point times the matrix. If the system is also autonomous (as yours is), the matrix doesn't change; if it's not autonomous, the matrix can be expressed as a matrix function of the independent variable. The computational effort needed to construct these matrices is trivial. So, if the number of solution points is large, all R-K methods applied to systems such as yours require essentially the same amount of effort regardless of the order of the method.
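As an aside for readers outside Maple, the observation above can be sketched in plain Python (an illustrative, made-up 2x2 system, not the one in this Question): for y' = A.y, one classical RK4 step equals multiplication by the fixed matrix S = I + hA + (hA)^2/2 + (hA)^3/6 + (hA)^4/24.

```python
# Illustrative sketch only (not Maple): one RK4 step for a linear
# autonomous system y' = A.y is a single matrix-vector product.

def mat_vec(A, v):
    # matrix times vector, plain lists
    return [sum(a*x for a, x in zip(row, v)) for row in A]

def mat_mat(A, B):
    # matrix times matrix, plain lists
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rk4_step_matrix(A, h):
    # S = I + hA + (hA)^2/2 + (hA)^3/6 + (hA)^4/24
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    hA = [[h*x for x in row] for row in A]
    S, P = I, I
    for c in (1.0, 1/2, 1/6, 1/24):
        P = mat_mat(P, hA)
        S = [[S[i][j] + c*P[i][j] for j in range(n)] for i in range(n)]
    return S

def rk4_step(A, y, h):
    # the usual four-stage RK4, specialized to f(y) = A.y, for comparison
    f = lambda v: mat_vec(A, v)
    k1 = f(y)
    k2 = f([yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f([yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
```

The two give identical steps (to roundoff), which is why the order of the method costs essentially nothing extra once the step matrix is built.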

You said that you didn't see a reduction in error when going from RK2 to RK3 to RK4. My results below contradict that; I get a substantial error reduction at each higher order.

restart:
Digits:= 15:
LA:= LinearAlgebra:

#Setup ODEs. This is your system entered in a neater form.
As:= [seq](A[k], k= 2..7)(t): #dependent functions
nsys:= nops(As):
M:= <
    3626,  -189,   -513,   -630,   -945,   -1323  ;
    14544, -1392,  -2784,  -4415,  -6960,  -9744  ;
    12992, -2016,  -4032,  -6720,  -10215, -14112 ;
    81408, -19200, -38400, -64000, -96000, -133455;
    26,    -9,     -18,    -30,    -45,    -63    ;
    6,     -3,     -6,     -10,    -15,    -21    #
>:
pi:= [seq](Pi^k, k= 0..nsys-1):
sys:= diff~(As,t) =~ 
    [seq](x, x= M.<As*~pi>)*~[-28, 28, -70, 14, -28672, 32768]/~pi/~(315*Pi^2):

#initial conditions:
ICs:= <
    -0.001210685373, -0.1636380032,   -0.003838287851,
    0.01100619795,   -0.001005637627, -0.00002775982849
>:
(t0,tf):= (0,1): #solution interval
npts:= 21: #evaluation points (including ICs at t0) 
for k to 4 do #create matrices to hold solutions for RK1, ..., RK4
    resMat[k]:= Matrix((npts,nsys), datatype= hfloat):
    resMat[k][1]:= ICs:
od:
h:= evalf((tf-t0)/(npts-1)): #stepsize

#Extract coefficient matrix of ODE system and transpose it: 
f:= Matrix(evalf[18](LA:-GenerateMatrix(rhs~(sys), As)[1])^%T, datatype= hfloat):

#Construct matrices for RK1, ..., RK4:
Id:= Matrix(nsys, shape= identity):
RK[1]:= Id+h*f:          #aka forward Euler's method 
RK[2]:= (Id+RK[1]^2)/2:  #aka Heun's method          
RK[3]:= (Id+(Id+RK[2]).RK[1])/3:                   
RK[4]:= (3*Id + (2*(Id+RK[3])+RK[1]).RK[1])/8:      

#Solve by each method:
for k to 4 do
    for j from 2 to npts do resMat[k][j]:= resMat[k][j-1].RK[k] od 
od:
     
#Get matrix-form solution from high-accuracy standard Maple solver:
sol:= dsolve(
    {sys[], seq(x, x= (eval(As, t= t0)=~ICs))}, numeric, abserr= 1e-13,
    output= Array(1..npts, k-> t0+(k-1)*h)
)[2,1][.., 2..]
:
#Compute sum-squared error (all 6 A(t) functions) for each method:
for k to 4 do `Sum-squared error RK`[k] = add(x^2, x= sol-resMat[k]) od;
        Sum-squared error RK[1] = 0.0000302404426180997
        Sum-squared error RK[2] = 9.36429863900565e-8
        Sum-squared error RK[3] = 6.24726072793177e-9
        Sum-squared error RK[4] = 2.24644854174923e-10

 

Maple has at least three packages for constructing combinatorial objects: combinat, Iterator, and combstruct. Additionally, many combinatorial objects can be created by nesting (or folding) seq commands to an arbitrary depth (by using foldl). This method tends to be extremely fast. The code below is 4-5 times faster than Tom's combinat method and my polynomial method (both of which take roughly the same time) when there are thousands of sublists or more.

AllS:= proc(L::list, n::posint)
local j, k, v:= nops(L), K:= [seq](k[j], j= 1..v);
    [value(
        {foldl}(%seq, `$`~(L,K), seq(k[j]= 0..n-`+`(K[j+1..][]), j= 1..v))
    )[2..][]]
end proc
:
L__s:= CodeTools:-Usage(AllS([x,y,z], 4));
memory used=17.08KiB, alloc change=0 bytes, cpu time=0ns, real time=0ns, gc time=0ns
L__s := [[x], [y], [z], [x, x], [x, y], [x, z], [y, y], [y, z], 
  [z, z], [x, x, x], [x, x, y], [x, x, z], [x, y, y], [x, y, z], 
  [x, z, z], [y, y, y], [y, y, z], [y, z, z], [z, z, z], 
  [x, x, x, x], [x, x, x, y], [x, x, x, z], [x, x, y, y], 
  [x, x, y, z], [x, x, z, z], [x, y, y, y], [x, y, y, z], 
  [x, y, z, z], [x, z, z, z], [y, y, y, y], [y, y, y, z], 
  [y, y, z, z], [y, z, z, z], [z, z, z, z]]
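For comparison, the same enumeration (all multisets of sizes 1..n drawn from L) can be sketched outside Maple in a couple of lines of Python; this is purely illustrative, not related to the foldl technique above.

```python
# Illustrative sketch only: all "sublists" (multisets) of sizes 1..n from L.
from itertools import combinations_with_replacement

def all_sublists(L, n):
    return [list(c)
            for k in range(1, n + 1)
            for c in combinations_with_replacement(L, k)]

subs = all_sublists(['x', 'y', 'z'], 4)  # 34 sublists, same as the output above
```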

 

You can interpolate the matrix to turn it into a function over the reals rather than over the integers, which makes it easier to use with animate. In the example below, I made up a function to apply to a discrete dataset to simulate the two vectors of 2970 values that you're starting with.

restart:
N:= 2970:
stoimostyM:= sort(rtable(1..N, frandom(1..99, 1), datatype= hfloat)^+);
izmeniya:= (x-> sin(x^1.3)/sqrt(x))~(stoimostyM):
a:= <stoimostyM | izmeniya>:
f:= Interpolation:-LinearInterpolation(a):
plots:-animate(plot, [f, 1..X, numpoints= 500], X= 1..99, frames= 100);
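The idea behind LinearInterpolation is simple enough to sketch in a few lines of Python (illustrative only, with made-up data): sorted (x, y) samples become a piecewise-linear function defined for any real x in range.

```python
# Illustrative sketch only: piecewise-linear interpolation of sorted samples.
from bisect import bisect_right

def linear_interp(xs, ys):
    def f(x):
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        i = bisect_right(xs, x) - 1          # segment containing x
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return ys[i] + t * (ys[i + 1] - ys[i])
    return f

f = linear_interp([0.0, 1.0, 2.0], [0.0, 10.0, 0.0])  # made-up data
```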

Use polynomial arithmetic to construct all possible terms and then deconstruct the terms (without their coefficients) into lists by using op, `$`, and subsindets. Like this:

All:= (L::list(name), n::posint)->
    [subsindets(
        subsindets(([primpart]~@{op}@expand)(`+`(1,L[])^n - 1), `*`, op),
        `^`, `$`@op
    )[]]
:
All([x,y,z], 4);
[[x], [y], [z], [x, x], [x, y], [x, z], [y, y], [y, z], [z, z], 
  [x, x, x], [x, x, y], [x, x, z], [x, y, y], [x, y, z], 
  [x, z, z], [y, y, y], [y, y, z], [y, z, z], [z, z, z], 
  [x, x, x, x], [x, x, x, y], [x, x, x, z], [x, x, y, y], 
  [x, x, y, z], [x, x, z, z], [x, y, y, y], [x, y, y, z], 
  [x, y, z, z], [x, z, z, z], [y, y, y, y], [y, y, y, z], 
  [y, y, z, z], [y, z, z, z], [z, z, z, z]]

 

These are called delay differential equations. They can be solved numerically by Maple. See the help page ?dsolve,numeric,delay. You'll need to use the options delaymax and parameters.
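Maple's solver is the right tool here; purely to illustrate the "method of steps" idea underlying delay differential equations, here's a hypothetical fixed-step Euler sketch in Python for the toy problem y'(t) = -y(t-1) with history y(t) = 1 for t <= 0.

```python
# Illustrative sketch only (not Maple's algorithm): Euler for a DDE,
# looking up the solution one delay in the past at each step.
def dde_euler(tf, h, delay=1.0):
    n = round(tf / h)
    lag = round(delay / h)            # h is assumed to divide the delay
    y = [1.0]                          # y[0] = y(0), history y = 1 for t <= 0
    for i in range(n):
        past = 1.0 if i - lag < 0 else y[i - lag]  # y(t_i - delay)
        y.append(y[i] - h * past)                  # y' = -y(t - delay)
    return y
```

On [0, 1] the delayed value is the constant history, so the exact solution is y(t) = 1 - t there, and the Euler sketch reproduces y(1) = 0 to roundoff.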

The command is plots:-display. See the help page ?plots,display.

Here's some code for it:

restart:
f:= (x,y)-> 100*(y-x^2)^2 + (1-x)^2: #objective
fx:= D[1](f);  fy:= D[2](f); #its gradient
(initx, inity):= (-1, 1):

PathMin:= proc(a,b)
local 
    t, #unassigned symbolic

    #x- and y- coordinate functions along steepest-descent line:
    X:= unapply(a-fx(a,b)*t, t),
    Y:= unapply(b-fy(a,b)*t, t),

    #objective function restricted to steepest-descent line: 
    P:= unapply(f((X,Y)(t)), t),

    R:= [fsolve](D(P)(t), t= 0..infinity),
    m:= min(P~(R))
;
    if R::{[], specfunc(fsolve)} then
        print("No critical points along line");
        return
    fi;
    #new values for a and b:
    (X,Y)(select(r-> P(r)=m, R)[1])
end proc
:
(a1,b1):= (initx,inity):
(a1,b1):= PathMin(a1,b1);
                    a1, b1 := 1.000000000, 1

So, we got to the global minimum in one step. I think that's just due to having a super-lucky choice of initial point. 
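The same step can be spot-checked outside Maple. Below is an illustrative Python sketch of one steepest-descent step with an exact line search (found here by a dense scan of the line parameter rather than fsolve); from the same lucky initial point (-1, 1) it lands exactly on the global minimum (1, 1) of this Rosenbrock-type objective.

```python
# Illustrative sketch only: one steepest-descent step with a line search.
f  = lambda x, y: 100*(y - x**2)**2 + (1 - x)**2   # objective
fx = lambda x, y: -400*x*(y - x**2) - 2*(1 - x)    # d f / d x
fy = lambda x, y:  200*(y - x**2)                  # d f / d y

def path_min(a, b, tmax=1.0, n=2000):
    gx, gy = fx(a, b), fy(a, b)
    # minimize f along the steepest-descent line by a dense scan of t
    ts = [tmax*i/n for i in range(n + 1)]
    t = min(ts, key=lambda s: f(a - gx*s, b - gy*s))
    return a - gx*t, b - gy*t
```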

We can generalize from 2*Pi/14 to 2*Pi/n, use symbolic summation, and still get the simplification. Like this:

restart:
w:= 2*Pi/n:
v:= <1, (sin,cos)(w*t)>:
simplify(sum(v.v^%T, t= k..k+n-1));

              (output: the 3x3 diagonal Matrix with diagonal entries n, n/2, n/2)

Note that the LinearAlgebra package and the commands with, Vector, and Transpose are not needed. The notation (...)^%T is a transpose operator; it can also be (...)^+ in 1D (plaintext) input. The <...> is a column-vector constructor. The (f,g)(x) is equivalent to f(x), g(x).
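The identity behind that simplification is easy to spot-check numerically; here's an illustrative Python check with made-up n and k: summing v.v^T over any n consecutive integers t, with v = <1, sin(2*Pi*t/n), cos(2*Pi*t/n)>, gives the diagonal matrix diag(n, n/2, n/2) for n > 2.

```python
# Illustrative numeric check only: sum of v.v^T over one full period.
from math import sin, cos, pi

def gram_sum(n, k):
    S = [[0.0]*3 for _ in range(3)]
    for t in range(k, k + n):
        v = [1.0, sin(2*pi*t/n), cos(2*pi*t/n)]
        for i in range(3):
            for j in range(3):
                S[i][j] += v[i]*v[j]
    return S
```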

If your report is accurate, then that's a bug, but that bug has long since been fixed. I see that you're using Maple 18. In Maple 2019, I get

Logic:-Dual(a &implies b);
               `&not`(b &implies a)

And that is equivalent to `&not`(a) &and b.
 

Once you have the module pds, you can do this to be reminded of the module-specific commands available (which are called the module's exports):

exports(pds);
             plot, plot3d, animate, value, settings

Some examples of using those:
pds:-plot3d(u(x,t), x= 0..1, t= 0..1);

pds:-plot(theta(x, t), x= 0..1, t= 0.1);

pds:-value(u(x,t))(0.2, 0.1);
      [x = 0.2, t = 0.1, u(x, t) = 0.0597342089921277242]

Here's a procedure for it. No package is needed:

ArrowOnTop:= proc(x::symbol, arrow::symbol:= `&rightarrow;`)
    nprintf(`#mover(mn(%a), mo(%a))`, cat("", x), cat("", arrow))
end proc
:
ArrowOnTop(x);

              (output: x displayed with a right arrow on top)

To get the x in italic, change mn to mi.

In most or all of the cases that you're interested in, a formula is known for arbitrary terms of the power series, and that formula can be presented as an inert summation like this:

'tan@@(-1)'(x) = convert(arctan(x), FormalPowerSeries);

FormalPowerSeries can be abbreviated FPS.

So, this doesn't use the series command or the series data structure at all (not even in the background) and thus completely gets around the issue of O(...) or `...`.
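The inert summation that convert(..., FormalPowerSeries) returns for arctan is the familiar sum over k >= 0 of (-1)^k x^(2k+1)/(2k+1); here's an illustrative numeric check in Python against the library arctangent.

```python
# Illustrative check only: partial sums of the arctan power series.
from math import atan

def arctan_series(x, terms=60):
    # sum_{k=0}^{terms-1} (-1)^k * x^(2k+1) / (2k+1), valid for |x| <= 1
    return sum((-1)**k * x**(2*k + 1) / (2*k + 1) for k in range(terms))
```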

Here I present six one-line methods to do it. First, I define some variables for the sake of generalizing your example:

d:= 4:               #degree
M:= 3:               #max coefficient + 1
R:= [M$d-1, M-1];    #radices
j:= [d-k $ k= 1..d]; #exponents
A:= a||~j;           #symbolic coefficients
                       R := [3, 3, 3, 2]
                       j := [3, 2, 1, 0]
                     A := [a3, a2, a1, a0]

Note that the product of the radices is 3*3*3*2 = 54.

Each of the six methods below generates a list (of 54 elements) of lists (each of 4 elements) of coefficients. In a later step, they'll be made into the polynomials, and 1+x^4 will be added to each. Two of the methods use an embedded for-loop; these will only work using 1D input (aka plaintext or Maple Input). Three use seq for the loops, and they'll work in any input mode. One uses elementwise operation (~) instead of a loop.

The first two methods use an Iterator called MixedRadixTuples. This is very similar to the CartesianProduct shown in VV's Answer. MixedRadixTuples is specifically a Cartesian product of sets of consecutive integers, each set starting at 0. The only difference in the two methods below is that the second uses seq instead of for.

C1:= [for C in Iterator:-MixedRadixTuples(R) do [seq](C) od];
C2:= [seq]([seq](C), C= Iterator:-MixedRadixTuples(R));

The next three methods generate the lists of coefficients from the base-3 representations of each integer from 0 to 53.

C3:= [for m to mul(R) do convert(M^d+m-1, base, M)[..-2] od];
C4:= [seq](convert(M^d+m-1, base, M)[..-2], m= 1..mul(R));
C5:= convert~(M^d+~[$0..mul(R)-1], base, M)[.., ..-2];

The final method generates a nested seq command and then executes it.

C6:= [eval](subs(S= seq, foldl(S, A, seq(A=~ `..`~(0, R-~1)))));

Now I show that each of the above lists is the same by showing that the set formed from all six has only one element.

nops({C||(1..6)});
                               1
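Two of those constructions translate directly into Python, which may make the equivalence clearer (illustrative sketch only): a mixed-radix tuple generator is just a Cartesian product of ranges, and the base-M-digits method recovers the same 54 tuples from the integers 0..53.

```python
# Illustrative sketch only: mixed-radix tuples two ways.
from itertools import product

R, M, d = [3, 3, 3, 2], 3, 4

# Cartesian product of ranges -- the MixedRadixTuples idea
tuples = [list(t) for t in product(*(range(r) for r in R))]

def digits(m, base, width):
    # base-`base` digits of m, low-order digit first (like convert(..., base, M))
    return [(m // base**i) % base for i in range(width)]

# the M^d offset mirrors Maple's padding trick; it doesn't change these digits
from_digits = [digits(M**d + m, M, d) for m in range(3*3*3*2)]
```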

The final list of polynomials can be formed via a matrix multiplication, the coefficients as a 54x4 matrix times a column vector of the powers of x:

po:= [seq](rtable(C1).x^~<j> +~ (1+x^d));

A variation of that last command is

po:= map(inner, C1, x^~j) +~ (1+x^d);

The undocumented long-standing built-in command inner does the dot (or inner) product of two lists as if they were vectors.

The following command returns a table whose indices are the output of galois and whose entries are the corresponding subsets of the input list s:

GG:= ListTools:-Classify(galois, select(irreduc, s)):

If you then want to count those subsets, do

GGn:= nops~(GG);

If you just want one of the five given representations of the groups, then replace galois in the first command with p-> galois(p)[k], where 1 <= k <= 5.

Or the desired information can be extracted from table GG like this:

rtable([curry(op,[2,-1])@[lhs], nops@rhs]~([indices](GG, pairs)));
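The Classify-then-count pattern is language-independent; here's an illustrative Python sketch with a made-up key function standing in for galois and a dict standing in for Maple's table.

```python
# Illustrative sketch only: classify items by a key, then count each class.
from collections import defaultdict

def classify(key, items):
    groups = defaultdict(list)
    for x in items:
        groups[key(x)].append(x)     # indices are key values, entries are sublists
    return dict(groups)

G = classify(lambda n: n % 3, range(10))       # made-up key in place of galois
counts = {k: len(v) for k, v in G.items()}     # analogue of nops~(GG)
```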

Here's how to make the loop-based methods (either for or seq) efficient. The key thing is to not use a list while the computations are being done; instead, use a sparse table. A table is one of the fundamental container objects in Maple (along with lists, sets, and rtables). If a table is declared sparse, then its unassigned entries evaluate to 0. The table can be efficiently converted to a list when the computations are done. Here's your code (in the box below); the lines marked with "note" comments must be changed, and the rest can remain unchanged.


roll:=rand(1..9):
N := sort([seq(seq(10*(2+jdx)+roll(),jdx=1..5),idx=1..10)]):
V := 10*seq(roll()+10*trunc(N[idx]/10),idx=1..nops(N)):

## generate index for sum. This groups the values for sum
sumidx := [seq(floor(N[idx]/10)-2,idx=1..nops(N))];

## create and zero sum variable
S := 0 *~convert(convert(trunc~(N/10),set),list); #Corrected in note 1 below

## calc sum and display it - This works
for idx to nops(sumidx) do
    S[ sumidx[idx] ] += V[idx]:  
end do:
S; #See note 2.

## error because sumidx[idx] is not evaluated ?????
S := 0 *~convert(convert(trunc~(N/10),set),list): #See note 1. 
seq('S'[ sumidx[idx] ] += V[idx], idx=1..nops(N)); #See note 3. 
S; #See note 2.

And here are the notes with my corrections:

1. In both cases, the line should be replaced with
S:= table(sparse):
There's no need to create a table with any specific number of elements; indeed, there's not even a formal mechanism for doing that if you wanted to.

2. In both cases, the line should be replaced with
S:= [entries](S, indexorder, nolist);
This is the efficient conversion of the completed table into a list. A commonly used more-intuitive variation that's not quite as efficient but still fine is
S:= convert(S, list);

3. The correction of this line has already been the subject of several Answers and Replies, and you even had a correction in your Question. My preference is 
seq((S[sumidx[idx]]+= V[idx]), idx= 1..nops(N)):
although the differences between that and your or acer's `+=` are minor.


So, you can see that converting accumulator variables from lists to tables is quite simple. If the lists are long, the time savings will be huge.
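The same accumulate-in-a-dictionary-then-convert pattern, sketched in Python for illustration (a dict with a default of 0 plays the role of the sparse Maple table):

```python
# Illustrative sketch only: group-wise sums accumulated in a dict,
# converted to a list once at the end (the table -> entries step).
from collections import defaultdict

def group_sums(keys, values):
    S = defaultdict(float)             # unassigned entries act as 0 (sparse)
    for k, v in zip(keys, values):
        S[k] += v                      # cheap in-place accumulation
    return [S[k] for k in sorted(S)]   # one conversion at the end
```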
