MaplePrimes Activity


These are replies submitted by acer

It wasn't clear whether Alex meant that the floats would be of the form xxx.000... or not.

But if he did in fact mean that they are merely floating-point representations of integers, then converting a great many of them might be faster using the kernel builtin routine trunc than using round (or floor).
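
For example (just a rough illustration on a small list of such floats; whether the timing difference matters would need checking on the actual data):

> vals := [2.0, 17.0, 123.0]:
> map(trunc, vals);   # [2, 17, 123], using the kernel builtin
> map(round, vals);   # same result here, but does extra rounding work per element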

acer

It seems uninstructive to demand use of do-loops if the data is already in list or listlist form. The following may not be optimally efficient (versus using an Array initializer), but it might be more instructive.

> MyA := Array(1 .. 5):
> MyA[1]:=1:
> for i from 2 to 5 do
>   MyA[i]:=5*(i-1);
> end do:
> MyA;
                              [1, 5, 10, 15, 20]

> MyA := Array(1 .. 4, 1..4):
> for i from 1 to 4 do
>   for j from 1 to 4 do
>     MyA[i,j]:=i*3^(j-1);
>   end do:
> end do:
> MyA;
                            [1     3     9     27]
                            [                    ]
                            [2     6    18     54]
                            [                    ]
                            [3     9    27     81]
                            [                    ]
                            [4    12    36    108]
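
For completeness, if the data really is already in list or listlist form then the direct constructor route would be along these lines:

> MyA := Array([1, 5, 10, 15, 20]):
> MyB := Array([[1,3,9,27], [2,6,18,54], [3,9,27,81], [4,12,36,108]]):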

acer

Being able to save the worksheet with some stored results (rtables, plots, etc.) likely entails the underlying kernel being in a safe state. And if the kernel has "gone away" while doing gc, or while swapped out, or what have you, then that might not be possible as an immediate action.

A middle ground might be to forcibly and immediately kill the underlying kernel and be prompted to save any worksheets (open and using that kernel instance) without their output and stored rtable, plot, or other embedded data. The novel part would be for this to be offered without having to quit the entire application.

I can imagine that it gets tricky if the GUI is waiting on Typesetting before completing a partial redraw of 2D Math output, and the kernel has not safely finished doing gc or other work while inside Typesetting. And there could well be other tricky scenarios.

acer

When Maple's too tied up and busy to abort the current computation, I will usually close the whole application (corner button on Maple's main window). This usually results in being queried about saving the open worksheet. That's fine.

But, as you put it, why can't Maple's GUI be told to forcibly kill the kernel and close that worksheet, without having to quit the entire application?

A stable GUI that allows forced closure of individual worksheets (even when running unresponsive kernel instances) would be useful. It would let us keep the Standard GUI up longer, which would make Standard's launch time less important and so alleviate one difference w.r.t. Classic.

acer

Upon a moment's reflection, it is not at all a good idea to suggest Matrix(Ans^%T,shape=antisymmetric), as that is the very sort of inefficiency we're trying to avoid. It creates two whole new large Matrices. Sorry.

Instead, the line in the autocompiled proc,

     A[i,j]:=Xmean/Xstdev;

could be followed immediately by,

     A[j,i] := -A[i,j];

acer

Inside the Sharpe procedure, there is a double loop over i and j. As I wrote it, the inner such loop goes from 1 to i. If you instead made j go from 1 to NSTOCK then it would compute all NSTOCK^2 entries.

But the results are antisymmetric. So there is no need to make twice the computational effort. If you want the Result Matrix to actually contain all the full antisymmetric results then you can keep the `for j from 1 to i` (or make it `to i-1`) and instead have outerproc return an antisymmetric copy of Ans. That is to say, the last line in outerproc could be,

   Matrix(Ans^%T,shape=antisymmetric);

Indenting my code does not make it faster. I hope that it makes it more legible. Ideally, it would also have more comments.

Yes, it's a big deal that there are many ways to implement an algorithm in Maple, some of which might be very fast and some very slow, with no 100% crystal-clear characterization of what causes that.

If you want just the top 20 items, write a for-loop that walks the Result entries 20 times, storing the next greatest with each sweep. That can be much faster than sorting the whole thing.
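
A rough sketch of that idea (hypothetical names, not tuned; it scans the Matrix once per item kept rather than sorting everything):

TopK := proc(A::Matrix(datatype=float[8]), k::posint)
local m, n, i, j, best, bi, bj, used, res;
  m, n := op(1, A);
  used := table();   # remembers entries already picked
  res := [];
  to k do
    best := Float(-infinity);  bi := 0;  bj := 0;
    for i to m do
      for j to n do
        if not assigned(used[i,j]) and A[i,j] > best then
          best := A[i,j];  bi := i;  bj := j;
        end if;
      end do;
    end do;
    used[bi,bj] := true;
    res := [op(res), [bi, bj, best]];
  end do;
  res;
end proc:

Since the Result here is antisymmetric, each pair shows up twice (once with each sign), so one might also restrict the scan to i > j.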

acer

Recognize that `option autocompile` does not mean that the routine will certainly be Compiled! If you write it with uncompilable stuff in it, then it will simply run in Maple's regular interpreter.

You can try hitting the proc (the one that's intended to be compiled) with Compiler:-Compile, to see whether it will in fact compile.
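
For example, with the proc below named Sharpe, something like

> cSharpe := Compiler:-Compile(Sharpe):

would either succeed or raise an error pointing at the uncompilable part.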

Also, just because you have written something that can be Compiled does not necessarily mean that it, or anything auxiliary to it, has been implemented efficiently.

It strikes me that using combinat to get all the permutations (of two picked items) is a poor idea in this context. It forms the permutations of the indexes and stores them in an otherwise unneeded structure. That's aside from the fact that the computational formula appears to be symmetric (modulo a minus sign?).

> outerproc:=proc(SR::Matrix(datatype=float[8]))
> local Ans, n, nstock;
>   n,nstock := op(1,SR);
>   Ans := Matrix(nstock,nstock,storage=rectangular,
>                 datatype=float[8]);
>   Sharpe(Ans,nstock,n,SR); # acts inplace on Ans
>   #Matrix(Ans^%T,shape=symmetric);
>   ## or shape=antisymmetric ?
>   Ans;
> end proc:
>
> Sharpe:=proc(A::Matrix(datatype=float[8]),
>              NSTOCK::integer[4],
>              N::integer[4],
>              SR::Matrix(datatype=float[8]))
> option autocompile;
> local ap1::float, ap2::float, i::integer[4], j::integer[4],
>       r::integer[4], Xmean::float, Xstdev::float;
>   for i from 1 to NSTOCK do
>     for j from 1 to i-1 do
>       ap1 := 0.;
>       for r to N do
>         ap1 := ap1+SR[r,i]-SR[r,j];
>       end do;
>       Xmean := ap1/N;
>       ap2 := 0.;
>       for r to N do
>         ap2 := ap2+(SR[r,i]-SR[r,j]-Xmean)^2;
>       end do;
>       Xstdev := sqrt(ap2/(N-1));
>       A[i,j]:=Xmean/Xstdev;
>     end do;
>   end do;
> NULL;
> end proc:
>
>
> with(Statistics):
> nstock := 50:
> n := 100:
> StockReturn := Matrix(`<,>`(seq(Sample(RandomVariable(Normal(0, 1)),n),
>                                 i=1..nstock)))^%T;
                                   [ 100 x 50 Matrix      ]
                    StockReturn := [ Data Type: float[8]  ]
                                   [ Storage: rectangular ]
                                   [ Order: Fortran_order ]
 
>
> st:=time():
> Result := outerproc(StockReturn);
                             [ 50 x 50 Matrix             ]
                   Result := [ Data Type: float[8]        ]
                             [ Storage: triangular[upper] ]
                             [ Shape: symmetric           ]
                             [ Order: Fortran_order       ]
 
> time()-st;
                                     0.144
 
>
> st:=time():
> Result := outerproc(StockReturn);
                             [ 50 x 50 Matrix             ]
                   Result := [ Data Type: float[8]        ]
                             [ Storage: triangular[upper] ]
                             [ Shape: symmetric           ]
                             [ Order: Fortran_order       ]
 
> time()-st;
                                     0.013

If you want to make the result symmetric, or antisymmetric, fill in the commented bit above, in the outer proc.

Don't sort the whole result unless you really have a good reason to. If you only want the top 20 results then write something good to get just that. It's a whole other ballgame to sort the whole thing, in computational complexity. Don't do it unless you really need it all for some other computation. The same thing goes for reforming the result to a table, or whatever.

You might want to check that the answers are coming out right (by hand, using another method but on the exact same data).
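
For instance, a spot check of one entry against the Statistics routines (hypothetical indices; recall that only the i > j entries get filled by the inner loop) might look like:

> ii, jj := 7, 3:
> dv := StockReturn[1..-1, ii] - StockReturn[1..-1, jj]:
> Statistics:-Mean(dv) / Statistics:-StandardDeviation(dv);
> Result[ii, jj];   # should agree, up to roundoff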

acer

Pass in N, the length of the Vector, as an argument to the proc you want to Compile.

If you hate having to pass in that N, then when you are finished writing the Compilable proc you can write another proc which doesn't take N, and wrap it around.

outer_proc:=proc(V::Vector)
local N;
  N := op(1,V); # gets the dimension
  compilable_proc(V,N);
end proc:

compilable_proc:=proc(V::Vector(datatype=float[8]))
option autocompile;
    ..stuff..
end proc:

Also, don't make a whole bunch of individual compilable procs. Make one single mother-of-a-compilable-proc that calculates the mean, the SD, does all the updating of the data, and writes all the output to its output Matrix argument. An important goal can be to cut it down so that, when it runs, it makes only one or two Maple-level function calls.

acer

The only ImageTools routine that was of use to me in this was Read. I'm simply implementing some formulae for such distortion and applying them to the (layers of the) image Array.

I'm using option autocompile on the procedures, with evalhf as a fallback. The polynomial formula for forward distortion is evaluated using Horner's rule. I was only looking at the kinds of distortion due to focal length (i.e. wide-angle barrel distortion, pincushion, etc.) and not those due to mismatched focal planes. I'm using an order-3 polynomial.

The very fast prototype version just truncates the Array index value when accessing from the source Array. A less lossy version would interpolate. (I haven't figured out yet whether I'll use CurveFitting:-ArrayInterpolation or a hard-coded formula for that.) I haven't yet split out the action so as to treat the three color layers separately (though I understand that this may be desirable, as there is a rare lens issue with focal-length distortion that affects some colors more than others, so that correcting them separately can help).
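
Just to indicate the rough shape of what I'm describing (placeholder coefficients, no interpolation, a single layer; not the actual code):

radial_remap := proc(src::Array(datatype=float[8]), dst::Array(datatype=float[8]),
                     m::posint, n::posint, a1::float, a2::float, a3::float)
local i, j, ci, cj, x, y, r2, s, si, sj;
  ci := (m+1)/2.0;  cj := (n+1)/2.0;     # image center
  for i to m do
    for j to n do
      x := i - ci;  y := j - cj;
      r2 := x^2 + y^2;
      # order-3 radial factor, in Horner form
      s := 1.0 + r2*(a1 + r2*(a2 + r2*a3));
      # fast but lossy: truncate the source index; interpolation would go here
      si := trunc(ci + s*x);  sj := trunc(cj + s*y);
      if si >= 1 and si <= m and sj >= 1 and sj <= n then
        dst[i, j] := src[si, sj];
      end if;
    end do;
  end do;
  NULL;
end proc: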

I probably won't have time to make it presentable until next week at the earliest.

acer
