acer

Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

The difference would be efficiency. One might not want to actually compute a^b before computing a modulo operation on the "result". That straight non-modulo powering might not even be possible, in time or memory. That is why it is `&^` rather than `^`, as the original submitter noticed. Consider,

> a,b,c := 12345678917, 987654321, 111777:
> a &^ b mod c; # cool
                                75146
> `^`(a,b) mod c;
Error, Maple was unable to allocate enough memory to complete this computation. Please see ?alloc

Jacques's suggestion to enter it by typing &\^ works for me, in 2D Math input in a Document.

acer
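As an aside (an extra illustration added here, not part of the original exchange): the same modular power can also be requested with the inert Power command, which `mod` likewise evaluates without forming the full power,

> Power(12345678917, 987654321) mod 111777;  # same result as a &^ b mod c
                                75146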
That's usually called the domain of the function. No, a procedure in Maple doesn't usually allow for a domain to be nicely specified. And that's a pity, because in maths a function is usually meant to mean both a rule (mapping) and a domain. That's what I was taught all through school, at least. It can be a problem that Maple doesn't have such a basic concept built into its "functions".

Oh, you can add type-checks on parameters, so that inputs not of the correct type will cause an error to be thrown. Or you can create piecewise structures which evaluate according to, say, the position of the input. Or you can code your procedure with calls to is(), and then call that procedure using the `assuming` facility. None of those is a really good domain facility.

There are some ways to restrict computations to a particular domain, using special packages, but they're not so strong and not everything works well in them. And they seem to be for specifying a different field or ring, not for nicely specifying closed subsets. So, now there are some Maple routines (like `solve`) which accept constraints as extra arguments, as one of the ways to get some restricted-domain effects. But observe that `solve` is one of the hold-outs that doesn't respect the `assume`/`assuming` facilities well (or at all).

acer
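For illustration, here is a minimal sketch of two of the workarounds mentioned above (the procedure name f is just a made-up example):

f := proc(x::positive)    # parameter type-check: inputs outside (0,infinity) raise an error
    sqrt(x - 1);
end proc:

f(4);      # returns 3^(1/2)
f(-2);     # error: invalid input, since -2 is not of type positive

# and using the `assuming` facility to restrict a symbolic computation:
simplify(sqrt(x^2)) assuming x > 0;    # returns x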
The paper you referenced is fascinating. I imagine that, as I read it over a few times, I'll just wish I could play with MapleMIX. And this quote is nice, "It is rather unfortunate, but there is no canonical reference for the semantics of Maple (operational or otherwise)." acer
I've been happy with tetex. But that's on Linux. And you mentioned using Word, so maybe you are using Windows only? If it's primarily assignments that you're considering, then you might give Maple a shot. Try out both Document and Worksheet mode (and in the latter, both 2D and 1D Math input modes). Learning LaTeX would be a career skill (esp. if you go to graduate school and have to write a thesis). And it's worth doing. But that's top-of-the-line, publication-quality stuff. Whipping up nice-looking assignments, with a CAS engine right there to augment the process, should be quite doable using Maple. acer
It's not clear to me what size problem you profiled. I'm going to focus on N=5, d=2, repeated 1000 times.

There was an interesting discussion on ways to get quick, repeated samples from a distribution, using Statistics. I'll show two schemes below. One is faster (than what I've posted before) because a lot of overhead in Statistics is done just once, outside the proc. The other is faster by producing all samples up front in just one single rtable, for all the 1000 calls to be done.

In both schemes, I also re-use the complex[8] Matrix (temp) which is set up with the random data. After all, why produce more garbage than is necessary, by recreating that Matrix over and over again? In both schemes, I also used LinearAlgebra:-LA_Main, which is a lower-level set of access routines for LinearAlgebra. This cuts out quite a lot of Maple function calls overall.
# Scheme 1

Feynman_random_rho := proc(ZZ, SS, N::posint, d::posint:=2)::'Matrix'(complex[8]);
local rho, temp, a, rtemp;
temp := ZZ;                                  # re-use the caller's complex[8] Matrix
a := SS(2 * d^N * d^N);                      # draw 2*(d^N)^2 floats: real and imaginary parts
rtemp := ArrayTools:-ComplexAsFloat(temp);   # float[8] view onto temp's data block
ArrayTools:-Copy(2 * d^N * d^N, a, 0, 1, rtemp, 0, 1);
rho := LinearAlgebra:-LA_Main:-MatrixMatrixMultiply(
           LinearAlgebra:-LA_Main:-HermitianTranspose(temp,inplace=false,
                                                      outputoptions=[]),
           temp,inplace=false,outputoptions=[]);
LinearAlgebra:-LA_Main:-MatrixScalarMultiply(rho,
                1/LinearAlgebra:-LA_Main:-Trace(rho),
                inplace=true,outputoptions=[]);
return rho;
end proc:
 
st:=time():
N,d:=5,2:
Z:=Matrix(d^N, d^N,datatype=complex[8]):
X := Statistics:-RandomVariable(Normal(0,1)):
S := Statistics:-Sample(X):
for i from 1 to 1000 do
Feynman_random_rho(Z,S,N,d);
od:
time()-st;
# Scheme 2

Feynman_random_rho := proc(ZZ, Biga, BigaOffset, N::posint, d::posint:=2)::'Matrix'(complex[8]);
local rho, temp, rtemp;
temp := ZZ;                                  # re-use the caller's complex[8] Matrix
rtemp := ArrayTools:-ComplexAsFloat(temp);   # float[8] view onto temp's data block
ArrayTools:-Copy(2 * d^N * d^N, Biga, BigaOffset, 1, rtemp, 0, 1);  # take this call's slice of the big sample
rho := LinearAlgebra:-LA_Main:-MatrixMatrixMultiply(
           LinearAlgebra:-LA_Main:-HermitianTranspose(temp,inplace=false,
                                                      outputoptions=[]),
           temp,inplace=false,outputoptions=[]);
LinearAlgebra:-LA_Main:-MatrixScalarMultiply(rho,
                1/LinearAlgebra:-LA_Main:-Trace(rho),
                inplace=true,outputoptions=[]);
return rho;
end proc:

st:=time():
maxiter,N,d:=1000,5,2:
Z:=Matrix(d^N, d^N,datatype=complex[8]):
X := Statistics:-RandomVariable(Normal(0,1)):
biga := Statistics:-Sample(X,maxiter*2 * d^N * d^N):
for i from 1 to maxiter do
Feynman_random_rho(Z, biga, (i-1)*2*d^N*d^N, N, d);   # offset advances by a full 2*(d^N)^2 per call
od:
time()-st;
The first code that I posted did N=5, d=2 repeated 1000 times in 4.6 sec on my machine. The second version I posted, where I removed one of the Sample calls, did it in 3.1 sec. Scheme 1 above did it in 1.6 sec, and Scheme 2 above did it in 1.4 sec.

As a side note, the BLAS function zgemm (NAG's f06zaf) works as follows. Given arrays A, B, C and complex scalars alpha, beta it does this in-place action on C:

C <- alpha*A*B + beta*C

But zgemm also has flags which specify whether to consider A or B as (Hermitian-)transposed. This means that, with raw access to zgemm, one could do these next two steps in just one single BLAS call,

rho := LinearAlgebra[HermitianTranspose](temp).temp;
LinearAlgebra:-MatrixScalarMultiply(rho,1/LinearAlgebra[Trace](rho),inplace=true);

The Matrix C (rho, say) could also be created just once, and re-used each time. Moreover, the Matrix temp wouldn't actually get transposed. It'd just get used as if it were transposed, by passing its transposition flag to zgemm. And a lot of modern processors support fused multiply+add, to allow the above action to be even more efficient in a good vendor BLAS. So, it'd be nice to have direct access to these BLAS functions as Maple routines.

acer
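To make the zgemm action above concrete, here is a tiny sketch (an illustration only, using stock LinearAlgebra commands rather than a raw BLAS call, with made-up small Matrices) of what a single zgemm call with its conjugate-transpose flag set on A would compute:

with(LinearAlgebra):
A := Matrix([[1.0+2.0*I, 0.5], [3.0*I, 1.0]], datatype=complex[8]):
B := Matrix([[2.0, 1.0*I], [0.0, 1.0+1.0*I]], datatype=complex[8]):
Cmat := Matrix(2, 2, datatype=complex[8]):   # with raw zgemm, this container would be allocated once and re-used
alpha, beta := 1.0, 0.0:
# one zgemm call could produce this directly, without ever forming
# HermitianTranspose(A) as a separate Matrix:
Cnew := alpha * HermitianTranspose(A) . B + beta * Cmat;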
No, it's supposed to be,

LinearAlgebra[HermitianTranspose](temp).temp

The variable temp is the complex[8] Matrix. The variable rtemp is merely an alternate "view" of that Matrix, but with a float[8] datatype. That rtemp was needed for ArrayTools:-Copy to work. But temp is needed for the rest of the computation. This is a subtle joy of ArrayTools: Alias and ComplexAsFloat can both produce an rtable that appears to be distinct and new but which actually shares its memory data portion. And, sure, float Matrix-Matrix multiplication using rtemp will be faster than complex multiplication using temp. But using temp gets the right answer.

As far as profiling goes, it makes a big difference whether one is doing just a few calls to the procedure for large sizes, or many calls for small sizes. Some of the further, picky optimizations that I've shown relate just to the latter. For single calls, at larger sizes, it's quite plausible that the external call to the BLAS function zgemm is what takes most of the time.

acer
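To illustrate that shared-storage behaviour (a made-up demonstration, not code from the thread): writing floats through the ComplexAsFloat view changes the original complex Matrix,

M := Matrix(1, 2, datatype=complex[8]):       # two complex entries, i.e. four doubles of storage
R := ArrayTools:-ComplexAsFloat(M):           # float[8] view over the very same data block
src := Vector([1.0, 2.0, 3.0, 4.0], datatype=float[8]):
ArrayTools:-Copy(4, src, 0, 1, R, 0, 1):      # copy the four doubles in through the view
M;                                            # M now shows 1.+2.*I and 3.+4.*I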
I took a step back, to look better at what the code does, especially the first bit that generates a random complex Matrix. It looks as if it is creating a complex[8] Matrix where the real and imaginary components are all taken from the same normal distribution. But the real and imaginary components are being treated separately, and there's no actual addition going on, right? The LinearAlgebra[Add] command is being used just to copy the real and imaginary components into the complex[8] Matrix.

If that's true, then ArrayTools:-Copy could be used instead of LinearAlgebra[Add], and only one temp Matrix is needed. That is to say, one could generate twice as many points in the random sample, and then copy them right into place in the complex[8] Matrix. Something like this below. Is it almost twice as fast again?

Feynman_random_rho := proc(N::posint, d::posint:=2)::'Matrix'(complex[8]);
local rho, temp, X, a, rtemp;
    X := Statistics:-RandomVariable(Normal(0,1));
    temp := Matrix(d^N, d^N, datatype=complex[8]);
    a := Statistics:-Sample(X, 2 * d^N * d^N);     # twice as many points: real and imaginary parts
    rtemp := ArrayTools:-ComplexAsFloat(temp);     # float[8] view onto temp's data
    ArrayTools:-Copy(2 * d^N * d^N, a, 0, 1, rtemp, 0, 1);
    rho := LinearAlgebra[HermitianTranspose](temp).temp;
    LinearAlgebra:-MatrixScalarMultiply(rho, 1/LinearAlgebra[Trace](rho), inplace=true);
end proc:

acer
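A quick sanity check one could run on a revision like this (a hedged test added for illustration, not something from the thread): the returned density matrix should come back with unit trace, since it was scaled by 1/Trace.

rho := Feynman_random_rho(5):       # N = 5, d = 2 by default
LinearAlgebra:-Trace(rho);          # should be 1.0 (up to floating-point roundoff)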
Yes, I used Linux. I thought that perhaps the // dirsep would work on Windows too. It matters whether TEMP is set, as an OS environment variable, to a writable directory. If it is not set, then Temp must be a directory in your homedir, as the code is written. The cat(TEMPDIR,"PrimesBack.jpg") is the location that the image file will be written out to, when the initproc procedure gets run. The location "files//PrimesBack.jpg" was simply the source location, relative to currentdir, when I actually ran all that code to create the .mla file. Those two locations, then, aren't related.

The initproc is just three steps. The first is to find a temp directory and write out the image file (storing that location). The second is to use that location, and invoke the Maplet which uses that image file. The third is to delete that unpacked image file. Naturally, I realize that you'll have seen all that.

I didn't really know much about LibraryTools:-ActivationModule before now. It seems a useful thing to learn.

acer
This can be done using an .mla and LibraryTools:-ActivationModule() directly.
try FileTools:-Remove(".//SimpleTest.mla"); catch: end try:
LibraryTools:-Create(".//SimpleTest.mla");
march('addfile', ".//", "files//PrimesBack.jpg", "SimpleTest.mla");
initproc:=proc()
local maplet, TEMPDIR, found, x, IMAGELOC;
    TEMPDIR := getenv("TEMP");
    if not (TEMPDIR<>"" and FileTools:-Exists(TEMPDIR)=true
            and FileTools:-IsDirectory(TEMPDIR)=true) then
        TEMPDIR:=cat(kernelopts(homedir),"//Temp//");
        if not (TEMPDIR<>"" and FileTools:-Exists(TEMPDIR)=true
                and FileTools:-IsDirectory(TEMPDIR)=true) then
           error "unable to find directory for temp files";
        end if;
    end if;
    for x in libname do
        try
            march('extractfile', x, "SimpleTest.mla", cat(TEMPDIR,"PrimesBack.jpg"));
            found := true;
        catch:
        end try;
        if found = true then break; end if;
    end do;
    if found<>true then
        error "a problem occurred while attempting to extract the temp files."
    end if;
    IMAGELOC := cat(TEMPDIR,"PrimesBack.jpg");
    use Maplets[Elements] in
    maplet := Maplet(
     [
       Label(Image(IMAGELOC)),
       "Did this maplet show an image stolen from the MaplePrimes website?",
       [
         HorizontalGlue(),
         Button("Yes", Shutdown("Image was shown")),
         Button("No ", Shutdown("Image was not shown")),
         HorizontalGlue()
       ]
     ] ):
    Maplets[Display](maplet);
    end use;
    try
        FileTools:-Remove(IMAGELOC);
    catch:
        error "a problem occurred while attempting to delete the temp files."
    end try;
end proc:
LibraryTools:-ActivationModule("SimpleTest.mla",initproc);
acer