acer


MaplePrimes Activity


These are replies submitted by acer

@mmcdara Sure, there have previously been intermittent failures to inline uploaded worksheets here. But this seems different and new.

It looks as if, currently, any Maple 2019.0 worksheet cannot be inlined here if it's saved under an rtablesize=list setting for interface (which includes the new default of [10,10]).
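For reference, a minimal illustration of the setting in question (the list form is the new Maple 2019 style; the single integer is the older style):

interface(rtablesize = [10, 10]);   # the new Maple 2019.0 default, as [rows, columns]
interface(rtablesize = 10);         # older-style single-integer setting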

@digerdiga The example  int(dilog(x+1)/(x+a),x)  returns unevaluated.

@digerdiga 

restart;

kernelopts(version);

`Maple 2019.0, X86 64 LINUX, Mar 9 2019, Build ID 1384062`

f := polylog(2, -x);

polylog(2, -x)

sol1 := int(f/(x+1), x);

ln(x+1)*polylog(2, -x)+ln(x+1)^2*ln(-x)+2*ln(x+1)*polylog(2, x+1)-2*polylog(3, x+1)

convert(f, dilog);

dilog(x+1)

sol2 := int(%/(x+1), x);

ln(x+1)*dilog(x+1)+ln(x+1)^2*ln(-x)+2*ln(x+1)*dilog(-x)-2*polylog(3, x+1)

convert(sol1-sol2, dilog);

0

diff(sol1,x);

polylog(2, -x)/(x+1)

convert(diff(sol2,x), dilog);

dilog(x+1)/(x+1)

 

dilogthing.mw

@Earl Do you mean a custom coloring by procedures, as available for the plot3d command? That uses the resulting Cartesian x,y,z values of the points, rather than any values directly from the parametrization.

Here is an example where such custom coloring procedures are used to generate a color Array (inside a COLOR plotting substructure), which is then used as a replacement in the 2D circle frames.

restart;
N := 45:
g := 101:
P3:=plot3d([ cos(a), sin(a), 0],
             a = -Pi .. Pi,
             s = 0..1,
             grid = [g,2],
             colorscheme = ["xyzcoloring",
                            [unapply(0,[x,y,z]),
                             unapply(sin(arctan(x,y)),[x,y,z]),
                             unapply(cos(arctan(x,y)),[x,y,z])],
                             colorspace="RGB"]):
C:=indets(P3,specfunc(anything,{COLOR,COLOUR}))[1]:
f := proc(NN,t,C)
  local a,gg,P2,shift;
  gg := `*`(op(map(u->op(2,u)-op(1,u)+1,
                   ArrayTools:-Dimensions(op([2],C))[1..2])));
  shift := - Pi/2 - t/(NN-1)*2*Pi;
  P2 := plot([t + NN/10*(1 + cos(a)), NN/10*(1 + sin(a)),
              a = 0 + shift .. 2*Pi +shift],
             adaptive = false, numpoints = gg,
             thickness = 4):
  subsindets(P2,specfunc(anything,{COLOR,COLOUR}),c->C);
end proc:
plots:-display([seq(f(N,t,C), t = 0..N)],
               scaling = constrained, size = [700,500],
               gridlines = false, insequence = true);

Above you could use ImageTools:-Embed(op([2],C)); to see the colors in C.

The final circle frames are constructed with g*2 data points, since P3 is built with grid=[g,2]. This makes the number of color points represented by C match the number of points in each circle.

I don't think that there is an easy way to directly generate the 2D circle frames, using parametric form and custom coloring procedures (admitting the parameter values). By this I mean a straightforward way, without having to construct the PLOT structure even more manually, instead of via nice plotting commands.

Why not produce it once, using colorscheme say, and then generate the other frames by different rotation and shift (of the original)?
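For instance, here is a rough sketch of that idea (with hypothetical names: P0 for a single colored circle frame, and N for the number of frames; this is not the code used above):

# Build all animation frames from one colored frame P0, by rotating it about
# the origin and then translating it, instead of regenerating the COLOR data.
frames := [seq(plottools:-translate(plottools:-rotate(P0, -t/(N-1)*2*Pi), t, 0), t = 0 .. N)]:
plots:-display(frames, insequence = true, scaling = constrained);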

Search for the topic  dsolve,classical  in the Maple Help.

What have you done so far, for this question?

@vv Naturally, an appliable module is not necessary. I used one only because I was not sure up front whether I'd be able to make some nice optimizations and store part of them as module locals. (I'm somewhat sure that I could.)

But I'll point out that your procedure has additional flaws: it relies on with(Statistics) having been executed at the top level, and it references unprotected global names unquoted, so it breaks if they get assigned.
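As a small illustration of the kind of robustness I mean (a hypothetical procedure, not vv's actual code), the package can be bound inside the procedure so that it doesn't depend on with(Statistics) having been run at the top level:

F := proc(data)
  uses Statistics;   # local binding, instead of relying on a top-level with(Statistics)
  Mean(data), StandardDeviation(data);
end proc:
F([1.0, 2.0, 3.0, 4.0]);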

BTW, I don't think that you have the right piecewise support in the PDF/CDF. The support of your example Y should be [1,infinity), but as it's coded Support(Y) will return [0,infinity). And CDF(Y,0.5) and PDF(Y,0.5) return nonreal values.

 

@mmcdara For clarity: You may have noticed that Support(Ldist, output=range) was already returning m..infinity, where Ldist came from a call to Distribution(Levu(...)). So one thing I was asking was confirmation about the change I made to pdf and cdf, conditional upon x<=m rather than x<0 as you had originally. (I know you'd assigned m:=0, but elsewhere in the piecewise you were using m, so I assumed you wanted it changed appropriately in the conditional.)

I presume that you mean that you now also added the Support piece to the defn passed to Distribution, which sounds fine.

I didn't have time to look at Mean.

It might be nice to speed up the sampling. It might be possible to construct an accurate rational polynomial approximation using numapprox, for the Quantile of Normal(0,1) from the Uniform(1/2,1), for use up to Digits=evalhf(Digits). So much fun, so little time.
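Here is a rough sketch of that idea (hypothetical and untested; the degree [5,5] and the 0.99 cutoff are placeholders), using numapprox to build a rational minimax approximant of the standard-normal Quantile so that sampling needn't call Quantile repeatedly:

with(Statistics): with(numapprox):
Q := u -> Quantile(Normal(0, 1), u, numeric):
Qapprox := minimax(Q, 0.5 .. 0.99, [5, 5]):   # rational [5,5] approximant on part of (1/2, 1)
Qapprox(0.975), Q(0.975);                     # quick accuracy spot-check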

@tomleslie The method you used would have to be modified; otherwise it does not work for distributions which do not sample uniformly from every integer in the finite range returned.

The only thing you used the DiscreteUniform distribution for was to call Support against it. So the 1,6 supplied as argument to Distribution became 1..6 returned from Support. That is not informative, because it returns no more information than was entered, for the DiscreteUniform case. And that's the only distribution this approach handles. If the dice had other than six faces then supplying 1,n to Distribution would already contain the information in the range 1..n that Support could return. It's more convenient not to have to call two additional commands just to get the range.

Additionally, the range returned by Support may contain more than just the discrete points with non-zero probability. Consider this example:

restart;
with(Statistics):
dist:=Distribution(EmpiricalDistribution([2,3,9,11])):
Support(dist, output=range);
                   2 .. 11

The method you used did not utilize the PDF, but simply used seq over the whole range(s) returned by Support. So that method would not work correctly for this distribution. Since the range returned by Support is not being used in conjunction with something that provides the probabilities (e.g. the PDF), it serves no great purpose here. It may even give a slightly misleading impression that the method works for a broader class of examples.
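For instance, continuing the example above, querying the probability function across the range returned by Support shows that most integers in 2..11 carry zero probability:

# Only 2, 3, 9, and 11 have nonzero probability, even though
# Support returned the whole range 2 .. 11:
seq(k = ProbabilityFunction(dist, k), k = 2 .. 11);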

I am not claiming that Support doesn't do its job for other distributions. But one of my claims is that the range returned by Support does not make this method work for any broader class of distribution.

So Support doesn't add generality here, and returns only information about the uniform range that is already directly available from what is entered.

I don't think that Tom's implementation is incorrect or wrong. But I also think that it doesn't provide more than what was already shown, more simply. I was explicitly asked for my opinion, so I've given it. My opinion is no great big deal.

@sand15 I'm not sure I understand what kind of opinion you're looking for. Please note that I used EmpiricalDistribution in a Comment on my own Answer in this thread before Tom used it, and in exactly the same fashion. (Actually, before anyone else used it in this whole thread. No prize wanted.)

Perhaps that indicates my judgement on its use here? Clearly it allows us to deal with the admixture of uniform distributions with finite support. Carl demonstrated adroitly that, by also using the PDF for weighting, the non-uniform case can also be handled (as expected) via EmpiricalDistribution. That Answer by Carl is the most general result in this thread to date, and uses Statistics commands.

Note that Tom's use of Support above only serves here for the uniform case, and as such is merely an abstruse way to generate the range 1..6, IMO. Otherwise the approach is just what I'd already posted, using the EmpiricalDistribution in the same way. I had discounted this use of Support because it gave no extra functionality or generality, and was just a roundabout way to produce the ranges (from the sequence 1,6 rather than the range 1..6 directly). But it doesn't do anything special such as, say, work for the non-uniform case.

@erik10 Here is something for the (potentially) missing entries, and this kind of example.

restart:

with(Statistics):

U:=(a,b)->piecewise(a=1 or b=1,-4,abs(b-a)):
all:=[seq(seq(U(i,j),i=[$1..10]),j=[$1..10])]:
M:=[op({op(all)})]:
X:=RandomVariable(EmpiricalDistribution(all)):

 

N:=10^1:

S:=Sample(X,N):

L:=map(t->lhs(t)=evalf[5](rhs(t)/N),
       sort(Tally(S)));

[HFloat(-4.0) = .20000, HFloat(0.0) = .10000, HFloat(2.0) = .10000, HFloat(3.0) = .10000, HFloat(5.0) = .10000, HFloat(6.0) = .20000, HFloat(7.0) = .20000]

Matrix([lhs~,rhs~](L))^%T;

Matrix(7, 2, {(1, 1) = -4.0, (1, 2) = .20000, (2, 1) = 0., (2, 2) = .10000, (3, 1) = 2.0, (3, 2) = .10000, (4, 1) = 3.0, (4, 2) = .10000, (5, 1) = 5.0, (5, 2) = .10000, (6, 1) = 6.0, (6, 2) = .20000, (7, 1) = 7.0, (7, 2) = .20000})

Lh:=lhs~(L):
LL:=sort([op(L),op(remove(u->member(HFloat(u),Lh),M)=~0.)],
         (a,b)->lhs(a)<lhs(b));

[HFloat(-4.0) = .20000, HFloat(0.0) = .10000, 1 = 0., HFloat(2.0) = .10000, HFloat(3.0) = .10000, 4 = 0., HFloat(5.0) = .10000, HFloat(6.0) = .20000, HFloat(7.0) = .20000, 8 = 0.]

Matrix([lhs~,rhs~](LL))^%T;

Matrix(10, 2, {(1, 1) = -4.0, (1, 2) = .20000, (2, 1) = 0., (2, 2) = .10000, (3, 1) = 1, (3, 2) = 0., (4, 1) = 2.0, (4, 2) = .10000, (5, 1) = 3.0, (5, 2) = .10000, (6, 1) = 4, (6, 2) = 0., (7, 1) = 5.0, (7, 2) = .10000, (8, 1) = 6.0, (8, 2) = .20000, (9, 1) = 7.0, (9, 2) = .20000, (10, 1) = 8, (10, 2) = 0.})

N:=10^5:

S:=Sample(X,N):

L:=map(t->lhs(t)=evalf[5](rhs(t)/N),
       sort(Tally(S)));

[HFloat(-4.0) = .18975, HFloat(0.0) = 0.90680e-1, HFloat(1.0) = .16088, HFloat(2.0) = .13880, HFloat(3.0) = .12002, HFloat(4.0) = .10067, HFloat(5.0) = 0.78330e-1, HFloat(6.0) = 0.60950e-1, HFloat(7.0) = 0.39620e-1, HFloat(8.0) = 0.20300e-1]

Matrix([lhs~,rhs~](L))^%T;

Matrix(10, 2, {(1, 1) = -4.0, (1, 2) = .18975, (2, 1) = 0., (2, 2) = 0.90680e-1, (3, 1) = 1.0, (3, 2) = .16088, (4, 1) = 2.0, (4, 2) = .13880, (5, 1) = 3.0, (5, 2) = .12002, (6, 1) = 4.0, (6, 2) = .10067, (7, 1) = 5.0, (7, 2) = 0.78330e-1, (8, 1) = 6.0, (8, 2) = 0.60950e-1, (9, 1) = 7.0, (9, 2) = 0.39620e-1, (10, 1) = 8.0, (10, 2) = 0.20300e-1})

 

Download sample_freq_all.mw

@erik10 The Statistics:-Tally command does that. It's easy enough to evalf the ensuing rhs's, or reformat as a Matrix.

(In other words, you don't have to create yet another discrete empirical RandomVariable from every sample you create, so as to use its ProbabilityFunction just to get the sample's frequencies. As you've noticed, that's quite a roundabout effort.)

ps. One of the reasons I suggested DocumentTools:-Tabulate in my earlier Answer was that these kinds of running details about the sample can be added as additional expressions in the Table (and would change with each iteration of sample size N, say). A rough sketch of that is shown after the code below.

restart:

with(Statistics):

U:=(a,b)->piecewise(a=1 or b=1,-4,abs(b-a)):
all:=[seq(seq(U(i,j),i=[$1..6]),j=[$1..6])]:
M:=[op({op(all)})]:
X:=RandomVariable(EmpiricalDistribution(all)):

N:=10^4:

S:=Sample(X,N):

L:=map(t->lhs(t)=evalf[5](rhs(t)/N),
       sort(Tally(S)));

[HFloat(-4.0) = .30580, HFloat(0.0) = .13760, HFloat(1.0) = .22820, HFloat(2.0) = .16450, HFloat(3.0) = .11030, HFloat(4.0) = 0.53600e-1]

Matrix(Vector~([lhs~,rhs~](L)));

Matrix(6, 2, {(1, 1) = -4.0, (1, 2) = .30580, (2, 1) = 0., (2, 2) = .13760, (3, 1) = 1.0, (3, 2) = .22820, (4, 1) = 2.0, (4, 2) = .16450, (5, 1) = 3.0, (5, 2) = .11030, (6, 1) = 4.0, (6, 2) = 0.53600e-1})

 

Download sample_freq.mw

(or Matrix([lhs~,rhs~](L))^%T for the Matrix)
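Here is a rough sketch of the DocumentTools:-Tabulate idea mentioned above (assuming the S and L computed in the code above; the choice of a Histogram and the layout options are just placeholders):

DocumentTools:-Tabulate([Statistics:-Histogram(S), Matrix([lhs~, rhs~](L))^%T],
                        widthmode = percentage, width = 60);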

@Carl Love Regarding finite support and discreteness, I was thinking only of a minor clarification for the general reader as possibly trumping redundancy. I did not mean to imply that you had missed a necessary part of the definition, and I apologize if it came off that way.

@Carl Love That is nice.

Perhaps you meant to write something like, "independent random variables from discrete distributions" in the description, though?

An illustrative example involving dice from a non-uniform distribution might be nice.

Also, do you see a way for it to handle an RV from a discrete distribution defined only by a general symbolic ProbabilityFunction and DiscreteValueMap? (E.g., the first Example from the help page for DiscreteValueMap. Or something where PDF(r,t) were like Sum(2^(-k)*Dirac(t-3^(-k)), k = 1 .. infinity).)

@Carl Love It's true that it uses the GPU. But it only offers exports for a couple of BLAS operations (e.g. O(n^3) Matrix-Matrix multiplication).

I have mentally blocked it out because the current optimized float[8] BLAS used by the regular LinearAlgebra package (CPU), from the Intel MKL, is so fast on modern CPU chipsets that it actually beats the CUDA dgemm used via Maple for most reasonably sized examples.

The problem is that the CUDA package has to send all the float[8] Matrix data to the GPU up front, and then get it back afterwards. The overhead of that transfer is on the same order as the time it now takes for the CPU dgemm to compute, at the Matrix sizes one usually encounters in all but the most esoteric uses in Maple.

Hence the Maple CUDA package only offers a couple of multiplication functions and is no longer generally outperforming the regular LinearAlgebra equivalents on modern CPUs.
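For anyone who wants to check this on their own hardware, here is a rough benchmarking sketch (results vary by CPU and GPU; the enabled case needs a supported NVIDIA card, and the size n is just a placeholder):

with(LinearAlgebra):
n := 2000:
A := RandomMatrix(n, outputoptions = [datatype = float[8]]):
B := RandomMatrix(n, outputoptions = [datatype = float[8]]):
CUDA:-Enable(false):                    # plain CPU dgemm (Intel MKL)
cpu_time := time[real](Multiply(A, B));
CUDA:-Enable(true):                     # route float[8] multiplication through the GPU
gpu_time := time[real](Multiply(A, B));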

There are a couple of CUDA-based general LAPACK projects, but they are still maturing. I believe that, with the current model of having to transfer the data up and down again for each call, operations like SVD and eigen-solving would take enough time for the transfer overhead to be a negligible portion at large but non-enormous sizes. And so those functions would be the most promising for a significant gain. LU decomposition less so. I experimented with this a couple of years ago, but performance results were not encouraging because those functions were not yet fully optimized (and some not yet implemented) on the card.

Another possibility for the future might be something like the Jacket add-on for Matlab. In essence, all the matrix/vector data could get pushed onto the GPU for the duration of an involved multi-operation computation. It can be accessed (transferred back) on demand, but the benefit is that repeated computations with it can be done without repeated up-down transfer. In this scenario only specially treated procedures (compiled on the GPU using a CUDA compiler) would compute with such data. So one would need a procedure that 1) did large LinearAlgebra, 2) did enough general computation for it to be worthwhile pushing the data up/down, and 3) could be treated by an enhanced Compiler:-Compile to compile on the GPU and call the GPU LAPACK/BLAS. Items 1) and 2) restrict the use cases. Item 3) is problematic because Compiler:-Compile doesn't yet translate Matrix/Vector operations to LAPACK/BLAS calls for regular CPU operations, so getting that for the GPU is an even further goal.

Having said all that, there is a large chance that the OP is hoping for sweeping general computation for which parallelization would be automatically done under the hood by the Maple engine. That does not happen, even on the CPU. Having it happen on the GPU is more distant still.

The OP has a long history of asking bizarre questions with almost no context given.
