mmcdara

MaplePrimes Activity


These are questions asked by mmcdara

Hi everybody,

I use the Grid[Launch] function (Windows 7, Maple 2015) to distribute many similar computations over all the processors of my machine.
 

Question 1

My machine has 4 processors (not hyperthreaded).
When it was running Windows XP and I used, say, 2 processors, the performance manager showed that two of the four processors were loaded up to 95%-100% while the others stayed around 0%.
In that case (my problem is perfectly scalable), the elapsed time was exactly half of what it was with only one processor (and twice the time obtained with 4 processors).


Now I'm working with Windows 7.
The behaviour puzzles me: if I use 2 processors out of the four and look at the performance manager, all 4 processors are partially loaded. It looks as if Windows 7 were distributing the computations itself.
As a result (?), running on 4 processors no longer takes 25% of the elapsed time on 1 processor, but "only" 40%.
Could some internal "dispatching of tasks among processors" that Windows 7 performs interfere with the distribution of tasks done by Grid[Launch]?

Has anyone of you already had a similar experience?
If Windows 7 really has such a "task managing process", is it possible to switch it off?


 

Question 2

Same context as above.
I run the same code (search for a local maximum of a function, some of whose parameters are randomly valued; the sample of these parameters has size 10000) over 4 processors.
In order to save intermediate results I wrote a loop within which I send, at each step, a block of 500 computations to each of the 4 processors.
This loop is executed 5 times (5*500*4 = 10000).

I observe that after each step of the loop the memory used increases by a rather constant amount: it is as if a 4-processor computation of 500 optimizations cost N megabytes, and the memory grew by N MB each time the loop body is executed.
Towards the very end, the computation can slow down dramatically because of the amount of memory used.

More precisely, my pseudo-code looks like this:

for step from 1 to 5 do 
   Grid[Launch](MyCode, numnodes=4, imports=[BlockOf2000data], ...):  
   # MyCode uses only one quarter of this 2000-data block, depending on the processor number it runs on
end do:

Is there a way to free the memory just before the "end do" statement, in order to prevent it from growing continuously?
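
To make the question concrete, here is the kind of loop body I have in mind (the file naming, the blockResult variable and the final gc() call are my own guesses; I do not know whether an explicit gc() actually releases the memory in this Grid context):

for step from 1 to 5 do
    # assumption: Launch returns here what MyCode returns on node 0
    blockResult := Grid[Launch](MyCode, numnodes = 4, imports = [BlockOf2000data]):

    fname := sprintf("block_%d.m", step):    # hypothetical naming scheme for intermediate files
    save blockResult, fname:                 # keep the intermediate results on disk
    unassign('blockResult'):                 # drop the reference to the block ...
    gc();                                    # ... and request a garbage collection before the next step
end do: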


Any contribution will be highly appreciated.

Hello everybody

I'm using discrete distributions from the Statistics package and I found a rather strange result.

In short, the theoretical values of some statistics of a NegativeBinomial(1, P) random variable (P being the probability of success, equal to 1e-4) are correctly computed, but their empirical estimators computed from a sample of this RV are grossly wrong.

Since NegativeBinomial(1, P) is equivalent to Geometric(P), I asked Maple to compute the theoretical values of some statistics of Geometric(P) and then to assess their empirical values from a sample of Geometric(P).
Some discrepancies remain, but they can be explained by statistical fluctuations.

Could you please look at the attached file (an error on my part is still possible) and help me fix this?
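
To give an idea of the comparison without opening the file, here is a minimal sketch of what I do (P = 1e-4 as stated above; the sample size of 10^4 is an arbitrary choice for this sketch):

restart:
with(Statistics):

P := 1e-4:
N := 10^4:                                   # sample size used for this sketch

X := RandomVariable(NegativeBinomial(1, P)):
G := RandomVariable(Geometric(P)):

# theoretical values
Mean(X), Variance(X);
Mean(G), Variance(G);

# empirical estimators computed from samples
SX := Sample(X, N):
SG := Sample(G, N):
Mean(SX), Variance(SX);    # these are the values that look grossly wrong to me
Mean(SG), Variance(SG);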

Thanks in advance


PS: the histogram of Sample(NegativeBinomial(K, P), AnySizeYouWant) is obviously wrong (it should look like a decreasing exponential).
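
For the record, the kind of call I mean is the following (K = 1, P = 1e-4 and the sample size are just placeholders here):

with(Statistics):
S := Sample(RandomVariable(NegativeBinomial(1, 1e-4)), 10^4):
Histogram(S);    # I would expect a decreasing, roughly exponential-looking shape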


 

Download NegativeBinomial.mw

Hello everybody

I have a variable U of type set, made of indexed names of the form name[expression sequence].
One example is 
U := {A[1], A[2], B[2]}

I want to build the set of all the index sequences; in the example above this is {1, 2}.
op~(U) does the job ... although I do not really understand why

I also want to build the set of all the names ({A, B} in the example).
Here I have written something that performs correctly ... but it is very ugly:
parse~(substring~(convert~(U, string), 1..1));  # works only for names with a single character !!!


I do not know how to isolate the names in a general way.
Is it possible to write something smarter?
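
In case it helps the discussion, here is my best guess at a cleaner approach, based on my understanding that, for an indexed name such as A[1], op(A[1]) returns the index 1 while op(0, A[1]) returns the base name A:

U := {A[1], A[2], B[2]}:

op~(U);       # {1, 2} : op applied elementwise picks the operands of each indexed name, i.e. the indices
op~(0, U);    # {A, B} : the zeroth operand of an indexed name is its base name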

Thanks in advance

Hello all, 

A question concerning NetCDF files was asked in 2012 and is still unanswered today.
Browsing the questions only returns that single item, which suggests the NetCDF topic is not a concern in the Maple community.
Nevertheless, does Maple have any capability for reading and writing NetCDF files?
If not, is there any development planned on the subject?

Let's hope I won't have to wait four years for an answer; all responses will be greatly appreciated, even negative ones.



PS: NetCDF capabilities already exist in Sage and Mathematica.

Reference: the question "Quantile function", posted by Mikhail Drugov.

In the reference above, Mikhail has raised a problem concerning the function Statistics:-Quantile.
A problem of the same kind exists for the function Mode.

In fact Mode returns the value of the mode correctly only for unimodal distributions; for "bimodal" distributions it does not work properly.
Theoretically the mode is the value where the PDF reaches its global maximum. Except in very particular cases this maximum is unique, even if common language speaks of "bimodal distributions" rather than "two-bumped distributions".

Here is an example of a two-bumped distribution (Z) obtained by mixing two Gaussian distributions.
It has two bumps (z ≈ -2, z ≈ 2) but only one mode (z ≈ -2).
It might be acceptable for Mode to return {-2, 2} (even though only -2 is the true mode), but Mode also returns the value of z that minimizes PDF(Z, z) between the bumps, which is not correct at all.


 

restart:
with(Statistics):

X := RandomVariable(Normal(-2, 1)):
Y := RandomVariable(Normal(2, 1)):

r    := 0.4:
f__Z := unapply((1-r)*PDF(X, t) + r*PDF(Y, t), t);
Z    := Distribution(PDF = f__Z):

# output (1):
#   f__Z := proc (t) options operator, arrow;
#               .1692568750*2^(1/2)*exp(-(1/2)*(t+2)^2) + .1128379167*2^(1/2)*exp(-(1/2)*(t-2)^2)
#           end proc

plot(PDF(Z, t), t = -4 .. 4);   # plot of the two-bumped PDF (image not reproduced here)

Mode(Z);

# output (2):
#   {-1.999102417, .1352239093, 1.997971857}
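
Continuing the worksheet above, a direct numerical search for the largest value of the PDF could serve as a cross-check of what Mode should return. This is only a sketch: the range is the one used in the plot, and the initial point is my own choice (a local solver might still stop on the secondary bump).

with(Optimization):

# Look for the location of the largest value of the mixture PDF on the plotted range.
# I would expect a result close to t = -2, the true mode.
Maximize(f__Z(t), t = -4 .. 4, initialpoint = {t = 0});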

 


 

Download ProblemWithMode.mw

 
