I use the Grid[Launch] function (Windows 7, Maple 2015) to distribute many similar computations over all the processors my machine has.
My machine has 4 processors (not hyperthreaded).
When it was equipped with Windows XP and I used, say, 2 processors, the performance monitor showed that two of the four processors were loaded at 95%-100% while the others stayed around 0%.
In this case (my problem is perfectly scalable), the elapsed time was exactly half what it was when I used only one processor (and twice as large as the time obtained with 4 processors).
Now I'm working with Windows 7.
This behaviour puzzles me: if I use 2 processors out of four and look at the performance monitor, all 4 processors are partially loaded. It looks as if Windows 7 were distributing the computations itself.
As a result (?), running on 4 processors no longer takes 25% of the elapsed time on 1 processor, but "only" 40%.
Could it be that some internal "dispatching of tasks among processors" that Windows 7 performs interferes with the distribution of tasks that Grid[Launch] does?
Has anyone of you already had the same experience?
If Windows 7 really has such a "task-managing process", is it possible to switch it off?
Same context as above.
I run the same code (search for a local maximum of a function where some of its parameters are randomly valued; the sample of these parameters has size 10000) over 4 processors.
In order to save intermediate results, I wrote a loop within which I send blocks of 500 computations at a time to each of the 4 processors.
This loop is executed 5 times (5*500*4 = 10000).
I observe that after each step of the loop the memory used increases by a rather constant amount. It looks as if a 4-processor computation of 500 optimizations costs N megabytes, and the memory grows by N MB each time the loop body is executed.
Towards the very end, the computation slows down dramatically because of the amount of memory used.
More precisely, my pseudo-code looks like this:

for step from 1 to 5 do
    Grid[Launch](MyCode, numnodes=4, imports=[BlockOf2000data], ...):
    # MyCode uses only one quarter of this 2000-data block, depending on the node number it runs on
end do:
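For what it is worth, the way MyCode picks its quarter looks roughly like this (a sketch only: Grid:-MyNode() is the Maple command returning the node number, while MyCode and BlockOf2000data are just the placeholder names from my pseudo-code):

```maple
MyCode := proc()
    local me, mydata, x;
    me := Grid:-MyNode();    # node number: 0, 1, 2 or 3
    # each node takes its own quarter (500 items) of the 2000-item block
    mydata := BlockOf2000data[500*me+1 .. 500*(me+1)];
    for x in mydata do
        # ... run one optimization for the parameter set x ...
    end do;
end proc:
```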
Is there a way to clean the memory just before the "end do" statement, in order to keep it from growing continuously?
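To make the question concrete, here is the kind of cleanup I have in mind (gc() is Maple's garbage collector; whether unassigning the imported block between steps actually releases the memory held by the Grid nodes is precisely what I am unsure about):

```maple
for step from 1 to 5 do
    Grid[Launch](MyCode, numnodes=4, imports=[BlockOf2000data], ...):
    # save intermediate results to disk here, then drop the large objects
    unassign('BlockOf2000data'):
    gc():    # request a garbage collection before the next step
end do:
```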
Any contribution will be highly appreciated.