569 Reputation

13 Badges

12 years, 326 days

MaplePrimes Activity

These are replies submitted by itsme


haha.. yes, definitely fast enough now! In fact, because I was using fsolve I needed to jump through some hoops and make trade-offs to get things as fast/parallel as possible (Grid:-Map can't deal with compiled functions, Threads:-Map can't use fsolve, etc.). I have now refactored my code a little and rolled your iterative approach in along with all the other stuff that I can compile... Running everything with no parallelization is now substantially (1 to 2 orders of magnitude) faster than before, even though before I was running on 4-6 cores via Grid:-Map.
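For anyone reading along, a rough sketch of the trade-off I mean (f here is just a placeholder for a procedure that calls fsolve internally, not my actual code):

```maple
# f is a hypothetical procedure that calls fsolve internally.
# Grid:-Map runs on separate kernels/processes, so non-thread-safe
# calls like fsolve are fine, but compiled procedures don't transfer:
results := Grid:-Map(f, [seq(i, i = 1 .. 100)]);

# Threads:-Map shares one kernel, so everything it calls must be
# thread-safe (which fsolve is not):
# results := Threads:-Map(f, [seq(i, i = 1 .. 100)]);
```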

Interestingly enough, using Threads:-Map does not seem to help right now with the latest compiled code... but that's another problem for another day.

By the way, I was curious why you used a module rather than a regular procedure?

thanks again!

@Axel Vogt

Hi Axel:

Yes, I agree, but even so I would still need to calculate results in a "standard way" for the first, say, ~1000 values of n to get the 1e-5 accuracy, as you point out. Hence I still need a fast way to do it.





aghh... yes! - missed that as well.

Your code seems to work very well for basically all parameters I throw at it (and is, roughly speaking, an order of magnitude faster than get_theta_n_array3, which is in turn 4 times faster than my original code).




Hi @acer

First of all, thanks very much for taking the time to look into this... I always find your posts very interesting and useful.

Your code, when it works, is as much as ~70 times faster (!!) on my machine and takes as little as 1/50th of the memory of the version I posted.

A small problem is that for some n the code does not find the "right" number of solutions. I ran into an identical issue when I was using fsolve with the same initial guess (as you're essentially doing) for all n. A solution was to either:
1) not specify the initial guess and later sort for solutions in range
2) not specify the initial guess but specify an acceptable range of solutions (i.e. theta_n=-Pi/2..Pi/2)
3) use the last solution as the guess for the next one.

All three cases showed similar timing... and seemed to work.
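In Maple terms, options 2 and 3 look roughly like this (eq_n and eq_next are just placeholders for the transcendental equation at a given index, not my actual code):

```maple
# option 2: no initial guess, but restrict acceptable roots to a range
sol := fsolve(eq_n, theta_n = -Pi/2 .. Pi/2);

# option 3: seed each solve with the previous root as the starting point
sol_next := fsolve(eq_next, theta_n = sol);
```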

So while this works:
param_n:=1000:param_omega_a:=100e9:param_v:=1e8:param_x_l:=0.3:param_C_a:=20e-14: param_Z0:=50.0:
ans_new:=CodeTools:-Usage(new_get_theta_n_array(param_n,param_omega_a,param_v,param_x_l,param_C_a,param_Z0) ):
ans:=CodeTools:-Usage( get_theta_n_array(param_n,param_omega_a,param_v,param_x_l,param_C_a,param_Z0) ):
plots:-display([plots:-listplot(ans), plots:-listplot(ans_new, color=red, linestyle=dash)], thickness=2);

Changing n to say 10 or 2000 does not:  
param_n:=2000:param_omega_a:=100e9:param_v:=1e8:param_x_l:=0.3:param_C_a:=20e-14: param_Z0:=50.0:
ans_new:=CodeTools:-Usage(new_get_theta_n_array(param_n,param_omega_a,param_v,param_x_l,param_C_a,param_Z0) ):
ans:=CodeTools:-Usage( get_theta_n_array(param_n,param_omega_a,param_v,param_x_l,param_C_a,param_Z0) ):
plots:-display([plots:-listplot(ans), plots:-listplot(ans_new, color=red, linestyle=dash)], thickness=2);

As far as parameters go, at least right now while I'm trying to optimize over what "works best" for the rest of the problem, I do have to be sure that whatever I'm using is robust over a huge range of parameters... although the parameters I listed are "good candidates".
param_n:=1000: #can vary from 100 to thousands (in principle)
param_omega_a:=100e9: #can vary between 1e9 and 5000e9 (likely a few hundred 1e9)
param_v:=1e8:param_x_l:=0.3: #can vary between 0.05 and 1
param_C_a:=20e-14:  # can vary between 1e-14 1000e-14
param_Z0:=50.0:  #fixed

If you have any ideas to make your code robust against a wide range of parameters (especially n), please let me know.
thanks again!


I should note that playing with the initial guess and the number of evaluations of F(n) does not seem to help. Also, I've been playing more with Mac Dude's idea above (see get_theta_n_array3), and that does seem to be a factor of a "few" faster than my original code.


@Thomas Richard

Just a comment that selecting (highlighting) everything is often (for me) not a usable option. If one has multiple plots (in particular density plots) with many data points, selecting even a single plot can take minutes or even completely stall Maple.



Well... the more accurate the better, but maybe results to a few (say 5?) decimal places could be reasonable - I would have to double-check what the effect on the final answer would be.

As I mentioned, part of the "extra" slowdown is that fsolve is not thread-safe, so I have to resort to using Grid:-Map, and hence can't compile the rest of the code... but right now there is probably no way around that.



@Mac Dude

Thanks for your post. I've played with setting intervals/initial values before, but had not thought of using the "previous" value... that does seem to help, and cuts the time by half or more in small-n examples... but when n gets large (and the solution is close to Pi/2) I get similar timing to the version I posted.

Here is the code that uses the last result as a guess for the next one:

get_theta_n_array3:=proc(max_n::integer, omega_a::float, v::float, x_l::float, C_a::float, Z_0::float)
    local theta_n_array, i;

    theta_n_array:=Array(1..max_n);

    theta_n_array[1]:=fsolve(subs(n=1, tan(theta_n) - C_a*Z_0*(-v^2*(Pi*n-theta_n)^2/x_l^2+omega_a^2)*x_l/(v*(Pi*n-theta_n)) = 0), theta_n=-Pi/2..Pi/2):

    for i from 2 to max_n do
        theta_n_array[i]:=fsolve(tan(theta_n) - C_a*Z_0*(-v^2*(Pi*i-theta_n)^2/x_l^2+omega_a^2)*x_l/(v*(Pi*i-theta_n)) = 0, theta_n=theta_n_array[i-1]):
    end do:

    if ArrayNumElems(theta_n_array) <> max_n then
        printf("Bad Array Dimensions! Got too many or not enough solutions.");
        theta_n_array:="CHECK: get_theta_n_array()": #dirty hack that will ring an alarm bell if the array is not the right size
    end if:

    return theta_n_array;
end proc:

I wonder if anyone else has any ideas.

thanks again.

The first thing I do every year when I install Maple is manually change the Java heap size in the Maple startup file... so, say, for me it's:


I change this:


to say this:


Note that this file is called from xmaple, so it's the only thing that needs changing.
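For the record, the kind of edit I mean looks roughly like this (the script contents, the launcher class name, and the default value here are made up for illustration; the real file and flag vary by Maple version and platform):

```shell
# Demo on a throwaway copy; in practice you'd edit the real launcher
# script in your Maple install (something like <maple>/bin/xmaple).
printf 'exec java -Xmx512m com.example.MapleLauncher "$@"\n' > /tmp/xmaple_demo

# bump the JVM max heap from 512 MB to, say, 2 GB
sed -i 's/-Xmx512m/-Xmx2048m/' /tmp/xmaple_demo

cat /tmp/xmaple_demo
```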

Also, I'm sure there are other, cleaner ways to do this... for example, this info can be passed to the script as an argument. Furthermore, I don't know whether doing it directly through the Maple interface has the same effect, nor whether your problem is even related to heap size... One would have to do more testing, and right now I have little time.

I basically have a few worksheets with multiple density plots that are directly affected by this setting - they are only usable with this change.

@Carl Love

that's very useful to know!


Just to be clear - if you run these commands from within a GUI worksheet, you will be able to see all the standard 2D math (as you would if you were executing the commands directly without the "read" command).

Far from optimal, I'm sure, but you can probably get what you need.



Do you have a simple call to a plot function (as in plot(...)) that shows this problem when exported?

I looked at my old code in more detail - I think I was first exporting to PostScript (so eps files) directly from Maple, and only then could one change the properties (all the info outside of the "bounding box" is lost when one exports directly to a non-vector format such as jpeg or bmp). I can find the "utility" functions that I wrote, but can't find any code that actually called them... and trying a few simple calls to plot() or, say, densityplot() seems to export fine now (I'm using 18.01).

If you can provide a simple way to reproduce this, maybe I can see if the commands I showed above help with the problem.


Note this was a while ago, but I'm pretty sure I had the same problem...

Through some trial and error, here is a call I would make after exporting the plot (through a function similar to what I posted):

    system(sprintf("sed 's/^%%%%BoundingBox.*/%%%%BoundingBox:  -500 -500 2000 2000/g' /tmp/%s.eps > /tmp/%s_box.eps  ; ps2pdf14 -dFIXEDMEDIA -dDEVICEWIDTHPOINTS=2400 -dDEVICEHEIGHTPOINTS=2400 -dOptimize=true  -dEPSCrop -sOutputFile=/tmp/%s_box.pdf  /tmp/%s_box.eps; pdfcrop  --margins '5 20 5 20' -clip /tmp/%s_box.pdf %s/%s.pdf; sleep 2; convert %s/%s.pdf %s/%s.png" , v_fileName, v_fileName, v_fileName, v_fileName, v_fileName, v_fileDir, v_fileName, v_fileDir, v_fileName, v_fileDir, v_fileName));

This fixed the issue for me. It might (or might not!) be helpful to you... it should be easy to divide up the commands to see all the things I'm calling in a row. It's *really painful* as you see, but it shouldn't take long to adapt this to your code... of course, the last call to convert could be to jpeg or bmp instead of png, if that's what you need.
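Dividing the one-liner up, the chain does roughly the following (file names here are placeholders, and each external tool - sed, ps2pdf14 from Ghostscript, pdfcrop, and ImageMagick's convert - must be installed):

```shell
# 1. widen the EPS bounding box so nothing outside it gets clipped
sed 's/^%%BoundingBox.*/%%BoundingBox: -500 -500 2000 2000/' plot.eps > plot_box.eps

# 2. convert to PDF on an oversized page, cropping to the EPS box
ps2pdf14 -dFIXEDMEDIA -dDEVICEWIDTHPOINTS=2400 -dDEVICEHEIGHTPOINTS=2400 \
         -dOptimize=true -dEPSCrop -sOutputFile=plot_box.pdf plot_box.eps

# 3. trim the excess whitespace, leaving a small margin
pdfcrop --margins '5 20 5 20' --clip plot_box.pdf plot.pdf

# 4. rasterize to PNG (or jpeg/bmp, whatever you need)
convert plot.pdf plot.png
```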




If you're exporting to PostScript (which you seem not to be, but possibly you could), then you can edit those files by hand (they are usually plain text). There are fields in there (look for something like BoundingBox) which control the sizes of all elements, in particular the whitespace around the actual plot - just change those to something that works.

If you're on linux, then there is often an easy way to fix some of the plots in a more "automagical" way. You can export an eps file from maple, and then run it through a program called ps2pdf14 like this:

ps2pdf14 -dEPSCrop -dPDFSETTINGS=/prepress  myfile.eps

That will often fix the issues (although you'll end up with a pdf file, which may or may not be good). Note, I haven't done this in a while, so your mileage may vary.

As I said elsewhere in this thread, another external tool might be better suited for this.

@Alejandro Jakubi 

Yes... academic contacts must be a large part of this... it would be a fascinating statistic to see the usage/sales numbers for the big three Ms and see how they've changed over the years... but one can only speculate.

The nice part of all of this is that the open-source alternatives have (especially recently) been gaining ground and maturing. In some areas, such as (arguably) linear algebra or plotting, they are already on par with, or more feature-complete than, the commercial offerings.

The answer may be highly personal and will surely depend on what kind of stuff you're interested in doing. The best way to make this decision might be to try them for yourself!... Pick a project that corresponds to something you "usually" work on, and redo it in all the various software packages you might be interested in (don't forget open-source collaborations like SAGE, which has come a long way in the last few years and, as far as I'm concerned at least, is hopefully the future of mathematical computing). This way you can see for yourself which aspects of a given language/environment you like and which ones you don't.

As a side note, Wolfram's (Mathematica's creator) marketing machine is really unstoppable. I am not quite sure how they do it, but in my community, every time mathematical software comes up, all I hear is how Mathematica is unquestionably "the best" at almost everything it does... this sometimes even comes from people who have never used any other software, and whose Mathematica experience consists of, say, one or two simple projects they did as undergrads. So at least as far as marketing goes, Mathematica probably is "the best" ;)
