Mac Dude

MaplePrimes Activity


These are replies submitted by Mac Dude

I occasionally teach a graduate course in accelerator physics with Maple. The script I use and distribute is a series of files in Document mode with 2-D input; being able to typeset it makes it much more readable. (I have not found a good way to deal with the unrelated issue of occasionally very long output lines, and I have sometimes asked myself whether there would be value in exposing the Maple input statements as 1-D input, but in black, to set them apart from the typeset text.)

I have not really made much out of the distinction between Document mode and Worksheet mode. I tell the students to set the prefs to Worksheet, which some do; some don't. I do not like getting hung up on technicalities like this; they don't add to the understanding of the material. In a way I wish Maple did not have two modes.

Just as others here, I encourage the students to use 1-D input because it is less ambiguous. Students being students, some ignore this and merrily work in 2-D input. I am particular about never using implicit multiplication on input, even in my own material; it is a significant source of trip-ups with little pedagogical value.
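
As a small illustration of the kind of ambiguity I mean (throwaway one-liners, not from my script):

f := x*(x+1);      # 1-D input: multiplication must be written out, so there is no ambiguity
# In 2-D input, typing x(x+1) without a space is parsed as the function x applied to x+1,
# while x (x+1) with a space is implicit multiplication. The two look nearly identical
# on screen but give very different results.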

My $ 0.02,

M.D.

@rlopez Ah, thanks much; this seems to be an improvement. My copy of Maple 2016 has not been installed yet so I will be looking for these entries once I have Maple 2016 up and running. In Maple 2015, the Markers entry in the View menu already exists but managing Document Blocks always eluded me so for the most part I ignored them.

I do wonder whether there is now a way to label a Document Block so that I can jump to it quickly...? One of the features I have been missing is a way to place labels and use them to jump around in a large Document.

As I mentioned before, I use Document mode to write lecture scripts and documentation for a large-ish package. Maybe Maple 2016 will make editing these less painful.

M.D.

 

@John Fredsted For your example, that is certainly so.

But consider the situation with multiple arguments to a function:

g~([n1,n2,n3],[p1,p2,p3]);
               [g(n1, p1), g(n2, p2), g(n3, p3)]

map(g,[n1,n2,n3],[p1,p2,p3]);
               [g(n1, [p1, p2, p3]), g(n2, [p1, p2, p3]), g(n3, [p1, p2, p3])]

It is this difference in behaviour I was thinking of in my reply to your answer. It has tripped me up a number of times so I am sensitive to the need to understand this. Needless to say, it is not about one form being correct and one being incorrect; it depends entirely on the intent and context.

Cheers,

M.D.

@John Fredsted and rich153: map and tilde(~) are similar but not equivalent.

Say you have a function that acts on one expression but accepts a list of parameters, eval(xpr,[a=1,b=2]) for example. Tilde will fail if you do eval~([xpr1,xpr2,xpr3],[a=1,b=2]) because the length of the parameter list differs from the number of elements eval is supposed to work on. On the other hand, map(eval,[xpr1,xpr2,xpr3],[a=1,b=2]) will do what you would expect (evaluate each xpr at the given point).
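
To make this concrete (a quick illustration with throwaway expressions):

map(eval, [a+b, a*b, a-b], [a=1, b=2]);
                           [3, 2, -1]
eval~([a+b, a*b, a-b], [a=1, b=2]);   # errors: 3 expressions vs. 2 equations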

Mac Dude

 

@rostam I really do not understand your issue. The first and last plot look perfectly fine on my machine (Retina MacBook Pro; OS X 10.10.5). No jaggies/steps/other artifacts.

M.D.

@Joe Riel Ok, you got me there.

However, all is not well. I fixed the code to use floats by making the constants floats. I also changed the exponents to something slightly above 1 so the numbers do not get out of hand, and I added a second task.

Things now work for 100 iterations. I get about 120% CPU usage, which seems OK since the example is dominated by the overhead of creating the threads and waiting for them to complete.

But when I now increase the iteration count to 3000 (three thousand), Maple again allocates in excess of 1 GB. This is actually insane; the numbers are now 15-digit floats and there should only be a few of them left in the system.

Here is the updated code & results:

restart;
                "Maple Initialization loaded..."
with(Threads);
f1:=x -> 1.+x^1.001;
f2:=x -> 1.+x^1.002;
                     f1 := x -> 1. + x^1.001
                     f2 := x -> 1. + x^1.002
x1:=0:
x2:=0:
tt:=time[real]():

for i from 1 to 3000 do
  id1:=Create(f1(x1),y1):
  id2:=Create(f2(x2),y2):
  Wait(id1,id2):
  x1:=y1:
  x2:=y2:
  y1:='y1':
  y2:='y2':
end do:

time[real]()-tt;
                             48.372
i;
                              3001
x1,x2;
              8.031636982*10^43, 1.927455013*10^774

I can accept the running time as overhead. I have a hard time with the memory allocation.

So I think it is still broken.

threadtest.mw

As far as Task is concerned, I have not figured out how to use it in a meaningful way. Aren't Threads much easier?
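
(From what I gather from the help page, a single pass of the loop above might look something like this in the Task model; an untested sketch on my part, so take it with a grain of salt:

f1 := x -> 1. + x^1.001:
f2 := x -> 1. + x^1.002:
combine2 := proc(r1, r2) r1, r2 end proc:   # continuation: receives both task results
y1, y2 := Threads:-Task:-Start( combine2, Task=[f1, 0.5], Task=[f2, 0.5] );

But it is not obvious to me that this buys anything over the explicit Create/Wait.)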

M.D.

As written, your sheet has no equations, just assignments. An equation has the form

A=B;

whereas

A:=B

is an assignment.

C[1] has an equation assigned to it, which is perfectly fine as long as you are aware of this.

The assignment for T[wb] appears to have a typo in T[wb][1,3]. For alph write alpha.

Note that Maple uses square brackets for indices, i.e. write V[b][1] instead of V[b](1) etc., although the round brackets ("programmer indexing") appear to work here.

Using T with and without indexing (T and T[wb]) is a good way to confuse Maple and yourself and should be avoided.

Finally, begin your sheet with

restart;

to flush out old assignments when re-executing the sheet as you work with it.

Then you can try the solve command to see whether Maple can solve your system; look up in Help how to use it. Your system does appear to be overdetermined, and transcendental equations involving cosine and sine terms are notoriously difficult to solve, so I have my doubts. I would see if I could break it down into smaller subgroups and make progress that way. Your E[7] e.g. is just –L.
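
The calling pattern for solve would be something like this (the equations and unknowns here are just placeholders, not your actual ones):

restart;
eqs  := { cos(alpha) + L*sin(alpha) = 1, L*cos(alpha) = 2 }:   # placeholder equations
vars := { alpha, L }:
sols := solve( eqs, vars );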

M.D.

 

@acer 

Going a bit OT here: if you compare your results and mine you can see how unstable they are, so clearly this function does not describe the data well. The a coefficient could be replaced by an additive constant on the year scale, which would likely produce a much more stable number. I don't know enough about the model to judge whether this makes any statistical sense, though (but then the year numbers are arbitrary as well, so in that sense absolute years also do not seem to mean much).
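
(Spelling out the algebra, assuming the model is a*exp(b*year): a*exp(b*year) = exp(b*year + ln(a)) = exp(b*(year - y0)) with y0 = -ln(a)/b, i.e. the prefactor a is equivalent to a shift of the year origin.)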

Why ExponentialFit comes up with a different answer is not so clear to me. (I hope it is not just fitting log(data) with a straight line and then taking the exp of the result, which would be wrong).

Having said this: when you plot the data on a log scale you can see that there actually seem to be two "time constants" (the inverse of the b coefficient) involved here; see the plot below (the line is my fit with the coefficients given above). This would suggest the model can be extended.

M.D.

@nrebman1 

Well, the function definition is not essential; I just do that routinely because it makes the program easier to read, and I can use the function in plotting (which I always do for fitting problems so I can see whether the fit is reasonable). What is essential is providing starting values; without those the algorithm is not able to start, presumably because the problem is so ill-conditioned. I also like specifying the fit variables, although maybe that is not essential here. At any rate, this is the way I typically program such problems.

I originally used 0.01 as the starting values for a and b; however, with those NonlinearFit runs out of iterations before converging. In that case it returns the last trial values, which I then used as the guess.
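
In outline, the pattern I use looks like this (xdata and ydata stand in for the actual data, and the model and starting values are just examples):

with(Statistics): with(plots):
model := t -> a*exp(b*t):                       # fit function, reusable for plotting
fitted := NonlinearFit( model(t), xdata, ydata, t,
                        initialvalues = [a = 1.0, b = 0.01] );
# the default output is the model expression with the fitted a and b substituted,
# so it can be plotted directly on top of the data as a sanity check:
display( ScatterPlot(xdata, ydata), plot(fitted, t = 0 .. 10) );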

M.D.

 

Very nice and very useful. I like the way you explain the tighter code by Carl and by Kitonum.

Keep 'em coming!

M.D.

 

Does the fit ever actually do anything?

NonlinearFit takes all kinds of additional input. You can specify the variables to be varied and you can specify starting values or ranges. Both of these are helpful in getting the fit to work and to converge. Please check the Help file on Statistics:-NonlinearFit; I do not remember all the details.

Also, without the data no one can troubleshoot your problem.

M.D.

@acer Oops, gotta read first before replying, I guess... but no, I do not have Maple 2016. It will be a while, as I am moving east and have to get that behind me first.

M.D.

@acer Unfortunately I do not keep older point releases so I cannot verify your observation.

M.D.

 

@Leinad evalc stands for "eval complex" (as you might have guessed). It is needed when you want functions like abs and argument to actually do what you expect, but it is also needed for the Re and Im functions to evaluate as expected. By itself, evalc tries to put a complex expression in canonical form (a+I*b) with the other variables all considered real.

While we are at it: evalf stands for "eval floating point" and is used to convert all numeric constants to floats and combine them as much as possible. This includes evaluating functions that would otherwise return unevaluated because no simple exact result exists for a specific set of arguments (e.g. cos(1) won't evaluate, but evalf(cos(1)) will). Handy when you ultimately want to do numerics.

eval in itself forces evaluation of its argument if possible. Sometimes needed when certain kinds of table constructs are used (the ones with "last name evaluation", i.e. tables, procedures and Records and maybe a few more). But eval also allows evaluation at a certain point (eval(xpr,x=1)) and to control the depth of evaluation (which in itself is a recursive process). RTFM.
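
A few quick one-liners to illustrate (just examples, nothing deep):

evalc( abs(a + I*b) );        # sqrt(a^2 + b^2); a and b are treated as real
evalc( Re( exp(I*x) ) );      # cos(x)
cos(1);                       # stays unevaluated (exact form)
evalf( cos(1) );              # 0.5403023059
eval( x^2 + x, x = 1 );       # evaluation at a point: 2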

Keep at it and read the manual (Help) on these. It is worth it and important to unlock Maple's power.

Mac Dude

 

@JohnS Thanks much, it works for me as well. I always seem to get hung up on the trivial issues...

M.D.
