These are replies submitted by acer

@Carl Love The change in semantics that you describe came about in Maple 2015.0, it seems. I don't see it described in either ?updates,Maple2015,compatibility or ?updates,Maple2015,Language.

I'd like to clarify that, while the syntax change you've mentioned is of related interest, this is not the cause of the original problem with procedure f as reported in this thread.

It certainly is interesting that orthopoly:-H(n,x) can now be used instead of orthopoly[':-H'](n,x), but I interpreted the original question as being more about how to write the procedure so that the calls to H could be written with just the short form of the name (and that being set up in a centralized manner).

Personally, I don't envision myself ever electing to use the syntax orthopoly:-H(n,x) over orthopoly[':-H'] since that unnecessarily blurs the fact that it's a reference to a table-based package (and the distinction may matter in some other way, like with uses).

This is the kind of question where it can really help to provide at least one complete example (whether larger, longer, or more involved) that fully illustrates the set of difficulties you're having, as well as any additional details about the domain.

Otherwise people can just waste time trying to provide suggestions that help only with the toy example.

If you are generating the data from within Maple itself then you can store it in a Matrix.

Within that same session you can then directly access those values for use in further computations, and there is absolutely no need to export them to a file.

If you need to access the data from another Maple session (eg. after restart, or from another worksheet) then you can use ExportMatrix in the original session and ImportMatrix in the subsequent sessions.
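For example (a minimal sketch; the Matrix and the file name here are just placeholders):

M := Matrix([[1.2, 3.4], [5.6, 7.8]]):   # stand-in for your generated data
ExportMatrix("mydata.txt", M):           # write it out from the original session

# later, after a restart or in another worksheet:
M2 := ImportMatrix("mydata.txt"):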

@Carl Love 

Very nice.

I'll just add one minor note, if that's OK.

By default those Sliders with float ranges will each have the Component property "Continuous Update on Drag" toggled on. That means that the GUI sends the underlying code (here, the numeric bvp solution procedure) more than a single call as the Slider is dragged. But the computation of each of these plots, for each passed value, takes a little while. So as one moves the Slider these computations can pile up in a queue, which can make the exploration seem clunky. However, the "Continuous Update" property can be toggled off for each Slider, in which case the bvp solver gets called only once per adjustment of the Slider (when you stop moving and release it). You may find the overall responsiveness a little better for this intensive example, with that setting.

For example,

Explore(
   OneFrame(C__1, C__2, C__3, C__4, C__5),
   parameters=[ seq([rng, continuous=false],
                     rng=[
     C__1= 1.2e8..1.8e8,
     C__2= 2e9..10e9,
     C__3= 2e8..6e8,
     C__4= 0..5e7,
     C__5= 3e7..5e7
                         ]) ],
   widthmode= percentage, width= 100 
);

Of course, as you mention, it's also possible to reduce the value passed for numpoints. (Or that value could also be another explored parameter.)

[edit] The continuous-update property of an exploration Slider can also be adjusted, post-insertion, by right-click on the Slider.

@nm Yes, I mean the Programming Guide.

Try using LibraryTools:-Save, and always supply the name of the desired, existing archive (string) as the last argument (instead of relying on savelibname).
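For example (a sketch; the archive path is hypothetical, and the archive itself is created just once beforehand):

LibraryTools:-Create("C:/mylibs/mytools.mla"):          # one-time creation of the archive
LibraryTools:-Save(MyModule, "C:/mylibs/mytools.mla");  # explicit archive as the last argument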

You are of course free to use whatever you prefer, that works for you.

If you go with savelib then watch out for stray .m files produced if it doesn't find a writable .mla archive. (A .m file will not contain a full and functional module, but just its shell, which is of no use.)

@mmcdara You may wish to measure the difference in time[real](), using an explicit loop where each time through the loop you forget that procedure (which would have to be a module export, or have kernelopts(opaquemodules=false) set so that forget can be called on it as a module local).
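Something along these lines (just a sketch, with P standing for the procedure in question and its arguments hypothetical):

st := time[real]():
for i to 100 do
    P(someargs):     # the call being measured
    forget(P);       # clear P's remember table between iterations
end do:
(time[real]() - st)/100;   # average real time per call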

 

@mmcdara It might be interesting to compare Proc2 with iterations=100 alongside Proc1 with a loop from 1 to 100.

And you should probably extract the "real time" from Usage, rather than the "cpu time", and ensure your machine is not otherwise under significant load.
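For instance, Usage can be asked to return just the real time (a sketch, reusing the Proc2 name from your example):

CodeTools:-Usage(Proc2(), iterations=100, output='realtime', quiet);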

But the results still might vary. And neither may be a clean representation of the arithmetic average of the running time. One reason for that is memoization, where some intermediate results of the computations are cached/remembered. It's not always possible to `forget` all of those between iterations.

There is also the possibility that garbage collection is being triggered differently between your two approaches, and it could be skewing either or both of the timing results. You could try to pare off the timing contribution of the garbage collector using the "real gc time" value (which is not normally printed by Usage), subtracting it from the "real time" result. But in modern Maple it is tricky to force a call to gc() to take effect right away.

And so it becomes very difficult to distinguish meaningfully between garbage-collection overhead that occurs during a computation and that which happens afterwards, when gc is triggered on remember tables that had `option remember, system`, in order to clean up garbage accrued during the measured calculations.

Measuring the effective average time for a short computation is thus difficult to do properly. And it's not even always clear what "properly" means, because of the gc mess.

A practical approach may make sense: the timing performance that matters should be measured in a way that matches the expected, eventual use. If a short computation is to be done just once then that's all you can properly measure. You can only properly measure the timing of many short computations (even repeats) by qualifying its meaning to correspond to an equivalent usage scenario. E.g., if the eventual end use includes the case of many short computations, including the memory management, then it is sensible to measure the timing of all of that together.

@anton_dys Yes,  I happen to know that someone is working on interactive plots and animations in the new interactive PlotBuilder.

I feel it is reasonably likely that the old Maplets-based interactive plot builder will not be removed as an accessible stand-alone command (ie. plots:-interactive) merely on the grounds that the new Embedded-Components-based PlotBuilder might get all that functionality. Maplesoft has a long history of not removing old commands, even when deprecated or superseded. Examples include LinearAlgebra and linalg, Statistics and stats, NumberTheory and numtheory, etc, where the older packages still exist in the product.

But the older Maplets-based interactive plot builder may only be available in user-interfaces that support Maplets and Java popups. So it's not available in the MaplePlayer, or in interactive Apps in the MapleCloud. But those interfaces have growing support for Embedded Components.

By the way, have you used the new Embedded-Components-based interactive plot builder in the right-side panel? I haven't seen comments about it on this forum.

@sand15 

In general one could collect both the time() and the time[real]() data, i.e. the "cpu time" and the "real time" respectively.

The "cpu time" as reported by Usage is an accumulation from all kernel threads that might be used during your calculation. But if there is successful parallelism then that "cpu time" can include a sum of timings of subcalculuations that may have overlapped temporally. So under successful kernel-level parallelism "cpu time" can be significantly and deceptively larger than the wall-clock duration of the computation. (By "wall-clock" I mean the time as measured by your kitchen clock or wristwatch.)  Your manual measurements obtained as the difference of time() calls is this cpu time.

The "real time" as reported by Usage is the wall-clock timing. It can be larger than the "cpu time" if your machine is loaded by other running programs or services.  On an otherwise unloaded machine the "real time" is usually the one that matters. Manual measurements of the difference of time[real]() calls can mirror this real time.

@anton_dys 

You could try out this attachment.

oldPBmenu.mw

@nm My previous comment also contained a second attachment, with another way, using just indets and no subs. 

Your task starts off with the requirement that y is the dependent variable and x is the independent variable. But dsolve approaches it in another way -- examining all the derivatives and the function calls within them, so as to infer the class of the problem and sort out the dependencies. I think it's highly likely that it uses indets for a significant part of that (or some utility procedures which in turn use indets). But its validation will differ from yours, because the requirements do.
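To illustrate the kind of indets queries involved (just a sketch, not dsolve's actual code):

expr := y(x)^2 + diff(y(x),x,x) + g(y(x)) + y^2:
indets(expr, 'specfunc'(y));       # the function calls to y, e.g. {y(x)}
indets(expr, 'identical'(y));      # the bare name y, if it occurs
indets(expr, 'specfunc'(diff));    # the derivative subexpressions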

@nm Does this do what you want?

restart;

F:=ee->`if`(indets(subs(y(x)=__y(x),ee),              # rename the wanted y(x) calls out of the way
                   {identical(y),'specfunc'(y)})={},  # any leftover bare y, or other y(...) calls?
            "OK","not OK"):

expr1:=y(x)^2+diff(y(x),x,x)+g(y(x))+diff(y(x),x)^2+Int(f(x),x);
F(expr1);
                                  "OK"

expr2:=y(x)^2+diff(y(x),x,x)+g(y(x))+diff(y(x),x)^2+Int(f(x),x)
       +y^2;
F(expr2);
                                "not OK"

expr3:=y(x)^2+diff(y(x),x,x)+g(y(x))+diff(y(x),x)^2+Int(f(x),x)
       +diff(y(x,z),x,x)+g(y(z))+diff(y(z),z)^2+Int(f(z),z);
F(expr3);
                                "not OK"

expr4:=y+y(x)+y(x)^2;
F(expr4);
                                "not OK"

expr5:=y(x)+y(x)^2;
F(expr5);
                                  "OK"

expr6:=sin(y)*cos(y(x))*sin(y(x)^2);
F(expr6);
                                "not OK"

expr7:=cos(y(x))*sin(y(x)^2);
F(expr7);
                                  "OK"

expr8:=sin(y(z))*cos(y(x))*sin(y(x)^2);
F(expr8);
                                "not OK"

Download indets_subs.mw

Or you might try either F or G here.

indets_subs2.mw

@Fabio92 This approach only works if both worksheets are opened and executed.

That means that -- every time he wants to work in any worksheet that doesn't define/construct the modules and procedures -- he first also has to open the defining worksheet and execute it in parallel.

That's quite a lot of ongoing effort.

@nm I didn't see before that the plain `y` was unwanted, as my eye saw a call y() in one of your later examples. Adjusting the type used by indets can accommodate that easily. (I'm away from my computer but will add it later.)

Using `has` to attempt this kind of thing is not The Way, IMO. The subsequent predicate used in remove/select ends up having to do the heavy lifting, and then either needs to utilize indets or be some complicated, fragile mess that simulates indets.

@rahinui I'm glad if it helps.

The conditional stuff with `tools/genglobal/name_counter` is just there so that the first substitution is done with _F1 or "higher", rather than with _F or _F0. I just thought it looked more sensible that way.

The heavy lifting is done by `tools/genglobal`, which is quite commonly used in system Library commands' code for the purpose of creating new, unassigned global names based upon some given name-stem.
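For example (a sketch; the exact names returned depend on the version and on the current counter state):

`tools/genglobal`('_F');   # a fresh, unassigned global name based on the stem, e.g. _F or _F0
`tools/genglobal`('_F');   # a subsequent call gives the next one in the sequence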

 
