acer

32490 Reputation

29 Badges

20 years, 9 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Sorry if I missed something, but is the call to with(Units) necessary?

Also, I notice that you did not load Units:-Standard, and instead call `simplify` to combine units in results. Is that because things misbehave, if you do? (I'm thinking about some unit cancellation by Units:-Standard:-`=`, but maybe I misremember.)

acer

@okoolo You forgot the tilde (~) after the `ln`, when doing ln~(p).

The tilde makes the action be applied elementwise, similar in this case to issuing map(ln,p).
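For instance, a small sketch of the equivalence (values here are just illustrative):

```maple
p := Vector([1.0, 2.0, 3.0]):
ln~(p);      # elementwise natural log
map(ln, p);  # same result, applying ln to each entry via map
```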

@Christopher2222 Robert's suggestion is to produce a variation on `RandomArray` which would use whatever values of mu and sigma happen to be (not as local names or parameters of that procedure). That would allow you to produce Arrays with variations on the normal distribution, on the fly.

But it would be better to have such a new routine be named something else entirely, since as Robert mentioned you'd otherwise be in danger of breaking some other routine's dependence on `RandomArray`. Better to call it `myRandomArray` or something else, and not to unprotect and change the original at all.

But in your comment immediately above you've also implied having replaced 'Normal(0,1)' with some other literal numeric parameters such as 'Normal(0,3)', rather than Robert's suggestion of using unassigned names `mu` and `sigma`. Note that repeatedly reconstructing the entire procedure and its body via `subs` and `eval`, just to mimic dynamically changing parameters, is a needlessly inefficient way to program. Whatever you do, don't devise a workflow wherein you repeatedly recreate the entire procedure with new literal numeric parameters in its body.

I feel compelled to mention once more, sorry: doing this is inferior to just using Statistics:-Sample to obtain an Array with random float entries in modern Maple.
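As a hedged sketch of that last point: rather than rebuilding the procedure for each new mean and deviation, the parameters can simply be arguments (the name `gen` is just illustrative):

```maple
# A parametrized generator: no subs/eval rebuilding needed.
gen := (m, s, n) -> Statistics:-Sample(Statistics:-RandomVariable(Normal(m, s)), n):
A := gen(0, 3, 1000):  # 1000 samples from Normal(0,3)
```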

@Christopher2222 If I may offer a little advice here, it would be to listen to Erik's suggestions. Statistics:-Sample can already produce what you've asked about, so why expend effort making some other routine do it too? The routine you write about changing merely calls Statistics:-Sample to do the work anyway.

Erik's example a4 above shows that lines 18 and 19 of ArrayTools:-RandomArray might be combined (optimized) into just a single line using Statistics:-Sample.

You mentioned altering routines. Do you mean like this? The only reasons I did it that way were a) I was doing it for someone else to use, and so could publicly post a solution without posting the full source of the routine and potentially violating copyright, and b) it was fun. That is overkill for rewriting something for yourself. (...and nobody else needs it, as Statistics:-Sample does it directly already).

The ArrayTools:-RandomArray routine is only about 30 lines of non-comment code. Instead of using `showstat`, you could just `print` it, then edit it to suit yourself, and then savelib to a new private archive if you want to re-use. You could change the parameter-processing typecheck on the 'distribution' keyword parameter so as to accept something like 'normal(m,s)' instead, and to use the `m` and `s` when it calls Statistics:-Sample internally.
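For reference, one way to inspect the routine before editing it (a sketch; details vary by Maple version):

```maple
interface(verboseproc = 3):      # show full bodies of library procedures
print(ArrayTools:-RandomArray);  # or: showstat(ArrayTools:-RandomArray);
```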

It should be a very simple set of edits. But I would still advise leaving ArrayTools:-RandomArray by the wayside.

One thing that you very much ought not to do is to change and then savelib your working copy of RandomArray in such a way that your sessions get the new version by default. You almost certainly cannot know (without a comprehensive test suite) that no other standard routine relies upon the current behaviour (restrictive as it may be) and might malfunction were it changed.

SCRs to submit: RandomArray might call Sample better, to remove some calls to Reshape. RandomArray should use 'datatype'='float' instead of 'datatype'='float[8]'.

@Christopher2222 I don't know exactly what herclau intended, but here is one way to adjust that code,

restart:

N := 30000:

start_time:=time[real]():

t := Vector(N, i->evalf[4]((2*i-2)*Pi/N), datatype = float[8]):

x := Vector(N, i-> cos(10*t[i])+I*sin(10*t[i])
     + ArrayTools[RandomArray](distribution = normal)/10.0
     + I*ArrayTools[RandomArray](distribution = normal)/10.0,
            datatype = complex[8]):

time[real]()-start_time;
                             44.754

plot(<map(Re,x)|map(Im,x)>,style=point,symbol=solidcircle,symbolsize=4);

time[real]()-start_time;
                             48.649
 

Without the imaginary noise component it might come out like this, with such an approach,

restart:
N := 30000:
start_time:=time[real]():
t := Vector(N, i->evalf[4]((2*i-2)*Pi/N), datatype = float[8]):
x := Vector(N, i-> cos(10*t[i])+I*sin(10*t[i])
     + ArrayTools[RandomArray](distribution = normal)/10.0,
            datatype = complex[8]):
plot(<map(Re,x)|map(Im,x)>,style=point,symbol=solidcircle,symbolsize=4);

I really do not wish to offend anyone, but it can be done about 100-1000 times faster (see my other Answer).

I suppose that <map(Re,x)|map(Im,x)> might also be obtained more efficiently using ArrayTools:-ComplexAsFloat. Anyway, I was only guessing at what herclau intended.
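For comparison, a guess at one faster variant: pull a single vectorized Sample call out of the element constructor, instead of calling RandomArray once per entry (an untested sketch):

```maple
restart:
N := 30000:
t := Vector(N, i -> evalf((2*i-2)*Pi/N), datatype = float[8]):
# One Sample call produces all N noise entries at once.
noise := Statistics:-Sample(Normal(0, 0.1), N):
x := Vector(N, i -> cos(10*t[i]) + I*sin(10*t[i]) + noise[i],
            datatype = complex[8]):
```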


@Christopher2222 Yes, that's part of the details I've amassed for bug reports on this. It's a real pity, too, since the Maple 12 plot which appears inlined in the worksheet, as seen above, is quite beautiful and "professional looking". It's quite close to what I'd expect to see as high-end graphics in a journal. But the plot produced at the smallest pointsize that exports as non-blank (similar to the other plot above) is quite the opposite.

I didn't mention it earlier, but it would also be possible to quickly generate an image file for this that looked good, and the axes & extras can often be faked acceptably in an image formed using ImageTools. And this is fastest: no display in the GUI at all; just create and export.

Also, I didn't try the programmatic export drivers for Standard, which seem different from the right-click export drivers. And I didn't try programmatic export from Classic/CLI.
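For what it's worth, the programmatic route I'm referring to looks roughly like this (a sketch; driver names and plotoptions strings vary by platform and version):

```maple
plotsetup(png, plotoutput = "scatter.png",
          plotoptions = "width=800,height=600"):
plot(sin(x), x = 0 .. 2*Pi);  # rendered straight to file, not the GUI
plotsetup(default):           # restore inline display
```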

@herclau If I understand your meaning, then, sure, there is often a tradeoff between performance and the need for "expert" coding. I'm pretty sure that one could construct an approach for this post's topics which might be almost as fast and not nearly as obscure.

Sometimes I fiddle with trying to find faster ways just because I'm interested in seeing how much of a difference there is between 'natural' ways and optimized ways. And I like to find out where the bottlenecks are. Clearly there is a development goal of making more 'natural' ways to code a task also be amongst the faster performers. And it's often the opposite, unfortunately: the most natural ways can be amongst the worst performers. So examining alternatives is a useful exercise.

From experience, I know that producing point-plots of many points (10s or 100s of thousands) can be very expensive in Maple, and that writing tuned code can make the feasible problem sizes quite a bit larger. This is an area where a lot of improvement is possible, and there seems to be a demand for it.

I don't know what the best way is to learn technique here. Perhaps there is a window of opportunity for a book on the broader subject of numeric computation & performance in Maple. (Some days I wonder if I myself could write one well.) I do think that there is a BIG market for a good book on Maple plotting, but I'm certainly not the right person to write such a book alone.

As Joe mentions, there are lots of ways to get this for simpler examples. Here's a minor variant. (And yet another here).

restart:

show:=proc(expr::uneval)
   subsindets(expr,And(name,satisfies(t->type(eval(t),constant))),
              ``@eval)
   = eval(expr):
end proc:

a:=5;
                               5

b:=6;
                               6

c:=100;
                              100

show(a+b);
                         (5) + (6) = 11

show(a*b-c^2);
                                   2        
                    (5) (6) - (100)  = -9970

acer
