acer


MaplePrimes Activity


These are replies submitted by acer

I didn't mean to imply that it was necessarily different from all other objects. But it's natural to imagine that someone would want, as a data structure, an object which would not suffer from unwanted evaluations. Prior to kernel-based parameter processing (which is a very recent development) there was a slew of ways to inadvertently get unwanted evaluations of data or parameter options. Consider optional keyword parameters: what else but a string could be used as such a keyword and not be at risk of extra evaluation? How else would one make a keyword an unassignable name? Look at the sorts of evaluation that typical use of ProcessOptions entails. I was thinking about ways to pass data around while shielding it from unwanted evaluation. So far, the rtable looks not so bad.
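As a rough illustration of what I mean about strings as keywords (just a sketch in the old args/nargs style; the procedure name and the "verbose" option are made up, not anything from the library):

squareit := proc(x)
    # a string option cannot be assigned a value, so it cannot pick up
    # an unwanted evaluation the way an ordinary name could
    if nargs > 1 and args[2] = "verbose" then
        print("squaring", x);
    end if;
    x^2;
end proc:

squareit(3);              # 9
squareit(3, "verbose");   # prints, then returns 9

acer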
Of course, yes, backwards compatibility is very important, so the behaviour cannot be changed now. But perhaps the documentation could be improved? Or am I just not finding it?

On the help-page ?eval there is only the passage below, with no mention of this greater-than-1-level evaluation behaviour for entries of table parameters within procedures:

"For example, if x := y and y := 1 in a Maple session, what is the value of x; ? In an interactive session where x and y are global variables, x would evaluate to 1 and we would say that x is ``fully evaluated''. For one-level evaluation, we would use the command eval(x, 1) which would in this case yield y. However, inside a Maple procedure, if x is a local variable or a parameter, then x evaluates to y and we would say x evaluated ``one level''."

So, in the above, there is no mention of this issue. This 2-level evaluation rule for last-name-eval parameters like table also isn't mentioned on ?updates,v40 or ?lastnameevaluation , as far as I could see.

Assigning the table to a local, in the procedure that accesses the entries, does make it possible to avoid the extra evaluation. Thank you for that. It would be nice to use rtables instead, but one of the great things about a table is that it may be resized at any time. That's not possible with rtables, so it's difficult to be efficient with them when the final data set size is not known at initial creation time. The other great thing about tables is that they may be indexed by things other than integers, but that's not so for the rtable. It would be nice to have something like a table, with resizability and the ability to be indexed more loosely, but without last-name-eval, and whose entries evaluate only 1 level on access when it is a procedure parameter or a local.
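A minimal sketch of that workaround, as I understand it (the procedure and names here are invented for illustration; that this avoids the extra evaluation is as reported in this thread, not something re-verified here):

getentry := proc(t)
    local T;
    T := t;     # first assign the table parameter to a local ...
    T[1];       # ... and access entries through the local, for only 1-level evaluation
end proc:

acer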
Thanks very much for the response. My example's array parameter was actually assigned to a local (y), but only in the outer procedure. I suppose you are saying that, in any deeper nesting of procedure calls, the table parameter would again have to be assigned to a local? That is, assigned to a local in each inner procedure in which 1-level eval was wanted? That seems onerous.

I still don't see why the level of evaluation for the elements of a table is deemed correct. Running an example with a list, instead of a table, produces exp(0) even from inside the inner procedure. I would claim that the level of evaluation of the entry of the table -- over and above what is needed to accommodate last-name-eval -- is wrong. It would be right were it to produce the same result as occurs when accessing the entry in the list case.

Even if one accepts the rationale (which I don't, sorry) it still seems like an overly expensive hack, to get around the fact that last-name-eval tables don't get their contents fully evaluated when first passed in from the top-level. It means that extra evaluation is done upon each and every subsequent access of each table entry, instead of just once per entry up front. Wouldn't it be better to allow the programmer to choose whether to evaluate the table entries fully (just the once, up front, or...)? I can see that it's tricky, of course. Suppose one wants to definitely not fully evaluate all the entries, and that one also wants somehow to get the level of evaluation that you described of some particular table entry. A mechanism for that is desirable. Having such a mechanism always take place is less desirable, although that is the current state of affairs -- excluding hacks to get around hacks.

Those many test failures that might occur when changing the evaluation rule for table parameters presumably occur because code was written to work around the current behaviour. Such test failures can't be much of a justification in and of themselves. But the behaviour still seems hackish, and it makes Maple's evaluation rules more complicated. I wouldn't know where to find it documented in the help-pages, other than ?updates,v40 . acer
You should be able to set the cutoff size below which Matrices get all their entries printed. For example, interface(rtablesize=15);

Also, you might be able to increase the working precision of the code, for more accurate results. You could try, for example, Digits := trunc(evalhf(Digits)); at the start of the code, to set the working precision to just under the level that still allows some hardware double precision in the computations. But be warned that as Digits increases so too may the execution time and memory allocation.
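That is, something along these lines at the start of the worksheet (the rtablesize value of 15 is just an example cutoff):

interface(rtablesize = 15):        # print all entries of Matrices up to 15x15
Digits := trunc(evalhf(Digits));   # highest working precision that still permits
                                   # some hardware double-precision computation

acer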
Thanks for the explanation. I did know the 'scalar' example behaviour, and yes, I expected it. But this table case, which disobeys the evaluation rules for local variables, bothers me. It's a bug, I don't see the behaviour documented anywhere as a special case, and it can cause problems and catch the unwary. I would bet that Maple's own library routines are not all prepared to work around this case, either. I have not yet been able to imagine situations in which this 'hack' would be necessary, and I wonder how common such situations could be (and whether they are so crucial as to make the reported bug 'worth it'). acer
A few things could be mentioned about LinBox (which I assume you're citing by referring to Zhendong Wan's webpage) and Maple. Some LinearAlgebra routines, such as Determinant and CharacteristicPolynomial, are more efficient in Maple 11 on integer datatype dense Matrices, through internal use of LinearAlgebra[Modular]. See the help-page ?updates,Maple11,efficiency for more details. For example, in Maple 9.5.1 the Determinant of a 200x200 datatype=integer Matrix takes 27 sec and allocates 27 MB on my machine. In Maple 10 it takes 13 sec and allocates 46 MB. But in Maple 11 it takes 1.4 sec and allocates only 4.5 MB. So, some inroads have begun to be made for dense exact integer cases, and that relative performance comparison chart is a bit out of date. Another thing to notice about LinBox is that there are a great many people listed as contributors. It took a few people three years to produce their first "stable" 1.0 version, and two more years to get to their current version 1.1 performance.
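For reference, the sort of quick measurement behind those numbers looks roughly like this (the generator range is arbitrary; timings will of course vary by machine and release):

M := LinearAlgebra:-RandomMatrix(200, 200, generator = -99 .. 99,
                                 outputoptions = [datatype = integer]):
st := time():
LinearAlgebra:-Determinant(M):
time() - st;    # elapsed CPU time, in seconds

acer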
Unless the datatype is a hardware type or a software float type, the underlying data structure for a storage=sparse rtable is, I believe, a Maple sparse table. If instead, for other datatypes, the data structure were similar to that used for the hardware or float datatypes, then a start might be made on sparse exact linear algebra. I mean storage in a triple of C arrays: two integer arrays for the row and column indices, and one array of (say) ALGEB pointers for the values. There'd still be a lot of work to do, but this might be a start. I wanted to suggest this for datatype=rational, but that could mean a lot of work up front for somebody, to make sure that such rtables continue to work as usual all through Maple. Maybe it would be "easier" if it were done for some new combination, like storage=sparse,datatype=algeb. Gosh, I don't know. It's also good to be realistic: sparse exact linear algebra could be a good addition for Maple while also being low on a great many people's priority lists. If that's true, then maybe someone other than Maplesoft might try to do it. Most if not all of the knowledge needed to do it is in the external-calling details in the Advanced Programming Guide.
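At the user level, the two situations described above can be set up with something like this (sizes arbitrary; per the description above, the float[8] case does not use a Maple table underneath, while the default exact case does):

A := Matrix(1000, 1000, storage = sparse, datatype = float[8]):   # hardware-float sparse storage
B := Matrix(1000, 1000, storage = sparse):                        # exact entries; a Maple sparse table underneath
A[1, 2] := 2.5:
B[1, 2] := 3/7:

acer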
Thanks very much, Axel, for the explanation. The timings in the worksheet indicate that your Maple-language implementations, the translations, might be 4-5 times faster than Maple's own when run outside of evalhf -- is that right? And far more importantly, your implementations are evalhf'able! The BesselK1 implementation seems to be quite accurate, except perhaps for very small arguments, judging by the graph. I find this to be very exciting.
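The kind of timing pattern I mean is roughly the following, where f is just a small numeric stand-in (a truncated exp series, nothing to do with your Bessel code) used to show the evalhf versus software-float comparison:

f := proc(x)
    local s, k;
    s := 1.0;
    for k from 50 by -1 to 1 do
        s := 1.0 + x*s/k;
    end do;
    s;
end proc:

st := time(): seq(evalhf(f(0.3)), i = 1 .. 10^5): time() - st;   # hardware floats
st := time(): seq(f(0.3), i = 1 .. 10^5): time() - st;           # software floats

acer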
My point was probably not clear enough, sorry. I am quite familiar with how procedures manage their returns. My issue is with what each of the procedures prints: outerproc is printing exp(0) while innerproc is printing 1. Now, y is an array, which has last-name eval. And, more to the issue, y is a local variable of outerproc, so I don't see why it should be getting that many levels of evaluation by the time innerproc gets it (for printing, or what have you). I expected to see exp(0) within innerproc as well.
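For concreteness, the shape of the example I have in mind is roughly this (reconstructed for illustration; the printed results are those reported in this thread, and may depend on the release):

innerproc := proc(t)
    print(t[1]);                # reportedly prints 1 (extra evaluation of the entry)
end proc:

outerproc := proc()
    local y;
    y := array([ 'exp(0)' ]);   # store the call unevaluated
    print(y[1]);                # reportedly prints exp(0), as expected for a local
    innerproc(y);
end proc:

outerproc();

acer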
Where's your mint symlink? acer
The readme.txt file in bessel.tar.gz says, "copyright Copyright(C) 1996 Takuya OOURA (email: ...). You may use, copy, modify this code for any purpose and without fee. You may distribute this ORIGINAL package." Is that what was in the source that you translated, may I ask, Axel? I couldn't see from it whether it gives permission to distribute the code in modified or translated form. I know very little about the legalities of such things; perhaps someone here could explain it to me... acer
The symlinks shouldn't have been necessary. Could you not simply have run, % maple -binary IBM_INTEL_LINUX to run the 32-bit version that you had installed? Given that the above works, you should be able to install and run both the 32-bit and 64-bit versions under the same principal location. But with those symlinks in place, you could only install the two instances to two completely separate locations, with the duplication (and disk use) of having two full, identical sets of the .mla archives and .hdb help-databases (which are platform independent). acer
I realize that the loop code example was just meant to be rough and quick. But efficiency bears mentioning once again. The programming method of appending to a list within a loop is quite unnecessarily inefficient. By that, I mean things like this, Indexf:=[ ]; for i from... do Indexf:=[op(Indexf),i]; end do: This produces a new list, to be garbage-collected, each time through the loop.

At a problem size of 1000000 the garbage collection and list creation swamp the rest of the task, and on my machine the selection takes over 1000 sec (!) and allocates over 115 MB of memory. The simple conversion of the original Array to a list, followed by a call to select, takes about 5 sec and uses about 33 MB of allocated memory. The posted rtable_scanblock approach, with the final conversion to a list removed and returning table G instead, takes about 13 sec and uses about 27 MB of allocated memory. I wish it could be made faster, although really its strength is that it can be extended to do more involved tasks.

It's a good idea to always try to think a little about efficiency and the complexity (cycles and memory) of one's implementation. It'll help greatly when one comes to writing code for larger or more involved problems. It all adds up, eventually, whether O(n) or O(n^2), etc. It's not so much a race or competition to find the very fastest method as that it can make or break an implementation to at least get the complexity of the implementation right.
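To make the comparison concrete, here is a small self-contained sketch of the two approaches (the data and the selection predicate are invented purely for illustration; at a size of 10^6 the gap becomes as dramatic as described above):

N := 10^4:
A := Array(1 .. N, i -> irem(i^2, 17)):

# inefficient: a brand new list is built, and the old one left for the
# garbage collector, on every single pass through the loop
vals := []:
for i from 1 to N do
    if A[i] > 10 then
        vals := [op(vals), A[i]];
    end if;
end do:

# far cheaper: convert the Array to a list just once, then select
vals2 := select(x -> x > 10, convert(A, list)):

acer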