acer


MaplePrimes Activity


These are replies submitted by acer

Select the omega_0^2 in the original input (not the output), using the mouse. Right-click to get the context menu, and go to the submenu:

    2-D Math -> Convert To

Then toggle the check-box in that final submenu, titled "Atomic Identifier". It should prompt you with further check-boxes, asking whether to include the superscript or the subscript of the selected omega_0^2 subexpression. Check only the box "Include subscript as part of identifier".

Then re-execute that input line, and re-execute your second line which calls A(omega_1) = A(omega_2). This should now treat the subscripted omega_0 as quite distinct from the name omega itself.

acer
In 2-D Math input mode, in a Document, enter these keystrokes:

    t_0

This produces t[0], which can be tested by entering lprint(%) on the next line. Now go back to the 2-D Math input (not the output) of the original line:

1) Use the mouse to select the subscripted t0.
2) Right-click to get the context menu.
3) Select the context-menu command 2-D Math -> Convert To, and in the final submenu toggle the box titled "Atomic Identifier".
4) Hit return again on the input line, then repeat the lprint(%).

This now produces the same name (not t[0]!) as one gets when using the subliteral entry of the Layout palette.

acer
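A minimal sketch of what lprint shows in each case. The exact spelling of the atomic identifier below reflects how the GUI encodes typeset names as ordinary (but oddly spelled) Maple names, so treat the details as illustrative:

```maple
lprint(t[0]);                      # indexed name: prints as t[0]
lprint(`#msub(mi("t"),mn("0"))`);  # atomic identifier: a single independent name
evalb( t[0] = `#msub(mi("t"),mn("0"))` );  # false, they are distinct names
```

The atomic identifier is just an ordinary Maple name whose spelling happens to encode the typesetting, so it has no programmatic connection to either t or the indexed name t[0].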
2-D Math typeset input is quite recent, and that is what matters more here, I think. It is the design of 2-D Math in Documents, and of input in particular, that I was thinking of most. I would accept a change to the manner in which indexed names in 2-D Math output are typeset, if it meant that these problems arising from typeset subscripted indexed names in 2-D Math input were solved. As an alternative, what about Typesetting:-RuleAssistant? Couldn't the fine control be added there, with some more natural behaviour made the default? acer
Thanks for your input, Roman. I do not suggest making t and t[0] independent names. And the difficulty is not merely that some users do not realize that they should not mix t[0] and t. The difficulty is also that the GUI muddles them further by representing t[0] as what appears to be a quite independent name, t-subscripted-0. My suggestion was to make t_0 print a subscripted zero, and to make t[0] always print as t[0], yes. That should make matters clear to the user. I have seen many Documents, written by many different people, and I cannot recall once seeing a subscript itself treated as a variable or incremented. Of course, it's quite possible that some users would want it, but I cannot recall having yet seen it. Of course a user would continue to have available the ability to make an indexed name like t[i] get its index changed. I consider that, were it to solve the problem of users getting caught and confused in the current muddle between unindexed name and subscripted name, removing the ability to programmatically change the index of (only) a *typeset* subscripted name would be acceptable. I disagree that it would be mostly useless, because I have seen so many Documents make fine use of subscripted names without relying on that functionality at all. Even if we do not agree, I think that it's very good to hear differing opinions on this sort of thing. It's very difficult to get a design completely right the first time. Tweaks made sooner rather than later can prevent issues from becoming set in stone. acer
Does this serve? Perhaps you don't really want the second index of EBO to take the values p[1] and p[2], since your later comment indicates that you want it to have columns 1 and 2.

restart:
p[1] := 1; p[2] := 4;
EBO := Array(1..10, 1..2):
for n to 2 do
  for s to 10 do
    EBO[s, n] := sum((x-s)*p[n]^x*evalf(exp(-p[n]))/factorial(x),
                     x = s+1 .. infinity);
  od:
od:
EBO;

acer
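That infinite sum is the expected backorder E[(X-s)^+] for a Poisson random variable X with mean p[n]. As a quick sanity check (a sketch, using the standard identity E[X-s] = E[(X-s)^+] - E[(s-X)^+], so that the infinite tail sum can be compared against a finite sum), one entry can be verified:

```maple
p := 4: s := 3:
# tail sum, as in the loop above
v1 := evalf(sum((x-s)*p^x*exp(-p)/factorial(x), x = s+1 .. infinity)):
# equivalent finite form: (p - s) + E[(s-X)^+]
v2 := evalf((p - s) + add((s-x)*p^x*exp(-p)/factorial(x), x = 0 .. s)):
v1 - v2;  # should be essentially 0.
```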
I don't see 64-bit Windows (XP or Vista) listed among the system requirements of Maple 11 on this webpage: http://www.maplesoft.com/products/Maple11/system_requirements.aspx acer
The author starts out by discussing software in general. But at some point a restriction to information software gets imposed, with "Most software is information software." A great deal of the subsequent generalizations and assertions, including those about the problems of interactivity, seem subject to that qualification. Maple is not just software for querying information -- it's also used investigatively and creatively, to do math. Mathematics is not just a large data set to be navigated. So if one were to accept ideas in the paper, as they relate to information software, it wouldn't be clear that the same conclusions would hold for Maple when in its creative software role. Of course Maple does also get used as information software, and there are several ways in which the points in the paper can be related to Maple. It is an interesting read over and above how I chose to interpret it while thinking about Maple. It's very nice that you posted it here. acer
The author starts out with an omission, claiming that modern software is experienced almost exclusively through pictures or pointing and pushing. Mathematical software, such as Maple, can also allow interaction by communicating via language. Language is another distinct and important mode of interaction and exchange of information, and it's surprising to see it overlooked in what purports to be a near complete characterization. The communication of mathematics, even when done using specialised notation, is a mechanism with linguistic aspects. acer
Perhaps this anecdote might explain it. acer
Planck's constant has a dimension. It's not so important what the dimensions are (angular momentum) so much as what consequences that brings with it. *You* get to pick your own units for that dimension. So you can decide that Planck's constant is just about any number in magnitude. It depends on what system of units you choose. It's not as if the SI system were god-given! Choose an appropriate system of units, and the constant can then be of size 1 in that system. By that I mean, choose a base size for the units of length, time, etc, so that the constant is 1 in that system. If the question were rephrased to be something like, "is Planck's constant irrational when expressed using SI units?" then it might depend on stuff like whether space itself (distance, or time) is quantized. acer
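The rescaling argument can be made concrete (a sketch; the numeric value is an approximate SI value, used only for illustration):

```maple
h_SI := 6.62607e-34:   # Planck's constant in J*s, approximate
# Define a new unit of action: 1 natural_unit = h_SI J*s.
# Expressed in that unit, the constant is exactly 1, whatever its
# (possibly irrational) magnitude in J*s happens to be.
h_natural := h_SI / h_SI;
```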
Keeping the total memory allocation down is often an important part of efficiency. But keeping the memory usage (garbage production and cleanup) down can also lead to time savings.

I suspect that ArrayTools:-Reshape produces a full copy of its rtable argument, while ArrayTools:-Alias actually provides an alternate view of the same data in memory, without making a copy. So using Alias can be more efficient than using Reshape.

Also, in test3 as given, it was not necessary to keep all of the instances of Vector a around. And indeed in Joe's test3a the Vector a was overwritten, due to a being reassigned. But each assignment to a in test3a extracts a new row of A, producing a new Vector each time. Memory usage can be reduced by instead forming the container for Vector a just once, and then doing a fast copy of the appropriate row of A into Vector a. So, assuming that I coded it correctly,

test3b := proc(N,n)
  local i,a,X,A;
  use Statistics in
    X := RandomVariable(Normal(0,1)):
    A := ArrayTools:-Alias(Sample(X,n*N),[N,n],Fortran_order);
    a := Vector[row](n,datatype=float,order=Fortran_order);
    for i from 1 to N do
      ArrayTools:-Copy(n,A,n*(i-1),1,a,0,1);
    end do;
  end use;
end proc:

Now, to measure the efficiency, let's also look at memory used and memory allocated, as well as time.

> (st,ba,bu):=time(),kernelopts(bytesalloc),kernelopts(bytesused):
> test3(30000,100):
> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                      18.419, 8518120, 899923608

> (st,ba,bu):=time(),kernelopts(bytesalloc),kernelopts(bytesused):
> test3a(30000,100):
> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                      0.711, 56088544, 79456288

> (st,ba,bu):=time(),kernelopts(bytesalloc),kernelopts(bytesused):
> test3b(30000,100):
> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                      0.571, 31844664, 35960072

Of course, what would be really nice would be to get the speed of test3b or test3a together with the lower memory allocation (by at least a factor of four) of test3.
This may not be currently possible. One way to get it might be an enhancement to the Statistics:-Sample routine, so that it accepts as an optional argument the container Vector for the result. It could then re-use the same Vector container, filling it with new sample data in-place. That would allow the total memory allocation to stay low, generating only n pieces of data at any one time, while avoiding production of a lot of Vectors as garbage. It's a bit of a shame that it isn't implemented in this way. Since at default Digits=10 the Vectors have hardware double-precision float[8] datatype, there wouldn't be garbage produced from all the entries as software floats.

ps. It may be that A should be C_order, according to how I set up the strides for ArrayTools:-Alias, so that it walks rows of A and not columns. It shouldn't affect the timings much.

acer
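The three-line measurement pattern repeated above could be wrapped in a small helper. A sketch (`measure` is a made-up name here, not a stock Maple routine):

```maple
measure := proc(f)
  local st, ba, bu;
  st, ba, bu := time(), kernelopts(bytesalloc), kernelopts(bytesused);
  f();
  # return elapsed time, bytes newly allocated, bytes used
  time() - st, kernelopts(bytesalloc) - ba, kernelopts(bytesused) - bu;
end proc:

measure( proc() test3b(30000,100) end proc );
```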
Jacques' points about pragmatism and making progress in the face of technical or theoretical difficulties are very much to the point.

m := matrix(2,3,[1,1,a,1,1,b]):
linalg[LUdecomp](m,'U1'='u1'):
seq(`if`(type(u1[i,i],Non(constant)),u1[i,i],NULL),i=1..2);

# Are these bugs below?
# How about better documentation of the following?
# Eg, as an Example on the gaussjord help-page.

Testzero := proc(x)
  if is(x=0) = true then true else false; end if;
end proc:

linalg[gaussjord](m) assuming b-a=0;

# Should it work for LinearAlgebra too?
# Or is Normalizer used when Testzero ought to be instead?

M := Matrix(m):
LinearAlgebra[ReducedRowEchelonForm](M) assuming b-a=0;

So I also wish for more thorough documentation of Testzero and Normalizer: of where each is used, and why.

ps. Yes, I realize that I could probably have gotten away with an even simpler true/false/FAIL conditional in my Testzero. Apologies if it brings up anyone's pet peeves. acer
One can also consider memory usage as an important component of efficiency. Apart from wanting to keep memory allocation down for its own benefits, one can also try to keep memory use and re-use down so as to minimize garbage collection time. Other benefits of this sort of array memory optimization are that total memory allocation can sometimes be reduced, that large problems can become tractable, and that speed is sometimes improved. The reasons for this can be complicated, but they relate to memory fragmentation and also to the fact that garbage is not immediately freed.

For example,

Y := (X.beta)^%T;

might become

Y := X.beta;
LinearAlgebra:-Transpose(Y,inplace=true);

That should avoid producing an unnecessary object of the size of Y.

The creation of Arrays n1 and n2 by forming each N12 sub-Array might also be improved. One might be able to allocate empty n1 and n2 Vector[row]'s of the desired size and datatype, just once, and then use ArrayTools:-Copy each time through the loop so as to get the right portion of N12 into them. The key would be using the right offset and stride.

One might also be able to allocate the space for Y1 and Y2, used in procedure `compute`, just the once. Eg, produce Y1 and Y2 as empty Vector[row]'s of the desired datatype, outside of `compute`, just once. Then, outside of `compute` but each time through the loop, use ArrayTools:-Copy to get Y into Y1. Follow that by VectorAdd(Y1,n1,inplace=true). And similarly for Y2.

Notice also that n1 and n2 might only get used in `compute` to produce Yp. So why not produce a re-usable container for Yp just once, and never produce Y1 and Y2 at all!
How about something like this,

Yp := Vector[row](num,datatype=float);  # just once
n1 := Vector[row](num,datatype=float);  # just once

# and now, inside the i loop
ArrayTools:-Copy(...,N12,...n1...);  # get data into n1
ArrayTools:-Copy(...,N12,...Yp...);  # get n2 into Yp
VectorAdd(Yp,n1,inplace=true);
VectorScalarMultiply(Yp,0.5,inplace=true);  # (n1+n2)/2
VectorAdd(Yp,Y,inplace=true);  # (Y+n1 + Y+n2)/2 = (Y1+Y2)/2

Now consider the lines in `compute` like,

add(Yp[k]*X[k,3]/(n/2),k=1..n):

These lines are the only places where Yp gets used, yes? So why not first scale Yp, inplace, by 2/n (ie, divide by n/2) and then have those lines be like,

add(Yp[k]*X[k,3],k=1..n):

The Y1-Y2 object is just n1-n2, no? So the Y1-Y2 = n1-n2 object could also be created just once, as a re-usable Vector `Y1minusY2` outside the loop. But one already has n1 from the code above, and n1 is no longer needed once Yp is formed. So use it to hold Y1-Y2 = n1-n2. Ie, inside the loop, once Yp has been formed, do

VectorAdd(n1,n2,1,-1,inplace=true);

Hopefully I haven't made major mistakes. I'm sure that there are other improvements possible, and some of these techniques above won't make a big difference for smaller problems. acer
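A tiny self-contained illustration of the offset/stride form of ArrayTools:-Copy relied on above (the data values are made up for the demonstration; offsets are 0-based, as in the earlier test3b call):

```maple
A := Array(1..6, [10., 20., 30., 40., 50., 60.], datatype=float):
a := Vector[row](3, datatype=float):
# copy 3 entries from A starting at offset 3, stride 1, into a at
# offset 0, stride 1: ie, the "second row" of A viewed as 2x3 storage
ArrayTools:-Copy(3, A, 3, 1, a, 0, 1):
a;  # should now hold 40., 50., 60.
```

No new Vector is produced by the Copy itself, which is the whole point: the container a is allocated once and refilled on each pass through a loop.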
Hi Joe,

I wasn't trying to say that copy() wasn't needed for anything but tables. As you reiterated, it behaves differently from the rtable constructors themselves.

What happens if you try this in Maple 11, versus Maple 10?

a := array(1..3,[A,B,C]);
b := array(a);
a[1] := z:
eval(a);
eval(b);

It seems to me that in Maple 11 the command array(a) produces a copy of a, and that this is even documented on the ?array help-page. But I make no such claim about the vector constructor.

I agree with you that better documentation of the assignment operations would be of benefit. More obvious explanations of the differences among the array, table, list, rtable, etc, data structures would be good, as well as general advice on what each may be useful for (typically), and why.

acer