Why not permute directly with the Array constructor?
B := Array(rtable_dims(A),(i,j,k)->A[i,k,j]);
acer

Just looking for clarification. Your post's title had the names capitalized, but the post did not.
acer

Thanks for those details. Do you suspect that, even if you find reasonably good defaults for those magic numbers, some user-based control via options may be necessary for some problems to be solved?
Will there be some mechanism for repeated solving using different RHS's (but *not* all done at once)? I ask because depending on the method this can require saving the factorization or preconditioning.
There is also a real floating-point sparse direct solver available from LinearAlgebra[LinearSolve] with the method=SparseLU option.
The userinfo messages indicate that this uses NAG routine f01brf to factorize, and routine f04axf to do the subsequent solving for a given RHS.
There is an indication that the NAG f01brf approach is based upon the somewhat well-known MA28 FORTRAN code. I don't have much experience with either SuperLU or MA48 (the successor to MA28?).
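For reference, a minimal sketch of calling that sparse direct solver (the particular Matrix and Vector here are just made up for illustration):

```
with(LinearAlgebra):
infolevel[LinearAlgebra] := 1:  # show userinfo, including the NAG routine names
# A small sparse float[8] system, illustrative only.
A := Matrix([[4., 1., 0.], [1., 4., 1.], [0., 1., 4.]],
            datatype = float[8], storage = sparse):
b := Vector([1., 2., 3.], datatype = float[8]):
x := LinearSolve(A, b, method = SparseLU);
```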
acer

You can still access the global name using the :- notation, even when the "same" name is used as a procedure parameter. For example, to distinguish the local and global names,
p := proc(array::posint)
local f;
f := :-array(1..1,[p]);
end proc;
p(3);
p := proc(array::posint)
local f;
f := array(1..1,[p]);
end proc;
p(3);
It is certainly not wrong to code it as you did, and I didn't mean to imply that. Some people will like the fact that the namespace can be used in this way, and that `array` is available for such use. Personally, I find it a bit unnecessary, and suspect that it might confuse some newer users.
acer

In the floating-point case, solving large sparse linear systems can be difficult. The existing solvers for such floating-point systems (hooked into LinearSolve) accept some options -- to control things like the choice of method, amount of fill-in, tolerances, etc. See ?IterativeSolver .
What sort of options do you expect that your rational solver implementation might need, if any?
P.S. The documentation makes it appear that only symmetric/hermitian floating-point sparse systems can be solved with iterative methods. Yet trying it with real nonsymmetric float[8] sparse Matrices shows via userinfo that there are also distinct nonsymmetric solvers. All the sparse floating-point methods show NAG function names through userinfo, i.e. with infolevel[LinearAlgebra] > 0 .
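As a concrete illustration of that nonsymmetric case (the particular Matrix here is made up; the method name is as documented for LinearSolve):

```
with(LinearAlgebra):
infolevel[LinearAlgebra] := 1:  # userinfo reveals which NAG solver gets invoked
# A nonsymmetric sparse float[8] Matrix, illustrative only.
A := Matrix([[4., 1., 0.], [2., 4., 1.], [0., 3., 4.]],
            datatype = float[8], storage = sparse):
b := Vector([1., 1., 1.], datatype = float[8]):
x := LinearSolve(A, b, method = SparseIterative);
```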
acer

You are right, of course, sorry. I was thrown by the use of protected top-level names as parameters for your indexing functions.
acer

I would still prefer to use rtable_scanblock over `index/XXX`. The posted method using `index/makeIndex` produces an array, and an Array, and then finally a set. It may be that with rtable_scanblock one can produce only a table (array) and then finally a set. Both incur the cost of the function calls for each entry, of course, either of `index/makeIndex` or of checkindex.
FindIndex := proc(comparison, A)
local checkindex, G;
checkindex := proc(val, ind)
  if comparison(val) then G[ind] := ind end if;
  NULL;
end proc;
rtable_scanblock(A,[rtable_dims(A)],checkindex,[]);
eval(G);
end proc:
A:=Array(1..3,1..4,(i,j)->i-j):
T:=FindIndex(x->type(x,nonnegint),A):
convert(T,set);
acer

You can use rtable_elems() to get the indices of an Array.
How about,
seq([lhs(x)],x in rtable_elems(A));
It shouldn't be necessary to convert an Array to an array, in general. If it really couldn't be avoided for some task, then that would be an indication that functionality is missing.
acer

Isn't that what implicitplot3d() is for?
See ?plots,implicitplot3d
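For example, to plot the surface defined implicitly by an equation (the unit sphere here is just an illustration):

```
with(plots):
# Plot the implicitly defined surface x^2 + y^2 + z^2 = 1.
implicitplot3d(x^2 + y^2 + z^2 = 1,
               x = -2 .. 2, y = -2 .. 2, z = -2 .. 2,
               style = surface);
```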
acer

I think that the original poster was wanting the exact symbolic integral in particular.
What's the difference between evalf(int(...)) and evalf(Int(...)), you might wonder, if they both returned a floating-point result here?
Well, the first of those tries to do an exact integration, just as int() does, and then evaluates the result under evalf.
And the second, evalf(Int(...)), uses numeric methods and may not attempt to compute the exact symbolic integral at all. (It can however do some fancy symbolic analysis, to find singularities, etc.)
But when int() returns unevaluated, because it does not compute or find an integration result, then subsequently hitting that unevaluated result with evalf() will result in doing the numeric computation all the same.
If int(foo) returns unevaluated, then evalf(int(foo)) should give you the same thing as evalf(Int(foo)). Except it may take longer, possibly considerably longer, while it tries and fails to do the exact symbolic integral.
There are cases, too, where evalf of a formulaic symbolic exact integral result (at default Digits precision) will have measurable round-off error. It can also happen that, for the same integrand, evalf(Int(...)) produces a more accurate result, though that's certainly not always the case. I'm not trying to muddy the waters by mentioning that, but it's a good idea to find a way to check results.
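To make the distinction concrete (the particular integrand here is just an illustration):

```
F := Int(sin(x)/x, x = 0 .. 1);     # inert form: no symbolic attempt is made
evalf(F);                           # purely numeric quadrature
int(sin(x)/x, x = 0 .. 1);          # the exact symbolic answer, Si(1)
evalf(int(sin(x)/x, x = 0 .. 1));   # exact integration first, then evalf of Si(1)
```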
acer

Which interface are you using with Maple 11? Classic or Standard? If it's Standard, then which "mode", Document or Worksheet?
acer

Ok, first of all (and this is just minor convention), it's common to use .mpl as the file extension for plaintext maple source files (and .mw only for non-plaintext worksheets and documents).
Next, it's not really necessary to go to all the trouble about argc and argv, for the purpose you described.
So, here's a maple source file. Let's call it foo.mpl .
--------- start foo.mpl -----------
fd:=fopen(usethisfile,READ,TEXT);
fscanf(fd,"%ld");
---------- eof file foo.mpl -------
Now, in a bash shell I can do things like this,
bash% echo "23 27 29" > file1
bash% maple -c "usethisfile:=file1" -q foo.mpl
fd := 0
[23]
So, you could probably do it similarly using perl's system() call.
acer
