What Jacques says is mostly true. One class of development for which I'd disagree is the so-called application worksheet or document. By that I mean a document that contains not only Maple code and procedures but also presentation graphics. With embedded components, it is possible to construct a document which actively illustrates some ideas in a scientific or educational field. But the more sophisticated of these do have programs buried within them, usually as collapsed blocks. It's convenient to be able to pass these documents on to other people without having to pass along a Maple .mla Library file as well. And sometimes developing the programmatic parts of such documents is rather easily done within the document itself.
Having said that, I'll add another reason why the command-line (tty) interface for Maple is great for developing large programming projects. It's the availability of the "include" directives (see ?include). Being able to store the source for each module export in a separate file, each one of which is $include'd in the parent module source, is nice.
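For example (file, module, and export names here are hypothetical), the parent module source might look like this, with each export's body kept in its own file:

MyModule := module()
   export foo, bar;
$include "foo.mpl"
$include "bar.mpl"
end module:

where foo.mpl contains just the assignment foo := proc(x) x^2 end proc; and similarly for bar.mpl.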
George, you could try this sort of thing, in the command-line interface.
interface(verboseproc=2);
writeto("BesselJ.mpl"):
eval(BesselJ);
writeto(terminal):
Following that, you would just need a few edits at the front and back of the code. But were you to do it that way, you shouldn't need to add any colon or semicolon except at the very end, after the `end proc'.
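To illustrate (just a sketch; the body is whatever eval printed out), the edited BesselJ.mpl would have this shape,

BesselJ := proc(x)
   # ...the procedure body, as written out by eval(BesselJ)...
end proc:

after which it can be pulled into a fresh session with read "BesselJ.mpl":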
acer

If you are writing (or editing) Maple programs then don't use 2D input. Use 1D (maple) input.
There are a variety of reasons for this, such as avoiding difficulties with the disambiguator, seeing accurately what's there when you want to edit it later, easier cut-and-paste, etc.
acer

The example below acts in-place on the Matrix M. Adjust as desired.
A := LinearAlgebra[RandomMatrix](5,outputoptions=[datatype=float]):
B := LinearAlgebra[RandomMatrix](5,outputoptions=[datatype=float]):
M := A+I*B:
Digits:=trunc(evalhf(Digits)):
for i from 1 to LinearAlgebra[ColumnDimension](M) do
M[1..-1,i]:=LinearAlgebra[Normalize](M[1..-1,i],Euclidean,conjugate=true);
od:
# for fun
LinearAlgebra[Norm](M[1..-1,3],Euclidean,conjugate=true);
acer

Would this do?
power:=4.6860279761942*x^0.413637849985206863;
evalf[3](fnormal(power));
or just,
fnormal(power,3);
acer

I meant for the second example using LPSolve to be,
Optimization:-LPSolve(1,{a1 >= 6, a2 <= 99, a1 <= a2-1}, integervariables=[a1,a2]);
I'm sure that you get the picture. Cover a1<=a2-1, and cover a1>=a2+1, and then a1<>a2 is covered.
It might be a pain to set up programmatically, if there are a lot of inequalities to account for amongst the variables. I can't imagine, offhand, how to cover an inequality (like, say, a logical &or) without making the constraint nonlinear. But NLPSolve doesn't allow the integervariables option.
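One way to set it up programmatically, sketched here (the procedure name is hypothetical), is to solve both subproblems and collect whichever turn out to be feasible:

tryBoth := proc(cons::set)
  local extra, results, r;
  results := [];
  for extra in [a1 <= a2 - 1, a1 >= a2 + 1] do
    try
      # solve the subproblem with one branch of a1<>a2 added
      r := Optimization:-LPSolve(1, cons union {extra},
                                 integervariables = [a1, a2]);
      results := [op(results), r];
    catch:
      # this branch was infeasible; skip it
    end try;
  end do;
  results;
end proc:
tryBoth({a1 >= 6, a2 <= 99});

Since the objective here is the constant 1, any feasible point of either branch serves; with a nontrivial objective one would compare the two results and keep the better.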
acer

Maybe some variant on these,
Optimization:-LPSolve(1,{a1 >= 6, a2 <= 99, a1 >= a2+1}, integervariables=[a1,a2]);
Optimization:-LPSolve(1,{a1 >= 6, a2 <= 99, a1 <= a2+1}, integervariables=[a1,a2]);
acer

Has anyone tried animated ascii art, in this way? Say, a dancing stick-man?
acer

It seems increasingly common in answers on MaplePrimes that opinion appears before accuracy.
The copy command has worked on tables and arrays for many releases. But of those two structures, it's only really needed for tables.
But now consider the help-page for array. Where, up to and including release Maple 10, is the cross-reference to ?copy ? When was the array help-page ever updated to mention the copy() command? How about the vector or matrix help-pages? The lack of properly updated documentation is not so dramatically new.
As Joe pointed out, Vector(V) produces a copy of Vector V. Things work similarly for arrays, Matrices, and Arrays. The command array(a) produces a copy of array a, and so on. So perhaps it is this functionality that should be well documented.
Moreover, if the help-pages of Vector, Array, Matrix, and rtable are going to get a cross-reference to the copy help-page, then let them *also* mention that those constructors themselves can produce copies.
And why not document this difference too, that copy preserves almost all rtable properties, while the constructors themselves may not. Eg,
M:=Matrix(2,2,shape=symmetric):
N:=copy(M):
P:=Matrix(M):
MatrixOptions(N);
MatrixOptions(P);
acer

The general form of your Matrix is not clear. You say that it is n x n, but the portion below the first row is itself n x n.
Perhaps one of these below matches what you intend.
with(LinearAlgebra):
M := n ->
Matrix(n,n,[[Vector[row](n,[seq(x||i,i=1..n)])],
[Matrix(n - 1, n, Matrix(n-1,n-1,shape=identity))]]):
Minv := n ->
Matrix(n,n,[[<>],
[Vector[row](n,[1/(x||n),seq(-(x||i)/(x||n),i=1..n-1)])]]):
Norm(MatrixInverse(M(7))-Minv(7));
Norm(M(7).Minv(7)-IdentityMatrix(7));
MM := n ->
Matrix(n,n,[[Vector[row](n,[seq(x||i,i=1..n)])],
[<>]]):
MMinv := n ->
Matrix(n,n,[[Vector[row](n,[1/(x||1),seq(-(x||(i))/(x||1),i=2..n)])],
[<>]]):
Norm(MatrixInverse(MM(7))-MMinv(7));
Norm(MM(7).MMinv(7)-IdentityMatrix(7));
acer

Once you've created a procedure, you can add option remember to it. Eg,
f := proc(x) 2*x; end proc;
g := subsop(3=remember,eval(f));
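To confirm the effect (a quick check): the third operand of a procedure holds its options, and the fourth holds its remember table.

op(3, eval(g));   # remember
g(7);             # 14
op(4, eval(g));   # the remember table, now holding the entry 7 = 14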
acer

How about this,
t:= seq(i*Unit(s),i=100..2000,100);
acer

This is probably not near the most efficient way, but...
dice := proc(n::posint)
  local i, N, p, new;
  new := convert(n, string);
  N := iquo(StringTools[Length](new) + 1, 3);
  p := Array(1 .. N);
  for i to N do
    p[i] := parse(StringTools[Take](new, 3));
    new := StringTools[Drop](new, 3);
  end do;
  convert(p, list);
end proc:
dice(632096185);
diceL := proc(l::list(posint))
  local i, j, N, p, new, result;
  result := Array(1 .. nops(l));
  for j from 1 to nops(l) do
    new := convert(l[j], string);
    N := iquo(StringTools[Length](new) + 1, 3);
    p := Array(1 .. N);
    for i to N do
      p[i] := parse(StringTools[Take](new, 3));
      new := StringTools[Drop](new, 3);
    end do;
    result[j] := convert(p, list);
  end do;
  convert(result, list);
end proc:
diceL([632096185,123456789,214365879]);
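Since dice already handles a single integer, the list version can also be had by simply mapping it over the list (diceL2 is a hypothetical name for this variant):

diceL2 := l -> map(dice, l):
diceL2([632096185, 123456789, 214365879]);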
acer

p1:=nextprime(10^200);
p2:=nextprime(p1);
evalf(p1*p2);
p1:=nextprime(10^399);
p2:=11:
evalf(p1*p2);
acer

There are probably much easier and slicker ways to do the elimination, but four of the equations were so simple that it could be done "by hand".
restart:
C := 2:
TY := .9:
Y := Vector(1 .. C, TY/C):
TM := 1:
DL := 5:
eq1 := X1 = (Y[1]+Z2)/TM:
eq2 := X2 = (Y[2]-Z2)/M2:
TX := X1+X2:
eq3 := M2 = (1-X1)*TM+X1/(1/TM+W1):
W0 := Y[1]/TM^2+Y[2]/M2^2:
EF := W0/(1-TX):
eq4 := W2 = EF+X1*min(W2, DL):
eq5 := W1 = (TX*EF-X2*W2)/X1:
eq6 := Z2 = Y[2]*max(0, W2-DL)/W2:
# Four of the equations are so simple that we can eliminate
# them by hand.
list0 := subs(eq2,[eq1,eq3,eq4,eq5,eq6]):
list1 := subs(eq3,list0):
list2 := subs(list1[5],[list1[1],list1[3],list1[4]]):
list3 := subs(list2[1],[list2[2],list2[3]]):
# Solve the remaining two equations in two variables.
sol1 := fsolve({op(list3)},{W1,W2});
# Now backsubstitute.
z2 := eval(eq6,[op(sol1)]):
x1 := eval(eq1,z2):
m2 := eval(eq3,[x1,op(sol1)]):
x2 := eval(eq2,[m2,z2]):
# Check the original equations, by substitution.
eval([eq1,eq2,eq3,eq4,eq5,eq6],[z2,x1,m2,x2,op(sol1)]);
# Here's a solution.
z2,x1,m2,x2,op(sol1);
# Find another solution, avoiding the first solution.
# It helps fsolve to know ranges, which may also tell
# it that the roots sought are purely real.
sol2 := fsolve({op(list3)},{W1,W2},avoid={sol1},
W1=-1000..1000,W2=-1000..1000);
z2 := eval(eq6,[op(sol2)]):
x1 := eval(eq1,z2):
m2 := eval(eq3,[x1,op(sol2)]):
x2 := eval(eq2,[m2,z2]):
# Check the original equations, by substitution.
eval([eq1,eq2,eq3,eq4,eq5,eq6],[z2,x1,m2,x2,op(sol2)]);
# Here's a second solution.
z2,x1,m2,x2,op(sol2);
The two solutions I saw were,
Z2 = -0., X1 = 0.4500000000, M2 = 0.6744105183, X2 = 0.6672493797, W1 = 2.617057514, W2 = -22.32043803
Z2 = 0.4462841010, X1 = 0.8962841010, M2 = 0.1052051185, X2 = 0.03532051532, W1 = 600.8482070, W2 = 605.5062301
Increasing Digits didn't radically alter the final answer, so the simplified two equations seem "stable".
acer

The statement that "..in the above command, ALL numerical computations are being done with only two significant digits," doesn't seem quite right.
> stopat(`evalf/sin`):
> a:=sin(Pi/8):
> evalf(a,2);
The above indicates to me that computation is done something like,
> xr := evalf(Pi/8,2):
> evalf(evalhf(sin(xr)),2);
That still produces .36, sure. But the sine computation is done in evalhf it seems. The input to sin is just .37, though, as you described.
Is this next true? If one extra guard digit had been used in approximating Pi/8, then the final result would be accurate to 2 places.
> restart:
> xr := evalf(Pi/8,3):
> evalf(evalhf(sin(xr)),2);
0.38
Is sin "atomic" enough to warrant guard digits for approximating the argument, assuming what I wrote above is true?
acer