acer


MaplePrimes Activity


These are replies submitted by acer

A natural followup here is to try to make the resulting exported image file look more like a Maple plot, with axes, text, labels, etc.

This can be done by creating an empty plot with all the desired text and axes, exporting that to an image, reading the image back in, and then using that to mask the density image (which pretty much amounts to overlaying them, if the text is black).

An exported empty plot has a blank border, since the axes don't lie right at the image boundary. This can be accommodated either by fitting the density image to the axes-bounded portion of the empty plot's image, or vice versa by chopping the blank border off a slightly larger empty plot. I take the second approach, below. (I suspect the first would be a better choice, if one were aiming to produce a good, fully fledged plot exporter or plot driver replacement.) The border size also varies with the choice of Maple interface, e.g. commandline or Standard GUI.

This calls `ArgPlot` as coded in the Post. This is rough, top-level stuff, but ideally it could be made into a more general procedure.

F:=z->`if`(z=0,Pi,exp(1/z)):
hsize,vsize:=300,300:

st,str,ba,bu:=time(),time[real](),kernelopts(bytesalloc),kernelopts(bytesused):
  Q:=ArgPlot(F,-0.25,0.25,vsize,hsize):
time()-st,time[real]()-str,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;

st,str,ba,bu:=time(),time[real](),kernelopts(bytesalloc),kernelopts(bytesused):
_Env_plot_warnings:=false:
tempfile:=cat(kernelopts('homedir'),"/temp.jpg"):
# The Standard GUI's plot driver pads the plot with whitespace.
# So we increment the target size for export, and then later chop.
# The padding seems to be absolute in size, rather than relative
# to the dimensions, at least for the GUI's jpeg plot device. Certainly the
# padding can be much greater for jpeg export using the commandline interface's
# jpeg plot driver. Adjust accordingly.
plotsetup('jpeg','plotoutput'=tempfile,'plotoptions'=cat("height=",vsize+6+4,
                                                         ",width=",hsize+5+5));
plot(I,-0.25..0.25,'view'=[-0.25..0.25,-0.25..0.25]);
plotsetup('default'):
try fflush(tempfile); catch: end try:
fclose(tempfile);
# Why doesn't the GUI plot driver flush before returning control?!
# The timing is affected by a delay (YMMV, on NFS or a network drive)
# to wait for the image file to be written. If the `Read` below fails
# and claims the file does not exist, the delay may need increasing.
Threads:-Sleep(1): # 1 second
_Env_plot_warnings:='_Env_plot_warnings':
T:=ImageTools:-Read(tempfile,'format'='JPEG'):
correctedT:=rtable_redim(T[1+6..-1-4,1+5..-1-5,1..-1],1..vsize,1..hsize,1..3):
# The mask could likely be done more efficiently in-place on Q ("by hand"),
# with saving on allocation and thus subsequent collection.
newQ:=ImageTools:-Mask(Q,correctedT):
ImageTools:-Write(cat(kernelopts('homedir'),"/arg_expinv.jpg"),newQ,'quality'=95):
time()-st,time[real]()-str,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;

ImageTools:-View(newQ);

 

This can be compared with less attractive results using `densityplot`. (I understand that densityplot's `grid` option is not the same as the width/height of an image. But increasing the `grid` values, for densityplot or complexplot3d, can be prohibitive in terms of time & memory resources and the performance of the plot driver/exporter and of the GUI itself. These aspects were the original motivation. The usual plotting commands cannot match the image creation method here, especially at higher or publication quality resolutions.)

restart:

F:=z->`if`(z=0,Pi,exp(1/z)):
hsize,vsize:=300,300:
plots:-densityplot(-argument(exp(1/(a+b*I))),a=-0.25..0.25,b=-0.25..0.25,
                   colorstyle=HUE, style=patchnogrid,scaletorange=-Pi..Pi,
                   grid=[hsize,vsize],axes=normal);

Exporting using context-menus (right-click, takes a long time...) produces this,

And using the GUI's plot driver with the plotsetup command produces this,

plotsetup('jpeg','plotoutput'="driv_expinv.jpg",'plotoptions'=cat("height=",300,
                                                                  ",width=",300));

acer

As far as the computational part goes, there seems to be some bottleneck in mapping log[2] over the complex-valued Matrix D2. Ideally, that subtask should be solvable with a very fast implementation using the Compiler, but I've put in comments about some weirdness with the Compiled procedure's behaviour there. (I'll submit a bug report.)

For producing higher resolution graphics than what densityplot (with colorstyle=HUE) provides, you might have a look at this recent Post. At higher resolutions, building the image directly can be much faster and more memory efficient than using densityplot. (The GUI itself uses a huge amount of unreported memory for larger grids.)

The code below has some tweaks, with original lines commented out. (Uncommenting and running side by side might give some insight into why some ways are faster.) My machine runs the code below, with n=2^8, in under 7 seconds, and the originally Posted code in about 18 seconds. But I believe that a well-behaved Compiled complex log[2] map over a complex[8] Matrix would allow it to be done in around 1 second.

If you want the computational portions much faster, you could consider going with Compiled procedures to do all the computation, and give up the `~` mapping and evalhf. That also ensures sticking with hardware datatype Matrices (which didn't happen with the original code) and helps keep the memory use down.
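
To illustrate the idea (a hedged sketch only, not the code timed below; the name fillAperture and its arguments are made up), the first step of building the circular aperture Matrix could be done by a single loop-based procedure that evalhf or Compiler:-Compile can digest, replacing the X, Y, A, and zip steps:

fillAperture := proc(n::integer, R::float, M::Matrix(datatype=float[8]))
  local i::integer, j::integer, x::float, y::float;
  # set M[i,j] to 1.0 inside the disk of radius R centred on the Matrix, 0.0 outside
  for i from 1 to n do
    for j from 1 to n do
      x := j - 0.5*n;
      y := 0.5*n - i;
      if x^2 + y^2 <= R^2 then
        M[i,j] := 1.0;
      else
        M[i,j] := 0.0;
      end if;
    end do;
  end do;
  NULL;
end proc:
#M := Matrix(n, n, datatype=float[8]):
#evalhf(fillAperture(n, 10.0, M)):  # or: Compiler:-Compile(fillAperture)(n, 10.0, M)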

restart:
with(DiscreteTransforms): with(plots):

n:=2^8:

Start:=time():

M := Matrix(n, n, 1, datatype = float):
Ii := Vector(n, proc (i) options operator, arrow; i end proc, datatype = float):
x := Vector(n, proc (i) options operator, arrow; Ii[i]-(1/2)*n end proc):
y := Vector(n, proc (i) options operator, arrow; (1/2)*n-Ii[i] end proc):
X, Y := Matrix(n, n, proc (i, j) options operator, arrow; x[j] end proc),
        Matrix(n, n, proc (i, j) options operator, arrow; y[i] end proc):
R := 10:

st:=time():
A := map[evalhf](`<=`,`~`[`+`](`~`[`^`](X, 2), `~`[`^`](Y, 2)), R^2):
time()-st;

#st:=time():
#AA := `~`[`@`(evalb, `<=`)](`~`[`+`](`~`[`^`](X, 2), `~`[`^`](Y, 2)), R^2):
#time()-st;

st:=time():
M := evalhf(zip(`and`,(A, M))):
time()-st;

#st:=time():
#MM:=`~`[evalhf](`~`[`and`](A, M)):
#time()-st;
#LinearAlgebra:-Norm(M-MM);

#plots:-listdensityplot(M, smooth = true, axes = none);

D1 := FourierTransform(M, normalization = none):
D2 := ArrayTools[CircularShift](D1, (1.0/2)*n, (1.0/2)*n):

st:=time():
Q1:=Matrix(map[evalhf](t->Re(abs(t)),D2),datatype=float[8]):
time()-st;

#st:=time():
#Q11:=abs(D2);
#time()-st;
#LinearAlgebra:-Norm(Q1-Q11);

st:=time():
plots:-listdensityplot(Matrix(map[evalhf](t->Re(abs(t)),D2),datatype=float[8]),
                       smooth = true, axes = none, colorstyle = HUE,
                       style=patchnogrid);
time()-st;

st:=time():
D3:=`~`[subs(LN2=evalhf(ln(2)),t->ln(t)/LN2)](D2):
time()-st;

#st:=time():
#D33 := `~`[log[2]](D2):
#time()-st;
#LinearAlgebra:-Norm(D3-D33);

# There is a very weird behaviour, where both evalhf and a Compiled version of
# this makeD3 routine can (for n=8 say) take either less than a second, or 25 seconds,
# intermittently. The very same calls, without re-Compilation, but vastly different timing.
# I've commented the attempts out, and am using the uncommented log[2] method above.
makeD3:=proc(n::integer, a::Matrix(datatype=complex[8]),
             b::Matrix(datatype=complex[8]))
  local i::integer, j::integer, LN2::float;
  LN2:=1.0/ln(2.0);
  for i from 1 to n do
    for j from 1 to n do
      #b[i,j]:=log[2](a[i,j]);
      b[i,j]:=ln(a[i,j])*LN2;
    end do;
  end do;
  NULL;
end proc:

#st:=time():
#cmakeD3:=Compiler:-Compile(makeD3,optimize=true):
#time()-st;

#st:=time():
#D3:=Matrix(n,n,datatype=complex[8]):
#evalhf(makeD3(n,D2,D3)):
#time()-st;

#D3:=Matrix(n,n,datatype=complex[8]):
#st:=time():
#cmakeD3(n,D2,D3);
#time()-st;

st:=time():
plots:-listdensityplot(abs(D3), smooth = true, axes = none, colorstyle = HUE,
                       style=patchnogrid);
time()-st;

time()-Start;

[note: every time I edit this code it screws up the less-than symbols in the pre html tags and misinterprets them, removing content.]

aperture_density.mw


acer

@newlam All I see is base64 encoded source in your post, not images.

You can no longer paste 2D Math from Maple straight into Mapleprimes, since the site was revised. But mere images of your code can be awkward for us, since we don't want to have to retype it all by hand.

You can use the green up-arrow in the editor here, to upload your worksheet. Or you can paste the code as 1D text Maple input.

Forcing the tickmarks seems to be key to this kind of thing, in current Maple. And that can be done for the x-axis or y-axis separately, or for both.

Flipping the plot in the x-direction can often be accomplished by a simple change of variables in the expression (as in the link Kamel has given). But it can also be done for either the x- or the y-direction using plottools, e.g. here.
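
For instance, here is a rough sketch of both ideas on a made-up example (sqrt(x) and the tickmark values are arbitrary, just for show):

P := plot(sqrt(x), x = 0 .. 9,
          xtickmarks = [seq(i, i = 0 .. 9)],   # force the x-axis tickmarks
          ytickmarks = [0, 1, 2, 3]):          # force the y-axis tickmarks
# flip in the x-direction via plottools, relabelling the forced tickmarks so
# that the reversed axis still shows the original x values
plots:-display(plottools:-transform((x, y) -> [-x, y])(P),
               xtickmarks = [seq(-i = sprintf("%d", i), i = 0 .. 9)]);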

acer


@raffamaiden The lprint command is for line (1D, linear) printing, as its help-page explains. That's lprint's purpose, not to do 2D formatted printing. What you described is 2D formatted printing.
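
A quick illustration of that distinction (the expression is just a made-up example):

expr := x^2/(1+y):
lprint(expr);   # 1D line printing: x^2/(1+y)
print(expr);    # 2D formatted (typeset) output in the Standard GUI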


A few more comments might be added, if I may. The next two paragraphs offer further reasons to support the idea that symbolic inversion is not a good idea here.

The purely numerical floating-point computation (of the inverse, or of solving a linear system), whether done in Maple or Matlab, will be careful about selection of pivots. That is, the floating-point implementation uses the specific numeric values during pivot selection. Symbolically inverting the Matrix removes this carefulness, regardless of whether the symbolic pivots are chosen according to symbolic length, or by no scheme. Only a scheme of symbolic pivot selection that could qualify the relative magnitude of the pivot choices under assumptions on the unknowns would have a hope of regaining this care. But that could be difficult, might require a case-split solution, and would add to the symbolic processing overhead. And the symbolic LU (or inversion) doesn't allow for a custom pivot selection strategy.

Even if you keep some form of Testzero, it's still only checking whether the pivots are symbolically identical to zero or not. It doesn't provide for any safety that, given particular numeric values for the unknowns, some pivot might not attain a zero value. If that happens then the whole approach breaks with division by zero.
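
A tiny made-up example of that risk, using the symbolic LU factorization (the exact form of the returned factors may differ slightly, but a 1/a denominator appearing in them is the point):

Asym := Matrix([[a, 1], [1, 1]]):
P0, L0, U0 := LinearAlgebra:-LUDecomposition(Asym):  # the symbolic pivot `a` passes Testzero
L0;               # the lower factor involves 1/a, since no numeric pivoting occurred
eval(L0, a = 0);  # division by zero -- yet Matrix([[0, 1], [1, 1]]) is nonsingular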

Perhaps there is some other way to improve the performance of the Matlab code. Do you compute the inverse in order to solve linear equations like A.x=b? It wasn't clear from your post whether you might be solving at different instances for various different RHSs `b`, for the same numeric instance of Matrix `A`. If you will have to solve for different RHSs `b`, for the very same numeric values of the unknowns, then you could consider doing the numeric LU factorization of `A` just once for the given numeric values of the variables, and then using just forward- and backward-substitution for each distinct RHS instance. This isn't going to help, if you solve for just one single RHS instance for any particular set of values of the variables.
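
Here is a hedged Maple sketch of that factor-once idea (the small Matrix and right-hand sides are made up; the analogous route in Matlab would go through its lu function):

with(LinearAlgebra):
Avals := Matrix(3, 3, (i, j) -> evalf(1/(i + j)), datatype = float[8]):
b1 := Vector([1., 2., 3.], datatype = float[8]):
b2 := Vector([0., 1., 0.], datatype = float[8]):
P, L, U := LUDecomposition(Avals):   # numeric LU factorization, done just once
# each new RHS then costs only a forward and a backward substitution
x1 := LinearSolve(U, LinearSolve(L, Transpose(P) . b1)):
x2 := LinearSolve(U, LinearSolve(L, Transpose(P) . b2)):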

acer


The idea that a range (i.e. a restriction supplied on the domain) should not be necessary for numerical, floating-point rootfinding of a general expression or function is wrong.

There is no default range that will work for all problems. Provide a suggestion for an all-purpose range and it can be broken by example. Provide a rule to try to deduce a good working restricted domain by examining the function (under a time limit or operational resource limit), and it can be broken by example.
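
As a tiny made-up illustration, suppose the proposed all-purpose default range were -100 .. 100:

f := x - 10.^20:              # the only root is at x = 1e20
fsolve(f, x = -100 .. 100);   # finds nothing in the proposed default range
fsolve(f, x = 0 .. 10.^21);   # succeeds once the true scale is supplied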

Numerical analysis shouldn't be considered too arcane for younger students, if only to illustrate to them some of the difficulties. Younger people can understand not being able to see the forest for the trees, or not being able to see the trees for the forest.

acer


@erik10 For your simpler example, the way you call NextZero is not best. The documentation describes the maxdistance optional parameter, which is relevant here. The matter is not that complicated: one may need, in general, to supply a function, a starting point, and a maximal distance to be searched. I think that is a reasonable specification. Thus I'd suggest that the maxdistance argument should not even be optional! And a call like this does succeed in finding the first root:

> RootFinding[NextZero](x -> x^3-3*2^x+3,-1000,maxdistance=2000);

                          -1.189979747

I think that the nastiness in this latest example is not so much due to the rapid oscillation as it is due to the problem of scale. The initial search point of -100 is quite far from the root cluster. This can cause problems like missing the roots, or overshooting the first roots (which is what now seems to happen for your example with the range supplied as -100 to 100, after my code revision).

One nice aspect of the findroots procedure (if I may be immodest) is that it allows the number of tries to be controlled. The Roots command doesn't allow for that. For example,

> [findroots(sin(1/x)-x, -10, 10, maxtries=100)]:
> nops(%);

                              100

> {findroots(sin(1/x)-x, -10, 10, maxtries=1000)}:
> nops(%);

                              1000

Posing numeric problems without adequate scaling information is the mistake, I think. Trying to find the root of any general blackbox function in a floating-point framework, without any information of the problem scale (even as a search interval width) is misguided. Should the algorithm painstakingly scrutinize the very small scale, or the very large, or what?

Your description of the technique of visual inspection of the plot is slightly misleading, in that it makes it sound too foolproof. Intentionally or not, the notes suggest that this is the less error-prone method which should sit on top of one's solving attempts. But I would claim that, just like any other automated solving process, the plotting approach is at risk. Do you rely on "smartplot" bounds? If not, how do you know how wide to take the domain for plotting? How do you know that the plot (which is inherently discrete in essence, even if it appears as a smooth view) does not have very tight corners and roots veiled from view by the nature of the plot driver? One can defeat any plot driver, and have it fail to display roots.
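
As a small made-up illustration of how a plot can hide roots:

> expr := (x - Pi)^2 - 1/10^6:   # two real roots, at Pi +- 1/1000

> plot(expr, x = -10 .. 10);     # the dip below zero is only 1e-6, invisible at this scale

> fsolve(expr, x = 3 .. 3.3);

                    3.140592654, 3.142592654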

I believe that I had a (only slightly subtle) coding error in the first version. I realized it on a long drive today. Namely, the call to NextZero, inside findroots, should use maxdistance=b-start instead of maxdistance=b-`if`(...). I investigated, and went back and edited that line in the code I posted above.

ps. Please don't get the wrong impression. Yours is a very interesting question. I just don't think that there is any foolproof general answer which does not require human thought and consideration of the particularities of the problem at hand. (You once asked another fascinating question, a year or so ago, about accuracy and precision. I've been thinking on it, and very slowly trying to come up with a deserving answer, since that time.)


@Alec Mihailovs 

I do hope that it comes across that I am trying to be precise, and not rude. However, you didn't "just" show how to use evalhf with `eval`. You did imply at some point that it was evalhf which was causing the behaviour to be like that of double precision.

You explained, "evalhf produces the same result if the calculations were done in the hardware floats", apparently as part of the rationale for wrapping it around `eval`. That implied that evalhf is what made it get the same result as double precision. I was correcting that erroneous implication (intended by you, or not).

In Maple 10, extracting a scalar from a float[8] rtable did not produce an HFloat scalar object, but it does in Maple 15. In Maple 10 such extraction produced a software float scalar with a similar kind of length as the doubles inside a float[8] rtable. And such software floats drop accuracy under arithmetic operations at lower working precision. And so the evalhf(eval(...)) approach fails in Maple 11, when `eval` became available to run under evalhf, as I showed by example.
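
A small illustration of that extraction difference, as it behaves in Maple 15 (the Vector is just a made-up example):

> V := Vector([1/3.], datatype = float[8]):

> type(V[1], 'hfloat');  # extraction from a float[8] rtable yields an HFloat scalar

                               true

> type(1/3., 'hfloat');  # whereas an ordinary software float is not one

                              false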

As for Maple 15, the evalhf wrapping may not always be necessary, under certain kinds of operation (e.g. arithmetic), in order to get behaviour with higher accuracy (e.g. like double precision) than would be obtained with software floats at the same current Digits setting:

> # Maple 15. evalhf is not needed, to get behaviour like doubles on the NLPSolve result.
> restart:

> A:=(x-3)^2:

> sol:=Optimization:-NLPSolve(A);

                        [0., [x = 3.]]

> eval(1/x,sol[2]);

                       0.3333333333333333

> evalf(subs(sol[2],1/3.));

                          0.3333333333

As for the bundled Maple Libraries, yes they have a mix, and no it is not necessarily wrong. I did say that evalf(...,n) was not wrong, used in the right way in the right hands. It's quite OK to do evalf(expr, n) where expr is assigned some expression and n is assigned some posint. It's clear that this will not hit the corner case I mentioned, where only a preformed expression sequence is passed as a single argument to evalf.

I earnestly hope that this has not come off to other readers as a personal attack by me. I've written on this site before that I hold Alec's abilities -- Maple, coding, and math -- in high regard. And clearly there was a little confusion about the particularities of the poster's Maple 10 version, so it seems good that various distinctions have been aired.

ps. "complete information"... is very amusing. Thanks for that.
