tomleslie


MaplePrimes Activity


These are answers submitted by tomleslie

of the thought process behind my original post.

I assumed that you wanted (somehow) to come up with a "simple polynomial" which would closely approximate the graph displayed in your original post.

A more detailed explanation of the steps I originally performed is given in the attached worksheet. The following just summarises the basic issues

  1. You chose 11 conditions, which requires a 10th-order polynomial in order to produce enough unknown coefficients for fitting purposes.
  2. One of these, t(2.8)=0, is incorrect: I assume you meant t(-2.8)=0, which corresponds to the graph.
  3. Your (corrected) fitting points extend over the range [-2.8..3.6], but the graph extends over the range [-3.5..4.0]. You should never EVER expect to fit something on one range and then be able to extrapolate. This is pretty much guaranteed to fail in the extrapolated ranges, ie [-3.5..-2.8] and [3.6..4.0] - and the higher the order of the polynomial, the bigger the FAIL.
  4. It is further true that whilst you will accurately fit all the desired roots and extrema, a 10th-order polynomial will have further roots and extrema, and you cannot guarantee that these will not occur in the range in which you are interested.
  5. Adding in the two end-points (ie x=-3.5 and x=4) and increasing the polynomial order to 12 addresses (3) above, but not (4), so doing this still results in a very bad fit.
  6. Since the basic problem at this stage was the "spurious" extrema and structure arising because of (4) above, I decided to get "innovative", with no purely mathematical justification. This involved using the lowest-order polynomials possible which were likely to fit the available data - more or less why cubic splines exist. I don't propose to get into a discussion here on using higher (or even lower!) order splines.
  7. Using splines in this way has two obvious problems
    1. The resultant function is piecewise, although it will be zeroth- and first-order continuous
    2. There is no way to specify that a particular point is a maximum/minimum: for example, it is trivial to ensure that the resulting curve will pass through the point [-2,2], but there is no criterion (I know of) which will force this point to be a maximum - see the short sketch after this list.
  8. As things turned out, some of the extrema are slightly "off" - but really not by much, and the overall fit of the spline function is much better than any other fit I have seen posted here.
  9. Having obtained a spline fit, it is then relatively trivial to generate an arbitrary number of points in the desired range, and then experiment with polynomial orders which produce a (more-or-less) equally good fit. In these experiments, a 10th-order polynomial isn't too bad, but a 12th-order polynomial is better. Increasing the polynomial order beyond this doesn't (subjectively) improve things very much.
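As a quick illustration of point 7.2, here is a minimal, purely hypothetical sketch (the points are mine, chosen only for illustration): build a spline through a few points including [-2,2] and evaluate the slope there. In general the derivative is nonzero, so an interpolated point is not forced to be an extremum.

  restart;
  s:= CurveFitting:-Spline( [ [-3,0], [-2,2], [-1,0], [0,-2] ], x ):
  eval( diff(s, x), x=-2 );  # typically nonzero, so [-2,2] need not be a maximum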

Check the attached - a somewhat extended version of my original response - although the answers are more or less the same.

  restart;
  interface(rtablesize=10);
#
# Op's original code, rewritten slightly for convenience
#
  f:=z-> local j; add( a[j]*z^j, j=0..10);
  sol:=solve( [ f(-2)=2,   f(-1)=0,  f(0)=-2, f(1)=0,
                f(2.5)=-3, f(-2.8)=0, f(3.6)=0,
                D(f)(-2)=0, D(f)(0)=0, D(f)(1)=0, D(f)(2.5)=0
            ]
          );
  assign(sol);
  plot(f, -2..3.6);
  plot(f, -3.5..4.0)

[10, 10]

 

proc (z) local j; options operator, arrow; add(a[j]*z^j, j = 0 .. 10) end proc

 

{a[0] = -2., a[1] = 0., a[2] = 4.215714943, a[3] = .6123603878, a[4] = -2.989035219, a[5] = -.7872968560, a[6] = .8867511005, a[7] = .1875489924, a[8] = -.1192927605, a[9] = -0.1261252414e-1, a[10] = 0.5861935913e-2}

 

 

 

#
# The above is obviously crap outside the range
# [-2..3.6] and isn't very good within this range!
#
# There is a "spurious" maximum around x=3.2 and
# some "doubtful" extra "structure" between x=1
# and x=2.
#
# My first thought for *improving* this fit was to
# incorporate the "endpoints" at x=-3.5 and x=4
#
# If anything this probably made things worse! There
# is a spurious minimum at -3.1, a spurious maximum
# at 3.3 and some "doubtful" extra structure between
# x=1 and x=2
#
  restart;
  f:=z-> local j; add( a[j]*z^j, j=0..12);
  sol:=solve( [ f(-3.5)=-5, f(4)=3,    f(-2)=2,   f(-1)=0, f(0)=-2,
                f(1)=0,     f(2.5)=-3, f(-2.8)=0,  f(3.6)=0,
                D(f)(-2)=0, D(f)(0)=0, D(f)(1)=0, D(f)(2.5)=0
            ]
          );
  assign(sol);
  plot(f, -3.5..4);
  plot(f,  -2..3.6);

proc (z) local j; options operator, arrow; add(a[j]*z^j, j = 0 .. 12) end proc

 

{a[0] = -2., a[1] = 0., a[2] = 4.222829476, a[3] = .7077645406, a[4] = -3.073951093, a[5] = -.9556490425, a[6] = 1.004192804, a[7] = .2767298689, a[8] = -.1658857894, a[9] = -0.2993414733e-1, a[10] = 0.1321353602e-1, a[11] = 0.1088780319e-2, a[12] = -0.3989330638e-3}

 

 

 

#
# Having failed completely with "simple" polynomial fits
# I decided to be a little "creative". Since the curve
# presented in the OP's original post is "simple", ie nice
# and smooth between various critical points (ie roots and
# extrema), I decided to investigate what would
# happen if I used the lowest-order polynomials possible
# which would fit all the relevant critical values,
# including the end points.
#
# The obvious choice was to use cubic splines - I just
# wanted to see how close I could get to the curve in
# the original post, so I came up with the following.
#
# One problem with using splines is that there is no way
# (I know of!) to specify extrema - eg one can require
# that the curve goes through the point [-2, 2], but
# cannot ensure that this point will actually be a maximum.
# Therefore one can anticipate that the exact positions of
# maxima and minima will be a little "off", but (I
# thought) what the hell - let's see how close one can get
#
# And the answer was pretty damn close over the whole
# fitting range!
#
  restart;
  sol:= unapply
        ( CurveFitting:-Spline
          ( [ [-3.5,-4], [-2.8,0], [-2,2], [-1,0], [0,-2], [1,0], [2.5,-3], [4,3] ],
            x
          ),
          x
        );
  p1:= plot( sol,
             -3.5..4,
             color=red,
             title = typeset("Spline Fit"),
             titlefont = [times, bold, 20],
             size = [500,500]
           );

sol := proc (x) options operator, arrow; piecewise(x < -2.8, 17.9388292655016244+6.26823693300046436*x+(-1)*1.13051269125459264*(x+3.5)^3, x < -2, 12.8978731751973967+4.60638327685621363*x+(-1)*2.37407665163464454*(x+2.8)^2+(-1)*.323628055544527560*(x+2.8)^3, x < -1, 2.37298953519057854+.186494767595289268*x+(-1)*3.15078398494151068*(x+2)^2+.964289217346221528*(x+2)^3, x < 0, -3.22220555024906741+(-1)*3.22220555024906741*x+(-1)*.257916332902846157*(x+1)^2+1.48012188315191384*(x+1)^3, x < 1, -2.+.702327433400981249*x+4.18244931655289509*x^2+(-1)*2.88477674995387634*x^3, x < 2.5, -.412895816645142411+.412895816645142411*x+(-1)*4.47188093330873393*(x-1)^2+1.90885581480798128*(x-1)^3, -2.70507441668204240+(-1)*.117970233327183038*x+4.11797023332718304*(x-2.5)^2+(-1)*.915104496294929515*(x-2.5)^3) end proc

 

 

#
# The above spline fit relies on a piecewise definition,
# which will be zeroth and first-order continuous at the
# piecewise boundaries. But the OP "appeared" to want a
# polynomial fit. Therefore I decided to use the above
# spline fit to generate a "gazillion" points across the
# desired range [-3.5, 4], and then use these points to fit
# a polynomial - rather than relying on eight or so
# obvious critical points
#
# There is little "mathematical" justification for this. I
# just wanted to see how close I could get. Hence the
# following. Note that since the maxima and minima positions
# will be a little "off" in the spline fit, they will
# almost certainly be slightly "off" in the polynomial fit
#
# Generate points to fit from the spline equation above
#
  XYData:= Matrix
           ( [ seq
               ( [i, sol(i)],
                 i=-3.5..4, 0.1
               )
             ]
           ):
#
# Specify the order of the polynomial to fit
#
  deg:=12:
  poly:= unapply
         ( Statistics:-Fit
           ( add
             ( a[j]*x^j,
               j = 0..deg
             ),
             XYData,
             x
           ),
           x
         );
#
# And plot the resulting polynomial
#
  p2:= plot( poly,
             -3.5..4,
             color = red,
             title = typeset("Polynomial Fit"),
             titlefont = [times, bold, 20],
             size = [500, 500]
           );

proc (x) options operator, arrow; -HFloat(1.9117797607634794)+HFloat(0.7932509567181691)*x+HFloat(2.8883074854838564)*x^2-HFloat(0.969490774176162)*x^3-HFloat(1.230486407494531)*x^4+HFloat(0.1510728611375433)*x^5+HFloat(0.22747130818441824)*x^6-HFloat(0.004915653141855747)*x^7-HFloat(0.022056822920799694)*x^8-HFloat(2.752000230423313e-4)*x^9+HFloat(0.0010920546564248531)*x^10+HFloat(1.5301992199799316e-5)*x^11-HFloat(2.17547980547018e-5)*x^12 end proc

 

 

#
# Just for comparison purposes, plot the spline
# and polynomial fits on the same graph
#
  plots:-display
         ( [ p1, p2],
           color = [red, blue],
           legend = [ typeset("Spline"),
                      typeset(deg, "-th order Polynomial")
                    ],
           legendstyle = [ font=[times, bold, 16] ],
           title = typeset("Spline and Polynomial Fit"),
           titlefont = [times, bold, 20],
           size = [500, 500]
      );

 

 

Download doFit2.mw

 

covers your requirement.

The overall strategy is to identify about six "important" points and fit to these using (cubic) splines. This returns a pretty good fit, but by its nature, the fitted function will be piecewise. However, this piecewise function can be used to generate "lots" of data points. The latter can then be fitted to a (continuous) polynomial function.

  restart;
  interface(rtablesize=10):
  with(CurveFitting):
  with(Statistics):

#
# Fit the curve using splines, just by
# "eyeballing" specific values. This will
# lead to a piecewise function
#
# Accuracy of fit is very dependent on how
# well I estimated the following points!
#
  sol:= unapply
        ( Spline
          ( [ [-3.5,-4], [-2,2], [-1,0], [0,-2], [1,0], [2.5,-3], [4,3] ],
            x
          ),
          x
        );
#
# Plot the fit to see how "good" it is
#
  plot( sol,
        -3.5..4,
        title = typeset("Spline Fit"),
        titlefont = [times, bold, 20],
        size = [500,500]
      );

sol := proc (x) options operator, arrow; piecewise(x < -2, 16.2490909090909099+5.78545454545454518*x+(-1)*.793535353535353538*(x+3.5)^3, x < -1, 2.85818181818181838+.429090909090909189*x+(-1)*3.57090909090909125*(x+2)^2+1.14181818181818184*(x+2)^3, x < 0, -3.28727272727272712+(-1)*3.28727272727272712*x+(-1)*.145454545454545475*(x+1)^2+1.43272727272727285*(x+1)^3, x < 1, -2.+.719999999999999751*x+4.15272727272727327*x^2+(-1)*2.87272727272727302*x^3, x < 2.5, -.407272727272727231+.407272727272727231*x+(-1)*4.46545454545454579*(x-1)^2+1.90707070707070692*(x-1)^3, -2.70909090909090722+(-1)*.116363636363637113*x+4.11636363636363711*(x-2.5)^2+(-1)*.914747474747474865*(x-2.5)^3) end proc

 

 

#
# Use the spline (piecewise) fit to generate a "large"
# set of data points, and then fit these to a polynomial
# of degree given by 'deg'. Although the fit with deg:=10
# isn't *too* bad, you really need deg:=12 to get a *good* fit.
#
  deg:=12:
#
# Generate points to fit from the spline equation above
#
  XYData:= Matrix
           ( [ seq
               ( [i, sol(i)],
                 i=-3.5..4, 0.1
               )
             ]
           ):
  poly:= unapply
         ( Fit
           ( add
             ( a[j]*x^j,
               j = 0..deg
             ),
             XYData,
             x
           ),
           x
         );
#
# Plot the spline fit and the polynomial fit on the
# same graph
#
  plot( [ sol, poly ],
         -3.5..4,
          color = [red, blue],
          legend = [ typeset("Spline"),
                     typeset(deg, "-th order Polynomial")
                   ],
          legendstyle = [ font=[times, bold, 16] ],
          title = typeset("Spline and Polynomial Fit"),
          titlefont = [times, bold, 20],
          size = [500, 500]
      );

proc (x) options operator, arrow; -HFloat(1.91382370521809)+HFloat(0.8270491392225706)*x+HFloat(2.8766495822929747)*x^2-HFloat(1.0218731217974237)*x^3-HFloat(1.2062301494265644)*x^4+HFloat(0.1686662965315391)*x^5+HFloat(0.21831973140615726)*x^6-HFloat(0.0067877048883922715)*x^7-HFloat(0.020883518457061916)*x^8-HFloat(2.123258660692321e-4)*x^9+HFloat(0.0010326858108323598)*x^10+HFloat(1.5283062102964467e-5)*x^11-HFloat(2.078340912562484e-5)*x^12 end proc

 

 

 

 

Download fitStuff.mw

plot options, as shown in the attached

  restart;
  interface(rtablesize=10):

  sys:= tau-> [ diff(x(t), t) = x(t)*(2-(1/20)*x(t))-x(t-tau)*y(t)/(x(t)+10)-10,
                diff(y(t), t) = y(t)*(x(t)/(x(t)+10)-2/3),
                x(0) = 40,
                y(0) = 16
              ]:
  doplot1:= sol->  plots[odeplot]
                   ( sol,
                     [[t,x(t)], [t,y(t)]],
                     0 .. 10,
                     labels=[typeset("t"), typeset("x(t), y(t)")],
                     labelfont=[times, bold, 20]
                   ):
  doplot2:= sol-> plots[odeplot]
                  ( sol,
                    [x(t), y(t)],
                    0 .. 10,
                    labels=[typeset("x(t)"), typeset("y(t)")],
                    labelfont=[times, bold, 20]
                  ):

#
# Solution for tau=0
#
  sol := dsolve( sys(0), numeric):
#
# Plot of x(t) and y(t) versus t
#
  doplot1(sol);
#
# Plot of y(t) versus x(t)
#
  doplot2(sol);

 

 

#
# Solution for tau=0.4
#
  sol := dsolve( sys(0.4), numeric):
#
# Plot of x(t) and y(t) versus t
#
  doplot1(sol);
#
# Plot of y(t) versus x(t)
#
  doplot2(sol);

 

 

 

 


 

Download delODE.mw

one of the options in the attached meets your requirements

Once again this website is refusing to display a worksheet - you will have to download to view - sorry


mplot.mw

that shown in the attached, which (again!) this website is failing to display.

Download aPlot.mw


 

states that the "hybrid" solution method (my emphasis)

The method='hybrid' option provides a LU-based solver which operates in part in both hardware and software float modes. This can be a more efficient solver at high settings of Digits. It computes the LU decomposition in hardware floating-point mode and then does iterative refinement in software floating-point mode

Since you have 32-digit initial values, this is way over the limit for hardware floating point (~16 digits, if I recall). So don't specify a "method" for the solution. Just set Digits to a value greater than the precision of your coefficients (say 40) and let LinearSolve() run with no 'method' option.
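For concreteness, a minimal sketch of that suggestion - the matrix and right-hand side below are just placeholders, not the OP's data:

  restart;
  Digits:= 40:
  A:= Matrix(3, 3, (i,j)-> evalf(1/(i+j-1))):   # placeholder high-precision entries
  b:= Vector(3, i-> evalf(1/i)):
  LinearAlgebra:-LinearSolve(A, b);             # no 'method' option supplied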

If that still fails, then upload the worksheet here using the big green up-arrow in the MaplePrimes toolbar so that we can do some serious diagnosis.

you are using consists only of terms in p0 and p1.

Your loop returns the coefficients of p0 and p1 in the equation, and correctly returns a coefficient of 0 for p2, p3, p4, and p5, since none of these terms appear in the original equation.
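A small made-up illustration of that behaviour (the expression and the coeff() call here are mine, not necessarily the OP's exact loop): coeff() simply returns 0 for any name that does not appear in the expression.

  restart;
  expr:= 3*p0 + 5*p1 - 7:
  seq( coeff(expr, cat(p, k)), k=0..5 );   # returns 3, 5, 0, 0, 0, 0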

the graph shown in the attached


restart; interface(rtablesize = 20)

hVector := Vector[row](18, i -> `if`(1 < i, i*(i+1), 0));
plots:-listplot(hVector, style = point, symbol = asterisk, color = blue);

 


Download plotVec.mw

as in the attached, which for some reason is failing to display inline here :-(
 

Download solveSYS.mw

The following works

restart;
interface(rtablesize=10):
data:="x:='x';y:='y';sol:=dsolve(diff(y(x),x)=1,y(x));";
eval~(parse~(StringTools:-Split(data, ";")));

"x:='x';y:='y';sol:=dsolve(diff(y(x),x)=1,y(x));"

 

[x, y, y(x) = x+_C1]

(1)

 

Download trickPar.mw

 

a procedure, one should use

uses plots;

rather than

with(plots);

So the attached will work.

For future reference, you should upload code in editable/executable form using the big, green, up-arrow in the Mapleprimes toolbar rather than posting 'pictures' of code which have to be retyped

  restart;
  bezier2:= proc(p0, p1, p2, p3)
                 uses plots:
                 local z:
                 z:= (r, s)-> p0[s]*(1-r)^3+3*p1[s]*r*(1-r)^2+3*p2[s]*r*(1-r)+p3[s]*r^3:
                 spacecurve( [z(t,1), z(t,2), 0],
                             t=0..1,
                             axes=boxed,
                             color=blue
                           );
          end proc:
  bezier2( [2,5], [3,5,4], [3.5,1], [2,0]);

 

interface(rtablesize=10):

 

Download doPlot.mw

I've never been too impressed by the RealDomain() package, but for this problem it isn't really necessary, because the following will work.

restart;
sol:= solve({x^2+y^2+z^2-4*x+6*y-2*z-11 = 0, 2*x+2*y-z = -18}):
assume(x::real, y::real, z::real);
yv:= solve(Im~(allvalues~(sol))):
`union`(allvalues~(eval(sol, yv[]))[2..3],yv);

The attached code does not really address any of the (somewhat) odd behaviour which has been reported by the OP in this thread. It is an attempt to produce code which is somewhat more flexible and efficient than that which I originally supplied.

  1. The "flexibility" improvement is that it will automatically determine the optimal(?) number of parallel tasks to run, based on the value returned by kernelopts(numcpus) on the executing machine. It will automatically determine the correct parameters for each of the parallel tasks. It handles the case where the size of the problem does not divide exactly by the number of parallel tasks, by making the final task in the parallel set (marginally) longer than the others.
  2. The "efficiency" improvement (if any) is obtained by using elementwise operation, zip() and map() functions to replace seq() functions whereever possible.

The "flexibility" stuff is a benefit (easier to understand/maintain) and probably of benefit when running on machines whose core count is differetn from mine.

The "efficiency" stuff was a bit of  waste of time, since execution performance is pretty much what I obtained before :-(

Anyhow, for what it is worth, check the attached

#
# Using the fastest sequential code I came up with
#
  restart;
  nelems:= 10000000:
  n:= 374894756873546859847556:
  #n:= 700;
  A:= Array(1 .. 4, 1 .. nelems):
  st:= time[real]():
  A[1,..]:=  Array( 1..nelems,
                    i->i^10*n
                  ):
  A[2,..]:= length~(A[1, ..])-1:
  A[3,..]:= zip( (i,j)-> iquo(i, 10^(j-2)),
                 A[1,..],
                 A[2,..]
               ):
  A[4,..]:= map(irem, A[1,..], 1000):
  time[real]() - st;
  A:
  A[1, -2];
  A[2, -2];
  A[3, -2];
  A[4, -2];

53.697

 

3748943819789586888963018855788947266280687433550338920856680612859778836854072888791259847556

 

93

 

374

 

556

(1)

  restart;
#
# Parallelize the above across numcpus tasks
#
  nelems:= 10000000:
  n:= 374894756873546859847556:
  st:= time():
#
# Procedure which is executed in each of the
# parallel tasks
#
  doTask:= proc( sv, nv)
                 local r:= nv-sv+1,
                       B:= Array(1..4,1..nv-sv+1),
                      i :
                 B[1,..]:= Array(1..r,i->(sv-1+i)^10*n):
                 B[2,..]:= length~(B[1, ..])-~1:
                 B[3,..]:= zip( (i,j)-> iquo(i, 10^(j-2)),
                                B[1,..],
                                B[2,..]
                              ):
                 B[4,..]:= map(irem, B[1,..], 1000):
                 return B;
           end proc:
#
# Procedure which is executed after all parallel
# tasks are complete to "assemble" all of their
# results into a single output
#
  contTask:= proc( )
                  return ArrayTools:-Concatenate(2, _passed ):
             end proc:
#
# Procedure which figures out how many parallel tasks
# should be run, what the arguments for these tasks
# should be, and what should be run when all parallel
# tasks have completed
#                  
  setupTask:= proc(n, nc)
                   local i, rng:=floor(n/nc):

                   Threads:-Task:-Continue
                   ( contTask,
                     seq( Task=[doTask, (i-1)*rng+1, i*rng ],
                          i=1..nc-1
                        ),
                          Task=[doTask, (nc-1)*rng+1, n ]
                   );

              end proc:

  st:= time[real]():
  A:= Threads:-Task:-Start( setupTask,
                            nelems,
                            kernelopts(numcpus)
                          ):
  time[real]() - st;
  A[1, -2];
  A[2, -2];
  A[3, -2];
  A[4, -2];

25.363

 

3748943819789586888963018855788947266280687433550338920856680612859778836854072888791259847556

 

93

 

374

 

556

(2)

interface(rtablesize=10):

 

Download speedUP4.mw

something like

  restart:
#
# ignore this command: only needed to make
# worksheets display properly on the
# MaplePrimes website
#
  interface(rtablesize=10):
#
# Load necessary packages
#
  with(LinearAlgebra):
  with(CodeTools):
#
# make sure that truly random matrices are being
# generated each time this code is executed. If
# the same result is required each time the code
# is executed, comment out the following line
#
  randomize():
#
# Specify the number of times that the calculation
# is done for each matrix size. NB a different
# matrix of the same size will be used for each
# iteration
#
  numiters:=10:
#
# Do the calculation recording the matrix size
# and the average cpu time taken for each matrix
# size. Average is taken over 'numiters' iterations
#
  seq( [i, CPUTime
           ( Eigenvectors
             ( RandomMatrix(i) ),
             iterations=numiters
           )[1]
       ],
       i=2..9
     );

[2, 0.1400000000e-1], [3, 0.3280000000e-1], [4, .2558000000], [5, 0.6710000000e-1], [6, .1217000000], [7, .2168000000], [8, .3916000000], [9, .6489000000]

(1)

 


 

Download cpuIter.mw

#
# Define the size of the (assumed square)
# matrix
#
  n:=20:
#
# Generate a random nXn matrix
#
  m:=LinearAlgebra:-RandomMatrix(n,n):
#
# Multiply its diagonal elements
#
  mul( m[i,i], i=1..n);
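For what it's worth, an equivalent alternative (a sketch, assuming a Maple version in which mul() supports the 'x in container' form over an rtable) is to extract the diagonal as a Vector and multiply its entries:

  mul( x, x in LinearAlgebra:-Diagonal(m) );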
