acer

MaplePrimes Activity

These are replies submitted by acer

@mzh It would take away too much from Maple if constructing such things produced an error. For one thing, such constructions may have different interpretations, and thus should be "OK" to exist as intermediate objects.

Consider, as just one example,

myequation := 32 = 64;

                            32 = 64

evalb(myequation); # of course

                             false

myequation mod 2;

                             0 = 0

evalb(myequation mod 2);

                              true

So, construction of `myequation` might happen as part of some involved computation over several values, and only later might the intermediate objects (equations) get tested modulo some value. It wouldn't be possible to code that if the construction immediately produced an error. And in general there are many more examples like this.

Another way to handle this is to ensure that the arguments are row-column. For this column-column example one could transpose the first Vector. That makes LinearAlgebra:-Multiply do the work, without the conjugation,

<a, b, c> . <d, e, f>;
                conjugate(a) d + conjugate(b) e + conjugate(c) f

<a, b, c>^%T . <d, e, f>;

                        a d + b e + c f

<a|b|c> . <d,e,f>; # same orientations as above

                        a d + b e + c f

Transposition can be done in-place (highly memory- and speed-efficient) on a Vector, either by using the LinearAlgebra:-Transpose command or by using the `subtype` parameter of the `rtable_options` command.
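As a minimal sketch of both approaches (the Vector name V is mine, just for illustration),

V := <a, b, c>:                                   # a column Vector
LinearAlgebra:-Transpose(V, inplace=true):        # now a row Vector, no copy made
rtable_options(V, 'subtype' = 'Vector[column]'):  # flip it back by resetting the subtype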

Other straightforward ways to do it include making the assumption that the relevant names are purely real (or using a command like `evalc`, which assumes that in a blanket way),

evalc(<a, b, c> . <d, e, f>);

                        a d + b e + c f

<a, b, c> . <d, e, f> assuming real;

                        a d + b e + c f

<a, b, c> . <d, e, f> assuming a::real, b::real, c::real;

                        a d + b e + c f

acer

I overlooked the addition of Jacobi-based SVD as the new functions dgejsv and dgesvj in version 3.2 of LAPACK. (See section 9.5 of the v3.2 release notes.)

If you were to compile CLAPACK 3.2.1, and its accompanying reference BLAS, as a pair of .dll libraries with those entry points exported, then you should be able to use them directly from Maple via the `define_external` command. (I might try building this one first, perhaps. The only pre-built libraries for this that I know of are .lib, and you'd need dynamic .dll instead. So you'd need to build it yourself.)
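A rough, untested sketch of the kind of call involved (the library name clapack.dll and the trailing-underscore entry point are my assumptions about how such a build would turn out; the argument list follows LAPACK's documented dgesvj interface):

dgesvj := define_external('dgesvj_',
    JOBA::string, JOBU::string, JOBV::string,            # job option characters
    M::REF(integer[4]), N::REF(integer[4]),              # dimensions, passed by reference
    A::ARRAY(datatype=float[8]), LDA::REF(integer[4]),   # input Matrix and leading dimension
    SVA::ARRAY(datatype=float[8]),                       # computed singular values
    MV::REF(integer[4]), V::ARRAY(datatype=float[8]), LDV::REF(integer[4]),
    WORK::ARRAY(datatype=float[8]), LWORK::REF(integer[4]),
    INFO::REF(integer[4]),                               # status code
    LIB="clapack.dll"):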

acer


Very nice explanation, indeed more sophisticated than my answer.

And thanks for `style=point`, which looks much better! I was trying to steer him towards bounding the error (or "last" term), but didn't want to give away what might be a homework question too easily.

This reminds me of a usenet thread from 2010 that revolved around confusion over Maple's use of a double-precision `sin` which (on some platforms) is not always the same as what the operating system's runtime provides. Axel provided a fascinating reference during that discussion, BTW.

Any readers interested in the range reduction of arguments that are large multiples of Pi down into (0,Pi/4) could start with Muller's great book, Elementary Functions: Algorithms and Implementation. (Axel, we've discussed that book once before, I think...)

Code posted in responses here by me and Axel contains conversion to Horner form, and therein lies a whole other deep area. That reminds me of something else I had sheets on for potential blogging: Horner's form for Matrix polynomial evaluation. And then there is Estrin's form for use with Threads...
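For readers unfamiliar with it, the conversion itself is a one-liner; here is a small illustration of my own (not taken from those responses),

p := 3*x^3 + 2*x^2 + 5*x + 7:
convert(p, horner, x);

                      7 + (5 + (2 + 3 x) x) x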

acer


@Pavel Kriz By computing the log-base-10 of the error, one can see how many decimal places of the result are accurate.

I chose 16 since evalhf and compiled double-precision C both carry close to 15-16 decimal digits of mantissa. A natural question is: what formula, written out to 16 decimal digits in C or Fortran, can be used to get roughly 16 correct decimal digits for all x in (0,Pi/4)?

BTW, I don't know if this is assigned work. Sometimes, when computing a series approximation, one can bound the error and in turn find the number of terms needed to ensure a target accuracy.
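As a minimal sketch of that idea (my own illustration): the Taylor series for sin(x) on (0,Pi/4) is alternating, so the error is bounded by the first omitted term, and one can search for the least number of terms meeting a target,

# find the least n with (Pi/4)^(2*n+1)/(2*n+1)! < 10^(-16)
for n from 1 to 30 do
  if evalf((Pi/4)^(2*n+1)/(2*n+1)!) < 1e-16 then
    print(n);
    break;
  end if;
end do;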


@alfred Yes, of course, my x=stage2 (or x=stage2[1] with the 'location' option) is not right at all when computing stage1. It could be repaired by isolating G=g for x, evaluating that at G=stage2, and then using the result as the x value at which to compute stage1. One reason I won't write that out right now is that in general you may not be able to isolate G=g for x (either explicitly, or uniquely).
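Just to indicate the step I mean (a sketch only; note that for this particular g the `isolate` command would return just a single branch of a square root, which is exactly the non-uniqueness issue),

isolate(G = (.5-x)^2*(.7-y)^2*(.3-y)^2, x);  # one branch only, so use with care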

Here's another shot. The first is a different symbolic approach (even though I just said why symbolics might not suffice, in general).

And here's what might be a correction to the earlier numeric approach, keeping in mind that what was wrong before was using y=v(x), which made the same kind of mistake.

restart:

f := (.5+x-y)^2;

g := (.5-x)^2*(.7-y)^2*(.3-y)^2;

                         f := (0.5 + x - y)^2

                g := (0.5 - x)^2 (0.7 - y)^2 (0.3 - y)^2

stage2:=maximize(maximize(g,x=0..1),y=0..1,location);

        stage2 := 0.01102500000, {[{y = 0.}, 0.01102500000], [{y = 1.}, 0.01102500000]}

seq(T[1], T in stage2[2]);

                             {y = 0.}, {y = 1.}

seq(maximize(eval(f,op(T[1])),x=0..1,location), T in stage2[2]);

           2.250000000, {[{x = 1.}, 2.250000000]}, 0.2500000000, 

             {[{x = 0.}, 0.2500000000], [{x = 1.}, 0.2500000000]}

v := proc(X)
      if not type(X,numeric) then 'procname'(args);
      else Optimization:-NLPSolve(eval(g,x=X),y=0..1,maximize)[1];
      end if;
     end proc:

#plot(v(x),x=0..1);

st2num:=Optimization:-NLPSolve(v,0..1,maximize,method=branchandbound);

                    st2num := [0.0110249987483025, [0.]]

H:=proc(X)
    if not type(X,numeric) then 'procname'(args);
    else eval(f,{x=X,y=Optimization:-NLPSolve(v,0..1,maximize,
                                              method=branchandbound)[2][1]});
    end if;
   end proc:

#plot(H(x),x=0..1);  # takes a while

Optimization:-NLPSolve(H(x),{x>=0,1>=x},maximize,method=sqp);

Warning, no iterations performed as initial point satisfies first-order conditions
                       [2.25000000000000000, [x = 1.]]

Come to think of it... the definition of procedure `v` should probably use a global method just in case there are local maxima for some input X which don't happen to be the global maximum.

v := proc(X)
      if not type(X,numeric) then 'procname'(args);
      else Optimization:-NLPSolve(eval(g,x=X),y=0..1,maximize,
                                  method=branchandbound)[1];
      end if;
     end proc:


@PatrickT All three of those have the same URL.


@Samir Khan I had started looking at the manipulation behaviour when the data is set up as ISOSURFACE() instead of POINTS(). At smaller numPoints, at least, it seems responsive, although it takes longer to render than to produce. At numPoints=200 the rendering by the GUI seems to "go away forever". The lighting model doesn't seem to get used either, which is unfortunate.

I haven't yet looked at MESH() for this.
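For reference, the usual route to an ISOSURFACE structure from an implicit definition is plots:-implicitplot3d; a minimal sketch on a toy surface (a sphere, not the fractal data), just to show the style and lighting options in play,

plots:-implicitplot3d(x^2 + y^2 + z^2 = 1, x=-1..1, y=-1..1, z=-1..1,
                      grid=[25,25,25], style=patchnogrid, lightmodel=light2);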

@Samir Khan Ok, I've got some changes to the code already. (But I won't be able to finish until at least next week at the earliest, due to work deadlines.)

For numPoints=100, I got the memory allocation down from about 950MB to around 11MB, depending on the method. On an Intel i7 930 @2.8GHz (quad core) the time to produce the plot3d data structure is reduced from 11sec to about 1.4sec using a combination of Threads and evalhf, and about 0.8sec using the Compiler serially.

Using the revised code in Maple 15.01 on 64bit Windows 7 it was also easy to compute numPoints=200 in about 7sec, and numPoints=400 in about 65sec with a few hundred MB allocation.

Also of interest is that this evalhf approach took almost exactly four times as long when run serially compared with its Threads version on that quad-core. Sure, we know that this computation should be embarrassingly parallelizable, but achieving linear speed-up w.r.t. the number of cores (with Maple Threads) is usually another story altogether.
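For anyone curious about the general pattern, here is a generic sketch of my own (not the actual fractal code) showing the two acceleration routes mentioned above,

p := proc(x::float) local s, i;
       s := 0.0;
       for i to 100000 do s := s + sin(x + s) end do;
       s;
     end proc:

evalhf(p(0.1));               # evaluate under hardware floats
cp := Compiler:-Compile(p):   # or compile to native code (serial)
cp(0.1);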

Samir, with your permission I'd like to embed the code into an Application worksheet which demonstrates the various timings, using radio buttons to select an optimization choice of none/option-hfloat/evalhf/Compiled combined with an acceleration choice of serial/parallel. Of course, the parameters would also be adjustable.

The outstanding irritation is the time it takes the GUI to finish rendering such pointplots. They're not so large that the GUI always freezes merely by clicking on them. But they do cause the GUI to use much more memory, just to plot the structure.

Is there a better way to represent these fractal objects than point-plots? Apart from being very slow to render, they are difficult to see properly when numPoints gets high. I think that's because the point symbols visually merge together, and so the object does not look crisp. Nor do glossiness and the lightmodel affect a collection of many point symbols the way they do a patchnogrid surface. What would be nice would be an implicit surface, ideally looking more like this. If any Maple expert has an opinion or suggestion for this visualization aspect it'd be welcome.

@goli I was not originally convinced that simplex[convexhull] was the best way to approach your goal.

The problem here is that some portions of your latest set of data points (which you hope to connect as if they lay on a nice convex curve segment) are only "close to" being convex.

We can see that the very first stage fails to do what was expected of it.

restart:

ch1 := [[0.5e-2, -.57], [0.1e-1, -.56], [0.25e-1, -.57], [0.35e-1, -.555], [0.8e-1, -.485],
 [0.9e-1, -.42], [.115, -.43], [.125, -.36], [.155, -.365], [.16, -.3], [.17, -.34],
 [.175, -.275], [.195, -.24], [.23, -.24], [.23, -.18], [.245, -.215], [.25, -.145],
 [.275, -.1], [.28, -.155], [.3, -.12], [.31, -0.35e-1], [.315, -0.25e-1], [.32, -0.85e-1],
 [.345, -0.4e-1], [.355, 0.55e-1], [.36, 0.65e-1], [.375, 0.15e-1], [.38, .11], [.39, .135],
 [.405, .175], [.41, .19], [.415, .205], [.425, .11], [.425, .24], [.425, .425], [.425, .43],
 [.425, .435], [.425, .44], [.43, .12], [.43, .265], [.43, .405], [.43, .41], [.43, .44],
 [.43, .445], [.435, .295], [.435, .38], [.435, .385], [.435, .445], [.44, .445], [.445, .445],
 [.45, .445], [.465, .195], [.465, .44], [.48, .43], [.485, .245], [.485, .425], [.49, .42],
 [.5, .29], [.5, .405], [.505, .31], [.505, .395], [.51, .345], [.51, .35], [.51, .355],
 [.51, .36], [.51, .365], [.51, .37]]:

ch11 := simplex[convexhull](ch1,output=[hull]):

ch1b:={op(ch1)} minus {op(ch11)}:

plots:-display( plot(ch11,style=point,symbol=solidbox,symbolsize=10,color=cyan),
                plot(ch11,color=cyan),
                plot(ch1b,style=point,symbol=solidbox,symbolsize=10)
               );

You should be able to see the problem points in the above plot. The issue is that the `convexhull` command does not accept a tolerance option to denote how far off a point can be while still being acceptable. That's unfortunate.

Did you round the data for those points? If you rounded, and if you are lucky, then there may be more success with the unrounded data. For example the point [.17, -.34] might originally have been closer to [.17, -.3401] or so.

If retaining more decimal digits in the data doesn't cure this main issue, then it may still be possible to write a more careful (and involved) piece of code: fit a curve to the computed hull segment, find out which points are "close enough", adjust the set that was the computed hull, and so on.
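A rough sketch of that idea (the cubic model and the tolerance tol are arbitrary assumptions of mine, and treating the hull segment as a function of x is admittedly crude),

tol := 0.02:                        # assumed tolerance; tune for the data
hull := simplex[convexhull](ch1):
F := CurveFitting:-LeastSquares(hull, x, curve = a*x^3 + b*x^2 + c*x + d):
near := select(p -> abs(eval(F, x = p[1]) - p[2]) < tol, ch1):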

Sorry, I don't have spare time to develop that fully.
