These are answers submitted by acer

You could start by reading this. Then, if you decide to go with alpha-beta pruning, you might try studying this. After that, you could work on a (static) position evaluator.

It is an interesting question: to what degree position heuristics matter in an effective Go position evaluator (relative, say, to how much they do in a chess position evaluator).
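For what it's worth, here is a rough, generic alpha-beta sketch written as a Maple procedure. It is not Go-specific: the evaluator and moves arguments are hypothetical placeholders for a static position evaluator and a legal-move (child position) generator that you would supply yourself.

alphabeta := proc(pos, depth, alpha, beta, maximizing, evaluator, moves)
local a, b, best, child, val;
  if depth = 0 or nops(moves(pos)) = 0 then
    return evaluator(pos);      # leaf: fall back on the static evaluator
  end if;
  a := alpha; b := beta;
  if maximizing then
    best := -infinity;
    for child in moves(pos) do
      val := procname(child, depth-1, a, b, false, evaluator, moves);
      best := max(best, val);
      a := max(a, best);
      if a >= b then break; end if;   # beta cut-off
    end do;
  else
    best := infinity;
    for child in moves(pos) do
      val := procname(child, depth-1, a, b, true, evaluator, moves);
      best := min(best, val);
      b := min(b, best);
      if b <= a then break; end if;   # alpha cut-off
    end do;
  end if;
  best;
end proc: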

acer

See the ?updates,Maple9,compatibility help-page, but look at it in the product itself: in either the Classic or commandline (TTY) interface of Maple 13, or in any interface of Maple 9 through 12.

There it explains that x*x does not simplify to x^2 inside a procedure body because `x` might be a function call or something that returns a random result (and anyone who bothered to enter x*x presumably would want two distinct random results multiplied together, rather than a single result that gets squared).

In the Standard GUI of Maple 13 or the Online Help the input in that explanation is itself automatically simplified from x*x to x^2, so it reads as "x^2 does not automatically simplify to x^2". This is my favourite bug of the month. I will submit an SCR against it.
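Here is a quick illustration of the behaviour itself (my own example, viewed in the commandline interface where the display isn't re-simplified):

> x*x;           # at the top level this is automatically simplified to x^2
> f := proc() x*x end proc:
> eval(f);       # displays proc() x*x end proc -- the product stays as x*x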

acer

> m := module() option package; export f;
> local ModuleLoad;
> ModuleLoad:=proc() print("You are proud user of the Foo-package!");
> end proc:
> end module:
> with(m);
                                      [f]
 
> LibraryTools:-Create("foo.mla");
> libname:="./foo.mla",libname:
> savelib(m);
> restart:
> libname:="./foo.mla",libname:
> with(m);
                   "You are proud user of the Foo-package!"
 
                                      [f]

acer

What sort of constraints do you have?

It sounds as if you wish to fit a curve to the data using a least squares approach. The curve you intend to fit might be linear. You also wish to constrain the variables (though whether by simple bounds or by more complicated constraint equations remains to be seen).

To do least-squares fitting of data to a linear model, you can use CurveFitting:-LeastSquares or Statistics:-LinearFit. See also the ?Statistics,Regression help-page. If you have arbitrary constraints then you should be able to set it up using Optimization:-LSSolve, where the objective is the list of residuals whose sum of squares measures the distance to the model function.
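As a small illustration (made-up data; a and b are just names for the linear model's parameters):

> X := [1, 2, 3, 4, 5]: Y := [2.3, 3.8, 6.1, 8.2, 9.9]:
> CurveFitting:-LeastSquares(X, Y, t);            # unconstrained linear fit in t
> resids := [seq(a + b*X[i] - Y[i], i = 1..5)]:   # residuals of the linear model
> Optimization:-LSSolve(resids, {a >= 0, a + b <= 3});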

acer

The first link you gave is just a duplicate link to the second upload. Perhaps you meant the first link to point at H_Crear lib ITS90.mw? Mapleprimes isn't letting me view that, so I'll try to guess your problem.

Why are you saving the module to a .m file instead of a .mla archive file?

Consider these two examples,

> m := module() option package; export f;
> local A; A:=<<13>>: f:=proc(x) x*A[1,1]; end proc;
> end module:
> m:-f(3);
                                      39

> save(m,"foo.m");

> restart:
> read("foo.m");
> m:-f(3); # your situation?
                                   3 A[1, 1]

Now, saving the module to a Library archive instead (adjust locations for your OS: on Linux/UNIX I use "." to refer to the working directory in which I started the commandline interface; on Windows with the Standard GUI you'd likely want to use "C:/temp" or similar),

> restart:
> m := module() option package; export f;
> local A; A:=<<13>>: f:=proc(x) x*A[1,1]; end proc;
> end module:
> m:-f(3);
                                      39

> LibraryTools:-Create("bar.mla"):
> libname:=".",libname:
> savelib(m):

> restart:
> libname:=".",libname:
> m:-f(3);
                                      39

I'll submit an SCR noting that LibraryTools:-Create's help-page has Examples showing filenames with the older (deprecated?) .lib extension rather than the more modern .mla extension.

You might have also tried to make your module's exports reference the Matrix entries by depending on Maple's scoping functionality. That is, you might have defined the Matrix outside of the module altogether. For example,

> restart:
> m := module() option package; export f;
> f:=proc(x) x*A[1,1]; end proc;
> end module:
> m:-f(3);
                                   3 A[1, 1]

> A:=<<13>>:
> m:-f(3);
                                      39

In such a case I would suggest defining Matrix A explicitly as a local of the module m, as in the examples at top. Alternatively, you could savelib Matrix A to the .mla archive as well. But I find that such approaches, which rely on scoping, often cause more trouble than they are worth (unless there's something special which makes them more useful, such as needing the Matrix data in two distinct modules which must be kept separate for some strange reason).

acer

Make sure you don't include a space between plot and the opening ( bracket, if you are entering the command as 2D Math (the default for new users). If the space is there, Maple treats the input as the name `plot` multiplied by (....).

Next, the help-page shows various calling sequences. Be careful not to mix and match their purposes. The form plot( [f(t), g(t), t=a..b] ) is for parametric plots, where f(t) and g(t) are the formulae for the x and y coordinates at a particular t. The form plot( f(t), t=a..b ) is for plotting an expression in t. The form you gave at top, plot( [f(t), t=a..b] ), doesn't match either of those two valid forms.
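For instance, these two unrelated illustrative calls show the valid forms:

> plot([cos(t), sin(t), t = 0..2*Pi]);   # parametric: x = cos(t), y = sin(t)
> plot(sin(t), t = 0..2*Pi);             # a single expression in t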

acer

Is this what you're after?

with(Statistics):

myPDF:=piecewise(x<-10,0,x=-10,CDF(Normal(0,20),-10),PDF(Normal(0,20),x)):

myCDF:=simplify(int(convert(myPDF,Heaviside),x)) assuming x::real;

Then you could plot myPDF and myCDF (with or without discont=true).
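For example (the plotting range here is just an arbitrary choice covering a few standard deviations):

> plot([myPDF, myCDF], x = -60..60, discont = true);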

acer

It has previously worked for me on both an Intel quad-core Q6600 machine running Windows Vista and an Intel dual-core Core2Duo running Windows XP.

I wonder whether your Maple session was not completely fresh (not merely restarted, but relaunched) or did not pick up the environment variable. What happens if you issue these commands in Maple?

getenv(OMP_NUM_THREADS);

kernelopts(numcpus);

I suppose that there is a possibility that the Intel MKL BLAS is not properly recognizing your AMD quad-core Phenom as being multi-core.

acer

What were the other operations that you did before that?

Did you really mean that you assigned into a list at some prior point, or just that you last tried the assignment,

> a:=[1,2];

acer

The x on the Operators palette does LinearAlgebra:-CrossProduct on a pair of 3-Vectors. It also appears to do usual scalar multiplication on unassigned names, so I'm not exactly sure what was going on for you before.
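In typed input the equivalent command is, for a made-up pair of 3-Vectors,

> LinearAlgebra:-CrossProduct(<1, 2, 3>, <4, 5, 6>);   # the Vector <-3, 6, -3>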

It's very hard to tell how you entered the original and got an empty plot. I don't quite see how it could have been either 2D Math or 1D input (the former would produce plot multiplied by something, and in the latter the t range as given is invalid syntax).

Using the correct range syntax of t=0..5, and without a space between plot and the ( left-bracket, the plot works fine for me as 2D Math input with the Operators palette cross (x, 5th from left, at end of second row in the palette) between the 4 and the t.

acer

You likely went wrong with that x in the t^2-4*x*t.

My guess is that either your x was not assigned a numeric value, or you didn't intend to put it in at all, or you used the x as a multiplication symbol, meaning 4 x t = 4*t.

Did you mean something like this?

plot([1 + sqrt(t), t^2 - 4*t, t=0..5]);

acer

Try the "tip" in this comment if you are on MS-Windows, or its parent post for Linux.

Also, put outputoptions=[datatype=float[8]] in those RandomMatrix calls.
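That is, something along these lines (the dimensions here are only placeholders):

M := LinearAlgebra:-RandomMatrix(2000, 2000, outputoptions=[datatype=float[8]]):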

acer

It might be interesting to see how the output=list solution works out if the Matrix has clusters of close but not "equal" floating-point eigenvalues. It might turn out that the list format output treats close eigenvalues as distinct, even though one might wish them to be considered as a cluster. It could even happen that an eigenvalue which is, for practical purposes, of multiplicity greater than 1 appears as several values whose differences are only on the order of 10^(-Digits) or so.

Perhaps a clustering approach could be devised (with a re-usable procedure, naturally). Given the Vector of eigenvalues along with a floating-point tolerance (tol), a result might be returned which consists of a list of distinct index subranges. The criterion might be that the eigenvalues in each such index subrange all fall within 2*tol of each other, or some other averaging clustering scheme might do. Given that list of subranges, it would be very easy to split the Matrix of eigenvectors using the [] Matrix subindexing syntax.
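Here is a rough sketch of what I mean, under the assumption that the eigenvalues arrive as a sorted, real, float Vector; the name ClusterRanges and the greedy grouping criterion are just illustrative choices.

ClusterRanges := proc(ev::Vector, tol::numeric)
local n, ranges, start, i;
  n := LinearAlgebra:-Dimension(ev);
  ranges := [];
  start := 1;
  for i from 2 to n do
    # begin a new subrange once an entry lies more than 2*tol away from
    # the first member of the current group
    if abs(ev[i] - ev[start]) > 2*tol then
      ranges := [op(ranges), start .. i-1];
      start := i;
    end if;
  end do;
  [op(ranges), start .. n];
end proc:

# e.g. ClusterRanges(<1.0, 1.0 + 1e-9, 2.5, 2.5 + 2e-9, 7.0>, 1e-7);
#      returns [1 .. 2, 3 .. 4, 5 .. 5]

Given such a list of subranges, and the corresponding Matrix V of eigenvectors, the columns could then be split with something like map(r -> V[1..-1, r], ClusterRanges(E, 1e-7)).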

ps. I've been looking at computing eigenvectors associated with selected eigenvalues, using clapack. It may be that the iblock and isplit parameters of clapack's dstebz and dstein could be useful in the case of a real symmetric Matrix. Those two parameters relate to the blocks that occur in the reduced tridiagonal form of the Matrix. Each eigenvalue is associated with a block in the tridiagonal form, but until I brush up on the theory I'm not sure what (if any) relationship this might have to clustering.

acer

If the Matrix is 3x1000 and has numeric entries then it (or its transpose) can be sent directly to pointplot3d.

m := LinearAlgebra:-RandomMatrix(3,1000,generator=0.0..1.0,
                   outputoptions=[datatype=float[8]]);

plots:-pointplot3d(m^%T);

acer

Sometimes the Optimization package can be used to find a root under certain constraints. A constant objective (such as 1) may be tried, so that success involves finding a feasible point.

> f:=3*x-y^2:
> g:=sin(x-1/4)*(y-3/4):

> Optimization:-Minimize(1,{f=0,g=0,x+y<=1},x=0..1,y=0..1);
          [1., [x = 0.187499999999996642, y = 0.749999999999993783]]
 
> eval(x+y,%[2]);
                                 0.9375000000

Optimization can also handle constrained problems with procedures rather than expressions.

acer
