acer


MaplePrimes Activity


These are answers submitted by acer

Quite a few requests have been made, over several years, for some quick, easy visual cue as to whether an indexed name is a table reference or an atomic identifier underneath.

Would it be useful if I were to try to put together a quick context-menu entry (automatically loadable into your session using user-owned initialization files, say) which could immediately show the situation upon right-click (but without necessitating actual 1-D conversion)?
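In the meantime, here is a rough sketch of the kind of check that would be involved (the procedure name is mine, and this is not the context-menu code itself):

WhichKind := proc(e::uneval)
  # a genuine table reference like a[b] is an indexed name, whereas a 2-D
  # "atomic identifier" is really a single symbol whose name begins with "#",
  # e.g. `#msub(mi("a"),mi("b"))`
  if type(e, 'indexed') then "table/indexed reference";
  elif type(e, 'symbol') and substring(convert(e, 'string'), 1..1) = "#" then
    "atomic identifier";
  else "plain symbol";
  end if;
end proc: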

acer

The ss1 object is a Maple module, and it has exports a, b, c, and d. (See the ?exports help page.)

Hence you can access them programmatically like so,

ss1:-a;

ss1[a];

Of the two forms above, the first works the same even if you have assigned to the global name a. But if you have assigned to global a, then the second form needs to be called like ss1['a'] instead.

And if you are inside a procedure with a local a, and global a has also been assigned, then the first form works unchanged. But the second form would have to become something like ss1[':-a'] instead.

For these reasons it is often easier to just use ss1:-a, to avoid having to type out all of ss1[':-a'] which looks silly in comparison. One of the few benefits of the ss1[a] form is that you can do nifty stuff like seq(ss1[x], x in [a,b,c,d]).
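For example (a throwaway module, purely to illustrate the quoting issue):

ss1 := module()
  export a, b, c, d;
  a := 1;  b := 2;  c := 3;  d := 4;
end module:

ss1:-a;                             # 1
ss1[a];                             # 1 (global a is currently unassigned)
seq(ss1[x], x in [a, b, c, d]);     # 1, 2, 3, 4

a := 99:                            # now assign the global name a
ss1:-a;                             # still 1
ss1['a'];                           # 1, but the quotes are now required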

acer

@brian abraham One of the common goals of using a Window Function is to correct the oscillation phenomenon near the end-points. Since there is (by definition) no original data beyond what is supplied, the FFT will usually not be able to accommodate such a discontinuity. The FFT process relies on "knowing" that the data extends in some (overlay of a) periodic way. A hard nonzero boundary for the data thus induces an effect similar to the Gibbs phenomenon.

A key purpose of using a Window Function to scale the data so that it tends strongly to zero near the original data boundary is to allow the data to be safely padded further on with zero values. Since the scaled data is converging to zero, padding with zero (or near-zero) values lets the FFT process see a natural continuation. This makes the FFT and inverse FFT behave much better at the "old" boundary, since they now act on a wider data set. And it's not just that the "old" boundaries are well inside the extended data set; the fact that the data is well behaved there is also key. The augmented data can easily be made to appear to extend the old curve, which is tending to zero near the old boundary. The augmented data doesn't even have to oscillate in order to do its job effectively. There is no significant distinction between the augmented data and the (now tiny) oscillation in the scaled old data near the old boundary. The augmented data is effectively periodic (though literally it is not) because it is so close to zero and its would-be oscillation is minuscule.

I will try to demonstrate using the previously posted example. I've added yet another, third, data set. It is based on the windowed data set, but padded with ten percent more points at each end. It's not padded with zeros; it's padded with a continuation of the scaled curve. I've added two more plots, to show close up where the padded data meets the scaled original data.



 

restart:

 

freqpass := proc(M::Vector, rng::range(nonnegint))
local Mt, Xlow, Xhi;
  Xlow, Xhi := op(rng);
  Mt := DiscreteTransforms:-FourierTransform(M);
  # zero out every frequency bin outside the requested band
  if Xlow > 1 then ArrayTools:-Fill(Xlow-1, 0.0, Mt, 0, 1); end if;
  if Xhi < op(1,Mt) then ArrayTools:-Fill(op(1,Mt)-Xhi-1, 0.0, Mt, Xhi+1, 1); end if;
  # halve the DC term, then rebuild a real signal from the one-sided spectrum
  Mt[1] := Mt[1]/2;
  2*map(Re, DiscreteTransforms:-InverseFourierTransform(Mt));
end proc:

 

f := x -> 5*x + sin(128*x) - sin(130*x) + sin(133*x) - sin(137*x);


a,b,n := -Pi,Pi,1000:

 

M := Vector(n,(i)->(evalhf@f)(a+i*(b-a)/n),datatype=float[8]):

Mg := Vector(n,(i)->evalhf(f(a+i*(b-a)/n)*exp(-1/2*(a+i*(b-a)/n)^2)),datatype=float[8]):
numpad := trunc(n*0.10):
# pad the windowed data by 10% at each end, continuing the scaled curve
Mgpadded := Vector(n+2*numpad,datatype=float[8]):
Mgpadded[numpad+1..numpad+n] := Mg:
Mgpadded[1..numpad] := Vector(numpad,(i)->M[1]
           *evalhf(exp(-1/2*(a+(-numpad+i)*(b-a)/n)^2)),datatype=float[8]):
Mgpadded[n+numpad+1..n+2*numpad] := Vector(numpad,(i)->M[n]
           *evalhf(exp(-1/2*(a+(n+i)*(b-a)/n)^2)),datatype=float[8]):

 

plots:-pointplot([seq([k,M[k]],k=1..op(1,M))],style=line,view=[0..1000,-20..20]);

plots:-pointplot([seq([k,Mg[k]],k=1..op(1,Mg))],style=line);
plots:-pointplot([seq([k,Mgpadded[k]],k=1..op(1,Mgpadded))],style=line);
plots:-pointplot([seq([k,Mgpadded[k]],k=1..numpad+100)],style=line);
plots:-pointplot([seq([k,Mgpadded[k]],k=n+numpad+1-100..n+2*numpad)],style=line);

(The five plots above are displayed here in the worksheet: the raw data, the windowed data, the padded windowed data, and close-ups of the left and right joins between the padding and the scaled data.)

filtered := freqpass(M,1..30):

filteredg := freqpass(Mg,1..30):
filteredgpadded := freqpass(Mgpadded,1..30)[numpad+1..numpad+n]:

plots:-pointplot([seq([k,filtered[k]],k=1..op(1,filtered))],style=line);

plots:-pointplot([seq([k,exp(1/2*(a+k*(b-a)/n)^2)*filteredg[k]],
                      k=1..op(1,filteredg))],style=line);
plots:-pointplot([seq([k,exp(1/2*(a+k*(b-a)/n)^2)*filteredgpadded[k]],
                 k=1..op(1,filteredgpadded))],style=line);

(The three plots above are displayed here: the low-pass filtered raw data, and the unwindowed low-pass results from the windowed and the padded windowed data.)

Download window_correction.mw

I liked Axel's the most, primarily because it quickly gives a smooth-looking curve.

But just for fun, here is yet another way to get a less smooth-looking curve, more slowly. (I like the space that shows between the curve and the y-axis. Sigh.)

ee := evalc( subs(z=x+y*I, abs(z-2)=2 ) );
plots:-implicitplot(ee, x=-4..4, y=-4..4);

acer

Can you use Maple to demonstrate that "A_10x10 is not PD under the assumption that B_4x4 is not PD" much faster than you can demonstrate that "B_4x4 is PD under the assumption that A_10x10 is PD"?

The latter of those two ways entails computing the whole logical formula for whether A is PD, up front. You've discovered that unless A is sparse or special, this can be very expensive.

But what if the logical formula for whether B_4x4 is PD were more cheaply attainable?

And what if you also devised a test of PD which could utilize a given logical formula T at any given internal logical step, by demanding that any subcheck be done under the assumption of T?

Under such circumstances, you could use the negation of the logical formula for B_4x4 being PD as T. And you could then work under that assumption T while computing whether A_10x10 is PD or not. If you got lucky then that computation would return 'false' quickly, reaching a logical contradiction before having to complete. That would allow you to conclude your hypothesis.

Now, how can one compute PD of a Matrix under some logical condition? If you look at (line numbers for Maple 14),

showstat(LinearAlgebra:-LA_Main:-IsDefinite,24..38);

then you can see code for testing PD of a symmetric Matrix, I think. Near the bottom there is a test, 0 < Re(de). You might be able to change that to something like (0<Re(de) and not(T)) or similar. Or maybe it would be better to use `is` and `assuming`?
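As a rough top-level illustration of that last idea (a made-up 2x2 symbolic Matrix, and leading principal minors rather than the IsDefinite internals):

with(LinearAlgebra):
A := Matrix([[x, 1], [1, x]]):
m1 := Determinant(A[1..1, 1..1]):    # x
m2 := Determinant(A):                # x^2 - 1
# is each leading principal minor positive, under the extra assumption x > 2 ?
is(m1 > 0) assuming x > 2;           # true
is(m2 > 0) assuming x > 2;           # true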

Oh, but now I realize that you are interested in positive-semidefinite and not positive-definite. I doubt that the above approach is suitable for any method that has to compute whether A_10x10 is PSD by calculating all its eigenvalues at once. Now, what was that rule for computing PSD using minors? Did it involve all the minors, as opposed to just the principal minors, or am I misremembering?

acer

You are mistaken in your claim that "In NLPSolve, f(Pi) or f(-Pi) are also calculated". If you trace through the calculation (using `trace`, or `stopat`, or just with `print(x)` at the start of `f`) then you can see that only floating-point approximations are passed to `f` during your call to NLPSolve.
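For instance, with a made-up objective in operator form (not your original f, and assuming I recall the operator-form calling sequence correctly), the printed arguments are all floats:

f := proc(x)
  print(x);          # everything printed here is a float, never Pi itself
  sin(x) - x/2;
end proc:
Optimization:-NLPSolve(f, -3.0 .. 3.0);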

acer

So, are you saying that a, b, c, d, wr, and p are the same for each of fmax, fmin, and fmoy?

In that case, over what variable(s) is fmax the maximum? It can't be over {a,b,c,d,wr,p} if those are to be taken as fixed when computing fmax, fmin, and fmoy. So then the only thing left to vary when computing fmax and fmin is `t`. Is that correct? I'm supposing that it is, for now.

My first attempt seems to run afoul of a known NLPSolve bug.

> f:=(a+2*b+c*d)*sin(wr*p*t);
> fmoy := p*wr*(int(f, t = 0 .. Pi/p/wr))/Pi;
> fmax:='op(1,NLPSolve(f,t=0..2,maximize))':
> fmin:='op(1,NLPSolve(f,t=0..2))':

(a + 2 b + c d) sin(wr p t)

2 (a + 2 b + c d) / Pi

> Optimization:-NLPSolve((fmax-fmin)/fmoy,a=1..5,b=0.1..0.5,c=0.4..0.8,
> d=10..20,wr=200..300,p=5..20);

Warning, no iterations performed as initial point satisfies first-order conditions
[0., [a = HFloat(3.0), b = HFloat(0.3),

c = HFloat(0.6000000000000001), d = HFloat(15.0),

p = HFloat(12.5), wr = HFloat(250.0)]]

My next attempt (re-using the assignments to f, fmoy, etc. from above) to work around that problem doesn't seem to work out. I am wondering whether it is because the objective and its gradient are not continuous (continuity being a quality that the NLPSolve methods generally require).

> F:=proc(A,B,C,DD,WR,P)
> local fmax, fmin;
> fmax:=Optimization:-NLPSolve(eval(f,[a=A,b=B,c=C,d=DD,wr=WR,p=P]),t=0..2,maximize);
> fmin:=Optimization:-NLPSolve(eval(f,[a=A,b=B,c=C,d=DD,wr=WR,p=P]),t=0..2);
> evalf((op(1,fmax)-op(1,fmin))/eval(fmoy,[a=A,b=B,c=C,d=DD,wr=WR,p=P]));
> end proc:

> objf := proc(V::Vector)
> F(V[1],V[2],V[3],V[4],V[5],V[6]);
> end proc:

> objfgradient := proc(X::Vector,G::Vector)
> G[1] := fdiff( F, [1], [X[1],X[2],X[3],X[4],X[5],X[6]] );
> G[2] := fdiff( F, [2], [X[1],X[2],X[3],X[4],X[5],X[6]] );
> G[3] := fdiff( F, [3], [X[1],X[2],X[3],X[4],X[5],X[6]] );
> G[4] := fdiff( F, [4], [X[1],X[2],X[3],X[4],X[5],X[6]] );
> G[5] := fdiff( F, [5], [X[1],X[2],X[3],X[4],X[5],X[6]] );
> G[6] := fdiff( F, [6], [X[1],X[2],X[3],X[4],X[5],X[6]] );
> NULL;
> end proc:

> Optimization:-NLPSolve( 6, objf, 'objectivegradient'=objfgradient,
> 'initialpoint'=Vector([1,0.2,0.5,11,210,6]));

Error, (in Optimization:-NLPSolve) no improved point could be found

Maybe a global optimization application (GlobalOptimization or DirectSearch) could do something useful (with either the first or second of those two approaches).

acer

What exactly is the question?

Is `r` a Vector, and do you want something like this?

map(t->(-1)^(t-1), r);

## Or, less efficiently as they each use an extra intermediary Vector,
# (-1)^~(r-~1);
# `^`~(-1,`-`~(r,1))

acer

The fascinating manner in which ArrayTools:-SearchArray varies the number of items in its returned expression sequence relies on _nresults.

Now, _nresults is part of parameter processing, and has been present since Maple 10. It's not very "Maple-like" behaviour, though, which might be why it's not usually seen even in new system commands.

The ArrayTools:-SearchArray command seems intended to mimic the Matlab `find` command (excluding the Matlab command's underdocumented acceptance of a qualifier in its argument, e.g. X<3) and so may reasonably appear un-Maple-like.
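A minimal sketch of the mechanism (a made-up minmax procedure, not SearchArray itself):

minmax := proc(L::list)
  if _nresults = 2 then
    min(op(L)), max(op(L));    # the caller asked for two results
  else
    min(op(L));                # default: just the minimum
  end if;
end proc:

r := minmax([3, 1, 4, 1, 5]);       # 1
r, s := minmax([3, 1, 4, 1, 5]);    # 1, 5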

ps. The Mapleprimes functionality to automatically recognize help-page references doesn't work for ?_nresults at the time I write this.

As a general rule, a module has to be savelib'd to a .mla Library archive (repository) and not to a .m file, if it is to be saved fully intact.

If there is no writable .mla archive in the savelibname (or libname) path then Maple will write out to a .m ("dotm") format file instead.

And, in general, a module is not fully encapsulated when saved to a .m file. That should be better documented. But there it is.

Workarounds are to assign savelibname either to a particular .mla filename or to a directory which contains a writable .mla archive. That's good practice in general, or else lots of savelib() calls will litter some directory with individual .m files instead of saving everything nicely into a single .mla container.
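A sketch of that workaround (the directory and archive names here are made up):

archive := "/home/me/maple/toolbox/mine.mla":   # hypothetical location
LibraryTools:-Create(archive):                  # create the empty .mla once
savelibname := archive:
M := module() export hi; hi := proc() print("hello") end proc; end module:
savelib('M');                                   # M is now stored in mine.mla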

FWIW, that is related to why customized context-menu actions do not work properly when the displayed output is a module. The GUI-kernel interaction there is somehow dotm-based, and only a shell of the module object gets received by the context-menu package's code. This is a shame, given how long modules have been in Maple.

The Example in ?LibraryTools,Create should use the currently supported .mla and not the old deprecated .lib/.ind filename extensions.

It might be better if the tools were more user-friendly. The `savelib` command could issue an error if no writable archive were present, and suggest using `save` instead. The `save` command could issue an error if told to save a module which it could not fully handle. And so on. I bet something better is workable, while only losing "misfunctionality".

acer

After reading through all the followups in this thread, I would suspect that the problem is that the mount of a network filesystem is sporadically failing, and then the execution is aborted.

If that guess is the case, and if you are lucky then it could be due to Maple trying to write to a file in a directory whose NFS mount has suddenly disappeared. You might be able to work around that, for your very long maple sessions, by appending/writing/reading only to/from files on local disk, such as in the /tmp directory.

If that guess is the case, and if you are less lucky, then the networked mount point on which Maple itself resides may be failing now and then. To work around that, you could try installing Maple on local disk (e.g. /usr/local on Linux/Unix).

Basically, if the appearance of .nfsXXXXXX files during each crash is a true indication that the problem is due to a sporadically failing network file system (NFS) then you may be able to work around it by having both Maple and all personal i/o data files be on local disk.

ps. anames(user) and any other variable values will not collide amongst different/distinct Maple kernels or running sessions of the commandline interface.

acer

@mehrdadparsapour 

If you look carefully enough at the results returned by `solve` in both my and John's earlier answers, you should be able to see that they do in fact contain such formulae. That is, they contain answers of the form a=..., b=... in terms of X.

But the right-hand-sides of those answers like {a=..,b=...} in terms of X are implicit RootOfs, and not explicit arithmetic formulae in terms of radicals.

You may not like that implicit RootOf form. But as I showed, it is useful. By substituting a float value for X and then applying evalf (as I showed) or fsolve, all the roots for that numeric X can be obtained. Hence you can obtain all the roots of those RootOfs, which involve simple polynomials in X, much more easily and reliably than you can obtain roots of the original system of equations using fsolve for any given numeric X.
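Here is a small sketch of that mechanism, with a made-up system (not your original one), unknowns a and b, and a single parameter X:

sys := { a^3 + b = X, a + b^2 = 2 }:
sols := [ solve(sys, {a, b}) ];                   # involves RootOf(..., X)
evalf( [ allvalues( eval(sols, X = 1.5) ) ] );    # every root, for X = 1.5

Alternatively, fsolve could be applied to the polynomial inside the RootOf after the same substitution.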

The extra RootFinding:-Parametric bit I added in my worksheet, before the `solve` stuff, was to show how many roots (i.e. distinct sets of values for a, b, c, d) there are for separate ranges of the real-valued parameter X.

acer

A:=[($1..6)];

Matrix(3,2,[ListTools:-LengthSplit(A,3)],scan=columns,order=C_order);

acer

Try setting,

interface(prettyprint=1);

beforehand. That has the advantage that the result might be expressed tersely enough for you to gain whatever insight you'd expected to get by looking at it.

Or you could suppress the output using a full colon instead of a semicolon at the end of the line (and then figure out how to pick apart and make sense of the unviewable answer).
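For example (with a made-up computation whose exact output doesn't matter):

interface(prettyprint = 1):
sol := solve(x^5 + a*x + b = 0, x);     # displayed, in terse 1-D style
sol := solve(x^5 + a*x + b = 0, x):     # full colon: display suppressed entirely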

acer
