

Here is something that is far from perfect but that can give you ideas...
proposal.mw

Tip: build the plot structure that corresponds to the icon only for the non-white pixels (so the icon doesn't hide the curve).
Look at the ImageTools package for image operations.


From the LinearAlgebra:-SmithForm help page:
"The SmithForm(A) function returns the Smith normal form S of a Matrix A with univariate polynomial entries in x over a field F. Thus, the polynomials are regarded as elements of the Euclidean domain F[x]."

The elements of your matrix are neither univariate (they contain both x and s) nor polynomials, but rational functions.
If the variable x is a typo, proceed this way:

B := eval(A, x=s):
C := simplify(denom(B[1, 1]) *~ B):
LinearAlgebra:-SmithForm(C,s)

If the element [1, 1] truly contains x, just do this:

C := simplify(denom(A[1, 1]) *~ A):
LinearAlgebra:-SmithForm(C,s)
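For a quick cross-check outside Maple, here is a sketch with SymPy's smith_normal_form (an assumption on my part: SymPy >= 1.8 with the `domain` keyword). As in the Maple code above, the Smith form is computed over a principal ideal domain, here the polynomial ring QQ[s], so the entries must be polynomials:

```python
# Cross-check sketch in Python/SymPy (not the Maple session above).
# smith_normal_form needs a PID, here QQ[s], so entries must be polynomials.
from sympy import Matrix, QQ, degree, rem, symbols
from sympy.matrices.normalforms import smith_normal_form

s = symbols('s')
C = Matrix([[s, 1],
            [0, s]])                      # polynomial entries in s

S = smith_normal_form(C, domain=QQ[s])

# S is diagonal, S[0,0] divides S[1,1], and the determinant's
# degree (here 2) is preserved up to a unit
assert S[0, 1] == 0 and S[1, 0] == 0
assert rem(S[1, 1], S[0, 0], s) == 0
assert degree(S[0, 0], s) + degree(S[1, 1], s) == 2
```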


VectorCalculus[Jacobian]
LinearAlgebra[CharacteristicPolynomial]


How is it possible to use all of my CPU cores to calculate the eigenvalues and eigenvectors of a big matrix faster?

Write a parallel algorithm which computes the eigenvalues.
There are many articles on this topic; you could begin with this one:
12_eigenvalue_8up.pdf
With Maple, use the Threads package.
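As an illustration of the same idea outside Maple (a sketch, not a Threads solution): NumPy's eigh delegates to LAPACK, which already runs multithreaded whenever NumPy is linked against a threaded BLAS such as OpenBLAS or MKL; the thread count is then controlled by environment variables like OMP_NUM_THREADS.

```python
# Sketch: dense symmetric eigendecomposition via multithreaded LAPACK.
import numpy as np

rng = np.random.default_rng(0)
n = 500
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                      # symmetric test matrix

w, V = np.linalg.eigh(A)               # eigenvalues (ascending) + eigenvectors

# verify A V = V diag(w)
assert np.allclose(A @ V, V * w, atol=1e-6)
```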

I advise you to read the help pages for diff, dsolve and odeplot.

restart:
with(plots):
odesys := {diff(y(x), x) + I*y(x) = 0, y(0) = 1};
esol := rhs(dsolve(odesys));
convert(esol, trig);
nsol := dsolve(odesys, numeric, range=0..1):
display(
    plot(Re(esol), x=0..1, style=point, symbol=circle, numpoints=20, color=blue, legend="exact"),
    odeplot(nsol, [x, Re(y(x))], x=0..1, color=red, legend="numerical")
);
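If you want to double-check the result numerically outside Maple, here is a sketch in Python/SciPy (my addition, not part of the worksheet): writing y = u + I*v turns y' + I*y = 0, y(0) = 1 into the real system u' = v, v' = -u, whose exact solution is Re(y) = cos(x), Im(y) = -sin(x).

```python
# Numeric cross-check of y' + i*y = 0, y(0) = 1, i.e. y(x) = exp(-i*x).
import numpy as np
from scipy.integrate import solve_ivp

# real system for u = Re(y), v = Im(y): u' = v, v' = -u
sol = solve_ivp(lambda x, Y: [Y[1], -Y[0]], (0.0, 1.0), [1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

xs = np.linspace(0.0, 1.0, 11)
u, v = sol.sol(xs)
assert np.allclose(u, np.cos(xs), atol=1e-6)   # Re(y) = cos(x)
assert np.allclose(v, -np.sin(xs), atol=1e-6)  # Im(y) = -sin(x)
```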

Solution depends on what you want...

Below is an mw file that provides a detailed (though far from exhaustive) approach to your problem. However, I advise you to read the following discussion before you rush to the Maple file.

You asked "Is there any other linear or nonlinear model that could work?"
It depends on what you mean by "that could work".

When you have data like yours that you want to "summarize" by a function, the first step is to ask yourself "What kind of summary am I looking for?".
Do I want to set the problem in a statistical framework or not?
If so, then a necessary first step (though almost always left unstated) is to model statistically the situation you are concerned with.

In your case you could for instance postulate that:

1. I have an explanatory variable named X.
2. I have a dependent variable named Y.
3. I postulate that Y = F[p](X), where p denotes a vector of parameters whose "true" value is unknown.
4. I observe Y, and possibly X, with some observational errors EY and EX.
5. I postulate some relations between EY and EX, for instance:
• EY and/or EX are independent of Y and X (aka homoskedasticity), or maybe they are not (aka heteroskedasticity)
• EY and EX are mutually independent (or not)
• the joint distribution of EY and EX (general case) is assumed to be known.

Only after this model has been explicitly written do you have the right to assess the value of p with some ad hoc method (including LinearFit, NonlinearFit, Kriging, ...).
This is how any statistical model should be built.

Now, what is this function F[p]?
It can be:

• an a priori model which comes from some physical considerations
• an a posteriori model inferred from the visual observation of several pairs (Xn, Yn)

The postures to adopt are completely different depending on whether you have a prior model or not:

• If you have one you must not doubt it. If the data do not align with this model you should begin by verifying your measurement processes and your assumptions of points 4 and 5 above.
Let's take an example: you drop a marble in a column under vacuum and you record both the time t and the position z of the marble. You know that z = 1/2*g*t^2: what are you going to do if the records do not align on this parabola?
Are you going to change the model and turn all of Physics upside down?

• Without prior knowledge of the relation Y = F[p](X), you are philosophically free to build a model which fits the data with any accuracy you want (in your case, define F[p](X) as a polynomial of degree 51, or use statistical models such as Kriging which can produce an exact interpolator).
What should restrain you from doing meaningless things is any prior information you may have on the variances of EY and/or EX.

Without knowing what you mean by "that could work", I have taken the liberty to make these assumptions:

• You have no prior physical model that could represent the relationship between Y and X.
This means that all you are looking for is a model M which passes reasonably close to the observations.
• You consider that EX = 0.
• You consider that the observation of Y for a given value of X differs from what M(X) predicts by some random error E whose true distribution is unknown to you.
You just assume that the variance of E is finite, that the statistical properties of E do not depend on the value of Y itself, and that errors at different sites X and X' are mutually independent.

In this situation, which may or may not be the one you have in mind, the attached file will provide you some useful information (at least I hope so):
it presents at least four different models which approximate, or even interpolate, your data.
Now it's up to you to clearly define your problem: what kind of model are you looking for? How do you assess the goodness of a model? And so on.

fit.mw

All the work done in this file can be extended to ANY OTHER type of model, even nonlinear ones.
I'm speaking here about the estimation of the generalization error and the selection of the "best" model.

PS 1: for the CurveFitting:-Lowess solution, the optimal order and/or optimal bandwidth can be assessed with the resampling method explained in the attached file (a small bandwidth will enhance fidelity to the data; a larger one will produce stronger smoothing but better predictive capabilities).

PS 2: for the CurveFitting:-BSplineCurve solution, the optimal order can be found in the same way.
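The resampling idea behind both PSs (estimate the generalization error by holding data out, then keep the model that minimizes it) can be sketched outside Maple. This Python/NumPy toy example is my own illustration, not the attached worksheet: it selects a polynomial degree by K-fold cross-validation.

```python
# Model selection by K-fold cross-validation (illustrative sketch).
import numpy as np

def cv_error(x, y, degree, k=5):
    """Mean squared prediction error of a degree-`degree` polynomial fit,
    estimated by k-fold cross-validation."""
    idx = np.arange(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)            # fit on the rest...
        coef = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coef, x[fold])           # ...predict the held-out fold
        errs.append(np.mean((y[fold] - pred) ** 2))
    return float(np.mean(errs))

x = np.linspace(-1, 1, 40)
y = 1 + 2*x - 3*x**2                 # noiseless quadratic "data"

# degree 1 underfits; degree 2 captures the underlying model
assert cv_error(x, y, 2) < cv_error(x, y, 1)
```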


Your object is not a Matrix but an Array.
Guessing this Array is N x N x P, where P may or may not equal N, you can use this:

restart:
A := Array(1..3, 1..3, 1..2, (i, j, k) -> i + 2*j + 3*k):
A[.., .., 1], A[.., .., 2];    # sorry, outputs are not loaded

# transpose ALL sub-arrays A[.., .., k] for k=1..2
TA := Array(1..3, 1..3, 1..2, (i, j, k) -> A[j, i, k]):
TA[.., .., 1], TA[.., .., 2];    # sorry, outputs are not loaded

# transpose ONLY ONE sub-array A[.., .., 1]
TA := Array(1..3, 1..3, 1..2, (i, j, k) -> `if`(k=1, A[j, i, k], A[i, j, k])):
TA[.., .., 1], TA[.., .., 2];    # sorry, outputs are not loaded

Look also at the ArrayTools package.
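For comparison, the same two operations in Python/NumPy (my illustration, not a Maple feature): swapping the first two axes transposes every N x N slice at once.

```python
import numpy as np

# 3 x 3 x 2 array, entries i + 2*j + 3*k (1-based indices, as in the Maple code)
A = np.fromfunction(lambda i, j, k: (i+1) + 2*(j+1) + 3*(k+1), (3, 3, 2))

# transpose ALL sub-arrays A[:, :, k]
TA = A.transpose(1, 0, 2)
for k in range(2):
    assert (TA[:, :, k] == A[:, :, k].T).all()

# transpose ONLY the first sub-array A[:, :, 0]
TA1 = A.copy()
TA1[:, :, 0] = A[:, :, 0].T
assert (TA1[:, :, 1] == A[:, :, 1]).all()    # second slice untouched
```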


Example

restart;
plots:-display(
plot3d(x*exp(-x^2-y^2), x=-3..3, y=-2..2, color=red, style=surface),
plot3d(exp(-x^2-y^2), x=-1..1, y=-1..2, color=blue, style=wireframe)
)

Same observation as @tomleslie...

As your procedure seems to be an iterative solver that finds the value of TMP4 which is, in some sense, close to the target TMP1,
I suggest you write the condition this way:

while abs(TMP4/(TMP1)-1) > 0.001 do

Then

f(230, 34);

1.654689569
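The stopping test above is a relative tolerance on TMP4 against the target TMP1. The same pattern, sketched in Python with a hypothetical update function (my illustration, not the asker's procedure):

```python
def iterate_until_close(update, x0, target, reltol=1e-3, maxit=100):
    """Repeat `update` until x is within `reltol` of `target`, relatively,
    i.e. until abs(x/target - 1) <= reltol."""
    x = x0
    for _ in range(maxit):
        if abs(x / target - 1) <= reltol:
            return x
        x = update(x)
    raise RuntimeError("no convergence")

# hypothetical example: damped fixed-point iteration converging to 2.0
result = iterate_until_close(lambda x: (x + 2.0) / 2, 1.0, 2.0)
assert abs(result / 2.0 - 1) <= 1e-3
```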

EDITED...

First point: in Maple the Hermite polynomials are mutually orthogonal with respect to the measure exp(-y^2) dy (which differs from their definition in Statistics, where they are mutually orthogonal with respect to the measure df(y), where f denotes the PDF of a standard Gaussian random variable; see for instance Hermite_polynomials).

Accordingly your definition of HN is wrong, and even doubly wrong because of the unjustified presence of an extra sqrt.
You should have defined HN this way:

HN := (n, y) -> exp(-y^2)*simplify(HermiteH(n, y)) / (sqrt(Pi)*2^n*n!):

Second point: in a reply @acer suggested you use [fsolve(simplify(HermiteH(7, x)))]; instead of evalf(allvalues(RootOf(HermiteH(7, x), x))); and you answered him: "One solution is not enough, that's why I tried with solve. I need to get all the possible solutions, then discard complex ones and study the behavior of the ones that only provide pure real coefficients for the expansion".
It's pure nonsense!
All the zeros of Hermite polynomials are real. For a quick and simple proof look here the-roots-of-hermite-polynomials-are-all-real

What you want to compute are the values named "Gauss points", or "Gauss integration points", which are indeed the zeros of the Hermite polynomials. Have you ever heard of a Gauss quadrature method where the integration points would be complex?
These zeros have symmetric values, implying that Hermite polynomials of odd degrees have 0 as a root.

You should follow acer's advice, all the more so because the way you compute the Gauss points gives wrong results:

r := evalf(allvalues(RootOf(HermiteH(7, x), x)));
0.8162878829, -1.673551629, -0.8162878829, 0.

Only 4 roots instead of 7!
(which explains why @tomleslie's solution contains so many indeterminate coefficients)

Here are the correct values:

fsolve( simplify(HermiteH(7, x), 'HermiteH') );
-2.651961357, -1.673551629, -0.8162878829, 0., 0.8162878829, 1.673551629, 2.651961357
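A quick numeric cross-check outside Maple (Python/NumPy, my addition): the 7-point Gauss-Hermite quadrature nodes are precisely the seven real zeros of HermiteH(7, x).

```python
import numpy as np

# nodes and weights of the 7-point Gauss-Hermite rule
nodes, weights = np.polynomial.hermite.hermgauss(7)

assert len(nodes) == 7
assert np.allclose(nodes, sorted(nodes))      # returned in increasing order
assert np.isclose(nodes[3], 0.0)              # odd degree -> 0 is a root
assert np.allclose(nodes, -nodes[::-1])       # symmetric values
assert np.isclose(nodes[-1], 2.651961357, atol=1e-8)
```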

Another way to get them is

evalf([solve(simplify(HermiteH(7, x), 'HermiteH'))]);
sort(Re~(%));
[0., 2.651961356 + 5.356554700e-11*I, -2.651961356 - 5.356554700e-11*I,
 0.8162878825 + 2.479096188e-10*I, -0.8162878825 - 2.479096188e-10*I,
 1.673551629 - 3.850404915e-10*I, -1.673551629 + 3.850404915e-10*I]

[-2.651961356, -1.673551629, -0.8162878825, 0., 0.8162878825, 1.673551629, 2.651961356]

Finally, here is a full computation of the ansatz u7:

restart:
HN := (n, y) -> exp(-1/2*y^2)*simplify(HermiteH(n, y))/sqrt(sqrt(Pi)*2^n*n!):

u7 := a0*HN(0, y) + a1*HN(1, y) + a2*HN(2, y) + a3*HN(3, y) + a4*HN(4, y) + a5*HN(5, y) + a6*HN(6, y):

resid := diff(u7, y$2) + 4*y*diff(u7, y)*u7 + 2*u7^2:

r := [ fsolve( simplify(HermiteH(7, x), 'HermiteH') ) ];
equations := NULL:
for root_r in r do
    equations := equations, evalf(eval(resid, y = root_r));
end do:

# Checking
numelems({equations});
indets({equations});

solutions := fsolve({equations}, {a0, a1, a2, a3, a4, a5, a6});
plot(eval(u7, solutions), y=-2..3);

Oldies but Goodies: jresv48n2p111_A1b.pdf

The circled equality is always false...

For m=0 the rhs should be exp(-x).
(So I repeat what I said in my initial answer: where does this equality come from?)

restart:
interface(version);
with(orthopoly):
formula := m -> Sum(m!/n! * x^(m-n) * LaguerreL(m, n-m, x)^2, n=0..+infinity);
for i from 0 to 2 do
    value(simplify(formula(i), 'LaguerreL')):
    numer(%) / expand(denom(%)):
    print(i, simplify(%, hypergeom)):    # simplification required for i >= 3
end do:

Remark: in case you think that Maple doesn't compute LaguerreL polynomials correctly,
here is an alternative way to do the job.
It's based on Rodrigues' formula for generalized Laguerre polynomials:

# Rodrigues' formula for Generalized Laguerre Polynomials
# (aka LaguerreL(m, alpha, x))

GLP := (m, alpha) -> piecewise(
    m = 0, 1,
    m::posint, exp(x)*x^(-alpha)/m! * diff(exp(-x)*x^(m+alpha), x$m),
    undefined
):

# The first two GLPs
eval(GLP(0, n-0));
            1
simplify(eval(GLP(1, n-1)));
            n - x

GLP_formula := m -> Sum(m!/n! * x^(m-n) * GLP(m, n-m)^2, n=0..+infinity):
for i from 0 to 2 do
    value(GLP_formula(i)):
    numer(%) / expand(denom(%)):
    print(i, simplify(%, hypergeom)):    # simplification required for i >= 3
end do:

# same results as those obtained using LaguerreL

A Generalized Laguerre Polynomial L(m, a, x) (in LaTeX notation $L_m^a(x)$) is defined only if a > -1.
In your formula a = n-m, which suggests that n should vary from m-1 (excluded) to +oo for L(m, a, x) to exist (and not from 0 to +oo as you wrote).
Look at this:

restart:
with(orthopoly):
simplify(LaguerreL(2, 3, x), 'LaguerreL');
            10 - 5*x + 1/2*x^2
simplify(LaguerreL(2, -0.999, x), 'LaguerreL');
            0.0005005000000 - 1.001000000*x + 0.5000000000*x^2
simplify(LaguerreL(2, -1, x), 'LaguerreL');
            LaguerreL(2, -1, x)
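The first of these values can be cross-checked with SymPy's assoc_laguerre (the generalized Laguerre polynomial L_n^a); this Python snippet is my addition, not part of the Maple session:

```python
from sympy import symbols, assoc_laguerre, expand, Rational

x = symbols('x')

# L_2^3(x) should match Maple's output 10 - 5*x + 1/2*x^2
L23 = assoc_laguerre(2, 3, x)
assert expand(L23 - (10 - 5*x + Rational(1, 2)*x**2)) == 0
```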

Moreover, the lhs of your formula can't be equal to exp(x), for it is a function of x and m.

Last but not least, the formula (in LaTeX style) seems wrong for x = 0:

S := (m, N, x) -> add(m!/n!*x^(m-n)*LaguerreL(m, m-n, 0)^2, n=0..N):
p := 10:
S(p, p-1, 0);
            0
S(p, p, 0);
            LaguerreL(10, 0)^2

Could you be more explicit about your formula and give us some reference?


Several possibilities.

restart;
with(Physics):
convert([a, b], `*`);
Error, unrecognized conversion

# force the use of the "default" arithmetic operator `*`:
convert([a, b], :-`*`);
            a b

# or use a workaround
`*`(a, b):
mul([a, b]):

Exact value of the L1 error...

As the zeros of the ChebyshevT polynomial of degree N are

zeros := sort([seq(cos((2*k-1)*Pi/2/N), k=1..N)])

it becomes interesting to find out how far it is possible to push the formal calculation.

The result is in the attached file.
The expression of the L1 norm is quite lengthy (uncomment the line K := ... to watch it).

restart:
with(CurveFitting):

# zeros of ChebyshevT(5, x)
N     := 5:
zeros := sort([seq(cos((2*k-1)*Pi/2/N), k=1..N)]);

# sample u(x) at the zeros
u       := x -> exp(1/2*x^2 - 1/2):
u_zeros := u~(zeros);

# interpolation polynomial of sampled u
P__2 := simplify( PolynomialInterpolation(zeros, u_zeros, x) ):

# interpolation gap
gap := simplify(u(x) - P__2):

# a definite integral of the interpolation gap
J := unapply(int(gap, x=a..b), [a, b]):

# To avoid taking the absolute value of the integrand, first find the points where gap = 0
# and remark that between two consecutive points gap has a constant sign.
#
# Main observation: as P__2 is an interpolation polynomial, gap(r) = 0 for each r in zeros.
# These points are ordered in increasing values.
cut  := select(`is`, zeros, nonnegative):
fcut := sort(evalf(cut), output=permutation):
cut  := cut[fcut]:

# augment cut
cut := [cut[], 1];

# L1 norm of u(x) - P__2
# The integration is done by adding all the J(cut[n], cut[n+1]) and accounting for the
# sign of gap in the interval [cut[n], cut[n+1]].
K := 2 * add( (-1)^(n+1)*J(cut[n], cut[n+1]), n=1..(N+1)/2 ):
K := simplify(K):

# attempt to build a "compact" representation of K
fn     := [ indets(K, function)[] ]:
nfn    := numelems(fn):
__with := [ seq(fn[n] = C__||n, n=1..nfn) ]:
print~(__with):

compact_K := eval(K, __with):
compact_K := map(simplify, collect(compact_K, [C__2, C__4]));

evalf(K);
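A crude numeric sanity check of the same construction (Python/NumPy, my addition; a dense-grid estimate stands in for the exact piecewise integration of the worksheet):

```python
import numpy as np

N = 5
# zeros of the Chebyshev polynomial T_5
zeros = np.sort(np.cos((2*np.arange(1, N+1) - 1) * np.pi / (2*N)))
u = lambda x: np.exp(0.5*x**2 - 0.5)

# degree-4 interpolant of u through the 5 Chebyshev zeros
coef = np.polyfit(zeros, u(zeros), N - 1)

# estimate the L1 norm of the gap on [-1, 1] on a dense grid
xs = np.linspace(-1.0, 1.0, 20001)
gap = u(xs) - np.polyval(coef, xs)
L1 = np.mean(np.abs(gap)) * 2.0           # mean * interval length

assert np.allclose(np.polyval(coef, zeros), u(zeros))   # exact interpolation
assert 1e-6 < L1 < 0.05                                 # small but nonzero
```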
