acer
MaplePrimes Activity


These are replies submitted by acer

Why did I not see this before? Perhaps I shouldn't have been so quick to believe the claim attributed to your instructor.

Note that I changed the initial assignment of various variables to be exact rationals instead of floats, and removed the `evalf` around a fraction of Pi. See the EKM_04.mw upload link, for that in the preliminaries.
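For example (purely illustrative; `L` and these values are not the actual worksheet quantities), that kind of change amounts to replacing an assignment like

> L := evalf((1/3)*Pi):      # a default-precision float; its roundoff error is locked in

by the exact form

> L := (1/3)*Pi:             # exact, so later evaluation at Digits=500 loses nothing

so that raising Digits afterwards is actually meaningful.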

> simpleK := map(expand, K):   # expand each entry of K

> Digits := 500:

> Nsols := [fsolve(Determinant(simpleK), N)]:

> K8 := eval(simpleK, N = Nsols[8]):

> X8 := Matrix([NullSpace(evalf(K8))[]]):

> Norm(K8.X8);

                         3.7677*10^(-476)

> for i to nops(Nsols) do
>    KK[i] := eval(simpleK, N = Nsols[i]);     # Matrix at the i-th determinant root
>    NS[i] := NullSpace(evalf(KK[i]));         # its (floating-point) null space
>    if nops(NS[i]) > 0 then
>       try XX[i] := Matrix([NS[i][]]);
>          print([i, Norm(KK[i].XX[i]), evalf[10](Norm(XX[i]))]);
>       catch:
>       end try;
>    end if;
> end do;

                [1, 2.5289*10^(-474), 0.7880837989]
                [2, 2.5753*10^(-474), 0.7766997177]
                [3, 2.4118*10^(-474), 0.7576109811]
                [4, 4.755*10^(-476), 0.7312073739]
                [5, 1.5963*10^(-475), 0.7150784196]
                [7, 3.160*10^(-476), 0.7848194670]
                [8, 3.8677*10^(-476), 0.8215593884]
                [9, 5.16758*10^(-476), 0.8558701527]
                [10, 4.17199*10^(-477), 0.8847592280]
                [11, 5.7069*10^(-477), 0.9068848065]
                [12, 1.0583*10^(-477), 0.9222282989]
                [14, 7.90*10^(-476), 0.9460877743]
                [15, 6.603*10^(-474), 0.9553751693]
                [16, 2.416*10^(-474), 0.9599030240]
                [17, 5.0*10^(-470), 0.9980170575]
                [18, 6.468*10^(-466), 0.7524713989]

So I don't see what is unacceptable about these results, where each pair KK[i] and (non-zero) XX[i] is the solution given by N=Nsols[i], for i=1..18. This approach looks more accurate, is simpler and faster, and produces all the solutions.

The (less accurate) single solution Xsol from the eigen-solving approach above matches (up to a minus sign) the more accurate result for the 8th root, Nsols[8], of the determinant of the expanded Matrix K.

> Norm(map(fnormal, XX[8]+Xsol, 10, 0.1e-79));

                       1.351806574*10^(-72)
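(Here fnormal just rounds the float entries and zeroes any whose magnitude is below the stated threshold, before the Norm is taken; a trivial illustration with made-up numbers:)

> fnormal(2.5*10^(-81), 10, 0.1e-79);    # magnitude below the threshold: replaced by zero
> fnormal(0.1234567890, 10, 0.1e-79);    # above the threshold: left essentially as is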

You could try this, producing a "best" eigenvalue of about 10^(-69) and a result K.X whose norm is of the same order.

EKM_04.mw

I'm not completely convinced that this is producing the globally smallest absolute eigenvalue. And I'm not completely convinced that the smallest absolute eigenvalue tends to zero as Digits is raised.

Also, there is probably a better way. Certainly it should not be necessary to compute all eigenvalues and their eigenvectors just to find the eigenspace of some eigenvalue closest to zero.
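For instance, one standard route to just the eigenvalue of smallest magnitude is plain inverse iteration. A rough sketch on a made-up float Matrix A (and assuming that smallest-magnitude eigenvalue is real, simple, and well separated):

> with(LinearAlgebra):
> A := RandomMatrix(6, 6, generator = -1.0 .. 1.0):   # stand-in for evalf(K8)
> x := RandomVector(6, generator = -1.0 .. 1.0):
> for k to 30 do
>    x := LinearSolve(A, x);       # each solve amplifies the eigendirection whose
>    x := x / Norm(x, 2);          # eigenvalue has the smallest magnitude
> end do:
> lambda := Transpose(x) . A . x;  # Rayleigh quotient: approximates that eigenvalue
> Norm(A . x - lambda * x, 2);     # residual check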

And minimizing the absolute value may not be the best approach anyway, since that is often not a great way to do what is (here, essentially) a rootfinding exercise. It turns what might be an easier rootfinding problem into a global minimization problem.

I didn't even look at the preliminary computations, to see what they do and to try to figure out whether there is some altogether different approach.

acer

I wonder if there is any decent way to get a useful approximation of Variance(data2), without having to compute the solution data2.

acer

@aryan_ams Earlier you wrote that,

 [A]+[B] N+[C] N^(2)+[D] N^(3)+...+[H] N^(8) = 0

and now you write that the equation is, instead,

 ( [A]+[B] N+[C] N^(2)+[D] N^(3)+...+[J] N^(8) ) . X = 0

  K . X = 0

  K = ( [A]+[B] N+[C] N^(2)+[D] N^(3)+...+[J] N^(8) )

And you think that X will be an eigenvector of K? Are you completely sure that you now have the right equation? Why isn't the equation K.X = lambda*X, with lambda an eigenvalue of K and X a corresponding eigenvector? Have you been trying to say that you want to solve,

 | K | = 0    where |..| means determinant?

so that you could perhaps solve,

  LinearAlgebra:-Determinant(K) = 0

  LinearAlgebra:-Determinant(A+B*N+C*N^2+DD*N^3+...+J*N^(8)) = 0

If that is so, then can't you construct a procedure and fsolve it for N,

  z := N -> LinearAlgebra:-Determinant(A+B*N+C*N^2+DD*N^3+...+J*N^(8));

  fsolve(z, a..b); # for some sensible numbers a and b.
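As a small self-contained sketch of that route (a toy 2x2, degree-2 example with made-up coefficient Matrices, standing in for the degree-8 case):

  with(LinearAlgebra):
  A := Matrix([[-1, 0], [0, -4]]):
  B := Matrix([[ 0, 1], [ 1, 0]]):
  C := Matrix([[ 1, 0], [ 0, 1]]):

  z := N -> Determinant(A + B*N + C*N^2):   # stand-in for the degree-8 determinant

  fsolve(z(N), N);      # all real roots of the (here, quartic) polynomial det = 0
  fsolve(z, 0 .. 3);    # or a single root in a chosen range, via the operator form

Since the determinant of such a Matrix polynomial is itself just a polynomial in N, the expression form of fsolve returns every real root at once, which is usually preferable to hunting over individual ranges.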

@aryan_ams Oh, thanks for the detail. So, what is the Matrix for which you want eigenvalues and eigenvectors?

Does N take only integer values, or float values? Can't you just create a procedure,

N -> LinearAlgebra:-Norm(A + B*N + ... + H*N^7 + J*N^8)

and pass that to fsolve?

I'd like to add that the size of plots inlined in the worksheet is now (as of Maple 13?) important. Some small plotted symbols will not show up, no matter how densely they appear in the plot, below a certain symbolsize and plot size. In other words, for inlined plots the size of the display affects whether small symbols get seen or not. So if you want to export with the right-click context-menu of the plot, it matters what size it is! (I find this behaviour to be unhelpful, and it is not how "images" work in other software or even some other Maple contexts.)
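One way around that display-size dependence (a sketch only; the device, file name, dimensions, and plotted data here are placeholders, and the available plotoptions vary with the driver and Maple version) is to export programmatically with plotsetup, so the exported size is stated explicitly:

  plotsetup(jpeg, plotoutput = "bigplot.jpg",
            plotoptions = "width=1200,height=900");    # explicit pixel size for the export
  plot([seq([i, sin(i/10.)], i = 1 .. 500)],
       style = point, symbolsize = 4);                 # written to the file, not inlined
  plotsetup(default);                                  # restore inline plotting

That way, whether the small symbols survive depends on the stated pixel dimensions rather than on how large the inlined plot happens to be.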

acer

I think that you're right: mouse/pointer manipulation of a plot (whether in a Component or not) will not necessarily result in the GUI changing anything that the kernel knows by name.

In other words, even changing a plot's qualities in a Component %Plot0 doesn't affect what GetProperty('Plot0','value') will return. That command will simply return whatever was assigned to the Component by use of DocumentTools. Manipulating the plot with the cursor doesn't change the value. So, any such manipulations are lost, in programmatic terms. Hence, if you need programmatic access to any changes to the stored plot, then those changes better all have been made programmatically in the first place.
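To make that concrete (a sketch, assuming the document already contains a Plot component whose identity is Plot0):

  with(DocumentTools):
  P := plot(sin(x), x = 0 .. 2*Pi):
  SetProperty('Plot0', 'value', P);     # store the plot in the component programmatically
  Q := GetProperty('Plot0', 'value');   # returns what was Set above, regardless of any
                                        # subsequent mouse manipulation of the display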

Have I understood you rightly?

Maple is in rather serious need of a mutable plot data structure, so that such two-way communication can be done properly and efficiently. If the Standard GUI can access rtables by "handle", then it should be possible to access some new (module or Record based?) plot data structure in a similar fashion. Hey, a fellow can dream.

acer

I don't see an equation, because I don't see an equals sign (or any statement of things being equal).

What is N? Is it a Matrix, or a scalar? Is it known, or unknown?

Do you mean that [A]+[B] N+[C] N^(2)+[D] N^(3)+...+[J] N^(8) is equal to something else, say Q? If so, then is Q known while N is unknown?

Which Matrix is it for which you want the eigenvalues and eigenvectors? N, or Q?

I suspect that you want to know the eigenvalues/vectors of Matrix N, and that you might well know Q. But the post is quite unclear, and you could clarify it.

acer

@gretchenp08@gmail.com Can you run the attached worksheet and get the same output?

I created this with Maple 14.01, as a Document with 2D Math input.

DEplot01.mw

It would be interesting to know what grade Kitonum gets for completing a significant portion of the assignment.

acer

@elango8 I don't have DirectSearch installed, so I don't have much to say about how you might use it. But in my Answer above I showed that fsolve could reasonably quickly find solutions for a1..a6 and t, given a w (less than the w_critical that drives a6 to zero, which I computed using `Q`).
