Rouben Rostamian

MaplePrimes Activity


These are replies submitted by Rouben Rostamian

@Preben Alsholm I agree that the help page is not very helpful.  Not only that, it is essentially impossible to reach unless you know exactly what you are looking for.  I have made a little note to myself about how to reach help regarding events, and I refer to it when I need help.  For everyone's benefit, here it is:

In Maple's search box enter
    Events for dsolve[numeric]    (or equivalently, at Maple's prompt enter ?dsolve,events)

For more examples, in Maple's search box enter
    How Do I Solve an Ordinary Differential Equation?

 

@Mariusz Iwaniuk Thanks.  This is a good idea.  In effect, it says that when the ball digs itself underground, we just keep it there.  Thus, beyond the halt event, y(t) takes on a small constant negative value.

I had thought of this solution but did not try it because I was under the impression that once a halt event occurs, the computation stops and the solution is undefined beyond that point.  Looking at your solution, however, it appears that that's not the case.  Plotting the solution shows that it is well-defined beyond the halt point.  This may be in the documentation but, as Preben has noted, the documentation is not the easiest thing to read.

@acer Thanks.  This is a reasonably good solution.  In effect, it turns off the amplitude reduction mechanism once the amplitude falls below a certain threshold.  Beyond that the ball continues bouncing up and down forever with that small constant amplitude.
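For the record, here is a minimal sketch of the kind of setup being discussed, written from my recollection of the syntax described on the ?dsolve,events page.  The ODE, the restitution factor 0.9, and the small velocity threshold are made-up illustrative values, not the ones from the original worksheet, so treat this only as a template.

    sol := dsolve({diff(y(t), t, t) = -9.81, y(0) = 1, D(y)(0) = 0},
        numeric,
        events = [
            # at each impact, reverse and damp the velocity
            [y(t) = 0, diff(y(t), t) = -0.9*diff(y(t), t)],
            # halt when an impact occurs at negligible speed
            [[y(t) = 0, diff(y(t), t)^2 < 1e-4], halt]]);
    plots:-odeplot(sol, [t, y(t)], 0 .. 10);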

What you are saying here is either wrong, or the information provided is incomplete.

Here is my understanding of your question.

You have the equation

    a(x,y,t,u(x,y,t),...) = the_right_hand_side;

You expect the derivative of that equation with respect to x to be:

    a[1](x,y,t,u(x,y,t),...) = the x derivative of the_right_hand_side.

That is certainly not true, unless you have some hidden assumptions which you have not revealed.

The derivative of the equation with respect to x will involve not only a[1], but also a[4], etc.
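Here is a tiny Maple illustration of that point; the names a and r are placeholders, not anything taken from your worksheet:

    eq := a(x, y, t, u(x, y, t)) = r(x, y, t);
    diff(eq, x);
        # D[1](a)(x, y, t, u(x, y, t))
        #     + D[4](a)(x, y, t, u(x, y, t))*diff(u(x, y, t), x) = diff(r(x, y, t), x)

The D[4] term appears because of the chain rule applied to the u(x,y,t) argument, and it does not go away unless you assume that a is independent of its fourth argument.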

 

@weidade37211 It is difficult to tell what is wrong with your code without seeing the code.

@weidade37211 There is no requirement of a "closed form" f(x,y).  All we need is something that receives x and y and produces a value.  Its internals may be as complicated as you wish, and may contain any number of calls to fsolve().
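For instance, here is a hypothetical f whose value comes out of a call to fsolve; the equation inside it is made up purely for illustration:

    f := proc(x, y)
        local z;
        # the value of f(x, y) is the unique real root z of exp(z) + z = x*y
        fsolve(exp(z) + z = x*y, z);
    end proc:
    f(1.0, 2.0);

Any procedure of this general shape, however complicated its internals, can play the role of f(x,y).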

 

 

@Jjjones98 I cannot tell what you are calculating here.  There are several functions involved in this discussion -- z(x), f(x), y(x), G(x,u).  Which of these are you calculating?  Or perhaps you are calculating something entirely different?

 

@Jjjones98 In my calculations the function f(x) was selected to satisfy the inhomogeneous boundary conditions only.  There was no requirement at all that f(x) satisfy the differential equation, although it turned out that it did, merely by accident, not by design.  In fact, in a more complicated case, for instance when the differential equation has non-constant coefficients, finding an f(x) that satisfies both the boundary conditions and the differential equation can be quite impractical.

Your formulation, where you require f(x) to be "the solution of the homogeneous equation with inhomogeneous BCs", is unnecessarily restrictive, although in the case of this essentially trivial differential equation it does not pose a problem.

 

@Jjjones98 I will give you a sketch of the approach.  You should be able to fill in the details.  Ask again if you run into problems.

Your Green's function matches the homogeneous boundary conditions
y(0)=0, y'(0)=0, y(1)=0, y'(1)=0,
while the problem you wish to solve calls for the inhomogeneous boundary conditions
y(0)=1, y'(0)=1, y(1)=2, y'(1)=3.

Your first step is to reconcile these.

  1. First, find a function, any function, that satisfies the inhomogeneous boundary conditions.  We have four conditions to satisfy, therefore the cubic function
    f := x -> a*x^3 + b*x^2 + c*x + d
    which has four undetermined coefficients is a good candidate.  A straightforward calculation shows that
    f := x -> 2*x^3 - 2*x^2 + x + 1.

  2. Let's look at the boundary value problem that you wish to solve:
    y''''(x) = sqrt(x),   y(0)=1, y'(0)=1, y(1)=2, y'(1)=3.
    Define z(x) = y(x) - f(x).  Since y(x) and f(x) both satisfy the inhomogeneous boundary conditions, their difference z(x) satisfies the homogeneous boundary conditions.  That's good news.

  3. From z(x) = y(x) - f(x) we get y(x) = z(x) + f(x).  Plugging this into the differential equation we obtain z''''(x) + f''''(x) = sqrt(x).  But f(x) is a cubic, therefore its fourth derivative is zero.  We conclude that:
    z''''(x) = sqrt(x),    z(0)=0, z'(0)=0, z(1)=0, z'(1)=0.

  4. We solve for z(x) by applying your Green's function.  You may do that in Maple, as in
    z(x) = int(G(x,u)*sqrt(u), u=0..1) assuming x > 0, x < 1;
    or by hand, if you are patient enough.  In either case you will obtain:
    z(x) = 16*x^(9/2)*(1/945)-8*x^3*(1/189)+8*x^2*(1/315).

  5. Now we put z(x) and f(x) together to arrive at
    y(x) = z(x) + f(x) = -622*x^2*(1/315)+16*x^(9/2)*(1/945)+370*x^3*(1/189)+x+1
    which agrees with what you have posted.
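In case it helps, here is how steps 1 through 5 may be carried out in Maple.  Since I am not reproducing your G(x,u) here, in step 4 I let dsolve solve the homogeneous boundary value problem for z(x) directly; that also serves as an independent check on the Green's function computation.

    # Step 1: a cubic satisfying the inhomogeneous boundary conditions
    f := unapply(a*x^3 + b*x^2 + c*x + d, x):
    s := solve({f(0) = 1, D(f)(0) = 1, f(1) = 2, D(f)(1) = 3}, {a, b, c, d});
    f := unapply(eval(a*x^3 + b*x^2 + c*x + d, s), x);
        # f := x -> 2*x^3 - 2*x^2 + x + 1

    # Steps 2-4: z''''(x) = sqrt(x) with homogeneous boundary conditions
    zsol := dsolve({diff(z(x), x$4) = sqrt(x),
        z(0) = 0, D(z)(0) = 0, z(1) = 0, D(z)(1) = 0}, z(x));

    # Step 5: assemble y(x) = z(x) + f(x)
    y_expr := rhs(zsol) + f(x);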

 

 

@Jjjones98 The diagonal decomposition of a symmetric matrix A is expressed as A = P Λ P^(-1), where Λ is a diagonal matrix with the eigenvalues of A on the diagonal, and P is a matrix whose columns are a linearly independent set of eigenvectors corresponding to those eigenvalues.

The eigenvectors of a matrix are certainly not uniquely defined: any nonzero multiple of an eigenvector is also an eigenvector and, in the case of a repeated eigenvalue, any linear combination of the corresponding eigenvectors is also an eigenvector. It follows that the matrix P is not unique; it may be modified in very many ways, but the identity A = P Λ P^(-1) holds regardless of those choices.

In a linear algebra course one shows that among the various choices of the matrix P there exists an orthogonal matrix V, that is, a matrix so that V^T V = I. It follows that V^(-1) = V^T, and therefore A = V Λ V^T.  I hope that this answers your question.

Nota bene: Throughout this discourse I am assuming that A is real and symmetric.

 

@Jjjones98 In my calculations I showed that A = U Λ U^(-1). Note that this involves U inverse, not U transpose. This addresses your original question where you had asked for A=UDU^{-1}.

If U were an orthogonal matrix, then you could have replaced the inverse by the transpose, but the U we have calculated is not orthogonal, therefore you can't.

If you really want to express the diagonal decomposition in terms of the transpose, you need to rescale the columns of U so that each column is a vector of unit length in the Euclidean norm.  In this particular case it just happens that all columns of U are of length sqrt(2), therefore if you define V = U / sqrt(2), then you will have A = V Λ V^T, where V^T is the transpose of V.
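Here is the idea on a small made-up symmetric matrix (I am not reproducing the matrix from your worksheet); normalizing the columns of the eigenvector matrix is exactly what turns the inverse into a transpose:

    with(LinearAlgebra):
    A := Matrix([[2, 1], [1, 2]]):
    lambda, U := Eigenvectors(A):      # columns of U are eigenvectors of A
    Lambda := DiagonalMatrix(lambda):
    simplify(A - U . Lambda . MatrixInverse(U));   # zero matrix:  A = U Λ U^(-1)
    V := Matrix(2, 2, (i, j) -> U[i, j]/Norm(Column(U, j), 2)):
    simplify(A - V . Lambda . Transpose(V));       # zero matrix:  A = V Λ V^T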

@Kitonum Very clever!  It took me some thinking to see how that works.

When we think of a "perturbation", we are thinking of how an expression deviates from a reference/base value.

Your question asks, among other things, for perturbing sin(x/eps) for small eps.  What does that mean?  What is the base value from which the perturbation of sin(x/eps) is measured?

@student_md

Let A, B, C be r × r matrices, and let's write A__ij for the (i, j) entry of A, etc.  Then:

If  C = A *~ B, then C__ij = A__ij*B__ij.

If  C = A . B, then C__ij = sum(A__ik*B__kj, k = 1 .. r).
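A quick check with 2 × 2 matrices shows the difference:

    A := Matrix([[1, 2], [3, 4]]):
    B := Matrix([[5, 6], [7, 8]]):
    A *~ B;    # elementwise:     Matrix([[5, 12], [21, 32]])
    A . B;     # matrix product:  Matrix([[19, 22], [43, 50]])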
