Robert Israel

University of British Columbia
Associate Professor Emeritus
North York, Ontario, Canada

MaplePrimes Activity


These are replies submitted by Robert Israel

The inverse of the "exact" version of your Matrix has some huge entries, on the order of 10^35, so it's not surprising that you get numerical difficulties.  This sort of thing can happen with ill-conditioned matrices.  You didn't tell us what problem this came from (in particular, you only gave us one 6 x 6 matrix, so we have no idea what is happening with N >= 7).  I wonder if what you have is an approximation of a "real" matrix that is singular.
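As a quick diagnostic (just a sketch, assuming your Matrix is stored in a variable M, a name I'm using for illustration):

> with(LinearAlgebra):
> ConditionNumber(evalf(M));

A condition number near 10^k means you can expect to lose roughly k significant digits, so a value on the order of 10^35 would explain the difficulties at any reasonable working precision.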

 

Although reports of the "death of the PC" may be premature, it would certainly be a good idea for Maplesoft to look at porting Maple to mobile devices.   They won't reveal their plans ahead of time, of course, but I would hope to see something along these lines in the next few years.

Of course, there are cases where expand would not be a good idea, e.g.

> f:= (x^2-(z-3*y)^2)^100*((x+z)^2-9*y^2)^100;
> g:= ((x+3*y)^2-z^2)^100*(x-3*y+z)^200;

In that case, you could try testeq.

> testeq(f=g);
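testeq is a probabilistic zero-equivalence test: it evaluates both sides at randomly chosen points rather than expanding, so it avoids the enormous intermediate expressions and should return true here.  An alternative (an untested sketch) is to compare the factored forms, which also avoids expansion:

> evalb(factor(f) = factor(g));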


We might be able to suggest how to fix those "unrealistic results" if you gave us more details, e.g. by uploading a worksheet.  One of the problems is that at the boundary between the domains, it is the heat flux (rather than the spatial derivative of the temperature) that must be continuous, as in the sketch below.  You might be able to fix this by changing the spatial variable.
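For example (with names chosen just for illustration), if the two materials have conductivities k[1] and k[2] and meet at x = c, the matching conditions are

T[1](c,t) = T[2](c,t)   and   k[1]*D[1](T[1])(c,t) = k[2]*D[1](T[2])(c,t)

so whenever k[1] <> k[2] the spatial derivatives themselves must jump at the interface.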

@hirnyk : I was using the two-variable form arctan(y,x), not arctan(y/x).

Here's a fuller explanation.

Consider the curves y^2 - 4*cos(x) = c.  Since (0,0) is a local minimum of y^2 - 4*cos(x), for c slightly above the minimum value -4 these are closed curves around (0,0).  Here's a picture.

> plots[contourplot](y^2 - 4*cos(x), x=-1..1, y=-1..1);

A somewhat different parametrization is better: consider the system of DE's

D(x)(t) = y(t), D(y)(t) = - 2*sin(x(t))

This is in fact the system that describes the motion of a pendulum, and y^2 - 4*cos(x) is constant on the solutions (this can be thought of as conservation of energy).  These solutions are periodic.

Now how would a solution u(x,y) of our pde change as x=x(t), y=y(t) according to the system above?

du/dt = du/dx x'(t) + du/dy y'(t) = y du/dx - 2 sin(x) du/dy = u^2

But since the only periodic solution of diff(u(t),t) = u(t)^2 is u(t) = 0, the only solution of our pde in a neighbourhood of (0,0) is 0.  
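As a quick consistency check (a sketch of the conservation computation), differentiating y^2 - 4*cos(x) along a solution and substituting the DE's gives 0:

> eval(diff(y(t)^2 - 4*cos(x(t)), t),
      {diff(x(t),t) = y(t), diff(y(t),t) = -2*sin(x(t))});

                    0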

 


You can use pdetest to check solutions of a pde.

> Q:= pdetest(sol, pde);

The result is rather complicated, but note that it involves csgn(y) and csgn(sin(x/2)).

> Q assuming y > 0;

                    0

So it is correct for y > 0.  For y < 0 the solution seems not to be correct.

> plot([Re,Im](eval(Q, {y=-1, _F1=(t->t)})), x=0..2*Pi, -1..1);

 

Also, the solution is not defined at x=0, and is discontinuous there.

> plot([Re,Im](eval(rhs(sol),{_F1=(t->t),y=1})),x=-1..1);

In any case, as Hirnyk noted, these are complex solutions.  You can't just take the real part: for a linear pde with real coefficients the real part of a solution is again a solution, but this pde is nonlinear (e.g. Re(u^2) = Re(u)^2 - Im(u)^2, which is not the same as Re(u)^2).


Here's my solution.

> RandomPMatrix:= proc(A::set, m::posint)
     local R;
     # for each element a of A, pick an independent random permutation of 1..m
     R:= table([seq(a=combinat:-randperm(m), a=A)]);
     # entry (i,j) is the set of those a whose permutation maps i to j
     Matrix(m,m, (i,j) -> select(a -> R[a][i]=j, A))
   end proc;
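For example (the result varies from run to run, since randperm is random):

> RandomPMatrix({a,b,c}, 3);

Each element of A then occurs exactly once in every row and every column, because each R[a] is a permutation; the remaining entries are empty sets.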

 

Yes, some of the entries might be the null set.  That is allowed by the description of the problem you gave, and it certainly will have to occur at least once in each row and column if A has fewer than m elements.  Are you saying you want to avoid the null set?


Yes, just as in my previous example, because for a nonlinear problem LSSolve is likely to return a local rather than a global minimum.  Note that by scaling, we may assume, say, F = 1.
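To spell out the linearization: with F = 1 the model is amplitude = 1/sqrt(m^2*(omega^2-omega_0^2)^2 + b^2*omega^2), so the reciprocal of the squared amplitude is the polynomial c4*omega^4 + c2*omega^2 + c0 where c4 = m^2, c2 = b^2 - 2*m^2*omega_0^2 and c0 = m^2*omega_0^4.  Fitting that to 1/d[2]^2 is a linear least-squares problem, and the formulas below invert these relations to recover m, b and omega_0.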

> residuals:= [seq(1/d[2]^2 - (c4*d[1]^4 + c2*d[1]^2 + c0), d = data)];
   V:= Optimization:-LSSolve(residuals,{c0>=0,c4>=0});
   mbvalues:= eval({b=sqrt(c2 + 2*sqrt(c0*c4)), m=sqrt(c4),
      omega_0 = (c0/c4)^(1/4)},V[2]);

mbvalues := {b = 0.162178508692947e-3, m = 4.64451288663177*10^(-7), omega_0 = 1222.62315668831}

Another issue then presents itself: in the nonlinear case, it helps to rescale the parameters so their values are not too small or too large.  I'll use B = 1000*b, M = 10^6*m, Omega_0 = 1000*omega_0.

> MBvalues:= eval({B=1000*b,M=10^6*m,Omega_0=10^3*omega_0},mbvalues);
   resonance:= omega -> 1/sqrt(M^2/10^12*(omega^2-(Omega_0/10^3)^2)^2 + (B/1000)^2*omega^2);
   residuals2:= [seq(d[2]-resonance(d[1]),d=data)];
   Optimization:-LSSolve(residuals2,{M>=0,Omega_0>=0,B>=0},initialpoint=MBvalues);
   MBvalues2:= %[2];

MBvalues2 := [B = .144701575132953, M = .476326104255629, Omega_0 = 1.23393742722458*10^6]

Again, the new values make a better fit near the peak than the old values.  Here are the data points together with the resonance curves from MBvalues (in red) and MBvalues2 (in blue).

> with(plots):
> display(plot([eval(resonance(omega), MBvalues), eval(resonance(omega), MBvalues2)],
      omega=500..1925, colour=[red,blue]), pointplot(data));


@MapleFans001 : resnum is a procedure which produces the solution at any given value of theta.  For example, resnum(1.2345) will give you the values of theta, f(theta) and D(f)(theta) at theta = 1.2345.  It does not need "interpolation".  If you want an Array with the values at particular points, you can use dsolve with the output=Array option.  For example:

> dsolve({eval(motion, {k=1,g=10}), f(0)=0, D(f)(0)=0}, numeric,
      output=Array([2*Pi*j/100 $ j = 0 .. 100]));

> A:= %[2,1];
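If I remember the output=Array structure correctly, the result is a 2 x 1 Matrix: entry [1,1] is a Vector of the variable names and entry [2,1] is the Matrix of solution values, one row for each of the 101 values of theta, with columns theta, f(theta) and D(f)(theta).  So A above is that Matrix of values.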

