mmcdara

MaplePrimes Activity


These are replies submitted by mmcdara

@Preben Alsholm 

Thank you, Preben.
I had just edited my previous reply in the meantime.

@Preben Alsholm 

Sometimes I write my procedures the way you suggest, in particular when these procedures are used in a plot command.
But I have also noticed that the two definitions below both give the same correct result:

MyProc1 := proc(u)
  if not u::realcons then return 'procname'(_passed) end if; 
  fsolve(1/2+(1/2)*erf((1/2)*x*sqrt(2)) = u, x)
end proc:

plot(MyProc1(u), u=0.1..0.9):


MyProc2 := proc(u)   
  fsolve(1/2+(1/2)*erf((1/2)*x*sqrt(2)) = u, x) 
end proc: 

plot('MyProc2'(u), u=0.1..0.9):


So, lazy as I am, I often use the second definition.
This is why I used it in my question.

So, it seems that defining the procedure as in MyProc1 is "safer" than as in MyProc2. Am I right?
Do you systematically recommend proceeding as in MyProc1?
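A quick way to see the difference outside of a plot command (my own check, not something taken from your reply):

MyProc1(u);   # u is not a realcons, so the call simply returns unevaluated: MyProc1(u)
MyProc2(u);   # fsolve is executed immediately with the symbolic name u, which is the fragile case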


By the way: the computational time comes from the fact that 

fsolve(1/2+(1/2)*erf((1/2)*x*sqrt(2)) = q, x)

tends to -infinity as q approaches 0 and to +infinity as q approaches 1. In practice I would integrate over the range 0.01..0.99, for instance.
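A quick check of this point (my addition, it is not in the attached worksheet):

Statistics:-Quantile(Normal(0, 1), 0);   # expected: -infinity
Statistics:-Quantile(Normal(0, 1), 1);   # expected: +infinity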

For the record, here is the method I previously used to evaluate J2 numerically, before I thought of using evalf/Int instead (last block in the attached file).
I suppose that a judicious choice of the method in evalf/Int could provide the result faster than the default method used in J3?
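What I have in mind, for instance (just a sketch, not a benchmark; _cuhre is one of the cubature methods evalf/Int accepts for multiple integrals), is to replace the last line of J3 by

evalf(Int(z(q1, q2), q1=0.01..0.99, q2=0.01..0.99, method=_cuhre, epsilon=1e-6));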

 

restart:

J2 := proc()
  local z:
   z := proc(q1, q2)  
     local x1, x2;
     if not [q1,q2]::list(realcons) then return 'procname'(_passed) end if;
     x1 := fsolve(1/2+(1/2)*erf((1/2)*x*sqrt(2)) = q1, x):
     x2 := fsolve(1/2+(1/2)*erf((1/2)*x*sqrt(2)) = q2, x):
     0.2652582384*exp( 2*x1*x2 - x1^2 - x2^2 )
  end proc:
  evalf(Int(z(q1, q2), q1=0.01..0.99, q2=0.01..0.99))
end proc:

CodeTools:-Usage( J2() );

Warning,  computation interrupted

 

J3 := proc()
  local z:
   z := proc(q1, q2)  
     local x1, x2;
     uses Statistics:
     if not [q1,q2]::list(realcons) then return 'procname'(_passed) end if;
     x1 := Quantile(Normal(0, 1), q1, numeric):
     x2 := Quantile(Normal(0, 1), q2, numeric):
     exp( 2*x1*x2 - x1^2 - x2^2 )
  end proc:
  evalf(Int(z(q1, q2), q1=0.01..0.99, q2=0.01..0.99))
end proc:

CodeTools:-Usage( J3() );

memory used=0.76GiB, alloc change=44.01MiB, cpu time=14.31s, real time=14.55s, gc time=689.80ms

 

.4449972351

(1)


A more efficient way

StringTools:-FormatTime("%H:%M:%S");

h := 0.001:
P := [seq(0.01..0.99, h)]:
Q := CodeTools:-Usage( Statistics:-Quantile~(Normal(0, 1), P, numeric) ):

S:= 0:
for q1 in Q do
  for q2 in Q do
    S := S + exp( 2*q1*q2 - q1^2 - q2^2 )
  end do:
end do:
S * h^2;

StringTools:-FormatTime("%H:%M:%S");
 

"14:39:25"

 

memory used=16.68MiB, alloc change=0 bytes, cpu time=291.00ms, real time=292.00ms, gc time=0ns

 

HFloat(0.44516891241912115)

 

"14:39:29"

(2)
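For what it's worth, the same Riemann sum can be written more compactly with add (a sketch which reuses the Q and h defined above):

S := add(add(exp( 2*q1*q2 - q1^2 - q2^2 ), q2 in Q), q1 in Q):
S * h^2;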

 


 

Download Computational_times.mw

@Ronan 

Writing 

{clr::string:="Blue"}

in the argument list of a procedure means that clr is an optional keyword argument whose type is string and whose default value is "Blue".
If you do not mention clr when you call the procedure spread2, clr takes the default value "Blue" inside the procedure.
If you want the procedure to use "Red" instead of "Blue", you have to set clr="Red" explicitly when you call it.

Optional arguments are used extensively in Maple: you may see them as an annoyance because they force you to write clr="Red" instead of simply "Red", but their advantage is that the order of the optional parameters doesn't matter (so you can write either clr="Red", prnt=false or prnt=false, clr="Red").
I greatly appreciate this feature, but it remains a personal opinion.
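To illustrate (a minimal sketch: the name spread2 and the pointplot body are only placeholders, not your actual procedure):

spread2 := proc(pts, {clr::string := "Blue", prnt::truefalse := false})
  if prnt then print(cat("color used: ", clr)) end if;
  plots:-pointplot(pts, color = clr)
end proc:

spread2([[0, 0], [1, 1]]);               # clr takes its default value "Blue"
spread2([[0, 0], [1, 1]], clr = "Red");  # keyword order is free: clr="Red", prnt=true or prnt=true, clr="Red"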

@acer 

Thank you acer.
Your

eval(y, g=''f'')

suits me perfectly.

By the way, I sent @Preben Alsholm a file where the real problem is described, in case you are interested.

@Preben Alsholm 

Thanks for your answer.

"why not simply start with f=F?" simply because this example was notional (maybe oversimplified).
Here is a more detailed example which is closer to the reality (note that writing it I realized there was a simple way [end of the file] to answer my own question!)

 

restart

with(LinearAlgebra):
with(Statistics):

phi := unapply(CDF(Normal(0, 1), x), x);

proc (x) options operator, arrow; 1/2+(1/2)*erf((1/2)*x*2^(1/2)) end proc

(1)

Sigma := Matrix(2$2, [1, rho, rho, 1]):
X := Vector(2, symbol=x):

phi__Sigma := exp(-1/2*(X^+ . Sigma^(-1) . X)) / (2*Pi*sqrt(Determinant(Sigma)));

(1/2)*exp(-(1/2)*(-x[1]/(rho^2-1)+x[2]*rho/(rho^2-1))*x[1]-(1/2)*(x[1]*rho/(rho^2-1)-x[2]/(rho^2-1))*x[2])/(Pi*(-rho^2+1)^(1/2))

(2)

# Let psi be the CDF of some continuous random variable V.
# We define C__psi by

cdf := x -> psi(x);
C__psi := eval(phi__Sigma, [seq(x[i]=((phi@@(-1))@cdf)(q[i]), i=1..2)]);
 

proc (x) options operator, arrow; psi(x) end proc

 

(1/2)*exp(-(1/2)*(-(phi@@(-1))(psi(q[1]))/(rho^2-1)+(phi@@(-1))(psi(q[2]))*rho/(rho^2-1))*(phi@@(-1))(psi(q[1]))-(1/2)*((phi@@(-1))(psi(q[1]))*rho/(rho^2-1)-(phi@@(-1))(psi(q[2]))/(rho^2-1))*(phi@@(-1))(psi(q[2])))/(Pi*(-rho^2+1)^(1/2))

(3)

# In the specific case where V is a standard Gaussian random variable, we have cdf = phi

C__psi := eval(phi__Sigma, [seq(x[i]=((phi@@(-1))@phi)(q[i]), i=1..2)]);

 

(1/2)*exp(-(1/2)*(-q[1]/(rho^2-1)+q[2]*rho/(rho^2-1))*q[1]-(1/2)*(q[1]*rho/(rho^2-1)-q[2]/(rho^2-1))*q[2])/(Pi*(-rho^2+1)^(1/2))

(4)

# But

cdf := x -> phi(x):
C__psi := eval(phi__Sigma, [seq(x[i]=((phi@@(-1))@cdf)(q[i]), i=1..2)]);

(1/2)*exp(-(1/2)*(-(phi@@(-1))(1/2+(1/2)*erf((1/2)*q[1]*2^(1/2)))/(rho^2-1)+(phi@@(-1))(1/2+(1/2)*erf((1/2)*q[2]*2^(1/2)))*rho/(rho^2-1))*(phi@@(-1))(1/2+(1/2)*erf((1/2)*q[1]*2^(1/2)))-(1/2)*((phi@@(-1))(1/2+(1/2)*erf((1/2)*q[1]*2^(1/2)))*rho/(rho^2-1)-(phi@@(-1))(1/2+(1/2)*erf((1/2)*q[2]*2^(1/2)))/(rho^2-1))*(phi@@(-1))(1/2+(1/2)*erf((1/2)*q[2]*2^(1/2))))/(Pi*(-rho^2+1)^(1/2))

(5)

 

Writing this more realistic example showed me how to proceed in a simpler way

 

C:= psi ->  eval(phi__Sigma, [seq(x[i]=((phi@@(-1))@psi)(q[i]), i=1..2)]):

C(psi);
C(phi);

(1/2)*exp(-(1/2)*(-(phi@@(-1))(psi(q[1]))/(rho^2-1)+(phi@@(-1))(psi(q[2]))*rho/(rho^2-1))*(phi@@(-1))(psi(q[1]))-(1/2)*((phi@@(-1))(psi(q[1]))*rho/(rho^2-1)-(phi@@(-1))(psi(q[2]))/(rho^2-1))*(phi@@(-1))(psi(q[2])))/(Pi*(-rho^2+1)^(1/2))

 

(1/2)*exp(-(1/2)*(-q[1]/(rho^2-1)+q[2]*rho/(rho^2-1))*q[1]-(1/2)*(q[1]*rho/(rho^2-1)-q[2]/(rho^2-1))*q[2])/(Pi*(-rho^2+1)^(1/2))

(6)

 


 

Download RealProblem.mw

 


 

restart

local Gamma, gamma:

Gamma := phi(sigma__d):

F := f(Gamma)*sigma__v/sigma__d:

dl1dsd := convert(diff(F, sigma__d), diff):
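# What the next lines do (my reading of them): freeze diff(phi(sigma__d), sigma__d) so that the
# substitution phi = (sigma__d -> Gamma) cannot destroy it, replace the remaining phi(sigma__d)
# by the name Gamma (after unassigning Gamma), and finally thaw the frozen derivative back.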

dl1dsd := subs(diff(phi(sigma__d), sigma__d)=freeze(diff(phi(sigma__d), sigma__d)), dl1dsd):

Gamma := 'Gamma':

dl1dsd := eval(dl1dsd, phi = (sigma__d -> Gamma)):
dl1dsd := thaw(dl1dsd):

dl1dsd := eval(dl1dsd, phi = (sigma__d -> gamma*sigma__v*sigma__d));

(diff(f(Gamma), Gamma))*gamma*sigma__v^2/sigma__d-f(Gamma)*sigma__v/sigma__d^2

(1)

Gamma := phi(sigma__v):

F := f(Gamma)*sigma__v/sigma__d:

dl1dsv := convert(diff(F, sigma__v), diff):
dl1dsv := subs(diff(phi(sigma__v), sigma__v)=freeze(diff(phi(sigma__v), sigma__v)), dl1dsv):

Gamma := 'Gamma':

dl1dsv := eval(dl1dsv, phi = (sigma__v -> Gamma)):
dl1dsv := thaw(dl1dsv):
 
dl1dsv := eval(dl1dsv, phi = (sigma__v -> gamma*sigma__v*sigma__d));

(diff(f(Gamma), Gamma))*gamma*sigma__v+f(Gamma)/sigma__d

(2)

local gamma:
Gamma := phi(gamma):

F := f(Gamma)*sigma__v/sigma__d:

dl1dg := convert(diff(F, gamma), diff):
dl1dg := subs(diff(phi(gamma), gamma)=freeze(diff(phi(gamma), gamma)), dl1dg):

Gamma := 'Gamma':

dl1dg := eval(dl1dg, phi = (gamma -> Gamma)):
dl1dg := thaw(dl1dg):

dl1dg := eval(dl1dg, phi = (gamma -> gamma*sigma__v*sigma__d));

(diff(f(Gamma), Gamma))*sigma__v^2

(3)

 

 

Download d11d_only.mw


If you want to get more compact displays for der1 and der2, here is a hint:
 

Lambda__1 := RootOf(8*_Z^4 + 12*Gamma*_Z^3 + (5*Gamma^2 - 4)*_Z^2 - 4*Gamma*_Z - Gamma^2):

# A more compact representation of der1:

der1 := diff(Lambda__1,Gamma):
Diff('Lambda__1',Gamma) = collect~(normal(eval(der1, Lambda__1 = Lambda[1])), Lambda[1]);

Diff(Lambda__1, Gamma) = -(5*Gamma*Lambda[1]^2+6*Lambda[1]^3-Gamma-2*Lambda[1])/(16*Lambda[1]^3+18*Gamma*Lambda[1]^2+(5*Gamma^2-4)*Lambda[1]-2*Gamma)

(1)

# A more compact representation of der2:

der2 := diff(der1,Gamma):
Diff('Lambda__1', Gamma$2) = collect~(normal(eval(der2, Lambda__1 = Lambda[1])), Lambda[1]);

Diff(Lambda__1, Gamma, Gamma) = (448*Lambda[1]^8+1632*Gamma*Lambda[1]^7+(2120*Gamma^2-208)*Lambda[1]^6+(1200*Gamma^3-640*Gamma)*Lambda[1]^5+(250*Gamma^4-632*Gamma^2+16)*Lambda[1]^4+(-240*Gamma^3+32*Gamma)*Lambda[1]^3-25*Gamma^4*Lambda[1]^2-16*Gamma^3*Lambda[1]-5*Gamma^4)/(16*Lambda[1]^3+18*Gamma*Lambda[1]^2+(5*Gamma^2-4)*Lambda[1]-2*Gamma)^3

(2)

 

 

Download smart_display.mw


Last point: lambda__1 has 4 roots (4 real, or 2 real and a complex-conjugate pair, or two complex-conjugate pairs).
When you do plot(lambda__1, Gamma=1..10) you plot only one of them:

eval(Lambda__1, Gamma=0):
evalf([allvalues(%)]);

             [0., 0., 0.7071067810, -0.7071067810]


eval(Lambda__1, Gamma=3):
evalf([allvalues(%)]);

[0.4958019689, -2.302318663 + 0.7071675271 I, -0.3911646429, 
   -2.302318663 - 0.7071675271 I]


eval(Lambda__1, Gamma=10):
evalf([allvalues(%)])

  [0.4628298392, -7.515971909 + 2.487925010 I, -0.4308860212, 
    -7.515971909 - 2.487925010 I]

 

@Nicole Sharp 

If you want to use ScientificErrorAnalysis to get the mean value and the error on rho[cubewano], you must do things the way you did.
See this notional example:
 

restart:

use ScientificErrorAnalysis, Units in
  X := Quantity(10, 2)*Unit(m);
  Y := Quantity( 3, 4)*Unit(kg);
  Z := Quantity(10, 1)*Unit(s);

  F := Y*X/Z^2:

  E    := combine(F, 'errors'):
  mean := GetValue(E);
  std  := GetError(E);
end use;

ScientificErrorAnalysis:-Quantity(10, 2)*Units:-Unit('m')

 

ScientificErrorAnalysis:-Quantity(3, 4)*Units:-Unit('kg')

 

ScientificErrorAnalysis:-Quantity(10, 1)*Units:-Unit('s')

 

ScientificErrorAnalysis:-Quantity(3, 4)*Units:-Unit('kg')*ScientificErrorAnalysis:-Quantity(10, 2)*Units:-Unit('m')/(ScientificErrorAnalysis:-Quantity(10, 1)^2*Units:-Unit('s')^2)

 

ScientificErrorAnalysis:-Quantity((3/10)/Units:-Unit('s')^2, (1/50)*418^(1/2)/Units:-Unit('s')^2)*Units:-Unit('kg')*Units:-Unit('m')

 

Error, (in ScientificErrorAnalysis:-GetValue) expecting a quantity-with-error structure, but got ScientificErrorAnalysis:-Quantity((3/10)/Units:-Unit(s)^2, (1/50)*418^(1/2)/Units:-Unit(s)^2)*Units:-Unit(kg)*Units:-Unit(m)

 

use ScientificErrorAnalysis, Units in
  X := Quantity(10*Unit(m) , 2*Unit(m) );
  Y := Quantity( 3*Unit(kg), 4*Unit(kg));
  Z := Quantity(10*Unit(s) , 1*Unit(s) );

  F := Y*X/Z^2:

  E    := combine(F, 'errors'):
  mean := GetValue(E);
  std  := GetError(E);
end use;

ScientificErrorAnalysis:-Quantity(10*Units:-Unit('m'), 2*Units:-Unit('m'))

 

ScientificErrorAnalysis:-Quantity(3*Units:-Unit('kg'), 4*Units:-Unit('kg'))

 

ScientificErrorAnalysis:-Quantity(10*Units:-Unit('s'), Units:-Unit('s'))

 

ScientificErrorAnalysis:-Quantity(3*Units:-Unit('kg'), 4*Units:-Unit('kg'))*ScientificErrorAnalysis:-Quantity(10*Units:-Unit('m'), 2*Units:-Unit('m'))/ScientificErrorAnalysis:-Quantity(10*Units:-Unit('s'), Units:-Unit('s'))^2

 

ScientificErrorAnalysis:-Quantity((3/10)*Units:-Unit('kg')*Units:-Unit('m')/Units:-Unit('s')^2, (1/50)*418^(1/2)*Units:-Unit('kg')*Units:-Unit('m')/Units:-Unit('s')^2)

 

(3/10)*Units:-Unit('kg')*Units:-Unit('m')/Units:-Unit('s')^2

 

(1/50)*418^(1/2)*Units:-Unit('kg')*Units:-Unit('m')/Units:-Unit('s')^2

(1)

 


 

Download Quantity_with_Unit.mw

@Nicole Sharp 

You also noticed that "with(Statistics): Mean(%)" doesn't seem to work with Units either.

(version used: Maple 2015, but I guess this also works in newer versions)

restart

with(Statistics):

Y := RandomVariable(Normal(2, 3))*Units:-Unit('m')

_R*Units:-Unit('m')

(1)

Mean(Y);

2*Units:-Unit('m')

(2)

Variance(Y);

9*Units:-Unit('m')^2

(3)

Kurtosis(Y);  # should be dimensionless

3

(4)

restart

with(Statistics):

Y := RandomVariable(Normal(mu, sigma))*Units:-Unit('m')

_R*Units:-Unit('m')

(5)

Mean(Y);

mu*Units:-Unit('m')

(6)

Variance(Y);  # units badly placed

Units:-Unit('m')^2*sigma^2

(7)

Kurtosis(Y);

3

(8)

 

Download Statistics_Units.mw

@vv 

I agree. 
I titled my answer "The limit is not a number" because I didn't want to waste time developing this point and explaining why he couldn't get the expected result by using "limit".
Your last point is very pertinent.

Thanks for your comment.


HERE

What more do you want, exactly?

@acer 

I understand your "When none of your piecewise conditions holds it will evaluate to its default value, which by default is zero", but the plot you get by adding undefined to the piecewise definition is wrong.
Since the function g is exactly equal to 0 on the "big red block", it seems natural, default value or not, that implicitplot (try contourplot instead with the option contours=[0]) draws lines to "fill" this rectangle.
Simply take a look at the file I attached to my answer.
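For the record, the kind of call I have in mind is (a sketch; g and the plotting ranges are placeholders for those of the thread):

plots:-contourplot(g(x, y), x = 0 .. 1, y = 0 .. 1, contours = [0]);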

It is therefore purely fortuitous that the undefined option produces the layout the OP desires: the graph is wrong, and it is exactly this wrong graph that he wanted for aesthetic reasons.
Here is an example to (try to) convince you that this option gives wrong results: gnull_2.mw

@Beavis 
I have just edited my answer.

@Kitonum 

Excellent, thank you

@acer 

... and if I had paid more attention I would have observed that 

Mspec := Specialize(B, [p=1/2])*Specialize(X, [mu=-3, sigma=1]) + (1-Specialize(B, [p=1/2]))*Specialize(Y, [nu=3, tau=1]);
                   Mspec := _R2*_R3+(1-_R4)*_R5

clearly indicates that one Specialize(B, [p=1/2]) is _R2 while the other is _R4.
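For completeness, the fix this observation suggests (a sketch, with B, X and Y as defined earlier in the thread): assign the specialized Bernoulli variable once and reuse it, so that both occurrences denote the same random variable.

Bs    := Specialize(B, [p=1/2]):
Mspec := Bs*Specialize(X, [mu=-3, sigma=1]) + (1 - Bs)*Specialize(Y, [nu=3, tau=1]);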

In any case, your "demonstration" is perfectly clear.

If it's not too much to ask, I have two questions:

  1. I have been using the Statistics package for years and I have always wondered why RandomVariable creates a protected name of the form _Rxxx.
    It seems to me that this is quite a unique case in Maple, isn't it?
    These _Rxxx names look more like aliases of the names to which the random variables are assigned (am I right?), and they make it difficult to understand expressions which involve random variables (relation (7) in the attached file).
     
  2. Why is the type RandomVariable not inherited?

Thank you for your attention.

restart:

with(Statistics):

X := RandomVariable(Uniform(0, 1))

_R

(1)

equivalence := anames(RandomVariable) = eval(anames(RandomVariable))

X = _R

(2)

Mean(X), Mean(_R)

1/2, 1/2

(3)

PDF(X, x), PDF(_R, x)

piecewise(x < 0, 0, x < 1, 1, 0), piecewise(x < 0, 0, x < 1, 1, 0)

(4)

attributes(X);
attributes(_R);

protected, RandomVariable, _ProbabilityDistribution

 

protected, RandomVariable, _ProbabilityDistribution

(5)

Y := RandomVariable(Uniform(0, 1));

_R0

(6)

Z := X+Y

_R+_R0

(7)

A := _R + _R0

_R+_R0

(8)

Mean~([Z, A])

[1, 1]

(9)

type(Z, RandomVariable);
type(A, RandomVariable);

false

 

false

(10)
 

 

Download RV.mw

@sursumCorda 

Thanks for this detailed information
