I found in the Application Center a rather old work (2010) titled Generation of correlated random numbers (see here: https://fr.maplesoft.com/applications/view.aspx?SID=99806).
This work contains a few errors that I thought were worth correcting.

Basically the work I refer to concerns the sampling of linearly correlated random variables (correlation in the Pearson sense). Classical textbooks generally discuss this topic by considering only gaussian random variables and present two methods to generate linearly correlated samples: one based on the Cholesky decomposition of the correlation matrix, the other based on its SVD decomposition.

Now the question is: can we apply either of these two procedures to generate linearly correlated samples of arbitrary random variables?
The answer is NO, and the reason is strongly related to a fundamental property of gaussian random variables (GRVs): any linear combination of GRVs is still a GRV.
But things are not that simple, because even the multi-gaussian case handled with Cholesky's decomposition or SVD can lead to undesired results if no precautions are taken.

The aim of this post is to show the wrong results we obtain by thoughtlessly applying these decompositions and, of course, to show how we must proceed to avoid them.

Let's start with a very simple point of common sense: suppose U1 and U2 are two independent identically distributed (iid) random variables and that we have some "function" F which, when applied to the couple (U1, U2), generates a couple (A1, A2) of linearly correlated random variables. Thus F(U1, U2) = (A1, A2).
Let's suppose this same relation holds if we replace U1 and U2 by "a sample of U1" and "a sample of U2", and thus (A1, A2) by "a sample of the bivariate (A1, A2) whose components are linearly correlated". Let's call S the cloud one could obtain by using, for instance, Maple's ScatterPlot(A1, A2) procedure.

Let's suppose now that instead of computing F(U1, U2) I decide to compute F(U2, U1). Let's call (A1', A2') the corresponding joint sample and write S' := ScatterPlot(A2', A1').
It seems natural (and it is!) to think that S and S' will be the same up to sampling artifacts. 

Any correct method to generate samples from (linearly or otherwise) correlated random variables must exhibit this similarity of patterns between S and S'. But this is not the case in the work cited above.

The safest way to correlate random variables, even in the Pearson sense, is to use the concept of COPULAS (there is a work on copulas in the Application Center, but for a quick overview see https://en.wikipedia.org/wiki/Copula_(probability_theory)).
For this special case of linear correlation one can use copulas without even knowing it, and this is very simple: as soon as the procedure F introduced above gives correct results when U1 and U2 are standard GRVs,

  • take any couple (R1, R2) of arbitrary random variables,
  • build a map M : (R1, R2) --> (U1, U2),
  • generate (A1, A2) = F(U1, U2),
  • compute M^(-1)(A1, A2) (a sketch of these four steps follows).
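Here is a minimal sketch of this recipe in Maple, with hypothetical Exponential marginals chosen purely for illustration (any continuous distribution would do): the map M is the probability integral transform CDF(R, .) composed with the standard gaussian quantile, and M^(-1) is its inverse.

with(Statistics): with(LinearAlgebra):
R   := RandomVariable(Exponential(1)):    # hypothetical marginal, for illustration only
G   := RandomVariable(Normal(0, 1)):
rho := .7:
Q   := 10^3:
R1  := Sample(R, Q):
R2  := Sample(R, Q):
# M : (R1, R2) --> (U1, U2), two standard gaussian samples
U1  := map(r -> Quantile(G, CDF(R, r, numeric), numeric), R1):
U2  := map(r -> Quantile(G, CDF(R, r, numeric), numeric), R2):
# (A1, A2) = F(U1, U2): correlate the standard gaussian couple (Cholesky, as below)
Cd  := LUDecomposition(Matrix([[1, rho], [rho, 1]]), method = 'Cholesky', output = ['U']):
Rs  := Transpose(`<,>`(`<,>`(U1), `<,>`(U2))) . Cd:
# M^(-1)(A1, A2): back to the Exponential marginals, now linearly correlated
B1  := map(u -> Quantile(R, CDF(G, u, numeric), numeric), Column(Rs, 1)):
B2  := map(u -> Quantile(R, CDF(G, u, numeric), numeric), Column(Rs, 2)):
ScatterPlot(B1, B2);   # same cloud, up to sampling artifacts, if R1 and R2 are swapped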


What is the point of correcting a work that is 10 years old?
A very simple answer is that Cholesky's decomposition (or the SVD) is still the emblematic method for linearly correlating random variables. It is the only one presented in scholarly textbooks, the only one a lot of students have been taught (unless they have had an extensive background in probability or statistics), and thus a systematic source of wrong results users are not even aware of.


Next point: it is well known that Pearson's correlation cannot be lower than -1 or higher than +1, but it is a common mistake to think that any value between -1 and +1 can be reached.
This is guaranteed for GRVs, but not for some other random variables.
For a classical counter-example see 04_correlation_2016_cost_symposium_fkuo_tagged.pdf
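To give a quantitative idea (this is the classical LogNormal counter-example; the closed-form bounds below are standard results and my own addition, not taken from the attached file): if X1 = exp(Z) and X2 = exp(sigma*Z) with Z a standard GRV, the attainable Pearson correlations are

rho_min := sigma -> (exp(-sigma) - 1) / sqrt((exp(1) - 1) * (exp(sigma^2) - 1)):
rho_max := sigma -> (exp(+sigma) - 1) / sqrt((exp(1) - 1) * (exp(sigma^2) - 1)):
evalf([rho_min(3), rho_max(3)]);   # roughly [-0.008, 0.162]: nowhere near -1 and +1

So for sigma = 3 no procedure whatsoever can produce a Pearson correlation of, say, 0.5 between X1 and X2.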

The notations used in the attached file are mainly those of the initial work https://fr.maplesoft.com/applications/view.aspx?SID=99806.

restart:


This work aims to correct the procedure used in https://fr.maplesoft.com/applications/view.aspx?SID=99806
to correlate arbitrary random variables in the (common) Pearson sense.

with(LinearAlgebra):
with(plots):
with(Statistics):

 

GAUSSIAN RANDOM VARIABLES

 

# First example: both A1 and A2 are centered gaussian random variables.
#                The order we use (A1 then A2, or A2 then A1) to define Ma doesn't matter.

Y   := RandomVariable(Normal(0, .25)):
rho := .9:
Q   := 10^4:
A1  := Sample(Y, Q):
A2  := Sample(Y, Q):
Ma  := `<,>`(`<,>`(A1), `<,>`(A2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:
A1A2 := ScatterPlot(Column(Rs2, 1), Column(Rs2, 2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(A2), `<,>`(A1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2), Column(Rs2, 1), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

# Second example: both A1 and A2 are non-centered gaussian random variables with equal standard deviations.
#                 The order we use to define Ma does matter.

Y   := RandomVariable(Normal(1, .25)):
rho := .9:
Q   := 10^4:
A1  := Sample(Y, Q):
A2  := Sample(Y, Q):
Ma  := `<,>`(`<,>`(A1), `<,>`(A2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


A1A2 := ScatterPlot(Column(Rs2, 1), Column(Rs2, 2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(A2), `<,>`(A1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2), Column(Rs2, 1), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

 

# Second example corrected: to avoid order dependency, proceed this way
#    1/ center A1 and A2
#    2/ correlate the now centered rvs
#    3/ uncenter the couple of correlated rvs


C1  := convert(Scale(A1, scale=Mean), Vector[row]):
C2  := convert(Scale(A2, scale=Mean), Vector[row]):

Ma  := `<,>`(`<,>`(C1), `<,>`(C2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


A1A2 := ScatterPlot(Column(Rs2, 1)+~Mean(A1), Column(Rs2, 2)+~Mean(A2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(C2), `<,>`(C1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2)+~Mean(A1), Column(Rs2, 1)+~Mean(A2), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

# Third example: both A1 and A2 are centered gaussian random variables with unequal standard deviations.
#                The order we use to define Ma does matter.

rho := .9:
Q   := 10^4:
A1  := Sample(Normal(0, 1), Q):
A2  := Sample(Normal(0, 2), Q):
Ma  := `<,>`(`<,>`(A1), `<,>`(A2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


A1A2 := ScatterPlot(Column(Rs2, 1), Column(Rs2, 2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(A2), `<,>`(A1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2), Column(Rs2, 1), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

# Third example corrected: to avoid order dependency, proceed this way
#    1/ scale A1 and A2
#    2/ correlate the now scaled rvs
#    3/ unscale the couple of correlated rvs


C1  := A1 /~ StandardDeviation(A1):
C2  := A2 /~ StandardDeviation(A2):

Ma  := `<,>`(`<,>`(C1), `<,>`(C2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


A1A2 := ScatterPlot(Column(Rs2, 1)*~StandardDeviation(A1), Column(Rs2, 2)*~StandardDeviation(A2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(C2), `<,>`(C1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2)*~StandardDeviation(A1), Column(Rs2, 1)*~StandardDeviation(A2), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

# More generally: to avoid order dependency, proceed this way
#    1/ transform A1 and A2 into standard gaussian random variables (mean and standard deviation scalings)
#    2/ correlate the now standardized rvs
#    3/ unscale the couple of correlated rvs
#    (a reusable procedure wrapping these three steps is sketched below)
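Here is a minimal sketch of such a reusable procedure (my own wrapper, not part of the original worksheet); it returns the two correlated samples whatever the order of its first two arguments:

CorrelateGaussian := proc(X1, X2, rho)
  uses LinearAlgebra, Statistics;
  local m1, m2, s1, s2, C1, C2, Cd, Rs;
  m1, m2 := Mean(X1), Mean(X2);
  s1, s2 := StandardDeviation(X1), StandardDeviation(X2);
  # 1/ transform both samples into (approximately) standard gaussian samples
  C1, C2 := (X1 -~ m1) /~ s1, (X2 -~ m2) /~ s2;
  # 2/ correlate the standardized couple
  Cd := LUDecomposition(Matrix([[1, rho], [rho, 1]]), method = 'Cholesky', output = ['U']);
  Rs := Transpose(`<,>`(`<,>`(C1), `<,>`(C2))) . Cd;
  # 3/ restore the original means and standard deviations
  [Column(Rs, 1) *~ s1 +~ m1, Column(Rs, 2) *~ s2 +~ m2]
end proc:

For instance R12 := CorrelateGaussian(A1, A2, rho): ScatterPlot(R12[1], R12[2]) reproduces the corrected plots above.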

 

A MORE COMPLEX EXAMPLE:

NON GAUSSIAN RANDOM VARIABLES
(here two LogNormal rvs)

 

 

# Preliminary
#   the expectation (mean) of a LogNormal rv cannot be 0;
#   as a consequence it is expected that the order used to build Ma will matter
#
# Proceed as Igor Hlivka did

 

Y   := RandomVariable(LogNormal(.5, .25)):
rho := .9:
Q   := 1000:
A1  := Sample(Y, Q):
A2  := Sample(Y, Q):
Ma  := `<,>`(`<,>`(A1), `<,>`(A2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

ScatterPlot(A1, A2, color = red, title = ["Raw LogNormal RV", font = [TIMES, BOLD, 12]]):
A1A2 := ScatterPlot(Column(Rs2, 1), Column(Rs2, 2), title = "Correlated LogNormal RV", opts, color=blue):

# And now change, as usual, the order in Ma

Ma  := `<,>`(`<,>`(A2), `<,>`(A1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2), Column(Rs2, 1), title = "Correlated LogNormal RV", opts, color=red):

display(A1A2, A2A1);

 

# How can we prevent the order used to assemble Ma from mattering?
#
# A close examination of what was done with gaussian rvs shows that in all cases we
# went back to standard gaussian rvs before correlating them.
# So let's just do the same thing here.
#
# Of course it's not as immediate as previously...
# (please do not focus on the slowness of the code, it is written to clearly explain
# what is done, not to be fast)



#-------------------------------------- from Y to standard gaussian
G  := RandomVariable(Normal(0, 1)):
G1 := Vector[row](Q, q -> Quantile(G, Probability(Y > A1[q], numeric), numeric)):
G2 := Vector[row](Q, q -> Quantile(G, Probability(Y > A2[q], numeric), numeric)):
# could be replaced by this faster code (a sketch: it treats A1 only, and the values V[q]
# follow the sorted order S1, so they must be mapped back to A1's original order before use)
#   cdf_Y := unapply(CDF(Y, z), z) assuming z > 0;
#   cdf_G := unapply(CDF(G, z), z);
#   S1    := sort(A1):
#   ini   := -10:
#   V     := Vector[row](Q):
#   for q from 1 to Q do
#     V[q] := fsolve(cdf_G(z)=cdf_Y(S1[q]), z=ini);
#     ini  := V[q]:
#   end do:
#------------------------------------------------------------------

Ma  := `<,>`(`<,>`(G1), `<,>`(G2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


#-------------------------------------- from standard gaussian to Y
C1 := Vector[row](Q, q -> Quantile(Y, Probability(G > Rs2[q, 1], numeric), numeric)):
C2 := Vector[row](Q, q -> Quantile(Y, Probability(G > Rs2[q, 2], numeric), numeric)):
#------------------------------------------------------------------
A1A2 := ScatterPlot(C1, C2, title = "Correlated LogNormal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(G2), `<,>`(G1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

#-------------------------------------- from standard gaussian to Y
C1 := Vector[row](Q, q -> Quantile(Y, Probability(G > Rs2[q, 1], numeric), numeric)):
C2 := Vector[row](Q, q -> Quantile(Y, Probability(G > Rs2[q, 2], numeric), numeric)):
#------------------------------------------------------------------

A2A1 := ScatterPlot(C2, C1, title = "Correlated LogNormal RV", opts, color=red):

display(A1A2, A2A1);

 

 

CONCLUSION: Be extremely careful when correlating non-standard gaussian random variables,
            and more generally non-gaussian random variables.


Correlating rvs the way Igor Hlivka did can be recast in the more general framework of COPULA THEORY.

Mathematically, a bidimensional copula C is a function from [0, 1] x [0, 1] to [0, 1] which is the joint CDF of a bivariate random variable whose two marginals are uniform on [0, 1].
See for instance https://en.wikipedia.org/wiki/Copula_(probability_theory)

What I did here to "correlate" A1 and A2 was nothing but applying, in a step-by-step way, a GAUSSIAN COPULA to the bivariate
(A1, A2) random variable.
In Quantile(G, Probability(Y > A1[q], numeric), numeric) the inner expression Probability(Y > A1[q], numeric) maps A1 onto [0, 1] (as required by
the definition of a copula), while the outer Quantile(G, ..., numeric) operation is the copula itself (once the same operation has been done on A2).
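For reference (this is the standard definition, not something specific to this worksheet), the gaussian copula of parameter rho can be written

    C_rho(u1, u2) = Phi_rho(Phi^(-1)(u1), Phi^(-1)(u2)),    (u1, u2) in [0, 1] x [0, 1],

where Phi^(-1) denotes the quantile function of the standard gaussian distribution and Phi_rho the joint CDF of a standard bivariate gaussian couple with correlation rho; the step-by-step construction above is exactly the sampling counterpart of this formula.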

 

 


 

Download LInear_Correlated_Random_Variables.mw

