## Erik Postma

1364 Reputation · 12 years, 305 days · Maplesoft

I am the manager of the Mathematical Software Group, working mostly on the Maple library. I have been working at Maplesoft since 2007, mostly on the Statistics and Units packages and on contract work with industry users. My background is in abstract algebra, in which I completed a PhD at Eindhoven University of Technology. During my studies I was always searching for interesting questions at the crossroads of math and computer science. When I came to Canada in 2007, I had nothing but a work permit, some contacts at Maplesoft (whom I had met at a computer algebra conference a year earlier), and a plan to travel around beautiful Canada for a few months. Since Maplesoft is a company that solves more interesting math- and computer-science-related questions than just about anywhere else, I was quite eager to join them, and after about three months, I could start.

## Yes, of course...

Much more elegant idea than my procedure. Thanks Joe!

Erik.

## Thanks...

Ah -- thanks for that, I didn't realize that. It seems that it does work if you use table index referencing:

```
p := proc(V::Vector(datatype=float[8]))
    eval(Statistics['Mean'](V));
end proc:
V := Vector[row]([1, 2, 3], datatype=float[8]):
evalhf(p(V));
```

returns 2.

Erik Postma
Maplesoft

## You seem to have nailed it...

Thanks Acer, you seem to have nailed it.

One minor addition, though I'm sure the Compiler:-Compile solution is still the best: you can actually call module exports from within evalhf, you just have to wrap the call in eval (which calls into regular Maple). There is a speed penalty for that, but it's not huge I think. This is often useful if you can do the majority of the work in evalhf and just need a few library calls.

Here's a quick illustration:

```
evalhf(rand());
```

gives an error message: "Error, module member referencing is not supported in evalhf". However,

```
evalhf(eval(rand()));
```

returns 0.193139816415E12 (in my case).

Erik Postma
Maplesoft

## Hi Willie, Sorry for the...

Hi Willie,

Sorry for the delay in my reply. I only just saw your message.

It turns out that my initial advice was actually not the best. The situation where some of the variables represent real quantities and some are complex is actually not very common, and the way to solve this in Maple is not very straightforward (although it is quite possible).

The best approach is to split U, the complex variable, into UR + I * UI, where UR and UI are real numbers representing the real and imaginary parts of U. Then we can transform the problem into one that has only real variables, and solve that using the routines for real solving. This would work as follows; I'm including your problem definition as well:

```
> a := 0.5;
> alph := U -> U^2*(a-(1-U^2)^2);
> dadU := D(alph);  # Let Maple do the work of determining the derivative! :)
> equations := {Re(I*U*V + alph(U)) = 0, -V + I*dadU(U) = 0};
> equations := eval(equations, U = UR + I*UI);  # Split U into real and imaginary parts
> equations := map(Re, equations) union map(Im, equations);  # Equate real and imaginary parts of left- and right-hand sides
> equations := evalc(equations);  # Simplify assuming all occurring *variables* are real (which they are, now)
> solutions := {RealDomain[solve](equations)};  # Use Maple's real-domain solver explicitly. (Note there is a double root at the origin.)
> solutions := map(s -> {U = eval(UR + I*UI, s), V = eval(V, s)}, solutions);  # 'Reassemble' the complex value for U
> KfValues := map(s -> eval(Im(I*U*V + alph(U))/V, s), solutions);
```
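As an aside, the core trick here, trading one complex unknown for two real ones and equating real and imaginary parts, can be seen on a toy equation. This is a hand-worked sketch in Python rather than Maple, purely to illustrate the idea:

```python
# Toy example: solve z^2 + 1 = 0 by writing z = x + y*I with x, y real.
# Substituting gives (x^2 - y^2 + 1) + (2*x*y)*I = 0, so the single complex
# equation splits into two real equations:
#   real part:      x^2 - y^2 + 1 = 0
#   imaginary part: 2*x*y = 0   (so x = 0 or y = 0)
# With y = 0 the real part would need x^2 = -1, which has no real solution;
# hence x = 0 and y^2 = 1, i.e. y = 1 or y = -1.

# 'Reassemble' the complex solutions, as in the listing above:
solutions = [complex(0, 1), complex(0, -1)]
for z in solutions:
    assert abs(z**2 + 1) < 1e-12  # each value satisfies the original equation
```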

You also asked three specific questions in your last email. I think the answer to the first two is that this is an artifact of Maple's solve command not making optimal use of the assumptions placed on variables, and of there not really being an automatic solution for the case where some variables are complex and others are real. I added the answer to the second one to the program listing above.

One more note: I saw that you chose to use floating-point numbers in a couple of places, once in a := 0.5 and once in the definition of dadU. That means you get floating-point answers. I already removed the one in dadU; if you also replace the definition of a by a := 1/2, you will get exact answers. I wouldn't recommend that for this problem, though; the exact answers are very long (on the order of a page or so).

I hope this helps! I'll check here again in the next couple of days to see if you have any more questions.

Erik Postma
Maplesoft

## Thanks, Robert...

Thanks, Robert. Here is a final improvement, then; I also realized that I had broken the detection of n being less than or equal to lag:

```
SampleWithCorr3 := proc(prob::And(nonnegative, satisfies(p -> p <= 1)),
                        lag::posint,
                        c::And(numeric, satisfies(c -> c <= 1 and c >= -1)),
                        n::posint,
                        $)
local S, C, i, q, r;
uses Statistics;
    ASSERT(c >= prob/(prob - 1) and c >= (prob - 1)/prob,
           "impossible to achieve this combination of correlation and probability");
    if n <= lag then
        return Sample(RandomVariable(Bernoulli(prob)), n);
    end if;
    if c >= 0 then
        S := Sample(RandomVariable(Bernoulli(prob)), n);
        C := Sample(RandomVariable(Bernoulli(c)), n - lag);
        for i to n - lag do
            if C[i] = 1 then
                S[i + lag] := S[i];
            end if;
        end do;
    else
        r := -c;
        q := (prob + c * (1 - prob))/(1 + c);
        # Initial lag elements
        S := Vector(n, Sample(RandomVariable(Bernoulli(prob)), lag));
        # Fill in the rest of the elements
        ArrayTools:-Copy(Sample(RandomVariable(Bernoulli(q)), n - lag), S, lag);
        C := Sample(RandomVariable(Bernoulli(r)), n - lag);
        for i to n - lag do
            if C[i] = 1 then
                S[i + lag] := 1 - S[i];
            end if;
        end do;
    end if;
    return S;
end proc;
```

## Extension for negative correlation...

Alec and I thought about the issue for negative correlation a bit more, and thanks to our conversation I've managed to convince at least myself that the following will work.

So let me state the problem once more. We intend to produce a vector S of random numbers such that each number individually is Bernoulli distributed with success probability prob, and such that the correlation of the S[i] and the S[i - lag] is c. The idea behind the code above is to first generate a sequence of independent numbers, all Bernoulli with success probability prob, and then for a fraction abs(c) of them, set S[i] equal to S[i - lag] (if c is positive) or to 1 - S[i - lag] (if c is negative). This will give the right correlation in all cases, but it will destroy the expected value (or success probability) of the sample if c is negative and prob is different from 0.5. Since the code above already works for c >= 0, we assume c < 0 from now on.

One solution is to simply set the initial sample using some other success probability than prob, which may require changing the fraction of numbers that we change from abs(c) to something else. If in the end, we still get expected value prob for each entry, and lag-delayed autocorrelation c, then all is fine.

The first lag entries of S are never changed after generating them initially, so we need to generate them with success probability prob. Let us assume we generate the n - lag other entries with success probability q, and then for a fraction r of them, we set S[i] equal to 1 - S[i - lag]. We will determine the values of q and r that give expected value prob and correlation c later.

So suppose that we have generated all initial values of S and are now in the process of setting S[i] equal to 1 - S[i - lag] for some i. In particular, suppose we have processed the entries S[j] for j < i. Then currently E(S[j]) is equal to prob for j < i and it is equal to q for j >= i. With probability r, we set S[i] equal to 1 - S[i - lag], and otherwise we leave it as is; the expected value of S[i] after this operation will be r(1 - prob) + (1 - r)q. This should be equal to prob, so we have:

r(1 - prob) + (1 - r)q = prob.

Now for the correlation. The correlation of S[i] with S[i - lag] can be defined as

(E(S[i] * S[i - lag]) - E(S[i]) * E(S[i - lag]))/sqrt((E(S[i]^2) - E(S[i])^2)*(E(S[i - lag]^2) - E(S[i - lag])^2)) = (E(S[i] * S[i - lag]) - prob^2)/(prob - prob^2).

We now need to find E(S[i] * S[i - lag]). If we set S[i] equal to 1 - S[i - lag] (which we do with probability r), then their product is certainly 0. And if we don't (probability 1 - r), then the two are independent; S[i - lag] is equal to 1 with probability prob, but S[i] has probability q of being equal to 1. So the expected value we seek to find is equal to (1 - r) prob q. Setting the correlation equal to c, we find

((1 - r)*prob*q - prob^2)/(prob - prob^2) = ((1 - r) * q - prob) / (1 - prob) = c,

thus

(1 - r) * q = prob + c * (1 - prob).

Isolating the term (1 - r) q from the other equation above that came from setting the expected value to prob, we find that

prob - r*(1 - prob) = prob + c * (1 - prob),

or in other words, r = -c. Substituting back gives q = (prob + c (1 - prob))/(1 + c). This leads to the following change to Alec's implementation:

```
SampleWithCorr2 := proc(prob::And(numeric, satisfies(p -> p >= 0 and p <= 1)),
                        lag::posint,
                        c::And(numeric, satisfies(c -> c <= 1 and c >= -1)),
                        n::posint,
                        $)
local X, B, S, C, i, q, r;
uses Statistics;
    if c >= 0 then
        X := RandomVariable(Bernoulli(prob));
        S := Sample(X, n);
        if n <= lag or c = 0 then
            return S;
        end if;
        B := RandomVariable(Bernoulli(abs(c)));
        C := Sample(B, n - lag);
        for i from lag + 1 to n do
            if C[i - lag] = 1 then
                S[i] := S[i - lag];
            end if;
        end do;
        return S;
    else
        r := -c;
        q := (prob + c * (1 - prob))/(1 + c);
        # Initial lag elements
        S := Vector(n, Sample(RandomVariable(Bernoulli(prob)), lag));
        # Fill in the rest of the elements
        ArrayTools:-Copy(Sample(RandomVariable(Bernoulli(q)), n - lag), S, lag);
        C := Sample(RandomVariable(Bernoulli(r)), n - lag);
        for i from lag + 1 to n do
            if C[i - lag] = 1 then
                S[i] := 1 - S[i - lag];
            end if;
        end do;
        return S;
    end if;
end proc;
```
```
S := SampleWithCorr2(1/3, 1, -1/5, 10^6);
                     Vector[row](%id = 137642912)
Statistics:-Correlation(S[1..10^6-1], S[2..10^6]);
                     -0.2010351921
Statistics:-Mean(S);
                     0.3331040000
S := SampleWithCorr2(1/4, 1, 2/5, 10^6);
                     Vector[row](%id = 137036492)
Statistics:-Correlation(S[1..10^6-1], S[2..10^6]);
                     0.4008726203
Statistics:-Mean(S);
                     0.2510660000
```
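As a sanity check on the derivation of q and r, the negative-correlation branch is easy to reproduce outside Maple. Here is a sketch in plain Python (the function names are my own, purely illustrative); the sample mean and lag-delayed correlation come out close to the requested values:

```python
import random

def sample_with_corr(prob, lag, c, n, seed=1):
    """Bernoulli(prob) sample with lag-delayed autocorrelation c, for c < 0.
    Port of the negative-correlation branch above: draw the first `lag`
    entries with success probability prob, the rest with probability
    q = (prob + c*(1 - prob))/(1 + c), then with probability r = -c
    overwrite S[i] by 1 - S[i - lag], processing i in increasing order."""
    rng = random.Random(seed)
    r = -c
    q = (prob + c * (1 - prob)) / (1 + c)
    s = [int(rng.random() < prob) for _ in range(lag)]
    s += [int(rng.random() < q) for _ in range(n - lag)]
    for i in range(lag, n):
        if rng.random() < r:
            s[i] = 1 - s[i - lag]
    return s

def corr(x, y):
    # Plain sample correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5

s = sample_with_corr(1/3, 1, -1/5, 10**6)
print(sum(s) / len(s))        # close to 1/3
print(corr(s[:-1], s[1:]))    # close to -0.2
```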

Two final notes:

1. There are values for prob and c where q is not between 0 and 1. I think this reflects the fact that not all combinations of c and prob actually correspond to a solution at all. For example, if you want the correlation to be -1, then the values will have to alternate; thus the expected value of the entries will be 1/2. I haven't looked into this very deeply though.
2. I tried to get the display engine to do Maple formatting, but didn't succeed. I might try to revisit this comment at a later stage and try and get that to work.
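For what it's worth, the bound in note 1 can be made explicit. With -1 < c < 0, q = (prob + c (1 - prob))/(1 + c) satisfies q >= 0 exactly when c >= prob/(prob - 1), and q <= 1 exactly when c >= (prob - 1)/prob — precisely the two conditions asserted in SampleWithCorr3 above. A quick check with exact rationals (Python, illustrative only):

```python
from fractions import Fraction as F

def q_value(prob, c):
    # Success probability for the pre-flip entries when c < 0.
    return (prob + c * (1 - prob)) / (1 + c)

def feasible(prob, c):
    # q lies in [0, 1] exactly when both asserted bounds hold.
    return c >= prob / (prob - 1) and c >= (prob - 1) / prob

# A feasible combination: prob = 1/3, c = -1/5 gives q = 1/4.
print(q_value(F(1, 3), F(-1, 5)), feasible(F(1, 3), F(-1, 5)))
# An infeasible one: prob = 9/10, c = -1/2 gives q = 17/10, not a probability.
print(q_value(F(9, 10), F(-1, 2)), feasible(F(9, 10), F(-1, 2)))
```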

All the best, Erik.
