Thanks for your investigation, Doug.
First, I have to confirm that on this PC I get many more FAILs than trues, so it would be great if others tried it on theirs. (Mine is still the Standard Worksheet Interface, Maple 11.01, Windows XP, July 10 2007 Build ID 303882, User Interface: 303882, Kernel: 296069, Library: 296069.)
Second, the simplest way to prove the inequality seems to be to
1) define m1 and m2
m1 := alpha*((p-1)*mu^2/p^2+sigma^2/p)/(2-alpha)
m2 := alpha*((p-1)*(mu^2+alpha*sigma^2/(2-alpha))/p^3+sigma^2/p^2)/(2-alpha)+(alpha*mu*(p-1)/((2-alpha)*p^2))^2
2) get (1-p)/p via s := collect(m1-m2, [sigma, mu]); S := simplify(-op(1, op(2, s))/op(1, op(1, s)))
3) observe that S = (1-p)/p is negative for p > 1, while sigma^2/mu^2 is positive for mu, sigma > 0; so one can show, rather generally, that the inequality S < sigma^2/mu^2 holds.

distrue := proc(n) local L, i, j, r; for j to n do r := true; for i while r <> FAIL do r := `assuming`([is(S < sigma^2/mu^2)], [1 < p, 0 < mu, 0 < sigma]) end do; L[j] := i-2 end do; convert(L, list) end proc:
then two runs of distrue(20) yield [0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 2, 0, 0, 0, 0, 1, 1, 2, 0] and [0, 0, 2, 0, 0, 0, 3, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0];
surely it's random (with probability < 1). My code seems very slow at simulating the distribution. Or is `is` really that demanding?
Now the trick: if you replace `assuming` with `#assuming` (commenting it out), you get [20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20], at least on this machine. So the problem seems to be caused by the assuming facility. Can anyone shed more light, please?
This always produces FAIL for me:
INEQ2 := `assuming`([is(S < sigma^2/mu^2)], [p > 1, mu::real, sigma::real])
And on a different note: how do I copy code from Maple verbatim? I don't have `assuming`([is(S < sigma^2/mu^2)], [1 < p, 0 < mu, 0 < sigma]) in my code writings, rather the simpler is(S < sigma^2/mu^2) assuming 1 < p, 0 < mu, 0 < sigma.

Why does Maple return FAIL more often than true when determining whether the inequality below is true or false? Perhaps there's a simpler example, but I can't work out how to construct one. Why sometimes true, and more often FAIL? Thanks.

`assuming`([is(alpha*((p-1)*mu^2/p^2+sigma^2/p)/(2-alpha) > alpha*((p-1)*(mu^2+alpha*sigma^2/(2-alpha))/p^3+sigma^2/p^2)/(2-alpha)+alpha^2*mu^2*(p-1)^2/((2-alpha)^2*p^4))], [alpha > 0, alpha < 1, p > 1, mu > 0, sigma > 0])
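For what it's worth, the difference can be settled by hand. My own algebra (worth double-checking) gives a complete factorization, which factor should reproduce up to ordering:

m1 := alpha*((p-1)*mu^2/p^2+sigma^2/p)/(2-alpha):
m2 := alpha*((p-1)*(mu^2+alpha*sigma^2/(2-alpha))/p^3+sigma^2/p^2)/(2-alpha)+alpha^2*mu^2*(p-1)^2/((2-alpha)^2*p^4):
factor(m1 - m2);
# expected: alpha*(p-1)*((2-alpha)*p-alpha)*((p-1)*mu^2+p*sigma^2)/((2-alpha)^2*p^4)

Every factor is positive under the stated assumptions: alpha > 0, p - 1 > 0, (2-alpha)*p - alpha > (2-alpha) - alpha = 2 - 2*alpha > 0, and (p-1)*mu^2 + p*sigma^2 > 0; hence m1 > m2.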

I was told zip was generally less efficient, so I was looking for something else. `+`(op(map(`/`, L))) is good for adding reciprocals, for example, so I wanted to generalise it somewhat. It just hadn't occurred to me that L could itself be L1-L2 (and the same goes for add). I'd be further interested in anything on the relative efficiency of various approaches. For example, is it true that zip is generally to be avoided, or are there things for which zip is best?
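To make the comparison concrete, here are the variants side by side on a toy pair of lists (my example, not from the thread):

L1 := [5, 7, 11]: L2 := [1, 2, 3]:
`+`(op(map(`/`, L1 - L2)));                  # op/map over the elementwise difference: 1/4 + 1/5 + 1/8 = 23/40
add(1/x, x = L1 - L2);                       # the same sum via add
`+`(op(zip((a, b) -> 1/(a - b), L1, L2)));   # zip version, no intermediate difference list

On lists this short all three are equivalent; which one wins on long lists is exactly the efficiency question above.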

Yes, I've learnt something new from that - thanks.

Thanks. Isn't it slower when evalf is left out of the proc?

PS: I know it is easy to get everything related to the unfair Rademacher distribution via the Bernoulli distribution, but the question is how to define it independently.
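A sketch of an independent definition via Statistics[Distribution] (option names as I recall them from the Distribution help page, so worth double-checking on your build; q, my notation, is the probability of +1):

with(Statistics):
R := Distribution(
    ProbabilityFunction = (t -> piecewise(t = 1, q, t = -1, 1 - q, 0)),
    DiscreteValueMap = (n -> 2*n - 1),   # indices n = 0, 1 map to the values -1, +1
    Type = discrete
):
X := RandomVariable(R):

With q = 1/2 this reduces to the usual Rademacher distribution.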

It seems I confused the lower incomplete gamma function with the upper one. With this replacement, C2 is OK and in the same simple form, and is equal to C1. Thanks for opening my eyes a bit wider.
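For reference, Maple's GAMMA(a, x) is the upper incomplete gamma function, so the lower one has to be built from it, e.g.:

lowerGamma := (a, x) -> GAMMA(a) - GAMMA(a, x):   # lower incomplete gamma, int(t^(a-1)*exp(-t), t = 0 .. x)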

C2 is what theory says, C1 is what Maple says; I think these should be equal, with the sum being 2C1 = 2C2. I'd prefer using CDF(R, t) directly, as with other distributions.

Yes, I meant this. Thanks. b(j) can be an iid sequence of binary random variables, also independent of y(t).

Thanks, Robert. I can only suggest gathering all the slow algorithms together with their faster analogues as a separate collection, so that less experienced people could educate themselves on how not to write slow code when speed does matter.