sand15

807 Reputation

13 Badges

10 years, 117 days

MaplePrimes Activity


These are replies submitted by sand15

@tomleslie

 

Sorry, you're right, I forgot the "1/3" in the coefficient of x^3.
 

@tomleslie 

Not applicable: see tomleslie's answer

You got a wrong result with Optimization:-NLPSolve (?)

Given that fexact is monotone decreasing over [0, 2] with values in [0, 1], and that fapp is monotone increasing on [0, 2] with fapp(2) > 6789*2, it is obvious that the maximum M of abs(fapp-fexact) is reached at x = 2.

Then M = subs(x=2, fapp-fexact) = fapp(2) - sin(2)/2, which is about 30543.54535 and not 2827.54535.

Using Optimization:-Maximize(abs(fapp-fexact), x=0..2) instead returns the correct value.

What's more, your two calculations (L2 norm and L1 norm) strangely give the same result: could it be that Optimization:-NLPSolve(abs(fapp-fexact), x=0..2, maximize) does not do what you were expecting?
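Since fexact and fapp are not reproduced in this thread, here is a minimal sketch of the call I mean, with stand-in functions (sin(x)/x for fexact, an increasing polynomial for fapp; both are hypothetical, not the OP's actual expressions):

```maple
restart:
fexact := sin(x)/x:    # stand-in: monotone decreasing on [0, 2], values in [0, 1]
fapp   := x + x^3:     # stand-in: monotone increasing on [0, 2]

# Maximize the absolute deviation over the whole interval:
Optimization:-Maximize(abs(fapp - fexact), x = 0..2);

# Both monotonicities push the gap to the right endpoint,
# so the maximum should agree with the value at x = 2:
evalf(eval(fapp - fexact, x = 2));
```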

@tomleslie 

I appreciate these non-guru explanations.
Thanks Tom

@acer 

Thanks acer for all these technical explanations.

For my understanding: I guess that your writing :-`+`(a, b) is a shortcut for Tolerances:-`+`(a, b), isn't it?
If so, isn't it an unsafe way to proceed if you use two different packages containing different functions with the same name? (I recently met this situation with the packages simplex and geometry, which both contain a function named convexhull.)
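For what it's worth, my current understanding of the `:-` prefix (a sketch, to be corrected if wrong): it denotes the global name, not a package export, so it stays unambiguous no matter how many packages are loaded:

```maple
restart:
with(Tolerances):    # rebinds `+` in the session to Tolerances:-`+`

:-`+`(1, 2);         # :-`+` is the *global* `+`, so this is plain addition: 3

# A package export is always reachable through its long-form name,
# e.g. Tolerances:-`+`(a, b), which avoids any clash between packages
# that export the same short name.
```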

 

 


@Carl Love

I had noticed the `+` returned by the command line < with(Tolerances); > but I forgot that `+` does not operate on a list but on a sequence; thus I forgot to use op or [] before applying `+`.
I like your (`+`@op) solution, easier to understand than using subsindets(Z, :-`+`, `+`);
Thanks
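A small sketch of why the op (or []) step matters, with a hypothetical list Z:

```maple
Z := [a, b, c]:

`+`(Z);         # a single operand (the list itself), returned unchanged: [a, b, c]
`+`(op(Z));     # op turns the list into a sequence, so this is a + b + c
(`+`@op)(Z);    # the same thing, written as a composed operator
`+`(Z[]);       # Z[] is another way to extract the sequence of elements
```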

It seems to me that you wanted to answer the question EB1000 asked a few hours ago?
Unfortunately, your answer turned out to be a question!

@tomleslie

Thanks Tom,
 I do not understand either why "evalr" seems necessary. In any case, thank you for keeping me from going in circles.

@acer 

Thank you acer.

As often there are several ways to tackle a problem and looking for the "optimal" solution seems to be an endless story.

By the way, I myself used the "subs({0="H", 1="T"},...)" method to return a solution in terms of "H" and "T".
Unfortunately, writing
UseHardwareFloats := true:
V := floor~(SignalProcessing:-GenerateUniform(numThrows, 0, 2))
does not return integers, making the {0="H", 1="T"} substitution inoperative.
I tried the substitution {0.="H", 1.="T"} but it didn't work either.

I understand that floor, round, ceil do not return integers when the argument is a hardware floating point number.
Is there a simple way to obtain, for instance, the integer part of a hardware floating point number?
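A sketch of a workaround I would try, assuming the issue is that GenerateUniform returns a datatype=float[8] container, so an elementwise floor is stored back as hardware floats. Converting to a list first lets floor return genuine integers:

```maple
restart:
V := SignalProcessing:-GenerateUniform(10, 0, 2):   # hardware-float container
L := floor~(convert(V, list)):                      # a plain list: floor now yields true integers 0 or 1
subs({0 = "H", 1 = "T"}, L);                        # the substitution works again
```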

@Carl Love 

 

Thanks Carl for your very well documented answer.
I think I will try to implement your suggestions. Not that the problem of calculating the probabilities of each sum deserves it, but rather out of personal satisfaction (in a "practical" situation one would probably use the Central Limit Theorem and the normal approximation when the number of dice exceeds a few dozen).

By the way, I used the SignalProcessing package in Q3 and, from my student years, I remember that convolution is generally done, for efficiency, using FFTs (two direct + one inverse).
I do not know if SignalProcessing:-Convolution proceeds this way, but if it does, do you think the limitations you pointed out also impact the FFT?

 

@radaar 

Have you thought about initializing the seed of the pseudo random generator?

This is a classical trap: executing the same worksheet twice generates exactly the same pseudo-random numbers. To avoid this, use:

Seed := randomize()

in a separate block just after your top "restart" line.

@Carl Love  Thanks for your remark.

It's rather difficult to compare the results produced by Q3 with the true ones. I did that for 3 and 4 dice and all was good.
Obviously I was overconfident in assuming it would be the same for higher values.

I suppose your observation definitely discredits Q3?

 

Maybe I'm wrong in my interpretation?
I understand A is a random variable with integer outcomes 1..6 and that each outcome has a different weight B.

Would that be okay with you?
restart:
with(Statistics):
A := [$1..6];               # A = [1, 2, 3, 4, 5, 6]
B := A *~ 10;               # B = [10, 20, 30, 40, 50, 60]
P := B /~ add(B);           # P = [1/21, 2/21, ..., 6/21]
X := RandomVariable(ProbabilityTable(P));
N := 100;
Histogram(Sample(X, N), discrete);

# also:
plot(CDF(X, t), t=min(A)..max(A), discont=true)

Remark: if A were another set of outcomes, for instance A := [1, 2, 3, 5, 8, 13], the histogram plot would be wrong (abscissas would be 1, 2, ..., 6). In this case you should correct the histogram plot by writing
Histogram(Sample(X, N), discrete, tickmarks=[[seq(i=A[i], i=1..6)], default])

Let me know if my answer is off the mark.

@tomleslie 

I agree: it's always better to try a simple approach first.

Nevertheless, this doesn't mean we can avoid some preliminary reflection.
When you write that the "penalty" function (I personally prefer the term "objective" function) is sum((xmodel-xexp)^2 + (ymodel-yexp)^2), you make implicit, unwritten hypotheses.
This formula, while usual in the least squares method, relies on underlying assumptions:

  1. The disagreement between the model predictions (x*(t), y*(t)) and the experimental points (x(t), y(t)) comes from random additive measurement errors.
  2. These errors are homoscedastic: https://en.wikipedia.org/wiki/Homoscedasticity
  3. The measurement errors on X and Y are iid (independent and identically distributed).
  4. They are Gaussian.
  5. There is no measurement error on t.
  6. There is no measurement error on alpha, nor on v__0.
  7. ... I'm not even sure this list is exhaustive.

Under these hypotheses, sum((xmodel-xexp)^2 + (ymodel-yexp)^2) represents, up to an additive constant and a negative factor, the log-likelihood of the experimental data given
the predictions of the model.

If only one of the previous assumptions fails, this formula is false.

If (3) fails, it's easy to modify your penalty function into sum(((xmodel-xexp)/sX)^2 + ((ymodel-yexp)/sY)^2), where sX and sY are the standard deviations
of the measurement errors on X and Y.

If the homoscedasticity assumption fails, the penalty function should be sum(wX(tn)*(xmodel(tn)-xexp(tn))^2 + wY(tn)*(ymodel(tn)-yexp(tn))^2), where wX(tn) and
wY(tn) are weights depending on t (the previous case being the particular situation where wX(tn) = (1/sX)^2 and wY(tn) = (1/sY)^2 for each tn).
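In Maple terms, the heteroscedastic objective above could be sketched like this (all names and values here, xexp, yexp, xmod, ymod, wX, wY, are hypothetical illustration data, not Erik's):

```maple
# Hypothetical experimental points, model predictions at the same tn,
# and weights (e.g. reciprocal variances) per observation:
xexp := [0.0, 0.9, 2.1]:   yexp := [0.1, 1.1, 1.8]:
xmod := [0.0, 1.0, 2.0]:   ymod := [0.0, 1.0, 2.0]:
wX   := [1.0, 1.0, 4.0]:   wY   := [1.0, 1.0, 4.0]:

# Weighted least-squares objective:
J := add(wX[n]*(xmod[n] - xexp[n])^2 + wY[n]*(ymod[n] - yexp[n])^2, n = 1..3);

# The homoscedastic case is recovered with the constant weights
# wX[n] = 1/sX^2 and wY[n] = 1/sY^2.
```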

If (4) fails, the formula is simply false: think of errors with uniform distributions!

If (5) fails, the only correct way to account for measurement errors on t is a Bayesian approach.
The often-invoked "generalized least squares method" is not rigorous and would fail if (4) fails too.

I'll stop here.
Let me be clear: I'm not saying your work is meaningless. I just say that writing the formula you used as a "penalty" function makes no sense if you
do not explicitly state the hypotheses that make it reliable.
Too many people use statistical methods as if they were cooking recipes.
I thought this clarification was necessary so that Erik can form a correct idea of the true complexity of his problem, and thus of the most suitable method to solve it.

@Carl Love 

The OP wrote "pdf(P(x)=1/k+1  x=0..k)".

Does this mean "the pdf of a random variable (RV) X such that Prob(X=x) = 1/k + 1 for each value x in 0..k"?
If so, then X is not a RV because, obviously, the sum of Prob(X=x) over 0..k doesn't equal 1.

Does this mean "the pdf of a RV Xk defined by the function P(x) = 1/k + 1 for each value x in 0..k"?
If so, then here again Xk is not a RV, because the sum of P(x) over 0..k doesn't equal 1.


That is why I interpreted "pdf(P(x)=1/k+1  x=0..k)" the way I did in my reply.
I do not pretend I was right, but the original question is not correctly formulated and leaves many doors open.

Following a suggestion from acer on one of my previous posts concerning the PDF of the sum of two RVs (usage of 'inert'), I tried this:

restart:
with(Statistics):
roll := rand(0. .. 1.):
z := (a, b, c) -> Mean(c*RandomVariable(Uniform(a, b))):
y := (a, b, c) -> evalf(Mean(c*RandomVariable(Uniform(a, b)), inert)):

abc := roll(), roll()+1., roll():

z(abc);  # fails with probability 1
y(abc);  # always returns the correct result

I remember acer writing that he was going to submit a bug report; maybe the fact that z(abc) fails is related to the same bug?
