## @tomleslie  Your code leaves a non...

@tomleslie

Your code leaves a non-hatched region around the minimum of the parabola.
For a fully hatched region, look at the code I published 6 hours before yours.

I have no problem with people copying an answer previously given by others, provided the copy improves on the previous solution, which is unfortunately not the case here. So if you copy, at least copy correctly.

Here is your code with tighter hatches

 > restart;
 > with(plots):
 > with(plottools):
 > sol:=solve( [ x^2-y=0, y-(x+5)=0],
 > [x, y],
 > explicit
 > ):
 > f:= x-> line([rhs~(x[1]),rhs~(x[2])][], color=red):
 > display( [ inequal( [ x^2-y<0,
 > y-(x+5)<0
 > ],
 > x=-4..4,
 > y=0..10,
 > color=white
 > ),
 > textplot( [ [ rhs~(sol[1])[],
 > "A",
 > align=right
 > ],
 > [ rhs~(sol[2])[],
 > "B",
 > align=left
 > ]
 > ],
 > font=[times, bold, 18]
 > ),
 > seq( f( solve
 > ( [ x^2-y=0, (y+i)-(x+5)=0],
 > [x, y],
 > explicit
 > )
 > ),
 > i=0..5, 0.1
 > )
 > ],
 > size=[600, 600]
 > );
 >
 >

Download Unhatched_region.mw

## @acer  Acer, I'm sorry for all...

@acer

Acer, I'm sorry for the delay; this thread had somewhat slipped my mind.

I kept working on a variant of Statistics:-ChiSquareSuitableModelTest.

For now, given a sample S and any of Maple's "native" distributions with parameters P, I am able to perform a chi-square goodness-of-fit (GoF) test whether these parameters are numeric or formal.
The number of degrees of freedom is set automatically, and the target distribution is either D(P), if all the Ps are numeric, or a particular instance of D(P) fitted by the maximum likelihood method.

To perform the test I call the procedure Chi2GoF with two arguments (in fact it has been improved since our last exchange):

1. the sample S
2. a random variable R (for instance RandomVariable(Normal(a, b)) )
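For readers outside Maple, the logic of such a test (bin the sample, estimate the formal parameters by maximum likelihood, then reduce the degrees of freedom by the number of fitted parameters) can be sketched in Python. The function name, the binning scheme, and the normal-only target are illustrative assumptions of mine, not the actual Chi2GoF code:

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def chi2_gof_normal(sample, n_bins=8):
    """Chi-square GoF test of 'sample' against Normal(mu, sigma),
    with mu and sigma estimated from the sample itself (2 fitted params)."""
    n = len(sample)
    mu = sum(sample) / n
    sigma = math.sqrt(sum((x - mu)**2 for x in sample) / n)  # ML estimate
    # equal-width interior bins over the sample range, open outer bins
    lo, hi = min(sample), max(sample)
    width = (hi - lo) / n_bins
    edges = [lo + i * width for i in range(n_bins + 1)]
    edges[0], edges[-1] = -math.inf, math.inf
    stat = 0.0
    for a, b in zip(edges, edges[1:]):
        observed = sum(a <= x < b for x in sample)
        expected = n * (normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma))
        stat += (observed - expected)**2 / expected
    df = n_bins - 1 - 2   # 2 parameters were fitted from the data
    return stat, df
```

With k bins and both mu and sigma fitted, the statistic is then compared to a chi-square distribution with k - 3 degrees of freedom, which is the automatic df adjustment described above.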

The problem I'm facing now is to extend this strategy to user-defined random variables.
You saw an example with the generalized beta distribution (GBD for short).
For the moment I can create a GBD with four parameters (the classical a and b from the BetaDistribution, plus a shifting parameter and a scaling parameter).

But in my procedure I must do more than this.
One of the first steps is to extract some information from the random variable R. In particular I must extract its parameters (attribute Parameters of a RandomVariable) to recognize which are numeric and which are formal, in order to estimate the latter from the sample S.

And here is my problem: I can't declare the attribute Parameters, for Maple 2015.5 stubbornly returns an error message (please see the example below; I tried a lot of other syntaxes without more success).

Download Problem.mw

Ultimately, I would like to be able to make the command Chi2GoF( S, RandomVariable(GBD(a, b, shift, scale)) ) work (here GBD is just an example).

## @minhhieuh2003  Here is a simple w...

@minhhieuh2003

Here is a simple way to hatch a rectangle.
This could serve as a basis for your real need; feel free to say so if it doesn't suit you.

 > restart:
 > # Hatched rectangle
 > # Simplest case: white hatches and a non-white rectangle
 > f := (x, z) -> x+z;
 > umin := solve(f(1, u) = 0, u);
 > umax := 2;
 > Rect := [[0, 0], [1, 0], [1, 2], [0, 2]]:
 > W    := [-1..2, -1..3];
 > plots:-display(
 >   PLOT(POLYGONS(Rect, COLOR(RGB, 1, 0, 0)), VIEW(op(W))),
 >   seq(plot(x+u, x=0..2, color=white, view=W), u in [seq(umin..umax, (umax-umin)/30)])
 > );
 > # A priori a more complex case: non-white hatches and a white rectangle...
 > # But very simple if you play with the thickness of the white lines
 > plots:-display(
 >   PLOT(POLYGONS(Rect, COLOR(RGB, 1, 0, 0)), VIEW(op(W))),
 >   seq(plot(x+u, x=0..2, color=white, thickness=6, view=W), u in [seq(umin..umax, (umax-umin)/30)])
 > );
 >

Download hatched_rectangle.mw

## hatches...

For this simple case (a parabola and a straight line), here is a simple solution.

You can easily customise the final plot as you want.

 > restart:
 > # Parameterize the straight line by z (yours corresponds to z=5)
 > f := unapply(x+z, (x,z)):
 > g := unapply(x^2, x):
 > # Min value of u for which the line is tangent to the parabola
 > umin := solve(discrim(g(x)-f(x, a), x)=0, a);
 > # Draw 31 hatches
 > umax    := 5;
 > Hatches := NULL:
 > for u in [seq(umin..umax, (umax-umin)/30)] do
 >   s       := [solve(g(x)=f(x, u), x)];
 >   P       := evalf(map(x -> [x, g(x)], s));
 >   Hatches := Hatches, CURVES(P, COLOR(RGB, 0.8$3)):
 > end do:
 > plots:-display(
 >   PLOT(Hatches),
 >   plot([g(x), f(x,5)], x = -5 .. 5, color = ["Blue"$2], view = [-5..5, -1..10], size = [500,500], scaling=constrained)
 > );

Download hatches.mw

## @vv I never said th the A...

@vv

I never said that the Abramowitz-Stegun formula 26.2.23 was incorrect, so I do not understand why you counter that "[you] find the Abramowitz-Stegun's 26.2.23 correct". Could you elaborate, please?

You say that "[I] obtain some L^2 approximation": are you referring to the procedure NonlinearFit I used?
Abramowitz & Stegun write |eps(p)| < 4.5*10^(-4): unfortunately this says nothing about how formula 26.2.23 was obtained (maybe by minimizing something in the least-squares sense? maybe not?).
I suspect you are confusing two things: the norm used in fitting the coefficients and the norm of the representation error of the rational expression.
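For reference, formula 26.2.23 transcribed as code (a sketch of mine, not the Maple worksheet under discussion); the coefficients and the error bound |eps(p)| < 4.5*10^(-4) are those quoted by A&S:

```python
import math

def as_26_2_23(p):
    """A&S 26.2.23: returns x_p with Q(x_p) ~ p, where Q is the upper
    tail of the standard normal; stated valid for 0 < p <= 0.5
    with |error| < 4.5e-4 over that range."""
    t = math.sqrt(math.log(1.0 / p**2))        # equivalently sqrt(-2 ln p)
    num = 2.515517 + t*(0.802853 + t*0.010328)
    den = 1.0 + t*(1.432788 + t*(0.189269 + t*0.001308))
    return t - num / den
```

For p = 0.025 this returns about 1.9604 against the exact 1.95996: within the stated bound, but nothing in the book says in which norm the coefficients were fitted.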

Given your reply, I understand you know things I do not about how formula 26.2.23 was obtained (the "fit" range, the "fitting" procedure that was used, ...): could you please enlighten me?

Last but not least, could you please be clearer about your phrase "You should simply give your alternative better form (without detours) in order to compare"? (I could have found the terms in bold almost insulting.)

I respectfully thank you in advance for your answer.

Oh, I just saw that you have modified the content of your reply. Could it be that you thought you had pushed things a little too far?
You could at least have stood by your words and kept them here.

## @Carl Love  Either you misinterpre...

@Carl Love

Either you misinterpreted me or I explained myself poorly, but there is indeed some disagreement.
What I meant by "NormInv is not so fast" is, more precisely: "NormInv doesn't dramatically reduce the time needed to estimate quantiles, for it's barely twice as fast as Quantile".

Please return to my first post and look at the factor of 30 between what you called a "magic" approximation and Quantile.
Of course this speed comes at a price: the approximations are less precise than those you obtain.
So, from the standpoint of fast approximations (and the second word is very important: it means "we accept an error of, let's say, 10^(-6)"), the "magic" approximation outperforms NormInv.

You may object that the most important point is accuracy, and that NormInv is about twice as fast as Quantile while being more precise, and I agree with that. But if you have, let's say, 10^6 or more quantiles to estimate, the "magic" method becomes far more interesting than NormInv.

You compute quantiles for probabilities as high as 1-10^(-20) and, I agree with that too, NormInv is by far better than the "magic" method there. But "in practice", that is in usual statistics, you never have to compute such extreme quantiles.
Look: in advanced technology the target reliability of systems is generally between 10^(-8) and 10^(-6); moreover, very small failure probabilities are often obtained by using redundancy or by connecting such systems in some ad hoc way.
What you need, then, is to estimate probabilities of this magnitude or, reciprocally, to estimate quantiles which (in the normal case) do not exceed +/-6 (which is in fact the idea behind the "six sigma" method, popular in some circles).
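Probabilities of that magnitude can, by the way, be evaluated directly, without any quantile inversion; a small Python sketch (using erfc rather than 1 - Phi(z), to avoid cancellation in the far tail):

```python
import math

def upper_tail(z):
    """Q(z) = P(Z > z) for a standard normal Z.
    erfc keeps full precision for large z, where computing
    1 - Phi(z) would lose all significant digits to cancellation."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

upper_tail(6.0) is about 9.9e-10, which illustrates why quantiles beyond +/-6 are rarely needed in reliability work.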

To set a final point to this debate:

1. NormInv outperforms Quantile and could become an improvement of the latter in a future Maple version.
2. Despite its qualities, NormInv doesn't meet the objective of much faster approximations of Quantile, in the sense I tried to detail above.
3. It may be a pity that you spent so much time developing my initial post in a direction that was not what I expected; its main objective was to draw attention to an improvement of the Abramowitz-Stegun formula 26.2.23 (and, more generally, to point out that this book can contain small errors).
I should have been clearer... but you quickly steered the debate in a particular direction, saying "I find it hard to believe that any of these "magic coefficients" methods are generally better than this simple Newton's method, which only took 10 minutes to write, can be understood immediately, and works for any setting of Digits or in evalhf mode".
In retrospect, given the whole time spent, it was really much ado about nothing.

Thanks for your involvement.

Here is the last test performed.

Time and accuracy test of Newton's method inverse CDF of the Normal distribution

 > restart:
 > kernelopts(version);
 (1)
 > #Symbolic steps:
 > Ncdf:= unapply(int(exp(-x^2/2)/sqrt(2*Pi), x= -infinity..X), X):
 > Newton:= unapply(simplify(x-(Ncdf(x)-Q)/D(Ncdf)(x)), (x,Q)):
 > #Purely numeric procedures:
 > NormInv_inner:= proc(Q, z0, eps)
 > local z:= z0, z1, e:= infinity, e1;
 >    do
 >       z1:= evalf(Newton(z,Q));
 >       e1:= abs(z1-z);
 >       if e1 < eps or e1 >= e then return (z+z1)/2 fi;
 >       e:= e1;
 >       z:= z1
 >    od
 > end proc:
 > NormInv:= proc(Q::And(numeric, realcons, satisfies(Q-> 0 <= Q and Q <= 1)))
 > local r;
 >    if Q > 1/2 then return -thisproc(1-Q) fi;
 >    if Q = 0 then return -infinity fi;
 >    r:= evalhf(
 >       NormInv_inner(
 >          Q,
 >          (Q-0.5)*sqrt(2.*Pi),
 >          max(10.^(1-Digits), evalhf(DBL_EPSILON))
 >       )
 >    );
 >    if Digits <= evalhf(Digits) then r
 >    else NormInv_inner(Q, r, 10.^(1-Digits))
 >    fi
 > end proc:
 > Digits:= 20:
 > randomize(156606905209673);
 (2)
 > S:= Statistics:-Sample(Uniform(0,1), 10^4):
 > N_Newton:= CodeTools:-Usage(NormInv~(S)):
 memory used=0.64GiB, alloc change=364.01MiB, cpu time=6.24s, real time=6.07s, gc time=460.08ms
 > N_Quantile:= CodeTools:-Usage(Statistics:-Quantile~(Normal(0,1), S)):
 memory used=1.41GiB, alloc change=128.00MiB, cpu time=15.00s, real time=14.35s, gc time=1.16s

Statistics commands tend to run faster if you first declare a RandomVariable:

 > N01:= Statistics:-RandomVariable(Normal(0,1)):
 > N_rv:= CodeTools:-Usage(Statistics:-Quantile~(N01, S, numeric)):
 memory used=1.37GiB, alloc change=0 bytes, cpu time=13.29s, real time=12.51s, gc time=1.21s

Check the accuracy:

 > LinearAlgebra:-Norm(N_rv - N_Quantile, 1);
 (3)
 > Digits:= 30:
 > N_accurate:= CodeTools:-Usage(Statistics:-Quantile~(N01, S)):
 memory used=0.89GiB, alloc change=0 bytes, cpu time=9.23s, real time=8.54s, gc time=1.02s
 > LinearAlgebra:-Norm(N_Quantile - N_accurate, 1);
 (4)
 > LinearAlgebra:-Norm(N_Newton - N_accurate, 1);
 (5)
 > 14.61 / 5.32, 1.1e-9 / 1.6e-10;
 (6)

Conclusion: This particular Sample shows that using Newton's method (at Digits=20) gives about 6 times the accuracy while running at least 2 times as fast.

 >

Download NormalInveseTest_mmcdara_reply.mw

## @eslamelidy  You wrote "I wan...

@eslamelidy

You wrote "I want to solve tthe systems of diff equation what's the problem"... maybe you should be more specific and say exactly what "problem" you are facing.
I had a look at your code; the first problem you will meet is related to the line R := ... : this line should generate an error message, because you try to assign an expression to R while R is protected (R is a component of the package CodeGeneration; to see this, write with(CodeGeneration);).

If you want to avoid this first error, you have 3 solutions:

1. change the name of the variable R
2. write with(CodeGeneration, Matlab); instead of with(CodeGeneration)
3. do not load CodeGeneration and write CodeGeneration:-Matlab instead of Matlab (bottom of your worksheet).

A second problem could come from the Matlab conversion: not all Maple expressions can be translated into Matlab code.
A simple example among many others:
 > restart:
 > with(CodeGeneration):
 > f := piecewise(x<0, -x, x):
 > Matlab(f);
 > # observe the warning

So, could you explain precisely what "what's the problem" does mean to you?

Beyond all this, your worksheet is amazingly complex to read and thus to understand.
I think, and it was the purpose of my previous reply, that you would benefit from simplifying its content (I gave you an example for the first ODE, but I'm not going to do the job for the 150 remaining ones).

## @Carl Love Here it is.This new file...

@Carl Love

Here it is.

This new file confirms what I sent you previously.
NormInv still doesn't work for probabilities around 1-1e-7.

In some sense it's not dramatic, for I will get Maple 2018 back in ten days, after my vacation.

 > restart:
 > Digits:= 20: #Changing this doesn't change the code; it's just for testing.
 >
 > #Symbolic steps are only needed to create the numeric procedures, not
 > #needed to run them:
 > Ncdf:= unapply(int(exp(-x^2/2)/sqrt(2*Pi), x= -infinity..X), X);
 > Newton:= unapply(simplify(x-(Ncdf(x)-Q)/D(Ncdf)(x)), (x,Q));
 >
 > #Purely numeric procedures:
 > NormInv_inner:= proc(Q, eps)
 > local z:= 0, z1;
 > do  z1:= evalf(Newton(z,Q));  if abs(z1-z) < eps then return z1 fi;  z:= z1  od
 > end proc:
 >
 > NormInv:= (Q::And(numeric, realcons, satisfies(Q-> 0 <= Q and Q <= 1)))->
 > if Q > 1/2 then -thisproc(1-Q)
 > elif Q = 0 then -infinity
 > elif Digits <= evalhf(Digits) then evalhf(NormInv_inner(Q, DBL_EPSILON))
 > else NormInv_inner(evalf(Q), 10.^(1-Digits))
 > fi
 > :
 >
 (1)
 > kernelopts(version)
 (2)
 > Seed:=randomize();
 (3)
 > with(Statistics):
 > # This restricted probability range is chosen because the full range
 > # generates situations that do not end in Maple 2015
 > P := Sample(Uniform(0.4, 0.6), 10^4):
 > f := proc(P)
 >   local n, q:
 >   for n from 1 to numelems(P) do
 >     q := P[n]:
 >     NormInv(q)
 >   end do:
 > end proc:
 > CodeTools:-Usage(Quantile~(Normal(0, 1), P, numeric)):
 > CodeTools:-Usage(f(P)):
 memory used=1.59GiB, alloc change=428.01MiB, cpu time=16.34s, real time=15.85s, gc time=1.25s
 memory used=1.02GiB, alloc change=32.00MiB, cpu time=9.59s, real time=8.98s, gc time=917.58ms
 >

Download NotSoFast_2015_2.mw

## not so fast an approximation...

@Carl Love

Here is the performance of NormInv compared to Quantile: you'll see that NormInv is not a very fast alternative to Quantile.
So, maybe it's "hard to believe that any of these "magic coefficients" methods are generally better than this simple Newton's method", but the evidence is in the numbers.
Maybe it's not for nothing that mathematicians keep seeking fast methods to estimate normal quantiles (see the paper cited by Axel in the first reply)?
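For comparison outside Maple, the Newton iteration being benchmarked, z <- z - (Phi(z) - Q)/phi(z), can be sketched as follows. This is a plain Python transcription of the idea, not Carl's code; the stopping rule and starting point are my own choices:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    return math.exp(-z*z / 2.0) / math.sqrt(2.0 * math.pi)

def norm_inv(Q, eps=1e-12):
    """Invert the standard normal CDF by Newton's method.
    Symmetry reduces the problem to Q <= 1/2; note that for Q
    extremely close to 0 the z=0 start converges very slowly,
    which is exactly the failure mode discussed in this thread."""
    if Q > 0.5:
        return -norm_inv(1.0 - Q, eps)
    z = 0.0
    while True:
        z1 = z - (norm_cdf(z) - Q) / norm_pdf(z)
        if abs(z1 - z) < eps:
            return z1
        z = z1
```

Each call costs several erf evaluations, which is why a one-shot rational approximation can still win when millions of quantiles are needed.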

 > # This restricted probability range is chosen because the full range
 > # generates situations that do not end in Maple 2015
 > P := Sample(Uniform(0.4, 0.6), 10^4):
 > f := proc(P)
 >   local n, q:
 >   for n from 1 to numelems(P) do
 >     q := P[n]:
 >     NormInv(q)
 >   end do:
 > end proc:
 > CodeTools:-Usage(Quantile~(Normal(0, 1), P, numeric)):
 > CodeTools:-Usage(f(P)):
 memory used=1.57GiB, alloc change=32.00MiB, cpu time=16.02s, real time=14.87s, gc time=1.62s
 memory used=1.02GiB, alloc change=32.00MiB, cpu time=9.70s, real time=8.80s, gc time=1.29s
 >

Download NotSoFast.mw

## @Carl Love  Thanks. I will test yo...

@Carl Love

Thanks.
I will test your code once my holidays are over (I will then have Maple 2018; it doesn't work in Maple 2015).

For the moment here are the "exact quantiles".

BTW, my post has taken a strange direction, for its purpose was not really to find a fast method to estimate the inverse normal CDF, but rather to point out a "mistake" in that reference book that is Abramowitz & Stegun.

 > restart:
 > Digits := 20: P := [seq(1 - 10.^(-k), k=1..Digits)]:
 > print~(P, parse~(sprintf~("%2.5f", Statistics:-Quantile~(Normal(0, 1), P)))):
 (1)
 >

Download ExactQuantiles.mw

## @vv Your trick helps goinng a littl...

@vv

Your trick helps going a little bit further, but not beyond P=0.998.

Could you please provide a result obtained with Maple 2018 or 2019?
For instance with the sequence P=[seq(1-10^(-k), k=1..6)]

Thanks in advance

 > restart:
 > Digits:= 15: #Changing this doesn't change the code; it's just for testing.
 >
 > #Symbolic steps are only needed to create the numeric procedures, not
 > #needed to run them:
 > Ncdf:= unapply(int(exp(-x^2/2)/sqrt(2*Pi), x= -infinity..X), X);
 > Newton:= unapply(simplify(x-(Ncdf(x)-Q)/D(Ncdf)(x)), (x,Q));
 >
 > #Purely numeric procedures:
 > NormInv_inner:= proc(Q)
 > local z:= 0, z1;
 > do  z1:= Newton(z,Q);  if abs(z1-z)<10^(-Digits+1) then return z1 fi;  z:= z1  od
 > end proc:
 >
 > NormInv:= (Q::And(realcons, satisfies(Q-> 0 <= Q and Q <= 1)))->
 > if Q > 1/2 then -thisproc(1-Q)
 > elif Q=0 then -infinity
 > elif Digits <= evalhf(Digits) then evalhf(NormInv_inner(Q))
 > else NormInv_inner(evalf(Q))
 > fi
 > :
 >
 (1)
 > interface(version)
 (2)
 > P := [seq(k/(k+1), k=100..1000, 100)]:
 > for n from 1 to numelems(P) do
 >   printf("%2d  %0.5f", n, P[n]);
 >   printf("   %2.5f", NormInv(P[n]));
 >   print():
 > end do;
 1  0.99010   2.33008
 2  0.99502   2.57755
 3  0.99668   2.71415
 4  0.99751   2.80784
 5  0.99800   2.87879
 6  0.99834 Warning,  computation interrupted
 >

Download NewtonFails_2.mw

## @Carl Love  Procedure NormInv fail...

@Carl Love

Procedure NormInv fails to return a result in a reasonable time for Q > 0.969:

 > restart:
 > Digits:= 15: #Changing this doesn't change the code; it's just for testing.
 >
 > #Symbolic steps are only needed to create the numeric procedures, not
 > #needed to run them:
 > Ncdf:= unapply(int(exp(-x^2/2)/sqrt(2*Pi), x= -infinity..X), X);
 > Newton:= unapply(simplify(x-(Ncdf(x)-Q)/D(Ncdf)(x)), (x,Q));
 >
 > #Purely numeric procedures:
 > NormInv_inner:= proc(Q)
 > local z:= 0, z1;
 > do  z1:= Newton(z,Q);  if z1=z then return z1 fi;  z:= z1  od
 > end proc:
 >
 > NormInv:= (Q::And(realcons, satisfies(Q-> 0 <= Q and Q <= 1)))->
 > if Q > 1/2 then -thisproc(1-Q)
 > elif Q=0 then -infinity
 > elif Digits <= evalhf(Digits) then evalhf(NormInv_inner(Q))
 > else NormInv_inner(evalf(Q))
 > fi
 > :
 >
 (1)
 > f := proc(P)
 >   local n:
 >   for n from 1 to numelems(P) do NormInv(P[n]) end do:
 > end proc:
 > with(Statistics):
 > P := Sample(Uniform(0.5, 1), 10^3):
 > CodeTools:-Usage(Quantile~(Normal(0, 1), P)):
 > CodeTools:-Usage(Quantile~(Normal(0, 1), P, numeric)):
 > CodeTools:-Usage(f(P)):
 memory used=17.03MiB, alloc change=0 bytes, cpu time=304.00ms, real time=304.00ms, gc time=0ns
 memory used=17.14MiB, alloc change=0 bytes, cpu time=312.00ms, real time=312.00ms, gc time=0ns
 Warning,  computation interrupted
 > P := Statistics:-Sample(Uniform(0.5, 1), 10^1);
 (2)
 > for n from 1 to 10 do n; NormInv(P[n]) end do;
 > NormInv(0.969)
 >

Download NewtonFails.mw

## @Axel Vogt  Thank you Axel, it'...

@Axel Vogt

Thank you Axel, it's indeed a very interesting paper.

## @mehdi jafari  You cannot have a s...

@mehdi jafari

You cannot have a set with elements 1, 1, 3, 3, 5, 6: in a set, each element is present only once.
So you can't do with sets what you did with lists.
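The same collapse of duplicates happens in any language with a set type; a quick Python illustration of the difference, using the values from your example:

```python
as_list = [1, 1, 3, 3, 5, 6]   # a list keeps duplicates and order
as_set = set(as_list)          # a set keeps one copy of each element

print(len(as_list))        # 6
print(sorted(as_set))      # [1, 3, 5, 6]
```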

## @Carl Love @Moh Huda  Given the title...

@Moh Huda

Given the titles of the plots, I guessed that D[chemo] is the chemo dose (which is of no practical help).

More interesting: I found an open-access source for the paper: https://arxiv.org/pdf/1806.05790.pdf.
D[chemo] is indeed the "chemotherapeutic dose" and it seems (but I'm not sure of that) it could be defined by relation (7).

Nevertheless I wouldn't go any further on my side.

Moh: you write "My question is how to get the first two figures below based on these information...". My answer is: "do what is described in the paper cited; everything needed seems to be there".
If you are puzzled with the term s∞, it's defined below equation (10): "The case with constant immunotherapy (s(t) = s∞)".
Maybe you should read the paper more carefully?
