mmcdara


MaplePrimes Activity


These are replies submitted by mmcdara

@tomleslie 

Your code leaves a non-hatched region around the minimum of the parabola.
To get a fully hatched region, look at the code I published 6 hours before yours.

I have no problem with people who copy an answer given previously by others if the copy improves the previous solution, which is unfortunately not the case here. So if you copy, at least copy correctly.

Here is your code with more tightly spaced hatches:

 

restart;
with(plots):
with(plottools):

sol := solve([x^2 - y = 0, y - (x+5) = 0], [x, y], explicit):

f := x -> line([rhs~(x[1]), rhs~(x[2])][], color=red):

display(
  [ inequal([x^2 - y < 0, y - (x+5) < 0], x = -4..4, y = 0..10, color = white),
    textplot(
      [ [rhs~(sol[1])[], "A", align = right],
        [rhs~(sol[2])[], "B", align = left]
      ],
      font = [times, bold, 18]
    ),
    seq(f(solve([x^2 - y = 0, (y+i) - (x+5) = 0], [x, y], explicit)),
        i = 0..5, 0.1
    )
  ],
  size = [600, 600]
);
Download Unhatched_region.mw

@acer 

Acer, I'm sorry for the long delay; I had somewhat forgotten about this thread.

I kept working on a variant of  Statistics:-ChiSquareSuitableModelTest.


For now, given a sample S and any of Maple's "native" distributions with parameters P, I am able to perform a chi-square GoF test whether these parameters are numeric or formal.
The number of degrees of freedom is set automatically, and the target distribution is either D(P), if all the Ps are numeric, or a particular instance of D(P) adjusted by the maximum likelihood method.

To perform the test I call the procedure Chi2GoF with two arguments (in fact it has been improved since our last exchange):

  1. the sample S
  2. a random variable R (for instance RandomVariable(Normal(a, b)) )


The problem I'm facing now is to extend this strategy to user-defined random variables.
You saw an example with the generalized beta distribution (GBD for short).
For the moment I can create a GBD with four parameters (the classical a and b from the BetaDistribution, plus a shifting parameter and a scaling parameter).

But in my procedure I must do more than this.
One of the first steps is to extract some information from the random variable R. In particular I must extract its parameters (attribute Parameters of a RandomVariable) to recognize which are numeric and which are formal, in order to assess the latter from the sample S.

And here is my problem: I can't declare the attribute Parameters, for Maple 2015.5 stubbornly returns an error message (please see the example below; I tried a lot of other syntaxes without more success).

Download Problem.mw

At the very end, I would like to be able to make the command Chi2GoF( S, RandomVariable(GBD(a, b, shift, scale)) ) work (here GBD is just an example).
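To make the degrees-of-freedom bookkeeping concrete, here is a small Python sketch of the classical recipe (my own illustration, not your Chi2GoF code, and the helper names are hypothetical): with k bins and m parameters fitted from the sample by maximum likelihood, the statistic is compared to a chi-square with k - 1 - m degrees of freedom.

```python
import math

def normal_cdf(x, mu, sigma):
    # CDF of Normal(mu, sigma) via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def chi2_gof_normal(sample, bins=10):
    """Chi-square GoF statistic for a Normal whose two parameters are
    both estimated (ML) from the sample; dof = bins - 1 - 2."""
    n = len(sample)
    mu = sum(sample) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in sample) / n)
    lo, hi = min(sample), max(sample)
    edges = [lo + i * (hi - lo) / bins for i in range(bins + 1)]
    edges[0], edges[-1] = -math.inf, math.inf   # absorb both tails
    chi2 = 0.0
    for a, b in zip(edges, edges[1:]):
        p = normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma)
        observed = sum(1 for x in sample if a < x <= b)
        chi2 += (observed - n * p) ** 2 / (n * p)
    return chi2, bins - 1 - 2
```

The same dof rule (subtract one per fitted parameter) is what a Chi2GoF-style procedure has to apply once it knows which parameters were formal.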

 

@minhhieuh2003 

Here is a simple way to hatch a rectangle.
This could serve as a basis for your real need; feel free to say so if this doesn't suit you.
 

restart:

# hatched rectangle
#
# Simplest case: white hatches and non white rectangle

f := (x, z) -> x+z;

umin := solve(f(1, u) = 0, u);
umax := 2;

Rect := [[0, 0], [1, 0], [1, 2], [0, 2]]:
W    := [-1..2, -1..3];

plots:-display(
  PLOT(POLYGONS(Rect, COLOR(RGB, 1, 0, 0)), VIEW(op(W))),
  seq(plot(x+u, x=0..2, color=white, view=W), u in [seq(umin..umax, (umax-umin)/30)])
);

# Output: umin = -1, umax = 2, W = [-1 .. 2, -1 .. 3]

# A priori a more complex case: non white hatches and white rectangle...
#

# But very simple if you play with the thickness of the white lines

plots:-display(
  PLOT(POLYGONS(Rect, COLOR(RGB, 1, 0, 0)), VIEW(op(W))),
  seq(plot(x+u, x=0..2, color=white, thickness=6, view=W), u in [seq(umin..umax, (umax-umin)/30)])
);

Download hatched_rectangle.mw

For this simple case (a parabola and a straight line), here is a simple solution.

You can easily customize the final plot as you want.
 

restart:

# parameterize the straight line by z
# (yours corresponds to z=5)

f := unapply(x+z, (x,z)):
g := unapply(x^2, x):

# min value of u for which the line is tangent to the parabola

umin := solve(discrim(g(x)-f(x, a), x)=0, a);

# draw 31 hatches

umax    := 5;
Hatches := NULL:
for u in [seq(umin..umax, (umax-umin)/30)] do
  s       := [solve(g(x)=f(x, u), x)];
  P       := evalf(map(x -> [x, g(x)], s));
  Hatches := Hatches, CURVES(P, COLOR(RGB, 0.8$3)):
end do:

plots:-display(
  PLOT(Hatches),
  plot([g(x), f(x,5)], x = -5 .. 5, color = ["Blue"$2], view = [-5..5, -1..10], size = [500,500], scaling=constrained)
);

# Output: umin = -1/4, umax = 5
 

Download hatches.mw
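The chord computation above is easy to check outside Maple too; here is a quick Python sketch of the same idea (the names chord and hatches are mine): each hatch is the segment where the line y = x + u cuts the parabola y = x^2, i.e. the roots of x^2 - x - u = 0, which are real for u >= umin = -1/4.

```python
import math

def chord(u):
    # Endpoints of the chord where y = x + u cuts the parabola y = x^2,
    # i.e. the roots of x^2 - x - u = 0.
    disc = 1.0 + 4.0 * u          # discriminant
    if disc < 0:
        return None               # the line misses the parabola
    r = math.sqrt(disc)
    x1, x2 = (1.0 - r) / 2.0, (1.0 + r) / 2.0
    return (x1, x1 ** 2), (x2, x2 ** 2)

def hatches(umin=-0.25, umax=5.0, n=31):
    # 31 evenly spaced hatch lines, like the Maple loop above
    us = [umin + i * (umax - umin) / (n - 1) for i in range(n)]
    return [chord(u) for u in us]
```

Feeding these endpoint pairs to any line-drawing primitive reproduces the hatched region between the parabola and the line.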

@vv 

I never said that the Abramowitz-Stegun 26.2.23 formula was incorrect, so I do not understand why you counter that "[you] find the Abramowitz-Stegun's 26.2.23 correct". Could you give details, please?

You say that "[I] obtain some L^2 approximation": are you referring to the procedure NonlinearFit I used?
Abramowitz & Stegun write |eps(p)| < 4.5*10^(-4): unfortunately this says nothing about the way formula 26.2.23 was obtained (maybe by minimizing something in the least-squares sense? maybe not?).
I suppose you are confusing two things: the norm used in fitting the coefficients, and the norm of the representation error of the rational expression.

Given your reply I understand you know things I do not know about the way formula 26.2.23 had been obtained ("fit" range, "fitting" procedure that has been used, ...): could you please enlighten me?
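For reference, 26.2.23 is the rational approximation x_p ≈ t - (c0 + c1 t + c2 t^2)/(1 + d1 t + d2 t^2 + d3 t^3) with t = sqrt(-2 ln p) for an upper-tail probability 0 < p <= 1/2; here is a direct Python transcription (coefficients copied from the book, the published bound being |eps(p)| < 4.5e-4):

```python
import math

def as_26_2_23(p):
    # Abramowitz & Stegun 26.2.23: x_p with P(X > x_p) = p, 0 < p <= 1/2
    t = math.sqrt(-2.0 * math.log(p))
    num = 2.515517 + t * (0.802853 + t * 0.010328)
    den = 1.0 + t * (1.432788 + t * (0.189269 + t * 0.001308))
    return t - num / den
```

Nothing in the book says how these six coefficients were fitted, which is exactly the point of my question.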

Last but not least, could you please be clearer about your phrase "You should simply give your alternative better form (without detours) in order to compare"? (I could have found the terms in bold almost insulting.)

I respectfully thank you for your future answer

 

Oh, I just saw that you have modified the content of your reply. Could it be that you thought you had pushed things a little too far?
You could at least have stood behind your words by keeping them here.

 

@Carl Love 

Either you misinterpreted me or I explained myself poorly, but there is indeed some disagreement.
What I wanted to say by writing "NormInv is not so fast" is, more precisely: "NormInv doesn't dramatically reduce the time to estimate quantiles, for it's barely twice as fast as Quantile".

Please return to my first post and look at the factor of 30 between what you called a "magic" approximation and Quantile.
Of course this speed comes at a price: the approximations are less precise than the ones you obtain.
So, from the standpoint of fast approximations (and the second word is very important; it means "we accept an error of, let's say, 10^(-6)"), the "magic" approximation outperforms NormInv.

You can object that the most important point is accuracy, and that NormInv is about twice as fast as Quantile while being more precise, and I agree with that. But if you have, let's say, 10^6 or more quantiles to estimate, the "magic" method becomes really more interesting than NormInv.

You compute quantiles for probabilities as high as 1 - 10^(-20) and, I agree with that too, NormInv is by far better than the "magic" method there. But "in practice", that is in everyday statistics, you never have to compute such extremely extreme quantiles.
Look, in advanced technology the target reliability of systems is generally between 10^(-8) and 10^(-6), and even then, very small failure probabilities are often obtained by using redundancy or by connecting such systems in some ad hoc way.
What you need, then, is to estimate probabilities of this magnitude or, reciprocally, to estimate quantiles which (in the normal case) do not exceed +/-6 (which is in fact the idea behind the "six sigma" method, popular in some circles).

To set a final point to this debate:

  1. NormInv outperforms Quantile, and it could become an improvement of the latter in a future Maple version.
  2. Despite its qualities, NormInv doesn't meet the objective of much faster approximations of Quantile, in the sense I tried to detail above.
  3. It is perhaps a pity that you spent so much time taking my initial post in a direction that was not what I expected; its main objective was to draw attention to an improvement of the Abramowitz-Stegun 26.2.23 formula (and, more generally, to point out that this book can contain small errors).
    I should have been clearer... but you quickly steered the debate in a particular direction by saying "I find it hard to believe that any of these "magic coefficients" methods are generally better than this simple Newton's method, which only took 10 minutes to write, can be understood immediately, and works for any setting of Digits or in evalhf mode".
    Looking back at the whole time spent, it was really much ado about nothing.


Thanks for your involvement.

Here is the last test performed.

 

Time and accuracy test of Newton's method for the inverse CDF of the Normal distribution

restart:

kernelopts(version);

`Maple 2015.2, APPLE UNIVERSAL OSX, Dec 20 2015, Build ID 1097895`


#Symbolic steps:
Ncdf:= unapply(int(exp(-x^2/2)/sqrt(2*Pi), x= -infinity..X), X):
Newton:= unapply(simplify(x-(Ncdf(x)-Q)/D(Ncdf)(x)), (x,Q)):

#Purely numeric procedures:
NormInv_inner:= proc(Q, z0, eps)
local z:= z0, z1, e:= infinity, e1;
   do  
      z1:= evalf(Newton(z,Q));
      e1:= abs(z1-z);
      if e1 < eps or e1 >= e then return (z+z1)/2 fi;
      e:= e1;  
      z:= z1  
   od
end proc:
      
NormInv:= proc(Q::And(numeric, realcons, satisfies(Q-> 0 <= Q and Q <= 1)))
local r;
   if Q > 1/2 then return -thisproc(1-Q) fi;
   if Q = 0 then return -infinity fi;
   r:= evalhf(
      NormInv_inner(
         Q,
         (Q-0.5)*sqrt(2.*Pi),
         max(10.^(1-Digits), evalhf(DBL_EPSILON))
      )
   );
   if Digits <= evalhf(Digits) then r else NormInv_inner(Q, r, 10.^(1-Digits)) fi
end proc
:

Digits:= 20:

randomize(156606905209673);

156606905209673


S:= Statistics:-Sample(Uniform(0,1), 10^4):

N_Newton:= CodeTools:-Usage(NormInv~(S)):

memory used=0.64GiB, alloc change=364.01MiB, cpu time=6.24s, real time=6.07s, gc time=460.08ms

 

N_Quantile:= CodeTools:-Usage(Statistics:-Quantile~(Normal(0,1), S)):

memory used=1.41GiB, alloc change=128.00MiB, cpu time=15.00s, real time=14.35s, gc time=1.16s

 

Statistics commands tend to run faster if you first declare a RandomVariable:

N01:= Statistics:-RandomVariable(Normal(0,1)):

N_rv:= CodeTools:-Usage(Statistics:-Quantile~(N01, S, numeric)):

memory used=1.37GiB, alloc change=0 bytes, cpu time=13.29s, real time=12.51s, gc time=1.21s

 

Check the accuracy:

LinearAlgebra:-Norm(N_rv - N_Quantile, 1);

0.


Digits:= 30:

N_accurate:= CodeTools:-Usage(Statistics:-Quantile~(N01, S)):

memory used=0.89GiB, alloc change=0 bytes, cpu time=9.23s, real time=8.54s, gc time=1.02s

 

LinearAlgebra:-Norm(N_Quantile - N_accurate, 1);

0.114816126227654849290526416e-8


LinearAlgebra:-Norm(N_Newton - N_accurate, 1);

0.16152847737198934987933416e-9


14.61 / 5.32, 1.1e-9 / 1.6e-10;

2.74624060150375939849624060150, 6.87500000000000000000000000000


Conclusion: This particular Sample shows that using Newton's method (at Digits=20) gives about 6 times the accuracy while running at least 2 times as fast.

 


 

Download NormalInveseTest_mmcdara_reply.mw

@eslamelidy 

You wrote "I want to solve tthe systems of diff equation what's the problem"... maybe you should be more specific and say exactly what the "problem" you are facing is.
I took a look at your code; the first problem you will meet is related to the line R := ... : this line should generate an error message because you try to assign an expression to R while R is protected (R is a component of the package CodeGeneration; to see this, execute with(CodeGeneration);).

If you want to avoid this first error you have 3 solutions:

  1. change the name of the variable R
  2. write with(CodeGeneration, Matlab); instead of with(CodeGeneration);
  3. do not load CodeGeneration and write CodeGeneration:-Matlab instead of Matlab (bottom of your worksheet)


A second problem could come from the Matlab conversion: not all Maple expressions can be translated into Matlab code.
A simple example among many others:

restart:
with(CodeGeneration):
f := piecewise(x<0, -x, x):
Matlab(f);   # observe the warning


So, could you explain precisely what "what's the problem" means to you?

Beyond all this, your worksheet is remarkably hard to read and thus to understand.
I think, and it was the purpose of my previous reply, that you would benefit from simplifying its content (I gave you an example for the first ODE, but I'm not going to do the job for the 150 remaining ones).

@Carl Love 

Here it is.

This new file confirms what I sent you previously.
NormInv still doesn't return for probabilities around 1 - 10^(-7).

In some sense it's not dramatic, for I will get Maple 2018 back in ten days, after my vacation.
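For what it's worth, the stalls at probabilities close to 1 seem tied to starting the iteration at z = 0; here is a Python sketch of the same Newton iteration (my own transcription, not your Maple code) that starts instead from the asymptotic lower-tail guess -sqrt(-2 ln Q), and converges even for Q = 1 - 10^(-7):

```python
import math

def ncdf(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def npdf(x):
    # standard normal PDF (the derivative of ncdf)
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def norm_inv(Q, eps=1e-12):
    # Newton iteration for the standard normal inverse CDF; the tail
    # starting point avoids the slow crawl seen when starting at 0.
    if not 0.0 < Q < 1.0:
        raise ValueError("Q must be in (0, 1)")
    if Q > 0.5:
        return -norm_inv(1.0 - Q)            # symmetry
    if Q == 0.5:
        return 0.0
    z = -math.sqrt(-2.0 * math.log(Q))       # asymptotic lower-tail guess
    for _ in range(100):
        z1 = z - (ncdf(z) - Q) / npdf(z)
        if abs(z1 - z) < eps:
            return z1
        z = z1
    return z
```

This is only a sketch of the starting-point idea, not a claim about what Maple 2015 does internally.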

 

restart:

Digits:= 20: #Changing this doesn't change the code; it's just for testing.

#Symbolic steps are only needed to create the numeric procedures, not
#needed to run them:
Ncdf:= unapply(int(exp(-x^2/2)/sqrt(2*Pi), x= -infinity..X), X);
Newton:= unapply(simplify(x-(Ncdf(x)-Q)/D(Ncdf)(x)), (x,Q));

#Purely numeric procedures:
NormInv_inner:= proc(Q, eps)
local z:= 0, z1;
   do  z1:= evalf(Newton(z,Q));  if abs(z1-z) < eps then return z1 fi;  z:= z1  od
end proc:

NormInv:= (Q::And(numeric, realcons, satisfies(Q-> 0 <= Q and Q <= 1)))->
   if Q > 1/2 then -thisproc(1-Q)
   elif Q = 0 then -infinity
   elif Digits <= evalhf(Digits) then evalhf(NormInv_inner(Q, DBL_EPSILON))
   else NormInv_inner(evalf(Q), 10.^(1-Digits))
   fi
:

 

proc (X) options operator, arrow; 1/2+(1/2)*erf((1/2)*2^(1/2)*X) end proc

 

proc (x, Q) options operator, arrow; -(1/2)*2^(1/2)*Pi^(1/2)*erf((1/2)*2^(1/2)*x)*exp((1/2)*x^2)+2^(1/2)*Pi^(1/2)*Q*exp((1/2)*x^2)+x-(1/2)*2^(1/2)*Pi^(1/2)*exp((1/2)*x^2) end proc


kernelopts(version)

`Maple 2015.2, APPLE UNIVERSAL OSX, Dec 20 2015, Build ID 1097895`


Seed:=randomize();

156606905209673


with(Statistics):

# This restricted probability range is chosen because the full range
# generates situations that do not end in Maple 2015

P := Sample(Uniform(0.4, 0.6), 10^4):


f := proc(P)
local n, q:
for n from 1 to numelems(P) do q := P[n]: NormInv(q) end do:
end proc:
CodeTools:-Usage(Quantile~(Normal(0, 1), P, numeric)):
CodeTools:-Usage(f(P)):

memory used=1.59GiB, alloc change=428.01MiB, cpu time=16.34s, real time=15.85s, gc time=1.25s

memory used=1.02GiB, alloc change=32.00MiB, cpu time=9.59s, real time=8.98s, gc time=917.58ms

 

 


 

Download NotSoFast_2015_2.mw

@Carl Love 

Here is the performance of NormInv compared to Quantile: you'll see that NormInv is not a very fast alternative to Quantile.
So maybe it's "hard to believe that any of these "magic coefficients" methods are generally better than this simple Newton's method", but the evidence is in the numbers.
Maybe it's not for nothing that mathematicians keep seeking fast methods to estimate normal quantiles (see the paper cited by Axel in the first reply)?

 

# This restricted probability range is chosen because the full range
# generates situations that do not end in Maple 2015

P := Sample(Uniform(0.4, 0.6), 10^4):


f := proc(P)
local n, q:
for n from 1 to numelems(P) do q := P[n]: NormInv(q) end do:
end proc:
CodeTools:-Usage(Quantile~(Normal(0, 1), P, numeric)):
CodeTools:-Usage(f(P)):
 

memory used=1.57GiB, alloc change=32.00MiB, cpu time=16.02s, real time=14.87s, gc time=1.62s

memory used=1.02GiB, alloc change=32.00MiB, cpu time=9.70s, real time=8.80s, gc time=1.29s

 

 


 

Download NotSoFast.mw

@Carl Love 

Thanks.
I will test your code once my holidays are over (I will then have Maple 2018; your code doesn't work in Maple 2015).

For the moment, here are the "exact quantiles".

BTW, my post has taken a strange direction, for its purpose was not really to find a fast method to estimate the inverse normal CDF, but rather to point out a "mistake" in that reference book that is Abramowitz & Stegun.
 

restart:

Digits := 20:
P := [seq(1 - 10.^(-k), k=1..Digits)]:

print~(P, parse~(sprintf~("%2.5f", Statistics:-Quantile~(Normal(0, 1), P)))):

0.9                     1.28155
0.99                    2.32635
0.999                   3.09023
0.9999                  3.71902
0.99999                 4.26489
0.999999                4.75342
0.9999999               5.19934
0.99999999              5.61200
0.999999999             5.99781
0.9999999999            6.36134
0.99999999999           6.70602
0.999999999999          7.03448
0.9999999999999         7.34880
0.99999999999999        7.65063
0.999999999999999       7.94135
0.9999999999999999      8.22208
0.99999999999999999     8.49379
0.999999999999999999    8.75729
0.9999999999999999999   9.01327
0.99999999999999999999  9.26234

 


 

Download ExactQuantiles.mw

@vv 

Your trick helps going a little further, but not beyond P = 0.998.

Could you please provide a result obtained with Maple 2018 or 2019?
For instance with the sequence P = [seq(1-10^(-k), k=1..6)].

Thanks in advance.


 

restart:

Digits:= 15: #Changing this doesn't change the code; it's just for testing.

#Symbolic steps are only needed to create the numeric procedures, not
#needed to run them:
Ncdf:= unapply(int(exp(-x^2/2)/sqrt(2*Pi), x= -infinity..X), X);
Newton:= unapply(simplify(x-(Ncdf(x)-Q)/D(Ncdf)(x)), (x,Q));

#Purely numeric procedures:
NormInv_inner:= proc(Q)
local z:= 0, z1;
   do  z1:= Newton(z,Q);  if abs(z1-z)<10^(-Digits+1) then return z1 fi;  z:= z1  od
end proc:

NormInv:= (Q::And(realcons, satisfies(Q-> 0 <= Q and Q <= 1)))->
   if Q > 1/2 then -thisproc(1-Q)
   elif Q=0 then -infinity
   elif Digits <= evalhf(Digits) then evalhf(NormInv_inner(Q))
   else NormInv_inner(evalf(Q))
   fi
:

 

proc (X) options operator, arrow; 1/2+(1/2)*erf((1/2)*2^(1/2)*X) end proc

 

proc (x, Q) options operator, arrow; -(1/2)*2^(1/2)*Pi^(1/2)*erf((1/2)*2^(1/2)*x)*exp((1/2)*x^2)+2^(1/2)*Pi^(1/2)*Q*exp((1/2)*x^2)+x-(1/2)*2^(1/2)*Pi^(1/2)*exp((1/2)*x^2) end proc


interface(version)

`Standard Worksheet Interface, Maple 2015.2, Mac OS X, December 21 2015 Build ID 1097895`


P := [seq(k/(k+1), k=100..1000, 100)]:

for n from 1 to numelems(P) do
  printf("%2d  %0.5f", n, P[n]);
  printf("   %2.5f", NormInv(P[n]));
  print():
end do;

 1  0.99010   2.33008
 2  0.99502   2.57755
 3  0.99668   2.71415
 4  0.99751   2.80784
 5  0.99800   2.87879
 6  0.99834

Warning,  computation interrupted

 

 


 

Download NewtonFails_2.mw

@Carl Love 

Procedure NormInv fails to return a result in a reasonable time for Q > 0.969:

 

restart:

Digits:= 15: #Changing this doesn't change the code; it's just for testing.

#Symbolic steps are only needed to create the numeric procedures, not
#needed to run them:
Ncdf:= unapply(int(exp(-x^2/2)/sqrt(2*Pi), x= -infinity..X), X);
Newton:= unapply(simplify(x-(Ncdf(x)-Q)/D(Ncdf)(x)), (x,Q));

#Purely numeric procedures:
NormInv_inner:= proc(Q)
local z:= 0, z1;
   do  z1:= Newton(z,Q);  if z1=z then return z1 fi;  z:= z1  od
end proc:

NormInv:= (Q::And(realcons, satisfies(Q-> 0 <= Q and Q <= 1)))->
   if Q > 1/2 then -thisproc(1-Q)
   elif Q=0 then -infinity
   elif Digits <= evalhf(Digits) then evalhf(NormInv_inner(Q))
   else NormInv_inner(evalf(Q))
   fi
:

 

proc (X) options operator, arrow; 1/2+(1/2)*erf((1/2)*2^(1/2)*X) end proc

 

proc (x, Q) options operator, arrow; -(1/2)*2^(1/2)*Pi^(1/2)*erf((1/2)*2^(1/2)*x)*exp((1/2)*x^2)+2^(1/2)*Pi^(1/2)*Q*exp((1/2)*x^2)+x-(1/2)*2^(1/2)*Pi^(1/2)*exp((1/2)*x^2) end proc


f := proc(P)
  local n:
  for n from 1 to numelems(P) do NormInv(P[n]) end do:
end proc:

with(Statistics):

P := Sample(Uniform(0.5, 1), 10^3):
CodeTools:-Usage(Quantile~(Normal(0, 1), P)):
CodeTools:-Usage(Quantile~(Normal(0, 1), P, numeric)):
CodeTools:-Usage(f(P)):

memory used=17.03MiB, alloc change=0 bytes, cpu time=304.00ms, real time=304.00ms, gc time=0ns

memory used=17.14MiB, alloc change=0 bytes, cpu time=312.00ms, real time=312.00ms, gc time=0ns

Warning,  computation interrupted

 

P := Statistics:-Sample(Uniform(0.5, 1), 10^1);

P := Vector[row](10, {(1) = .5814494501088662, (2) = .9192028848365151, (3) = .5837804651056354, (4) = .7511003077047363, (5) = .9996647395128597, (6) = .6777035756339407, (7) = .523538854774417, (8) = .6068302925603996, (9) = .6989195620513526, (10) = .6668340901901474}, datatype = float[8])


for n from 1 to 10 do
n;
NormInv(P[n])
end do;

 1   0.205602915707357559
 2   1.39972985397027738
 3   0.211574416971719526
 4   0.677956326586950597
 5   3.40135741509089939
 6

Warning,  computation interrupted

 

NormInv(0.969)

Warning,  computation interrupted

 

 


 

Download NewtonFails.mw

@Axel Vogt 

Thank you Axel, it's indeed a very interesting paper.

 

@mehdi jafari 

You cannot have a set with elements 1, 1, 3, 3, 5, 6: in a set, each element is present only once.
So you cannot do with sets what you did with lists.
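The same collapse of duplicates happens in any language with a set type; a trivial Python illustration (Maple's {1, 1, 3, 3, 5, 6} behaves the same way and evaluates to {1, 3, 5, 6}):

```python
# A list keeps duplicates; a set keeps each element only once.
dup_list = [1, 1, 3, 3, 5, 6]
as_set = set(dup_list)   # collapses to {1, 3, 5, 6}
```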

@Carl Love 

@Moh Huda

Given the title of the plots, I guessed D[chemo] is the chemo dose (which is of no practical help).

More interesting, I found this open source for the paper: https://arxiv.org/pdf/1806.05790.pdf.
D[chemo] is indeed the "chemotherapeutic dose" and it seems (but I'm not sure of that) it could be defined by relation (7).

Nevertheless I wouldn't go any further on my side.

Moh: you wrote "My question is how to get the first two figures below based on these information...", and my answer is: "do what is described in the cited paper; all the needed material seems to be there".
If you are puzzled by the term s∞, it's defined below equation (10): "The case with constant immunotherapy (s(t) = s∞)".
Maybe you should read the paper more carefully?

 
