
## One way...

Instead of

`y__88 := x -> rhs(dsk50):`

(which never substitutes the argument into `rhs(dsk50)`), do

```
y__88 := unapply(rhs(dsk50), x):
y__88(1)
```

Do you want to:

1. Define a user Distribution (B1) in order to construct a RandomVariable X and to be able to Sample it?
2. Simply generate a Sample from a ZIP (Zero-Inflated Poisson) model?
3. Do regression?

Constructing a Distribution whose only purpose is to be sampled (point 1 above) is pointless: it is a useless piece of work.
We construct a Distribution when we want to perform formal operations on a random variable having that distribution. If only the generation of a sample matters, you just have to design a sampling algorithm.
To help you understand the futility of constructing a Distribution in this case, here is a simple example which draws a sample from the classical univariate ZIP model:
Univariate_ZIP.mw
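For readers without the worksheet, the sampling idea can be sketched outside Maple. Here is a minimal Python illustration (the helper names `zip_sample` and `p_zero` are mine, not the worksheet's): with probability p_zero the observation is a structural zero, otherwise it is an ordinary Poisson draw.

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below exp(-lam).
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def zip_sample(p_zero, lam, n, rng=None):
    # Zero-Inflated Poisson: structural zero with probability p_zero,
    # otherwise a Poisson(lam) draw.
    rng = rng or random.Random()
    return [0 if rng.random() < p_zero else poisson_sample(lam, rng)
            for _ in range(n)]

rng = random.Random(0)
s = zip_sample(0.3, 2.5, 10000, rng)
print(len(s), sum(s) / len(s))
```

No Distribution object is needed: the sampler alone reproduces the mixture P(0) = p_zero + (1-p_zero)*exp(-lam).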

(Think of software packages like R, SPSS or Matlab, to cite a few: they have little or no formal capabilities compared to Maple. Even though they do not contain any abstract construction of random variables, they are nevertheless capable of generating samples, even for functions of several random variables.)

So I believe the first question you must answer is "What do I ultimately want to do, and do I need everything I've written to achieve it?"
If you just want to generate a sample from a bivariate ZIP model, things are quite simple.
If you want to construct the bivariate random variable of this ZIP model, things can become very tricky.
_____________________________________________________

For the moment, assuming that you really want to construct the distribution of a bivariate ZIP random variable, your code contains several errors and undefined quantities.
Here is a partial correction of your code.
(Stochastic representations of ZIP models generally use Bernoulli random variables. I can't see any in your code, but it contains two Binomial random variables: shouldn't they be Bernoulli random variables?)

 > restart:
 > with(Statistics):
 > randomize():
 > N    := 100;
 > `x__01` := Vector[row](N, [1$N]):
 > `x__11` := Sample(Binomial(N, 0.4), N):
 > `x__22` := Sample(Normal(0, 1), N):
 > `z__02` := Vector[row](N, [1$N]):
 > `z__12` := Sample(Binomial(N, 0.4), N):
 > `z__22` := Sample(Normal(0, 1), N):
 (1)
 > gamma__01 *~ `z__01`
 (2)
 > # As you operate on vectors, use +~ instead of +, *~ instead of *, exp~ instead of exp, and so on.
 > # Undefined quantities in red:
 > aux1 := 1 +~ exp~(-(gamma__01 *~ `z__01` +~ gamma__11 *~ `z__11` +~ gamma__21 *~ `z__21`)):
 > aux2 := 1 +~ exp~(-(gamma__02 *~ `z__02` +~ gamma__12 *~ `z__12` +~ gamma__22 *~ `z__22`)):
 > phi__1 := 1 /~ aux1;
 > phi__2 := 1 /~ aux2;
 > lambda__1 := exp~(beta__01 *~ `x__01` +~ beta__11 *~ `x__11` +~ beta__21 *~ `x__21`) *~ aux1;
 > lambda__2 := exp~(beta__02 *~ `x__02` +~ beta__12 *~ `x__12` +~ beta__22 *~ `x__22`) *~ aux2;
 (3)
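Maple's elementwise operators (`+~`, `*~`, `exp~`) simply map an operation over every entry of a container, broadcasting scalars. A rough list-based Python analogue (the helper `ew` is my own, for illustration only):

```python
import math

def ew(op, u, v=None):
    # Elementwise application: unary op over u, or binary op over zip(u, v),
    # broadcasting scalars, mimicking Maple's op~ and u op~ v.
    if v is None:
        return [op(x) for x in u]
    if not isinstance(u, list):
        u = [u] * len(v)
    if not isinstance(v, list):
        v = [v] * len(u)
    return [op(x, y) for x, y in zip(u, v)]

z = [0.0, 1.0, 2.0]
aux = ew(lambda a, b: a + b, 1.0, ew(math.exp, ew(lambda x: -x, z)))  # 1 +~ exp~(-z)
phi = ew(lambda a, b: a / b, 1.0, aux)                                # 1 /~ aux
print(phi)
```

This computes the logistic transform 1/(1+exp(-z)) entrywise, exactly the shape of the phi__1, phi__2 expressions above.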

Fix the issues above (undefined variables) and let me know when it's done.

The next command in your file is

`B:= (y[1],  y[2])->([[phi +(1-phi)*((e)^(-lambda[1]- lambda[2])*(1+ alpha*(1-(e)^(-(1-(e)^(-1))*lambda[1]))*(1-(e)^(-(1-(e)^(-1))*lambda[2])))),],[(1-phi)*((e)^(-lambda[1]- lambda[2])*((lambda[1])^(y[1]) *  (lambda[2])^(y[2]))/(y[1]!* y[2]!)*((1+ alpha)*((e)^(-y[1])-(e)^(-(1-(e)^(-1))*lambda[1]))*((e)^(-y[2])-(e)^(-(1-(e)^(-1))*lambda[2])))),]]):  `

It contains a lot of misplaced commas and a lot of (e)^(something). The latter is a common error newbies make instead of writing exp(something): in Maple "e" is just a name, and e^(-1) is not 1/exp(1).
After corrections it seems you should have written the following (it's likely I used too many "~", but I do not know which quantities are scalars and which are vectors or matrices).

```B:= y -> (
[
[
phi +(1-phi)*(exp~(-lambda[1]-lambda[2])*~(1+alpha*~(1-~exp~(-(1-~exp(-1))*lambda[1]))*(1-exp~(-(1-exp~(-1))*lambda[2]))))
],
[
(1-phi)*(exp~(-lambda[1]-lambda[2])*((lambda[1])^(y[1]) *  (lambda[2])^(y[2]))/(y[1]!* y[2]!)*((1+ alpha)*(exp~(-y[1])-exp~(-(1-exp(-1))*lambda[1]))*(exp~(-y[2])-exp~(-(1-exp(-1))*lambda[2]))))
]
]
):
```

Note that the expression of B contains phi, which appears later to be a scalar value and has nothing to do with the previously defined phi__1 and phi__2.
So my question: what is the purpose of defining phi__1 and phi__2, given that they are never used?

Had you expressed your problem in a more concise form, with simple words, it would have been simpler to understand what you want to achieve and, maybe, to help you.

## Five stationary states...

that can be found using solve

Please: could you write correctly the left-hand sides of all the ODEs?

 > restart:
 > kernelopts(version)
 (1)
 > deqs := {
 >   diff(Cb(t), t) = beta1*Cb(t)*Va(t) - (delta+psi1+varsigma1)*Cb(t),
 >   diff(Df(t), t) = beta4*Df(t)*Va(t) - (delta+psi4+varsigma4)*Df(t),
 >   diff(Fn(t), t) = beta3*Fn(t)*Va(t) - (delta+psi3+varsigma3)*Fn(t),
 >   diff(Rs(t), t) = psi1*Cb(t)+psi2*Sc(t)+psi3*Fn(t)+psi4*Df(t)-delta*Rs(t)-varsigma1*Cb(t)-varsigma2*Sc(t)-varsigma3*Fn(t)-varsigma4*Df(t),
 >   diff(Sc(t), t) = beta2*Sc(t)*Va(t)-(delta+psi2+varsigma2)*Sc(t),
 >   diff(Va(t), t) = Phi-(beta1*Cb(t)+beta2*Sc(t)+beta3*Fn(t)+beta4*Df(t))*Va(t)-delta*Va(t)
 > }:
 > print~(deqs):
 (2)
 > ind := [indets(rhs~(deqs), function)[]]; unknowns := map2(op, 0, ind);
 (3)
 > eval(rhs~(deqs), ind =~ unknowns):
 > print~(solve(%, unknowns)):
 (4)
 > # No infection... does it mean Va = 0?
 > subsystem := eval(remove(has, deqs, diff(Va(t), t)), Va(t) = 0):
 > print~(subsystem):
 (5)
 > ind := [indets(rhs~(subsystem), function)[]]: unknowns := map2(op, 0, ind);
 (6)
 > eval(rhs~(subsystem), ind =~ unknowns);
 (7)
 > print~(solve(%, unknowns)):
 (8)
 >

As your stationary state without infection doesn't contain Va(t), I suspected that Va(t) was the "infection" (???), in which case the solution is given (thin black text) in the attached file.

Two questions:

1. What does "absence of infection" mean?
2. What is "Q"? You claim it is the stationary value of Cb in case of no infection, but "Q" doesn't appear in your equations.

## Some corrections and a solution...

Hi,

The attached file contains the construction of the Inverse-ChiSquare(n) distribution (first part of the file) and of a Generalized-Inverse-ChiSquare(a, b, n) distribution (the distribution of Y = a + b*X where X ~ Inverse-ChiSquare(n) ).

 > restart; st:=time():
 > with(Statistics):

Inverse-ChiSquare distribution

 > InvChi2 := proc(n)
 >   # source = https://en.wikipedia.org/wiki/Inverse-chi-squared_distribution
 >   description "inverse chisquare distribution with n=degrees of freedom";
 >   Distribution(
 >       PDF = unapply( 2^(-n/2)/GAMMA(n/2)*t^(-n/2-1)*exp(-1/2/t), t)
 >     , CDF = unapply(GAMMA(n/2, 1/2/t)/GAMMA(n/2), t)
 >     , Mean = 1/(n-2)
 >     , Median = 1/(n*(1-2/9/n)^3)
 >     , Mode = 1/(n+2)
 >     , Variance = piecewise(n > 4, 2/(n-2)^2/(n-4), undefined)
 >     , Skewness = piecewise(n > 6, 4/(n-6)*sqrt(2*(n-4)), undefined)
 >     , Kurtosis = piecewise(n > 8, 12*(5*n-22)/(n-6)/(n-8), undefined)
 >     , Conditions = [n > 2]
 >     , RandomSample = proc(N::nonnegint)
 >         1 /~ Sample(GammaDistribution(2, n/2), N)
 >       end proc
 >   );
 > end proc:
 > IC2:= RandomVariable(InvChi2(n)); pdf      = PDF(IC2, t); cdf      = CDF(IC2, t); mean     = Mean(IC2); median   = Median(IC2); variance = Variance(IC2); skewness = Skewness(IC2); kurtosis = Kurtosis(IC2);
 (1)
 > IC2:= RandomVariable(InvChi2(5)); pdf      = PDF(IC2, t); cdf      = CDF(IC2, t); mean     = Mean(IC2); median   = Median(IC2); mode     = Mode(IC2); variance = Variance(IC2); skewness = Skewness(IC2); kurtosis = Kurtosis(IC2); S := Sample(IC2, 10^4): Histogram(log[10]~(S), title=typeset(log__10('S')));

Generalized-Inverse-ChiSquare distribution

(aka scaled and translated Inverse-ChiSquare distribution)

 > GenInvChi2 := proc(a, b, n)
 >   # source = https://en.wikipedia.org/wiki/Inverse-chi-squared_distribution
 >   description "inverse chisquare distribution with a=location, b=scale, n=degrees of freedom";
 >   local IC2 := RandomVariable(InvChi2(n));
 >   local pdf := unapply(PDF(IC2, t), t);
 >   local cdf := unapply(CDF(IC2, t), t);
 >   Distribution(
 >       PDF = unapply( pdf((t-a)/b)/b, t)
 >     , CDF = unapply( cdf((t-a)/b), t)
 >     , Mean = a + b*Mean(IC2)
 >     , Median = a + b*Median(IC2)
 >     , Mode = a + b*Mode(IC2)
 >     , Variance = b^2*Variance(IC2)
 >     , Skewness = Skewness(IC2)
 >     , Kurtosis = Kurtosis(IC2)
 >     , Conditions = [n > 2, b > 0]
 >     , RandomSample = proc(N::nonnegint)
 >         a +~ b /~ Sample(GammaDistribution(2, n/2), N)
 >       end proc
 >   );
 > end proc:
 > GIC2 := RandomVariable(GenInvChi2(a, b, n)); pdf      = PDF(GIC2, t); cdf      = CDF(GIC2, t); mean     = Mean(GIC2); median   = Median(GIC2); mode     = Mode(GIC2); variance = Variance(GIC2); skewness = Skewness(GIC2); kurtosis = Kurtosis(GIC2);
 (2)
 > GIC2 := RandomVariable(GenInvChi2(2, 4.32, 3)); pdf      = PDF(GIC2, t); cdf      = CDF(GIC2, t); mean     = Mean(GIC2); median   = Median(GIC2); mode     = Mode(GIC2); variance = Variance(GIC2); skewness = Skewness(GIC2); kurtosis = Kurtosis(GIC2); S := Sample(GIC2, 10^4): Histogram(log[10]~(S), title=typeset(log__10('S')));
 >

Maybe you wonder why I used such an a priori complex construction for InvChi2?
Given that Y ~ Inverse-ChiSquare(n) iff X ~ ChiSquare(n) and Y = 1/X, another possibility would have been to write:

```InvChi2 := proc(n)
# source = https://en.wikipedia.org/wiki/Inverse-chi-squared_distribution
description "inverse chisquare distribution with n=degrees of freedom";

local X := RandomVariable(ChiSquare(n)):
local Y := 1/X:
Distribution(
PDF = unapply(PDF(Y, t), t)
, CDF = unapply(CDF(Y, t), t)
);
end proc:
```

But the problem is that Maple has difficulties computing some of these statistics, when it does not return plainly wrong results (I'm almost sure of that, but I need to check it one more time).
InverseChi2_Why_explicit_expresions.mw
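The closed-form moments used above can be sanity-checked by simulation. A Python sketch (stdlib only; it assumes the representation Y = 1/X with X ~ ChiSquare(n), i.e. X ~ Gamma(shape n/2, scale 2), which is the same recipe as the RandomSample procedure):

```python
import random

def inv_chi2_sample(n, size, rng):
    # Y = 1/X with X ~ ChiSquare(n); a chi-square(n) draw is a
    # Gamma(shape=n/2, scale=2) draw (gammavariate takes shape, then scale).
    return [1.0 / rng.gammavariate(n / 2.0, 2.0) for _ in range(size)]

rng = random.Random(12345)
n = 10
s = inv_chi2_sample(n, 200000, rng)
mean = sum(s) / len(s)
print(mean)   # theoretical mean of Inverse-ChiSquare(n) is 1/(n-2) = 0.125
```

The sample mean should land very close to 1/(n-2), matching the Mean entry of the Distribution.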

LAST POINT: I didn't check whether the definitions of the skewness and kurtosis I used here, taken from Wikipedia (reference within the procedures), are the ones used in Maple (which are normalized, and quite common, expressions).

## First remark:Does the matrix you want to...

First remark:
Does the matrix you want to build have some particular type?
For instance, are its elements integer, real, complex? Is it symmetric (diagonal would be perfect :-) )?

Second remark:
Let's suppose that you build this matrix by modifying some initial matrix in an ad hoc way.
Let M be this matrix:

```
N := ...;
M := LinearAlgebra:-RandomMatrix(N$2, generator=a..b)
```

One can easily build a transformation F (in fact an infinity of them) applied to M such that Determinant(F(M)) equals some given value. But this implies that the elements of F(M) can no longer be considered as samples from "generator=a..b"
(Using "generator=a..b" means that the elements of the random matrix are realizations of N^2 mutually independent and identically distributed random variables... but forcing the determinant to some value introduces constraints which break the independence.)
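One such transformation F is simply to rescale a single row: multiplying one row by a constant multiplies the determinant by that constant (but that row is then no longer distributed like the others). A Python sketch on a 2x2 integer matrix (the helpers `det2` and `force_det` are mine, purely illustrative):

```python
import random
from fractions import Fraction

def det2(m):
    # Determinant of a 2x2 matrix given as [[a, b], [c, d]].
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def force_det(m, target):
    # F(M): rescale the first row so that det(F(M)) == target exactly.
    d = det2(m)
    k = Fraction(target, d)          # exact rational scaling factor
    return [[k * x for x in m[0]], list(m[1])]

rng = random.Random(7)
M = [[rng.randint(1, 9) for _ in range(2)] for _ in range(2)]
while det2(M) == 0:                  # retry singular draws
    M = [[rng.randint(1, 9) for _ in range(2)] for _ in range(2)]
F_M = force_det(M, 5)
print(det2(F_M))
```

The forced determinant is exact, but the first row of F(M) now contains rationals outside the original a..b generator, illustrating the remark above.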

Final remark:
In the case of random matrices of integers, only integer values can be determinants of such matrices.
Conversely: Prob(Determinant(M)::integer | a::real, b::real) = 0.

The only way to preserve the statistics of the random matrices while forcing their determinant is to generate a (potentially) large number of such matrices and keep the one whose determinant is closest to the desired one.
But this is a highly inefficient method.

Here are a few transformations of an arbitrary random matrix into another one whose determinant has a given value.

ManySolutions.mw

## From eq.27 to eq.30...

 > restart
 > eq_27 := diff(u[n](t), t$2) = alpha*(- exp(-beta*u[n-1](t)) + 2*exp(-beta*u[n](t)) - exp(-beta*u[n+1](t)))
 (1)
 > eq_28 := n -> exp(-beta*u[n](t)) = 1+v[n](t)/alpha
 (2)
 > aux_1 := isolate(eq_28(n), u[n](t));
 (3)
 > eq_29 := eval(convert(lhs(eq_27), Diff), aux_1) = simplify(eval(rhs(eq_27), {seq(eq_28(n+k), k=-1..1)}));
 (4)
 > lhs_27 := eval(lhs(eq_27), u[n]=(t -> u[n](xi[n](t)))):
 > d      := [select(has, indets(%), diff)[]]:
 > lhs_27 := eval(lhs_27, d =~ eval(d, xi[n](t) = d__1*n + c__1*t + zeta__1)):
 > lhs_27 := convert(eval(lhs_27, xi[n](t)=xi[n]), diff);
 (5)
 > aux_2 := select(has, indets(rhs(aux_1), function), ln)[] = H(t);
 (6)
 > eq_29_a := simplify(isolate(convert(eval(eq_29, aux_2), diff), diff(H(t), t$2)));
 (7)
 > eq_30_1 := lhs_27=rhs(eq_29_a)
 (8)
 > eq_30 := eval(convert(eq_30_1, Diff), u[n](xi[n])=lhs(aux_2))
 (9)

## There are still a lot of informations yo...

There is still a lot of information you did not provide:

• What are the ranges of the parameters you want to adjust?
• At least, can you tell which ones are positive and which ones are negative?
• Do you have some prior guess, I mean an a priori reasonable initial point to give to the optimization procedure?
• What are your initial conditions?
• At some point you will have to minimize the discrepancy between experimental data and data generated by your model: how do you "measure" this discrepancy? Stated otherwise, what is the expression of the function you want to minimize?
• ... and maybe some other stuff I do not have in mind.

Whatever, here is a file where you will find how to fit the 8 parameters (I hope I didn't miss any).
The strategy used is quite crude. It is likely that we could do something more subtle which accounts for the specific structure of your model (in de1 only A(t) appears, so its parameters could be adjusted independently of the others, leading to a "two-shot fitting method").

In case A(t), l(t) and S(t) have different orders of magnitude, as seems to be the case in the experimental data you provide, it's likely that normalizing them (and the simulation data too) will be a good way to preprocess the data.
Weighting the discrepancy is also often a good practice.
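The normalization idea can be sketched as: divide each observable by its own scale before summing squared residuals, optionally with weights. A Python illustration (everything here, including the `discrepancy` helper, is mine and not the worksheet's code):

```python
def discrepancy(exp_data, sim_data, weights=None):
    # exp_data, sim_data: lists of (A, l, S) triples at matching times.
    # Each column is normalized by its largest experimental magnitude so
    # that variables with different orders of magnitude contribute comparably.
    ncols = len(exp_data[0])
    weights = weights or [1.0] * ncols
    scales = [max(abs(row[j]) for row in exp_data) or 1.0 for j in range(ncols)]
    total = 0.0
    for e_row, s_row in zip(exp_data, sim_data):
        for j in range(ncols):
            r = (e_row[j] - s_row[j]) / scales[j]
            total += weights[j] * r * r
    return total

# The experimental table quoted later in this answer:
exp_data = [(201.8, 89.3, 7), (257.27, 113, 13), (432.82, 150.8, 20), (730.909, 169.8, 27)]
print(discrepancy(exp_data, exp_data))
```

Without the division by `scales`, the A column (hundreds) would dominate the S column (tens) in the objective.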

 > restart;
 > with(plots):with(CurveFitting);
 (1)
 > with(DEtools):with(linalg): with(LinearAlgebra): with(plots): #with(Student[VectorCalculus]); with(plots, implicitplot): with(Student[Calculus1]):

The vector field:

 > rhs1 := (A,l,S) -> alpha__AP*A - mu__A*A; rhs2 := (A,l,S) -> alpha__lS*l*S - mu__l*l - beta__Al*l*A; rhs3 := (A,l,S) -> alpha__S*S - beta__lS*l*S - beta__AS*A*S;
 (2)

Equilibria:

 > equilibria := solve([rhs1(A,l,S), rhs2(A,l,S), rhs3(A,l,S)], [A,l,S]);
 (3)

The differential equations:

 > de1 := diff(A(t), t) = rhs1(A(t),l(t),S(t)); de2 := diff(l(t), t) = rhs2(A(t),l(t),S(t)); de3 := diff(S(t), t) = rhs3(A(t),l(t),S(t));
 (4)
 > # To dsolve numerically, all the parameters must be given numerical values, or
 > # you must use the option "parameters":
 > sys := {de1, de2, de3, A(0)=A__0, l(0)=l__0, S(0)=S__0}:
 > params := convert(indets(sys, name) minus {t}, list);
 (5)
 > NP := numelems(params): P := [seq(rand(0. .. 1.)(), n=1..NP)]
 (6)
 > # Solution procedure
 > Simulator := proc(P, times)
 >   dsolve(eval(sys, params=~P), numeric, output=Array(times)):
 > end proc:
 > sys    := {de1, de2, de3, A(0)=A__0, l(0)=l__0, S(0)=S__0}:
 > params := convert(indets(sys, name) minus {t}, list);
 > NP     := numelems(params):
 > # A set of parameters I will consider as the set which generates the experimental data
 > P_exp := [seq(rand(0. .. 1.)(), n=1..NP)]:
 > T_exp := [$1..4]:
 > print~(params=~P_exp):
 > # Given P_exp and T_exp, here are the experimental data you would get, provided:
 > #  (1) Times T_exp are free of measurement errors.
 > #  (2) A, l and S values are free of measurement errors.
 > #  (3) The model itself (set of odes) has no modeling bias (meaning it represents
 > #      perfectly the same reality which is observed during the experiments).
 > # Given the way "Experiences" are generated here, these three conditions are verified.
 > # But you must be aware of them when you deal with true experimental data.
 > # I chose this strategy because you did not provide any information on the prior
 > # knowledge about the parameters to fit.
 > Experiences := Simulator(P_exp, T_exp)[2][1]
 (7)
 > # Define an objective function aimed at measuring the discrepancy between experiences
 > # and simulations.
 > # I suppose that your initial conditions A__0, l__0 and S__0 (parameters 1 to 3) have
 > # KNOWN values that can be isolated from your 11 parameters (8 UNKNOWN parameters remain).
 > # Here I chose something very simple.
 > Discrepancy := proc(p4, p5, p6, p7, p8, p9, p10, p11)
 >   local P   := [IC[], _passed]:
 >   local Sim := Simulator(P, T_exp)[2][1]:
 >   add((Experiences[.., 2..4] - Sim[.., 2..4])^~2)
 > end proc:
 > # Example of use.
 > # I suppose that your initial conditions A__0, l__0 and S__0 are KNOWN values that can
 > # be isolated from your 11 parameters.
 > IC     := P_exp[1..3];
 > P_exp;
 > P_test := P_exp[4..NP] +~ [seq(rand(-0.1..0.1)(), n=4..NP)];
 > Discrepancy(P_test[])
 (8)
 > # Find the set of parameter values which minimizes the discrepancy
 > ParamRanges := seq(0.5*P_exp[n]..2*P_exp[n], n=4..NP):
 > opt := Optimization:-Minimize(Discrepancy, ParamRanges):
 > MinimalDiscrepancy := opt[1];
 > interface(rtablesize=NP+1):
 > `<,>`(
 >   Vector[row](3, ["Parameter", "Exp. value", "Best Fit"]),
 >   evalf[6](Matrix(NP-3, 3, (i, j) -> `if`(j=1, params[i+3], `if`(j=2, P_exp[i], opt[2][i]))))
 > )
 (9)
 > # Another run from a different initial point.
 > # Find the set of parameter values which minimizes the discrepancy
 > StartFrom := [seq(rand(0. .. 1.)(), n=4..NP)]:
 > opt := Optimization:-Minimize(Discrepancy, ParamRanges, initialpoint=StartFrom):
 > MinimalDiscrepancy := opt[1];
 > interface(rtablesize=NP+1):
 > `<,>`(
 >   Vector[row](3, ["Parameter", "Exp. value", "Best Fit"]),
 >   evalf[6](Matrix(NP-3, 3, (i, j) -> `if`(j=1, params[i+3], `if`(j=2, P_exp[i], opt[2][i]))))
 > )
 (10)
 > # FUZZY EXPERIMENTAL DATA
 > ErrorFreeExperiences := copy(Experiences):
 > FuzzyExperiences     := Experiences + `<|>`(Vector(4, 0), Statistics:-Sample(Normal(0, 0.1), [4, 3])):
 > Experiences          := copy(FuzzyExperiences);
 (11)
 > # Run 1
 > opt := Optimization:-Minimize(Discrepancy, ParamRanges, initialpoint=StartFrom):
 > MinimalDiscrepancy := opt[1];
 > interface(rtablesize=NP+1):
 > Res_1 := `<,>`(
 >   Vector[row](3, ["Parameter", "Exp. value", "Best Fit"]),
 >   evalf[6](Matrix(NP-3, 3, (i, j) -> `if`(j=1, params[i+3], `if`(j=2, P_exp[i], opt[2][i]))))
 > ):
 > # Run 2 with another initial point
 > StartFrom := [seq(rand(0. .. 1.)(), n=4..NP)]:
 > opt := Optimization:-Minimize(Discrepancy, ParamRanges, initialpoint=StartFrom):
 > MinimalDiscrepancy := opt[1];
 > interface(rtablesize=NP+1):
 > Res_2 := `<,>`(
 >   Vector[row](3, ["Parameter", "Exp. value", "Best Fit"]),
 >   evalf[6](Matrix(NP-3, 3, (i, j) -> `if`(j=1, params[i+3], `if`(j=2, P_exp[i], opt[2][i]))))
 > ):
 > # Comparison:
 > Res_12 := `<|>`(Res_1, Res_2[.., -1])
 (12)

IMPORTANT REMARK

It could be surprising that 4 triples (A, l, S) (thus 12 data points) do not determine the 8 parameters uniquely.
The two runs above give significantly different values (see for instance mu__A).

The reason is that the experimental data are NOT INDEPENDENT.

Indeed A(t), l(t), S(t) are not independent, and thus the experimental observation of, let's say, S(t) at some time is not independent of the observation of, let's say, l(t) at this same time.
This is why the two runs above give different "best fits" of roughly the same quality (both discrepancies are around 3e-8).

The dependency of A(t), l(t) and S(t) explains the lack of uniqueness of the "best fit".
Among the 8 parameters it's likely that a subset of them is of main importance, while the remaining subset is only responsible for "small adjustments" (this is a rule of thumb in real multi-parameter problems).

Solutions of almost equal quality are spread over a variety of R^11 whose dimension is the cardinality of the subset of "important" parameters.

Collecting the values of A, l and S at only 4 times is not enough.
Increasing the number of time observations could help reduce the dimension of the previous variety, which is not at all a certainty, but its main interest would be to assess the quality and the robustness of the "best fit" through some training-and-validation strategy.
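The non-uniqueness can be illustrated on a toy model: if the data only constrain the product a*b, every pair with the same product fits equally well. A Python illustration (my own toy example, not the worksheet's model):

```python
def residual(params, xs, ys):
    # Toy model y = a*b*x: only the product a*b is identifiable,
    # so the "best fit" lies on the whole curve a*b = const.
    a, b = params
    return sum((y - a * b * x) ** 2 for x, y in zip(xs, ys))

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x for x in xs]      # data generated with a*b = 2

fit1 = (1.0, 2.0)               # two very different parameter pairs...
fit2 = (4.0, 0.5)               # ...with exactly the same (zero) discrepancy
print(residual(fit1, xs, ys), residual(fit2, xs, ys))
```

Two optimization runs started from different points may land anywhere on that curve, which mirrors the two runs above giving different "best fits" of the same quality.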

HOW TO HANDLE YOUR REAL PROBLEM?

 > # (0) Provide initial conditions.
 > #     For simplicity keep the ones shown above and write
 > IC := "a list of 3 values"; `something like` = P_exp[1..3];
 (13)
 > # (1) Define your real experiments Experiences := Matrix(4, 4, [1,201.8,89.3,7, 2,257.27,113,13, 3,432.82,150.8,20, 4,730.909,169.8,27])
 (14)
 > # (2) Define the ranges of each parameter
 > ParamRanges := "a sequence of 8 ranges"; `something like` = seq(0.5*P_exp[n]..2*P_exp[n], n=4..NP)
 (15)
 > # (3) Helpful, though not necessary: define a good initial guess
 > StartFrom := "a list of 8 values, each in the corresponding ParamRange"; `something like` = [seq(rand(0. .. 1.)(), n=1..NP)];
 (16)
 > # (4) Run this command # opt := Optimization:-Minimize(Discrepancy, ParamRanges, initialpoint=StartFrom):

## A trick...

is to reparameterize fun by writing y = a*x and z = b*x, which gives an expression (K) that only depends on a and b.
Then maximizing K from a reasonable initial point gives (sometimes, but not always) a maximum value close to 3.

I think that working on fun before maximizing it could be useful.
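The reparameterization works because fun is homogeneous of degree 0: scaling (x, y, z) by a common factor leaves it unchanged, so one may fix x and keep only y = a*x, z = b*x. A direct Python transcription of the expression checks this (for illustration; avoid x = y = z, where the denominator vanishes):

```python
from math import sqrt

def fun(x, y, z):
    # Direct transcription of the Maple expression for fun.
    s = x + y + z
    num = (2 * s * (sqrt(y*z*(z+x)*(x+y)) / (z + 2*x + y)
                    + sqrt(z*x*(x+y)*(y+z)) / (x + 2*y + z)
                    + sqrt(x*y*(y+z)*(z+x)) / (y + 2*z + x))
           - 9*x*y*z / s - 2*(x*y + x*z + y*z))
    den = sqrt(x*y*z / s) * (s - sqrt(27*x*y*z / s))
    return num / den

# Degree-0 homogeneity: fun(t*x, t*y, t*z) == fun(x, y, z)
print(fun(1.0, 2.0, 3.0), fun(10.0, 20.0, 30.0))
```

Both calls return the same value, which is exactly what justifies eliminating x in the Maple computation below.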

 > restart:
 > fun := ((2*(x+y+z))*(sqrt(y*z*(z+x)*(x+y))/(z+2*x+y)+sqrt(z*x*(x+y)*(y+z))/(x+2*y+z)+sqrt(x*y*(y+z)*(z+x))/(y+2*z+x))-9*x*y*z/(x+y+z)-2*(x*y+x*z+y*z))/(sqrt(x*y*z/(x+y+z))*(x+y+z-sqrt(27*x*y*z/(x+y+z))))
 (1)
 > J := eval(fun, [y=a*x, z=b*x]):
 > J := normal(J) assuming x > 0:
 > indets(%, name);
 > K := simplify(J, size)
 (2)
 > opt := Optimization['Maximize'](K, assume = nonnegative, initialpoint = {seq(w = evalf(rand(0. .. 10.)()), w in [a, b])})
 (3)
 > # Ok, I was lucky, because changing the initial point doesn't always lead to a solution :-(
 > limit(fun, x=+infinity)
 (4)
 > J := eval(fun, [x=a*y, z=b*y]):
 > J := normal(J) assuming y > 0:
 > indets(%, name);
 > K := simplify(J, size):
 > opt := Optimization['Maximize'](K, assume = nonnegative, initialpoint = {seq(w = evalf(rand(0. .. 10.)()), w in [a, b])})
 (5)
 > J := eval(fun, [x=a*z, y=b*z]):
 > J := normal(J) assuming z > 0:
 > indets(%, name);
 > K := simplify(J, size):
 > opt := Optimization['Maximize'](K, assume = nonnegative, initialpoint = {seq(w = evalf(rand(0. .. 10.)()), w in [a, b])}, iterationlimit=10^4)
 (6)
 >

## A first point to clarify....

Your code contains the name gamma, which is by default the Euler constant (0.5772156649): is this the value of gamma you want to use, or did you forget to do this

```
local gamma:
....
gamma := some_numeric_value;
...
dsolve(...)
```

?

About the error you get: odeplot the solution over the range t=0..0.6e-5 to see what happens.
The solutions seem to go to infinity as t tends to 0.6e-5... hence the "singularity" Maple suspects.

This "singularity" is the result of the values you chose for the parameters.
I advise you to check them, particularly their signs.

If this may help you understand where the problem could be, the attached file also contains a formal solution through series expansions.
Could_help.mw

## Does this suit you?  .....

Does this suit you?

 > restart:
 > with(plots): with(LinearAlgebra):
 > relaxG__xy := -(Matrix(3, 3, {(1, 1) = 258877736.8, (1, 2) = -258877736.8, (1, 3) = 0., (2, 1) = -258877736.8, (2, 2) = 258877736.8, (2, 3) = 0., (3, 1) = 0., (3, 2) = 0., (3, 3) = 65031182.60})) . (1/t^.9):
 > A := proc(M, T)
 >   local tau := max(1, T):
 >   matrixplot(eval(M, t=tau), heights=histogram, axes=boxed, title=typeset('t'=tau));
 > end proc:
 > animate(A, [relaxG__xy, 10*t], t=0..10, frames=11, paraminfo=false);
 >

## Corrections...

• For safer code declare D as local (D is a symbol for differentiation).
• In the definition of TPP: replace the square brackets by parentheses (square brackets have other purposes).
• A few packages are useless (linalg is an obsolete package, replaced by LinearAlgebra, for instance); others can be needed for further purposes you do not mention.

Then

```diff(TPpm(Q, Qr), Q);
g1 := isolate(%, Q);
```

works correctly.
Just keep in mind that the last line above is shorthand for

`g1 := isolate(%=0, Q);`

q1_mmcdara.mw

## These ways...

 >
 > restart:
 > f := b -> 47.965-(-(3.08*0.2*b^(1-0.7))/(GAMMA(3-0.7)))
 (1)
 > t := [seq(i, i = 1 .. 100, 10)]: a := f~(t):  #type help(elementwise) to understand the syntax
 > dataplot(t, a)
 > # Or plot(f(z), z=1..100)
 >

## A five parameters triangular distributio...

The distribution you want to build cannot have 3 parameters; 5 are needed:

• 2 couples (probability, quantile), thus 4 parameters,
• 1 more parameter for the mode.

Here is the construction of a 5-parameter Triangular distribution of type II:

 > restart:
 > with(Statistics):
 > TriangularTypeII := proc(Q, P, c)
 >    # Q = list of 2 quantiles
 >    # P = list of the associated probabilities
 >    # c = mode
 >    local F   := unapply(CDF(Triangular(a, b, c), t), t):
 >    local eqs := {F(Q[1])=P[1], F(Q[2])=P[2]}:
 >    local AB  := solve(eqs, {a, b}):
 >    local T   := RandomVariable(Triangular(op(eval([a, b], AB)), c)):
 >    printf("According to Maple help pages : %a\n", eval(["a", "b", "c"]=~[a, b, c], AB));
 >    Distribution(
 >       PDF = unapply(PDF(T, t), t),
 >       CDF = unapply(CDF(T, t), t),
 >       Mean = Mean(T),
 >       Variance = Variance(T),
 >       Mode = c,
 >       Skewness = Skewness(T),
 >       Kurtosis = Kurtosis(T),
 >       #Conditions = [Q > P],
 >       RandomSample = proc(N::nonnegint)
 >                        Sample(T, N)
 >                      end proc
 >    )
 > end proc:
 > T := RandomVariable(TriangularTypeII([3, 5], [1/4, 3/4], 4))
 According to Maple help pages : ["a" = 2-2^(1/2), "b" = 6+2^(1/2), "c" = 4]
 (1)
 > f := PDF(T, x); F := unapply(CDF(T, x), x): 'F(x)' = F(x); omega := Support(T, output='range');
 (2)
 > N := 10^3:
 > S := Sample(T, N):
 > plots:-display(
 >   Histogram(S),
 >   plot(f, x=omega, color=red, thickness=2)
 > );
 > plots:-display(
 >   ScatterPlot(sort(S), Vector(N, i -> i/N), symbol=point)
 >   , plot(F(x), x=omega, color=red, thickness=2)
 >   , gridlines=true
 >   , plot([[3, 0], [3, F(3)]], color=green)
 >   , plot([[5, 0], [5, F(5)]], color=green)
 >   , plot([[op(1, omega), F(3)], [3, F(3)]], color=green)
 >   , plot([[op(1, omega), F(5)], [5, F(5)]], color=green)
 > );
 > 'Mean'          = evalf(Mean(T)); 'EmpiricalMean' = Mean(S);  print(): 'Variance'          = evalf(Variance(T)); 'EmpiricalVariance' = Variance(S);  print(): 'Skewness'          = evalf(Skewness(T)); 'EmpiricalSkewness' = Skewness(S);  print(): 'Kurtosis'          = evalf(Kurtosis(T)); 'EmpiricalKurtosis' = Kurtosis(S);  print(): 'Mode'          = evalf(Mode(T)); 'EmpiricalMode' = Mode(S);  print():
 (3)
 > 'FirstQuartile'          = Quantile(T, 1/4); 'EmpiricalFirstQuartile' = Quantile(S, 1/4);  print(): 'ThirdQuartile'          = Quantile(T, 3/4); 'EmpiricalThirdQuartile' = Quantile(S, 3/4);  print():
 (4)
 >
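As a cross-check of the values printed above (a = 2-sqrt(2), b = 6+sqrt(2), c = 4), the triangular CDF can be evaluated directly. A small Python sketch using the standard piecewise CDF of Triangular(a, b, c) (the helper name `triangular_cdf` is mine):

```python
from math import sqrt

def triangular_cdf(t, a, b, c):
    # Standard CDF of the Triangular(a, b, c) distribution (a < c < b).
    if t <= a:
        return 0.0
    if t <= c:
        return (t - a) ** 2 / ((b - a) * (c - a))
    if t <= b:
        return 1.0 - (b - t) ** 2 / ((b - a) * (b - c))
    return 1.0

a, b, c = 2 - sqrt(2), 6 + sqrt(2), 4
print(triangular_cdf(3, a, b, c), triangular_cdf(5, a, b, c))  # ~0.25, ~0.75
```

F(3) = 1/4 and F(5) = 3/4 exactly, so the (a, b) pair returned by solve does reproduce the requested quantile/probability couples.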

One could improve this by:

• defining the support of the distribution within TriangularTypeII,
• checking the type of the arguments (Q::list, P::list),
• verifying that the components of P are in [0, 1] (the order doesn't matter provided it is consistent with those of Q),
• checking that min(Q) < c < max(Q),
• ...

WATCH OUT:
Contrary to the SubjectiveBeta distribution of your previous thread, you cannot define a TriangularTypeII distribution with parameters ([q1, q2], [p1, p2], c).
The reason is that there are too many conditions on these 5 parameters for solve to be able to solve the equations eqs formally (eqs depends on 7 quantities: q1, q2, p1, p2, c, a, b).
To give you an idea of the complexity of this problem, consider this

```
Tref := RandomVariable(Triangular(a, b, c)):
Quantile(Tref, p) assuming p >= 0, p <= 1:
quant := unapply(%, p):

# In the case the first quantile is less than c and the second is larger than c,
# here are the solutions (a, b):

q__1 := op(2, quant(p__1)):
q__2 := op(3, quant(p__2)):

eqs := {q__1 = Q__1, q__2 = Q__2};
# which displays as
# { a + (p__1*(b - a)*(c - a))^(1/2) = Q__1,
#   b - ((1 - p__2)*(b - a)*(b - c))^(1/2) = Q__2 }

solve(eqs, {a, b}):  # uncomment to display
```

## Agree...

I was preparing "my proof" when you beat me to the punch.

My demonstration relies upon the fact that, under an ad hoc transformation, the original equation is of the form x^4 + p*x^2 + q = 0.
I hadn't seen the factorization of the initial equation.

 > restart
 > eq := (x-a)^4 + (x-b)^4 - c
 (1)
 > # Rewrite eq by setting b=a+u and c=a+v
 > rew := collect(expand(eval(eq, [b=a+u, c=a+v])), x):
 > # Proceed as if you want to compute the formal solutions of this
 > # equation "by hand".
 > # As said in math books, the first step is to apply the transformation
 > # x -> x - coeff(x^3) / coeff(x^4) / 4 to remove the 3rd degree term.
 > shift := coeff(rew, x^3)/coeff(rew, x^4) / 4;
 > eq_X  := simplify(eval(rew, x=X-shift));
 (2)
 > # As we see we did more than this, because the resulting equation is of 2nd degree
 > # with respect to the auxiliary variable U = X^2.
 > eq_U := eval(eq_X, X=surd(U, 2));
 (3)
 > # Solve this equation wrt U sol_U := [solve(eq_U, U)]
 (4)
 > # Compute the solutions of equation "eq".
 > # Given the solution "sol_U" this could be done by hand, but I use
 > # Maple directly for simplicity.
 > sol_x := [solve(eval(eq, [b=a+u, c=a+v]), x)]:
 > print~(sol_x):
 (5)
 > #                           Discussion
 > #
 > # (1/2)*u + (1/2)*sqrt(-3*u^2+2*sqrt(2*u^4+2*a+2*v)) must be an integer,
 > # thus u + sqrt(-3*u^2+2*sqrt(2*u^4+2*a+2*v)) is a multiple of 2.
 > #
 > # As u is an integer, sqrt(-3*u^2+2*sqrt(2*u^4+2*a+2*v)) must be one too;
 > # then -3*u^2+2*sqrt(2*u^4+2*a+2*v) is a perfect square, noted K^2.
 > #
 > # Which implies in return that 2*u^4+2*a+2*v is a perfect square L^2 too.
 > #
 > # 2*u^4+2*a+2*v being a perfect square L^2 implies u^4+a+v is divisible by 2.
 > # Thus u^4+a+v writes 2*L^2 (not the same L as before).
 > #
 > # So either a, u, v are all even, or only one of them is.
 > #
 > #                         Different cases
 > # [1] u is even
 > #   then u/2 is an integer and sqrt(-3*u^2 + 2*sqrt(2*u^4+2*a+2*v)) must be
 > #   an even integer.
 > #   Thus -3*u^2 + 2*sqrt(2*u^4+2*a+2*v) (Rel.1) is of the form 4*K^2.
 > #
 > #   In the same way sqrt(-3*u^2 - 2*sqrt(2*u^4+2*a+2*v)) is also an even
 > #   integer; and thus -3*u^2 - 2*sqrt(2*u^4+2*a+2*v) is of the form 4*L^2
 > #   (Rel.2).
 > #
 > #   Summing Rel.1 and Rel.2 gives -6*u^2 = 4*(K^2 + L^2).
 > #   Which is impossible.
 > #   Then u cannot be even.
 > #
 > # [2] u is odd
 > #   then u/2 is of the form P+1/2 and sqrt(-3*u^2 + 2*sqrt(2*u^4+2*a+2*v)) must be
 > #   an odd integer.
 > #   Thus -3*u^2 + 2*sqrt(2*u^4+2*a+2*v) (Rel.1) is of the form (2*K+1)^2.
 > #
 > #   In the same way sqrt(-3*u^2 - 2*sqrt(2*u^4+2*a+2*v)) is also an odd
 > #   integer; and thus -3*u^2 - 2*sqrt(2*u^4+2*a+2*v) is of the form (2*L+1)^2
 > #   (Rel.2).
 > #
 > #   Summing Rel.1 and Rel.2 gives -6*u^2 = (2*K+1)^2 + (2*L+1)^2.
 > #   Which is impossible.
 > #   Then u cannot be odd.
 > #
 > # [3] u = 0 (particular case which implies b=a)
 > #   then u/2 = 0 and sqrt(-3*u^2 - 2*sqrt(2*u^4+2*a+2*v)) = sqrt(-2*sqrt(2*a+2*v)),
 > #   which is imaginary.
 > #
 > # Then there is no triple (a, b, c) such that the equation
 > #        (x-a)^4 + (x-b)^4 - c = 0
 > # possesses 4 distinct integer roots.
 >
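The conclusion can also be checked by brute force: f(x) = (x-a)^4 + (x-b)^4 is strictly convex, so any value c is attained at most twice, and 4 distinct integer roots are impossible. A Python sketch over a small grid (the ranges and the helper `max_roots` are mine, purely illustrative):

```python
from collections import Counter

def max_roots(a, b, xs):
    # For f(x) = (x-a)^4 + (x-b)^4, count how often each value c occurs over
    # the integer grid xs: that is the number of integer roots of f(x) - c = 0
    # found within the grid.
    counts = Counter((x - a) ** 4 + (x - b) ** 4 for x in xs)
    return max(counts.values())

xs = range(-30, 31)
worst = max(max_roots(a, b, xs) for a in range(-5, 6) for b in range(-5, 6))
print(worst)   # a strictly convex function takes each value at most twice
```

The maximum multiplicity found is 2 (the symmetric pair x and a+b-x), never 4, in agreement with the discussion above.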

## Here is way to make your procedure ALG t...

Here is a way to make your procedure ALG work for non-countable sets (see mainly after the PROPOSAL text).

Basically ALG doesn't work on things like [1, 2), which are intervals and not sets (as @vv pointed out).

So the idea is to start by converting each interval, for instance [1, 2), into a set (here { 1 <= z < 2 }).
Once done, your algorithm produces a set of sets, let's say A1, ..., An, which are not in a reduced form in the following sense:

• If A1 = {p < z < q, z=q} its reduced form should be {p < z <= q}
• If A1 = {p < z < q, z=p} its reduced form should be {p <= z < q}
• If A1 = {p < z <= q,  q <= z < r} its reduced form should be {p < z < r}
• and so on

The attached file shows how to convert intervals into sets and get the reduced forms of each A1, ..., An.
This file doesn't contain a new procedure to do this, because it's better that you check the result on the example I provide (and some others) before writing another version of ALG.
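The reduction step amounts to merging adjacent pieces of the real line. A Python sketch on intervals simplified to half-open [lo, hi) pairs (my own representation, not the worksheet's):

```python
def reduce_intervals(pieces):
    # pieces: list of half-open intervals [lo, hi); returns the sorted,
    # merged reduced form, gluing pieces that touch or overlap --
    # e.g. [1, 2) and [2, 3) reduce to [1, 3), just like
    # {p < z <= q, q <= z < r} reduces to {p < z < r} above.
    merged = []
    for lo, hi in sorted(pieces):
        if merged and lo <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged

print(reduce_intervals([(1, 2), (2, 3), (5, 6)]))
```

Handling the four endpoint kinds (open/closed on each side), as the Maple code below does with relations in z, adds bookkeeping but not conceptual difficulty.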

 >
 >
 (1)
 >
 >
 (2)

PROPOSAL

 > kernelopts(version)
 (3)

WATCH OUT: the code below might not work with Maple 2023
(see here  https://www.mapleprimes.com/questions/236556-Solve-Equation-With-Parameter-Using-Maple#comment295355)

As I do not have Maple 2023 I cannot assure you the following code will work correctly.

Why do you get the previous error?
The reason is that neither C[1] nor C[2] is a set

 > # Here is what C looks like with Maple 2015.2.
 > # The point is that C is not of type set(set), which is why I modified the declaration
 > # of the arguments in procedure ALG.
 > C := {RealRange(0, Open(1)), RealRange(Open(1), 2), {1}}
 (4)
 > type(C[1], set); type(C[2], set);
 (5)
 > # How to build A := `union`(C, {{}}) when not all C elements are sets?
 > Cs, Cns := selectremove(type, C, set);
 (6)
 > # Initialize A A := `union`(Cs, {{}})
 (7)
 > # Complete A with Cns elements.
 > # Step 1: build the sets corresponding to each Cns component
 > ContinuousSets := `or`(map(c -> {convert(z in c, relation)}, Cns));
 > # Step 2: assemble
 > A := A union ContinuousSets
 (8)
 > # Check your algorithm manually
 > X := {0, 2};
 > ok := false:
 > while not ok do
 >   ok := true:
 >   for S in A do
 >     if `not`(`minus`(X, S) in A) then
 >       A := `union`(A, {`minus`(X, S)});
 >       ok := false
 >     end if;
 >     for T in A do
 >       if `not`(`union`(S, T) in A) then
 >         A := `union`(A, {`union`(S, T)});
 >         ok := false
 >       end if
 >     end do
 >   end do
 > end do:
 (9)
 > A
 (10)
 > # Reduce all the sets in A.
 > # For instance:
 > #    {1, 1 < z <= 2}    should be written {And(1 <= z <= 2)}
 > #    {1 < z <= 2, 0, 2} should be written {1 < z <= 2, z=0}
 > #                                      or {Or(1 < z <= 2, z=0)}
 > # Remark: only sets which contain z possibly have to be reduced
 > Az := select(has, A, z);
 (11)
 > Areduc := NULL:
 > for a in Az do
 >   s, r := selectremove(has, a, z):
 >   ri   := map(_r -> And(_r <= z, z <= _r), r):
 >   sol  := solve({Or((s union ri)[])});
 >   map(i -> if i[]::`=` then i[] else i end if, {sol});
 >   map(i -> if i::set then And(op(i)) else i end if, %);
 >   Areduc := Areduc union {`if`(numelems(%)=1, %, {Or(%[])})};
 > end do:
 > Ar := remove(has, A, z) minus {{}}:
 > map(i -> if numelems(i)=1 then {z=i[]} else {Or((z=~i)[])} end if, Ar):
 > FinalA := {{}} union % union Areduc:
 > print~(FinalA):
 (12)
 >