## Why integrate over -oo..+oo? And a way...

The integrand (10) vanishes extremely fast far from 0:

```eval((10), t=2)
                        8.74207704900475e-101
eval((10),  t=5)
                        9.27406195293248e-1450
```

and is symmetric.

If you are really interested in getting an approximation of the integral, look at this:

```ff := (11):
L := [seq(0.2..2.0, 0.2)]:
for l in L do
printf("Range = %1.1f..%1.1f  Int = %a\n", -l, l, 2*evalf(Int(ff, t=0..l)));
end do;

Range = -0.2..0.2  Int = .953869585255756e-1
Range = -0.4..0.4  Int = .998942521814238e-1
Range = -0.6..0.6  Int = .999002157052540e-1
Range = -0.8..0.8  Int = .999002158581252e-1
Range = -1.0..1.0  Int = .999002158581254e-1
Range = -1.2..1.2  Int = .999002157052540e-1
Range = -1.4..1.4  Int = .999002158581252e-1
Range = -1.6..1.6  Int = .999002158581252e-1
Range = -1.8..1.8  Int = .999002158581252e-1
Range = -2.0..2.0  Int = .999002158581252e-1
```

You have an almost true value (error ~ 1e-15) as soon as the half integration range exceeds 1.4.
Given that most of the integration methods do not accept a value of Digits larger than 15, it seems useless to me to expect a more accurate value of the integral.

Now if you really want to integrate over -oo..+oo for some reason, I propose transforming this integration range into a finite one.
A classical transformation is t = arctanh(tau) (the integration range is now tau=-1..+1).
The result is almost instantaneous:

```with(IntegrationTools):
J    := Int(ff, t=-infinity..+infinity):
Jtau := Change(J, t=arctanh(tau), tau):
Jtau := op(1, Jtau)*Int(GetIntegrand(Jtau), tau=GetRange(op(-1, Jtau)), method=_Gquad):

Digits := 15:
CodeTools:-Usage(evalf(Jtau));
memory used=6.23MiB, alloc change=0 bytes, cpu time=67.00ms, real time=90.00ms, gc time=0ns
0.0999002158581254
```
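If you want to sanity-check this change of variable outside Maple, here is a small Python/SciPy sketch (mine; since the integrand (10) is not reproduced here, a normalized Gaussian stands in for it):

```python
import numpy as np
from scipy.integrate import quad

# Stand-in integrand: a normalized Gaussian, whose integral over R is exactly 1.
f = lambda t: np.exp(-t**2) / np.sqrt(np.pi)

# Direct integration over (-oo, +oo)
direct, _ = quad(f, -np.inf, np.inf)

# Change of variable t = arctanh(tau): dt = dtau/(1 - tau^2),
# and the infinite range becomes tau = -1..+1.
g = lambda tau: f(np.arctanh(tau)) / (1.0 - tau**2)
mapped, _ = quad(g, -1.0, 1.0)

print(direct, mapped)  # both close to 1
```

The substitution concentrates the quadrature nodes where the integrand actually lives, which is why the mapped integral converges just as well as the direct one.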

OverflowError_mmcdara.mw

## Maybe this?...

EDITED

The desired primes in the range 10^4..10^6 are found in 2.3 seconds.

 > restart

Range 10^4..10^6

 > ithprime(10^5)
 (1)
 > t0 := time():
   N1 := 10^4: N2 := 10^5:
   P  := CodeTools:-Usage( [seq(ithprime(k), k=1..N2)] ):
   B1 := ListTools:-BinaryPlace(P, N1)+1;
   B2 := ListTools:-BinaryPlace(P, 10*N2-1);
 memory used=216.44MiB, alloc change=46.01MiB, cpu time=1.64s, real time=1.64s, gc time=95.80ms
 (2)
 > count := proc(n)
     local e := StringTools:-Explode(sprintf("%d", n)):
     local s := {e[]}:
     if numelems(s) in {2, 3} then
       map(u -> ListTools:-Occurrences(u, e), [s[]]);
       if has(%, 4) then
         return n
       end if:
     end if:
   end proc:
 > ok := count~(P[B1..B2]):
 > time()-t0
 (3)
 > ok;
 (4)
 > eok := StringTools:-Explode~(sprintf~("%d", ok)):
   eok := map(op, eok):
   tok := sort(Statistics:-Tally(eok), key=(x -> lhs(x)))
 (5)
 > plots:-display(
     seq(
       plot([[k, 0], [k, rhs(tok[k+1])]], thickness=11, color=blue),
       k=0..9
     )
   )
 >
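For cross-checking the selection rule (not the Maple timing), here is a Python sketch of the same digit filter; `wanted` is my name for a function mirroring the `count` procedure, and the range is reduced to 10^4..10^5 to keep it quick:

```python
from collections import Counter
from sympy import primerange

def wanted(n):
    # 2 or 3 distinct decimal digits, one of which occurs exactly 4 times
    c = Counter(str(n))
    return len(c) in (2, 3) and 4 in c.values()

# smaller range than the original 10^4..10^6, to keep the sketch fast
primes = [p for p in primerange(10**4, 10**5) if wanted(p)]
print(len(primes), primes[:5])
```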

## Replace abs by a piecewise function...

```restart
rel :=  {x+abs(y+1)=1, y+abs(z+2)=1, z+abs(x-2)=1}
{x + |y + 1| = 1, y + |z + 2| = 1, z + |x - 2| = 1}

# replace abs(u) by piecewise(u < 0, -u, u):

rel_2 := eval(rel, abs=(u -> piecewise(u < 0, -u, u))):
print~(rel_2):
x + piecewise(y < -1, -y - 1, y + 1) = 1
y + piecewise(z < -2, -z - 2, z + 2) = 1
z + piecewise(x < 2, -x + 2, x - 2) = 1

solve(rel_2)
{x = z + 1, y = -1 - z, -2 <= z, z <= 0}

```

The attached file contains a visual check of this solution:
piecewised_relations.mw
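As a complement to the attached worksheet, here is a quick numerical check of the one-parameter family (a Python sketch of mine, not part of the original answer):

```python
import numpy as np

# Verify that x = z+1, y = -1-z solves the original system for z in [-2, 0]
checks = []
for z in np.linspace(-2.0, 0.0, 21):
    x, y = z + 1.0, -1.0 - z
    ok = (abs(x + abs(y + 1) - 1) < 1e-12
          and abs(y + abs(z + 2) - 1) < 1e-12
          and abs(z + abs(x - 2) - 1) < 1e-12)
    checks.append(ok)
print(all(checks))
```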

## You can't always be lucky...

For file T1.mw:
You have 7 equations and 3 unknowns, so you can count yourself lucky to get a solution.

For file T2.mw:
You have 11 equations and 5 unknowns {d, a[1], b[1], c[1], c[2]}: thus there is every reason for you not to get a solution (unless your 11 equations are, by chance or by construction [compare EQ0 and EQ1, or EQ9 and EQ10], mutually dependent in some way).
Note that this is all the more true given that you wrote

`solve(Eqs, {a[0], a[1], b[1], c[1], c[2]})`

`solve(Eqs, {d, a[1], b[1], c[1], c[2]})`

!!!

So let's take only 5 equations out of the 11 (this can be done in 462 different ways) and check whether each of these 5-equation systems has a solution or not.
The attached file shows that 25 systems out of 462 do not have a solution.

Let's take any one of them, for instance the system made of equations

`{EQ[i], i in [0, 1, 3, 5, 7]}`

Completing this set of equations with any non-empty subset of the following set

`{EQ[i], i in [2, 4, 6, 8, 9, 10]}`

will give a new set of more than 5 equations which has no solution.
In particular the set

`{EQ[i], i in [0, ..., 10]}`

does not have a solution.

 > # definitions of the 11 equations EQ[0]..EQ[10] (outputs (1)-(11) not exported)

 > indets(Eqs, name);
 (12)
 > 11!/5!/6!
 (13)
 > AllCombs := convert~(convert(convert~(combinat:-permute(11, 5), set), set), list):
 > AllCombs := map(OneComb -> OneComb-~1, AllCombs):
 > failures := 0:
   for r in AllCombs do
     s := solve({seq(EQ[i], i in r)}, {d, a[1], b[1], c[1], c[2]}):
     if [s] = [] then
       printf("%a\n", r):
       failures := failures+1:
     end if:
   end do:
 [0, 1, 3, 5, 7] [0, 2, 3, 5, 7] [0, 3, 4, 5, 7] [0, 3, 5, 6, 7] [0, 3, 5, 7, 8] [0, 3, 5, 7, 9] [1, 2, 3, 5, 7] [1, 3, 4, 5, 7] [1, 3, 5, 6, 7] [1, 3, 5, 7, 8] [1, 3, 5, 7, 9] [1, 3, 5, 7, 10] [2, 3, 4, 5, 7] [2, 3, 5, 6, 7] [2, 3, 5, 7, 9] [2, 3, 5, 7, 10] [3, 4, 5, 7, 8] [3, 4, 5, 7, 9] [3, 4, 5, 7, 10] [3, 5, 6, 7, 8] [3, 5, 6, 7, 9] [3, 5, 6, 7, 10] [3, 5, 7, 8, 9] [3, 5, 7, 8, 10] [3, 5, 7, 9, 10]
 > failures
 (14)
 >
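The counting argument behind all this can be illustrated with a toy example (a numpy sketch of mine, with a random linear system rather than the actual EQ system): a generic system with more equations than unknowns has no exact solution, and adding further equations cannot create one.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((11, 5))   # 11 generic linear equations, 5 unknowns
b = rng.standard_normal(11)

# least-squares "solution": the residual is nonzero, so no exact solution exists
x, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(rank, residual)
```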

## Done with Maple 2015...

```with(LinearAlgebra):

sys1   := [seq(Eq[i], i=1..4)]:
var1   := [seq(diff(v[i-1](t), t), i=1..4)]:
A1, B1 := GenerateMatrix(sys1, var1):

sys2   := [entries(B1, nolist)]:
var2   := [seq(v[i-1](t), i=1..4)]:
A2, B2 := GenerateMatrix(sys2, var2):

A1 &. <var1> = A2 &. <var2> - B2
```

## One way...

Be careful, this method doesn't handle all situations correctly.

```restart
alias(U = U(xi)):
Balance := proc(EQ)
  local eq   := eval(EQ, Diff=diff);
  local HODD := degree(eval(eval(eq, U=exp(__k*xi)), xi=0), __k);
  local HNLD := degree(eval(eq, U=__C), __C);
  M = solve(HNLD*M = HODD+M)
end proc:

eq := a*U + b*U^2 + diff(U, xi$2) + diff(U, xi):
Balance(eq)
                             M = 2

eq := a*U + b*U^2 + Diff(U, xi$2) + Diff(U, xi):
Balance(eq)
                             M = 2

eq := a*U + b*U^(-2) + diff(U, xi$5) + diff(U, xi$3) + c*U^3:
Balance(eq);
                             M = 5/2
```
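The balancing rule the procedure implements is easy to reproduce in Python/SymPy (my sketch, not the original code): HODD is the degree in k after substituting U = exp(k*xi) and evaluating at xi = 0, HNLD is the degree after substituting U = C, and M solves HNLD*M = HODD + M.

```python
import sympy as sp

xi, k, C, M = sp.symbols('xi k C M')
a, b = sp.symbols('a b')
U = sp.Function('U')(xi)

def balance(eq):
    # degree of the highest-order derivative: U -> exp(k*xi), evaluate at xi = 0
    hodd = sp.degree(sp.expand(eq.subs(U, sp.exp(k*xi)).doit().subs(xi, 0)), k)
    # degree of the highest nonlinearity: U -> C (derivatives of C vanish)
    hnld = sp.degree(eq.subs(U, C).doit(), C)
    return sp.solve(sp.Eq(hnld*M, hodd + M), M)[0]

eq = a*U + b*U**2 + sp.Derivative(U, (xi, 2)) + sp.Derivative(U, xi)
print(balance(eq))  # 2, as in the Maple output
```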

## Illustration with a 3rd degree equation...

Let

```f := a -> x^3+a*x+1
```

a third-degree polynomial in x, parameterized by some real a.
In this simple case solve(f(a), x) returns neither a list nor a set, but a sequence:

```solve(f(a), x):
whattype(%);
exprseq
```

If one defines a procedure sol this way

`sol := a -> {solve(f(a), x)}:`

one can easily verify that repeated executions of sol(a), where a is now assigned a numeric value, always return the roots in the same order (the order defined by the set):

1. If the three roots are real, the order is defined by '<': the smallest root is the first one and the largest the last one.
2. If only one root is real and the two others are of the form p-q*I and p+q*I (with q > 0), then the order is:
   1. the real root is the first one,
   2. root p-q*I is the second one,
   3. root p+q*I is the last (third) one.

To avoid mixing the roots, define the variable sola this way:

```sola := sol(a):
```

Then sola contains the roots sorted according to the order described above. This is because the formal result sol(a) contains a real root and two a priori complex conjugate roots (which might happen to be both real for some values of a).

If you let a vary over some real range, the first root in sola, which is assumed to be real, will indeed take real values, but also complex ones, which means that its rank can be 1, 2 or 3 depending on the value of a.
The plots in the attached file show how the ranks of the three roots evolve as a ranges from -5 to +5.

• The order of the roots depends on their form (real, p-q*I, p+q*I), so a root that was ranked first for some value of a can be ranked 2 or 3 for a nearby value of a, giving the false impression of a "random" order returned by solve.
• To understand (and control?) this phenomenon you must "pin" the roots, for example by doing sol := a -> {solve(f(a), x)}, and assign the roots names, for instance
```R1, R2, R3 := sola[]:

# check
R1;
R2;
R3;
```
Doing this will help you track a given root as a evolves.

root_order.mw
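The real-to-complex switch that drives this reordering is easy to see numerically; here is a small numpy sketch (mine) on the same family x^3 + a*x + 1:

```python
import numpy as np

def cubic_roots(a):
    # roots of x^3 + a*x + 1
    return np.roots([1.0, 0.0, a, 1.0])

r_neg = cubic_roots(-5.0)   # three real roots
r_pos = cubic_roots(5.0)    # one real root and a conjugate complex pair
print(r_neg)
print(r_pos)
```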

## Something like that?...

 > restart:
 > for i from 1 to 1000 do
     # your own lines are replaced by a sleep of 0.01 second
     Threads:-Sleep(0.01);
     if i mod 100 = 0 then
       DocumentTools:-Tabulate(
         [sprintf("%d", i)],
         'alignment'='left',
         'widthmode'='percentage',
         'width'=10
       )
     end if
   end do

An alternative: search on this site for "progression bar"

## An idea...

I suggest solving the PDE without any boundary condition, which will give a solution containing integration constants, and then evaluating these constants using the boundary condition.

If I'm not mistaken the solution should be this one: laplacian_mmcdara.mw

## An example...

Here is an example of a compartmental model (4 compartments) and a way to assess its transition coefficients given some observations.
As I do not have any observations for this hypothetical model, I used the classical technique which consists of creating "pseudo-observations" corresponding to the values of the model at some times.
More precisely, for an N-compartment model M(x1(t), ..., xN(t), t; K) with transition matrix K (K numeric), the "pseudo-observations" are made of P (N+1)-tuples { (t, x[1](t), ..., x[N](t)), t in {t1, ..., tP} }.
The goal is then to recover blindly an estimation of K by exploiting some prior domain Dom(K).
Without observation noise this estimation should be equal to K, provided the model is identifiable (which depends on K and on the observations).

The attached file presents the case of noise free observations and a noisy case.

Hope this will give you some idea to go further.
An_example.mw
An_example_edited.mw
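To give a concrete flavor of the technique in another language, here is a Python/SciPy sketch of mine (a hypothetical 2-compartment linear model, not the 4-compartment model of the attached files): generate noise-free pseudo-observations from a known transition matrix K, then recover K by least squares starting from a rough guess in Dom(K).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# "True" transition matrix of a hypothetical 2-compartment model x'(t) = K x(t)
K_true = np.array([[-0.5,  0.2],
                   [ 0.5, -0.7]])
x0    = np.array([1.0, 0.0])
t_obs = np.linspace(0.0, 5.0, 20)

def predict(K_flat):
    K = K_flat.reshape(2, 2)
    sol = solve_ivp(lambda t, x: K @ x, (0.0, 5.0), x0,
                    t_eval=t_obs, rtol=1e-10, atol=1e-12)
    return sol.y

obs = predict(K_true.ravel())          # noise-free pseudo-observations

def residuals(K_flat):
    return (predict(K_flat) - obs).ravel()

# start from a rough prior guess in Dom(K)
fit   = least_squares(residuals, np.array([-0.4, 0.1, 0.4, -0.6]))
K_est = fit.x.reshape(2, 2)
print(K_est)  # should be close to K_true
```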

For a 9-compartment model you have potentially 90 coefficients to assess (9^2 compartment-to-compartment coefficients plus 9 compartment-to-exterior coefficients), which is almost impossible with usual methods unless the 9-by-9 matrix K is very sparse (by "usual methods" I mean optimization methods, by contrast with neural-network methods for instance).
Even with specific estimation methods it's very unlikely that the solution you get will be "numerically unique": it often happens that the estimation which minimizes some discrepancy between model predictions and observations is not a point but a variety of higher dimension, or, even if it is indeed a single point, that large deviations in some directions around it (think of it as a point in dimension 90 for instance) do not change the value of the discrepancy significantly.

So be extremely careful in applying a minimization algorithm.
A preliminary analysis may help. For instance, for large times, the behavior of some x[n](t) is generally governed by the largest coefficient k[n,m]: this is a trick which may enable assessing the values of the leading coefficients of each equation.

Without knowing your model it's not possible to say more (see @Carl Love's reply).

## Several ways...

I assume the histograms you refer to are those created by the Statistics package?
If they come from the ImageTools package, what I say doesn't apply.

 > restart
 > with(Statistics):
 > N := 100: K := 5: S := Sample(Uniform(0, 1), N):
 > H := Histogram(S, maxbins=K): plots:-display(H);
 > # Structure of H
   op(H);
 (1)
 > # H is made of K polygons (the K vertical bars)
   g := [plottools:-getdata(H)];
 (2)
 > # To get the "bins" you can
   # (1) extract the K matrices from the list above
   bins := map2(op, 3, g);
 (3)
 > # (2) extract only the ranges
   bins := map2(op, 2, g);
 (4)
 > # Another way is to use TallyInto
   # (note this method gives the counts)
   epsilon := 1e-9:  # to catch the right end point
   r := [seq((min..max+epsilon)(S), (max-min)(S)/K)]:
   R := [seq(r[i]..r[i+1], i=1..K)]:
   T := TallyInto(S, R)
 (5)
 > # If you want the frequencies do this
   lhs~(T) =~ (rhs/N)~(T)
 (6)
 >
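For comparison, the same bins, counts and frequencies are one call away in Python/numpy (my sketch, same N and K as above):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.uniform(0.0, 1.0, 100)   # N = 100 samples of Uniform(0, 1)
K = 5

counts, edges = np.histogram(S, bins=K)     # the tallies
bins  = list(zip(edges[:-1], edges[1:]))    # the K bin ranges
freqs = counts / len(S)                     # the frequencies
print(bins)
print(counts, freqs)
```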

## Use Laplace transform...

In this kind of problem the causality principle must hold: the Dirac pulse is the cause of y(t), meaning that y(t) doesn't exist before the pulse is applied.
So it is pointless to consider the limit of y(t) as t goes to 0 from the left.
The Laplace transform, for instance, is perfectly suited to solve this (linear) problem:

```restart
de := D(y)(t) + y(t)/tau = Dirac(t)/tau;
              D(y)(t) + y(t)/tau = Dirac(t)/tau
with(inttrans):
lde := laplace(de, t, p)
    p*laplace(y(t), t, p) - y(0) + laplace(y(t), t, p)/tau = 1/tau
lde0 := eval(lde, y(0)=0):
ly := rhs(isolate(lde0, laplace(y(t), t, p))):
Y := unapply(invlaplace(ly, p, t), t)
                    t -> exp(-t/tau)/tau
```
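The same computation can be cross-checked in Python/SymPy (a sketch of mine): write Y(p) from the transformed equation with y(0) = 0, then invert.

```python
import sympy as sp

t, p, tau = sp.symbols('t p tau', positive=True)

# Laplace transform of y' + y/tau = Dirac(t)/tau with y(0^-) = 0:
#   p*Y(p) + Y(p)/tau = 1/tau   =>   Y(p) = 1/(tau*p + 1)
Y = 1 / (tau*p + 1)
y = sp.inverse_laplace_transform(Y, p, t)
print(sp.simplify(y))  # causal solution: exp(-t/tau)/tau for t > 0
```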

## Two solutions...

Two solutions:

```restart:
with(LinearAlgebra):

sanity := proc()
local A, __U, __S, __Vt;
A :=RandomMatrix(3,10);
__U, __S, __Vt := SingularValues(A, output=['U','S','Vt']);
end proc:

sanity():
```

or (IMO better for future use)

```restart:
with(LinearAlgebra):

sanity := proc()
local A;
A :=RandomMatrix(3,10);
SingularValues(A, output=['U','S','Vt']);
end proc:

U, S, Vt := sanity():
```

## Solution up to xi=0.25...

The key (a key?) to getting a solution up to xi=0.25 is to use a continuation method (see help(dsolve, numeric_bvp, advanced)).
I wasn't able to get a solution for higher values of xi.

Does this "critical" value of 0.25 mean something to you?
If not you can try, as suggested by the error messages, to change the continuation strategy... but this likely requires a lot of time, which I don't have to help you further.

secod_code_mmcdara.mw
