mmcdara

MaplePrimes Activity


These are answers submitted by mmcdara

The quantity you want to minimize is a number:

evalf(integralOF);
                          0.2759938793

The constraints you give are meaningless tautologies:

totalconstr;
               {0 <= 2, 0 <= 7, 4 <= 7, 7 <= 20}


integralOF is of the form

integral := int(Integrand, x=1..2)

where, obviously:

  • Integrand does not depend on x, so it is just a number N,
  • so that integralOF = (2-1)*N = N.

The outer integral is therefore meaningless.
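A minimal illustration of this point (my sketch, where c is a hypothetical name standing for the x-free integrand):

int(c, x = 1..2);
                               c

that is, integrating an expression which does not contain x over x = 1..2 simply multiplies it by (2-1).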

 

Maybe (my guess) what you wanted was to minimize integralOF seen as a function of A, K and M ???
In such a case the first step is to remove these three lines from your code:

K := 2;
A := 2;
M := 7;

totalconstr then becomes

totalconstr := {0 <= A, 0 <= K, 0 <= M, M <= 20, A*K <= M}

This could make sense.


Assuming this is what you wanted to do, one obtains the following result:

  • The minimum value of integralOF is -1/tr (that is -1000/6, about -166.67).
  • The minimizer is the triple K=0, A=0, M=0
    (the proof for A=0 is quite simple, and I suppose one can prove the whole triple is indeed the minimizer).



If I'm wrong somewhere, or if I misinterpreted what I thought you wanted to do, please feel free to let me know.
But I believe you should begin by reconsidering your problem and expressing it correctly.

Here is the code.

 

restart

a2 := 18:

b2 := 5:

a1 := 3:

b1 := 2.5:

ci := 0.05:

cr := 1:

cf := 10:

p := 0.1:

L := 0.5:

S := 3:

T := 10:

cud := 50:

co := 0.02:

tr := 0.006:

ti := 0.0014:

tf := 0.014:

f01 := x -> b1*(x/a1)^(b1 - 1)*exp(-(x/a1)^b1)/a1:

f02 := x -> b2*(x/a2)^(b2 - 1)*exp(-(x/a2)^b2)/a2:

fx := x -> p*f01(x) + (1 - p)*f02(x):

fh := h -> L*exp(-L*h):

Fh := h -> 1 - exp(-L*h):

# Several differences here from your code.
# Are you implicitly assuming that K is an integer?

P1_1 := (K, A, M) -> add(Int(fx(x)*(1 - Fh(i*A - x)), x = (i - 1)*A .. i*A), i = 1 .. K):
P2_1 := (K, A, M) -> add(Int(fh(h)*fx(x), [h = 0 .. i*A - x, x = (i - 1)*A .. i*A]), i = 1 .. K):
P3_1 := (K, A, M) -> Int(fh(h)*fx(x), [h = 0 .. M - x, x = A*K .. M]):
P4_1 := (K, A, M) -> Int(fx(x)*(1 - Fh(M - x)), x = A*K .. M):
P5_1 := (K, A, M) -> Int(fx(x), x = M .. infinity):

L1_1 := (K, A, M) -> add((i*A + tr + i*ti)*P1_1(K, A, M), i = 1 .. K):
L2_1 := (K, A, M) -> add((tf + (i - 1)*ti)*Int((x + h)*fh(h)*fx(x), [h = 0 .. i*A - x, x = (i - 1)*A .. i*A]), i = 1 .. K):
L3_1 := (K, A, M) -> (K*ti + tf)*Int((x + h)*fh(h)*fx(x), [h = 0 .. M - x, x = A*K .. M]):
L4_1 := (K, A, M) -> (K*ti + tr + M)*Int(fx(x)*(1 - Fh(M - x)), x = A*K .. M):
L5_1 := (K, A, M) -> (K*ti + tr + M)*Int(fx(x), x = M .. infinity):

C1_1 := (K, A, M) -> add((ci*i + cr)*P1_1(K, A, M), i = 1 .. K):
C2_1 := (K, A, M) -> add(((i - 1)*ci + cf)*P2_1(K, A, M), i = 1 .. K):
C3_1 := (K, A, M) -> (K*ci + cf)*P3_1(K, A, M):
C4_1 := (K, A, M) -> (K*ci + cr)*Int(fx(x)*(1 - Fh(M - x)), x = A*K .. M):
C5_1 := (K, A, M) -> (K*ci + cr)*Int(fx(x), x = M .. infinity):

Ptotal_1 := (K, A, M) -> P1_1(K, A, M) + P2_1(K, A, M) + P3_1(K, A, M) + P4_1(K, A, M) + P5_1(K, A, M):
Ltotal_1 := (K, A, M) -> L1_1(K, A, M) + L2_1(K, A, M) + L3_1(K, A, M) + L4_1(K, A, M) + L5_1(K, A, M):
Ctotal_1 := (K, A, M) -> C1_1(K, A, M) + C2_1(K, A, M) + C3_1(K, A, M) + C4_1(K, A, M) + C5_1(K, A, M):

Cost_rate_1 := (K, A, M) -> Ctotal_1(K, A, M) / Ltotal_1(K, A, M):

integralOF := (K, A, M) -> Cost_rate_1(K, A, M) * (A-1): # Is that it???

J := proc(K, A, M)
  evalf(integralOF(K, A, M))
end proc:

# example
J(2, 2, 7)

.2759938792

(1)

const1 := proc(K)      0 <= K  end proc:

const2 := proc(K, A) K*A <= M  end proc:

const3 := proc(K)      M <= 20 end proc:

const4 := proc(A)      0 <= A  end proc:

const5 := proc(M)      0 <= M  end proc:

# A naive attempt

optimized_cost := Optimization:-Minimize(
                    integralOF
                    , {const1, const2, const3, const4, const5}
                    , initialpoint=[2, 2, 7]
                  );

Error, (in Optimization:-NLPSolve) could not store 0. <= HFloat(2.0) in a floating-point rtable

 

# Let's begin by analyzing Cost_rate_1(K, A, M):
#
# printing Cost_rate_1(K, A, M) shows it writes

Sum(alpha[i]*Int(f[i](x), x=a[i]..b[i]), i=1..K) / Sum(beta[i]*Int(g[i](x), x=c[i]..d[i]), i=1..K)

(Sum(alpha[i]*(Int(f[i](x), x = a[i] .. b[i])), i = 1 .. K))/(Sum(beta[i]*(Int(g[i](x), x = c[i] .. d[i])), i = 1 .. K))

(2)

# With alpha[i] > 0 for all i=1..K
# and  beta[i]  > 0 for all i=1..K.
#
# Moreover, f[i](x) and g[i](x) are positive functions for
# all i=1..K.
#
# Then Cost_rate_1(K, A, M) is positive for all values of K, A and M
# which respect const1..const5.
#
# It follows that a condition for integralOF to be minimum is
#        A = 0
#
# From the results this little piece of code provides, one may infer
# that the minimum of integralOF is reached for
#        A = 0
#        M = 0
#        K = 0
#
# Note this triple verifies the five constraints
#

RES := NULL:
for K from 0 to 4 do
  for M from 0 to 4 do
    j := J(K, 0, M):
    printf("K = %2d  M=%2d  J=%2.5f\n", K, M, j):
    RES := RES, [K, 0, M, j]:   # note: A is fixed to 0 in this exploration
  end do:
end do:

K =  0  M= 0  J=-166.66667

K =  0  M= 1  J=-1.00206

K =  0  M= 2  J=-0.53638

K =  0  M= 3  J=-0.41296

K =  0  M= 4  J=-0.36673

K =  1  M= 0  J=-141.89189

K =  1  M= 1  J=-1.05034

K =  1  M= 2  J=-0.56110

K =  1  M= 3  J=-0.42978

K =  1  M= 4  J=-0.37965

K =  2  M= 0  J=-125.00000

K =  2  M= 1  J=-1.09848

K =  2  M= 2  J=-0.58578

K =  2  M= 3  J=-0.44659

K =  2  M= 4  J=-0.39255

K =  3  M= 0  J=-112.74510

K =  3  M= 1  J=-1.14650

K =  3  M= 2  J=-0.61043

K =  3  M= 3  J=-0.46338

K =  3  M= 4  J=-0.40544

K =  4  M= 0  J=-103.44828

K =  4  M= 1  J=-1.19437

K =  4  M= 2  J=-0.63504

K =  4  M= 3  J=-0.48015

K =  4  M= 4  J=-0.41832

 

for K from 0 to 10 do
  j := J(K, 0, 0):
  printf("K = %2d  J=%2.5f\n", K, j):
end do:

K =  0  J=-166.66667

K =  1  J=-141.89189
K =  2  J=-125.00000
K =  3  J=-112.74510
K =  4  J=-103.44828
K =  5  J=-96.15385
K =  6  J=-90.27778
K =  7  J=-85.44304
K =  8  J=-81.39535
K =  9  J=-77.95699
K = 10  J=-75.00000

 

# A procedure which removes all terms of the form Int(..., x=0..0)

keep := proc(L)
  local L1 := [op(L)]:            # the terms of the sum L
  local L2 := map2(op, -1, L1):   # last operand of each term (expected to be the Int(...) factor)
  local L3 := map2(op, -1, L2);   # last operand of each Int, i.e. its integration range
  local L4 := has~(L3, x=0..0);   # flag the terms whose range is the degenerate x=0..0
  add(map(i -> if `not`(L4[i]) then return L1[i] end if, [$1..nops(L1)]))  # keep the other terms
end proc:

  

CR000 := Cost_rate_1(0, 0, 0) * (0-1):

keepn := keep(numer(CR000));
keepd := keep(denom(CR000));

keepn/keepd

-1.*(Int(0.1603750747e-1*x^1.5*exp(-0.6415002989e-1*x^2.5)+0.2381496723e-5*x^4*exp(-(1/1889568)*x^5), x = 0 .. infinity))

 

0.6e-2*(Int(0.1603750747e-1*x^1.5*exp(-0.6415002989e-1*x^2.5)+0.2381496723e-5*x^4*exp(-(1/1889568)*x^5), x = 0 .. infinity))

 

-166.6666667

(3)

 


 

Download My_Interpretation.mw

@Athar Kharal 

As your initial condition is not monotonically increasing with respect to x, the Burgers equation (the "inviscid" one, without the added diffusion term) develops a shock at a finite time t.
The same phenomenon holds even if you add a (small) diffusion term. The only difference compared to the inviscid Burgers equation is that the characteristics no longer criss-cross at the shock but simply vanish downstream of it: so the solution remains single-valued.

To "capture" this shock you need specific schemes: Lax, Lax-Wendroff, Lax-Fiedrichs, Godunov, Roe, ... (some are not avaliable in Maple, some others are limited to first order pds wrt x ans thus do not accept diffusion terms [not discretized, likely for the sake of simplicity, too bad]).

I don't know where this error at t = 0.942477796076937 comes from.
It is probably a version issue, for Maple 2015.2 gives a quite correct result beyond this time which, I suspect, is the time when the shock occurs.
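For reference (my addition, based on the standard method-of-characteristics result for the inviscid equation only; u0 and t_shock are just illustrative names): with u(x, 0) = sin(x) the characteristics first cross at t__s = -1/min(u0'(x)), and a one-line check gives t__s = 1, which is consistent with the solver breaking down as the front steepens slightly before that time.

u0      := sin(x):
t_shock := -1/minimize(diff(u0, x), x = 0 .. 2*Pi);
                               1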

The default numerical scheme provides a solution with spurious oscillations (plot 1).
The simplest conservative, conditionally stable and convergent (under a correct CFL condition) method is BackwardEuler; adjusting the spacestep (the timestep should be automatically adapted accordingly) gives a very good result (plot 2):

NumericalScheme.mw

restart:

PDE  := diff(u(x, t), t) = -u(x, t)*diff(u(x, t), x) + 0.1 * diff(u(x, t), x$2);

IC   := u(x, 0) = sin(x):

BC   := u(0, t)=u(2.0*Pi, t), D[1](u)(0, t)=D[1](u)(2.0*Pi, t):

IBC  := {IC, BC}:

pds := pdsolve(
         PDE
         , IBC
         , numeric
         , time=t
         , range=0..2.0*Pi
       ):

plots:-display(

  seq(

    pds:-plot(

      t=tau

      , numpoints=50

      , color=ColorTools:-Color([rand()/10^12, rand()/10^12, rand()/10^12])

      , legend=('t'=nprintf("%1.2e", tau))

    )

    , tau in [seq](0.9..2, 0.05)

  )
  , legendstyle=[location=right]
  , size=[700, 500]

)

diff(u(x, t), t) = -u(x, t)*(diff(u(x, t), x))+.1*(diff(diff(u(x, t), x), x))

 

 

pds := pdsolve(
         PDE
         , IBC
         , numeric
         , time=t
         , range=0..2.0*Pi
         , method=BackwardEuler
         , spacestep=0.01*(2.0*Pi)
       );

plots:-display(

  seq(

    pds:-plot(

      t=tau

      , numpoints=50

      , color=ColorTools:-Color([rand()/10^12, rand()/10^12, rand()/10^12])

      , legend=('t'=nprintf("%1.2e", tau))

    )

    , tau in [seq](0.9..4, 0.1)

  )
  , legendstyle=[location=right]
  , size=[800, 500]

)


 

 

plots:-display(
  pds:-plot3d(
    t=0..10
    , x=0..2*Pi
    , axes=boxed
    , grid=[80, 80]
    , style=surfacecontour
    , contours=10
   #, filledregions=true
    , color=gold
  ),
  plot3d(0, x=0..2*Pi, t=0..10, style=wireframe, color=gray)
)

 

 

Download NumericalScheme.mw

Rouben is undoubtedly right.
Of course one could expect that solve begins by analyzing the system (for instance by trying to reduce it using the equation which doesn't contain the variables you want to solve it for).

Here is a first (almost systematic) way to solve your 3-equations-2-unknowns system

restart:

solve({r*T = m*r^2*a*(1/r), g*m-T = m*a}, {T, a})

{T = (1/2)*g*m, a = (1/2)*g}

(1)

sys  := [I__cm = m*r^2, r*T = I__cm*a/r, g*m - T = m*a]:
vars := [T, a];

[T, a]

(2)

infolevel[solve] := 4:
solve(sys, vars)

Main: Entering solver with 3 equations in 2 variables
Main: attempting to solve as a linear system
Linear: solving 3 linear equations
Polynomial: # of equations is: 3
Main: Linear solver successful. Exiting solver returning 1 solution
solve: Warning: no solutions found

 

[]

(3)

with(LinearAlgebra):

M, S := GenerateMatrix(sys, vars);

M, S := Matrix(3, 2, {(1, 1) = 0, (1, 2) = 0, (2, 1) = r, (2, 2) = -I__cm/r, (3, 1) = -1, (3, 2) = -m}), Vector(3, {(1) = m*r^2-I__cm, (2) = 0, (3) = -g*m})

(4)

sol := simplify( (M^* . M)^(-1) . M^+ . S ) assuming m::real, I__cm::real, r::real

sol := Vector(2, {(1) = g*m*I__cm/(m*r^2+I__cm), (2) = g*m*r^2/(m*r^2+I__cm)})

(5)

< vars > = eval(sol, sys[1])

(Vector(2, {(1) = T, (2) = a})) = (Vector(2, {(1) = (1/2)*g*m, (2) = (1/2)*g}))

(6)

 

Download Noon_at_2_pm.mw

Now two (ad hoc) ways to solve it

restart:

solve({r*T = m*r^2*a*(1/r), g*m-T = m*a}, {T, a})

{T = (1/2)*g*m, a = (1/2)*g}

(1)

sys  := [I__cm = m*r^2, r*T = I__cm*a/r, g*m - T = m*a]:
vars := [T, a];

[T, a]

(2)

Has, NotHas := selectremove(has, sys, {vars[]})

[r*T = I__cm*a/r, g*m-T = m*a], [I__cm = m*r^2]

(3)

# Way 1

eval(solve(Has, vars), NotHas)[]

[T = (1/2)*g*m, a = (1/2)*g]

(4)

# Way 2

subsys := remove(evalb, eval(sys, NotHas));
solve(subsys, vars);

[r*T = m*r*a, g*m-T = m*a]

 

[[T = (1/2)*g*m, a = (1/2)*g]]

(5)

 

Download Noon_at_2_pm_bis.mw

 

Look at the help page about RunWorksheet.

I have Maple 2015.2 right now and in this version this function was still  "experimental and subject to change".
I guess a lot of things have been improved during the last seven years.

I tested RunWorksheet years ago but I didn't find any real interest in it (personal opinion). Maybe I would say otherwise today ???

Your formulation has misled people (@Kitonum, @Rouben Rostamian).

You must not write p := v -> ..., which is as wrong as writing p := T -> ..., for the equation of state (eos) is a relation between p (pressure), v (volume) and T (temperature).
For some gases, for instance a perfect gas, one has p*v/(n*R*T) = constant, and this relation can be rewritten in 3 different forms:
p = constant*n*R*T/v, or v = constant*n*R*T/p, or T = p*v/(constant*n*R); but doing this gives the impression that one thermodynamic quantity is a function of the two others, which it is not (except in special cases where some of them are kept constant).
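A minimal illustration of these three equivalent rewritings (my sketch; K is a hypothetical name standing for the constant):

eos := p*v = K*n*R*T:
isolate(eos, p);   # p = K*n*R*T/v
isolate(eos, v);   # v = K*n*R*T/p
isolate(eos, T);   # T = p*v/(K*n*R)

The point being, of course, that these are three views of one and the same relation.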

What you could have written is :

f(p, v, T) = p - R*T/(v-b) - a/v^2;

# where, by definition of an eos

f(p, v, T) = 0;

# so, more simply

p - R*T/(v-b) - a/v^2 = 0;

to emphasize the relation between p, v and T.

restart

# pressure p, volume v and temperature T are linked by an equation of state
# which, for Van der Walls gases takes ths form

f(p, v, T) = p - R*T/(v-b) - a/v^2

f(p, v, T) = p-R*T/(v-b)-a/v^2

(1)

# assuming volume v > covolume b > 0 one gets

map(numer, normal(%))

f(p, v, T) = R*T*v^2+b*p*v^2-p*v^3-a*b+a*v

(2)

# get the coefficients of v^n, n=0..3

CV := [seq(C[k], k in [seq](3..0, -1))] =~ [coeffs(rhs(%), v)]

[C[3] = -p, C[2] = R*T+b*p, C[1] = a, C[0] = -a*b]

(3)

# symbolic form

lhs((2)) = add('C'[k]*v^k, k=0..3)

f(p, v, T) = v^3*C[3]+v^2*C[2]+v*C[1]+C[0]

(4)

# extended form

eval(%, CV);

f(p, v, T) = -p*v^3+(R*T+b*p)*v^2+a*v-a*b

(5)

 

Download WanDerWalls_eos.mw


I think this is close to what you want

restart:

expr := e^4 + 4*e^3 + 6*e^2 + 4*e + 1:
rule := [e^2 = 1]:
expr = simplify(expr, rule)

e^4+4*e^3+6*e^2+4*e+1 = 8+8*e

(1)

expr :=  (e + 1)^3:
rule := [e^2 = 1]:
expr = simplify(expand(expr), rule)

(e+1)^3 = 4+4*e

(2)

expr := X^3*Y + X^2*Z + X;
rule := [X^2 = 1]:
expr = expand(simplify(expr, rule))

X^3*Y+X^2*Z+X

 

X^3*Y+X^2*Z+X = X*Y+X+Z

(3)

 

 

Download simpleify_rule.mw

It can't be integrated formally.

As the integral is a function of p[1], p[2] and p[3], you can proceed this way:

NULL

restart

M := 3:

JJ := 5:

II := 5:

with(ArrayTools):

W := RandomArray(II, JJ, M)

Array(%id = 18446744078164104006)

(1)

V := ArrayTools:-RandomArray(II, JJ, M):

w := add(add(add(W[i, j, m]*LegendreP(i-1, x)*LegendreP(j-1, y)*p[m], i = 1 .. II), j = 1 .. JJ), m = 1 .. M):

v := add(add(add(V[i, j, m]*LegendreP(i-1, x)*LegendreP(j-1, y)*p[m], i = 1 .. II), j = 1 .. JJ), m = 1 .. M):

L := add(add((LegendreP(i-1, x)*LegendreP(j-1, y))^2, i = 1 .. II), j = 1 .. JJ):

H := 1+tanh(w-v):

# here is the first term in tanh(...)

T := op(2, op(1, -[op(op(1, H*L))][2]));
T := simplify~(T, 'LegendreP');

HFloat(0.06261653867107209)*p[2]

 

HFloat(0.06261653867107209)*p[2]

(2)

# try to integrate its hyperbolic tangent formally

int(tanh(T), x=-1..1, y=-1..1);

4*tanh(HFloat(0.06261653867107209)*p[2])

(3)

# as this "simple" integration fails, proceed to a numerical integration


HL := simplify(H*L, 'LegendreP'):
P  := convert(select(has, indets(HL, name), p), list):

J := proc(p::list)
  eval(HL, P=~p):
  evalf(Int(%, x=-1..1, y=-1..1, method=_cuhre))
end proc:

# example

J([1, 2, 3])

19.06607069

(4)

# plot J(1, 2, p[3]) versus p[3]

[seq([k, J([1, 2, k])], k in [seq](0..3, 0.25))]:
plot([%], labels=[typeset(p[3]), 'J(p)'])

 

# A 3D plot J(p[1], p[2], 1) versus p[1] and p[2] with interpolation
# on a finer mesh

R := [seq](0..1, 0.25):
N := nops(R):   # number of grid points in each direction (was missing)
A := Array((1..N)$3, (i, j, k) -> J([R[i], R[j], R[k]])):
 

with(CurveFitting):
with(plots):

ai   := Array(R):
aj   := Array(R):
Jij1 := Matrix(N$2, (i, j) -> A[i, j, 1]):

bi   := Matrix(50$2, (i,j) -> i/50):
bj   := Matrix(50$2,( i,j) -> j/50):
bij  := ArrayTools[Concatenate](3, bi, bj):
B    := ArrayInterpolation([ai, aj], Jij1, bij, method=linear):

matrixplot(
  Matrix(B)
  , style=patchcontour
  , shading=zhue
  , labels =[typeset(p[1]), typeset(p[2]), ""]
  , title = 'J(p[1], p[2], 1)'
  , axes=boxed
);

 

 

NULL

Download integralproblem_mmcdara.mw

A lot of people are confused by the term "linear regression".
In regression, linear means "linear with respect to the coefficients", not with respect to the regressors.
Linear must be understood in the sense of a "linear form", that is a linear map from a vector space to its field of scalars.

For instance the model Y = add( a[i] * f[i](X[1], ..., X[P]), i=1..Q ) is linear wrt the (vector) coefficient a.
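A minimal illustration (my sketch, with made-up data X and Y): the basis below is nonlinear in x, yet Statistics:-LinearFit applies because the model is linear in its three coefficients.

X := [seq(0.1*i, i = 1..20)]:
Y := [seq(2 - 3*x + 0.5*sin(x), x in X)]:
Statistics:-LinearFit([1, x, sin(x)], X, Y, x);   # essentially recovers 2 - 3*x + 0.5*sin(x)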

Some models can be transformed into linear models with a suitable transformation and the introduction of auxiliary regressors.
For instance here is (at first sight) an obviously nonlinear model

model := y = (1+a*x)/(b+c*x)

But it can be made linear...

restart

model := y = (1+a*x)/(b+c*x)

y = (a*x+1)/(c*x+b)

(1)

TransformedModel := simplify((lhs-rhs)(model)) = 0

-(-c*x*y+a*x-b*y+1)/(c*x+b) = 0

(2)

# assuming that denom(%) <> 0

TransformedModel := numer(lhs(TransformedModel)) = 0

c*x*y-a*x+b*y-1 = 0

(3)

TransformedModel := isolate(TransformedModel, y*b)

y*b = -c*x*y+a*x+1

(4)

TransformedModel := lhs(TransformedModel) = eval(rhs(TransformedModel), y=z/x)

y*b = a*x-c*z+1

(5)

TransformedModel := expand(TransformedModel / b)

y = a*x/b-c*z/b+1/b

(6)

# Example :

X := [$1..10]:
A := 1:
B := 2:
C := 3:

f := unapply( eval(rhs(model), [a=A, b=B, c=C]), x):

Y := f~(X):

# traditional approach

NLfit := Statistics:-NonlinearFit(rhs(model), X, Y, x)

(HFloat(1.0000000054257945)*x+1)/(HFloat(3.0000000158261737)*x+HFloat(1.9999999977043728))

(7)

# linear regression with the transformed model

Regressors := < <X> | <X>*~<Y> >:

Lfit := Statistics:-LinearFit([1, x, z], Regressors, <Y>, [x, z])

HFloat(0.49999999962336317)+HFloat(0.49999999800425965)*x-HFloat(1.499999994080024)*z

(8)

# retrieve the values of a, b and c

abc := solve([coeffs(Lfit, [x,z])] =~ [coeffs(rhs(TransformedModel), [x,z])], [a, b, c])

[[a = .9999999968, b = 2.000000002, c = 2.999999990]]

(9)

# reconstruct the model in its original form

ReconstructedModel := eval(rhs(model), abc[])

(.9999999968*x+1)/(2.999999990*x+2.000000002)

(10)

Download Nonlinear_to_Linear.mw

I agree that the use of this kind of transformation is a little bit academic.
It was intensively used in the past to do regression "by hand", when optimization tools were not that common.
Its advantage can be its drawback: it modifies the "conditioning" of the problem. Sometimes it's better to use NonlinearFit, sometimes an ad hoc transformation may prove better.

One must be aware that transforming a nonlinear model into a linear one can lose something.
But the transformation approach may also lead to unexpectedly good results that NonlinearFit is unable to obtain:

 

X := [$-5..5]:
A := 1:
B := 2:
C := 3:

f := unapply( eval(rhs(model), [a=A, b=B, c=C]), x):

Y := f~(X):

Statistics:-ScatterPlot(X,Y)

 

# traditional approach

NLfit := Statistics:-NonlinearFit(rhs(model), X, Y, x)

Error, (in Statistics:-NonlinearFit) SVD of estimated Jacobian could not be computed

 

# linear regression with the transformed model

Regressors := < <X> | <X>*~<Y> >:

Lfit := Statistics:-LinearFit([1, x, z], Regressors, <Y>, [x, z])

HFloat(0.5000000001445163)+HFloat(0.5000000001240522)*x-HFloat(1.5000000004778788)*z

(11)

# retrieve the values of a, b and c

abc := solve([coeffs(Lfit, [x,z])] =~ [coeffs(rhs(TransformedModel), [x,z])], [a, b, c])

[[a = 1.000000000, b = 1.999999999, c = 3.000000000]]

(12)

# reconstruct the model in its original form

ReconstructedModel := eval(rhs(model), abc[])

(1.000000000*x+1)/(3.000000000*x+1.999999999)

(13)

 

What do you mean by "find the maximum value of k by putting dw/dk = 0" ?

What I understand is that you have an expression rel

rel := w + 1/2 * sqrt(Pol(k, 8))

where Pol(k, 8) is a polynomial in k, of degree at most 8, whose coefficients depend on alpha, c and gamma (which should be declared local, for gamma is otherwise Euler's constant in Maple);
and that you want to find the value(s) of k where rel reaches its (local, global) maximum (maxima?).
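As a side note (my illustration of the remark about gamma): at the top level gamma evaluates to Euler's constant, hence the  local gamma  declaration used in the worksheet below.

evalf(gamma);
                          0.5772156649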

If it is so, rel is maximum for a given value of w when sqrt(Pol(k, 8)) is maximum.
You must distinguish two situations:

  1. Pol(k, 8) >= 0 
    then the maximizers of rel are those of Pol(k, 8).
  2. Pol(k, 8) < 0 
    in which case sqrt(Pol(k, 8)) is imaginary and, assuming you are interested in real values of k, you have to discard this situation.

Let's concentrate on case 1: as there exists no method to find the analytic expressions of the roots of an 8th-degree polynomial, you have to proceed numerically.

Here is an example where the values of alpha, c, gamma and w are set to randomly chosen values.
The first part details the solution strategy; in the second one a procedure is proposed.

restart:

local gamma

rel := w - ( -(1/2)*sqrt(960*c^6*gamma^2*k^2-336*c^4*gamma^2*k^4+64*c^2*gamma^2*k^6-4*gamma^2*k^8+576*alpha*c^6*gamma*k-288*alpha*c^4*gamma*k^3-16*alpha*c^2*gamma*k^5+8*alpha*gamma*k^7-144*alpha^2*c^4*k^2-48*alpha^2*c^2*k^4-4*alpha^2*k^6+128*c^4*gamma*k^2-40*c^2*gamma*k^4+4*gamma*k^6+48*alpha*c^4*k-16*alpha*c^2*k^3-4*alpha*k^5+4*c^2*k^2-k^4) ):

# DETAILED STRATEGY
#
# set numerical values to a, c, gamma and w

r := rand(0. .. 1.): # illustrative

U := convert(indets(rel, name) minus {k}, list);
data := [r(), r(), r(), r()]:

frel := eval(rel, U=~data);
 

[alpha, c, gamma, w]

 

.2907448089+(1/2)*(-1.055709134*k^8+.9627432321*k^7+2.382318921*k^6-.9993347343*k^5-1.843509609*k^4-.1576678405*k^3+.1987390764*k^2+0.1413736031e-1*k)^(1/2)

(1)

# assuming you are interesting in real minimizers

d := op([-1, -1, 1], frel);

M := sort([fsolve(d)]):
print~([k$nops(M)] =~ M):

-1.055709134*k^8+.9627432321*k^7+2.382318921*k^6-.9993347343*k^5-1.843509609*k^4-.1576678405*k^3+.1987390764*k^2+0.1413736031e-1*k

 

k = -.8897996958

 

k = -0.7029441628e-1

 

k = 0.

 

k = .3186252755

 

k = 1.323471483

 

k = 1.416064043

(2)

# find the compact domains where d > 0

if limit(d, k=-infinity) = Float(-infinity) then  
  d_positive := [ seq(M[n]..M[n+1], n in [seq](1..nops(M), 2)) ]
else
  d_positive := [ -infinity..M[1], seq(M[n]..M[n+1], n in [seq](2..nops(M), 2)) ]
end if;

[-.8897996958 .. -0.7029441628e-1, 0. .. .3186252755, 1.323471483 .. 1.416064043]

(3)

# on each of them search for the value of k where d is maximum


Local_max := NULL:

for ra in d_positive do
  dd_0 := [fsolve(diff(d, k), k=ra)];
  P    := sort(map(z -> eval(d, k=z), dd_0), output=permutation);
  Local_max := Local_max, [dd_0[P[-1]], eval(frel, k=dd_0[P[-1]])]
end do:

Local_max

[-.8095361698, .3511111034], [.2159473871, .3310944898], [1.373835099, .3922790723]

(4)

# find the maximum maximorum

Global_max := sort([Local_max], key=(x -> x[2]))[-1]

 

[1.373835099, .3922790723]

(5)

plot(frel, k=(min..max)(M), gridlines=true)

 

# PROCEDURE

FingGlobalMax := proc(rel, var, {DATA::list(`=`):=[]})
  local k, U, frel, d, M, d_positive, ra, dd_0, P, Local_max:

  k    := var:
  U    := convert(indets(rel, name) minus {k}, list);
  frel := eval(rel, DATA);
  d    := op([-1, -1, 1], frel);
  M    := sort([fsolve(d)]):

  if limit(d, k=-infinity) = Float(-infinity) then  
    d_positive := [ seq(M[n]..M[n+1], n in [seq](1..nops(M), 2)) ]
  else
    d_positive := [ -infinity..M[1], seq(M[n]..M[n+1], n in [seq](2..nops(M), 2)) ]
  end if;
 
  Local_max := NULL:
  for ra in d_positive do
    dd_0 := [fsolve(diff(d, k), k=ra)];
    P    := sort(map(z -> eval(d, k=z), dd_0), output=permutation);
    Local_max := Local_max, [dd_0[P[-1]], eval(frel, k=dd_0[P[-1]])]
  end do:

  return sort([Local_max], key=(t -> t[2]))[-1]
   
end proc:

U =~ data:

FingGlobalMax(rel, k, DATA=[alpha = .2342493224, c = .1799302829, gamma = .5137385362, w = .2907448089])

[1.373835099, .3922790723]

(6)

 

Download Minimizers.mw

@vs140580 

Refer to my previous answer for the comments in the worksheet.

Please: don't forget to vote up if it suits you, it took me hours to do this

restart:

with(Statistics):

Data := ExcelTools:-Import(cat(currentdir(), "/Desktop/A_sample.xlsx")):
Data := convert(Data, Matrix):

XY   := Data[2..-1]:
N, P := LinearAlgebra:-Dimensions(XY):
P    := P-1:
N, P:

Train_fraction := 0.7:
Train_N        := round(N*Train_fraction):
mixing         := combinat:-randperm([$1..N]):
Train_num      := mixing[1..Train_N]:
Test_num       := mixing[Train_N+1..N]:

Train_XY       := XY[Train_num]:
Test_XY        := XY[Test_num]:

regressors := [seq(V__||p, p=1..P)]:

Family_code := convert(
                 map(
                   n -> parse~(
                          StringTools:-Explode(sprintf(cat("%.", P, "d"), convert(n, binary)))
                        )
                        , [$1..2^P-1]
                 )
                 , Matrix
               ):
Member_base := n  -> [op(Family_code[n] . <regressors>)]:
Extractor   := n -> [ ListTools:-SearchAll(1, convert(Family_code[n], list)) ]:

Family_res := Matrix(2^P-1, 3):

for n from 1 to 2^P-1 do
  Member_coeff := LinearFit(
                    Member_base(n)
                    , Train_XY[.., Extractor(n)]
                    , Train_XY[.., P+1]
                    , Member_base(n)
                    , output=parametervector
                  ):

  Member_RSS := Variance(Test_XY[.., P+1] - Test_XY[.., Extractor(n)] . Member_coeff);

  Family_res[n, 1] := numelems(Member_base(n));
  Family_res[n, 2] := Member_RSS;
  Family_res[n, 3] := Variance(Train_XY[.., P+1] - Train_XY[.., Extractor(n)] . Member_coeff);

end do:

Family_AIC := 0*Family_res[.., 1] + Family_res[.., 2]/~Family_res[.., 3]:

ScatterPlot(
  Family_res[.., 1]
  , Family_res[.., 2]/~Family_res[.., 3]
  , symbol=box
  , symbolsize=10
  , labels=["Complexity", "RSS"]
);

print():print():

ScatterPlot(
  Vector(2^P-1, i -> i)
  , Family_AIC
  , symbol=solidcircle
  , symbolsize=5
  , size=[1200, 300]
  , labels=["Model num", "AIC"]
);

 

(1)

# The "five best models"

Family_5_best_num   := sort(Family_AIC, output=permutation)[1..5]:
Family_5_best_model := Member_base~(Family_5_best_num):

# The "best model"

q   := Family_5_best_num[1]:
MOD := Family_5_best_model[1]:

Member_coeff := LinearFit(
                  Member_base(q)
                  , Train_XY[.., Extractor(q)]
                  , Train_XY[.., P+1]
                  , Member_base(q)
                  , output=parametervector
                ):
`Best Model` = add(MOD[k]*%[k], k=1..numelems(MOD)):


Member_pred := Test_XY[.., Extractor(q)] . Member_coeff:

`Residual variance` = Variance(Test_XY[.., P+1] - Test_XY[.., Extractor(q)] . Member_coeff);
`Residual std dev` = StandardDeviation(Test_XY[.., P+1] - Test_XY[.., Extractor(q)] . Member_coeff);
`Akaike criterion` = Family_AIC[q];

plots:-display(
  ScatterPlot(
    Test_XY[.., P+1]
    , Member_pred
    , labels=["Y (test base)", "Y (best prediction)"]
    , labeldirections=[default, vertical]
  )
  , plot([[min(Test_XY[.., P+1])$2], [max(Test_XY[.., P+1])$2]], color=red)
);

 

 

# Let's compare the previous results to the performances of the
# most complex model (number 2^P-1).
#
# As said previously this model has the lowest RSS, but given its complexity it
# appears not to be better than the one which minimizes the Akaike criterion.

q := 2^P-1:


Member_coeff := LinearFit(
                  Member_base(q)
                  , Train_XY[.., Extractor(q)]
                  , Train_XY[.., P+1]
                  , Member_base(q)
                  , output=parametervector
                ):
`Best Model` = add(Member_base(q)[k]*%[k], k=1..numelems(Member_base(q)));


Member_pred := Test_XY[.., Extractor(q)] . Member_coeff:

`Residual variance` = Variance(Test_XY[.., P+1] - Test_XY[.., Extractor(q)] . Member_coeff);
`Residual std dev` = StandardDeviation(Test_XY[.., P+1] - Test_XY[.., Extractor(q)] . Member_coeff);
`Akaike criterion` = Family_AIC[q];

plots:-display(
  ScatterPlot(
    Test_XY[.., P+1]
    , Member_pred
    , labels=["Y (test base)", "Y (best prediction)"]
    , labeldirections=[default, vertical]
  )
  , plot([[min(Test_XY[.., P+1])$2], [max(Test_XY[.., P+1])$2]], color=red)
);

 

Download Solution.mw

For an unknown reason the images are not loaded

`Best Model` = 0.222183e-1*V__3
               -0.562851e-1*V__6
               +0.129663e-3*V__7
               -0.104495e-3*V__8
               +0.204609e-3*V__9

If the traditional Akaike criterion doesn't suit you, define exactly what the expression "best model" signifies for you.

@sursumCorda 

I was writing an explanation but was in the end beaten by @acer

Without any excursion into the third dimension:

pt := [
        seq([u, 5*(u-1)], u in [seq](1..2, 0.01))
        , seq([u, 5], u in [seq](2..1, -0.01))
        , seq([1, v], v in [seq](5..0, -0.01))
      ]:



S  := plots:-display(plottools:-polygon(pt, color=black)):
plots:-display(
  plots:-transform((s,t)->[s^2*sqrt(t)*cos(t), s^2*sin(t)])(S)
  , size=[500,"golden"]
  ,axes=boxed
);



MyExplanation.mw

Here is a way

restart

# case "two roots are in 0..1" not treated yet

SOL := proc(f, Beta)
  local e, s:
  e := eval(f, beta=Beta):
  if has(e, xi) then
    if Beta::numeric then
      s := [solve(e, xi)]:
      if has(s, I) then
        lprint("two conjugate complex solutions\n"):
        return s:
      else
        # assuming only one root is in 0..1
        # (how do you want to handle this case?)
        return select(is, [solve(e, xi)], RealRange(Open(0), Open(1)))[]:
      end if:
    else
      return [solve(e, xi)]
    end if:
  else
    error cat(f, " doesn't contain xi for beta = ", Beta)
  end if:
end proc:

eq := [
  (1/6)*beta^3+(1/2)*xi^2*beta-(1/2)*beta^2-(1/2)*xi^2+(1/3)*beta = 0,
  (1/6)*beta*(beta^2+3*xi^2-6*xi+2) = 0
];

[(1/6)*beta^3+(1/2)*xi^2*beta-(1/2)*beta^2-(1/2)*xi^2+(1/3)*beta = 0, (1/6)*beta*(beta^2+3*xi^2-6*xi+2) = 0]

(1)

SOL(eq[1], beta)

[(1/3)*(-3*beta^2+6*beta)^(1/2), -(1/3)*(-3*beta^2+6*beta)^(1/2)]

(2)

SOL(eq[1], 1)

Error, (in SOL) 1/6*beta^3+1/2*xi^2*beta-1/2*beta^2-1/2*xi^2+1/3*beta = 0 doesn't contain xi for beta = 1

 

SOL(eq[1], 1/2)

1/2

(3)

SOL(eq[1], 3)

"two conjugate complex solutions\n"

 

[I, -I]

(4)

plot(['SOL(eq[1], beta)', 'SOL(eq[2], beta)'], beta=0..1, color=[red, blue], legend=["left", "right"])

 

 

Download Numeric_mmcdara.mw

A step by step demonstration of the content of the math book

If your goal is to implement in Maple what is written in your math book as closely as possible, here is a solution.
 

restart:


u := x*y*z:

constraint := x+y+z=6

x+y+z = 6

(1)



u := eval(u, isolate(constraint, z))

x*y*(6-x-y)

(2)



u := expand(u);

-x^2*y-x*y^2+6*x*y

(3)




'Diff(u, x)' = factor(diff(u, x));
'Diff(v, x)' = factor(diff(u, y));

rel := [%%, %]:

Diff(u, x) = -y*(2*x+y-6)

 

Diff(v, x) = -x*(x+2*y-6)

(4)



ro1     := op(-rhs(rel[1])):
cond_11 := y = solve(ro1[1]);
cond_12 := (remove=-select)(is, ro1[2], numeric);

ro2     := op(-rhs(rel[2])):
cond_21 := x = solve(ro2[1]);
cond_22 := (remove=-select)(is, ro2[2], numeric);

y = 0

 

2*x+y = 6

 

x = 0

 

x+2*y = 6

(5)


prop := &or(cond_11, cond_12)  &and &or(cond_21, cond_22);

prop := Logic:-BooleanSimplify(prop):





print();
print~([op(prop)]):

`&and`(`&or`(y = 0, 2*x+y = 6), `&or`(x = 0, x+2*y = 6))

 

 

Logic:-`&and`(x = 0, y = 0)

 

Logic:-`&and`(x = 0, 2*x+y = 6)

 

Logic:-`&and`(y = 0, x+2*y = 6)

 

Logic:-`&and`(x+2*y = 6, 2*x+y = 6)

(6)




map(p -> solve({op(p)}), [op(prop)]):
print~(%):

{x = 0, y = 0}

 

{x = 0, y = 6}

 

{x = 6, y = 0}

 

{x = 2, y = 2}

(7)

 

 

Download StepByStep_Procedure.mw


If you are in search of different methods to get the solution, here is a first one


The code below finds 4 solutions, of which only one corresponds to a strictly positive volume.
(replace L by 6)
 

restart:
v := a*b*c: 
p := a+b+c:
# with Lagrange multiplier lambda 
U := {a, b, c, lambda}: 
diff~(v-lambda*(p-L), U): 
sols := solve(%, U); 
         {a = L, b = 0, c = 0, lambda = 0}, 

           {a = 0, b = 0, c = L, lambda = 0}, 

           {a = 0, b = L, c = 0, lambda = 0}, 

            {a = (1/3)*L, b = (1/3)*L, c = (1/3)*L, lambda = (1/9)*L^2}

sol  := remove(has, [sols], lambda=0)[]
           {a = (1/3)*L, b = (1/3)*L, c = (1/3)*L, lambda = (1/9)*L^2}



and a second one based on a derivative-free demonstration: 

# Volume of the cube a=b=c=L/3

Vc := (L/3)^3:

# Volume of the parallelepiped 
# a = L/3-delta__a
# b = L/3-delta__b
# c = L/3-delta__c


Vp := (L/3-delta__a)*(L/3-delta__b)*(L/3-delta__c):

# Variation of volume regarding to Vc

delta__v := Vp - Vc:

# Let's account for the condition delta__a + delta__b + delta__c = 0

delta__v := expand( eval(delta__v, delta__c=-delta__a-delta__b) );

# Let's arrange the expression

map(factor, collect(delta__v, L));

        -(1/3)*(delta__a^2 + delta__a*delta__b + delta__b^2)*L + delta__a^2*delta__b + delta__a*delta__b^2

# The second order perturbation is obviously negative
# for delta__a^2+delta__a*delta__b+delta__b^2 > 0
#
# Then the variation of the volume of the parallelepiped at the
# point a=b=c=L/3 is strictly negative unless delta__a and delta__b
# (and thus delta__c) are all null.
#
# The cube is the parallelepiped with the highest volume given
# the value of L.

I'm not sure I really understood your question.
So if my answer is off the mark, forget it quickly:

(setting the flag see to false suppresses the outputs)

restart

show := proc(ft, a) if ft then print(a) end if: end proc:

E := proc()
  local M := `<,>`(_passed):
  local L := _passed:
  Matrix(_npassed$2, (i, j) -> L[i]&*L[j]);
end proc:
  


Abstract parameters

see := false:

res := E(P, Q):
show(see, res):
show(see, eval(res, Q=P)): # should fix   subs(E[3] = E[2], ...)   (?) 
show(see, E(P, P)):


Integer parameters

Different specifications of `&*`

res := E(1, 2, 3):
show(see, `* -->` = eval(res, `&*`=`*`) ):
show(see, `cat -->` = eval(res, `&*`=((u, v) -> cat(u, v))) ):
show(see, `mod -->` = eval(res, `&*`=((u, v) -> modp(u, v))) ):


Matrix type parameters

Different specifications of `&*`

n   := 2:
A   := Matrix(n$2, symbol=a):
B   := Matrix(n$2, symbol=b):
res := E(A, B):

show(see, `. -->` = eval(res, `&*`=`.`) ):

show(see, `. -->` = eval(res, `&*`=LinearAlgebra:-Multiply) ):

show(see, `KroneckerProduct -->` = eval(res, `&*`=LinearAlgebra:-KroneckerProduct) ):

# how to get this:

show(see, eval(res, `&*`=`*~`) ):

# answer

show(see, `*~ -->` = eval(res, `&*`= ((u, v) -> LinearAlgebra:-Zip(`*`, u, v))) ):

show(see, E(A, B, A) ):

 

Download SomethingLikeThat.mw

Is this what you want to achieve?

Eq := I__C*H(t)-m*rho*rho*H(t)
lprint(Eq);
Physics:-`*`(I__C, H(t))-m*Physics:-`*`(Physics:-`^`(rho, 2), H(t))

eval(Eq, `*`=:-`.`):
lprint(%);
I__C . H(t)-m*(Physics:-`^`(rho, 2) . H(t))