sand15

1608 Reputation

16 Badges

11 years, 139 days

MaplePrimes Activity


These are replies submitted by sand15

@Andiguys 

Read the attached file carefully.


restart

with(Optimization):

_local(Pi);

q_relation := q = (1/2)*(Ce*tau-Ci+Pr)/Cr;

q = (1/2)*(Ce*tau-Ci+Pr)/Cr

(1)

Pr_relation := Pr = -(1/2)*Ce*tau+(1/2)*q*t-(1/2)*Cd+(1/2)*Ci+1/2;

Pr = -(1/2)*Ce*tau+(1/2)*q*t-(1/2)*Cd+(1/2)*Ci+1/2

(2)

subs(Pr_relation, q_relation);

q = (1/2)*((1/2)*Ce*tau-(1/2)*Ci+(1/2)*q*t-(1/2)*Cd+1/2)/Cr

(3)

isolate((3), q)

q = (Ce*tau-Cd-Ci+1)/(4*Cr-t)

(4)

simplify(subs((4), Pr_relation), size)

Pr = ((2*Ce*tau+2*Cd-2*Ci-2)*Cr-t*(Ce*tau-Ci))/(-4*Cr+t)

(5)


map(simplify, solve({q_relation, Pr_relation}, {q, Pr}), size);

{Pr = ((2*Ce*tau+2*Cd-2*Ci-2)*Cr-t*(Ce*tau-Ci))/(-4*Cr+t), q = (-Ce*tau+Cd+Ci-1)/(-4*Cr+t)}

(6)


 

simplify((6)[1] - (5))

0 = 0

(7)

simplify((6)[2] - (4))

0 = 0

(8)
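The elimination above can be cross-checked outside Maple. Here is a small SymPy sketch (Python; the symbol names simply mirror the worksheet) that solves the two relations simultaneously and compares the result with the closed forms (4) and (5):

```python
import sympy as sp

Ce, tau, Ci, Cr, Cd, t, q, Pr = sp.symbols('Ce tau Ci Cr Cd t q Pr')

# The two relations from the worksheet
q_relation  = sp.Eq(q, (Ce*tau - Ci + Pr) / (2*Cr))
Pr_relation = sp.Eq(Pr, -Ce*tau/2 + q*t/2 - Cd/2 + Ci/2 + sp.Rational(1, 2))

# Solve the linear system for q and Pr
sol = sp.solve([q_relation, Pr_relation], [q, Pr], dict=True)[0]

# Closed forms (4) and (5) obtained in the worksheet
q_expected  = (Ce*tau - Cd - Ci + 1) / (4*Cr - t)
Pr_expected = ((2*Ce*tau + 2*Cd - 2*Ci - 2)*Cr - t*(Ce*tau - Ci)) / (-4*Cr + t)

print(sp.simplify(sol[q] - q_expected))    # → 0
print(sp.simplify(sol[Pr] - Pr_expected))  # → 0
```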
 

 

Download reply_1_sand15.mw

@nm 

Thanks for sharing your experience

Maybe a lead: the mw file I wanted to upload contains debugger outputs; if I remove them, the file can be uploaded.

For information: I get no error using Maple 2015, even for a somewhat complex display.

With_Maple2015.mw

@acer 

You're right that A^(N/2) * B^(N/2) and (A*B)^(N/2) are not always equivalent, and I had even thought to write "Assuming that A^(N/2) * B^(N/2) = (A*B)^(N/2) holds...".
But I was also guided by the expressions of the two terms whose equality the OP wanted to prove, and without this assumption I was pretty sure the proof could not be established.
So I thought that this equivalence went without saying.

But again you're right, I should have been more detailed in my answer.

@FDS 

Is there a policy to mark your latest post as best?
Yes, you should see a trophy at the top right of my answer: just click on it.

See you next time!

@FDS 

The updated worksheet  Stress_Strain_Curve_2.mw
And an illustration of what you can do with it, in the spirit of what you show in your pptx.

About polynomial regression... there is really no mystery behind it.
The first question is: are you familiar with "multiple" linear regression in the case of Q regressors X1, ..., XQ?
If so, "regressing" a dense polynomial of degree Q-1 with a single indeterminate Z is nothing but using "multiple" linear regression while setting Xn = Z^(n-1), n = 1..Q.
To give an example, suppose you want to fit a dense quadratic polynomial a + b*Z + c*Z^2: simply introduce the auxiliary variables X1 = 1, X2 = Z, X3 = Z^2 and regress your dependent quantity no longer on a + b*Z + c*Z^2 but on a*X1 + b*X2 + c*X3... and that's it!
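To make the reduction concrete, here is a minimal NumPy sketch (Python, with made-up data) fitting the quadratic a + b*Z + c*Z^2 through the auxiliary regressors X1 = 1, X2 = Z, X3 = Z^2:

```python
import numpy as np

# Made-up data lying exactly on a quadratic: y = 2 + 0.5*Z - 0.25*Z^2
Z = np.linspace(0.0, 4.0, 9)
y = 2.0 + 0.5*Z - 0.25*Z**2

# Auxiliary regressors X1 = 1, X2 = Z, X3 = Z^2 ...
X = np.column_stack([Z**0, Z**1, Z**2])

# ... and ordinary multiple linear regression recovers a, b, c
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)   # → approximately [2.0, 0.5, -0.25]
```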

If you are not familiar with "multiple" regression I advise you to read any elementary textbook/course/MOOC... on the subject.

Beyond that, an important question often appears: what degree should the polynomial have so that the fitted polynomial fulfills two contradictory requirements:

  1. Fidelity to the data: in other words, the fitted polynomial must be "close" to the data or, mathematically speaking, the residual sum of squares between the data and their restitution by the fitted polynomial must be small.
    Obviously, as a dense polynomial of degree D-1 depends on D parameters, a set of data of size D will be exactly fitted by this dense polynomial.
    We then say that the model is over-parameterized.
  2. Stability (or robustness, or generalization capability...): let us take this previous sample of size D and this same polynomial of degree D-1.
    This polynomial may have up to D-1 real roots and if, unluckily, some of them are located within the range of the regressor (Z), then the fitted polynomial will present undesired overshoots between consecutive values of the regressor.
    In fact it is extremely likely that the fitted polynomial will behave this way.
    If you want to avoid this phenomenon use a polynomial of degree 0 or 1.
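Both requirements can be seen on a tiny synthetic example (a NumPy sketch with made-up data, not your stress-strain sample): a dense polynomial of degree D-1 fits D points exactly, while a degree-1 fit leaves a residual:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8
Z = np.linspace(-1.0, 1.0, D)
y = rng.normal(size=D)             # an arbitrary sample of size D

def rss(deg):
    """Residual sum of squares of a degree-`deg` least-squares polynomial fit."""
    p = np.polynomial.Polynomial.fit(Z, y, deg)
    return float(np.sum((p(Z) - y)**2))

print(rss(D - 1))   # ~0: the degree D-1 polynomial interpolates all D points
print(rss(1))       # > 0: a straight line cannot reproduce arbitrary data
```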

So, to satisfy the fidelity-to-the-data requirement you must take a large degree, close to the sample size, but if you are more concerned by "stability" or a "smooth fit", then you must take low-degree polynomials.
There exists a lot of literature about strategies which balance these two contradictory requirements. I already gave you a few Wiki references on the subject.

What I wrote in my worksheet is that each branch is a sample of size about one thousand.
So a dense polynomial of degree 5, for instance, is very far from being an over-parameterized polynomial. This balancing act I spoke about above matters only when the number of parameters of the model (not necessarily a polynomial one) is of the order of the sample size, let us say 20% or 25% of the latter.
You would be right to consider model selection (here degree selection) if you were investigating polynomials of degree 200, for example. But that is very far from being the case.

As I wrote in this last worksheet, a more important criterion is that two polynomial models intersect within the L5 and EP5 ranges if you do not want to create unrealistic approximated cycles.

@FDS 

Feel free to tell me if this goes in the right direction
Stress_Strain_Curve_2.mw

I understand the notion of cyclic loading-unloading, but you want to assess the "surface area" of each cycle, and so you need to define more precisely what characterizes a cycle in terms of geometry: where it starts (likely at the intersection point between a loading path and the next/previous unloading path) and where it ends.
So I propose my interpretation for the first cycle only... I am not going to go further if I am not on the right track.

My Maple version is too old to manage workbooks. Could you simply upload the stress-strain data?

@janhardo 

Beautiful, I vote up

Error messages are pretty clear: you have to specify an approximate solution.

For more details execute help(dsolve,numeric_bvp,advanced) and help(dsolve,numeric,BVP) in a worksheet and search for the word approxsoln.

Defining an approximate solution can be quite tricky and the best way to do this is to construct a simplified version of your problem using simplified physics (do a dimensional analysis, identify second-order effects, discard them), to solve it, and to use this solution as approxsoln (which can be either a procedure or an array).
You are likely the best placed to do this, because you are the one who knows the physics behind your equations.

For instance, what is the order of magnitude of
compared to the other terms? Can you simplify it?

Or can you approximate BCs like these?
Can you simplify them from a physical point of view?
As a rule, a BC on the first derivative which depends nonlinearly on the second derivative makes a BVP complicated to solve.

My advice is: begin by doing this yourself and come back to us if you run into Maple difficulties while solving your problem.
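For what it's worth, the same need exists outside Maple: SciPy's solve_bvp also requires an initial guess, and on the classic Bratu problem y'' + exp(y) = 0, y(0) = y(1) = 0 (a stand-in example, not your system) the guess decides which of the two existing solutions the solver converges to:

```python
import numpy as np
from scipy.integrate import solve_bvp

def fun(x, y):
    # y[0] is the unknown, y[1] its first derivative
    return np.vstack([y[1], -np.exp(y[0])])

def bc(ya, yb):
    # y(0) = 0 and y(1) = 0
    return np.array([ya[0], yb[0]])

x = np.linspace(0.0, 1.0, 11)

# Two different "approximate solutions" (initial guesses)
y_small = np.zeros((2, x.size))
y_large = np.zeros((2, x.size))
y_large[0] = 3.0

sol1 = solve_bvp(fun, bc, x, y_small)
sol2 = solve_bvp(fun, bc, x, y_large)
print(sol1.y[0].max(), sol2.y[0].max())   # two distinct solutions
```

Exactly as with approxsoln, a poor guess can make the solver fail or converge to an unintended solution.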

@JoyDivisionMan 

By the way, the attached worksheet explains in more detail how to get the pdf of Y = cos(X) where X is a uniform random variable with support (-𝜋, +𝜋).
Two approaches are used: a direct construction of PDF(Y) and an indirect one by derivation of CDF(Y).

To compute the cdf of, let's say, Z = sin(X) (whose distribution is obviously the same as that of Y) without doing all the stuff contained in the attached file (solve/allsolutions, branch identification, aggregation of the partial pdf/cdf over the different branches, ...), it is much simpler to observe that sin(X) = cos(𝜋/2 - X), then define an auxiliary random variable A = 𝜋/2 - X and replace fX(something) wherever it occurs by fA(something), which is equal to fX(𝜋/2 - something).
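A seeded Monte Carlo check (Python/NumPy rather than Maple) that sin(X) and cos(X) are indeed identically distributed when X is uniform on (-𝜋, +𝜋):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-np.pi, np.pi, size=200_000)

s = np.sort(np.sin(x))
c = np.sort(np.cos(x))

# Largest gap between the two empirical CDFs on a grid of test points
grid = np.linspace(-0.99, 0.99, 199)
ecdf = lambda sample, t: np.searchsorted(sample, t) / sample.size
gap = max(abs(ecdf(s, t) - ecdf(c, t)) for t in grid)
print(gap)   # small: the two empirical distributions agree
```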

restart

with(Statistics):

X := RandomVariable(Uniform(-Pi, Pi)):

Y := cos(X):

f__Y := CodeTools:-Usage( unapply(PDF(Y, y), y) )

memory used=17.78MiB, alloc change=34.00MiB, cpu time=273.00ms, real time=2.38s, gc time=0ns

 

proc (y) options operator, arrow; piecewise(y <= -1, 0, y < 1, 1/(Pi*(-y^2+1)^(1/2)), 1 <= y, 0) end proc

(1)

restart

with(Statistics):

X := RandomVariable(Uniform(-Pi, Pi)):

Y := cos(X):

F__Y := CodeTools:-Usage( unapply(CDF(Y, y), y) )

memory used=41.23MiB, alloc change=70.01MiB, cpu time=618.00ms, real time=2.02s, gc time=56.15ms

 

proc (y) options operator, arrow; piecewise(y <= -1, 0, y <= 1, (1/2)*(2*arcsin(y)+Pi)/Pi, 1 < y, 1) end proc

(2)

f__Y := unapply(diff(F__Y(y), y) , y)

proc (y) options operator, arrow; piecewise(y < -1, 0, y = -1, undefined, y < 1, 1/((-y^2+1)^(1/2)*Pi), y = 1, undefined, 1 < y, 0) end proc

(3)


FIRST WAY: BUILD DIRECTLY PDF(Y)

plot(cos(x), x=Support(X, output='range'))

 


This graph is made of two "branches" where the cosine function has a monotone variation:
          (1)  from -Pi to 0 cosine is monotonically increasing
          (2)  from 0 to Pi  cosine is monotonically decreasing


Watch out:
The inverse cosine function is the arccosine function only on intervals where cosine is monotone.
So the reciprocal of  y = cos(x) over -Pi..Pi is not x = arccos(y).
Indeed:

solve(cos(x)=y, x, allsolutions)

2*Pi*_Z3-2*arccos(y)*_B3+arccos(y)

(4)

indets((4), name) minus {y, Pi}:
about~(%):

Originally _B3, renamed _B3~:

  is assumed to be: OrProp(0,1)

Originally _Z3, renamed _Z3~:
  is assumed to be: integer

 


Choosing _Z3 = 0 gives
         

reciprocal := eval((4), _Z3=0)

-2*arccos(y)*_B3+arccos(y)

(5)


And the two branches are
         

phi[1] := unapply( eval(reciprocal, _B3=0), y);
phi[2] := unapply( eval(reciprocal, _B3=1), y);

proc (y) options operator, arrow; arccos(y) end proc

 

proc (y) options operator, arrow; -arccos(y) end proc

(6)


pdf  f__X(x) of X
   

f__X := unapply( PDF(X, x), x)

proc (x) options operator, arrow; piecewise(x < -Pi, 0, x < Pi, (1/2)/Pi, 0) end proc

(7)


The pdf  f__Y(y) of Y is given by this formula

result := Sum('f__X'(phi[i](y)) * abs(diff(phi[i](y), y)), i=1..2)

Sum(f__X(phi[i](y))*abs(diff(phi[i](y), y)), i = 1 .. 2)

(8)

result := value(result)

result := piecewise(arccos(y) < -Pi, 0, arccos(y) < Pi, 1/(2*Pi), 0)/sqrt(abs(y^2-1))+piecewise(-arccos(y) < -Pi, 0, -arccos(y) < Pi, 1/(2*Pi), 0)/sqrt(abs(y^2-1))

(9)

f__Y := unapply(simplify(result), y):
f__Y(y);

piecewise(y < -1, 0, y = -1, 1/(2*Pi*sqrt(y^2-1)), y < 1, 1/(sqrt(-y^2+1)*Pi), y = 1, 1/(Pi*sqrt(y^2-1)), 1 < y, 0)

(10)

plot(f__Y(y), y=-1..1, title="Arcsine law")
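As a numerical cross-check of (10), the arcsine density 1/(𝜋*sqrt(1-y^2)) integrates to 1 over (-1, 1), and its partial integral reproduces the closed-form CDF (𝜋 - arccos(y))/𝜋 obtained below in (15); a quick SciPy sketch:

```python
import numpy as np
from scipy.integrate import quad

f = lambda y: 1.0 / (np.pi * np.sqrt(1.0 - y*y))

total, _ = quad(f, -1.0, 1.0)
print(total)   # ≈ 1.0: f is a probability density

# CDF at y = 1/2 from the density vs the closed form (pi - arccos(y))/pi
F_half, _ = quad(f, -1.0, 0.5)
print(F_half, (np.pi - np.arccos(0.5)) / np.pi)   # both ≈ 2/3
```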

 


ALTERNATIVE WAY: BUILD CDF(Y) AND DERIVE IT TO GET PDF(Y)


cdf  F__X(x) of X
   

F__X := unapply( CDF(X, x), x)

proc (x) options operator, arrow; piecewise(x < -Pi, 0, x < Pi, (1/2)*(x+Pi)/Pi, 1) end proc

(11)


If all the phi[i](y) were increasing functions of y, the cdf  F__Y(y) of Y would be written

result := Sum(chi('F__X'(phi[i](y))), i=1..2);

Sum(chi(F__X(phi[i](y))), i = 1 .. 2)

(12)


If some phi[i](y) are decreasing functions of y, the cdf  F__Y(y) of Y is defined by this formula

result := Sum(chi('F__X'(phi[i](y))), i=1..2), ``(chi('F__X'(phi[i](y))) = piecewise(diff(phi[i](y), y) > 0, 'F__X'(phi[i](y)), 1-'F__X'(phi[i](y))))

result := Sum(chi(F__X(phi[i](y))), i = 1 .. 2), ``(chi(F__X(phi[i](y))) = piecewise(0 < diff(phi[i](y), y), F__X(phi[i](y)), 1-F__X(phi[i](y))))

(13)

plot([phi[1](y), phi[2](y)], y=-1..1, color=[red, blue], legend=[typeset(phi__1(y)), typeset(phi__2(y))])

 


Thus

result := ``(1 - F__X(phi[1](y))) + F__X(phi[2](y))

result := ``(1-piecewise(arccos(y) < -Pi, 0, arccos(y) < Pi, (1/2)*(arccos(y)+Pi)/Pi, 1))+piecewise(-arccos(y) < -Pi, 0, -arccos(y) < Pi, (1/2)*(Pi-arccos(y))/Pi, 1)

(14)

F__Y := unapply(simplify(expand~(result)), y):
F__Y(y);

piecewise(y < -1, 1, y <= 1, (Pi-arccos(y))/Pi, 1 < y, 1)

(15)


So PDF(Y) is:

diff(F__Y(y), y);

piecewise(y < -1, 0, y = -1, undefined, y < 1, 1/(sqrt(-y^2+1)*Pi), y = 1, undefined, 1 < y, 0)

(16)


Verify that f__Y(y) obtained using the direct way (equation (10)) is almost everywhere equal to expression (16).

"Almost everywhere" means "outside the points of null probability measure (y = ±1) where (16) is undefined".
   Note that the existence of these points comes from the discontinuity of PDF(X) and that they could be removed from
   formulas (10) and (16) (because of their null probability measure).

simplify(f__Y(y) - diff(F__Y(y), y));

piecewise(y = -1, undefined, y = 1, undefined, 0)

(17)

plot(
  [f__Y(y), (16)]
  , y=-1..1
  , color=[red, blue]
  , thickness=[1, 5]
  , transparency=[0, 0.7]
  , legend=[typeset('f__Y'(y)), typeset('diff(F__Y(y), y)')]
)

 


NOTE THAT MAPLE IS CAPABLE OF COMPUTING PDF(Y) DIRECTLY

PDF(cos(X), y)

piecewise(y <= -1, 0, y < 1, 1/(sqrt(-y^2+1)*Pi), 1 <= y, 0)

(18)


But I am pretty sure this succeeds only by chance because, from what I have observed when asking Maple to compute the PDF of a polynomial
function of X, Maple does not seem to identify "branches" nor to use formulas like (8) or (13).

In the cos(X) case, with no particular attention to the correctness of the formulas, and by wrongly invoking a symmetry argument,
we would get:

WrongFormulaButCorrectResult := diff(simplify(2*F__X(phi[2](y))), y);

WrongFormulaButCorrectResult := diff(simplify(2*F__X(phi[1](y))), y);

WrongFormulaButCorrectResult := piecewise(y < -1, 0, y = -1, undefined, y < 1, 1/(sqrt(-y^2+1)*Pi), y = 1, undefined, 1 < y, 0)

 

piecewise(y < -1, 0, y = -1, undefined, y < 1, -1/((-y^2+1)^(1/2)*Pi), y = 1, undefined, 1 < y, 0)

(19)

V := RandomVariable(Uniform(-5*Pi/3, Pi/3)):

PDF(cos(V), y);    # Should be equal to (10) or (16)

plot(%, y=-1..1, title="Obviously wrong", titlefont=[Tahoma, bold, 14]);

piecewise(y <= 1/2, 0, y < 1, 1/(sqrt(-y^2+1)*Pi), 1 <= y, 0)
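A seeded Monte Carlo check (Python/NumPy) confirms the result above is wrong: since (-5𝜋/3, 𝜋/3) is a full period of cosine, cos(V) still follows the arcsine law with CDF (𝜋 - arccos(t))/𝜋:

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.uniform(-5*np.pi/3, np.pi/3, size=200_000)
y = np.cos(v)

# Compare empirical probabilities with the arcsine-law CDF
for t in (-0.5, 0.0, 0.5):
    emp = np.mean(y <= t)
    exact = (np.pi - np.arccos(t)) / np.pi
    print(t, emp, exact)   # empirical ≈ exact at every test point
```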

 

 
 

 

Download cosine.mw

@JoyDivisionMan 

You're welcome!

@janhardo 

I wrote "at points (a, f(a)) and (-a, -f(a)) if and only if a = 0... which is impossible because f(0) is undefined" because I didn't want to write explicitly a = 1 and f(a) = 2... or, more exactly, that a was strictly larger than zero.
Indeed, if a > 0 there is no circle tangent to the curve C at points (a, f(a)) and (-a, -f(a)).
The only possibility for such a circle to exist is a = 0, which cannot be accepted because the function f is not defined at 0.

@acer 

Thank you for your intervention.

If this can be of any interest, here is a worksheet which shows when Maple succeeds in computing the PDF and CDF of a function 𝜙 of a random variable X:

  • Maple's successes are illustrated by two examples:
    • the first one is trivial because 𝜙 is linear,
    • the second one is a priori more complex because 𝜙 is not monotonic... but the "branches" have simple analytic expressions and Maple can do the job.
  • Maple's failure is illustrated by the non-monotonic function 𝜙 : X ⟼ -X^3/3 + X
    (note that Maple succeeds if 𝜙 : X ⟼ k*X^3 because 𝜙 is then monotonic).

I know how to compute the CDF of 𝜙(X) in an ad hoc way for arbitrary functions 𝜙, but I don't know how to do it programmatically in the general case.
And I guess it is the lack of such an algorithm which pushes Maple to return FAIL for almost all non-monotonic functions 𝜙 (not to speak of no-failure situations where wrong results are returned... likely for the same reason).
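To illustrate the ad hoc approach on a concrete non-monotonic case (a made-up example, not the one in the worksheet): for 𝜙(x) = x^3 - x and X uniform on (-2, 2), the CDF P(𝜙(X) ≤ t) can be obtained by measuring the set {x : 𝜙(x) ≤ t} on a dense grid, and cross-checked by Monte Carlo:

```python
import numpy as np

phi = lambda x: x**3 - x
a, b = -2.0, 2.0

def cdf_grid(t, n=2_000_001):
    """P(phi(X) <= t) for X ~ Uniform(a, b): fraction of a dense grid where phi(x) <= t."""
    x = np.linspace(a, b, n)
    return float(np.mean(phi(x) <= t))

rng = np.random.default_rng(7)
samples = phi(rng.uniform(a, b, size=200_000))

for t in (-1.0, 0.0, 1.0):
    print(t, cdf_grid(t), float(np.mean(samples <= t)))   # grid vs Monte Carlo
```

This brute-force measure of the pre-image set sidesteps branch identification entirely, which is precisely what a general symbolic algorithm would have to automate.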

When_Maple_fails_in_computing_PDF_ot_CDF.mw

@JoyDivisionMan 

I don't know if you intend to follow @acer's suggestion of submitting an SCR (Software Change Request, see the screenshot below).


But if you do, please inform the development team that Maple 2024 and beyond must not simply reproduce the result older versions provide (2015 for instance), because this result is wrong (for your "test" case), or might be wrong, whenever the
function 𝜙 : X ⟼ Y is not a one-to-one map over the whole support of X.

There is indeed an error in the algorithm which computes PDF(Y), and it is this algorithm that must be corrected.
