mmcdara

MaplePrimes Activity


These are answers submitted by mmcdara

Joel's answer is perfect.

In case you would prefer working with indexed elements (for instance s[m, n]) instead of elements written like Smn, I can propose this to you:

AX.mw
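For the record, here is the basic idea in a couple of lines (a minimal sketch, illustrative only, not the content of AX.mw): build a Matrix whose entries are the indexed names s[m, n] instead of concatenated names like Smn.

# minimal sketch (illustrative only, not the attached AX.mw)
S := Matrix(3, 3, (m, n) -> s[m, n]);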

@Mariusz Iwaniuk 

 

You just need to form the difference between two infinite series.
(I used the SumTools package, but maybe (?) "sum" could do the job as well.)

About Mathematica: is it possible that it uses the same trick?
 

restart:

with(SumTools):

assume(r > 0):

_EnvFormal := true;

true

Summation(k^(r), k=1..infinity)

Zeta(-r)

(1)

S1 := Summation((a+d*k)^(r), k=1..infinity)

Zeta(0, -r, (d+a)/d)*d^r

(2)

assume(N > 1);

f := expand(a+d*(k-N));
f0 := coeff(f, k, 0);
f1 := coeff(f, k, 1);

-N*d+d*k+a
-N*d+a
d

S2 := Summation((A+B*k)^(r), k=1..infinity)

Zeta(0, -r, (A+B)/B)*B^r

(3)

subs({A=f0, B=f1}, S2)

Zeta(0, -r, (-N*d+a+d)/d)*d^r

(4)

S := S1 - subs({A=f0, B=f1}, S2)

Zeta(0, -r, (d+a)/d)*d^r-Zeta(0, -r, (-N*d+a+d)/d)*d^r

(5)

 


 

Download Serie.mw

I understand your problem this way:

  • You have a function F in N parameters P1, ..., PN.
  • Each of them is modeled by a random variable.
  • Then Z = F(P1, ..., PN) is a random variable too.
  • Sensitivity Analysis (SA), in the statistical sense, assesses the variation of the "output" Z given the random variations of the "inputs" P1, ..., PN.

 

There are two types of SA:

  • local: based on partial derivatives of F (generally first order, but second order can also be used) around some point 
  • global: does not assume the "smallness" of the input variations that local SA requires
    (a recent question here is about Sobol indices, a key item in global SA)

 

It seems you are interested in local SA (LSA)?

One ingredient is given here by Rouben if you use first-order LSA.
But it's not sufficient. Let Vn denote the variance of Pn and Dn the partial derivative of Z with respect to Pn.
In LSA the sensitivity coefficient Sn of Z to Pn is defined by 
Sn = Vn*(Dn)^2 / sum(Vm*(Dm)^2, m=1..N).

Is that what you want?
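If so, here is a minimal sketch of that formula; the function F, the nominal point and the variances below are purely illustrative placeholders, not your model.

restart:
# illustrative first-order LSA; F, nominal and V are made-up placeholders
F       := (P1, P2, P3) -> P1*P2 + P3^2:
P       := [P1, P2, P3]:
nominal := [P1 = 1.0, P2 = 2.0, P3 = 0.5]:
V       := [0.1, 0.2, 0.05]:                # variances of P1, P2, P3
Dn      := [seq(eval(diff(F(P[]), P[i]), nominal), i = 1..3)]:              # partial derivatives at the nominal point
Sn      := [seq(V[i]*Dn[i]^2, i = 1..3)] /~ add(V[i]*Dn[i]^2, i = 1..3);    # normalized sensitivity coefficients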

Best regards


As a first step, one might assume that searching for some rule behind your sequence reduces to finding some rule in the sequences of the exponents.
A good starting point is the OEIS database: https://oeis.org

OEIS doesn't reference any of these 3 sequences:

  • the sequence of the exponents of 3
  • the sequence of the exponents of 2
  • the sequence obtained by interleaving the two previous ones

Unfortunately OEIS handles sequences of integers only, so you can't go further (even the smallest term is too high a number to be used in OEIS).
So the OEIS option seems to be a dead end.

Maybe knowing where your sequence comes from could help?

If you want to save the solution in some file, just type 

save pds, myfile    # I usually use a ".m" file; myfile is a string

To reuse the solution in a new worksheet :

restart:
read myfile:
anames(user);  # returns the name(s) of the objects just read; not necessary, but it can help if, like me, you're an airhead
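For instance, a minimal round trip could look like this (the file name "pds_sol.m" and the dsolve call are just illustrative placeholders):

pds := dsolve({diff(y(x), x) = -y(x), y(0) = 1}, y(x)):   # an illustrative object to save
save pds, "pds_sol.m";

# later, in a new worksheet
restart:
read "pds_sol.m":
anames(user);   # pds is back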

 


 

If you just want something for this particular case, here is a probably not very elegant solution
 

NotAtAllGeneric.mw
 

eq16 := r(t) = d__vol*V/(K*U*S*V^2+L*tau)

r(t) = d__vol*V/(K*S*U*V^2+L*tau)

(1)

a := rhs(eq16)

d__vol*V/(K*S*U*V^2+L*tau)

(2)

an := numer(a)/V;

d__vol

(3)

ad := add(op(denom(a))/~V)

K*S*U*V+L*tau/V

(4)

eq17 := lhs(eq16) = an / ad

r(t) = d__vol/(K*S*U*V+L*tau/V)

(5)

 


 

Download NotAtAllGeneric.mw

 

Maybe this could help?

 

 

f := piecewise(-Pi <= x and x < 0, 1, 0 <= x and x < Pi, 0);
la := latex(f);

f := piecewise(`and`(-Pi <= x, x < 0), 1, `and`(0 <= x, x < Pi), 0)

 

\cases{1&$-\pi \leq x$\  and \ $x<0$\cr 0&$0\leq x$\  and \ $x<\pi $\cr}

 

lla := latex(f, output=string);

"\cases{1&$-\pi \leq x$\  and \ $x<0$\cr 0&$0\leq x$\  and \ $x<\pi $\cr}"

(1)

LA := StringTools:-SubstituteAll(la, "and", "\\vedge");

"\cases{1&$-\pi \leq x$\  \vedge \ $x<0$\cr 0&$0\leq x$\  \vedge \ $x<\pi $\cr}"

(2)

printf("%s\n", LA);

\cases{1&$-\pi \leq x$\  \vedge \ $x<0$\cr 0&$0\leq x$\  \vedge \ $x<\pi $\cr}

 

 


 

Download latex.mw

 

 

 

 

Hi, 

 

Here is something unpretentious that could help you.

I think you will be able to improve this the way you want

parametric.mw

 

restart:
NewList := proc(L::list, threshold::{integer, float})::list:
   map(u -> if u < threshold then 2*u else u end if, L):
end proc:
a := [$(1..10)];
                [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
b := 5:
NewList(a, b);
                [2, 4, 6, 8, 5, 6, 7, 8, 9, 10]


 

 

 

@Les 

Your problem relates to some of my own.
One of them is Bayesian inference, and directed acyclic graphs (DAGs) are a very important tool there.
I'm often bothered by the difficulty of drawing these DAGs with Maple; that's why your problem triggered my interest.

I don't know if this can help you, but I provide you this little Maple code, just in case.


PermutationGraph.mw
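To give a rough idea of what I mean (a tiny illustrative DAG, not the attached worksheet):

restart:
with(GraphTheory):
# arcs given as lists are directed in GraphTheory, so this builds a small DAG
DAG := Graph({[1, 2], [1, 3], [2, 4], [3, 4]}):
DrawGraph(DAG);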

Take a look at this site: https://en.wikipedia.org/wiki/Pearson_distribution

Pearson's system is based on the observation that the probability density functions (PDFs) of some "classical" continuous random variables (RVs) are solutions of an ordinary differential equation (ODE).
This ODE depends on 4 parameters whose values can be expressed in terms of the first four moments of the RV.
Several statistical packages use this characterization to "identify" the PDF of the population a given sample is drawn from.

I do not know if you are familiar with the language R (https://cran.r-project.org), but it offers the package https://cran.r-project.org/web/packages/PearsonDS/PearsonDS.pdf to do the job.

It should not be very complicated (here again, if you know R, it's easy to derive a Maple code manually from an R code) to develop a procedure that would "identify" the underlying PDF from its first four empirical moments.
The main problem with Pearson's system is essentially that the solution is very sensitive to the numerical values of the moments.
You need a sufficiently large sample for those values to be reliable estimations of the true ones. If not, the "identification" process will return unusual distributions (they will correctly fit the data but [see the wiki page] their mathematical expressions can be quite uncommon).
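As a starting point, here is a minimal sketch of the ingredients such a procedure would need, namely the first four empirical moments (the sample below is purely illustrative; the identification step itself is not done here):

restart:
with(Statistics):
S := Sample(Normal(0, 1), 10^4):   # illustrative sample
m := Mean(S);
v := Variance(S);
s := Skewness(S);
k := Kurtosis(S);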

Please feel free to contact me for further information.

 

 

You're wrong when you write 

They match up very poorly. I can get a better fit by using Geogebra (which produces a function 48.7 * sin(0.52x + 0.56) + 1124.95, which fits the data much better).

 

Fit or LinearFit are both trying to minimize the residual sum of squares (RSS) between the observations Y and the fitted model.

MapleFit       := unapply(f, x):
GeogebraFit := x -> 48.7 * sin(0.52*x + 0.56) + 1124.95:

MapleRSS       := add((MapleFit~(X) -~ Y)^~2);
6029.89196586546
GeogebraRSS := add((GeogebraFit~(X) -~ Y)^~2);
6080.525269

 

So, obviously, Maple returns a better fit than Geogebra does.
I think you are confusing "the Geogebra fit is better than the Maple fit" with "the Geogebra fit suits me more than the Maple fit."

Beyond this, what kind of solution are you looking for?
It seems you prefer (as Geogebra produces) a slowly oscillating model?
Since Geogebra's solution exhibits 2 periods within the 0..24 range, try this:

MyModel := x -> A*sin(b*x+c)+d;
RSS := (A, b, c, d) -> add( (MyModel~(X) -~ Y)^~2);
Optimization:-NLPSolve(RSS(A, b, c, d), b=Pi/7..Pi/5);   

This solution has (up to the precision of the computations) practically the same RSS (6029.8909...) as the original Maple solution, but oscillates as slowly as the Geogebra solution.
Mariusz Iwaniuk provided here a close solution by other means (only the phase shifts substantially differ).

Using Optimization:-NLPSolve enables you to control the ranges of the parameters A, b, c, d if you have any prior knowledge about them.
But this last result raises the fundamental question: are you sure that your fitting problem has only one solution?
tomleslie's "respectfully disagree" answer gives a simple proof of the existence of a continuum of solutions.

And, finally, think about this: the curve A*sin(b*x)+d densely fills the strip [X]x[d-A, d+A] as b tends to infinity; my guess is that with no upper bound on the value of b, A*sin(b*x)+d is able to pass arbitrarily close to any (Xn, Yn) point (does anyone here have a mathematical argument to confirm/refute this claim?).
If I'm right, the RSS then tends to 0 as b tends to infinity.

 

 

I'm not familiar with Maple T.A., so I do not understand whether your problem is related to Maple T.A. itself or to the construction of the histogram of a weighted sample.
For the second point you could use the content of the attached file.

WeightedSample.mw
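The basic idea could be as simple as this (not necessarily what the attached worksheet does): repeat each value according to its integer weight and then draw an ordinary histogram; the values and weights below are purely illustrative.

restart:
with(Statistics):
values  := [1.2, 2.5, 3.1, 4.0]:
weights := [3, 1, 2, 4]:
pseudo  := [seq(values[i] $ weights[i], i = 1..numelems(values))]:   # weighted sample emulated by repetition
Histogram(pseudo);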


Hope it helps

@Carl Love 


The problem is basically of the form U = A*V + B.
After some suitable transformations, a LinearFit can give the solution.

 

Download FIT.mw
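As a minimal sketch of a fit of that form with Statistics:-LinearFit (the data below are made up; FIT.mw applies the suitable transformations first):

restart:
with(Statistics):
Vdata := [1.0, 2.0, 3.0, 4.0, 5.0]:   # illustrative data only
Udata := [2.1, 3.9, 6.2, 8.1, 9.8]:
LinearFit([1, v], Vdata, Udata, v);   # returns B + A*v with fitted A and B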

Here is a trick I often use:

restart:
with(GraphTheory):
with(SpecialGraphs): 

G     := CycleGraph(9):
POS := GetVertexPositions(G);
EG   := Edges(G); 


PLOT(
   seq( CURVES([POS[EG[i][1]], POS[EG[i][2]]], LINESTYLE(3) ), i=1..numelems(Edges(G))),
   POINTS(POS, SYMBOL(_SOLIDBOX, 35), COLOR(RGB, 1, 1, 0)),
   seq(TEXT(POS[i], i), i=1..numelems(Vertices(G))),
   AXESSTYLE(NONE)
)


Now it is up to you to adjust colors, symbols and line styles and "manually" highlight edges and vertices.



