mmcdara

1237 Reputation

13 Badges

4 years, 68 days

MaplePrimes Activity


These are replies submitted by mmcdara

@Adam Ledger 

Watch out: output (5) is not the output of CF_LS but of print.
I removed the print instruction, replaced the colon at the end of the line CF_LS := ... by a semicolon, and right-clicked on the output.
Here is the result.


PS: I'm not familiar with the 2D input mode and I never use it. 

restart:

with(numtheory):

S[0] := proc (N, K) options operator, arrow; map(simplify, {seq(seq(seq(piecewise((a^varphi(b))^(1/(c+1))-floor((a^varphi(b))^(1/(c+1))) = 0, [a, b, c], NULL), a = 1 .. N), b = 1 .. N), c = 1 .. K)}, 'radical') end proc

proc (N, K) options operator, arrow; map(simplify, {seq(seq(seq(piecewise((a^numtheory:-varphi(b))^(1/(c+1))-floor((a^numtheory:-varphi(b))^(1/(c+1))) = 0, [a, b, c], NULL), a = 1 .. N), b = 1 .. N), c = 1 .. K)}, 'radical') end proc

(1)

T := proc (N, K) options operator, arrow; {seq(seq(seq([a, b, c], a = 1 .. N), b = 1 .. N), c = 1 .. K)} end proc

proc (N, K) options operator, arrow; {seq(seq(seq([a, b, c], a = 1 .. N), b = 1 .. N), c = 1 .. K)} end proc

(2)

S[1] := proc (N, K) options operator, arrow; `minus`(T(N, K), S[0](N, K)) end proc

proc (N, K) options operator, arrow; `minus`(T(N, K), S[0](N, K)) end proc

(3)

CardRatio := proc (N, K) options operator, arrow; nops(S[0](N, K))/nops(S[1](N, K)) end proc

proc (N, K) options operator, arrow; nops(S[0](N, K))/nops(S[1](N, K)) end proc

(4)

CF_LS := [
           CurveFitting[LeastSquares]([seq([k, CardRatio(2, k)], k = 1 .. 10)], K),
           CurveFitting[LeastSquares]([seq([k, CardRatio(3, k)], k = 1 .. 10)], K),
           CurveFitting[LeastSquares]([seq([k, CardRatio(4, k)], k = 1 .. 10)], K)
         ];

[1, 44268857/45401356-(532409481/9988298320)*K, 24308311919/13309971675-(135902619982/773879781675)*K]

(5)

smartplot( (5) );

 

# Maybe it would be better to do something like this?

colors := [red, green, blue]:

plots:-display(
  seq( plot(CF_LS[k], color=colors[k], legend=cat("CardRatio(", k+1, ")")), k=1..3)
)

 

 


 

Download numtheory.mw

@Preben Alsholm 

No code is missing: phi is the totient function, @Adam Ledger just omitted to say so.

So Adam's code runs correctly after loading the numtheory package, but I don't understand what Adam wants.
 

restart:

with(numtheory):

S[0] := proc (N, K) options operator, arrow; map(simplify, {seq(seq(seq(piecewise((a^varphi(b))^(1/(c+1))-floor((a^varphi(b))^(1/(c+1))) = 0, [a, b, c], NULL), a = 1 .. N), b = 1 .. N), c = 1 .. K)}, 'radical') end proc

proc (N, K) options operator, arrow; map(simplify, {seq(seq(seq(piecewise((a^numtheory:-varphi(b))^(1/(c+1))-floor((a^numtheory:-varphi(b))^(1/(c+1))) = 0, [a, b, c], NULL), a = 1 .. N), b = 1 .. N), c = 1 .. K)}, 'radical') end proc

(1)

T := proc (N, K) options operator, arrow; {seq(seq(seq([a, b, c], a = 1 .. N), b = 1 .. N), c = 1 .. K)} end proc

proc (N, K) options operator, arrow; {seq(seq(seq([a, b, c], a = 1 .. N), b = 1 .. N), c = 1 .. K)} end proc

(2)

S[1] := proc (N, K) options operator, arrow; `minus`(T(N, K), S[0](N, K)) end proc

proc (N, K) options operator, arrow; `minus`(T(N, K), S[0](N, K)) end proc

(3)

CardRatio := proc (N, K) options operator, arrow; nops(S[0](N, K))/nops(S[1](N, K)) end proc

proc (N, K) options operator, arrow; nops(S[0](N, K))/nops(S[1](N, K)) end proc

(4)

CF_LS := {
           CurveFitting[LeastSquares]([seq([k, CardRatio(2, k)], k = 1 .. 10)], K),
           CurveFitting[LeastSquares]([seq([k, CardRatio(3, k)], k = 1 .. 10)], K),
           CurveFitting[LeastSquares]([seq([k, CardRatio(4, k)], k = 1 .. 10)], K)
         }:
print~(CF_LS):

1

 

44268857/45401356-(532409481/9988298320)*K

 

24308311919/13309971675-(135902619982/773879781675)*K

(5)

 

 

Download numtheory.mw

That's impossible: -2/(15*h) - 1/(6*h) + 3/(10*h) = 0 (indeed -4/30 - 5/30 + 9/30 = 0), so the result you should expect is 0, not 4*epsilon/(15*h).


 

restart

expr := -2*f(x-2*h)/15/h-f(x)/6/h+3*f(x+3*h)/10/h+2*ff(x-2*h)/15/h+ff(x)/6/h-3*ff(x+3*h)/10/h

-(2/15)*f(x-2*h)/h-(1/6)*f(x)/h+(3/10)*f(x+3*h)/h+(2/15)*ff(x-2*h)/h+(1/6)*ff(x)/h-(3/10)*ff(x+3*h)/h

(1)

MyRules := { seq(f(x+i*h)=ff(x+i*h)+epsilon, i=-2..3) };

{f(x) = ff(x)+epsilon, f(-h+x) = ff(-h+x)+epsilon, f(h+x) = ff(h+x)+epsilon, f(2*h+x) = ff(2*h+x)+epsilon, f(x-2*h) = ff(x-2*h)+epsilon, f(x+3*h) = ff(x+3*h)+epsilon}

(2)

subs(MyRules, expr);

-(2/15)*(ff(x-2*h)+epsilon)/h-(1/6)*(ff(x)+epsilon)/h+(3/10)*(ff(x+3*h)+epsilon)/h+(2/15)*ff(x-2*h)/h+(1/6)*ff(x)/h-(3/10)*ff(x+3*h)/h

(3)

expand(%);

0

(4)

 


 

Download MyRules.mw

@Gillee @acer

 

I realized afterwards that I had forgotten to explain some technical details in my first answer to Gillee.
This omission is corrected in the attached file.

It shows step by step how the exact solution can be obtained by hand without using Maple.


 

restart:

with(Statistics):

# Your problem restated in the correct bayesian form:

# Let X be a binomial random variable of parameters N=16 and p.
# p is an unknown value you want to assess given some outcomes of X.
# The Bayesian paradigm then consists in
#   1/ modelling your lack of knowledge about p by saying that p is a realization of some
#      random variable P
#   2/ deciding on some prior distribution Prior(P=q) for P
#   3/ writing the posterior distribution Post(P=q) of P by using Bayes' formula
#
# Thus: Post(P=q | X=K) is proportional to Prob(X=K | P=q) * Prior(P=q)
# where the notation Prob(A | B) means "probability of the event A given the event B"
# (you can replace "probability" by "probability density" in the case of continuous random variables)
#
# Now the main result:
#    Let L be some likelihood, F the prior distribution of a random variable Y, and G the posterior
#    distribution of Y with respect to L.
#    If F belongs to the family of conjugate priors of L, then G belongs to the same family.
#
# Application:
#    The conjugate prior of the binomial distribution is the family of Beta distributions.
#    So if you choose Prior(P=q) as a Beta distribution of parameters (a, b), then Post(P=q)
#    will be a Beta distribution of parameters (a', b').
#
# Classical result: Post(P=q | X=K) = Beta(K+a, N-K+b)
#
# How to use this result?
#    It's well known (or not?) that the uniform distribution of support [0, 1] is a Beta
#    distribution of parameters (a=1, b=1).
# Then:
#    Post(P=q | X=K) = Beta(K+1, N-K+1)
#
# With N=16 and K=6 one obtains that
#         the posterior P is a Beta distribution Beta(7, 11)
#
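
# Side note (not in the original worksheet): the conjugacy result above can also be checked
# symbolically in a few lines. The names Likelihood, Evidence, Posterior and the variable q
# are mine; this is just a sketch of the "by hand" computation.

Likelihood := binomial(16, 6) * q^6 * (1-q)^10:        # Prob(X=6 | P=q) times the uniform prior (=1)
Evidence   := int(Likelihood, q=0..1):                 # normalizing constant
Posterior  := simplify(Likelihood / Evidence):         # posterior density of P given X=6
simplify(Posterior - PDF(BetaDistribution(7, 11), q)) assuming q > 0, q < 1;
                                                       # should simplify to 0 on the support (0, 1)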

# VERIFICATION (fast acer's code)

randomize():

n_draw      := 10^5:
prior_rate  := Sample(Uniform(0, 1), n_draw):
n_trials    := 16:
dummyV      := Vector(1,datatype=float[8]):
RV          := RandomVariable(Binomial(n_trials,pat)):

subscribers := CodeTools:-Usage( map(proc(u) option hfloat;

                                       global pat;

                                       pat := u;

                                       Sample(RV,dummyV);

                                       dummyV[1];

                                     end proc,

                                     prior_rate) ):


post_rate   := map(i -> if subscribers[i]=6. then prior_rate[i] end if, [$1..n_draw]):
 

memory used=0.66GiB, alloc change=38.01MiB, cpu time=9.69s, real time=9.71s, gc time=267.72ms

 

plots:-display(
   Histogram(post_rate, view=[0..1, default], minbins=100, style='polygon'),
   plot(PDF(BetaDistribution(7, 11), t), t=0..1, color=red, thickness=3)
);

 

 


 

Download FinalAnswer.mw

@Carl Love @acer

 

Hi,
Are the errors catchable by try...catch...end try the same as those catchable by traperror?
When could it be more interesting to use traperror instead of try...catch...end try?
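
For concreteness, here is a minimal example of the kind of usage I have in mind (purely illustrative; both constructs catch the same division-by-zero error):

r := traperror(1/0):
if r = lasterror then print("caught by traperror: ", r) end if;

try
    1/0;
catch "numeric exception":
    print("caught by try..catch: ", lastexception);
end try;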

Hi,
I think you will find some useful answers in the "Posts" section (search for "parallel programming"), in particular here
133632-Parallel-Programming-In-Maple

@hajnayeb 

You want to use a non-exact Monte-Carlo Method (MCM) to validate exact theoretical results, is that it?
That seems a little bit strange to me.

So, what do you want exactly: 

  • Assess the mean and variance of X(t) for any t with MCM?
  • Assess the autocorrelation function of X or its spectral density with MCM?
  • Draw a sample of trajectories X(t, omega) (here omega represents "the randomness") for different omega?
  • Do all this stuff in the physical space or in the complex dual space?


Several Maple packages could help you if you want to work with frequency response functions:

  • DynamicSystems
  • Finance (stochastic processes)
  • SignalProcessing

If you want to operate in the physical domain, it seems to me that you have practically all you need in my two previous replies.
OK, the autocorrelation is not computed, but you can use the Statistics or SignalProcessing packages to do that, and the latter to compute the spectral density.
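
To fix ideas, here is a minimal sketch of what I mean (the signal X below is just a placeholder standing for one of your sampled trajectories; AC and PSD are names I introduce for the example):

with(SignalProcessing):
X   := Array(Statistics:-Sample(Normal(0, 1), 2^10), datatype = float[8]):  # placeholder signal
AC  := AutoCorrelation(X):          # autocorrelation sequence of the sampled signal
PSD := abs~(FFT(X)) ^~ 2:           # crude spectral density (periodogram) estimate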


 

@vv 

I was never directly involved in this kind of competition; I just regularly look at the problems and try to find answers as an amusement, no more than that. I always looked at them as a game and thought it rather unfair (again, no offense) to use a CAS to find the solution.
But I understand that one can consider it a challenge to solve some of these problems with a CAS.

I recently discovered video games like Euclidea where you have to solve geometric problems. I was stuck on some of them and finally decided to use Maple to obtain the solution (sometimes successfully). But I always felt I was cheating, because it wasn't me who found the solution but Maple, even if I guided it to do so.

I understand you are a mathematician? So let me ask you this simple question: as a mathematician (not as a Maple user), which solution do you prefer, the one you produced with Maple or the one given here / (and especially the first one)?
 




 

Personally, I never had any problem including images in a LaTeX file when they were exported from Maple in JPEG or PDF format.
Maybe you could try using these formats instead of encapsulated PostScript... unless you really need EPS files for some specific reason?
 

Hi, 
No offense, but I'm always dubious about this kind of exercise. The kinds of olympiads you refer to are generally aimed at developing intuition and synthesis, the "Aha effect" in some sense, and using a CAS to solve such problems gives me the impression of using a sledgehammer to kill a fly.
Of course, you may think the opposite and see this as a good opportunity to prove that a CAS can solve problems that would otherwise require clever reasoning?

@hajnayeb

A few more results.
Please note that exact results can easily be obtained for a linear ode with a white-noise source term in some asymptotic regime (non-dimensional damping << 1), although this is not done here because I don't know whether it is of any interest to you.

As relation (3) demonstrates, the expectation of X(t) depends only on the expectation of F(t), which explains why the plots of X__mean are almost the same whatever the correlation length is.
This is not the case for the higher moments of X(t): for instance, its variance depends on the value of L.

 

restart:

with(LinearAlgebra):
with(plots):
with(Statistics):

# Step 1: compute the "impulse response function" (IRF), defined as the solution of
# the ode with a Dirac source term at time t.
# To simplify, I assume here that the ic for the original problem are x(0)=D(x)(0)=0.
#
# It can be shown that this IRF is the solution of a homogeneous ode complemented with
# non-homogeneous ic:

homogeneous_edo    := m*diff(x(t),t$2)+c*diff(x(t),t)+k*x(t)=0;
non_homogeneous_ic := x(0)=0, D(x)(0)=1/m;
IRF                := unapply( rhs(dsolve([homogeneous_edo, non_homogeneous_ic],x(t))), t);

m*(diff(diff(x(t), t), t))+c*(diff(x(t), t))+k*x(t) = 0

 

x(0) = 0, (D(x))(0) = 1/m

 

proc (t) options operator, arrow; exp((1/2)*(-c+(c^2-4*k*m)^(1/2))*t/m)/(c^2-4*k*m)^(1/2)-exp(-(1/2)*(c+(c^2-4*k*m)^(1/2))*t/m)/(c^2-4*k*m)^(1/2) end proc

(1)

# Step 2: Let F(t) be the random excitation, mu__F(t) its mean, and R__F(t__1, t__2) its
# autocorrelation function.
# Here are some results

# Response X(t) of the system under the random loading F(t)

X(t) = Int(F(tau)*'IRF'(t-tau), tau=-infinity..+infinity);

X(t) = Int(F(tau)*IRF(t-tau), tau = -infinity .. infinity)

(2)

# Mean value of the response

mu__X(t) = Int(mu__F(tau)*'IRF'(t-tau), tau=-infinity..+infinity);
mu__X(t) = Int(mu__F(t-tau)*'IRF'(tau), tau=-infinity..+infinity);  #equivalently

# if F(t) is stationary, so is X(t)

mu__X = mu__F*Int('IRF'(tau), tau=-infinity..+infinity);

mu__X(t) = Int(mu__F(tau)*IRF(t-tau), tau = -infinity .. infinity)

 

mu__X(t) = Int(mu__F(t-tau)*IRF(tau), tau = -infinity .. infinity)

 

mu__X = mu__F*(Int(IRF(tau), tau = -infinity .. infinity))

(3)

# Autocorrelation function of X(t)

R__X(t__1, t__2) = Int(Int(R__F(tau__1, tau__2)*'IRF'(t__1-tau__1)*'IRF'(t__2-tau__2), tau__1=-infinity..+infinity), tau__2=-infinity..+infinity);

R__X(t__1, t__2) = Int(Int(R__F(tau__1, tau__2)*IRF(t__1-tau__1)*IRF(t__2-tau__2), tau__1 = -infinity .. infinity), tau__2 = -infinity .. infinity)

(4)

# Spectral density S__X(xi)
# Let H(xi) be the Fourier transform of IRF(tau) and S__F(xi) the spectral density of F(t); then

S__X(xi) = conjugate(H(xi))*H(xi)*S__F(xi);

S__X(xi) = conjugate(H(xi))*H(xi)*S__F(xi)

(5)

# How to draw trajectories: a notional example
# F(t) is a stationary Gaussian random process (look at how its point realizations F are constructed),
# with a Gaussian correlation function (see the expression of the variable K in procedure KG)
# (feel free to ask for any details you may need).

# Procedure KG generates a random realization of F(t).
# For each such realization there exists a trajectory X(t), solution of the ode with random
# excitation KG(...).
# Nb_traj such trajectories are drawn below.


KG := proc(X, Y, psi, sigma)
  # Gaussian-process (kriging) interpolant through the points (X, Y):
  # returns, as an expression in the global variable t, the conditional mean of a GP
  # with squared-exponential covariance sigma^2*exp(-((t-t')/psi)^2) given the data (X, Y).
  local NX, DX, K, mu, k, y:
  NX := numelems(X);
  DX := < seq(Vector[row](NX, X), k=1..NX) >^+:                               # DX[i, j] = X[i]
  K  := (sigma, psi) -> evalf( sigma^2 *~ exp~(-((DX - DX^+) /~ psi)^~2) ):   # covariance (Gram) matrix of the sample points
  mu := add(Y) / NX;                                                          # empirical mean used as GP mean
  k  := (t, sigma, psi) -> evalf( convert(sigma^2 *~ exp~(-((t -~ X ) /~ psi)^~2), Vector[row]) ):  # covariances between t and the sample points
  y  := mu + k(t, sigma, psi) . (K(sigma, psi))^(-1) . convert((Y -~ mu), Vector[column]):          # GP conditional mean at t
  return y
end proc:
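
# Quick sanity check of KG (my own illustrative test, not part of the original worksheet;
# Xtest, Ytest and g are names I introduce): the GP conditional mean interpolates the sample
# points, so evaluating the returned expression at an abscissa of Xtest returns the
# corresponding Ytest value.

Xtest := [0., 0.5, 1.0]:  Ytest := [1., 2., 0.5]:
g := KG(Xtest, Ytest, 0.3, 1):        # expression in the global variable t
evalf(eval(g, t = 0.5));              # should return 2. (up to rounding)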

exp(-2.3)

.1002588437

(6)

# A slightly colored noise (correlation length L = 0.001, to be compared to max(T)/NX).
# Here L/(max(T)/NX) = 0.001/0.1 = 0.01, which means that only about 10% of the information at location
# t is transmitted at location t+2.3*L (exponential decay). The longer L is, the farther the "value"
# of F at location t is transmitted.
#
# Note that theoretical results can easily be obtained in the case of a linear ode and pure white noise.
# Thus take the following as a kind of "brute force" approach.

NX := 100:
T  := [$1..NX] /~ 10:

Nb_traj := 100:

for traj from 1 to Nb_traj do
  F  := convert( Statistics:-Sample(Normal(0, 1), NX), list):

  X := dsolve(
         {
           m*diff(x(t),t$2)+c*diff(x(t),t)+k*x(t)=KG(T, F, 0.001, 1), x(0)=1, D(x)(0)=0
         },
         numeric, parameters=[m, c, k]
       ):

  X(parameters=[1, 1, 1]):
  pl||traj := odeplot(X, [t, x(t)], t=0..max(T), color=ColorTools:-Color([rand()/10^12, rand()/10^12, rand()/10^12]))
end do:

display(seq(pl||traj, traj=1..Nb_traj))
 

 

# Gather the data of the Nb_traj trajectories in a single matrix (column 1 contains times).
# and do some statistics


data := < op([1, 1], pl||1) | < seq(op([1, 1], pl||k)[..,2], k=2..Nb_traj) >^+ >:

# X__mean(t)

pl__mean := plot(
              [
                seq(
                   [data[k, 1], Mean(data[k, 2..-1])],
                   k=1..numelems(data[..,1])
                )
              ],
              color=blue, title=typeset(X__mean(t)), labels=['t', 'X(t)']
            ):

# X_mean(t) +/- 2 standard_dev(X(t))

pl__plus := plot(
              [
                seq(
                   [data[k, 1], Mean(data[k, 2..-1])+2*StandardDeviation(data[k, 2..-1])],
                   k=1..numelems(data[..,1])
                )
              ],
              color=blue, linestyle=3
            ):
pl__minus := plot(
              [
                seq(
                   [data[k, 1], Mean(data[k, 2..-1])-2*StandardDeviation(data[k, 2..-1])],
                   k=1..numelems(data[..,1])
                )
              ],
              color=blue, linestyle=3
             ):

display(pl__mean, pl__minus, pl__plus);

 

# A strongly colored noise (correlation length = L = 0.1) or "long memory noise" (a white noise is
# a zero memory noise)

NX := 100:
T  := [$1..NX]:

Nb_traj := 100:

for traj from 1 to Nb_traj do
  F  := convert( Statistics:-Sample(Normal(0, 1), NX), list):

  X := dsolve(
         {
           m*diff(x(t),t$2)+c*diff(x(t),t)+k*x(t)=KG(T, F, 0.1, 1), x(0)=1, D(x)(0)=0
         },
         numeric, parameters=[m, c, k]
       ):

  X(parameters=[1, 1, 1]):
  pl||traj := odeplot(X, [t, x(t)], t=0..10, color=ColorTools:-Color([rand()/10^12, rand()/10^12, rand()/10^12]))
end do:

display(seq(pl||traj, traj=1..Nb_traj))

 

# Gather the data of the Nb_traj trajectories in a single matrix (column 1 contains times).
# and do some statistics.


data := < op([1, 1], pl||1) | < seq(op([1, 1], pl||k)[..,2], k=2..Nb_traj) >^+ >:

# X__mean(t)

pl__mean := plot(
              [
                seq(
                   [data[k, 1], Mean(data[k, 2..-1])],
                   k=1..numelems(data[..,1])
                )
              ],
              color=blue, title=typeset(X__mean(t)), labels=['t', 'X(t)']
            ):

# X_mean(t) +/- 2 standard_dev(X(t))

pl__plus := plot(
              [
                seq(
                   [data[k, 1], Mean(data[k, 2..-1])+2*StandardDeviation(data[k, 2..-1])],
                   k=1..numelems(data[..,1])
                )
              ],
              color=blue, linestyle=3
            ):
pl__minus := plot(
              [
                seq(
                   [data[k, 1], Mean(data[k, 2..-1])-2*StandardDeviation(data[k, 2..-1])],
                   k=1..numelems(data[..,1])
                )
              ],
              color=blue, linestyle=3
             ):

display(pl__mean, pl__minus, pl__plus);

 

 


 

Download RandomSourceTerm_2.mw

What do you mean by "f(t) is a random function with normal distribution"?
(I noticed your equation doesn't contain f anywhere.)

Do you mean "the source term is, at any time t, the ralisation of a gaussian random variable"? In this case f(t) (I suppose F(t)) is not a random variable but a gaussian continuous random (stationnary to simplify) process (GRP).
Let F(t, omega) this process: for any value t* of t, Ft*(omega) is gaussian random variable of mean m(t*) and variance v(t*). Stationnarity (at least some type of it) means that these mean and variance do not depend on the value of t*.
But even in this simpler case, giving m and v does not define the process F(t, omega): you need to gige the autocorrelation function of the process, that is a two point function R(t, t') wich "correlates" how Ft(omega) and Ft'omega) are.
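
For instance (purely an illustrative choice, not something your problem imposes), a stationary Gaussian process is often given a squared-exponential autocorrelation with variance v and correlation length L, which in Maple terms would read

R := (t, s) -> v*exp(-((t-s)/L)^2);

This is, by the way, the kernel used in the KG procedure of my other reply.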

Next point: what do you mean by "solve an ordinary differential equation"?
Do you want to find a particular trajectory x(t) (there exists an infinity of them) or some statistical properties such as the mean trajectory, the envelope trajectories corresponding to some confidence interval ... ?
In structural reliability, accounting for random loadings of a mechanical structure is a very common topic.
Think about all of this and come back to me if necessary.

 

@Carl Love 

As I said to @itsme, I'm on holiday right now and I no longer use my professional Windows PC. But if you're interested in the way I capture the worksheet title, I could give the details in 3 weeks.

@Carl Love 

Thanks for your involvement.
Your command is significantly simpler than the one I'd written.

In my case, what I wanted to capture is the full name of the "active" worksheet (the one which appears in the uppermost part of the session window when run on Windows). For instance, if I open the file foo with full name "//.../foo.mw", this is the name I'm interested in.

The reason is: suppose I have a worksheet "//.../foo.mw" which contains the code of a module "foo" that I want to use later as a package. At the end of this worksheet are a few commands to create (or replace) an mla file "//.../foo.mla". I want to keep the bond between the source code (foo.mw) and the archive (foo.mla), that is, to be sure, when I later use package "foo", that the generating code corresponds to the file "//.../foo.mw" written at some given time T.
I proceed this way:

  • modification of some procedures in the worksheet 
  • execution of the module
  • saving if no error
  • capture of the full name of the source file "//.../foo.mw"
  • capture of its last modification time (plus a few other pieces of information such as the name of the person who modified "//.../foo.mw", etc.)
  • creation of the archive "//.../foo.mla" 
  • add to this archive additional information:
      SOURCE_FILE ="//.../foo.mw"
      MODIFICATION TIME = T
      AUTHOR = ...
      ...

This is the simplest strategy I have found to manage the evolutions of (the packages of) the application I develop.
For example, to be sure that I use "//.../foo.mw" at time T and not at time T', or that I use the temporary variant "//.../foo_1.mw" and not variant "//.../foo_2.mw".
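
In Maple terms, the last steps amount to something like this (just a sketch keeping the placeholder paths above; the variable names and the use of FileTools/LibraryTools are my own choices for illustration, not necessarily the exact code I run):

SOURCE_FILE       := "//.../foo.mw":                              # captured full name of the source worksheet
MODIFICATION_TIME := FileTools:-ModificationTime(SOURCE_FILE):    # its last modification time
AUTHOR            := "...":                                       # whatever extra information is useful
LibraryTools:-Create("//.../foo.mla"):                            # (re)create the archive
LibraryTools:-Save('foo', 'SOURCE_FILE', 'MODIFICATION_TIME', 'AUTHOR', "//.../foo.mla"):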

@itsme 

You're right, something like 
ps -ef | grep -i "maple" | grep -v grep
doesn't do the job as tasklist does.

I remember having used "wmctrl -l" on a Linux workstation to get the names of windows (plus extraction of those names with awk). I never tried it to get the name of the active Maple window, but I think it would be worth trying. Unfortunately it doesn't work on my Mac.
Unfortunately it doesn't work on my Mac.

There is also the Linux xwininfo command, but I think it requires you to point to the window you want to get the name of.
