
## If I understand correctly your problem, ...

If I understand your problem correctly, you want to assess the value of y2[n] = F(x2[n]) for each n=1..maxx.

If so, here is a solution.
I have no doubt that someone else will provide you a more concise/elegant solution.

Sorry, I still get this ?!*&?!!  error when I try to load the content of the worksheet
Maple Worksheet - Error
Failed to load the worksheet /maplenet/convert/Spline.mw .

## The simplest form of a third order equat...

The simplest form of a third order equation is Z^3+p*Z+q = 0.
This form can always be obtained by elementary transformations of the general equation a*Z^3+b*Z^2+c*Z+d = 0.

It's well known that the nature of the roots of the equation Z^3+p*Z+q = 0 (either 3 pairwise distinct real roots, or 1 double real root plus a distinct simple one, or 1 real root plus 2 complex conjugate roots) depends on the sign of the quantity R = 4*p^3+27*q^2 (the opposite of the discriminant).
Maple does not present the solution this way, but always presents the three roots in complex form.
This means that these forms, which are general enough to cover the three situations R < 0, R = 0 and R > 0, are complicated, except in trivial situations (for instance q=0 or p=0).
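The reduction to the depressed form and the sign test on R can be sketched outside Maple as well; here is a minimal Python illustration (the function names are mine, purely illustrative):

```python
def depressed(a, b, c, d):
    # substitute A = Z - b/(3*a) to turn a*A^3 + b*A^2 + c*A + d
    # into the monic depressed form Z^3 + p*Z + q
    p = (3*a*c - b**2) / (3*a**2)
    q = (2*b**3 - 9*a*b*c + 27*a**2*d) / (27*a**3)
    return p, q

def root_nature(p, q):
    # the sign of R = 4*p^3 + 27*q^2 decides the nature of the roots
    R = 4*p**3 + 27*q**2
    if R < 0:
        return "three pairwise distinct real roots"
    if R == 0:
        return "a double real root plus a simple real one"
    return "one real root plus two complex conjugate roots"

print(depressed(1, 0, -3, 2))   # (-3.0, 2.0)
print(root_nature(-3, 2))       # R = 0 here: indeed x^3-3x+2 = (x-1)^2*(x+2)
```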

 > solve(x^3+p*x+q, x)
 (1)
 >

Now go back to your problem.
One can show (see the attached file below) that the solutions with respect to B can be written B1=f(A) and B2=-f(A).
Injecting these solutions into the remaining equation "in A" results in a third order equation of the form a*A^3+b*A^2+c*A+d = 0.
It's then easy to identify the coefficients p and q of the standard form Z^3+p*Z+q = 0 (file below).

This means the "Z solutions" Z1, Z2, Z3 are of the form given in the file ThirdOrder.mw.
The "A solutions" are An = Zn - b/(3*a).

When the standard third order equation is solved for Z, the expressions one obtains for Z1, Z2, Z3 are still rather simple.
So are the corresponding values of A1, A2, A3.

The difficulties come when the roots B1 and B2 are computed from f(An) (last line of the file below)

Unless you can specify yourself the sign of the discriminant R and do some work by hand, it doesn't seem possible to obtain simpler expressions of the solution (see also Edgardo's answer)

SimplifiedForm.mw

• You want to run your code by clicking on some icon which launches a graphical user interface.
Then use Maplets, for instance.

• You want to run a .mw file without opening a Maple session first.
Then use the command-line cmaple (Windows) or maple (Linux/OSX) (with all its limitations concerning plots).

• You want to run a .mw file from another one, as if the latter were a procedure.
Use DocumentTools:-RunWorksheet.

• You want your code to be executed when the worksheet is opened.
Look at the entry autoexecutedisabled in the help pages.

• Any other case...?

## @Harry Garst Sorry for this late an...

@Harry Garst

Sorry for this late answer, but I was busy with some other stuff and I just forgot this thread.
Concerning the EM algorithm to identify the (known number of) components of a mixture of not necessarily Gaussian distributions, I wrote something for personal use some time ago.
This is just an implementation of the classical EM algorithm (without jumps to account for situations with an unknown number of components).
If you're interested I can deliver it after some modifications (and comments). But I'm not sure it will fit your educational-purpose requirements.

The R code for tetra/polychoric correlations is rather long and complex to translate into Maple (package psych, function tetrachoric). I will try to do something simpler, at least for tetrachoric correlations.
I'll be in touch in the next few days.

For now, here is a "raw" version of the EM algorithm I use

 > restart:
 > with(Statistics): local D; Digits := 20: K  := 3:

STEP 1: Simulate a TrueMixture of 3 random variables

 > NbOfComponents := 3:
   TrueMixture := table([
     # 1 = table([law = ExponentialDistribution, weight = 0.5, param = [4]       ]),
     # 1 = table([law = NonCentralChiSquare    , weight = 0.4, param = [3, 2]    ]),
     # 1 = table([law = MoyalDistribution      , weight = 0.4, param = [-1, 1]   ]),
       1 = table([law = NormalDistribution     , weight = 0.4, param = [0, 1]    ]),
       2 = table([law = NormalDistribution     , weight = 0.4, param = [3, 1]    ]),
       3 = table([law = NormalDistribution     , weight = 0.2, param = [10, 3]   ])
   ]);
   for k from 1 to K do
      G__||k := RandomVariable(TrueMixture[k][law](op(TrueMixture[k][param]))):
   end do:
   # The sampling can be done in a wide variety of ways.
   # The simplest, non purely stochastic, method is to draw a sample
   # for each G__k proportional to its weight:
   N    := 10^4:
   P    := [seq(TrueMixture[k][weight], k=1..NbOfComponents)];
   NS   := floor~(N *~ P);
   S__1 := Sample(G__1, NS[1]):
   S__2 := Sample(G__2, NS[2]):
   S__3 := Sample(G__3, NS[3]):
   S    := < seq((S__||k)^+, k=1..K) >;
   Histogram(S):
   for k from 1 to K do
      o__||k := TrueMixture[k][weight]:
      m__||k := op(1, TrueMixture[k][param]):
      s__||k := op(2, TrueMixture[k][param]):
   end do:
 (1)

STEP 2: Estimate the parameters of the mixture

 > mroll := rand(-5.0 .. 15.0); sroll := rand(0 .. 10.0)
 (2)
 > # Initialize Mixture with the correct distributions
   MaxMix := 3:
   Mixture := table([
     seq(k = table([law = TrueMixture[k][law], weight = 1/MaxMix, param = [mroll(), sroll()] ]), k=1..MaxMix)
   ]):
   # EM algorithm
   choix := combinat:-randperm([$(1..N)])[1..1000]:
   verbose := false:
   #-------------------------------------------------------------
   # Initialisation
   # (could be done differently)
   for k from 1 to MaxMix do
      o__||k := Mixture[k][weight]:
      m__||k := Mixture[k][param][1]:
      s__||k := Mixture[k][param][2]:
   end do:
   if verbose then print("m = ", seq(m__||k, k=1..MaxMix)) end if;
   if verbose then print("s = ", seq(s__||k, k=1..MaxMix)) end if;
 > #-------------------------------------------------------------
   # algo.
   printf(cat(seq("-------------------------", k=1..MaxMix), "---\n"));
   printf(cat(seq(cat("   |     Component ", k, "     "), k=1..MaxMix), "  |\n"));
   printf(cat(seq("-------------------------", k=1..MaxMix), "---\n"));
   printf(cat(seq("   | weight  par 1  par2 ", k=1..MaxMix), "  |\n"));
   printf(cat(seq("-------------------------", k=1..MaxMix), "---\n"));
   R := 20:
   for r from 1 to R do
      for k from 1 to MaxMix do
         pdf    := t -> Mixture[k][weight] * PDF(Mixture[k][law](op(Mixture[k][param])), t);
         L__||k := pdf~(S[choix]);
      end do:
      LL := `+`~(seq(L__||k, k=1..MaxMix));
      if verbose then print(seq(L__||k, k=1..MaxMix), LL) end if;
      for k from 1 to MaxMix do
         post__||k := L__||k /~ LL;
      end do:
      if verbose then print(seq(post__||k, k=1..MaxMix)) end if;
      for k from 1 to MaxMix-1 do
         o__||k := Mean(post__||k)[1]
      end do:
      o__||MaxMix := 1-add(o__||k, k=1..MaxMix-1):
      if verbose then print("o = ", seq(o__||k, k=1..MaxMix)) end if;
      for k from 1 to MaxMix do
         m__||k := add(post__||k *~ S[choix]) /~ add(post__||k);
      end do:
      if verbose then print("m = ", seq(m__||k, k=1..MaxMix)) end if;
      for k from 1 to MaxMix do
         s__||k := sqrt(add(post__||k *~ (S[choix] -~ m__||k)^~2) /~ add(post__||k));
      end do:
      if verbose then print("s = ", seq(s__||k, k=1..MaxMix)) end if;
      for k from 1 to MaxMix do
         if numelems(Mixture[k][param])=1 then
            Mixture[k][param] := m__||k;
         else
            Mixture[k][param] := [m__||k, s__||k];
         end if
      end do:
      if verbose then print(); print() end if;
      printf(
             cat("%2d ", seq("  %-23a", k=1..MaxMix), "\n"),
             r,
             op(evalf[4]([seq([o__||k, op(Mixture[k][param])], k=1..MaxMix)]))
      )
   end do:
 ------------------------------------------------------------------------------
   |     Component 1        |     Component 2        |     Component 3       |
 ------------------------------------------------------------------------------
   | weight  par 1  par2    | weight  par 1  par2    | weight  par 1  par2   |
 ------------------------------------------------------------------------------
  1   [.2836, 4.398, 4.942]    [.2630, 3.411, .8269]    [.4534, 1.707, 3.358]
  2   [.3541, 4.668, 4.999]    [.2497, 3.125, .8237]    [.3962, 1.225, 2.419]
  3   [.3373, 5.380, 4.960]    [.2579, 2.997, .8450]    [.4048, .8177, 1.760]
  4   [.3169, 5.947, 4.793]    [.2726, 2.974, .8691]    [.4104, .5431, 1.444]
  5   [.3028, 6.301, 4.652]    [.2919, 2.981, .8756]    [.4053, .3468, 1.261]
  6   [.2936, 6.525, 4.552]    [.3078, 2.979, .8683]    [.3986, .2152, 1.141]
  7   [.2885, 6.660, 4.482]    [.3192, 2.965, .8593]    [.3923, .1296, 1.064]
  8   [.2862, 6.734, 4.436]    [.3269, 2.945, .8551]    [.3869, .7433e-1, 1.017]
  9   [.2853, 6.773, 4.406]    [.3322, 2.925, .8557]    [.3825, .3820e-1, .9876]
 10   [.2850, 6.793, 4.387]    [.3360, 2.907, .8593]    [.3789, .1386e-1, .9693]
 11   [.2851, 6.804, 4.375]    [.3389, 2.893, .8642]    [.3761, -.3123e-2, .9573]
 12   [.2851, 6.810, 4.366]    [.3410, 2.881, .8692]    [.3738, -.1536e-1, .9492]
 13   [.2852, 6.814, 4.360]    [.3427, 2.872, .8740]    [.3721, -.2442e-1, .9435]
 14   [.2853, 6.817, 4.356]    [.3440, 2.864, .8781]    [.3707, -.3126e-1, .9394]
 15   [.2853, 6.819, 4.353]    [.3451, 2.858, .8816]    [.3696, -.3649e-1, .9363]
 16   [.2854, 6.821, 4.350]    [.3459, 2.854, .8846]    [.3687, -.4055e-1, .9340]
 17   [.2854, 6.822, 4.348]    [.3466, 2.850, .8870]    [.3680, -.4371e-1, .9323]
 18   [.2854, 6.823, 4.347]    [.3471, 2.847, .8889]    [.3675, -.4618e-1, .9309]
 19   [.2854, 6.824, 4.345]    [.3476, 2.845, .8905]    [.3670, -.4813e-1, .9299]
 20   [.2854, 6.825, 4.344]    [.3479, 2.843, .8918]    [.3667, -.4967e-1, .9290]
 > FormalMixture := evalf[4](add(o__||k * Mixture[k][law](op(evalf[4](Mixture[k][param]))), k=1..MaxMix));
 (3)
 > mix := add(Mixture[k][weight] * PDF(Mixture[k][law](op(Mixture[k][param])), t), k=1..MaxMix):
   plots:-display(
      Histogram(S[choix]),
      plot(mix, t=min(S[choix])..max(S[choix]), color=red, thickness=3),
      seq(plot(Mixture[k][weight] * PDF(Mixture[k][law](op(Mixture[k][param])), t),
               t=min(S[choix])..max(S[choix]), color=gold), k=1..MaxMix)
   )
 >

## The error message indicates you cannot w...

The error message indicates you cannot write 1/F__i = 0 .. 10 as a range.

Doing this you will get the clear error message
Error, (in Plot:-Inequality) additional variables not matching range variables found: {A__c, Z__b, Z__t, k__b, k__t, `(e__0)__mp`, M__max__B, M__min__B, sigma__ci, sigma__cs, sigma__ti, sigma__ts}
saying there exist non-instantiated quantities in the inequalities to plot: just give them values.

As suggested in the code below, use Explore to help you give values to the remaining indets.

 > restart:
 > with(plots):
 > eqns := {
       F__i*(1-e__0/k__b)/A__c+M__min__B/Z__t >= sigma__ti,
       .8*F__i*(1-e__0/k__t)/A__c-M__max__B/Z__b >= sigma__ts,
       .8*F__i*(1-e__0/k__b)/A__c+M__max__B/Z__t <= sigma__cs,
       F__i*(1-e__0/k__t)/A__c-M__min__B/Z__b <= sigma__ci,
       abs(e__0) <= `(e__0)__mp`
     };
   inequal(eqns, F__i = 0.1..100, e__0 = -5 .. 5, color = "Niagara 2")
 > indets(eqns, name) minus {F__i, e__0}
 (1)
 > Explore(inequal(eqns, F__i = 0.1..100, e__0 = -5 .. 5, color = "Niagara 2"))
 >

## By change of coordinates

By change of coordinates

Change_coordinates.mw

 > restart;
 > g := (x, y) -> min((3+((x-y)^(2))/(10)+(abs(x+y))/(sqrt(2))),(-abs(x-y)+(7)/(sqrt(2))));
 (1)
 > # The terms (x-y) and (x+y) suggest changing frame [O, x, y] into frame [O, u, v]
   # by a rotation of angle Pi/4.
   xy2uv := [u=(x+y)/sqrt(2), v=(x-y)/sqrt(2)];
   uv2xy := op(solve(xy2uv, [x, y]));
 (2)
 > G := unapply(simplify(subs(uv2xy, g(x, y))), [u, v]);
 (3)
 > q := 0; H := (u, v) -> Heaviside(G(u, v)-q);  # means H(u, v) is either 0 or 1
 (4)
 > L := 10: plots:-contourplot(H(u, v), u=-L..L, v=-L..L, contours=[0.999], filled=true, grid=[100, 100]);  # blue = 1, red = 0
 > # let
   f := x -> exp(-x^2/2)/sqrt(2*Pi);
   # then the integrand is
   phi := (x, y) -> 'h(x, y)' * f(x) * f(y):
   # apply the rotation
   Phi := (u, v) -> simplify('H(u, v)' * subs(uv2xy, f(x)*f(y))):
   Phi(u, v)
 (5)
 > # then
   #    p := int(int(h(x, y)*exp((-x^2-y^2)*(1/2))/(2*Pi), y = -infinity .. infinity), x = -infinity .. infinity);
   # just becomes
   #    p := int(int(Phi(u, v), u = -infinity .. infinity), v = -a..a);
   # where a is the half width of the blue strip.
   #
   # Moreover, H(u, v)=1 inside this blue strip, so
   p := int(int(f(u)*f(v), u = -infinity .. infinity), v = -a..a);  # here "a" is still unknown
 (6)
 > H := unapply(subs(uv2xy, h(x, y)), [u, v])
 (7)
 > # Find the position of the horizontal lines which bound the blue strip
   solutions := [solve(G(u, v)=0, v)];
   strip := sort(select(is, solutions, real));
   eval(p, a=strip[2]);
   evalf(%);
 (8)
 >
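The key fact used above, that the rotated coordinate v = (x-y)/sqrt(2) is again standard normal so the strip probability is simply 2*Phi(a)-1, is easy to check numerically. A small Python sketch (the value a = 1.5 is purely illustrative; in the worksheet "a" comes from solving G(u, v) = 0):

```python
import math, random

def phi(z):
    # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# For independent standard normals x and y, v = (x - y)/sqrt(2) is again
# N(0, 1), so the probability of the strip |v| < a is 2*Phi(a) - 1.
random.seed(1)
a = 1.5        # illustrative half-width, NOT the one solved for in the worksheet
n = 200_000
hits = sum(abs(random.gauss(0, 1) - random.gauss(0, 1)) / math.sqrt(2) < a
           for _ in range(n))
print(round(hits / n, 3), round(2*phi(a) - 1, 3))  # both close to 0.866
```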

## I'm going to bed, here are some elem...

I'm going to bed; here are some elements to answer points 1 to 4.
Point 5 is obviously ExcelTools:-Export(....).

I have no idea of your knowledge about all this stuff, so pardon me if reading this worksheet makes you feel I'm condescending.

 > restart:
 > interface(version)
 (1)
 > with(Statistics):

A simple model to set the framework of Ordinary Least Squares (OLS):
X1 and X2 are deterministic variables
B1 and B2 are parameters of unknown values

E is a random error (with some ad hoc properties [homoskedasticity] I assume verified here [OLS framework again])
Y = B1*X1 + B2*X2 + E

Given a collection of triples (x1[n], x2[n], y[n]), find approximations b1 and b2 of B1 and B2.

Step 1: simulation

 > N := 100:
   SampleOfX__1 := Sample(Uniform(0, 10), N):  # just to "create" x1 points in an arbitrary way
   SampleOfX__2 := Sample(Uniform(0, 10), N):  # just to "create" x2 points in an arbitrary way
   Error := Sample(Normal(0, 10), N);  # usual choice (= unbiased observation error + homoskedasticity)
   B1 := 3; B2 := -4:  # remember these values are UNKNOWN in practical situations
   ExactResponse := B1 *~ SampleOfX__1 +~ B2 *~ SampleOfX__2:
   ObservedResponse := ExactResponse +~ Error:
 (2)
 > # Rotate the plot to verify the points do not lie on a single plane (because of the additive error)
   ScatterPlot3D(< SampleOfX__1, SampleOfX__2, ObservedResponse>^+, symbol=solidcircle, symbolsize=20);

Estimation of B1 and B2

 > OLS_result := LinearFit(                          [x__1, x__2],                          < SampleOfX__1, SampleOfX__2>^+,                          ObservedResponse^+,                          [x__1, x__2],                          output=[parametervalues, leastsquaresfunction, residualstandarddeviation]                        )
 (3)

Interpretation:
The best linear unbiased estimate (BLUE) of B1 equals 3.24372...
The best linear unbiased estimate (BLUE) of B2 equals -3.39397...
The residual standard deviation has value 10.29...
This value characterizes the "spreading" of the points around the optimal OLS plane based on the BLUE estimators above.

 > plots:-display(    ScatterPlot3D(< SampleOfX__1, SampleOfX__2, ObservedResponse>^+, symbol=solidcircle, symbolsize=20),    plot3d(OLS_result[2], x__1=0..10, x__2=0..10, color=blue, transparency=0)  # adjust transparency and/or rotate );

The main problem with the residualstandarddeviation is that it only gives information about how well the "BLUE" plane represents the data it was built on.
This error is therefore named the "REPRESENTATION error" or "LEARNING error".

What we are often interested in is to assess the representativeness of the model OUTSIDE of the points it was constructed on,
that is, to assess the GENERALIZATION error.
The classical strategy to do this is to use a resampling procedure.
For instance:
1/ Leave One Out (LOO), where all the points but one are used to build the optimal model while the discarded one is used to assess the generalization error.
Repeat this for all the points and assemble these N estimators to obtain the desired generalization error.

2/ Leave k Out (LkO), similar to LOO but with k points discarded (N!/k!/(N-k)! local estimators can be obtained that way)

3/ Bootstrap (simplest to refer to the corresponding Maple help page)

4/ Cross Validation, or CV (close to LkO): split the DATA base (the N triples here) into a LEARNING base used to build the optimal model, plus a TEST or GENERALIZATION base used to assess its generalization error.
As always, repeat the process.

About the 25%-75% splitting I have a few things to say; some others will use 1/3-2/3.
There is no strict rule beyond the fact that the model must remain constructible on the chosen fraction of the DATA base.
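The split-and-repeat procedure described above (learn on a 25% subset, test on the remainder, aggregate over many random splits) can be sketched as follows; `linear_fit` and `split_cv_error` are hypothetical helper names, and the quadratic-mean aggregation mirrors what the worksheet does later:

```python
import random

def linear_fit(X, y):
    # OLS for the no-intercept model y ~ b1*x1 + b2*x2, via the 2x2 normal equations
    s11 = sum(x[0]*x[0] for x in X)
    s12 = sum(x[0]*x[1] for x in X)
    s22 = sum(x[1]*x[1] for x in X)
    t1  = sum(x[0]*yi for x, yi in zip(X, y))
    t2  = sum(x[1]*yi for x, yi in zip(X, y))
    det = s11*s22 - s12*s12
    return (s22*t1 - s12*t2) / det, (s11*t2 - s12*t1) / det

def split_cv_error(X, y, learning_ratio=0.25, rounds=100, seed=0):
    # repeated random learning/test splits: fit on the learning part,
    # compute the RMSE on the test part, then aggregate quadratically
    rng = random.Random(seed)
    n = len(X)
    m = round(learning_ratio * n)
    errs = []
    for _ in range(rounds):
        idx = list(range(n))
        rng.shuffle(idx)
        learn, test = idx[:m], idx[m:]
        b1, b2 = linear_fit([X[i] for i in learn], [y[i] for i in learn])
        se = [(y[i] - b1*X[i][0] - b2*X[i][1])**2 for i in test]
        errs.append((sum(se) / len(se)) ** 0.5)
    return (sum(e*e for e in errs) / len(errs)) ** 0.5
```

With data simulated as in the worksheet (Y = 3*X1 - 4*X2 + N(0, 10)), the returned generalization error should land near the noise standard deviation of 10.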

 > # Simulation
   Data_base := < SampleOfX__1, SampleOfX__2, ObservedResponse>^+:
   Learning_ratio := 0.25:
   Learning_size  := round(Learning_ratio*N);
   Test_size      := N - Learning_size;
   KeptForLearning := combinat[randperm]([$1..N])[1..Learning_size];  # randomly select 25% of the data base
   KeptForTest     := convert({$1..N} minus {KeptForLearning[]}, list);
 (4)
 > Learning_base := Data_base[KeptForLearning, ..]; Test_base     := Data_base[KeptForTest, ..]    ;
 (5)
 > # Learnt (= trained) model
   CV_result := LinearFit(
                            [x__1, x__2],
                            Learning_base,
                            [x__1, x__2],
                            output=[parametervalues, leastsquaresfunction, residualstandarddeviation]
                          )
 (6)
 > # Its predictions
   CV_model := unapply(CV_result[2], [x__1, x__2]);
   CV_predictions := Vector[column](Test_size, n -> op(CV_model(entries(Test_base[n, 1..2]))))
 (7)
 > # Its generalization error:
   CV_Gen_Error := StandardDeviation(CV_predictions -~ Test_base[..,3])
 (8)

A value not very far from the representation error obtained in the OLS test, but this is related to the fact that the TRUE model is ALSO the regression model (you could change the TRUE model by writing for instance Y = B1*X1 + B2*X2^2 + E and keep the regression model Y = B1*X1 + B2*X2 unchanged: you will then see large differences between the training and the test errors).

Let's generate now R realizations of this test error:

 > R := 100:  # it's just illustrative
   CV_All := NULL:
   for r from 1 to R do
     KeptForLearning := combinat[randperm]([$1..N])[1..Learning_size];
     KeptForTest     := convert({$1..N} minus {KeptForLearning[]}, list);
     Learning_base   := Data_base[KeptForLearning, ..];
     Test_base       := Data_base[KeptForTest, ..];
     CV_result       := LinearFit(
                            [x__1, x__2],
                            Learning_base,
                            [x__1, x__2],
                            output=[parametervalues, leastsquaresfunction, residualstandarddeviation]
                       ):
     CV_model        := unapply(CV_result[2], [x__1, x__2]);
     CV_predictions  := Vector[column](Test_size, n -> op(CV_model(entries(Test_base[n, 1..2])))):
     CV_Gen_Error    := StandardDeviation(CV_predictions -~ Test_base[..,3]):
     CV_All := CV_All, [CV_result[], CV_Gen_Error]:
   end do:
   CV_All := [CV_All]:
   All_Gen_Errors := op~(-1, CV_All);
 (9)
 > # the best estimate of the generalization error of a bilinear model trained on 25% of the data base is
   GENERALIZED_Error := sqrt(Mean(All_Gen_Errors^~2));
 (10)
 > # Matrix of all the BLUE b1 and b2
   #
   # Scatter plot of all the BLUEs from all the test bases, plus (red asterisk) the OLS BLUE
   CV_BLUE := convert(op~(1, CV_All), Matrix)^+;
   plots:-display(
      plot(CV_BLUE, style=point, color=black),
      plot([[entries(OLS_result[1], nolist)]], color=red, symbol=asterisk, symbolsize=30, style=point)
   );
 > # Let's build a very simple aggregated model
   Aggregated_Model := Mean(CV_BLUE[..,1]) * x__1 + Mean(CV_BLUE[..,2]) * x__2;
 (11)

In some sense, this aggregated model can be said to have a generalization error equal to

 > Predictions_of_Aggregated_Model := Vector[column](N, n -> op(CV_model(entries(Data_base[n, 1..2]))));
   Representation_error_of_Aggregated_Model := StandardDeviation(Predictions_of_Aggregated_Model -~ Data_base[.., 3]) * (N-1) / (N-3)
 (12)
 >

## Kitonum has been  faster th...

Kitonum has been faster than me!!!

I was prepared to delete my reply, which is very close to Kitonum's, and with whom I agree on a lot of points...
except for the final result which, for a reason I do not know, is not the same.
So I finally decided to send "my" proposal... maybe there is an obvious typo in it, or something Kitonum may be able to explain?

 > restart
 > k := T -> 2/cosh(2/T)/coth(2/T)
 (1)
 > local delta, e:
 > delta := T -> sqrt(1-k(T)^2*sin(theta)^2)
 (2)
 > e := T -> -2*tanh(2/T)*diff(k(T)/2/Pi*Int(sin(theta)^2/delta(T)/(1+delta(T)), theta=0..Pi), T)
 (3)
 > ee := evalf(e(T)):  # e(T) does the differentiation according to T and evalf replaces Int by int
 > eval(ee, T=1):      # substitute T by 1 in ee
 > evalf(%);           # compute the result
 (4)
 >

## You made a confusion between  c[1] ...

You made a confusion between
c[1] := 2; c[2] := 5;
and, further on,
c := (beta[1]*s(t)*p(t).(lambda[2](t)-lambda[1](t)))/(w[2]*(a+p(t)))+(lambda[2](t).e(t)+(i(t)+alpha.e(t)).lambda[3](t)-(gamma.i(t)+p(t)).lambda[4](t))/w[2];

Then, in the lambda[2](t) ODE, the first term of the rhs contains c[1] and appears as (beta[1]*s(t)*p(t)......./w[2]) instead of (I guess) 2.
Likewise, in the lambda[2](t) ODE, the first term of the rhs contains c[2] and appears as (beta[1]*s(t)*p(t)......./w[2]) instead of (I guess) 5.
This explains the error you get, Error, (in fproc) unable to store '-1.*HFloat(0.0)[1]': the HFloat(0.0)[1] refers to (beta[1]*s(t)*p(t)......./w[2]).

It's up to you to correct this.
For example by writing
C := (beta[1]*s(t)*p(t).(lambda[2](t)-lambda[1](t)))/(w[2]*(a+p(t)))+(lambda[2](t).e(t)+(i(t)+alpha.e(t)).lambda[3](t)-(gamma.i(t)+p(t)).lambda[4](t))/w[2];
u[2] := min(max(0, C), 1);

Once done it should work better, but don't be in a hurry: with maxmesh=2400 Maple takes its time to provide a solution (about 5 minutes).

## Does this suit you? (don't worry about...

Does this suit you?
(Don't worry about the expression of A: I just copy-pasted the content of your question.)

PS: Given a graph G, the command AdjacencyMatrix(G) returns the equivalent of your matrix A, but I do not know of a Maple procedure which constructs a graph G given its adjacency matrix.
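The edge-extraction idea (scan the matrix and keep the positions holding a 1) is language-agnostic; here is a small Python sketch with a hypothetical `edges_from_matrix` helper. Note it keeps ordered pairs, whereas the Maple code below builds undirected edges via the set {x[]}:

```python
def edges_from_matrix(A):
    # collect all (row, col) positions holding a 1, using 1-based
    # indexing to mirror Maple's indices(A)
    return {(i, j) for i, row in enumerate(A, start=1)
                   for j, a in enumerate(row, start=1) if a == 1}

A = [[0, 0, 1],
     [1, 0, 0],
     [0, 0, 0]]
print(sorted(edges_from_matrix(A)))  # [(1, 3), (2, 1)]
```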

 > restart
 > with(GraphTheory):
 > A := Matrix(4, 10, {(1, 1) = 0, (1, 2) = 0, (1, 3) = 0, (1, 4) = 0, (1, 5) = 0, (1, 6) = 0, (1, 7) = 0, (1, 8) = 0, (1, 9) = 0, (1, 10) = 0, (2, 1) = 0, (2, 2) = 0, (2, 3) = 1, (2, 4) = 1, (2, 5) = 0, (2, 6) = 1, (2, 7) = 0, (2, 8) = 0, (2, 9) = 0, (2, 10) = 1, (3, 1) = 1, (3, 2) = 1, (3, 3) = 0, (3, 4) = 0, (3, 5) = 1, (3, 6) = 0, (3, 7) = 1, (3, 8) = 1, (3, 9) = 1, (3, 10) = 0, (4, 1) = 0, (4, 2) = 0, (4, 3) = 0, (4, 4) = 0, (4, 5) = 0, (4, 6) = 0, (4, 7) = 0, (4, 8) = 0, (4, 9) = 0, (4, 10) = 0})
 (1)
 > edges := map(x -> if A[op(x)] = 1 then {x[]} end if, {indices(A)});
 (2)
 > G := Graph(edges);
 (3)
 > DrawGraph(G, style=spring);
 >

## @a_simsim A third way is "tabl...

@a_simsim

A third way is "table"; maybe easier to manipulate (for instance to add or remove a "field").

Table.mw

PS: a "field" can itself be a table, and so on

## Personally I prefer using records. I find ...

Personally I prefer using records.

I find them more readable than arrays (Tom's proposal), and their structure is less artificial than the one obtained with arrays.
Beyond that, I don't have enough skill in Maple to tell you which of these two structures is more efficient in terms of memory usage or manipulation time.

Last but not least, as you mention Matlab: a record is closer to a struct than an array is.

 > restart:
 > #              the
   #              name
   #              you
   #              want
   #               |
   #               |
   #               V
   r := Record(
                'MyArray' = Array(1..3, 1..3, 1..3),
                'data'    = Vector([2,1,3,5,6]),
                'status'  = true,
                'x'       = 123.456,
                'n'       = 17,
                'name'    = "astring",
                'MyVect'  = Vector([true, false, true, true, true, false]),
                'MyMat'   = Matrix(7,7, symbol=x),
                'MyPlot'  = plot(x^2, x=0..5)
               ):
 > r:-data
 (1)
 > r:-n
 (2)
 > r:-MyPlot
 >

## No, Maple hasn't. What are you looking...

No, Maple hasn't.

What are you looking for?

If you just need to sample a Levy(m, s) distribution (m = location parameter, s = dispersion parameter [classical stat/prob notations]; see for instance https://en.wikipedia.org/wiki/Lévy_distribution where m is denoted mu and s is denoted c) you can use this:

(some extra material is given to help you define a LevyDistribution)
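The sampling trick used below relies on the fact that if U ~ Uniform(1/2, 1) then Phi^(-1)(U) is distributed as |N(0, 1)|, so m + s/Phi^(-1)(U)^2 follows Levy(m, s). A minimal Python sketch of the same idea (the bisection-based `inv_phi` is a deliberately naive stand-in for a proper quantile function):

```python
import math, random

def inv_phi(p, lo=-10.0, hi=10.0):
    # inverse standard normal CDF by bisection (simple, not fast)
    for _ in range(80):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def levy_sample(m, s, rng):
    # U ~ Uniform(1/2, 1)  =>  Phi^{-1}(U) ~ |N(0, 1)|  =>  m + s/Phi^{-1}(U)^2 ~ Levy(m, s)
    u = rng.uniform(0.5, 1.0)
    return m + s / inv_phi(u) ** 2

rng = random.Random(0)
sample = [levy_sample(3, 2, rng) for _ in range(10_000)]
# sanity check: the median of Levy(m, s) is m + s/(2*erfcinv(1/2)^2) ~ m + 2.198*s
print(sorted(sample)[len(sample) // 2])
```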

 > restart:
 > with(Statistics):
 > U := RandomVariable(Uniform(1/2, 1)):
 > N  := 10^4: SU := Sample(U, N);
 (1)
 > m := 3; s := 2;
 (2)
 > SL := m +~ s /~ ( Quantile~(Normal(0, 1), SU) )^~2 ;
 (3)
 > Histogram(log[10]~(SL), frequencyscale=absolute, minbins=100)

For comparison, here is the result returned by R with N=10^4, m=3 and s=2

 > #

 > m := 'm': s := 's':
   pdf := unapply(piecewise(x<0, 0, sqrt(s/2/Pi)*exp(-s/2/(x-m))/(x-m)^(3/2)), x);
   cdf := unapply(erfc(sqrt(s/2/(x-m))), x);
 (4)
 > Levy := Distribution(PDF=(x->pdf(x)), CDF= (x->cdf(x)), Conditions = [s > 0])
 (5)
 > X := RandomVariable(Levy)
 (6)
 > Levy__mean := Mean(X);
 (7)
 > Levy__median := Median(X);
 (8)
 > evalf(subs({m=0, s=1}, Levy__median)); # compare with wikipedia evalf(subs({m=0, s=1}, s/2*(1/erfc(1/2))^2));
 (9)
 > # plot of the pdf (wiki says the mode, in case m=0, equals s/3 = 1/3 here)
   plots:-display(
     plot(subs({m=0, s=1}, PDF(X, x)), x=0..2, gridlines=true),
     plot([[1/3, 0], [1/3, 0.5]], color=blue)
   );
 > ms := {m=0, s=1}: MyLevy := Distribution(PDF=(x->subs(ms, pdf(x))), CDF= (x->subs(ms, cdf(x))))
 (10)
 > MyX := RandomVariable(MyLevy):
 > A := Sample(MyX, 10^4, method = [envelope, range = 0.00001..6]):
 > plots:-display(    Histogram(A, minbins=100, color=blue, transparency=0.5, view=[0..2, default]),    plot(subs({m=0, s=1}, PDF(X, x)), x=0..2, color=red, thickness=3,         title="A scaling problem I've not solved yet\n", font=[Roman, bold, 14]) )
 >

## Here are a few examples.

Here are a few examples.

PS: All RandomVariables have floating-point outcomes. If you want to generate "true integer" outcomes, use random instead ... but you will lose the concept of a random variable.
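The "true integer" route mentioned in the PS, using a plain random number generator rather than a RandomVariable, looks like this in Python (illustrative only):

```python
import random
from collections import Counter

rng = random.Random(42)
# roll two fair dice with a plain RNG: outcomes are genuine integers
rolls = [rng.randint(1, 6) + rng.randint(1, 6) for _ in range(100_000)]
counts = Counter(rolls)
# the most frequent sum of two fair dice is 7, with probability 6/36 = 1/6
print(counts.most_common(1)[0][0])  # → 7
```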

RollingDice.mw

 > restart:
 > with(Statistics):
 > Dice1 := RandomVariable(DiscreteUniform(1, 6)); Dice2 := RandomVariable(DiscreteUniform(1, 6));
 (1)
 > # example 1:
   f1 := (Dice1, Dice2) -> Dice1 + Dice2:
   Sample(f1(Dice1, Dice2), 10^5):
   Histogram(%, title=f1('Dice1', 'Dice2'));
 > # example 2:
   f2 := (Dice1, Dice2) -> Dice1^2 + Dice2:
   Sample(f2(Dice1, Dice2), 10^5):
   Histogram(%, title=f2('Dice1', 'Dice2'));
 > # example 3:
   f3 := (Dice1, Dice2) -> exp(Dice1)*cos(Dice2):
   Sample(f3(Dice1, Dice2), 10^5):
   Histogram(%, title=f3('Dice1', 'Dice2'));
 > # Finally: a lot of symbolic computations can be done, for instance
   print~([seq(Mean__f||n, n=1..3)] =~ Mean~([seq(f||n(Dice1, Dice2), n=1..3)]))
 (2)
 >

## Like Tom, I'm not sure I correctly u...

Like Tom, I'm not sure I correctly understand your problem.
It seems to me you want to sample truncated Poisson distributions (between 0 and 10 inclusive) whose parameter lambda is itself a linear combination of random variables (am I right?).

Be careful: not all the values of lambda you create from your model are positive!!!
In the attached file I show you how to estimate the frequency of negative lambda values. This frequency goes to zero as N (the size of the vector lambda) tends to +infinity. For N=10 the ratio of negative lambda values is close to 1/4.
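Sampling a Poisson distribution truncated to 0..10 (what Maple's method=[discrete, range=0..10] presumably does; this is an assumed equivalent, not Maple's documented internals) can be sketched by inverse-CDF sampling on the restricted support:

```python
import math, random

def truncated_poisson_sample(lam, lo=0, hi=10, rng=None):
    # inverse-CDF sampling of a Poisson(lam) restricted to lo..hi
    rng = rng or random.Random()
    probs = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(lo, hi + 1)]
    u = rng.random() * sum(probs)  # renormalize to the truncated support
    acc = 0.0
    for k, p in zip(range(lo, hi + 1), probs):
        acc += p
        if u <= acc:
            return k
    return hi

rng = random.Random(7)
sample = [truncated_poisson_sample(3.0, rng=rng) for _ in range(20_000)]
print(min(sample), max(sample), sum(sample) / len(sample))  # support 0..10, mean near 3
```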

 > restart:
 > with(Statistics):
 > randomize():
 > N    := 10;
   x__0 := Vector[row](N, [1$N]);
   x__1 := Sample(Binomial(N, 0.4), N);
   x__2 := Sample(Normal(0, 1), N);
 (1)
 > local lambda:
 > lambda := unapply(add(beta__||k *~ x__||k, k=0..2), (seq(beta__||k, k=0..2)) );
 > B := (alpha, NMC) -> Sample(Poisson(alpha), NMC, method=[discrete, range=0..10])
 (3)
 > beta := (0.25, 0.5, 3.2); lambda~(beta);
 (4)
 > MC := 10:
   NumberOfNegativeLambdaValues := 0:
   U := NULL:
   for u in lambda~(beta) do
      if u > 0 then
        b := round~(convert(B(u, MC), list)):
        U := U, b[];
      else
        # printf("Error, parameter is negative (%a)\n", u):
        NumberOfNegativeLambdaValues := NumberOfNegativeLambdaValues + 1:
      end if;
   end do;
   printf("%2.f %% of the lambda have a negative value\n", evalf(NumberOfNegativeLambdaValues/MC)*100);
   DataSummary([U]);
   Histogram([U], discrete, minbins=N+1, thickness=20)
 0 % of the lambda have a negative value

REMARKS

The frequency of occurrences of negative lambda values can be easily determined

 > RV__0 := 1:
   RV__1 := RandomVariable(Binomial(N, 4/10));
   RV__2 := RandomVariable(Normal(0, 1));
   RV__3 := unapply( add(beta__||k *~ RV__||k, k=0..2), (seq(beta__||k, k=0..2)) )
 (5)
 > (Mean, StandardDeviation)(RV__3(seq(beta__||k, k=0..2)) );
 (6)
 > (Mean, StandardDeviation)(RV__3(beta) ); FrequencyOfNegativeLambda := Probability(RV__3(beta) <= 0);
 (7)
 >