Carl Love

28065 Reputation

25 Badges

13 years, 23 days
Himself
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are answers submitted by Carl Love

You'll get a much smoother plot if you let plot itself choose the abscissas (the values of i at which the fsolve is done). This is very easy to do:

F:= proc(i)
local
   x, ThetaBn:= (1/180)*i*Pi,
   s:= cos(2*ThetaBn)*x+(2*sin(ThetaBn)*sin(ThetaBn))*sin(x)
;
   180.0/Pi*fsolve(s = 0, x, 1 .. 6)
end proc:

plot(F, 50..85);
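For readers outside Maple, the core of F is just a one-dimensional root-find at each abscissa. Here is a minimal sketch in Python (the bisection method and tolerance are my own choices, standing in for fsolve):

```python
from math import cos, sin, pi

def F(i, lo=1.0, hi=6.0, tol=1e-12):
    """Root of cos(2*ThetaBn)*x + 2*sin(ThetaBn)^2*sin(x) = 0 on [lo, hi],
    found by bisection (assumes one sign change on the bracket), returned
    in degrees -- the same quantity the Maple procedure computes."""
    th = i*pi/180
    s = lambda x: cos(2*th)*x + 2*sin(th)**2*sin(x)
    a, b = lo, hi
    while b - a > tol:
        m = (a + b)/2
        if s(a)*s(m) <= 0:
            b = m
        else:
            a = m
    return 180/pi * (a + b)/2
```

A plotting library would then sample F over 50..85, just as plot does in Maple.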

For your second example, it's even easier to get plot to do most of the work because there's no fsolve:

alpha:= 
   sqrt(V^2-2*V*sin(theta)*(V*sin(theta)-1)+4*sin(theta)^2*(V*sin(theta))^2)
   /(-1/cos(theta)-2*cos(theta)*(V*sin(theta)-1))
;
plot(eval(180/Pi*alpha-140, theta= 70*Pi/180), V= 0.1..0.8);

In either case, there's no need for an array or any other explicit container structure to hold the numeric data.

Here is some simpler code:

IntegerTriangles:= proc(p::posint)
local x, y, `p/2`:= iquo(p,2)+1;
   {seq(seq([x, y, p-x-y], y= max(x, `p/2`-x)..iquo(p-x, 2)), x= 1..iquo(p,3))}
end proc
:

By using these loop bounds for the middle number y, the triangle inequality is guaranteed to hold, so there's no need to explicitly check whether p-x-y < x+y (as Kitonum did).
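For comparison, the same enumeration can be sketched in Python (a translation of the idea, not of Maple's seq semantics):

```python
def integer_triangles(p):
    """Triangles with integer sides x <= y <= z and perimeter p.
    The lower bound on y already enforces x + y > z = p - x - y,
    so no separate triangle-inequality test is needed."""
    half = p//2 + 1                       # same role as `p/2` above
    return {(x, y, p - x - y)
            for x in range(1, p//3 + 1)
            for y in range(max(x, half - x), (p - x)//2 + 1)}
```

For example, integer_triangles(12) yields {(2, 5, 5), (3, 4, 5), (4, 4, 4)}, matching a brute-force search.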

Where you have used irem (integer remainder), you meant to use iquo (integer quotient).

Using assume on a variable which will be assigned a value does nothing useful. In other words, assume is only effective for symbolic variables. If -- for debugging purposes -- you want to restrict the type of values that can be assigned to a variable, there is a way to do that such that an error will be thrown if the restriction is violated while you're in debugging mode and there'll be no overhead for the checking code when you're not. Plus, the checking code can be isolated into the declarations section. If that was your purpose for using assume, let me know.
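The Maple mechanism itself isn't shown here, but the general idea, type checks that are active only in a debugging mode, can be illustrated in Python, where assert statements vanish entirely under the -O flag (this example is my own, purely illustrative):

```python
def running_total(data):
    """Sum a list, with a debug-only restriction that the running total
    stays an integer.  With assertions enabled (the default) a violation
    raises AssertionError; under `python -O` the checks are compiled
    away, so there is zero overhead outside debugging."""
    total = 0
    for x in data:
        total += x
        assert isinstance(total, int), "total must remain an integer"
    return total
```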

Creating a sequence (as your code does with L), list, or set by adding elements one at a time is extremely inefficient due to a Maple idiosyncrasy of these container structures. (That same idiosyncrasy makes these structures more efficient for other purposes.) Kitonum shows one way around that: Use a table as the container structure. I show another: Use seq instead of a for loop.

Since your code never changes p, there's no need to copy the parameter n to the local p; hence there's no need for the local variable -- just use the parameter instead.

There are at least four anomalies[*1] in your code:

  1. There is a contradiction between your exposition and your code as to the condition for the product to be 0: The former says m <> l, while the latter says n <> l. Please clarify which is correct.
  2. It makes no sense for to be a local variable. Either make it a parameter to the procedure, or leave it undeclared.
  3. The `.` operator in your main line of computation should be ordinary multiplication, `*`.
  4. If phii is a procedure external to your code above, okay; otherwise it should be phi[n, m+s-2*j](t) (note the square brackets). In any case, this must be the final return value.

And while the following is not an error, it's an easily avoidable inefficiency: g (with parameters m, j, and s) and a should be defined external to multiply.

If I'm reading your exposition correctly, this multiply is non-commutative. Is that correct?

[*1]By anomaly I mean, in this context, something that is not manifestly an error but is suspicious as a potential source of error once this code is put into the broader context for which it's intended.

I don't trust solve in any situation where the number of equations is greater than the number of variables being solved for. In those cases, I use eliminate instead:

eliminate({x^2= 2, x^3=sqrt(2)^3}, {x});

The return value is always a list of two sets: the first contains the solutions, and the second the residual information (possibly empty). The residual information consists of expressions (other than 0 itself) which, when equated to 0, represent what's left after the solved-for variables are eliminated from the equations.

Now here's my unsubstantiated guess as to why solve gets this wrong sometimes, as in the present example: In situations where the residual information is not the empty set, solve correctly and intentionally returns NULL. Here are two such situations, shown here from the point of view of eliminate:

eliminate({x=2, y=3}, {x});
                       [{x = 2}, {y - 3}]
eliminate({x=2, x=3}, {x});
                        [{x = 2}, {-1}]

Note that in the second case, the residual information is self-contradictory, yet eliminate intentionally returns it anyway, as it may be useful for some purposes (that there's no need to go into here). My guess about solve is that it performs a preliminary and superficial analysis of the system of equations, and if it thus guesses that this is one of those situations where the residual information will be non-empty, it returns NULL.

Because my guess about what solve is doing in these situations is something that I've thought about for years, I'd appreciate it if someone could provide some evidence for or against it. The code underneath solve is very long, and has been patched many times over the years, so direct evidence may be difficult to obtain; but perhaps an experiment could be devised to collect some indirect evidence.
 

The issue that you're describing is merely a feature of Maple's GUI's pretty printing of Vectors and Matrices. It has absolutely nothing to do with the form by which a Matrix is imported, where it's imported from, or whether it's imported at all; it has nothing to do with how the Matrix is stored in Maple; there is no distinction between a "standard" form and some other form(s). So, there's no need (and no way) to "recast" a Matrix from one form to another because those other forms simply don't exist. The complete Matrix exists in Maple's memory even if it is displayed in the summarized form that you described.

Do:

interface(rtablesize);

If you have a default setup, this will return 10. A Vector or Matrix with a number of rows or columns greater than this number will be displayed in the summarized form (when it's prettyprinted). If you want to change that, do something like

interface(rtablesize= 30);

This will affect the prettyprinted display of all Vectors and Matrices until you change it back, start a new session, or do restart.

[This solution is similar to MMcDara's/Sand15's "Exact solution", although I derived it independently before his was posted. I explicitly use the BetaDistribution. He uses it also, but not explicitly by name. In my (limited) experience, Statistics works better the closer one sticks to the pre-defined distributions.]

It may surprise you that your problem can be solved completely symbolically (and even using fewer lines of code than you used), such that your QStar can be expressed as a fairly simple parameterized random variable in Maple's Statistics package. After that, it's trivial to compute the mean, standard deviation, etc., as symbolic expressions and to generate samples in the millions (rather than the 1000 that you asked for), all without a single loop.

Indeed, QStar can be expressed as an affine (i.e., linear plus a constant) combination of two Beta random variables. There are five parameters to the combination:

  • a: the lower bound (not the sample min) of the underlying Uniform distribution
  • b: the upper bound (not the sample max) of the underlying Uniform distribution
  • n: the sample size (the 10 that you used)
  • co: exactly as you used
  • cs: exactly as you used

Here is a worksheet with the analysis:
 

Analyzing a random variable derived from the minimum and maximum of a uniform sample

Author: Carl Love <carl.j.love@gmail.com> 23-March-2019

restart
:

St:= Statistics
:

Set lower and upper limits of uniform distribution (a and b) and sample size n. Then generate sample and find its min A and max B.

Note that this code is commented out because I want these five variables to remain symbolic, at least for the time being.

(*
(a,b,n):= (10, 20, 10):
(A,B):= (min,max)(St:-Sample('Uniform'(a,b), n)):
*)

Your computation, with symbolic variables:

Ecost:= co*int((Q-x)*f(x), x= A..Q) + cs*int((x-Q)*f(x), x= Q..B):

DCost:= diff(Ecost, Q);

co*(int(f(x), x = A .. Q))+cs*(int(-f(x), x = Q .. B))

QStar:= solve(eval(DCost, [f(x)= 1/(B-A)]), Q);

(A*co+B*cs)/(co+cs)

A and B are mathematical operations (min and max) performed on a random sample from a particular distribution (Uniform(a,b)), so they are random variables; hence QStar, a simple linear combination of them, is itself a random variable. Surprisingly, we can express and evaluate QStar as a random variable that Maple can easily work with.

 

The k-th order statistic of a sample of size n drawn from Uniform(0,1) has distribution BetaDistribution(k, n-k+1) (see the Wikipedia article "Order statistic" https://en.wikipedia.org/wiki/Order_statistic#Probability_distributions_of_order_statistics). The minimum corresponds to k=1 and the maximum to k=n. So, if we translate from the interval [0,1] to [a,b], the min and max have distributions A:= (b-a)*BetaDistribution(1,n) + a and B:= (b-a)*BetaDistribution(n,1) + a.
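A quick Monte Carlo check of that claim, sketched in pure-standard-library Python (the function and its defaults are my own):

```python
import random

def minmax_means(a, b, n, trials=200_000, seed=1):
    """Monte Carlo estimates of E[min] and E[max] for a sample of n draws
    from Uniform(a, b).  Theory, via the Beta order statistics above:
    E[min] = a + (b-a)/(n+1)  and  E[max] = a + (b-a)*n/(n+1)."""
    rng = random.Random(seed)
    smin = smax = 0.0
    for _ in range(trials):
        sample = [rng.uniform(a, b) for _ in range(n)]
        smin += min(sample)
        smax += max(sample)
    return smin/trials, smax/trials
```

With a=10, b=20, n=10 these means are 10 + 10/11 ≈ 10.91 and 20 - 10/11 ≈ 19.09, and plugging them into (A*co + B*cs)/(co + cs) with co=20, cs=25 gives ≈ 15.45, agreeing with the mean of QStar computed symbolically in the worksheet.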

Q:= eval(
   QStar,
   [
      A= a + (b-a)*St:-RandomVariable(BetaDistribution(1,n)),
      B= a + (b-a)*St:-RandomVariable(BetaDistribution(n,1))
   ]
)
:

We can get Maple to derive symbolic expressions for the parametric statistics of Q:

Stats:= seq(St[p], p= (Mean, StandardDeviation, Skewness, Kurtosis)):
QStats:= <cat~([mu, sigma, gamma, kappa], __Q)> =~ <Stats(Q)>;

Vector(4, {(1) = `#msub(mi("&mu;",fontstyle = "normal"),mi("Q"))` = (a*co*n+b*cs*n+a*cs+b*co)/(co*n+cs*n+co+cs), (2) = `#msub(mi("&sigma;",fontstyle = "normal"),mi("Q"))` = sqrt(n*(a^2*co^2+a^2*cs^2-2*a*b*co^2-2*a*b*cs^2+b^2*co^2+b^2*cs^2)/(co^2*n^3+2*co*cs*n^3+cs^2*n^3+4*co^2*n^2+8*co*cs*n^2+4*cs^2*n^2+5*co^2*n+10*co*cs*n+5*cs^2*n+2*co^2+4*co*cs+2*cs^2)), (3) = `#msub(mi("&gamma;",fontstyle = "normal"),mi("Q"))` = -2*n*(a^3*co^3*n-a^3*cs^3*n-3*a^2*b*co^3*n+3*a^2*b*cs^3*n+3*a*b^2*co^3*n-3*a*b^2*cs^3*n-b^3*co^3*n+b^3*cs^3*n-a^3*co^3+a^3*cs^3+3*a^2*b*co^3-3*a^2*b*cs^3-3*a*b^2*co^3+3*a*b^2*cs^3+b^3*co^3-b^3*cs^3)/((co^3*n^5+3*co^2*cs*n^5+3*co*cs^2*n^5+cs^3*n^5+8*co^3*n^4+24*co^2*cs*n^4+24*co*cs^2*n^4+8*cs^3*n^4+24*co^3*n^3+72*co^2*cs*n^3+72*co*cs^2*n^3+24*cs^3*n^3+34*co^3*n^2+102*co^2*cs*n^2+102*co*cs^2*n^2+34*cs^3*n^2+23*co^3*n+69*co^2*cs*n+69*co*cs^2*n+23*cs^3*n+6*co^3+18*co^2*cs+18*co*cs^2+6*cs^3)*(n*(a^2*co^2+a^2*cs^2-2*a*b*co^2-2*a*b*cs^2+b^2*co^2+b^2*cs^2)/(co^2*n^3+2*co*cs*n^3+cs^2*n^3+4*co^2*n^2+8*co*cs*n^2+4*cs^2*n^2+5*co^2*n+10*co*cs*n+5*cs^2*n+2*co^2+4*co*cs+2*cs^2))^(3/2)), (4) = `#msub(mi("&kappa;",fontstyle = "normal"),mi("Q"))` = (9*co^4*n^3+6*co^2*cs^2*n^3+9*cs^4*n^3+15*co^4*n^2+42*co^2*cs^2*n^2+15*cs^4*n^2+72*co^2*cs^2*n+12*co^4+12*cs^4)/((co^2+cs^2)^2*(n^2+7*n+12)*n)})

Now use the particular numeric values that you used in your Question:

NumVals:= [a= 10, b= 20, n= 10, co= 20, cs= 25]
:

We can generate huge samples from Q instantaneously:

n:= eval(n, NumVals):  Sam:= St:-Sample(eval(Q, NumVals), 10^6):  n:= 'n'
:

Compare the theoretical and sample statistics:

<Stats(Sam)>, evalf(eval(QStats, NumVals));

Vector(4, {(1) = 15.4545135702626, (2) = .589764236722842, (3) = -.351366162924583, (4) = 4.44997814866995}), Vector(4, {(1) = `&mu;__Q` = 15.45454545, (2) = `&sigma;__Q` = .5904268660, (3) = `&gamma;__Q` = -.3524307757, (4) = `&kappa;__Q` = 4.454789470})

St:-Histogram(Sam, gridlines= false);

 


 

Download OrderStats.mw

You just need to add sin(x+t) and sin(x-t), letting x be the space parameter and t the time parameter:

plots:-animate(
   plot,
   [
      [sin(x+t), sin(x-t), sin(x+t)+sin(x-t)], x= -2*Pi..2*Pi, 
      color= [red,blue,black], thickness= [0,0,2]
   ],
   t= 0..2*Pi, size= [967,610], frames= 100
);

 

That's all that's needed mathematically. You can add various aesthetic doo-dads like the red dots, of course.
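The reason the sum behaves so simply is the identity sin(x+t) + sin(x-t) = 2*sin(x)*cos(t): the black curve is a standing wave whose nodes never move. A quick numeric confirmation (my own Python sketch):

```python
from math import sin, cos, pi

def standing_wave_residual(samples=1000):
    """Largest deviation of sin(x+t) + sin(x-t) from 2*sin(x)*cos(t)
    over a sweep of (x, t) pairs -- it should sit at rounding-error
    level, confirming the black curve is a standing wave."""
    worst = 0.0
    for i in range(samples):
        x = -2*pi + 4*pi*i/samples
        t = 2*pi*i/samples
        worst = max(worst, abs(sin(x + t) + sin(x - t) - 2*sin(x)*cos(t)))
    return worst
```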

Here's a procedure for a linear-time[*1], linear-memory algorithm to generate permutations:

RandPerm:= proc(n::nonnegint)
description "Fisher/Yates/Durstenfeld shuffling algorithm";
option `Reference: https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle`;
local P:= rtable([$1..n]), k, j;
   for k from n by -1 to 2 do
      j:= 1+irem(rand(), k);
      (P[k],P[j]):= (P[j],P[k])
   od;
   [seq(k, k= P)]
end proc
:

Usage:
RandPerm(9);
      [4, 6, 8, 5, 9, 3, 2, 1, 7]

A compiled-Maple version of the above procedure is possible, although I'd be skeptical about relying on a compiler-supplied version of rand. The cycle length of the generator needs to be significantly larger than n! to achieve uniformly random selection from all possible permutations. As of Maple 2018, the Maple-library version of rand has a cycle length slightly larger than 2080!, which is probably adequate for most purposes; but since 21! > 2^64, a generator based on 64-bit hardware arithmetic can't effectively shuffle a deck of playing cards, or even a half deck.

[*1]By linear-time, I mean that the computation time is proportional to the length of the output---n in this case. Obviously, this is the best possible asymptotic time complexity.
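For comparison, here is the same shuffle in Python (my own translation; randrange plays the role of 1 + irem(rand(), k)):

```python
import random

def rand_perm(n, rng=random):
    """Fisher-Yates-Durstenfeld shuffle of [1..n]: every one of the n!
    permutations is equally likely, in linear time and memory."""
    P = list(range(1, n + 1))
    for k in range(n - 1, 0, -1):    # 0-based analogue of Maple's k = n..2
        j = rng.randrange(k + 1)     # uniform on 0..k
        P[k], P[j] = P[j], P[k]
    return P
```

Note that Python's default generator is a Mersenne twister with period 2^19937 - 1, comfortably above the cycle-length requirement discussed above.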

The only practical reason I can see for doing such a computation is if it's done in modular arithmetic. Is that your overall goal? Here's an example of how it's done. The first two lines are to choose a random modulus m such as might be used in a cryptographic application.

r:= rand(2^64..2^65):
m:= (nextprime(r())-1) * (nextprime(r())-1);

          m := 868409667625403659344120587678635445760
815427339135043 &^ 5579408896058 mod m;
             28610635084817154115285869033088238569

This is returned instantaneously.

This is an example for educational purposes only. My m is still not large enough or random enough for a secure cryptographic application.
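The reason the result is instantaneous is binary (square-and-multiply) exponentiation with reduction modulo m at every step. A Python sketch of that loop (the built-in three-argument pow does the same thing natively, just as Maple's &^ ... mod does):

```python
def mod_pow(base, exponent, modulus):
    """Square-and-multiply exponentiation, reducing mod m after every
    multiplication so intermediates never exceed m^2; the astronomically
    large full power is never materialized."""
    result, b, e = 1, base % modulus, exponent
    while e:
        if e & 1:
            result = result*b % modulus
        b = b*b % modulus
        e >>= 1
    return result
```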

The problem with Kitonum's solution is that the root plots are discontinuous. That will almost always happen if you try to directly plot parameterized roots returned by solve (or other algebraic solution techniques). I get around that by turning each parameterized root into an IVP and solving with dsolve(..., numeric).

restart:
Digits:= trunc(evalhf(Digits)):
gm:= V -> 1/sqrt(1-V^2):
T:= w-k*V:
S:= w*V-k:
H:= unapply(
   numer(
      T*S^2*gm(V)^3*3/2+I*S^2(1+I*27/4*T*gm(V))*gm(V)^2-
      (1+I*27/4*T*gm(V))*(1+I*5*T*gm(V))*T*gm(V)
   ),
   w,V,k
):
Vlist:= [
   Record("V"= 0.8, "color"= "Black", "plot"= ()),
   Record("V"= 0.9, "color"= "Red", "plot"= ()),
   Record("V"= 0.99, "color"= "Blue", "plot"= ())
]:
R:= [solve(H(w,V,k), w, explicit)]:
d:= nops(R): #number of roots
for v in Vlist do
   S:= dsolve(
      {   #odes:
          seq(eval(diff(r[i](k),k) = diff(R[i],k), V= v:-V), i= 1..d),
          #initial conditions:
          ([seq(r[i](0), i= 1..d)] =~ evalf(eval(R, [V= v:-V, k= 0])))[]
      },
      numeric
   );
   v:-plot:= plots:-odeplot(
      S, 
      [seq([[k, Re(r[i](k))], [k, Im(r[i](k))]][], i= 1..d)], 
      k= 0..1,
      color= v:-color,
      linestyle= ['[solid,dash][]'$d]
   )
od:   
plots:-display([seq(v:-plot, v= Vlist)], size= [987,610]);
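The root-tracking idea can be sketched outside Maple too. Instead of an IVP solver, this Python illustration (entirely my own construction) warm-starts Newton's method from the root found at the previous parameter value, applied to the toy family f(w,k) = w^2 - (1+k), whose two branches ±sqrt(1+k) stay continuous:

```python
def track_root(f, dfdw, w0, ks, newton_steps=50):
    """Follow one root of f(w, k) = 0 continuously in the parameter k:
    at each k, run Newton's method starting from the root found at the
    previous k.  Small parameter steps keep each branch on its own
    continuous curve -- the same goal as the IVP/dsolve(numeric)
    approach above."""
    w, branch = w0, []
    for k in ks:
        for _ in range(newton_steps):
            w = w - f(w, k)/dfdw(w, k)
        branch.append(w)
    return branch
```

Starting one branch at w0 = 1 and the other at w0 = -1 tracks them to ±sqrt(2) at k = 1 without the branches ever jumping.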

First, change all < to <=; the optimizer can't handle strict inequalities. Then do:

Optimization:-NLPSolve(lhs(obj), [const[], obj], 'maximize');

Perhaps you meant sqrt(x-6) rather than sqrt(x-5)? Otherwise, there's no reason why they should "combine".

@a_simsim If you change the command copy(nodeprop) to copy(nodeprop, 'deep'), then you'll get your expected results. This is because not merely the Records themselves but also the Vectors in the Records are mutable structures, which means that a direct assignment merely creates a new pointer to the original structure rather than a new structure. The keyword deep means that copy is recursively applied to all substructures.
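The same shallow-versus-deep distinction exists in most languages. Here is the analogous situation in Python with copy.copy versus copy.deepcopy (the names and data are invented for illustration):

```python
import copy

# A record with a mutable field, loosely analogous to a Maple Record
# containing a Vector.
node = {"name": "n1", "weights": [1, 2, 3]}

shallow = copy.copy(node)      # new dict, but it SHARES the inner list
deep = copy.deepcopy(node)     # recursively copies the inner list too

node["weights"][0] = 99        # mutate through the original...
# ...the shallow copy sees the change; the deep copy does not.
```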

There is a blatant bug that can be seen in lines 5-6 of showstat(Statistics:-DataSummary): The X data themselves are being used as the weights, while the weights that you passed (as Y) are totally ignored. It's exactly as if it were behaving correctly and you had specified DataSummary(X, weights= X).

The command is GraphTheory:-SetVertexPositions.

X:= Matrix(9, 9, [[0, 0, 1, 0, 1, 1, 1, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0]]):

#Coordinates
P:= [[40, 40], [22, 22], [36, 26], [21, 45], [45, 35], [55, 20], [55, 45], [26, 59], [55, 65]]:

GT := GraphTheory;
G := GT:-Graph(X);
GT:-SetVertexPositions(G, P);
GT:-DrawGraph(G);

It is interesting that the position information is stored in the graph itself rather than simply in the plot. Also, note that SetVertexPositions modifies its first argument, the graph, rather than giving a return value.

You need to use all vertices, in order, in a list; so, tweaking is best done by alternating calls to GetVertexPositions and SetVertexPositions. Several examples are shown at help page ?GraphTheory,GetVertexPositions.
