## Why is orthopoly not seen as a package within a pr...

For years I have observed that the package orthopoly is not treated as a package within a procedure.
I have finally decided to ask for clarification: can someone explain to me why procedure f generates an error?

 > kernelopts(version)
 (1)

Without any procedure

 > restart
 > H(2, x)
 (2)
 > with(orthopoly):
 > H(2, x)
 (3)

Within a procedure

 > restart
 > type(orthopoly, package)
 (4)
 > f := proc(m)   uses orthopoly:   H(m, x) end proc:
 > g := proc(m)   orthopoly:-H(m, x) end proc: g(2)
 (5)
 >
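For what it's worth, here is a sketch of my current understanding (the explanation is an assumption on my part): orthopoly is one of the old table-based packages, while the uses clause expects a genuine module, hence the error. Member or indexed access works in both cases:

```
restart:
# orthopoly passes type(..., package) but is implemented as a table,
# not a module (presumably why `uses orthopoly` fails in a procedure).
f1 := proc(m) orthopoly:-H(m, x) end proc:    # member selection
f2 := proc(m) orthopoly['H'](m, x) end proc:  # indexed access
f1(2), f2(2);   # both should return 4*x^2 - 2
```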

## A story of packing

by: Maple 2015

This post is inspired by minhthien2016's question.

The problem, denoted 2/N/1, for reasons that will appear clearly further on, is to pack N disks into the unit square in such a way that the sum of their radii is maximum.

I answered this question using Optimization:-NLPSolve for N from 1 (obvious solution) to 16, which motivated a few questions, in particular:

• @Carl Love: "Can we confirm that the maxima are global (NLPSolve tends to return local optima)?"
Using NLPSolve indeed does not guarantee that the solution found is the (a?) global maximum. In fact, packing problems are generally tackled using evolutionary algorithms, greedy algorithms, or specific heuristic strategies.
Nevertheless, running NLPSolve a large number of times from different initial points may provide several different optima, the largest of which can reasonably be considered as the global maximum you are looking for.
Of course this may come at a large price in terms of computational time.
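To fix ideas, here is a minimal multistart sketch for 2/2/1 (my own simplification, not the actual code from the attached worksheets):

```
restart:
with(Optimization):

# Disks (x1,y1,r1) and (x2,y2,r2) in the unit square; maximize r1+r2.
obj  := r1 + r2:
cons := { r1 >= 0, r2 >= 0,
          r1 <= x1, r1 <= y1, x1 <= 1-r1, y1 <= 1-r1,  # disk 1 inside the box
          r2 <= x2, r2 <= y2, x2 <= 1-r2, y2 <= 1-r2,  # disk 2 inside the box
          (x1-x2)^2 + (y1-y2)^2 >= (r1+r2)^2 }:        # no overlap

roll := rand(0. .. 1.):
best := -infinity:
for k to 50 do
  try
    sol := NLPSolve(obj, cons, maximize,
             initialpoint = {x1=roll(), y1=roll(), x2=roll(), y2=roll(),
                             r1=roll()/2, r2=roll()/2}):
    if sol[1] > best then best, bestpoint := sol[] end if
  catch:   # ignore starting points from which NLPSolve fails
  end try
end do:
best;   # largest of the local optima found
```

Keeping only the distinct optima (up to some tolerance) also gives a rough histogram of how often each one is reached.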

• @acer: "How tight are [some constraints], at different N? Are they always equality?"
The fact that some inequality constraints always end up as equality constraints (which means that in an optimal packing each disk touches at least one other and, possibly, the boundary of the box) seems hard to prove mathematically, but I gave here a sketch of an informal proof.

I found 2/N/1 intriguing enough to spend some time digging into it and looking at some generalizations I will refer to as D/N/M: how to pack N D-hyperspheres into the unit D-hypercube such that the sum of the M-th powers of their radii is maximum?
For the sake of simplicity I will say ball instead of disk/sphere/hypersphere and box instead of square/cube/hypercube.

The first point is that problems D/N/1 do not have a unique solution as soon as N > 1: indeed, any solution can be transformed into another one using symmetries with respect to the medians and diagonals of the box. Hereafter I use this convention:

Two solutions s and s' are fundamental solutions if:

1. the ordered lists of radii that s and s' contain are identical but there is no composition of symmetries mapping s to s',
2. or, the ordered lists of radii that s and s' contain are not identical.

It is easy to prove that 2/2/1 and 3/2/1, and likely D/2/1, have an infinity of fundamental solutions: see directory FOCUS__2|2|1_and_3|2|1 in the attached zip file.
At the same time 2/N/2, 3/N/3, and likely D/N/D, have only one fundamental solution (see directory FOCUS__2|N|2 for more details and a simple way to characterize these solutions).

(Indeed, the strategy to find the solution of D/N/D consists in placing the biggest possible ball in the largest void the solution of D/(N-1)/D contains. Unfortunately this characterization is not algorithmically constructive, in the sense that finding this biggest void in a pack of balls is a very complex geometrical and combinatorial problem.)

Let Md,1(N) denote the maximum value of the sum of ball radii for problem d/N/1.
The first question I asked myself is: how does Md,1(N) grow with N?

(Md,1(N) is obviously a strictly increasing function of N: indeed, the solution of problem d/N/1 contains several voids where a ball of strictly positive radius can be placed, hence Md,1(N+1) > Md,1(N).)

The answer seems amazing, as intensive numerical computations suggest a simple growth law.

See D|N|M__Growth_law.mw in the attached zip file.
This formula fits the set of points { [n, Md,1(n)], n=1..48 } very well for d = 2..6.
I have the feeling that this conjecture might be proven (or rejected?) by rigorous mathematical arguments.

Fundamental solutions raise several open problems:

• Are D/2/1 problems the only ones with more than one fundamental solution?

The truth is that I have not been able to find any other example (which does not mean none exists).
A quite strange thing is the behaviour of NLPSolve: although all the solutions of D/2/1 are equally likely, the histogram of the solutions provided by a large number of NLPSolve runs from different initial points is far from being uniform.
For more detail refer to directory FOCUS__2|2|1_and_3|2|1 in the attached zip file.
I do not understand where this bias comes from: is it due to the implementation of SQP in NLPSolve, or to SQP itself?

• For some couples (D, N) the solution of D/N/1 is made of balls of the same radius.
For N from 1 to 48 this is (numerically) the case for 2/1/1 and 2/2/1, but the three-dimensional case is richer, as it contains 3/1/1, 3/2/1, 3/3/1, 3/4/1 and 3/14/1 (this latter being quite surprising).
Is there only a finite number of such values of N?
If so, is this number increasing with D?
It is worth noting that those values of N mean that the solutions of problems D/N/1 are identical to those of a more classic packing problem: "What is the largest radius N identical balls packed in a unit box may have?"
For an exhaustive survey of this latter problem see Packomania.

• A related question is: "How does the number of different radii evolve as N increases, for given values of D and M?"
Displays of 2D and 3D packings show that the set of radii has significantly fewer elements than N... at least for values of N that are not too large. So should we expect that the solution of, let us say, 2/100/1 contains 100 balls with 10 different radii, or is it more reasonable to expect it contains 100 balls with 100 different radii?

• On the other hand, numerical investigations of 2/N/1 and 3/N/1 suggest that the number of different radii a fundamental solution contains increases with N (more a trend than a continuous growth).
So, is it true that very large values of N correspond to solutions where the number of different radii is also very large?

Or could it be that the growth of the number of different radii I observed is simply the consequence of partially converged results?

• Numerical investigations show that for a given dimension d and a given number of balls n, the solutions of the d/n/1 and d/n/M (1 < M < d) problems are rather often the same. Is this a rule with a few exceptions, or a false impression due to the fact that I did not push the simulations to values of n large enough to draw a more solid picture?

It is likely that some of the questions above could be addressed by using a more powerful machine than mine.

All the codes and results are gathered in a zip file you can download from OneDrive (link at the end of this post; 262 Mb, 570 Mb when unzipped, 1119 files).
Download this zip file into the directory of your choice and unzip it to get a directory named PACKING.
Within it:

• README.mw contains a description of the different codes and directories
• Repository.rtf must contain a string representing the absolute path of directory PACKING

## I want to write a nested loop in Maple 2015...

Hi,

I hope everyone is fine here. I want to write the following nested loop in my code

```
restart; TOL := 10^(-3); v[i] := 10^(-4); u[i] := 10^(-4);

if abs(v[i]-1) <= TOL then
   if abs(u[i]) <= TOL then break
   else t[i+1] := t[i]+alpha; else; fi;
fi;
```

It is not working. If the first condition, abs(v[i]-1) <= TOL, is fulfilled, then we have to check the second condition, abs(u[i]) <= TOL. That's why I don't use "and" here. If both conditions are fulfilled, then stop the loop; if the second condition is not fulfilled (but the first is), then t[i+1] := t[i] + alpha.

I shall be waiting for your positive response.
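For reference, a hedged sketch of what the conditional could look like once embedded in a loop (break is only legal inside a loop; N, alpha and the initial entries of t, u, v are assumed to be defined elsewhere):

```
TOL := 10^(-3):
for i from 1 to N do                 # N assumed defined
  if abs(v[i] - 1) <= TOL then
    if abs(u[i]) <= TOL then
      break                          # both conditions met: stop the loop
    else
      t[i+1] := t[i] + alpha         # first condition met, second not
    end if
  end if
end do:
```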

## Why doesn't L2 norm sorting work?...

Why does sorting these 4 vectors wrt the L(+oo) norm return a correct result, but sorting them wrt the L2 norm doesn't (unless I evaluate the norms as floats)?

 > restart:
 > kernelopts(version)
 (1)
 > V := [seq(LinearAlgebra:-RandomVector(2, generator=1..10), k=1..4)]; N2   := evalf(norm~(V, 2)); Ninf := norm~(V, +infinity);
 (2)

Sorting wrt L(+oo) norm

 > sort(V, key=(t -> norm(t, +infinity)));  # correct
 (3)

Sorting wrt L(2) norm

 > sort(V, key=(t -> norm(t, 2))); # not correct
 > is(norm(V[4], 2) < norm(V[3], 2));
 (4)
 > sort(V, key=(t -> evalf(norm(t, 2)))); # correct
 (5)
 >

TIA

## How to color an implicitplot3d?...

I have a surface defined by C(x, y, z) = 0 that I visualize with implicitplot3d.
Using shading=zhue does not suit me and I would like to define my own coloring function F(x, y, z).

The first error I got made me think that a coloring function cannot depend on 3 parameters.
But a simpler (and not visually satisfying) function F(x, y) already leads to an error, which makes me wonder whether it is possible to use a coloring function with implicitplot3d at all.

implicitplot3d_coloring.mw
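In case it helps, one thing that might be worth trying (an assumption on my part, and only if your Maple version supports the colorscheme option) is the "xyzcoloring" scheme, which accepts a function of the three coordinates:

```
with(plots):
# Hypothetical example: a sphere colored by a user function of (x, y, z).
implicitplot3d(x^2 + y^2 + z^2 - 1 = 0,
               x = -1 .. 1, y = -1 .. 1, z = -1 .. 1,
               style = surface,
               colorscheme = ["xyzcoloring", (x, y, z) -> x*y*z]);
```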

## Why does Statistics:-Mean fail computing the mean...

For a lesson I'm preparing, I want to illustrate some probability concepts using Maple.
In particular, I need to use the fact that the Expectation operator (let us say the Mean operator) is a linear operator with respect to random variables.
However, I don't want to particularize my demonstration by using this or that statistical distribution but simply the notion of random variable.
I have therefore created a specific Distribution named  MinimalAbstractDistribution in which only the mean and variance are defined.

When Statistics:-Mean is applied to the expression (A*p+q) where p and q are names and A is a random variable with distribution MinimalAbstractDistribution, linearity is effectively used.
But not when it is applied to A/p or A-q.

Why that?
Is there a way of defining a statistical distribution so that Mean behaves as expected?

(You will easily understand that no workaround of the form

```
Mean(A+q);
eval(%, q=-q);

# or
```

can be accepted in a lesson).

 > restart
 > with(Statistics):

 > MinimalAbstractDistribution := proc(i)   Distribution(Mean=mu[i], Variance=sigma[i]^2) end proc:
 > A := RandomVariable(MinimalAbstractDistribution(1))
 (1)
 > [Mean, Variance](A);
 (2)
 > Mean(p*A+q); Mean(p*(A+q))
 (3)
 > # But Mean(A-q); Mean(A/p)
 (4)

## Customize surface plot...

Dear Users!

I hope everyone here is fine. In the attached file I have a list of points in three dimensions. I want to plot a surfaceplot (also in Dimension=2) of only those points whose values are less than 1.

But I want to plot the density plot for all points, with the range on the x-axis from 5 to 15 and the range on the y-axis from 50 to 1000.

Help.mw

## A numerical algorithm for SFC and its pseudo-inver...

Hi!

I am trying to implement the algorithms given in this paper (free for download) in Maple 2015

https://www.researchgate.net/publication/374636058_A_simple_algorithm_for_computing_a_multi-dimensional_the_Sierpinski_space-filling_curve_generalization

These algorithms, though apparently very easy, provide an approximation of a space-filling curve and its pseudo-inverse. I am interested in this space-filling curve for its properties. Please find attached the Maple file; I am not sure whether the code from the paper is at fault or I am doing something wrong.

Sierp_v1.mw

## How to calculate time and memory ...

Dear Users!

I hope everyone is fine here. In the attached file I have solved a partial differential equation using the finite difference method for different meshes in the spatial direction (i.e., for different Mx). I want to compute the time and memory needed to compute the solution for each Mx and want to plot them. Kindly help me compute the time and memory for each value of Mx.

TIME.mw
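A common pattern (a sketch only: Solve below is a hypothetical placeholder for the finite difference solver in the attached file) is to bracket the computation with time[real]() and kernelopts(bytesused):

```
times := table(): mems := table():
for Mx in [16, 32, 64, 128] do       # mesh sizes: adapt to your worksheet
  t0 := time[real]():
  b0 := kernelopts(bytesused):
  Solve(Mx):                         # hypothetical: your FD solution for this Mx
  times[Mx] := time[real]() - t0:    # elapsed seconds
  mems[Mx]  := kernelopts(bytesused) - b0:   # bytes used meanwhile
end do:

plot([seq([Mx, times[Mx]], Mx in [16, 32, 64, 128])], style = pointline);
```

Alternatively, CodeTools:-Usage(Solve(Mx), output = [realtime, bytesused]) should return both measurements at once.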

I shall be waiting. Thanks in advance.

## How can we force evalf[n] to apply to all numerics...

Here is an example where evalf[n] doesn't operate on the argument of the undefined function f.

```
x := rand(0. .. 1.)()
0.2342493224
y := x+f(x):
evalf[4](y)
0.2342 + f(0.2342493224)

# but, as soon as f is a known function:
evalf[4](cos(x))
0.9727

```

Is there a way to force evalf[4](f(x)) to return f(0.2342)?

I found only two ways to do this:
First: declare interface(displayprecision=4)

```
interface(displayprecision=4):
y;
0.2342 + f(0.2342)
```

Or: do this (which is relatively cumbersome)

```
Evalf := proc(expr, n)
local i := [indets(evalf[n](expr), numeric)[]]:
eval(expr, i =~ evalf[n](i))
end proc:

Evalf(y, 4)
0.2342 + f(0.2342)
```
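A more compact variant of the same idea, assuming it is acceptable to round every float in the expression, is to use subsindets:

```
# Round every float occurring anywhere in expr to n digits.
Evalf2 := (expr, n) -> subsindets(expr, float, x -> evalf[n](x)):

Evalf2(0.2342493224 + f(0.2342493224), 4);
#          0.2342 + f(0.2342)
```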

PS: I do not like setting displayprecision to some value because its effect is persistent: if you execute the same worksheet again (beginning with a restart), the value of displayprecision is not reset to 10 but keeps the value you previously gave it somewhere in the worksheet.

## Statistics:-Specialize can produce wrong results...

The Statistics package contains a function named Specialize (which quite strangely doesn't appear when you expand the sections of this package).
Here is what help(Specialize) says:

```
The Specialize function takes a random variable or distribution data structure that contains symbolic parameters, and performs a substitution to specialize the given random variable or distribution.
```

My goal was to work with mixtures of two random variables. There are many ways to do that depending on what you really want to achieve, but an elegant way is to define such a mixture this way:

• Let X and Y be two random variables representing the two components to be mixed.
For instance X = Normal(mu, sigma) and Y = Normal(nu, tau).

• Let B be a Bernoulli random variable with parameter p.

• Then M = B*X + (1-B)*Y represents a random mixture of the two components in proportions (p, 1-p).
Note that M is a five-parameter random variable.

Doing things this way enables getting a lot of formal information about M, such as its mean, variance, and so on.

In order to illustrate what the mixture is, I drew the histogram of a sample of M.
To do this I Specialized the three random variables X, Y, and B.

I used parameters

`mu=-3, nu=3, sigma=1, tau=1, p=1/2`

My first attempt was to draw a sample of the random variable Mspec defined this way

```
Mspec := Specialize(B, [p=1/2])*Specialize(X, [mu=-3, sigma=1]) + (1-Specialize(B, [p=1/2]))*Specialize(Y, [nu=3, tau=1]);
```

As you can see in the attached file (first plot), the histogram is wrong (so is the variance computed formally).

I changed this into

```
Mspec := Specialize(B, [p=1/2])*(Specialize(X, [mu=-3, sigma=1])-Specialize(Y, [nu=3, tau=1])) + Specialize(Y, [nu=3, tau=1]);
```

without much more success: while the variance is now correct, the histogram still remains obviously wrong (plot number 2).

My last attempt, which now gives a correct result (plot 3), was:

```
Bspec := Specialize(B, [p=1/2]);
Mspec := Bspec*Specialize(X, [mu=-3, sigma=1]) + (1-Bspec)*Specialize(Y, [nu=3, tau=1]);
```

Specialize.mw
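A plausible explanation (my assumption, consistent with the fact that the third attempt works) is that each call to Specialize returns a new random variable, independent of the previous ones; the two Specialize(B, [p=1/2]) calls in the first attempt therefore act as two independent Bernoulli variables:

```
with(Statistics):
B  := RandomVariable(Bernoulli(p)):
B1 := Specialize(B, [p=1/2]):
B2 := Specialize(B, [p=1/2]):

# If B1 and B2 are distinct, B1*X + (1-B2)*Y can take the value X+Y
# (when B1 = 1 and B2 = 0), which a true two-point mixture never does.
evalb(B1 = B2);   # presumably false
```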

I agree that one can easily do this stuff without using Specialize.
For instance by using the procedure given at the end of the attached file.
Or by truly constructing a mixture Distribution (which would be more elegant but more complex).

I also agree that Specialize is in itself of relatively low interest except for educational purposes (you present the theoretical results and then run a numerical application, giving numeric values to the formal parameters).

But why provide such an anecdotal function if it doesn't do the job correctly?

Note that these results were obtained with Maple 2015, but I doubt they'll be any better for more recent versions, given the confidential nature of Specialize.

## An error in help(anames)...

In case this error has been corrected in more recent versions, please feel free to delete this question.

In the help page relative to anames this example is given:

```
restart:
test1 := 1: test2 := 2: test3 := 3.2: testall := 3.2:
select(type,{anames()},suffixed(`test`));
{test1, test2, test3, testall}
```

Note the backward quotes in suffixed(`test`).
This works only if test is not an assigned name itself:

```
restart:
test := 0: test1 := 1: test2 := 2: test3 := 3.2: testall := 3.2:
select(type,{anames()},suffixed(`test`));
Error, (in type/suffixed) expecting a 1st argument of type {string, symbol, list(symbol, string), set(symbol, string)}
```

With simple quotes:

```
select(type,{anames()},suffixed('test'));
{test, test1, test2, test3, testall}
```
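Another way to avoid the premature evaluation, which may be worth adding to the help page, is to pass a string instead of a quoted name:

```
restart:
test := 0: test1 := 1: test2 := 2: test3 := 3.2: testall := 3.2:
select(type, {anames()}, suffixed("test"));
```

If suffixed treats strings and symbols alike (as its error message suggests), this should return {test, test1, test2, test3, testall}.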

## Why am I not able to get the desired integral form...

There are things that seem simple but rapidly turn into a nightmare.

Here is an example: what I want is to obtain the expression given at equation (4) in the attached file.

Using Int gives a wrong result.
Using int gives a right one, but not of the desired form (some double integrals are nested while others are not).

TIA

 > restart
 > use Statistics in
     # For more generality define an abstract probability distribution.
     AbstractDistribution := proc(N)
       Distribution(
         PDF = (x -> varphi(seq(x[n], n=1..N)))
       )
     end proc:
     # Define two random variables of AbstractDistribution type.
     X__1 := RandomVariable(AbstractDistribution(2)):
     X__2 := RandomVariable(AbstractDistribution(2)):
   end use;
 (1)
 > F := (U1, U2) -> U1/(U1+U2); T := mtaylor(F(X__1, X__2), [X__1=1, X__2=1], 2):
 (2)

Error: x[2] is dropped out of the double integral in the rightmost term

 > use IntegrationTools in
     J := eval([op(expand(T))], [seq(X__||i=x[i], i=1..2)]);
     L := add(
            map(
              j -> if j::numeric then
                     j
                   else
                     (Expand@CollapseNested)(
                       Int(
                         j * Statistics:-PDF(X__1, x),
                         seq(x[i]=-infinity..+infinity, i=1..2)
                       )
                     )
                   end if,
              J
            )
          ):
     ET := %
   end use;
 (3)

I want this

 > 'ET' = 1/2        +        (1/4)*(Int(Int(x[1]*varphi(x[1], x[2]), x[1] = -infinity .. infinity), x[2] = -infinity .. infinity))        -(1/4)*(Int(Int(x[2]*varphi(x[1], x[2]), x[1] = -infinity .. infinity), x[2] = -infinity .. infinity))
 (4)

With int instead of Int one integral is double the other is double-nested

 > L := add(
        map(
          j -> if j::numeric then
                 j
               else
                 int(
                   j * Statistics:-PDF(X__1, x),
                   seq(x[i]=-infinity..+infinity, i=1..2)
                 )
               end if,
          J
        )
      ):
   ET := %
 (5)

As the expression of ET is now correct, I tried to use IntegrationTools to get the form I want (equation (4)).

But as soon as I replace int by Int, x[2] is again dropped out.

So it's not even worth thinking about using CollapseNested!

 > use IntegrationTools in   eval(ET, int=Int);   end use;
 (6)
 >

## Should this be considered a simplify bug?...

A case where simplify(...) and simplify~(...) both return the wrong result.
Should we consider this a simplify bug?

 > restart:

A simple case

 > J := Int(r[1]^2*varphi[1](r[1]), r[1] = -infinity .. infinity)      *      Int(r[2]^2*varphi[2](r[2]), r[2] = -infinity .. infinity)
 (1)
 > # OK op(1, J) = simplify(op(1, J))
 (2)
 > # OK op(2, J) = simplify(op(2, J))
 (3)
 > # But...
 > # Not OK
 > simplify(J)
 (4)
 > # Not OK simplify~(J)
 (5)
 > # OK map(simplify, J)
 (6)

A slightly more complex case

 > J := (Int(r[1]^2*varphi[1](r[1]), r[1] = -infinity .. infinity))*(Int(r[2]^2*varphi[2](r[2]), r[2] = -infinity .. infinity))-(Int(r[1]^2*varphi[1](r[1]), r[1] = -infinity .. infinity))*(Int(r[2]*varphi[2](r[2]), r[2] = -infinity .. infinity))^2;
 (7)
 > is(J=simplify(J))
 (8)
 > is(J=simplify~(J))
 (9)
 > is(J=map(simplify, J)); map(simplify, J);
 (10)
 > add(map(u -> map(simplify, u), [op(J)])); is(J=%);
 (11)
 >