mmcdara

6548 Reputation

18 Badges

8 years, 96 days

MaplePrimes Activity


These are replies submitted by mmcdara

If you want to integrate a function F(x, a, b, c, ...) numerically with respect to x, you must give a, b, c, ... numerical values.
I have only a little knowledge of Mathematica, but I guess the same holds there.
So do with Maple what you did with Mathematica.

Illustration

restart:

f := (epsilon, a, x, tau) -> exp(-(epsilon/2/a)^2-tau*epsilon^3)*(cos(epsilon*x)-1)/epsilon^2

proc (epsilon, a, x, tau) options operator, arrow; exp(-(1/4)*epsilon^2/a^2-tau*epsilon^3)*(cos(epsilon*x)-1)/epsilon^2 end proc


# No numerical value returned because a, x, tau are names

J := Int(f(epsilon, a, x, tau), epsilon=0..+infinity);
evalf(J);  # no numerical value returned

Int(exp(-(1/4)*epsilon^2/a^2-tau*epsilon^3)*(cos(epsilon*x)-1)/epsilon^2, epsilon = 0 .. infinity)

 

Int(exp(-.2500000000*epsilon^2/a^2-1.*tau*epsilon^3)*(cos(epsilon*x)-1.)/epsilon^2, epsilon = 0. .. Float(infinity))


# Once a, x, tau are given numeric values the integral is computed

J := Int(f(epsilon, 1, 1, 1), epsilon=0..+infinity);
evalf(J);

Int(exp(-(1/4)*epsilon^2-epsilon^3)*(cos(epsilon)-1)/epsilon^2, epsilon = 0 .. infinity)

 

-.3981488764

 

 

Download evalf_Int.mw
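As an aside, here is one way to keep the integral unevaluated until the parameters are numeric (a sketch of my own; the wrapper F below is not part of the worksheet above):

# Return the call unevaluated while a, x, tau are still names; perform the
# numerical integration as soon as all three arguments are numeric.
F := proc(a, x, tau)
  if not andmap(type, [a, x, tau], numeric) then
    return 'procname'(args)
  end if;
  evalf(Int(f(epsilon, a, x, tau), epsilon = 0 .. infinity))
end proc:

F(a, x, tau);   # stays unevaluated
F(1, 1, 1);     # returns the number (about -0.3981, as above)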

@acer 

... but I don't think NLPSolve can still handle the problem for large n.
For instance, a single run of NLPSolve takes about 10 seconds for n=50, which means my naive strategy, consisting in starting from 1000 different points to try to catch a possible global maximum, rapidly becomes prohibitive (roughly 3 hours for a single value of n).

I'd never heard of the OP's problem, but I'd already been interested in statistical problems related to packing problems. For example, how to place N points in the unit square so that they are as far apart as possible (similar to placing N disks of identical radius in the unit square).
Those problems are usually addressed by specific algorithms, mostly stochastic (greedy, genetic, moving particles, ...) or Centroidal Voronoi Tessellation (see here, pp. 648-650).

Just for fun, here is the best solution I got to pack 14 spheres in the unit cube in such a way that the sum of their radii is maximum.
The result is absolutely stunning: all the spheres have the same radius!
 

restart

with(plots):
with(plottools):
with(Statistics):


SPHERES WITHIN THE UNIT CUBE

The packing should be such that the sum of the radii is maximum

epsilon := 1e-8:

n := 14:                                   # number of spheres
J := add(r[i], i=1..n):                    # objective: sum of the radii
P := [seq(<x[i], y[i], z[i]>, i=1..n)]:    # sphere centers
cons_1 := seq(seq(add((P[i]-P[j])^~2) >= (r[i]+r[j])^2, j=i+1..n), i=1..n-1):   # no overlap between spheres i and j
cons_2 := seq(seq(op([P[i][k]-r[i] >= 0, P[i][k]+r[i] <= 1]), i=1..n), k=1..3): # each sphere lies inside the unit cube
cons_3 := seq(r[i] >= epsilon, i=1..n):                                         # radii bounded away from zero


SoR : Sum of the Radii

Max_trials : maximum number of trials
                     Note: I used a while loop instead of a for loop because NLPSolve sometimes returns an error

Packing := proc(n, Max_trials)
  local SoR, SoR_max, error_found, error_count, rep, StartAt, opt, sor, error_type, Best_packing:

  SoR     := Vector(Max_trials):
  SoR_max := 0:

  error_found := false:
  error_count := 0:

  rep := 0:
  while rep < Max_trials do
    StartAt := {
                 seq(x[i]=rand(0. .. 1.)(), i=1..n),
                 seq(y[i]=rand(0. .. 1.)(), i=1..n),
                 seq(z[i]=rand(0. .. 1.)(), i=1..n),
                 seq(r[i]=rand(0. .. 1e-6)(), i=1..n)
               };
  
    try
      opt := Optimization:-NLPSolve(J, {cons_1, cons_2, cons_3}, maximize, iterationlimit=10^4, initialpoint=StartAt):
    catch:
      error_found := true
    end try:
  
    if `not`(error_found) then
      rep := rep+1:
      sor := evalf[6](eval(add(r[i], i=1..n), opt[2]));
      SoR[rep] := sor:
  
      DocumentTools:-Tabulate(
        [sprintf("%d", rep), sprintf("%1.6f", SoR_max)],
        'alignment'='left',
        'widthmode'='percentage',
        'width'=20
      ):
      
      if sor > SoR_max then
        SoR_max      := sor;
        Best_packing := opt[2];
      end if:
    else
      error_found := false:
      error_count := error_count+1:
      error_type  := lastexception:
    end if:
  end do:

  return  table([
            Errors = `if`(error_count = 0, None, [Type = error_type, Occurrences = error_count]),
            RadSum = SoR,
            Best   = Best_packing
          ])
end proc:

res := Packing(14, 1000):
res[Errors];

SoR := res[RadSum]:
Best_packing := res[Best]:

[Type = (Optimization:-NLPSolve, "no improved point could be found"), Occurrences = 136]


FivePointSummary([SoR]):
Histogram([SoR], minbins=100):

Histogram(
  (evalf[6]@rhs)~(select(has, Best_packing, r))
  , discrete=true
  , size=[1200, 100]
  , color=red
  , thickness=3
  , title="Radii"
);

couleur := () -> ColorTools:-Color([rand()/10^12, rand()/10^12, rand()/10^12]):
  
display(
  seq(
    sphere( eval([x[i], y[i], z[i]], Best_packing), eval(r[i], Best_packing), color=couleur(), style=surface)
    , i=1..n
  ),
  cuboid([0,0,0], [1,1,1], color="LightGray", transparency=0.8)
  , title=typeset(Sum(r[i], i=1..n) = evalf[6](eval(add(r[i], i=1..n), Best_packing)))
  , scaling=constrained
)

 

 

# Radii

print~(sort((evalf@rhs)~(select(has, Best_packing, r)))):

HFloat(0.2071067811865475)
HFloat(0.20710678118654763)
HFloat(0.2071067811865472)
HFloat(0.20710678118654754)
HFloat(0.20710678118654754)
HFloat(0.20710678118654752)
HFloat(0.20710678118654757)
HFloat(0.20710678118654768)
HFloat(0.2071067811865474)
HFloat(0.20710678118654757)
HFloat(0.2071067811865474)
HFloat(0.20710678118654763)
HFloat(0.20710678118654763)
HFloat(0.20710678118654752)


From packomania, click on Spheres in a cube at the bottom of the page.

What is the largest radius that 14 identical spheres packed in a unit cube may have?
The value given on that page is 0.207106781186547524400844362105

It is remarkable that two a priori different problems have the same solution.
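As a side note (my own observation, not something stated on packomania): the quoted value coincides, to all displayed digits, with (sqrt(2)-1)/2, which is easy to check:

evalf[30]((sqrt(2) - 1)/2);   # 0.207106781186547524400844362105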


Download Disks_within_unit_square_3D.mw

Add-on: 3D packings of 9 to 14 spheres: Disks_within_unit_square_3D_9_14.mw

Question: the optimal 3D packings of n identical spheres which are also optimal 3D packings of n spheres whose sum of radii is maximum are obtained for n = 1, 2, 3, 4, 8, 14. Which n comes next?

@schachar 

Your file Data_for_2nd_Order_Gassian_Smoothing_6_21_2024.maple can't be downloaded.
Change the extension to mw (not maple); this could work.

Unfortunately the paper is indeed not available for free.
I'm going to do some research on Google Scholar starting from its title.

See you soon.

PS: I'm sorry if you found me a bit rough, that wasn't my intention.

@spalinowy 

A few questions:

  • Are z[F0](t) and z[R0](t) explicitly known?
    If that is the case you just have to define them as I did in my file. More precisely I wrote
    SYS_example := eval(SYS, RemainingFunc =~ t):

    which is, I agree, quite unclear. 
    RemainingFunc is equal to  { z[F0](t) , z[R0](t) } and  RemainingFunc =~ t means 

    { z[F0](t) = t, z[R0](t) = t } 

    If, for instance, z[F0](t) = cos(t) and  z[R0](t) = exp(-t) then define SYS_example this way

    SYS_example := eval(SYS,  {z[F0](t) = cos(t) , z[R0](t) = exp(-t)}):
    

 

  • If you don't explicitly know these functions, for instance if they are known only at some values of t, you must construct from these points two analytic expressions to represent both of them (a minimal sketch follows below).
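For instance, a minimal sketch of this second case (the sample values t_data, zF_data and the spline choice below are my own illustration, not taken from your worksheet):

# Suppose z[F0] is known only at a few values of t: build an analytic
# (piecewise polynomial) expression from the data and substitute it into SYS.
t_data    := [0, 0.5, 1.0, 1.5, 2.0]:
zF_data   := [1.0, 0.88, 0.54, 0.07, -0.42]:             # hypothetical measurements
zF_interp := CurveFitting:-Spline(t_data, zF_data, t):   # analytic expression in t
SYS_example := eval(SYS, z[F0](t) = zF_interp):          # do the same for z[R0](t)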

@schachar 

Where is the complete paper, not just an excerpt which means nothing out of context?
Where is this Workbook?
Can't you explain your problem in simple words instead of referring to a complete paper which can be lengthy to read?

You write "My understanding of 2nd Order Gaussian smoothing is that it is done for both x and y"... so you are not even sure of what you want, are you?

Are you talking about the convolution of X and Y by a 1D Gaussian kernel?
Or about the convolution of the field (X, Y) by a 2D Gaussian kernel?
Or about something else?

@minhthien2016 

There are probably a lot of reasons why the two results differ.

  • Algorithms are different.
     
  • One result is not "converged", or maybe both.
     
  • One result is right and the other is wrong (maybe both are wrong).
     
  • In the end the better result will be the one with the larger sum of radii, but maybe an even better packing exists.
     
  • Do not look at the pattern but carefully study the solutions (they are obviously not unique, and different packings can lead to the same sum of radii).
     

Why don't you go here and say "with n = 14, the answer of Maple and your result are different"?

(look at vv's reply)

First point: in the future, think of uploading your worksheet using the green up arrow.

Next: X has length 781 and Y has length 782, so even a simple scatter plot of the data is impossible. Fix this and deliver two vectors of the same length.

Last but not least: what do you mean precisely by "2nd Order Gaussian Smoothing"?

Do I understand correctly if I suggest this Gaussian_Kernel_smoothing.mw?
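In case it helps make the question concrete, here is a minimal sketch of what I call Gaussian kernel smoothing (the procedure name GaussianSmooth, the bandwidth h and the commented usage are my own placeholders, not the content of the attached worksheet):

# Nadaraya-Watson style smoother: the smoothed value at a point t is a weighted
# mean of Y, with Gaussian weights centred on t (bandwidth h sets the smoothing).
GaussianSmooth := proc(X, Y, h)
  local n, i, t;
  n := numelems(X);
  unapply(
    add(Y[i]*exp(-((t - X[i])/h)^2/2), i = 1..n)
    / add(exp(-((t - X[i])/h)^2/2), i = 1..n),
    t
  )
end proc:

# Hypothetical usage once X and Y have the same length:
# S := GaussianSmooth(X, Y, 0.1):
# plot(S, min(X) .. max(X));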

@acer Thanks for the correction, I had browsed the help page in a hurry.

@C_R 

It was kind of you.

Besides, I've often thought it would be useful to know who voted for you. It would at least reveal whether the OP was really interested in the responses he received... because I have some doubts about that.

Have a good day.

@C_R 

This is indeed an interesting proposal to account for this "Unanswered question issue".

Allow me to comment on your sentence "To respect the personal choice of an experienced user".
The key point is experienced, as you underline.

So, what is an experienced user on Mapleprimes? Someone with more than 2000 points, more than 20000...?

If you look at the Users item you will see I rank thirteenth with about 6000 points: is this enough to qualify me as experienced?
I'm pretty sure that if you ask @acer or @Carl Love, to cite a few, whether I am experienced or not, they will probably say no if they are honest. Asking the same question of someone who has "only" 30 points could get the opposite answer. So being judged as experienced is an extremely subjective matter.

There is a French proverb which literally translates as "In the kingdom of the blind, the one-eyed man is king" and, in case you don't have it in English, it means something like "Surrounded by ignorant people, an individual with meagre knowledge passes for a genius."
With all due respect to all Mapleprimes members, being rewarded by a newbie or a low-ranking member (obviously everyone starts with 0 points, and some newbies will become genuine references in the future while others will remain simple users) often makes me feel like I am this kind of king. It flatters my ego, gives me points, moves me up the rankings, but so what? Am I more experienced for that?
Surely not: I just did what I knew how to do and passed it on to someone else, but I didn't become more experienced because I probably didn't learn anything new(*).
So, with more than 6000 points I do not consider myself more experienced in Maple than I was several years ago (or just a little bit more, to be honest).

One way of not suggesting to an OP that I'm a king is to only make comments: the OP won't be able to vote for me and increase my experience in the eyes of others. 
If the OP is satisfied with my comment it's enough for him/her to say so.


A perverse effect of Mapleprimes' ranking strategy is that simply asking very general questions (like mine) fictitiously increases experience (without doing anything I have gained a few points since this thread opened!). I guess the badge system exists to provide a more detailed idea of the real experience of a member, but who really looks at the badges a member has earned, or even understands what these 39 badges really mean?


(*) A suggestion: once a member has reached a certain number of points, he/she should be obliged to fill in a profile defining his/her areas of expertise.

@ecterrab 

I see myself in your comment, which I agree with completely.

@vv 

When I receive Answers or Replies to my questions I read them all just as carefully, whatever their status.

And, okay, in the Mapleprimes logic I sometimes thought "This Comment deserved to be an Answer, for then I would have rewarded you for it and thus raised your value/credibility for the future".
I think that a lot of people are happy to be rewarded this way, and yes, I was too until a while ago. Until I sensed a kind of falseness behind this reward system and simply decided not to enter the game any more.

As I replied to @acer, the OP's opinion is more valuable to me than the number of badges I collect.
@acer raised the issue of a Reply appearing before an Answer, thus relegating the potentially more elaborate contributions to the end of the list.
This could be changed programmatically or, failing that, I can send only Answers with a header saying "Do not vote for this answer nor select it as the best".

@acer 

Thanks for having changed "MapleSim" to "MaplePrimes", sorry for the inconvenience.

Yes, I have indeed noticed that a Reply (or a Comment, because when you edit an Answer there is no "Convert to Reply" option, only a "Convert to Comment" option) appears before an Answer, and I have always found this quite curious.
It seems to me the opposite would have been more natural.

To be clear: I'm not posting Comments instead of Answers to be at the top of the list the OP is about to read, but because I don't care about being rewarded for the work I've done.
I consider an OP answering "very useful, thank you" to be much more gratifying than a few points awarded by I don't know whom.

@Carl Love 

  • Can I confirm that the maxima are global (NLPSolve tends to return local optima)?
    No.
    One often says that starting from many different initial points and ending at the same point gives strong evidence that this point is indeed a global optimum. But even in this case one cannot be one hundred percent sure of that. Ideally one should use a global optimizer to be conclusive (see the small sketch after this list).

    More than this, I am not even sure iterationlimit is large enough to consider the results "converged".
    Last point: the solution I get for n=20 doesn't look like the solution the OP presents. It would be interesting to know what sum of radii the latter corresponds to.
     
  • That's quite impressive work!
    Thanks, but I'm not convinced of that either.
    I just modeled the problem in a simple way and used Maple to find (one of) its solution(s). Nothing impressive behind this.

    In case you are interested in packing problems, here is a reference site, packomania, which gathers a lot of results concerning optimal packings.
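Here is a rough sketch of the kind of multistart check I have in mind (it reuses the Packing procedure and the RadSum entry from my worksheet above; the tolerance 1e-6 is arbitrary):

# Multistart evidence: if the best objective values found from many random
# starting points agree to within a small tolerance, this is evidence
# (not a proof) that the best of them is close to the global maximum.
res  := Packing(14, 1000):
sums := convert(res[RadSum], list):
best := max(sums):
nops(select(s -> best - s < 1e-6, sums));   # number of runs that reached (almost) the best value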

@spalinowy

Thank you for selecting my answer as the best one (there was no other anyway :-) ).
However, I'm going to convert it into a comment for the very simple reason that I no longer wish to participate in the Mapleprimes rewards system.
Doing this will make me lose a few points, but the only thing I value is that my contribution has helped you, which seems to be the case.
All the best
