acer
MaplePrimes Activity


These are replies submitted by acer

@Carl Love  I forget where I first saw the idea to factor the degree 8 polynomial using this field extension. It factors into a pair of quartics (which can each be solved explicitly, naturally). I have a recollection that it might have been something by Robert Israel.

restart;

kernelopts(version);

`Maple 2019.2, X86 64 LINUX, Oct 30 2019, Build ID 1430966`

P:=op(1,convert(cos(Pi/17),RootOf));

256*_Z^8-128*_Z^7-448*_Z^6+192*_Z^5+240*_Z^4-80*_Z^3-40*_Z^2+8*_Z+1

Rs:=simplify(radnormal~([solve(factor(P,sqrt(17)),explicit)]),size):

R:=select(r->is(r-cos(Pi/17)=0),Rs)[1];

(1/64)*((17^(1/2)-3)*(17+4*17^(1/2))^(1/2)+17^(1/2)+5)*(34+6*17^(1/2)-8*(17+4*17^(1/2))^(1/2))^(1/2)+(1/64)*(-4*17^(1/2)+20)*(17+4*17^(1/2))^(1/2)-(1/16)*17^(1/2)+1/16

simplify(R-cos(Pi/17));

0

 

Download forfun.mw
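Just to make the field-extension step concrete, one can check that P splits into a pair of quartic factors over Q(sqrt(17)). A quick sketch (the exact printed form of the factors varies by version, so only the degrees are examined here):

```maple
restart;
P := op(1, convert(cos(Pi/17), RootOf)):
# factor over the field extension Q(sqrt(17))
L := factors(P, sqrt(17))[2]:
# degrees of the irreducible factors -- a pair of quartics expected
map(f -> degree(f[1], _Z), L);
```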

@Carl Love That's nicely done.

 

@Kitonum I'm curious as to your methodology. Mine didn't produce a nice form for the sin term. Did I miss something easy?

restart;

H:=solve(op(1,convert(cos(Pi/7),RootOf)))[1]
   +I*solve(op(1,convert(sin(Pi/7),RootOf)))[3];

(1/12)*(-28+(84*I)*3^(1/2))^(1/3)+(7/3)/(-28+(84*I)*3^(1/2))^(1/3)+1/6+((1/12)*I)*3^(1/2)*((-28+(84*I)*3^(1/2))^(1/3)*(I*(-28+(84*I)*3^(1/2))^(2/3)*3^(1/2)-(-28+(84*I)*3^(1/2))^(2/3)-(28*I)*3^(1/2)+28*(-28+(84*I)*3^(1/2))^(1/3)-28))^(1/2)/(-28+(84*I)*3^(1/2))^(1/3)

evalf(H);

.9009688680+.4338837391*I

Q:=cos((1/7)*Pi)+I*sin((1/7)*Pi);

cos((1/7)*Pi)+I*sin((1/7)*Pi)

evalf(Q);

.9009688678+.4338837393*I

simplify(Q-H);

0

# oof
S:=solve(op(1,convert(sin(Pi/7),RootOf)))[3]:
frontend(u->u,[expand(S^2)],[{`+`,`*`},{}]):
subs((-28+84*I*sqrt(3))^(1/3)=PP,1/(-28+84*I*sqrt(3))^(1/3)=1/PP,%):
new:=(simplify(subs(PP=(-28+84*I*sqrt(3))^(1/3),collect(%,PP)),size))^(1/2):

HH:=solve(op(1,convert(cos(Pi/7),RootOf)))[1]+I*new;

(1/12)*(-28+(84*I)*3^(1/2))^(1/3)+(7/3)/(-28+(84*I)*3^(1/2))^(1/3)+1/6+I*(7/12+(1/48)*(I*3^(1/2)-1)*(-28+(84*I)*3^(1/2))^(1/3)-(7/12)*(I*3^(1/2)+1)/(-28+(84*I)*3^(1/2))^(1/3))^(1/2)

evalf(HH);

.9009688680+.4338837393*I

simplify(Q-HH);

0

 

Download conv_radical.mw

@CyberRob There is no doubt a more efficient way to do this using coeff/coeffs. But I'm trying to figure out whether this is what you're after.

I still don't understand why in your second example you wanted a factor like,
   nurdel*(dnub*nur + dnur*nub)
while in your first example you wanted a factor like,
   dnub*nur*nurdel + dnur*nub*nurdel

restart;

vars := [nur, nub, dnur, dnub, nurdel, nubdel, dnurdel, dnubdel]:

expr1 := c4*dnub*kpbr*ksr*nur*nurdel + c4*dnur*kpbr*ksr*nub*nurdel:

targ1 := c4*kpbr*ksr*(dnub*nur*nurdel+dnur*nub*nurdel):

expr2 := c4*dnub*kpbr*ksr*nur*nurdel + c4*dnur*kpbr*ksr*nub*nurdel + nub:

targ2 := c4*kpbr*ksr*nurdel*(dnub*nur + dnur*nub)+nub:

expr3 := c4*dnub*kpbr*ksr*nur*nurdel + c4*dnur*kpbr*ksr*nub*nurdel
         + (c4*nus0 + c5)*dnub+dnub:

targ3 := c4*kpbr*ksr*nurdel*(dnub*nur + dnur*nub) + (c4*nus0 + c5+1)*dnub:

step1 := proc(ee) local K;
           eval(simplify(collect(ee,vars,K)),K=(x->x));
         end proc:

step2 := proc(ee) local temp;
           temp:=[seq(map(`*`@op,[selectremove(has,[op(u)],vars)]),
                  u=`if`(ee::`+`,ee,[ee]))];
           map[2](op,1,temp),map[2](op,2,temp);
         end proc:

step1(expr3);
step2(%);
targ3;

c4*kpbr*ksr*nurdel*(dnub*nur+dnur*nub)+(c4*nus0+c5+1)*dnub

[nurdel*(dnub*nur+dnur*nub), dnub], [c4*kpbr*ksr, c4*nus0+c5+1]

c4*kpbr*ksr*nurdel*(dnub*nur+dnur*nub)+(c4*nus0+c5+1)*dnub

step1(expr2);
step2(%);
targ2;

c4*kpbr*ksr*nurdel*(dnub*nur+dnur*nub)+nub

[nurdel*(dnub*nur+dnur*nub), nub], [c4*kpbr*ksr, 1]

c4*kpbr*ksr*nurdel*(dnub*nur+dnur*nub)+nub

step1(expr1);
step2(%);
targ1;

c4*kpbr*ksr*nurdel*(dnub*nur+dnur*nub)

[nurdel*(dnub*nur+dnur*nub)], [c4*kpbr*ksr]

c4*kpbr*ksr*(dnub*nur*nurdel+dnur*nub*nurdel)

 

Download collect_ac2.mw

An alternative for step1 (possibly better) might be,

step1 := proc(ee) local K;
           eval(collect(collect(ee,vars,K),K,simplify),K=(x->x));
         end proc:
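And the coeff/coeffs route mentioned at the top could be sketched roughly as follows, regrouping the monomials in vars that share a common coefficient. This is only a rough sketch (checked mentally against expr2, where it reproduces targ2's grouping); the regrouping relies on syntactic equality of the coefficients:

```maple
restart;
vars := [nur, nub, dnur, dnub, nurdel, nubdel, dnurdel, dnubdel]:
expr2 := c4*dnub*kpbr*ksr*nur*nurdel + c4*dnur*kpbr*ksr*nub*nurdel + nub:
C := [coeffs(expand(expr2), vars, 'M')]:  # coefficients; monomials land in M
Ms := [M]:
# for each distinct coefficient, factor the sum of its monomials
add(c*factor(add(`if`(C[i] = c, Ms[i], 0), i = 1 .. nops(C))), c in {C[]});
```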

@Kitonum For the very first example the OP's expected result is not the "simplest" form (lowest leaf count, shortest, fully factored, etc). So, while your comment is correct, I don't see how it figures in here.

Start by showing us the code you used to produce this plot.

@mmcdara The rotation option was introduced for the plots:-textplot command in Maple 2018.

Please don't start a new thread about how to do this kind of thing, i.e. without involving the global names, etc.

Put the followup here instead. The whole topic should stay together.

@Stretto But it will break if the global name x is assigned a value (eg, 4) prior to calling Iter.

@Stretto Your attempt with C@@(3)(5) has incorrect syntax.

And your attempt using f(g) is also incorrect syntax for functional composition. If it is corrected to f@g then, lo, Iter returns a result in terms of @@.

restart;

C := n->n^2:

(C@@3)(5);

390625

Iter := proc(f,n)
    local g, i;
    g := f;
    for i from 1 to n do
        g := f@g:
    end:
end:

Iter(C,3);

C@@4

lprint(%);

C@@4

 

Download iter_syntax.mw
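Incidentally, as written Iter composes one more copy of f than the count suggests, which is why Iter(C,3) returns C@@4. If n-fold composition (C@@n) was the intent, a small adjustment along these lines would do it (a sketch, assuming n >= 1; note that repeated @ composition of the same operator auto-simplifies to @@):

```maple
restart;
C := n -> n^2:
IterN := proc(f, n)
    local g, i;
    g := f;                # one copy of f to start
    for i from 2 to n do   # compose n-1 more times
        g := f@g;
    end do;
    g;
end proc:
IterN(C, 3);       # f@f@f, i.e. C@@3
(IterN(C, 3))(5);  # same as (C@@3)(5), i.e. ((5^2)^2)^2 = 390625
```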

@arashghgood I have already read your post. I did that before I made various kinds of plots from it for both parts i) and (slightly more straightforward) ii).

I would like to know precisely what kind of plots you want. It is unclear from the details you have provided so far what you want in the case that K or Q is not purely real or purely imaginary. It is also unclear which values you want for any parameters that are to be held fixed.

If you are not going to provide the full details for me to choose between various approaches then I am done here.

@arashghgood Yes, I was able to produce a number of plots.

But I'm going to wait until you answer all my queries for specific details thoroughly and properly.

@Carl Love In some cases Optimization can automatically differentiate a procedure form of the objective. I suspect that might work for your example above, say if the userinfo is replaced by a suitable printf call.

Or, perhaps, if the order of objective calls is not required then one might use a remember table on the objective procedure.

Or the corresponding gradient procedure could be supplied explicitly to the Minimize command.

The above are a few ways in which one might show (or possibly retain) the objective evaluations which are not used merely for numeric approximation of the gradient derivatives.

 

What kind of optimization problem are you doing?

The concept of "iteration" depends on the method.

Do you want to see the result attained for each "major iteration" step (whatever that might mean), or each and every functional evaluation?

Note also that some methods might attempt to approximate derivatives or bounds numerically (which requires functional evaluations of the original objective).

If you want to record or see each and every functional evaluation of the objective then you could construct an objective procedure which does that.
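For example, a wrapper objective that records every evaluation might look like the following sketch. The quadratic objective and starting point are hypothetical placeholders, and since the wrapper assigns to global names it will not run under evalhf, so (in my understanding) the solver falls back to regular evaluation:

```maple
restart;
cnt := 0:  evals := table():
obj := proc(x, y)
    global cnt, evals;
    cnt := cnt + 1;
    evals[cnt] := [x, y, (x - 1)^2 + (y + 2)^2];  # record point and value
    (x - 1)^2 + (y + 2)^2;
end proc:
sol := Optimization:-Minimize(obj, initialpoint = [0, 0]):
cnt;  # total objective evaluations, including any used for
      # numeric approximation of the gradient
```

Afterwards the table evals can be inspected entry by entry, in call order.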

I forgot to mention that the second and first approaches which I described above (which have since been shown explicitly in the Answers by vv and Kitonum) were also possible ten years ago. In fact such functionality has been available for much longer than that.

The OP has not included any direct link to the earlier Question thread, which would have been helpful; without it we cannot tell how much general functionality was wanted earlier. The present example's formulation can easily be (manually) solved for variable end-points or recast in polar form, etc., but that is not the case for more general problems of this type.
