dharr

Dr. David Harrington

University of Victoria
Professor or university staff
Victoria, British Columbia, Canada

I am a retired professor of chemistry at the University of Victoria, BC, Canada. My research areas are electrochemistry and surface science. I have been a user of Maple since about 1990.

MaplePrimes Activity


These are replies submitted by dharr

@nm Nice use of remove_RootOf; I didn't know about that. When I apply evalf([%]) to your result I get

[[0.6114019859 = 0., 0.2874388753 = 0., 0.1011591386 = 0.]],

but none of these correspond to the expected result, so I'm confused about that.

@fnavarro If I just evaluate the expression at Digits = 40, I don't see a problem; there are no strange values in pts:

restart;
Digits:=40;
pts:=[seq([x,x*exp(-x)],x=0..20,0.2)]:
plots:-pointplot(pts,connect);

I agree that if you look at the points in the plot structure for the regular plot without adaptive = true, they look strange, but as @Preben Alsholm points out, this is a function of the adaptive routine. Perhaps you have a specific example not involving plot.
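If you want to inspect those adaptively chosen points directly, a minimal sketch (assuming plottools:-getdata is available in your version) is:

p := plot(x*exp(-x), x = 0 .. 20):      # default adaptive sampling
dat := plottools:-getdata(p):           # ["curve", ranges, Matrix of points]
dat[3];                                 # the Matrix of (x, y) pairs actually plotted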

@Art Kalb There are lots of non-commutative operators - "." and the ones starting with "&". I was thinking that since "." knows about inverses (b . b^(-1) = 1), it might be useful, but expand doesn't work (&* is special in this respect, since expand has some special code for &*). In the end I think you have to specify a group to make progress. If you are OK with specifying group elements as permutations, then that is the simplest way, since you can use most groups in the GroupTheory package, as in the following. (It would be nice if Elements(G) always had the identity first, so I fiddled with it to force that.) If you want nicer symbols for the group elements, you have to work a bit harder, as DrawCayleyTable does.
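A rough sketch of that permutation-based route (DihedralGroup(3) here is just a stand-in for whatever group you need, and it assumes GroupTheory:-PermOrder for the identity test):

with(GroupTheory):
G := DihedralGroup(3):                        # elements are permutations (Perm objects)
els := convert(Elements(G), list):            # Elements gives a set; order not guaranteed
isId := p -> evalb(PermOrder(p) = 1):         # only the identity has order 1
els := [op(select(isId, els)), op(remove(isId, els))]:   # force the identity to the front
DrawCayleyTable(G);                           # multiplication table, as mentioned above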

group-ring_2.mw

@Nicole Sharp If the Maple code is going to run on your local machine, then it needs to be downloaded, regardless of which file format is used. Import imports the whole contents of the file, and MapleCloud also uploads and downloads files. (Your MaplePrimes account should work on MapleCloud; it is easiest to sign in from within Maple.) The files could be single functions, if you want to download just one of many functions. But I don't see how you can avoid re-downloading after an update.

The alternative would be to run the Maple code on a server, say one you have an account on. Then you are always accessing the current version of that file. Maplesoft also has a separate product, MapleNet, that allows users to run Maple code on a server through a web browser.

@Nicole Sharp The end user need not know about the local file, in the sense that the file can be downloaded and immediately read every time. (The intermediate file stage could be avoided if the read statement had a source = direct option like Import does, which means treat the string as the file contents.)

You can take the string, use StringTools:-Split to split it at each endline "\n" character, and then parse each line. parse only does one expression or statement, and then you have to handle the statements differently from the expressions. In the end, writing an intermediate file seemed to me the easier option.

(The Import statement handles the .mpl file in just the same way as if it were reading a local file. It defaults to output = string.)
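For the download-then-read route, a minimal sketch (the URL and file names here are made up) would be:

src := Import("https://example.org/mycode.mpl", output = string):  # whole file as one string
fd := fopen("mycode_local.mpl", WRITE):
fprintf(fd, "%s", src):                                            # intermediate local file
fclose(fd):
read "mycode_local.mpl";                                           # defines the functions in the session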

@WA573 Use

interface(showassumed=0):

to remove the ~ that marks assumed variables.

@C_R Yes, the poles on the equator are harder with spherical coordinates. If you use phi > Pi/2 and have four regions of theta, then you can have each octant colored separately; the equatorial and north/south poles are then at the points where the colors meet. You could add something like

pointplot3d([[0,0,1],[0,0,-1],[0,1,0],[0,-1,0],[1,0,0],[-1,0,0]],
  symbol=solidsphere,symbolsize=10,color=black):

though there are little bumps then.

An equatorial band is possible with a small phi range and north-south bands with a small theta range, though the north-south ones narrow at the poles.
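A rough sketch of the octant coloring along those lines (colors and the particular split are just for illustration):

with(plots):
cols := [red, blue, green, yellow, cyan, magenta, orange, gray]:
octants := [seq(seq(plot3d(1, theta = i*Pi/2 .. (i + 1)*Pi/2,
                              phi = j*Pi/2 .. (j + 1)*Pi/2,
                              coords = spherical, color = cols[4*j + i + 1]),
                    i = 0 .. 3), j = 0 .. 1)]:
display(octants, scaling = constrained);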

@Anthrazit Thanks for the complete information. To make a nested table, you need

WhateverYouNeed["calculations"] := eval(calculations);

Edit: my earlier explanation is too simplistic. In one step:

WhateverYouNeed:=table(["calculations"=
                   table(["structure"=
                     table(["connection"=
                       table(["Cutright1" = "false", "graindirection1" = 90*Unit(arcdeg)
                       ])
                     ])
                   ])
                 ]);
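Once defined this way, the innermost entries are reached by chained indexing, e.g.

WhateverYouNeed["calculations"]["structure"]["connection"]["graindirection1"];   # gives 90*Unit(arcdeg)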

TableDefinition.mw

@WA573 You needed a nested seq, and I changed subs to eval, but the calculated values are complex, so you get an empty plot.
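A generic sketch of the nested seq pattern (these are not the expressions from your worksheet):

pts := [seq(seq([x, y, evalf(x*exp(-x*y))], y = 0 .. 1, 0.25), x = 0 .. 2, 0.5)]:
                                        # list of [x, y, value] triples over a 2-D grid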

plot_dis.mw

@Zeineb I don't see any Maple errors, just a warning that i was not declared in A, which I fixed. You haven't written the integral as an integral with respect to psi rather than t - just cancel dt in (dpsi/dt)*dt = dpsi, as I did above. I'm not sure what you are trying to do, but you still have the A's as functions of the y's and the y's as functions of the A's, which is confusing.

last.mw

@nm I agree. Indeed, odeadvisor has the option of testing other types, which you would not need if it were intended to return all types.

DEtools:-odeadvisor(ode,y(x),[homogeneous]);

tests only the homogeneous classifications.

@WA573 In (6)-(8) I only see bars over expressions that contain eta and are therefore potentially non-real; I don't see any bars directly over n or m alone.

@Zeineb Changing int to Int in Frac_D shows why the integrals are being evaluated as zero. Notice that A[0](x) works as expected but A[0](t) does not. This can be fixed by declaring local t (in red in the worksheet), though this may be a side issue. But the code is confusing because the y[i] are used in the definition of A[n] but are defined later, and I lost track of locals and globals. I suggest you create the A[n] and y[i] one by one in the right order, and only put it into one loop once it works.
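The general debugging pattern there (with a hypothetical integrand f, not the worksheet's expressions) is to use the inert Int so you can see what is actually being integrated before it is evaluated:

J := Int(f(t)*t, t = 0 .. 1);           # inert form: displays the integral unevaluated
value(J);                                # evaluate once the integrand looks right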

compute_integral.mw

@Zeineb Yes, Maple can do Axel's integral given those assumptions. In Maple 2023, simplify (after dividing by GAMMA(alpha)) then gives the form you want.

@nm I agree a better simplification would help here, and I think that was @acer's point - it didn't help. In my answer I said I assumed you wanted the mathematical result, which requires factoring to see if polynomials have common factors, and this then leaves the simplification problem. There is also the issue of how hard you want to look for common factors. But if you only want it to cancel preexisting factors, then it would be done in a different way that might miss potential mathematical common factors.
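As a small illustration of the difference (not the expressions from your question): a hidden common factor is only found by the gcd/factoring step, whereas a preexisting explicit factor could be cancelled without it.

q1 := (x^2 - 1)/(x^3 - 1):                        # common factor (x - 1) is not visible syntactically
normal(q1);                                       # (x + 1)/(x^2 + x + 1): found via the gcd computation
q2 := ((x - 1)*(x + 1))/((x - 1)*(x^2 + x + 1)):  # the same factor written out explicitly
normal(q2);                                       # here a purely syntactic cancellation would also have sufficed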
