JacquesC

Prof. Jacques Carette

2396 Reputation

17 Badges

19 years, 259 days
McMaster University
Professor or university staff
Hamilton, Ontario, Canada


From a Maple perspective: I first started using it in 1985 (it was Maple 4.0, but I still have a Maple 3.3 manual!). Worked as a Maple tutor in 1987. Joined the company in 1991 as the sole GUI developer and wrote the first Windows version of Maple (for Windows 3.0). Founded the Math group in 1992. Worked remotely from France (still in Math, hosted by the ALGO project) from fall 1993 to summer 1996, where I did my PhD in complex dynamics in Orsay.

Soon after I returned to Ontario, I became the Manager of the Math Group, which I grew from 2 people to 12 in 2.5 years. Got "promoted" into project management (for Maple 6, the last of the releases which allowed a lot of backward incompatibilities, aka the last time that design mistakes from the past were allowed to be fixed), and then moved on to an ill-fated web project (it was 1999 after all). After that, worked on coordinating the output from the (many!) research labs Maplesoft then worked with, as well as some Maple design and coding (inert form, the box model for Maplets, some aspects of MathML, context menus, a prototype compiler, and more) and some of the initial work on MapleNet.

In 2002, an opportunity came up for a faculty position, which I took. After many years of being confronted with Maple weaknesses, I got a number of ideas of how I would go about 'doing better' -- but these ideas required a radical change of architecture, which I could not do within Maplesoft. I have been working on producing a 'better' system ever since.

MaplePrimes Activity


These are answers submitted by JacquesC

Your reasoning is correct, as one can see from:
> Groebner[Basis]([x-1,x-2,y], tdeg(x,y));
                                 [1]
In Maple 11, Groebner[Basis] returns [1] on your longer problem. It also returns a really nice cofactor Matrix for it.
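A quick way to convince yourself of what that [1] means: the polynomials have no common root, so solve on the same small system returns nothing:
> solve({x-1, x-2, y}, {x, y});
(no output -- solve returns NULL for an inconsistent system, which is exactly what a Groebner basis of [1] is telling you.)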
alpha in Maple displays as α. So if you are typing away, you can enter it that way too; you don't need to take your hands off the keyboard to reach for the mouse. Of course, if your hands are already on the mouse, by all means, click away!
If you are a vi user (hopefully vim), then there is a Maple mode for vim as well, which can help locate errors in ways similar to Joe's maple-mode for emacs. It can certainly be improved (I have recently taken over maintenance of it, but I am behind), but it's a start. I had an interesting conversation with someone in the know at Maplesoft [at a conference held in a rather nice castle in Europe] which leads me to believe that Maple 12 will have some real goodies [in Standard] in that respect.
First, in Maple syntax this is
> eq := x^3 + (D(y)(x) - 9*x)^(5/(y(x)+2)) + 4*(x-y(x))^(D(y)(6*x+4)):
Note the use of D instead of diff -- since this is a functional differential equation, as Robert comments. If you want to try to get a series solution, you should try to plug one in, to see if you get a proper system:
> eq3 := eval(eq, y = (z -> a0 + a1*z + a2*z^2 + O(z^3))):
And now, try the more powerful series in Maple:
> with(MultiSeries): series(eq3, x);
which returns
Error, (in MultiSeries:-multiseries) unable to compute series
This actually indicates to me that it is very unlikely that your equation, as is, actually has a simple series solution. My guess is that whatever solutions exist, they have some kind of singularity.
I think the 'real' problem is that solve accepts this as a question it is prepared to answer! Asking for singularities of an expression makes sense, but asking for where it is equal to infinity? I think it would be wiser for solve to reply that this is a question it is not built to answer. Of course, Doug makes it clear he also had his doubts about the meaningfulness of this question. I would much prefer to ask about poles, perhaps of a particular order, (essential) singularities, etc. While one can easily enough define what equal-to-infinity might mean for elements of Q(x) [known as ratpoly in Maple], it is already very difficult to figure out what this means for ln(x).
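For what it is worth, there is a command that asks (something closer to) the better question: singular reports the singular points of an expression. A small illustration -- the exact form of the output may differ slightly:
> singular(1/(x^2-1), x);
                         {x = -1}, {x = 1}
That is the kind of answer I would rather see than a 'solution' of f = infinity.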
> eqns := { tan(delta) = ll/r1, sin(phi2) = la/r2, r2^2 = la^2+r1^2, sin(phi1) = lt/r2, phi = phi1+phi2}:
> solve(eqns, [phi, r1, r2, phi1, phi2]);
immediately returns
[[phi = arcsin(lt*tan(delta)/RootOf(-la^2*tan(delta)^2-ll^2+_Z^2,label = _L4))+arcsin(la/RootOf(-la^2*tan(delta)^2-ll^2+_Z^2,label = _L4)*tan(delta)),
  r1 = ll/tan(delta),
  r2 = RootOf(-la^2*tan(delta)^2-ll^2+_Z^2,label = _L4)/tan(delta),
  phi1 = arcsin(lt*tan(delta)/RootOf(-la^2*tan(delta)^2-ll^2+_Z^2,label = _L4)),
  phi2 = arcsin(la/RootOf(-la^2*tan(delta)^2-ll^2+_Z^2,label = _L4)*tan(delta))]]
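The RootOf appearing everywhere is just a square root in disguise; allvalues will spell it out explicitly:
> R := RootOf(_Z^2 - la^2*tan(delta)^2 - ll^2):
> allvalues(R);
which gives the two roots (la^2*tan(delta)^2+ll^2)^(1/2) and its negative, so r2 above is simply sqrt(la^2*tan(delta)^2+ll^2)/tan(delta), up to the choice of sign.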
You are encountering the Gibbs phenomenon; that same page suggests using alternate summation methods [like Cesaro summation], in particular the Fejer kernel (the Cesaro average of the Dirichlet kernels). Of course, that is no longer the 'fast' Fourier Transform! Another question is: why not allow yourself some symbolic pre-processing? In this case
> inttrans[fourier](-1/(5*v*I)*(1-exp(5*I*v)), v, s);
returns
                2/5*Pi*(Heaviside(s)-Heaviside(s-5))
Note that various 'tweaks' are unlikely to help -- the Gibbs phenomenon is not something that can be removed so easily.
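If you want to see what the averaging buys you, here is a minimal sketch on a standard square-wave series (not your transform; S and F are just throwaway names):
> S := (N, x) -> add(sin((2*k-1)*x)/(2*k-1), k = 1..N):   # N-term partial sum of a square wave
> F := (N, x) -> add(S(n, x), n = 1..N)/N:                # Cesaro (Fejer) average of the first N partial sums
> plot([S(20, x), F(20, x)], x = -Pi..Pi);
The raw partial sum overshoots near the jumps (that is the Gibbs phenomenon); the averaged one does not, at the price of visibly slower convergence away from the jumps.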
Think about what gcd does. Then look at the help page for Gcd (which allows you to compute gcds over finite fields).
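For example, over GF(2):
> Gcd(x^2+1, x^3+1) mod 2;
                               x + 1
since x^2+1 = (x+1)^2 and x^3+1 = (x+1)*(x^2+x+1) over GF(2), whereas the ordinary gcd of these two polynomials over the rationals is just 1.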
(At least in Classic) you have to drag output to be able to drop it on a plot; dragging input does not work.
If your primary aim is to write flawless-looking technical documents, with a few simple computations being rather helpful to the writing of the document, then Scientific WorkPlace is an excellent tool. If your primary aim is powerful mathematical computation, using a nice interactive interface with good (and improving) printed output, Maple is what you are looking for.
I would have to see the details, but I am guessing that you were clever in how you translated your expression into C, while Maple was much more straightforward. Such a straightforward approach saves a lot of manual labour, but cannot replace a clever encoding, especially in the presence of potential numerical instability. Whatever the Marketing for Maple says, it is still just a tool. A very very good one, that is incredibly useful, but nevertheless just a tool that allows you to automate certain things. Not all automations are good, as you have discovered. Currently, code generation is an area where designers know how to translate (from an algebraic point of view) from expressions to code in a way that preserves meaning; as far as I know, numerical stability of such translations is a wide open problem.
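To see what I mean by 'straightforward', here is a small sketch using the CodeGeneration package (the expression is just a placeholder):
> with(CodeGeneration):
> C(a*x^2 + b*x^2 + c*x^2);                   # direct, term-by-term translation
> C(a*x^2 + b*x^2 + c*x^2, optimize = true);  # with common-subexpression optimization first
Even the optimized version only does algebraic restructuring; neither knows anything about the numerical conditioning of your particular expression -- that part is still up to you.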
I gather you mean the MathieuC function? [Or the MathieuCE?] I will assume MathieuC. There certainly does not seem to be anything obvious in the "standard" references, nor a quick Google away. However, the (algebraic) version of the Mathieu function is a special case of the HeunC function. And there are interesting integral representations for those, for example in this paper (see p.9 for example). Yes, those representations are in terms of HeunC itself -- however, observe the "free parameter" c introduced in equation (6.6). The idea then would be to pick a parameter value for this c that causes the HeunC function to "collapse" to something simpler, thus giving you an honest-to-goodness integral transform. The other trick is to try to find good kernel functions. Lemma 3.3 from that paper allows quite a bit of flexibility in that regard. Unfortunately, the obvious representations do not help: both Laplace transforms and Fourier transforms end up giving an ODE whose solution is again in terms of Mathieu functions.
If memory serves me right, the elliptic functions in their current form were introduced to Maple for the explicit purpose of making the answers from definite integration as simple as possible [somewhere between 5.2 and 5.4 I believe]. The conventions were definitely influenced by A&S, as is the case for most of Maple's special functions, but "tailored" to the application of definite integration of variants of the functions that appear on ?EllipticF. Getting branch cuts "just right" was a large amount of work, and the particular form was chosen to make things as simple and as uniform as possible. Some of that information should be available on the web somewhere, but I can't find it right now -- I will look again later.
Sounds like homework...
Your sum is a truncation to n terms of the Taylor series at x=0 for the following:
> sum(binomial(2*k,k)*x^k, k=1..infinity);
               4*x/(-4*x+1)^(1/2)/(1+(-4*x+1)^(1/2))
and then evaluated at x=1. As Robert mentions, the radius of convergence is 1/4, so things do get a little weird.
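You can see the trouble at x=1 directly from the partial sums, which grow like 4^n (up to a polynomial factor):
> seq(add(binomial(2*k, k), k=1..n), n=1..6);
                      2, 8, 28, 98, 350, 1274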