JacquesC

Prof. Jacques Carette

McMaster University
Professor or university staff
Hamilton, Ontario, Canada


From a Maple perspective: I first started using it in 1985 (it was Maple 4.0, but I still have a Maple 3.3 manual!). Worked as a Maple tutor in 1987. Joined the company in 1991 as the sole GUI developer and wrote the first Windows version of Maple (for Windows 3.0). Founded the Math group in 1992. Worked remotely from France (still in Math, hosted by the ALGO project) from fall 1993 to summer 1996, where I did my PhD in complex dynamics in Orsay.

Soon after I returned to Ontario, I became the Manager of the Math Group, which I grew from 2 people to 12 in 2.5 years. Got "promoted" into project management (for Maple 6, the last of the releases which allowed a lot of backward incompatibilities, aka the last time that design mistakes from the past were allowed to be fixed), and then moved on to an ill-fated web project (it was 1999, after all). After that, I worked on coordinating the output from the (many!) research labs Maplesoft then worked with, as well as some Maple design and coding (the inert form, the box model for Maplets, some aspects of MathML, context menus, a prototype compiler, and more), and some of the initial work on MapleNet.

In 2002, an opportunity came up for a faculty position, which I took. After many years of being confronted with Maple's weaknesses, I had a number of ideas about how I would go about 'doing better' -- but these ideas required a radical change of architecture, which I could not do within Maplesoft. I have been working on producing a 'better' system ever since.

MaplePrimes Activity


These are replies submitted by JacquesC

First, you will get a lot of simplifications if you replace the expand with a call to simplify in the line psi2 := expand(F(r,z)^2): and do the same in the next two lines. However, these integrals are really very tough. It is quite easy for a human to break them down into pieces which are easier to integrate, but as given, this is a very tough problem.

I suggest you do a domain decomposition yourself, and integrate over each region separately, depending on the local/asymptotic behaviour of the integrand on each region. Specifically, at r=0, the integral in z really wants to diverge (for Izr[1,1], the E part). Maple, on its own, is not able to show that this is not the case, because this is in fact rather subtle.

This is where human ingenuity still beats machine algorithms hands-down. But this is also where Maple, as a tool to help, can really make a huge difference. I figured this all out in minutes, with a few symbolic experiments on the integrand using series and asympt. THAT, to me, is still Maple's greatest strength.
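The kind of symbolic experiment I mean looks something like this (with a made-up integrand standing in for the actual E part -- the idea, not the specific function, is the point):

```
# Hypothetical integrand with the same flavour of r=0 trouble:
f := r * ln(r) / (r^2 + z^2);
# Local behaviour near r = 0 -- does it really diverge, and how fast?
series(f, r = 0, 3);
# Asymptotic behaviour for large z, to decide where to cut the domain:
asympt(f, z, 3);
```

A couple of such probes on each region tell you which pieces are integrable and which need special handling, before you ever call int.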
I would really like to try an experiment where the output from solve would have conditions on parameters (like p) for validity. From my experience in eradicating the square-root bug from the Maple library, I believe that the results would not be as dire as the pundits predict. In fact, I think that overall, for meaningful questions, the output would be largely improved. The only cases where the output would be an absolute mess, I believe, would be artificial ones -- rather as asking for the determinant of a general 7x7 matrix is a silly question. [There was a partial such experiment run about 14 years ago, and the results were very positive, but no one was brave enough to take the jump.]
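To sketch the kind of output I have in mind (more recent versions of solve do have a parametric option for polynomial equations that does a case analysis roughly along these lines; the equation below is just an illustration):

```
# Plain call: silently assumes p is generic (nonzero):
solve(p*x^2 + x + 1 = 0, x);
# With the parametric option, the answer is a case analysis on p,
# distinguishing p = 0 (linear case) from p <> 0:
solve(p*x^2 + x + 1 = 0, x, 'parametric');
```

Imagine that behaviour as the default, library-wide, and you have the experiment I am describing.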
Mathematica uses a lot of precomputed tables, which result in very fast answers. Of course, it also results in NO answers when you are out of those tables, or even when you ask for minor variations on a problem. Maple uses a lot of algorithms, so that you get more answers, sometimes at considerable computation cost. It is a different strategy. There are also times where both Maple and Mathematica use outdated algorithms. This is often the case in Maple for integration, and is often the case in Mathematica when it comes to solving differential equations (or even solving systems of equations). In fact, this list could go on and on, with each system having 20-30 different major pieces of functionality in each column. I prefer Maple's strategy. Table lookup, while potentially fast, is a maintenance and Quality Assurance nightmare [as inttrans has proven beyond a reasonable doubt].
acer, you make a really good point, and I really hope that others read your careful analysis too. Back when I worked at Maplesoft, it was really frustrating that we got very little usability feedback from users. So some ``improvements'' were in fact bad ideas, while minor features turned out to be much bigger deals than was originally thought. [A huge mistake way too many developers make is to equate the value of a feature with its development time, when in real life there is very little correlation]. So MaplePrimes can be a very powerful force in this direction, to let developers see the kinds of ``use cases'' that users really have. And the cases where some current features are so broken as to be non-features. Non-features are things where backwards compatibility just does not apply, since why be compatible with something that does not work?
There are some little-known (and even less documented) functional versions of other parts of the syntax. My favourites are `[]` and `{}`, the list and set constructors, respectively. There is also `||` for concatenation and `..` for ranges. But the ones that I really want to use [but keep forgetting about] are `?[]` for constructing a tableref, and `?()` as an almost-synonym for apply. It is the possibility of using `?[]` in a map or map2 that starts to make things really interesting! Not to forget `<,>` and `<|>`, which are the names that the LinearAlgebra package uses for the syntactic shorthands for entering Matrix/Vector expressions.
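A quick sanity-check session, to make the correspondence concrete (outputs noted where I am sure of them):

```
t := table([a = 1, b = 2]):
`[]`(1, 2, 3);              # the list [1, 2, 3]
`{}`(1, 2, 2);              # the set {1, 2}
`||`(foo, bar);             # the name foobar
`..`(0, 5);                 # the range 0 .. 5
`?[]`(t, [a]);              # t[a], i.e. 1
`?()`(sin, [Pi/2]);         # sin(Pi/2), i.e. 1
# The fun part: `?[]` with map2 looks up several indices at once:
map2(`?[]`, t, [[a], [b]]); # [t[a], t[b]], i.e. [1, 2]
`<,>`(1, 2, 3);             # a 3-element column Vector
`<|>`(1, 2, 3);             # a 3-element row Vector
```

Once you see these as ordinary (first-class) names, all the usual higher-order tools -- map, zip, curry -- apply to them directly.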
I believe that your procedure suffers from cancellation between various terms. Running it with Digits set to 20, I consistently get .5892103785 as the answer. I would suggest you run your procedure with printlevel set to 5 or 10 to 'see' it running.
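To see the cancellation effect in miniature (this is a textbook example, not your actual procedure):

```
# (1 - cos(x))/x^2 for tiny x: the subtraction wipes out the
# leading digits, so low precision gives garbage.
x := 1.0e-6:
Digits := 10:
evalf((1 - cos(x))/x^2);   # most (or all) significant digits cancel
Digits := 20:
evalf((1 - cos(x))/x^2);   # now close to the true limit, 1/2
# And to watch a procedure execute statement by statement:
printlevel := 5:
```

Raising Digits buys back the digits lost to cancellation, which is exactly why the answer stabilizes at 20 digits.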
I dislike has because it gives false positives. I would rather write SPLIT := (f,x) -> [selectremove](type, f, 'freeof'(x)); This will then work properly on products containing objects like Int(g(x),x=0..t) which do not 'depend' on x at all.
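For instance (g, y and t here are just illustrative names; I make no claim about the exact printed form of the split):

```
SPLIT := (f, x) -> [selectremove](type, f, 'freeof'(x)):
expr := 3 * sin(y) * x^2 * Int(g(x), x = 0 .. t):
# SPLIT separates the product into [part free of x, part depending on x]:
SPLIT(expr, x);
# has, by contrast, is purely syntactic: it reports true for the
# integral even though x is only a bound (dummy) variable there:
has(Int(g(x), x = 0 .. t), x);   # true -- the false positive
```

That syntactic-versus-semantic distinction is exactly why I reach for the freeof type rather than has.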
Looking at the shape of the entries, in general there seem to be entries of the form sqrt(|a+k| + |b+j|), where k and j range over {-1,0,1}. That is really like 6 'new' variables. One possible attack is indeed to replace all of those expressions by fresh variables, and *then* compute the determinant by interpolation. The new variables would satisfy some relations, but these relations would be known a priori.
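A toy version of the substitution step, on a 2x2 matrix of the same shape (the matrix and the fresh names s[i] are made up for illustration; a real attack would do the determinant by interpolation rather than directly):

```
M := Matrix([[sqrt(abs(a+1)+abs(b)), sqrt(abs(a)+abs(b-1))],
             [sqrt(abs(a)+abs(b+1)), sqrt(abs(a-1)+abs(b))]]):
# Collect the radical subexpressions appearing in the entries:
radicals := [op(indets(convert(M, set), 'radical'))];
# One fresh variable per distinct radical:
fresh := [seq(s[i], i = 1 .. nops(radicals))];
# Replace each radical by its fresh name, entrywise:
M2 := map(e -> subs(radicals =~ fresh, e), M):
# The determinant is now a small polynomial in the s[i]:
d := LinearAlgebra:-Determinant(M2);
# Substitute back at the very end:
subs(fresh =~ radicals, d);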
My guess is that those abs and sqrt come from computing some distances. So if we knew how this matrix actually originates, i.e. what the actual problem at hand is, we might be able to rephrase it to make some of these disappear, which would greatly simplify things. Furthermore, I would hope that some of those floating point quantities could also be abstracted out, which would further help. Following acer's ideas, I guess I should mention a paper I have co-authored on computing LU decompositions using expression management techniques that deal with the expression-explosion problem. But it only works when one has a zero-testing oracle, which is not the case for the class of expressions at hand. However, with some further knowledge of the original problem, this may still be possible.
It is well-known amongst numerical analysts that those 500 year old formulas for the roots of a cubic are numerically very ill-behaved. But 500 years of tradition are very hard to change, so the better formulas (involving arctan) are not used, because they are not the ones that show up in textbooks. One day, some system builder will feel brave enough to buck the tradition and use a better, more modern (only ~120-150 years old!) formula instead. Ah, the day that a computer-based mathematics system uniformly and correctly uses mathematics that is as young as 100 years old, that will be a joyous day.
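For the curious, here is one of the trigonometric forms (this one is usually written with arccos; arctan variants exist too). For a depressed cubic t^3 + p*t + q with three real roots, it stays entirely in real arithmetic, instead of detouring through complex cube roots as Cardano's formula does:

```
# Roots of t^3 + p*t + q = 0 in the three-real-roots case (p < 0,
# 4*p^3 + 27*q^2 < 0), via the trigonometric formula:
troots := (p, q) -> [seq(
    2*sqrt(-p/3) * cos(arccos(3*q/(2*p)*sqrt(-3/p))/3 - 2*Pi*k/3),
    k = 0 .. 2)]:
# Example: t^3 - 3*t + 1 has three real roots.
evalf(troots(-3, 1));
# Check: each residual should vanish up to roundoff.
evalf(map(t -> t^3 - 3*t + 1, troots(-3, 1)));
```

No cube roots of complex numbers anywhere, which is precisely why this form is so much better behaved numerically.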