Robert Israel

6577 Reputation

21 Badges

18 years, 212 days
University of British Columbia
Associate Professor Emeritus
North York, Ontario, Canada

MaplePrimes Activity


These are replies submitted by Robert Israel

It looks like the D operator is not working on p. Hmmm, I see that happens in Maple 9.5. Well, replace D(p)(x) by diff(p(x),x) and it should work.

Personally, I find the result with style=point better than that with lines joining the points, but de gustibus non est disputandum.

Perhaps what is meant is simply that you want a polynomial p(x) = sum(c[i]*x^i,i=0..n) to approximate the data {[x[j], y[j]], j=1..m} in the sense of minimizing max(abs(y[j] - p(x[j])),j=1..m) (the connection to Chebyshev is that the uniform norm is sometimes called the Chebyshev norm). That can be obtained using linear programming (LPSolve in the Optimization package) as follows. The linear programming problem is to minimize z subject to the constraints
y[j]-sum(c[i]*x[j]^i, i = 0..n) <= z
and
y[j]-sum(c[i]*x[j]^i,i=0..n) >= -z
for each j. For example:
> X := [1, 3, 5, 7, 8, 10, 12]:
  Y := [6, 1, 1, 3, 5, 6, 5]:
  constraints := [
    seq(Y[j] - add(c[i]*X[j]^i, i=0..3) <= z, j=1..7),
    seq(Y[j] - add(c[i]*X[j]^i, i=0..3) >= -z, j=1..7)]:
 Optimization[LPSolve](z,constraints);
[.32408500590319, [z = .324085005903186396, c[0] = 10.4563164108618364, c[2] = 1.00472255017708845, c[3] = -.472255017709559411e-1, c[1] = -5.73789846517115621]]
> p := subs(%[2], add(c[i]*x^i, i=0..3));
p := 10.4563164108618364-5.73789846517115621*x+1.00472255017708845*x^2-.472255017709559411e-1*x^3
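For readers who want to cross-check this minimax (Chebyshev) fit outside Maple, the same linear program can be sketched in Python with `scipy.optimize.linprog` (assuming SciPy is available); the data, the polynomial degree, and the two families of constraints mirror the LPSolve call above:

```python
# Minimax (Chebyshev) polynomial fit via linear programming,
# mirroring the LPSolve formulation: minimize z subject to
#   |Y[j] - p(X[j])| <= z  for every data point.
import numpy as np
from scipy.optimize import linprog

X = [1, 3, 5, 7, 8, 10, 12]
Y = [6, 1, 1, 3, 5, 6, 5]
n = 3  # cubic fit

# Decision variables: c[0..n], then z; the objective is just z.
V = np.vander(X, n + 1, increasing=True)   # V[j, i] = X[j]^i
obj = np.zeros(n + 2)
obj[-1] = 1.0

#  Y[j] - p(X[j]) <= z   ->  -V c - z <= -Y
#  Y[j] - p(X[j]) >= -z  ->   V c - z <=  Y
ones = np.ones((len(X), 1))
A_ub = np.vstack([np.hstack([-V, -ones]), np.hstack([V, -ones])])
b_ub = np.concatenate([-np.array(Y, float), np.array(Y, float)])

# linprog's default bounds are [0, inf); the coefficients and z
# must be allowed to be negative, so make every variable free.
res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (n + 2))
c, z = res.x[:-1], res.x[-1]
print(z)   # ~0.324085, matching the LPSolve result above
print(c)   # ~[10.456, -5.738, 1.005, -0.0472]
```

The equioscillation property of best uniform polynomial approximation makes this optimum unique, so the coefficients should agree with Maple's to solver precision.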
Giliev asked for "the part of k that contains any/all square roots" [my emphasis]. Thus I assume in your example he would want the result to be sqrt(1+sqrt(x)) + sin(sqrt(y)), which is what select(hastype, k, sqrt) delivers.
When I pasted this into a Maple 1D input line I got a strange character before the diff(C(x,t),x,x) that looked like a - but wasn't. Removing that, and inserting multiplication signs where required,
> PDE := diff(C(x, t), t) = DiffusionCoefficientST*exp(-ExperienceConstant*
  (1/Temperature-1/296))*(diff(C(x, t), x, x));
produces what I assume is the expected result. Note that the PDE ends up as diff(C(x,t),t) = .1540836335e-5*diff(C(x,t),`$`(x,2))
I don't know why you're trying to use "cat". That produces a string or a name. You don't want that, you seem to want the expression (0 <= x) and (x <= 20). So you could try
> plotint := (0 <= x) and (x <= 20);
except that if you're using this inside a procedure where x is a formal parameter, you probably want x to be that parameter rather than the global variable x. The best way around that might be something like this:
> plotcond:= x ->  (0 <= x) and (x <= 20);  
  f := proc(x) 
   ...
   piecewise(plotcond(x), ..., ...)
   end proc;
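The same pattern translates outside Maple: keep the domain condition in one predicate function instead of baking it into each formula, so the condition and its users can't drift apart. A minimal Python sketch (the names `plotcond` and `f` are illustrative, not from any library):

```python
# Factor the domain test into its own predicate, analogous to
# plotcond above, then use it wherever a piecewise choice is made.
def plotcond(x):
    return 0 <= x <= 20

def f(x):
    # piecewise(plotcond(x), value_inside, value_outside)
    return x**2 if plotcond(x) else 0.0

print(f(3))    # 9
print(f(25))   # 0.0
```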
The main problem (I almost said "the real problem") is that although complexcons is a property, Maple does not seem to know any relations between that property and others. For example:
> assume(z, complexcons):
  is(z, complex);
false
> is(z, constant);
false
> about(complexcons);
complexcons:
  nothing known about this object
Compare, for example:
> about(realcons);
realcons:
  property aliased to AndProp(real,constant)
  composite (and'ed) property, a property describing
  objects which have all the AndProp(real,constant) properties

> about(constant);
constant:
  an expression which is known to have a constant real or complex value
  a known property having {complex} as immediate parents
  and {BottomProp} as immediate children.
My suggestion is that you don't use "complexcons": use "constant" instead. If you want to assume z is a constant but is not real, use
> assume(z, AndProp(constant, Non(real)));
And then:
> evalc(z + conjugate(z));
z+conjugate(z)
I don't know Arfken and Weber, but I think it is quite useful to have problems that are not just repetitions of the examples in the section. No apology needs to be made for the fact that a problem may require you to bring in knowledge from outside that section. You are supposed to remember what you have learned before, in this course or others, and be able to apply it. In this case the multiplicative property of determinants is something that should be learned in an introductory linear algebra course; if you don't know it and you're studying about groups of matrices, it's time you learned it. I like what Herstein wrote in the preface to his text "Topics in Algebra" (which had many difficult problems):
A word about the problems. There are a great number of them. It would be an extraordinary student indeed who could solve them all. Some are present merely to complete proofs in the text material, others to illustrate and to give practice in the results obtained. Many are introduced not so much to be solved as to be tackled. The value of a problem is not so much in coming up with the answer as in the ideas and attempted ideas it forces on the would-be solver. Others are included in anticipation of material to be developed later, the hope and rationale for this being both to lay the groundwork for the subsequent theory and also to make more natural ideas, definitions, and arguments as they are introduced.
It's almost a complete proof; not quite perfect though.
1) It's not necessarily true that "On the first call to BinarySearch f-s is the length of the dictionary less one", although that may be what the programmer intended. This doesn't affect the fact that BinarySearch will terminate.
2) You really should prove that s <= m <= f if s <= f. Actually that's not necessarily true if s and f are negative (but then the program terminates with an error).
3) The search doesn't terminate with "entry not found" when f-s becomes 0, but rather when it becomes -1.
The classic book on how to prove things is "How to Solve It" by Polya.
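The termination argument can be made concrete in a short sketch. Here is a hypothetical binary search in Python (an iterative stand-in for the recursive BinarySearch being discussed, assuming a sorted list): the interval length f - s strictly decreases on every pass, so the loop must stop, and "not found" corresponds to f - s reaching -1, i.e. an empty interval.

```python
# Iterative binary search with the termination measure made explicit.
# The quantity f - s strictly decreases each pass, so the loop
# terminates; "entry not found" corresponds to f - s becoming -1
# (s > f, empty interval), not to f - s becoming 0.
def binary_search(D, key):
    s, f = 0, len(D) - 1          # initially f - s = len(D) - 1
    while s <= f:                  # interval [s, f] is nonempty
        m = (s + f) // 2           # s <= m <= f since 0 <= s <= f
        if D[m] == key:
            return m
        elif D[m] < key:
            s = m + 1              # new f - s is strictly smaller
        else:
            f = m - 1              # new f - s is strictly smaller
    return -1                      # s > f: empty interval, key absent

print(binary_search(["ant", "bee", "cat", "dog"], "cat"))  # 2
print(binary_search(["ant", "bee", "cat", "dog"], "fox"))  # -1
```

Note that with nonnegative indices the midpoint always satisfies s <= m <= f, which is exactly the lemma point 2) above asks you to prove.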
1-D and 2-D: See the help page ?worksheet,documenting,2Dmath. 2-D Math input is the initial default setting for input into Maple. It's supposed to be very easy for novices to use: you can mess about with the mouse, palettes, etc.; you don't need to remember a * sign for multiplication; and it looks like math, with subscripts and nicely typeset fractions. Most experienced Maple users avoid it like the plague. 1-D input, or Maple notation, is straight text typed from the keyboard, so you can see exactly what input Maple is getting. You can access it from the Insert > Maple Input menu, or by pressing the F5 key, or by going to Tools > Options > Display and changing "Input display" to Maple Notation.
Implicit plots are done using implicitplot in the plots package. See the help page ?implicitplot.
As I said, the exercise doesn't ask you to prove that the result is correct, just that the procedure terminates. That doesn't mean that it isn't possible to prove that the result is correct when the inputs are valid (in particular, in this case, when the list D is sorted in increasing lexicographic order) - it is indeed possible. And yes, I think it is something that you should be "concerned about". Proving a program correct, when that is possible, is a great way to avoid bugs. Unfortunately, it gets very difficult when the program is long and complicated.