Carl Love

28015 Reputation

25 Badges

12 years, 296 days
Himself
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

You can't compare what you've done in 2D Input with anything possible in Maple V because Maple V doesn't have 2D Input. In your 1st usage of the range in your most-recent Reply, the right endpoint is 0.2, so the range is the single point 0.2. In your 2nd usage, the right endpoint is 2.

Also, I didn't say that there were no discrepancies; I said that there were no forward discrepancies, meaning that the code shown in this thread that works correctly in Maple V's 1D input (its only input mode) continues to work correctly in Maple 2022 1D Input.

Also, assuming doesn't exist in Maple V, although it does have assume.

@jediknight There are no forward discrepancies in the results. The code that works in Maple V continues to work in Maple 2022. But until you said that you were using Maple V, people were giving you Answers that worked only in more-recent versions. That's only natural, because the way that things are coded in the more-recent versions is generally better, less prone to errors. (An example is the modern eval(GainQ1, params) versus Maple V's subs(params, GainQ1); the eval does some checking of the mathematical validity of the substitutions.)

The function GainQ1 is an even function of wx, i.e., GainQ1(wx) = GainQ1(-wx) for any real wx. So, any solution wx has a corresponding solution -wx. The way that you used fsolve in your last command caused it to return a single real root of its choosing, and it happened to choose a negative one. You can force it to give a positive root by making the 2nd argument {wx= 0..infinity}.
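
I don't have your exact fsolve call in front of me, but here's a hedged sketch using the names that appear elsewhere in this thread (untested in your version):

a:= eval(GainQ1, params):                        # GainQ1 with the numeric parameters substituted
fsolve(numer(diff(a, wx)), {wx= 0..infinity});   # the range restriction forces the positive root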

It's possible to give a complete symbolic solution to your overall original problem. In other words, it's possible to get Maple to return a single algebraic expression for the maximum value of GainQ1 for any m > 1 and Q1 > 0. The expression is long (about a screenful at the smallest readable Zoom level) but Maple can easily handle it. It has several square roots and cube roots of polynomials. 

@Wes My last Reply wasn't directed to you. It was meant for others who may be trying to work with your data but don't have your Excel file. And there's no need for the file now because it's easier to just copy-and-paste your column of data.

Regarding those two lines: There's nothing to "test". It's just a listing of some Maple code.

Excel and other software that can do this quickly likely have databases of the integral values. There's one common set for n <= 2000 (apparently used by the external code that Maple calls) and another for n <= 5000 (used by R, among others).  

@jediknight For Maple V, try this:

restart:
GainQ1:= 8*wx^2*(m-1)/(Pi^2*sqrt((m*wx^2-1)^2+Q1^2*wx^2*(wx^2-1)^2*(m-1)^2));
params:= [m= 6.3, Q1= 1];
a:= subs(params, GainQ1);
Top_of_Q1:= max(seq(subs(wx= x, a), x= [fsolve](numer(diff(a, wx)))));

That's based on my memory of Maple V; I have no way of testing it. (Of course, it still works in Maple 2022.)

If you're using software that's well over 20 years old, and that software has been continuously updated ever since by a whole company of programmers, and everyone who answers your questions is used to the more-recent versions, then don't you think that you should tell people upfront what version you're using? 

A hint for anyone who wants to work with this data set in Maple: An unpunctuated column of numbers, such as the OP gave above, can be directly copy-and-pasted to a 1D-input execution group. The result will be 1D code that returns the equivalent 1-column Matrix. No other commands or typing of any kind is needed.

I think that Maple does this W test with external code (which I can't see). The deepest that I dug was

kernelopts(opaquemodules= false):
eval(Statistics:-Tests:-ShapiroWilkWTest:-PerformTest);

I'm working on computing the W statistic (to compare with OP's result), but it involves over 3000 numeric integrations, most of which are double integrals. It's still doable though. It's the covariance matrix of the order-statistic distributions of 80 standard normal r.v.s.
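
The kind of integral involved looks like this (a hedged sketch of my own, not Maple's internal code): the mean of the i-th order statistic of n i.i.d. standard normal r.v.s. The covariances need analogous double integrals.

OrderStatMean:= proc(i::posint, n::posint)
local x;
    n*binomial(n-1, i-1)*evalf(Int(
        x*exp(-x^2/2)/sqrt(2*Pi)            # standard normal pdf
         *((1 + erf(x/sqrt(2)))/2)^(i-1)    # CDF^(i-1)
         *((1 - erf(x/sqrt(2)))/2)^(n-i),   # (1-CDF)^(n-i)
        x= -infinity..infinity))
end proc:

OrderStatMean(80, 80);  # mean of the largest of the 80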

@Preben Alsholm Is there some reason (that I didn't see) that you suspect that the OP is using Maple 12? 

@jediknight The nonreal value that you got, 0.5730486817*I, is in fact the function's value at one of the roots of its derivative, but, as I said, it makes no sense to me that maximize would return it. So, let's assume that there's a bug in maximize in your version of Maple. There's an easy alternative: the Calculus I method of evaluating the function at the (real) roots (i.e., zeros) of its derivative and taking the largest of those values:

restart:
GainQ1:= 8*wx^2*(m-1)/(Pi^2*sqrt((m*wx^2-1)^2+Q1^2*wx^2*(wx^2-1)^2*(m-1)^2));
params:= [m= 6.3, Q1= 1];
a:= eval(GainQ1, params);
Top_of_Q1:= max(eval~(a, wx=~ [fsolve](numer(diff(a, wx)))));

 

@jediknight I don't understand it either. A nonreal result from maximize makes no sense to me. What Maple version are you using? If I directly copy-and-paste your code into Maple 2018 and Maple 2022 (the two versions that I happen to have open at the moment), I get

restart:
Top_of_Q1= maximize(
    (((8*wx^2*(6.3-1)/(Pi^2*sqrt((6.3*wx^2-1)^2+1^2*wx^2*(wx^2-1)^2*(6.3-1)^2)))))
);
                    Top_of_Q1 = 0.8281966930

which seems correct.

@Wes If moy is a Matrix, Array, Vector, or other rtable (not a matrix, array, vector, or table) with a single element, it can be converted to that single element by

seq(moy)
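
For example (a hypothetical 1x1 Matrix):

moy:= Matrix([[3.7]]):   # a 1x1 Matrix (an rtable)
seq(moy);                # returns its single element, 3.7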

@Wes You wrote:

  • As you describe, there is not direct function to calculate p-value.

There is a command Probability that could be used like this:

T:= Statistics:-RandomVariable(StudentT(23)):
p_value:= Statistics:-Probability(abs(T) > 1.96);

                      0.0622210380605361

  • I have read that for bi-latteral test
    p-value = 2*P(ST >|St| | H0 true) = 2*(1-CDF(abs(St)).

For a symmetric distribution, 1 - CDF(T, abs(St)) = CDF(T, -abs(St)), so the expressions are (theoretically) equal. The form that I gave is computationally safer due to rounding error.
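
Reusing the T defined above and a made-up test-statistic value, here's a hedged numeric illustration:

St:= 5.9:                                     # hypothetical test-statistic value
evalf(2*Statistics:-CDF(T, -abs(St)));        # the form I gave: no cancellation
evalf(2*(1 - Statistics:-CDF(T, abs(St))));   # theoretically equal, but it subtracts nearly equal numbers, losing digits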

@Wes Good question. It helps to visualize the p-value as the area in the "tails" of the distribution. That might be the left tail, the right tail, or the sum of both tail areas; it depends on what type of inequality is in your "alternative hypothesis"--- x < a (left), x > a (right), or x <> a (both). Here, x is some statistical parameter (such as mean) that you're testing, and a is some fixed real number. (Let me know if you need more details on any of that.)

Suppose that T is the test distribution (such as Chi^2 or Student's t), which is not the underlying distribution of the raw data (such as Normal), and that t is the value of the test statistic (based on some computation, often elaborate, done with the raw data). Then, as sketched in the code after this list, the p-value for a

  • right-tailed test is 1 - CDF(T, t),
  • left-tailed test is CDF(T, t),
  • two-tailed test on a symmetric test distribution (such as Student's t) is
    2*CDF(T, -abs(t)),
  • two-tailed test on an asymmetric distribution (such as Chi^2 or F)... I need to defer to another expert's opinion on that; I don't know off the top of my head.
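
A hedged Maple sketch of the first three cases (the StudentT(23) distribution and the value t = 2.1 are just placeholders):

T:= Statistics:-RandomVariable(StudentT(23)):   # placeholder test distribution
t:= 2.1:                                        # placeholder test-statistic value
p_right:= evalf(1 - Statistics:-CDF(T, t));     # right-tailed
p_left:= evalf(Statistics:-CDF(T, t));          # left-tailed
p_two:= evalf(2*Statistics:-CDF(T, -abs(t)));   # two-tailed (symmetric T)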

@nm When a symbol appears as a left operand of ->, that is its declaration.
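
For example (a made-up procedure, just for illustration):

f:= (x, y) -> x^2 + y + a;   # x and y are parameters because they appear as left operands of ->; a stays global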

@Carl Love I wrote above:

  • It's fairly easy to customize the number of terms in the series to your value of Digits. The series is alternating, so Leibniz's Theorem on the error of truncated alternating series applies.

Actually, it's so easy to do this that I couldn't resist writing it up and posting it here. Using a procedure of only two lines to numerically evaluate the asymptotic series, we can get results that are several times more efficient and several digits more accurate than the default numeric evaluation of your function. 

In the code below, I've omitted your factor of 4.

restart:

f:= x-> x*ln(1+1/x);

proc (x) options operator, arrow; x*ln(1+1/x) end proc

The next line of code gives a formal presentation of the asymptotic series of f(x). This is not part of the computation. It's only here to serve as a guide to coding the procedure.

simplify(eval(convert(f(1/x), FormalPowerSeries), x= 1/x));

Sum((-x)^(-n)/(n+1), n = 0 .. infinity)

An alternative way of viewing the series, also just a guide:

value(eval(%, infinity= 6));

1-(1/2)/x+(1/3)/x^2-(1/4)/x^3+(1/5)/x^4-(1/6)/x^5+(1/7)/x^6

This is the procedure for the numeric evaluation of f(x). There are two main factors in its efficiency:

1. No explicit powers of -x need to be computed. Instead, each power is computed as the previous term's power divided by -x.

2. There's no need to explicitly strive for a particular accuracy. Instead, we just keep adding terms until they no longer make a difference.

This procedure requires 1D Input.

`evalf/f1`:= (x::float)->  local r, r1:= 1, p:= 1, n:= 1;
    if x>1 then do r:= r1 until (r1+= (p/= -x)/++n) = r else f(x) fi
:

f1:= x-> `if`(x::float, evalf('f1'(x)), f(x)):

Digits:= 16:
seq(f1(10.^k), k= 1..20);

.9531017980432486, .9950330853168082, .9995003330835331, .9999500033330833, .9999950000333330, .9999995000003333, .9999999500000033, .9999999950000000, .9999999995000000, .9999999999500000, .9999999999950000, .9999999999995000, .9999999999999500, .9999999999999950, .9999999999999995, 1, 1, 1, 1, 1

That procedure is many times more efficient than the default numeric evaluation:

X:= evalf(['rand'() $ 2^14]):

forget(evalf); gc(); CodeTools:-Usage(f1~(X)):

memory used=21.25MiB, alloc change=0 bytes, cpu time=109.00ms, real time=107.00ms, gc time=0ns

forget(evalf); gc(); CodeTools:-Usage(f~(X)):

memory used=72.20MiB, alloc change=0 bytes, cpu time=469.00ms, real time=400.00ms, gc time=109.38ms

And it's also many times more accurate:

forget(evalf);

evalf[32](evalf[16]([f,f1](99.)) - evalf[32]([f,f1](99.)));

[-0.997717133689829526e-14, 0.2282866310170378e-16]

 

Download AsymptEvalf.mw

@Carl Love Here's an easy-to-understand and purely algebraic alternative:

subs(x= x-1, collect(subs(x= x+1, p), x));

The first-degree term expands automatically. To prevent that, you can do

subs(x= ``(x-1), collect(subs(x= x+1, p), x));
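
For instance, with a hypothetical p:

p:= x^3 - 2*x + 5:
subs(x= ``(x-1), collect(subs(x= x+1, p), x));   # displays as (x-1)^3 + 3*(x-1)^2 + (x-1) + 4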

 

@MathPrincess123 Your concept of long division is completely wrong. The first term of the quotient should be the quotient of the highest-degree terms, not the lowest-degree terms. Perhaps you are being confused (visually) by your sort order. Descending degrees is the usual order, just like ordinary integers (where the degrees are the exponents of 10).
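
A hedged illustration with made-up polynomials (not the ones from your worksheet):

p:= Q^3 + 2*Q + 5:   d:= Q^2 + 1:   # hypothetical dividend and divisor
quo(p, d, Q, 'r');                  # the quotient, Q; its leading term comes from Q^3/Q^2
r;                                  # the remainder, Q+5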

Also, it can be clearly seen that my program doesn't make any "change" to the polynomials other than replacing q with Q, because there are no fractional or negative exponents. The quotient is the same as it would be under any division process, whether by hand or computer. In this particular case, the quotient of the highest-degree terms is the degree-0 term of the quotient.

This does not need to be your "last question". Feel free to keep asking until you completely understand.
