Carl Love

28035 Reputation

25 Badges

12 years, 322 days
Himself
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

Certainly the problem as stated cannot produce anything like that graph.

The graph looks possibly like the solution of a pair of nonlinear differential equations x'(t) = f(x,y,t), y'(t) = g(x,y,t). Could you be missing something like that?

Oh, and I agree with Mariusz. It's impolite to not give Thumbs Up or Best Answer to good/best Answers.

@acer My guess is that in order to get close to Matlab times for this, you'll need to implement a QR factorization that can be updated based on the deletion or addition of single columns from the original matrix. There are about 3 pages on this in Matrix Computations by Golub and Van Loan (let me know if you don't have access to a copy). In this particular algorithm, columns are always deleted on the left and added on the right (so the matrix's columns behave as if queued). Perhaps it is possible to exploit that special positioning of the deletions and insertions to simplify the QR updating (and thus get one-up on Matlab). I believe that QR is used in this algorithm simply because it is the factorization of dense unstructured matrices that is most amenable to being updated. 

In addition to that, you'll need to correct the OP's constant creation and destruction of rtables. If you've read the code, this is no doubt obvious to you, and I simply mention it for the benefit of other readers, especially the OP.
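For concreteness, here is the append-a-column half of the update that I mean, sketched in Python/NumPy rather than Maple since it's plain linear algebra. (Deleting the leftmost column additionally needs a sweep of Givens rotations, which I don't show; the function name and test data are mine, just for illustration.)

```python
import numpy as np

def qr_append_column(Q, R, a):
    """Update a reduced QR factorization A = Q @ R when a new column a is
    appended on the right, in O(m*n) work instead of a full O(m*n^2)
    refactorization. Assumes the augmented matrix has full column rank."""
    w = Q.T @ a                 # coefficients of a in the span of Q's columns
    r = a - Q @ w               # component of a orthogonal to that span
    rho = np.linalg.norm(r)
    Q_new = np.column_stack([Q, r / rho])
    top = np.hstack([R, w[:, None]])
    bottom = np.hstack([np.zeros(R.shape[1]), rho])[None, :]
    R_new = np.vstack([top, bottom])
    return Q_new, R_new

# Check against a from-scratch factorization:
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
a = rng.standard_normal(6)
Q, R = np.linalg.qr(A)
Q2, R2 = qr_append_column(Q, R, a)
A2 = np.column_stack([A, a])
assert np.allclose(Q2 @ R2, A2)
assert np.allclose(Q2.T @ Q2, np.eye(4))
```

The column-delete update is the more delicate half, but the two together are what would let the factorization be carried across iterations instead of rebuilt.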

@kfli The embedded assignment is semantically equivalent to the non-embedded assignment, so I guess that my reasons are aesthetic, but also with a practical component. About 30 years ago, there was a great era of development of "software metrics": ways of measuring how long it takes (or how costly it is) to write and maintain code. (I am recalling this all from distant memory; I'll need to check some details, if possible.) One striking thing that I read was that the overall coding time was more closely correlated with the number of lines of code than with the language, the complexity of the code, or the number of characters of code.

Regardless of whether that's actually true for most programmers, it's certainly true for me. So, I maintain a consistent and rigorous indenting style, I use a reasonable but not excessive amount of white space (my white space amount (minus the indents and right sides) is about equivalent to standard English text), and I strive to reduce the number of lines as much as possible. I put lengthy comments at the top of a procedure. I strive to get procedures to fit onto one screen. I can hold the entire procedure in my mind at once if it can fit on one screen.

@acer 

The algorithm under discussion is a method of numerically solving a system of nonlinear algebraic equations of the form F(X) = X, where X is a vector of unknowns. (Please excuse me if I'm stating something that you already know.) I don't know yet whether it's restricted to real numbers, continuous functions, Lipschitz continuous functions, etc. If one needs to solve the more-usual system G(X) = 0, I'm sure that you can see how to modify it so that it fits the format of the present algorithm.

Since it performs essentially the same job as fsolve or DirectSearch:-SolveEquations, those might be good things to compare it to.
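As a toy illustration of the F(X) = X format and the G(X) = 0 conversion, here's a Python sketch with the scalar equation x = cos(x) standing in for a real system (all names here are mine, for illustration only):

```python
import math

def fixed_point(F, x0, tol=1e-12, maxit=1000):
    """Naive fixed-point iteration for F(x) = x; converges when F is a
    contraction near the fixed point."""
    x = x0
    for _ in range(maxit):
        x_new = F(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

# Solving G(x) = 0 by rewriting it as F(x) = x with F(x) = x + G(x):
G = lambda x: math.cos(x) - x      # root is the Dottie number, ~0.739085
F = lambda x: x + G(x)             # i.e., F(x) = cos(x)
x = fixed_point(F, 1.0)
assert abs(math.cos(x) - x) < 1e-10
```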

@kfli If you want to solve df.gamma = fval by QR decomposition, you can do something like

gamma:= LinearAlgebra:-LeastSquares([((Q,R):= LinearAlgebra:-QRDecomposition(df))], fval);

with perhaps some adjustments if you're willing to store Q and R together in NAG format. Since the above will work even if df has only one column, I don't think that you should have special code for that case, at least not at this point in the development. That alone eliminates a great many lines of code.
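In NumPy terms (with random stand-in data of my own invention), that least-squares-via-QR computation looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
df = rng.standard_normal((5, 2))       # stand-in for the Jacobian-like matrix
fval = rng.standard_normal(5)

Q, R = np.linalg.qr(df)                # reduced QR, full-column-rank df assumed
gamma = np.linalg.solve(R, Q.T @ fval) # solve R.gamma = Q^T.fval

# Agrees with a direct least-squares solve:
assert np.allclose(gamma, np.linalg.lstsq(df, fval, rcond=None)[0])
```

Note that nothing here cares whether df has one column or many, which is the point about not special-casing.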

@Ida2018 You are using the Natural units environment. If, instead, you use the Simple environment, then variables can "have" units, so to speak, and your original solve command should work as is, and the result will be expressed with units attached.

@Ida2018 I am putting my next Reply under my Answer, where it belongs.

@kfli 

MTM[ldivide](R, df) is equivalent to df /~ R (for scalar R and Vector df). All of Maple's elementwise operators (those that end with ~) have the same precedence, associativity, and commutativity as the corresponding operator without the ~. So df /~ R means "divide every element of df by R" (for scalar R and Vector df), which is clearly what you want.

So, the three lines can be replaced by

R:= norm(df, 2);
Q:= df /~ R;  #Normalize df.
gamma:= Q^+ . fval /~ R; #Note Maple's transpose operator.

In Maple 2018, the three lines can be (and IMO should be) replaced by the single line

gamma:= (Q:= df /~ (R:= norm(df, 2)))^+ . fval /~ R;

Please tell me whether you're using Maple 2018 so that I don't waste more time distinguishing between versions.

What you have labelled "Alt 1" and "Alt 3" unquestionably give the same results (modulo possibly a little round-off error, which arises only with floating-point computation). Thus, we need to discuss whatever technique you're using to decide whether things are equal or unequal, because you're doing it wrong.
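For the single-column case, that one-liner reduces to the scalar least-squares formula Transpose(df).fval/||df||^2, which is easy to check numerically. A Python/NumPy sketch with random stand-in data (@ plays the role of Maple's `.`):

```python
import numpy as np

rng = np.random.default_rng(2)
df = rng.standard_normal(5)            # the single-column case
fval = rng.standard_normal(5)

R = np.linalg.norm(df)                 # norm(df, 2)
Q = df / R                             # df /~ R, i.e., normalize df
gamma = (Q @ fval) / R                 # Q^+ . fval /~ R

# Same as Transpose(df).fval/||df||^2, the 1-column least-squares solution:
assert np.isclose(gamma, (df @ fval) / R**2)
assert np.isclose(gamma, np.linalg.lstsq(df[:, None], fval, rcond=None)[0][0])
```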


@Ida2018 Sorry, I should've been more precise. I meant replace O__L by O__L*Unit(kN) in the equation. The other O__L, which is the second argument to solve, should remain as is.

@Joe Riel I think that time would be better spent by first doing obvious optimizations (obvious to you and me, that is) than in analyzing a profile or dump of a program that is obviously so raw. But first, I want to make sure that it works.

@kfli Read? Ha! I found it out by experimenting with your code for about 90 minutes. You can confirm what I said by doing this little experiment:

A:= Matrix():
whattype(A);

                             Matrix
print(whattype(A));
                      proc()  ...  end;

This is the same difference (and for exactly the same reason) as can be observed in

P:= proc() end proc:
P;
print(P);

That difference is described on help page ?last_name_eval.

Your procedure is great. I see a great many ways that it can be improved, and I look forward to working on that. But before I work on anything other than stylistic issues, I want to make sure that it works. Why spend time optimizing something that doesn't even work? You'd be the best person to say whether it works.

This little code fragment seems problematic to me:

R := norm(df,2);
Q := R/~df;
gamma := R/~Transpose(Q).fval;

Please tell me in purely algebraic terms (by which I mean without making any reference to the rest of the algorithm or to what df or fval represent) what you think it does. All that it actually does is gamma:= Transpose(df).fval (possibly the round-off errors are different).
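If it helps, here is a numeric check of that claim, translated to Python/NumPy (plain elementwise / standing in for /~, with small made-up vectors so the arithmetic is easy to follow by hand):

```python
import numpy as np

df = np.array([3.0, 4.0])
fval = np.array([1.0, 2.0])

R = np.linalg.norm(df)        # R = 5
Q = R / df                    # R /~ df: the scalar divided by each element
# `.` and `/~` have equal precedence and group left-to-right, so the line
# gamma := R/~Transpose(Q).fval parses as (R /~ Q^+) . fval,
# and R/(R/df) collapses back to df:
gamma = (R / Q) @ fval
assert np.isclose(gamma, df @ fval)   # 11.0, i.e., Transpose(df).fval

# What was presumably intended (Q := df/~R; gamma := Q^+.fval/~R) differs:
intended = ((df / R) @ fval) / R
assert np.isclose(intended, 0.44)     # = Transpose(df).fval / ||df||^2
```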

@waseem I accept your apology. Thank you.

Thank you for the new paper. Your code shows that you have obviously learned a tremendous amount from reading my MaplePrimes posts. It's quite impressive.

Do you have any information at all that could help me understand equations 23 - 31? Obviously, the authors (Gupta, et al) have written some code. Why don't they supply it? Given any code in any language, I could translate it to Maple and improve it tremendously.

The notions of error control mentioned in the Gupta paper are ridiculous. Basically, they have said something akin to "For one set of parameter values, we got convergence to 4 decimal places using 160 nodes; therefore, we use 160 nodes for all sets of parameter values." and "For one set of parameter values, we got convergence to 4 decimal places by setting infinity to 8; therefore, we use infinity = 8 for all sets of parameter values." No self-respecting mathematician would say those things.

The Maple BVP solver has very fine error control. If it detects a failure to converge, it will report that. If you ask for 4 digits and it thinks that it can only get 3, it will report that. So, if it produces a numeric result, I trust it. At this point, I have matched the skin friction numbers in the first column of Table 3 to 1-2 decimal places. Thus, I assume that the numbers reported by Gupta, et al, in that table are not accurate to 4 decimal places and the Maple numbers are correct.

I am interested in coding their FEM, but I can't understand Eqs 23-31. When I code it, I will dynamically adjust the mesh size on every run and make sure that it converges.
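By "dynamically adjust the mesh size" I mean the usual doubling-until-converged scheme, which I sketch here in Python with trapezoid-rule quadrature standing in for the FEM solve (and the paper's interval 0..8); all names and tolerances are mine, for illustration:

```python
import math

def integrand(x):
    return math.exp(-x * x)        # stand-in for one solve's output

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

def refine_until_converged(f, a, b, tol=1e-6, n=10, max_doublings=20):
    """Double the node count until two successive answers agree to tol,
    instead of fixing 160 nodes for every parameter set."""
    prev = trapezoid(f, a, b, n)
    for _ in range(max_doublings):
        n *= 2
        cur = trapezoid(f, a, b, n)
        if abs(cur - prev) < tol:
            return cur, n
        prev = cur
    raise RuntimeError("no convergence")

val, nodes = refine_until_converged(integrand, 0.0, 8.0)
# int(exp(-x^2), x = 0..infinity) = sqrt(Pi)/2, and the 8..infinity tail
# is negligible, so the converged value should match that closely:
assert abs(val - math.sqrt(math.pi) / 2) < 1e-4
```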

@666 basha You say that "the problem can be solved very easily using RK method dsolve." Well, I'd certainly like to see that. Please show me a solution technique (of either the corrected or uncorrected system) by whatever method you choose to use. Having any solution (for comparison) will help with coding and testing the authors' FEM method.

Before the problem can be solved by any method, it must be input correctly into that method. At this point, I am absolutely certain that the system as typed into the paper is incorrect (in both places that it appears in the paper), and I believe that Preben is also certain of that. I think that the only error is that the sign of lambda needs to be changed, but I am not certain of that.

If I had plunged ahead with coding the paper's FEM, and I'd assumed that the system was correct as published, then there would have been enormous and lengthy frustration trying to find a non-existent error in the FEM code. Please tell me: Can you understand such a fundamental principle of engineering? 

If you have some understanding of equations 23 - 31, please share that. I am still trying to understand them. To begin with, how are the mesh nodes eta[e] (which appear as the limits of integration) chosen? Are they simply evenly spaced over the eta interval (0 to 8 in this case)? 

@Ritti For real x, it is obvious that abs(cosh(x)) = cosh(x). If this is not obvious to you, then you should take a step back from the problem that you're currently working on and spend a few minutes reviewing the definition of cosh. Any decent calculus textbook will help, as will the Wikipedia article "Hyperbolic function".

@vv By "sequential order", the OP is referring to tree depth within the expression, not tree breadth. In other words, they want the innermost function (or perhaps functions) with the desired type of arguments, not the rightmost.
