acer

32490 Reputation

29 Badges

20 years, 8 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Could you be more precise than just, "generalized symbolic answer"? What kind of terms would an acceptable answer have? What do you know about omega[1] and omega[2], relative to each other?

acer

@H-R The command that does this is LinearAlgebra:-GenerateMatrix.

But it takes a fast machine about 3-4 minutes for GenerateMatrix to process your system of 2500x2500 equations and variables. This is the wrong way to go. As I wrote before, it would be much better never to form the symbolic equations at all, if you need to deal with problems of this size. You seem insistent on ignoring this key point. You should instead write code that writes numeric coefficients directly into the lhs Matrix M and the rhs Vector b, to represent M.x=b.

Your problem has more serious issues. The numeric coefficients range from 10^1000 down to -10^1000, which does not fit inside the range of hardware double precision. Casting your data to the float[8] data type will result in many entries becoming infinity, and that cannot be solved using LU, QR, or SVD. Moving from hardware to software working precision makes it run far too long - as expected, I got no factorization even after hours and hours. So either your code which generates the system is just wrong, or your problem is very badly scaled and your code doesn't correct for that. This is a serious problem with your code/approach.
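As a minimal illustration of the overflow (not using your actual data): hardware doubles top out near 10^308, so a value like 10^1000 becomes infinity as soon as it is evaluated in that environment,

    evalhf(10^1000);

A datatype=float[8] Matrix stores its entries in that same hardware double range, so such entries cannot be stored faithfully.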

Another issue is that the rhs of the M.x=b representation, b, seems to be almost all zero. So the system may be inconsistent, or underspecified, or perhaps wrongly generated. It's hard to test how serious this is until the scaling problem in the entries of M is resolved.

Written well, an FEM calculation producing a 2500x2500 system should be able to compute from start to end in 2-20 seconds or so on a modern machine, even within Maple. It seems that you have at least three serious problems which will either make that speed impossible or make any solution unattainable.

@H-R In that case you should separate the unknowns from the coefficients, to produce a purely numeric Matrix M and a purely numeric rhs Vector b such that M.x=b would hold. Here x stands for the vector of unknowns in which your system is linear. For decent performance M must contain floats, not exact rationals, and preferably have datatype=float[8].

I suspect that it would have been much better never to have formed your symbolic system explicitly in the first place. Much better would have been for your code to have simply written purely float coefficient data into a datatype=float[8] Matrix M and rhs Vector b in the first place. Then pass those to LinearAlgebra:-LinearSolve. That can do a 2500x2500 linear system with a single rhs Vector in a few seconds if the datatype is right.
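As a rough sketch of that workflow (with random data standing in for actual FEM coefficients):

    n := 2500:
    M := LinearAlgebra:-RandomMatrix(n, n, generator=0.0..1.0,
                                     datatype=float[8]):
    b := LinearAlgebra:-RandomVector(n, generator=0.0..1.0,
                                     datatype=float[8]):
    x := LinearAlgebra:-LinearSolve(M, b):  # dispatches to fast compiled routines

In your code the entries of M and b would instead be written in-place as they are generated, but the key point is the same: the data is float[8] from the start, and no symbolic equations are ever formed.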

@H-R You will not be able to solve the symbolic system for your FEM problem to get a general solution (which you would then try to instantiate at numeric values to get a solution for a particular problem). It's the wrong way to try it. A workable way is to produce a procedure which accepts numeric values for a particular problem, generates a numeric Matrix, and then solves that.

What do you intend to do with the result? It will likely be enormous.

acer

The first assignment to parlist,

    [mu[p],seq(mu[p]+tau[p||j],j=3..K)]

does not depend on C and so could be pulled out of the vecC procedure. That procedure gets called C times, so it's a benefit to reduce what it does.

A few of the map calls in vecC can be rolled together. Each one creates temporary lists of expressions, and the fewer temporary expressions there are to garbage collect the better.

Some of the map calls could be done using only builtins and more than two arguments, rather than simple custom operators.
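A generic illustration of that idea (these are not your actual operators):

    map(x -> x + 1, [a, b, c]);   # custom operator
    map(`+`, [a, b, c], 1);       # builtin `+`, with 1 passed as an extra argument

Both return [a+1, b+1, c+1], but the second form avoids invoking an interpreted user-defined operator for each element.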

Right now vecC has a concatenation in eta[p||C] inside a mapped operator, and that gets evaluated C times. How about making etapC a local of vecC, assigning it that value right at the start of vecC, and using it instead?

Lastly, are you taking a powerset of K things? The way you have it set up it first creates a list of multiplicands and then takes a powerset, and then combines those as products. Could you not take the call to `powerset` out of vecC, have it act on the initial list [mu[p],seq(mu[p]+tau[p||j],j=3..K)] of exponent addends, and then pass that to a reworked vecC which turns its input into the final products?
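A sketch of that restructuring, using combinat:-powerset and assuming K and the indexed names from your code:

    addends := [mu[p], seq(mu[p]+tau[p||j], j=3..K)]:  # formed once, outside vecC
    S := combinat:-powerset(addends):                  # also done just once
    # a reworked vecC could then turn each subset into a product:
    products := map(s -> mul(t, t in s), S):
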

acer

@Jazen1 Are you supposed to use canned procedures from Maple which simply return the tangent and normal? Or are you supposed to show the steps of how it is done?

It may be that two square matrices can be so permuted only if they have the same Permanent. But is that sufficient as well as necessary? (proof might be in graph theoretic terms?)

And then there is generalization to nonsquare matrices to consider. Are your Matrices always square?

[edit: I think maybe this was a poor guess. I suspect that having the same permanent may be necessary but not sufficient.]

acer

@itsme I used a module in the proc I hit with Explore so that it could re-use the workspace Array for the many calls done while exploring. Even with the zeroing step on the Array, it often wins to avoid producing, for each call, fresh Arrays which must be memory-managed as temporary garbage. I could have made that re-usable workspace a global, but a module keeps the global namespace cleaner.

The compiled procedure could be parallelized with Threads:-Task, following here, since the iterated map can be computed across entries of segments of the Array in a concurrent fashion. That's true even if each thread needs a few more iterations for the first entry in its segment. (A likely improvement to the iterated-map proc would be to stop iterating early, when the change in value for a given entry becomes smaller than a tolerance. I believe that my last revision already used the previous entry's last value as the starting value for the next entry's iteration.) The splitting by task would involve each thread computing on a subrange of the entire n=1..max_n.

So one could parallelize the inner workings of the iterated-map code, acting inplace on an Array. To do this the Compiler:-Compile call would need to be passed the `threadsafe` option; otherwise the compiled call_external will be blocking. By blocking I mean that only one call to the compiled external function is performed at any given time. The ability to Compile to non-blocking call_externals is a new feature of Maple 18, in which the external runtime of the Compiler is itself thread-safe.

So that's all about internally multithreading that inplace iterated-mapping proc we'd been looking at. If you prefer to try to parallelize at a higher level, then note that the reusable local workspace Array in an appliable module won't work (as I had it in the Explore thing): a local of a module is accessed globally by Threads. It is possible to create "thread-locals" inside a module, but I consider that very advanced and have not tried it here. I'm talking here about parallelizing your unseen higher-level code which computes results for various values of the parameters (other than n). Without special coding, that kind of parallelizing would need to create a different result Array for each thread. The iterated-mapping proc would still need to be Compiled with option `threadsafe`.
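A rough sketch of the Task-based splitting (assuming `doseg` is the Compiled, threadsafe procedure which computes entries lo..hi of Array A inplace; the names here are hypothetical):

    split := proc(A::Array, lo::integer, hi::integer)
      if hi - lo < 1000 then
        doseg(A, lo, hi);                       # small enough: compute directly
      else
        Threads:-Task:-Continue(null,
          Task = [split, A, lo, iquo(lo+hi, 2)],
          Task = [split, A, iquo(lo+hi, 2)+1, hi]);
      end if;
    end proc:
    Threads:-Task:-Start(split, A, 1, max_n);

Each Task gets a disjoint subrange of n=1..max_n, so the threads never write to the same Array entries.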

It's hard to say more usefully pertinent things without knowing exactly what your full code is. It is often beneficial to highly optimize numeric code for serial computation, and quite often that can beat parallelizing under-optimized code. That's not always true, but it pays to consider both approaches.

@CakePie No, it would not make sense to substitute lign5 into Matrix T.

lign5 is an equation involving symbolic names.

T is a Matrix built with coefficients of such symbolic equations. But lign1 does not appear explicitly anywhere inside T.

GenerateMatrix takes one or more equations and turns that into one or more Matrix rows.

So you can replace a row of Matrix T with a new row produced by GenerateMatrix. (That's what my answer did.)

But it doesn't make sense to want to substitute equation lign1 by a new equation lign5 in T, since T doesn't contain such symbolic equations.

I hope that helps some.

By the way, how does lign5 arise? Is it the result of some command or other computation? Or are you typing it in fresh? If it is computed, then what does it mean? (Is it supposed to be the result of taking some linear combination of the other rows?)

What database do you mean?

acer

@Markiyan Hirnyk Having already created Matrix T, it is shorter to do,

T[1,..] := GenerateMatrix([lign5], [x,y,z], augmented):

But of course it's not necessary to use GenerateMatrix again, if one is simply typing in the definition of lign5. In such a case it would be simpler still to just enter,

T[1,..]:=<1|1|1|6>:

@H-R I was working on it... but am not satisfied.

@acer In Maple 15 there is no ColorTools:-RGB24ToHex, hence the rgb2hex procedure.

I suppose that for a list `L` of three nonnegints between 0 and 255 one could also do,

sprintf("#%06X",L[1]*65536+L[2]*256+L[3])

 

Only when n is very small (i.e., n=1,2,3, say) are more than a few iterates needed to get an accuracy change that is visible in a plot. And even then it seems to be detectable only for certain ranges of x_l and C_a.

The Explore call at the end of the attached worksheet (which requires Maple 18) is set up so that moving just the `max_iter` slider (with all other parameters at their specified initial values) illustrates the need for more iterates at very small n.

itsme1.mw

This computes all n from 1 to 10^5 in about 3 ms (milliseconds) on my machine with the Compiler working, and in about 50 ms if it has to fall back to evalhf instead. As mentioned before, it might also be multithreaded by splitting the action across Array A with Threads:-Task (since it does not use the previous n's solution as the initial value). But maybe it's fast enough now?
