1. Problem size: I forgot to include the tuple size as a factor in the proportionality estimates of the problem size p (both run time and memory consumption): p is proportional to t*binomial(n*binomial(n+d,d), t), where t is the tuple size, n is the number of 1st indices, and d is the degree. For the current (30,15) problem, we have evalf(15*binomial(30,15)) ~ 2.3*10^9. For the (60,30) problem, it's evalf(30*binomial(60,30)) ~ 3.5*10^18. So the (60,30) problem is more than a billion times larger than the (30,15) problem. Dividing the problem into products of 3 binomial(20, ...)s only helps a tiny amount because the largest such products, on the order of binomial(20,10)^3 ~ 6.3*10^15, are themselves still astronomically large.
Thus, I think that the (60,30) problem is totally infeasible.
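For concreteness, the two estimates above are easy to check numerically. This is just a Python analogue of the Maple evalf calls, using the simplified t*binomial(n,t) form that the quoted numbers come from:

```python
from math import comb

# Rough problem-size estimates, t * binomial(n, t),
# for (n, t) = (30, 15) and (60, 30).
small = 15 * comb(30, 15)   # the (30,15) problem
large = 30 * comb(60, 30)   # the (60,30) problem

print(f"{small:.2e}")              # ~2.3e9
print(f"{large:.2e}")              # ~3.5e18
print(f"ratio ~ {large/small:.2e}")  # well over a billion
```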
2. nterms and FullDeg: Okay, I totally understand it now, and I've modified the code accordingly.
3. Combining conditions: It's condition 4, not condition 5, that can be combined with condition 6; and I've modified the code accordingly.
1. Condition names:
condition 1 ==> AllI (as before)
condition 2 ==> FullDeg
condition 3 ==> removed
condition 4 ==> incorporated into FullDim (condition 6)
condition 5 ==> ValidJ
condition 6 ==> FullDim
2. The cut-off value for FullDeg is named FullDegJ, and it's computed from nterms in Init as you specified.
3. The singleton subsets of Is are now included in SubsetsI.
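Just to illustrate what "including the singletons" means here (a Python sketch, not the module's actual Maple code; the helper name is hypothetical):

```python
from itertools import chain, combinations

def nonempty_subsets(Is):
    """All nonempty subsets of Is, singletons included."""
    s = list(Is)
    return [set(c) for c in chain.from_iterable(
        combinations(s, k) for k in range(1, len(s) + 1))]

subs = nonempty_subsets([1, 2, 3])
# The singletons {1}, {2}, {3} now appear alongside the larger subsets.
```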
4. The module now exports a procedure named CondCheck that does what you asked for in an email.
5. Since the problem must first be "set up" before CondCheck can be run, there's now a truefalse module local named Setup.
6. Since Setup must start out false before any problem is run, there's a procedure ModuleInit that makes it so.
7. Since you might want to set up a problem without actually iterating through tuples just so that you can use CondCheck, I included a mechanism for that: Just specify a tuple size of 1. You'll still be able to call CondCheck with a tuple of any size.
8. The module can only be "set up" for one problem at a time. If you need to change that, let me know. It would require changing the module to an object module.
9. I improved the run time by a factor of 3 to 4 (based on very limited testing) through an extremely subtle manipulation of cache/remember tables nested inside other cache/remember tables. The precise details would only be suitable for a most-expert-level course in Maple programming, so I'll leave it at that; you can see them in the very short procedures JsbyI and CT.
10. Cache sizes are set based on the number of processors.
I'll post the new code in a new Answer because this subthread is getting long.