@tomleslie Thank you very much!
This looks pretty much like the goal. I like your method of changing the '-' / "&ndash;" characters.
This reflects my lack of Maple experience and limited use of the -> arrow operator and the ~ tilde operator on the next line. I'll work through it in the morning in slow time; any hints would be helpful.
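For my own notes, here is my current understanding of those two operators (a sketch; the function f and the list are just made-up examples, not your code):

```
f := x -> x^2;     # arrow operator: defines the function f(x) = x^2
f(3);              # returns 9
f~([1, 2, 3]);     # tilde applies f elementwise over the list: [1, 4, 9]
```

Please correct me if that's not how you're using them here.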
Next, the output matrix is great. My goal will be to reduce the 166 columns, since a lot of that data is not necessary, but for now I like the approach; if the memory footprint is small, I'll just carry the excess data along.
Just reading through how you extracted the data: could you comment on what the seq function is doing within Matrix? I don't see a "from 1 to end" (or last-row) range as the seq index. Is the first argument, SD[1st row, all columns], telling it to start the scan at the first row?
I see/note the scan=columns option at the end (does that mean it reads across the columns within a row?).
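To check my reading of seq and scan, here is what I think each piece does in isolation (toy values, not your actual data):

```
seq(i^2, i = 1 .. 4);                        # produces the sequence 1, 4, 9, 16
Matrix(2, 2, [1, 2, 3, 4]);                  # default fill is row-by-row
Matrix(2, 2, [1, 2, 3, 4], scan = columns);  # fills column-by-column instead
```

If that's right, scan=columns controls the fill order of the flat data, not the read direction of the source.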
The NULL line is confusing to me.
Lastly, forgive my newness with the op command. In this case, is op([1,1], ...) referencing the first operand of the first operand of the expression? Is the first row "expression 1"?
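From the help pages, my guess (please confirm) is that for a Matrix the first operand is its dimensions rather than its first row:

```
M := Matrix(2, 3):
op(1, M);       # for a Matrix (rtable), operand 1 is the dimensions: 2, 3
op([1, 1], M);  # first element of that operand, i.e. the row count: 2
```

So op([1,1], SD) would be pulling out the number of rows of SD, not row 1 itself. Is that the intent?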
Back on memory: did you check the import time, and can the variable SD be checked for memory utilization? I presume this grabs a copy of the full table. I'm not sure how data sets of some 10,000 records will perform when imported like this. I did look at MATLAB's xlsread, since the goal I was attempting with Maple (the hard way) was to scan the data in place without loading it into memory as a variable.
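For the timing/memory question, I gather the import call itself can be wrapped in CodeTools:-Usage, which reports memory and time for the evaluation (the filename here is just a placeholder for my sheet):

```
with(CodeTools):
SD := Usage(ExcelTools:-Import("data.xlsx"));
# Usage prints the memory used, allocation change, and cpu/real time of the call
```

I'll try that on the full sheet tomorrow and report the numbers unless there's a better way to profile it.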