Consider the following code, which just generates two "identical" matrices, differing only in their requested storage type, and then does some simple manipulations.
# Define matrix using sparse storage
# (the fill function (i,j) -> i + j is illustrative; any initializer shows the effect)
testM := Matrix( 40, 40, (i,j) -> i + j, storage = sparse );
# Define identical(?) matrix with rectangular storage
nm := Matrix( 40, 40, (i,j) -> i + j, storage = rectangular );
# Define procedure to return some matrix properties
matData := proc( myMat::Matrix )
    return op( 3, myMat ),           # check storage type
           myMat[5, 1..-1],          # get 5-th row
           add( myMat[5, 1..-1] );   # add elements in 5-th row
end proc;
# Get properties of the two matrices - should be identical
# except for the storage type; in particular, compare the
# result of adding the elements in the 5-th row
matData( testM );
matData( nm );
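For contrast, here is the analogous experiment in Python with NumPy/SciPy, purely as a sanity check in another system (the variable names and the fill rule are mine, not from the Maple code above). There, summing the 5-th row gives the same answer whether the matrix sits in dense or sparse storage:

```python
import numpy as np
from scipy import sparse

# Build the "same" 40x40 matrix twice: once as a dense array,
# once in sparse (CSR) storage.  The fill rule i + j is arbitrary.
dense = np.fromfunction(lambda i, j: i + j, (40, 40))
sp = sparse.csr_matrix(dense)

# Extract the 5th row (index 4) and sum its entries in each representation.
dense_sum = dense[4, :].sum()
sparse_sum = sp[4, :].sum()

# Here the storage format does not change the answer.
print(dense_sum, sparse_sum)   # both 940.0
```

This is the behaviour one would expect from the Maple code as well: storage is supposed to be an implementation detail, invisible to element-wise operations.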
The matData procedure ought to produce the same results for the two matrices, with the exception of the storage type. But the 'add()' command does not. The 'myMat[5, 1..-1]' command produces the same vector (the 5-th row) for both matrices - but stick an add() wrapper around it and all hell breaks loose.
Is this a bug or am I missing something?
Suggestions such as avoiding sparse data storage are not really acceptable: the above is a much-simplified version of my original problem, where I was using graph theory to play with a "cost function" and (with G a graph) the WeightMatrix() command returned a sparse-storage matrix - and I didn't notice. There appears to be no option on the WeightMatrix() command to control the storage type of the returned matrix. The result was that all subsequent code based on slicing/dicing and (particularly) 'add()'ing sub-blocks of this weight matrix fell apart.
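For what it's worth, this trap is not unique to Maple. SciPy's graph routines also hand back sparse matrices, and the usual defensive fix is one explicit conversion at the boundary, so that everything downstream works on a single storage type. A Python sketch of that pattern (not Maple - in Maple the analogue would be rebuilding the result as a Matrix with explicit rectangular storage; the example graph and weights here are made up):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.csgraph import minimum_spanning_tree

# A small weighted graph, given as a dense adjacency matrix
# (upper triangle only; the weights are arbitrary).
graph = np.array([[0, 8, 0, 3],
                  [0, 0, 2, 5],
                  [0, 0, 0, 6],
                  [0, 0, 0, 0]])

# Like WeightMatrix, this returns a SPARSE matrix -- easy to miss
# if the caller expects an ordinary dense array.
mst = minimum_spanning_tree(graph)
assert sparse.issparse(mst)

# Convert once, at the boundary.  All subsequent slicing and
# add()-style reductions then behave uniformly.
w = mst.toarray()
print(w.sum())   # total weight of the spanning tree
```

The design point is simply: if a library will not let you control the storage of its return value, normalise it yourself immediately on receipt, rather than defending against it at every later use.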
Don't get me wrong: I can sort of accept that the weight matrix of a minimal spanning tree would (hopefully) be mainly zeros, so sparse storage might be a good default option. But I don't see why the results of a command such as add() should vary depending on the internal storage used for the matrix, particularly when I have no control over the storage type being adopted.