You didn't say how accurate you want the results, and note that the accuracy will be limited by the closeness of the interpolated approximation of the surface to the "true" surface. Hence it probably doesn't make much sense to ask for numeric quadrature results with a tolerance of less than say 1e-5 (and perhaps even 1e-3).

Next, change those lowercase `int` calls to `Int` (if only to make it appear deliberate -- too many people get hit by an unwanted, failing symbolic-integration attempt before purely numeric quadrature kicks in).
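For example (the integrand `f` here is just illustrative):

```maple
f := x -> exp(-x^2)*sin(x):
# The inert Int goes straight to numeric quadrature under evalf:
evalf(Int(f(x), x = 0 .. 1));
# whereas lowercase int(f(x), x = 0 .. 1) would attempt symbolic
# integration first, which can be slow before any numeric fallback.
```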

Change the arrow-operator **Vapprox** to be defined with **proc(rho) ... end proc** instead, so that you can make its first line:

```maple
if not type(rho, numeric) then return ('procname')(args) end if;
```

As it stands now, your calls like Vapprox(SSpline(t,r)) are evaluating prematurely, so that you get the high-degree "polynomial" (in unevaluated SSpline calls). That means SSpline gets called multiple times (with the same arguments) each time Vapprox is called. You could also add **option remember** (or **option remember, system**) to SSpline as a way to memoize. But do change Vapprox to return unevaluated unless it gets numeric arguments. [edited] Or you could set up the evalf(Int(...)) calls to use so-called operator form, rather than have the integrand be an expression. But this way seemed simplest. This brings about (at least?) a ten-fold speedup for me.
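The pattern, sketched in full (the body is a placeholder for whatever Vapprox currently computes):

```maple
Vapprox := proc(rho)
    # Return unevaluated unless the argument is an actual number,
    # so Vapprox(SSpline(t, r)) inside Int(...) stays inert until
    # the quadrature routine supplies numeric values for t and r.
    if not type(rho, numeric) then
        return ('procname')(args);
    end if;
    # ... the original numeric computation in terms of rho goes here ...
end proc:
```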

Try adding **epsilon=1e-3** to each of your evalf(Int(...)) calls.
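For instance (the range `a .. b` here stands in for your actual integration range):

```maple
# Request only a 1e-3 quadrature tolerance, rather than the default,
# since the interpolated surface itself limits the attainable accuracy.
evalf(Int(Vapprox(SSpline(t, r)), t = a .. b, epsilon = 1e-3));
```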

There is more to say, but I must run.

[edited]

Running 64-bit Maple 18.02 for Linux on a fast (Skylake) Intel i5, I obtained yet more speed by toning down the evalf(Int(...)) accuracy tolerance requests. But I refrained from using your Digits=7: in examples like this, where one computes evalf(Int(...)) of complicated procedures (which have their own accuracy issues to deal with), I have generally found it best not to downgrade working precision but rather just to moderate the evalf(Int(...)) epsilon/accuracy goal. You may of course still experiment, but it's tricky to ensure that the numeric differentiation will produce accurate results -- you don't want to take a rough numeric quadrature approximation of a much rougher numeric differentiation approximation of a numeric interpolation approximation of something and lose all verisimilitude.

SimplifiedIntegration_1.mw

If this is not fast enough (or if you want to try and squeeze more accuracy out -- keeping in mind that the interpolation order and method will likely have a big effect on how accurately your interpolated surface itself matches the "true" surface...) then you could probably improve on the numerical differentiation. The fdiff command basically does numeric differentiation using a difference scheme. You could build your own scheme and procedure to do that. I mention it because you could then call ArrayInterpolation to compute, very quickly and in one shot, all the interpolated points required by that 1D numerical differentiation scheme, by passing in a float[8] Vector. (Imagine you are using a 7-point 1D scheme... I imagine you get my gist.) That would mean dtSSpline would only have to make a single call to ArrayInterpolation for each numerical derivative evaluation. Off the cuff I don't know how much benefit there'd be to eliminating that overhead. Of course you would not use the custom module exactly as I showed (which only gave modest savings when computing the interpolation for single t-r pairs). You'd have to experiment first with the original rougher call to ArrayInterpolation to set it up, and then perhaps optimize with a module that handles multiple points gracefully and efficiently.
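A rough sketch of the idea, assuming a hypothetical helper SSplineAll that wraps a single ArrayInterpolation call over a float[8] Vector of sample points (the 5-point central-difference stencil and step size h are illustrative only):

```maple
dtSSpline := proc(t, r, h := 1.0e-4)
    local pts, vals;
    # Return unevaluated for non-numeric input, as with Vapprox.
    if not (type(t, numeric) and type(r, numeric)) then
        return ('procname')(args);
    end if;
    # All sample points for a 5-point central-difference stencil,
    # packed so a single ArrayInterpolation call can handle them.
    pts := Vector([t - 2*h, t - h, t + h, t + 2*h], datatype = float[8]);
    # SSplineAll (hypothetical) interpolates the surface at every
    # point in pts, for fixed r, in one ArrayInterpolation call.
    vals := SSplineAll(pts, r);
    # Standard 5-point central-difference formula for d/dt.
    (vals[1] - 8*vals[2] + 8*vals[3] - vals[4])/(12*h);
end proc:
```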

acer