There are no such things in Maple (Ridge, LASSO, LARS, ...).
I've written my own code for Ridge regression (which is the simplest of these methods; just look at the formula in my first reply).
Simple Ridge regression algorithms generally proceed this way:
1. Define a range D of prior values for lambda.
2. Take a value of lambda from D.
3. For this value of lambda, compute the minimizer w(lambda) according to the formula I sent you.
4. Assess the "quality" of this lambda (see below).
5. Return to step 2.
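The loop above can be sketched in a few lines of NumPy (not Maple, just an illustration). It assumes the standard closed-form Ridge minimizer w(lambda) = (X'X + lambda*I)^(-1) X'y, which is presumably the formula from the first reply; the data and the range D here are made up for the example.

```python
import numpy as np

def ridge_minimizer(X, y, lam):
    # Closed-form Ridge solution: solve (X'X + lambda*I) w = X'y.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Toy data (step 0) and a prior range D of lambda values (step 1).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
D = np.logspace(-3, 2, 20)

# Steps 2-5: sweep D and compute w(lambda) for each candidate.
ws = {lam: ridge_minimizer(X, y, lam) for lam in D}
```

Larger lambda values shrink the coefficients toward zero, which is the whole point of the penalty.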
Assessing the "quality" of lambda:
This is usually done with a resampling method (often cross-validation or leave-k-out), all of which take the following form:
- Split, randomly or not, your data set into two disjoint subsets (let's say L and T).
- Using subset L, compute w(lambda) for a given value of lambda (the algorithm above).
- Compute the prediction error w(lambda) gives on subset T.
This splitting procedure is to be executed a large number of times to reduce the splitting effect.
(Note that the w's you get at step 2 are all different, and so are the prediction errors you get: but each is a realization of the prediction error associated with the particular value of lambda you used.)
So, to be clear, if you take 100 lambda values within the domain D and assess the prediction error through 100 (L, T)-splittings, you have to run 10^4 computations.
The best lambda is the one which minimizes some criterion, for instance the mean of all the replicates of the prediction error.
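Putting the pieces together, the whole selection could look like this (a NumPy sketch under the same assumptions as above: closed-form Ridge fits, random splits, and the mean of the replicated errors as the criterion; with 100 lambdas and 100 splits this is exactly the 10^4 fits mentioned above):

```python
import numpy as np

def best_lambda(X, y, D, n_splits=100, test_frac=0.3, seed=0):
    # For each lambda in D, replicate the (L, T)-splitting n_splits
    # times, average the test MSE, and return the lambda minimizing
    # that mean (len(D) * n_splits Ridge fits in total).
    rng = np.random.default_rng(seed)
    n, p = X.shape
    n_test = int(test_frac * n)
    mean_err = {}
    for lam in D:
        errs = []
        for _ in range(n_splits):
            idx = rng.permutation(n)
            T, L = idx[:n_test], idx[n_test:]
            w = np.linalg.solve(X[L].T @ X[L] + lam * np.eye(p),
                                X[L].T @ y[L])
            errs.append(np.mean((y[T] - X[T] @ w) ** 2))
        mean_err[lam] = float(np.mean(errs))
    return min(mean_err, key=mean_err.get), mean_err
```

Other criteria (median error, mean plus one standard deviation, ...) can be swapped in by changing the last two lines.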
Let me know if you need more help.
(I'm going to sleep right now).