public class SVR
extends java.lang.Object
| Constructor and Description |
|---|
| SVR() |
| Modifier and Type | Method and Description |
|---|---|
| static Regression<double[]> | fit(double[][] x, double[] y, double eps, double C, double tol)<br>Fits a linear epsilon-SVR. |
| static Regression<int[]> | fit(int[][] x, double[] y, int p, double eps, double C, double tol)<br>Fits a linear epsilon-SVR of binary sparse data. |
| static Regression<smile.util.SparseArray> | fit(smile.util.SparseArray[] x, double[] y, int p, double eps, double C, double tol)<br>Fits a linear epsilon-SVR of sparse data. |
| static <T> KernelMachine<T> | fit(T[] x, double[] y, smile.math.kernel.MercerKernel<T> kernel, double eps, double C, double tol)<br>Fits an epsilon-SVR. |
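As a quick usage sketch of the dense overload above (the training data, parameter values, and class name `SvrExample` here are illustrative, not recommended defaults):

```java
import smile.regression.Regression;
import smile.regression.SVR;

public class SvrExample {
    public static void main(String[] args) {
        // Toy 1-D training data: y = 2x + 1, noise-free (illustrative only).
        double[][] x = new double[20][1];
        double[] y = new double[20];
        for (int i = 0; i < 20; i++) {
            x[i][0] = i;
            y[i] = 2.0 * i + 1.0;
        }

        // eps = 0.1: no penalty for predictions within 0.1 of the target.
        // C = 1.0: soft margin penalty. tol = 1E-3: convergence tolerance.
        Regression<double[]> model = SVR.fit(x, y, 0.1, 1.0, 1E-3);

        // Predict on a point from the training range.
        double yhat = model.predict(new double[]{10.0});
        System.out.println(yhat);
    }
}
```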
public static Regression<double[]> fit(double[][] x, double[] y, double eps, double C, double tol)

Parameters:
x - training samples.
y - response variable.
eps - threshold parameter. There is no penalty associated with samples which are predicted within distance epsilon from the actual value. Decreasing epsilon forces closer fitting to the calibration/training data.
C - the soft margin penalty parameter.
tol - the tolerance of convergence test.

public static Regression<int[]> fit(int[][] x, double[] y, int p, double eps, double C, double tol)

Parameters:
x - training samples.
y - response variable.
eps - threshold parameter. There is no penalty associated with samples which are predicted within distance epsilon from the actual value. Decreasing epsilon forces closer fitting to the calibration/training data.
p - the dimension of input vector.
C - the soft margin penalty parameter.
tol - the tolerance of convergence test.

public static Regression<smile.util.SparseArray> fit(smile.util.SparseArray[] x, double[] y, int p, double eps, double C, double tol)

Parameters:
x - training samples.
y - response variable.
eps - threshold parameter. There is no penalty associated with samples which are predicted within distance epsilon from the actual value. Decreasing epsilon forces closer fitting to the calibration/training data.
p - the dimension of input vector.
C - the soft margin penalty parameter.
tol - the tolerance of convergence test.

public static <T> KernelMachine<T> fit(T[] x, double[] y, smile.math.kernel.MercerKernel<T> kernel, double eps, double C, double tol)

Parameters:
x - training samples.
y - response variable.
eps - threshold parameter. There is no penalty associated with samples which are predicted within distance epsilon from the actual value. Decreasing epsilon forces closer fitting to the calibration/training data.
kernel - the kernel function.
C - the soft margin penalty parameter.
tol - the tolerance of convergence test.
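The kernelized overload accepts any MercerKernel. A minimal sketch using Smile's Gaussian (RBF) kernel follows; note that this page does not state KernelMachine's package or prediction method, so the `smile.base.svm.KernelMachine` import and the `score` call below are assumptions based on Smile 2.x and should be checked against your Smile version:

```java
import smile.base.svm.KernelMachine;   // package path is an assumption; verify for your version
import smile.math.kernel.GaussianKernel;
import smile.regression.SVR;

public class KernelSvrExample {
    public static void main(String[] args) {
        // Toy 1-D nonlinear data: y = x^2 (illustrative only).
        double[][] x = new double[20][1];
        double[] y = new double[20];
        for (int i = 0; i < 20; i++) {
            x[i][0] = i / 10.0;
            y[i] = x[i][0] * x[i][0];
        }

        // Gaussian kernel with bandwidth 1.0; eps, C, tol as in the dense example.
        GaussianKernel kernel = new GaussianKernel(1.0);
        KernelMachine<double[]> model = SVR.fit(x, y, kernel, 0.1, 1.0, 1E-3);

        // score(x) evaluates the kernel expansion sum_i w_i k(x, x_i) + b;
        // the method name follows Smile 2.x and may differ in other versions.
        double yhat = model.score(new double[]{0.5});
        System.out.println(yhat);
    }
}
```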