Metrics
A metric is a function with arguments actual, predicted and, optionally, w (weights)
that returns a non-negative real value.
The following metrics, taken from the package MetricsWeighted, are included in modeltuner:
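For illustration, a user-defined metric following this convention could look as below. This is a minimal sketch; scaled_mae is a hypothetical name and not part of modeltuner or MetricsWeighted. Presumably such a function can be supplied to performance() in a named list, as in the last example at the bottom of this page.

scaled_mae <- function(actual, predicted, w = NULL, ...) {
  # weighted mean absolute error, scaled by the weighted mean absolute value of actual
  if (is.null(w)) w <- rep(1, length(actual))
  weighted.mean(abs(actual - predicted), w) / weighted.mean(abs(actual), w)
}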
Usage
rmse(actual, predicted, w = NULL, ...)
mae(actual, predicted, w = NULL, ...)
medae(actual, predicted, w = NULL, ...)
mse(actual, predicted, w = NULL, ...)
logLoss(actual, predicted, w = NULL, ..., eps = .Machine$double.neg.eps)
classification_error(actual, predicted, w = NULL, ..., cut_value = 0.5)
Arguments
- actual
Observed values.
- predicted
Predicted values.
- w
Optional case weights.
- ...
Passed to other functions or methods.
- eps
Adjustment value in logLoss().
- cut_value
Cut value for binary classification.
Details
The two metrics for binary response are slightly different from their MetricsWeighted counterparts:
- logLoss(): Values of 0 and 1 in predicted are replaced by eps and 1 - eps, respectively, before applying MetricsWeighted::logLoss(). This prevents errors in case of prediction values of exactly 0 or 1.
- classification_error(): predicted is replaced by ifelse(predicted >= cut_value, 1, 0) before applying MetricsWeighted::classification_error().
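The following lines sketch these two adjustments; this is only an illustration of the behavior described above, not the package source:

p <- c(0, 0.25, 0.75, 1)                                  # predictions containing exact 0 and 1
y <- c(0, 0, 1, 1)
eps <- .Machine$double.neg.eps
p_adj <- ifelse(p == 0, eps, ifelse(p == 1, 1 - eps, p))  # 0 -> eps, 1 -> 1 - eps
# logLoss(y, p) then corresponds to MetricsWeighted::logLoss(y, p_adj)
p_class <- ifelse(p >= 0.5, 1, 0)                         # thresholding at cut_value = 0.5
# classification_error(y, p) then corresponds to MetricsWeighted::classification_error(y, p_class)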
Examples
data(mcycle, package = "MASS")
mod <- lm(accel ~ times, mcycle)
actual <- mcycle$accel
pred <- predict(mod)
rmse(actual, pred)
#> [1] 45.97677
medae(actual, pred)
#> [1] 35.47474
# performance() uses these metrics:
performance(mod)
#> --- Performance table ---
#> Metric: rmse
#> train_rmse test_rmse
#> model 45.977 NA
performance(mod, metric = "medae")
#> --- Performance table ---
#> Metric: medae
#> train_medae test_medae
#> model 35.475 NA
performance(mod, metric = list(med_abs_err = medae))
#> --- Performance table ---
#> Metric: med_abs_err
#> train_med_abs_err test_med_abs_err
#> model 35.475 NA
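All metrics also accept case weights through w; a sketch continuing the example above (the weights are arbitrary and purely illustrative, output omitted):

w <- rev(seq_along(actual))   # hypothetical case weights
rmse(actual, pred, w = w)     # weighted root mean squared error
medae(actual, pred, w = w)    # weighted median absolute error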