
These classes encode various metrics which can be used to evaluate the performance characteristics of point and interval estimators.

Usage

Expectation()

Bias()

Variance()

MSE()

OverestimationProbability()

Coverage()

SoftCoverage(shrinkage = 1)

Width()

TestAgreement()

Centrality(interval = NULL)

Arguments

shrinkage

shrinkage factor for the bump function used by SoftCoverage().

interval

confidence interval with respect to which the centrality of a point estimator should be evaluated.

Value

an object of class EstimatorScore. This class signals that an object can be used with the evaluate_estimator function.

Slots

label

name of the performance score. Used in printing methods.

Details on the implemented estimators

In the following, precise definitions of the performance scores implemented in adestr are given. To this end, let \(\hat{\mu}\) denote a point estimator and \((\hat{l}, \hat{u})\) an interval estimator, write \(\mathbb{E}\) for the expected value of a random variable and \(P\) for the probability of an event, and let \(\mu\) be the true value of the underlying parameter to be estimated.

Scores for point estimators (PointEstimatorScore):

  • Expectation(): \(\mathbb{E}[\hat{\mu}]\)

  • Bias(): \(\mathbb{E}[\hat{\mu} - \mu]\)

  • Variance(): \(\mathbb{E}[(\hat{\mu} - \mathbb{E}[\hat{\mu}])^2]\)

  • MSE(): \(\mathbb{E}[(\hat{\mu} - \mu)^2]\)

  • OverestimationProbability(): \(P(\hat{\mu} > \mu)\)

  • Centrality(interval): \(\mathbb{E}[(\hat{\mu} - \hat{l}) + (\hat{\mu} - \hat{u})]\)
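For instance, Centrality() takes the interval estimator against which the point estimator is judged as its argument. The following sketch (mirroring the setup of the examples below; the numerical results depend on the chosen design) evaluates how centrally the sample mean lies within the SWCF-ordering confidence interval:

```r
# Sketch: centrality of the sample mean with respect to the
# SWCF-ordering confidence interval. A value near 0 indicates the
# point estimate lies near the middle of the interval.
evaluate_estimator(
  score = Centrality(interval = StagewiseCombinationFunctionOrderingCI()),
  estimator = SampleMean(),
  data_distribution = Normal(FALSE),
  design = get_example_design(),
  mu = 0.3,
  sigma = 1,
  exact = FALSE
)
```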

Scores for confidence intervals (IntervalEstimatorScore):

  • Coverage(): \(P(\hat{l} \leq \mu \leq \hat{u})\)

  • Width(): \(\mathbb{E}[\hat{u} - \hat{l}]\)

  • TestAgreement(): \(P\left( \left\{0 < \hat{l} \text{ and } (c_{1,e} < Z_1 \text{ or } c_{2}(Z_1) < Z_2) \right\} \text{ or } \left\{ \hat{l} \leq 0 \text{ and } (Z_1 < c_{1,f} \text{ or } Z_2 \leq c_{2}(Z_1)) \right\} \right)\), where \(Z_1\) and \(Z_2\) are the first- and second-stage test statistics and \(c_{1,f}\), \(c_{1,e}\), and \(c_{2}(\cdot)\) are the futility, early-efficacy, and second-stage critical values of the design.
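Interval scores are plugged into evaluate_estimator() in the same way as point-estimator scores. This sketch (same design and distributional assumptions as the examples below) evaluates the expected width of the SWCF-ordering confidence interval:

```r
# Sketch: expected width of the SWCF-ordering confidence interval,
# evaluated at two assumed values of mu. Narrower intervals at equal
# coverage indicate a more efficient interval estimator.
evaluate_estimator(
  score = Width(),
  estimator = StagewiseCombinationFunctionOrderingCI(),
  data_distribution = Normal(FALSE),
  design = get_example_design(),
  mu = c(0, 0.3),
  sigma = 1,
  exact = FALSE
)
```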

Examples

evaluate_estimator(
  score = MSE(),
  estimator = SampleMean(),
  data_distribution = Normal(FALSE),
  design = get_example_design(),
  mu = c(0, 0.3, 0.6),
  sigma = 1,
  exact = FALSE
)
#> Design:                               TwoStageDesign<n1=28;0.8<=x1<=2.3:n2=9-40>
#> Data Distribution:                                          Normal<single-armed>
#> Estimator:                                                           Sample mean
#> Assumed sigma:                                                                 1
#> Assumed mu:                                                          0.0 0.3 0.6
#> Results:
#>  Expectation:                                -0.02491922  0.30567290  0.62041636
#>  Bias:                                    -0.024919220  0.005672903  0.020416356
#>  Variance:                                      0.02779122 0.03777824 0.02790974
#>  MSE:                                           0.02841219 0.03781042 0.02832657
#> 

evaluate_estimator(
  score = Coverage(),
  estimator = StagewiseCombinationFunctionOrderingCI(),
  data_distribution = Normal(FALSE),
  design = get_example_design(),
  mu = c(0, 0.3),
  sigma = 1,
  exact = FALSE
)
#> Design:                               TwoStageDesign<n1=28;0.8<=x1<=2.3:n2=9-40>
#> Data Distribution:                                          Normal<single-armed>
#> Estimator:                                                      SWCF ordering CI
#> Assumed sigma:                                                                 1
#> Assumed mu:                                                              0.0 0.3
#> Results:
#>  Coverage:                                                   0.9500681 0.9499744
#>