Calibration of probability predictions from machine-learning and statistical models

Abstract: Aim
Predictions from statistical models may be uncalibrated, meaning that the predicted values do not have the nominal coverage probability. This is most easily seen with probability predictions in machine-learning classification, including the common case of species occurrence probabilities. Here, a predicted probability of, say, 0.7 should indicate that out of 100 cases with these environmental conditions, and hence the same predicted probability, the species should be present in 70 and absent in 30.
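
As a hedged illustration of this idea (not code from the article itself), calibration can be checked by binning the predicted probabilities and comparing each bin's mean prediction with the observed frequency of presences; the function name `calibration_table` and the ten-bin default below are assumptions made for this sketch.

```python
# Minimal sketch of a calibration check (assumed helper, not the article's code):
# bin the predicted occurrence probabilities and compare each bin's mean
# prediction with the observed frequency of presences in that bin.
import numpy as np

def calibration_table(p_pred, y_obs, n_bins=10):
    """Per-bin (mean predicted probability, observed presence frequency, n)."""
    p_pred = np.asarray(p_pred, dtype=float)
    y_obs = np.asarray(y_obs, dtype=float)
    # assign each prediction to a bin; clip so that p == 1.0 falls in the top bin
    bin_idx = np.minimum((p_pred * n_bins).astype(int), n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            rows.append((p_pred[mask].mean(), y_obs[mask].mean(), int(mask.sum())))
    # for a well-calibrated model, the first two entries of each row agree
    return rows

# Example: among cases predicted at about 0.7, presences should occur
# roughly 70% of the time.
```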

Innovation
A simple calibration plot shows that this is not necessarily the case, particularly for overfitted models or for algorithms that use non-likelihood target functions. As a consequence, ‘raw’ predictions from such a model can easily be off by 0.2, are unsuitable for averaging across model types, and the resulting maps can hence be substantially distorted. The solution, a flexible calibration regression, is simple and can be applied whenever deviations are observed.
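
The flexible calibration regression itself is not reproduced here; as a hedged sketch of the general idea, a simple logit-linear recalibration (regressing observed presence/absence on the logit of the raw predictions) is shown below. The function name `recalibrate` and the use of scikit-learn are assumptions for this sketch; a genuinely flexible version would replace the single linear term with a smooth (e.g., spline) term.

```python
# Hedged sketch of a simple calibration regression (assumed names; a flexible
# version would use a smooth term instead of a single linear one): observed
# 0/1 outcomes are regressed on the logit of the raw predictions, and the
# fitted model then maps raw probabilities to calibrated ones.
import numpy as np
from sklearn.linear_model import LogisticRegression

def recalibrate(p_raw_train, y_train, p_raw_new, eps=1e-6):
    """Fit a logit-linear calibration model and apply it to new raw predictions."""
    def logit(p):
        p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
        return np.log(p / (1 - p))
    model = LogisticRegression(C=1e6)   # large C: effectively unpenalized
    model.fit(logit(p_raw_train).reshape(-1, 1), np.asarray(y_train))
    return model.predict_proba(logit(p_raw_new).reshape(-1, 1))[:, 1]

# Calibrated predictions can then be interpreted probabilistically or averaged
# across model types, rather than using raw values that may be off by ~0.2.
```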

Main conclusions
‘Raw’, uncalibrated probability predictions should be calibrated before interpreting or averaging them in a probabilistic way.

Location
Deutsche Nationalbibliothek Frankfurt am Main
Extent
Online resource
Language
English
Notes
Global Ecology and Biogeography, 29(4), 2020, 760-765, ISSN: 1466-8238

Event
Publication
(where)
Freiburg
(who)
Universität
(when)
2020
Creator

DOI
10.1111/geb.13070
URN
urn:nbn:de:bsz:25-freidok-1554536
Rights
Access to the object is unrestricted.
