Universal adaptability: Target-independent inference that competes with propensity scoring


Kim, Michael P.; Kern, Christoph; Goldwasser, Shafi; Kreuter, Frauke; Reingold, Omer



DOI: https://doi.org/10.1073/pnas.2108097119
URL: https://www.pnas.org/content/119/4/e2108097119
Document Type: Article
Year of publication: 2022
Journal: Proceedings of the National Academy of Sciences of the United States of America (PNAS)
Volume: 119
Issue number: 4, Article e2108097119
Page range: 1-6
Place of publication: Washington, DC
Publisher: National Academy of Sciences
ISSN: 0027-8424, 1091-6490
Publication language: English
Institution: School of Social Sciences > Statistik u. Sozialwissenschaftliche Methodenlehre (Kreuter 2014-2020)
Subject: 310 Statistics
Keywords (English): statistical validity, propensity scoring, algorithmic fairness
Abstract: The gold-standard approaches for gleaning statistically valid conclusions from data involve random sampling from the population. Collecting properly randomized data, however, can be challenging, so modern statistical methods, including propensity score reweighting, aim to enable valid inferences when random sampling is not feasible. We put forth an approach for making inferences based on available data from a source population that may differ in composition in unknown ways from an eventual target population. Whereas propensity scoring requires a separate estimation procedure for each different target population, we show how to build a single estimator, based on source data alone, that allows for efficient and accurate estimates on any downstream target data. We demonstrate, theoretically and empirically, that our target-independent approach to inference, which we dub “universal adaptability,” is competitive with target-specific approaches that rely on propensity scoring. Our approach builds on a surprising connection between the problem of inferences in unspecified target populations and the multicalibration problem, studied in the burgeoning field of algorithmic fairness. We show how the multicalibration framework can be employed to yield valid inferences from a single source population across a diverse set of target populations.
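To make the target-specific baseline concrete: propensity scoring reweights a source sample so that it resembles a given target population. The sketch below is a minimal toy illustration of this idea, not the paper's universal-adaptability method. It assumes a one-dimensional covariate and Gaussian covariate distributions in both populations, and uses the estimated density ratio (equivalent to propensity-score weights up to normalization) to correct a mean estimate under covariate shift.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all numbers are illustrative assumptions): the covariate x is
# shifted between source and target, and outcomes y are observed only in
# the source sample.
n = 20_000
x_source = rng.normal(0.0, 1.0, n)                    # source covariates
x_target = rng.normal(1.0, 1.0, n)                    # target covariates (shifted)
y_source = 2.0 * x_source + rng.normal(0.0, 0.1, n)   # outcomes in the source only

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Estimate each population's covariate distribution (here, fitted Gaussians;
# in practice one would fit a propensity model P(target | x) instead).
mu_s, sd_s = x_source.mean(), x_source.std()
mu_t, sd_t = x_target.mean(), x_target.std()

# Density-ratio weights w(x) = p_target(x) / p_source(x) on the source sample.
w = gaussian_pdf(x_source, mu_t, sd_t) / gaussian_pdf(x_source, mu_s, sd_s)

naive_estimate = y_source.mean()                       # ignores the shift
reweighted_estimate = np.average(y_source, weights=w)  # corrects for the shift
```

Because the target covariate mean is 1.0 and y ≈ 2x, the naive source mean sits near 0 while the reweighted estimate recovers a target mean near 2. Note that these weights are built for one specific target; the paper's contribution is a single multicalibrated estimator, fit on source data alone, that achieves comparable accuracy across many unspecified targets.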

This entry is part of the university bibliography.



