Are current etiological theories of Alzheimer’s disease falsifiable? An epistemological assessment
Costa T.; Liloia D.
2025-01-01
Abstract
Alzheimer's disease (AD) research is plagued by a proliferation of competing etiological theories, often coexisting without undergoing systematic critical comparison. This article examines the epistemological limitations of the traditional falsifiability criterion, formulated by Karl Popper, and demonstrates how this principle fails to function effectively in the context of AD research. Biological complexity, the absence of unequivocal biomarkers, institutional resistance to paradigm shifts, and academic incentives to preserve dominant hypotheses all contribute to the erosion of falsifiability as an operational standard. In response, we propose an alternative framework based on Bayesian inference, understood as eliminative induction: a process in which scientific theories are modeled as probabilistic hypotheses with gradable plausibility, continuously updated in light of new evidence. Within this framework, models are not regarded as literally "true," but as pragmatic tools whose predictive performance determines their scientific value. We advocate for a more comparative, predictive, and transparent scientific practice, wherein progress does not hinge on identifying a single cause or on proving (or disproving) a hypothesis, but rather on enhancing our ability to rationally distinguish among competing models using quantitative criteria.
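The Bayesian "eliminative induction" the abstract proposes can be illustrated with a minimal sketch: competing etiological hypotheses are held as probabilistic models whose plausibility is re-weighted as evidence arrives. The hypothesis names, priors, and likelihoods below are hypothetical placeholders for illustration, not values from the article.

```python
# Minimal sketch of Bayesian updating over competing hypotheses.
# All hypothesis names, priors, and likelihoods are hypothetical.

def bayes_update(priors, likelihoods):
    """Return posterior probabilities P(H|E) for each hypothesis H.

    priors      -- dict mapping hypothesis name to P(H)
    likelihoods -- dict mapping hypothesis name to P(E|H), i.e. how well
                   each model predicted the observed evidence E
    """
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    evidence = sum(unnormalized.values())  # P(E), the normalizing constant
    return {h: p / evidence for h, p in unnormalized.items()}

# Flat priors over three illustrative competing models.
priors = {"amyloid cascade": 1 / 3,
          "tau propagation": 1 / 3,
          "neuroinflammation": 1 / 3}

# Hypothetical likelihoods: how strongly each model predicted a new finding.
likelihoods = {"amyloid cascade": 0.2,
               "tau propagation": 0.5,
               "neuroinflammation": 0.3}

posteriors = bayes_update(priors, likelihoods)
for hypothesis, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"P({hypothesis} | evidence) = {p:.2f}")

# No model is "proven" or "refuted" outright; each is re-weighted, and
# repeated updates progressively eliminate poorly predicting models.
```

In this picture, scientific progress is the gradual concentration of posterior probability on the models that predict best, which is the comparative, quantitative standard the abstract advocates.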
| File | Size | Format | Access |
|---|---|---|---|
| fnagi-17-1708234.pdf (publisher's PDF) | 591.59 kB | Adobe PDF | Open access |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.



