Reproducibility and Replicability in Science

Hi,

There is a lot to chew on in the recent report by the National Academies of Sciences, Engineering, and Medicine (196 pages):

https://www.nap.edu/catalog/25303/reproducibility-and-replicability-in-science

An interesting point is that they fix the definitions of reproducibility (obtaining consistent results using the same input data, computational steps, methods, and conditions of analysis; it is synonymous with computational reproducibility) and replicability (obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data). This will probably spark endless debate, but I find it very pragmatic and operational.
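To make the computational-reproducibility half of the definition concrete, here is a minimal Python sketch (mine, not from the report): with the same input data, the same code, and the same conditions of analysis (here, a fixed random seed), the result is identical on every run. The data and function names are illustrative.

```python
import numpy as np

def analysis(data: np.ndarray, seed: int = 42) -> float:
    """Toy analysis: bootstrap estimate of the mean."""
    # Fixing the seed pins down the "conditions of analysis":
    # the same random draws happen on every run.
    rng = np.random.default_rng(seed)
    boot_means = [rng.choice(data, size=len(data), replace=True).mean()
                  for _ in range(1000)]
    return float(np.mean(boot_means))

data = np.array([2.1, 3.4, 1.9, 4.2, 3.3, 2.8])
# Same data + same code + same seed => identical output, run after run.
assert analysis(data) == analysis(data)
```

Replicability, by contrast, would mean another team collecting its own data and asking whether the new estimate is consistent with this one.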

Highly recommended!!!


Some excerpts from the report

“Ensuring the longevity and openness of digital artifacts”:

“Although many research funders support the costs of data management during the life of the project, including preparing data for deposit in a repository, the ongoing costs of stewardship that are borne by the institution need to be covered by some combination of indirect budgets, unrestricted or purpose-directed funds, and perhaps savings from cutting other services.” (Committee on Reproducibility and Replicability in Science et al., 2019, p. 101)
“There are many discipline-specific considerations, such as which data and artifacts to store, in what formats, how best to define and establish the institutional and physical infrastructure needed, and how to set and encourage standards. […] It is important to keep in mind that storage itself is not the only expense; there are other life-cycle costs associated with accessioning and de-accessioning data and artifacts, manually curating and managing the information, updating to new technology, migrating data and artifacts to new and changing systems, and other activities and costs that may enable the data to be used.” (Committee on Reproducibility and Replicability in Science et al., 2019, p. 101)

## What degree of replicability should we aim for?
“[…] no field of science produces perfectly replicable results, but it may be useful to estimate the current level of replicability of published psychology results and ask whether that level is as high as the field believes it needs to be.” (Committee on Reproducibility and Replicability in Science et al., 2019, p. 122)
“An unresolved question in any analysis of replicability is what criteria to use to determine success or failure. Meta-analysis across a set of results may be a more promising technique to assess replicability, because it can evaluate moderators of effects as well as uniformity of results. However, meta-analysis may not achieve sufficient power given only a few studies.” (Committee on Reproducibility and Replicability in Science et al., 2019, p. 124)
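On the meta-analysis point, here is a hedged sketch of what such an assessment might look like, with made-up effect sizes and standard errors for a handful of replication studies: a fixed-effect, inverse-variance-weighted pooled estimate, plus Cochran's Q as a crude check on uniformity of results.

```python
import numpy as np

# Hypothetical effect estimates and standard errors from k replication studies.
effects = np.array([0.30, 0.12, 0.25, 0.05])
ses     = np.array([0.10, 0.15, 0.08, 0.20])

# Fixed-effect meta-analysis: weight each study by the inverse of its variance.
weights   = 1.0 / ses**2
pooled    = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# Cochran's Q: heterogeneity of effects around the pooled estimate
# (compared against a chi-square distribution with k - 1 df).
Q = np.sum(weights * (effects - pooled)**2)

print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}, "
      f"Q = {Q:.2f} on {len(effects) - 1} df")
```

With only a few studies, Q has little power to detect real heterogeneity, which is exactly the caveat the report raises.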
“Non-replicability is a normal part of the scientific process and can be due to the intrinsic variation and complexity of nature, the scope of current scientific knowledge, and the limits of current technologies.” (Committee on Reproducibility and Replicability in Science et al., 2019, p. 71)

## On preregistration, and in particular the place of exploratory analysis
“[…] when done correctly, preregistration improves interpretability of any statistical tests and, when relevant, can ensure that a single, predetermined statistical hypothesis has been carried out and can be regarded as a legitimate test. […] In other words, preregistration would allow researchers to achieve the stated goal of frequentist hypothesis testing, namely, error control.” (Committee on Reproducibility and Replicability in Science et al., 2019, p. 108)
“[…] researchers can document their planned hypothesis test and also collect and analyze data for exploratory analyses, and the scientific community can verify that the planned tests were indeed planned.” (Committee on Reproducibility and Replicability in Science et al., 2019, p. 108) “[…] both exploratory and confirmatory analyses can be conducted and reported; preregistration simply allows the scientific community to distinguish between analyses that were prespecified and those that were not and calibrate their confidence accordingly (Simmons, 2018, presentation to the committee).” (Committee on Reproducibility and Replicability in Science et al., 2019, p. 109)
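The "error control" point can be illustrated with a small simulation (my own, with illustrative numbers, not from the report): under a true null, a single prespecified test rejects at the nominal 5% rate, while reporting whichever of five unplanned outcome measures "worked" inflates the false-positive rate to roughly 1 − 0.95⁵ ≈ 23%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, n_outcomes = 5000, 30, 5

prereg_fp, flexible_fp = 0, 0
for _ in range(n_sims):
    # Two groups with no true effect, measured on five candidate outcomes.
    a = rng.normal(size=(n_outcomes, n))
    b = rng.normal(size=(n_outcomes, n))
    pvals = [stats.ttest_ind(a[i], b[i]).pvalue for i in range(n_outcomes)]
    prereg_fp   += pvals[0] < 0.05    # the single, prespecified outcome
    flexible_fp += min(pvals) < 0.05  # cherry-pick the best-looking outcome

print(f"preregistered test: {prereg_fp / n_sims:.3f} false-positive rate")
print(f"best-of-5 outcomes: {flexible_fp / n_sims:.3f} false-positive rate")
```

Preregistration does not forbid the exploratory analyses mentioned above; it just lets readers tell the two kinds of result apart.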
