Reproducible research with limited resources: a few excerpts on the topic

# Some perspectives on doing reproducible research with limited means
(a non-exhaustive synthesis; additions are welcome)

« In theory, improving research practices is noncontroversial; in practice, any new requirements or standards may be a burden on tight budgets. » (Committee on Reproducibility and Replicability in Science et al., 2019, p. 88) « [collecting more and better observations] can involve tradeoffs in how researchers allocate limited resources of time, money, and access to limited participant populations. » (Committee on Reproducibility and Replicability in Science et al., 2019, p. 88)
A problem that hits early-career researchers (ECRs) especially hard: evaluators press them to publish in highly ranked journals, yet these authors do not always have the means to carry out such work. « […] the career incentives are not yet aligned with these practices » (R. A. Poldrack, 2019, p. 11).
« The primary determinant of statistical power that is under our control is sample size, and it has become increasingly common for studies to increase sample sizes in order to address concerns about power. » (R. A. Poldrack, 2019, p. 11)
« I think the answer is not to collect an underpowered sample and label it a “pilot study” » (R. A. Poldrack, 2019, p. 11).
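
To make the stakes concrete, here is a minimal power calculation, sketched with Python and statsmodels (our tooling choice; Poldrack does not prescribe any): the per-group sample size needed to detect an effect at 80% power with α = 0.05.

```python
# Required per-group n for a two-sample t-test at 80% power, alpha = .05.
import math

from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for d in (0.8, 0.5, 0.2):  # Cohen's d: large, medium, small effect
    n = power_analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"d = {d}: {math.ceil(n)} participants per group")
# d = 0.8: 26 participants per group
# d = 0.5: 64 participants per group
# d = 0.2: 394 participants per group
```

Small effects quickly push the required sample beyond what a single under-resourced lab can recruit, which is what motivates the avenues below.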

What are the possible solutions and avenues?
## Adapt your research question
“if you can’t answer the question you love, love the question you can” (R. Poldrack, 2018); « […] try to find [a] related question where you can make progress » (R. A. Poldrack, 2019, p. 11).

## Collaborate
« […] rather than simply accept underpowered studies, researchers in that field have instead formed large consortia that together have generated datasets large enough to powerfully test the hypotheses of interest and have demonstrated replicable associations. » (R. A. Poldrack, 2019, p. 11‑12)

« First, crowdsourced research projects can achieve high statistical power by increasing sample size. A major limiting factor for individual researchers is the available number of participants for a particular study, especially when the study requires in-person participation. Crowdsourced research mitigates this problem by aggregating data from many labs. » (Moshontz et al., 2018, p. 6)
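
A toy calculation shows how pooling changes the picture (the per-lab capacity below is a made-up figure, used purely for illustration):

```python
# How many labs must pool data to detect a small effect (d = 0.2)?
import math

from statsmodels.stats.power import TTestIndPower

n_required = math.ceil(
    TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.80)
)  # 394 participants per group
per_lab = 25  # hypothetical: each lab contributes 25 participants per group
print(f"{math.ceil(n_required / per_lab)} labs needed")  # 16 labs
```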

« Crowdsourced research requires providing many teams access to the experimental materials and procedures needed to complete the same study. This demands greater transparency and documentation of the research workflow. Data from these projects are frequently analyzed by teams at multiple institutions, requiring researchers to take much greater care to document and share data and analyses. Once materials and data are ready to share within a collaborating team, they are also ready to share with the broader community of fellow researchers and consumers of science. » (Moshontz et al., 2018, p. 7)
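
As a minimal sketch of what “ready to share” can look like (file names and metadata fields are hypothetical), data can be written out together with a machine-readable description of its columns and provenance:

```python
# Save data with a machine-readable sidecar describing columns and provenance.
import json
from pathlib import Path

import pandas as pd

data = pd.DataFrame({"participant_id": [1, 2], "rt_ms": [512.3, 498.7]})
out = Path("shared_dataset")
out.mkdir(exist_ok=True)
data.to_csv(out / "trials.csv", index=False)

metadata = {  # hypothetical fields, for illustration only
    "task": "example reaction-time task",
    "columns": {"participant_id": "anonymised ID", "rt_ms": "reaction time (ms)"},
    "contact": "Lab A",
}
(out / "trials.json").write_text(json.dumps(metadata, indent=2))
```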

## Use public datasets
« There are many success stories in human neuroscience of researchers who have succeeded by bringing interesting new ideas to bear on openly available data. » (R. A. Poldrack, 2019, p. 12)
« If you can’t find relevant data, then consider contacting another lab and asking to obtain their published data for further analysis » (R. A. Poldrack, 2019, p. 12).
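
Reusing shared data can be as simple as the sketch below (the path and the "group" column are hypothetical; repositories such as OpenNeuro or OSF distribute real datasets in comparable tabular formats):

```python
# Load and sanity-check a dataset published by another lab.
import pandas as pd

shared = pd.read_csv("published_dataset/participants.csv")  # hypothetical path
print(shared.describe())  # inspect distributions before building on the data
print(shared.groupby("group").mean(numeric_only=True))  # "group" is hypothetical
```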

## Don't stake everything on data: also invest in methodology
« […] it is increasingly clear that deeper theory and more powerful computational methods will soon become the limiting factors in the progress of neuroscience. […] In nearly every domain, there are open questions about how to model and analyze data, and computational neuroscientists are in great demand right now […]. A bonus [is that the] kind of skills that you learn will still be very useful if you later return to hands-on experimentation […]. » (R. A. Poldrack, 2019, p. 12)

## Sources cited
Committee on Reproducibility and Replicability in Science, Committee on National Statistics, Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education, Nuclear and Radiation Studies Board, Division on Earth and Life Studies, Committee on Applied and Theoretical Statistics, et al. 2019. Reproducibility and Replicability in Science. Washington, D.C.: National Academies Press. https://doi.org/10.17226/25303.

Moshontz, Hannah, Lorne Campbell, Charles R. Ebersole, Hans IJzerman, Heather L. Urry, Patrick S. Forscher, Jon E. Grahe, et al. 2018. ‘The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network’. Advances in Methods and Practices in Psychological Science 1 (4): 501–15. https://doi.org/10.1177/2515245918797607.

Poldrack, Russ. 2018. ‘How Can One Do Reproducible Science with Limited Resources?’ russpoldrack.org (blog). 2018. http://www.russpoldrack.org/2018/04/how-can-one-do-reproducible-science.html.

Poldrack, Russell A. 2019. ‘The Costs of Reproducibility’. Neuron 101 (1): 11–14. https://doi.org/10.1016/j.neuron.2018.11.030.