This topic is intended to collect references about reproducibility in the aforementioned scientific fields: that is, articles about reproducibility in *various* fields (physics, the life sciences, psychology, medicine, etc.), but seen through the lens of historians, sociologists, and philosophers of science, and STS scholars.
The 4S annual conference (4S = Society for Social Studies of Science) has just closed its doors, and the program (link removed) comprised three thematic sessions concerned with reproducibility. I'll give a summary of, and links to, these panels in the next posts.
Reproducibility and Other Problems: Practical and Institutional Responses to Contemporary Crises in Science - I
There is an emerging sense that science writ large is facing an unprecedented set of crises and pressures. Oft mentioned is the politicization and public contestation of climate science, nutrition research, vaccines, and the like. But there is another set of crises, less about the politicization of science and much more about doubts among scientists about the credibility of their own research. At least since John Ioannidis's 2005 paper, "Why Most Published Research Findings Are False," there has been widespread anxiety about the epistemological reliability of science. Since then, scientists have raised the alarm about a range of crises across a variety of fields: the reproducibility crisis, the ubiquity of questionable research practices, rising rates of retraction, high-profile revelations of scientific fraud, predatory publishing, rampant conflicts of interest, and the general inadequacy of peer review. Scientists and scientific institutions have launched an array of responses: metascientific research, mass replication experiments, new journal standards, open science frameworks aiming to enhance research transparency and data sharing, post-publication peer review systems and misconduct clearinghouses like PubPeer and Retraction Watch, and even blogs and tweetstorms publicizing supposed scientific transgressions in real time. This panel invites papers probing the history, development, dimensions, and boundaries of this scientific crisis (or crises); formal and informal efforts to respond; and the epistemic, practical, and institutional implications for scientific organization, especially the dilemmas and contradictions that emerge.
Reproducibility and Other Problems: Practical and Institutional Responses to Contemporary Crises in Science - II
Repetition and Replication across Epistemic Cultures
Fueled by growing concerns about the reproducibility of well-known scientific studies, and exacerbated by a number of high-profile misconduct cases, the sciences seem to be undergoing an epistemological crisis. In response, many involved have doubled down on traditional criteria for objectivity, culminating in the ambition that all science ought to be replicable. Some salient characteristics of this normative discourse are the formalization of methods and of reporting, delayed attribution of credit, renewed struggles over the boundaries of science, and the explicit devaluation of epistemic variation. More recently, the call for replicability has been extended across all sciences and humanities, positioning replicability as a universal goal. While loud and dominant, such calls for replicability (or reproducibility) as the decisive criterion of research quality across the sciences and humanities are not without opposition.
In this panel, we seek to discuss how value and quality are established in the sciences and the humanities, and what role(s) should be reserved for reproducibility and replicability at institutional, organizational, and career levels. We seek to discuss how the replication drive interrupts and innovates knowledge-making across diverse epistemic practices. How does replicability, as a universal requirement, influence the situated character of research and the local character of volunteer, participant, or patient expertise? Which institutional, organizational, and career narratives are conducive to emergent norms, and which are not? Which other conceptualizations of 'good' research compete with replicable research? How plural and local are replications (and replication attempts)? What counts as a successful replication, and how can it be known?
Interrupting Open Science: Use, Reuse, and Misuse of Research Data and Code
(I can't post the link, I guess because, as the fourth similar link, it may be considered spam, but the panel can be found with a search on the conference site.)
Use and reuse of research materials such as data and code are central themes of open science policies. Funding agencies, publishers, and researchers increasingly expect research products to be made publicly available in order to promote scientific reproducibility and maximize the benefits derived from funded research. Data and code are of particular interest for several reasons: the complexity of workflows and the opacity of analytical tools in computational science often invite skepticism and a corresponding demand for transparency; the perceived "softness" of data and code suggests appealing opportunities for adaptation and innovation; further, open science practices often treat research data and code as fungible and portable commodities. The underlying assumptions of open science (for example, that reuse is inherently good) are rarely questioned, however, and little attention has been given to the burdens, risks, and failures of open science.
This session includes papers that enrich, complicate, and problematize the values of use and reuse in open science. Potential questions to explore include: Whose interests are served in promoting open science, and at whose expense? In what ways is it problematic to repurpose and adapt data and code in new contexts? When does reuse verge on misuse? What does reuse look like in practice, and what forms of value can (and cannot) be expected to emerge from it? Finally, how should the distinction between use and reuse be conceptualized in the first place?
Apparently, my posts are flagged as spam. Is it because of the hyperlinks? I tried removing the links by editing the posts, but they're still hidden.
I'm not sure why they were marked as spam initially (size? links? number?). Anyway, they appeared as "to be reviewed", which I just did… Other participants invested in the forum will gain such permissions as they contribute more and more. That's how Discourse works, and it makes sense.
Thanks, @alegrand. I suspected that was normal Discourse behavior towards newbies, plus anti-spam measures. They do make sense, np.