
Idea Machines


Dec 8, 2018

My guest this week is Brian Nosek, co-founder and Executive Director of the Center for Open Science. Brian is also a professor in the Department of Psychology at the University of Virginia, doing research on the gap between values and practices, such as when behavior is influenced by factors other than one's intentions and goals.

The topic of this conversation is how incentives in academia lead to problems with how we do science, how we can fix those problems, the Center for Open Science, and how to bring about systemic change in general.

Show Notes

Brian’s Website

Brian on Twitter (@BrianNosek)

Center for Open Science

The Replication Crisis

Preregistration

Article in Nature about preregistration results

The Scientific Method

If you want more, check out Brian on EconTalk

Transcript

Intro

 

[00:00:00] In this podcast I talk to Brian Nosek about innovating on the very beginning of the innovation pipeline: research. I met Brian at the Dartmouth 60th anniversary conference and loved his enthusiasm for changing the way we do science. Here's his official biography: Brian Nosek is a co-founder and the Executive Director of the Center for Open Science. COS is a nonprofit dedicated to enabling open and reproducible research practices worldwide.

Brian is also a professor in the Department of Psychology at the University of Virginia. He received his PhD from Yale University in 2002. In 2015 he was named to Nature's 10 list and the Chronicle of Higher Education Influence list. Some quick context about Brian's work and the Center for Open Science:

There's a general consensus in academic circles that there are glaring problems in how we do research today. The way research works is generally like this: researchers, usually based at a university, do experiments; then, when they have a [00:01:00] result, they write it up in a paper; that paper goes through the peer-review process, and then a journal publishes it.

The number of journal papers you've published and their popularity make or break your career. They're the primary consideration for getting a position, receiving tenure, getting grants, and prestige in general. That system evolved in the 19th century, when many fewer people did research and grants didn't even exist; we get into how things have changed in the podcast.

You may also have heard of what's known as the replication crisis. This is the fairly alarming name for a recent phenomenon in which people have tried and failed to replicate many well-known studies. For example, you may have heard that power posing will make you act bolder, or that self-control is a limited resource.

Both of the studies that originated those ideas failed to replicate. Since replicating findings is a core part of the scientific method, unreplicated results becoming part of canon is a big deal. Brian has been heavily involved in the [00:02:00] crisis, and several of the Center for Open Science's initiatives target replication.

So with that, I invite you to join my conversation with Brian Nosek.

 

How does open science accelerate innovation and what got you excited about it?

 

Ben: So the theme that I'm really interested in is: how do we accelerate innovations? And so, just to start off, I'd love to ask you a really broad question: in your mind, how does having a more open science framework help us accelerate innovations? And, parallel to that, what got you excited about it in the first place?

Brian: Yeah, yeah. So this is really the core of why we started the Center for Open Science: to figure out how we can maximize the progress of science, given that we see a number of different barriers to, or friction points in, the pace and progress of [00:03:00] science.

And so there are a few things, I think, in how openness accelerates innovation, and you can think of it as multiple stages. At the opening stage, openness in terms of planning, preregistering what your study is about, why you're doing the study, that the study exists in the first place, is a mechanism for improving innovation by increasing the credibility of the outputs.

Particularly in making a clear distinction between the things that we planned in advance (the hypotheses and ideas that we have, where we're acquiring data in order to test those ideas) and the exploratory results, the things that we learn once we've observed the data. We get insights from those, but they are necessarily more uncertain. Having a clear distinction between those two practices is a mechanism for

knowing the credibility of the results [00:04:00] and then more confidently applying the results that one observes in the literature, after the fact, for doing next steps. And the reason that's really important, I think, is that we have so many incentives in the research pipeline to dress up exploratory findings, which are exciting and sexy and interesting but uncertain, as if they were hypothesis-driven, right?

We apply p-values to them. We apply a story up front to them. We present them as: these are results that are highly credible from a confirmatory framework. Yeah, and that makes it really hard for innovation to happen. So I'll pause there, because there's lots more, but yeah. Let's touch on that.

 

What has changed to make the problem worse?

 

Ben: There's a lot right there. So you mentioned the incentives to basically make things that aren't really following the scientific method look like [00:05:00] they're following the scientific method. And one of the things I'm always really interested in is what has changed in the incentives, because I think that there's definitely this

notion that this problem has gotten worse over time. And that means that something has changed. So in your mind, what changed to pull science away from that idealized picture, where you have your hypothesis, then you test that hypothesis, and then you create a new hypothesis, toward this system that you're pushing back against?

Brian: You know, it's a good question. So let me start by making the case for why we could say that nothing has changed, and then what might lead to thinking something has changed, and unpack this. The potential reason to think that nothing has [00:06:00] changed is that the kinds of results that are the most rewarded have always been the kinds of results that are the most rewarded, right?

If I find a novel finding, rather than repeating something someone else has done, I'm likely to be rewarded more, with publication, with accolades, etcetera. If I find a positive result, I'm more likely to gain recognition for that than for a negative result. 'Nothing's there' versus 'this treatment is effective': which one's more interesting?

Well, we know which one's more interesting. Yeah. And then the clean and tidy story, right? It all fits together and it works, and now I have this new explanation for this new phenomenon that everyone can take seriously. So a novel, positive, clean-and-tidy story is the ideal outcome in science, and that's because it breaks new ground and offers a new idea and a new way of thinking about the world.

And so that's great. We want those. We've always wanted those things. So the reason to think this has always been a challenge is [00:07:00] because who doesn't want that, and who hasn't wanted that, right? 'It turns out my whole career is a bunch of nulls, where nothing fits together, it's just a big mess' is not a way to pitch a successful career.

So that challenge is there, and what preregistration, or committing in advance, does is help us have the constraints to be honest about which parts are actual results, credible confirmations of pre-existing hypotheses, versus stuff that is exploring and unpacking whatever it is we can find.

Okay, so that incentive landscape, I don't think, has changed. Now, what things have changed? Well, there are a couple of things that we can point to as potential reasons to think that the problem has gotten worse. One is that data acquisition in many fields is a lot easier than it ever was, [00:08:00] and so we have access to more data and more ways to analyze it, more efficient analysis, right?

We have computers that do this instead of slide rules. We can do a lot more adventuring in data, and so we have more opportunity to explore and exploit the noise and transform it into seeming signal. The second is that the competitive landscape is stronger, right? The ratio of people that want jobs to jobs available is getting larger and larger and larger, and the competitiveness for grants is the same way, and that competition can

very easily amplify these challenges. People who are more willing to exploit more researcher degrees of freedom are going to be able to get the kinds of results that are rewarded in the system more easily, and so that would amplify the presence of those practices in the people that manage to [00:09:00] survive that competitive field.

So I think it's a reasonable hypothesis that it's gotten worse. I don't think there's definitive evidence, but those would be the theoretical points, at least, that I would point to for that.

Ben: That makes a lot of sense. So, just sort of jumping back: you had a couple of points, and we've just touched on the first one.

 

Point Number Two about Accelerating Innovation

 

Ben: So I want to give you the chance to go back and keep going through those.

Brian: Right, yeah. So accelerating innovation is the idea, right? The point of preregistration is accelerating innovation by clarifying the credibility of claims as they are produced. If we do that better, I think we'll be much more efficient, and we'll have a better understanding of the evidence base as it comes out.

Yeah. The second phase is the openness of the data and materials for the purposes of verifying those [00:10:00] initial claims, right? I do a study; I preregistered it; it's all great; and I share it with you and you read it. And you say, well, that sounds great, but did you actually get that? And what would have happened if you had made different decisions here, here, and there? Because I don't quite agree with the decisions that you made in your analysis pipeline, and I see some gaps there. So you being able to access the materials that I produced, and the data that came from the study,

makes it so that you can, one, simply verify that you can reproduce the findings that I reported. Right? I didn't just screw up the analysis script or something. That, as a minimum standard, is useful. But even more than that, you can test the robustness in ways that I didn't. I came to that question with some approach; you might look at it and say, well, I would do it differently; and the ability to reassess the data for the same question is a very useful thing for

robustness, particularly in areas that have [00:11:00] complex analytic pipelines, where there are many choices to make. So that's the second part. Then the third part is reuse. Not only should we be able to verify and test the robustness of claims as they happen, but data can be used for lots of different purposes.

Sometimes there are things that were not at all anticipated by the data originator. And so we can accelerate innovation by making it a lot easier to aggregate evidence for claims across multiple studies, by having the data be more accessible, but then also by making that data more accessible and usable for

studying things that no one ever anticipated trying to investigate. Yeah. And so the efficiency gain of making better use of the data that already exists, rather than redundantly redoing every question, is a massive efficiency [00:12:00] opportunity, because there is a lot of data, and there is a lot of work that goes in. Why not make the most use of it?

 

What is enabled by open science?

 

Ben: Yeah, that makes a lot of sense. Do you have any really good, sort of keystone examples of these things in action? Places where, because people could replicate the study, or could actually go back to the pipeline, or reuse the data, something was enabled that wouldn't have been possible otherwise?

Brian: Yeah. Well, let's see. I'll give a couple of local, even personal, examples just to illustrate some of the points. So we had this super fun project that we did to illustrate the second part of the pipeline, right, this robustness phase: that people may make different choices, and those choices may have implications for the reliability of the results.

So what we did in this project was that we acquired [00:13:00] a very rich dataset of lots of players and referees and outcomes in soccer, and we took that dataset and then recruited different teams (29 different teams, in the end) with lots of varied expertise in statistics and in analyzing data, and had them all investigate the same research question,

which is: are players with darker skin tone more likely to get a red card than players with lighter skin tone? And so that's, you know, a question of real interest that people have studied. And then we provided this dataset: here's a dataset that you can use to analyze that. And the teams worked on their own and developed analysis strategies for how they were going to test that hypothesis.

They came up with their analysis strategies and submitted their analyses and their results to us. We removed the results, and [00:14:00] then took their analysis strategies and shared them among the teams for peer review. Right, different people looking at it; they had made different choices. They peer reviewed each other and then went back.

They took those peer reviews (they didn't know what each other had found), and if they wanted to update their analysis, they could. And so they did all that and then submitted their final analyses. And what we observed was huge variation in analysis choices, and variation in the results.

As a simple criterion for illustrating the variation in results: two-thirds of the teams found a significant effect, right, p less than 0.05, the standard for deciding whether you see something there in the data; and a third of the teams found a null. And then of course they debated amongst each other about which analysis strategy was the right strategy, but in the end it was very clear among the teams that there were lots of reasonable choices that could be made.

And [00:15:00] those reasonable choices had implications for the results that were observed from the same data. Yeah. And in the standard process, we do not see this; it's not easy to observe how the analytic choices influence the results, right? We see a paper, it has an outcome, and we say those are the outcomes the data revealed.

Right, but what's actually the case is that those are the outcomes the data revealed contingent on all the choices that the researcher made. So I think that's illustrative: it helps to figure out the robustness of that particular finding, given the many different reasonable choices

that one could make, where if we had just seen one, we would have had a totally different interpretation, right? Either it's there or it's not there.

 

How do you encode context for experiments, especially with people?

 

Ben: Yeah. And in terms of the data, and [00:16:00] really exposing the study more: something that I've seen, especially in studies involving people, is that it seems like the context really matters, and people very often say, well, there's a lot of context going on in addition to just the procedure that's reported.

Do you have any thoughts on better ways of encoding and recording that context, especially for experiments that involve people?

Brian: Yeah, yeah. This is a big challenge, because we presume, particularly in the social and life sciences, that there are many interactions between the different variables.

Right? The climate, the temperature, the time of day, the circadian rhythms, the personalities: whatever the different elements of the subjects of the study are, whether they be plants or people or otherwise. Yeah. [00:17:00] And so there are a couple of different challenges here to unpack. One is that, in our papers,

we state claims at the maximal level of generality that we can possibly get away with, and that's just a normal pattern of human communication and reasoning, right? I do my study in my lab at the University of Virginia on University of Virginia undergraduates. I don't conclude, 'in University of Virginia undergraduates, on this particular date, in this particular time period, in this particular class.'

This is what people do, with the recognition that that might be wrong, with the recognition that there might be boundary conditions, but not often with articulating where we think, theoretically, those boundary conditions could be. So one step is actually doing what some colleagues in psychology suggested in this great paper about constraints on [00:18:00] generality.

They suggest that what we need in the discussion sections of all papers is a section that says: when won't this hold? Yeah, just tell them what you know. Where is this not going to hold? Just giving people an occasion to think about that for a second, and say: oh, okay, yeah, actually we do think this is limited to people that live in Virginia, for these reasons. Or: no, maybe we don't really think this applies to everybody, but now we have to say so, so people can evaluate it.

So that alone, I think, would make a huge difference, just because it would provide that occasion to put in the constraints ourselves, as the originators of the findings. A second factor, of course, is just sharing as much of the materials as possible. But often that doesn't provide a lot of the context, particularly for more complex experimental studies, or if there are particular procedural factors; right, in a lot of the biomedical sciences,

there's a lot of nuance [00:19:00] in how this particular reagent needs to be dealt with, how the intervention needs to be administered, etc. And so I like the moves toward video of procedures, right? There is a journal, JoVE, the Journal of Visualized Experiments, that gives people opportunities to show the actual experimental protocol as it is administered,

to try to improve on this. A lot of people using the OSF put videos up of the experiment as they administered it, to maximize your ability to see how it was done. So those steps, I think, can really help to maximize the transparency of those things that are hard to put into words or aren't digitally encoded. Yeah, and those are real gaps.

 

What is the ultimate version of open science?

 

Ben: Got it. And so, in your mind, what is sort of the endgame of all this? What [00:20:00] would be the ideal, best-case scenario for how science is conducted? Say you get to control the world, and you get to tell everybody practicing science exactly what to do. What would that look like?

Brian: Well, if I really had control, we would all just work on Wikipedia, and we would just be revising one big paper, updating it continuously, and we would get all of our credit by, you know, logging how many words I changed, or words that survived after people made their revisions, and whether those changed words are on pages that were more important for the overall scientific record versus the less important spandrels.

And so we would output one paper that is the summary of knowledge, which is what Wikipedia summarizes. All right, so maybe that's going a little bit further than what [00:21:00] we can consider the realm of the conceptually possible. So if we imagine a little bit nearer term, what I would love to see is the ability to trace the history of any research project, and that seems more achievable, in the sense that

in fact, my laboratory is getting close to this, right? Every study that we do is registered on the OSF, and once we finish the studies, we post the materials and the data (or as we're doing it, if we're managing the materials and data there), and then we attach a paper, if we write a paper at the end, a preprint or the final report, so that people can discover it. And all of those things are linked together.

It would be really cool if I had those data in a standardized framework for how they are [00:22:00] coded, so that they could be automatically and easily integrated with other similar kinds of data, so that someone going onto the system would be able to say: show me all the studies that ever investigated this variable associated with that variable, and tell me what the aggregate result is. Right? Real-time meta-analysis of the entire database of all the data that has ever been collected.

That kind of flexibility would help very rapidly, I think, not just to spur innovations and new things, but to point out where there are gaps: where there are particular kinds of relationships between things, particular effects of interventions, where we know a ton, and then we have this big assumption in our theoretical framework about how we get from X to Y.

And then, as we look for variables that help us identify whether X gets us to Y, we see there just isn't stuff there; the literature has not filled that gap. So I think there are huge benefits to that [00:23:00] kind of aggregability. But mostly, what I want to be able to do is, instead of saying you have to do research in any particular way,

the only requirement is: you have to show us how you did your research, in your particular way, so that the marketplace of ideas can operate as efficiently as possible. And that really is the key thing. It's not about preventing bad ideas from getting into the system. It's not about making sure that only the best things get through immediately. It's not about gatekeepers.

It's about efficiency in how we comb that literature, figuring out which things are credible and which things are not, because it's really useful to get ideas into the system, as long as they can be self-corrected efficiently as well. And that's where I think we are not doing well in the current system.

We're doing great on generation. [00:24:00] We're generating all kinds of innovative ideas. Yeah, but where we're not is in parsing through those ideas as efficiently as we could, to decide which ones are worth actually investing more resources in.
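The "show me all the studies and tell me the aggregate result" query Brian imagines is, at bottom, meta-analysis over standardized records. A minimal sketch, with invented numbers and assuming each study reports a comparable effect size and standard error, is a fixed-effect inverse-variance pool:

```python
import math

# Hypothetical study records for one X-Y association: (effect_size, standard_error).
# All numbers are invented for illustration.
studies = [(0.30, 0.10), (0.12, 0.08), (0.25, 0.15), (0.05, 0.06)]

def fixed_effect_meta(studies):
    # Inverse-variance weighting: more precise studies count for more.
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

pooled, pooled_se = fixed_effect_meta(studies)
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)  # 95% CI for the aggregate
```

A real system would also need shared variable definitions and random-effects models to handle heterogeneity between studies; the inverse-variance weighting above is just the core aggregation idea.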

 

Talmud for Science

 

Ben: That makes a lot of sense. Actually, I've definitely come across many papers, just on the internet: you go on Google Scholar and you search and you find this paper, and in fact it has been refuted by another paper, and there's no way to know that. Yeah. So, does the Open Science Framework address that in any way?

Brian: No, it doesn't yet. And this is a critical issue: the connectivity between findings, and the updating of knowledge. Like I said, it does in an indirect way, but it doesn't in the systematic way that would actually solve this problem.

The [00:25:00] main challenge is that we treat papers as static entities, when what they're summarizing is happening very dynamically. Right? It may be that a year later, after that paper comes out, one realizes we should have analyzed that data totally differently; we actually analyzed it wrong; the way that we analyzed it is indefensible.

Right, right. But there are very few mechanisms for efficiently updating that paper in a way that would actually update the knowledge. Even in a case where we all agree that it was analyzed the wrong way, right, what are my options? I could retract the paper, so it's no longer in existence at all.

Supposedly, although even retracted papers still get cited, which is nuts. So that's a basic problem. Or I could write a correction, which is another paper that comments on the original paper, and which may not itself even be discoverable alongside the original paper that it corrects. Yeah, and that takes months and years.

[00:26:00] All right. So what I really think is fundamental for actually addressing this challenge is integrating version control with scholarly publishing, so that papers are seen as dynamic objects, not static objects. And so, you know, here's another milestone, if I could control everything: another milestone would be if a researcher could have a very productive career

only working on a single paper for his or her whole life. Right? So they have a really interesting idea, and they just continue to investigate it, and build the evidence, and challenge it, and figure it out, you know, just continue to unpack it, and they just revise that paper over time. This is what we understand

now; this is where it is now; this is what we've learned; over here are some exceptions. They just keep fine-tuning it, and then you get to see the versions of that paper over its [00:27:00] fifty-year history as that phenomenon got unpacked. That,

plus the integration with the other literature, would make this much more efficient for exactly the problem that you raised, which is that with papers,

we don't know what the current knowledge base is. We have no real good way, except for these attempts to summarize the existing literature with yet another paper, and that doesn't then supersede those old papers; it's just another paper. It's a very inefficient system.
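Brian's idea of integrating version control with publishing maps directly onto tools researchers already use for code. A hypothetical workflow (paths, messages, and identities here are invented for illustration, and git is assumed to be installed) might look like this:

```shell
# A paper as a dynamic, version-controlled object rather than a static PDF.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "author@example.com"  # placeholder identity
git config user.name "Example Author"

echo "Claim: X relates to Y (initial evidence, n=50)." > paper.md
git add paper.md
git commit -qm "v1: initial report"
git tag v1

echo "Update: effect holds only under condition Z (n=400)." >> paper.md
git add paper.md
git commit -qm "v2: revised after a replication attempt"
git tag v2

# A reader can see exactly how the claim evolved between any two versions:
git diff v1 v2 -- paper.md
```

Tagged versions give exactly what a correction or retraction notice struggles to provide: a discoverable, diffable history of the claim, all in one place.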

 

Can Social Sciences 'advance' in the same way as the physical sciences?

 

Ben: Ya know, that totally makes sense. Actually, I have sort of a meta question that I've argued with several people about, which is: do you feel like we can make advances in our understanding of [00:28:00] human-centered science in the same way that we can in, like, chemistry or physics? Like, we very clearly have building blocks of physics, and it builds on itself.

And I've had debates with people about whether you can do this in the humanities and the social sciences. What are your thoughts on that?

Brian: Yeah, it is an interesting question, and what seems to be the biggest barrier is not anything about methodology in particular, but complexity.

Yeah, right? The problem being that many different inputs can have similar impact (cause similar kinds of outcomes), and singular inputs can have multivariate outcomes that they influence, and all of those different inputs, as causal elements, may have interactive effects on the [00:29:00] outcomes. So how can we possibly develop rich enough theories to effectively predict, and then ultimately effectively explain, the actions of humans in complex environments?

It doesn't seem that we will get to the beautiful equations that underlie a lot of physics and chemistry and account for a substantial amount of the evidence. So the thing that I don't feel like I have any good handle on, along with that, is whether it's a theoretical or a practical limit. Right? Is it just not possible, because it's so complex and there isn't this predictability?

Or is it just that it's really damn hard, but if we had big enough computers, if we had enough data, if we were able to understand complex enough models, we would be able to predict it? Right? So, like Asimov's psychohistorians, right? They figured it out. [00:30:00] They could account for 99.9 percent of the variance in what people would do next. But of course, even there it went wrong, and that was sort of the basis of the whole series.

But yeah, I just don't know. I don't yet have a framework for thinking about how I could answer that question, whether it's a practical or a theoretical limit. Yeah. What do you think?

Ben: What do I think? I usually come down on the side that it's a practical limit, though how much it would take to get there might make it effectively a theoretical limit right now.

But there's nothing actually preventing us: if you could theoretically measure everything, why not? I [00:31:00] think it's really a measurement problem, and we do get better at measuring things. So that's where I come down on it. But I,

 

How do you shift incentives in science?

 

yep, that's just purely, like, I have no good argument. Going back to the incentives: I'm completely convinced that these changes would definitely accelerate the number of innovations that we have, and it seems like a lot of these changes require shifting scientists' incentives. That's a notoriously hard thing. So, two questions: how are you going about shifting those incentives right now, and how might they be shifted in the future?

[00:32:00] Brian: Yeah, that's a great question. That's what we spend a lot of our time worrying about, in the sense that there is very little, at least in my experience, disagreement on the problems, or on the opportunities for improving the pace of discovery and innovation with these kinds of solutions. It really is about the implementation.

How is it that you change those cultural incentives, so that we can align the values that we have for science with the practices that researchers do on a daily basis? And that's a social problem. Yeah, there are technical supports, but ultimately it's a social problem. And so the near-term approach that we have is to recognize the systems of rewards as they are,

and see how we could refine those to align with some of these improved practices. So we're not pitching 'let's all work on [00:33:00] Wikipedia,' because that is so far distant from what the systems reward, and from how scientists actually survive and thrive in science, that we wouldn't be able to get pragmatic traction.

Right? So I'll give one example (I can give a few, but here's a starting one) of a model that integrates with current incentives but changes them in a fundamental way, and that is the publishing model of registered reports. So in the standard process, right, I do my research, I write up my studies, and then I submit them for peer review at the most prestigious journal that I can, hoping that they will not see all the flaws and that they'll accept it.

If they don't, I do that process again at the A- journal, and then the B+ journal, and the C journal, and eventually it gets accepted somewhere. The registered report model makes one change to that process, and that is to move the critical point of peer review [00:34:00] from after the results are known, when I've written up the report and I'm all done with the research, to after I've figured out what question I want to investigate and what methodology I'm going to use. So I haven't observed the outcomes yet.

All I've done is frame a question, articulate why it's important, and lay out the methodology that I'm going to use to test that question, and that's what the peer reviewers evaluate, right? And the key part is that it fits into the existing system perfectly: the currency of advancement is publication.

I need to get as many publications as I can, in the most prestigious outlets that I can, to advance my career. We don't try to change that. Instead, we just try to change what the basis is for making a decision about publication, and moving the primary stage of peer review to before the results are known makes a fundamental change in what I'm being rewarded for as the author, [00:35:00] right?

Yeah. What I'm being rewarded for as the author in the current system is sexy results: get the best, most interesting, most innovative results I can, right? And the irony of that is that the results are the one thing I'm not supposed to be able to control in my study. Right? Right. What I am supposed to be able to control is asking interesting questions and developing good methodologies to test those questions.

Of course that's oversimplifying a bit. The presumption behind emphasizing results is that my brilliant insights at the outset of the project are the reason I was able to get those great results, right? But that depends on the credibility of that entire pipeline. Putting that aside, moving peer review to the design stage means that my incentive as an author is to ask the most important questions that I can.

And develop the most compelling, effective, and valid methodologies that I can to test them. [00:36:00] Yeah, and so that shifts things toward what it is we are presumably supposed to be rewarded for in science. There are a couple of other elements of the incentive landscape that this has an impact on that are important for the whole process too. Take reviewers, for instance.

When I am asked to review a paper in my area of research, and all the results are there, I have skin in the game as a reviewer. I'm an expert in that area. I may have made claims about things in that particular area. Yeah, and if the paper challenges my claims, I'm sure to find all kinds of problems with the methodology.

"I can't believe they did this, this is a ridiculous thing," right? Challenge my results? Well, forget about you. But of course if it's aligned with [00:37:00] my findings and cites me gratuitously, then I will find lots of reasons to like the paper.

So I have these twisted incentives to reinforce my findings and behave ideologically as a reviewer in the existing system. Moving peer review to the design stage fundamentally changes my incentives too. Say I'm in a very contentious area of research, and there are proponents and opponents of a particular claim. When we are dealing with results, you can predict the reviews, right? People behave ideologically even when they're not trying to.

When you don't know the results, both people have the same interests. If I truly believe in the phenomenon that I'm studying, and the opponents of my point of view also believe in their perspective, then we both want to review that study, that design, and that methodology to maximize its quality, to reveal the truth, which each of us thinks we [00:38:00] have.

And so that alignment actually makes adversaries, to some extent, allies in review, and makes the reviewer and the author more collaborative. The feedback that I give on that paper can actually help the methodology get better. Whereas in the standard process, when I say here are all the things you did wrong, all the author can say is, well geez, you're a jerk.

I can't do anything about that; I've already done the research, and so I can't fix it. Yeah. So shifting that earlier is much more collaborative and helps with that. Then the other question is the incentives for the journal, right? Journal editors have strong incentives of their own: they want readership.

They want to have impact. They don't want to be the one that destroyed their journal. And so [00:39:00] the incentives in the existing model are to publish sexy results, because more people will read those results, they might cite those results, they might get more attention for their journal, right? Shifting the evaluation to quality designs then shifts their priorities to publishing the most rigorous research, the most robust research, and to being valued based on that.

Yeah, so I'll pause there. There are lots of other things to say, but those I think are some critical changes to the incentive landscape that still fit into the existing way that research is done and communicated.

 

Don't people want to read sexy results?

 

Ben: Yeah. I have a bunch of questions. Just to poke at that last point a little bit: wouldn't people still read the journals that are publishing the sexiest results, sort of regardless of what stage they're doing that peer review?

Brian: Yeah. This is a key concern of editors in thinking about adopting registered reports. [00:40:00] So we have about a hundred twenty-five journals that are offering this now, but we continue to pitch it to other groups, and one of the big concerns that editors have is: if I do this, then I'm going to end up publishing a bunch of null results, and no one will read my journal, no one will cite it, and I will be the one that ruined my damn journal.

All right. So it is a reasonable concern because of the way the system works now. There are a couple of answers to that, but one is empirical: is it actually the case that these are less read or less cited than regular articles published in those journals? So we have a grant from the McDonnell Foundation to actually study registered reports.

And the first study that we finished is a comparison of articles that were done as registered reports with [00:41:00] articles published in the same journals that were done the regular way, to see if they differ in altmetrics, right, attention in media and news and social media, and also in citation impact, at least early-stage citation impact, because this model is new enough that it has only been operating since 2014 in terms of first publications.

And what we found, at least in this initial data set, is that there's no difference in citation rates, and if anything the registered report articles have gotten more altmetric impact: social media, news media. That's great. So at least the initial data suggest that; who knows if it will sustain or generalize.

But the conceptual argument that I would make is that if studies have been vetted without knowing the results, then these are important results to know, [00:42:00] right? That's what the editors and the reviewers have to decide: do we need to know the outcome of this study? Yeah. If the answer is yes, that this is an important enough question that we need to know what happened, then any result is valuable.

Yeah, right. The whole idea is that we're doing the study to find out what the world says about that particular hypothesis, that particular question. Yeah, so it becomes citable. Whereas when we're only evaluating based on the results, well, the things that pique people's interest are the ones where it's "that's crazy, but it happened."

Okay, that's exciting. But if you have a paper where it's "that's crazy" and nothing happened, then people say, well, that was a crazy paper. Yeah, and that paper would be less likely to get through the registered report kind of model.

Ben: That makes a lot of sense. You could even see a world where, because they're being pre-registered, especially for something more like the press, people can know to pay attention to it.

[00:43:00] So you can actually almost generate a little bit more hype, in terms of, oh, we're going to do this thing, isn't that exciting?

Brian: Yeah, exactly. So we have a reproducibility project in cancer biology that we're wrapping up now, where we sampled a set of studies and then tried to replicate findings from those papers, to see where we can reproduce findings and where there are barriers to being able to reproduce existing research.

And all of these went through the journal eLife as registered reports, so we got peer review from experts in advance to maximize the quality of the designs. And instead of just registering them on OSF, which they are, eLife also published the registered reports as articles of their own, and those did generate lots of interest in what's going to happen with this.

That, I think, is a very effective way to engage the community in the process of actual discovery. We don't know the answer to these [00:44:00] things. Can we build a community-based process that isn't just about "let me tell you about the great thing that I just found" and is more about "let me bring you into our process, here's how we're actually investigating this problem," right? Getting more of that community engagement, feedback, understanding, and insight all along the life cycle of the research, rather than just at the end point, which I think is much more inefficient than it could be.

 

Open Science in Competitive Fields and Scooping

 

Ben: Yeah. And on the note of pre-registering, have you seen how it plays out in extremely competitive fields? One of the worlds that I'm closest to is deep learning and machine learning research, and I have friends who keep what they're doing very, very secret because they're always worried about getting scooped. They're worried about someone basically doing the thing first, and I could see people being hesitant to write down and [00:45:00] publicize what they're going to do, because then someone else could do it. So how do you see that playing out, if at all?

Brian: Yeah, scooping is a real concern in the sense that people have it, and I think it is also a highly inflated concern based on the reality of what happens in practice. But nevertheless, because people have the concern, systems have to be built to address it.

Yeah, so: one simple answer on addressing the concern, and then reasons to be skeptical of it. On addressing the concern: with the OSF you can pre-register and embargo your pre-registrations for up to four years. What that does is still get you all the benefits of registering: committing, putting the plan into an external repository.

So you have independent verification of the time and date and of what you said you were going to do, but it then gives you as the researcher the flexibility to [00:46:00] say: I need this to remain private for some period of time, for whatever reason I need it to be private, right? I don't want the research participants that I'm engaging in this project to discover what the design is, or I don't want competitors to discover what the design is.

So that is a pragmatic solution that sort of says: okay, you've got that concern, let's meet that concern with technology to help manage the current landscape. But there are a couple of reasons to be skeptical that the concern is actually much of a real concern in practice, and one example comes from preprints.

So a lot of people, when they preprint, are sharing the paper they have from some area of research prior to its going through peer review and being published in a journal, right? In some domains, like physics, it is standard practice: anybody in physics shares their research through arXiv, which is housed at Cornell, prior to [00:47:00] publication.

In other fields it's very new or unknown, but emerging. And the exact same concern about scooping comes up regularly, where people say: there are so many people in our field; if I share a preprint, someone else with a productive lab is going to see my paper, and they're going to run the studies really fast.

They're going to submit it to a journal that publishes quickly, and then I'll lose my publication because it'll come out in this other one first, right? That's a commonly articulated concern. I think there are very good reasons to be skeptical of it in practice, and the experience of arXiv is a good example.

It's been operating since 1991. Physicists early in its life articulated similar kinds of concerns, and none of them have that concern now. Why don't they have that concern now? Well, the norms have shifted: the way you establish priority [00:48:00] is not when it's published in the journal, it's when you get it onto arXiv.

Right? Right. So a new practice becomes the standard for when it is that the community knows about what it is you did; that's the way you get that first-finder accolade, and it still carries through to things like publication. A second reason is that we all have a very inflated sense of self-importance, that our ideas are great, right?

There's an old saw in venture capital: take your best idea and try to give it to your competitor, and most of the time you can't. We think our own ideas are really amazing, and everyone else doesn't; people aren't sitting around waiting to steal other people's ideas. So the idea that there are people licking their chops, waiting for your paper or your registration to show up so they can steal your [00:49:00] idea and then use it and claim it as their own, is, well, great.

It shows high self-esteem, and that's great; I am all for high self-esteem. And then the last part is that it is a norm violation, to such a strong degree, to do that stealing and not credit someone else for their work. But it's actually very addressable in the daily practice of how science operates.

If you can show that you put that registration or that paper up on an independent service, and that it appeared prior to the other group doing it, and then that other group did try to steal it and claim it as their own, well, that's misconduct. If they don't credit you as the originator, that's a norm violation of how science operates, and I'm actually pretty confident in the process of dealing with norm [00:50:00] violations in the scientific community.

I've had my own experience with this. I think it very rarely happens, but I have had an experience with it. I've posted papers on my website since before there were preprint services in the behavioral sciences, ever since I've been a faculty member. And I got a Google Scholar alert one day; I have these alerts set up for things that are related to my work. I was reading the papers, and one showed up and I was like, oh, that sounds related to some things I've been working on.

So I clicked on the link to the paper and went to the website, and I'm reading the paper, from these authors I didn't recognize, and then I realized: wait, that's my paper. It took me a second. I'm an author, and I didn't submit it to that journal. And it was my paper.

They had taken a paper off of my website. They had changed the abstract: they'd run it through Google Translate, so it looked like it was all gobbledygook, but it was an abstract. The rest of it was [00:51:00] essentially a carbon copy of our paper, and they published it. Well, you know, so what did I do? I contacted the editor, and there's actually a story on Retraction Watch about someone stealing my paper; we were laughing about it, and it got retracted.

And as far as we heard, the person who had done it lost their job, though I don't know if that's true; I never followed up. But the basic point is that there are systems in place to deal with the egregious forms of this. And so I am sanguine about those not being real issues, but I also recognize that they are real concerns.

And so we have to have our technology solutions be able to address the concerns as they exist today, and I think those concerns will just disappear as people gain experience.

 

Top Down vs. Bottom Up for Driving Change

 

Ben: Got it. I like that distinction between issues and concerns, that they may not be the same thing. I've also been paying attention to the tactics that you're [00:52:00] taking to drive this adoption.

There have been some bottom-up things, in terms of changing the culture and getting one journal at a time to change just by convincing them, and there have also been some top-down approaches that you've been using. I was wondering if you could go through those and say what you feel is the most effective, or what combination of things is the most effective, for really driving this change.

Brian: Yeah, it's a good question, because culture change is hard, especially in a decentralized system like science, where there is no boss and the different incentive drivers are highly distributed.

Right, right. Each researcher has a unique set of societies that are relevant to establishing their norms, funders that fund their work, a unique set of journals that they publish in, and their own institution. So every researcher [00:53:00] has a unique combination of those, and they all play a role in shaping the incentives for his or her behavior.

And so fundamental change, if we're talking just at the level of incentives, not even at the level of values and goals, requires a massive shift across all of those different sectors. Not massive in terms of how much each needs to shift, but in the number of groups that need to make the decision to shift. Yeah, and so we need both top-down and bottom-up efforts to address that. The top-down ones that we work on, at least, are largely focused on the major stakeholders.

So funders, institutions, and societies, particularly ones that are publishing, right, so journals, whether through publishers or societies. Can we get them, with something like the TOP Guidelines, which is a framework that has been established to promote transparency standards, to ask: what could we [00:54:00] require of authors, or grantees, or employees of our organizations?

Those, as a common framework, provide a mechanism to try to convince these different stakeholders to adopt new standards, new policies, that everybody associated with them then has to follow or is incentivized to follow.

At the same time, those kinds of interventions don't necessarily win hearts and minds, and a lot of the real work in culture change is getting people to internalize what it means to do good science, rigorous work. That requires a very bottom-up, community-based approach to how norms get established within what are effectively very siloed, very small-world scientific communities that are part of the larger research community.

And so with that we do a lot [00:55:00] of outreach to groups, starting with the idealists, right, people who already want to do these practices, who are already practicing rigorous research. How can we give them resources and support to work on shifting the norms in their small-world communities?

So, for example, with the preprint services that we host, or other services that allow groups to form, they can organize around a technology, a preprint service that their community runs, and then drive the change from the basis of that particular technology solution in a bottom-up way. And the great part is that, to the extent that both of these are effective, they become self-reinforcing. A lot of stakeholder leaders, the editor of a journal, say, will tell us that they are reluctant.

They agree with all the things that we're trying to pitch to them as ways to improve rigor in [00:56:00] research practices, but they don't have the support of their community yet, right? They need to have people on board with this. Well, the bottom-up work provides that backing for the leader to make a change. And likewise, leaders who are more assertive and willing to take some chances can help drive attention and awareness in a way that helps fledgling bottom-up communities gain better standing and more impact.

So we really think that the combination of the two is essential to get true culture change, rather than bureaucratic adoption of a process that someone told me I now have to do, which could be totally counterproductive to scientific efficiency and innovation, as you described.

Ben: Yeah, that seems like a really great place to end. I know you have to get running, so I'm really grateful. [00:57:00] This has been amazing, and thank you so much.

Brian: Yeah, my pleasure.