This is Lonni Besançon, new contributor to this blog. A quick introduction before I dive into the topic of this first post. I'm an assistant professor at Linköping University in Sweden. My research focus is on data visualization (which Jessica often talks about here). In particular, I've become more and more interested in the idea of avoiding common pitfalls of dichotomous thinking with statistics thanks to different visual representations. This idea was recently discussed on the blog here. This research and my interest in metascience and methodological work have slowly pushed me to focus more and more on the importance of transparency (see e.g., my analysis of the lack of transparency of the papers published in the first 6 months of COVID-19, including 200+ papers reviewed in a day or less, with 30% of them having editorial conflicts of interest), the importance of post-publication peer review (e.g., a successful re-analysis of published results, and I say successful because Scientific Reports eventually retracted the paper), and the importance of ethics and sleuthing efforts.
Expect my contributions to the blog to be on these topics; I hope that I can bring some conversations and awareness around them.
This brings me to today's post. With collaborators, we analyzed 456 clinical/microbiology papers from a single institute to look at the ethics approvals they mentioned. Our findings are published in the journal Research Integrity and Peer Review. We have some concerns: we found 248 heterogeneous studies (different samples, different populations, different countries) sharing the same single ethics approval number. You can find a detailed breakdown of how these studies differ below. We found similar problems (e.g., recruitment of participants before ethics approval was obtained) in another set of 200 papers.
![](https://statmodeling.stat.columbia.edu/wp-content/uploads/2023/08/Screenshot-2023-08-22-at-10.01.33-1024x709.png)
What's worrying here? First, there are no real COPE guidelines on what can be done at the editorial or whistleblower level. Second, we have reached out to all editors who published these papers, and not many have replied yet. This is perhaps unsurprising, since such concerns always take time to resolve, but we believe that an expression of concern (EoC) could simply be appended to these articles while investigations take place. In addition, an overwhelming majority of these papers were published in journals with which the authors had editorial conflicts of interest, as seen below (see this excellent paper by Scanff et al. on nepotism in science, which features some of this institute's research, or our Open Science during COVID paper again).
We have not yet had the chance to check what the ethics approval that was reused 248 times might cover, so for now we only raise concerns and cannot conclude anything beyond the number of reuses and the nature of the studies.
![](https://statmodeling.stat.columbia.edu/wp-content/uploads/2023/08/Screenshot-2023-08-22-at-10.04.01-1024x814.png)
Following these findings, we recommend that submission processes be amended to include ethics information/approval numbers for clinical research as structured metadata, so that an analysis like ours becomes easier and perhaps so that cross-checking whether an ethics approval may have been abused becomes easier.
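To illustrate the kind of automated cross-check this would enable, here is a minimal sketch in Python. The records, field names, and approval numbers are hypothetical, not taken from our dataset; the point is only that once approval numbers are collected as structured submission metadata, flagging reuse across heterogeneous studies becomes a trivial query.

```python
from collections import defaultdict

# Hypothetical records: what a submission system might store if ethics
# approval numbers were collected as structured metadata at submission time.
papers = [
    {"id": "paper-001", "ethics_id": "IRB-2016-042", "country": "IT", "population": "adults"},
    {"id": "paper-002", "ethics_id": "IRB-2016-042", "country": "IR", "population": "children"},
    {"id": "paper-003", "ethics_id": "IRB-2018-113", "country": "IT", "population": "adults"},
]

# Group papers by ethics approval number.
by_approval = defaultdict(list)
for p in papers:
    by_approval[p["ethics_id"]].append(p)

# Flag approval numbers shared by studies that differ in country or
# population -- the kind of heterogeneity that would warrant a closer look.
for ethics_id, group in by_approval.items():
    if len(group) > 1:
        countries = {p["country"] for p in group}
        populations = {p["population"] for p in group}
        if len(countries) > 1 or len(populations) > 1:
            ids = ", ".join(p["id"] for p in group)
            print(f"{ethics_id}: reused across heterogeneous studies ({ids})")
```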