Trying to eliminate bias within the reviewing process
As empirical social scientists, we hope to be unbiased and guided by data, but the reality is that we all have our own biases, which can influence the review process.
For instance, some potential authors rightfully worry that reviewers may perceive certain types of data, or data from specific places, as unreliable.
We hope that JSCM’s record of publishing all types of empirical research, using data from many parts of the globe, counters that concern, while still acknowledging that each reviewer brings their own biases to the process.

Some recent examples of different methods:

Case Studies – A Mid-Range Theory of Control and Coordination in Service Triads, by Marko Bastl, Mark Johnson & Max Finne

Secondary Data – Supply Chain Power and Real Earnings Management: Stock Market Perceptions, Financial Performance Effects, and Implications for Suppliers, by Danny Lanier Jr., William F. Wempe & Morgan Swink

A very recent example of data not from North America:

Data from Germany and Japan – Managing Coopetition in Supplier Networks: A Paradox Perspective, by Miriam Wilhelm & Jörg Sydow
As editors we cannot control these individual biases, but we can and do try and provide a fair and developmental review process by doing the following:
First, our reviewer pool, like the JSCM community, is global.
Our Editorial Review Board includes members from over 18 countries.
Second, all reviews are double-blind. We don’t want an early-stage researcher to be discounted for not yet having a name, nor do we want a researcher at a university with a low research profile to be judged based on that profile. Research should be judged on its contribution and fit with JSCM, not on the authors or where they work. Double-blind review helps guarantee this happens.
Third, we generally use a review team of three reviewers plus an AE. While we all bring our biases to the process, having four people involved rather than two or three reduces the influence of any one person. In addition, using three reviewers makes it easier to maintain our standards: if a reviewer does not turn in a developmental review, or if their review is otherwise unprofessional, we need not use it. We also follow up with reviewers who are not being developmental to make sure our expectations are clear. Reviewers who do not meet our community’s standards will not be reviewers for JSCM for long.
Fourth, reviewers are selected based on their topical or methodological expertise; reviewers will typically have worked in the same space and/or have used the same empirical tools. Another benefit of the larger review team is that it gives us more room to include both topical and methodological experts. Finally, when we use new reviewers, we typically add them as a fourth reviewer in case they do not yet understand the community’s expectations, and new reviewers are also given feedback from the AE and the Editor to help them improve.
Is this process bias-free? Of course not. That is why we are trying to engage with the community more frequently and in new ways. If you have actionable suggestions on how we can improve, we would like to hear them.