Reviewers from previous journal submissions

Did you just write a brilliant peer review for an economics (or social science, policy, etc.) journal? Your work should not be wasted; there should be a way to share your insights and get credit!

Consider transforming these insights into a public "independent evaluation" for The Unjournal. This will benefit the community and help make research better and more impactful. We can also share your work and provide you feedback. This will help you build a portfolio with The Unjournal, making it more likely we'll hire you for paid work and compensate you at the higher rate. We also offer prizes for the best work.

You can do this anonymously, or sign your name.

To say this in more detail:

Journal peer review is critical for assessing and improving research, but too often these valuable discussions remain hidden behind closed doors. By publishing a version of your review, you can:

  1. Amplify the impact of your reviewing efforts by contextualizing the research for a broader audience

  2. Facilitate more transparent academic discussions around the strengths and limitations of the work

  3. Get public recognition for your peer review contributions, which are often unseen and unrewarded

  4. Reduce overall reviewing burdens by allowing your assessment to be reused

  5. Support a culture of open scholarship by modeling constructive feedback on public research

Considerations: The reviewer "owns the review", subject to constraints and norms

According to the COPE discussion document "Who 'owns' peer reviews?" (emphasis added):

While the depth of commentary may vary greatly among reviews, given the minimal thresholds set by copyright law, it can be presumed that most reviews meet the requirements for protection as an “original work of authorship”. As such, in the absence of an express transfer of copyright or a written agreement between the reviewer and publisher establishing the review as a “work for hire”, it may be assumed that, by law, the reviewer holds copyright to their reviewer comments and thus is entitled to share the review however the reviewer deems fit...

The COPE council notes precisely the benefits we are aiming to unlock. They mention an 'expectation of confidentiality' that seems incompletely specified.

For example, reviewers may wish to publish their reviews in order to demonstrate their expertise in a subject matter and to contribute to their careers as a researcher. Or they may see publication of their reviews as advancing discourse on the subject and thus acting for the benefit of science as a whole. Nevertheless, a peer reviewer’s comments are significantly different from many other works of authorship in that they are expressly solicited as a work product by a journal and—whatever the peer review model—are subject to an expectation of confidentiality. However, without an express agreement between the journal and the reviewer, it is questionable whether such obligation of confidentiality should be considered to apply only until a final decision is reached on the manuscript, or to extend indefinitely.

Permission and journals' constraints

Several journals explicitly agree that reviewers are welcome to publish the content of their reviews, with some important caveats. The Publish Your Reviews initiative gathered public statements from several journals and publishers confirming that they support reviewers posting their comments externally. However, they generally ask reviewers to remove any confidential information before sharing their reviews. This includes: the name of the journal, the publication recommendation (e.g., accept, revise, or reject), and any other details the journal or authors considered confidential, such as unpublished data.

For these journals, we are happy to accept and share/link the verbatim content as part of an independent Unjournal evaluation.

But even for journals that have not signed on to this, as the COPE document notes, your peer review is your intellectual property; it is not owned by the journal!

There may be some terms and conditions you agreed to as part of submitting a referee report. Please consult these carefully.

However, you are still entitled to share your own expert opinions on publicly shared research. You may want to rewrite the review somewhat. You should make it clear that it refers to the publicly shared (working paper/preprint) version of the research, not the version the journal shared with you in confidence. As above, you should probably not mention the journal name, the decision, or any other sensitive information. You need not even mention that you reviewed the paper for a journal.

Even if a journal considers the specific review confidential, this doesn't prevent the reviewer from expressing their independent assessment elsewhere.

Make a difference

As an expert reviewer, you have unique insights that can improve the quality and impact of research. Making your assessment available through The Unjournal amplifies the reach and value of your efforts. You can publish evaluations under your name or remain anonymous.

Ready to make your peer reviews work harder for science? Consider submitting an independent evaluation for recognition and rewards, and to improve research. Contact us anytime at contact@unjournal.org for guidance. We look forward to unlocking your valuable insights!

Press releases

Impactful research prize winners

SFF Grant

Impactful Research Prize Winners

The Unjournal is delighted to announce the winners of our inaugural Impactful Research Prize. We are awarding our first prize to Takahiro Kubo (NIES Japan and Oxford University) and co-authors for their research titled "Banning wildlife trade can boost demand". The paper stood out for its intriguing question, the potential for policy impact, and methodological strength. We particularly appreciated the authors’ open, active, and detailed engagement with our evaluation process.

The second prize goes to Johannes Haushofer (NUS Singapore and Stockholm University) and co-authors for their work "The Comparative Impacts of Cash Transfers and a Psychotherapy Program on Psychological and Economic Wellbeing". Our evaluators rated this paper among the highest across a range of metrics. It was highly commended for its rigor, the importance of the topic, and the insightful discussion of cost-effectiveness.

We are recognizing exceptional evaluators for credible, insightful evaluations. Congratulations to Phil Trammell (Global Priorities Institute at the University of Oxford), Hannah Metzler (Complexity Science Hub Vienna), Alex Bates (independent researcher), and Robert Kubinec (NYU Abu Dhabi).

We would like to congratulate all of the winners on their contributions to open science and their commitment to rigorous research. We also thank the other authors who submitted their work but were not selected this time; we received many excellent submissions, and we are committed to supporting authors beyond this research prize.

Please see the full press release, as well as award details, below and linked here:

Outreach texts

An important part of making this a success will be spreading the word: getting positive attention for this project, bringing important players on board, generating network externalities, and changing the equilibrium. We are also looking for specific feedback and suggestions from "mainstream academics" in Economics, Psychology, and policy/program evaluation, as well as from the Open Science and EA communities.

Key points to convey

See

As social media blurbs

Good news (funding)

The "Unjournal" is happening, thanks to ACX and the LTFF! We will be organizing and funding:

  • Journal-independent peer review and rating,

  • ... of projects (not just "pdf-imprisoned papers"),

  • focusing on Economics, Psychology, and Impact Evaluation research,

  • relevant to the world's most pressing problems and most effective solutions.

Target: Academics, not necessarily EA-aligned. But I don't think this is deceptive, because the funders' identities should tip off anyone who digs, and ultimately The Unjournal might also go beyond EA-relevant work.

Tone: Factual, positive

Journal rents and hoops

Do you love for-profit journals

  • taking your labor and selling it back to your university library?

  • making you jump through arcane hoops to "format your article"?

  • forcing you through inscrutable sign-in processes?

Then please don't bother with The Unjournal.

Target: Academics (not necessarily EA-aligned) who are frustrated with this stuff.

Tone: Sarcastic, irreverent, trying to be funny

Breaking out of the bad equilibrium

Journals: Rent-extracting, inefficient, pdf-prisons, gamesmanship. But no researcher can quit them.

Until The Unjournal: Rate projects, shared feedback, pay reviewers.

No trees axed to print the latest "Journal of Fancy Manuscripts." We just evaluate the most impactful work.

Target, Tone: Same as above, but less sarcastic, using language from Economics … maybe also appealing to library and university admin people?

(Longer version of above)

Traditional academic journals: Rent-extracting, inefficient, delaying innovation. But no researcher or university can quit them.

Or maybe we do have some escape bridges. We can try to Unjournal. Projects get rated, feedback gets shared, reviewers get paid. No trees get chopped down to print the latest "Journal of Fancy Manuscripts." We are starting small, but it only takes one domino.

Disgruntled researchers, the wasteful journal game

Your paper got rejected after two glowing reviews? Up for tenure? How many more journals will you have to submit it to? Will you have to make the same points all over again? Or will the new referees tell you the exact opposite of the last ones?

Don't worry, there's a new game in town: The Unjournal. Submit your work. Get it reviewed and rated. Get public feedback. Move on . . . or continue to improve your project and submit it wherever else you like.*

*And we are not like the "Berkeley Electronic Press". We will never sell out, because we have nothing to sell.

Aim, tone: Similar to the above

Projects not (just) papers

Tired of the 'pdf prison'? Got...

  • a great web interface for your project, with expandable explanations

  • an R-markdown dynamic document, with interactive tools, data, and code.

  • or your software or data is the project.

Can't submit it to a journal but need feedback and credible ratings? Try The Unjournal.

Target: More open-science and tech-savvy people

Peer reviewers should get paid and have their feedback matter

Referee requests piling up? You better write brilliant reviews for that whopping $0, so the author can be annoyed at you and they can disappear into the ether. Or you can help The Unjournal, where you get paid for your work, and reviews become part of the conversation.

Aim, tone: similar to 2–3 above

Research should target global priorities

Social science research:

  • builds methods of inferring evidence from data;

  • builds clear logical arguments;

  • helps us understand behavior, markets, and society; and

  • informs "policy" and decision making . . . but for whom and for what goal?

The US government and traditional NGOs are often the key audience (and funders). "It's easier to publish about US data and US policy," they say. But most academics think more broadly than that. And Economics as a field has historically aimed at "the greatest social good." The Unjournal will prioritize research that informs the most effective interventions and global priorities, for humanity (and animals) now and in the future.

Target: EAs and EA-aligned researchers, researchers who might be "converted"

Tone: Straightforward, idealistic

EA organizations/researchers need feedback and credibility

You are a researcher at an organization trying to find the most effective ways to improve the world, reduce suffering, prevent catastrophic risks, and improve the future of humanity. You, your team, your funders, and the policymakers you want to influence . . . they need to know if your methods and arguments are strong, and if your evidence is believable. It would be great if academic experts could give their honest feedback and evaluation. But who will evaluate your best work, and how will they make this credible? Maybe The Unjournal can help.

Target: Researchers and research-related ops people at EA and EA-adjacent orgs. Perhaps OP in particular.

Tone: Casual but straightforward

How and where to promote and share

Pitch to ACX (and LTFF) media

  • ACX will announce this; I shared some text

  • Post on ACX substack

The Unjournal is in large part about shifting the equilibrium in academia/research. As I said in the application, I think most academics and researchers are happy and ready for this change but there's a coordination problem to resolve. (Everyone thinks "no one else will get on this boat," even though everyone agrees it's a better boat). I would love to let ACX readers (especially those in research and academia) know there's a "new game in town." Some further key points (please let me know if you think these can be stated better):

  • The project space is unjournal.org, which I'd love to share with the public ... to make it easy, it can be announced as "bit.ly/eaunjournal" as in "bitly dot com EA unjournal"... and everyone should let me know if they want editor access to the gitbook; also, I made a quick 'open comment space' in the Gdoc HERE.

  • I'm looking for feedback and for people interested in being part of this, and for 'nominations' of who might be interested (in championing this, offering great ideas, being part of the committee)

  • We will put together a committee to build some consensus on a set of workable rules and standards (especially for "how to choose referees," "what metrics should they report," and "how to define the scope of EA-relevant work to consider"). But we won't "hold meetings forever"; we want to build an MVP soon.

  • I think this could be a big win for EA and RP "getting more relevant research," for improving academia (and ultimately replacing the outdated system of traditional journals), and for building stronger ties between the two groups.

  • Researchers should know:

    • We will pay reviewers to offer feedback, assessment, and metrics, and reviews will be public (but reviewers might be anonymous -- this is a discussion point).

    • We will offer substantial cash prizes for the best projects/papers, and will likely ask the winners to present their work at an online seminar

    • You'll be able to submit your research project/paper to the unjournal (or recommend others' work) at any point in the "publication process"; it is not exclusive, and will not prevent you from 'publishing elsewhere'

    • You're encouraged to submit (time-stamped) 'projects' including dynamic documents connected to data, and interactive presentations

Social media/forums, etc (see Airtable 'media_roll')

Social media

  1. Twitter: Academia (esp. Econ, Psych, Global Health), Open science, EA

  2. Facebook

EA Forum post (and maybe AMA?)

EA orgs

Open science orgs (OSF, BITSS, ...)

Academic Economics (& other fields) boards/conferences/groups?

Universities/groupings of universities

Slack groups

  • Global EA

  • EA Psychology

  • Open Science MOOC?

Jobs and paid projects with The Unjournal

19 Feb 2024: We are not currently hiring, but expect to do so in the future.

To indicate your potential interest in roles at The Unjournal, such as those described below, please fill out this quick survey form and link (or upload) your CV or webpage.

  • If you already filled out this form for a role that has changed titles, don’t worry. You will still be considered for relevant and related roles in the future.

  • If you add your name to this form, we may contact you to offer you the opportunity to do paid project work and paid work tasks.

Furthermore, if you are interested in conducting paid research evaluation for The Unjournal, or in joining our advisory board, please complete the form linked here.

Feel free to contact contact@unjournal.org with any questions.

Quick links to role descriptions below

Administration, operations and management roles

Research & operations-linked roles & projects

Standalone project: Impactful Research Scoping (temp. pause)

Additional information

Express interest in any of these roles in our survey form.

The Unjournal, a not-for-profit collective under the umbrella and fiscal sponsorship of the Open Collective Foundation, is an equal-opportunity employer and contractor. We are committed to creating an inclusive environment for all employees, volunteers, and contractors. We do not discriminate on the basis of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetic information, disability, age, or veteran status.

See our data protection statement linked here.

In addition to the jobs and paid projects listed here, we are expanding our management team, advisory board, field specialist team pool, and evaluator pool. Most of these roles involve compensation/honorariums. See Advisory/team roles (research, management)

Related articles and work

We are not the only ones working and advocating in this space. For a small tip of the iceberg...

Improving peer review in Economics:

Evidence base

  • The effect of publishing peer review reports on referee behavior in five scholarly journals

  • Improving Peer Review in Economics: Stocktaking and Proposals

  • What Policies Increase Prosocial Behavior? An Experiment with Referees at the Journal of Public Economics

  • Are non-monetary rewards effective in attracting peer reviewers? A natural experiment

Previous updates

Progress notes since last update

"Progress notes": We will keep track of important developments here before we incorporate them into the ." Members of the UJ team can add further updates here or in this linked Gdoc; we will incorporate changes.

Update on recent progress: 21 July 2023

Funding

The SFF grant is now 'in our account' (all is public and made transparent on our OCF page). This makes it possible for us to

  • move forward in filling staff and contractor positions (see below); and

  • increase evaluator compensation and incentives/rewards (see below).

We are circulating a press release sharing our news and plans.

Timelines and pipelines

Our "Pilot Phase," involving ten papers and roughly 20 evaluations, is almost complete. We just released the evaluation package for "The Governance Of Non-Profits And Their Social Impact: Evidence from a Randomized Program In Healthcare In DRC.” We are now waiting on one last evaluation, followed by author responses and then "publishing" the final two packages at https://unjournal.pubpub.org/. (Remember: we publish the evaluations, responses and synthesis; we link the research being evaluated.)

We will make decisions and award our Impactful Research Prize (and possible seminars) and evaluator prizes soon after. The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise). The choices will be largely driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.

"What research should we prioritize for evaluation, and why?"

We continue to develop processes and policy around which research to prioritize. For example, we are considering whether we should set targets for different fields, for related outcome "cause categories," and for research sources. This discussion continues among our team and with stakeholders. We intend to open up the discussion further, making it public and bringing in a range of voices. The objective is to develop a framework and a systematic process to make these decisions. See our expanding notes and discussion on What is global-priorities relevant research?

In the meantime, we are moving forward with our post-pilot “pipeline” of research evaluation. Our management team is considering recent prominent and influential working papers from the National Bureau of Economic Research (NBER) and beyond, and we continue to solicit submissions, suggestions, and feedback. We are also reaching out to users of this research (such as NGOs, charity evaluators, and applied research think tanks), asking them to identify research they particularly rely on and are curious about. If you want to join this conversation, we welcome your input.

(Paid) Research opportunity: to help us do this

We are also considering hiring a small number of researchers to each do a one-off (~16 hours) project in “research scoping for evaluation management.” The project is sketched at Unjournal - standalone work task: Research scoping for evaluation management; essentially, summarizing a research theme and its relevance, identifying potentially high-value papers in this area, choosing one paper, and curating it for potential Unjournal evaluation.

We see a lot of value in this task and expect to actually use and credit this work.

If you are interested in applying to do this paid project, please let us know through our CtA survey form here.

Call for "Field Specialists"

Of course, we can't commission the evaluation of every piece of research under the sun (at least not until we get the next grant :) ). Thus, within each area, we need to find the right people to monitor and select the strongest work with the greatest potential for impact, and where Unjournal evaluations can add the most value.

This is a big task and there is a lot of ground to cover. To divide and conquer, we’re partitioning this space (looking at natural divisions between fields, outcomes/causes, and research sources) amongst our management team as well as among what we now call...

"Field Specialists" (FSs), who will

  • focus on a particular area of research, policy, or impactful outcome;

  • keep track of new or under-considered research with potential for impact;

  • explain and assess the extent to which The Unjournal can add value by commissioning this research to be evaluated;

  • “curate” these research objects: adding them to our database, considering what sorts of evaluators might be needed, and what the evaluators might want to focus on; and

  • potentially serve as an evaluation manager for this same work.

Field specialists will usually also be members of our Advisory Board, and we are encouraging expressions of interest for both together. (However, these don’t need to be linked in every case.)

Interested in a field specialist role or other involvement in this process? Please fill out this general involvement form (about 3–5 minutes).

Setting priorities for evaluators

We are also considering how to set priorities for our evaluators. Should they prioritize:

  • Giving feedback to authors?

  • Helping policymakers assess and use the work?

  • Providing a 'career-relevant benchmark' to improve research processes?

We discuss this topic here, considering how each choice relates to our Theory of Change.

Increase in evaluator compensation, incentives/rewards

We want to attract the strongest researchers to evaluate work for The Unjournal, and we want to encourage them to do careful, in-depth, useful work. We've increased the base compensation for (on-time, complete) evaluations to $400, and we are setting aside $150 per evaluation for incentives, rewards, and prizes.

Please consider signing up for our evaluator pool (fill out the good old form).

Adjacent initiatives and 'mapping this space'

As part of The Unjournal’s general approach, we keep track of (and keep in contact with) other initiatives in open science, open access, robustness and transparency, and encouraging impactful research. We want to be coordinated. We want to partner with other initiatives and tools where there is overlap, and clearly explain where (and why) we differentiate from other efforts. This Airtable view gives a preliminary breakdown of similar and partially-overlapping initiatives, and tries to catalog the similarities and differences to give a picture of who is doing what, and in what fields.

Also to report

New Advisory Board members

  • Gary Charness, Professor of Economics, UC Santa Barbara

  • Nicolas Treich, Associate Researcher, INRAE, Member, Toulouse School of Economics (animal welfare agenda)

  • Anca Hanea, Associate Professor, expert judgment, biosciences, applied probability, uncertainty quantification

  • Jordan Dworkin, Program Lead, Impetus Institute for Meta-science

  • Michael Wiebe, Data Scientist, Economist Consultant; PhD University of British Columbia (Economics)

Tech and platforms

We're working with PubPub to improve our process and interfaces. We plan to take on a KFG membership to help us work with them closely as they build their platform to be more attractive and useful for The Unjournal and other users.

Our hiring, contracting, and expansion continues

  • Our next hiring focus: Communications. We are looking for a strong writer who is comfortable communicating with academics and researchers (particularly in economics, social science, and policy), journalists, policymakers, and philanthropists. Project-based.

  • We've chosen (and are in the process of contracting) a strong quantitative meta-scientist and open science advocate for the project: “Aggregation of expert opinion, forecasting, incentives, meta-science.” (Announcement coming soon.)

  • We are also expanding our Management Committee and Advisory Board; see calls to action.

Potentially relevant events in the outside world

  • Institute for Replication grant

  • Clusterfake

Update on recent progress: 1 June 2023

Update from David Reinstein, Founder and Co-Director

A path to change

With the recent news, we now have the opportunity to move forward and really make a difference. I think The Unjournal, along with related initiatives in other fields, should become the place policymakers, grant-makers, and researchers go to consider whether research is reliable and useful. It should be a serious option for researchers looking to get their work evaluated. But how can we start to have a real impact?

$\text{Awareness} \cap \text{Credibility} \cap \text{Scale} \rightarrow \text{Impact}$

Over the next 18 months, we aim to:

  1. Build awareness: (Relevant) people and organizations should know what The Unjournal is.

  2. Build credibility: The Unjournal must consistently produce insightful, well-informed, and meaningful evaluations and perform effective curation and aggregation of these. The quality of our work should be substantiated and recognized.

  3. Expand our scale and scope: We aim to grow significantly while maintaining the highest standards of quality and credibility. Our loose target is to evaluate around 70 papers and projects over the next 18 months while also producing other valuable outputs and metrics.

I sketch these goals HERE, along with our theory of change, specific steps and approaches we are considering, and some "wish-list wins." Please feel free to add your comments and questions.

The pipeline flows on

While we wait for the new grant funding to come in, we are not sitting on our hands. Our "pilot phase" is nearing completion. Two more sets of evaluations have been posted on our PubPub.

  1. “Banning wildlife trade can boost demand for unregulated threatened species”

  2. "The Governance Of Non-Profits And Their Social Impact: Evidence From A Randomized Program In Healthcare In DRC”

With three more evaluations already in progress, this will yield a total of 10 evaluated papers. Once these are completed, we will decide on, announce, and award the Impactful Research Prize and the evaluator prizes, and organize online presentations/discussions (maybe linked to an "award ceremony"?).

Contracting, hiring, expansion

No official announcements yet. However, we expect to be hiring (on a part-time contract basis) soon. This may include roles for:

  • Researchers/meta-scientists: to help find and characterize research to be evaluated, identify and communicate with expert evaluators, and synthesize our "evaluation output"

  • Communications specialists

  • Administrative and Operations personnel

  • Tech support/software developers

Here's a brief and rough description of these roles. And here’s a quick form to indicate your potential interest and link your CV/webpage.

You can also (or alternatively) register your interest in doing (paid) research evaluation work for The Unjournal, and/or in being part of our advisory board, here.

We also plan to expand our Management Committee; please reach out if you are interested or can recommend suitable candidates.

Tech and initiatives

We are committed to enhancing our platforms as well as our evaluation and communication templates. We're also exploring strategies to nurture more beneficial evaluations and predictions, potentially in tandem with replication initiatives. A small win: our Mailchimp signup should now be working, and this update should be automatically integrated.

Welcoming new team members

We are delighted to welcome Jordan Dworkin (FAS) and Nicolas Treich (INRAE/TSE) to our Advisory Board, and Anirudh Tagat (Monk Prayogshala) to our Management Committee!

  • Dworkin's work centers on "improving scientific research, funding, institutions, and incentive structures through experimentation."

  • Treich's current research agenda largely focuses on the intersection of animal welfare and economics.

  • Tagat investigates economic decision-making in the Indian context, measuring the social and economic impact of the internet and technology, and a range of other topics in applied economics and behavioral science. He is also an active participant in the COS SCORE project.

Update on recent progress: 6 May 2023

Grant funding from the Survival and Flourishing Fund

The Unjournal was recommended/approved for a substantial grant through the 'S-Process' of the Survival and Flourishing Fund. More details and plans to come. This grant will help enable The Unjournal to expand, innovate, and professionalize. We aim to build the awareness, credibility, scale, and scope of The Unjournal, and the communication, benchmarking, and useful outputs of our work. We want to have a substantial impact, building towards our mission and goals...

To make rigorous research more impactful, and impactful research more rigorous. To foster substantial, credible public evaluation and rating of impactful research, driving change in research in academia and beyond, and informing and influencing policy and philanthropic decisions.

Innovations: We are considering other initiatives and refinements (1) to our evaluation ratings, metrics, and predictions, and how these are aggregated, (2) to foster open science and robustness-replication, and (3) to provide inputs to evidence-based policy decision-making under uncertainty. Stay tuned, and please join the conversation.

Opportunities: We plan to expand our management and advisory board, increase incentives for evaluators and authors, and build our pool of evaluators and participating authors and institutions. Our previous call-to-action (see HERE) is still relevant if you want to sign up to be part of our evaluation (referee) pool, submit your work for evaluation, etc. (We are likely to put out a further call soon, but all responses will be integrated.)

Evaluation 'output'

We have published a total of 12 evaluations and ratings of five papers and projects, as well as three author responses. Four can be found on our PubPub page (most concise list here), and one on our Sciety page here (we aim to mirror all content on both pages). All the PubPub content has a DOI, and we are working to get these indexed on Google Scholar and beyond.

The two most recently released evaluations (of Haushofer et al., 2020, and Barker et al., 2022) both address the question "Is CBT effective for poor households?" [link: EA Forum post]

Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.

See the evaluation summaries and ratings, with linked evaluations HERE (Haushofer et al) and HERE (Barker et al).

Update on recent progress: 22 April 2023

New 'output'

We are now up to twelve total evaluations of five papers. Most of these are on our PubPub page (we are currently aiming to have all of the work hosted both at PubPub and on Sciety, and gaining DOIs and entering the bibliometric ecosystem). The latest two are on an interesting theme, as noted in a recent EA Forum Post:

Two more Unjournal Evaluation sets are out. Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.

These are part of The Unjournal's 'direct NBER evaluation' stream.

More evaluations coming out soon on themes including global health and development, the environment, governance, and social media.

Animal welfare

To round out our initial pilot: We're particularly looking to evaluate papers and projects relevant to animal welfare and animal agriculture. Please reach out if you have suggestions.

New features of this GitBook: GPT-powered 'chat' Q&A

You can now 'chat' with this page, ask questions, and get answers with links to other parts of the page. To try it out, go to "Search" and choose "Lens."

Update on recent progress: 17 Mar 2023

See our latest post on the EA Forum

  1. Our new platform (unjournal.pubpub.org), enabling DOIs and CrossRef (bibliometrics)

  2. Evaluations of "Artificial Intelligence and Economic Growth"; "self-correcting science"

  3. More evaluations soon

  4. We are pursuing collaborations with replication and robustness initiatives such as the "Institute for Replication" and repliCATS

  5. We are now 'fiscally sponsored' by the Open Collective Foundation; see our page HERE. (Note: this is an administrative arrangement, not a source of funding.)

Update on recent progress: 19 Feb 2023

Content and 'publishing'

  1. Our Sciety Group is up...

  2. With our first posted evaluation: "Long Term Cost-Effectiveness of Resilient Foods" (Denkenberger et al.), with evaluations from Scott Janzwood, Anca Hanea, and Alex Bates, and an author response.

  3. Two more evaluations will be posted soon (waiting for final author responses).

Tip of the Spear ... right now we are:

  • Working on getting six further papers (projects) evaluated, most of which are part of our NBER "direct evaluation" track

  • Developing and discussing tools for aggregating and presenting the evaluators' quantitative judgments

  • Building our platforms, and considering ways to better format and integrate evaluations

    • with the original research (e.g., through Hypothes.is collaborative annotation)

    • into the bibliometric record (through DOIs, etc.)

    • and with each other.

Funding, plans, collaborations

We are seeking grant funding for our continued operation and expansion (see grants and proposals below). We're appealing to funders interested in Open Science and in impactful research.

We're considering collaborations with other compatible initiatives, including...

  • replication/reproducibility/robustness-checking initiatives,

  • prediction and replication markets,

  • and projects involving the elicitation and 'aggregation of expert and stakeholder beliefs' (about both replication and outcomes themselves).

Management and administration, deadlines

  • We are now under the Open Collective Foundation's 'fiscal sponsorship' (this does not entail funding, only a legal and administrative home).

  • We are postponing the deadline for judging the Impactful Research Prize and the prizes for evaluators; the submission and processing of papers have been somewhat slower than expected.

Other news and media

  • EA Forum: "Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA": recent post and AMA (answering questions about The Unjournal's progress, plans, and relation to effective-altruism-relevant research).

  • March 9-10: David Reinstein will present at the COS Unconference, session on "Translating Open Science Best Practices to Non-academic Settings". See agenda. David will discuss The Unjournal for part of this session.

Calls to action

See: How to get involved. These are basically still all relevant.

  1. Evaluators: We have a strong pool of evaluators. However, at the moment we are particularly seeking evaluators:

  • with quantitative backgrounds, especially in economics, policy, and social science;

  • comfortable with statistics, cost-effectiveness, impact evaluation, and/or Fermi/Monte Carlo models;

  • with interest in, and knowledge of, key impact-relevant areas (see What is global-priorities-relevant research?; e.g., global health and development); and

  • willing to dig into details, identify a paper's key claims, and consider the credibility of the research methodology and its execution.

Recall that we pay at least $250 per evaluation, and typically more in net ($350); we are looking to increase this compensation further. Please fill out THIS FORM (about 3–5 min) if you are interested.

  2. Research to evaluate/prizes: We continue to be interested in submitted and suggested work. One area we would like to engage with more: quantitative social science and economics work relevant to animal welfare.

Hope these updates are helpful. Let me know if you have suggestions.

An Introduction to The Unjournal

We are not a journal!

In a nutshell

The Unjournal seeks to make rigorous research more impactful and impactful research more rigorous. We are a team of researchers, practitioners, and open science advocates led by David Reinstein.

The Unjournal encourages better research by making it easier for researchers to get feedback and credible ratings. We coordinate and fund public, journal-independent expert evaluation of openly hosted research. We publish evaluations, ratings, manager summaries, author responses, and links to the evaluated research on our PubPub page.

As the name suggests, we are not a journal!

We work independently of traditional academic journals. We're building an open platform and a sustainable system for feedback, ratings, and assessment. We're currently focusing on quantitative work that informs global priorities.

How to get involved?

We're looking for research to evaluate, as well as evaluators. You can submit research here, or suggest research using this form. We offer financial prizes for suggesting research we end up evaluating. If you want to be an evaluator, apply here. You can use the same form to express your interest in joining our management team, advisory board, or reviewer pool. For more information, see our how to get involved guide.

Why The Unjournal? Peer review is great, but conventional academic publication processes are wasteful, slow, and rent-extracting. They discourage innovation and prompt researchers to focus more on "gaming the system" than on the quality of their research. We will provide an immediate alternative, and at the same time, offer a bridge to a more efficient, informative, useful, and transparent research evaluation system.

Does The Unjournal charge any fees?

No. We're a US-registered tax-exempt 501(c)(3) nonprofit, and we don't charge fees for anything. We compensate evaluators for their time and we even award prizes for strong research and evaluation work, in contrast to most traditional journals. We do so thanks to funding from the Long-Term Future Fund and Survival and Flourishing Fund.

At some point in the future, we might consider sliding-scale fees for people or organizations submitting their work for Unjournal evaluation, or for other services. If we do this, it would simply be a way to cover the compensation we pay evaluators and to cover our actual costs. Again, we are a nonprofit and we will stay that way.

How do we do this?

  1. Research submission/identification and selection: We identify, solicit, and select relevant research to be hosted on any open platform in any format. Authors are encouraged to present their work in the ways they find most comprehensive and understandable. We support the use of dynamic documents and other formats that foster replicability and open science. (See: the benefits of dynamic docs.)

  2. Paid evaluators (AKA "reviewers"): We compensate evaluators (essentially, reviewers) for providing thorough feedback on this work. (Read more: Why do we pay?)

  3. Eliciting quantifiable and comparable metrics: We aim to establish and generate credible measures of research quality and usefulness. We benchmark these against traditional measures (such as journal tiers) and assess the reliability, consistency, and predictive power of these measures. (Read more: Why quantitative metrics?) A minimal illustrative sketch of such aggregation appears below, after this list.

  4. Public evaluation: We publish the evaluation packages (including reports, ratings, author responses, and manager summaries) on our PubPub community. Making evaluation public facilitates dialogue and supports transparency and open science.

  5. Linking, not publishing: Our process is not "exclusive." Authors can submit their work to a journal (or other evaluation service) at any time. This approach also allows us to benchmark our evaluations against traditional publication outcomes.

  6. Prizes: We award financial prizes and hold public events to recognize the most credible, impactful, useful, and insightful research, as well as strong engagement with our evaluation process.

  7. Transparency: We aim for maximum transparency in our processes and judgments.
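
To make point 3 above concrete, here is a minimal sketch (in Python) of how ratings from several evaluators might be aggregated into comparable summary metrics. The rating scale, field names, and aggregation rule here are illustrative assumptions for this sketch, not The Unjournal's actual schema or method.

```python
from statistics import mean

# Hypothetical evaluator inputs for one paper: a 0-100 "overall" rating
# plus a credible interval (lower, upper). The numbers and field names
# are invented for illustration.
evaluations = [
    {"overall": 78, "interval": (65, 88)},
    {"overall": 85, "interval": (75, 92)},
    {"overall": 70, "interval": (55, 82)},
]

# A simple synthesis: average the midpoint ratings, and report the full
# span of the evaluators' intervals as a rough indication of uncertainty
# and disagreement.
avg_rating = mean(e["overall"] for e in evaluations)
low = min(e["interval"][0] for e in evaluations)
high = max(e["interval"][1] for e in evaluations)

print(f"Aggregate rating: {avg_rating:.1f} (evaluator intervals span {low}-{high})")
```

In practice one might weight evaluators or pool intervals more carefully; the point is simply that quantified evaluations, unlike prose-only reviews, can be synthesized and benchmarked.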

This is not an original idea, and there are others in this space, but...

For example, this is closely related to eLife's "Publish, Review, Curate" model; see their updated (Oct 2022) model here. COS is also building a "lifecycle journal," and PREreview promotes public journal-independent evaluation. However, we have a different research focus and make some different choices, discussed below. We also discuss other Parallel/partner initiatives and resources, many of which we are building partnerships with. Still, we think we are the only group funded to do this in this particular research area/focus. We are also taking a different approach from previous efforts, including funding evaluation (see Why pay evaluators (reviewers)?) and asking for quantified ratings and predictions (see Guidelines for evaluators).

Funding

29 Oct 2024: We have about a 9-12 month runway, which could be extended to cover our basic activities for a longer period. We are actively applying for grants and funding.

Our current support comes from:

Survival and Flourishing Fund (successful); funds deposited Summer 2023. ACX/LTFF grant (proposal as submitted, successful; ACX passed it to the Long-Term Future Fund, which awarded it), extended through mid-2023. We have submitted some other grant applications; e.g., see our unsuccessful FTX application here. Other grant applications are linked below; we share these in the spirit of transparency.

Change is hard: overcoming academic inertia

Academics and funders have complained about this stuff for years and continue to do so every day on social media . . . and we're fairly confident our critiques of the traditional review and publication process will resonate with most readers.

So why haven't academia and the research community been able to move to something new? There is a difficult collective action problem. Individual researchers and universities find it risky to move unilaterally. But we believe we have a good chance of finally changing this model and moving to a better equilibrium. How? We will:

  • Take risks: Many members of The Unjournal management are not traditional academics; we can stick our necks out. We are also recruiting established senior academics who are less professionally vulnerable.

  • Bring in new interests, external funding, and incentives: There are a range of well-funded and powerful organizations—such as the Sloan Foundation and Open Philanthropy—with a strong inherent interest in high-impact research being reliable, robust, and reasoning-transparent. This support can fundamentally shift existing incentive structures.

  • Allow less risky "bridging steps": As noted above, The Unjournal allows researchers to submit their work to traditional journals. In fact, this will provide a benchmark to help build our quantitative ratings and demonstrate their value.

  • Communicate with researchers and stakeholders to make our processes easy, clear, and useful to them.

  • Make our output useful, in the meantime: It may take years for university departments and grant funders to incorporate journal-independent evaluations as part of their metrics and reward systems. The Unjournal can be somewhat patient: our evaluation, rating, feedback, and communication are already providing a valuable service to authors, policymakers, and other researchers.

  • Leverage new technology: A new set of open-access and AI-powered tools makes what we are trying to do easier and more useful every day.

  • Reward early adopters with prizes and recognition: We can replace "fear of standing out" with "fear of missing out." In particular, authors and research institutions that commit to publicly engaging with evaluations and critiques of their work should be commended and rewarded. And we are doing this.

Our objectives

This GitBook is a knowledge base that supplements our main public page, unjournal.org. It serves as a platform to organize our ideas and resources and track our progress towards our dual objectives:

  1. Making "peer evaluation and rating" of open projects into a standard high-status outcome in academia and research, specifically within economics and social sciences. This stands in contrast to the conventional binary choice of accepting or rejecting papers to be published as PDFs and other static formats.

  2. Building a cohesive and efficient system for publishing, accruing credibility, and eliciting feedback for research aligned with effective altruism and global priorities. Our ultimate aim is to make rigorous research more impactful, and impactful research more rigorous.

Where do I find...? Where do I go next?

See Content overview

How to get involved

In brief

The Unjournal wants your involvement, help, and feedback. We offer rewards and strive to compensate people for their time and effort.

  1. Join our team: Complete our survey form to apply for our...

    1. Evaluator pool: to be eligible to be commissioned and paid to evaluate and rate research, mainly in quantitative social science and policy

    2. Field specialist teams: help identify, prioritize, and manage research evaluation in a particular field or cause area. A related lower-commitment role: help suggest, prioritize, and discuss research

    3. Management team or advisory board, to be part of our decision-making

  2. Suggest research for us to assess; we offer bounty rewards. Submit your own research directly, or by contacting us

  3. Do an Independent Evaluation to build your portfolio, receive guidance, and be eligible for promotion and prizes. See the details

  4. Suggest research areas and topics for us to focus on

Give us feedback: Is anything unclear? What could be improved? Email contact@unjournal.org. We will offer rewards for the most useful suggestions.

Overview and call

David Reinstein is the founder and co-director of The Unjournal. The organization is currently looking for field specialists and evaluators, as well as suggestions for relevant work for The Unjournal to evaluate.

The Unjournal is building a system for credible, public, journal-independent feedback and evaluation of research.

Briefly, The Unjournal’s basic process is:

  • Identify, invite, or select contributions of relevant research that is hosted on any open platform or archive in any format.

  • Pay evaluators to give careful feedback on this work, with prizes and incentives for strong evaluation work.

  • Elicit quantifiable and comparable metrics of research quality as credible measures of value, and synthesize the results of these evaluations in useful ways.

  • Publicly post and link all reviews of the work. Award financial prizes for the work judged strongest.

  • Allow evaluators to choose if they wish to remain anonymous or to "sign" their reviews.

  • Aim to be as transparent as possible in these processes.

We maintain an open call for participants for several different roles:

  1. Management committee members (involving honorariums for time spent)

  2. Advisory Board members (no time commitment)

  3. Field specialists (who will often also be on the Advisory Board)

  4. Research suggesters (low commitment)

  5. A pool of Evaluators (who will be paid for their time and their work; we also draw evaluators from outside this pool)

You can express your interest (and enter our database) via our survey form.

Some particular research area/field priorities (Sept. 2024)

We're interested in researchers and research-users who want to help us prioritize work for evaluation, and manage evaluations, considering

... research in any social science/economics/policy/impact-assessment area, and

... research with the potential to be among the most globally-impactful.

Some particular areas where we are hoping to expand our expertise (as of 15 Aug 2023) include:

- Biological & pandemic risk

- AI governance, AI safety

- Long-term trends, demography

- Macroeconomics/growth/(public) finance

- Quantitative political science (voting, lobbying, etc.)

- Social impact of new technology (including AI)

Evaluators

We will reach out to evaluators (a.k.a. "reviewers") on a case-by-case basis, appropriate for each paper or project being assessed. This is dependent on expertise, the researcher's interest, and a lack of conflict of interest.

Time commitment: Case-by-case. For each evaluation, we provide guidance on the amount of time to spend.

Compensation: We pay a minimum of $200 (updated Aug. 2024) for a prompt and complete evaluation, and $400 for experienced evaluators. We offer additional prizes and incentives, and are committed to an average compensation of at least $450 per evaluator.

Who we are looking for: We are putting together a list of people interested in being an evaluator and doing paid referee work for The Unjournal. We generally prioritize the pool of evaluators who signed up for our database before reaching out more widely.

Interested? Please fill out this form (about 3–5 min; the same form covers all roles and involvement).

Ready to get started doing evaluations and building a track record? See our new initiative, offering prizes and recognition for the best work. You can evaluate work in our public database, or suggest and evaluate other work.

Projects and papers

We are looking for high-quality, globally pivotal research projects to evaluate, particularly those embodying open science practices and innovative formats. We are putting out a call for relevant research; please suggest research. (We offer bounties and prizes for useful suggestions.) See the accompanying links for details of what we are looking for and some potential examples.

You can also put forward your own research.

Contact us

If you are interested in discussing any of the above in person, please email us (contact@unjournal.org) to arrange a conversation.

Note: This is under continual refinement; see the rest of this knowledge base for more details.

Content overview

A "curated guide" to this GitBook; updated June 2023

You can now ask questions of this GitBook using a chatbot: click the search bar or press cmd-k and choose "ask Gitbook."

Some key sections and subsections

Learn more about The Unjournal, our goals and policies

For authors, evaluators, etc.

Writeups of the main points for a few different audiences

Important benefits of journal-independent public evaluation and The Unjournal's approach, with links to deeper commentary

How we choose papers/projects to evaluate, how we assign evaluators, and so on

Other resources and reading

Groups we work with; comparing approaches

What research are we talking about? What will we cover?

Detail, progress, and internal planning

These are of more interest to people within our team; we are sharing these in the spirit of transparency.

A "best feasible plan" for going forward

Successful proposals (ACX, SFF), other applications, initiatives

Key resources and links for managers, advisory board members, staff, team members and others involved with The Unjournal project.

Note: we have moved some of this "internal interest content" over to our Coda.io knowledge base.

Impactful Research Prize (pilot)

As of December 2023, the prizes below have been chosen and will soon be announced. We are also scheduling an event linked to this prize. However, we are preparing for even larger author and evaluator prizes for our next phase. Submit your work to The Unjournal, or serve as an evaluator, to be eligible for future prizes (details to be announced).

Submit your work to be eligible for our “Unjournal: Impactful Research Prize” and a range of other benefits including the opportunity for credible public evaluation and feedback.

First-prize winners will be awarded $, and the runners-up will receive $1000.

Note: these are the minimum amounts; we will increase these if funding permits.

Prize winners will have the opportunity (but not the obligation) to present their work at an online seminar and prize ceremony co-hosted by The Unjournal and partner organizations.

To be eligible for the prize, submit a link to your work for public evaluation .

  • Please choose “new submission” and “Submit a URL instead.”

  • The latter link requires an ORCID ID; if you prefer, you can email your submission to contact@unjournal.org.

The Unjournal, with funding from the Long-Term Future Fund and the Survival and Flourishing Fund, organizes and funds public, journal-independent feedback and evaluation. We focus on research that is highly relevant to global priorities, especially in economics, social science, and impact evaluation, and aim to expand this widely. We encourage better research by making it easier for researchers to get feedback and credible ratings on their work.

We aim to publicly evaluate 15 papers (or projects) within our pilot year. This award will honor researchers doing robust, credible, transparent work with a global impact. We especially encourage the submission of research in "open" formats such as hosted dynamic documents (Quarto, R-markdown, Jupyter notebooks, etc.).

The research will be chosen by our management team for public evaluation by 2–3 carefully selected, paid reviewers, based on an initial assessment of a paper's methodological strength, openness, clarity, relevance to global priorities, and the usefulness of further evaluation and public discussion. We sketch out our selection criteria below.

All evaluations, including quantitative ratings, will be made public by default; however, we will consider "embargos" on this for researchers with sensitive career concerns (the linked form asks about this). Note that submitting your work to The Unjournal does not imply "publishing" it: you can submit it to any journal before, during, or after this process.

If we choose not to send your work out to reviewers, we will try to at least offer some brief private feedback (but please don't count on this).

All work evaluated by The Unjournal will be eligible for the prize. Engagement with The Unjournal, including responding to evaluator comments, will be a factor in determining the prize winners. We also have a slight preference for giving at least one of the awards to an early-career researcher, but this need not be determinative.

Our management team and advisory board will vote on the prize winners in light of the evaluations, with possible consultation of further external expertise.

Deadline: Extended until 5 December (to ensure eligibility).

Note: In a subsection below, we outline the basic requirements for submissions to The Unjournal.

How we chose the research prize winners (2023)

The prize winners for The Unjournal's Impactful Research Prize were selected through a multi-step, collaborative process involving both the management team and the advisory board. The selection was guided by several criteria, including the quality and credibility of the research, its potential for real-world impact, and the authors' engagement with The Unjournal's evaluation process.

  1. Initial Evaluation: All papers that were evaluated by The Unjournal were eligible for the prize. The discussion, evaluations, and ratings provided by external evaluators played a significant role in the initial shortlisting.

  2. Management and Advisory Board Input: Members of the management committee and advisory board were encouraged to write brief statements about papers they found particularly prize-worthy.

  3. Meeting and Consensus: A "prize committee" meeting was held with four volunteers from the management committee to discuss the shortlisted papers and reach a consensus. The committee considered both the papers and the content of the evaluations. Members of the committee allocated a total of 100 points among the 10 candidate papers; we used this to narrow the field to a shortlist of five papers.

  4. Point Voting: The above shortlist and the notes from the accompanying discussion were shared with all management committee and advisory board members. Everyone in this larger group was invited to allocate up to 100 points among the shortlisted papers (and asked to allocate fewer points if they were less familiar with the papers and evaluations). A stylized sketch of this two-stage tally appears below the list.

  5. Special Considerations: We decided that at least one of the winners had to be a paper submitted by the authors or one where the authors substantially engaged with The Unjournal's processes. However, this constraint did not prove binding. Early-career researchers were given a slight advantage in our consideration.

  6. Final Selection: The first and second prizes were given to the papers with the first- and second-most points, respectively.
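To make the point-allocation steps (3 and 4) concrete, here is a minimal, hypothetical sketch of the two-stage tally in Python. The ballots, paper names, and point totals are all invented for illustration; the actual votes are not published in this document.

```python
# Hypothetical sketch of the two-stage point voting described above.
# All ballots and paper names below are invented for illustration.
from collections import defaultdict

def tally(ballots):
    """Sum the points each voter allocated across candidate papers."""
    totals = defaultdict(int)
    for ballot in ballots:  # each ballot maps paper -> points (up to 100 total)
        for paper, points in ballot.items():
            totals[paper] += points
    # Rank papers by total points, highest first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Stage 1: the prize committee narrows the candidates to a shortlist of five.
committee_ballots = [
    {"Paper A": 40, "Paper B": 30, "Paper C": 30},
    {"Paper A": 25, "Paper D": 50, "Paper E": 25},
]
shortlist = [paper for paper, _ in tally(committee_ballots)[:5]]

# Stage 2: the wider group votes over the shortlist; voters less familiar
# with the papers may allocate fewer than 100 points.
group_ballots = [
    {"Paper A": 60, "Paper D": 40},
    {"Paper D": 70, "Paper B": 20},
]
(first, _), (second, _) = tally(group_ballots)[:2]
print("First prize:", first, "| Second prize:", second)
```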

This comprehensive approach aimed to ensure that the prize winners were selected in a manner that was rigorous, fair, and transparent, reflecting the values and goals of The Unjournal.

UJ Team: resources, onboarding

Advisory/team roles (research, management)

Dec 2024: We are still looking to bring in more field specialists to build our teams in all areas, but particularly in the quantitative social science and economic/behavioral modeling of catastrophic risks, AI governance, and AI safety.

In addition to the "work roles," we are looking to engage researchers, research users, meta-scientists, and people with experience in open science, open access, and management of initiatives similar to The Unjournal.

We are continually looking to enrich our general team and board, including our management committee (see Organizational roles and responsibilities). These roles come with some compensation and incentives.

(Please see the links above and consider submitting an expression of interest.)

Standalone project: Impactful Research Scoping (temp. pause)

Nov. 2023 update: We have paused this process to focus on our field specialist positions. We hope to return to hiring researchers to implement these projects soon.

Proposed projects

We are planning to hire 3–7 researchers for a one-off paid project.

There are two opportunities: Contracted Research (CR) and Independent Projects (IP).

Project Outline

  • What specific research themes in economics, policy, and social science are most important for global priorities?

  • What projects and papers are most in need of further in-depth public evaluation, attention, and scrutiny?

  • Where does "Unjournal-style evaluation" have the potential to be one of the most impactful uses of time and money? By impactful, we mean in terms of some global conception of value (e.g., the well-being of living things, the survival of human values, etc.).

This is an initiative that aims to identify, summarize, and conduct an in-depth evaluation of the most impactful themes in economics, policy, and social science to answer the above questions. Through a systematic review of selected papers and potential follow-up with authors and evaluators, this project will enhance the visibility, understanding, and scrutiny of high-value research, fostering both rigorous and impactful scholarship.

Contracted Research (CR) This is the main opportunity, a unique chance to contribute to the identification and in-depth evaluation of impactful research themes in economics, policy, and social science. We’re looking for researchers and research users who can commit 15–20 hours (one-off). CR candidates will:

  • Summarize a research area or theme, its status, and why it may be relevant to global priorities (~4 hours).

    • We are looking for fairly narrow themes. Examples might include:

      • The impact of mental health therapy on well-being in low-income countries.

      • The impact of cage-free egg regulation on animal welfare.

      • Public attitudes towards AI safety regulation.

  • Identify a selection of papers in this area that might be high-value for UJ evaluation (~3 hours).

    • Choose at least four of these from among NBER/"top-10 working paper" series (or from work submitted to the UJ – we can share – or from work where the author has expressed interest to you).

  • For a single paper, or a small set of these papers (or projects) (~6 hours):

    • Read the paper fairly carefully and summarize it, explaining why it is particularly relevant.

    • Discuss one or more aspects of the paper that need further scrutiny or evaluation.

    • Identify 3 possible evaluators, and explain why they might be particularly relevant to evaluate this work. (Give a few sentences we could use in an email to these evaluators).

    • Possible follow-up task: email and correspond with the authors and evaluators (~3 hours).

We will compensate you for your time at a rate reflecting your experience and skills ($25–$65/hour). This work also has the potential to serve as a “work sample” for future roles at The Unjournal, as it is highly representative of what our team members and evaluators are commissioned to do.

We are likely to follow up on your evaluation suggestions. We also may incorporate your writing into our web page and public posts; you can choose whether you want to be publicly acknowledged or remain anonymous.

Independent Projects (IP)

We are also inviting applications to do similar work as an “Independent Project” (IP), a parallel opportunity designed for those eager to engage but not interested in working under a contract, or not meeting some of the specific criteria for the Contracted Research role. The work itself is similar to that described above.

If you are accepted to do an IP, we will offer some mentoring and feedback. We will also offer prize rewards/bounties for particularly strong IP work. We will also consider working with professors and academic supervisors on these IP projects, as part of university assignments and dissertations.

You can apply to the CR and IP positions together; we will automatically consider you for each.

Get Involved!

If you are interested in involvement in either the CR or IP side of this project, please let us know through our survey form here.

Administration, operations and management roles

These are principally not research roles, but familiarity with research and research environments will be helpful, and there is room for research involvement depending on the candidate’s interest, background, and skills/aptitudes.

There is currently one such role:

Communications, Writing, and Public Relations Specialist (As of November 2023, still seeking freelancers)

Further note: We previously considered a “Management support and administrative professional” role. We are not planning to hire for this role currently. Those who indicated interest will be considered for other roles.

Express interest here.

Communications, Writing, and Public Relations Specialist

As of November 2023, we are soliciting applications from freelancers with skills in particular areas.

The Unjournal is looking to work with a proficient writer who is adept at communicating with academics and researchers (particularly in economics, social science, and policy), journalists, policymakers, and philanthropists. As we are in our early stages, this is a generalist role. We need someone to help us explain what The Unjournal does and why, make our processes easy to understand, and ensure our outputs (evaluations and research synthesis) are accessible and useful to non-specialists. We seek someone who values honesty and accuracy in communication; someone who has a talent for simplifying complex ideas and presenting them in a clear and engaging way.

The work is likely to include:

  1. Promotion and general explanation

    • Spread the word about The Unjournal, our approach, our processes, and our progress in press releases and short pieces, as well as high-value emails and explanations for a range of audiences

    • Make the case for The Unjournal to potentially skeptical audiences in academia/research, policy, philanthropy, effective altruism, and beyond

  2. Keeping track of our progress and keeping everyone in the loop

    • Help produce and manage our external (and some internal) long-form communications

    • Help produce and refine explanations, arguments, and responses

    • Help provide reports to relevant stakeholders and communities

  3. Making our rules and processes clear to the people we work with

    • Explain our procedures and policies for research submission, evaluation, and synthesis; make our systems easy to understand

    • Help us build flexible communications templates for working with research evaluators, authors, and others

  4. Other communications, presentations, and dissemination

    • Write and organize content for grants applications, partnership requests, advertising, hiring, and more

    • Potentially: compose non-technical write-ups of Unjournal evaluation synthesis content (in line with interest and ability)

Most relevant skills, aptitudes, interests, experience, and background knowledge:

  • Understanding of The Unjournal project

  • Strong written communications skills across a relevant range of contexts, styles, tones, and platforms (journalistic, technical, academic, informal, etc.)

  • Familiarity with academia and research processes and institutions

  • Familiarity with current conversations and research on global priorities within government and policy circles, effective altruism, and relevant academic fields

  • Willingness to learn and use IT, project management, data management, web design, and text-parsing tools (such as those mentioned below), with the aid of GPT/AI chat

Further desirable skills and experience:

  • Academic/research background in areas related to The Unjournal’s work

  • Operations, administrative, and project management experience

  • Experience working in a small nonprofit institution

  • Experience with promotion and PR campaigns and working with journalists and bloggers

Proposed terms:

  • Project-based contract "freelance" work

  • $30–$55/hour USD (TBD, depending on experience and capabilities). Hours for each project include some onboarding and upskilling time.

  • Our current budget can cover roughly 200 hours of this project work. We hope to increase and extend this (depending on our future funding and expenses).

  • This role is contract-based and supports remote and international applicants. We can contract people living in most countries, but we cannot serve as an immigration sponsor.

Express your interest here.


Brief version of call

I (David Reinstein) am an economist who left UK academia after 15 years to pursue a range of projects (see my web page). One of these is The Unjournal:

The Unjournal (with funding from the Long Term Future Fund and the Survival and Flourishing Fund) organizes and funds public, journal-independent feedback and evaluation, paying reviewers for their work. We focus on research that is highly relevant to global priorities, especially in economics, social science, and impact evaluation. We encourage better research by making it easier for researchers to get feedback and credible ratings on their work.

We are looking for your involvement...

Evaluators

We want researchers who are interested in doing evaluation work for The Unjournal. We pay evaluators for each evaluation, and we award monetary prizes for the strongest work. Right now we are particularly looking for economists and people with quantitative and policy-evaluation skills. We describe what we are asking evaluators to do here: essentially a regular peer review with some different emphases, plus a set of quantitative ratings and predictions. Your evaluation content will be made public (and receive a DOI, etc.), but you can choose whether to remain anonymous.

To sign up to be part of the pool of evaluators or to get involved in The Unjournal project in other ways, please fill out this brief form or email contact@unjournal.org.

Research

We welcome suggestions for particularly impactful research that would benefit from (further) public evaluation. We choose research for public evaluation based on an initial assessment of methodological strength, openness, clarity, relevance to global priorities, and the usefulness of further evaluation and public discussion. We sketch these criteria here, and discuss some potential examples here (see research we have chosen and evaluated at unjournal.pubpub.org, and a larger list of research we're considering here).

If you have research—your own or others'—that you would like us to assess, please fill out this form. You can submit your own work here (or by contacting contact@unjournal.org). Authors of evaluated papers will be eligible for our Impactful Research Prizes.

Feedback

We are looking for both feedback on and involvement in The Unjournal project. Feel free to reach out at contact@unjournal.org.

View our data protection statement

Organizational roles and responsibilities

14 Jan 2025: The Unjournal is still looking to build our team and evaluator pool. Please consider the roles below and express your interest here or contact us at contact@unjournal.org.

Management committee members

Activities of those on the management committee may involve a combination of the following (although you can choose your focus):

  • Contributing to the decision-making process regarding research focus, reviewer assignment, and prize distribution.

  • Collaborating with other committee members on the establishment of rules and guidelines, such as determining the metrics for research evaluation and defining the mode of assessment publication.

  • Helping plan The Unjournal’s future path.

  • Helping monitor and prioritize research for The Unjournal to evaluate (i.e., acting as a field specialist; see further discussion below). Acting as an evaluation manager for research in your area.

Time commitment: A minimum of 15–20 hours per year.

Compensation: We have funding for a $57.50 per hour honorarium for the first 20 hours, with possible additional compensation. Evaluation management work will be further compensated (at roughly $300–$450 per paper).

Who we are looking for: All applicants are welcome. We are especially interested in those involved in global priorities research (and related fields), policy research and practice, open science and meta-science, bibliometrics and scholarly publishing, and any other academic research. We want individuals with a solid interest in The Unjournal project and its goals, and the ability to meet the minimal time commitment. Applying is extremely quick, and those not chosen will be considered for other roles and work going forward.

Advisory board (AB) members

Beyond direct roles within The Unjournal, we're building a larger, more passive advisory board to be part of our network, to offer occasional feedback and guidance. There is essentially no minimum time commitment for advisory board members—only opportunities to engage. We sketch some of the expectations in the fold below.

Advisory board members: expectations

As an AB member...

  • you agree to be listed on our page as being on the advisory board.

  • you have the option (but not the expectation or requirement) to join our Slack, and to check in once in a while.

  • you will be looped in for your input on some decisions surrounding The Unjournal's policies and direction. Such communications might occur once per month, and you are not obligated to respond.

  • you may be invited to occasional video meetings (again optional).

  • you are “in our system” and we may consult you for other work.

  • you will be compensated for anything that requires a substantial amount of your time that does not overlap with your regular work.

Field Specialists and Unjournal Research Affiliates

Goals: Support open evaluation and impactful, credible research, helping The Unjournal keep on top of the latest, most relevant, influential, and globally-consequential work. Help raise awareness of The Unjournal, fostering a transition away from traditional journal peer review towards a more transparent, efficient, and rigorous approach.

What Field Specialists (FSs) and Unjournal Research Affiliates (URAs) do, specifically

Source, prioritize, and suggest research

Focus on a particular area of research, policy, or impactful outcome, in collaboration with other members of our area-focused teams.

Keep track of new or under-considered research with potential for impact and explain and assess the extent to which The Unjournal can add value by commissioning its evaluation. Provide a brief explanation of why each paper matters and how it aligns with The Unjournal’s approach.

Scope: Perhaps three papers per academic term; FSs are encouraged to suggest more. This comes with compensation and incentives.

Help track and curate this list of prioritized research, considering and discussing specific areas and issues that merit evaluation. We maintain a public database of research of potential interest here.

Prioritize research submitted by authors or suggested by other team members

Write and share "second opinion" assessments of this research, considering its importance and relevance for our evaluation. Vote on a short list of papers in your area to determine whether we should commission these for evaluation. Attend (very occasional) 'field group meetings' to discuss this.

Evaluation management (FS)

Field Specialists may serve as 'evaluation managers' (with additional compensation). Evaluation managers help us identify and commission evaluators for work we have prioritized, help manage this evaluation process, and write a summary report on the evaluations.

Represent The Unjournal

Understand our mission and be ready to explain it to colleagues and (optionally) on social media.

Field specialist and URA "area teams"

We are organizing several teams of field specialists (and management and advisory board members). These teams will hold occasional online meetings (perhaps every 3 months) to discuss research to prioritize, and to help coordinate 'who covers what'. If team members are interested, further discussions, meetings, and seminars might be arranged, but this is very much optional.

As of Dec 2024, we have the following teams (organized around fields and outcomes):

  1. Development economics and global health and development

  2. Economics, welfare, and governance

  3. Psychology, behavioral science, and attitudes

  4. Innovation and meta-science, impact of emerging technologies, catastrophic risks

  5. Animal welfare: markets, attitudes

  6. Environmental economics

Other teams are being organized or considered.

If you become a field specialist or URA, what happens next?

You will be asked to fill out a short survey form to let us know what fields, topics, and sources of research you would like to "monitor" or dig into to help identify and curate work relevant for Unjournal evaluation, as well as outlining your areas of expertise (the form takes perhaps 5–20 minutes).

This survey helps us understand when to contact you to ask if you want to be an evaluation manager on a paper we have prioritized for evaluation.

Guided by this survey form (along with discussions we will have with you, and coordination with the team), we will develop an “assignment” that specifies the area you will cover. We will try to divide the space and not overlap between field specialists. This scope can be as broad or focused as you like.

Within your area, you keep a record of the research that seems relevant (and why, and what particularly needs evaluation, etc.) and enter it in our database. (Alternatively, you can pass your notes to us for recording.)

We will compensate you for the time you spend on this process (details tbd), particularly to the extent that the time you spend does not contribute to your other work or research. (See incentives and norms here.)

"Monitoring" a research area or source as a field specialist or URA

The Unjournal's field specialists choose an area they want to monitor. By this we mean that a field specialist will

  • Keep an eye on designated sources (e.g., particular working paper series) and fields (or outcomes or area codes), perhaps every month or so; consider new work, dig into archives

  • Let us know what you have been able to cover; if you need to reduce the scope, we can adjust it

  • Suggest and input work into our database: papers, projects, and research that seem relevant for The Unjournal to evaluate. Give some quick ‘prioritization ratings’.

  • If you have time, give a brief note on why this work is relevant for The Unjournal (impactful, credible, timely, openly presented, policy-relevant, etc.) and what aspects need particular evaluation and feedback.

Benefits of being a Field Specialist or URA

  • Public Leadership: It publicly signals your commitment to open science principles, innovation, and research impact.

  • Professional Credibility & Recognition: URAs are chosen through a selective process, and will be publicly acknowledged. URAs gain valuable experience, strengthen their CV, and position themselves for potential future opportunities with The Unjournal (e.g., Field Specialist or management roles).

  • Networking & Community: Build contacts with others in your field and with related interests. Stay on top of cutting-edge research, discuss and understand its value and connection to the field and to policy/impact.

  • Influencing the Agenda: URAs help shape what research The Unjournal evaluates and promotes, furthering our mission of global impact, and advocating for your own priorities.

  • Compensation: Incentive pay for recommending and evaluating papers. Be compensated for research surveying and scoping you may already be doing. See compensation details here.

Field Specialists: specifics and norms

See the Norms and Compensation page.

Time commitment: There is no specific time obligation—only opportunities to engage. We may also consult you occasionally on your areas of expertise. Perhaps 1–4 hours a month is a reasonable starting expectation for people already involved in doing or using research, plus potential additional paid assignments. Our Incentives and norms document also provides some guidance on the nature of work and the time involved.

Compensation: We aim to fairly compensate people for time spent on work done to support The Unjournal, and to provide incentives for suggesting and helping to prioritize research for evaluation. See the Norms and Compensation page. Evaluation management work will be compensated at roughly $300–$450 per project.

Who we are looking for: For the FS roles, we are seeking active researchers, practitioners, and stakeholders with a strong publication record and/or involvement in the research and/or research-linked policy and prioritization processes. For the AB, also people with connections to academic, governmental, or relevant non-profit institutions, and/or involvement in open science, publication, and research evaluation processes. People who can offer relevant advice, experience, guidance, or help communicate our goals, processes, and progress.

Interested? Please fill out this form (about 3–5 min, using the same form for all roles).

Unjournal Research Affiliates: specifics and norms

Application criteria: Applicants will be selected based on (1) their ability to assess, discuss, and communicate the global impact of research, (2) their understanding of The Unjournal approach, and (3) their research skills, interests, and experience.

Low Time Commitment: This role is designed to fit into a busy academic or research schedule. You’ll stay involved at a comfortable level without overextending your commitments.

Compensation: This role comes with no fixed compensation, but URAs are eligible for incentive compensation for suggesting papers, for doing high-value assessment for prioritization, and more (see here).

Expectations for URAs

  1. Recommend 2–3 papers each academic term that you think have the potential to make a substantial impact. Provide a brief explanation of why each paper matters and how it aligns with The Unjournal’s approach. Earn compensation for your recommendations.

  2. Provide Second Opinions & Prioritization: Complete three “second opinion” assessments each term, offering a concise evaluation of a paper’s relevance and importance. Contribute to broader discussions about research prioritization.

  3. Vote & Engage: Participate in monthly voting on a short list of papers. Stay semi-active on Slack and Coda (with no strict requirement to check in regularly, beyond voting).

  4. Represent The Unjournal: Understand our mission and be ready to explain it to colleagues and on social media. Attend at least one Unjournal meeting (one hour) per year to stay connected.

  5. Optional: Consult on evaluation management. Occasionally advise on evaluator suggestions for papers in your field (additional compensation may apply).

Why are we offering the URA role?

Early-career researchers and PhD students may be interested in getting involved but worried about making a large time commitment. But getting busy students and researchers involved even minimally could keep us in touch with the cutting edge of research and help us forge collaborations in academia. So we're offering this role, which is similar to the Field Specialist role, but with less responsibility and a smaller time commitment.

Apply here for this and other roles.

Self-contained public release: Open Call: Unjournal Research Affiliates (URA)

Contact us

If you are interested in discussing any of the above in person, please email us (contact@unjournal.org) to arrange a conversation.

We invite you to fill in this form (the same as that linked above) to leave your contact information and outline which parts of the project interest you.

Note: These descriptions are under continual refinement; see our policies for more details.

Research & operations-linked roles & projects

We are again considering applications for the 'evaluation metrics/meta-science' role. We will also consider all applicants for our other positions, and for roles that may come up in the future.

The potential roles discussed below combine research-linked work with operations and administrative responsibilities. Overall, this may include some combination of:

  • Assisting and guiding the process of identifying strong and potentially impactful work in key areas, explaining its relevance, its strengths, and areas warranting particular evaluation and scrutiny

  • Interacting with authors, recruiting, and overseeing evaluators

  • Synthesizing and disseminating the results of evaluations and ratings

  • Aggregating and benchmarking these results

  • Helping build and improve our tools, incentives, and processes

  • Curating outputs relevant to other researchers and policymakers

  • Doing "meta-science" work

See also our field specialist team pool and evaluator pool. Most of these roles involve compensation/honoraria.

Possible role: Research and Evaluation Specialist (RES)

Possible role details

Potential focus areas include global health; development economics; markets for products with large externalities (particularly animal agriculture); attitudes and behaviors (altruism, moral circles, animal consumption, effectiveness, political attitudes, etc.); economic and quantitative analysis of catastrophic risks; the economics of AI safety and governance; aggregation of expert forecasts and opinion; international conflict, cooperation, and governance; etc.

Work (likely to include a combination of):

  • Identify and characterize research (in the area of focus) that is most relevant for The Unjournal to evaluate

  • Summarize the importance of this work, its relevance to global priorities and connections to other research, and its potential limitations (needing evaluation)

  • Help build and organize the pool of evaluators in this area

  • Assist evaluation managers or serve as evaluation manager (with additional compensation) for relevant papers and projects

  • Synthesize and communicate the progress of research in this area and insights coming from Unjournal evaluations and author responses; for technical, academic, policy, and intelligent lay audiences

  • Participate in Unjournal meetings and help inform strategic direction

  • Liaise and communicate with relevant researchers and policymakers

  • Help identify and evaluate prize winners

  • Meta-research and direct quantitative meta-analysis (see "Project" below)

Desirable skills and experience:

Note: No single skill or experience is necessary independently. If in doubt, we encourage you to express your interest or apply.

  • Understanding of the relevant literature, methodology, and policy implications (to an upper-postgraduate level) in this field or a related field and associated technical areas

  • Research and policy background and experience

  • Strong communication skills

  • Ability to work independently, as well as to build coalitions and cooperation

  • Statistics, data science, and "aggregation of expert beliefs"

Proposed terms:

  • 300 hours (flexible, extendable) at $25–$55/hour USD (TBD, depending on experience and skills)

  • This is a contract role, open to remote and international applicants. However, the ability to attend approximately weekly meetings and check-ins at times compatible with the New York timezone is essential.

Length and timing:

  • Flexible; to be specified and agreed with the contractor.

  • We are likely to hire one role starting in Summer 2023, and another starting in Autumn 2023.

  • Extensions, growth, and promotions are possible, depending on performance, fit, and our future funding.

(Nov. 2023: Note that we cannot guarantee we will be hiring for this role, because of changes in our approach.)

Independent evaluations (trial)

Kickstarter incentive: After the first 8 quality submissions (or by Jan. 1, 2025, whichever comes later), we will award a prize of $500 to the strongest evaluation.


Initiative: ‘independent evaluations’

The Unjournal is seeking academics, researchers, and students to submit structured evaluations of the most impactful research. Strong evaluations will be posted or linked on our PubPub community, offering readers a perspective on the implications, strengths, and limitations of the research. These evaluations can be submitted using our academic (main) stream form for academic-targeted research or our ‘applied stream’ form for other work; evaluators can publish their name or maintain anonymity; we also welcome collaborative evaluation work. We will facilitate, promote, and encourage these evaluations in several ways, described below.

Who should do these evaluations?

We are particularly looking for people with research training, experience, and expertise in quantitative social science and statistics, including cost-benefit modeling and impact evaluation. This could include professors, other academic faculty, postdocs, researchers outside of academia, quantitative consultants and modelers, PhD students, and students aiming towards PhD-level work (pre-docs, research MSc students, etc.). But anyone is welcome to give this a try — when in doubt, please go for it.

We are also happy to support collaborations and group evaluations. There is a good track record for this — see “What is a PREreview Live Review?”, ASAPBio’s crowd preprint review, I4replication.org, and repliCATS for examples in this vein. We may also host live events and/or facilitate asynchronous collaboration on evaluations.

Instructors/PhD, MRes, Predoc programs: We are also keen to work with students and professors to integrate ‘independent evaluation assignments’ (aka ‘learn to do peer reviews’) into research training.

Why should you do an evaluation?

Your work will support The Unjournal’s core mission — improving impactful research through journal-independent public evaluation. In addition, you’ll help research users (policymakers, funders, NGOs, fellow researchers) by providing high-quality, detailed evaluations that rate and discuss the strengths, limitations, and implications of research.

Doing an independent evaluation can also help you. We aim to provide feedback to help you become a better researcher and reviewer. We’ll also give prizes for the strongest evaluations. Lastly, writing evaluations will help you build a portfolio with The Unjournal, making it more likely we will commission you for paid evaluation work in the future.

Which research?

We focus on rigorous, globally impactful research in quantitative social science and policy-relevant work. (See “What specific areas do we cover?” for details.) We’re especially eager to receive independent evaluations of:

  1. Research we publicly prioritize: see our public list of research we've prioritized or evaluated.

  2. Research we previously evaluated (see our public list, as well as https://unjournal.pubpub.org/)

  3. Work that other people and organizations suggest as having high potential for impact/value of information (also see Evaluating Pivotal Questions)

You can also suggest research yourself and then do an independent evaluation of it.

What sort of ‘evaluations’ and what formats?

We’re looking for careful methodological/technical evaluations that focus on research credibility, impact, and usefulness. We want evaluators to dig into the weeds, particularly in areas where they have aptitude and expertise. See our guidelines.

The Unjournal’s structured evaluation forms: We encourage evaluators to do these using either:

  1. Our academic (main) stream form: if you are evaluating research aimed at an academic journal; or

  2. Our ‘applied stream’ form: if you are evaluating research that is probably not aimed at an academic journal. This may include somewhat less technical work, such as reports from policy organizations and think tanks, or impact assessments and cost-benefit analyses.

Other public evaluation platforms: We are also open to engaging with evaluations done on existing public evaluation platforms such as PREreview.org. Evaluators: if you prefer to use another platform, please let us know about your evaluation using one of the forms above. If you like, you can leave most of our fields blank and provide a link to your evaluation on the other public platform.

Academic (~PhD) assignments and projects: We are also looking to build ties with research-intensive university programs; we can help you structure academic assignments and provide external reinforcement and feedback. Professors, instructors, and PhD students: please contact us (contact@unjournal.org).

How will The Unjournal engage?

1. Posting and signal-boosting

We will encourage all these independent evaluations to be publicly hosted, and will share links to these. We will further promote the strongest independent evaluations, potentially on our own platforms (such as unjournal.pubpub.org).

However, when we host or link these, we will keep them clearly separated and signposted as distinct from the commissioned evaluations; independent evaluations will not be considered official, and their ratings won’t be included in our ‘main data’ (see the dashboard here).

2. Offering incentives

Bounties: We will offer prizes for the ‘most valuable independent evaluations’.

As a start, after the first eight quality submissions (or by Jan. 1, 2025, whichever comes later), we will award a prize of $500 to the most valuable evaluation.

Further details tbd.

All evaluation submissions will be eligible for these prizes and “grandfathered in” to any prizes announced later. We will announce and promote the prize winners (unless they opt for anonymity).

Evaluator pool: People who submit evaluations can elect to join our evaluator pool. We will consider and (time-permitting) internally rate these evaluations. People who do the strongest evaluations in our focal areas are likely to be commissioned as paid evaluators for The Unjournal.

We’re also moving towards a two-tiered base rate: we will offer a higher rate to people who can demonstrate previous strong review/evaluation work. These independent evaluations will count towards this ‘portfolio’.

3. Providing materials, resources and guidance/feedback

Our PubPub page provides examples of strong work, including the prize-winning evaluations.

We will curate guidelines and learning materials from relevant fields and from applied work and impact evaluation. For a start, see "Conventional guidelines for referee reports" in our knowledge base.

4. Partnering with academic institutions

We are reaching out to PhD programs and pre-PhD research-focused programs. Some curricula already involve “mock referee report” assignments. We hope professors will encourage their students to do these through our platform. In return, we’ll offer the incentives and promotion mentioned above, as well as resources, guidance, and some further feedback.

5. Fostering a positive environment for anonymous and signed evaluations

We want to preserve a positive and productive environment. This is particularly important because we will be accepting anonymous content. We will take steps to ensure that the system is not abused. If the evaluations have an excessively negative tone, have content that could be perceived as personal attacks, or have clearly spurious criticism, we will ask the evaluators to revise this, or we may decide not to post or link it.

How does this benefit The Unjournal and our mission?

  1. Crowdsourced feedback can add value in itself; encouraging this can enable some public evaluation and discussion of work that The Unjournal doesn’t have the bandwidth to cover

  2. Improving our evaluator pool and evaluation standards in general.

    1. Students and ECRs can practice and (if possible) get feedback on independent evaluations

    2. They can demonstrate their abilities publicly, enabling us to recruit and commission the strongest evaluators

  3. Examples will help us build guidelines, resources, and insights into ‘what makes an evaluation useful’.

  4. This provides us opportunities to engage with academia, especially in Ph.D programs and research-focused instruction.

About The Unjournal

The Unjournal commissions public evaluations of impactful research in quantitative social science fields. We are an alternative and a supplement to traditional academic peer-reviewed journals – separating evaluation from journals unlocks a range of benefits. We ask expert evaluators to write detailed, constructive, critical reports. We also solicit a set of structured ratings focused on research credibility, methodology, careful and calibrated presentation of evidence, reasoning transparency, replicability, relevance to global priorities, and usefulness for practitioners (including funders, project directors, and policymakers who rely on this research). While we have mainly targeted impactful research from academia, our ‘applied stream’ covers impactful work that uses formal quantitative methods but is not aimed at academic journals. So far, we’ve commissioned about 50 evaluations of 24 papers, and published these evaluation packages on our PubPub community, linked to academic search engines and bibliometrics.

Reinstein's story in brief

I was in academia for about 20 years (PhD Economics, UC Berkeley; Lecturer, University of Essex; Senior Lecturer, University of Exeter). I saw how the journal system was broken.

  • Academics constantly complain about it (but don't do anything to improve it).

  • Most conversations are not about research, but about 'who got into what journal' and 'tricks for getting your paper into journals'.

  • Open science and replicability are great, and dynamic documents make research a lot more transparent and readable. But these goals and methods are very hard to apply within the traditional journal system and its 'PDF prisons'.

Now I'm working outside academia and can stick my neck out. I have the opportunity to help fix the system. I work with research organizations and large philanthropists involved with effective altruism and global priorities. They care about the results of research in areas that are relevant to global priorities. They want research to be reliable, robust, reasoning-transparent, and well-communicated. Bringing them into the equation can change the game.


Our team

See also: Governance of The Unjournal

The Unjournal was founded by David Reinstein, who maintains this wiki/GitBook and other resources.

The information below may be outdated.

  • See our "Team page" at Unjournal.org for an UPDATED view of our team members.

  • Team members can see more details in our Coda page here.

Management Committee

(See the note on terminology.)

See description under roles.

  • David Reinstein, Founder and Co-director

  • Gavin Taylor, Interdisciplinary Researcher at IGDORE; Co-director

  • Ryan Briggs, Social Scientist and Associate Professor in the Guelph Institute of Development Studies and Department of Political Science at the University of Guelph, Canada

  • Kris Gulati, Economics PhD student at the University of California, Merced

  • Hansika Kapoor, Research Author at the Department of Psychology, Monk Prayogshala (India)

  • Tanya O'Garra, Senior Research Fellow, Institute of Environment & Sustainability, Lee Kuan Yew School of Public Policy, National University of Singapore

  • Emmanuel Orkoh, Research Scientist (fellow) at North-West University (South Africa)

  • Anirudh Tagat, Research Author at the Department of Economics at Monk Prayogshala (India)

  • Bob Kubinec, University of South Carolina

Advisory board

See description under roles.

Sam Abbott, Infectious Disease Researcher, London School of Hygiene and Tropical Medicine

Jonathan Berman, Associate Professor of Marketing, London Business School

Rosie Bettle, Applied Researcher (Global Health & Development) at Founder's Pledge

Gary Charness, Professor of Economics, UC Santa Barbara

Daniela Cialfi, Post-Doctoral Researcher in the Department of Quantitative Methods and Economic Theory at the University of Chieti (Italy)

Jordan Dworkin, Metascience Program Lead, Federation of American Scientists

Jake Eaton, Managing Editor at Asterisk Mag: writing and research on global health, development, and nutrition

Andrew Gelman, Professor of Statistics and Political Science at Columbia University (New York)

Anca Hanea, Associate Professor, University of Melbourne (Australia): expert judgment, biosciences, applied probability, uncertainty quantification

Alexander Herwix, Late-Stage PhD Student in Information Systems at the University of Cologne, Germany

Conor Hughes, PhD Student, Applied Economics, University of Minnesota

Jana Lasser, Postdoctoral researcher, Institute for Interactive Systems and Data Science at Graz University of Technology (Austria)

Nicolas Treich, Associate Researcher, INRAE, Member, Toulouse School of Economics (France)

Michael Wiebe, Data Scientist, Economist Consultant; PhD University of British Columbia (Economics)

Field Specialists

The table below shows all the members of our team (including field specialists) taking on a research-monitoring role (see here for a description of this role).

Staff, contractors, and consultants

, Research Specialist: Data science, metascience, aggregation of expert judgment

Jordan Pieters, Operations generalist

Kynan Behan, Generalist assistance

Laura Sofia-Castro, Communications (academic research/policy)

Adam Steinberg, Communications and copy-editing

Toby Weed, Communications and consulting

Nesim Sisa, Technical software support

Red Bermejo, Mikee Mercado, Jenny Siers – consulting (through Anti-Entropy) on strategy, marketing, and task management tools

We are a member of Knowledge Futures. They are working with us to update PubPub and incorporate new features (editorial management, evaluation tools, etc.) that will be particularly useful to The Unjournal and other members.

Other people and initiatives we are in touch with

Substantial advice, consultation, collaborative discussions
  • Abel Brodeur, Founder/chair of the Institute for Replication

  • The repliCATS project

  • Eva Vivalt, Assistant Professor in the Department of Economics at the University of Toronto

  • Other academic and policy economists, such as Julian Jamison, Todd Kaplan, Kate Rockett, David Rhys-Bernard, David Roodman, and Anna Dreber Almenberg

  • Cooper Smout, head of https://freeourknowledge.org/

  • Brian Nosek, Center for Open Science

  • Ted Miguel, Faculty Director, Berkeley Initiative for Transparency in the Social Sciences (BITSS)

  • Daniel Saderi, PreReview

  • Yonatan Cale, who helped me put this proposal together by asking a range of challenging questions and offering feedback

  • Daniel Lakens, Experimental Psychologist at the Human-Technology Interaction group at Eindhoven University of Technology (Netherlands), has also completed research with the Open Science Collaboration and the Peer Reviewers’ Openness Initiative

Some other people we have consulted or communicated with; details and other notes
  • Cooper Smout, FoK collaboration possibilities: through their pledges, and through an open access journal Cooper is putting together, which the Unjournal could feed into, for researchers needing a ‘journal with an impact factor’

  • Participants in the GPI seminar luncheon

  • Paolo Crosetto (Experimental Economics, French National Research Institute for Agriculture, Food and Environment) https://paolocrosetto.wordpress.com/

  • Cecilia Tilli, Foundation to Prevent Antibiotics Resistance and EA research advocate

  • Sergey Frolov (Physicist), Prof. J.-S. Caux, Physicist and head of https://scipost.org/

  • Peter Slattery, Behaviourworks Australia

  • Alex Barnes, Business Systems Analyst, https://eahub.org/profile/alex-barnes/

  • Paola Masuzzo of IGDORE (biologist and advocate of open science)

  • William Sleegers (Psychologist and Data Scientist, Rethink Priorities)

  • Nathan Young https://eahub.org/profile/nathan-young/; considering connecting The Unjournal to Metaculus predictions

  • Edo Arad https://eahub.org/profile/edo-arad/ (mathematician and EA research advocate)

  • Hamish Huggard (Data science, ‘literature maps’)

See also List of people consulted (in ACX grant proposal).

Related: EA/global priorities seminar series


Explanations & outreach

Several expositions for different audiences, fleshing out ideas and plans

TLDR: In a nutshell

Podcasts, presentations, and video

See/subscribe to our YouTube channel

Journal independent evaluation and The Unjournal

EA Anywhere (YouTube) – bridging the gap between EA and academia

  • See slide deck (Link: bit.ly/unjourrnalpresent; offers comment access)

  • Presentation summarized with time-stamped hyperlinks (plus some AI-generated content)

ReproducibiliTea podcast

Slide decks

Presentation for EA Anywhere, online event, 5 Nov. 2023, 1–2pm ET

(Link: bit.ly/unjourrnalpresent; offers comment access)

Earlier slide decks

July 2023: The slide deck below was last updated in late 2022 and needs some revision. Nonetheless, it illustrates many of the key points that remain relevant.

bit.ly/unjourrnalpresent

Nov 2022: Version targeted towards OSF/Open Science HERE

"Slaying the journals": Google doc

Earlier discussion document, aimed at EA/global priorities, academic, and open-science audiences [link]

"Moving science beyond ... static journals" ... How EA/nonprofits can help

Moving science beyond closed, binary, static journals; a proposed alternative; how the "Effective Altruist" and nontraditional nonprofit sector can help make this happen

  • 2021 A shorter outline posted on onscienceandacademia.org

EA forum posts

Plan of action

Building a "best feasible plan"..

What is this Unjournal?... See our summary.

Post-pilot goals

See the vision and broad plan presented here (and embedded below), updated August 2023.

Pilot targets

What we need our pilot (~12 months) to demonstrate
  1. We actually "do something."

  2. We can provide credible reviews and ratings that have value as measures of research quality comparable to (or better than) traditional journal systems.

  3. We identify important work that informs global priorities.

  4. We boost work in innovative, transparent, and replicable formats (especially dynamic documents).

  5. Authors engage with our process and find it useful.

  6. (As a push) Universities, grantmakers, and other arbiters assign value to Unjournal ratings.


Building a research "unjournal"

See here for proposed specifics.

Setup and team

✔️ Pilot: Building a founding committee

✔️/⏳ Define the broad scope of our research interest and key overriding principles. Light-touch, to also be attractive to aligned academics

⏳ Build "editorial-board-like" teams with subject or area expertise

Status: Mostly completed/decided for pilot phase

Create a set of rules for "submission and management"

  • Which projects enter the review system (relevance, minimal quality, stakeholders, any red lines or "musts")

    • ⏳ See here for a first pass.

  • How projects are to be submitted

  • How reviewers are to be assigned and compensated

Status: Mostly completed/decided for pilot phase; will review after initial trial

Rules for reviews/assessments

  • To be done on the chosen open platform (Kotahi/Sciety) unless otherwise infeasible (10 Dec 2022 update)

  • Share, advertise, promote this; have efficient meetings and presentations

    • Establish links to all open-access bibliometric initiatives (to the extent feasible)

  • Harness and encourage additional tools for quality assessment, considering cross-links to prediction markets/Metaculus, to coin-based 'ResearchHub', etc.

See our guidelines for evaluators.

Status: Mostly completed/decided for pilot phase; will review after the initial trial

Further steps

See our 12-month plan.

Key next steps (pasted from FTX application)

The key elements of the plan:

Build a "founding committee" of 5–8 experienced and enthusiastic EA-aligned/adjacent researchers at EA orgs, research academics, and practitioners (e.g., draw from speakers at recent EA Global meetings).

  1. Host a meeting (and shared collaboration space/document), to come to a consensus/set of practical principles.

  2. Post and present our consensus (coming out of this meeting) on key fora. After a brief "followup period" (~1 week), consider adjusting the above consensus plan in light of feedback, and repost (and move forward).

  3. Set up the basic platforms for posting and administering reviews and evaluations and offering curated links and categorizations of papers and projects. Note: I am strongly leaning towards https://prereview.org/ as the main platform, which has indicated willingness to give us a flexible ‘experimental space’. Update: Kotahi/Sciety seems a more flexible solution.

  4. Reach out to researchers in relevant areas and organizations and ask them to "submit" their work for "feedback and potential positive evaluations and recognition," and for a chance at a prize. The Unjournal will not be an exclusive outlet. Researchers are free to also submit the same work to 'traditional journals' at any point. However, whether submitted elsewhere or not, papers accepted by The Unjournal must be publicly hosted, with a DOI. Ideally the whole project is maintained and updated, with all materials, in a single location. 21 Sep 2022 status: Items 1–3 are mostly completed. We have a good working and management group. We decided on a platform and we're configuring it, and we have an interim workaround. We've reached out to researchers and organizations and gotten some good responses, but we need to find more platforms to disseminate and advertise this. We've identified and are engaging with four papers for the initial piloting. We aim to put out a larger prize-driven call soon and intake about 10 more papers or projects.

Aside: "Academic-level" work for EA research orgs (building on post at onscienceandacademia.org)

The approach below is largely integrated into the Unjournal proposal, but this is a suggestion for how organizations like RP (Rethink Priorities) might get feedback and boost credibility:

  1. Host article (or dynamic research project or 'registered report') on OSF or another place allowing time stamping & DOIs (see my resources list in Airtable for a start)

  2. Link this to PREreview (or similar tool or site) to solicit feedback and evaluation without requiring exclusive publication rights (again, see Airtable list)

  3. Directly solicit feedback from EA-adjacent partners in academia and other EA-research orgs

Next steps towards this approach:

  • Build our own systems (assign "editors") to do this without bias and with incentives

  • Build standard metrics for interpreting these reviews (possibly incorporating prediction markets)

  • Encourage them to leave their feedback through PREreview or another platform

Also: commit to publishing academic reviews, or share them in our internal group for further evaluation, reassessment, or benchmarking of the ‘PREreview’-type reviews above (perhaps taking the FreeOurKnowledge pledge relating to this).

Status: We are still working with Google Docs and building an external survey interface. We plan to integrate this with PubPub over the coming months (August/Sept. 2023)

Updates (earlier)

22 Aug 2024: We will be moving our latest updates to our main home page 'news'.

March 25 2024: Workshop: Innovations in Research Evaluation, Replicability, and Impact

Research evaluation is changing: New approaches go beyond the traditional journal model, promoting transparency, replicability, open science, open access, and global impact. You can be a part of this.

Join us on March 25 for an interactive workshop, featuring presentations from Macie Daley (Center for Open Science), David Reinstein (The Unjournal), Gary Charness (UC Santa Barbara), and The Unjournal’s Impactful Research Prize and Evaluator Prize winners. Breakout discussions, Q&A, and interactive feedback sessions will consider innovations in open research evaluation, registered revisions, research impact, and open science methods and career opportunities.

The event will be held fully online on Zoom, on March 25 from 9:00–11:30 AM (EST) and 9:30 PM–midnight (EST) to accommodate a range of time zones. UTC: 25 March, 1:00–3:30 PM and 26 March, 1:30–4:00 AM. The event is timetabled: feel free to participate in any part you wish.

See the event page here for all details, and to register.

Jan 2024: Impactful Research and Evaluation Prizes winners announced

Impactful Research Prize Winners

Aug. 30, 2023: "Pilot's done, what has been won (and learned)?"

Pilot = completed!

With the completed set of evaluations of "Do Celebrity Endorsements Matter? A Twitter Experiment Promoting Vaccination in Indonesia" and "The Governance of Non-Profits and Their Social Impact: Evidence from a Randomized Program in Healthcare in DRC,” our pilot is complete:

  • 10 research papers evaluated

  • 21 evaluations

  • 5 author responses

You can see this output most concisely in our PubPub collection here (evaluations are listed as "supplements," at least for the time being).

For a continuously updated overview of our process, including our evaluation metrics, see our "data journalism" notebook hosted here.

Just a peek at the content you can find in our lovely data notebook! Mind the interactive hover-overs etc.

Remember, we assign individual DOIs to all of these outputs (evaluations, responses, manager syntheses) and aim to get the evaluation data into bibliometric and scholarly databases. So far, Google Scholar has picked up one of our outputs. (The Google Scholar algorithm is a bit opaque—your tips are welcome.)

Following up on the pilot: prizes and seminars

We will make decisions and award our pilot Impactful Research Prize and evaluator prizes soon (aiming for the end of September). The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise). The choices will largely be driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.

Following this, we are considering holding an online workshop (that will include a ceremony for the awarding of prizes). Authors and (non-anonymous) evaluators will be invited to discuss their work and take questions. We may also hold an open discussion and Q&A on The Unjournal and our approach. We aim to partner with other organizations in academia and in the impactful-research and open-science spaces. If this goes well, we may make it the start of a regular thing.

"Impactful research online seminar": If you or your organization would be interested in being part of such an event, please do reach out; we are looking for further partners. We will announce the details of this event once these are finalized.

Other planned follow-ups from the pilot

Our pilot yielded a rich set of data and learning-by-doing. We plan to make use of this, including . . .

  • synthesizing and reporting on evaluators' and authors' comments on our process, and adapting the process to improve it;

  • analyzing the evaluation metrics for patterns, potential biases, and reliability measures;

  • "aggregating expert judgment" from these metrics;

  • tracking future outcomes (traditional publications, citations, replications, etc.) to benchmark the metrics against; and

  • drawing insights from the evaluation content, and then communicating these (to policymakers, etc.).

The big scale-up

Evaluating more research: prioritization

We continue to develop processes and policies around "which research to prioritize." For example, we are discussing whether we should set targets for different fields, for related outcome "cause categories," and for research sources. We intend to open up this discussion to the public to bring in a range of perspectives, experience, and expertise. We are working towards a grounded framework and a systematic process to make these decisions. See our expanding notes, discussion, and links on "what is global-priorities relevant research?"

We are still inviting applications for the paid standalone project to help us build these frameworks and processes. Our next steps:

  1. Building our frameworks and principles for prioritizing research to be evaluated, a coherent approach to implementation, and a process for weighing and reassessing these choices. We will incorporate previous approaches and a range of feedback. For a window into our thinking so far, see our "high-level considerations" and our practical prioritization concerns and goals.

  2. Building research-scoping teams of field specialists. These will consider agendas in different fields, subfields, and methods (psychology, RCT-linked development economics, etc.) and different topics and outcomes (global health, attitudes towards animal welfare, social consequences of AI, etc.). We begin to lay out possible teams and discussions here (the linked discussion spaces are private for now, but we aim to make things public whenever feasible). These "field teams" will

    • discuss and report on the state of research in their areas, including where and when relevant research is posted publicly, and in what state;

    • assess the potential for Unjournal evaluation of this work, as well as when and how we should evaluate it, considering potential variations from our basic approach; and

    • determine how to prioritize work in this area for evaluation, reporting general guidelines and principles, and informing the aforementioned frameworks.

    Most concretely, the field teams will divide the space of research to be scoped and prioritized among their members.

Growing The Unjournal Team

Our previous call for field specialists is still active. We received a lot of great applications and strong interest, and we plan to send out invitations soon. But the door is still open to express interest!

New members of our team: We welcome Rosie Bettle (Founders Pledge) to our advisory board as a field specialist.

Improving the evaluation process and metrics

As part of our scale-up (and in conjunction with supporting PubPub on their redesigned platform), we're hoping to improve our evaluation procedure and metrics. We want to make these clearer to evaluators, more reliable and consistent, and more useful and informative to policymakers and other researchers (including meta-analysts).

We don't want to reinvent the wheel (unless we can make it a bit more round). We will be informed by previous work, such as:

  • existing research into the research evaluation process, and on expert judgment elicitation and aggregation;

  • practices from projects like RepliCATS/IDEAS, PREreview, BITSS Open Policy Analysis, the “Four validities” in research design, etc.; and

  • metrics used (e.g., "risk of bias") in systematic reviews and meta-analyses as well as databases such as 3ie's Development Evidence Portal.

Of course, our context and goals are somewhat distinct from the initiatives above.

We also aim to consult potential users of our evaluations as to which metrics they would find most helpful.

(A semi-aside: The choice of metrics and emphases could also empower efforts to encourage researchers to report policy-relevant parameters more consistently.)

We aim to bring a range of researchers and practitioners into these questions, and to engage in public discussion. Please reach out.

"Spilling tea"

Yes, I was on a podcast, but I still put my trousers on one arm at a time, just like everyone else! Thanks to Will Ngiam for inviting me (David Reinstein) on "ReproducibiliTea" to talk about "Revolutionizing Scientific Publishing" (or maybe "evolutionizing" ... if that's a word?). I think I did a decent job of making the case for The Unjournal, in some detail. Also, listen to find out what to do if you are trapped in a dystopian skating rink! (And find out what this has to do with "advising young academics.")

I hope to do more of this sort of promotion: I'm happy to go on podcasts and other forums to answer questions about The Unjournal, respond to doubts you may have, consider your suggestions, and discuss alternative initiatives.

Some (other) ways to follow The Unjournal's progress

  • Check out our PubPub page to read evaluations and author responses.

  • Follow @GivingTools (David Reinstein) on Twitter or Mastodon, or the hashtag #unjournal (when I remember to use it).

  • Visit Action and progress for an overview.

MailChimp link: Sign up below to get these progress updates in your inbox about once per fortnight, along with opportunities to give your feedback.

Alternatively, fill out this quick survey to get this newsletter and tell us some things about yourself and your interests. The data protection statement is linked here.

Progress notes since last update

Progress notes: We will keep track of important developments here before incorporating them into the rest of this knowledge base. Members of the UJ team can add further updates here or in this linked Gdoc; we will incorporate changes.

See also Previous updates

Hope these updates are helpful. Let me know if you have suggestions.

The case, the basic idea

  • "Economists want to see changes to their peer review system. Let’s do something about it." (CEPR)

  • "The big idea: should we get rid of the scientific paper?" (Stuart Ritchie, the Guardian)

  • Wyclif's Dust: "A Journal is just a Twitter feed"

  • "Peer Review: Implementing a 'publish, then review' model of publishing" (eLife)

  • "Unjournal: Evaluations of 'Artificial Intelligence and Economic Growth', and new hosting space" (EA Forum)

Key links
  • Why Unjournal?

  • Guidelines for evaluators

  • Explanations & outreach, Slide deck

  • Our evaluation packages on PubPub

  • How to get involved

  • Unjournal.org (public-facing home page)

You can also search and query this GitBook (press Control-K or Command-K).
