Choosing evaluators (considerations)

Things we consider in choosing evaluators (i.e., 'reviewers')

  1. Did the people who suggested the paper suggest any evaluators?

  2. We prioritize our "evaluator pool" (people who signed up; see "how to get involved")

  3. Expertise in the aspects of the work that need evaluation

  4. Interest in the topic/subject

  5. Conflicts of interest (especially co-authorships)

  6. Secondary concerns: likely alignment and engagement with The Unjournal's priorities; good writing skills; and the time and motivation to write the evaluation promptly and thoroughly.

Avoiding COI

Mapping collaborator networks through Research Rabbit

We use a website called Research Rabbit (RR).

Our RR database contains papers we are considering evaluating. To check potential COI, we use the following steps:

  1. After choosing a paper, we select the "these authors" button. This presents all the authors of that paper.

  2. After this, we choose "select all," and click "collaborators." This presents all the people who have collaborated on papers with the authors.

  3. Finally, by using the "filter" function, we can determine whether the potential evaluator has ever collaborated with an author from the paper.

  4. If a potential evaluator has no COI, we will add them to our list of possible evaluators for this paper.

Note: Coauthorship is not a disqualifier for a potential evaluator; however, we think it should be avoided where possible. If it cannot be avoided, we will note it publicly.
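
As a rough sketch of the logic behind this check (purely illustrative: the function and data below are hypothetical, and in practice the check is done through Research Rabbit's interface rather than in code):

```python
# Hypothetical sketch of the COI check: a candidate evaluator is flagged
# if they appear among the collaborators of any author of the paper.
# (Illustrative only; we do this via Research Rabbit's "collaborators"
# and "filter" features, not in code.)

def has_coi(candidate: str, collaborators_by_author: dict[str, set[str]]) -> bool:
    """Return True if the candidate has co-authored with any author of the paper."""
    all_collaborators = set().union(*collaborators_by_author.values())
    return candidate in all_collaborators

# Entirely made-up example data:
collaborators = {
    "Author A": {"J. Smith", "X. Chen"},
    "Author B": {"J. Smith", "M. Garcia"},
}

print(has_coi("J. Smith", collaborators))   # True  -> note the COI; avoid if possible
print(has_coi("P. Okafor", collaborators))  # False -> add to the evaluator shortlist
```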

Status, expenses, and payments

Our status

The Unjournal is now an independent 501(c)(3) organization. We have new (and hopefully simpler and easier) systems for submitting expenses.

Submitting for payments and expenses

Evaluators: to claim your payment for evaluation work, please complete this very brief form.

You will receive your payment via a Wise transfer (they may ask you for your bank information if you don't have an account with them).

We aim to process all payments within one week.

Confidentiality: Please note that even though you are asked to provide your name and email, your identity will be visible only to The Unjournal administrators, for the purposes of making this payment. The form also asks for the title of the paper you are evaluating; if you are uncomfortable providing this, please let us know and we can find another approach.

Anonymity and 'salted hash' codes

This information should be moved to a different section

Why do we call it a 'salted hash'?

The 'hash' itself is a one-way encryption of either your name or email. We store this information in a database shared only internally at The Unjournal. If you are asking for full anonymity, this information is kept only on the hard drives of our co-managers and operations RA (and potentially the evaluator). But if we used a plain hash, anyone who knows your name or email could potentially 'check' whether it pertained to you. That's why we 'salt' it: before encrypting, we add an extra bit of 'salt', a password known only to our co-managers and operations RA. This better protects your anonymity.
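
As a rough illustration of the idea (a minimal sketch: the hash function shown, SHA-256, and the way the salt is generated and stored here are assumptions, not a description of our exact setup):

```python
import hashlib
import secrets

def salted_hash(identifier: str, salt: str) -> str:
    """One-way hash of a name or email; without the secret salt,
    outsiders cannot check a guess against the stored code."""
    return hashlib.sha256((salt + identifier.strip().lower()).encode("utf-8")).hexdigest()

# The salt is generated once and known only to the co-managers / operations RA.
salt = secrets.token_hex(16)
code = salted_hash("evaluator@example.org", salt)
print(code)  # a hex digest that cannot be reversed to recover the email
```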

What bank/payment information might we need?

  • Type: ABA [or?]

  • Account holder: name

  • Email:

  • Abartn (ABA routing number): ?????????

  • Address: first line, city, state, country, post code

  • Legal type: PRIVATE

  • Account type: CHECKING [or ?]

  • Account number: ...
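
Purely for illustration, these fields might be grouped into a structured record along these lines (field names mirror the list above; this is a hypothetical sketch, not an official Wise API payload):

```python
# Hypothetical grouping of the payment details listed above
# (illustrative only; not an exact Wise API schema).
recipient = {
    "type": "ABA",                 # US bank transfer routed by ABA number
    "account_holder": "Full Name",
    "email": "evaluator@example.org",
    "abartn": "*********",         # ABA routing transit number (9 digits)
    "legal_type": "PRIVATE",       # an individual, not a business
    "account_type": "CHECKING",    # or SAVINGS
    "account_number": "...",
    "address": {
        "first_line": "...",
        "city": "...",
        "state": "...",
        "country": "...",
        "post_code": "...",
    },
}
```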

Additional invoice information

Management details [mostly moved to Coda]

9 Apr 2024: This section outlines our management structure and policies. More detailed content is being moved to our private (Coda.io) knowledge base.

Tech, tools and resources has been moved to its own section

Tech, tools and resources

Governance of The Unjournal

Updated 11 Jan 2023

Administrators, accounts

The official administrators are David Reinstein (working closely with the Operations Lead) and Gavin Taylor; both have control and oversight of the budget.

Roles: Founding and management committee

Major decisions are made by majority vote by the Founding Committee (aka the ‘Management Committee’).

Members:

Roles: Advisory board

Advisory board members are kept informed and consulted on major decisions, and relied on for particular expertise.

Advisory Board Members:

Evaluation manager process

Update Feb. 2024: We are moving the discussion of the details of this process to an internal Coda link (here, accessible by team members only). We will present an overview in broad strokes below.

See also Mapping evaluation workflow for an overview and flowchart of our full process (including the evaluation manager role).

Compensation: As of Dec. 2023, evaluation managers are compensated a minimum of $300 per project, and up to $500 for detailed work. Further work on 'curating' the evaluation (engaging further with authors and evaluators, writing detailed evaluation summary content, etc.) can earn up to an additional $200, so a detailed project with full curation work could total up to $700.

If you are the evaluation manager, please follow the process described in our private Coda space here.

In brief, evaluation managers:

  1. Engage with our previous discussion of the paper: why we prioritized this work, what sort of evaluators would be appropriate, and what to ask them to do.

  2. Inform and engage with the paper's authors, asking them for updates and requests for feedback. The process varies depending on whether the work is part of our "Direct evaluation" track or whether we require authors' permission.

  3. Find potential evaluators with relevant expertise and contact them. We generally seek two evaluators per paper.

  4. Suggest research-specific issues for evaluators to consider. Guide evaluators on our process.

  5. Read the evaluations as they come in, suggest additions or clarifications if necessary.

  6. Rate the evaluations for awards and bonus incentives.

  7. Share the evaluations with the authors, requesting their response.

  8. Optionally, provide a brief "evaluation manager's report" (synthesis, discussion, implications, process) to accompany the evaluation package.

See also: Protecting anonymity

Some other important details

  1. We give the authors two weeks to respond before publishing the evaluation package (and they can always respond afterwards).

  2. Once the evaluations are up on PubPub, reach out to the evaluators again with the link, in case they want to view their evaluation and the others. The evaluators may be allowed to revise their evaluation, e.g., if the authors find an oversight in it. (We are working on a policy for this.)

  3. At the moment (Nov. 2023) we don't have an explicit 'revise and resubmit' procedure as part of the process. Authors are encouraged to share changes they plan to make, along with a (perma)link to where their revisions can be found. They are also welcome to independently (re)submit an updated version of their work for a later Unjournal evaluation.

See also: Choosing evaluators (considerations)

UJ Team: resources, onboarding

This page should explain, or link to, clear and concise explanations of the key resources, tools, and processes relevant to members of The Unjournal team and others involved.

5 Sep 2024: Much of the information below is out of date. We have moved most of this content to our internal (Coda) system (but may move some of it back into hidden pages here to enable semantic search)

See also (and integrate): Jordan's 'Onboarding notes'

Management team and administrators

The main platforms for the management team are outlined below with links provided.

Slack group and channels

Please ask for group access, as well as access to private channels, especially "management-policies". Each channel should have a description and some links at the top.

Airtable

We are no longer using Airtable; the process and instructions have been moved into Coda.

GitBook (edit access optional)

See Tech scoping

Management team: You don't need to edit the GitBook if you don't want to, but we're trying to use it as our main place to 'explain everything' to ourselves and others. We will try to link all content here. Note you can use 'search' and 'lens' to look for things.

PubPub

Access to PubPub is mainly needed for 'full-service evaluation manager work'.

Link to our PubPub page

Google drive: Gdocs and Gsheets

Please ask for access to this drive. It contains meeting notes, discussions, grant applications, and tech details.

Link to our Google Drive

Open Collective Foundation

This is for submitting invoices for your work.

Link to our OCF account

Advisory board

The main platforms needed for the advisory board are outlined below with links provided.

Slack group and channels

Members of the advisory board can join our Slack (if they want). They can have access to private channels (subject to ) other than the 'management-policies' channel.

Airtable: with discretion

We are no longer using Airtable (except to recover some older content); the process and instructions have been moved into Coda.io.

Evaluation managers/managing evaluations

In addition to the management team platforms explained above, additional information for how to use the platforms specifically for managing evaluations is outlined below.

Airtable

We are no longer using Airtable; the process and instructions have been moved into Coda.

PubPub

Link to our PubPub page

For details on our current PubPub process, please see this Google Doc; it is in the Google Drive under "hosting and tech".

Research-linked contractors

Evaluators

Guidelines for evaluators

Authors

Guidelines for evaluators

Notes:

  1. Airtable: Get to know its features; it's super-useful. E.g., 'views' provide different pictures of the same information. 'Link' field types connect different tables by their primary keys, allowing information and calculations to flow back and forth.

  2. Airtable table descriptions: you can view each table's description by hovering over the '(i)' symbol for its tab. Many of the columns in each tab also have descriptions.

  3. Additional Airtable security: We also keep more sensitive information in this Airtable encrypted, or move it to a separate table that only David Reinstein has access to.

  4. Use discretion in sharing: advisory board members might be authors, evaluators, job candidates, or part of external organizations we may partner with.

Policies/issues discussion

This page is mainly for The Unjournal management, advisory board and staff, but outside opinions are also valuable.

Unjournal team members:

  • Priority 'ballot issues' are given in our 'Survey form', linked to the Airtable (ask for link)

  • Key discussion questions are in the broad_issue_stuff view in the questions table, linking to discussion Google Docs

Considering papers/projects

Direct-evaluation track: when to proceed with papers that have "R&R's" at a journal?

'Policy work' not (mainly) intended for academic audiences?

We are considering a second stream to evaluate non-traditional, less formal work, not written with academic standards in mind. This could include the strongest work published on the EA Forum, as well as a range of further applied research from EA/GP/LT linked organizations such as GPI, Rethink Priorities, Open Philanthropy, FLI, HLI, Faunalytics, etc., as well as EA-adjacent organizations and relevant government white papers. See comments here; see also Pete Slattery’s proposal here, which namechecks the Unjournal.

We further discuss the case for this stream and sketch and consider some potential policies for this HERE.

Evaluation procedure and guidelines

Internal discussion space: Unjournal Evaluator Guidelines & Metrics

Feedback and discussion vs. evaluations

DR: I suspect that signed reviews (cf. blog posts) provide good feedback and evaluation. However, when it comes to rating (quantitative measures of a paper's value), my impression from existing initiatives and conversations is that people are reluctant to award anything less than 5/5 'full marks'.

Why Single-blind?

  • Power dynamics: referees don't want to be 'punished' and may want to flatter powerful authors

  • Connections and friendships may inhibit honesty

  • 'Powerful referees signing critical reports' could hurt ECRs

Why signed reports?

  • Public reputation incentive for referees

    • (But note single-blind paid review has some private incentives.)

  • Fosters better public dialogue

  • Inhibits obviously unfair and impolite 'trashing'

Compromise approaches

  • Author and/or referee choose whether it should be single-blind or signed

  • Random trial: We can compare empirically (are signed reviews less informative?)

  • Use a mix (1 signed, 2 anonymous reviews) for each paper

Anonymity of evaluators

We may revisit our "evaluators decide if they want to be anonymous" policy. Changes will, of course, never apply retroactively: we will carefully keep our promises. However, we may consider requesting that certain evaluators/evaluations specifically be anonymous, or that evaluators publish their names. A mix of anonymous and signed reviews might be ideal, leveraging some of the benefits of each.

Which metrics and predictions to ask, and how?

We are also researching other frameworks, templates, and past practices; we hope to draw from validated, theoretically grounded projects such as RepliCATS.

Discussion amongst evaluators, initial and revised judgments?

See the 'IDEAS protocol' and Marcoci et al., 2022

Revisions as part of process?

Timing of releasing evaluations

Should we wait until all commissioned evaluations are in, as well as authors' responses, and release these as a group, or should we sometimes release a subset of these if we anticipate a long delay in others? (If we did this, we would still stick by our guarantee to give authors two weeks to respond before release.)

Non-Anonymity of Managing editors

Considerations

My memory is that when submitting a paper, I usually learn who the senior editor was but not the managing editor. But there are important differences in our case. At a traditional journal, the editors make an 'accept/reject/R&R' decision, and the referee's role is technically an advisory one. In our case, there is no such decision to be made. For The Unjournal, MEs choose evaluators, correspond with them, explain our processes, possibly suggest what aspects to evaluate, and perhaps put together a quick summary of the evaluations to be bundled into our output. But we don't make any 'accept/reject/R&R' decisions; once the paper is in our system and on our track, there should be a fairly standardized approach. Because of this, my thinking is:

  1. We don't really need so many 'layers of editor': a single Managing Editor (or co-MEs) who consults other people on the UJ team informally should be enough.

  2. ME anonymity is probably not necessary; there is less room for COI, bargaining, pleading, reputation issues, etc.

Presenting and hosting our output

Use of Hypothes.is and collaborative annotation

Communication and style

Style

To aim for consistency of style in all UJ documentation, a short style guide for the GitBook has been posted here. Feel free to suggest changes or additions using the comments. Note this document, like so many, is under construction and likely to change without notice. The plan is to make use of it for any outward-facing communications.

Management Committee
Advisory board

Research scoping discussion spaces

15 Aug 2023: We are organizing some meetings and working groups, and building some private spaces ... where we are discussing 'which specified research themes and papers/projects we should prioritize for UJ evaluation.'

This is guided by concerns we discuss in other sections (e.g., 'what research to target', 'what is global priorities relevant research')

Research we prioritize, along with short comments and ratings on its prioritization, is currently maintained in our Airtable database (under 'crucial_research'). We consider 'who covers and monitors what' (in our core team) in the 'mapping_work' table. This exercise suggested some loose teams and projects. I link some (private) Gdocs for those project discussions below. We aim to make a useful discussion version/interface public when this is feasible.

Team members and field specialists: You should have access to a Google Doc called "Unjournal Field Specialists+: Proposed division (discussion), meeting notes", where we are dividing up the monitoring and prioritization work.

Some of the content in the sections below will overlap.

General discussions of prioritization

Unjournal: Which research? How to prioritize/process it?

Development economics, global health, adjacent

  1. NBER, CEPR, etc: 'Who covers what'?

  2. 'Impactful, Neglected, Evaluation-Tractable' work in the global health & RCT-driven intervention-relevant part of development economics

  3. Mental health and happiness; HLI suggestions

  4. GiveWell-specific recommendations and projects

  5. Governance/political science

  6. Global poverty: Macro, institutions, growth, market structure

  7. Evidence-based policy organizations, their own assessments and syntheses (e.g., 3ie)

  8. How to consider and incorporate adjacent work in epidemiology and medicine

Economics as a field, sub-areas

  1. Syllabi (and ~agendas): Economics and global priorities (and adjacent work)

  2. Microeconomic theory and its applications? When/what to consider?

Animal welfare

  1. The economics of animal welfare (market-focused; 'ag econ'), implications for policy

  2. Attitudes towards animals/animal welfare; behavior change and 'go veg' campaigns

  3. Impact of political and corporate campaigns

The environment

  1. Environmental economics and policy

Psychology and 'attitudes/behavioral'

  1. Unjournal/Psychology research: discussion group: How can UJ source and evaluate credible work in psychology? What to cover, when, who, with what standards...

  2. Moral psychology/psychology of altruism and moral circles

Innovation, scientific progress, technology

  1. Innovation, R&D, broad technological progress

  2. Meta-science and scientific productivity

  3. Social impact of AI (and other technology)

  4. Techno-economic analysis of impactful products (e.g., cellular meat, geo-engineering)

Catastrophic risks (economics, social science, policy)

  1. Pandemics and other biological risks

  2. Artificial intelligence; AI governance and strategy (is this in the UJ wheelhouse?)

  3. International cooperation and conflict

Applied research/Policy research stream

See discussion here.

Other

  1. Long term population, growth, macroeconomics

  2. Normative/welfare economics and philosophy (should we cover this?)

  3. Empirical methods (should we consider some highly-relevant subset, e.g., meta-analysis?)

Mapping evaluation workflow