Open Science Collaboration — psychological science replication rate
- Source class: Scientific papers
- Metric: Replication rate + effect-size shrinkage
- Reported value: 36% of replications produced a statistically significant result (vs. 97% in originals); mean effect size halved on replication
- Measured: 2015-08-28
Context
Landmark large-scale replication of 100 psychology experiments originally published in three top journals. The findings provide a base rate against which any future per-paper or per-journal calibration claim must be evaluated. Comparable replication studies in economics (Camerer et al. 2016) and the biomedical sciences offer cross-discipline context.
Citation
Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science 349 (6251), aac4716.
https://doi.org/10.1126/science.aac4716
What Phase 1 launch will add
Calibration Ledger has not independently recomputed the value above. Phase 1 launch (target Q3 2027, gated on prerequisites) will add the following for this source class:
- Independent recomputation from the original outcome data, under data-licensing agreement
- Time-windowed breakdown (rolling 3-month, 12-month, lifetime)
- Cross-domain calibration (does this source calibrate uniformly across topical verticals?)
- Append-only timestamp anchoring of every score so retroactive revisions are visible (see the hash-chain sketch after this list)
- Per-source citation page with full Murphy decomposition (Reliability − Resolution + Uncertainty; see the decomposition sketch after this list)
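The Murphy decomposition named above splits the mean Brier score into three interpretable terms: reliability (calibration error), resolution (how far conditional outcome frequencies move from the base rate), and uncertainty (the variance of the base rate itself). A minimal sketch in Python, assuming binary outcomes and equal-width probability bins; the function name and binning choice are illustrative, not Calibration Ledger's implementation:

```python
import numpy as np

def murphy_decomposition(forecasts, outcomes, n_bins=10):
    """Decompose the mean Brier score as Reliability - Resolution + Uncertainty
    (Murphy 1973), binning forecasts into equal-width probability bins.

    forecasts : array of probabilities in [0, 1]
    outcomes  : array of {0, 1} event indicators
    """
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    n = len(forecasts)
    base_rate = outcomes.mean()

    # Assign each forecast to a bin; clip so p == 1.0 lands in the top bin.
    bins = np.minimum((forecasts * n_bins).astype(int), n_bins - 1)

    reliability = 0.0
    resolution = 0.0
    for k in range(n_bins):
        mask = bins == k
        n_k = mask.sum()
        if n_k == 0:
            continue
        p_k = forecasts[mask].mean()   # mean forecast in bin k
        o_k = outcomes[mask].mean()    # observed event frequency in bin k
        reliability += n_k * (p_k - o_k) ** 2
        resolution += n_k * (o_k - base_rate) ** 2
    reliability /= n
    resolution /= n
    uncertainty = base_rate * (1.0 - base_rate)

    brier = np.mean((forecasts - outcomes) ** 2)
    return brier, reliability, resolution, uncertainty
```

Note that with continuous forecasts the binned decomposition is approximate: reliability − resolution + uncertainty matches the Brier score exactly only when all forecasts within a bin share the bin's mean value; otherwise small within-bin variance terms remain.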
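Append-only anchoring can be as simple as hash-chaining each score record to its predecessor, so that any retroactive revision changes every subsequent hash. A minimal in-memory sketch, again with hypothetical names; a production ledger would also anchor the chain head to an external timestamping service:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_record(ledger, record):
    """Append a score record to a list-backed ledger, chaining each entry
    to the SHA-256 hash of its predecessor."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {
        "record": record,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

def verify(ledger):
    """Recompute every hash; returns False if any past record was altered."""
    prev = "0" * 64
    for entry in ledger:
        body = {k: entry[k] for k in ("record", "timestamp", "prev_hash")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True
```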
Other findings in the same source class
- Camerer et al. — social science experiment replication (Nature/Science 2010-2015) — Replication rate + median effect-size shrinkage
- Hausfather et al. — climate model projections vs. observed warming — Implied transient climate response error; observed-vs-projected warming
All other findings
- Good Judgment Project Superforecasters (Human forecasters)
- Metaculus community-prediction aggregate (Forecaster aggregator platform)
- Manifold Markets — platform calibration (Prediction market)
- GPT-4 (OpenAI) — pre-RLHF vs post-RLHF calibration (AI models)
- Sell-side equity analysts — earnings forecast accuracy (Analyst firms)
- Anthropic — Claude / language model self-knowledge (AI models)
- Federal Reserve Survey of Professional Forecasters — GDP / inflation accuracy (Analyst firms)
Related
- All beta findings — at-a-glance + JSON + BibTeX exports
- Methodology v1.1 — full Brier + Murphy + append-only framework
- Operator track record — methodology applied to Paulo de Vries’s own dated forecasts
- Source classes — what each of the 6 source classes will score at Phase 1
- Roadmap — milestone status + Q3 2027 launch gate + kill criterion
Last verified: 2026-04-28. Cited; Calibration Ledger has not independently recomputed this finding. Independent recomputation in Phase 1 (Q3 2027). Operator: Paulo de Vries. Contact: contact@calibrationledger.com.