<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[In One Lifetime: Meta Science]]></title><description><![CDATA[Writings on meta-science.]]></description><link>https://www.paullitvak.com/s/meta-science</link><image><url>https://substackcdn.com/image/fetch/$s_!hCOn!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09b955cb-1b0a-48c0-9114-da518a90c6b7_1070x1426.jpeg</url><title>In One Lifetime: Meta Science</title><link>https://www.paullitvak.com/s/meta-science</link></image><generator>Substack</generator><lastBuildDate>Tue, 05 May 2026 09:23:11 GMT</lastBuildDate><atom:link href="https://www.paullitvak.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Paul Litvak]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[phowa@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[phowa@substack.com]]></itunes:email><itunes:name><![CDATA[Paul Litvak]]></itunes:name></itunes:owner><itunes:author><![CDATA[Paul Litvak]]></itunes:author><googleplay:owner><![CDATA[phowa@substack.com]]></googleplay:owner><googleplay:email><![CDATA[phowa@substack.com]]></googleplay:email><googleplay:author><![CDATA[Paul Litvak]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[We can create the future of science right now]]></title><description><![CDATA[Most of the parts we need are already being built]]></description><link>https://www.paullitvak.com/p/we-can-create-the-future-of-science</link><guid isPermaLink="false">https://www.paullitvak.com/p/we-can-create-the-future-of-science</guid><dc:creator><![CDATA[Paul Litvak]]></dc:creator><pubDate>Wed, 29 Apr 2026 16:43:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LgLY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F647ef08d-844b-4545-a518-fbf21f3a4a56_1016x1060.png" length="0" type="image/png"/><content:encoded><![CDATA[<h3>The bottleneck</h3><p>At this point it is uncontroversial to say that science needs to stop using the PDF article as the unit of knowledge and currency. The unbearable slowness of scientific publishing, the profit motives and margins of publishers: I&#8217;m not saying anything new. The PDF also sucks because it&#8217;s hard to extract structured information from, which makes it hard to do evidence synthesis. As a result, we do much less evidence synthesis than is needed. And evidence synthesis ultimately undergirds most policy and medical decision making. I can see second-by-second, real-time odds for any sporting or newsworthy event, but a school board can&#8217;t see the best evidence on whether their 8th graders should be taught algebra. As a society we don&#8217;t treat this as an important problem. Again, not controversial. </p><p>Not only is the problem well understood, but the solution has already been laid out. What we need is AI-assisted living evidence synthesis - (1) an open knowledge graph of atomic claims, (2) claims linked to evidence, (3) assessment and synthesis of each piece of evidence, and (4) continuous updating with new data.
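</p><p>To make this concrete, here is a minimal sketch of what one node in such a knowledge graph might look like. This is my own illustration of the four components, not an existing standard or anyone&#8217;s actual schema:</p><pre><code># A minimal sketch of one record in a living evidence graph.
# Illustrative only: the field names are invented for this post.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Evidence:
    paper_doi: str        # source paper for this piece of evidence
    effect_size: float    # standardized effect estimate
    variance: float       # sampling variance of that estimate
    quality_score: float  # 0-1 summary of audit/review/reproducibility checks
    assessed_on: date     # when the checks last ran

@dataclass
class Claim:
    text: str                              # the atomic claim itself
    evidence: list[Evidence] = field(default_factory=list)
    synthesis: float = 0.0                 # current weighted summary estimate
    last_updated: date = date(1970, 1, 1)  # touched whenever new evidence arrives
</code></pre><p>Each component maps onto part of this record: extraction fills in the claim and its linked evidence, the audit and review steps set the quality score, and synthesis plus continuous updating maintain the summary estimate and its timestamp.</p>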
</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.paullitvak.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading In One Lifetime! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>What few realize (yet) is that the technical capacity to build this vision for a significant portion of science already exists. Not only that&#8212; scientists and startup teams are already building many of these components. I know this because I&#8217;ve been surveying the space and talking to many of the builders. There are some missing pieces: for example evaluations of how well some of the components work. But at this point most of what&#8217;s missing is a fully end to end working integration of all of these parts. In the rest of this essay, I&#8217;m going to lay out all the parts of a working living evidence layer for science and who is working on them, and propose concrete next steps for building this system.</p><h3>What&#8217;s now possible</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LgLY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F647ef08d-844b-4545-a518-fbf21f3a4a56_1016x1060.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LgLY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F647ef08d-844b-4545-a518-fbf21f3a4a56_1016x1060.png 424w, https://substackcdn.com/image/fetch/$s_!LgLY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F647ef08d-844b-4545-a518-fbf21f3a4a56_1016x1060.png 848w, https://substackcdn.com/image/fetch/$s_!LgLY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F647ef08d-844b-4545-a518-fbf21f3a4a56_1016x1060.png 1272w, https://substackcdn.com/image/fetch/$s_!LgLY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F647ef08d-844b-4545-a518-fbf21f3a4a56_1016x1060.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LgLY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F647ef08d-844b-4545-a518-fbf21f3a4a56_1016x1060.png" width="1016" height="1060" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/647ef08d-844b-4545-a518-fbf21f3a4a56_1016x1060.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1060,&quot;width&quot;:1016,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:160607,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.paullitvak.com/i/195463251?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F647ef08d-844b-4545-a518-fbf21f3a4a56_1016x1060.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LgLY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F647ef08d-844b-4545-a518-fbf21f3a4a56_1016x1060.png 424w, https://substackcdn.com/image/fetch/$s_!LgLY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F647ef08d-844b-4545-a518-fbf21f3a4a56_1016x1060.png 848w, https://substackcdn.com/image/fetch/$s_!LgLY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F647ef08d-844b-4545-a518-fbf21f3a4a56_1016x1060.png 1272w, https://substackcdn.com/image/fetch/$s_!LgLY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F647ef08d-844b-4545-a518-fbf21f3a4a56_1016x1060.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The diagram above outlines the components of a living evidence synthesis platform, including some of the teams working on each component<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. Scientific PDFs are processed into claims with associated evidence. 
The evidence is subjected to a forensic audit, a methodological evaluation, and robustness and reproducibility checks. Finally, it&#8217;s given a weight in a continuously updating synthesis. What follows is a description and status of each component and a few of the teams working on them.</p><h4>Document understanding</h4><p>The first thing you need to be able to do is turn an article into structured data. Mostly that means parsing PDFs. There are often multicolumn layouts that confuse non-specialized PDF-to-text processing libraries. For scientific papers, there is the added complexity of parsing formulas and tables and figures. This is a really hot area - there are startups offering APIs, and it seems like a new open-source package gets posted to GitHub every few weeks. What follows isn&#8217;t exhaustive. A package called <a href="https://github.com/kermitt2/grobid">GROBID</a> was the state of the art for a while, though it went nearly two years without an update until very recently. In the meantime <a href="https://reducto.ai/">reducto.ai</a> released an AI-powered PDF extraction API, <a href="https://github.com/PaddlePaddle/PaddleOCR">PaddleOCR</a> became popular, IBM released an open-source toolkit called <a href="https://github.com/docling-project/docling">Docling</a>, and both <a href="https://mistral.ai/">Mistral</a> and <a href="https://ai.google.dev/gemini-api/docs/document-processing">Gemini</a> shipped document-processing models and libraries. I also know of at least one other well-funded psychology research group working on a paper parser. By contrast, there are few open evals in this space, with no extensive evals for complex table comprehension in particular. Nonetheless, I&#8217;m confident this will be a solved problem soon, given the combination of LLM advances and developer interest.</p><h4>Hypothesis-level extraction</h4><p>There has been increasing interest in comprehending the extracted text of papers and linking information to evidence for each hypothesis. A lot of work has already been done. <a href="https://github.com/ijmarshall/trialstreamer">Trialstreamer</a> (<a href="https://academic.oup.com/jamia/article/27/12/1903/5907063">Marshall et al. 2020</a>) and <a href="https://pypi.org/project/robotreviewer/">RobotReviewer LIVE</a> (Marshall et al. 2023) demonstrated automated extraction of trial population, intervention, and outcome at scale on clinical RCTs. <a href="https://github.com/Future-House/paper-qa">PaperQA2</a> (<a href="https://arxiv.org/abs/2409.13740">Skarlinski et al. 2024</a>) and <a href="https://scholarqa.allen.ai/">Ai2 ScholarQA</a> (2024) extended this to retrieval-augmented question answering with citation grounding. <a href="https://elicit.com/">Elicit</a>, <a href="https://consensus.app/">Consensus</a>, and <a href="https://scispace.com/">SciSpace</a> operationalized claim-level extraction for end users. <a href="https://github.com/OpenEvalProject/evals">OpenEval</a> (<a href="https://www.biorxiv.org/content/10.64898/2026.01.30.702911v1">Booeshaghi et al. 2026</a>) is the most recent and most ambitious: 1.96 million atomic claims extracted from 16,087 eLife manuscripts using Claude Sonnet 4.5, grouped into ~299,000 results, with LLM evaluations showing 81% agreement with human peer review on a 2,487-paper subset. None of these solutions link claims to test statistics, which you would need in order to evaluate randomized controlled trials.
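</p><p>As an illustration, the kind of structured record such an extractor might emit for a single hypothesis could look roughly like this. The schema is hypothetical, my own sketch rather than the output format of any tool named above:</p><pre><code># Hypothetical example of a claim record that links a hypothesis to its
# test statistics. Field names and values are invented for illustration.
extracted_claim = {
    "paper_doi": "10.1234/example.5678",  # placeholder identifier
    "hypothesis": "Intervention X raises outcome Y relative to control",
    "design": "randomized controlled trial",
    "sample_size": 240,
    "test": {"type": "t", "statistic": 2.41, "df": 238},
    "p_value": 0.017,
    "effect_size": {"metric": "cohen_d", "value": 0.31},
}
</code></pre><p>Records like this, one per hypothesis, are what the downstream audit and synthesis steps would consume.</p>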
<p>This is why I built the <a href="https://evidence.guide/">evidence.guide</a> API - to extract hypotheses and associated test statistics from behavioral science papers. The best public eval of this kind of extraction I&#8217;m aware of comes from the recent <a href="https://www.darpa.mil/program/systematizing-confidence-in-open-research-and-evidence">SCORE project</a>, which had humans extract the claims from thousands of psychology papers by hand. It would be extremely helpful to the world if all scientific PDFs were available as structured open data. I&#8217;ve been working to make this happen, both directly at Berkeley and through coordination with large entities I can&#8217;t yet speak of; as hard as it is to do, I think it&#8217;s possible<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><h4>Forensic audit</h4><p>A lot of work has been done on forensic audit, but some gaps remain. Of course, for biology papers that rely on images for evidence, there are a variety of tools (notably <a href="https://www.proofig.com/">Proofig</a> and <a href="https://imagetwin.ai/">ImageTwin</a>) to spot anomalies. These are still well short of what sleuths like <a href="https://scienceintegritydigest.com/">Elisabeth Bik</a> can do on her own, but these tools are constant companions among fraud analysts. There&#8217;s someone working on auditing Excel files for anomalies, and a number of teams are automating numerical checks like GRIM and SPRITE, including the <a href="https://lhdjung.github.io/scrutiny/">Scrutiny project</a>, the <a href="https://www.medrxiv.org/content/10.1101/2025.09.03.25334905v2">INSPECT-SR</a> team, and <a href="https://statcheck.io/">statcheck</a>. The <a href="https://arxiv.org/abs/2601.13330">regcheck</a> team is building a way to use AI to compare preregistrations to the analyses in papers, to ensure there aren&#8217;t significant deviations. Nonetheless, there are many other kinds of anomalies to screen for, some publicly documented and some less so. And there are no formal evals for anomaly detection that I&#8217;m aware of. Still, there&#8217;s a lot to draw from in this space, and I&#8217;m pretty certain we will be able to scan papers for most kinds of obvious anomalies in the near future.</p>
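<p>To give a flavor of these numerical checks, here is a minimal, illustrative implementation of the GRIM test, which asks whether a reported mean is arithmetically possible given integer-valued data and the sample size. The real tools cited above handle many more cases and edge conditions:</p><pre><code>def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """Minimal sketch of the GRIM test.

    A mean of n integer responses must be a multiple of 1/n, so we look
    for an integer total whose mean rounds back to the reported value
    at the reported precision.
    """
    tol = 0.5 / 10 ** decimals  # rounding half-width at this precision
    total = round(mean * n)     # nearest achievable integer sum
    return abs(total / n - mean) &lt;= tol

# A reported mean of 5.19 from n = 28 integer responses is impossible:
# 145/28 = 5.1786 and 146/28 = 5.2143 both round away from 5.19.
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.18, 28))  # True: 145/28 rounds to 5.18
</code></pre><p>statcheck works on a similar internal-consistency principle, recomputing p-values from the reported test statistics and degrees of freedom.</p>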
<h4>Methodological review</h4><p>This area has been white hot, though I fear for many of the startups in this space because this capability may become commoditized. There are at least six different AI peer review companies, including <a href="https://refine.ink/">Refine.ink</a>, <a href="https://reviewer3.com/">Reviewer3</a>, <a href="https://www.reviewerzero.ai/">ReviewerZero.ai</a>, <a href="https://www.qedscience.com/">Q.E.D. Science</a>, <a href="https://paper-wizard.com/">Paper Wizard</a>, and <a href="https://isitcredible.com/">Isitcredible</a>. <a href="https://coarse.ink/">Coarse</a> (a pun on refine) was also recently created as an open-source alternative. These systems provide qualitative feedback on the content of papers, spotting methodological weaknesses and mathematical errors. They seem to work pretty well, and many academics report bitterly that they exceed the average quality of typical peer reviewers. But there are few evals here either. What evals exist so far involve using LLM-as-judge (circularity problems abound) or comparing against human reviews of questionable quality. What you&#8217;d ideally want is an eval that measures how well these systems capture known errors in papers<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. </p><h4>Reproducibility and robustness</h4><p>Another active area has been using AI agents to automate computational reproducibility<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> and robustness<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> checks in papers that report numerical results. For more recent papers, where data and code are available, AI agents can check whether re-running the analyses produces the numbers reported in the published paper. In addition to a handful of individual academics who have been experimenting with Claude Code for this, the <a href="https://i4replication.org/">Institute for Replication</a> is a leading group working on building an end-to-end system. The evals related to this problem are the most mature, with <a href="https://arxiv.org/abs/2409.11363">CORE-Bench</a> (Siegel, Kapoor, Narayanan 2024) and <a href="https://arxiv.org/abs/2504.01848">PaperBench</a> (OpenAI 2025) available to benchmark agents on this task. There is also work on getting AI agents to test alternative ways of analyzing the data, to ensure the results are robust to small analytic design choices.</p><h4>Synthesis</h4><p>This is the most underdeveloped area, and the one where significant investment is required. Although some automated evidence synthesis systems exist &#8212; for example, <a href="https://ottosr.com/">otto-sr</a> is building an AI agent to write systematic reviews &#8212; none of these incorporate the full range of paper-level signals needed to weight evidence appropriately. Nor is there anything like an eval or a gold standard for a good systematic review. Arguably <a href="https://www.cochranelibrary.com/cdsr/reviews">Cochrane reviews</a> are the closest we have to gold-standard human systematic reviews, though I&#8217;ve heard academics in the know complain about their uneven quality. A key question for a synthesis platform is how to weight anomalies and methodological issues in assessing the quality of a piece of evidence. This is an unsolved problem, and one I&#8217;m very keen to work on.</p>
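<p>To make the design space concrete, here is one naive starting point: shrink each study&#8217;s inverse-variance weight in a standard fixed-effect meta-analysis by a quality multiplier derived from the upstream signals. This is a sketch of the problem, not an established or validated method; whether anything this simple is defensible is exactly the open question:</p><pre><code>import math

def quality_weighted_synthesis(studies):
    """Fixed-effect inverse-variance pooling with a naive quality multiplier.

    Illustrative only: each study's usual weight 1/variance is scaled by
    a 0-1 quality score that would come from the forensic, methodological,
    and reproducibility checks described above.
    """
    weights = [s["quality"] / s["variance"] for s in studies]
    pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # takes the scaled weights at face value
    return pooled, se

studies = [  # made-up numbers, purely for illustration
    {"effect": 0.45, "variance": 0.02, "quality": 0.9},  # clean preregistered RCT
    {"effect": 0.80, "variance": 0.05, "quality": 0.3},  # multiple anomalies flagged
]
pooled, se = quality_weighted_synthesis(studies)
print(f"pooled effect {pooled:.2f} +/- {1.96 * se:.2f}")
</code></pre><p>A real system would need random-effects models, publication-bias corrections, and, above all, a principled way to learn the quality multiplier rather than assert it; that is what the pilot proposed below is designed to test.</p>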
<h4>Continuous updating</h4><p>There are many pieces of basic infrastructure available for monitoring new research and initiating updates. <a href="https://openalex.org/">OpenAlex</a> is the current open citation graph. The <a href="https://retractionwatch.com/">Retraction Watch</a> database was integrated into <a href="https://www.crossref.org/">Crossref</a> in October 2023. <a href="https://scite.ai/">Scite</a> tracks how citations support, contrast, or mention prior claims. The <a href="https://community.cochrane.org/review-development/resources/living-systematic-reviews">Living Evidence Network</a> demonstrated continuous-update workflows in clinical guidelines. Engineering this is a relatively straightforward task.</p><p>When you look over this technical architecture and all the progress being made, it&#8217;s hard not to be optimistic that a living guide to scientific evidence will be built.</p><h3>The stakeholders are ready</h3><p>The social infrastructure for this is starting to coalesce &#8212; it&#8217;s not just a pie-in-the-sky academic exercise to imagine this coming into existence. Institutions like the <a href="https://www.cos.io/">Center for Open Science</a>, the Institute for Replication, the INSPECT-SR team, the Living Evidence Network, and more are all working to scale up efforts to improve research quality.</p><p>Funders are also aligned. The <a href="https://sloan.org/">Sloan Foundation</a> has funded living evidence work through COS. <a href="https://coefficientgiving.org/">Coefficient Giving</a> supports the Institute for Replication and COS. The <a href="https://astera.org/">Astera Institute</a> and the <a href="https://ifp.org/">Institute for Progress</a> have shown interest in this space. NIH has established an <a href="https://www.nih.gov/replicationandreproducibility">Office for Replication and Reproducibility</a>. Although there are (very unfortunately) serious headwinds in science funding generally, there is an active group of funders interested in metascience.</p><p>A brief word about what I&#8217;ve been doing at RDI. First, as a Visiting Scholar at Berkeley, I&#8217;ve been actively figuring out how a non-profit and a public university can conduct, and make public the results of, large-scale academic article data mining. With some of the money I raised from donors, I commissioned a legal analysis of recent case law and publisher text data mining (TDM) agreements in order to understand whether a massive open data mining of academic articles is possible (with caveats, it is). I&#8217;ve also been working to bring together stakeholders in this space and identify gaps, and I&#8217;ve been doing some software development here too, with more to come. </p><h3>A pilot proposal</h3><p>The assumption undergirding all of this is that an AI, given all this information, would make the right judgment about a scientific claim with lots of conflicting evidence, weighing all the factors appropriately. That&#8217;s the hypothesis we need to test. </p><p>Randomized controlled trial (RCT) research is the best place to focus first. RCTs are used to make many of the important decisions in society - from medical trials to public policy changes. And they use a relatively uniform set of inferential statistics with lots of known and available diagnostics. Within the broader realm of RCTs, behavioral science experiments should be the first testbed, with results that can then be generalized. Because behavioral science is at the vanguard of open science practices, replications abound (there are thousands of them) to serve as ground-truth training data. </p><h5>Key Hypothesis </h5><p>Therefore the pilot would test, in behavioral RCTs that have been replicated, whether the quality of the evidence for a claim can be used to accurately predict whether that claim will replicate.
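</p><p>In code, the key test is straightforward to state. The sketch below uses scikit-learn with placeholder random data; the real pilot would use quality signals extracted from papers, with replication outcomes from projects like SCORE as ground truth:</p><pre><code># Sketch of the pilot's key test: do paper-level quality signals predict
# replication? Placeholder random data stands in for the real signals
# (p-value clustering, anomaly counts, preregistration status, ...).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))     # one row per claim, one column per signal
y = rng.integers(0, 2, size=500)  # 1 = the claim replicated (ground truth)

model = LogisticRegression(max_iter=1000)
pred = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print(f"cross-validated AUC: {roc_auc_score(y, pred):.2f}")  # ~0.5 on noise
</code></pre><p>The bar for success is not just beating chance but beating the prediction-market and journal-prestige baselines in the secondary hypotheses below.</p>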
<h5>Secondary Hypotheses</h5><ol><li><p>Compared to claims that replicated, non-replicated claims demonstrate a greater share of forensic anomalies in their source literatures.</p></li><li><p>Hypothesis-level claim and statistic extraction is accurate enough to scale living evidence without onerous human review costs.</p></li><li><p>Replication prediction is more accurate than prediction markets<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> or journal prestige.</p></li></ol><p>If all the different quality signals we gather do accurately predict which studies will replicate, then we can use that model to score evidence and power the living evidence layer.</p><h4>Why this is informative regardless of outcome</h4><p>If the pilot succeeds, the architecture extends to medical RCTs (where Living Evidence already operates and integration is mostly about claim representation), then to slices of basic biology with stable replication structure. If it fails, the field learns which quality signals are load-bearing and which ones metascience has oversold. Either result is a contribution to knowing what the literature supports.</p><h3>Conclusion</h3><p>The drawbacks of the current scientific publishing system are known. Scientists agree, metascientists agree, philanthropists agree: the published PDF plus citation graph isn&#8217;t the right substrate for maintaining a representation of the evidence base in science. The pieces needed to build the alternative either already exist or are rapidly taking shape. The community is forming around exactly this problem, with concrete partnerships and shared infrastructure. A pilot should start on behavioral science RCTs because that&#8217;s the slice of empirical science most amenable to legibility, where replication ground truth is richest, and where the failure modes are best documented. What&#8217;s been missing is the galvanizing mission to assemble these pieces into something that works. That&#8217;s what I&#8217;m proposing to build. </p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I have a broader field map that I&#8217;ll release publicly soon.
This is me, building in public!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>If this is something that you are excited about, please reach out and talk to me.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>More on this very soon too!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>This tests whether, given the code and the data, you can get the same statistics as reported in the published paper.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>This tests whether a paper&#8217;s results stay the same under alternative analytical decisions (like outlier omission). Closely related is the idea of a &#8220;multiverse&#8221; analysis, where you come up with many different ways of answering the same underlying research question with the same data, and test whether the results hold across all those alternative methods. There&#8217;s been work on the latter as well.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Some of the replications, e.g. those from the SCORE project, were paired with forecasts from prediction markets, so we get this comparison for free.</p></div></div>]]></content:encoded></item><item><title><![CDATA[What If Everyone Knew Which Science to Trust?]]></title><description><![CDATA[And now for something completely different. . .]]></description><link>https://www.paullitvak.com/p/what-if-everyone-knew-which-science</link><guid isPermaLink="false">https://www.paullitvak.com/p/what-if-everyone-knew-which-science</guid><dc:creator><![CDATA[Paul Litvak]]></dc:creator><pubDate>Mon, 15 Dec 2025 13:46:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!hCOn!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09b955cb-1b0a-48c0-9114-da518a90c6b7_1070x1426.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In <a href="https://www.cmu.edu/dietrich/sds/">graduate school</a> I studied decision science.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> I learned the methods and the great (in)controvertible findings of the field, ran dozens of experiments, and crunched many numbers. I also learned something else: if your results weren&#8217;t significant, your career was in trouble.</p><p>I felt pressure to p-hack. To run a few more subjects and check the results, drop a condition, move failed studies into the file drawer, anything to cross that magical p &lt; .05 threshold. My dissertation proposal was accepted on the condition that I get a positive result in an experiment. But I didn&#8217;t want to play that game.
Instead I published a meta-analysis with a null effect as part of my dissertation. The findings didn&#8217;t support the hypothesis. That&#8217;s what the data said, so that&#8217;s what I reported.</p><p>Then I left academia.</p><div><hr></div><h2>The Problem Followed Me</h2><p>I went into tech, working at top companies like Facebook, Google, and Airbnb. I learned a lot about what it takes to build great products and run effective teams. Eventually I co-founded my own company, a Stanford nanotech spinout building a health-sensing toothbrush. The core technology was based on published research. Thousands of papers supported the sensor approach we were using.</p><p>The technology didn&#8217;t work. When we tried to replicate the foundational science, we couldn&#8217;t. Thousands of papers, and the basic claims didn&#8217;t hold up.</p><p>The problem had followed me out of academia and into industry! Meanwhile, the news kept confirming what I&#8217;d experienced firsthand.</p><h2>Fraud Makes Headlines</h2><p>In 2022 we learned that a landmark paper on a protein called A&#946;*56, cited nearly 2,500 times and the basis for over a billion dollars in annual Alzheimer&#8217;s research funding, contained fabricated images.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Other labs had quietly failed to find the protein for years, but those null results went unpublished. The field had spent sixteen years chasing a lead that was never real.</p><p>Marc Tessier-Lavigne, president of Stanford, resigned after investigations revealed problems in papers he&#8217;d authored, flagged by Elisabeth Bik, a microbiologist who has personally scanned over 20,000 papers for image manipulation.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>And in May 2025, Harvard revoked tenure for the first time in 80 years.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> The professor was Francesca Gino, who had built her career studying honesty and ethics, and was fired after forensic analyses alleged that she had fabricated data across multiple studies. The irony of a dishonesty researcher faking her honesty research would be funny if it weren&#8217;t so devastating.</p><div><hr></div><h2>The Scale of the Problem</h2><p>The numbers are hard to fully absorb.
The landmark 2015 Reproducibility Project tested 100 psychology studies from top journals and found that only 36 percent successfully replicated.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> An eight-year project attempting to verify high-impact cancer biology results found that fewer than half of the experimental effects could be reproduced, and the successful replications showed effects 85 percent smaller than originally claimed.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> Over 10,000 papers were retracted in 2023 alone, a new record and roughly double the previous year&#8217;s total.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><p>One widely cited estimate puts the cost of irreproducible preclinical research in the United States at tens of billions of dollars annually.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> The authors acknowledge significant uncertainty in that figure, but even the lower bound suggests an enormous waste of resources. And that&#8217;s just the direct costs. It doesn&#8217;t count the years lost chasing false leads, the patients enrolled in trials testing hypotheses that were never true, the graduate students whose careers collapsed when they couldn&#8217;t reproduce their mentors&#8217; results.</p><div><hr></div><h2>Something Changed: AI Made a New Approach Possible</h2><p>Methods to address these problems exist, but they are too tedious to compute manually for all of science. Existing efforts have only scaled to thousands of papers, while fieldwide efforts have been unable to discriminate between relevant and irrelevant p-values.</p><p>For example, p-curves can detect when a literature has too many p-values clustering just below .05, a telltale sign of p-hacking.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> Meta-analytic techniques can assess whether effect sizes are consistent across studies. Preregistration checks can verify whether researchers tested what they said they&#8217;d test. Statistical forensics can flag impossible numbers.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p><p>Having moved into AI product development as an applied practitioner, I started to see a new possibility: AI can now do the tedious extraction work, pulling hypotheses, sample sizes, test statistics, and p-values from thousands of papers automatically, so these analyses can run at scale.</p><p>I started building tools to do exactly this. And I realized the impact it could have.</p><div><hr></div><h2>The Gap in Today&#8217;s Tools</h2><p>There&#8217;s a whole ecosystem of scientific AI tools emerging right now. But they all stop short of what&#8217;s really needed.</p><p>Scientific search engines like <a href="https://elicit.com/">Elicit</a> and <a href="https://consensus.app">Consensus</a> are genuinely useful. Elicit searches 138 million papers. Consensus shows you how many studies support or oppose a claim.
But as Consensus acknowledges, each claim counts the same regardless of whether it comes from a meta-analysis of a thousand studies or a study of a single individual.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> They help you find papers. They can&#8217;t tell you which ones to trust.</p><p>Automated peer review systems are proliferating<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a>: tools that scan papers for methodological issues and suggest improvements. They&#8217;re helpful for researchers, and they surface potential problems that need to be addressed. But they stop short of actual evaluation or scoring. They won&#8217;t tell you that a finding is probably unreliable.</p><p>Specialized fraud-detection tools exist for specific problems. ImageTwin and Proofig catch duplicated or manipulated images.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> StatCheck flags calculation errors.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> The GRIM test detects impossible means. The <a href="https://i4replication.org/">Institute for Replication</a> is building an AI engine to re-execute code and verify computational reproducibility.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a></p><p>These are all good. But they&#8217;re siloed. Nobody is building the integrated system, one that brings all the signals together, reasons about individual papers the way a critical scientist would, looks at the full body of meta-analytic evidence, and produces an overall assessment of how much you should trust a given claim.</p><p>That&#8217;s what I&#8217;m building.</p><div><hr></div><h2>Vision: What a Critical Scientist Looks Like at Scale</h2><p>Imagine you could ask: what&#8217;s the evidence that intervention X actually works? And instead of getting a list of papers, you got an assessment.</p><p>Here are 47 studies testing this claim. 12 were preregistered; of those, 8 found the predicted effect. The non-preregistered studies show a suspicious clustering of p-values just below .05. Three independent replications failed to find the effect. The original study was underpowered and has never been directly replicated. Two papers have statistical errors that, when corrected, eliminate significance. Bottom line: weak evidence, high risk of false positive.</p><p>This is how a careful, critical scientist thinks about evidence. They don&#8217;t just count papers. They weigh methodology, check for red flags, look at replication status, consider the full pattern. We can build AI systems that do this&#8212;systems that make this kind of careful evaluation possible for every claim, not just the few that get manual scrutiny.</p><p>The output should be meaningfully predictive of two things: whether a finding will replicate, and whether it is in line with the thinking of skeptical experts. Those are the real tests of credibility.</p><div><hr></div><h2>Who Needs This?</h2><p>Almost everyone, it turns out.</p><p>Grantmakers and philanthropists deciding which interventions to fund. Right now they rely on manual literature reviews that can&#8217;t possibly keep up.</p><p>Policymakers basing policies on research.
The growth mindset interventions that schools adopted based on Carol Dweck&#8217;s work? A large-scale UK trial found zero statistically significant effects on any academic outcome.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a></p><p>Journalists trying to report on science accurately. Every week brings new studies with dramatic claims. Which ones should they cover? Which should they be skeptical of?</p><p>Government research agencies trying to improve the quality of science they fund. You can&#8217;t reform what you can&#8217;t measure, and right now there&#8217;s no way to monitor whether policy changes&#8212;like preregistration requirements&#8212;actually shift the distribution of evidence quality across a portfolio.</p><p>The general public, increasingly trying to navigate scientific papers themselves. If you&#8217;ve ever Googled a health question and tried to read the studies, you know how hard it is to evaluate what you&#8217;re reading.</p><p>AI labs and companies building on scientific literature. As AI systems increasingly use scientific papers for training and retrieval, they need to know which papers to trust. Garbage in, garbage out, at unprecedented scale.</p><p>This is a vital public good. Reliable information about what science actually knows, and doesn&#8217;t know, is infrastructure for a functioning society.</p><div><hr></div><h2>What We&#8217;ve Built So Far</h2><p>The first piece already exists: an API at <a href="https://evidence.guide">evidence.guide</a> that extracts structured data from papers. Upload a PDF, get back JSON with study details, hypotheses, test statistics, and p-values. I&#8217;ve validated it against hundreds of hand-coded papers with 92%+ accuracy on p-value extraction. There are no other extraction APIs currently out there.</p><p>The next step is to build out a more comprehensive set of quality signals and a meta-analytic reasoning model that can weigh them appropriately.</p><div><hr></div><h2>How You Can Help</h2><p>I need help.</p><p><strong>Money.</strong> We&#8217;re a 501(c)(3) nonprofit. <a href="https://www.dawes.institute/#donate">Donations</a> fund development, compute, and eventually a small team. Even small amounts help.</p><p><strong>Compute.</strong> Running sophisticated analyses on millions of papers requires serious compute. The <a href="https://renderfoundation.com/">Render Network Foundation</a> has generously provided initial support. If you have access to compute resources and want to support open science infrastructure, I want to talk to you.</p><p><strong>Engineering talent.</strong> I&#8217;m looking for a junior (potentially new grad!) full-stack engineer or data engineer, someone passionate about this problem who can work on a tightly scoped 3-month project with potential for a full-time role. If that&#8217;s you, or you know someone, reach out.</p><p><strong>Introductions.</strong> If you know funders, researchers, or organizations who should be aware of this work, I&#8217;d be grateful for connections.</p><p><strong>Feedback.</strong> If you&#8217;re a researcher who would use these tools, I want to hear from you. What signals matter most? What would make this useful for your work?</p><div><hr></div><h2>Why This Matters</h2><p>I left academia because the incentives were broken. I watched my startup fail because the foundational science wasn&#8217;t real. 
I&#8217;ve seen the toll this takes, on researchers who can&#8217;t replicate their mentors&#8217; work, on patients enrolled in trials testing fabricated hypotheses, on the public&#8217;s trust in science itself.</p><p>The replication crisis is mostly a story of broken incentives, not bad actors. Most researchers want to do good work. The system rewards cutting corners. We need infrastructure that makes reliable science visible and unreliable science obvious.</p><div><hr></div><p><strong>Learn more:</strong> <a href="https://dawes.institute/">dawes.institute</a> | <a href="https://evidence.guide/">evidence.guide</a></p><p><strong>Support the work:</strong> <a href="https://dawes.institute/#donate">Donate</a></p><p><strong>Get in touch:</strong> <a href="mailto:info@dawes.institute">info@dawes.institute</a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>For readers of this Substack, the sudden change in topic may feel a bit jarring. Don&#8217;t worry, more dharma-related content is coming! For those of you who aren&#8217;t regular readers, welcome! I write about Buddhist-related things and now metascience too!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Piller, C. (2022). Blots on a field? <em>Science</em>. <a href="https://www.science.org/content/article/potential-fabrication-research-images-threatens-key-theory-alzheimers-disease">https://www.science.org/content/article/potential-fabrication-research-images-threatens-key-theory-alzheimers-disease</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Bik, E. (2024). Einstein Foundation Award recipient profile. <a href="https://award.einsteinfoundation.de/award-winners-finalists/recipients-2024/elisabeth-bik">https://award.einsteinfoundation.de/award-winners-finalists/recipients-2024/elisabeth-bik</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>NBC News. (2025). Harvard professor Francesca Gino&#8217;s tenure revoked amid data fraud investigation.
<a href="https://www.nbcnews.com/news/us-news/know-harvard-professor-francesca-gino-tenure-revoked-data-fraud-invest-rcna209219">https://www.nbcnews.com/news/us-news/know-harvard-professor-francesca-gino-tenure-revoked-data-fraud-invest-rcna209219</a></p><p>Simonsohn, U., Nelson, L., &amp; Simmons, J. (2023). Data Falsificada (Part 2): &#8220;My Class Year Is Harvard.&#8221; <em>Data Colada</em>. <a href="https://datacolada.org/110">https://datacolada.org/110</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Open Science Collaboration. (2015). <a href="https://www.science.org/doi/10.1126/science.aac4716">Estimating the reproducibility of psychological science</a>. <em>Science</em>, 349(6251), aac4716. &#8212; Note that the 36% is a contested number depending on how you define successful replication. Depending on how you measure it, you could argue the % is somewhat higher, but I don&#8217;t think you could call the resulting replication rate good. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Errington, T. M., et al. (2021). Investigating the replicability of preclinical cancer biology. <em>eLife</em>, 10, e71601. <a href="https://elifesciences.org/articles/71601">https://elifesciences.org/articles/71601</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Van Noorden, R. (2023). More than 10,000 research papers were retracted in 2023&#8212;a new record. <em>Nature</em>. <a href="https://www.nature.com/articles/d41586-023-03974-8">https://www.nature.com/articles/d41586-023-03974-8</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Freedman, L. P., Cockburn, I. M., &amp; Simcoe, T. S. (2015). The economics of reproducibility in preclinical research. <em>PLOS Biology</em>, 13(6), e1002165. <a href="http://Freedman, L. P., Cockburn, I. M., &amp; Simcoe, T. S. (2015). The economics of reproducibility in preclinical research. PLOS Biology, 13(6), e1002165. https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002165">https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002165</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Simonsohn, U., Nelson, L. D., &amp; Simmons, J. P. (2014). <a href="https://pages.ucsd.edu/~cmckenzie/Simonsohnetal2014JEPGeneral.pdf">P-curve: A key to the file-drawer.</a> <em>Journal of Experimental Psychology: General</em>, 143(2), 534.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>Brown, N. J. L., &amp; Heathers, J. A. J. (2017). 
<a href="https://peerj.com/preprints/2064/">The GRIM test: A simple technique detects numerous anomalies in the reporting of results in psychology.</a> <em>Social Psychological and Personality Science</em>, 8(4), 363-369.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Consensus. (2024). Consensus Meter: Guardrails and Limitations. <a href="https://consensus.app/home/blog/consensus-meter/">https://consensus.app/home/blog/consensus-meter/</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>One of the best (and only actually available) ones is <a href="https://refine.ink">refine.ink</a>. It&#8217;s really impressive! And it costs around $50 dollars per paper &#8212; not easily scalable to evaluate the entire scientific record.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Proofig. (2024). How Scientific Journals Are Fighting Image Manipulation with AI. <a href="https://www.proofig.com/newsroom/nature-shares-how-scientific-journals-are-using-tools-like-proofig-ai-to-combat-image-integrity-issues">https://www.proofig.com/newsroom/nature-shares-how-scientific-journals-are-using-tools-like-proofig-ai-to-combat-image-integrity-issues</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>Nuijten, M. B., et al. (2016). <a href="https://link.springer.com/article/10.3758/s13428-015-0664-2">The prevalence of statistical reporting errors in psychology </a>(1985&#8211;2013). <em>Behavior Research Methods</em>, 48(4), 1205-1226.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>Institute for Replication. (2024). The AI Replication Engine: Automating Research Verification. <a href="https://i4replication.org/the-ai-replication-engine-automating-research-verification/">https://i4replication.org/the-ai-replication-engine-automating-research-verification/</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p> Foliano, F., Rolfe, H., Buzzeo, J., Runge, J., &amp; Wilkinson, D. (2019). <a href="https://educationendowmentfoundation.org.uk/projects-and-evaluation/projects/changing-mindsets">Changing Mindsets: Effectiveness trial. Education Endowment Foundation.</a></p></div></div>]]></content:encoded></item></channel></rss>