
26 March 2026
Artificial intelligence (AI) is becoming more integrated into scientific research, shaping how knowledge is produced, how research is conducted, and which questions are formulated and prioritized. Across these readings, AI appears both as a tool that can speed up tasks such as data analysis, hypothesis generation, and experiment design, and as a force that is influencing research practices, incentives, and areas of focus. This edition of selected readings shows that while AI can increase the speed and scale of research, it also raises questions about how knowledge is developed, how scientific work is evaluated, and how reliance on AI may affect learning, diversity of ideas, and long-term scientific progress.
Three Key Takeaways from our curation
AI is changing how research is conducted across the scientific process. It is being used in multiple stages, including reviewing existing knowledge, generating hypotheses, designing studies, and analyzing results, but human input remains necessary for interpretation, decision making, and validation.
Increased efficiency may come with trade-offs for knowledge and innovation. While AI can improve productivity and support individual researchers, several readings highlight risks such as reduced incentives to learn, narrower research agendas, and less diversity in scientific questions and approaches.
The use of AI introduces new challenges for evaluation, governance, and research quality. Issues such as how to measure AI’s scientific capabilities, ensure reliability of results, manage data access, and address ethical and legal concerns are becoming more central as AI is used more widely in research.
Below, we share our selection of papers, with a summary of each, in alphabetical order:
Acemoglu, Daron, Dingwen Kong, and Asuman Ozdaglar. 2026. “AI, Human Cognition and Knowledge Collapse.” Working Paper No. 34910. Working Paper Series. National Bureau of Economic Research, February.
Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar examine the relationship between artificial intelligence and human learning in a working paper published by the National Bureau of Economic Research. The paper develops a theoretical model to explain how AI systems, especially those that provide personalized recommendations, affect how people learn and how knowledge is built over time. The authors argue that while AI can improve short-term decision making by offering accurate, context-specific advice, it can also reduce people’s incentives to learn and contribute to shared knowledge. Over time, this can weaken the overall pool of general knowledge that society relies on, a process they describe as “knowledge collapse,” where collective understanding declines even as individuals continue to receive useful AI guidance. The paper highlights a key tension between immediate benefits and long-term risks, showing that the impact of AI depends on how it interacts with human effort, learning systems, and the ways knowledge is shared and maintained.
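The dynamic is easier to see with a toy example. The sketch below is not the authors' model; it is a minimal simulation, with invented parameters, of the kind of feedback they describe: when accurate AI advice makes individual learning unattractive, contributions to shared knowledge stop and the stock decays.

```python
# Toy illustration of the learning-incentive dynamic described above.
# NOT the authors' model: all parameters and functional forms are invented.
def simulate(periods: int = 50, ai_accuracy: float = 0.9,
             learning_cost: float = 0.3, decay: float = 0.05) -> list[float]:
    knowledge = 1.0                # stock of shared general knowledge
    history = []
    for _ in range(periods):
        # An agent learns only if drawing on shared knowledge (net of the
        # cost of learning) beats free, accurate AI advice.
        learns = (knowledge - learning_cost) > ai_accuracy
        contribution = 0.1 if learns else 0.0
        # Without fresh contributions, the shared stock slowly depreciates.
        knowledge = max(0.0, knowledge * (1 - decay) + contribution)
        history.append(knowledge)
    return history

print(simulate()[-1])   # with these defaults, the stock decays toward zero
```

Lowering `ai_accuracy` in this toy keeps learning worthwhile and the knowledge stock stable, mirroring the paper's point that the long-term outcome depends on how AI interacts with incentives to learn and contribute.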
Agrawal, Ajay K., John McHale, and Alexander Oettl. 2026. “AI in Science.” Working Paper No. 34953. Working Paper Series. National Bureau of Economic Research, March.
Economists Ajay K. Agrawal, John McHale, and Alexander Oettl examine how artificial intelligence is changing scientific research in a working paper published by the National Bureau of Economic Research. The paper explains that AI can improve how scientists work by helping them search through large sets of possible ideas, designs, and experiments more quickly and efficiently. It describes science as a process with several stages, including forming questions, developing ideas, designing solutions, and testing them, and shows that AI can support each stage in different ways. The authors also highlight that AI works best in areas with large amounts of data, while human judgment is still important for interpreting results, making decisions, and working in areas where data is limited. The paper presents AI as a tool that can increase the speed and scale of research, but its impact depends on how it is used alongside human expertise and within existing research systems.
Brown, Megan A., Andrew Gruen, Gabe Maldoff, Solomon Messing, Zeve Sanderson, and Michael Zimmer. 2025. “Web Scraping for Research: Legal, Ethical, Institutional, and Scientific Considerations.” Big Data & Society 12 (4): 20539517251381686.
This research article, published in Big Data & Society, examines the growing use of web scraping as a method for collecting data in scientific research. The paper, written by Megan A. Brown and a team of researchers, explains that as platforms restrict access to data through official channels, more researchers are turning to scraping, which raises a range of legal, ethical, institutional, and scientific questions. It outlines how laws around data access and privacy are complex and still evolving, and describes the importance of understanding terms of service, intellectual property, and data protection rules. The article also highlights ethical concerns, such as consent, privacy, and potential harm to individuals, as well as practical challenges related to data quality, sampling, and reliability, and presents a set of considerations researchers may need to address when using scraped data.
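As a concrete reference point, here is a minimal sketch of what a compliant research scraper might look like; the site, paths, and contact address are hypothetical, and the code only covers the technical side. The legal, ethical, and institutional questions the article raises (terms of service, consent, data protection) still have to be settled separately.

```python
# Minimal sketch of a research scraper that respects robots.txt and rate
# limits. The base URL, path, and contact address are hypothetical.
import time
import urllib.robotparser

import requests

BASE = "https://example.org"   # hypothetical target site
USER_AGENT = "research-scraper/0.1 (contact: researcher@university.edu)"

robots = urllib.robotparser.RobotFileParser(BASE + "/robots.txt")
robots.read()

def fetch(path: str) -> str | None:
    """Fetch a page only if robots.txt permits it, with a polite delay."""
    url = BASE + path
    if not robots.can_fetch(USER_AGENT, url):
        return None                    # honor the site's crawl rules
    resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
    resp.raise_for_status()
    time.sleep(1.0)                    # rate-limit to reduce load on the host
    return resp.text

html = fetch("/articles/page1.html")
```

A real project would also document provenance and sampling decisions, which speaks to the data-quality and reliability concerns the authors identify.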
Committee on Foundation Models for Scientific Discovery and Innovation, Board on Mathematical Sciences and Analytics, Division on Engineering and Physical Sciences, and National Academies of Sciences, Engineering, and Medicine. 2025. Foundation Models for Scientific Discovery and Innovation: Opportunities Across the Department of Energy and the Scientific Enterprise. National Academies Press.
This report, published by the National Academies of Sciences, Engineering, and Medicine, examines how foundation models could shape scientific research, particularly in the context of the U.S. Department of Energy’s mission. The report explains that these AI systems can analyze very large datasets, generate findings, and support tasks such as literature review, experiment planning, and data analysis. It also highlights how foundation models can be combined with traditional scientific models, which remain important for accuracy, interpretability, and alignment with physical laws. At the same time, the report outlines key challenges, including limitations in validation, reliability, and data quality, as well as risks related to security and misuse. Overall, it presents foundation models as tools that could improve the speed and scope of scientific discovery, while emphasizing the need for careful integration, strong data infrastructure, and continued human oversight.
Dolgin, Elie. 2026. “AI Boosts Research Careers but Flattens Scientific Discovery: New Analysis Suggests AI Tools Narrow the Span of Ideas Explored.” IEEE Spectrum, January 19.
Elie Dolgin, a science journalist, reports in IEEE Spectrum on research examining how artificial intelligence is influencing scientific work. The article draws on a large-scale analysis of more than 40 million academic papers, which finds that researchers who use AI tools tend to publish more, receive more citations, and reach leadership positions more quickly than those who do not. At the same time, the study shows that AI-supported research is more concentrated around well-established, data-rich topics, resulting in a narrower range of ideas being explored. The piece highlights a broader trade-off between individual career gains and collective scientific progress, suggesting that while AI increases efficiency and output, it may also reduce diversity in research questions and limit the exploration of less-developed areas.
Evans, James, and Eamon Duede. 2025. “After Science.” Science 390 (6774): eaec7650.
In an article published in Science, sociologist James Evans and philosopher Eamon Duede examine how advances in artificial intelligence are reshaping the practice and purpose of scientific research. The authors argue that science is entering a new phase in which AI systems increasingly drive discovery, not only accelerating research but also producing results that may exceed human understanding. Historically, science has balanced explanation (understanding) with prediction and control, but AI may shift this balance toward control without full interpretability. The article outlines several implications of this shift, including the risk of reduced human curiosity, the narrowing of research diversity due to reliance on dominant AI methods, and the proliferation of low-quality or misleading findings generated at scale. Evans and Duede suggest that sustaining scientific progress in this context will require new forms of oversight, investment in verification and quality control, and a continued emphasis on diversity and curiosity within scientific systems, even as AI becomes more central to the research process.
Hulick, Kathryn. 2026. “Have We Entered a New Age of AI-Enabled Scientific Discovery?” Science News, February 18.
Published in Science News, this article by Kathryn Hulick examines whether recent advances in artificial intelligence mark a true shift in how scientific discovery happens. The article describes how AI systems are already being used to generate hypotheses, design experiments, and assist in areas such as drug discovery and materials science. It provides examples of researchers using AI as a tool to speed up parts of the research process and, in some cases, help uncover new findings. At the same time, the piece highlights ongoing limits, including errors, lack of true creativity, and the need for human expertise to evaluate results and guide research. It also notes that scientific validation still depends on real-world testing, which AI cannot replace. Overall, the article presents AI as a tool that is changing how science is done, but not yet replacing the role of human scientists.
Messing, Solomon, and Joshua A. Tucker. 2026. “The Train Has Left the Station: Agentic AI and the Future of Social Science Research.” Brookings Institution, March 3.
Recent developments in agentic AI are examined in this commentary published by the Brookings Institution. Authored by Solomon Messing and Joshua A. Tucker, the piece explains how AI systems that can write code, collect data, analyze results, and produce reports are beginning to change how social science research is conducted. The authors describe how these tools can complete complex research tasks much faster than traditional methods, which could increase productivity and lower barriers to entry. At the same time, they highlight several risks, including the need for careful oversight, potential errors in AI-generated work, security concerns, and the possibility that researchers may rely too heavily on these systems. The article also raises broader implications for the field, such as an increase in the volume of research, changes to peer review and training, and questions about how to evaluate and credit scientific work.
Metz, Cade. 2026. “Can A.I. Generate New Ideas?” The New York Times, January 14.
In this article, published in The New York Times, Cade Metz, a technology journalist, explores whether artificial intelligence can generate truly new ideas in scientific research. The piece looks at recent examples where AI systems have helped solve mathematical problems and suggest new research directions, showing that these tools can process large amounts of information and identify useful patterns. At the same time, experts cited in the article question whether AI is actually producing original ideas or mainly reorganizing and surfacing existing knowledge. The article explains that while AI can assist researchers by narrowing down possibilities and speeding up work, human expertise is still needed to guide the process, interpret results, and determine what is meaningful.
Traberg, Cecilie Steenbuch, Jon Roozenbeek, and Sander van der Linden. 2026. “AI Is Turning Research into a Scientific Monoculture.” Communications Psychology 4 (1): 37.
Cecilie Steenbuch Traberg, Jon Roozenbeek, and Sander van der Linden are researchers in psychology and behavioral science who examine how artificial intelligence is shaping research practices in an article published in Communications Psychology. The article argues that the growing focus on AI across scientific fields is leading to a form of “scientific monoculture,” where research topics, methods, and language are becoming more similar. The authors describe a feedback loop in which institutional incentives, such as funding and publication trends, encourage researchers to focus on AI, while AI tools themselves further reinforce this trend by shaping how studies are designed and written. As a result, a wider range of research questions and approaches may be overlooked, and scientific diversity may decline. The article highlights concerns that this narrowing of research could reduce innovation, limit the types of questions being explored, and make scientific fields less adaptable over time.
Zhao, Celina. 2026. “How Will We Know If AI Is Smart Enough to Do Science?” Science, February 27.
Questions about whether AI can truly “do science” are explored in this news article published in Science. Written by science journalist Celina Zhao, the piece focuses on how researchers are trying to measure the scientific capabilities of large language models. It explains that new benchmarks, or standardized tests, are being developed to evaluate whether AI can move beyond recalling information to actually reasoning through scientific problems and contributing to discovery. The article describes different types of benchmarks, from those that test knowledge with difficult questions to others that simulate real research tasks, such as forming hypotheses and working through multi-step problems. It highlights that current AI systems can perform well on certain tasks but often struggle with more complex, open-ended research processes. The article shows that there is no single way to measure AI’s ability to do science, and that progress depends on developing better evaluation methods that reflect how science works in practice.
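For a sense of the mechanics, the sketch below shows the simplest possible benchmark harness: exact-match scoring of short factual answers. The two items and the `ask_model` placeholder are assumptions for illustration; the benchmarks Zhao describes go much further, simulating multi-step, open-ended research tasks that this kind of string matching cannot grade.

```python
# Toy benchmark harness: exact-match scoring of short scientific answers.
# ITEMS and ask_model() are illustrative placeholders, not a real benchmark.
ITEMS = [
    {"question": "What is the chemical symbol for iron?", "answer": "Fe"},
    {"question": "How many chromosomes are in a human somatic cell?",
     "answer": "46"},
]

def ask_model(question: str) -> str:
    """Placeholder for a call to the language model under evaluation."""
    raise NotImplementedError

def evaluate() -> float:
    """Return the fraction of benchmark items answered exactly right."""
    correct = sum(
        ask_model(item["question"]).strip().lower() == item["answer"].lower()
        for item in ITEMS
    )
    return correct / len(ITEMS)
```

The gap between this harness and grading a full hypothesis-to-discovery workflow is precisely why, as the article notes, there is no single way to measure AI's ability to do science.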
Zhang, Yanbo, Sumeer A. Khan, Adnan Mahmud, et al. 2025. “Exploring the Role of Large Language Models in the Scientific Method: From Hypothesis to Discovery.” NPJ Artificial Intelligence 1 (1): 14.
This article, published in Nature Partner Journals (NPJ) Artificial Intelligence, examines how large language models are being used across different stages of scientific research. Authored by Yanbo Zhang and a group of interdisciplinary researchers, the paper explains how these systems can assist with tasks such as reviewing literature, generating hypotheses, designing experiments, and analyzing results. It describes the scientific process as a cycle of observation, hypothesis, and testing, and shows how AI tools can support each step by handling large amounts of data and automating parts of the workflow. At the same time, the article highlights current limitations, including errors, lack of transparency, and challenges with reasoning, and emphasizes that human oversight is still needed to guide and verify results. The paper presents AI as a tool that can speed up and expand research, while raising questions about how it should be used within the scientific process.
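As a rough illustration of what supporting that cycle could look like, here is a hedged sketch of a human-in-the-loop workflow; the `llm` function and the prompts are placeholders rather than anything from the paper, and each step keeps a human as the gate, echoing the authors' emphasis on oversight.

```python
# Sketch of a human-in-the-loop research cycle: an LLM drafts each step,
# a human approves or redirects. llm() is a placeholder for any
# chat-completion API; nothing here is taken from the paper's own tooling.
def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError

def research_cycle(observation: str) -> dict:
    # Step 1: draft a testable hypothesis from an observation.
    hypothesis = llm(f"Propose one testable hypothesis for: {observation}")
    if input(f"Accept hypothesis '{hypothesis}'? (y/n) ").lower() != "y":
        raise RuntimeError("Rejected; the human redirects the cycle.")
    # Step 2: draft an experimental design for the approved hypothesis.
    design = llm(f"Outline an experiment to test: {hypothesis}")
    # Step 3: draft an analysis plan; results still need human verification.
    analysis = llm(f"Describe the analysis for data from: {design}")
    return {"hypothesis": hypothesis, "design": design, "analysis": analysis}
```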