Fundamental Development Gap Map v1.0

Welcome! This is a new web portal by the team at Convergent Research that we built out with the help of scientists and researchers in our ecosystem to explore the landscape of R&D gaps holding back science and the bridge-scale fundamental development efforts that might allow humanity to solve them. To illustrate this, we’ve created a dataset showing the relationships between the following:

  • R&D Gaps: we’ve focused on “fundamental development” gaps that may require coordinated research programs to solve, typically by building a mid-sized piece of infrastructure.
  • Foundational Capabilities: tools, technologies, and data (mid-sized science infrastructure) that could fill the related R&D gap.
  • Resources: for each foundational capability, the initiatives and organizations working toward it; relevant research papers, roadmaps, and technology seeds; and ideas (proposals, whitepapers, essays, etc.) illustrating the capability.

This is just v1.0 of the map, so it isn’t comprehensive - at all! We’re putting it out to help you build an intuition for how folks in our ecosystem think about the scientific landscape, to spark discussion, and to gather your constructive and critical feedback.

You can read more about the “how” and “why” behind it here, learn more about the contents and how to use it here, and suggest contributions to it here. Have other questions/suggestions? Reach out to us at gapmap@convergentresearch.org.

Many current imaging techniques lack the resolution to image materials on an atomic scale, limiting our understanding of material properties at the most fundamental level.
Our overall knowledge of the chemical reaction space, including the catalysts that drive these reactions, is still rudimentary. We also lack the large, detailed materials synthesis and processing datasets needed to enable highly predictive models.
While long-chain nucleic acid synthesis is advancing rapidly, the programmable synthesis of other polymers remains underdeveloped, limiting our capacity to design and produce diverse synthetic polymers.
We currently lack a comprehensive model explaining how biological systems decode and classify chemical signals through olfaction. Understanding this process is critical for applications ranging from flavor science to disease diagnostics to understanding and harnessing animal communication.
Chemical synthesis remains largely manual, limiting throughput and reproducibility. The field requires robust automation to accelerate discovery and production of new molecules.
Protein engineering has largely focused on designing static structures that closely mimic natural proteins. This narrow approach limits the creation of truly novel or highly functional enzymes.
Scientists are constrained to a small number of microbial hosts for bioproduction, limiting the diversity and efficiency of engineered biological systems. Expanding the repertoire of microbial hosts could unlock novel biochemical pathways, enabling the production of a wider array of biomolecules and improving the efficiency of biosynthetic processes. It is important to address any biosafety and biosecurity risks associated with developing such technologies.
Current genetic tools primarily enable modification of simple organisms. Programming more complex organisms and orchestrating entire developmental pathways remains a major challenge.
Applied synthetic biology is underutilized in applications such as building sustainable food systems and repairing the environmental damage caused by conventional agriculture and industry. Despite advances in tools and chassis engineering, there are few robust platforms that translate synthetic biology into scalable, field-ready solutions. This includes not only the production of low-impact proteins and agricultural inputs but also bioremediation technologies for legacy pollutants—such as pesticide-laden soils, heavy metals, and nutrient runoff—that degrade ecosystems and constrain land use. A new generation of synthetic biology platforms is needed to address both sides of the problem: replacing harmful production methods and cleaning up their long-term consequences.
Current bioreactor designs are inefficient when scaling up production processes, limiting the ability to produce bioproducts at industrial scales.
We currently perform synthetic biology using naturally evolved (“kludgy”) cells rather than truly bottom-up engineered cells. This bottleneck limits our ability to design fully customizable biological systems.
Our current methods do not allow precise control over the positional placement of atoms or groups during chemical synthesis, limiting our ability to build molecules with atomic precision. A general-purpose approach to atomically precise fabrication was envisioned by Drexler in the 1980s and Feynman in the late 1950s. DNA origami made a leap in 2006, but DNA is in some key ways a much less precise and versatile nanoscale building material than proteins/peptides. A promising path would extend “DNA origami” to “protein carpentry” by adapting Beta Solenoid proteins, or other modular protein components with programmable binding properties, as lego-like building blocks and then using the latter to construct massively parallel protein-based 3D printers for lego-like covalent assembly of a restricted set of chemical building blocks. This one is riskier: how programmably can we really control protein assembly, and could we bootstrap from initial crappy prototype protein-carpentry-and-or-DNA-ori...
Modern chip fabs are enormous, multi-billion-dollar facilities with limited versatility in what they can produce. This bottleneck restricts the ability to create assemblies with diverse molecular components on a small scale.
“New materials create fundamentally new human capabilities. And yet…new materials-enabled human capabilities have been rare in the past 50 years.” The core challenge lies in our inability to reliably design and manufacture materials that meet specific engineering requirements–and to do so at an industrial scale and reasonable cost.  Identifying promising new materials is hampered by the slow pace of exploration. The integration of machine learning, physics-based property prediction, and self-driving laboratories could dramatically accelerate this process. A significant opportunity lies in modeling the vast, unexplored space of potential materials in silico.
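As a purely illustrative sketch of what such an in-silico, closed-loop workflow could look like (every name, descriptor, and value below is a hypothetical placeholder, not an existing pipeline), a learned surrogate for a target property can be paired with an uncertainty-aware acquisition rule that proposes the next candidates for simulation or synthesis:

```python
# Hypothetical sketch of one iteration of a closed-loop ("self-driving lab") screen:
# a surrogate model scores candidate materials, and the most promising, most
# uncertain candidates are proposed for the next round of simulation or synthesis.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Toy stand-ins: descriptor vectors and a measured target property
# (e.g. thermal conductivity) for a small set of already-characterized materials.
X_known = rng.random((20, 5))          # 20 materials x 5 descriptor features
y_known = rng.random(20)               # measured property values (arbitrary units)
X_candidates = rng.random((1000, 5))   # unexplored candidate compositions

# Fit a Gaussian-process surrogate for the property of interest.
surrogate = GaussianProcessRegressor(normalize_y=True).fit(X_known, y_known)

# Upper-confidence-bound acquisition: favor high predicted value plus high uncertainty.
mean, std = surrogate.predict(X_candidates, return_std=True)
next_batch = np.argsort(mean + 1.0 * std)[-8:]   # propose 8 candidates for the next run

print("Candidate indices to simulate or synthesize next:", next_batch)
```

In a real system the random placeholders would be replaced by physics-informed descriptors, validated property data, and an automated execution layer that closes the loop.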
Crystallization is crucial for determining molecular structure, yet many molecules resist forming crystals. Improved computational models of crystal growth are needed to guide experimental efforts.
While many promising materials have been discovered in the lab, current synthesis methods are often too expensive to produce these materials in sufficient quantities. Some examples of novel materials that would be highly enabling include:
  • Low activation, thermally conductive materials that are resistant to radiation damage, needed to enable fusion reactors (the first wall material is currently a limitation), spacecraft, etc.
  • Materials that emit at the transparency window of the atmosphere (and that are easy to apply, like paint) to drastically diminish solar heating of the Earth (example)
  • Hyper-efficient thermoelectrics that could directly turn heat into electricity.
  • Materials that autonomously heal to improve our infrastructure and prevent system failures due to material defects.
  • Materials that have the insulating properties and high melting points of ceramics but the formability and ductility of metals, for jet engines and atmospheric reentry vehicles.
The cost of materials is often dominated by the cost to obtain their constituent elements. What presents commercially as the “critical minerals problem” masks a larger scientific bottleneck on how we acquire, concentrate, and substitute chemical elements.
Modern manufacturing system design remains complex, with traditional methods relying on outdated processes. AI-based design approaches have the potential to reimagine these systems without relying on the legacy of humanoid robots.
Architectural design and construction planning are complex and labor-intensive. Advanced computational design and AI-driven optimization have the potential to revolutionize how buildings and construction plans are generated.
Despite advances in automation, many bioengineering processes remain highly manual, limiting throughput and reproducibility in laboratory settings.
The simulation and modeling of complex mechanical systems is challenging due to the intricate interplay of multiple physical phenomena. Improved computational models can enhance design and optimization.
Robots have the potential to revolutionize manufacturing, logistics, and many other industries—but only if they are both affordable and capable of high performance. Today’s robotic hardware is often prohibitively expensive and built using legacy designs that do not prioritize cost reduction, modularity, or scalability. Moreover, many robots struggle with dexterity and tactile sensing, and current design practices decouple hardware and software, preventing a co-evolution that could unlock new performance regimes. Overcoming these limitations requires a rethinking of both robot morphology and control, with an emphasis on integrated design, cost-effective production, and enhanced functionality.
Modern manufacturing systems largely rely on paradigms developed in the last century where large machines produce components smaller than themselves. This approach is increasingly limited by scaling challenges and cost inefficiencies. To meet future demands, we need to reimagine manufacturing by developing universal robotic construction systems and low-capital, high-energy manufacturing solutions that leverage emerging technologies such as advanced robotics, precision machining, and renewable energy integration. These innovations could, for example, dramatically lower the cost of machining high-performance materials like titanium or enable widespread automation in sectors like desalination.
Bridging the gap between simulated robot behavior and real-world performance remains a significant challenge, particularly for tactile interactions and complex environments.
Modern deep learning and general computation demand enormous energy, limiting scalability and sustainability. Addressing energy efficiency is critical for the next generation of computing platforms, though it also supports potential proliferation of advanced AI and should be advanced alongside AI safety and governance considerations.
Both human mathematicians and current AI systems struggle with proving complex math theorems. Enhancing theorem proving through interactive and automated methods could push the boundaries of mathematical reasoning.
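To make “interactive theorem proving” concrete, here is a deliberately trivial toy example in Lean 4 (illustrative only; it is not drawn from the map itself):

```lean
-- A toy interactive proof: the tactic produces a proof term that the kernel
-- then checks mechanically, so the result cannot be accepted by mistake.
theorem add_comm_toy (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

The open challenge is scaling this kind of machine-checked reasoning, with automated and AI assistance, from toy statements to genuinely hard theorems.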
The potential for AI systems to behave unpredictably or dangerously (“go rogue”) is a critical concern. Ensuring safe and controllable AI architectures is essential for reliable operation. See also:
  • https://www.lesswrong.com/posts/fAW6RXLKTLHC3WXkS/shallow-review-of-technical-ai-safety-2024
  • https://deepmind.google/discover/blog/taking-a-responsible-path-to-agi/
The risk of AI being misused—whether through malicious intent or unintended consequences—necessitates robust safeguards and countermeasures.
Insecure software can lead to vulnerabilities that undermine the reliability and safety of computational systems. Formal methods and rigorous verification are needed to synthesize secure software.
Current AI systems exhibit narrow reasoning and planning capabilities compared to human cognition. Broadening AI training methods to include holistic, brain-inspired architectures and cognitive frameworks can advance general intelligence (flagging that there is an AI safety risk here).
Biological systems are the sole example we have of complex, evolved computation. Replicating this level of complexity in digital systems could unlock entirely new computational paradigms.
Current models struggle to accurately predict climate tipping points due to the intricate interplay of diverse climatic factors, hindering proactive intervention efforts. Additionally, designing optimal climate control strategies is challenging because of the nonlinear and multifaceted interactions among economic, technological, and social factors.
We have limited capacity to predict key disruptive events, such as solar flares that threaten power grids and communications, alongside an incomplete understanding of natural processes (atmospheric, ocean, etc.) that underpin climate models. We need better monitoring tools for characterizing phenomena that impact climate dynamics, such as aerosol-cloud interactions, and assessing potential interventions such as marine cloud brightening. These issues underscore the need for enhanced observational tools and more sophisticated models of climate processes.
There is a critical need for more precise, rapid, and localized climate intervention strategies. Current approaches lack the fine-grained models and rapid response mechanisms required to adapt to diverse climate impacts, such as heatwaves, which demand swift and effective action. The ability to control local weather phenomena—including cloud formation and hurricanes—could help mitigate climate risks.
There are currently no direct interventions to address climate tipping points such as glacier melt, leaving some critical processes unmitigated. The fundamental science and engineering principles behind emergency climate interventions remain largely untested at relevant scales, limiting our preparedness for rapid climate change. See: https://www.outlierprojects.org/
We need more effective approaches to removing greenhouse gases from the atmosphere to mitigate climate impacts. However, challenges remain in harnessing natural carbon removal systems—due to difficulties in accurately measuring their environmental impact—and in reducing methane emissions from sources like the cow rumen. Innovative strategies, including modifying cow microbiomes and deploying scalable measurement and validation platforms, are essential to advance greenhouse gas removal efforts. See also:
  • https://www.bezosearthfund.org/news-and-insights/bezos-earth-fund-releases-global-roadmap-to-scale-greenhouse-gas-removal-technologies
  • https://gaps.frontierclimate.com/
We need to improve our management of natural systems. The world's oceans suffer from extensive pollution, undermining marine ecosystems and disrupting global climate processes. Inefficient wildfire management, which disrupts natural fire cycles, is another prime example.
The current model for building space telescopes is cost-prohibitive and slow, often requiring decades of development. New approaches that exploit reduced launch costs and modular assembly are needed to accelerate telescope construction and reduce costs, and there needs to be the organizational structure and hunger to adopt such methods.
Some of the most important planetary science and astrobiology missions remain unrealized by traditional government agencies like NASA. Alternative, independent initiatives are needed to explore these high-priority scientific questions.
Detecting gravitational waves allows us to observe cosmic events like black hole mergers and neutron star collisions that are invisible through traditional telescopes. Current gravitational wave detectors are primarily sensitive to audio-band signals. Some phenomena, including speculative ones such as high-frequency emissions from advanced propulsion systems, might only be detectable with novel approaches.
Space telescopes offer vastly superior sensitivity to ground-based systems, but enhancing their resolution requires spacecraft with sub-micron precision. Angular resolution of telescopes is limited by the size of the primary optic. Coherent aperture synthesis (interferometry) gets around this by coherently combining signal from separated telescopes where the resolution is proportional to the baseline separation. This has been very successful in the radio (see Event Horizon Telescope) but in the optical regime requires extremely difficult optomechanics and controls, and the sensitivity on the ground is inherently limited by the coherence time of the atmosphere.
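For reference, the standard diffraction-limit relations behind this argument, with λ the observing wavelength, D the diameter of a single primary optic, and B the interferometric baseline:

```latex
\theta_{\mathrm{single}} \approx 1.22\,\frac{\lambda}{D},
\qquad
\theta_{\mathrm{interferometer}} \approx \frac{\lambda}{B},
\qquad B \gg D
```

which is why extending the baseline between modest apertures, rather than building ever-larger monoliths, is the route to higher angular resolution.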
Traditional particle accelerators are enormous and costly, limiting experimental flexibility. Compact, benchtop accelerators could democratize high-energy physics and open new avenues in applications such as medical isotope production.
Many materials’ internal structures are difficult to image with current technologies, limiting our understanding of their properties at the nanoscale.
Stable plasma confinement is a major obstacle in achieving practical fusion energy. Advanced control systems and novel confinement techniques are needed.
There remains significant uncertainty over whether metallic hydrogen can exhibit room-temperature superconductivity at reasonable pressures, and measurements of other systems have been irreproducible and fragmented.
Quantum gravity remains elusive, with experimental constraints hindered by the need for extremely large-scale or prohibitively expensive experiments.
Low-energy nuclear reactions (LENRs) have received substantial attention, and there is no good evidence that they exist; still, there may be other mechanisms or parameter combinations that remain underexplored.
Modeling turbulence remains one of the most challenging problems in physics due to its nonlinear and chaotic nature.
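For context, the incompressible Navier–Stokes equations, whose nonlinear advection term (u·∇)u is the source of this chaotic, multi-scale behavior (u is the velocity field, p the pressure, ρ the density, and ν the kinematic viscosity):

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
\qquad
\nabla\cdot\mathbf{u} = 0
```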
We lack the models and infrastructure to monitor and predict how ecosystems behave under stress or when they might collapse, as well as the metrics to determine whether restoration is effective. We need to understand the underlying dynamics, feedback loops, and thresholds that lead to ecosystem degradation and collapse, and to secure essential systems like pollination.
Regenerating degraded environments and designing self-sustaining systems require a unified understanding of ecological dynamics. Our current models fall short in predicting complex interactions—such as feedback loops and stability thresholds—that determine ecosystem behavior.  To close this gap, we need better datasets and models of biodiversity and animal movements, as well as tools to predict and contain invasive species. We also need the ability to experiment with restoration strategies, and validate approaches ranging from rewilding to engineering de-extinction technologies.
Much of Earth's biosphere—from the deep ocean to atmospheric bioaerosols—remains unexplored, with the microbial majority largely uncharted. Advancing new exploration technologies and systematically cataloging the Earth's microbiome could unlock discoveries of new life forms and biological insights that could impact health, climate, geoengineering, agriculture, and fundamental biology.
Nature’s blueprints span from the unseen nanoworld to the enigmatic origins of life. Despite the incredible diversity of nanostructures, many remain hidden due to current imaging limitations. Likewise, the mysteries of animal communication—crucial for decoding behavioral cues and social structures—await breakthrough insights. Moreover, the primordial conditions of our planet, essential for understanding life’s genesis, are obscured by the absence of rocks from before 4.1 Ga and fossils before 3.5 Ga (although life almost certainly established itself on our planet before that). Addressing these challenges can unlock new understandings in biology, ecology, and more.
Only one new space station (Tiangong, 天宫) has been launched this century, due to high costs and reliance on traditional, government-led megaprojects.
There is currently no established field for systematically studying and applying planetary terraforming methods, leaving key challenges in transport, energy supply, and civil engineering largely unaddressed.
Pathogenic microbes evolve quickly, and bad actors may exploit biotechnology for harmful purposes. Our current defenses struggle to keep pace with these evolving threats.
We lack many of the technologies needed to support exploration and survival off Earth. Our exploration efforts remain confined to our solar system, limiting our potential to explore beyond and understand the broader cosmos.
Our ability to block the transmission of pathogens is limited. Without effective strategies, airborne and surface-based transmission continues to spread diseases. A meta roadmap is here.
Rapid detection of emerging bio-threats is critical for effective intervention and containment. However, many pathogens are detected only after widespread transmission has occurred. In addition, attributing the source of these threats remains challenging.
Advances in synthetic biology have unlocked unprecedented innovations, but also raise concerns about the potential for harmful bioengineering. Preventing misuse requires robust screening and control measures around DNA synthesis. Implementation must be coordinated and universal to effectively minimize the risk of malicious actors.
Many critical supply chains and infrastructure systems are fragile and lack robust backup mechanisms, leaving society vulnerable.
In an era of relentless information overload and pervasive misinformation—fueled by algorithms that prioritize fleeting engagement over meaningful value—we have the opportunity to reshape our digital spaces. By leveraging AI and more intentional design of social media spaces for epistemic improvement, we can empower users to curate, evaluate, and contextualize content more effectively to create a healthier digital world.
As AI systems become the cornerstone of competitive advantage, they can inadvertently marginalize human roles and decision-making. The drive for efficiency and cost reduction may lead organizations to rely predominantly on AI, sidelining human judgment, creativity, and accountability. This dynamic risks creating environments where economic and social inequities widen, and the intrinsic value of human input is systematically undermined (see examples). The gradual disempowerment of individuals under such competitive pressures poses significant challenges for societal well-being and democratic governance.  See: https://gradual-disempowerment.ai/
Our current systems for democratic participation are hindered by outdated platforms and tools that fail to scale with modern needs. Limited survey infrastructure, insecure voting methods, and under-informative deliberative tools restrict our capacity for informed, collective decision-making. By harnessing AI to facilitate clearer expression of public opinion and leveraging innovative technologies for secure, scalable engagement, we can transform civic participation into a more robust, effective, and inclusive process.
Policy development and evaluation processes today rely heavily on manual human review to ensure accountability. However, as AI systems increasingly support or automate these processes, this human-centered accountability becomes challenging. Human reviewers risk becoming a critical bottleneck, slowing policy implementation. New tools are needed to streamline policy creation and evaluation, and to ensure consistency and compliance before deployment.
The social sciences need new tools to help researchers identify and prioritize important questions that will have an impact, and better infrastructure to collect qualitative data. Qualitative methods are powerful for understanding the how and why behind social outcomes, yet even the most comprehensive surveys don’t capture all the factors that contribute to those outcomes. AI-enabled qualitative methods could super-charge the social sciences, but there is much work to be done. Similarly, many archaeological methods remain manual and lack the technological revolution seen in other fields, limiting discovery and analysis.
Current education systems face structural inefficiencies such as excessive administrative workloads on educators, overcrowded classrooms, and inequitable resource distribution. Innovative technologies have the potential to significantly reduce these burdens by providing tools that assist teachers with scheduling, grading, and creating personalized, adaptive lesson plans. Digital platforms could dynamically tailor learning experiences to individual student progress, complementing classroom teaching. Additionally, technology-driven improvements in administrative efficiency could free valuable resources, enhancing educational equity and overall student experiences.
“US K-12 teachers are 30% more likely to face burnout than U.S. soldiers, whose lives are defined by relentless duty, perpetual war and low wages.” - Adrienne Williams
“Given recent improvements in the quality, affordability, and usability of technologies like AI, computer vision, and AR/VR, we can reimagine a more personali...
Current economic models are often too simplistic to capture the intricate dynamics of our global economy, limiting effective policy-making and forecasting. Experimentation with innovative economic models—such as those incorporating universal basic income or alternative market systems—is rare, leaving us unprepared for emerging trends. The inherent complexity of global systems further complicates accurate forecasting, underscoring the urgent need for more sophisticated, adaptive tools that can better predict and navigate the economic landscape of tomorrow.
There exists a disconnect between academic research and the practical implementation of development economics, hampering the conversion of theoretical insights into effective real-world interventions. Current mechanisms for delivering public goods and fostering collective cooperation are inefficient, limiting our capacity to coordinate resources and drive meaningful change. Innovative approaches are needed to bridge this gap, ensuring that cutting-edge economic theories can be transformed into actionable policies and scalable interventions that truly improve development outcomes.
Much critical data is stored on proprietary platforms and is at risk of disappearing, hindering long-term research and reproducibility.
Scientific literature is plagued by fraudulent publications, undermining trust and slowing progress.
The volume of scientific publications is overwhelming, making it difficult for humans to read, comprehend, and synthesize the entire body of literature. How can AI-generated knowledge become cumulative? What should a machine-human shared Wikipedia look like? We should collect and synthesize all the world’s knowledge, accelerate its development, and make it universally available in a compelling form.
Traditional structures dominate in how research is conducted and how its outputs are disseminated. Expensive publishing practices restrict and slow the spread of knowledge. We should replace outdated publishing practices and complement research practices with new approaches that leverage frugal innovation, community-led platforms, and open access. We imagine a future where scientific discovery is more inclusive and dynamic.
Current clinical trial designs are not sufficiently optimized for gathering robust evidence, leading to inefficiencies and suboptimal outcomes.
This is more of a meta-bottleneck. But scientists are spending a lot of time not doing science, and the institutions in which they work often have incentive structures that hinder certain kinds of outcomes, like more coordinated research.
Many of the world’s most deadly diseases—such as tuberculosis, Group A Streptococcus, hepatitis C, hepatitis B, and syphilis—lack effective vaccines or cures. Additionally, the pace of developing effective, low-cost therapeutics for emerging pathogens in low-resource settings is too slow to meet global health needs. Malnutrition exacerbates susceptibility to disease and impedes recovery; food security is especially important for early child development. Understanding the basic science of malnutrition during development is important for designing more effective interventions.
Current diagnostic tests are often costly or offer only limited information, failing to reveal the cause of disease and delaying or preventing administration of available treatments. Moreover, early detection systems for emerging pathogens are fragmented, delaying critical public health interventions.
Global health outcomes are compromised by insufficient health systems and infrastructure that limit our ability to prevent and control infectious diseases. Key deficiencies include the lack of cost-effective antimicrobial materials to block pathogen transmission, underdeveloped intervention models for effective public health strategies, and outdated sanitation solutions that fail to meet the needs of vulnerable populations.
In-silico molecular simulation has not received the necessary push, despite the promise of machine learning-based surrogate models. Moreover, advancements in quantum chemistry—both AI accelerated and quantum/ASIC-enabled—remain underexploited.
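Schematically, and in standard notation rather than any specific method referenced above: the reference problem is the electronic Schrödinger equation, and a machine-learning surrogate f_θ is trained on expensive quantum-chemical solutions so that it reproduces the potential energy surface at a far lower cost per evaluation:

```latex
\hat{H}\,\Psi(\mathbf{r};\mathbf{R}) = E(\mathbf{R})\,\Psi(\mathbf{r};\mathbf{R}),
\qquad
f_{\theta}(\mathbf{R}) \;\approx\; E(\mathbf{R})
```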
Most molecular structure determination methods lose critical information, and solving the inverse problem remains challenging. This limits our ability to accurately reconstruct molecular structures from spectral data.
Our ability to analyze organisms holistically, as systems that emerge from fundamental physics, is limited by our lack of formal frameworks for distinguishing living from nonliving systems that are precise enough to be useful for practical scientific problems.
Techniques that achieve deep nanoscale resolution in live cell imaging often destroy the sample, limiting the ability to conduct longitudinal studies on the same specimen.
Delivering physical probes for imaging into living cells is challenging due to barriers in cell membranes and potential perturbation of cellular function. New approaches are required to enable high-dimensional biosensing without invasive probes.
Living tissue exhibits strong light scattering, which hampers deep-tissue imaging and limits resolution. Overcoming this barrier is critical for mapping neural activity and enabling noninvasive diagnostic imaging.
Despite theoretical predictions, quantum effects in biological systems remain largely unmeasured. Direct experimental evidence is needed to explore how quantum phenomena influence biomolecular interactions.
Current structure prediction tools like AlphaFold excel for stable proteins but struggle with highly dynamic proteins whose structures fluctuate continuously, leaving a gap in our understanding of intrinsically disordered proteins and protein allostery.
Membrane proteins are notoriously difficult to analyze experimentally and to incorporate into technological applications due to their inherent insolubility in aqueous environments. Their recalcitrance limits our capacity to study their structure and function in detail. Other challenges include the difficulty of studying small proteins with cryo-EM.
Drug development is often hampered by failures related to absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox). Improved predictive models for molecular interactions are essential for designing safer, more effective drugs, as well as evaluating the impact of environmental chemicals. Additionally, there is a significant gap in our knowledge of what exactly is present in foods and how these components affect human biology. A comprehensive mapping of the “foodome” and studies on food component functionality are needed to advance nutrition science and personalized dietary interventions.
Our understanding of human physiology and disease remains incomplete. In the last century, we have developed cures or effective preventions for many diseases with well-defined root causes (polio, smallpox, cholera, SMA, cervical cancer, etc.). However, a wide array of conditions still eludes cures and treatments. We have yet to fully decipher the dynamic interplay between brain and peripheral systems, the bioenergetic processes underlying chronic conditions, and the multifactorial, interacting pathways that drive complex diseases and aging. Although we understand some individual aging mechanisms, we do not yet have line of sight to comprehensively rejuvenating mammals or extending lifespan. To overcome these challenges, we need combinatorial approaches that can modulate multiple mechanisms simultaneously, allowing us to measure multi-system impacts and develop effective interventions.
A comprehensive understanding of human health over time is hindered by the lack of longitudinal data from cohorts that are diverse and globally representative. Such datasets are essential to track developmental, nutritional, and environmental influences on long-term health outcomes.
Current in-vivo delivery systems (viral vectors, nanoparticles, microchips) face challenges such as off-target accumulation and inefficiency, particularly in delivering therapies to the brain. Novel delivery approaches are needed to improve targeting and performance.
Current preclinical models, including animals and organoids, do not fully capture the complexity of human physiology, limiting the predictive power of preclinical experiments and explaining, in part, the costly failures of drug development in clinical trials. This is especially true for complex disorders, including those of aging, neurological disorders, and female reproductive biology. More systematic and representative models—including ex vivo human organ systems or even whole bodies and novel animal species—are needed to improve the predictive power of biomedical research. These technologies also have applications in addressing organ shortages, improving neonatal care, and other unmet medical needs.
Clinical trial designs are often inefficient, resulting in high costs, lengthy timelines, and suboptimal patient outcomes. Innovative trial designs and decision-support tools are required to streamline the clinical evaluation process and accelerate therapeutic development.
Cellular state is a multifaceted and complex phenomenon, involving multiple overlapping omics layers that vary in time and space. Capturing and representing this multimodal complexity is essential for predictive modeling of cell behavior and for advancing our understanding of cellular function.
We are currently limited in our ability to image molecules in their native contexts—for example, within live 3D tissues. Achieving scalable, high-resolution imaging of biomolecules in situ would de-risk many areas of biomedical science by enabling integrative, comprehensive molecular mapping within intact specimens.
Many of the fundamental actors in cells—proteins, lipids, and metabolites—are still mostly invisible to us, especially when considering their extensive multiplexity, diversity, cell-to-cell heterogeneity, and temporal variation. Without scalable, cost-effective technologies to capture these molecular details, our comprehensive analysis of complex biological systems remains limited.
A better understanding of how the immune system interacts at the molecular level with threats and triggers is critical. This knowledge would enable the development of predictive tools and technologies to augment immune responses—improving interventions against infections, cancers, and autoimmune disorders.
Immunological diseases often have nonobvious, complex etiologies and pathophysiologies that are difficult to identify.
Capturing the dynamics of large brain networks at single-neuron resolution in vivo is extremely challenging. Advanced imaging methods that record fast, high-resolution activity without destructive intervention are required to unravel the complex interplay of neuronal circuits in real time.
Current in vivo and in vitro models often fail to capture human brain function. Innovative model systems—including digital reconstructions, embodied simulations, and new biological models—are needed.
Large portions of the living human brain are difficult to observe and modulate with current technologies. Safer, noninvasive, or minimally invasive methods are needed to capture real-time brain state information. One funding program dedicated to making advancements in this space is that of ARIA (UK science R&D agency), which launched the Scalable Neural Interfaces opportunity space to support a new suite of tools to interface with the human brain at scale.
Understanding the complete wiring of the brain at single–cell resolution, along with detailed molecular annotations, is critical for revealing how neural circuits support learning, memory, and behavior. Current technologies are prohibitively expensive and lack scalability, limiting our ability to link molecular composition with circuit connectivity and to understand the alterations present in brain disorders. This gap fundamentally makes diagnosis, treatment, and prevention of many brain disorders more difficult. Beyond the biomedical applications, maps of brain circuitry could play a fundamental role in grounding principles of safety for brain-like AI systems. Initiatives like the NIH BRAIN Initiative’s transformative projects (the BRAIN Initiative Cell Atlas Network (BICAN), the BRAIN Initiative Connectivity Across Scales (BRAIN CONNECTS) Network, and the Armamentarium for Precision Brain Cell Access) represent important efforts to illuminate foundational principles governing the ci...