Modelling complex systems — from cognition and microbial ecology to AI safety and coordination.
I’m a Senior Lecturer in the School of Biological Sciences at the University of Edinburgh. The common thread across my work is modelling complex systems: using game theory, neural networks, phylogenetics and other computational approaches to understand how interactions between agents produce system-level outcomes.
I started with human and primate cognition, asking how cooperation, deception and social intelligence co-evolved. I then moved into microbial dynamics: social interactions, community ecology, and antimicrobial resistance. Now I work primarily on AI safety, applying evolutionary game theory to predict the behaviour of powerful AI systems and to study coordination mechanisms for AI governance.
My research uses computational and mathematical models to study complex systems. The connecting thread is understanding how interactions between agents — whether neurons, microbes, or AI systems — produce emergent dynamics at the system level.
How did cooperation, deception and cognition co-evolve in humans and primates? Using neural networks, game theory and phylogenetic analyses to model the evolutionary dynamics of social intelligence.
Modelling social interactions in microbial communities, gut microbiome dynamics, and antimicrobial resistance (AMR). Includes work on the Global Sewage Surveillance Project and machine learning approaches to predict how AMR burden changes with population demographics.
Applying evolutionary game theory to predict the behaviour of powerful AI systems and to study coordination mechanisms that could slow unsafe AI development. Also designing experiments to study AI cooperation and deception, and developing benchmarks for AI research prediction.
As Resident Data Scientist at Nesta, I developed AI-powered tools for evidence synthesis in policymaking, using NLP, large language models and complex systems modelling to help policymakers navigate evidence about which interventions work, with a focus on early-years outcomes.
I think AI development poses serious existential risks that current policy doesn’t adequately address. Alongside my research, I work on AI governance advocacy and train others to engage with policymakers on these issues.
Game theory — particularly evolutionary game theory — provides useful tools for thinking about coordination failures in AI development: situations where uncoordinated competition between actors produces outcomes none of them wanted. I apply these frameworks both in research and in policy engagement.
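To make that concrete, here is a toy replicator-dynamics model in Python, a minimal sketch rather than anything from my published work. The two-strategy “race” game and its payoff values are illustrative assumptions: racing strictly dominates cautious development, yet mutual racing pays less than mutual caution.

import numpy as np

# Toy "AI race" game: strategies are Cautious and Race.
# Payoff values are illustrative assumptions, not empirical estimates.
# Racing strictly dominates, yet mutual racing (1) pays less than
# mutual caution (3): the structure of a coordination failure.
payoffs = np.array([[3.0, 0.0],   # Cautious vs [Cautious, Race]
                    [4.0, 1.0]])  # Race     vs [Cautious, Race]

x = 0.99            # initial share of Cautious actors
dt, steps = 0.1, 200
for _ in range(steps):
    p = np.array([x, 1.0 - x])   # population strategy mix
    fitness = payoffs @ p        # expected payoff of each strategy
    mean_fitness = p @ fitness   # population-average payoff
    # Replicator dynamics: strategies above the mean grow in frequency.
    x += dt * x * (fitness[0] - mean_fitness)

print(f"Share of Cautious actors after selection: {x:.3f}")  # -> ~0.000

Selection carries the population to all-Race, leaving every actor with a payoff of 1 rather than the 3 available under mutual caution: an outcome none of them wanted.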
“Uncoordinated competition producing outcomes no actor wanted is a well-studied failure mode in evolutionary biology. It’s also a reasonable description of the current AI development landscape.”
— On coordination failures in AI development
Training advocates to brief politicians and policymakers on AI existential risk through the Direct Institutional Plan (DIP) programme.
Developed AI-powered tools for evidence synthesis in policymaking, using NLP and complex systems modelling to help policymakers assess which interventions work, with a focus on early-years outcomes.
Happy to hear from researchers, policymakers, journalists, or anyone interested in complex systems, AI safety, or science policy.
luke.mcnally@ed.ac.uk