Discovering what language models can do

Modulo conducts alignment research and evaluates the capabilities of language models, searching for insights that help companies and policymakers make informed decisions about AI development, deployment, and regulation.

The mission of Modulo Research is to increase the probability that advanced artificial intelligence leads to net-positive long-run outcomes for society. We pursue this through empirical research, model evaluations, and sharing knowledge and datasets with the AI safety research community.


Evaluating language models to understand their capabilities, limitations, and potential risks


Testing scalable techniques for aligning language models to human intentions


Developing and sharing specialized datasets to facilitate empirical alignment research

Reach out to learn more or to collaborate