Our Regrants
Dylan Iskandar
This regrant will support a summer research internship in the Programming Languages Lab at Barnard College, Columbia University, under Prof. Santolucito, as well as equipment and conference attendance. The research project aims to model and align program synthesis models in the setting of computational creativity.
Charbel-Raphaël Segerie
This regrant will support six months of salary for Charbel-Raphaël Segerie to work on AI safety mentoring and study.
Sharon Hewitt Rawlette
This regrant will fund the writing of a book about classical utilitarianism and its foundations and implications.
Machine Learning Moral Uncertainty Competition
This regrant will fund the prizes for a competition on using machine learning models to detect when human values are in conflict and to estimate moral uncertainty.
The Inside View
This regrant will support Michael Trazzi to make 20 episodes of "The Inside View," his podcast and YouTube channel about research on AI and its risks. The funds will compensate Michael for his time, equipment, and marketing.
Noa Nabeshima
This regrant will support three months of independent AI interpretability research.
University of California, Santa Cruz, Cihang Xie
This regrant will support Professor Cihang Xie's research on adversarial robustness and honest AI.
Space Future Initiative
This regrant will fund students at Harvard and Cornell to establish student groups to learn about and research longtermist space governance, including events, stipends, and prizes.
University of Pennsylvania, Professor Geoff Goodwin
This regrant will support three years of work on the Global Priorities Research psychology research agenda, covering a postdoc, conference attendance, research assistants, and payments for experimental participants, among other expenses.
The International Commission on Non-Ionizing Radiation Protection, Dr. Rodney Croft
This regrant supports work investigating far-UVC, a band of ultraviolet light that could safely inactivate pathogens in occupied rooms. It funds the International Commission on Non-Ionizing Radiation Protection, a nonprofit group that sets international exposure guidelines, to 1) determine what additional information is needed to re-evaluate the guidelines for 180-230 nm light, and 2) specify experiments that would provide that knowledge, including methodological details.
Safety Philosophers Program
This regrant will cover the costs of running a seven-month AI safety research fellowship for philosophy academics. The funds will cover travel, a housing stipend, office space, guest speaker honoraria, and the hiring of researchers, advisors, a project manager, and organizers for the seven-month period.
The Treacherous Turn Game
This regrant will support the design and creation of a tabletop roleplaying game that simulates potential risks from advanced AI.
David Chapman
This regrant will support an independent researcher to investigate the magnitude of AI risk and the most useful actions to take to address it, how to improve Effective Altruist decision-making according to "metarational" principles, and potentially valuable projects in metascience.
Evan R Murphy
This regrant will support six months of independent research on interpretability and other AI safety topics.
University of Chicago, Professor Chenhao Tan
This regrant will support the hiring of a PhD student or postdoc to help orient their lab toward AI safety.
Princeton University, Professor Adji Bousso Dieng
This regrant will fund Professor Dieng's research agenda on generative models that are robust to non-randomly missing data, targeting use cases in healthcare, materials science, and climate.
Ben Marcovitz
This regrant will support leadership coaching for leaders of Effective Altruist projects.
AI Governance Summer Program
This regrant will support ten fellows to work on mentored AI Governance projects over the summer.
Jenny Xiao
This regrant will support a teaching buyout which will allow Jenny to spend 20 hours a week on AI Governance research.
Professor David Bau
This regrant will support several research directions in interpretability for 2-3 years, including: empirical evaluation of knowledge and guessing mechanisms in large language models, clarifying large language models’ ability to be aware of and control use of internal knowledge, a theory for defining and enumerating knowledge in large language models, and building systems that enable human users to tailor a model’s composition of its internal knowledge.
Yan Zhang
This regrant will support a teaching buyout to allow Yan to focus on forecasting AGI scenarios, as well as research on beneficial alignment and deployment.
Konstantin Pilz
This regrant will support independent study and research on AI safety and alignment alongside research assistant work for Lennart Heim.
University of Maryland, Baltimore County, Professor Tim Oates
This regrant will fund Professor Tim Oates and a graduate student to work on AI alignment research.
Sandra Malagon
This regrant will support a year of coordinating several EA mentoring and community building activities with South American Spanish speakers.
Adam Jermyn
This regrant will support four months of theoretical AI safety research under Evan Hubinger’s mentorship.
Alex Mennen
This regrant will support six months of AI safety work.
University of Western Ontario, Professor Joshua Pearce
This regrant will fund Professor Pearce’s graduate students to conduct research projects on resilient food, specifically on manufacturing rope for seaweed farming and on making leaf protein concentrate.
Existential Risk NeurIPS prize
This regrant will fund prizes for papers at a NeurIPS workshop: $50k for the papers with the best discussion of AI x-risk and $50k for the best overall papers.
Global Challenges Project
This regrant will support operations and logistics for the Harvard-MIT Existential Risk Summit held in Berkeley at the end of July. This four-day retreat will bring together incoming Harvard and MIT first-years with Berkeley EAs and members of the Harvard/MIT EA communities.
Hear This Idea Regranting Program
This regrant will support a small grants program managed and advertised via Hear This Idea which will give funding to people to launch podcasts or audio projects focused on ideas that matter.
Adapt Research
This regrant will support the development of a first draft of a resilience plan for nuclear winter and extreme pandemics.
Javier López Corpas and Cristina Rodríguez Doblado
This regrant will support a year of work translating popular content on Effective Altruism into Spanish.
Future Forum
This regrant will support general expenses for running the Future Forum conference.
Tom Green
This regrant will support six months of research on technology policy case studies of key historical figures who influenced technology development.
Haoxing Du
This regrant will support Haoxing Du's career transition from physics to AI safety.
Anton Howes
This regrant will support historical research contracting work.
Cornell University, Professor Emma Pierson
This regrant will fund Professor Pierson to expand her program of research on fair and robustly aligned ML systems, with a focus on methods that will remain relevant as capabilities improve.
Neel Nanda
This regrant will provide a year of salary, funding to hire a research assistant, and compute for an independent AI safety researcher.
Flourishing Humanity Corporation
This regrant invested in two mental health, productivity, and personal growth applications.
Center for Space Governance
This regrant will provide funding for six people to spend three months producing the Center’s research agenda and creating and registering the organization.
David Mears
This regrant will support David's career transition, allowing him to pivot to working as a frontend developer for Effective Altruist organizations, becoming a 'language model psychologist,' or participating in charity entrepreneurship.
Sam Glendenning
This regrant will allow Sam to manage a group of students starting a 'London Existential Risk Initiative' (LERI). It provides funding for that student group, and in particular will support three UCL student organizers.
High Impact Medicine
This regrant will support 12 months of High Impact Medicine's activities so that they can continue and scale up their work.
Bertha von Suttner-Studienwerk
This regrant will fund BvS, a newly established humanist student scholarship foundation, to take on 10 more fellows.
SoGive
This regrant will support four months of their work on in-depth analyses of charities.
Joseph Bloom
This regrant will support Joseph Bloom to spend 3-6 months transitioning his career to EA/Longtermist work, including conducting research on biosecurity.
Montaigne Institute
This regrant will allow the Montaigne Institute to hire a full-time staff member to work on AI policy.
X-Axis
This regrant will support one year of funding to found and run a broad-interest, high-end website, featuring high-quality writing and appealing visualizations focused on promoting longer-term thinking.
José Alonso Flores Coronado
This regrant will support 6 months of salary for work on biosecurity projects, partly mentored by Tessa Alexanian, Officer of Safety and Security at iGEM.
Aleksei Paletskikh
This regrant will support six months of work on distilling AI alignment research.
Cornell University, Tom Gilbert
This regrant will fund students and postdocs to attend a conference on Human Centered AI at UC Berkeley and work on a resulting report.
University of Toronto, Social Science Prediction Platform
This regrant will fund one year of operations for the Social Science Prediction Platform, a platform which allows economists and other social science academics to use forecasting in their research.
The Royal Statistical Society
This regrant will support a course to teach civil servants in the UK to read, process, and use scientific research in their policy work. The course will focus on examples from emergency scenarios including pandemics and other existential risks.
Aligned AI
This regrant invested in Aligned AI, an AI safety organization seeking bridging funding.
Gryphon Scientific
This regrant will support a risk-benefit analysis of virology research.
Fund for Alignment Research
This regrant will fund the operations of the Fund for Alignment Research, which primarily provides engineering support to AI alignment researchers.
Redwood Research
We recommended an 18-month grant to support Redwood Research, an AI safety organization, and MLAB, a Redwood project that teaches machine learning to individuals interested in AI safety. The grant will primarily fund salaries for their researchers and technical interns, with the remainder going to compute.
Pranav Gade
This regrant will support three years of work on a cybersecurity project for AI research, paid out conditional on progress reports.
Topos Institute
This regrant will support the second “Finding the Right Abstractions” workshop, likely to take place around January 2023, which connects applied category theory and AI alignment communities in an effort to understand self-selecting systems.
Peter Hartree
This regrant will support the recording of audio narrations of important philosophical papers. The recordings will be packaged up as a podcast feed with its own website, brand and YouTube channel.
Simeon Campos
This regrant will support Simeon Campos to spend part of his year working on AI safety projects.
Collective Intelligence Project
This regrant will support two researchers, Matthew Prewitt and Divya Siddarth, to create a research agenda on “collective intelligence” specifically aimed at mitigating Global Catastrophic and Existential risks, and run some trial projects based on it.
Chatham House
This regrant will support ongoing research and convening with representatives from the UK government, academia/think tanks, and the AI industry throughout 2022-23. These activities will be supplemented by written outputs. The program will focus on UK AI policy and risk reduction, pandemic prevention policy, defense policy, cybersecurity policy, and UK navigation of great power technology relations.
Elizabeth Van Nostrand
This regrant will support continued mentorship of a medical researcher and work generating novel medical research literature reviews on a variety of topics, including long COVID.
Akash Wasil
This regrant will support five months of building and supporting the longtermist and AI alignment communities.
Condor Camp
This regrant will support additional funding for Condor Camp, the program for top Brazilian university students to come to Peru for a retreat to learn about Effective Altruism.
Korbinian Kettnaker
This regrant will support a promising student who is working to bridge algorithmic information theory with epistemology and AGI research for his PhD at the University of Cambridge.
Global Priorities Encyclopedia
This regrant will support a 6-month trial of setting up the Global Priorities Encyclopedia (GPE), a reference work on global priorities research featuring articles by academics and researchers from the effective altruism community. Funding will pay for the project lead, web developers, and article contributions.
Gryphon Scientific
This regrant will fund Gryphon Scientific to analyze options for and hold a workshop on protecting potential pandemic pathogen genomes.
AI Safety Community Building Hub
This regrant supports the creation and initial hiring for an organization that is centered around outreach to machine learning professionals about AI safety and alignment.
VIVID
This regrant is an investment in VIVID, a customization-based mindset improvement app that aims to improve people's self-reflection and productivity.
HR Luna Park
This regrant will support an experiment where a recruiting agency will headhunt ML Engineers for AI safety organizations.
School of Thinking
This regrant will support a global media outreach project to create high-quality video and social media content about rationalism, longtermism, and Effective Altruism.
Legal Services Planning Grant
This regrant will support six months of research on topics including how legal services can be effectively provided to the Effective Altruism community, materials to be included in a legal services handbook for EA organizations, novel legal questions particular to the EA community that might benefit from further research initiatives, and ways to create an effective EA professional network for practicing lawyers.
Manifold Markets
This regrant will support Manifold Markets in building a play-money prediction market platform. The platform is also experimenting with impact certificates and charity prediction markets.
David Xu
Trojan Detection Challenge at NeurIPS 2022
This regrant will support prizes for a trojan detection competition at NeurIPS, which involves identifying whether a deep neural network will suddenly change behavior if certain unknown conditions are met.
Effective Altruism Office Zurich
Akash Wasil
Fiona Pollack
Peter McLaughlin
Dwarkesh Patel
ALERT
This regrant will support the creation of the Active Longtermist Emergency Response Team, an organization to rapidly manage emerging global events like COVID-19.
EA Critiques and Red Teaming Prize
This regrant will support prize money for a writing contest for critically engaging with theory or work in Effective Altruism. The goal of the contest is to produce thoughtful, action-oriented critiques.
Federation for American Scientists
This regrant will support a researcher and research assistant to work on high-skill immigration and AI policy at FAS for three years.
Ought
This regrant will support Ought’s work building Elicit, a language-model based research assistant. This work contributes to research on reducing alignment risk through scaling human supervision via process-based systems.
ML Safety Scholars Program
This regrant will fund a summer program for up to 100 students to spend 9 weeks studying machine learning, deep learning, and technical topics in safety.
AntiEntropy
This regrant will support a project to create and house operations-related resources and guidance for EA-aligned organizations.
Everett Smith
Olle Häggström, Chalmers University of Technology
Essay Contest on Existential Risk in US Cost Benefit Analysis
This regrant will support an essay contest on "Accounting for Existential Risks in US Cost-Benefit Analysis," with the aim of contributing to the revision of OMB Circular A-4, a document that guides US government cost-benefit analysis. The Legal Priorities Project is administering the contest.
MineRL BASALT competition at NeurIPS
This regrant will support a NeurIPS competition applying human feedback in a non-language-model setting, specifically pretrained models in Minecraft. The grant will be administered by the Berkeley Existential Risk Initiative.
QURI
This regrant will support QURI to develop a programming language called "Squiggle" as a tool for probabilistic estimation. The hope is that this will be a useful tool for forecasting and Fermi estimates.
Andi Peng
CSIS
Aaron Scher
Kris Shrishak
AI Impacts
This regrant will support rerunning the highly-cited survey “When Will AI Exceed Human Performance? Evidence from AI Experts” from 2016, analysis, and publication of results.
Chinmay Ingalagavi
Apart Research
This regrant will support the creation of an AI Safety organization which will create a platform to share AI safety research ideas and educational materials, connect people working on AI safety, and bring new people into the field.
Tereza Flidrova
J. Peter Scoblic
AI Risk Public Materials Competition
Moncef Slaoui
This regrant will fund the writing of Slaoui's memoir, especially including his experience directing Operation Warp Speed.
Artificial Intelligence Summer Residency Program
Public Editor
This regrant will support a project to use a combination of human feedback and machine learning to label misinformation and reasoning errors in popular news articles.
The Good Ancestors Project
This regrant will support the creation of The Good Ancestors Project, an Australian-based organization to host research and community building on topics relevant to making the long-term future go well.
Thomas Kwa
Joshua Greene, Harvard University
Braden Leach
Adversarial Robustness Prizes at ECCV
This regrant will support three prizes for the best papers on adversarial robustness research at a workshop at ECCV, the main fall computer vision conference. The best papers are selected to have higher relevance to long-term threat models than usual adversarial robustness papers.
Confido Institute
This regrant will support the Confido Institute, which is developing a user-friendly interactive app, Confido, for making forecasts and communicating beliefs and uncertainty within groups and organizations. They are also building interactive educational programs about forecasting and working with uncertainty based around this app.
Supporting Agent Foundations AI safety research at ALTER
This regrant will support 1.5-3 years of salary for a mathematics researcher to work with Vanessa Kosoy on the learning-theoretic AI safety agenda.
Modeling Transformative AI Risks (Aryeh Englander, Sammy Martin, Analytica Consulting)
This regrant will support two AI researchers, one or two additional assistants, and a consulting firm to continue to build out and fully implement the quantitative model for how to understand risks and interventions around AI safety, expanding on their earlier research on “Modeling Transformative AI Risk.”
Impact Markets
This regrant will support the creation of an “impact market.” The hope is to improve charity fundraising by allowing profit-motivated investors to earn returns by investing in charitable projects that are eventually deemed impactful.
AI Alignment Prize on Inverse Scaling
Swift Centre for Applied Forecasting
This regrant will support the creation of the Swift Centre for Applied Forecasting, including salary for a director and a team of expert forecasters. They will forecast trends from Our World in Data charts, as well as other topics related to ensuring the long term future goes well, with a particular focus on explaining the “why” of forecast estimates.
Lawrence Newport
Aidan O’Gara
1. All grantees and investees were given an opportunity to review their listing and offer corrections before this list was published. Please email [email protected] to request edits. As with our direct grants and investments, we sometimes do not publish grants because the grantee asks us not to or because we believe it would undermine our or the grantee’s work. We also do not necessarily publish all grants that are small, initial, or exploratory.
2. The Future Fund is a project of the FTX Foundation, a philanthropic collective. Grants and donations are made through various entities in our family of organizations, including FTX Philanthropy Inc., a nonprofit entity. Investment profits are reserved for philanthropic purposes.