Our Grants, Investments, and Regrants
Dylan Iskandar
This regrant will support a summer research internship in Barnard's Programming Languages Lab at Columbia University under Prof. Santolucito, as well as equipment and conference attendance. The research project aims to model and align program synthesis models in the setting of computational creativity.
Our World in Data
We recommended a grant over three years to support OWID’s work producing high-quality data and analysis of global trends, such as the rise in living standards and the effects of COVID-19. This grant will specifically support work tracking trends relevant to humanity’s long-term prospects.
Global Priorities Institute
We recommended a grant to allow GPI to hire pre-docs and post-docs from economics, philosophy, and psychology, and to publish analysis on how to do the most good.
Sherry Glied
We recommended a grant to run two pilot studies to develop a platform that will allow policymakers to rapidly obtain reliable estimates of important quantitative parameters to inform policy measures during times of crisis, such as a pandemic. The pilot will focus on evaluating the cost-benefit profile of non-pharmaceutical interventions during pandemics.
Long Term Future Fund
We recommended funding to support their longtermist grantmaking.
LEEP
We recommended a grant to an organization aiming to reduce childhood lead exposure and eliminate childhood lead poisoning worldwide.
Differential Projections
We recommended a grant to support the creation of a small research organization that will use probabilistic models to forecast important questions about the future, in particular involving transformative artificial intelligence.
ALLFED
We recommended a grant to create a nuclear winter food scale-up plan which governments can adopt.
Longview Philanthropy
We recommended a grant to support Longview Philanthropy in creating a longtermist coworking office in London.
Constellation
We recommended a grant to support 18 months of operations for a longtermist coworking space in Berkeley.
Alignment Research Center
We recommended a grant to support researcher salaries, contractors to evaluate AI capabilities, projects like the ELK prizes and workshops, and administrative expenses.
Charbel-Raphaël Segerie
This regrant will support six months of salary for Charbel-Raphaël Segerie to work on AI safety mentoring and study.
Sharon Hewitt Rawlette
This regrant will fund the writing of a book about classical utilitarianism and its foundations and implications.
Machine Learning Moral Uncertainty Competition
This regrant will fund the prizes for a competition on using machine learning models to detect when human values are in conflict and to estimate moral uncertainty.
The Inside View
This regrant will support Michael Trazzi in making 20 episodes of "The Inside View," his podcast and YouTube channel about AI research and its risks. The funds will compensate Michael for his time, equipment, and marketing.
Noa Nabeshima
This regrant will support three months of independent AI interpretability research.
University of California, Santa Cruz, Cihang Xie
This regrant will support Professor Cihang Xie's research on adversarial robustness and honest AI.
Space Future Initiative
This regrant will fund students at Harvard and Cornell to establish student groups to learn about and research longtermist space governance, including events, stipends, and prizes.
University of Pennsylvania, Professor Geoff Goodwin
This regrant will support three years of work on the Global Priorities Research psychology research agenda, including a postdoc, conference attendance, research assistants, and payments to experimental participants, among other expenses.
The International Commission on Non-Ionizing Radiation Protection, Dr. Rodney Croft
This regrant supports work investigating far-UVC, a band of ultraviolet light that could safely sterilize pathogens in occupied rooms. It funds the International Commission on Non-Ionizing Radiation Protection, a nonprofit group that sets international exposure guidelines, to 1) determine what additional information is needed to re-evaluate the guidelines for 180-230 nm light, and 2) specify experiments, including methodological details, that would provide that knowledge.
Safety Philosophers Program
This regrant will cover the costs of running a seven-month AI safety research fellowship for philosophy academics. The funds will cover travel, housing stipends, office space, guest speaker honoraria, and the hiring of researchers, advisors, a project manager, and organizers for the seven-month period.
The Treacherous Turn Game
This regrant will support the design and creation of a tabletop roleplaying game that simulates potential risks from advanced AI.
David Chapman
This regrant will support an independent researcher to investigate the magnitude of AI risk and the most useful actions to address it, how to improve Effective Altruist decision-making according to "metarational" principles, and potentially valuable projects in metascience.
Evan R Murphy
This regrant will support six months of independent research on interpretability and other AI safety topics.
University of Chicago, Professor Chenhao Tan
This regrant will support the hiring of a PhD student or postdoc to help orient their lab toward AI safety.
Princeton University, Professor Adji Bousso Dieng
This regrant will fund Professor Dieng’s research agenda on generative models that are robust to non-randomly missing data, targeting use cases in healthcare, materials science, and climate.
Ben Marcovitz
This regrant will support leadership coaching for leaders of Effective Altruist projects.
AI Governance Summer Program
This regrant will support ten fellows to work on mentored AI Governance projects over the summer.
Jenny Xiao
This regrant will support a teaching buyout which will allow Jenny to spend 20 hours a week on AI Governance research.
Professor David Bau
This regrant will support several research directions in interpretability for 2-3 years, including: empirical evaluation of knowledge and guessing mechanisms in large language models, clarifying large language models’ ability to be aware of and control use of internal knowledge, a theory for defining and enumerating knowledge in large language models, and building systems that enable human users to tailor a model’s composition of its internal knowledge.
Yan Zhang
This regrant will support a teaching buyout to allow Yan to focus on forecasting AGI scenarios, as well as research on beneficial alignment and deployment.
Konstantin Pilz
This regrant will support independent study and research on AI safety and alignment alongside research assistant work for Lennart Heim.
University of Maryland, Baltimore County, Professor Tim Oates
This regrant will fund Professor Tim Oates and a graduate student to work on AI alignment research.
Sandra Malagon
This regrant will support a year of coordinating several EA mentoring and community building activities with South American Spanish speakers.
Adam Jermyn
This regrant will support four months of theoretical AI safety research under Evan Hubinger’s mentorship.
Alex Mennen
This regrant will support six months of AI safety work.
University of Western Ontario, Professor Joshua Pearce
This regrant will fund Professor Pearce’s graduate students to conduct research projects on resilient food, specifically on manufacturing rope for seaweed farming and on making leaf protein concentrate.
Existential Risk NeurIPS prize
This regrant will fund prizes for papers at a NeurIPS workshop: $50k for the papers with the best discussion of AI x-risk and $50k for the best overall papers.
Global Challenges Project
This regrant will support operations and logistics for the Harvard-MIT Existential Risk Summit, held in Berkeley at the end of July. This four-day retreat will bring together incoming Harvard and MIT first-years with Berkeley EAs and members of the Harvard/MIT EA communities.
Hear This Idea Regranting Program
This regrant will support a small grants program managed and advertised via Hear This Idea which will give funding to people to launch podcasts or audio projects focused on ideas that matter.
Adapt Research
This regrant will support the development of a first draft of a resilience plan for nuclear winter/extreme pandemic.
Javier López Corpas and Cristina Rodríguez Doblado
This regrant will support a year of work translating popular content on Effective Altruism into Spanish.
Future Forum
This regrant will support general expenses for running the Future Forum conference.
Tom Green
This regrant will support six months of research on technology policy case studies of key historical figures who influenced technology development.
Haoxing Du
This regrant will support Haoxing Du’s career transition from physics to AI safety.
Anton Howes
This regrant will support historical research contracting work.
Cornell University, Professor Emma Pierson
This regrant will fund Professor Pierson to expand her program of research on fair and robustly aligned ML systems, with a focus on methods that will remain relevant as capabilities improve.
Neel Nanda
This regrant will provide a year of salary, funding to hire a research assistant, and compute for an independent AI safety researcher.
Flourishing Humanity Corporation
This regrant invested in two mental health, productivity, and personal growth applications.
Center for Space Governance
This regrant will provide funding for six people to spend three months producing the Center’s research agenda and creating and registering the organization.
David Mears
This regrant will support a career transition, allowing David to pivot to working as a frontend developer for Effective Altruist organizations, becoming a 'language model psychologist,' or participating in charity entrepreneurship.
Sam Glendenning
This regrant will allow Sam to manage a group of students in starting a 'London Existential Risk Initiative' (LERI). It provides funding for that student group and will in particular support three UCL student organizers.
High Impact Medicine
This regrant will support 12 months of High Impact Medicine’s activities, allowing them to continue and scale up their work.
Bertha von Suttner-Studienwerk
This regrant will fund BvS, a newly set up humanist student scholarship foundation, to take on 10 more fellows.
SoGive
This regrant will support four months of their work on in-depth analyses of charities.
Joseph Bloom
This regrant will support Joseph Bloom to spend 3-6 months transitioning his career to EA/Longtermist work, including conducting research on biosecurity.
Montaigne Institute
This regrant will allow the Montaigne Institute to hire a full-time staff member to work on AI policy.
X-Axis
This regrant will provide one year of funding to found and run a broad-interest, high-end website featuring high-quality writing and appealing visualizations focused on promoting longer-term thinking.
José Alonso Flores Coronado
This regrant will support six months of salary for work on biosecurity projects, partly mentored by Tessa Alexanian, Officer of Safety and Security at iGEM.
Aleksei Paletskikh
This regrant will support six months of work on distilling AI alignment research.
Cornell University, Tom Gilbert
This regrant will fund students and postdocs to attend a conference on Human Centered AI at UC Berkeley and work on a resulting report.
University of Toronto, Social Science Prediction Platform
This regrant will fund one year of operations for the Social Science Prediction Platform, a platform which allows economists and other social science academics to use forecasting in their research.
The Royal Statistical Society
This regrant will support a course to teach civil servants in the UK to read, process, and use scientific research in their policy work. The course will focus on examples from emergency scenarios including pandemics and other existential risks.
Aligned AI
This regrant invested in Aligned AI, an AI safety organization seeking bridging funding.
Gryphon Scientific
This regrant will support a risk-benefit analysis of virology research.
Fund for Alignment Research
This regrant will fund the operations of the Fund for Alignment Research, which primarily provides engineering support to AI alignment researchers.
Redwood Research
We recommended a grant to support 18 months of work by Redwood Research, an AI safety organization, and MLAB, a Redwood project that teaches machine learning to individuals interested in AI safety. The grant will primarily support salaries for their researchers and technical interns, with the remainder going to compute.
Pranav Gade
This regrant will support three years of work on a cybersecurity project for AI research, with funds paid out conditional on progress reports.
Topos Institute
This regrant will support the second “Finding the Right Abstractions” workshop, likely to take place around January 2023, which connects applied category theory and AI alignment communities in an effort to understand self-selecting systems.
Peter Hartree
This regrant will support the recording of audio narrations of important philosophical papers. The recordings will be packaged as a podcast feed with its own website, brand, and YouTube channel.
Simeon Campos
This regrant will support Simeon Campos in spending a fraction of his year working on AI safety projects.
Collective Intelligence Project
This regrant will support two researchers, Matthew Prewitt and Divya Siddarth, to create a research agenda on “collective intelligence” specifically aimed at mitigating Global Catastrophic and Existential risks, and run some trial projects based on it.
Chatham House
This regrant will support ongoing research and convening with representatives from the UK government, academia and think tanks, and the AI industry throughout 2022-23. These activities will be supplemented by written outputs, and the program will focus on UK AI policy and risk reduction, pandemic prevention policy, defense policy, cybersecurity policy, and UK navigation of great power technology relations.
Elizabeth Van Nostrand
This regrant will support continued mentorship of a medical researcher and work generating novel medical research literature reviews on a variety of topics, including long COVID.
Akash Wasil
This regrant will support five months of building and supporting the longtermist and AI alignment communities.
Condor Camp
This regrant will provide additional funding for Condor Camp, a program that brings top Brazilian university students to Peru for a retreat to learn about Effective Altruism.
Korbinian Kettnaker
This regrant will support a promising student who is working to bridge algorithmic information theory with epistemology and AGI research for his PhD at the University of Cambridge.
Global Priorities Encyclopedia
This regrant will support a 6-month trial of setting up the Global Priorities Encyclopedia (GPE), a reference work on global priorities research featuring articles by academics and researchers from the effective altruism community. Funding will pay the project lead, web developers, and article contributors.
Gryphon Scientific
This regrant will fund Gryphon Scientific to analyze options for and hold a workshop on protecting potential pandemic pathogen genomes.
AI Safety Community Building Hub
This regrant supports the creation and initial hiring for an organization that is centered around outreach to machine learning professionals about AI safety and alignment.
VIVID
This regrant is an investment in VIVID, a customization-based mindset improvement app that aims to improve people's self-reflection and productivity.
HR Luna Park
This regrant will support an experiment where a recruiting agency will headhunt ML Engineers for AI safety organizations.
School of Thinking
This regrant will support a global media outreach project to create high-quality video and social media content about rationalism, longtermism, and Effective Altruism.
Legal Services Planning Grant
This regrant will support six months of research on topics including how legal services can be effectively provided to the Effective Altruism community, materials to be included in a legal services handbook for EA organizations, novel legal questions particular to the EA community that might benefit from further research initiatives, and ways to create an effective EA professional network for practicing lawyers.
Manifold Markets
This regrant will support Manifold Markets in building a play-money prediction market platform. The platform is also experimenting with impact certificates and charity prediction markets.
David Xu
Trojan Detection Challenge at NeurIPS 2022
This regrant will support prizes for a trojan detection competition at NeurIPS, which involves identifying whether a deep neural network will suddenly change behavior if certain unknown conditions are met.