
Our Regrants

You can use this database to see information about a wide selection of the grants and investments recommended by our regrantors.1

Our hope is to empower a range of interesting, ambitious, and altruistic people to drive funding decisions through a rewarding, low-friction process. Thus far they have recommended grants and investments totaling over $50 million.2

We’re currently running a six-month test of this model. We gave over 100 people access to a discretionary budget, and gave another group of over 60 access to a streamlined grant recommendation form without a discretionary budget. Read more about our regranting program here.

The specific projects or organizations supported by these grants are not necessarily endorsed by the Future Fund. For regrants from discretionary pots, we screen for downsides, community effects, conflicts of interest, and similar issues, but otherwise give regrantors broad autonomy.

We are only publishing regrants above $25k. We had 172 regrants below $25k, totaling ~$1.8M. Many of these were stipends for summer research work, support to attend conferences, funding for talent development, or funding for writing or other media on impactful topics.


Last updated: September 1, 2022

Organization Name · Funding Type · Area of Interest · Date of Grant · Amount · Funding Stream
June 2022

Dylan Iskandar

This regrant will support a summer research internship at Columbia University in Barnard’s Programming Languages Lab under Prof. Santolucito, as well as equipment and conference attendance. The research project is aimed at modeling and aligning program synthesis models in the setting of computational creativity.

$25,000
August 2022

Charbel-Raphaël Segerie

This regrant will support six months of salary for Charbel-Raphaël Segerie to work on AI safety mentoring and study.

$25,000
August 2022

Sharon Hewitt Rawlette

This regrant will fund the writing of a book about classical utilitarianism and its foundations and implications.

$70,000
August 2022

Machine Learning Moral Uncertainty Competition

This regrant will fund the prizes for a competition on using machine learning models to estimate when human values are in conflict and estimate moral uncertainty.

$100,000
August 2022

The Inside View

This regrant will support Michael Trazzi to make 20 episodes of "The Inside View," his podcast and YouTube channel about research on AI and its risks. The funds will compensate Michael for his time, equipment, and marketing.

$51,980
August 2022

Noa Nabeshima

This regrant will support three months of independent AI interpretability research.

$30,000
August 2022

University of California, Santa Cruz, Cihang Xie

This regrant will support Professor Cihang Xie's research on adversarial robustness and honest AI.

$350,000
August 2022

Space Future Initiative

This regrant will support funding for students at Harvard and Cornell to establish student groups to learn about and research longtermist space governance, including events, stipends, and prizes.

$90,000
August 2022

University of Pennsylvania, Professor Geoff Goodwin

This regrant will support three years of work on the Global Priorities Research psychology research agenda, for a post doc, conference attendance, research assistants, and payments for experimental participants, among other expenses.

$282,700
July 2022

The International Commission on Non-Ionizing Radiation Protection, Dr. Rodney Croft

This regrant supports work investigating far-UVC, a band of ultraviolet light that could safely inactivate pathogens in occupied rooms. It funds the International Commission on Non-Ionizing Radiation Protection, a nonprofit group that sets international exposure guidelines, to 1) determine what additional information is needed to re-evaluate the guidelines for 180-230nm light, and 2) specify experiments that would provide that knowledge, including methodological details.

$1,206,000
July 2022

Safety Philosophers Program

This regrant will cover the costs of running a seven-month AI safety research fellowship for philosophy academics. The funds will cover travel, a housing stipend, office space, guest-speaker honoraria, and the hiring of researchers, advisors, a project manager, and organizers for the seven-month period.

$1,331,500
philosophy.safe.ai
July 2022

The Treacherous Turn Game

This regrant will support the design and creation of a tabletop roleplaying game that simulates potential risks from advanced AI.

$96,000
July 2022

David Chapman

This regrant will support an independent researcher to investigate the magnitude of AI risk and the most useful actions to take to address it, how to improve Effective Altruist decisionmaking according to "metarational" principles, and potentially valuable projects in metascience.

$48,000
July 2022

Evan R Murphy

This regrant will support six months of independent research on interpretability and other AI safety topics.

$30,000
July 2022

University of Chicago, Professor Chenhao Tan

This regrant will support the hiring of a PhD student or postdoc to help orient their lab toward AI safety.

$250,000
July 2022

Princeton University, Professor Adji Bousso Dieng

This regrant will fund their research agenda into robust generative models in the face of non-randomly missing data, targeting use cases in healthcare, materials science, and climate.

$250,000
July 2022

Ben Marcovitz

This regrant will support leadership coaching for leaders of Effective Altruist projects.

$150,000
July 2022

AI Governance Summer Program

This regrant will support ten fellows to work on mentored AI Governance projects over the summer.

$91,417
July 2022

Jenny Xiao

This regrant will support a teaching buyout which will allow Jenny to spend 20 hours a week on AI Governance research.

$26,000
July 2022

Professor David Bau

This regrant will support several research directions in interpretability for 2-3 years, including: empirical evaluation of knowledge and guessing mechanisms in large language models, clarifying large language models’ ability to be aware of and control use of internal knowledge, a theory for defining and enumerating knowledge in large language models, and building systems that enable human users to tailor a model’s composition of its internal knowledge.

$765,805
July 2022

Yan Zhang

This regrant will support a teaching buyout to allow Yan to focus on forecasting AGI scenarios, as well as research on beneficial alignment and deployment.

$56,000
July 2022

Konstantin Pilz

This regrant will support independent study and research on AI safety and alignment alongside research assistant work for Lennart Heim.

$43,000
July 2022

University of Maryland, Baltimore County, Professor Tim Oates

This regrant will fund Professor Tim Oates and a graduate student to work on AI alignment research.

$183,040
July 2022

Sandra Malagon

This regrant will support a year of coordinating several EA mentoring and community building activities with South American Spanish speakers.

$79,530
July 2022

Adam Jermyn

This regrant will support four months of theoretical AI safety research under Evan Hubinger’s mentorship.

$65,000
July 2022

Alex Mennen

This regrant will support six months of AI safety work.

$40,000
July 2022

University of Western Ontario, Professor Joshua Pearce

This regrant will fund Professor Pearce’s graduate students to conduct research projects on resilient food, specifically on manufacturing rope for seaweed farming and on making leaf protein concentrate.

$150,000
July 2022

Existential Risk NeurIPS prize

This regrant will fund prizes for papers at a NeurIPS workshop: $50k for the papers with the best discussion of AI x-risk, and $50k for the best overall papers.

$100,000
July 2022

Global Challenges Project

This regrant will support operations and logistics for the Harvard-MIT Existential Risk Summit held in Berkeley at the end of July. This four-day retreat will bring together incoming Harvard and MIT first years with Berkeley EAs and members of the Harvard/MIT EA communities.

$50,000
July 2022

Hear This Idea Regranting Program

This regrant will support a small grants program managed and advertised via Hear This Idea which will give funding to people to launch podcasts or audio projects focused on ideas that matter.

$25,000
July 2022

Adapt Research

This regrant will support the development of a first draft of a resilience plan for nuclear winter/extreme pandemic.

$152,000
July 2022

Javier López Corpas and Cristina Rodríguez Doblado

This regrant will support a year of work translating popular content on Effective Altruism into Spanish.

$75,000
July 2022

Future Forum

This regrant will support general expenses for running the Future Forum conference.

$100,000
July 2022

Tom Green

This regrant will support six months of research on technology policy case studies of key historical figures who influenced technology development.

$50,000
July 2022

Haoxing Du

This regrant will support a career transition grant from physics to AI safety.

$50,000
July 2022

Anton Howes

This regrant will support historical research contracting work.

$40,000
June 2022

Cornell University, Professor Emma Pierson

This regrant will fund Professor Pierson to expand her program of research on fair and robustly aligned ML systems, with a focus on methods that will remain relevant as capabilities improve.

$250,000
June 2022

Neel Nanda

This regrant will provide a year of salary, funding to hire a research assistant, and compute for an independent AI safety researcher.

$125,000
June 2022

Flourishing Humanity Corporation

This regrant invested in two mental health, productivity, and personal growth applications.

$80,000
June 2022

Center for Space Governance

This regrant will provide funding for six people to spend three months producing the Center’s research agenda and creating and registering the organization.

$80,000
June 2022

David Mears

This regrant will support a career transition grant to allow David to pivot to working as a frontend developer for Effective Altruist organizations, becoming a 'language model psychologist,' or participating in charity entrepreneurship.

$60,000
June 2022

Sam Glendenning

This regrant will allow Sam to manage a group of students starting a 'London Existential Risk Initiative' (LERI). It provides funding for that student group, in particular supporting three UCL student organizers.

$42,798
June 2022

High Impact Medicine

This regrant will support 12 months of High Impact Medicine's work so that they can continue (and scale up) their activities.

$170,000
June 2022

Bertha von Suttner-Studienwerk

This regrant will fund BvS, a newly established humanist student scholarship foundation, to take on 10 more fellows.

$110,000
June 2022

SoGive

This regrant will support four months of their work on in-depth analyses of charities.

$92,715
June 2022

Joseph Bloom

This regrant will support Joseph Bloom to spend 3-6 months transitioning his career to EA/Longtermist work, including conducting research on biosecurity.

$25,000
June 2022

Montaigne Institute

This regrant will allow the Montaigne Institute to hire a full time staff member to work on AI policy.

$67,688
June 2022

X-Axis

This regrant will support one year of funding to found and run a broad-interest, high-end website, featuring high-quality writing and appealing visualizations focused on promoting longer-term thinking.


$160,000
June 2022

José Alonso Flores Coronado

This regrant will support 6 months of salary for work on biosecurity projects, partly mentored by Tessa Alexanian, Officer of Safety and Security at iGEM.

$38,500
June 2022

Aleksei Paletskikh

This regrant will support six months of work on distilling AI alignment research.

$27,000
June 2022

Cornell University, Tom Gilbert

This regrant will fund students and postdocs to attend a conference on Human Centered AI at UC Berkeley and work on a resulting report.

$25,000
June 2022

University of Toronto, Social Science Prediction Platform

This regrant will fund one year of operations for the Social Science Prediction Platform, a platform which allows economists and other social science academics to use forecasting in their research.

$492,119
socialscienceprediction.org
June 2022

The Royal Statistical Society

This regrant will support a course to teach civil servants in the UK to read, process, and use scientific research in their policy work. The course will focus on examples from emergency scenarios including pandemics and other existential risks.

$251,925
June 2022

Aligned AI

This regrant invested in Aligned AI, an AI safety organization seeking bridging funding.

$150,000
June 2022

Gryphon Scientific

This regrant will support a risk-benefit analysis of virology research.

$144,000
August 2022

Fund for Alignment Research

This regrant will fund the operations of the Fund for Alignment Research, which primarily provides engineering support to AI alignment researchers.

$1,025,000
July 2022

Redwood Research

We recommended an 18-month grant to support Redwood Research, an AI safety organization, and MLAB, a Redwood project that teaches machine learning to individuals interested in AI safety. The funds will primarily support salaries for their researchers and technical interns, with the remainder going to compute.

$6,600,000
July 2022

Pranav Gade

This regrant will support three years of work on a cybersecurity project for AI research, paid out conditional on progress reports.

$50,000

Topos Institute

This regrant will support the second “Finding the Right Abstractions” workshop, likely to take place around January 2023, which connects applied category theory and AI alignment communities in an effort to understand self-selecting systems.

$30,000
July 2022

Peter Hartree

This regrant will support the recording of audio narrations of important philosophical papers. The recordings will be packaged up as a podcast feed with its own website, brand and YouTube channel.

$36,405
July 2022

Simeon Campos

This regrant will support Simeon Campos to work for a fraction of his year on AI safety projects.

$25,000
July 2022

Collective Intelligence Project

This regrant will support two researchers, Matthew Prewitt and Divya Siddarth, to create a research agenda on “collective intelligence” specifically aimed at mitigating Global Catastrophic and Existential risks, and run some trial projects based on it.

$250,000
July 2022

Chatham House

This regrant will support ongoing research and convening with representatives from UK government, academia/think-tanks, and AI industry throughout 2022-23. These activities will be supplemented by written outputs and the program will focus on UK AI policy and risk reduction, pandemic prevention policy, defense policy, cybersecurity policy, and UK navigation of great power technology relations.

$400,000
June 2022

Elizabeth Van Nostrand

This regrant will support continued mentorship of a medical researcher and work generating novel medical research literature reviews, on a variety of topics, including long covid.

$30,000
June 2022

Akash Wasil

This regrant will support five months of building and supporting the longtermist and AI alignment communities.

$37,500
July 2022

Condor Camp

This regrant will support additional funding for Condor Camp, the program for top Brazilian university students to come to Peru for a retreat to learn about Effective Altruism.

$50,000
June 2022

Korbinian Kettnaker

This regrant will support a promising student who is working to bridge algorithmic information theory with epistemology and AGI research for his PhD at the University of Cambridge.

$158,000
June 2022

Global Priorities Encyclopedia

This regrant will support a 6-month trial of setting up the Global Priorities Encyclopedia (GPE), a reference work on global priorities research featuring articles by academics and researchers from the effective altruism community. Funding would pay the project lead, web developers, and article contributions.

$241,500
August 2022

Gryphon Scientific

This regrant will fund Gryphon Scientific to analyze options for and hold a workshop on protecting potential pandemic pathogen genomes.

$177,583
July 2022

AI Safety Community Building Hub

This regrant supports the creation and initial hiring for an organization that is centered around outreach to machine learning professionals about AI safety and alignment.

$500,000
August 2022

VIVID

This regrant is an investment in VIVID, a customization-based mindset improvement app that aims to improve people's self-reflection and productivity.

$800,000
vivid-app.me
August 2022

HR Luna Park

This regrant will support an experiment where a recruiting agency will headhunt ML Engineers for AI safety organizations.

$200,000
May 2022

School of Thinking

This regrant will support a global media outreach project to create high quality video and social media content about rationalism, longtermism and Effective Altruism.


$250,000
May 2022

Legal Services Planning Grant

This regrant will support six months of research on topics including how legal services can be effectively provided to the Effective Altruism community, materials to be included in a legal services handbook for EA organizations, novel legal questions particular to the EA community that might benefit from further research initiatives, and ways to create an effective EA professional network for practicing lawyers.

$100,000
March 2022

Manifold Markets

This regrant will support Manifold Markets in building a play-money prediction market platform. The platform is also experimenting with impact certificates and charity prediction markets.

$1,000,000
manifold.markets
March 2022

David Xu

This regrant will support six months of research on AI safety.
$50,000
May 2022

Trojan Detection Challenge at NeurIPS 2022

This regrant will support prizes for a trojan detection competition at NeurIPS, which involves identifying whether a deep neural network will suddenly change behavior if certain unknown conditions are met.

$50,000
May 2022

Effective Altruism Office Zurich

This regrant will support renting and furnishing an office space for a year.
$52,000
March 2022

Akash Wasil

This regrant will support an individual working on supporting students who are interested in focusing their careers on the world’s most pressing problems.
$26,000
April 2022

Fiona Pollack

This regrant will support six months of salary for an individual working to support Harvard students interested in working on the world’s most pressing problems and protecting and improving the long term future.
$30,000
April 2022

Peter McLaughlin

This regrant will support six months of research on criticisms of effective altruism.
$46,000
April 2022

Dwarkesh Patel

This regrant will support a promising podcaster to hire a research assistant and editor, purchase equipment, and cover travel to meet guests in person. The podcast covers technological progress, existential risk, economic growth, and the long term future.
$76,000
May 2022

ALERT

This regrant will support the creation of the Active Longtermist Emergency Response Team, an organization to rapidly manage emerging global events like Covid-19.

$150,000
forum.effectivealtruism.org
May 2022

EA Critiques and Red Teaming Prize

This regrant will support prize money for a writing contest for critically engaging with theory or work in Effective Altruism. The goal of the contest is to produce thoughtful, action-oriented critiques.

$100,000
forum.effectivealtruism.org
May 2022

Federation for American Scientists

This regrant will support a researcher and research assistant to work on high-skill immigration and AI policy at FAS for three years.

$1,000,000
fas.org
May 2022

Ought

This regrant will support Ought’s work building Elicit, a language-model based research assistant. This work contributes to research on reducing alignment risk through scaling human supervision via process-based systems.

$5,000,000
April 2022

ML Safety Scholars Program

This regrant will fund a summer program for up to 100 students to spend 9 weeks studying machine learning, deep learning, and technical topics in safety.

$490,000
course.mlsafety.org
March 2022

AntiEntropy

This regrant will support a project to create and house operations-related resources and guidance for EA-aligned organizations.

$120,000
resourceportal.antientropy.org
May 2022

Everett Smith

This regrant will support a policy retreat on governing artificial intelligence.
$35,000
May 2022

Olle Häggström, Chalmers University of Technology

This regrant will support research on statistical arguments relating to existential risk and work on risks from artificial intelligence, as well as outreach, supervision, and policy work on these topics.
$380,000
May 2022

Essay Contest on Existential Risk in US Cost Benefit Analysis

This regrant will support an essay contest on “Accounting for Existential Risks in US Cost-Benefit Analysis,” with the aim of contributing to the revision of OMB Circular-A4, a document which guides US government cost-benefit analysis. The Legal Priorities Project is administering the contest.

$137,500
legalpriorities.org
May 2022

MineRL BASALT competition at NeurIPS

This regrant will support a NeurIPS competition applying human feedback in a non-language-model setting, specifically pretrained models in Minecraft. The grant will be administered by the Berkeley Existential Risk Initiative.

$155,000
minerl.io
May 2022

QURI

This regrant will support QURI to develop a programming language called "Squiggle" as a tool for probabilistic estimation. The hope is that this will be a useful tool for forecasting and Fermi estimates.

$200,000
squiggle-language.com
May 2022

Andi Peng

This regrant will support four months of salary and compute for research on AI alignment.
$42,600
May 2022

CSIS

This regrant will support initiatives including a CSIS public event focused on the importance of investments in human capital to ensure US national security; roundtables with policymakers, immigration experts, national security professionals, and company representatives to discuss key policy actions that should be taken to bolster US national security through immigration reform; and two episodes of the “Vying for Talent” podcast focusing on the importance of foreign talent in bolstering America’s innovative capacity.
$75,000
May 2022

Aaron Scher

This regrant will support a summer of research on AI alignment in Berkeley.
$28,500
April 2022

Kris Shrishak

This regrant will support research on how cryptography might be applied to AI safety research.
$28,000
June 2022

AI Impacts

This regrant will support rerunning the highly-cited survey “When Will AI Exceed Human Performance? Evidence from AI Experts” from 2016, analysis, and publication of results.

$250,000
May 2022

Chinmay Ingalagavi

This regrant will support a master's degree at LSE for a talented STEM student.
$50,000
May 2022

Apart Research

This regrant will support the creation of an AI Safety organization which will create a platform to share AI safety research ideas and educational materials, connect people working on AI safety, and bring new people into the field.

$95,000
apartresearch.com aisafetyideas.com
May 2022

Tereza Flidrova

This regrant will support a one-year master's program in architecture for a student interested in building civilizational shelters.
$32,000
May 2022

J. Peter Scoblic

This regrant will fund a nuclear risk expert to construct nuclear war-related forecasting questions and provide forecasts and explanations on key nuclear war questions.
$25,000
April 2022

AI Risk Public Materials Competition

This regrant will support two competitions to produce better public materials on the existential risk from AI.
$40,000
May 2022

Moncef Slaoui

This regrant will fund the writing of Slaoui's memoir, especially including his experience directing Operation Warp Speed.

$150,000
May 2022

Artificial Intelligence Summer Residency Program

This regrant will support a six week summer residency in Berkeley on AI safety.
$60,000
March 2022

Public Editor

This regrant will support a project to use a combination of human feedback and Machine Learning to label misinformation and reasoning errors in popular news articles.

$500,000
publiceditor.io
May 2022

The Good Ancestors Project

This regrant will support the creation of The Good Ancestors Project, an Australia-based organization to host research and community building on topics relevant to making the long-term future go well.

$75,000
goodancestorsproject.org.au
April 2022

Thomas Kwa

This regrant will support three months of research on AI safety.
$37,500
March 2022

Joshua Greene, Harvard University

This regrant will support the real-world testing and roll-out of 'Red Brain, Blue Brain', an online quiz designed to reduce negative partisanship between Democrats and Republicans in the US.
$250,000
April 2022

Braden Leach

This regrant supported a recent law school graduate to work on biosecurity. Braden will research and write at the Johns Hopkins Center for Health Security.
$175,000
April 2022

Adversarial Robustness Prizes at ECCV

This regrant will support three prizes for the best papers on adversarial robustness research at a workshop at ECCV, the main fall computer vision conference. Papers will be selected for their relevance to long-term threat models, which typical adversarial robustness papers lack.

$30,000
May 2022

Confido Institute

The Confido Institute is working on developing a user-friendly interactive app, Confido, for making forecasts and communicating beliefs and uncertainty within groups and organizations. They are also building interactive educational programs about forecasting and working with uncertainty based around this app.

$190,000
confido.tools
April 2022

Supporting Agent Foundations AI safety research at ALTER

This regrant will support 1.5-3 years of salary for a mathematics researcher to work with Vanessa Kosoy on the learning-theoretic AI safety agenda.

$200,000
lesswrong.com
May 2022

Modeling Transformative AI Risks (Aryeh Englander, Sammy Martin, Analytica Consulting)

This regrant will support two AI researchers, one or two additional assistants, and a consulting firm to continue to build out and fully implement the quantitative model for how to understand risks and interventions around AI safety, expanding on their earlier research on “Modeling Transformative AI Risk.”

$272,000
alignmentforum.org
March 2022

Impact Markets

This regrant will support the creation of an “impact market.” The hope is to improve charity fundraising by allowing profit-motivated investors to earn returns by investing in charitable projects that are eventually deemed impactful.

$215,000
impactmarkets.io
May 2022

AI Alignment Prize on Inverse Scaling

This regrant will support prizes for a contest to find tasks where larger language models do worse (“inverse scaling”).
$250,000
March 2022

Swift Centre for Applied Forecasting

This regrant will support the creation of the Swift Centre for Applied Forecasting, including salary for a director and a team of expert forecasters. They will forecast trends from Our World in Data charts, as well as other topics related to ensuring the long term future goes well, with a particular focus on explaining the “why” of forecast estimates.

$2,000,000
swiftcentre.org
March 2022

Lawrence Newport

This regrant will support the launch and first year of a YouTube channel featuring video essays presented by Dr Lawrence Newport on longtermism, the future of humanity, and related topics.
$95,000
May 2022

Aidan O’Gara

This regrant will fund salary, compute, and a scholarship for an undergraduate student doing career development and research on language model safety.
$46,000

1. All grantees and investees were given an opportunity to review their listing and offer corrections before this list was published. Please email [email protected] to request edits. As with our direct grants and investments, we sometimes do not publish grants because the grantee asks us not to or because we believe it would undermine our or the grantee’s work. We also do not necessarily publish all grants that are small, initial, or exploratory.

2. The Future Fund is a project of the FTX Foundation, a philanthropic collective. Grants and donations are made through various entities in our family of organizations, including FTX Philanthropy Inc., a nonprofit entity. Investment profits are reserved for philanthropic purposes.