Areas of Interest

This page describes some areas that are (we think) especially important for making the future go well. 

We’d be excited to fund proposals that might make progress on these areas. For some concrete examples, see our Project Ideas page. We’re also very open to funding work on ideas and areas that we’ve overlooked. 

We know this page is incomplete, uncertain, underdeveloped, and often quite speculative. We could be getting a lot of important things wrong. So we know we might look quite foolish ten years from now—or much sooner, if someone points out our errors on Twitter. That’s okay with us, because we think we’ll learn more and learn faster this way. We hope that others will help us identify our mistakes and highlight ideas we’ve overlooked.

These aren’t the only areas the FTX Foundation cares about. For our work on other issues, please see the FTX Foundation website.

Artificial Intelligence

We think artificial intelligence (AI) is the development most likely to dramatically alter the trajectory of humanity this century. AI is already posing serious challenges: transparency, interpretability, algorithmic bias, and robustness, to name just a few. Before too long, advanced AI could automate the process of scientific and technological discovery, leading to economic growth rates well over 10% per year (see Aghion et al. 2017, this post, and Davidson 2021).
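
To make the mechanism behind that growth claim a little more concrete, here is a deliberately simple toy simulation of our own. It is not the model from Aghion et al. 2017 or Davidson 2021, and every number in it is invented for illustration; what matters is the feedback loop, in which output funds automated research capacity and that capacity in turn raises the growth rate of output.

    # Toy feedback loop between output and automated research capacity.
    # Purely illustrative: all parameter values are made up.
    def simulate(years=40, output=1.0, ai_capital=1.0,
                 reinvest_rate=0.2, research_productivity=0.05):
        """Return (year, growth rate in percent per year) for each simulated year."""
        history = []
        for year in range(years):
            growth_rate = research_productivity * ai_capital ** 0.5
            output *= 1 + growth_rate             # output grows at the current rate
            ai_capital += reinvest_rate * output  # part of output becomes research capacity
            history.append((year, growth_rate * 100))
        return history

    for year, pct in simulate()[::10]:
        print(f"year {year:2d}: growth rate of roughly {pct:.1f}% per year")

In this toy setup the growth rate starts at 5% per year and climbs past 10% within a couple of decades, purely from the reinvestment feedback; the models cited above are far more careful, but they point to the same qualitative possibility.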

As a result, our world could soon look radically different. With the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death and debilitating disease. But two formidable new problems for humanity could also arise:

  1. Loss of control to AI systems
    Advanced AI systems might acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future.
  2. Concentration of power
    Actors with an edge in advanced AI technology could acquire massive power and influence; if they misuse this technology, they could inflict lasting damage on humanity’s long-term future.

For more on these problems, we recommend Holden Karnofsky’s “Most Important Century,” Nick Bostrom’s Superintelligence, and Joseph Carlsmith’s “Is power-seeking AI an existential risk?”.

We’re not sure what to do about these problems. In general, though, we’d be excited about anything that addresses one of these problems without making the other one worse. Possibilities include:

  • Technical AI safety work
    Research on AI safety might allow us to ensure that we don’t lose control of advanced AI systems. We think this Open Philanthropy Project Request for Proposals and this work by the Alignment Research Center are promising places to start. We welcome applications focused on other research directions as well, provided that they have clear relevance to the “loss of control” problem outlined above.
  • Governance of AI
    We’d like to see the next generation of leaders approach AI safety and governance in a thoughtful and careful way, with the long-term interests of all humanity front and center.
  • Thinking hard about the future of AI
    We’d like to see more work that carefully explores the possible futures of AI, like Superintelligence and the “Most Important Century” series.
  • Growing the field
    Even if we can’t see how to solve these problems now, we can try to increase the number of smart and well-motivated people with expertise in machine learning and AI policy. We think it’s a good bet that these people will find ways to help in the future.

Biorisk and Recovery from Catastrophe

If our civilization doesn’t survive, we can’t build a society free from poverty and discrimination, push the frontiers of scientific knowledge and artistic expression, or accomplish anything else we care about. We face a variety of catastrophic threats in the next century, and humanity could go extinct. We want to improve our odds of surviving, and we’re open to all ways of doing that.

Biorisk is particularly worrying to us. We are extremely concerned about the accidental or deliberate release of a biological weapon optimized to produce as much damage as possible. Progress in synthetic biology could enable the creation of pathogens with the power to kill billions of people, or end the human species altogether. These pathogens could be deployed intentionally or unintentionally, by state actors or terrorist groups.

There is a lot that can be done to directly improve our resilience to this kind of attack. Examples include early warning systems for new pathogens, better personal protective equipment (PPE), rapid development of medical countermeasures, and stronger international governance of bioweapons.

We’re also interested in a “plan B” to recover from biological or nuclear catastrophe—infrastructure and plans for repairing society after a disaster, with the ability to rapidly scale things like food production.

For more on reducing catastrophic biorisk, see Andrew Snyder-Beattie and Ethan Alley’s thoughts here.

Epistemic Institutions

Society often fails to sort truth from falsehood, to make appropriate decisions in the face of uncertainty, and to direct its attention to what’s most important. We think there’s room for dramatic improvement. We’re interested in interventions that could significantly improve collective processes of belief formation and revision—either across the board, or for the most sophisticated consumers of information. We’re particularly interested in this work when it bears on topics of great importance to the long-term future.

Our interests include:

  • More forecasting. We’re huge fans of prediction markets and forecasting tournaments. We’d love to see these widely adopted and used to inform political decision-making. We’re particularly excited about long-term forecasting (10+ years out), and methods that might make long-term forecasting more feasible.
  • Expert opinion aggregation. The IGM Economic Experts Panel and this survey by Grace et al. 2017 are good examples. (For a toy illustration of one way to pool and score forecasts, see the sketch after this list.)
  • More rigorous news. We’d be excited to see major media outlets and newspapers operate with more rigorous epistemic standards than they often do. We’d also be interested in new outlets and newspapers that place a premium on epistemic rigor.
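
As a purely illustrative sketch of the kind of aggregation we have in mind (our own toy example, not a method we are specifically endorsing), the snippet below pools several hypothetical expert forecasts for a single yes/no question using the geometric mean of odds, a common pooling rule, and then scores the pooled forecast with a Brier score.

    import math

    def pool_geometric_odds(probs):
        """Combine probability forecasts by taking the geometric mean of their odds."""
        odds = [p / (1 - p) for p in probs]
        pooled_odds = math.exp(sum(math.log(o) for o in odds) / len(odds))
        return pooled_odds / (1 + pooled_odds)

    def brier_score(prob, outcome):
        """Squared error of a probability forecast against a 0-or-1 outcome (lower is better)."""
        return (prob - outcome) ** 2

    forecasts = [0.60, 0.70, 0.55, 0.80]  # hypothetical forecasts for one question
    pooled = pool_geometric_odds(forecasts)
    print(f"pooled forecast: {pooled:.2f}")  # roughly 0.67 for these inputs
    print(f"Brier score if the event occurs: {brier_score(pooled, 1):.3f}")

Real forecasting tournaments use richer methods, such as weighting forecasters by track record and scoring them across many questions, but even simple pooling rules like this one are a useful baseline.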

Values and Reflective Processes

The values that guide human society might be quite path-dependent. If the Second World War had unfolded differently, for example, fascism might have become the predominant global ideology. In the future, advanced artificial intelligence could be used to rule a society, or enforce its constitution, indefinitely. So whether the future is governed by values that are authoritarian or egalitarian, benevolent or sadistic, tolerant or intolerant, could be largely determined by the values and reflective processes we cultivate today. 

We think it’s important for societies to be structured in a way that allows the best ideas to win out in the long run. This process is threatened when states suppress free inquiry and reasoned debate or entrench political incumbents against meaningful challenge. 

We want to protect processes of democratic discourse and deliberation. We’re interested in efforts to safeguard the political process, by reducing partisan corruption and ensuring that votes are faithfully translated into political outcomes. 

We’re also interested in the relationship between the media, culture, and values. Books and media are enormously important for exploring ideas, shaping the culture, and focusing society’s attention on specific issues. There is a long track record of major impact (not always positive). In non-fiction, examples include An Analysis of Roman Government, The Spirit of the Laws, Two Treatises of Government, A Vindication of the Rights of Woman, On Liberty, Utilitarianism, Das Kapital, Capitalism and Freedom, Animal Liberation, and An Inconvenient Truth. In fiction, examples include Uncle Tom’s Cabin and Nineteen Eighty-Four.

We’re excited about media that will explore ideas, like existential risk and artificial intelligence, that are especially important for positively shaping the long-term future. The Precipice and Superintelligence are examples. We’re interested in books, documentaries, movies, blogs, Twitter, journalism, and more.

Economic Growth

Inclusive economic growth is a powerful force for human progress. We’re interested in exploring unusually effective interventions to accelerate growth. Immigration reform and slowing down demographic decline could enable more people to contribute to technological development. So could innovative educational experiments to empower young people with exceptional potential.

We believe that progress in this area could substantially improve humanity’s long-term prospects by (i) improving the conditions for free inquiry and moral progress, (ii) reducing existential risk by shortening the “time of perils,” and (iii) reducing the duration and likelihood of long-term economic stagnation.

Great Power Relations

When countries with powerful militaries and global interests compete, the effects are large and long-lasting. The Napoleonic Wars redrew the borders of Europe. World War II spurred the development of nuclear weapons, halted the ascent of fascism, and set up decades of conflict between liberal democracy and communism. During the Cold War, superpowers built up arsenals of thousands of nuclear warheads, ready to launch at the first sign of danger.

We expect competition and cooperation between powerful countries like the US, China, Russia, and India to have a similarly large influence on human affairs in the twenty-first century. 

While great power relations are clearly important, it’s not clear how to influence outcomes in this area for the better. We know there are complex dynamics at play and we’re wary of making things worse. Still, we hope to engage with experts in order to build up capacity and knowledge in this domain. Here are some opportunities we’d be excited to explore:

  • Developing and promoting policy proposals that directly reduce conflict risks, like Thomas Schelling’s work on the direct communication line between Washington and Moscow.
  • Initiatives that increase understanding and cooperation between the great powers, on issues such as arms control and information sharing, to reduce the risk of accidents.
  • Research into neglected issues like emerging technologies, the possibility of new weapons of mass destruction, and the very long-term effects of great power competition.

Space Governance

(The ideas in this section are more unusual and less developed than those in many other sections. We’re less sure that we’re being reasonable here, but we thought it would be interesting to put them forward anyway.)

The governance of space could soon become a topic of immense and pressing importance. But relatively few people are thinking about space governance in depth. We hope that will change. 

In addition to issues such as the governance of planets like Mars, we believe it’s important to think through plausible scenarios that may seem more exotic. Here’s a scenario that we think could plausibly happen this century:

  • Advances in AI automate the process of scientific and technological advancement, resulting in a dramatic acceleration of the rate of technological progress. (See Aghion et al. 2017 and this post.)
  • Humanity becomes capable of settling a large fraction of the accessible universe. In a relatively short period of time, expeditions are launched to settle almost all of the universe that will ever be reached (see e.g. Armstrong and Sandberg 2013 for one version of this scenario).

We’d like to see rigorous work on confirming or refuting the plausibility of such a scenario. More generally, we hope to see thoughtful people carefully explore the different ways that space might be settled. 

We’d also like to see careful thinking about long-term space governance. What rules will determine which countries, if any, govern which planets and solar systems? Will there be a constitutional convention-like process to determine this? What should the constitution say? How do we get from our existing legal and political framework to the governance system we will ultimately need?

Empowering Exceptional People

We’re excited about projects that seek to identify outstandingly talented young people, especially those without access to educational opportunity—such as children living in extreme poverty. With scholarships and support, these young people could help solve humanity’s most pressing problems. 

We also want to enable outstanding professionals, academics, and students to step away from their current line of work or research, in order to work on issues with special relevance to protecting the long-term future. We’re excited to explore fellowships, prizes, teaching buyouts, increased compensation, start-up incubators, and any other promising mechanism for achieving this goal. 

Finally, we’re excited about incentivizing and supporting people to work on research projects of enormous scope and ambition, like Superintelligence, Engines of Creation, Animal Liberation, The Spirit of the Laws, and A Vindication of the Rights of Woman.

Effective Altruism

We’re very excited about growing the effective altruism community. And we want to critically examine its ideas, to help ensure that effective altruists cultivate open-mindedness, resist insular thinking, and avoid homogeneity. 

We think that expanding the effective altruism community will get more people working on critical problems for securing humanity’s future. That includes the problems listed on this page, as well as problems we haven’t discussed here or aren’t yet aware of. 

By thoughtfully expanding the effective altruism community, we can also improve the quality of its decision-making, by drawing upon a wider range of experiences and points of view. As the EA community grows, it needs to strengthen its efforts to increase its racial, gender, geographical, and educational diversity. We want to find the most effective strategies for doing so. And we want to continue building out EA student groups at universities, especially in parts of the US and the world that are currently underrepresented in EA. 

We want to see effective altruism grow, but we also think it’s extremely important that effective altruists don’t fall prey to insularity. That’s why the Future Fund is eager to receive proposals for funding and collaboration from anyone who wants to protect the long-term future—regardless of whether they identify as effective altruists, longtermists, or anything else. 

Research That Can Help Us Improve

There’s so much we just don’t know. Many of the ideas discussed on this page are relatively new, so it’s likely there are crucial considerations and causes that we are totally missing. Even when we’re focusing on the right issues, our reasoning about them could undoubtedly improve. 

We think that most people interested in protecting the long-term future should focus on making concrete, practical contributions. Still, we think that some folks could make an enormous contribution via research. We are especially interested in research that might help us improve our own decision-making, and the decision-making of other organizations and individuals focused on improving the long-term future. In particular:

  • We’d love to see thoughtful plans for how to spend $1-100B to make humanity’s future as good as possible. We might learn a great deal from these plans, even when we don’t put them into action—and in some cases, we might do that too!
  • We don’t know what to make of things like infinite ethics, acausal trade, or the simulation hypothesis, and we mostly ignore them in our work. We’d be interested to see arguments about whether that’s a big mistake, and to examine thoughtful proposals for what we should do differently in light of such issues.
  • We’d be excited and grateful to see rigorous criticisms of our priorities and concerns.