{"id":28,"date":"2022-01-25T20:51:03","date_gmt":"2022-01-25T20:51:03","guid":{"rendered":"https:\/\/ftxfuturefund.org\/?page_id=28"},"modified":"2022-09-23T10:15:14","modified_gmt":"2022-09-23T18:15:14","slug":"area-of-interest","status":"publish","type":"page","link":"https:\/\/ftxfuturefund.org\/area-of-interest\/","title":{"rendered":"Areas of Interest"},"content":{"rendered":"\n
This page describes some areas that are (we think) especially important for making the future go well.

We'd be excited to fund proposals that might make progress on these areas. For some concrete examples, see our Project Ideas page. We're also very open to funding work on ideas and areas that we've overlooked.

We know this page is incomplete, uncertain, underdeveloped, and often quite speculative. We could be getting a lot of important things wrong. So we might look quite foolish ten years from now, or much sooner if someone points out our errors on Twitter. That's okay with us, because we think we'll learn more and learn faster this way. We hope that others will help us identify our mistakes and highlight ideas we've overlooked.

These aren't the only areas the FTX Foundation cares about. For our work on other issues, please see the FTX Foundation website.

Artificial Intelligence

We think artificial intelligence (AI) is the development most likely to dramatically alter the trajectory of humanity this century. AI is already posing serious challenges: transparency, interpretability, algorithmic bias, and robustness, to name just a few. Before too long, advanced AI could automate the process of scientific and technological discovery, leading to economic growth rates well over 10% per year (see Aghion et al. 2017, this post, and Davidson 2021).

As a result, our world could soon look radically different. With the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death, and debilitating disease. But two formidable new problems for humanity could also arise:

1. Advanced AI systems might acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future.
2. Actors with an edge in advanced AI technology could acquire massive power and influence; if they misuse this technology, they could inflict lasting damage on humanity's long-term future.

For more on these problems, we recommend Holden Karnofsky's "Most Important Century," Nick Bostrom's Superintelligence, and Joseph Carlsmith's "Is power-seeking AI an existential risk?"

We're not sure what to do about these problems. In general, though, we'd be excited about anything that addresses one of these problems without making the other one worse. Possibilities include:

- Research on AI safety might allow us to ensure that we don't lose control of advanced AI systems. We think this Open Philanthropy request for proposals and this work by the Alignment Research Center are promising places to start. We welcome applications focused on other research directions as well, provided they have clear relevance to the "loss of control" problem outlined above.
- We'd like to see the next generation of leaders approach AI safety and governance in a thoughtful and careful way, with the long-term interests of all humanity front and center.
- We'd like to see more work that carefully explores the possible futures of AI, like Superintelligence and the "Most Important Century" series.
- Even if we can't see how to solve these problems now, we can try to increase the number of smart and well-motivated people with expertise in machine learning and AI policy. We think it's a good bet that these people will find ways to help in the future.

Biorisk and Recovery from Catastrophe

If our civilization doesn't survive, we can't build a society free from poverty and discrimination, push the frontiers of scientific knowledge and artistic expression, or accomplish anything else we care about. We face a variety of catastrophic threats in the next century, and humanity could go extinct. We want to improve our odds of surviving, and we're open to all ways of doing that.

Biorisk is particularly worrying to us. We are extremely concerned about the accidental or deliberate release of a biological weapon optimized to produce as much damage as possible. Progress in synthetic biology could enable the creation of pathogens with the power to kill billions of people or end the human species altogether. These pathogens could be deployed intentionally or unintentionally, by state actors or terrorist groups.

There is a lot to be done to directly improve our resilience to this type of attack.
Some examples are early warning systems for new pathogens, better personal protective equipment, rapid development of medical countermeasures, and better international governance of bioweapons.

We're also interested in a "plan B" for recovering from biological or nuclear catastrophe: infrastructure and plans for repairing society after a disaster, with the ability to rapidly scale things like food production.

For more on reducing catastrophic biorisk, see Andrew Snyder-Beattie and Ethan Alley's thoughts here.

Epistemic Institutions

Society often fails to sort truth from falsehood, to make appropriate decisions in the face of uncertainty, and to direct its attention to what's most important. We think there's room for dramatic improvement. We're interested in interventions that could significantly improve collective processes of belief formation and revision, either across the board or for the most sophisticated consumers of information. We're particularly interested in this work when it bears on topics of great importance to the long-term future.

Our interests include:

Values and Reflective Processes

The values that guide human society might be quite path-dependent. If the Second World War had unfolded differently, for example, fascism might have become the predominant global ideology. In the future, advanced artificial intelligence could be used to rule a society, or enforce its constitution, indefinitely. So whether the future is governed by values that are authoritarian or egalitarian, benevolent or sadistic, tolerant or intolerant, could be largely determined by the values and reflective processes we cultivate today.

We think it's important for societies to be structured in a way that allows the best ideas to win out in the long run. This process is threatened when states suppress free inquiry and reasoned debate or entrench political incumbents against meaningful challenge.

We want to protect processes of democratic discourse and deliberation. We're interested in efforts to safeguard the political process by reducing partisan corruption and ensuring that votes are faithfully translated into political outcomes.

We're also interested in the relationship between the media, culture, and values. Books and media are enormously important for exploring ideas, shaping the culture, and focusing society's attention on specific issues. There is a long track record of major impact (not always positive). In non-fiction, examples include The Spirit of the Laws, Two Treatises of Government, A Vindication of the Rights of Woman, On Liberty, Utilitarianism, Das Kapital, Capitalism and Freedom, Animal Liberation, and An Inconvenient Truth. In fiction, examples include Uncle Tom's Cabin and Nineteen Eighty-Four.

We're excited about media that will explore ideas, like existential risk and artificial intelligence, that are especially important for positively shaping the long-term future. The Precipice and Superintelligence are examples. We're interested in books, documentaries, movies, blogs, Twitter, journalism, and more.

Economic Growth

Inclusive economic growth is a powerful force for human progress. We're interested in exploring unusually effective interventions to accelerate growth.
Immigration reform and slowing down demographic decline could enable more people to contribute to technological development. So could innovative educational experiments to empower young people with exceptional potential.

We believe that progress in this area could substantially improve humanity's long-term prospects by (i) improving the prospects for free inquiry and moral progress, (ii) reducing existential risk by shortening the "time of perils," and (iii) reducing the duration and likelihood of long-term economic stagnation.

Great Power Relations

When countries with powerful militaries and global interests compete, the effects are large and long-lasting. The Napoleonic Wars redrew the borders of Europe. World War II spurred the development of nuclear weapons, halted the ascent of fascism, and set up decades of conflict between liberal democracy and communism. During the Cold War, the superpowers built up arsenals of thousands of nuclear warheads, ready to launch at the first sign of danger.

We expect competition and cooperation between powerful countries like the US, China, Russia, and India to have a similarly large influence on human affairs in the twenty-first century.

While great power relations are clearly important, it's hard to know how to influence outcomes in this area for the better. We know there are complex dynamics at play, and we're wary of making things worse. Still, we hope to engage with experts in order to build up capacity and knowledge in this domain. Here are some opportunities we'd be excited to explore:

Space Governance

(The ideas in this section are more unusual and less developed than those in many other sections. We're less sure that we're being reasonable here, but we thought it would be interesting to put them forward anyway.)

The governance of space could soon become a topic of immense and pressing importance. But relatively few people are thinking about space governance in depth. We hope that will change.

In addition to issues such as the governance of planets like Mars, we believe it's important to think through plausible scenarios that may seem more exotic. Here's a scenario that we think could plausibly happen this century:

We'd like to see rigorous work on confirming or refuting the plausibility of such a scenario. More generally, we hope to see thoughtful people carefully explore the different ways that space might be settled.

We'd also like to see careful thinking about long-term space governance. What rules will determine which countries, if any, govern which planets and solar systems? Will there be a constitutional convention-like process to determine this? What should the constitution say? How do we get from our existing legal and political framework to the governance system we will ultimately need?

Empowering Exceptional People

We're excited about projects that seek to identify outstandingly talented young people, especially those without access to educational opportunity, such as children living in extreme poverty. With scholarships and support, these young people could help solve humanity's most pressing problems.

We also want to enable outstanding professionals, academics, and students to step away from their current line of work or research in order to work on issues with special relevance to protecting the long-term future.
We're excited to explore fellowships, prizes, teaching buyouts, increased compensation, start-up incubators, and any other promising mechanism for achieving this goal.

Finally, we're excited about incentivizing and supporting people to work on research projects of enormous scope and ambition, like Superintelligence, Engines of Creation, Animal Liberation, The Spirit of the Laws, and A Vindication of the Rights of Woman.

Effective Altruism

We're very excited about growing the effective altruism community. And we want to critically examine its ideas, to help ensure that effective altruists cultivate open-mindedness, resist insular thinking, and avoid homogeneity.

We think that expanding the effective altruism community will get more people working on critical problems for securing humanity's future. That includes the problems listed on this page, as well as problems we haven't discussed here or aren't yet aware of.

By thoughtfully expanding the effective altruism community, we can also improve the quality of its decision-making by drawing upon a wider range of experiences and points of view. As the EA community grows, it needs to strengthen its efforts to increase its racial, gender, geographical, and educational diversity. We want to find the most effective strategies for doing so. And we want to continue building out EA student groups at universities, especially in parts of the US and the world that are currently underrepresented in EA.

We want to see effective altruism grow, but we also think it's extremely important that effective altruists don't fall prey to insularity. That's why the Future Fund is eager to receive proposals for funding and collaboration from anyone who wants to protect the long-term future, regardless of whether they identify as effective altruists, longtermists, or anything else.

Research That Can Help Us Improve

There's so much we just don't know. Many of the ideas discussed on this page are relatively new, so it's likely there are crucial considerations and causes that we are totally missing. Even when we're focusing on the right issues, our reasoning about them could undoubtedly improve.

We think that most people interested in protecting the long-term future should focus on making concrete, practical contributions. Still, we think that some people could make an enormous contribution via research. We are especially interested in research that might help us improve our own decision-making, and the decision-making of other organizations and individuals focused on improving the long-term future. In particular: