{"id":28,"date":"2022-01-25T20:51:03","date_gmt":"2022-01-25T20:51:03","guid":{"rendered":"https:\/\/ftxfuturefund.org\/?page_id=28"},"modified":"2022-09-23T10:15:14","modified_gmt":"2022-09-23T18:15:14","slug":"area-of-interest","status":"publish","type":"page","link":"https:\/\/ftxfuturefund.org\/area-of-interest\/","title":{"rendered":"Areas of Interest"},"content":{"rendered":"\n

<p><strong>This page describes some areas that are (we think) especially important for making the future go well.</strong></p>

<p>We’d be excited to fund proposals that might make progress on these areas. For some concrete examples, see our Project Ideas page. We’re also very open to funding work on ideas and areas that we’ve overlooked.</p>

<p>We know this page is incomplete, uncertain, underdeveloped, and often quite speculative. We could be getting a lot of important things wrong. So we know we might look quite foolish ten years from now—or much sooner, if someone points out our errors on Twitter. That’s okay with us, because we think we’ll learn more and learn faster this way. We hope that others will help us identify our mistakes and highlight ideas we’ve overlooked.</p>

<p>These aren’t the only areas the FTX Foundation cares about. For our work on other issues, please see the FTX Foundation website.</p>

<h2 id="ai">Artificial Intelligence</h2>

<p>We think artificial intelligence (AI) is the development most likely to dramatically alter the trajectory of humanity this century. AI is already posing serious challenges: transparency, interpretability, algorithmic bias, and robustness, to name just a few. Before too long, advanced AI could automate the process of scientific and technological discovery, leading to economic growth rates well over 10% per year (see Aghion et al. 2017, this post, and Davidson 2021).</p>

<p>As a result, our world could soon look radically different. With the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death, and debilitating disease. But two formidable new problems for humanity could also arise:</p>

<ol>
  <li><strong>Loss of control to AI systems</strong><br>
  Advanced AI systems might acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future.</li>
  <li><strong>Concentration of power</strong><br>
  Actors with an edge in advanced AI technology could acquire massive power and influence; if they misuse this technology, they could inflict lasting damage on humanity’s long-term future.</li>
</ol>

<p>For more on these problems, we recommend Holden Karnofsky’s “Most Important Century,” Nick Bostrom’s <em>Superintelligence</em>, and Joseph Carlsmith’s “Is power-seeking AI an existential risk?”.</p>

<p>We’re not sure what to do about these problems. In general, though, we’d be excited about anything that addresses one of these problems without making the other one worse. Possibilities include:</p>