{"id":944,"date":"2022-09-23T08:07:09","date_gmt":"2022-09-23T16:07:09","guid":{"rendered":"https:\/\/ftxfuturefund.org\/?p=944"},"modified":"2022-09-30T16:40:04","modified_gmt":"2022-10-01T00:40:04","slug":"announcing-the-future-funds-ai-worldview-prize","status":"publish","type":"post","link":"https:\/\/ftxfuturefund.org\/announcing-the-future-funds-ai-worldview-prize\/","title":{"rendered":"Announcing the Future Fund’s AI Worldview Prize"},"content":{"rendered":"\n
<p>Today we are announcing a competition with prizes ranging from $15k to $1.5M for work that informs the Future Fund’s fundamental assumptions about the future of AI, or is informative to a panel of superforecaster judges selected by Good Judgment Inc. These prizes will be open for three months\u2014until Dec 23\u2014after which we may change or discontinue them at our discretion. We have two reasons for launching these prizes.<\/p>\n\n\n\n <p>First, we hope to expose our assumptions about the future of AI to intense external scrutiny and improve them. We think artificial intelligence (AI) is the development most likely to dramatically alter the trajectory of humanity this century, and it is consequently one of our top funding priorities. Yet our philanthropic interest in AI depends on a number of very difficult judgment calls, which we think have been inadequately scrutinized by others.<\/p>\n\n\n\n <p>As a result, we think it’s really possible that:<\/p>\n\n\n\n <p>If any of those three options is right\u2014and we strongly suspect at least one of them is\u2014we want to learn about it as quickly as possible, because it would change how we allocate hundreds of millions of dollars (or more) and help us better serve our mission of improving humanity’s long-term prospects.<\/p>\n\n\n\n <p>Second, we are aiming to run bold and decisive tests of prize-based philanthropy, as part of our more general aim of testing highly scalable approaches to funding. We think these prizes contribute to that work. If these prizes work, it will be a large update in favor of this approach being capable of surfacing valuable knowledge that could affect our prioritization. 
If they don’t work, that could be an update against this approach surfacing such knowledge (depending on how it plays out).<\/p>\n\n\n\n <p>The rest of this post will:<\/p>\n\n\n\n <p>On our areas of interest page, we introduce our core concerns about AI as follows:<\/p>\n\n\n\n <p><em>We think artificial intelligence (AI) is the development most likely to dramatically alter the trajectory of humanity this century. AI is already posing serious challenges: transparency, interpretability, algorithmic bias, and robustness, to name just a few. Before too long, advanced AI could automate the process of scientific and technological discovery, leading to economic growth rates well over 10% per year (see Aghion et al 2017, this post, and Davidson 2021).<\/em><\/p>\n\n\n\n <p><em>As a result, our world could soon look radically different. With the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death, and debilitating disease. But two formidable new problems for humanity could also arise:<\/em><\/p>\n\n\n\n <ol>\n<li><em>Advanced AI systems might acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future.<\/em><\/li>\n<li><em>Actors with an edge in advanced AI technology could acquire massive power and influence; if they misuse this technology, they could inflict lasting damage on humanity\u2019s long-term future.<\/em><\/li>\n<\/ol>\n\n\n\n <p><em>For more on these problems, we recommend Holden Karnofsky\u2019s \u201cMost Important Century,\u201d Nick Bostrom\u2019s Superintelligence, and Joseph Carlsmith\u2019s \u201cIs power-seeking AI an existential risk?\u201d<\/em><\/p>\n\n\n\n <p>Here is a table identifying various questions about these scenarios that we believe are central, our current position on each question (for the sake of concreteness), and alternative positions that would significantly alter the Future Fund’s thinking about the future of AI: <sup>1,2<\/sup><\/p>\n\n\n\n\n<h2>Prize conditions<\/h2>\n\n\n\n