Our Grants, Investments, and Regrants
Dylan Iskandar
This regrant will support a summer research internship in Barnard’s Programming Languages Lab at Columbia University under Prof. Santolucito, as well as equipment and conference attendance. The research project aims to model and align program synthesis models in the setting of computational creativity.
Our World in Data
We recommended a grant over three years to support OWID’s work producing high-quality data and analysis of global trends, such as the rise in living standards and the effects of COVID-19. This grant will specifically support work tracking trends that are relevant to humanity’s long-term prospects.
Global Priorities Institute
We recommended a grant to allow GPI to hire predocs and postdocs in economics, philosophy, and psychology and to publish analysis on how to do the most good.
Sherry Glied
We recommended a grant to run two pilot studies to develop a platform that will allow policymakers to rapidly get reliable estimates of important quantitative parameters during times of crisis, such as a pandemic, to inform policy measures. The pilots will focus on evaluating the cost-benefit profile of non-pharmaceutical interventions during pandemics.
Long Term Future Fund
We recommended funding to support their longtermist grantmaking.
LEEP
We recommended a grant to an organization aiming to reduce childhood lead exposure and eliminate childhood lead poisoning worldwide.
Differential Projections
We recommended a grant to support the creation of a small research organization that will use probabilistic models to forecast important questions about the future, particularly those involving transformative artificial intelligence.
ALLFED
We recommended a grant to create a nuclear winter food scale-up plan that governments can adopt.
Longview Philanthropy
We recommended a grant to support Longview Philanthropy in creating a longtermist coworking office in London.
Constellation
We recommended a grant to support 18 months of operations for a longtermist coworking space in Berkeley.
Alignment Research Center
We recommended a grant to support researcher salaries, contractors to evaluate AI capabilities, projects like the ELK prizes and workshops, and administrative expenses.
Charbel-Raphaël Segerie
This regrant will support six months of salary for Charbel-Raphaël Segerie to work on AI safety mentoring and study.
Sharon Hewitt Rawlette
This regrant will fund the writing of a book about classical utilitarianism and its foundations and implications.
Machine Learning Moral Uncertainty Competition
This regrant will fund the prizes for a competition on using machine learning models to detect when human values are in conflict and to estimate moral uncertainty.
The Inside View
This regrant will support Michael Trazzi in making 20 episodes of "The Inside View," his podcast and YouTube channel about research on AI and its risks. Funds will compensate Michael for his time, equipment, and marketing.
Noa Nabeshima
This regrant will support three months of independent AI interpretability research.
University of California, Santa Cruz, Cihang Xie
This regrant will support Professor Cihang Xie's research on adversarial robustness and honest AI.
Space Future Initiative
This regrant will fund students at Harvard and Cornell to establish student groups to learn about and research longtermist space governance, covering events, stipends, and prizes.
University of Pennsylvania, Professor Geoff Goodwin
This regrant will support three years of work on the Global Priorities Research psychology research agenda, covering a postdoc, conference attendance, research assistants, and payments to experimental participants, among other expenses.
The International Commission on Non-Ionizing Radiation Protection, Dr. Rodney Croft
This regrant supports work investigating far-UVC, a band of ultraviolet light that could safely inactivate pathogens in occupied rooms. It funds the International Commission on Non-Ionizing Radiation Protection, a nonprofit group that sets international exposure guidelines, to 1) determine what additional information is needed to re-evaluate the guidelines for 180–230 nm light, and 2) specify experiments that would provide that knowledge, including methodological details.
Safety Philosophers Program
This regrant will cover the costs of running a seven-month AI safety research fellowship for philosophy academics. The funds will cover travel, a housing stipend, office space, guest speaker honoraria, and the hiring of researchers, advisors, a project manager, and organizers for the seven-month period.
The Treacherous Turn Game
This regrant will support the design and creation of a tabletop roleplaying game that simulates potential risks from advanced AI.
David Chapman
This regrant will support an independent researcher to investigate the magnitude of AI risk and the most useful actions for addressing it, how to improve Effective Altruist decision-making according to "metarational" principles, and potentially valuable projects in metascience.
Evan R Murphy
This regrant will support six months of independent research on interpretability and other AI safety topics.
University of Chicago, Professor Chenhao Tan
This regrant will support the hiring of a PhD student or postdoc to help orient Professor Tan’s lab toward AI safety.
Princeton University, Professor Adji Bousso Dieng
This regrant will fund Professor Dieng’s research agenda on generative models that are robust to non-randomly missing data, targeting use cases in healthcare, materials science, and climate.
Ben Marcovitz
This regrant will support leadership coaching for leaders of Effective Altruist projects.
AI Governance Summer Program
This regrant will support ten fellows to work on mentored AI governance projects over the summer.
Jenny Xiao
This regrant will support a teaching buyout that will allow Jenny to spend 20 hours a week on AI governance research.
Professor David Bau
This regrant will support several research directions in interpretability for 2-3 years, including: empirical evaluation of knowledge and guessing mechanisms in large language models, clarifying large language models’ ability to be aware of and control use of internal knowledge, a theory for defining and enumerating knowledge in large language models, and building systems that enable human users to tailor a model’s composition of its internal knowledge.
Yan Zhang
This regrant will support a teaching buyout to allow Yan to focus on forecasting AGI scenarios, as well as research on beneficial alignment and deployment.
Konstantin Pilz
This regrant will support independent study and research on AI safety and alignment alongside research assistant work for Lennart Heim.
University of Maryland, Baltimore County, Professor Tim Oates
This regrant will fund Professor Tim Oates and a graduate student to work on AI alignment research.
Sandra Malagon
This regrant will support a year of coordinating several EA mentoring and community building activities with South American Spanish speakers.
Adam Jermyn
This regrant will support four months of theoretical AI safety research under Evan Hubinger’s mentorship.
Alex Mennen
This regrant will support six months of AI safety work.
University of Western Ontario, Professor Joshua Pearce
This regrant will fund Professor Pearce’s graduate students to conduct research projects on resilient food, specifically on manufacturing rope for seaweed farming and on making leaf protein concentrate.
Existential Risk NeurIPS prize
This regrant will fund prizes for papers at a NeurIPS workshop: $50k for the papers with the best discussion of AI x-risk and $50k for the best overall papers.
Global Challenges Project
This regrant will support operations and logistics for the Harvard-MIT Existential Risk Summit held in Berkeley at the end of July. This four-day retreat will bring together incoming Harvard and MIT first years with Berkeley EAs and members of the Harvard/MIT EA communities.
Hear This Idea Regranting Program
This regrant will support a small grants program managed and advertised via Hear This Idea which will give funding to people to launch podcasts or audio projects focused on ideas that matter.
Adapt Research
This regrant will support the development of a first draft of a resilience plan for nuclear winter and extreme pandemics.
Javier López Corpas and Cristina Rodríguez Doblado
This regrant will support a year of work translating popular content on Effective Altruism into Spanish.
Future Forum
This regrant will support general expenses for running the Future Forum conference.
Tom Green
This regrant will support six months of research on technology policy case studies of key historical figures who influenced technology development.
Haoxing Du
This regrant will support Haoxing Du’s career transition from physics to AI safety.
Anton Howes
This regrant will support historical research contracting work.
Cornell University, Professor Emma Pierson
This regrant will fund Professor Pierson to expand her program of research on fair and robustly aligned ML systems, with a focus on methods that will remain relevant as capabilities improve.
Neel Nanda
This regrant will provide a year of salary, funding to hire a research assistant, and compute for an independent AI safety researcher.
Flourishing Humanity Corporation
This regrant is an investment in two mental health, productivity, and personal growth applications.
Center for Space Governance
This regrant will provide funding for six people to spend three months producing the Center’s research agenda and creating and registering the organization.
David Mears
This regrant will support a career transition, allowing David to pivot to working as a frontend developer for Effective Altruist organizations, becoming a 'language model psychologist,' or participating in charity entrepreneurship.
Sam Glendenning
This regrant will allow Sam to manage a group of students starting a 'London Existential Risk Initiative' (LERI). It provides funding for that student group and will in particular support three UCL student organizers.
High Impact Medicine
This regrant will support High Impact Medicine for 12 months so that they can continue, and scale up, their activities.
Bertha von Suttner-Studienwerk
This regrant will fund BvS, a newly established humanist student scholarship foundation, to take on 10 more fellows.
SoGive
This regrant will support four months of their work on in-depth analyses of charities.
Joseph Bloom
This regrant will support Joseph Bloom to spend 3-6 months transitioning his career to EA/Longtermist work, including conducting research on biosecurity.
Montaigne Institute
This regrant will allow the Montaigne Institute to hire a full-time staff member to work on AI policy.
X-Axis
This regrant will provide one year of funding to found and run a broad-interest, high-end website featuring high-quality writing and appealing visualizations, focused on promoting longer-term thinking.
José Alonso Flores Coronado
This regrant will support six months of salary for work on biosecurity projects, partly mentored by Tessa Alexanian, Officer of Safety and Security at iGEM.
Aleksei Paletskikh
This regrant will support six months of work on distilling AI alignment research.
Cornell University, Tom Gilbert
This regrant will fund students and postdocs to attend a conference on Human Centered AI at UC Berkeley and work on a resulting report.
University of Toronto, Social Science Prediction Platform
This regrant will fund one year of operations for the Social Science Prediction Platform, a platform that allows economists and other social science academics to use forecasting in their research.
The Royal Statistical Society
This regrant will support a course to teach civil servants in the UK to read, process, and use scientific research in their policy work. The course will focus on examples from emergency scenarios including pandemics and other existential risks.
Aligned AI
This regrant is an investment in Aligned AI, an AI safety organization seeking bridge funding.
Gryphon Scientific
This regrant will support a risk-benefit analysis of virology research.
Fund for Alignment Research
This regrant will fund the operations of the Fund for Alignment Research, which primarily provides engineering support to AI alignment researchers.
Redwood Research
We recommended a grant to support Redwood Research, an AI safety organization, and MLAB, a Redwood project that teaches machine learning to individuals interested in AI safety, for 18 months. The grant will mostly support salaries, primarily for their researchers and technical interns, with the rest going to compute.
Pranav Gade
This regrant will support three years of work on a cybersecurity project for AI research, paid out conditional on progress reports.
Topos Institute
This regrant will support the second “Finding the Right Abstractions” workshop, likely to take place around January 2023, which connects applied category theory and AI alignment communities in an effort to understand self-selecting systems.
Peter Hartree
This regrant will support the recording of audio narrations of important philosophical papers. The recordings will be packaged as a podcast feed with its own website, brand, and YouTube channel.
Simeon Campos
This regrant will support Simeon Campos to spend a fraction of his year working on AI safety projects.
Collective Intelligence Project
This regrant will support two researchers, Matthew Prewitt and Divya Siddarth, to create a research agenda on “collective intelligence” aimed specifically at mitigating global catastrophic and existential risks, and to run some trial projects based on it.
Chatham House
This regrant will support ongoing research and convening with representatives from the UK government, academia and think tanks, and the AI industry throughout 2022-23. These activities will be supplemented by written outputs, and the program will focus on UK AI policy and risk reduction, pandemic prevention policy, defense policy, cybersecurity policy, and the UK’s navigation of great power technology relations.
Elizabeth Van Nostrand
This regrant will support continued mentorship of a medical researcher and work generating novel medical research literature reviews on a variety of topics, including long COVID.
Akash Wasil
This regrant will support five months of building and supporting the longtermist and AI alignment communities.
Condor Camp
This regrant will provide additional funding for Condor Camp, a program that brings top Brazilian university students to Peru for a retreat to learn about Effective Altruism.
Korbinian Kettnaker
This regrant will support a promising student who is working to bridge algorithmic information theory with epistemology and AGI research for his PhD at the University of Cambridge.
Global Priorities Encyclopedia
This regrant will support a six-month trial of setting up the Global Priorities Encyclopedia (GPE), a reference work on global priorities research featuring articles by academics and researchers from the effective altruism community. Funding will pay the project lead and web developers and cover article contributions.
Gryphon Scientific
This regrant will fund Gryphon Scientific to analyze options for and hold a workshop on protecting potential pandemic pathogen genomes.
AI Safety Community Building Hub
This regrant supports the creation and initial hiring for an organization that is centered around outreach to machine learning professionals about AI safety and alignment.
VIVID
This regrant is an investment in VIVID, a customization-based mindset improvement app that aims to improve people’s self-reflection and productivity.
HR Luna Park
This regrant will support an experiment where a recruiting agency will headhunt ML Engineers for AI safety organizations.
School of Thinking
This regrant will support a global media outreach project to create high-quality video and social media content about rationalism, longtermism, and Effective Altruism.
Legal Services Planning Grant
This regrant will support six months of research on topics including how legal services can be effectively provided to the Effective Altruism community, materials to be included in a legal services handbook for EA organizations, novel legal questions particular to the EA community that might benefit from further research initiatives, and ways to create an effective EA professional network for practicing lawyers.
Manifold Markets
This regrant will support Manifold Markets in building a play-money prediction market platform. The platform is also experimenting with impact certificates and charity prediction markets.
David Xu
Trojan Detection Challenge at NeurIPS 2022
This regrant will support prizes for a trojan detection competition at NeurIPS, which involves identifying whether a deep neural network will suddenly change behavior if certain unknown conditions are met.
Effective Altruism Office Zurich
Akash Wasil
Fiona Pollack
Peter McLaughlin
Dwarkesh Patel
ALERT
This regrant will support the creation of the Active Longtermist Emergency Response Team, an organization to rapidly respond to emerging global crises like COVID-19.
EA Critiques and Red Teaming Prize
This regrant will support prize money for a writing contest for critically engaging with theory or work in Effective Altruism. The goal of the contest is to produce thoughtful, action-oriented critiques.
Federation for American Scientists
This regrant will support a researcher and research assistant to work on high-skill immigration and AI policy at FAS for three years.
Ought
This regrant will support Ought’s work building Elicit, a language-model based research assistant. This work contributes to research on reducing alignment risk through scaling human supervision via process-based systems.
ML Safety Scholars Program
This regrant will fund a summer program for up to 100 students to spend 9 weeks studying machine learning, deep learning, and technical topics in safety.
AntiEntropy
This regrant will support a project to create and house operations-related resources and guidance for EA-aligned organizations.
Everett Smith
Olle Häggström, Chalmers University of Technology
Essay Contest on Existential Risk in US Cost Benefit Analysis
This regrant will support an essay contest on “Accounting for Existential Risks in US Cost-Benefit Analysis,” with the aim of contributing to the revision of OMB Circular A-4, a document that guides US government cost-benefit analysis. The Legal Priorities Project is administering the contest.
MineRL BASALT competition at NeurIPS
This regrant will support a NeurIPS competition applying human feedback in a non-language-model setting, specifically pretrained models in Minecraft. The grant will be administered by the Berkeley Existential Risk Initiative.
QURI
This regrant will support QURI to develop a programming language called "Squiggle" as a tool for probabilistic estimation. The hope is that it will be useful for forecasting and Fermi estimates.
Andi Peng
CSIS
Aaron Scher
Kris Shrishak
AI Impacts
This regrant will support rerunning the highly cited 2016 survey “When Will AI Exceed Human Performance? Evidence from AI Experts,” along with analysis and publication of the results.
Chinmay Ingalagavi
Apart Research
This regrant will support the creation of an AI safety organization that will build a platform to share AI safety research ideas and educational materials, connect people working on AI safety, and bring new people into the field.
Tereza Flidrova
J. Peter Scoblic
AI Risk Public Materials Competition
Moncef Slaoui
This regrant will fund the writing of Slaoui's memoir, in particular covering his experience directing Operation Warp Speed.
Artificial Intelligence Summer Residency Program
Public Editor
This regrant will support a project that uses a combination of human feedback and machine learning to label misinformation and reasoning errors in popular news articles.
The Good Ancestors Project
This regrant will support the creation of The Good Ancestors Project, an Australia-based organization to host research and community building on topics relevant to making the long-term future go well.
Thomas Kwa
Joshua Greene, Harvard University
Braden Leach
Adversarial Robustness Prizes at ECCV
This regrant will support three prizes for the best papers on adversarial robustness research at a workshop at ECCV, the main fall computer vision conference. Winning papers will be selected for having greater relevance to long-term threat models than typical adversarial robustness papers.
Confido Institute
The Confido Institute is working on developing a user-friendly interactive app, Confido, for making forecasts and communicating beliefs and uncertainty within groups and organizations. They are also building interactive educational programs about forecasting and working with uncertainty based around this app.
Supporting Agent Foundations AI safety research at ALTER
This regrant will support 1.5-3 years of salary for a mathematics researcher to work with Vanessa Kosoy on the learning-theoretic AI safety agenda.
Modeling Transformative AI Risks (Aryeh Englander, Sammy Martin, Analytica Consulting)
This regrant will support two AI researchers, one or two additional assistants, and a consulting firm to continue to build out and fully implement the quantitative model for how to understand risks and interventions around AI safety, expanding on their earlier research on “Modeling Transformative AI Risk.”
Impact Markets
This regrant will support the creation of an “impact market.” The hope is to improve charity fundraising by allowing profit-motivated investors to earn returns by investing in charitable projects that are eventually deemed impactful.
AI Alignment Prize on Inverse Scaling
Swift Centre for Applied Forecasting
This regrant will support the creation of the Swift Centre for Applied Forecasting, including salary for a director and a team of expert forecasters. They will forecast trends from Our World in Data charts, as well as other topics related to ensuring the long term future goes well, with a particular focus on explaining the “why” of forecast estimates.
Lawrence Newport
Aidan O’Gara
Legal Priorities Project
We recommended a grant to support the Legal Priorities Project’s ongoing research and outreach activities. This will allow LPP to pay two new hires and to put on a summer institute for non-US law students in Oxford.
Oded Galor, Brown University
We recommended a grant to support two years of academic research on long-term economic growth.
The Atlas Fellowship
We recommended a grant to support scholarships for talented and promising high school students to use toward educational opportunities and enrollment in a summer program.
Sherlock Biosciences
We recommended an investment to support the development of universal CRISPR-based diagnostics, including paper-based diagnostics that can be used in developing-country settings without electricity.
Rethink Priorities
SecureBio
We recommended a grant to support the hiring of several key staff for Dr. Kevin Esvelt’s pandemic prevention work. SecureBio is working to implement universal DNA synthesis screening, build a reliable early warning system, and coordinate the development of improved personal protective equipment and its delivery to essential workers when needed.
Lionel Levine, Cornell University
We recommended a grant to Cornell University to support Prof. Levine, as well as students and collaborators, to work on alignment theory research at the Cornell math department.
Claudia Shi, Academic CS Research at Columbia University
We recommended a grant to pay for research assistants over three years to support the work of a PhD student working on AI safety at Columbia University.
Institute for Progress
We recommended a grant to support the Institute’s research and policy engagement work on high-skilled immigration, biosecurity, and pandemic prevention.
Good Judgment Project
Peter Hrosso, Researcher
We recommended a grant to support a project aimed at training large language models to represent the probability distribution over question answers in a prediction market.
Michael Jacob, MITRE
We recommended a grant to support research that we hope will be used to help strengthen the bioweapons convention and to guide proactive actions to better secure facilities of concern or stop dangerous work being done there.
Michael Robkin
We recommended an investment to support the creation of Pretty Good PPE that is comfortable, storable, simple, and inexpensive. PGPPE aims to provide protection that is better than disposable masks and cheaper than both hazmat suits and N95s.
Legal Priorities Project
This grant will support one year of operating expenses and salaries at the Legal Priorities Project, a longtermist legal research and field-building organization.
AI Safety Camp
Anca Dragan, UC Berkeley
Association for Long Term Existence and Resilience
We recommended a grant to support ALTER, an academic research and advocacy organization, which hopes to investigate, demonstrate, and foster useful ways to improve the future in the short term, and to safeguard and improve the long-term trajectory of humanity. The organization's initial focus is on building bridges to academia via conferences and grants to find researchers who can focus on AI safety, and on policy for reducing biorisk.
Manifold Markets
We recommended a grant to support Manifold Markets in building a charity prediction market, as an experiment for enabling effective forecasters to direct altruistic donations.
Guoliang (Greg) Liu, Virginia Tech
Stimson South Asia Program
Prometheus Science Bowl
We recommended a grant to support a competition for work on Eliciting Latent Knowledge, an open problem in AI alignment, for talented high school and college students who are participating in Prometheus Science Bowl.
Maxwell Tabarrok
HelixNano
We recommended an investment to support Helix Nano running preclinical and Phase 1 trials of a pan-variant Covid-19 vaccine.
Giving What We Can
We recommended a grant to support Giving What We Can’s mission to create a world in which giving effectively and significantly is a cultural norm.
Gabriel Recchia, University of Cambridge
We recommended a grant to support research on fine-tuning GPT-3 models to identify flaws in other fine-tuned language models' arguments for the correctness of their outputs, and to test whether these critiques help nonexpert humans successfully judge such arguments.
Simon Institute for Longterm Governance
We recommended a grant to support SI’s policy work with the United Nations system on the prevention of existential risks to humanity.
Centre for Effective Altruism
We recommended a grant for general support for their activities, including running conferences, supporting student groups, and maintaining online resources.
Nonlinear
Konstantinos Konstantinidis
Apollo Academic Surveys
We recommended a grant to support Apollo’s work aggregating the views of academic experts in many different fields and making them freely available online.
AI Safety Support
Daniel Brown, University of Utah
Khalil Lab at Boston University
We recommended a grant to support the development of a cheap, scalable, and decentralized platform for the rapid generation of disease-neutralizing therapeutic antibodies.
Sergey Levine, UC Berkeley
Non-trivial Pursuits
We recommended a grant to support outreach that helps students learn about career options, develop their skills, and plan their careers to work on the world’s most pressing problems.
Rational Animations
Justin Mares, Biotech Researcher
We recommended a grant to support research on the feasibility of inactivating viruses via electromagnetic radiation.
Lightcone Infrastructure
We recommended a grant to support Lightcone’s ongoing projects including running the LessWrong forum, hosting conferences and events, and maintaining an office space for Effective Altruist organizations.
Confirm Solutions
High Impact Athletes
We recommended a grant to support HIA’s work encouraging professional athletes to donate more of their earnings to high-impact charities and causes, and to promote a culture of giving among their fans.
High Impact Professionals
Berkeley Existential Risk Initiative
We recommended a grant to support BERI in hiring a second core operations employee to contribute to BERI’s work supporting university research groups.
Nathan Young
Bear F. Braumoeller, Department of Political Science, The Ohio State University
We recommended a grant to support a postdoc and two research assistants for Professor Braumoeller’s MESO Lab for two years to carry out research on international orders and how they affect the probability of war.
Siddharth Hiregowdara, AI Safety Introductory Materials
Longview
We recommended a grant to support Longview’s independent grantmaking on global priorities research, nuclear weapons policy, and other longtermist issues.
Global Guessing
Brian Christian, Author
We recommended a grant to support the completion of a book which explores the nature of human values and the implications for aligning AI with human preferences.
Sage
EffiSciences
We recommended a grant to support EffiSciences’s work promoting high impact research on global priorities (e.g. AI safety, biosecurity, and climate change) among French students and academics, and building up a community of people willing to work on important topics.
Anysphere
We recommended an investment to build a communication platform that provably leaks zero metadata.
1Day Sooner
We recommended a grant to support 1DS’ work on pandemic preparedness, including advocacy for advance market purchase commitments, collaboration with the UK Pandemic Ethics Accelerator on challenge studies, and advocacy with 1Day Africa and the West African Health Organization for a global pandemic insurance fund.
Cecil Abungu, Centre for the Study of Existential Risk, University of Cambridge
Luke Hewitt
We recommended a grant to support the development and application of a minimum viable product of a data-driven approach to improving advocacy in areas of importance to societal well-being, such as immigration policy.
Dr. Emilio I. Alarcón, University of Ottawa Heart Institute & University of Ottawa
This grant will support a project to develop new plastic surfaces incorporating molecules that can be activated with low-energy visible light to eradicate bacteria and kill viruses continuously. If successful, this project will change how plastic surfaces are currently decontaminated.
Rajalakshmi Children Foundation
We recommended a grant to support the identification of children from under-resourced areas in India who excel in math, science, and technology, and to enable them to obtain high-quality online education by digitally connecting them with mentors and teachers.
Nikki Teran, Institute for Progress
James Lin
Ray Amjad
We recommended a grant to support the creation of a talent search organization that will help identify top young students around the world through a free-to-use website consisting of both challenging math and physics olympiad-style problems and discussion forums. Efforts will be particularly focused on India and China. These students will later be connected with support and programs so they can go on to work on the world's most pressing issues.
The Center for Election Science
We recommended a grant to support the development of statewide ballot initiatives to institute approval voting. Approval voting is a simple voting method reform that lets voters select as many candidates as they wish.
AVECRIS Pte. Ltd.
Council on Strategic Risks
Effective Ideas Blog Prize
Longview Philanthropy and the Future Fund recommended a grant to support prizes for outstanding writing which encourages a broader public conversation around effective altruism and longtermism.
Pathos Labs, PopShift
We recommended a grant to support Pathos Labs to produce a PopShift convening connecting experts on the future of technology and existential risks with television writers to inspire new ideas for their shows.