The BERI Support Fund (BSF) is an independent 501(c)(3) public charity, and a supporting organization of BERI. Once BSF is fully operational, we will make grants to charities in support of BERI's mission: to improve human civilization's long-term prospects for survival and flourishing. BSF's Board of Directors will be focused entirely on overseeing grant-making, and will also be bound to BERI's mission as overseen by BERI's Board of Directors. Thus, BSF's formation will create more oversight of grant-making than BERI alone could provide. By creating more oversight for funds, we hope to attract more funding for existential risk reduction efforts.
Chair: Andrew Critch (email) is currently a full-time research scientist in the EECS department at UC Berkeley, at Stuart Russell's Center for Human-Compatible AI. He earned his PhD in mathematics at UC Berkeley studying applications of algebraic geometry to machine learning models. During that time, he cofounded the Center for Applied Rationality and SPARC. Andrew has been offered university faculty positions in mathematics, mathematical biosciences, and philosophy, and has worked as an algorithmic stock trader at Jane Street Capital's New York City office and as a research fellow at the Machine Intelligence Research Institute. His current research interests include logical uncertainty, open-source game theory, and avoiding arms race dynamics between nations and companies in AI development. Andrew first became interested in existential risk as a child, from reading popular writings by Stephen Hawking about cosmology and the future of humanity's ability to understand science. He decided to focus on existential risk professionally when he realized, as a result of meeting Anna Salamon in 2010, that he wouldn't be alone in making x-risk reduction his primary career ambition.
Treasurer: Kenzi Amodei graduated from Stanford University with a B.A. in Drama and worked throughout the Bay Area as a professional stage manager before going back to receive a B.S. in Biology from the University of Oregon. She was accepted to Tufts University School of Medicine in 2013, an offer she declined in order to work at the Center for Applied Rationality as a curriculum developer and their Director of Operations; this is when she first became interested in existential risk. At CFAR, her curricular work included developing and teaching courses on navigating disagreements, expected value estimates, and accurate implicit forecasting techniques. She also worked as a lead curriculum developer for CFAR's first programs targeted at potential x-risk researchers, a role she took on after reading Nick Bostrom's Superintelligence made her more seriously interested in working on x-risk. Kenzi has presented rationality material at over 30 of CFAR's immersive workshops, as well as at the Summer Program on Applied Rationality and Cognition, at conferences such as SkeptiCal, Effective Altruism Global, and SSA West, and at tech companies like Heroku and Asana.
Secretary: Raymond Arnold is a lead developer at Lesserwrong.com, a discussion platform aimed at refining the art of rationality and applying it to important topics. His day-to-day work includes designing the site so that it fosters important intellectual progress, with a special focus on existential risk. Prior to Lesserwrong, he spent 5 years working on in-person rationality community development, building a culture that helps people think clearly and gain the skills necessary to tackle important problems. In 2012, he developed the Secular Solstice, a holiday festival celebrating science and human achievement, now held each year in several cities across the world. Raymond studied computer animation at Full Sail University before pivoting into web development. He became convinced of the importance of existential risk in 2011, while reading about the subject on Lesswrong.com. After seeing many advances in current machine learning technology, he decided to get more proactively involved in 2016.