Collaborations
BERI’s primary work is with university-affiliated institutions and researchers focused on mitigating existential risk (x-risk). We collaborate with our partners in a variety of ways. Sometimes, we can quickly provide funding for an activity that is difficult to fund through university mechanisms. Other times, we provide a service that researchers need.
Our main collaborations are with the following groups:
- CHAI — the Center for Human Compatible AI at UC Berkeley
- CSER — the Centre for the Study of Existential Risk at the University of Cambridge
- SERI — the Stanford Existential Risks Initiative
- ALL — the Autonomous Learning Laboratory at UMass Amherst
- InterACT — the Interactive Autonomy and Collaborative Technologies Laboratory at UC Berkeley
- KASL — the Krueger AI Safety Lab at the University of Cambridge
- CLTC — the Center for Long-Term Cybersecurity at UC Berkeley
- MATS — the ML Alignment & Theory Scholars Program
- OCPL — the Oxford China Policy Lab
- The Safe Robotics Laboratory at Princeton University
In addition, we are currently exploring trial collaborations with the following groups and individuals:
- AAG — The Algorithmic Alignment Group at MIT
- ARAAC — The Australian Responsible Autonomous Agents Collective at Federation University Australia
- Ben Levinstein’s group at the University of Illinois
- Charles Whittaker’s group at Imperial College London
- DMIP — The Data Mining, Machine Intelligence and Inductive Programming Group at the Universitat Politècnica de València
- Duke Center on Risk at Duke University
- ILIAD — The Intelligent and Interactive Autonomous Systems Lab at Stanford University
- Joshua Lewis’s group at New York University
- The Periscope Lab at the University of Toronto
- Lionel Levine’s lab at Cornell University
- Lira Lab at the University of Southern California
- Oliver Crook, Research Fellow at the University of Oxford
- Oxford Control and Verification Group at the University of Oxford
- Oxford Martin AI Governance Initiative at the University of Oxford
- R. Daniel Bressler, PhD candidate at Columbia University
- Roger Grosse’s Lab at the University of Toronto
- Samwald Research Group at the Medical University of Vienna
- SpyLab — Secure and Private AI Lab at ETH Zurich
- Torr Vision Group at the University of Oxford
- The NYU Alignment Research Group led by Sam Bowman
- The Social Science Prediction Platform
If you work with any of these groups, BERI encourages you to request support!
How to request support
If you are interested in receiving support from BERI, the best way to explore the possibility is to email contact@existence.org. We will start a conversation about your plans and move forward in whatever way seems appropriate for the situation.