BERI wants to help CS students and researchers at UC Berkeley who care about existential risk, by hiring a team of engineers to collaborate with them on their work. It might not be obvious why outside help is needed for this, and the purpose of this post is to explain why.

Imagine being an academic

Just for a few minutes, imagine you’re a young computer science researcher who understands how fragile the world is. You can see the unfolding impact of new technologies, both good and bad, and you understand that simple actions today can have a huge impact on the future. And yet, you notice that almost no one’s job description tells them to think at length about the impact of their work in decades or centuries to come, much less to translate that reasoning into action. Instead, you observe most people and institutions following short-sighted gradients, where a bit more effort leads to a bit more reward.

You want to be different. You want the code you write now to have an impact 50 years from now, even if you’ll have to wait that long to see it. But to further your career and gain the influence you feel you need to make a difference, you have to stay focused on writing your next paper. Papers, not Python packages, are the fundamental unit of success for a young academic. So, when you finish publishing the findings of your latest project, and you stop to think about how to make your code more usable to colleagues just outside your circle of potential coauthors, you have to pause and ask: am I taking a career hit for doing this? Am I going to miss out on a professorship that would empower me to steer the discourse of the intellectual elite toward a safer future, because I spent too much time open-sourcing my code and not enough time writing and presenting my findings?

How BERI plans to help

By bringing in resources from outside of academia, we hope to form part of an incentive gradient that causes more organizational support and human capital to flow swiftly toward people and institutions who care and think deeply about the long-term future, and in particular, existential risk from artificial intelligence (AI x-risk).

As part of this plan, we are seeking to hire a machine learning engineer (job posting) who can seed the development of an engineering team that works with young AI researchers at UC Berkeley who have career-scale ambitions to reduce AI x-risk. Our long-term goal is for BERI’s engineers to work alongside the Center for Human-Compatible Artificial Intelligence (CHAI) and perhaps other academic groups concerned with AI x-risk (news article). We want to ensure that students and faculty focused on x-risk reduction have access to the latest machine learning tools and development environments, and to help them to design and carry out experiments in those frameworks.

There are three reasons we believe an outside engineering effort is needed to achieve this goal:

  1. Academic incentives lead to a “tragedy of the code-sharing commons” effect. This is described in the vignette above about the student at Berkeley. The primary unit of measurement for the success of a young researcher in computer science is the number and quality of the papers they write, rather than the shareability and usability of their code. Thus, once their papers are published, taking time away from writing papers to make their code usable to peers can mean taking a career hit, unless they know sharing the code will mean they get to co-author more papers. BERI wants to alleviate this stress and sacrifice for these young researchers so they can stay focused on their career development as experimenters and presenters. As they do so, our engineers can simultaneously follow a career path focused on writing quality code, with the happiness of users—researchers—being their main success metric. This way, our engineers’ incentives will be naturally aligned with abating the tragedy of the code-sharing commons, at least among researchers focused on mitigating x-risk.

  2. Hiring our own team allows us to offer reasonable wages to x-risk-motivated engineers. Universities are not able to easily pay high salaries for engineers. By contrast, machine learning engineers at for-profit companies like Google can easily earn over $200k/year by joining teams like Google Brain. BERI hopes to pay salaries somewhere in between. We know many engineers who would like to work more closely on projects to reduce existential risk, and BERI would like to offer them satisfying career paths to do so. In the long run, we would like to match the salary precedent set by other well-funded research non-profits, such as OpenAI, while working in close proximity with universities like UC Berkeley to nurture the next generation of scientists.

  3. Our mission will keep the engineering efforts targeted at x-risk. BERI’s mission is “to improve human civilization’s long-term prospects for survival and flourishing.” Our commitment to this mission is legally binding, and our Board of Directors is passionate about realizing it. Thus, by building an engineering team within BERI, we hope to maintain the team’s mission-alignment more securely than we would by operating within a larger institution.

Starting humbly and failing gracefully

While we think it’s important for our engineering team to have a long-term ambition to grow and increase our output as time goes on, we also think it’s important to begin this project with a degree of humility about the culture we need to develop. BERI’s engineering team will need to integrate well with the lives and workflows of students and professors at UC Berkeley, which might take some time to figure out. And, while we are dedicated to our mission, we must avoid making overly strong assumptions about what our collaborators will ultimately find most helpful.

So, we are looking for candidates who are easy to work with and ready to adjust to the needs of the researchers BERI chooses to work with, and who don’t mind growing our team slowly over the course of years, one engineer at a time.

Additionally, if it turns out that developing the culture necessary for BERI’s engineering team to grow usefully is just too difficult, we’ll want to make sure that BERI’s net impact on the careers of its chosen collaborators will be a good one. In particular, our approach will not be to take risks that have a chance of growing the team at the expense of wasting our collaborators’ time. Wherever possible, BERI will try to grow by a series of Pareto improvements that will leave our collaborators happy if we decide to stop expanding.

Interested?

There’s no time like the present! If you think you’d like to join our team, please apply for our Machine Learning Engineer position (http://existence.org/work/ml-engineering). Or, if machine learning isn’t your area of interest, please pass it on to your friends. BERI is a very young organization, and we can use all the help we can get in recruiting for this role.