Activity Update - May
This post is part of our monthly activity update series. Each activity update briefly describes what we've been up to in the last month and, hopefully, helps to shed light on how our priorities shift over time. Note that some of our time each month will be spent investigating future project opportunities that may not be covered in these posts until our plans are more solid.
Accomplishments
In collaboration with the Center for Human-Compatible AI (CHAI), we...
Created a new position for an Office Assistant for CHAI. We also updated our Machine Learning Engineer job posting to reflect a more collaborative hiring process with CHAI.
Hired 2 interns for CHAI’s intern program. We are also providing relocation support for CHAI interns.
In collaboration with the Centre for the Study of Existential Risk (CSER), we hired contractors to...
Set up the All-Party Parliamentary Group (APPG) for Future Generations and host the APPG's launch event.
Provide research assistance to Sir Partha Dasgupta.
Provide research assistance to Dr. Simon Beard.
We renewed a contract with John Halstead to continue supporting Toby Ord’s upcoming book on x-risk.
We completed several significant internal activities:
We updated our “Grants to Organizations” investigation process, incorporating learnings from the Open Philanthropy Project’s grant investigation procedures (Open Phil’s public examples are here and here) and clarifying the steps for grant investigators.
We improved our systems for collaborating with partner organizations and for managing BERI contractors who primarily work with external collaborators.
We increased the BERI Support Fund’s capacity to accept a more diverse pool of funding.
Also, while both of these accomplishments occurred in early June, we would like to highlight them here as most of the work was completed in May:
We submitted the extensive financial documentation required by our auditors. (BERI has commissioned an outside auditor because we took in over $2 million in revenue last year; our audit is in June).
We announced BERI’s first Individual Grants program! We are hosting a Project Grants round; applicants who are interested in funding for an x-risk-reducing project can apply here.
Works in Progress
We are…
Currently trialling a full-time Project Manager, Josh Jacobson.
Investigating six organizations for possible grants from BERI.
In the process of hiring several contractors to work with the Future of Humanity Institute.
Looking to add to our contractor roster. In particular, we hope to find a web designer, web developer, and senior programmer.
Project Grants Program - Round 1
We are pleased to announce BERI’s first “Project Grants” round. This kicks off BERI’s larger Individual Grants Program, which we intend to write about in more detail later. Below, we explain the types of projects that qualify and how to apply for this round.
Update November 2018: Round 1 Project Grants have been awarded. See the announcement here.
What is a “Project Grant”?
BERI Project Grants fund individuals to work on projects directly relevant to BERI's mission, in cases where those individuals need less oversight than a typical contractor would to ensure that their work is valuable to our mission.
Projects can vary widely in scope and cost. A few examples of projects that BERI could imagine funding include:
Writing a book, research paper, or other materials.
Providing operations support to existing x-risk focused organizations or individuals for a set amount of time.
Developing a high-quality educational video.
Forming a reading group for x-risk-motivated STEM students.
Organizing a conference, talk, or workshop.
We are open to any ideas you have, as long as you can explain how the project will contribute to improving human civilization’s long-term prospects for survival and flourishing.
In some cases, a grant will give an individual the freedom to reduce their time-commitment to their current role, and spend their remaining time investigating or working on existential risk reduction in a serious way, with funding from BERI. In other cases, individuals may not have prior time-commitments, but will be motivated to use their free time to work on an existential-risk reducing project.
If you are interested in applying for a BERI Project Grant, please fill out our online application form by June 30th.
Amounts
We expect that the budgets for most Project Grants will include both a) project expenses and b) a salary to compensate the applicant for their time.
Amounts awarded will vary depending on the scope of the project and the time commitment from the applicant. The maximum any applicant can receive for a full-time project is $300,000 per year (this includes both project expenses and salary), multiplied by their percentage time commitment. This means that, with a 20% time commitment to the grant (one 8-hour day per week), the maximum amount an applicant could receive would be 20%*$300,000 = $60,000 per year.
Annualized amounts over $150,000 are not available as salary to the grantee. For example, with a 40% time commitment to the grant, amounts over 40%*$150,000 = $60,000 must be used for project expenses.
Note: The grantee may elect for BERI to retain a portion of their grant, to be used by BERI for non-salary expenses in support of their project (so the grantee does not receive that portion as income). For example, the retained portion could be used to reimburse the grantee for tax-exempt project expenses following BERI’s Procurement Policy, or to pay contractors assisting with the project.
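The arithmetic above can be sketched in a few lines of code. The function name is our own, purely for illustration; integer percentages are used so the dollar amounts come out exact:

```python
def project_grant_limits(time_percent):
    """Per-year funding limits for a given time commitment.

    time_percent: whole-number percentage of full time committed
    to the grant (e.g. 20 for one 8-hour day per week).
    Returns (max total award, max portion payable as salary).
    """
    MAX_TOTAL_FULL_TIME = 300_000   # project expenses + salary, per year
    MAX_SALARY_FULL_TIME = 150_000  # annualized salary cap

    max_total = MAX_TOTAL_FULL_TIME * time_percent // 100
    max_salary = MAX_SALARY_FULL_TIME * time_percent // 100
    return max_total, max_salary

# A 20% commitment caps the total award at $60,000/year,
# at most $30,000 of which may be salary; a 40% commitment
# caps the total at $120,000/year, with a $60,000 salary cap.
total_20, salary_20 = project_grant_limits(20)
total_40, salary_40 = project_grant_limits(40)
```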
Eligibility
Applicants from any country are eligible. Please note that the application process will be conducted in English. Given this, we only expect to provide grants to candidates with working English proficiency.
Time Commitment Restrictions
Individuals are not eligible to receive a BERI Project Grant if they are unable to negotiate an expected combined time commitment of ≤ 50 hours/week across BERI and their other employer(s).
If offered a grant that would leave the applicant unable to fulfill their current time commitment to their current employer(s), the applicant must be willing to openly negotiate a reduced time commitment with those employer(s). In this case, we request a written notice from the employer(s) agreeing to the reduced time commitment for the duration of the grant.
Selection
Grant winners will be chosen by an anonymous selection committee comprising at least three people, each with at least one year of leadership, management, or research experience with an institution focusing on ensuring humanity's long-term survival and flourishing. At least 21% of the committee will come from outside of BERI. The committee will rank-order projects and award grants to the top-ranked projects falling within our budget ($750,000). We may adjust the budget based on the quality of applications.
Requirements and Supervision
BERI Project grantees are required to complete their project or, in the case of non-completion, provide a report to BERI transparently explaining the difficulties they encountered and what they learned. If BERI does not feel that a strong effort was made to complete the project, and the learnings from failure were not particularly valuable, BERI may require that the grant funding be returned. However, grantees should be reassured that BERI considers exploratory work to be valuable and learning from failure to be a generally acceptable outcome of ventures with a high expected value. A Grant Investigator will be assigned to each project to ensure grant requirements are met.
Intellectual Property and Grantee Independence
The grant project may be treated as a collaboration with the grantee’s current employer, if the grantee requests it in their application. In this case, the grantee must be treated as an independent collaborator by their current employer, for the time commitment allotted to them by the grant project. BERI allows this in case the applicant’s current employer could provide them with valuable resources and opportunities useful to their project.
If the grantee’s current employer is a public charity or university, BERI concedes all intellectual property rights to the applicant’s current employer. BERI does this to simplify negotiations between the applicant and their employer, and because we view the accrual of knowledge relevant to existential risk within other public charities to be a valuable accomplishment.
Unfortunately, BERI is not able to concede all intellectual property rights to other kinds of employers. The applicant may, however, submit with their application an intellectual property agreement offer from their current employer with terms that are favorable to BERI and its mission, for the selection committee to consider in Phase 2 (see below).
Application Process
Phase 1: (Application deadline June 30, 2018, 11:59PM GMT-12:00)
Interested candidates apply online: Application Form for BERI Project Grants - Round 1
Optionally, applicants may ask one or two people to use the following form to submit a recommendation for their application before the June 30th deadline: Recommendation Form - BERI Project Grants - Round 1
Phase 2: (Begins by August 31)
Top applicants are notified that they have been chosen to receive grants, and given time to work with their current employers, if any, to negotiate a reduction in their time commitment and/or IP agreement between their employer and BERI, if needed for the grant.
Shortlisted applicants are notified that they may be invited to Phase 2 if any top applicants fail to secure, within 4 weeks, a time reduction and/or an intellectual property agreement acceptable to BERI.
Frequently Asked Questions
What if I want to work with another person on my project, who is also looking for funding?
If you are working on a project with multiple other people, you have two options when applying:
You can have each person apply separately for funding. We encourage you to take this route if you prefer an outcome where some people on your team get funded over an outcome where no one on your team gets funded.
You can have a "team leader" submit a single application for the entire team. You should take this route if the only way you're willing to proceed with the project is if everyone on the team gets funded by BERI. When submitting such an application, please include salaries of the other team members as line items in the budget.
What if I can't complete my project? Will I have to return the funding?
We think the likelihood that we have to ask a grantee to return their funding is quite low. BERI will aim to choose candidates with integrity who will make a good faith attempt at completing their proposed project. We expect several of the projects we fund will fail, and that's OK—we suspect the learnings from these failures will still be sufficiently valuable to justify the expense to BERI.
However, we'd be happy to make arrangements with any grantees who were particularly nervous about this aspect of our program. Monthly or quarterly check-ins could be one such arrangement; clear, upfront agreements about what would count as "a strong effort" to complete a project could be another. We're open to other proposals from grantees too.
Is it okay to express uncertainty in my grant application?
Yes! We don't want people to feel like they have to express fake levels of confidence in order to get grants. Just tell us what you’re actually considering doing. If you're really uncertain, you can always apply with a branching decision tree, like “I’m planning to either X, Y, or Z, and I’m not sure which yet. I’ll spend the first month deliberating and collecting information and advice about which of X, Y, or Z would be most valuable, and then do the best thing.”
Questions
If you have any questions please email contact@existence.org.
Activity Update - April
This post is part of our monthly activity update series. Each activity update briefly describes what we've been up to in the last month and, hopefully, helps to shed light on how our priorities shift over time. Note that some of our time each month will be spent investigating future project opportunities that may not be covered in these posts until our plans are more solid.
Accomplishments
Our Grants Program awarded the following grants:
$300,000 to the Future of Life Institute (FLI) for general support of FLI's activities, including an AI x-risk focused conference FLI expects to host in early 2019.
$200,000 to the Center for the Study of Existential Risk (CSER) for the general support of CSER's activities.
We hired 2 contractors to help us in supporting our partners:
Laura Pomarius provided graphic design services in support of the second annual conference of the Center for Human-Compatible Artificial Intelligence (CHAI).
Tom McGrath will be working with Owain Evans at the Future of Humanity Institute (FHI) to help build models for the Slow Judgments Project, a collaboration between FHI and Ought.
We assisted two highly x-risk-motivated individuals with the application process for the European Commission's newly created High-Level Expert Group on Artificial Intelligence.
We commissioned a small, informal review on the relative prevalence of public interest in existential risks from AI, compared to public interest in other negative aspects of AI.
BERI held its second board meeting of 2018. Due to BERI's recent growth, BERI’s board members are devoting more time to BERI and developing robust oversight processes.
Internally, we began to implement our Fiscal Operating Procedures in preparation for our upcoming audit in June (BERI is commissioning an outside auditor because we took in over $2 million in revenue last year).
We revamped our plans and processes for our Academic Awards grants program.
We assisted CHAI's annual workshop by:
graphically designing and printing handouts (this was Laura's project, mentioned above), and
transporting a VIP to the workshop.
Mistakes
We misclassified one of our contractors as an employee, due to some miscommunication and lack of an internal check to ensure proper classification of new hires. This mistake has now been rectified, and we have improved our onboarding processes so that this will be unlikely to occur again.
Works in Progress
We are…
Hiring for additional Project Managers, and a Program Investigator and Manager position. You can learn more about each position at our jobs page.
Assisting the BERI Support Fund (BSF) in applying for non-profit status and other operational set-up tasks, especially processes to receive donations.
Evaluating the possibility of setting up an organization in the UK to run BERI-like activities for our partners across the ocean.
Currently trialling a full-time Project Manager, Josh Jacobson, who has already helped out on numerous tasks. His most notable current projects include:
Exploring the possibility of setting up a Residential Fellowship for x-risk-motivated graduate students in the Bay Area.
Completing a grant investigation.
Engaging a potential partner institution in conversations about possible collaborations.
Searching for permanent office space for BERI's growing team.
Activity Update - February and March 2018
This post is part of our monthly activity update series. Each activity update briefly describes what we've been up to in the last month and, hopefully, helps to shed light on how our priorities shift over time. Note that some of our time each month will be spent investigating future project opportunities that may not be covered in these posts until our plans are more solid.
This post covers two months, February and March. We did not previously post an update in February due to other project priorities and travel.
Accomplishments
Our Grants Program awarded the following grants:
$100,000 to the Center for Applied Rationality for the support of activities to develop and improve the website LessWrong 2.0.
$5,000 to the Association for the Advancement of Artificial Intelligence, in support of the 2018 Spring Symposium on AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents.
We supported the creation of the Bay Area X-risk Community Initiative (BAXCI). You can read more about what BAXCI is in our announcement.
Our core staff traveled to the UK to convene with the Future of Humanity Institute (FHI) and the Centre for the Study of Existential Risk (CSER). Our broad goals for this trip were to 1) increase our awareness of opportunities to support these research institutions and others, 2) improve our ability to communicate and collaborate with individuals at FHI and CSER, and 3) inspire our core staff to "get stuff done" for the x-risk ecosystem. We believe all three of these goals were met.
We hired five contractors to assist us in supporting two of our partners, FHI and CSER. These contractors are working in a variety of areas, including: mathematical modeling, research, writing and editing, graphic design, communications, website maintenance, and the development of a new academic program.
We hosted an event on March 31st at the CFAR office to help FHI gather data for its collaboration with Ought on a project to help build systems that predict human preferences after deliberation.
BERI held its first board meeting of 2018. Due to BERI's rapid growth over the last year, we decided to convene board meetings on a quarterly basis going forward.
Internally, we developed our first official Fiscal Operating Procedures to improve clarity about BERI's financial processes both for ourselves and for external reviewers.
Setbacks
We were unable to expedite setting up the financials of the BERI Support Fund, thereby losing a significant amount of potential funding we had hoped it might receive early this year.
Works in Progress
We are…
Exploring the possibility of assisting highly x-risk-motivated persons with applying to the European Commission's newly created High-Level Expert Group on Artificial Intelligence.
Working to set up several grant and award programs, targeting different potential x-risk-motivated recipients.
Hiring for additional Project Managers, and a Program Investigator and Manager position. You can learn more about each position at our jobs page.
Assisting BSF in applying for non-profit status and other legal and operational set-up tasks.
Finalizing our 2017 financial statements in preparation for our upcoming annual financial audit.
Evaluating the possibility of setting up an organization in the UK to run BERI-like activities for our partners across the ocean.
In the process of investigating several new grants.
Activity Update - January 2018
This post is part of our monthly activity update series. Each activity update briefly describes what we've been up to in the last month and, hopefully, helps to shed light on how our priorities shift over time. Note that some of our time each month will be spent investigating future project opportunities that may not be covered in these posts until our plans are more solid.
Accomplishments
Our Grants Program awarded an $800k general support grant to the Center for Applied Rationality (CFAR), in order to support CFAR buying a permanent venue for its workshops.
We facilitated the creation of the BERI Support Fund (BSF), which was incorporated in January 2018. See http://bsf.existence.org/ for more information about BSF.
We systematized some aspects of our Grants Program so that we can involve more BERI staff in our grant investigations. This involved drafting initial procedures and criteria for future grants BERI would like to make.
Setbacks
Our academic awards program is on hold until further notice.
Works in Progress
We are…
Investigating how to expand our base of potential grantees, for example, by making grants to individuals.
Planning a trip in which our core staff will visit the Future of Humanity Institute (FHI) and Centre for the Study of Existential Risk (CSER). Our broad goals for this trip are to 1) increase our awareness of opportunities to support these research institutions and others, 2) improve our ability to communicate and collaborate with individuals at FHI and CSER, and 3) inspire our core staff to "get stuff done" for the x-risk ecosystem.
Hiring for additional Project Managers, and a Program Investigator and Manager position. You can learn more about each position at our jobs page.
Supporting BSF in applying for non-profit status and other legal and operational set-up tasks.
Finalizing our 2017 financial statements in preparation for our upcoming annual financial audit.
In the process of investigating several new grants.
Activity Update - December 2017
This post is part of our monthly activity update series. Each activity update briefly describes what we've been up to in the last month and, hopefully, helps to shed light on how our priorities shift over time. Note that some of our time each month will be spent investigating future project opportunities that may not be covered in these posts until our plans are more solid.
Accomplishments
Jaan Tallinn has made another donation of approximately $5 million to BERI's Grants Program. We are extremely grateful to see this much support for existential risk reduction and are excited to help direct the funding through a grant investigation process that we are currently developing.
BERI also received two unrestricted $100k donations, one from an anonymous donor and one from the Casey and Family Foundation. We are deeply appreciative of this generous support and hope to use some of these funds to expand our team this year.
BERI co-hosted an AI Safety networking reception at the Neural Information Processing Systems (NIPS) conference (website) in Long Beach, California. Over 60 people attended.
We commissioned Michael Keenan to build an open source tool for publishing blog posts. The tool is still under development, but it has already helped BERI with several blog posts. We hope that the tool will be useful to other groups using markdown for blogs, such as the Center for Human-Compatible AI or LessWrong 2.0.
We began supporting the Centre for the Study of Existential Risk (CSER) on several minor projects. With BERI's support, CSER will purchase equipment for their new office, hire a graphic designer for their upcoming AI policy report, and fund a group of UK students in forming an All-Party Parliamentary Group on Future Generations.
Our Grants Program awarded a second $100k general support grant to the Machine Intelligence Research Institute and a $100k general support grant to the Center for Applied Rationality.
Setbacks
We continue to be delayed in posting a scholarship application form to our website, because of conflicting information about how we should run the program.
Works in Progress
We are…
Investigating several potential grant opportunities. We hope to increase our capacity for evaluating and making grants in 2018.
Continuing to trial a promising ML engineer and an Operations & Finance Manager candidate.
Activity Update - November 2017
This post is part of our monthly activity update series. Each activity update briefly describes what we've been up to in the last month and, hopefully, helps to shed light on how our priorities shift over time. Note that some of our time each month will be spent investigating future project opportunities that may not be covered in these posts until our plans are more solid.
Accomplishments
We have set up a Computing Grants program, designed to provide PhD students and post-docs with additional computing resources (GPUs) for their work on AI ethics, safety, or x-risk.
Our Grants Program approved a $20k grant to the Association for the Advancement of Artificial Intelligence in support of the conference on Artificial Intelligence, Ethics, and Society.
We are now providing consistent purchasing support to the Future of Humanity Institute (FHI), allowing FHI to quickly and smoothly buy supplies (e.g., textbooks, office snacks) in support of its mission.
We hosted two "meet-and-greet" events to introduce ourselves to the EA community.
We selected a candidate to trial for our Operations and Finance Manager position; she will also experiment with taking on some of our Project Management tasks.
We selected one CPA to conduct our audit and found an accountant to work with BERI on ensuring our finances are in excellent shape.
Setbacks
We continue to be delayed in posting a scholarship application form to our website.
Works in Progress
We are...
Continuing to research our options for supporting several x-risk-motivated professors' work.
In the process of trialling another promising ML engineer.
Announcing BERI Computing Grants
We are happy to announce that BERI is now receiving applications for our experimental Computing Grant program. The program is designed for researchers at the graduate level and above who could use additional computing resources (GPUs) for their relevant projects.
Update: As of June 5, 2018, this program is paused and no longer accepting applications. It has been superseded by BERI's Project Grants program.
BERI is interested in supporting three distinct but synergistic categories of technical research within the field of artificial intelligence:
AI ethics: technical research targeted at applying ethical constraints to the way in which AI systems operate, e.g., fairness, transparency, and accountability for AI systems that award microloans.
AI safety: technical research targeted at making robots and other autonomous systems generally safer, e.g., autonomous vehicle control.
AI x-risk: technical research targeted directly at reducing the risk that human civilization will be destroyed or permanently curtailed by the future development of artificial intelligence.
You may apply for the grants via the following application form:
Applications are accepted on a rolling basis and reviewed in batches every couple of months. Once you have submitted your application we'll let you know when you can expect a decision from us.
FAQ
This section will be updated over time in response to questions about this program.
Why is BERI interested in AI safety and AI ethics?
It may not be immediately obvious why BERI is supporting research in AI ethics and AI safety, rather than only research targeted directly at AI x-risk. The reason is that we believe some fraction of the mathematical principles underlying AI ethics and AI safety can also be helpful for reducing existential risk. Ideally, researchers focused directly on AI x-risk may be able to take ideas and inspiration from more general work on AI safety and ethics, selecting for the solutions that they believe are most likely to be applicable to longer-term control and risk reduction.
Why is BERI directly providing computing resources?
BERI could simply provide funding to other research groups, so why provide the computing directly? The reason is that by providing the computing directly as a free service, we can also provide centralized tech support for any of our grant recipients, and take note of any requests they have for improving the computing resources provided. Ideally, the program might evolve to provide computing resources similar to what research non-profits like OpenAI are able to provide in-house, but furnished directly to researchers at a variety of institutions, with low-overhead cost, and with a special focus on supporting work that we think will reduce existential risk. We won't know right away whether this ideal will be sufficiently attainable to justify continuing the program instead of just making more monetary grants, but we do expect to learn the answer to this question within the first year of the program's operation.
Activity Update - October 2017
This post is part of our monthly activity update series. Each activity update briefly describes what we've been up to in the last month and, hopefully, helps to shed light on how our priorities shift over time. Note that some of our time each month will be spent investigating future project opportunities that may not be covered in these posts until our plans are more solid.
Accomplishments
BERI has been focusing on recruitment this month. We have compiled a list of potential candidates for two new roles: Operations and Finance Manager and Project Manager. We also wrote two blog posts explaining why we are recruiting and how we hope incoming staff will fit into BERI's overall mission (first blog post, second blog post).
We hired Kyle Scott as a part-time Project Manager.
We completed one trial with a machine learning engineer interested in supporting CHAI's research; while the engineer will not be joining our team, this has allowed us to move onto the next few candidates in our pipeline.
Our Grants Program made a second general support grant to the Future of Life Institute, for $50,000.
Setbacks
We had expected to post a scholarship application form on our website this month; however, this project has been slightly delayed. We hope to post the application form in the next month.
Works in Progress
BERI is currently researching our options for supporting several x-risk-motivated professors' work.
BERI is also exploring the feasibility of offering computational resources to x-risk-motivated researchers and graduate students.
We are planning a "BERI meet-and-greet" event to introduce ourselves to the Bay Area effective altruism community. This event will be held at 7pm on Monday, November 13th, at the CFAR office (2030 Addison St, 7th Floor, Berkeley).
We are also planning an online meet-and-greet for folks who are not in the Bay Area; it will be held on Thursday, November 16th, at 3pm Pacific Time. More details on how to join can be found in our Facebook event.
Forming an engineering team
BERI wants to help CS students and researchers at UC Berkeley who care about existential risk, by hiring a team of engineers to collaborate with them on their work. It might not be obvious why outside help is needed for this, and the purpose of this post is to explain why.
Imagine being an academic
Just for a few minutes, imagine you're a young computer science researcher who understands how fragile the world is. You can see the unfolding impact of new technologies, both good and bad, and you understand that simple actions today can have a huge impact on the future. And yet, you notice that almost no one’s job description tells them to think at length about the impact of their work in decades or centuries to come, much less to translate that reasoning into action. Instead, you observe most people and institutions following short-sighted gradients, where a bit more effort leads to a bit more reward.
You want to be different. You want the code you write now to have an impact 50 years from now, even if you'll have to wait that long to see it. But to further your career and gain the influence you feel you need to make a difference, you have to stay focused on writing your next paper. Papers, not Python packages, are the fundamental unit of success for a young academic. So, when you finish publishing the findings of your latest project, and you stop to think about how to make your code more usable to colleagues just outside your circle of potential coauthors, you have to pause and ask: am I taking a career hit for doing this? Am I going to miss out on a professorship that would empower me to steer the discourse of the intellectual elite toward a safer future, because I spent too much time open-sourcing my code and not enough time writing and presenting my findings?
How BERI plans to help
By bringing in resources from outside of academia, we hope to form part of an incentive gradient that causes more organizational support and human capital to flow swiftly toward people and institutions who care and think deeply about the long-term future, and in particular, existential risk from artificial intelligence (AI x-risk).
As part of this plan, we are seeking to hire a machine learning engineer (job posting) who can seed the development of an engineering team that works with young AI researchers at UC Berkeley who have career-scale ambitions to reduce AI x-risk. Our long-term goal is for BERI's engineers to work alongside the Center for Human-Compatible Artificial Intelligence (CHAI) and perhaps other academic groups concerned with AI x-risk (news article). We want to ensure that students and faculty focused on x-risk reduction have access to the latest machine learning tools and development environments, and to help them to design and carry out experiments in those frameworks.
There are three reasons we believe an outside engineering effort is needed to achieve this goal:
Academic incentives lead to a "tragedy of the code-sharing commons" effect. This is described in the vignette above about the student at Berkeley. The primary unit of measurement for the success of a young researcher in computer science is the number and quality of the papers they write, rather than the shareability and usability of their code. Thus, once their papers are published, taking time away from writing papers to make their code usable to peers can mean taking a career hit, unless they know sharing the code will mean they get to co-author more papers. BERI wants to alleviate stress and sacrifice for these young researchers so they can stay focused on their career development as experimenters and presenters. Meanwhile, our engineers can follow a career path focused on writing quality code, with the happiness of users—researchers—being their main success metric. This way, our engineers' incentives will be naturally aligned with abating the tragedy of the code-sharing commons, at least among researchers focused on mitigating x-risk.
Hiring our own team allows us to offer reasonable wages to x-risk-motivated engineers. Universities are not able to easily pay high salaries for engineers. By contrast, machine learning engineers at for-profit companies like Google can easily earn over $200k/year by joining teams like Google Brain. BERI hopes to pay salaries somewhere in between. We know many engineers who would like to work more closely on projects to reduce existential risk, and BERI would like to offer them satisfying career paths to do so. In the long run, we would like to match the salary precedent set by other well-funded research non-profits, such as OpenAI, while working in close proximity with universities like UC Berkeley to nurture the next generation of scientists.
Our mission will keep the engineering efforts targeted at x-risk. BERI's mission is "to improve human civilization’s long-term prospects for survival and flourishing." Our commitment to this mission is legally binding, and our Board of Directors is passionate about realizing it. Thus, by building an engineering team within BERI, we hope to maintain the team’s mission-alignment more securely than we would by operating within a larger institution.
Starting humbly and failing gracefully
While we think it's important for our engineering team to have a long-term ambition to grow and increase our output as time goes on, we also think it’s important to begin this project with a degree of humility about the culture we need to develop. BERI’s engineering team will need to integrate well with the lives and workflows of students and professors at UC Berkeley, and figuring out how to do that might take some time. And, while we are dedicated to our mission, we must avoid making overly strong assumptions about what our collaborators will ultimately find most helpful.
So, we are looking for candidates who are easy to work with, ready to adjust to the needs of the researchers BERI chooses to work with, and who don't mind growing our team slowly over the course of years, one engineer at a time.
Additionally, if it turns out that developing the culture necessary for BERI's engineering team to grow usefully is just too difficult, we’ll want to make sure that BERI’s net impact on the careers of its chosen collaborators will be a good one. In particular, our approach will not be to take risks that have a chance of growing the team at the expense of wasting our collaborators’ time. Wherever possible, BERI will try to grow by a series of Pareto improvements that will leave our collaborators happy if we decide to stop expanding.
Interested?
There's no time like the present! If you think you’d like to join our team, please apply for our Machine Learning Engineer position (http://existence.org/work/ml-engineering). Or, if machine learning isn’t your area of interest, please pass it on to your friends. BERI is a very young organization, and we can use all the help we can get in recruiting for this role.
What we’re thinking about as we grow - ethics, oversight, and getting things done
BERI has only been around for nine months, but we've already learned about significantly more needs than our current capacity can support. We’ve encountered many ideas that we believe would be great to execute, such as providing additional computational resources to x-risk-oriented PhD students, hiring writers and editors for influential publications, and investigating grant opportunities for our grants program. We feel almost as if BERI’s existence has "raised an antenna" which receives requests for help on x-risk reduction projects—and those requests are coming in rapidly.
In light of the level of demand for support that BERI has been receiving from the "x-risk ecosystem", we believe we should try to become more growth-oriented over the course of the next year. In other words, we want to increase the number of projects we can take on and execute. BERI wants to become a “yes we can” organization that can swiftly support the many thoughtful and caring individuals who want to kick off low-externality, high-upside projects to reduce x-risk.
Doing this will require some degree of organizational maturation. In particular, we don't yet have any full-time staff to get the ball rolling on all our incoming requests, so we are planning to hire at least one full-time project manager (job posting) over the next few months.
Furthermore, to accommodate these additional activities, we will need to hire an operations and finance manager (job posting) to handle accounting, onboarding, and other "backbone" activities that strengthen BERI's institutional legitimacy, as we transition our current operations manager to focus more on writing projects.
The importance of organizational growth as a goal
To improve the world's capacity for reducing existential risk, we believe it’s important to offer robust, x-risk-focused, growth-oriented career paths. This means new institutions are needed that:
can provide stable career opportunities,
are devoted to the mission of reducing x-risk, and
can scale with the competence and mission-orientation of their employees.
This last point is important: as x-risk-motivated individuals embark on career paths devoted to reducing x-risk, they gain competence and learn that they can manage larger, more impactful projects than when they started. If organizations fail to expand their capacity to match the growing capacities of their employees, teams will fragment as mission-driven employees leave to seek better opportunities for impact.
Thus, BERI has adopted the following stance toward growth: we will attempt to scale our capacity for projects to match our employees' mission-driven career growth. This means that we need:
Operations work to enable us to grow both efficiently and ethically, and
Full-time responsiveness to incoming requests.
We expand upon these points below.
How operations work enables ethical growth
In order for an organization to increase the scale and number of its projects, additional accounting, legal, and infrastructural work becomes necessary. High-quality operations work is needed not merely because it enables an expansion of an organization's activity, but because it enables that expansion to happen in an ethical way. This principle is so fundamental that scalable oversight has been named as a fundamental research problem in artificial intelligence research (news article; arxiv paper).
There are two primary groups whose oversight of BERI we want to enable as we scale, so that we honor our ethical obligation to be transparent to those groups:
Our direct supporters, such as donors, and
Our governing bodies, such as the IRS.
Transparency to our supporters is important. We want them to be able to verify that we're using their resources wisely as we expand. For example, they should be able to check that we’re being careful about hiring skilled, mission-oriented employees, we’re spending their money efficiently, and we’re taking on mission-relevant projects. Our operations work allows us to convey our activities more clearly to supporters (e.g., good accounting helps our donors see how we’re spending their money).
However, transparency to donors and direct supporters is not all that matters. Transparency to our governing bodies is also ethically important to us, and we believe this principle is too often neglected. Activities like following legal best practices, which make us more transparent to the governing bodies that enable our existence, are virtuous and ethically powerful.
After all, BERI is a sort of artificial intelligence—albeit a very slow one—built out of some combination of humans and the agreements between them. By adhering to high-quality accounting and legal practices, we make ourselves an example of the sort of agent that is transparent enough for humanity to want to build it. Our executive director (Andrew Critch) has spent considerable time thinking about how intelligent systems can deserve the trust of other agents (technical post / non-technical post), and feels strongly that such best practices are necessary for BERI to be deserving of trust from the public.
BERI is therefore seeking additional operations staff (job posting) who are excited to form the "organizational backbone" needed for BERI to “get things done” in a transparent and accountable way. This work will create an opportunity for BERI to safely and legitimately take on more projects in service of x-risk reduction.
The importance of full-time responsiveness
Because there is high uncertainty as to how soon existential risks from emerging technologies could manifest, BERI prefers to be able to respond quickly to incoming requests to assist or enable new projects to reduce x-risk. Responding quickly and reliably reduces our collaborators' uncertainty about whether projects will move forward, which lowers the activation barrier to getting good ideas off the ground.
This means we need highly responsive, full-time staff who are available during business hours to "keep the ball rolling" on outward-facing activities like hiring, recruitment, and communications with the groups and individuals contacting us for help. However, BERI started out as “a small initiative [...] operated on a volunteer and part-time basis by a handful of individuals with other part-time or full-time jobs” (source). We now recognize a need to re-orient toward hiring at least one full-time staff member who can serve as a central point of communication and coordination, internally and externally—and possibly more, as long as demand for additional work to reduce x-risk continues to exist.
Thus, our new Project Manager position is designed for someone who wants to help BERI to become a "yes we can" organization that responds quickly to whatever the world needs from us to reduce x-risk.
Going forward
We want to be clear that these changes will not represent a change in our mission; BERI's purpose will remain the same. If we do succeed in hiring full-time staff, we will update our homepage to reflect that we are no longer operated entirely on a part-time basis. But to avoid contributing to the illusion that more full-time work is being carried out in service of existential risk than is actually the case, we want to remain modest in our self-description until we actually succeed in expanding.
So, as we grow, we are looking for individuals who will join our team, not for what we claim to be, but for what we hope to become. If you or someone you know might be that sort of person, please apply or share this post.
Activity Update - September 2017
BERI has been shifting gears somewhat, and we thought it would be helpful to our collaborators and supporters to start a monthly blog post series. Each blog post will briefly describe some of what we’ve been up to each month and hopefully shed light on how our priorities shift over time. Note that some of our time each month will be spent investigating future project opportunities that may not be covered in these posts until our plans are more solid.
Accomplishments
This September, we decided on the preliminary structure of our grants program (announced here).
As mentioned in the announcement, we also made our first two grants (one to MIRI, and one to FLI).
We helped organize an event hosting Max Tegmark at UC Berkeley, in which Prof. Tegmark spoke about his latest book Life 3.0: Being Human in the Age of Artificial Intelligence. Between 100 and 200 students attended.
Works in progress
We are...
In discussions with several professors about how BERI can support their research activities. There appear to be several promising opportunities for collaboration, and we have commenced researching our options with regard to these opportunities.
Developing a scholarship program, which we expect will primarily support graduate students doing x-risk relevant research at top universities who have a track record of demonstrated personal interest in reducing existential risk. We expect to have an application for such scholarships available on our website in the next month.
Recruiting engineers for trial projects to collaborate with academics on x-risk relevant work.
Other Updates
We have learned more about what kinds of services BERI can and cannot provide in the course of reducing existential risks. For example, there are certain types of “personal assistant” services that are difficult for BERI to provide to x-risk researchers, even if we were to be funded for those services by non-tax-exempt payments rather than donations. As such, we believe that other organizations with different legal structures may need to be created to fill gaps in the x-risk ecosystem that BERI cannot currently fill.
Announcing BERI’s first grants program
We are pleased to announce that BERI has received approximately $2 million in donations from entrepreneur and philanthropist Jaan Tallinn to establish a grants program for supporting organizations and individuals doing work that we believe will reduce existential risks for humanity.
Update: See our grants page for the latest information about our grants program.
Grants will be decided by a committee comprising Andrew Critch (BERI’s Executive Director), Kenzi Amodei (BERI’s Interim Deputy Director), and Mr. Tallinn (acting in a volunteer capacity). Like BERI itself, our grants program is currently focused primarily on x-risks from artificial intelligence, but is open to work that will reduce other existential risks as well.
This month, we decided on our first two grants:
a $100,000 general support grant to the Machine Intelligence Research Institute (MIRI) in Berkeley, California.
a $100,000 general support grant to the Future of Life Institute (FLI) in Cambridge, Massachusetts.
Broadly, we believe these groups to have done good work in the past for reducing existential risk and wish to support their continued efforts. Over the next few months, we may write more about our reasoning behind these and other grants. In the short run, we expect our grants will be awarded primarily to:
other organizations with a track record of projects that we believe to have been useful for reducing existential risk,
individuals with a track record of accomplishments who wish to undertake low-cost, high-upside experimental projects that we believe could develop into new approaches to x-risk reduction, and
graduate students doing x-risk relevant research at top universities who have a track record of demonstrated personal interest in reducing existential risk.
If you have suggestions or proposals for us, you may reach us at contact@existence.org.
BERI's semi-annual report, August 2017
It's been slightly more than six months since BERI was incorporated. In our first half year, we have:
Assisted the Center for Human-Compatible AI (CHAI) with several projects, including the coordination of CHAI's first conference.
Hired two employees and eight contractors.
Raised over $400,000 for future projects, including activities that support CHAI.
You can read about our progress and plans for the future in our first semi-annual report.
We are extremely grateful to all of the donors and volunteers who have supported BERI during its early stages, and we look forward to another six months of helping AI alignment research move forward swiftly!