New University Collaborators
We’ve completed our evaluation of the applications for new collaborators. We received a total of 16 applications from groups and individuals, across a wide variety of academic disciplines, geographical areas, and project types. After careful consideration, we’ve decided to start collaborations with the following groups:
The Center for Long-Term Cybersecurity (CLTC) at UC Berkeley
David Krueger's lab at the University of Cambridge
The Anh Han's group at Teesside University
The Safe Robotics Laboratory at Princeton University, led by Jaime Fernández Fisac
The Sculpting Evolution Group at the MIT Media Lab, led by Kevin Esvelt
These groups join CHAI, FHI, CSER, SERI, and more as active BERI collaborators.
We consider these five new groups to be “trial” collaborations. They will initially be supported with money we raised in our December fundraiser, as opposed to the collaboration-specific donations that fund our main collaborations. If a trial collaboration is successful (i.e., they find it useful and we find it cost-effective), we will likely attempt to raise additional funds in support of specific collaborations. For more information about BERI's trial collaborations, see this blog post.
We greatly appreciate the time each applicant took to apply. It's inspiring to see so many people passionate about the long-term survival and flourishing of humankind!
Board changes
Eric Rogstad has stepped down from BERI's Board of Directors. Eric has been with BERI since the very beginning, and BERI would not exist in its current form without his contributions. I have personally benefitted greatly from his insights and advice.
The Board has elected Jess Riedel to take Eric's place as Treasurer. With Eric's resignation, Jess was our top choice for this position, and we're very excited for him to join the BERI team. You can read his bio here.
New University Collaborators
BERI is once again accepting applications from university-affiliated groups and individuals interested in receiving our support.
Winning applicants will be eligible for free services from BERI, such as purchasing equipment, food, and software; maintaining an Uber account for easy travel; and hiring experts for research support. See "How does BERI help our collaborators?" for more information. If you’re a member of a research group, or an individual researcher, working on long-termist projects, I encourage you to apply. If you know anyone who might be interested, please share this with them!
BERI is a public charity whose mission is to improve human civilization’s long-term prospects for survival and flourishing. We’ve been working with university groups since 2017, and have provided over $2 million of administrative and operational support on long-termist projects with groups at UC Berkeley, Oxford, Cambridge, and elsewhere.
Applications are due June 20th. For more information on what we do for our collaborators, see our FAQ.
If you have any questions, email contact@existence.org.
Trial collaborations
BERI has received grants in support of two of our trial collaborations.
The purpose of this blog post is to explain BERI’s “trial collaborations”: why they exist, how they differ from “main collaborations,” and the role they play in BERI’s plans for expansion.
Goal of a trial collaboration
BERI’s trial collaborations are best understood in the context of our main collaborations, because the goal of a trial collaboration is to evaluate whether or not it can be expanded into a main collaboration.
BERI currently has four main collaborations, with FHI, CHAI, CSER, and SERI. In each main collaboration, BERI’s primary contribution is that we support the collaborator’s work in ways that would be administratively or financially difficult for their home university. These services are the main reason BERI was founded in 2017, and they remain our sole focus as an organization in support of x-risk-oriented researchers. (In 2018-2019 we experimented with some other programs that were not collaboration-focused, but in 2020 we narrowed our focus again to university collaborations.)
To help us assess the potential of additional organizations to engage with BERI as main collaborators—both in terms of whether they need our help and whether their work is aligned with our mission—in 2020 we created the concept of the trial collaboration and launched six of them.
“Life cycle” of a trial collaboration
1. Before starting a particular trial collaboration, BERI raises a relatively small amount of money from funders who want to support BERI’s core mission and approach.
For trials started in 2020, our funding came mostly from Open Philanthropy and Jaan Tallinn.
For trials starting in 2021, our funding will come from our 2020 public fundraiser.
2. We invest this money (between $1k and $10k per group) in “trialling” new collaborators.
As described in our fundraising blog post: “These trial collaborations create opportunities for learning on both sides of the equation—BERI will learn more about which potential new services we are well-equipped to provide, and the collaborator will learn more about what they need, and which of those needs are best fulfilled by an external entity like BERI.”
Inevitably, some of our trial collaborations won’t go any further: Maybe we discover that BERI isn’t a great fit for the type of help those researchers need, or maybe it turns out that they can get everything they need directly from their host university.
We expect that for some of these groups, BERI will be a great fit for their needs. Trial collaborations are designed to provide evidence for compatibility: If the collaborator finds it useful and BERI finds it cost-effective, then the trial has been successful.
3. Once we consider a trial collaboration successful, we use the evidence of our success to apply for funding from an institutional donor aligned with the missions of both BERI and the collaborator in question. We believe that applications like this are easier for funders to evaluate, compared to either a general BERI application or a pre-trial collaboration application.
If we are successful in raising funding for a long-term collaboration with a given group, that work becomes part of our “main” collaboration program (contrasted with “trial” collaborations). We recently raised $60k to promote our trial collaboration with SERI to a main collaboration, and we’re optimistic that more of our current trials will “convert” to main collaborations before the end of 2021. Designing new forms of BERI support is a major focus of our trial collaborations program, and our goal is for BERI to become a valuable long-term partner to all of these groups.
The future of trial collaborations
Trial collaborations are BERI’s main strategy for developing our niche and increasing the amount of value we provide to the x-risk ecosystem. If the process described above is successful in establishing several large collaborations by 2023, then we will probably continue this pattern into the future, holding annual or biennial application rounds for new trials.
If this process is not successful, and we fail to establish a sufficient number of new main collaborations by 2023, we’ll probably reevaluate this approach: Maybe the demand for BERI-like support is less widespread than we expected, or maybe small trial projects aren’t a good way to surface important needs. Possible alternatives to the trial collaboration strategy are outside the scope of this post.
Conclusion
BERI’s current medium-term goal (roughly 1-3 years) is to develop our collaborations to the point where all of our core operating expenses are funded by collaboration-specific donations. This keeps our focus on questions of operational efficiency, and emphasizes BERI’s role as a supporting organization.
Currently, using overhead rates we think are fair, we don’t have enough main collaboration activity to fund even one full-time employee—our core operating expenses are still partially funded by general donations from Open Philanthropy and Jaan Tallinn. We’re using this funding to develop the trial collaboration process until we can accomplish our medium-term goal.
Long-term, BERI wants to expand to exactly the right size for the need that exists—no larger. In particular, our plans for staffing are modest: we plan to gradually fill demand that exists now, and build our reputation as an organization that provides value by responsively filling the needs of ambitious groups with promising visions. This way, our growth will be balanced with the growth of existential risk as an academic priority.
In summary: Trial collaborations are BERI’s way of investing in the growth of the x-risk ecosystem, and a bet on BERI’s university collaboration strategy for improving human civilization’s long-term prospects for survival and flourishing.
SERI converted to main collaboration
BERI has received a $60,000 grant from the Long-Term Future Fund to support our collaboration with the Stanford Existential Risks Initiative (SERI). We launched a trial collaboration with SERI in August 2020, and with this donation we’re converting it from a trial into one of our main collaborations.
We are truly thankful to have donors like the Long-Term Future Fund to support our work and mission. We’re also thankful for groups like SERI, with the vision and ambition to work towards a long future of survival and flourishing for human civilization. We’re proud to support their work.
This is the first time BERI has converted a trial collaboration into a main collaboration, and we believe this donation provides evidence for the strength of the trial collaborations approach. For more information on trial collaborations, see this post.
BERI 2020 Annual Report
We've released our 2020 annual report here.
This document includes both high-level and program-specific summaries of our activities in 2020, plans for each program in 2021, and some specific, quantitative predictions for 2021.
BERI's last attempt at such a document was in August 2017. Our intention moving forward is to publish a document similar to this one annually.
We are extremely grateful to all of our donors for making this work possible. Thank you!
BERI Overhead
This post explains how BERI thinks about per-project overhead rates, and how we raise funds to support our core operations.
What are "indirect" costs at BERI?
When you’re collaborating with BERI on a project, in our accounting we classify each of our expenses relating to your project as either "direct" or "indirect". Direct expenses are for things you request (e.g. buying a plane ticket or paying a contractor). Indirect expenses are for “back-end” administration for your project (e.g. accounting, legal) and generally “keeping the lights on” at BERI: payroll, office space, etc.
Different objectives for different costs
Direct expenses are expenses you’d need to make irrespective of whether BERI was your sponsor. We generally do not try to keep those costs low, except insofar as our collaborators want us to, or if the expenses look particularly egregious for the non-profit sector. That’s because BERI’s role is to help by spending money in whatever ways are most directly helpful to you and the projects we collaborate on.
Indirect expenses, by contrast, are a function of how efficiently or inefficiently BERI can administer your project, so we like to hold ourselves accountable for keeping our indirect costs as low as we can while still functioning well as an organization.
How much of BERI’s expenses are "indirect", on average?
In our first couple years of operation, BERI’s "indirect" expense rate on university collaborations was ~40% of our direct expenses. This is higher than we’d like it to be, but still lower than the rates at many universities. One reason our overhead rate has historically been a bit high is that we spent a lot of time thinking through university-compatible policies for BERI to follow as a collaborating institution.
After settling on many of our policies, narrowing our focus, and downsizing, our average indirect expense rate decreased to ~14% of our direct expenses in 2020. In 2021 and years to come, we expect our overhead rate to fall further still, due to economies of scale and other efficiency gains.
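The rates quoted above are simply indirect expenses divided by direct expenses. A minimal sketch of that calculation, using hypothetical dollar amounts chosen only to reproduce the approximate percentages quoted:

```python
# Overhead rate as used in this post: indirect expenses as a fraction of
# direct expenses. The dollar figures below are illustrative assumptions,
# not actual BERI financials.
def overhead_rate(indirect_expenses: float, direct_expenses: float) -> float:
    return indirect_expenses / direct_expenses

early_years = overhead_rate(40_000, 100_000)  # ~0.40 in BERI's first years
rate_2020 = overhead_rate(14_000, 100_000)    # ~0.14 in 2020
```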
How does BERI decide how much “overhead” funding to request for a specific project?
Suppose you have a project that will cost $100k, and BERI asks for an additional $20k of funding to help us administer it for you. How did BERI estimate that $20k figure?
We ask ourselves the following four questions:
1. How much BERI staff time will be needed to manage this project? (And what is the corresponding cost of that staff time?)
2. What external services, like accountants and lawyers, do we anticipate needing to support this project?
3. How much buffer do we want for unexpected indirect costs, in case this project becomes more complicated than expected? (This happens a lot!)
4. How much financial buffer do we need to account for the cost of preparing to execute projects like this when they don’t all pan out in terms of funding to BERI? And how much of that buffer is fair to allocate to this project?
Then, we add up (1)+(2)+(3)+(4), and ask for that much overhead, in dollars, up front. We try not to spend a lot of time haggling about our overhead rate, because the additional infrastructure needed to do that well would further increase our overhead rate! Instead, we ask for what we think is fair given our needs, and count on outsiders to say “no” if that’s not worth pursuing. This creates the following set of incentives for our admin team:
If we find we’re often overestimating the overhead funds needed for the projects we take on, we note that we were more efficient than expected, and use that confidence (and the additional slack the money buys us, in staff time) to connect with more potential collaborators and projects.
If we find we’re often underestimating the overhead funds we need, we don’t go to the individual project funders to ask them for more project-specific money. Instead, we take the hit, draw from our reserves, and try to adapt to become more efficient before taking on new responsibilities.
On average we don’t want to be drawing from our reserve funding to subsidize per-project overhead rates, as explained in the next section.
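The four-question estimate described above amounts to a simple sum. A sketch with hypothetical figures (every dollar amount here is an illustrative assumption, not an actual BERI estimate):

```python
# Hypothetical overhead request for a $100k project, following the four
# questions above. All figures are illustrative assumptions.
staff_time = 10_000        # (1) cost of BERI staff time to manage the project
external_services = 4_000  # (2) accountants, lawyers, and similar services
complexity_buffer = 3_000  # (3) buffer for unexpected indirect costs
pipeline_buffer = 3_000    # (4) fair share of prep costs for projects that
                           #     don't pan out

overhead_request = staff_time + external_services + complexity_buffer + pipeline_buffer
print(overhead_request)  # 20000, i.e. a 20% overhead rate on $100k direct
```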
Why doesn’t BERI ask a third party to fund most or all of its overhead costs across projects, so BERI can operate for zero or reduced per-project overhead rates?
The short answer is that doing so would be asking that third party to evaluate the value of many projects that are already being evaluated for cost-effectiveness by better-informed funders, creating duplicated effort and wasting funder time and attention. Here’s the same answer, but in the form of a detailed example:
Suppose five funders—Funders 1 through 5—are looking to fund five projects—Projects 1 through 5, respectively. Each project will involve $100k of direct expense, for a total of $500k over a year. Now let’s say it takes the equivalent of one full-time BERI staff member to administer these projects and to take care of the required legal and accounting back-end. Altogether, let’s say those things cost $100k over the year. That would mean BERI really needs $600k to take on these projects. Who should pay for that?
The fair thing to do is distribute the $600k across the funders of Projects 1-5, by requiring $120k of funding for each one. Starting in 2020, this is how we’ve tried to operate. If Funder 1 looks at Project 1 and thinks “That’s not actually worth the opportunity cost of paying $120k to accomplish”, or “I can get that done for $105k somewhere else”, they are usually in a relatively good position to make that decision.
By comparison, what would happen if we went to a new funder, Funder 6, to provide all or part of the overhead funding for these projects? Funder 6 would wonder, “Hmm, what are these 5 projects doing? Are they of positive value? Are they worth the money that’s going to be spent on them? And why am I the one who has to figure out if they’re worth $120k each, rather than Funders 1-5 who have already put a bunch of effort into evaluating the value of these projects?”
In other words, Funders 1 through 5 will usually be in a better position than Funder 6 to evaluate whether it’s worth paying BERI’s overhead rate to take on each project, accounting both for the value of the project and the non-BERI alternatives available for hosting the project.
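The arithmetic in this example is straightforward; a sketch using the figures given above:

```python
# The example above: five funders, five $100k projects, and $100k of shared
# administration. Figures are taken directly from the example in the text.
direct_per_project = 100_000
num_projects = 5
shared_admin = 100_000  # one FTE plus legal/accounting back-end for the year

total_needed = num_projects * direct_per_project + shared_admin  # $600k
ask_per_funder = total_needed // num_projects                    # $120k each
print(total_needed, ask_per_funder)  # 600000 120000
```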
Conclusion
BERI tracks indirect costs separately from direct costs, because those are the costs that we feel (and are) most accountable for keeping low. We are continually working toward fundraising more of our total indirect costs from per-project overhead rates, because we think it creates good incentives for us to stay efficient, and because we think it distributes the burden of evaluating cost-effectiveness more fairly across funders.
Fundraiser success
Thanks to the generosity of our donors, we raised $60,702 as part of our December fundraiser, exceeding our goal of $50,000! These funds will go towards launching new trial collaborations in 2021, as described here.
In addition, we raised over $66,000 in support of our collaboration with the Center for Human-Compatible Artificial Intelligence. This number is likely to increase once Facebook distributes any matching funds from Giving Tuesday.
It means so much to us that, in such a difficult year, people chose to support BERI’s mission to improve human civilization’s long-term prospects for survival and flourishing. With so many new problems arising every day, it can be hard to stay focused on low-probability, high-impact events like existential risks. BERI is working to improve the prospects of not only everyone living today, but also people who have yet to be born. Despite the abstract nature of the problem, we believe that reducing existential risk is a crucial task for our time, and it’s heartening to have donors who support us in this work.
We’re planning to open applications for new university collaborations in May of this year. With the financial backing of our donors, we’re excited to expand our support of researchers working to reduce existential risk. Thank you!
New University Collaborators
We’ve completed our evaluation of the applications for new collaborators. We received a total of 19 applications from groups and individuals, across a wide variety of academic disciplines, geographical areas, and project types. After careful consideration, we’ve decided to start collaborations with the following groups:
The Autonomous Learning Laboratory at UMass Amherst, led by Phil Thomas
Meir Friedenberg and Joe Halpern at Cornell
InterACT at UC Berkeley, led by Anca Dragan
The Stanford Existential Risks Initiative
Yale Effective Altruism, to support x-risk discussion groups
Baobao Zhang and Sarah Kreps at Cornell
These groups join CHAI, FHI, and CSER as active BERI collaborators.
We consider these six new groups to be “trial” collaborations. They will initially be supported through BERI’s general use funds, as opposed to the dedicated donations that fund our older collaborations. If a trial collaboration is successful (i.e., they find it useful and we find it cost-effective), we will likely attempt to raise additional funds in support of specific collaborations.
We greatly appreciate the time each applicant took to apply. It's inspiring to see so many people passionate about the long-term survival and flourishing of humankind!
New Collaborator Applications
BERI is expanding our offerings to provide free services to a wider set of university-affiliated groups and projects, and we’re now accepting applications from groups and individuals interested in receiving our support. Particularly in light of COVID-19, we’re standing ready to help people try out new ways of getting things done.
If you’re a member of a research group, or an individual researcher, working on long-termist projects, we encourage you to apply. If you know anyone who might be interested, please share the application with them!
For more information on what we do for our collaborators, see our FAQ.
Board and Staff Changes
At the most recent meeting of BERI’s Board of Directors, Sawyer Bernath was promoted to Deputy Director of BERI, and joined the Board. I’m excited about Sawyer’s increasingly central role at BERI, which is key to our “distillation” plan for 2020, as described in this blog post.
In addition, Alex Flint has stepped down from the Board. Alex will retain membership on BERI’s grants committee, as we continue to distribute BERI’s remaining grant-making funds, primarily to other institutions for re-granting. (For more information on BERI’s grants, see this blog post.)
In his role as a Director, Alex was a huge help in orienting BERI’s culture toward an action-oriented mode of operation, and his contributions to BERI are greatly appreciated. He will also remain an Advisor to the Board.
BERI's Plans for 2020
In 2020, BERI’s main focus will be almost exclusively on its collaborations program, which supports projects with CHAI, CSER, and FHI. I think collaboration with universities remains BERI’s greatest potential value-add to the world, and it’s the capacity that I’m most excited to see BERI maintain and expand. We will try to keep up with the existing responsibilities of BERI’s other programs, but will not be taking on new responsibilities, and are generally aiming to wind down non-collaboration activities. See this post for more information about the winding down of BERI’s grants program.
As a result of this narrowing of scope—as well as the hard work and dedication of present and past BERI staff toward streamlining BERI’s operations—there is now a lot less day-to-day work needed to fulfill BERI’s current responsibilities. In particular, in 2020, I expect many fewer meetings will be needed to discuss potential conflicts between future programs, and fewer organizational principles will be needed to prioritize between activity areas.
Since a number of BERI staff members are ready to transition to new job opportunities and are willing to make these transitions somewhat gradually, this presents a natural opportunity for BERI, which I’ll call “distillation and amplification” after a related concept developed in AI alignment.
The distillation phase works as follows: Before the end of 2020, BERI will reduce its core staff count to between 1 and 1.5 FTE. In particular, there will be one person whose job is to understand and manage the operation of all of BERI’s collaboration responsibilities. So, we’ll be trying to “distill” all the knowledge and skills necessary to operate BERI into that one person.
After this distillation, the remaining staff person will be in a very good position to judge—relatively independently, without turnaround delays for meetings or email exchanges between different ‘departments’ at BERI—what BERI is capable of doing in 2021, and how to do those things efficiently. At that point, if good opportunities for expanding BERI’s collaborations exist, we may attempt to enter an ‘amplification’ phase to hire additional people with a similar or complementary skill set for the purpose of expanding BERI’s university collaboration function.
The main person serving as BERI's "distillation center" for the purpose of this plan will be Sawyer Bernath. That is to say, Sawyer will serve as employee #1 of the 1-1.5 FTE target that BERI is aiming for this year.
Two important things remain to be emphasized:
Throughout this process, BERI’s collaboration program will remain its top priority. This might mean we give less attention to certain other BERI programs (e.g., BSF support), but we remain dedicated to university collaborations as BERI’s primary purpose and priority.
This plan for 2020 is only made possible by the incredible goodwill and dedication of BERI’s past and present staff, board, and funders. I feel blessed for their support of BERI’s future, and of the existential safety cause area more broadly. If I’m wrong about the value of this process, the failure is my own. However, if I’m right, I’d very much like to thank BERI’s staff and funders for helping me to reach this conclusion and for being supportive of this phase of BERI’s development.
Thanks for reading!
The Future of Grant-making Funded by Jaan Tallinn at BERI
BERI is planning to hand off our involvement in grant-making to 501(c)(3) organizations from the donations of philanthropist Jaan Tallinn. The hand-off will be to one or more other teams and/or processes that are separate from BERI. Andrew Critch, who has been instrumental in making grants funded by Jaan’s donations, will oversee this hand-off process, and will likely continue to act as an independent advisor to Jaan’s grant-making in the future. It is BERI’s understanding that Jaan will continue to sponsor grant-making in the area of existential risk reduction through other grant-making entities going forward.
Update (2019-09-07): As part of this hand-off, BERI is granting $2MM from its organizational grants program to the Survival and Flourishing Fund (SFF), a new Donor Advised Fund at the Silicon Valley Community Foundation, advised by a committee comprising Alex Flint, Andrew Critch, and Eric Rogstad. While SFF's advisory committee and BERI's Board of Directors currently comprise the same set of individuals, SFF will be free to make grants and reorganize itself in the future without further control from BERI as an institution.
This post does not pertain to individual grants from BERI, which Jaan has also sponsored with donations in the past. BERI will continue to administer collaboration-specific individual grants, and has not made a final decision as to when or whether we will seek funding from Jaan for individual grants competitions in the future, as we did in Fall 2018.
Rebecca Raible, the staff member at BERI who was most focused on grant-making (in her part-time role), left BERI in June after giving notice in March. This timing is coincidental, but we do not intend to hire a replacement for her.
Rationale
We feel that alternative methods exist by which Jaan may be more easily able to accomplish his goals for funding future grants to organizations, and we expect those options to produce grants that are more aligned with the full range of philanthropic priorities that Jaan would like to support. Jaan and BERI are in agreement about this, and in general BERI does not want to selfishly hold on to activities that may be a better fit for another organization or structure.
Impact on existing grantees
For individuals and organizations who have already received grants, or commitments to such grants, from BERI, there will be no impact. BERI will complete the administration of their grant, for the period of time that the grant was for. BERI will either follow through with any pledges it has already made, or with the grantee’s agreement, it will otherwise honor those pledges in a way that both parties prefer (such as by granting money earlier, or by granting to another organization willing to uphold the pledge).
Impact on prospective grantees
For prospective grantees, BERI will:
notify the individual or organization that BERI is not actively considering new grants outside of its university collaborations,
add them to our notification list should this change in the future, and
either point the applicant to one or more other sources of funding that the applicant might wish to consider, or ask the applicant for permission to hand off their information, and their request for funding, to one or more external funders.
Moving forward
BERI is excited to continue its focus on collaborative support for aligned x-risk researchers at universities, as well as our other current and future initiatives. We encourage others in the x-risk ecosystem to reach out when they feel that BERI would be a good fit for a particular project, and we’re hiring for a couple of positions. In general, we believe BERI’s mission and structure continue to allow us to be flexible, such that we hope to be able to take on a wide variety of activities in the future, provided that they make sense for BERI to do.
Activity Update - November 2018-March 2019
This is an update covering the period since our last blog post, November 2018 to March 2019. We appreciate your review, feedback, and/or advice.
Our activity updates are not exhaustive and may sometimes only briefly describe what we've been up to. Note that some of our time will be spent investigating future project opportunities (or on other activities) that may not be covered in these posts until our plans are more solid.
Request:
BERI is searching for a web developer to manage various Jekyll sites. We are particularly looking for:
a. Someone with strong UX intuitions / design ability
b. Someone willing to accept pay at $100/hr or lower (based on ability)
c. Someone self-directed, who can accept broad rather than narrowly specified asks
d. Someone able to work well with the existing code and program according to its conventions
If you know anyone, please refer them to [our contractor roster](http://existence.org/jobs/contractors) to apply.
Collaborations:
CHAI
We have hired two full-time Machine Learning Research Engineers to work as part of our collaboration with CHAI. Steven Wang began in November and Cody Wild began work in early April.
We have also hired a part-time employee, Brandon Perry, to assist with CHAI's operations, and contracted Martin Fukui to help with planning the CHAI Annual Workshop and other events.
We are in the midst of a few capacity-building explorations, including:
Ongoing fundraising
Immigration support of collaboration personnel
Salary benchmarking of the broader talent market
FHI
We hired a contractor to provide fact-checking and continued to contract an additional four researchers for the Governance of AI program.
We set up a BERI Upwork account and hired our first two Upwork contractors for the collaboration.
We printed the technical report Reframing Superintelligence as a pamphlet and sent copies to the Puerto Rico conference, one for each attendee.
Grants:
Organization
We've made grants to the following organizations:
ALLFED, $25,000.
Leverage Research, $25,000.
MIRI, $600,000.
CEA U.S., restricted to support the activities of 80,000 Hours, $350,000.
Americans for Oxford, to support the research of Yarin Gal, $25,000.
Leverage Research, $50,000.
Future of Life Institute, to support the Future of Life Award, $50,000.
We also spent significant time developing an internal tracker of expected donations for our grants program and expected funding opportunities over the next three years, to help us better assess trade-offs between our grant opportunities. We are continuing work to improve this model.
Individual
We have delayed our next individual grants round, which was originally expected to be a "Level Up" grants round supporting individuals who wanted to improve their skills in support of eventual x-risk-reducing projects. We've been focused on a) catching up on assessing applications to our Organization Grants program and b) responding to inquiries from and reviewing submitted expenses of the individual grant recipients from our previous Project Grants round.
We next expect to set up a rolling Project Grants program to avoid future large gaps in the availability of individual grants from BERI. The timing for this program is still quite uncertain.
Operations:
Financials
We have spent the past several months (and anticipate spending the next few months) working on improving our financial accounting and reporting. As we complete this task, we're increasing documentation to improve finance task management and future handoffs.
HR
Colleen Gleason left our organization around the new year, and we started a full-time trial of a new candidate, Sofia Davis-Fogel, for our core team in late April. We added a number of employee benefits starting March 2019.
Some Potential Upcoming Projects:
Redoing our employee handbook and building an overall guide to our collaborations
Investigating our ability to support US and UK visas for employees and collaboration personnel
A website refresh
Rolling individual grants
Activity Update - October 2018
This post is part of our monthly activity update series. Each activity update briefly describes what we've been up to in the last month and, hopefully, helps to shed light on how our priorities shift over time.
This post is part of our monthly activity update series. Each activity update briefly describes what we've been up to in the last month and, hopefully, helps to shed light on how our priorities shift over time. Note that some of our time each month will be spent investigating future project opportunities (or on other activities) that may not be covered in these posts until our plans are more solid.
Accomplishments
In collaboration with the Center for Human-Compatible AI (CHAI), we…
Began the trial of a full-time ML Engineer.
Agreed to an upcoming full-time trial with another ML Engineer candidate.
Agreed to a collaboration grant to a CHAI intern for their work this summer (this is the fifth CHAI intern that BERI has supported).
Hired a contractor to provide CHAI with event planning support for its Annual Workshop.
Explored options for office space to accommodate CHAI (and perhaps BERI's) future growth.
Began funding in-office snacks, additional team meals, and team-building events for CHAI's team.
In collaboration with the Future of Humanity Institute (FHI), we...
Hired 3 contractors to provide:
Research assistance to the Governance of AI Program.
Communications and application support to the AI Safety team.
Publication support (light editing, LaTeX formatting, etc.).
Used existing contractors to provide copyediting and graphic design support for an upcoming publication.
We granted $20,000 to the Association for the Advancement of Artificial Intelligence in support of the 2019 Artificial Intelligence, Ethics, and Society conference.
We announced our Round One Project Grant winners in this blog post.
We received a grant of $129,000 from the Open Philanthropy Project to support projects done in collaboration with the Future of Humanity Institute.
We concluded a trial of a potential Project Manager (we decided to not extend an offer).
Works in Progress
We are…
Hiring! We are seeking a Project Manager, a Director of Operations, and a Machine Learning Research Engineer interested in working with CHAI: http://existence.org/jobs/.
Investigating several organizations for possible grants from BERI.
Developing policies around confidentiality to share with our collaborators.
Currently prioritizing revamping our financial systems above other projects. This prioritization includes working with an external consulting firm to improve and streamline our processes.
Seeking Testimonials - IPR, Leverage, and Paradigm
Leverage Research ("Leverage"), the Institute for Philosophical Research (“IPR”), and Paradigm Academy (“Paradigm”) are three closely collaborating organizations who, in one way or another, are working to advance and develop
self-improvement techniques,
movement-building techniques, and
philosophical and intellectual methodology and training methodology.
Internally, BERI has begun referring to this group of organizations as "ILP". Together, the ILP organizations employ over 40 people.
BERI is currently investigating both IPR and Leverage for possible grants. BERI previously made a $25k grant to IPR, and recently approved another $50k grant to provide "stop-gap" support to IPR, while we conduct a deeper grant investigation for a larger amount on the order of $100k - $500k.
Given that...
a) BERI is not as familiar with ILP organizations as some of the other organizations that we have made grants to, and
b) there seem to be somewhat divisive opinions about the sign and/or magnitude of ILP's value to the world, more so than for other organizations BERI has supported at similar levels of funding,
we are seeking broader community input. We are hoping to collect first-person testimonials from those who have directly interacted with ILP, and we intend to use these testimonials to inform our grant decisions.
If you would like to submit a testimonial, you can do so by filling out this testimonial form. Please only fill out this form once. We will accept submissions through December 20, 2018. IPR, Leverage, and Paradigm have all agreed, via statements from Directors of each, for BERI to share this post and the testimonial form.
Note that we do not currently expect to be able to publish the results of this research. We may be able to share some results if IPR, Leverage, Paradigm, & BERI all mutually agree to publish a broad summary of the findings (in a way that respects the selected privacy levels of all testimonials received).
Testimonial Types
All else equal, BERI will weight the following types of reports in decreasing order of importance:
Type 3: First-person testimonials provided directly to BERI, that BERI is free to share publicly if we so choose.
Type 2: First-person testimonials provided directly to BERI, that BERI is free to share with ILP if we so choose.
Type 1: First-person testimonials provided directly to BERI, which BERI is not free to share with ILP or the public.
Of course, other factors will determine the importance of any given testimonial besides its Type as listed above, and we'd like to encourage responses of all types.
For the most part, BERI will pay less attention to non-first-person testimonials, such as:
"I've heard Leverage is (great / mediocre / terrible)";
"My friend had a (great / mediocre / terrible) time at a Paradigm workshop."
BERI will also put less weight on anonymous testimonials, as anonymous testimonials are more easily faked and difficult to follow up on. However, if you are only comfortable submitting a testimonial anonymously, we do encourage you to do so.
We also encourage readers to pass along this post to anyone they know who has had direct experience with ILP.
Please email organization-grants@existence.org if you have any concerns or questions that you would like to share with BERI.
Thanks for helping BERI develop our opinion of IPR, Leverage, and Paradigm!
Activity Update - September 2018
This post is part of our monthly activity update series. Each activity update briefly describes what we've been up to in the last month and, hopefully, helps to shed light on how our priorities shift over time. Note that some of our time each month will be spent investigating future project opportunities that may not be covered in these posts until our plans are more solid.
Accomplishments
In collaboration with the Center for Human-Compatible AI (CHAI), we…
Funded three interns to work with CHAI over the summer
Continued development of graphic and software projects
Put together preliminary plans for the next year of collaboration
In collaboration with the Future of Humanity Institute (FHI), we….
Contracted three academic researchers to collaborate with FHI's Governance of AI Program.
We contracted Roam Research, Inc. (Roam) to provide free use of and customer service for its pre-launch software to x-risk-focused organizations and their employees, if they desire to try it. Roam will also provide free troubleshooting and feature request support. We expect to follow Roam's development and hope that our engagement will help Roam create tools specifically designed to serve the x-risk ecosystem.
We donated an unrestricted gift of $4,524 to support the work of the Centre for the Study of Existential Risk.
Works in Progress
We are…
Hiring! We are seeking a Machine Learning Research Engineer interested in working with CHAI, a COO, and Project Managers: http://existence.org/jobs/.
Wrapping up our Round One Project Grants program—grantees have been selected and notified, and we are in the process of sending out funds.
Investigating four organizations for possible grants from BERI.
Preparing to work with an external contractor to improve BERI's financial management.
Exploring the possibility of setting up a program that would allow groups to apply for fiscal sponsorship with BERI.
Activity Update - July & August 2018
This post is part of our monthly activity update series. Each activity update briefly describes what we've been up to in the last month and, hopefully, helps to shed light on how our priorities shift over time. Note that some of our time each month will be spent investigating future project opportunities that may not be covered in these posts until our plans are more solid.
Accomplishments
We published a list of questions usable in prediction markets for thinking about AI timelines: http://existence.org/prediction-market-questions/.
In collaboration with the Center for Human-Compatible AI (CHAI), we…
Interviewed 6 candidates for the Machine Learning Research Engineer position and completed a work trial with one candidate.
Issued a BERI debit card that may be used by CHAI staff in a BERI volunteer capacity to speed up BERI's purchases made in support of CHAI’s activities. We also added core CHAI staff to BERI’s Uber for Business account.
Purchased computing equipment for new staff and purchased stickers with the CHAI logo for equipment labelling.
Renewed CHAI's web domain and upgraded the website set-up.
Financially supported an event to watch the results of OpenAI's DOTA research.
Officially recorded the nature of our partnership with CHAI.
In collaboration with the Future of Humanity Institute (FHI), we hired a contractor to work with Professor Allan Dafoe on research into legal and governance questions around the development of transformative artificial intelligence.
In collaboration with the Centre for the Study of Existential Risk (CSER), we extended our support of Simon Beard's work with Patrick Kaczmarek to enable the completion of several papers on the ethical considerations of extinction and the long-term future.
We developed an "Organizational Service Request Form" and “Organizational Service Agreement Form” to make it easier for us to evaluate requests from collaborating organizations.
We made the following grants:
A $300k general support grant to the Center for Applied Rationality. We expect the grant will be used for core organizational improvements, new strategic projects, and workshop subsidies.
A $50k grant to Theiss Research. We expect the grant will be used to support Carrick Flynn's research with Prof. Allan Dafoe at the Future of Humanity Institute.
A $400k unrestricted gift to the UC Berkeley Foundation. We expect the gift will be used to support the work of Dr. Andrew Critch at the Center for Human-Compatible AI (but will not go towards his salary). This grant was decided on by BERI's board, with Andrew Critch recused.
A $50k general support grant to the Institute for Philosophical Research (IPR). We are currently investigating IPR for a larger grant. However, because IPR is significantly behind in its fundraising efforts, and is a sufficiently promising grantee in expectation at this point, we wish to provide IPR with stop-gap support while it continues to participate in our grant investigation process.
A $150k grant to the Center for Applied Rationality for the support of activities to develop and improve the website LessWrong 2.0.
We finalized our budget for the next 12 months. We plan to use this budget in our upcoming fundraising efforts.
Mistakes & setbacks
In this section, we may briefly highlight significant mistakes that we feel capable of communicating publicly about. It is not meant to imply that these are the only mistakes BERI makes (far from it!), nor that these are the most substantial mistakes BERI makes. (For example, we might exclude substantial mistakes if they occur on projects that are confidential, or if it is challenging to accurately discuss how the mistake occurred due to many contributing factors.)
The form that we sent out to references when requesting feedback on our round one Project Grants applicants was confusing. In the future, we plan to clarify the type of information we are seeking from references.
We've advertised for a Machine Learning (Research) Engineer position for ~1 year, and we have not been able to fill the role on a permanent basis. We’re increasing our focus on this shortcoming by spending more time and effort on recruitment and hiring a contractor to provide feedback and help attract candidates. We think filling this role is one of the most valuable things we can potentially do for our CHAI collaboration.
While creating our budget, we realized that our finances were not as in order as we felt they ought to be at this stage in our development. As a result, we are beginning a search for outside CFO services to advise on improving, scaling up, and implementing new systems for our financial management and accounting.
Works in Progress
We are…
Hiring! We are seeking a Machine Learning Research Engineer interested in working with CHAI: http://existence.org/jobs/ml-engineer.
Exploring the possibility of setting up a travel abroad program for x-risk relevant activities.
Wrapping up our Round One Project Grants program—grantees have been selected and notified, and we are now in the process of sending out funds.
Investigating six organizations for possible grants from BERI.
Other notes
Alex Turner, a BERI contractor and CHAI intern, won a Round 3 AI Alignment Prize (funded by Paul Christiano) for his posts Worrying About the Vase: Whitelisting and Overcoming Clinginess in Impact Measures.
X-risk Relevant Prediction Market Question Suggestions (Medium Term)
A few months ago, BERI commissioned me (Eli Tyre) to collaborate with local individuals working on AI safety to develop a set of existential-risk-relevant questions that would be appropriate for use in prediction markets. BERI is making the questions available for general use in prediction markets, forecasting tournaments, and the like. Feel free to make use of these questions in any capacity.
Background & motivation
The questions
General performance benchmarks
Will reinforcement learning be the dominant paradigm in n years?
How well will reinforcement learning or its successor work?
Operationalizations of AI relevant compute
Architecture, modularity, and deep learning
How many modules?
Gradient descent modules vs. non-gradient descent modules?
How many meta-modules?
Sociological questions
Governments and nationality
Consensus in the ML community that the “alignment problem” is important
Background & motivation
A few months ago, BERI commissioned me (Eli Tyre) to collaborate with local individuals working on AI safety to develop a set of existential-risk-relevant questions that would be appropriate for use in prediction markets. BERI is making the questions available for general use in prediction markets, forecasting tournaments, and the like. Feel free to make use of these questions in any capacity.
BERI believes that accurate forecasts are useful for trying to prevent existential risks. Predictions about the near- and medium-term future could inform strategic decisions, such as what types of research should be prioritized. In order to encourage forecasting and prediction markets in this area, BERI is open-sourcing the following questions for general use.
If you are running a prediction market, you are welcome to use any of these questions freely, in any capacity.
Note that this is only a preliminary list of questions and should not be treated as exhaustive. Due to the breadth of the space, there are certainly good questions that my collaborators and I did not consider, or for which we did not find suitable operationalizations. I don't want the existence of this list to discourage others from trying to develop their own prediction market questions.
Also note that this is simply a list of questions that we believe could be answered by a prediction market; there are other questions that are strategically important to AI x-risk but cannot be easily resolved by simply worded prediction market questions.
Although we tried to crisply operationalize each question, it would be infeasible to give the full resolution criteria for each one (at least, without turning this blogpost into hundreds of pages of minutiae). Anyone using one of these questions as the basis of a prediction market is responsible for delineating specific resolution criteria and mechanisms for settling edge cases for their market.
If you have questions about the motivation or operationalization of any of the following items, please email eli@existence.org.
The questions
Many of the following questions are open-ended: e.g., "In what year will X benchmark be reached?" These questions could be formulated as a contract that pays out an amount proportional to the year the task is solved, or a contract that pays out only if the task is solved before a specified cutoff year.
In contrast, many other questions are of the form "By a given date, will X event have occurred?" We give suggested specific dates for these questions. However, in most cases, variants of these questions using dates between now and 2030 are expected to be useful.
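The two contract shapes described above can be sketched in a few lines. This is a minimal illustration, not any market's actual terms; the function names, base year, and horizon are assumptions chosen for the example:

```python
# Two ways to turn "In what year will task X be solved?" into contracts.
# proportional_payout pays more the later the task is solved, scaled to
# [0, 1] over an assumed horizon; binary_payout pays out only if the
# task is solved before a chosen cutoff year.

def proportional_payout(resolution_year, base_year=2019, horizon=2050):
    """Payout proportional to the resolution year, clamped to [0, 1]."""
    clamped = min(max(resolution_year, base_year), horizon)
    return (clamped - base_year) / (horizon - base_year)

def binary_payout(resolution_year, cutoff_year):
    """Pays 1 if the task is solved before the cutoff year, else 0."""
    return 1 if resolution_year < cutoff_year else 0
```

A market operator would of course scale these payouts by the contract's face value and pin down exactly what "solved" means, per the resolution-criteria caveat above.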
Below, I list the questions, grouped by broad category.
General performance benchmarks
The following are some standard benchmarks for AI performance. Prediction markets forecasting the arrival of technologies that hit these benchmarks would be useful.
(A note on attribution: many of these questions were taken directly from either AI Impacts' 2016 Expert Survey on Progress in AI, the 2017 AI Index, or the Electronic Frontier Foundation's AI Progress Measurement open-source notebook. Attributions are noted in brackets after the question.)
By what date will an RL agent match human performance on Starcraft II with no domain-specific hardcoded knowledge, trained using no more than $10,000 of compute on publicly available hardware?
By what date will an AI system surpass human performance on the open-ended visual question answering task, COCO? [EFF]
By what date will an AI system surpass human performance at visual question answering on the VQA 2.0 dataset? [AI Index]
By what date will an AI system match human performance in reading comprehension on the Facebook bAbI 20 QA dataset, with only one thousand training examples? [EFF]
By what date will an AI chatbot definitively pass the Turing test: a human can do no better than chance at distinguishing the chatbot from another human, after an hour of textual conversation?
By what date will an AI system see only one labeled image of a new object, and then be able to recognize the object in real world scenes, to the extent that a typical human can?
The new object should be one of the ImageNet classes; the training example can but need not be from ImageNet. The system will be considered to match human performance if it is able to achieve or exceed human accuracy in photos of a wide range of real world scenes. The system may train on labeled images of other objects. [AI Impacts]
By what date will an AI system be able to take a written passage and output a recording that cannot be distinguished from a voice actor, by an expert listener? [AI Impacts]
By what date will there be commercially available software that takes a body of text and produces a synthetic reading of the text that cannot be distinguished from a non-synthetic production?
By what date will an AI system routinely and autonomously prove mathematical theorems that are publishable in top mathematics journals today, including generating the theorems to prove? [AI Impacts]
By what date will an AI system be able to play new levels of Angry Birds better than the best human players? (For context, this is the goal of the IJCAI Angry Birds AI competition.) [AI Impacts]
By what date will an AI system be able to outperform professional game testers on all Atari games using no game-specific knowledge? (This includes games like Frostbite, which require planning to achieve sub-goals and have posed problems for deep Q-networks.)
By what date will an AI system surpass average human performance in OpenAI's Gym Retro contest?
By what date will there be at least one company that has deployed at least 1,000 fully automated cars, operating on public roads in a major urban area and serving the public?
By what date will there be a publicly available 10-minute clip of synthetic video that cannot be distinguished from a non-synthetic video? (A skilled human, shown a clip of real video and a clip of synthetic video and told to identify the real clip, can do no better than chance.)
By what date will there be a commercially available video game with fully photorealistic graphics, that cannot be distinguished from recorded footage of a similar scene?
In a given year [2025, 2030], how many professional translators and interpreters will work in the United States, according to the Bureau of Labor Statistics?
By what date will there be a single AI architecture that can be trained, using only self play, to play either Go or each of the seven Atari 2600 games used in DeepMind's *Playing Atari with Deep Reinforcement Learning*, at a superhuman level?
The architecture must be able to learn each game. That is, the criteria of this question are met if one copy of the system is trained on Go, and another copy is trained on Atari, even if no single system can play each game. However, the system may not be tuned or modified by a human for the differing tasks. ("Superhuman" here means performance superior to that of the best human experts in each domain.)
By what date will there be a single AI architecture that can be trained, using only self play, to play Go, Starcraft II, poker, and each of the seven Atari 2600 games used in DeepMind's *Playing Atari with Deep Reinforcement Learning*, each at a superhuman level?
The architecture must be able to learn each game. That is, the criteria of this question are met if one copy of the system is trained on Go, another copy is trained on Atari, and so on, even if no single system can play each game. However, the system may not be tuned or modified by a human for the differing tasks. ("Superhuman" here means performance superior to that of the best human experts in each domain.)
Will reinforcement learning be the dominant paradigm in n years?
In a given year [2025, 2027, 2030], what percentage of papers published on arXiv in the Computer Science and Statistics categories (or whatever the most commonly used repository of the era is) will include the phrase "Reinforcement Learning" in the title?
In a given year [2025, 2027, 2030], will 3 of the 6 most cited papers (in the fields of AI and Machine Learning) of that year involve reinforcement learning (as judged by a survey of experts, such as NIPS authors)?
How well will reinforcement learning or its successor work?
By what date will there be a general purpose robot that can learn at least three of the following tasks: 1) make a bed, 2) wash dishes, 3) fold laundry, and 4) vacuum a room, requiring only 10 minutes worth of data (explanation, demonstration, etc.) from a human per task? (The robot can be pre-trained on other tasks for arbitrary amounts of time/compute/data. The robot does not need to be commercially available.)
By what date will there be an agent that can beat a variation of the OpenAI Minecraft task that awards no points for gold or redstone, only for picking up a diamond, learning only from pixels with no Minecraft-specific hardcoded knowledge?
Operationalizations of AI relevant compute
Answers to the following questions have high information value. However, each of them involves continuous values that will be hard to determine exactly. Markets on these questions should either be formulated as individual binary contracts (e.g., "In 2025, the maximum training compute used by a published AI system will be greater than 4,000 petaflop/s-days") or resolve to ranges instead of point values.
In a given year [2020, 2025, 2030], what will be the maximum compute (measured in petaflop/s-days) used in training by a published AI system?
A "published AI system" is a system that is the topic of a published research paper or blogpost. In order to be admissible, the paper/blog post must give sufficient information to estimate training compute, within some error threshold.
[This question was inspired by this blog post, which delineates a methodology for estimating training compute.]
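A rough sketch of that style of estimate, assuming the common approximation that a forward plus backward pass costs about 3x a forward pass alone; the function name and the approximation factor are illustrative assumptions, not details taken from the methodology post:

```python
# Estimate training compute in petaflop/s-days from (a) the FLOPs of one
# forward pass through the model and (b) the number of training examples
# processed, using the forward+backward ~ 3x-forward approximation.

PFS_DAY = 1e15 * 86_400  # FLOPs in one petaflop/s-day

def training_petaflops_days(flops_per_forward_pass, examples_processed):
    total_flops = 3 * flops_per_forward_pass * examples_processed
    return total_flops / PFS_DAY
```

An estimate like this, computed from a paper's reported architecture and training-set size, is the kind of quantity a market on the question above would resolve against, within the stated error threshold.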
What will be the retail cloud computing price of conducting a total of ten million 4000 x 4000 matrix multiplications, consisting of 10,000 multiplications of 1,000 different matrices in a given year [2020, 2025, 2030]?
The multiplications may be parallelized. The elements of the matrix should be randomly sampled from a standard normal distribution. The number of matrices is constrained only to avoid repeatedly incurring the cost of a random number generator. You may not exploit the property that the matrices are duplicated. The cost should be prorated to fractions of an hour (or other relevant billing cycle), and be for an 'on-demand' (non-preemptible, no reservation) price, without any bulk or other special discounts.
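To make the size of that workload concrete, here is a back-of-the-envelope sketch. The throughput and hourly-price arguments are placeholders a forecaster would fill in from a cloud provider's published rates, not quoted figures:

```python
# A dense n x n matrix multiplication costs roughly 2 * n^3 FLOPs, so the
# question fixes a workload of 10,000,000 multiplications of 4000 x 4000
# matrices; the market resolves on the retail cloud price of running it.

def workload_flops(n=4000, num_multiplications=10_000_000):
    return 2 * n**3 * num_multiplications

def retail_cost(sustained_flops_per_sec, price_per_hour):
    """Cost of the workload at a given sustained throughput, prorated
    to fractions of an hour as the question requires."""
    seconds = workload_flops() / sustained_flops_per_sec
    return seconds / 3600 * price_per_hour
```

Because the workload is fixed, the market is effectively tracking the price-performance of retail cloud matrix arithmetic over time.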
In a given year [2025, 2030], how much power will it take to implement 1e14 Traversed Edges Per Second for an hour on the best case machine from the graph500 list?
In a given year [2025, 2030], how much power will it take to implement 1e12 Traversed Edges Per Second for an hour on the best case machine from the graph500 list?
Serial computation: In a given year [2025, 2030], how many minutes will it take to train ResNet-152 on ILSVRC 2012 using a system available to purchase on the public market for less than $5000?
(Where ResNet-152 refers to the test performed in this TensorFlow benchmark, trained until it reaches top-1 error of <=28% and top-5 error <7%.)
Parallel computation: In a given year [2025, 2030], how many instances of ResNet-152 will be able to be trained in parallel on ILSVRC 2012 in fewer than 24 hours using a system available to purchase on the public market in [2025, 2030] for less than $5000?
(Where ResNet-152 refers to the test performed in this TensorFlow benchmark, trained until it reaches top-1 error of <=28% and top-5 error <7%.)
Architecture, modularity, and deep learning
For each of the following questions, a "module" refers to some division of an AI system such that all information between modules is human legible.
As an example, AlphaZero has two modules: a neural net and a Monte Carlo tree search. The neural net, when given a board state, has two outputs to the tree search: a valuation of the board, and a policy over all available actions.
The "board value" is a single number between -1 and 1. A human cannot easily assess how the neural net reached that number, but the human can say crisply what the number represents: how good this board state is for the player. Similarly with the policy output. The policy is a probability vector. A human can conceptualize what sort of object it is: a series of weightings on the available moves by how likely those moves are to lead to a win. The board value and the policy are both "human legible".
Contrast this with a given floating point number inside of a neural net, which will rarely correspond to anything specific from a high-level human perspective. A floating point number in a neural net is not "human legible".
A module is a component of an AI system that only outputs data that is legible in this way. (In some cases, such as the Monte Carlo tree search of AlphaZero, the internal representation of a module will be human legible, and therefore that module could instead be thought of as several modules. In such cases, prefer the division that has the fewest modules.)
Therefore AlphaZero is made up of two modules: the neural net and the Monte Carlo tree search.
An end-to-end neural network that takes in all sense data and outputs motor plans should be thought of as composed of only a single module.
How many modules?
In a given year [2025, 2030], how many modules will the state-of-the-art conversational chatbot have?
In a given year [2025, 2030], how many modules will the state-of-the-art AI system for architecture-search have?
In a given year [2025, 2030], how many modules will the state-of-the-art general use robotics system (that can do multiple learned, as opposed to hardcoded, tasks) have?
Gradient descent modules vs. non-gradient descent modules?
In a given year [2025, 2030], how many modules that employ gradient descent on a loss function will the state-of-the-art conversational chatbot have?
In a given year [2025, 2030], how many modules that employ gradient descent on a loss function will the state-of-the-art general use robotics system (that can do multiple learned, as opposed to hardcoded, tasks) have?
In a given year [2025, 2030], what proportion of the power used to deploy (not train) the state-of-the-art conversational chatbot will go to modules that do have a loss function? (If the system is a single module, the proportion would be either zero or one.)
In a given year [2025, 2030], what proportion of the power used to deploy (not train) the state-of-the-art general use robotics system (that can do multiple learned, as opposed to hardcoded, tasks) will go to modules that do have a loss function? (If the system is a single module, the proportion would be either zero or one.)
A meta-module is a module that controls the allocation of computational resources to other modules.
How many meta-modules?
In a given year [2025, 2030], will the state-of-the-art conversational chatbot have an architecture that includes a meta-module?
In a given year [2025, 2030], will the state-of-the-art AI system for architecture-search have an architecture that includes a meta-module?
Sociological questions
In a given year [2020, 2025, 2030], how many submissions to arXiv (in Machine Learning and Artificial Intelligence) will there be in that year?
In a given year [2020, 2025, 2030], what will the average (arithmetic mean) number of attendees across NIPS, ICML, and AAAI be?
In a given year [2025, 2030], what percentage of graduates leaving top 10 CS undergraduate programs will either go into a PhD in AI or accept a research job in AI or Machine Learning?
In a given year [2025, 2030], what will be the largest amount spent on research and development of AI technology by any one company, in 2018 dollars?
What will be the smallest number of organizations that account for 10% of the first authors of papers published in [NIPS, ICML, other top venue] in a given year [2020, 2025]?
In a given year [2020, 2025], what will be the proportion of job ads on Hacker News Who's Hiring mentioning "Artificial Intelligence", "Machine Learning", or "Deep Learning" (or an abbreviation of any of those)?
In a given year [2025, 2030], will DeepMind be a financial subsidiary of Google or Alphabet?
In a given year [2025, 2030], how many people will OpenAI employ?
Governments and nationality
By a given year [2025, 2030], a major AI lab in North America, the EU27, the UK, or Australia will have been nationalized.
In a given year [2030], fewer than 50% of major fundamental discoveries (of similar significance to DQN, LSTM, AlphaGo, etc.) made in the previous ten years will be published within two years of discovery.
By a given date [2030], a major AI lab will have been nationalized.
Consensus in the ML community that the "alignment problem" is important
In a given year [2025, 2030], what percentage of authors at the top three AI conferences would agree with the statement, "Artificial General Intelligence poses an extinction risk to humanity"?
In a given year [2025, 2030], what percentage of authors at the top three AI conferences would agree with the statement, "AGI alignment is a critical problem for our generation"?
In a given year [2025, 2030], how many technical employees at OpenAI will work directly on problems of AI alignment?
In a given year [2025, 2030], how many technical employees at DeepMind will work directly on problems of AI alignment?
In a given year [2025, 2030], what percentage of the technical employees at DeepMind will work directly on problems of AI alignment?
In a given year [2025, 2030], what percentage of the technical employees at OpenAI will work directly on problems of AI alignment?
Activity Update - June
This post is part of our monthly activity update series. Each activity update briefly describes what we’ve been up to in the last month and, hopefully, helps to shed light on how our priorities shift over time. Note that some of our time each month will be spent investigating future project opportunities that may not be covered in these posts until our plans are more solid.
Accomplishments
In collaboration with the Center for Human-Compatible AI (CHAI), we...
Identified and interviewed a promising candidate for CHAI’s office assistant position who is now being hired by CHAI as a limited appointment.
Examined various regulations to understand what purchasing support was enabled under the collaboration.
Commissioned multiple CHAI-specific academic poster templates in PowerPoint from an external designer.
Supported hiring efforts for the collaboration at the BERI & CHAI careers event tables at Effective Altruism Global.
In collaboration with the Future of Humanity Institute (FHI), we hired four contractors to provide:
Research assistance to the Governance of AI Program
Operations and management assistance to the Governance of AI Program
Copy-editing for Nick Bostrom and other researchers as needed
LaTeX drafting and editing to prepare technical FHI reports
We advertised round one of our Project Grants Program and answered incoming questions from applicants, successfully attracting over 50 applications.
Mistakes
We feel that there have been several occasions where BERI has inadequately explained the reasons for its activities (e.g., the reasons behind the structure we chose for our grants program, or our reasons for engaging in extensive due diligence before agreeing to certain forms of collaboration). While this has not been resolved, we are aware of this weakness, and we intend to work on improving our communication, especially with collaborators, over the next several months.
Works in Progress
We are…
Trialling another machine learning engineer candidate for our collaboration with CHAI.
Completing our first annual financial audit. (BERI has commissioned an outside auditor because we took in over $2 million in revenue last year.)
Evaluating the individual project grant applications we received.
Investigating six organizations for possible grants from BERI.
Looking to add to our contractor roster. In particular, we hope to find a web designer and a web developer.