We are happy to announce that BERI is now accepting applications for our experimental Computing Grant program. The program is designed for researchers at the graduate level and above who could use additional computing resources (GPUs) for their relevant projects.

BERI is interested in supporting three distinct but synergistic categories of technical research within the field of artificial intelligence:

  1. AI ethics: technical research targeted at applying ethical constraints to the way in which AI systems operate, e.g., fairness, transparency, and accountability for AI systems that award microloans.
  2. AI safety: technical research targeted at making robots and other autonomous systems generally safer, e.g., autonomous vehicle control.
  3. AI x-risk: technical research targeted directly at reducing the risk that human civilization will be destroyed or permanently curtailed by the future development of artificial intelligence.

You may apply for the grants via the following application form:

Application Form

Applications are accepted on a rolling basis and reviewed in batches every couple of months. Once you have submitted your application, we’ll let you know when you can expect a decision from us.

FAQ

This section will be updated over time in response to questions about this program.

Why is BERI interested in AI safety and AI ethics?

It may not be immediately obvious why BERI is supporting research in AI ethics and AI safety, rather than only research targeted directly at AI x-risk. The reason is that we believe some fraction of the mathematical principles underlying AI ethics and AI safety can also be helpful for reducing existential risk. Ideally, researchers focused directly on AI x-risk will be able to take ideas and inspiration from more general work on AI safety and ethics, selecting the solutions that they believe are most likely to be applicable to longer-term control and risk reduction.

Why is BERI directly providing computing resources?

BERI could simply provide funding to other research groups, so why provide the computing directly? The reason is that by providing the computing directly as a free service, we can also provide centralized tech support for our grant recipients, and take note of any requests they have for improving the computing resources provided. Ideally, the program might evolve to provide computing resources similar to what research non-profits like OpenAI are able to provide in-house, but furnished directly to researchers at a variety of institutions, with low overhead cost, and with a special focus on supporting work that we think will reduce existential risk. We won’t know right away whether this ideal will be sufficiently attainable to justify continuing the program instead of just making more monetary grants, but we do expect to learn the answer to this question within the first year of the program’s operation.