Senior AI Standards Development Researcher (Full-Time)

Help lead our AI standards team! We work to help set appropriate standards for safety-related practices by developers of highly advanced AI systems (including dual-use foundation models, frontier models, etc.) across regulatory regimes, reducing the chances that developers would have to compromise on safety, security, ethics, or related qualities of AI systems in order to be competitive. We believe that such compromises could lead to accidents, malicious misuse, or other events with severe or catastrophic impacts at societal scale. We are seeking a senior researcher to make key contributions to, and help manage our team's efforts on, each of our AI standards-related workstreams.

  • Ideal start date: December 2024

  • Status: Full-time, salaried employee (40 hours / week)

  • Compensation: $150,000 to $200,000 / year, depending on experience and qualifications 

  • Work location: Remote, US-based

  • Reports to: Elizabeth Cooper, Executive Director, Berkeley Existential Risk Initiative (BERI), and to Jessica Newman, Director, AI Security Initiative, Center for Long-Term Cybersecurity (CLTC) at UC Berkeley

For best consideration, please apply by Thursday, November 14th, 2024, 1pm Eastern Time. Applications received after that date may also be considered.

Click here to apply.

Responsibilities

Lead or make key contributions to work with UC Berkeley colleagues in one or more of the following AI standards-related areas:

  • Research on aspects of AI governance, safety, security, impact assessment, or other topics related to AI risk management and standards. These topics will include intolerable-risk thresholds, with a primary focus on recommendations to inform threshold definitions and operationalization by government and industry actors, e.g., in terms of dual-use foundation model capabilities that could be misused by an adversary seeking to create chemical, biological, radiological, nuclear (CBRN) or cyber weapons, and other intolerable risks to human rights and safety.

  • Updates to our AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models. These updates will include integration of new guidance and resources on important topics such as frontier-model evaluations, and will maintain the Profile’s usefulness and relevance as a norm-setting contribution to standards for GPAIS, foundation models, and frontier models, ready to be adapted or incorporated into relevant standards by NIST, ISO/IEC, or other standards organizations.

  • Drafting actionable-guidance text for AI developers and auditors, suitable as contributions to US or international AI standards organizations’ working groups, such as ISO/IEC JTC1 SC42 or the US AI Safety Institute. This work would likely focus on standards or guidance related to AI governance, safety, security, foundation models, or other high-consequence AI risk management topics as mentioned above.

Technical research can include:

  • Literature searches on technical methods for safety, security, and trustworthiness of machine learning models

  • Application of research methods in engineering (e.g. statistical analysis), social science (e.g. survey design), or other fields as appropriate, to problems related to AI safety, security, impact assessment, or other AI risk management topics

  • Gap analysis to check that draft guidance would address key technical and governance issues in AI safety, security or other areas

Policy research can include: 

  • Identifying and analyzing related standards or regulations

  • Mapping specific sections of draft guidance to specific parts of related standards or regulations

  • Stakeholder engagement via workshops and other modes, to obtain input and feedback on recommendations in draft reports (e.g., on their actionability)

In addition, the role would include light management of multiple AI standards team members’ work on the above topics.

Qualification Criteria

The most competitive candidates will meet the criteria below.

  • Graduate-level education (at least Master’s; Ph.D. or equivalent preferred)

  • At least five years of professional work experience; 10+ years preferred

  • Education or experience in one or more of the following: 

    • AI development techniques and procedures used at leading AI labs developing large language models or other increasingly general-purpose AI; 

    • Technical concepts, techniques and literature related to AI safety, security or other AI risk management topics; 

    • Industry standards and best practices for AI or other software, and compliance with standards language; 

    • Public policy or regulations (especially in the United States) for AI or other software 

  • Experience researching and analyzing technical and/or policy issues in AI standards or related topics

  • Project management experience, including supervision of other team members’ progress toward completing tasks while balancing priorities under schedule and other constraints

  • Excellent communication skills in English, both written (e.g., editing other team members’ drafts) and verbal, for online and in-person events (e.g., speaking on a panel at a professional conference)

  • Availability for video calls (e.g., via Zoom) for at least one hour on four days a week, at some point between 9am and 5pm Pacific Time (it’s not necessary to be available for that whole window, and you can otherwise choose your own working hours)

  • Based in the United States; San Francisco Bay Area or Washington DC area preferred

Application Process

Click here to apply.

For best consideration, please apply by Thursday, November 14th, 2024, 1pm Eastern Time. 

Candidates invited to interview will also be asked to perform a paid, written work test, which we expect to take one to two hours.

A successful candidate will become an employee of Berkeley Existential Risk Initiative (BERI) for this work and may also have an affiliation as a Non-Resident Research Fellow or Visiting Scholar at the UC Berkeley Center for Long-Term Cybersecurity. Our work has been funded by grants from Open Philanthropy and others.

BERI is proud to be an Equal Employment Opportunity employer. Our mission to improve human civilization’s long-term prospects for survival and flourishing is in service of all of humanity, and is incompatible with unfair discrimination practices that would pit factions of humanity against one another. We do not discriminate against qualified employees or applicants based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, sexual preference, marital status, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, or any other characteristic protected by federal or state law or local ordinance. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.