AI Standards Development Researcher

We are no longer hiring for this position.

  • Ideal start date: February 2024
  • Hours: 40 hours/week
  • Compensation: $50/hour to $70/hour, depending on experience and qualifications
  • Work location: Remote, US-based
  • Reports to: Tony Barrett, Senior Policy Analyst, Berkeley Existential Risk Initiative (BERI), and Visiting Scholar, AI Security Initiative, Center for Long-Term Cybersecurity (CLTC) at UC Berkeley

For best consideration, please apply by Wednesday, January 31, 2024, 5pm Eastern Time. Applications received after that date may still be considered, but only after applications that met the deadline.

Responsibilities

Contribute to work planned by Tony Barrett and UC Berkeley colleagues, in one or more of the following AI standards-related areas:

  • Research on aspects of AI safety, security, impact assessment, or other topics related to AI risk management and standards. These may include red-teaming practices, dangerous-capability assessments, benchmarking, and similar methods, such as rating the extent to which a particular dual-use foundation model would be more useful than internet searches for an adversary seeking to create chemical, biological, radiological, or nuclear (CBRN) weapons.
  • Updates to our AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models. These updates will integrate new guidance and resources on important topics such as frontier-model evaluations, maintaining the Profile’s usefulness and relevance as a norm-setting contribution to standards for GPAIS, foundation models, and frontier models, ready to be adapted or incorporated into relevant standards by NIST, ISO/IEC, or other standards organizations.
  • Drafting actionable-guidance text for AI developers and auditors, suitable as contributions to US or international AI standards organizations’ working groups, such as in ISO/IEC JTC1 SC42 or in the US AI Safety Institute. This work would likely focus on standards or guidance related to AI safety, security, foundation models, or other high-consequence AI risk management topics as mentioned above.

Technical research tasks may include:

  • Literature searches on technical methods for safety or security of machine learning models
  • Application of research methods in engineering (e.g. statistical analysis), social science (e.g. survey design), or other fields as appropriate, to problems related to AI safety, security, impact assessment, or other AI risk management topics
  • Gap analysis to check that draft guidance would address key technical and governance issues in AI safety, security or other areas

Policy research tasks may include:

  • Identifying and analyzing related standards or regulations
  • Mapping specific sections of draft guidance to specific parts of related standards or regulations
  • Checking that draft guidance would meet the intent and requirements of related standards or regulations

Qualification Criteria

The most competitive candidates will meet the criteria below.

  • Education or experience in one or more of the following:
    • AI development techniques and procedures used at leading AI labs developing large language models or other increasingly general-purpose AI;
    • Technical concepts, techniques and literature related to AI safety, security or other AI risk management topics;
    • Industry standards and best practices for AI or other software, and compliance with standards language;
    • Public policy or regulations (especially in the United States) for AI or other software.
  • Experience researching and analyzing technical and/or policy issues in AI standards or related topics
  • Experience tracking and completing multiple tasks to meet deadlines with little or no supervision
  • Strong written and verbal English communication skills, including the ability to edit text for clarity
  • Availability for video calls (e.g., via Zoom) for at least 30 minutes on three days per week, at some point between 9am and 5pm Eastern Time (you do not need to be available for that entire window, and you can otherwise choose your own working hours)
  • Based in the United States

Application Process

Click here to apply.

For best consideration, please apply by Wednesday, January 31, 2024, 5pm Eastern Time.

Candidates invited to interview will also be asked to perform a written work test, which we expect to take one to two hours.

A successful candidate will become an employee of the Berkeley Existential Risk Initiative (BERI) for this work and may also have an affiliation as a Non-Resident Research Fellow or Visiting Scholar at the UC Berkeley Center for Long-Term Cybersecurity. We currently have funding for approximately two years of work, but we may be able to obtain additional funding to renew or expand this work.

BERI is proud to be an Equal Employment Opportunity employer. Our mission to improve human civilization’s long-term prospects for survival and flourishing is in service of all of humanity, and is incompatible with unfair discrimination practices that would pit factions of humanity against one another. We do not discriminate against qualified employees or applicants based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, sexual preference, marital status, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, or any other characteristic protected by federal or state law or local ordinance. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.